AN EXAMINATION OF THE DIRECT/INDIRECT MEASURES USED IN THE
ASSESSMENT PRACTICES OF AACSB-ACCREDITED SCHOOLS
By
Kristopher Tesoro
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
December 2015
Copyright 2015 Kristopher Tesoro
Acknowledgements
I would like to thank my dissertation committee (Dr. Robert Keim, Dr. Patricia Tobey, &
Dr. Paul Woolston) for their guidance, patience, and flexibility throughout the dissertation
process. I am also very appreciative of the support received from the AACSB community in
pursuing this research project. I would also like to acknowledge the assistance and support of
my thematic group members; it was a great experience to share ideas with all of you. I would
like to acknowledge my WPC ohana who have always been supportive by allowing me to pursue
my career goals. Finally, to my immediate ohana – thank you for your sacrifices & steadfast
support no matter what the cost. I am forever in your debt & promise to always do my best.
Table of Contents
Acknowledgements
List of Tables
Abstract
CHAPTER ONE: OVERVIEW OF THE STUDY
Background of the Problem
Statement of the Problem
Purpose of the Study
Importance of the Study
Definition of Terms
CHAPTER TWO: LITERATURE REVIEW
History of Accreditation
Early Institutional Accreditation
Regional Accreditation, 1885 to 1920
Regional Accreditation, 1920 to 1950
Regional Accreditation, 1950 to 2006
Current and Future State of Accreditation
Specialized Accreditation
Overview of AACSB Accreditation
History
Mission
Assurance of Learning Standards (AoLs)
Direct Measures
Indirect Measures
Who is in Charge of the Assessment Programs?
Financial Costs
Effects of Accreditation
Trend toward Learning Assessment
Framework for Learning Assessment
Benefits of Accreditation on Learning
Organizational Effects of Accreditation
Future Assessment Recommendations
Challenges to Student Learning Outcomes
Organization Learning Challenges
Lack of Faculty Buy-In
Lack of Institutional Investment
Difficulty with Integration into Local Practice
Tension between Improvement and Accountability
Transparency Challenges
Conclusion
CHAPTER THREE: RESEARCH DESIGN AND METHODOLOGY
Research Design
Research Question 1
Research Question 2
Research Question 3
Research Question 4
Research Question 5
Research Question 6
Population and Sample
Instrumentation
Reliability and Validity
Data Collection and Analysis
CHAPTER FOUR: RESEARCH FINDINGS
Demographics
Findings by Research Question
Research Question 1
Research Question 2
RQ 2 Comments Regarding Specific Improvements in Student Learning
Research Question 3
Research Question 4
RQ 4 Comments Regarding Specific Improvements in Student Learning
Research Question 5
Research Question 6
RQ 6 Comments Regarding Other Options for Assessment Methods
Summary
CHAPTER FIVE: DISCUSSION
Research Question 1
Research Question 2
Research Question 3
Research Question 4
Research Question 5
Research Question 6
Implications
Limitations
Areas of Future Research
Conclusion
APPENDIX A: EMAIL LETTER TO PARTICIPANTS
APPENDIX B: WEB-BASED SURVEY
REFERENCES
List of Tables
Table 3.1 Semi-structured table of the relationship between Survey Questions and Research Questions
Table 4.1 Survey respondents by Institution Type (Private vs. Public)
Table 4.2 Survey respondents by Job Title
Table 4.3 Survey respondents by number of Undergraduate Degree Programs
Table 4.4 Survey respondents by number of Graduate Degree Programs
Table 4.5 Undergraduate Core Business Curriculum Required
Table 4.6 Survey respondents by Student Enrollment
Table 4.7 Direct Measures used to satisfy the Assurance of Learning Standards (Cumulative Totals)
Table 4.8 Direct Measures used to satisfy the Assurance of Learning Standards (Percentage of Institutions)
Table 4.9 Improvements in Student Learning that have resulted from the Direct Measures (Cumulative Totals)
Table 4.10 Improvements in Student Learning that have resulted from the Direct Measures (Percentage of Institutions)
Table 4.11 Indirect Measures used to satisfy the Assurance of Learning Standards (Cumulative Totals)
Table 4.12 Indirect Measures used to satisfy the Assurance of Learning Standards (Percentage of Institutions)
Table 4.13 Improvements in Student Learning that have resulted from the Indirect Measures (Cumulative Totals)
Table 4.14 Improvements in Student Learning that have resulted from the Indirect Measures (Percentage of Institutions)
Table 4.15 Individual Most Directly Responsible for Helping Develop, Coordinate, and Report Your AACSB Assessment Activities and Results (Cumulative Totals)
Table 4.16 Individual Most Directly Responsible for Helping Develop, Coordinate, and Report Your AACSB Assessment Activities and Results (Percentage of Institutions)
Table 4.17 Percentage of Time Devoted to Meeting AoL Standards in Years Leading Up To Accreditation Visit
Table 4.18 Percentage of Time Devoted to Meeting AoL Standards in Year before the Accreditation Visit
Table 4.19 Rough Estimate of Financial Support to Conduct Assessment Programs
Table 4.20 Allocation of Financial Support that is used to conduct Assessment Programs – Faculty Training
Table 4.21 Allocation of Financial Support that is used to conduct Assessment Programs – Books and Materials
Table 4.22 Allocation of Financial Support that is used to conduct Assessment Programs – Instruments (i.e. Standardized)
Table 4.23 Allocation of Financial Support that is used to conduct Assessment Programs – Questionnaire Development
Table 4.24 Allocation of Financial Support that is used to conduct Assessment Programs – Other
Abstract
In recent years, higher education has transitioned into an era that is more focused on accountability and student outcomes for accredited institutions. Specialized program accreditation is viewed as an additional external review that provides an emphasis on field-specific education. However, there is limited research on the assessment practices of AACSB-Accredited Business Schools. This study is therefore pertinent in identifying the most commonly used direct and indirect measures within the Assurance of Learning requirements of AACSB-Accredited Business Schools and in examining the effectiveness of these measures in improving student learning outcomes. In addition, this study sought to identify the specific resources and support that are required to maintain the AACSB accreditation standard.
This mixed-method study surveyed the primary Accreditation Liaison Officer (ALO) or Dean at 77 AACSB-Accredited Business Schools. The data were collected and analyzed to present an overview of the current state of assessment practices of AACSB-Accredited Business Schools. The findings for the most commonly used direct/indirect measures and the improvements in student learning that resulted from these measures were consistent with previous literature on the subject. The individuals in charge of the assessment programs of AACSB-Accredited schools have not deviated from past trends. Finally, the financial costs associated with assessing the AoL requirements continue to trend upward.
CHAPTER ONE: OVERVIEW OF THE STUDY
Accreditation refers to the certification of an institution of higher learning. These institutions must pass a comprehensive review and meet certain minimum qualifications/standards in order to attain accreditation (Gaston, 2014). In recent
years, there has been a transition into an era that is more focused on accountability and student
outcomes for these accredited institutions. The Spellings Report, released in 2006, called for a
sizeable improvement in higher education accreditation and emphasized the flaws within the
current system (Eaton, 2012). A major finding of the report was that students and parents do not
have a clear picture of how much students are learning in college; the report stated, “parents and
students have no solid evidence…of how much students learn in college or whether they learn
more at one college than another” (U.S. Department of Education, 2006, p. 13).
Background of the Problem
This significant finding about potential student knowledge shortcomings within accredited
institutions has led to a growing need to systematically assess how much students are learning
within higher education. The Council for Higher Education Accreditation (CHEA), the
organization that oversees the various higher education accreditation bodies within the United
States, has acknowledged the notion that both institutions and accrediting agencies have to
address the public’s desire to view various forms of evidence of student learning outcomes
(Eggers, 2000).
Accreditors have required their member institutions to demonstrate their ability to measure
educational quality, assess student learning, and implement improvements based on those
assessments. Because accreditation is required for access to federal student financial
aid, almost all colleges and universities have the ability to use institutional accreditation as a
source of information about educational quality (Gaston, 2014). Specialized program
accreditation reports are viewed as additional external reviews that provide an emphasis on field-
specific education, with professional specializations such as business, engineering, and medicine
leading the way in “shifting the spotlight from educational inputs and processes to direct
evidence of student learning” (Chaffee, 2014, p. 2).
The Association to Advance Collegiate Schools of Business (AACSB), a specialized
accrediting body made up of institutions dedicated to business education, voted in April 2003 to
emphasize assessment of student learning through the adoption of its Assurance of Learning
(AoL) standards (Pringle & Michel, 2007). Although the AACSB had required assessment prior to April 2003, less than 10% of the accreditation criteria/sub-criteria were dedicated to assessment at the time; the new AoL standards dramatically increased the proportion of assessment-related accreditation standards to 30% (AACSB, 2006). This action of improving
accountability amongst institutions by focusing on specific measures of student learning falls in
line with the growing demand for more transparency in student learning outcomes. AACSB-
accredited institutions have had over 10 years to monitor their student learning outcomes and
ensure that they are adhering to the AoL standards – it is time to determine whether these institutions
have seen improvements in learning outcomes as a result of these standards over that period of
time.
Statement of the Problem
There exists a growing body of literature that examines assessment practices in AACSB-
accredited institutions (Kelley, Tong & Choi, 2010; Martell, 2007; Pringle & Michel, 2007;
Wheeling, Miller & Slocombe, 2015). These studies, however, were conducted only a few years after the implementation of the AoL standards in 2003. As a result, many
schools were still in the process of adjusting their curriculum and assessment programs to
comply with the new standards and there was a dearth of information to distinguish a definite
trend in learning outcomes (Pringle & Michel, 2007). In particular, information about specific
improvements in student learning outcomes due to the use of direct and indirect measures was not available because such measures were not required under the previous accreditation/assessment
requirements (Pringle & Michel, 2007).
Purpose of the Study
The purpose of this study is to address the following research questions related to the
Assurance of Learning requirements of AACSB-Accredited Business Schools:
1. What are the most frequently used direct measures within the Assurance of Learning
requirements of AACSB-Accredited Business Schools?
2. What improvements in student learning have resulted from these direct measures?
3. What are the most frequently used indirect measures within the Assurance of Learning
requirements of AACSB-Accredited Business Schools?
4. What improvements in student learning have resulted from these indirect measures?
5. Who is in charge of the assessment program of AACSB-Accredited Business Schools?
6. What financial costs are associated with assessing the Assurance of Learning
requirements of AACSB-Accredited Business Schools?
Importance of the Study
The significance of this study is rooted in the fact that AACSB-accredited institutions have had more than a decade to adhere to the required AoL standards; information from the institutions about trends in student learning outcomes (as a result of the AoL standards) over this lengthy period will be collected and analyzed. Previous studies were conducted only a few years after the AoL standards
were required by the AACSB; this study will allow for a comprehensive review of trends over a longer
period of time (Kelley et al., 2010; Martell, 2007; Pringle & Michel, 2007). The study will also
look at what improvements (if any) have been made at the program level as a result of the direct
and indirect assessment measures. The quantitative approach will allow for targeted identification of the most commonly used direct and indirect measures and also provide
evidence regarding the effectiveness of these measures in improving student learning outcomes.
The study will also identify the specific resources and support that are required to maintain the
prestigious AACSB accreditation standard (Pringle & Michel, 2007).
The results of this study could be useful to business school faculty and administrators,
prospective and current students, institutions considering AACSB accreditation, and the AACSB
organization. In particular, business school faculty and administrators would be able to view the
most commonly used direct and indirect measures that are being used by their colleagues and
compare the results to their own assessment practices.
This study will include five chapters. Chapter One discusses the significance of the study,
research questions, the topics of assessment/AoLs, the AACSB, and definitions of key terms.
Chapter Two provides a brief history of schools of business, along with an in-depth discussion of
the AACSB assurance of learning requirements (AoL) and a review of the literature of
assessment in colleges of business. Chapter Three reviews the research design and methodology
for the study. Chapter Four will include the research findings and data analysis of the study.
Chapter Five will discuss the summary of the findings, implications for practice, and provide
areas for future research.
Definition of Terms
AACSB-Accreditation: The Association to Advance Collegiate Schools of Business is a
membership organization for business schools—a place where business schools are able to
network & discuss issues that affect the business education industry and their institutions.
Accreditation standards are used as the basis to evaluate a business school’s mission, operations,
faculty qualifications/contributions, programs, & other critical areas. AACSB accreditation seeks
to certify (to students & parents) that the business school is providing a top-quality education.
Accreditation: A quality review process conducted by professional peers whereby an
institution or program is evaluated to determine whether it has met a minimum level of adequate
quality.
Assessment: The ongoing process of establishing clear & measurable expected outcomes
of student learning, ensuring that students have sufficient opportunities to achieve those
outcomes, systematically gathering, analyzing, and interpreting evidence to determine how well
student learning matches our expectations, and using the resulting information to understand and
improve student learning.
Assurance of Learning (AoL) Standards: Enacted in 2003, these standards are used to evaluate how effectively an AACSB-accredited institution has accomplished its educational goals within its core activities.
Benefits of accreditation: The advantages an institution gains by having accreditation.
Costs of assessment: The institutional commitment in terms of budgetary spending and
time contributed by various individuals/departments to the assessment effort.
Council for Higher Education Accreditation (CHEA): The national body coordinating
advocacy efforts for accreditation and performing the function of recognizing accrediting
entities. CHEA reviews the effectiveness of accrediting bodies and primarily assures the
academic quality and improvement within institutions.
Direct Measures: Measurements of student learning that allow students to demonstrate
the achievement of learning objectives.
Indirect Measures: Measurements of student learning that are not tied directly to having
students demonstrate the achievement of learning objectives.
Institutional accreditation: Recognition of a minimum level of adequate quality at the
institutional level and without respect to individual programs of study.
Learning Goals: The knowledge, skills, & attitudes that students take with them from a
learning experience; they are the desired educational outcomes that students should be able to
accomplish when they graduate from their program (regardless of their major or concentration).
Learning Objectives: Detailed aspects of goals; these describe measurable attributes of the
overall learning goal(s).
Outcomes Assessment: The systematic collection, review, & use of information about
educational programs undertaken for the purpose of improving student learning and development.
Regional accreditation: Quality review at the institutional level conducted on a regional
scope rather than on a national or state scope.
Specialized accreditation (or programmatic accreditation): The recognition of a
minimum level of adequate quality at the level of the individual program of study without
respect to the rest of the institution as a whole.
CHAPTER TWO: LITERATURE REVIEW
This chapter will provide a literature review of the history and themes of accreditation, a
brief history of specialized accreditation, an overview of the Association to Advance Collegiate
Schools of Business (AACSB), and a review of assessment effects of accreditation.
History of Accreditation
According to Gaston (2014), accreditation in higher education may be viewed as having
four primary goals: to ensure that a quality education is being provided to students, to allow for
continuous quality improvement, to enable and facilitate student mobility (particularly amongst
transfer students), and to provide safe access to federal and state funds. These goals highlight the
importance of obtaining and maintaining accreditation for institutions of higher education.
Early Institutional Accreditation
Accreditation has a long lineage among the universities and colleges of the United States, dating back to the self-initiated external review of Harvard in 1642. This external review, done only six years after Harvard’s founding, was intended to have peers from universities in Great Britain and Europe ascertain the rigor of its courses (Davenport, 2000; Brittingham, 2009).
type of self-study is not only the first example in America of peer-review, but it also highlights the
need for self and peer regulation in the U.S. educational system due to the lack of federal
governmental regulation. This lack of federal government intervention in the evaluation process
of educational institutions is a main reason for the way accreditation in the U.S. developed
(Brittingham, 2009).
While the federal government does not directly accredit educational institutions, the first
example of an accrediting body was through a state government. In 1784, the New York Board of
Regents was established as the first regionally organized accrediting organization. The Board
was set up like a corporate office with the educational institutions being franchisees. The Board
created mandated standards that had to be met by each college or university if that institution were
to receive state financial aid (Blauch, 1959).
Not only did Harvard pioneer accreditation in the U.S. with its early external review of its
own courses, but the president of Harvard University initiated a national movement in 1892 when
he organized and chaired the Committee of Ten, which was an alliance formed among educators
(mostly college and university presidents) to seek out standardization regarding educational
philosophies and practices in the U.S. through a system of peer approval (Davis, 1945; Shaw,
1993).
Around this same time there began to be different associations and foundations that
undertook an accreditation review of educational institutions in the U.S. based on their own
standards. Associations such as the American Association of University Women, the Carnegie
Foundation, and the Association of American Universities would (for a variety of different
reasons and clientele (e.g. gender equality, professorial benefits)) evaluate various institutions and
generate lists of approved or accredited schools. These associations were responding to the desire
of their constituents to have accurate information regarding the validity and efficacy of the
different colleges and universities (Orlans, 1975; Shaw, 1993).
Regional Accreditation, 1885 to 1920
When these associations declined to broaden or continue their accrediting practices,
individual institutions began to unite together to form regional accrediting bodies to assess
secondary schools’ adequacy in preparing students for college (Brittingham, 2009). Colleges
were then measured by the quality of students they admitted based on standards at the secondary
school level that were measured by the accrediting agency. The regional accrediting agencies
also began to focus on creating a list of colleges that were good destinations for incoming
freshmen. If an institution was a member of the regional accreditation agency, it was considered
an accredited college; in other words, the institutions that belonged to an accrediting agency were
considered colleges while those that did not belong were not (Blauch, 1959; Davis, 1932; Ewell,
2008; Orlans, 1974; Shaw, 1993).
Regional accrediting bodies were formed in the following years: in 1885, the New
England Association of Schools and Colleges (NEASC), the Middle States Association of
Colleges and Secondary Schools (MSCSS) and Middle States Commission on Higher Education
(MSCHE) in 1887, the North Central Association of Colleges and Schools (NCA) and the
Southern Association of Colleges and Schools (SACS) in 1895, the Northwest Commission on
Colleges and Universities (NWCCU) in 1917, and, finally, the Western Association of Schools
and Colleges (WASC) in 1924 (Brittingham, 2009).
Regional accrediting associations began to create instruments for the purpose of
establishing unity and standardization in regards to entrance requirements and college academic
standards (Blauch 1959). For example, in 1901, the MSCHE and MSCSS created the College
Entrance Examination Board to standardize college entrance requirements. The NCA also
published its first set of standards for its higher education members in 1909 (Brittingham, 2009).
Although there were functioning regional accreditation bodies in most of the states, the
Department of Education created its own national list of recognized (accredited) colleges in 1910.
Due to the public’s pressure to keep the federal government from controlling higher education
directly, President Taft blocked the publishing of the list of colleges and the Department of
Education discontinued the active pursuit of accrediting schools. Instead, it reestablished itself as
a resource for the regional accrediting bodies in regards to data collection and comparison
(Blauch, 1959; Ewell, 2008b; Orlans, 1975).
Regional Accreditation, 1920 to 1950
With the regional accrediting bodies in place, the idea of what constituted an accredited college became even more diverse (e.g. vocational colleges, community colleges). Out of the greater differences among schools with regard to school types and institutional purposes, there arose a need to apply more qualitative measures and a greater focus on outcomes (Brittingham, 2009). School visits by regional accreditors became necessary once a school demonstrated struggles, and qualitative standards have since become the norm. The regional organizations began to
measure success (and therefore grant accredited status) on whether an institution met its own
standards outlined in its own mission, rather than a predetermined set of criteria (Brittingham,
2009). In other words, if a school did what it said it would do, it could be in line to be
accredited. The accreditation process later became a requirement for all member institutions.
Self-reviews and peer-reviews, which became a standard part of the accreditation process, were
undertaken by volunteers from the member institutions (Ewell, 2008b).
Accrediting bodies began to be challenged as to their legitimacy in classifying colleges
as being accredited or not. The Langer Case in 1938 is viewed as a landmark case that
established the standing of accrediting bodies in the United States. Governor William Langer of
North Dakota lost in his legal challenge of the NCA’s denial of accreditation to North Dakota
Agricultural College; this ruling carried over to other legal cases that upheld the decision that
accreditation was legitimate as well as a voluntary process (Fuller & Lugg, 2012; Orlans, 1974).
In addition to the regional accrediting bodies, there arose other associations that were
meant to regulate the accrediting agencies themselves. The Joint Commission on Accrediting
was formed in 1938 to validate legitimate accrediting agencies and discredit questionable or
redundant ones. After some changes to the mission and the membership of the Joint
Commission on Accreditation, the name was changed to the National Commission on
Accrediting (Blauch, 1959; Barlow et al., 2014).
Regional Accreditation, 1950 to 2006
The period 1950 to 1985 has been called the golden age of higher education and was
marked by increasing federal regulations (Ewell, 2008b). During this period, key developments in
the accreditation process occurred, such as the institutional self-study becoming standard, the
practice of site visits being conducted by colleagues from peer institutions, and institutions being
visited regularly on a pre-determined cycle (Ewell, 2008b). With the passage of the Veterans'
Readjustment Assistance Act of 1952, the U.S. Commissioner of Education was required to
publish a list of recognized accreditation associations (Bloland, 2001). This act provided for
education benefits to veterans of the Korean War directly rather than to the educational institution
being attended, increasing the importance of accreditation as a mechanism for recognition of
legitimacy (Woolston, 2012). A more "pivotal event" occurred in 1958 with the National
Defense Education Act's (NDEA) allocation of funding for NDEA fellowships and college loans
(Weissburg, 2008). The NDEA limited participating institutions to those that were accredited
(Gaston, 2014). In 1963, the U.S. Congress passed the Higher Education Facilities Act - this act
required that higher education institutions receiving federal funds through enrolled students be
accredited. Arguably the most striking expansion in accreditation's mission coincided with the
passage of the Higher Education Act (HEA) in 1965 (Gaston, 2014). Title IV of this legislation
expressed the intent of Congress to use federal funding to broaden access to higher education.
According to Gaston (2014), having committed to this much larger role in encouraging college
attendance, the federal government found it necessary to affirm that institutions benefitting from
such funds were worthy of it. Around the same time, the National Committee of Regional Accrediting
Agencies (NCRAA) became the Federation of Regional Accrediting Commissions of Higher
Education (FRACHE) (Bloland, 2001).
When the HEA was signed into law in 1965, it strengthened the resources
available to higher education institutions and provided financial assistance to students enrolled at
those institutions (Bloland, 2001). The law was especially important to accreditation because it
forced the U.S. Department of Education (USDE) to determine and list a much larger number of
institutions eligible for federal programs (Trivett, 1976). In 1967, the NCA revoked Parsons College's accreditation, citing "administrative weakness" and a $14 million debt. The college appealed, but the courts denied the appeal on the basis that the regional accrediting associations were voluntary bodies (Woolston, 2012).
According to Bloland (2001), “The need to deal with a much larger number of potentially eligible institutions led the Office of Education's U.S. Commissioner of Education to create…the Accreditation and Institutional Eligibility Staff (AIES) with an advisory committee” (pp. 24-25).
The creation of the AIES in 1968 allowed for the federal recognition and review process to be
involved with the accrediting agencies (Dickey & Miller, 1972). In 1975, the National Commission on Accrediting and FRACHE merged to form a new organization called the Council on Postsecondary Accreditation (COPA) (Ewell, 2008b). The newly created national
accreditation association encompassed an astonishing array of types of postsecondary education
including community colleges, liberal arts colleges, proprietary schools, graduate research
programs, Bible colleges, trade and technical schools, and home-study programs (Chambers,
1983).
Since 1985, accountability has become the issue of paramount importance in the field of
education. According to Woolston (2012), key developments during this period included rising costs in higher education that resulted in high student loan default rates. Accreditation bodies also faced increasing criticism for a number of apparent shortcomings, most notably a lack of demonstrable student learning outcomes (Gaston, 2014). At the same time, accreditation has increasingly had to be defended by various champions of the practice - for example, congressional hostility reached a crisis stage in 1992 when Congress, in
the midst of debates on the reauthorization of the Higher Education Act, threatened to bring to a
close the role of the accrediting agencies as gatekeepers for financial aid (Bloland, 2001).
During the early 1990s the federal government grew increasingly intrusive in matters
directly affecting the accrediting agencies (Bloland, 2001). As a direct consequence, in 1992, the
Subpart 1 (Part H) of the Higher Education Act amendments involved an increased role for the states in determining the eligibility of institutions to participate in the student financial aid
programs of the aforementioned Title IV (Bloland, 2001). For every state, this meant the creation
of a State Postsecondary Review Entity (SPRE) that would review institutions that the USDE
secretary had identified as having triggered such review criteria as high default rates on student
loans (Bloland, 2001). The SPREs were short lived; in 1994, they were abandoned largely
because of a lack of adequate funding (Bloland, 2001). The 1992 reauthorization of the 1965
Higher Education Act also created the National Advisory Committee on Institutional Quality and
Integrity (NACIQI) to replace the AIES (Bloland, 2001).
For several years, the regional accrediting agencies had entertained the idea of pulling out
of COPA and forming their own national association. Based on dissatisfaction with the
organization, regional accrediting agencies proposed a resolution to terminate COPA by the end
of 1993 - following a successful vote on the resolution, COPA was effectively terminated
(Bloland, 2001). A special committee, generated by the COPA plan of dissolution of April 1993,
created the Commission on Recognition of Postsecondary Accreditation (CORPA) to continue
the work of recognizing accrediting agencies (Bloland, 2001). However, CORPA was formed
primarily as an interim organization to continue national recognition of accreditation. In 1995,
national leaders in accreditation formed the National Policy Board (NPB) to shape the creation
and legitimation of a national organization overseeing accreditation. The national leaders in
accreditation were adamant that the new organization should reflect higher education's needs
rather than those of postsecondary education. Following numerous intensive meetings, a new
organization named the Council for Higher Education Accreditation (CHEA) was formed in
1996 as the official successor to CORPA (Bloland, 2001). In 2006, the Spellings Commission
delivered the verdict that accreditation "has significant shortcomings" (USDE, 2006, p. 7)
and accused accreditation of being both ineffective and a barrier to innovation (Barlow et al.,
2014).
Current and Future State of Accreditation
Accreditation in higher education is at a crossroads. Since the Spellings Report was
released in 2006 (which called for more government oversight of accreditation to ensure public
accountability), the government and critics have begun scrutinizing a system that had been
nongovernmental and autonomous for several decades (Eaton, 2012). The U.S. Congress is
currently in the process of reauthorizing the Higher Education Act (HEA), and it is expected that lawmakers will address accreditation head-on. All the while, CHEA and other accreditation supporters
have been attempting to convince Congress, the academy, and the public at-large of
accreditation’s current and future relevance in quality higher education.
In anticipation of the HEA’s reauthorization, NACIQI was charged with providing the
U.S. Secretary of Education with recommendations on recognition, accreditation, and student aid
eligibility (NACIQI, 2012). The committee advised that accrediting bodies should continue their
gatekeeping role for student aid eligibility, but also recommended some changes to the
accreditation process. These changes included more communication and collaboration between
accreditors, states, and the federal government to avoid overlapping responsibilities; moving away from regional accreditation and toward sector- or mission-focused accreditation; creating an expedited review process and developing more gradations in accreditation decisions; developing
more cost-effective data collection and consistent definitions and metrics; and making
accreditation reports publicly available (NACIQI, 2012).
However, two members of the committee did not agree with the recommendations and
submitted a motion to include the Alternative to the NACIQI Draft Final Report, which suggested
eliminating accreditors' gatekeeping role; creating a simple, cost-effective system of quality assurance that would revoke financial aid from campuses that are not financially secure; eliminating the current accreditation process altogether as a means of reducing institutional expenditures; breaking
the regional accreditation monopoly; and developing a user-friendly, expedited alternative for the
re-accreditation process (NACIQI, 2012). The motion failed to pass, and the alternative view was
not included in NACIQI’s final report. As a result, Hank Brown, the former U.S. Senator from
Colorado and founding member of the American Council of Trustees and Alumni, drafted a
report seeking accreditation reform and reiterating the alternatives suggested above, because
accreditation had “failed to…protect consumers and taxpayers” (Brown, 2013, p. 1).
The same year the final NACIQI report was released, the American Council on Education’s (ACE) Task Force on Accreditation released its own report that identified challenges
and potential solutions for accreditation (ACE, 2012). The task force made six
recommendations: a) increase transparency and communication, b) increase the focus on student
success and institutional quality, c) take immediate and noticeable action against failing
institutions, d) adopt a more expedited process for institutions with a history of good
performance, e) create common definitions and a more collaborative process between
accreditors, and f) increase cost-effectiveness (ACE, 2012). They also suggested that higher education “address perceived deficiencies decisively and effectively, not defensively or reluctantly” (ACE, 2012, p. 8).
President Obama has also recently spoken out regarding accountability and accreditation
in higher education. In his 2013 State of the Union address, Obama asked Congress to “change
the Higher Education Act, so that affordability and value are included in determining which
colleges receive certain types of federal aid” (Obama, 2013a, para. 39). The address was
followed by The President’s Plan for a Strong Middle Class and a Strong America, which
suggested achieving the above change to the HEA “either by incorporating measures of value
and affordability into the existing accreditation system; or by establishing a new, alternative
system of accreditation that would provide pathways for higher education models and colleges to
receive federal student aid based on performance and results” (Obama, 2013b, p. 5).
Furthermore, in August 2013, President Obama called for a performance-based rating system that
would connect institutional performance with financial aid distributions (Obama, 2013c).
Because accreditation was not specifically mentioned in his plan, it is not clear whether the intention is to replace accreditation with this new rating system or to utilize both systems simultaneously (Eaton,
2013b).
The President’s actions over the last year have CHEA and other supporters of
nongovernmental accreditation concerned. Calling it the “most fundamental challenge that
accreditation has confronted to date,” Eaton (2012) has expressed concern over the standardized
and increasingly regulatory nature of the federal government’s influence on accreditation. Astin
(2014) also stated that if the U.S. government creates its own process for quality control, the
U.S. higher education system is “in for big trouble” (para. 9), like the government-controlled Chinese higher education system.
Though many agree there will be an inevitable increase in federal oversight after the
reauthorization of the HEA, supporters of the accreditation process have offered
recommendations for minimizing the effect. Gaston (2014) provides six categories of
suggestions, which include stages for implementation: consensus and alignment, credibility,
efficiency, agility and creativity, decisiveness and transparency, and a shared vision. The
categories maintain the aspects of accreditation that have worked well and are sought after around the world – nongovernmental, peer review – while also addressing the areas receiving the
most criticism. Eaton (2013a) adds that accreditors and institutions must push for streamlining of
the federal review of accreditors as a means to reduce federal oversight; better communicate the
accomplishments of accreditation and how quality peer-review benefits students; and anticipate
any further actions the federal government may take.
While the HEA undergoes the process of reauthorization, the future of accreditation
remains uncertain. There have been many reports and opinion pieces on how accreditation should
change and/or remain the same, many of them with overlapping themes. Only time will tell if the
accreditors, states, and the federal government reach an acceptable and functional common
ground that ensures the quality of U.S. higher education into the future (Barlow et al., 2014).
The next section will provide a brief review of specialized accreditation.
Specialized Accreditation
The Flexner report of 1910 is not only viewed as helping to advance the quality of medical education but, as Booth (1991) stated, “it is viewed by most students of accreditation as the seminal event from which specialized accreditation evolved” (p. 290). Specialized
accreditation focuses on the specialized training and knowledge needed for professional degrees
and careers - the Association to Advance Collegiate Schools of Business (AACSB), the Accreditation
Council for Pharmacy Education (ACPE), Accrediting Council on Education in Journalism and
Mass Communications (ACEJMC), and the Teacher Education Accreditation Council, Inc.
(TEAC) are just a few examples of specialized accrediting bodies that are noted by CHEA. All
told, there are 62 specialized accrediting organizations that are recognized in the United States
(Gaston, 2014). Programmatic accreditation is granted and monitored by national organizations;
unlike the regional accrediting organizations (i.e., WASC, SACS, and NCA), which are organized according to geographic region (Adelman & Silver, 1990; Eaton, 2009;
Hagerty & Stark, 1989).
As noted in the Global University Network for Innovation (2007) publication, just as institutional accreditation needs to focus on academic programs in order to be effective, programmatic accreditation also needs to be cognizant of whether the larger institutional environment is meeting its goals, the two thus working hand-in-hand for overall institutional success.
Coordinating institutional accreditation efforts (where possible) can also be cost effective since
overlap exists between the process of both regional and programmatic accreditation. However,
the review process and resource allocations can become complicated and overwhelming (Western
Association of Schools and Colleges, 2009; Shibley & Volkwein, 2002).
Specialized accrediting organizations (recognized by CHEA) affirm that the standards and
processes of the accrediting organizations are consistent with the academic quality/improvement
& accountability expectations that CHEA has established (Gaston, 2014). Institutions
acknowledge the pressure of meeting not only institutional accreditation, but also the specialized
accreditation of individual programs (Bloland, 2001). Specialized program accreditation distinctively complements institutional quality assurance because individual programs within a single institution can be compared to their counterparts at other institutions. The credibility of programmatic accreditation review rests on its record of achievement; it is also strengthened because it is more focused on a particular area of study and is evaluated by colleagues from peer institutions who are specialists in these specific disciplines (Ratcliff, 1996).
Prior research on programmatic accreditation suffers from the same lack of quantity and rigor as research on institutional accreditation. In some studies, strong
faculty involvement and instruction have been linked to specialized accreditation (Cabrera,
Colbeck, & Terenzini, 2001; Daoust, Wehmeyer, & Eubank, 2006), while other studies that were
focused on student learning outcomes found that specialized accreditation does not provide
enough academic support for student success (Hagerty & Stark, 1989). Further empirical research on specialized accreditation is imperative given its importance to students' educational and professional achievement (Barlow et al., 2014).
Overview of AACSB Accreditation
The American Assembly of Collegiate Schools of Business was established in 1916 by
sixteen colleges and universities (Zammuto, 2008). The organization, now known as The
Association to Advance Collegiate Schools of Business (AACSB) International, was the sole accrediting body for colleges/schools of business for over seventy years. Roller, Andrews, & Bovee (2003) conducted a survey of 411 business school leaders (deans/directors) and found that the leaders considered AACSB accreditation to be the most prestigious of all business school accrediting bodies due to its focus on research. The following section will
provide an overview of AACSB Accreditation.
History
According to Capon (1996), the evolution of business education in the United States is best viewed as growth over three periods (inception to 1950, the 1950s, and 1960 to the present). The Wharton School of Finance and Commerce at the University of Pennsylvania, opened in 1881, was the forefather of American business education. Soon
thereafter, business schools such as Dartmouth’s Tuck School (established in 1900 as the first
graduate school of business) and the Harvard Business School (founded in 1908) were
established to usher in the first wave of American business schools. As mentioned earlier, the
AACSB International was founded in 1916 to accredit the earliest business schools and maintained its status as the lone significant accrediting body for business schools for seventy years.
The first accreditation standards for business schools were adopted in 1919 by the AACSB
(AACSB, 2014).
Following a request by the higher education and business community to re-examine the
curriculum of business schools, two reports (by the Ford Foundation and the Carnegie Corporation) were commissioned in the 1950s to assess the programs of the schools (Capon, 1996). A seminal
finding of these reports was the concern that the curricula offered at the schools were narrow and
simple; faculty were faulted for not conducting enough research and the schools were urged to
incorporate mathematics and strategy (amongst other subjects) into their curriculum (Gordon &
Howell, 1959). The catalyst for these recommendations stemmed from the fact that the evolution
of the American corporation in the late 1950’s had necessitated a workforce of strategic thinkers
and professional managers (Gordon & Howell, 1959). The reports also suggested that the
AACSB organization be strengthened because, according to Gordon & Howell (1959), “It (the AACSB) has shown no leadership whatsoever in helping the best to become still better” (p. 445).
In the period following these landmark reports, business schools restructured their
curriculum to adhere to the recommendations put forth in both reports (Capon, 1996). Other
accrediting bodies for business schools appeared in the late 1980s; the Association of Collegiate
Business Schools and Programs (ACBSP) was established in 1988 to accredit business schools
that could not meet the rigorous criteria set forth by the AACSB (Francisco, Noland, & Sinclair,
2008). In 1997, The International Assembly for Collegiate Business Education (IACBE) was
established as yet another accrediting body for business education; all three accrediting bodies
accredit bachelor’s, master’s, and doctoral programs in business. As of May 2014, there were 711
business schools (in 47 countries/territories) that had AACSB-accreditation (AACSB, 2014).
Mission
The current mission statement of AACSB states: “AACSB International advances quality
management education worldwide through accreditation, thought leadership, and value-added
services” (AACSB, 2014). The AACSB’s mission is important, as it is a critical component of
the continuous improvement standards that have been set forth by the organization. According
to Francisco et al. (2008), the AACSB accreditation standards (since 1991) have included
scholarly production and a focused effort on teaching. AACSB International also devotes
resources to identify global trends and challenges that may affect business schools in the future
(AACSB, 2014). Business schools are able to apply for AACSB accreditation for both their
undergraduate and graduate programs; however, AACSB accreditation is selective and less than
five percent of business schools worldwide have been successful in obtaining accreditation
(AACSB, 2014).
Assurance of Learning Standards (AoLs)
The Assurance of Learning Standards (AoLs) are defined by AACSB International as “evaluating how well the school accomplishes the educational aims at the core of its activities” (AACSB, 2012, p. 59). AACSB goes on to describe AoLs as “…an important reason to assess learning accomplishments. Measures of learning can assure external constituents such as potential students, trustees, public officials, supporters, and accreditors, that the organization meets its goals” (AACSB, 2012, p. 59). AoLs were first introduced in 1992 when the AACSB
established new criteria for its accreditation standards; at the time, these AoLs made up ten percent of the AACSB accreditation standards (Pringle & Michel, 2007). The standards were revised again in 2003, and the changes resulted in a dramatic increase (from ten to thirty percent of the standards) in the share devoted to AoLs (Pringle & Michel, 2007). Specifically, the AACSB
wanted the institutions to identify specific learning goals, assess the learning goals via
appropriate measures, review the results to see if they were successful, and, lastly, implement
any changes that were necessary for improvement; these changes were so comprehensive that the
AACSB offered its assessment seminars several times a year – and almost always to a capacity
crowd (Pringle & Michel, 2007).
The major changes to the AoLs were as follows: (1) the focus of a college’s assessment
activities shifted from the majors to degree programs, (2) assessment required the usage of direct
measures instead of indirect measures, (3) the primary focus of assessment was on the
knowledge and skills acquired throughout the entire degree program rather than a particular set
of specific courses, (4) faculty members were required to be more heavily involved in the
development of learning goals for each degree program via the use of embedded measures, and
(5) all results from the measurements were discussed and used to improve student learning
(Pringle & Michel, 2007). With the realization that these changes were monumental, the
AACSB allowed the institutions five years to implement the new standards (Pringle & Michel,
2007).
In order to understand what core activities business students should be learning via the
AoLs, it is useful to review a few of the AACSB’s suggested areas of learning experiences
(AACSB, 2012, p. 70):
• Global and environmental business concepts
• Ethical behavior responsibilities
• Statistical data analysis abilities
• Group and individual abilities within organizations
• Financial theories and analytical skills
• Strategic management
• Information management abilities
• Ethnic, Cultural, and Gender diversity knowledge
• Human Resource Management
The aforementioned suggested areas of learning speak to the ability of the AACSB to
dedicate itself to the continuous improvement of business education. As the organization states,
“Curricular contents must assure that program graduates are prepared to assume business and management careers as appropriate to the learning goals of the program” (AACSB, 2012, p. 70). The AACSB is leveraging the AoLs to ensure that the institutions are accountable for providing their students a quality education that will ultimately allow them to enter the workforce seamlessly upon graduation.
Direct Measures
As mentioned earlier, a major change of the 2003 revisions to the AACSB accreditation
standards was the requirement to use direct measures for assessment. According to Martell &
Calderon (2005), direct measures of learning call for the demonstration of a student’s knowledge
and skills. These measures, or metrics, would be utilized by AACSB-accredited institutions to directly
measure their students’ progress in meeting the established learning goals; examples of such
measures would be examinations, portfolios, and projects (Pringle & Michel, 2007).
There exist a number of studies relating to the topic of direct measures and assessment.
The Educational Testing Service (ETS) offers the Major Field Test (MFT) to both
undergraduates and MBA students; according to ETS (2014):
The ETS Major Field Tests are comprehensive undergraduate and MBA outcomes
assessments designed to measure the critical knowledge and understanding obtained by
students in a major field of study. The Major Field Tests go beyond the measurement of
factual knowledge by helping you evaluate students’ ability to analyze and solve
problems, understand relationships and interpret material from their major field of study.
(para. 1)
Bycio & Allen (2007) conducted a study to determine if there was a correlation between
a student’s undergraduate grade point average (GPA) in business and their performance on the
ETS MFT as graduating seniors in college. The study found a large, significant correlation between the results of the MFT and a student’s business GPA; students with high business GPAs tended to perform well on the MFT (Bycio & Allen, 2007). A more recent study
conducted by Green, Stone, & Zegeye (2014) concluded that the MFT in Business (MFTB) possesses critical problems relating to measuring student learning goals. Green et al. (2014) asserted that “studies show that individual and institutional MFTB scores are significantly influenced by specific student characteristics. Consequently, use of these scores for AOL assessment required detailed analysis of these characteristics” (p. 22). In other words, factors such as the admission
requirements and student demographics of a business school play a large role in the ability of a
student to perform well on the MFT. Green et al. (2014) suggested the following alternatives to
the MFT: locally designed comprehensive examinations, computer simulation models, and the
Peregrine Outcomes Assessment, which utilizes a pretest-posttest procedure to assess student
learning.
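To make the kind of analysis described above concrete, the following minimal sketch (in Python, with hypothetical data and variable names; none of it is drawn from Bycio & Allen's or Green et al.'s datasets) computes a Pearson correlation between business GPA and MFT scores:

    # Minimal sketch: correlating business GPA with ETS MFT scaled scores.
    # All data below are hypothetical and for illustration only.
    import numpy as np
    from scipy import stats

    gpa = np.array([2.4, 2.8, 3.0, 3.2, 3.5, 3.7, 3.9])  # business GPAs
    mft = np.array([140, 148, 152, 158, 163, 169, 174])  # MFT scaled scores

    r, p = stats.pearsonr(gpa, mft)  # correlation coefficient and p-value
    print(f"r = {r:.2f}, p = {p:.4f}")

A large positive r with a small p-value would correspond to the relationship reported by Bycio & Allen (2007); Green et al.'s (2014) critique implies that such a correlation should be supplemented by controlling for admission and demographic characteristics before drawing AoL conclusions.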
Martell (2007) conducted a study on the state of practice in 2006 for AACSB-accredited
business schools and found that the most frequently used (at least by 40% of respondents) direct
assessment methods were written assignments (evaluated with a rubric), oral presentation
(evaluated with a rubric), course-embedded assessments (i.e. case studies or written/oral
assignments), cases evaluated with a rubric, ETS MFTs, and the systematic evaluation of
teamwork.
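Rubric-evaluated, course-embedded work of this kind is typically rolled up into a simple attainment rate per learning goal. As a minimal sketch only (the rubric scale, threshold, and goal names below are hypothetical assumptions, not a prescribed AACSB procedure), a school might compute the percentage of students meeting each goal as follows:

    # Minimal sketch: rolling rubric scores up into attainment rates.
    # Assumes a hypothetical 1-4 rubric scale where >= 3 means "meets goal".
    scores = {
        "written_communication": [4, 3, 2, 4, 3, 3, 1, 4],
        "ethical_reasoning":     [3, 3, 4, 2, 3, 4, 3, 2],
    }
    THRESHOLD = 3

    for goal, vals in scores.items():
        rate = 100 * sum(v >= THRESHOLD for v in vals) / len(vals)
        print(f"{goal}: {rate:.0f}% of students met the goal")

Attainment rates falling below a faculty-set target would then feed the "review results and implement changes" steps of the AoL cycle described earlier.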
Indirect Measures
Although indirect measures may also be used in the assessment process, the AACSB
(2012) states:
As part of a comprehensive learning assessment program, schools may supplement direct
measures of achievement with indirect measures. Such techniques as surveying alumni
about their preparedness to enter the job market or surveying employers about the
strengths and weaknesses of graduates can provide some information about perceptions
of student achievement. Such indirect measures, however, cannot replace direct
assessment of student performance. (p. 68)
Pringle & Michel (2007) describe indirect measures as the process of asking various
stakeholders (i.e. employers, campus recruiters, alumni, and students) for their opinions
regarding the performance of students in jobs (related to their major) or how effectively students
have completed their program’s learning goals. Pringle & Michel (2007) list several examples of
indirect measures: focus groups, graduating senior exit interviews, and surveys (both oral and
written). Kelley et al. (2010) concluded that the most frequently used indirect measures by
AACSB-accredited institutions were (in order of popularity): surveys of graduating seniors,
surveys of alumni, surveys of employers, exit interviews with graduating seniors, surveys of job
placement by graduating seniors, evaluations by supervisors of interns, and student performance
on licensing examinations. All of the respondents indicated that they were using at least one
indirect measure to supplement their preferred direct measures (Kelley et al., 2010).
Martell’s (2007) study found that in 2006 (three years after the AACSB announced the shift
from indirect to direct measures of assessment), indirect measures were still in use, but their
usage had declined since the establishment of the new AoL standards.
Who is in Charge of the Assessment Programs?
AACSB-accredited institutions are highly regarded within the higher education
community. Students who enroll in these institutions expect that every effort will be made to
ensure that student learning is the central activity of the institution. Faculty and staff members are
relied upon to contribute valuable perspectives on student needs. In order for assessment
programs to be successful, designated personnel (either an individual or a group) must take
responsibility for implementing the program (Kelley et al., 2010). It is therefore important to
identify those responsible for ensuring that the assurance of learning standards are being met and
to be cognizant of the resources allocated to the cause.
Pringle & Michel (2007) found that the person most often responsible for leading the
AACSB assessment activities was an associate dean (who also has other administrative duties).
The study also identified a significant relationship between the program size and the individual
selected to head the assessment activities – larger programs (i.e. more than 2,000 students) were
more likely to assign an associate dean or full-time assessment coordinator as their leader while
smaller programs (i.e. fewer than 1,000 students) were more likely to assign their dean, an
assessment committee, or a faculty member (with no release time) to coordinate their efforts
(Pringle & Michel, 2007). Kelley et al. (2010) also conducted a study to determine who is in
charge of assessment activities for AACSB-accredited institutions; their results also found that
the person most often responsible for assessment activities was an associate dean. Assessment
committees and individual faculty members were also acknowledged as common leaders of
assessment activities (Kelley et al., 2010).
Financial Costs
Studies have demonstrated that the financial resources committed to institutions’ assessment
efforts have increased since the modified AoL standards were introduced in 2003. Martell’s
(2007) study confirmed an increase in the devotion of financial resources to assessment between
2004 and 2006; in 2004, only 20% of the schools had allocated $5,000 or more per year to
assessment, but by 2006 this percentage had grown to 78%. The survey also found that between
2004 and 2006, there was a 13 percentage-point increase (38% to 51%) in the use of the deans’
office to lead AoL assessment efforts (Martell, 2007). Finally, Martell (2007) indicated that, on
average, the 2006 survey participants had spent $20,000 on AoL assessment activities, nearly a
fivefold increase over the 2004 average. Pringle & Michel (2007) supported Martell’s findings,
estimating that over half (54%) of the schools in their study had spent more than $10,000 on their
AoL assessment activities in 2006. These figures, combined with the time devoted by faculty and
those responsible for the assessment activities, demonstrate a trend of increasing financial
resources dedicated to the assessment of AoL requirements. This trend also indicates that the
assessment of AoL standards has become a priority of AACSB-accredited institutions.
Effects of Accreditation
This section of the literature review will examine the effects of accreditation, focusing
primarily on the assessment of student learning outcomes. Specifically, outcome assessment
serves two main purposes - quality improvement and external accountability (Bresciani, 2006;
Ewell, 2009). Over the years, institutions of higher education have made considerable strides
with regard to learning assessment practices and implementation. Yet despite such progress, key
challenges still remain.
Trend toward Learning Assessment
The shift within higher education accreditation toward greater accountability and student
learning assessment began in the mid-1980s (Beno, 2004; Ewell, 2001; Wergin, 2005, 2012).
During that time, higher education was portrayed in the media as “costly, inefficient, and
insufficiently responsive to its public” (Bloland, 2001, p. 34). The impetus behind the public’s
concern stemmed from two sources: first, the perception that students were underperforming
academically, and second, the demands of the business sector (Ewell, 2001). Employers and
business leaders expressed their need for college graduates who could demonstrate high levels of
literacy, problem-solving ability, and collaborative skills in order to support the emerging
knowledge economy of the 21st Century. In response to these concerns, institutions of higher
education started emphasizing student learning outcomes as the main process of evaluating
effectiveness (Beno, 2004).
Framework for Learning Assessment
Accreditation is widely considered to be a significant driving force behind advances in
both student learning and outcomes assessment. According to Rhodes (2012), in recent years,
accreditation has contributed to the proliferation of assessment practices, lexicon, and even
products such as e-portfolios, which are used to show evidence of student learning.
Kuh and Ikenberry (2009) surveyed provosts or chief academic officers at all regionally
accredited institutions granting undergraduate degrees, and found that student assessment was
driven more by accreditation than by external pressures such as government or employers.
Another major finding was that most institutions planned to continue their assessment of student
learning outcomes despite budgetary constraints. They also found that gaining faculty support and
involvement remained a major challenge - an issue that will be examined in more depth later in
this section.
Additionally, college and university faculty and student affairs practitioners have stressed
how students must now acquire proficiency in a wide scope of learning outcomes to adequately
address the unique and complex challenges of today’s ever-changing, economically competitive,
and increasingly globalizing society. In 2007, the Association of American Colleges and
Universities published a report focusing on the aims and outcomes of a 21st Century collegiate
education, with data gathered through surveys, focus groups, and discussions with postsecondary
faculty. Emerging from the report were four “essential learning outcomes” which include: (1)
knowledge of human cultures and the physical and natural world, through study in science and
mathematics, social sciences, humanities, history, languages, and the arts; (2) intellectual and
practical skills, including inquiry and analysis, critical and creative thinking, written and oral
communication, quantitative skills, information literacy, and teamwork and problem-solving
abilities; (3) personal and social responsibility, including civic knowledge and engagement,
multicultural competence, ethics, and foundations and skills for lifelong learning; and (4)
integrative learning, including synthesis and advanced understanding across general and
specialized studies (p. 12). With the adoption of such frameworks or similar tools at institutions,
accreditors can be well positioned to connect teaching and learning and, as a result, better engage
faculty to improve student-learning outcomes (Rhodes, 2012).
Benefits of Accreditation on Learning
Accreditation and student performance assessment have been the focus of various
empirical studies, with several pointing to benefits of the accreditation process. Ruppert (1994)
conducted case studies in 10 states – Colorado, Florida, Illinois, Kentucky, New York, South
Carolina, Tennessee, Texas, Virginia, and Wisconsin – to evaluate different accountability
programs based on student performance indicators. The report
concluded that “quality indicators appear most useful if integrated in a planning process designed
to coordinate institutional efforts to attain state priorities” (p. 155).
Furthermore, research has also demonstrated how accreditation is helping shape outcomes
inside college classrooms. Specifically, Cabrera, Colbeck, and Terenzini (2001) investigated
classroom practices and their relationship with the learning gains in professional competencies
among undergraduate engineering students. The study involved 1,250 students from seven
universities and found that the expectations of accrediting agencies may be encouraging more
widespread use of effective instructional practices by faculty.
A study by Volkwein, Lattuca, Harper, and Domingo (2006) measured changes in student
outcomes in engineering programs, following the implementation of new accreditation standards
by the Accreditation Board for Engineering and Technology (ABET). Based on the data
collected from a national sample of engineering programs, the authors noted that the new
accreditation standards were indeed a catalyst for change, finding evidence that linked the
accreditation changes to improvements in undergraduate education. Students experienced
significant gains in the application of knowledge of mathematics, science, and engineering; usage
of modern engineering tools; use of experimental skills to analyze and interpret data; designing
solutions to engineering problems; teamwork and group work; effective communication;
understanding of professional and ethical obligations; understanding of the societal and global
context of engineering solutions; and recognition of the need for life-long learning. The authors
also found that accreditation prompted faculty to engage in professional development-related
activity. Thus, the study showed the effectiveness of accreditation as a mechanism for quality
assurance (Volkwein et al., 2006).
Organizational Effects of Accreditation
Beyond student learning outcomes, accreditation also has considerable effects on an
organizational level. Procopio (2010) noted that the process of acquiring accreditation influences
perceptions of organizational culture. According to the study, administrators are more satisfied
than staff – and especially more so than faculty – when rating organizational climate, information
flow, involvement in decisions, and utility of meetings. “These findings suggest institutional role
is an important variable to consider in any effort to affect organizational culture through
accreditation buy-in” (p. 10). Similarly, a study by Wiedman (1992) describes how the two-year
process of reaffirming accreditation at a public university drives the change of institutional
culture.
Meanwhile, Brittingham (2009) explains that accreditation offers organizational-level
benefits for colleges and universities. The commonly acknowledged benefits include students’
access to federal financial aid funding, legitimacy in the public marketplace, consideration for
foundation grants and employer tuition credits, positive reflection among peers, and government
accountability. However, Brittingham (2009) points out that there are “not often recognized”
benefits as well (p. 18). For example, accreditation is cost-effective, particularly when
contrasting the number of personnel needed to carry out quality assurance procedures in the U.S.
versus internationally, where quality assurance is far more heavily regulated. Second, “participation in accreditation is
good professional development” because those who lead a self-study come to learn about their
institution with more breadth and depth (p. 19). Third, self-regulation by institutions – if done
properly – is a better system than government regulation. And fourth, “regional accreditation
gathers a highly diverse set of institutions under a single tent, providing conditions that support
student mobility for purposes of transfer and seeking a higher degree” (p. 19).
Future Assessment Recommendations
Many higher education institutions have developed plans and strategies to measure
student-learning outcomes, and such assessments are already in use to improve institutional
quality (Beno, 2004). For future actions, the Council for Higher Education Accreditation
(CHEA), in its 2012 Final Report, recommends the following to further enhance the commitment
to public accountability:
“Working with the academic and accreditation communities, explore the adoption and
implementation of a small set of voluntary institutional performance indicators based on
mission that can be used to signal acceptable academic effectiveness and to inform
students and the public of the value and effectiveness of accreditation and higher
education. Such indicators would be determined by individual colleges and universities,
not government” (p. 7).
In addition, Brittingham (2012) outlines three developments that have the capacity to
influence accreditation and increase its ability to improve educational effectiveness. First,
accreditation is growing more focused on data and evidence, which strengthens its value as a
means of quality assurance and quality improvement. Second, “technology and open-access
education are changing our understanding of higher education” (p. 65). These innovations – such
as massive open online courses – hold enormous potential to open up access to higher education
sources. As a result, this trend will heighten the focus on student learning outcomes. Third,
“with an increased focus on accountability – quality assurance – accreditation is challenged to
keep, and indeed strengthen, its focus on institutional and programmatic improvement” (p. 68).
This becomes particularly important amid the current period of rapid change.
Challenges to Student Learning Outcomes
Assessment is critical to the future of higher education. As noted earlier, outcome
assessment serves two main purposes – quality improvement and external accountability
(Bresciani, 2006; Ewell, 2009). The practice of assessing learning outcomes, introduced in the
mid-1980s, is now widely adopted by colleges and universities. Assessment is also a
requirement of the accreditation process. However, outcomes assessment in higher education is
still a work in progress, and a fair number of challenges remain (Kuh & Ewell, 2010).
Organizational Learning Challenges
First, there is the organizational culture and learning issue. Assessment, as clearly stated
by the American Association for Higher Education (1992), “is not an end in itself but a vehicle
for educational improvement.” The process of assessment is not an end unto itself. Instead, it
provides an opportunity for continuous organizational learning and improvement (Maki, 2010).
Too often, institutions assemble and report mountains of data just to comply with federal or state
accountability policies or accreditation agencies’ requirements. However, after the report is
submitted and the evaluation team leaves, there is little incentive to act on the findings for
further improvement once the accreditation is confirmed. The root causes of identified
deficiencies are rarely followed up on, and real solutions are never sought (Ewell, 2005; Wolff, 2005).
Another concern pointed out by Ewell (2005) is that accreditation agencies tend to
emphasize the process, rather than the outcomes, once the assessment infrastructure is
established. Accreditors are satisfied with formal statements and goals of learning outcomes,
but do not query further about how, how appropriately, and to what degree these learning goals
are applied in the teaching and learning process. As a result, the process tends to produce single-loop
learning (where changes reside at a surface level) instead of double-loop learning (where
changes are incorporated into practices, beliefs, and norms) (Bensimon, 2005).
Lack of Faculty Buy-in
The lack of faculty buy-in and participation is another hurdle in the adoption of
assessment practices (Kuh & Ewell, 2010). In a 2009 survey by the National Institute for
Learning Outcomes Assessment, two-thirds of the 2,809 surveyed schools noted that more
faculty involvement in learning assessment would be helpful (Kuh & Ikenberry, 2009).
According to Ewell (1993, 2002, 2005), there are several reasons why faculty members are not
inclined to be directly involved in the assessment process. First, faculty members view teaching
and curriculum development as their domain. The assessment of their teaching performance and
student learning outcomes by external groups may be viewed as an intrusion on their professional
authority and academic freedom. Second, the extra time and effort required to engage in outcome
assessment, together with the perception among faculty that it adds little value, may be another
deterrent. Furthermore, external bodies impose the compliance-oriented assessment
requirements, and most faculty members participate in the process only indirectly. Faculty tend
to have a lukewarm attitude and leave the assessment work to administrative staff; in addition,
faculty might hold a different view of the definitions and measures of “quality” than that of the
institution or accreditors (Perrault, Gregory, & Carey, 2002, p. 273). Finally, the assessment
process incurs a tremendous amount of work and resources. To cut costs, the majority of the
assessment work is done by the administration at the institution. As a consequence, faculty
perceive assessment as an exercise performed by the administration for external audiences
instead of embracing the process themselves.
Lack of Institutional Investment
Shortage of resources and institutional support is another challenge in the implementation
of assessment practices. As commented by Beno (2004), “[d]eciding on the most effective
strategies for teaching and for assessing learning will require experimentation, careful research,
analyses, and time” (p. 67). With continuously dwindling federal and state funding over the last
two decades, higher education, particularly at public institutions, has been stripped of the
resources to support such an endeavor. A prime example is the recession of the early 1990s.
Budget cuts forced many states to abandon the state assessment mandates that originated in the
mid-1980s and switch instead to process-based performance indicators as a way to gain
efficiency in large public institutions (Ewell, 2005). The 2009 National Institute for Learning
Outcomes Assessment survey shows that the majority of the surveyed institutions were
undercapitalized in resources, tools, and expertise for assessment work. Twenty percent of the
respondents indicated that they had no assessment staff, and 65% indicated that they had two
staff members or fewer (Kuh & Ewell, 2010; Kuh & Ikenberry, 2009). This resource issue is
further described by Beno
(2004):
“A challenge for community colleges is to develop the capacity to discuss what the results
of learning assessment mean, to identify ways of improving student learning, and to make
institutional commitments to that improvement by planning, allocating needed resources,
and implementing strategies for improvement” (p. 67).
Difficulty with Integration into Local Practice
Integrating the value and institutionalizing the practice of assessment into daily operations
can be viewed as another challenge for many institutions. In addition to redirecting resources,
leadership’s involvement and commitment, faculty’s participation, and adequate assessment
personnel contribute to the success of cultivating a sustainable assessment culture and framework
on campus (Banta, 1993; Kuh & Ewell, 2010; Lind & McDonald, 2003; Maki, 2010).
Furthermore, assessment activities, imposed by external authorities, tend to be implemented as an
addition to, rather than an integral part of, an institutional practice (Ewell, 2002). Assessment,
like accreditation, is viewed as a special process with its own funding and committee, instead of
being part of regular business operations. Finally, the work of assessment, program reviews, self-
study, and external accreditation at institutional and academic program levels tends to be handled
by various offices on campus, and coordinating the work can be viewed as another challenge
(Perrault, Gregory, & Carey, 2002).
Colleges also tend to adopt an institutionally isomorphic approach by modeling themselves
after aspirational peers that are more legitimate or successful in dealing with similar situations
and by adopting practices that have gained wide acceptance (DiMaggio & Powell, 1983). As
reported by Ewell (1993), institutions are prone to “second-guess” themselves and adopt the
types of assessment practices deemed acceptable by external agencies as a safe approach instead
of adopting (or customizing) the ones appropriate to local needs and situations. Institutional
isomorphism offers a safer and more predictable route for institutions to deal with uncertainty and
competition, to conform to government mandates or accreditation requirements, or to abide by
professional practices (Bloland, 2001). However, the strategy of following the crowd might
hinder in-depth inquiry into a unique local situation, as well as the opportunity for innovation and
creativity. Furthermore, decision makers may be unintentionally trapped in a culture of doing
what everyone else is doing without carefully examining their unique local situation as well as
the logic, the appropriateness, and the limitations behind the common practice (Miles, 2012).
Lack of assessment standards and clear terminology presents another challenge in
assessment and accreditation practice (Ewell, 2001). With no consensus on vocabulary, methods,
and instruments, assessment practices and outcomes may have limited value. As reported by
Ewell (2005), the absence of outcome metrics makes it difficult for state authorities to aggregate
performance across multiple institutions and to communicate the outcomes to the public.
Bresciani (2006) stressed the importance of developing a conceptual definition, framework, and
common language at the institutional level.
Tension between Improvement and Accountability
The tension between the goals of outcomes assessment and quality improvement via
external accountability can be another factor affecting outcome assessment practice. According
to Ewell (2008a, 2009), assessment practices have evolved over the years into two contrasting
paradigms. The first paradigm, assessment for improvement, emphasizes the constant evaluating
and enhancing of the process or outcomes, while the other paradigm, assessment for
accountability, demands conformity to a set of established standards mandated by the state or
accrediting agencies. The strategies, the instrumentation, the methods of gathering evidence, the
reference points, and the ways in which results are utilized in these two paradigms tend to sit at
opposite ends of the spectrum (Ewell, 2008a, 2009). For example, institutions mainly use the
assessment for improvement paradigm internally to address deficiencies and enhance teaching
and learning; it requires periodic evaluation and formative assessment practices to track progress
over time. On the other hand, the assessment for accountability paradigm is designed to
demonstrate institutional effectiveness and performance to external constituencies and to comply
with pre-defined standards or expectations. The process tends to be performed on set schedules
as a summative assessment. The contrasting nature of these two paradigms can create tension
and conflict within an institution. Consequently, an institution’s assessment program is unlikely
to achieve both objectives. Ewell (2009) further pointed out that “when institutions are presented
with an intervention that is claimed to embody both accountability and improvement,
accountability wins” (p. 8).
Transparency Challenges
Finally, for outcomes assessment to be meaningful and accountable, the process and
information need to be shared and open to the public (Ewell, 2005). Accreditation has long been
criticized as mysterious/secretive with little information shared with stakeholders (Ewell, 2010).
In a 2006 survey, CHEA reported that only 18% of the 66 accreditors surveyed publicly provided
information about the results of individual reviews; less than 17% of accreditors provided a
summary on student academic achievement or program performance, and just over 33% of
accreditors offered a descriptive summary of the characteristics of accredited institutions or
programs (Council for Higher Education Accreditation, 2006). In a 2014 Inside Higher Ed
survey, only 9% of the 846 college presidents surveyed indicated that it is very easy to find
student outcomes data on their institution’s website; only half of the respondents agreed that it is
appropriate for the federal government to collect and publish data on the learning outcomes of
college graduates (Jaschik & Lederman, 2014). With the public disclosure requirements of
the No Child Left Behind Act of 2001, there is an impetus for higher education and accreditation
agencies to be more open to public and policy makers. It is therefore expected that further
openness will contribute to more effective and accountable business practices as well as the
improvement of educational quality.
It has been three decades since the birth of the assessment movement in U.S. higher
education and a reasonable amount of progress has been made (Ewell, 2005). Systematic
assessment of student learning outcomes is now a common practice at most institutions. The
2009 National Institute for Learning Outcomes Assessment survey revealed that more than 75%
of surveyed institutions have adopted common learning outcomes for all undergraduate students
and most institutions conduct learning assessments at both the instructional and program levels
(Kuh & Ikenberry, 2009). A 2008 survey performed by the Association of American Colleges
and Universities also reported that 78% of the 433 surveyed institutions have a common set of
learning outcomes for all their undergraduate students and 68% of the institutions also assess
learning outcomes at the departmental level (Hart Research Associates, 2009).
As the public concern about the performance and quality of American colleges and
universities continues to grow, it is more imperative than ever to embed assessment in teaching
and to use assessment outcomes to further improve practice, inform decision makers,
communicate effectively with the public, and to be accountable for preparing the future leaders in
the knowledge economy. With enough effort, transparency, continuous improvement, and
responsiveness to society’s demands, higher education institutions will be able to regain the trust
of the public (Barlow et al., 2014).
Conclusion
This chapter provided an overview of the history of accreditation from its earliest
beginnings to its current and future state. A description of specialized accreditation was
provided as a precursor to the review of the premier business school accreditation body, the
AACSB. Its history, mission, AoL standards, and specific assessment characteristics were
specified and summarized. Lastly, the link between accreditation and its effect on learning
assessment was detailed in the final section. As indicated in the history of the AACSB section,
the AACSB was challenged in the early 1900s to strengthen itself. The literature has shown that
this accrediting body has continued to re-evaluate itself in its quest to have its members provide
the best possible business education to their students. As stated by the organization itself, “At this
point, schools should be demonstrating a high degree of maturity in terms of delineation of clear
learning goals, implementation of outcome assessment processes, and demonstrated use of
assessment information to improve curricula” (AACSB, 2012, p. 69). This study will aim to
assist by providing new knowledge as to how these institutions are living up to these standards.
CHAPTER THREE: RESEARCH DESIGN AND METHODOLOGY
This chapter will detail how this quantitative study will be conducted to determine the most
frequently used direct and indirect measures among AACSB-Accredited Business Schools and
the improvements in student learning that have resulted from these measures. The
study will also provide information about who is in charge of the assessment process and what
resources are allocated to the assessment process of AoL requirements. The research design,
research questions, population and sample, instrumentation, data collection & analysis, and
limitations/delimitations will be discussed in this chapter.
Research Design
To test current practices against the historical record, data for this study was collected via
an online survey. The survey collected quantitative data on the direct and indirect measures that
are currently being used to satisfy the AoL requirements of AACSB-accredited business schools.
The survey also collected information regarding the individual responsible for assessment
activities and the financial costs of satisfying the AoL requirements of AACSB-accredited
business schools. The results were analyzed using descriptive statistics and compared to results
found in previous studies related to assessment measures used to satisfy the AoL requirements of
AACSB-accredited business schools. The survey instrument was primarily modeled after the
studies administered by Pringle & Michel (2007) & Kelley et al. (2010). These studies were
conducted to obtain information about assessment practices in AACSB-Accredited Business
Schools; permission was obtained from Pringle & Michel (2007) & Kelley et al. (2010) allowing
the reuse of the survey questions. Additional information from a previous study by Martell
(2007) relating to the Assurance of Learning requirements in AACSB-accredited business
schools was also used to influence the creation of the survey questions. The population/sample
and instrumentation will be further discussed in this section.
Research Question 1
Research question 1 asked, “What are the most frequently used direct measures within
the Assurance of Learning requirements of AACSB-Accredited Business Schools?” The
researcher defines the term “Direct Measures” as an assessment measure in which a “student
demonstration of achievement” (Martell, 2007, p. 189) is captured. The following direct
measures are listed as possible answers under the “check all that apply” question format: written
assignments graded with rubric, oral assignments graded with rubric, course embedded
assignments with rubric, cases evaluated with rubric, ETS major field test, common school
exams, systematic evaluation of teamwork, simulations, individually written business plan,
assessment center, mock interview, psychometric measures and other (please specify). These
direct measures options were obtained from the surveys conducted by Martell (2007), Pringle &
Michel (2007), and Kelley et al. (2010).
Research Question 2
Research question 2 asked, “What improvements in student learning have resulted from
these direct measures?” The following improvements in student learning from direct measures
were listed as possible answers under the “check all that apply” question format: new or
modified courses, major modifications to the curriculum, minor modifications to the curriculum,
modifications to teaching methods, modifications to teaching styles, modifications to student
learning objectives, modifications to grading methods, closer coordination of multi-section
courses, new admission standards, greater use of out-of-the classroom learning experience (e.g.
internships), and other (please specify). The researcher also asked the following open-ended
question: “Please share with us any comments you may have regarding specific improvements”
in order to gain more knowledge about improvements from the usage of direct measures for
assessment.
Research Question 3
Research question 3 asked, “What are the most frequently used indirect measures within
the Assurance of Learning requirements of AACSB-Accredited Business Schools?” The
researcher defines the term “Indirect Measures” as assessment measures which “include any
measures that are not tied directly to having student demonstrate the achievement of learning
objectives” (Kelley et al., 2010, Appendix). The following indirect measures were listed as
possible answers under the “check all that apply” question format: survey of graduating students,
survey of alumni, survey employers of alumni, conduct exit interviews with graduating students,
evaluation by supervisors of student interns, survey job placement of graduating students,
evaluate students’ performance in licensing exams, conduct focus groups with graduating
students, conduct focus groups with recruiters, and other. These indirect measures options were
obtained from the surveys conducted by Martell (2007), Pringle & Michel (2007), and Kelley et
al. (2010).
Research Question 4
Research question 4 asked, “What improvements in student learning have resulted from
these indirect measures?” The following improvements in student learning from indirect
measures were listed as possible answers under the “check all that apply” question format: new
or modified courses, major modifications to the curriculum, minor modifications to the
curriculum, modifications to teaching methods, modifications to teaching styles, modifications to
student learning objectives, modifications to grading methods, closer coordination of multi-
section courses, new admission standards, greater use of out-of-the classroom learning
experience (e.g. internships), and other (please specify). The researcher also asked the following
open-ended question: “Please share with us any comments you may have regarding specific
improvements” in order to gain more knowledge about improvements from the usage of indirect
measures for assessment.
Research Question 5
Research question 5 asked, “Who is in charge of the assessment programs at AACSB-
Accredited Business Schools?” The survey question was slightly modified to help the
respondents understand the scope of the question; the text, “Who is the individual most directly
responsible for helping develop, coordinate, and report your AACSB assessment activities and
results” was used to capture information relating to research question 5. The following options
were listed as possible answers: a full-time assessment coordinator, an associate dean who also
has other administrative duties, the dean, a faculty member who is given release time, a faculty
member who is not given release time, an assessment committee, and other (please specify). The
researcher asked the respondents to specify what their answer of “other” was in order to gain
more knowledge about who is in charge of assessment programs at AACSB-accredited business
schools.
Research Question 6
Research question 6 asked, “What financial costs are associated with assessing the
Assurance of Learning requirements of AACSB-Accredited Business Schools?” Financial costs
are defined as costs of attending assessment workshops, release time for faculty, time spent in
assessment committee meetings, time required for computer programming, software costs, and so
on; the definition of these financial costs was directly transcribed (with permission) from
Pringle & Michel’s (2007) study about assessment practices in AACSB-accredited business
schools. The following options were listed as possible answers: less than $1,000, $1,001-$5,000,
$5,001-$10,000, $10,001-$15,000, $15,001-$20,000, $20,001-$25,000 and $25,001-$30,000.
These numerical ranges were established by the researcher based on previous studies on
assessment practices in AACSB-accredited business schools (Martell, 2007; Pringle & Michel,
2007; Kelley et al., 2010). The options in this question sought to identify a trend regarding the
rising amount of financial resources dedicated to assessment activities; prior studies have shown
that AACSB-accredited business schools were spending an increasing amount of funds on their
assessment activities in order to meet the AoL requirements (Martell, 2007; Pringle & Michel,
2007; Kelley et al., 2010).
The survey also contained a question asking the respondent to indicate how their
department/school allocates the financial support for assessing student learning. The following
options were directly transcribed (with permission) from Kelley et al.’s (2010) study about
assessment practices in AACSB-accredited business schools: faculty training, books and
material, instruments (i.e. standardized), questionnaire development, and other (please specify).
The researcher asked the respondents to specify what their answer of “other” was in order to gain
more knowledge about how AACSB-accredited business schools are allocating their financial
support for assessment. The data from this question will further help to identify the documented
trend of the rising costs of assessment in AACSB-accredited business schools (Martell, 2007;
Pringle & Michel, 2007; Kelley et al., 2010).
Population and Sample
In order to more fully understand how business schools have responded to the modified
2003 Assurance of Learning requirements, the population for this study was the primary
Accreditation Liaison Officer (ALO) or Dean of all AACSB-Accredited business schools (both
inside and outside of the United States). The researcher used purposeful sampling because the
Assurance of Learning requirements are specific to AACSB-Accredited business schools.
According to Bloomberg & Volpe (2008), purposeful sampling involves the “selection of
information-rich cases, with the objective of yielding insight and understanding of the
phenomenon under investigation” (p. 69). A spreadsheet containing the specific
contact/mailing information of the intended population was purchased by the researcher from the
AACSB and was used as the primary source of information to distribute the web-based survey.
Instrumentation
The survey questionnaire was used as the primary data-gathering instrument for this
study (Appendix A). The questionnaire was divided into two main sections: background
questions about the individual filling out the survey and their respective institution, and specific
questions addressing the research questions: direct and indirect measures of assessment, the
impact of these measures on the institution, who is in charge of assessment, and the financial
costs dedicated to assessment activities. A majority of the survey questions were directly
modeled after Pringle & Michel’s (2007) and Kelley et al.’s (2010) studies on the assessment
practices of AACSB-Accredited Business Schools; permission was obtained from the authors of
both studies allowing the replication of survey questions. Additional survey questions were
inspired by Martell’s (2007) previous study of assessment practices at AACSB-accredited schools.
Please see Table 3.1 for a summary of the relationship between survey questions and
background/research questions.
Table 3.1: Relationship between Survey Questions and Research Questions
Survey Question Addresses
1-8 Background
9 Research Q6
10 Research Q5
11 Research Q6
12 Research Q6
13 Research Q1
14 Research Q2
15 Research Q3
16 Research Q4
Total Background 8; Research Q1-Q5 1 each; Research Q6 3
The first section of the survey consists of background questions about the individual
filling out the survey and their respective institution/department. Questions 1-8 (8 total
questions) are dedicated to these background questions.
The second section of the survey addresses the research questions of the study.
Specifically, question 13 (1 total question) is related to Research Question 1, question 14 (1 total
question) is related to Research Question 2, question 15 (1 total question) is related to Research
Question 3, question 16 (1 total question) is related to Research Question 4, question 10 (1 total
question) is related to Research Question 5, and questions 9 & 11-12 (3 total questions) are
related to Research Question 6.
An email/letter containing background information about the study and a description of the
steps taken to ensure confidentiality was sent along with a link to the survey to appeal to the
participants. The researcher provided contact information to address any follow-up
questions and offered to share survey results with the participants.
Reliability and Validity
The reliability of a survey instrument is generally determined by the consistency and
accuracy of the scores/results (Creswell, 2009). This study sought to ensure the reliability of the
survey instrument by minimizing potential errors such as sampling error, coverage error, and
non-response error (Dillman, Smyth, & Christian, 2009). In order to minimize sampling error,
the author purchased an AACSB-approved mailing list from the AACSB; this list ensured that
the respondents were the primary points of contact for any AACSB communication. The
AACSB-approved mailing list also minimized coverage error by identifying specific individuals
as the primary contact person for the AACSB. Lastly, non-response error was minimized by
formatting the survey similarly to past surveys that had previously been sent to these individuals.
Creswell (2009) describes the three traditional forms of validity as content validity,
predictive/concurrent validity, and construct validity. Content validity was established through
the use of a content expert in AACSB AoL standards who reviewed the survey questions prior to
their dissemination. In addition, each question of the survey was formulated to reflect current
themes in AACSB AoL standards.
Data Collection and Analysis
Upon approval by the Institutional Review Board (IRB), the researcher purchased an
AACSB-approved mailing list and further research found 654 valid email addresses to utilize for
the survey distribution. The online survey was thoroughly tested prior to its distribution, and its
contents were delivered via the Qualtrics survey service to the AACSB contacts. In early
November 2014, an email message containing a link to the survey instrument was sent to the 654
email addresses of the AACSB-approved contacts. The subject line of the email was carefully
worded in order to prevent the message from being flagged as junk mail and the message was
sent from one of the author’s educational email accounts (.edu) in order to add legitimacy to the
email. The message also specifically stated that all responses would be anonymous and
confidential and the respondents were given the author’s contact information if they had any
questions or concerns. The researcher followed up with a similar email message two to three
weeks after the initial message was sent in order to encourage further participation from
non-respondents.
Once the survey closed, the data collected was downloaded from the Qualtrics survey
service. This data was then imported into the Statistical Package for the Social Sciences (SPSS)
version 22 in order to be coded and analyzed according to the proposed research design.
Descriptive statistics were run via SPSS in order to compute the frequencies for each survey
question.
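For readers without SPSS, the following is a hypothetical sketch of an equivalent frequency computation in Python with pandas; the column names are illustrative stand-ins, not the actual Qualtrics export fields:

```python
# Illustrative equivalent of the SPSS frequency analysis, using pandas.
# Column names are hypothetical; the real Qualtrics export differs.
import pandas as pd

df = pd.DataFrame({
    "institution_type": ["Public", "Private", "Public", "Public"],
    # Check-all-that-apply item: one 0/1 indicator column per option
    "dm_written_rubric": [1, 1, 0, 1],
    "dm_oral_rubric":    [1, 0, 1, 1],
})

# Frequencies for a single-response item (SPSS FREQUENCIES equivalent)
print(df["institution_type"].value_counts(normalize=True) * 100)

# For a check-all-that-apply item, sum the indicator columns
direct = df.filter(like="dm_")
print(direct.sum())                    # raw counts per measure
print(direct.sum() / len(df) * 100)    # percentage of institutions
```

The same count/percentage logic underlies the frequency tables reported in chapter four.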
CHAPTER FOUR: RESEARCH FINDINGS
This section will detail the results of the survey described in chapter three and the initial
findings from the analysis. The survey collected quantitative data on the direct and indirect
measures that are currently being used to satisfy the AoL requirements of AACSB-accredited
business schools. The survey also collected information regarding the individual responsible for
assessment activities and the financial costs of satisfying the AoL requirements of AACSB-
accredited business schools. The results were analyzed using descriptive statistics and compared
to results found in previous studies related to assessment measures used to satisfy the AoL
requirements of AACSB-accredited business schools. The survey instrument was primarily
modeled after the studies administered by Pringle & Michel (2007) & Kelley et al. (2010).
Demographics
The survey instrument was emailed to 654 fully accredited AACSB institutions (there are a
total of 708 fully accredited AACSB institutions; however, 54 institutions were not contacted due
to insufficient contact information). A total of 77 institutions participated in the survey, yielding
a response rate of 11.8%. A majority (77.9%) of the institutions that responded were public
institutions (see Table 4.1 below).
Table 4.1: Survey respondents by Institution Type (Private vs. Public)
Institution Type Frequency Percent
Public 60 77.9%
Private 17 22.1%
Total 77 100.0%
The respondents were extremely varied according to job title (see Table 4.2); the full-
time faculty job title (29.9%) was the most frequently listed, followed by Dean (27.3%),
Administrative Staff (23.4%), and Accreditation Liaison Officer (19.5%).
Table 4.2: Survey respondents by Job Title
Job Title Frequency Percent
Full-time Faculty 23 29.9%
Dean 21 27.3%
Administrative Staff 18 23.4%
Accreditation Liaison Officer 15 19.5%
Total 77 100.0%
A majority of the institutions represented offered (see Table 4.3) either four or more
(33.8%) or one (32.5%) undergraduate degree programs. The next highest proportion of
undergraduate degree programs was two (22.1%), followed by three (7.8%) and none (3.9%).
In terms of graduate degree program offerings (see Table 4.4), the most frequent response was
four or more (54.5%), followed by three (18.2%), one (11.7%), and two and none (both 7.8%).
Table 4.3: Survey respondents by number of Undergraduate Degree Programs
Number of Undergraduate Degree Programs Frequency Percent
Four or More 26 33.8%
One 25 32.5%
Two 17 22.1%
Three 6 7.8%
None 3 3.9%
Total 77 100.0%
Table 4.4: Survey respondents by number of Graduate Degree Programs
Number of Graduate Degree Programs Frequency Percent
Four or More 42 54.5%
Three 14 18.2%
One 9 11.7%
Two 6 7.8%
None 6 7.8%
Total 77 100.0%
An overwhelming majority (94.8%) of the institutions indicated (see Table 4.5) that an
undergraduate core business curriculum is required. These statistics show that the vast majority
of respondents are able to address the survey questions regarding undergraduate business
programs.
Table 4.5: Undergraduate Core Business Curriculum Required
Undergraduate Core Business Curriculum Required Frequency Percent
Yes 73 94.8%
No 4 5.2%
Total 77 100.0%
Lastly, the size of the institutions (see Table 4.6) varied greatly amongst the
respondents. The largest groups (23.4% each) stated that their institutions enrolled either 501-1000
or 1001-2000 students. The next largest representation was 201-500 students (20.8%), followed
by more than 3000 (11.7%) and 2001-3000 students (10.4%). The smallest student population
represented was 1-100 (3.9%).
Table 4.6: Survey Respondents by Student Enrollment
Student Enrollment Frequency Percent
1-100 3 3.9%
101-200 5 6.5%
201-500 16 20.8%
501-1000 18 23.4%
1001-2000 18 23.4%
2001-3000 8 10.4%
More than 3000 9 11.7%
Total 77 100.0%
Findings by Research Question
Research Question 1
This research question sought to identify the most frequently used direct measures within
the Assurance of Learning requirements of AACSB-Accredited Business Schools. The
researcher defines the term “Direct Measures” as an assessment measure in which a “student
demonstration of achievement” (Martell, 2007, p. 189) is captured.
The survey yielded the following results (as illustrated by the cumulative totals in Table
4.7): Written Assignments Graded With Rubric (18.0% of cumulative total), Course Embedded
Assignment With Rubric (16.9% of cumulative total), and Oral Assignments Graded With
Rubric (16.2% of cumulative total) were the most commonly identified direct measures used to
satisfy the AoL standards. Cases Evaluated with Rubric (12.8% of cumulative total) and
Systematic Evaluation of Teamwork (9.7% of cumulative total) also stood out as popular choices
amongst respondents. The least commonly used direct measures were identified as Assessment
Center (1.8% of cumulative total), Mock Interview (1.3% of cumulative total) & Psychometric
measures (1.3% of cumulative total). It is useful to note that 12 responses (3.1% of cumulative
total) indicated that “Other” direct measures were used to satisfy the AoL standards; some of
these responses included “Employer evaluations of individual student performance”, “Evaluation
of internship employer”, “course embedded measure without rubric”, and “Our version of ETS
test.”
Table 4.8 displays the most commonly used direct measures according to percentage of
institutions. Written Assignments Graded With Rubric (90.9% of institutions), Course
Embedded Assignment With Rubric (85.7% of institutions), and Oral Assignments Graded With
Rubric (81.8% of institutions) were by far the most commonly reported direct measures used to
satisfy the AoL standards. Cases Evaluated with Rubric (64.9% of institutions) and
Systematic Evaluation of Teamwork (49.4% of institutions) were also identified as
widespread choices amongst respondents. Similar to the cumulative totals, the least popular
direct measures were Assessment Center (9.1% of institutions), Mock Interview (6.5% of
institutions) & Psychometric measures (6.5% of institutions).
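Because Tables 4.7 and 4.8 report the same counts against two different denominators, the following small calculation (in Python, using the Written Assignments figures from the tables) illustrates how the two percentages relate:

```python
# The same count reported against the two denominators used in
# Tables 4.7 and 4.8 (figures taken from those tables).
count = 70              # Written Assignments Graded With Rubric
total_selections = 390  # all direct-measure selections (cumulative total)
respondents = 77        # institutions that completed the survey

print(f"Share of cumulative total: {count / total_selections:.1%}")  # cf. 18.0% in Table 4.7
print(f"Percentage of institutions: {count / respondents:.1%}")      # cf. 90.9% in Table 4.8
```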
Table 4.7: Direct Measures used to satisfy the Assurance of Learning Standards
(Cumulative Totals)
Direct Measures (Cumulative Totals) Frequency Percent
Written Assignments Graded With Rubric 70 18.0%
Course Embedded Assignment With Rubric 66 16.9%
Oral Assignments Graded With Rubric 63 16.2%
Cases Evaluated With Rubric 50 12.8%
Systematic Evaluation Of Teamwork 38 9.7%
Simulations 22 5.6%
ETS Major Field Test 20 5.1%
Common School Exams 17 4.4%
Individually Written Business Plan 15 3.8%
Assessment Center 7 1.8%
Mock Interview 5 1.3%
Psychometric measures 5 1.3%
Other¹ 12 3.1%
Total 390 100.0%
¹ “Other” includes such responses as “Embedded multiple-choice questions”, “Our version of ETS
test”, “Student performance in prerequisite courses” and “Evaluation of internship employer.”
Table 4.8: Direct Measures used to satisfy the Assurance of Learning Standards
(Percentage of Institutions)
Direct Measures (Percentage of Institutions) n Percent
Written Assignments Graded With Rubric 70 90.9%
Course Embedded Assignment With Rubric 66 85.7%
Oral Assignments Graded With Rubric 63 81.8%
Cases Evaluated With Rubric 50 64.9%
Systematic Evaluation Of Teamwork 38 49.4%
Simulations 22 28.6%
ETS Major Field Test 20 26.0%
Common School Exams 17 22.1%
Individually Written Business Plan 15 19.5%
Assessment Center 7 9.1%
Mock Interview 5 6.5%
Psychometric measures 5 6.5%
Other² 12 15.6%
² “Other” includes such responses as “Embedded multiple-choice questions”, “Our version of ETS
test”, “Student performance in prerequisite courses” and “Evaluation of internship employer.”
Research Question 2
Research Question 2 sought to find the answer to the question, “What improvements in
student learning have resulted from these direct measures?” Immediately following the survey
question about the most commonly used direct measures to satisfy AoL standards, the
respondents were asked to identify the improvements in student learning that have resulted from
the direct measures.
The survey found (as illustrated by the cumulative totals in Table 4.9 & percentage of
institutions in Table 4.10) that the most common improvements in student learning are New or
Modified Courses (17.6% of the cumulative total & 80.5% of institutions), Minor Modifications
to the Curriculum (15% of the cumulative total & 68.8% of institutions), Modifications to
Teaching Methods (13.4% of cumulative total & 61% of institutions), Modifications to Student
Learning Objectives (12.8% of cumulative total & 58.4% of institutions), and Closer
Coordination of Multi-section Courses (11.6% of cumulative total & 53.2% of institutions). The
least reported improvements were Modifications to Grading Methods (5.1% of cumulative total
& 23.4% of institutions), Greater use of out-of-the classroom learning experiences (e.g.
internships) (5.1% of cumulative total & 23.4% of institutions), & New Admission Standards
(2.3% of cumulative total & 10.4% of institutions).
It is meaningful to note that some of the “Other” responses included such improvements
as, “Program review”, “More awareness of what we are doing across courses”, and “Created a
Writing Center.”
Table 4.9: Improvements in Student Learning that have resulted from the Direct Measures
(Cumulative Totals)
Improvements in Student Learning that have resulted
from the Direct Measures (Cumulative Totals)
Frequency Percent
New or Modified Courses 62 17.6%
Minor Modifications To The Curriculum 53 15.0%
Modifications To Teaching Methods 47 13.4%
Modifications To Student Learning Objectives 45 12.8%
Closer Coordination Of Multi-section Courses 41 11.6%
Major Modifications To The Curriculum 27 7.7%
Modifications To Teaching Styles 25 7.1%
Greater use of out-of-the classroom learning
experiences (e.g. internships)
18 5.1%
Modifications To Grading Methods 18 5.1%
New Admission Standards 8 2.3%
Other³ 8 2.3%
Total 352 100.0%
³ “Other” includes such responses as “Created a Writing Center”, “More awareness of what we are
doing across courses”, and “Program Review.”
Table 4.10: Improvements in Student Learning that have resulted from the Direct
Measures (Percentage of Institutions)
Improvements in Student Learning that have resulted
from the Direct Measures (Percentage of Institutions)
n Percent
New or Modified Courses 62 80.5%
Minor Modifications To The Curriculum 53 68.8%
Modifications To Teaching Methods 47 61.0%
Modifications To Student Learning Objectives 45 58.4%
Closer Coordination Of Multi-section Courses 41 53.2%
Major Modifications To The Curriculum 27 35.1%
Modifications To Teaching Styles 25 32.5%
Greater use of out-of-the classroom learning
experiences (e.g. internships)
18 23.4%
Modifications To Grading Methods 18 23.4%
New Admission Standards 8 10.4%
Other⁴ 8 10.4%
⁴ “Other” includes such responses as “Created a Writing Center”, “More awareness of what we are
doing across courses”, and “Program Review.”
Comments Regarding Specific Improvements in Student Learning
The respondents were offered a chance to provide any open-ended comments regarding
specific improvements in student learning that have resulted from the direct measures. Two
primary themes emerged from these comments: improvements in student learning are viewed as
a continual and ongoing process, and ethics is a subject of increasing importance in student
learning.
The first theme that became evident through the comments was that improvements in
student learning are viewed as a continual and ongoing process. A few respondents provided
comments such as, “Identification on continuous improvements happens at all the levels above as
AoL results are discussed at CIR committee levels and loops are attempted to be closed.” and “It
is a continual process of closing the loop.” This theme is best summarized by the following
statement, “Learning Objectives were seen to be too broad and hard to measure, so programme
committee were required to rework the learning objectives.”
The other theme that emerged was the increased importance of ethics within student learning. Regarding this theme, one response stated:
We have substantially reduced a course to increase emphasis on ethics; we enhanced
current content on corporate governance and placed it earlier into the core curriculum; we
have improved feedback to the core class teaching faculty on concepts that are no being
retained later in the student's time in the College.
Another respondent stated the following regarding ethics:
We added a stand-alone course in business ethics and one in international business to
improve our attention to these topics. We modified coverage of some subjects in certain
core classes to reinforce learning objectives and outcomes and we now coordinate classes
like Foundations of Business and Principles of Financial Accounting to assure the level
of rigor is the same across all sections.
It is evident from these comments that ethics is becoming an important factor in improving
student learning within AACSB institutions.
Research Question 3
Research question 3 asked, “What are the most frequently used indirect measures within the Assurance of Learning requirements of AACSB-Accredited Business Schools?” The researcher defines “indirect measures” as those that “include any measures that are not tied directly to having students demonstrate the achievement of learning objectives” (Kelley et al., 2010, Appendix).
The survey found the following most commonly used indirect measures to satisfy the
AoL standards (as illustrated by the cumulative totals and percentage of institutions in Table
4.11 and Table 4.12): Survey of Graduating Students (18.3% of cumulative total and 57.1% of
institutions), Evaluation by Supervisors of Student Interns (15.4% of cumulative total and 48.1%
of institutions), and Survey Job Placement of Graduating Students (12.5% of cumulative total
and 39% of institutions). Survey of Alumni (10.8% of cumulative total and 33.8% of
institutions), Survey Employers of Alumni (8.3% of cumulative total and 30% of institutions),
and Conduct Exit Interviews with Graduating Students (8.3% of cumulative total and 30% of
institutions) were also identified as popular choices. The least commonly used indirect measures
were found to be Conduct Focus Groups with Graduating Students (6.3% of cumulative total and
19.5% of institutions) and Conduct Focus Groups with Recruiters (5% of cumulative total and
15.6% of institutions). It is important to note that 15 respondents (6.3% of cumulative total and
19.5% of institutions) indicated that “Other” indirect measures were used to satisfy the AoL
standards; some of these responses included “Alumni focus groups by department”, “NSSE,
focus groups with current students”, “Focus groups with alumni”, and “Participation in certain
extra-curricular activities (quality and quantity).”
Table 4.11: Indirect Measures used to satisfy the Assurance of Learning Standards
(Cumulative Totals)
Indirect Measures (Cumulative Totals) Frequency Percent
Survey of Graduating Students 44 18.3%
Evaluation By Supervisors Of Student Interns 37 15.4%
Survey Job Placement Of Graduating Students 30 12.5%
Survey of Alumni 26 10.8%
Evaluate Students’ Performance In Licensing Exams 21 8.8%
Conduct Exit Interviews With Graduating Students 20 8.3%
Survey Employers Of Alumni 20 8.3%
Conduct Focus Groups With Graduating Students 15 6.3%
Conduct Focus Groups With Recruiters 12 5.0%
Other 15 6.3%
Total 240 100.0%
Note: “Other” includes such responses as “Alumni focus groups by department”, “NSSE/focus group with current students”, and “Focus groups with alumni.”
Table 4.12: Indirect Measures used to satisfy the Assurance of Learning Standards
(Percentage of Institutions)
Indirect Measures (Percentage of Institutions) n Percent
Survey of Graduating Students 44 57.1%
Evaluation By Supervisors Of Student Interns 37 48.1%
Survey Job Placement Of Graduating Students 30 39.0%
Survey of Alumni 26 33.8%
Evaluate Students’ Performance In Licensing Exams 21 27.3%
Conduct Exit Interviews With Graduating Students 20 30.0%
Survey Employers Of Alumni 20 30.0%
Conduct Focus Groups With Graduating Students 15 19.5%
Conduct Focus Groups With Recruiters 12 15.6%
Other 15 19.5%
Note: “Other” includes such responses as “Alumni focus groups by department”, “NSSE/focus group with current students”, and “Focus groups with alumni.”
Research Question 4
Research Question 4 asked, “What improvements in student learning have resulted from
these indirect measures?” Following the survey question asking respondents to identify the most
commonly used indirect measures to satisfy AoL standards, they were asked to identify the
improvements in student learning that have resulted from the indirect measures.
The survey results indicated (as illustrated by the cumulative totals in Table 4.13 &
percentage of institutions in Table 4.14) that the most common improvements in student learning
from indirect measures are Minor Modifications to the Curriculum (18.9% of the cumulative
total & 40.3% of institutions), Greater use of out-of-the classroom learning experiences (e.g.
internships) (15.3% of cumulative total & 32.5% of institutions) and New or Modified Courses
(14% of the cumulative total & 29.9% of institutions). The least reported improvements were
Modifications to Teaching Styles (2.4% of cumulative total & 5.2% of institutions),
Modifications to Grading Methods (2.4% of cumulative total & 5.2% of institutions), and New
Admission Standards (3.1% of cumulative total & 6.5% of institutions).
It is useful to note that some of the “Other” responses included improvements such as,
“Greater emphasis on oral and written communication being examined”, “A new course was
added especially for freshmen business majors as the result of focus group discussions”, and
“maybe better survey instruments.”
Table 4.13: Improvements in Student Learning that have resulted from the Indirect
Measures (Cumulative Totals)
Improvements in Student Learning that have resulted
from the Indirect Measures (Cumulative Totals)
Frequency Percent
Minor Modifications To The Curriculum 31 18.9%
Greater use of out-of-the classroom learning experiences (e.g. internships) 25 15.3%
New or Modified Courses 23 14.0%
Modifications To Student Learning Objectives 20 12.2%
Closer Coordination Of Multi-section Courses 14 8.5%
Modifications To Teaching Methods 11 6.7%
Major Modifications To The Curriculum 6 3.7%
New Admission Standards 5 3.1%
Modifications To Grading Methods 4 2.4%
Modifications To Teaching Styles 4 2.4%
Other 21 12.8%
Total 164 100.0%
Note: “Other” includes such responses as “Additions to curriculum”, “Better Survey Instruments”, and “Greater emphasis on oral and written communication being examined.”
Table 4.14: Improvements in Student Learning that have resulted from the Indirect
Measures (Percentage of Institutions)
Improvements in Student Learning that have resulted
from the Indirect Measures (Percentage of Institutions)
n Percent
Minor Modifications To The Curriculum 31 40.3%
Greater use of out-of-the classroom learning experiences (e.g. internships) 25 32.5%
New or Modified Courses 23 29.9%
Modifications To Student Learning Objectives 20 26.0%
Closer Coordination Of Multi-section Courses 14 18.2%
Modifications To Teaching Methods 11 14.3%
Major Modifications To The Curriculum 6 7.8%
New Admission Standards 5 6.5%
Modifications To Grading Methods 4 5.2%
Modifications To Teaching Styles 4 5.2%
Other 21 27.3%
Note: “Other” includes such responses as “Additions to curriculum”, “Better Survey Instruments”, and “Greater emphasis on oral and written communication being examined.”
Comments Regarding Specific Improvements in Student Learning
The respondents were offered a chance to provide open-ended comments regarding specific improvements in student learning that have resulted from the indirect measures. Two primary themes emerged from these comments: alumni connections and networking through the use of indirect measures are very meaningful, & there is a perception that indirect measures are not fully utilized.
The first theme that became evident through the comments was that Alumni connections
and Networking through the use of indirect measures are very meaningful. A meaningful
comment regarding this theme stated:
Feedback from alumni and advisory board has lead to an ongoing examination of
experiential learning and communication (oral and written) abilities of the students. As
this process is recently underway, no changes have yet been made, but some are expected
at the conclusion of the process.
Another respondent stated, “Our internship program is new and helping students with hands on
job experience and networking.” The following comment also added to the importance of
networking, “Alumni consistently mention skills as the number one thing to be emphasized:
presentation, communication, and teamwork skills. This has resulted in a substantial shift in our
teaching methods that require many team based projects with presentations.”
The other theme that emerged was the perception that indirect measures are not fully
utilized. For example, one response stated, “we haven't used much in the way of indirect measures.” Another respondent commented that they have, “No real use of indirect measures”
while another respondent stated, “We have indirect measures available to us, but we do not use
them.” There also was a respondent who was on the fence, stating, “None of these are in use
now, but some are being considered. Many of these are used in our overall accreditation
stakeholder section, but not in AoL.”
Research Question 5
Research question 5 asked, “Who is in charge of the assessment programs at AACSB-
Accredited Business Schools?” The survey question was slightly modified to help the
respondents understand the scope of the question; the text, “Who is the individual most directly
responsible for helping develop, coordinate, and report your AACSB assessment activities and
results” was used to capture information relating to research question 5.
The results show (see Table 4.15 and Table 4.16) that an Associate Dean who also has other Administrative Duties was far and away the most common response (34% of
cumulative total and 41.6% of institutions), followed by an Assessment Committee (21.3% of
cumulative total and 26% of institutions), a Faculty Member who is given Release Time (12.8%
of cumulative total and 15.6% of institutions) and a Fulltime Assessment Coordinator (10.6% of
cumulative total and 13% of institutions). In a result that mirrors previous studies, the least
popular answer was the Dean (3.2% of cumulative total and 3.9% of institutions).
It is important to note that “Other” (8.5% of cumulative total and 10.4% of institutions)
included such titles as, “Part-time staff administrator”, “Part-time Assessment Coordinator
(60%)”, “Staff member”, “Assurance of Learning Manager”, “Key administrator and core
management committee”, and “Deputy Dean External and International.” These titles illustrate
the notion that assessment in accreditation is a vital part of AACSB institutions and newly
created job titles have appeared as a result.
Table 4.15: Individual Most Directly Responsible for Helping Develop, Coordinate, and
Report Your AACSB Assessment Activities and Results (Cumulative Totals)
Individual Most Directly Responsible (Cumulative Totals)
Frequency Percent
An Associate Dean Who Also Has Other Administrative Duties 32 34.0%
An Assessment Committee 20 21.3%
A Faculty Member Who Is Given Release Time 12 12.8%
A Fulltime Assessment Coordinator 10 10.6%
A Faculty Member Who Is Not Given Release Time 9 9.6%
The Dean 3 3.2%
Other 8 8.5%
Total 94 100.0%
Note: “Other” includes such responses as “Part-time Assessment Coordinator”, “Assurance of Learning Manager”, “Deputy Dean External and International”, and “Staff Member.”
Table 4.16: Individual Most Directly Responsible for Helping Develop, Coordinate, and
Report Your AACSB Assessment Activities and Results (Percentage of Institutions)
Individual Most Directly Responsible (Percentage of Institutions)
n Percent
An Associate Dean Who Also Has Other Administrative Duties 32 41.6%
An Assessment Committee 20 26.0%
A Faculty Member Who Is Given Release Time 12 15.6%
A Fulltime Assessment Coordinator 10 13.0%
A Faculty Member Who Is Not Given Release Time 9 11.7%
The Dean 3 3.9%
Other 8 10.4%
Note: “Other” includes such responses as “Part-time Assessment Coordinator”, “Assurance of Learning Manager”, “Deputy Dean External and International”, and “Staff Member.”
Research Question 6
Research question 6 asked, “What financial costs are associated with assessing the Assurance of Learning requirements of AACSB-Accredited Business Schools?” Financial costs are defined as the costs of attending assessment workshops, release time for faculty, time spent in assessment committee meetings, time required for computer programming, software costs, and so on; the definition of these financial costs was directly transcribed (with permission) from Pringle & Michel’s (2007) study about assessment practices in AACSB-accredited business schools. The following options were listed as possible answers: less than $1,000, $1,001-$5,000, $5,001-$10,000, $10,001-$15,000, $15,001-$20,000, $20,001-$25,000, $25,001-$30,000, and more than $30,000.
These numerical ranges were established by the researcher based on previous studies on assessment practices in AACSB-accredited business schools (Martell, 2007; Pringle & Michel, 2007; Kelley et al., 2010). The options in this question sought to identify a trend regarding the rising amount of financial resources dedicated to assessment activities; prior studies have shown that AACSB-accredited business schools were spending an increasing amount of funds on their assessment activities in order to meet the AoL requirements (Martell, 2007; Pringle & Michel, 2007; Kelley et al., 2010).
Prior to investigating the total financial support that an institution provides to conduct
assessment programs, it is important to review the amount of workforce time that is dedicated to
meeting AoL standards in both the years leading up to the accreditation visit and the year
immediately prior to the visit.
In the years leading up to the accreditation visit, a large majority of the respondents (75.3%; see Table 4.17) stated that 1-25% of their time was devoted to meeting AoL standards. The next highest response was 26-50% (14.3%), followed by 0% (5.2%) and 76-100% (3.9%). The lowest reported response was 51-75% (1.3%).
For the year prior to the accreditation visit, the most popular answer was again 1-25%
(57.1%); however, it did not represent the vast majority. The next most popular answer was 26-
50% (23.4%), followed by 0% (9.1%) and 51-75% (7.8%). The least common response was 76-
100% (2.6%). These results indicate that while AoL standards are viewed as an important
component of AACSB accreditation, there are many other elements of accreditation that need
attention.
Table 4.17: Percentage of Time Devoted to Meeting AoL Standards in Years Leading Up
To Accreditation Visit
Percentage of Time Frequency Percent
0% 4 5.2%
1-25% 58 75.3%
26-50% 11 14.3%
51-75% 1 1.3%
76-100% 3 3.9%
Total 77 100.0%
Table 4.18: Percentage of Time Devoted to Meeting AoL Standards in Year before the
Accreditation Visit
Percentage of Time Frequency Percent
0% 7 9.1%
1-25% 44 57.1%
26-50% 18 23.4%
51-75% 6 7.8%
76-100% 2 2.6%
Total 77 100.0%
Table 4.19 reports the level of financial support that institutions provide in order to
conduct assessment programs. The results are somewhat in line with recent trends that more
financial support is given to assessment programs; however, the largest response was an
assessment budget of $1,001-$5,000 (19.5%), followed by less than $1,000 (15.6%) and more
than $30,000 (15.6%). It is interesting to note that while a majority (54.6%) of the respondents had student enrollments of less than 1,000 students, the majority of responses (54.6%) indicated a budget of $10,001 or more to conduct assessment programs. This data indicates that there is a growing trend of institutions dedicating more financial resources to assessment programs.
Table 4.19: Rough Estimate of Financial Support to Conduct Assessment Programs
Rough Estimate Frequency Percent
Less Than $1,000 12 15.6%
$1,001-$5,000 15 19.5%
$5,001-$10,000 8 10.4%
$10,001-$15,000 10 13.0%
$15,001-$20,000 6 7.8%
$20,001-$25,000 10 13.0%
$25,001-$30,000 4 5.2%
More than $30,000 12 15.6%
Total 77 100.0%
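As a quick arithmetic check on the majority claim above, the sketch below (again illustrative Python, not part of the original analysis) aggregates the five budget brackets above $10,000 from Table 4.19, assuming the 77 responses shown in the table.

    # Illustrative check of the majority reported to budget more than $10,000,
    # using the frequencies from Table 4.19 (77 total responses assumed).

    TOTAL_RESPONSES = 77

    over_10k_brackets = {
        "$10,001-$15,000": 10,
        "$15,001-$20,000": 6,
        "$20,001-$25,000": 10,
        "$25,001-$30,000": 4,
        "More than $30,000": 12,
    }

    n_over = sum(over_10k_brackets.values())   # 42 institutions
    share = 100 * n_over / TOTAL_RESPONSES     # 42/77 = 54.5%
    print(f"{n_over} of {TOTAL_RESPONSES} institutions ({share:.1f}%) "
          "report budgets above $10,000")

    # Summing the rounded row percentages (13.0 + 7.8 + 13.0 + 5.2 + 15.6)
    # gives the 54.6% cited in the text; the unrounded share is 42/77 = 54.5%.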
The survey also contained a question asking the respondents to indicate how their department/school allocates the financial support for assessing student learning. The following options were directly transcribed (with permission) from Kelley et al.’s (2010) study about assessment practices in AACSB-accredited business schools: faculty training, books and material, instruments (i.e. standardized), questionnaire development, and other (please specify). The researcher asked the respondents to specify their “Other” answers in order to gain more knowledge about how AACSB-accredited business schools are allocating their financial support for assessment. The data from this question further helps to identify the documented trend of the rising costs of assessment in AACSB-accredited business schools (Martell, 2007; Pringle & Michel, 2007; Kelley et al., 2010).
Faculty training was the first option listed for the allocation of financial support. Table 4.20 shows that the two most popular choices were 1-25% (34.2%) and 0% (28.9%), and the least popular choice was 51-75% (4%). This data demonstrates that a majority of the institutions do not devote a large amount of financial support to faculty training.
Table 4.20: Allocation of Financial Support that is used to conduct Assessment Programs –
Faculty Training
Allocation of Financial Support – Faculty Training Frequency Percent
0% 22 28.9%
1-25% 26 34.2%
26-50% 15 19.7%
51-75% 3 4.0%
76-100% 10 13.2%
Total 76 100.0%
The next option listed on the survey was books and materials; according to Table 4.21,
the vast majority of institutions (68.8%) selected 0% financial support for books and materials.
The next most popular response was 1-25% (28.6%), followed by 26-50% (2.6%). Two choices (51-75% and 76-100%) did not garner a single response from those surveyed. This data illustrates the notion that institutions allocate little or no financial support towards books and materials.
Table 4.21: Allocation of Financial Support that is used to conduct Assessment Programs –
Books and Materials
Allocation of Financial Support – Books and Materials Frequency Percent
0% 53 68.8%
1-25% 22 28.6%
26-50% 2 2.6%
51-75% 0 0%
76-100% 0 0%
Total 77 100.0%
Instruments (i.e. Standardized) was the next option listed on the survey; Table 4.22 shows that the most popular choice was 0% (54.5%), followed by 1-25% (20.8%) and 26-50% (18.2%). The two least popular choices were 51-75% (1.3%) and 76-100% (5.2%). Although a majority (54.5%) of the respondents do not devote any financial resources to instruments (i.e. standardized), those that do typically allocate between 1% and 50% of their support.
Table 4.22: Allocation of Financial Support that is used to conduct Assessment Programs –
Instruments (i.e. Standardized)
Allocation of Financial Support – Instruments (i.e.
Standardized)
Frequency Percent
0% 42 54.5%
1-25% 16 20.8%
26-50% 14 18.2%
51-75% 1 1.3%
76-100% 4 5.2%
Total 77 100.0%
Questionnaire Development garnered results similar to those for books and materials. According to Table 4.23, the vast majority of institutions (72.7%) selected 0% financial support for Questionnaire Development. The next most popular response was 1-25% (23.4%), followed by 26-50% (3.9%). Two choices (51-75% and 76-100%) did not garner a single response from those surveyed. As with books and materials, this data illustrates the notion that institutions allocate little or no financial support towards Questionnaire Development.
Table 4.23: Allocation of Financial Support that is used to conduct Assessment Programs –
Questionnaire Development
Allocation of Financial Support – Questionnaire
Development
Frequency Percent
0% 56 72.7%
1-25% 18 23.4%
26-50% 3 3.9%
51-75% 0 0%
76-100% 0 0%
Total 77 100.0%
The final option listed on the survey instrument was Other, and the data yielded some interesting results. Table 4.24 shows that the two most popular choices were either 76-100% (34.2%) or 0% (30.3%). The next most popular choice was 26-50% (19.7%), followed by 1-25% (10.5%). The fewest responses belonged to 51-75% (5.3%). The data indicates that AACSB institutions may be split between solely using the traditional methods of assessment and implementing new or innovative methods of assessment. The responses that were listed under this Other option are discussed in the next section.
Table 4.24: Allocation of Financial Support that is used to conduct Assessment Programs –
Other
Allocation of Financial Support – Other Frequency Percent
0% 23 30.3%
1-25% 8 10.5%
26-50% 15 19.7%
51-75% 4 5.3%
76-100% 26 34.2%
Total 76 100.0%
Comments regarding Other options for assessment methods
The respondents were able to list in specific detail what other financial support options they have utilized in order to conduct assessment programs. A few themes emerged from analyzing this data, such as the fact that a majority of responses were geared towards faculty/staff training via AACSB conferences/workshops. Another evident theme was that substantial financial support was dedicated towards release time (or staff time). Finally, another theme was that financial support was given to staff members who specialized in accreditation (i.e. an Accreditation Coordinator position).
A main theme that emerged from these comments was that a majority of the respondents
dedicated financial resources towards faculty/staff training via AACSB conferences/workshops.
A few of the statements regarding this theme included, “conference/workshop attendance”,
“AACSBS courses, conferences, expenses of measurement: professors payment, technicians
payment”, “Seminars, Conferences”, “Conferences”, and “professional development and
AACSB engagement.”
Release time for faculty/staff to conduct assessment was also heavily mentioned within
the comments. For example, the following comments were noted: “Release time”, “Faculty time
to do the assessment and food for faculty”, “course release/remuneration”, and “staff time.”
Finally, the last theme that emerged was that financial support is given to staff members
who specialize in accreditation. Statements such as, “Accreditation coordinator”, “special pay
for assessors”, “Dedicated staff”, and “stipend for grading assessments” all lend evidence to this
theme. This theme also echoes the results from Research Question 5 (regarding the identification of the person in charge of assessment) because a significant number of respondents indicated that the person responsible was one solely dedicated to assessment.
Summary
This chapter detailed the results of the survey administered to the AACSB-accredited
institutions and the initial findings from the analysis of data. Amongst the demographic analysis
of the survey, some key findings indicated that a majority (77.9%) of respondents were from a
public institution (vs. private) and that the job titles of respondents were extremely varied across
four distinct options.
Research Question 1 sought to identify the most frequently used direct measures within
the Assurance of Learning requirements of AACSB-Accredited Business Schools. The survey
results showed that Written Assignments Graded With Rubric, Oral Assignments Graded With
Rubric, & Course Embedded Assignment With Rubric were the most frequently used direct
measures to satisfy the Assurance of Learning Standards.
Research Question 2 looked at the improvements in student learning that resulted from these direct measures. The survey results indicated that New or Modified Courses, Minor
Modifications to the Curriculum, and Modifications to Teaching Methods were the most
commonly reported improvements in student learning that have resulted from the direct
measures. In addition to these results, two themes emerged from the respondents regarding
specific improvements in student learning that have resulted from the direct measures:
improvements in student learning are viewed as a continual and ongoing process & ethics is a
subject of increased importance amongst student learning.
Research Question 3 sought to identify the most frequently used indirect measures within
the Assurance of Learning requirements of AACSB-Accredited Business Schools. The survey
results showed that Survey of Graduating Students, Evaluation by Supervisors of Student Interns, and Survey Job Placement of Graduating Students were the most frequently used indirect measures to satisfy the Assurance of Learning Standards.
Research Question 4 looked at the improvements in student learning that resulted from
these indirect measures. The survey results indicated that Minor Modifications To The
Curriculum, Greater use of out-of-the classroom learning experiences (e.g. internships), and New
or Modified Courses were the most commonly reported improvements in student learning that
have resulted from the indirect measures. In addition to these results, two themes emerged from
the respondents regarding specific improvements in student learning that have resulted from the
indirect measures: Alumni connections and Networking through the use of indirect measures are
very meaningful & there is a perception that indirect measures are not fully utilized.
Research Question 5 sought to identify the individual in charge of the assessment
programs at AACSB-Accredited Business Schools. The survey found that an Associate Dean
who also has other Administrative Duties was the most popular answer, followed by an
Assessment Committee and a Faculty Member who is given release time.
Research Question 6 looked to evaluate the financial costs that are associated with
assessing the Assurance of Learning requirements of AACSB-Accredited Business Schools. The
respondents were also asked to disclose the amount of time that was devoted to meeting AoL
standards in both the years leading up to an accreditation visit and the year prior to an
accreditation visit. A majority of respondents devoted between 1-25% of their time both in the
years leading up to an accreditation visit and in the year prior to an accreditation visit. In terms
of the level of financial support that AACSB-Accredited institutions provide in order to conduct assessment programs, the survey results are in line with recent trends showing that institutions are devoting more financial resources to assessment programs. For example, the majority of respondents stated that they had a budget of more than $10,000 in order to conduct assessment programs.
The respondents also indicated how their institutions allocated their financial support in order to assess student learning via five different outlays: faculty training, books and material, instruments (i.e. standardized), questionnaire development, and other (please specify). The
greatest allocation of funds was towards Faculty Training & Other options; three themes
emerged from the Other options portion of the survey. These themes included a dedication of
financial support towards faculty/staff training (via AACSB conferences/workshops), release
time (or staff time), and staff members who specialized in accreditation (i.e. Accreditation
Coordinator position).
CHAPTER FIVE: DISCUSSION
The focus of the study was to identify the most commonly used direct and indirect
measures within the Assurance of Learning requirements of AACSB-Accredited Business
Schools and to examine the effectiveness of these measures in improving student learning
outcomes. The study also sought to identify the specific resources and support that are required
to maintain the prestigious AACSB-accreditation standard (Pringle & Michel, 2007). In the
following section, each finding is presented by research question and discussed along with
relevant past literature. Implications of the findings, limitations, and future research will also be
presented in this chapter.
Research Question 1
The survey found that the results relating to Research Question 1 were supported by the
literature from previous studies (Martell, 2007; Pringle & Michel 2007; Kelley et al. 2010;
Wheeling et al. 2015). An important observation is that the three most commonly used direct
measures (Written Assignments Graded With Rubric, Oral Assignments Graded With Rubric, &
Course Embedded Assignment With Rubric) derived from the study mirrored those results from
the most recent studies involving the use of direct measures (Kelley et al. 2010; Wheeling et al.
2015). These results indicate that AACSB-Accredited Business Schools continue to utilize these
specific direct measures (within the AoL requirements) and have not identified any new direct
measures to use in place of these established measures. It may be concluded that these three
direct measures are the preferred options used by most AACSB-Accredited Business Schools.
Another conclusion based on the results of the survey is that these three direct measures may
hold the best cost-benefit value when schools must decide which direct measures they will use
within the AoL requirements. Lastly, the results indicate that business schools may view the track record of these direct measures positively and have therefore continued to rely on them as the most popular direct measure options.
Research Question 2
The study also found that the results relating to Research Question 2 (that two of the most commonly reported improvements in student learning as a result of the direct measures were Minor Modifications to the Curriculum and Modifications to Teaching Methods) were in line
with results from previous studies (Pringle & Michel 2007; Kelley et al. 2010). This finding
could be attributed to the fact that the most commonly used direct measures found in this study
were similar to those in previous studies, yielding similar improvements in student learning
(Kelley et al. 2010; Wheeling et al. 2015).
One of the themes (improvements in student learning are viewed as a continual and
ongoing process) that emerged from the data on specific improvements in student learning due to
direct measures was consistent with the Wheeling et al. (2015) study that stated, “In terms of
integrating information into program enhancements, they (faculty) recognized the importance of
providing viable curriculum offerings and retaining a focus on continuous improvement as well
as the need to close the loop” (p. 48). In addition, the other theme (ethics is a subject of increased importance amongst student learning) of this study that focused on Research Question 2 built upon a trending topic found within the Wheeling et al. (2015) study that, “Of these (most-assessed skills of students) professional integrity and ethics was reported by more schools than in previous research, suggesting greater emphasis of this topic in business curricula” (p. 47).
Research Question 3
The results of the survey relating to Research Question 3 were also found to be similar to
literature from previous studies (Martell, 2007; Pringle & Michel 2007; Kelley et al. 2010;
Wheeling et al., 2015). The most commonly reported indirect measures (Survey of Graduating Students, Evaluation by Supervisors of Student Interns, and Survey Job Placement of Graduating Students) might be viewed as predictable results because all of these options were listed as suggested indirect measures by the AACSB (to assess AoL standards) (Kelley et al., 2010). It may be concluded that business schools had already been using these
popular indirect measures and viewed them as cost-effective enough to continue to utilize them
along with their selected direct measures of assessment.
In addition, several of the most commonly reported indirect measures were also found to have a connection with either current/future alumni (Survey of Graduating Students, Survey of Alumni) or employers (Evaluation by Supervisors of Student Interns, Survey Employers of Alumni). Both are important groups for business schools to
establish a meaningful relationship/partnership with in order to ensure that their students are
graduating with meaningful skills (Kelley et al., 2010). The use of these indirect measures
indicates that the business schools have recognized the importance of these groups in support of
meeting the AoL standards.
Research Question 4
The survey results also found that Minor Modifications To The Curriculum, Greater use
of out-of-the classroom learning experiences (e.g. internships), and New or Modified Courses
were the most commonly reported improvements in student learning that have resulted from the
indirect measures. Of these three results, only Minor Modifications To The Curriculum was also
identified as a popular choice in the previous literature (Martell, 2007; Pringle & Michel 2007;
Kelley et al., 2010; Wheeling et al., 2015). This suggests that both Greater use of out-of-the classroom learning experiences (e.g. internships) and New or Modified Courses may be viewed as emergent improvements that result from indirect measures. A possible reason for the emergence of these improvements is that assessment requirements are evolving within the current education landscape (Wheeling et al., 2015).
The two themes found within the results of the survey are noteworthy because they are
not highlighted in the previous literature. In particular, the comments listed by respondents
indicated a shift in the direction of placing an increasing importance on alumni connections and
networking in order to advance student learning outcomes. It can be concluded that the greater use of out-of-the classroom learning experiences (e.g. internships) as a reported improvement may be directly related to this shift amongst institutions. The other theme emphasizes the notion that although indirect measures are explicitly listed by the AACSB in order to assess AoL standards, there exists a community of institutions who have not fully embraced the use of indirect measures within their assessment practices (Kelley et al., 2010).
Research Question 5
The survey findings regarding the identification of the individual in charge of the
assessment program at AACSB-Accredited Business Schools were consistent with those of the
previous literature (Pringle & Michel, 2007). It is interesting to note that the order of the results (Associate Dean who also has other Administrative Duties was the most popular answer, followed by an Assessment Committee and a Faculty Member who is given release time) was identical to the order of frequency found in the Pringle & Michel (2007) study that sought to identify the individual or group most directly responsible for helping develop, coordinate, and report AACSB assessment activities.
The takeaway from these findings is that there is consistency in identifying the individual
in charge of assessment programs at AACSB-Accredited Business Schools. Stakeholders for the
business schools would be able to easily identify and contact the individual in charge of
assessment programs by their title. In addition, it is evident that the business schools place a
high value on their assessment programs because they are entrusting them to an individual in a
leadership position (Associate Dean who also has other Administrative Duties) to oversee them.
Research Question 6
The range of available answers for this question was expanded from those in previous
studies in order to verify the documented trend of the rising costs of assessment in AACSB-
Accredited schools (Martell, 2007; Pringle & Michel, 2007; Kelley et al., 2010). The data shows that a majority (54.6%) of respondents spent $10,001 or more on assessment
activities. This result is in line with the recent literature that has identified the notion that more
financial support is given to assessment programs in order to meet the AoL requirements
(Martell, 2007; Pringle & Michel, 2007; Kelley et al., 2010). A significant component of these findings is that while a majority (54.6%) of the respondents had student enrollments of less than 1,000 students, the majority of responses (54.6%) indicated a budget of $10,001 or more to conduct assessment programs. The high dollar-to-student spending ratio supports the notion that
a significant amount of financial support is being used to fund assessment programs.
The supporting survey questions focused on the allocation of financial support that is
used to conduct assessment programs. These results are consistent with those found in the
Kelley et al. (2010) study; most financial support was allocated to faculty training (10
institutions devoted between 76-100% of their financial resources) and other types of support (26
institutions devoted between 76-100% of their financial resources). With regard to the other
types of supports, three themes emerged from the responses: faculty/staff training via specific
AACSB conferences/workshops was heavily cited as a source of financial allocation, release
time/staff time consumed financial resources, & staff members who specialized in accreditation were also mentioned as a source of financial allocation. These themes are also consistent with
those mentioned in the Kelley et al. (2010) study regarding other types of support.
A noteworthy observation is that a majority of institutions do not devote any financial support to Questionnaire Development (72.7%), Books and Materials (68.8%), & Instruments (54.5%). These results suggest that these three spending categories may be viewed as lower priorities for assessment in the current climate.
All in all, the assessment activities of AACSB-Accredited schools have grown to a scale where more financial resources are required to operate them in order to meet AoL standards. The results from this survey show that a majority of institutions are devoting a large amount of money to assessment activities and are following the trend of rising assessment costs. Business schools must be cognizant of this trend and factor it into any future budget discussions within their leadership. A failure to recognize the trend of rising costs could result in a school not having enough financial resources to conduct its assessment activities at the level required by the AACSB.
Implications
The implications from this study provide evidence on the specific measures of assessing
student learning outcomes and the improvements in these learning outcomes as a result of the
AoL standards implemented by the AACSB in 2003. The institutions that responded to this
survey have had over 10 years to adjust their curriculum and assessment programs in order to
comply with the new AoL standards. The implications allow for AACSB-Accredited schools to
reflect upon their own curriculum and assessment programs and compare them to the results of
this study.
The three most commonly used direct measures (Written Assignments Graded With
Rubric, Oral Assignments Graded With Rubric, & Course Embedded Assignment With Rubric)
found in this study have been consistent with the previous literature. These results were not surprising due to the heavy reliance on group projects and presentations within business coursework.
Institutions have realized the importance of being accountable for producing graduates
who have earned a quality business education. Not only do the survey results show that these
institutions recognize that improvements in student learning outcomes are a continual and
ongoing process, they have also emphasized ethics as an important portion of a student’s
business education training.
Indirect measures in the form of surveying graduating students, alumni, & employers of alumni were found to be amongst the most commonly used by the institutions. It is important for schools to maintain a strong relationship with their soon-to-be alums (i.e. graduating seniors) and alumni base in order to advance student-learning outcomes. Institutions that have not yet placed a high importance on reaching out to their alumni base should plan on allocating more resources to this increasingly important group.
The Associate Dean and Assessment Committee are viewed as those individuals most
likely in charge of the assessment program at AACSB-Accredited schools. Although
respondents were not asked to identify the members of their assessment committee, it is assumed
that faculty members would be participants in the committee due to the requirement by the
AACSB to have faculty participate in the assessment program (Kelley et al., 2010). In addition, it
would be helpful if AACSB-Accredited schools identified these individuals on their
website/marketing materials to provide transparency & accountability to the general public.
The financial resources dedicated to assessment practices are growing. The study found
that the majority of respondents (54.6%) were devoting more than $10,001 to assessment
activities; in fact, 26 institutions (33.8%) had stated that they were allocating more than $20,000
to conduct assessment programs. These figures suggest that institutions that aspire to become an
AACSB-Accredited institution will need to be aware of allocating a significant amount of their
budget to assessment activities. These aspiring institutions should also be aware that a majority
of funding may specifically go towards the area of faculty training and staff release time.
The results relating to financial resources also emphasize the importance of forecasting a
budget for assessment within AACSB-Accredited institutions to ensure that they are meeting the
standards of accreditation. If an institution fails to correctly forecast the financial resources
needed to conduct their assessment activities, it is very likely that they will not be able to garner
the results needed to meet the AoL requirements. The bottom line is that every AACSB-
Accredited institution needs to have a significant budget line that is solely dedicated to
conducting assessment activities.
Limitations
The small sample size of AACSB-Accredited Business schools that responded to the survey posed a threat to the validity and generalizability of the findings. In addition to the small sample size, selection bias may have occurred because the survey was only sent to the contacts listed on an AACSB-provided spreadsheet.
Areas of Future Research
In order to expand upon these findings, it is strongly suggested that a larger sample size be considered for any future studies. If a larger sample size were obtained, the researcher would be able to break down the results geographically.
In addition, researchers could pursue a qualitative or mixed methods study in order to
obtain even greater insight into the most commonly used direct/indirect measures and the
improvements in student learning that have resulted from these measures.
The financial resource budget/allocation breakdown of institutions is an important
component of assessment activities within AACSB-Accredited schools. An in-depth exploration
of the specific decisions that these schools must make with regard to their assessment budget
would contribute greatly to the literature surrounding assessment within AACSB-Accredited
institutions. In addition, the researcher could focus on expanding the number of financial
categories in order to allow for more information to be gathered about this aspect of AACSB-
Accreditation. The expansion of financial categories would allow any current or aspiring
AACSB-Accredited institution insight as to how to properly forecast a budget that is dedicated to
assessment activities.
Conclusion
The purpose of the study was to contribute to the growing body of literature that
examines assessment practices in AACSB-Accredited institutions. Specifically, the
implementation of the AoL standards in 2003 required the schools to adjust their curriculum and
assessment programs to comply with these new requirements. These schools have had over a
decade to align with these standards and this study sought to report key answers to questions
relating to the AoL requirements of AACSB-Accredited Business Schools.
Findings from the study revealed that the most frequently used direct/indirect measures
were consistent with the previous literature on the subject. In addition, the improvements in
student learning that resulted from both direct & indirect measures were also in line with those
found in previous studies. The individual(s) in charge of the assessment program for AACSB-
Accredited schools has not deviated from past trends. Finally, the financial costs associated with
assessing the AoL requirements continue to trend upwards. Although the results did not show
that an overwhelming majority of respondents were devoting significantly large amounts of
financial resources to the AoL requirements, there was evidence that the floor of financial costs
for these institutions has risen above the levels previously cited in past studies. In addition, faculty
training and release time were once again identified as those areas that required the highest
amount of financial support.
The study has offered a brief glimpse into the current state of assessment practices within
AACSB-Accredited institutions. The sample size was limited, and future researchers are encouraged to greatly expand the sample size in order to continue contributing to the limited research examining the ongoing assessment practices of AACSB-Accredited Business schools.
APPENDIX A: EMAIL LETTER TO PARTICIPANTS
Dear Participant,
Thank you for your consideration and time in taking this survey on the most frequently
used direct and indirect measures within the Assurance of Learning (AoL) requirements of
AACSB-Accredited Business Schools. The AoL requirements have been in place for over a
decade and your response will contribute to the literature about the effectiveness of assessment
for AACSB-Accredited Business Schools.
The brief survey will take 5 to 10 minutes to complete and entails a total of 16 questions.
All questions must be answered in order for the survey to be considered in the final analysis. All
responses will remain anonymous and confidential and will be conducted in an ethical manner
that adheres to the integrity of the higher education community.
I appreciate your participation in this process, and it is my intention to contribute this research to the national body of literature on accreditation. Please feel free to contact me if you
have any questions or concerns.
Sincerely,
Kristopher Tesoro, MPA
Doctorate in Education Candidate 2015
USC Rossier School of Education
kktesoro@usc.edu
APPENDIX B: WEB-BASED SURVEY
1. Please select your gender:
a. Male
b. Female
2. I am currently employed at:
a. A private college/university
b. A public college/university
3. I am currently employed at:
a. (Please list name of institution)
4. I am currently considered:
a. Full-time Faculty
b. Part-time Faculty/Adjunct
c. Accreditation Liaison Officer
d. Administrative Staff
e. Dean
f. Unsure
5. How many undergraduate degree programs does your business school offer?
a. None
b. One
c. Two
d. Three
e. Four or More
6. At the undergraduate level, does your business school require a core business curriculum
for the majority of your majors?
a. Yes
b. No
7. How many graduate degree programs does your business school offer?
a. None
b. One
c. Two
d. Three
e. Four or More
8. How Many Students (Fulltime Plus Part-Time) Are Currently Enrolled In Your Largest
Undergraduate Degree Program? (If You Do Not Offer An Undergraduate Program,
Please Answer About Your Largest Graduate Program)
a. ___0 or fewer
b. ___1-100
c. ___101-200
d. ___201-500
e. ___501-1000
f. ___1001-2000
g. ___2001-3000
h. ___More than 3000
9. What percentage of your time is devoted to meeting the assurance of learning standards?
a. Years leading up to accreditation visit- ____%
b. The year before the accreditation visit- ____%
10. Who Is The Individual Most Directly Responsible For Helping Develop, Coordinate, And
Report Your AACSB Assessment Activities And Results?
a. ___A Fulltime Assessment Coordinator
b. ___An Associate Dean Who Also Has Other Administrative Duties
c. ___The Dean
d. ___A Faculty Member Who Is Given Release Time
e. ___A Faculty Member Who Is Not Given Release Time
f. ___An Assessment Committee
g. ___Other (please specify) __________________________
11. Please Provide A Rough Estimate Of The Financial Support that your Department/School
provides each year to conduct Assessment Programs (Include Such Costs As Those Of
Attending Assessment Workshops, Release Time For Faculty, Time Spent In Assessment
Committee Meetings, Time Required For Computer Programming, Software Costs, And
So On)
a. ___Less Than $1,000
b. ___$1,001 - $5,000
c. ___$5,001-$10,000
d. ___$10,001 - $15,000
e. ___$15,001- $20,000
f. ___$20,001 - $25,000
g. ___$25,001 - $30,000
h. ___More than $30,000
12. Please indicate how your department/school allocates the financial support that is used to
conduct assessment programs
a. Faculty Training - ____%
b. Books and Materials - ____%
c. Instruments (i.e. Standardized) - ____%
d. Questionnaire Development - ____%
e. Other (please specify) - ___________%
We would like to learn about the direct measures that you use for assessment. Direct Measures
include anything that allows students to demonstrate the achievement of learning objectives.
Please answer Question 13 and Question 14 regarding only direct measures.
13. Which of the following direct measures does your department/school use to satisfy the
Assurance of Learning in your undergraduate Business Administration degree program
(check all that apply)
a. ___Written Assignments Graded With Rubric
b. ___Oral Assignments Graded With Rubric
c. ___Course Embedded Assignment With Rubric
d. ___Cases Evaluated With Rubric
e. ___ETS Major Field Test
f. ___Common School Exams
g. ___Systematic Evaluation Of Teamwork
h. ___Simulations
i. ___Individually Written Business Plan
j. ___Assessment Center
k. ___Mock Interview
l. ___Psychometric measures
m. ___Other (please specify) _________________________
14. Please indicate any improvements in student learning that have resulted from the direct
measures used by your department/school (check all that apply)
Improvements:
a. ___New or Modified Courses
b. ___Major Modifications To The Curriculum
c. ___Minor Modifications To The Curriculum
d. ___Modifications To Teaching Methods
e. ___Modifications To Teaching Styles
f. ___Modifications To Student Learning Objectives
g. ___Modifications To Grading Methods
h. ___Closer Coordination Of Multi-section Courses
i. ___New Admission Standards
j. ___Greater use of out-of-the classroom learning experience (e.g. internships)
k. ___Other (please specify)_________________________
Please share with us any comments you may have regarding specific improvements.
_________________________________________________________________
_________________________________________________________________
_________________________________________________________________
We would like to learn about the indirect measures that you use for assessment. Indirect
Measures include any measures that are not tied directly to having students demonstrate the
achievement of learning objectives. Please answer Question 14 and Question 15 regarding only
indirect measures.
15. Which of the following indirect measures does your department/school use to satisfy the
Assurance of Learning in your undergraduate Business Administration degree program
(check all that apply)
a. ___Survey Of Graduating Students
b. ___Survey Of Alumni
c. ___Survey Employers Of Alumni
d. ___Conduct Exit Interviews With Graduating Students
e. ___Evaluation By Supervisors Of Student Interns
f. ___Survey Job Placement Of Graduating Students
g. ___Evaluate Students’ Performance In Licensing Exams
h. ___Conduct Focus Groups With Graduating Students
i. ___Conduct Focus Groups With Recruiters
j. ___Other (please specify) _________________________
16. Please indicate any improvements in student learning that have resulted from the indirect
measures used by your department/school (check all that apply)
Improvements:
a. ___New or Modified Courses
b. ___Major Modifications To The Curriculum
c. ___Minor Modifications To The Curriculum
d. ___Modifications To Teaching Methods
e. ___Modifications To Teaching Styles
f. ___Modifications To Student Learning Objectives
g. ___Modifications To Grading Methods
h. ___Closer Coordination Of Multi-section Courses
i. ___New Admission Standards
j. ___Greater use of out-of-the classroom learning experience (e.g. internships)
k. ___Other: (please specify)_________________________
Please share with us any comments you may have regarding specific improvements.
_________________________________________________________________
_________________________________________________________________
_________________________________________________________________
REFERENCES
Adelman, C., & Silver, H. (1990). Accreditation: The American experience. London, England:
Council for National Academic Awards.
American Association for Higher Education. (1992). 9 principles of good practice for assessing
student learning. North Kansas City, MO: American Association for Higher Education.
American Council on Education, Task Force on Accreditation. (2012). Assuring Academic
Quality in the 21st Century: Self-regulation in a New Era. Retrieved from
http://www.acenet.edu/news-room/Documents/Accreditation-TaskForce-revised-
070512.pdf
Association to Advance Collegiate Schools of Business. (2012). Eligibility procedures and
accreditation standards for business accreditation. Retrieved from
http://www.aacsb.edu/~/media/AACSB/Docs/Accreditation/Standards/2012-business-
accreditation-standards-update.ashx
Association to Advance Collegiate Schools of Business. (2014). About the AACSB. Retrieved May 1, 2014, from http://www.aacsb.edu/about/aboutus.asp
Association of American Colleges and Universities. (2007). College Learning for the New Global
Century. Washington, DC: Association of American Colleges and Universities. Retrieved
from: http://www.aacu.org/leap/documents/GlobalCentury_final.pdf
Astin, A.W. (2014, February 18). Accreditation and autonomy. Inside Higher Ed. Retrieved from
http://www.insidehighered.com/views/2014/02/18/accreditation-helps-limit-
governmentintrusion-us-higher-education-essay
Banta, T. W. (1993). Summary and conclusion: Are we making a difference? In T. W. Banta
(Ed.), Making a difference: Outcomes of a decade of assessment in higher education (pp.
357-376). San Francisco, CA: Jossey-Bass.
Barlow, N., Barczykowski, J., Cayetano, R., Dimapindan, B., Kinley, D., May, R., McGovern, J.,
Payroda, D., Richardson, J., Shih, W., Tesoro, K. (2014). Accreditation in Higher
Education: Value Added, Yes or No? Unpublished doctoral dissertation, University of
Southern California, Los Angeles, CA.
Beno, B. A. (2004). The role of student learning outcomes in accreditation quality review. New
Directions for Community College, 236, 65-72.
Bensimon, E. M. (2005). Closing the achievement gap in higher education: An organizational
learning perspective. New Directions for Higher Education, 131, 99-111.
Bitter, M.E., Stryker, J.P. and Jens, W.G. (1999). A preliminary investigation of the choice to
obtain AACSB accounting accreditation. Accounting Educator’s Journal, Volume XI.
Blauch, L. E. (1959). Accreditation in higher education. Washington, DC: United States
Government Printing Office.
Bloland, H. G. (2001). Creating the Council for Higher Education Accreditation (CHEA).
Phoenix, AZ: Oryx Press.
Bloomberg, L. D., & Volpe, M. (2008). Completing your qualitative dissertation: A
roadmap from beginning to end. Thousand Oaks, CA: Sage.
Booth, M. J. (1991). The past, present and future of accreditation. Journal of the American
Association of Nurse Anesthetists, 59(3), 289-293.
Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A
compilation of institutional good practice. Sterling, VA: Stylus.
Brittingham, B. (2009). Accreditation in the United States: How did we get to where we are?
New Directions for Higher Education, 145, 7-27. doi:10.1002/he.331
Brittingham, B. (2012). Higher education, accreditation, and change, change, change: What’s
teacher education to do? In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence,
and excellence: The promise and practice of quality assurance (pp. 59-75). Washington, DC:
Teacher Education Accreditation Council.
Brown, H. (2013, September). Protecting students and taxpayers: The federal government's
failed regulatory approach and steps for reform. American Enterprise Institute, Center
on Higher Education Reform. Retrieved from http://www.aei.org/files/2013/09/27/-
protecting-students-and-taxpayers_164758132385.pdf
Bycio, P., & Allen, J. S. (2007). Factors related to performance on the ETS Major Field
Achievement Test in Business. Journal of Education for Business, 82(4), 196-201.
Cabrera, A. F., Colbeck, C. L., & Terenzini, P. T. (2001). Developing performance indicators for
assessing classroom teaching practices and student learning: The case of engineering.
Research in Higher Education, 42(3), 327-352.
Capon, N. (1996). Planning the Development of Builders, Leaders, and Managers for
21st Century Business: Curriculum Change at Columbia Business School.
Chaffee, E. (2014). Learning metrics: How can we know that students know what they are
supposed to know? Trusteeship, 22(1). Retrieved from
http://agb.org/trusteeship/2014/1/learning-metrics-how-can-we-know-students-know-
what-they-are-supposed-know
Chambers, C. M. (1983). Council on Postsecondary Education. In K. E. Young, C. M. Chambers,
& H. R. Kells (Eds.), Understanding accreditation (pp. 289-314). San Francisco, CA:
Jossey-Bass.
Council for Higher Education Accreditation (2006). CHEA survey of recognized accrediting
organizations: Providing information to the public. Washington, DC: Author.
Council for Higher Education Accreditation. (2012). The CHEA initiative final report.
Washington, DC: Council for Higher Education Accreditation. Retrieved from:
http://www.chea.org/pdf/TheCHEAInitiative_Final_Report8.pdf
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods
approaches. Los Angeles, CA: Sage Publications, Inc.
Daoust, M. P., Wehmeyer, W., & Eubank, E. (2006). Valuing an MBA: Authentic outcome
measurement made easy. Unpublished manuscript. Retrieved from
http://www.momentumbusinessgroup.com/resourcesValuingMBA.pdf
Davenport, C. A. (2000). Recognition chronology. Retrieved from
http://www.aspausa.org/documents/Davenport.pdf
Davis, C. O. (1945). A history of the North Central Association of Colleges and Secondary
Schools 1895-1945. Ann Arbor, MI: The North Central Association of Colleges and
Secondary Schools.
Dickey, F. G., & Miller, J. W. (1972). A current perspective on accreditation. Washington, DC:
American Association for Higher Education.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys:
The tailored design method. Hoboken, NJ: John Wiley & Sons, Inc.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and
collective rationality in organizational fields. American Sociological Review, 48(2), 147-
160.
Dowd, A. C. (2003). From access to outcome equity: Revitalizing the democratic mission of the
community college. Annals of the American Academy of Political and Social Science,
586, 92-119.
Dowd, A. C., & Grant, J. L. (2006). Equity and efficiency of community college appropriations:
The role of local financing. The Review of Higher Education, 29(2), 167-194.
Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher Education,
145, 79-86. doi:10.1002/he.337
Eaton, J. S. (2012). The future of accreditation. Planning for Higher Education, 40(3), 6-7.
Eaton, J.S. (2013a, June 13). Accreditation and the next reauthorization of the Higher Education
Act. Inside Accreditation with the President of CHEA, 9(3). Retrieved from
http://www.chea.org/ia/IA_2013.05.31.html
Eaton, J.S. (2013b, November-December). The changing role of accreditation: Should it matter
to governing boards? Trusteeship. Retrieved from
http://agb.org/trusteeship/2013/11/changing-role-accreditation-should-itmattergoverning-
boards
Educational Testing Service (2014). About ETS Major Field Tests. Retrieved from
https://www.ets.org/mft/about
Eggers, W. (2000, January). The value of accreditation in planning. CHEA Chronicle, 3(1).
Retrieved from http://www.chea.org/Chronicle/vol3/no1/value.html
Ewell, P. T. (1993). The role of states and accreditors in shaping assessment practice. In T. W.
Banta (Ed.), Making a difference: Outcomes of a decade of assessment in higher
education (pp. 339-356). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of departure.
Washington, DC: Council for Higher Education Accreditation. Retrieved from
http://www.chea.org/award/StudentLearningOutcomes2001.pdf
Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta
(Ed.), Building a scholarship of assessment (pp. 3-25). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2005). Can assessment serve accountability? It depends on the question. In J. C.
Burke (Ed.), Achieving accountability in higher education: Balancing public, academic,
and market demands (pp. 78-105). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2008a). Assessment and accountability in America today: Background and context.
New Directions for Institutional Research, 2008(S1), 7–17.
Ewell, P. T. (2008b). U.S. accreditation and the future of quality assurance: A tenth anniversary
report from the Council for Higher Education Accreditation. Washington, DC: Council
for Higher Education Accreditation.
Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension.
Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved from
http://www.learningoutcomeassessment.org/documents/PeterEwell_005.pdf
Ewell, P. T. (2010). Twenty years of quality assurance in higher education: What's happened and
what's different? Quality in Higher Education, 16(2), 173-175.
Francisco, W., Noland, T., & Sinclair, D. (2008, May). AACSB accreditation: Symbol of
excellence or march toward mediocrity? Journal of College Teaching & Learning, 5(5),
25-30.
Fuller, M. B., & Lugg, E. T. (2012). Legal precedents for higher education accreditation.
Journal of Higher Education Management, 27(1). Retrieved from
http://www.aaua.org/images/JHEM_-_Vol_27_Web_Edition_.pdf#page=53
Gaston, P. L. (2014). Higher education accreditation: How it's changing, why it must.
Sterling, VA: Stylus Publishing.
Gordon, R. A. & Howell, J. E. (1959). Higher education for business. New York:
Columbia University Press.
Green, J., Stone, C. C., & Zegeye, A. (2014). The Major Field Test in Business: A solution to
the problem of assurance of learning assessment? Journal of Education for Business,
89(1), 20-26. doi:10.1080/08832323.2012.749206
Hagerty, B. M. K., & Stark, J. S. (1989). Comparing educational accreditation standards in
selected professional fields. The Journal of Higher Education, 60(1), 1-20.
Hart Research Associates. (2009). Learning and assessment: Trends in undergraduate education
(A survey among members of the Association of American Colleges and Universities).
Washington, DC: Author. Retrieved from
https://www.aacu.org/membership/documents/2009MemberSurvey_Part1.pdf
Jaschik, S., & Lederman, D. (2014). The 2014 Inside Higher Ed survey of college & university
presidents. Washington, DC: Inside Higher Ed.
Kelley, C., Tong, P., & Choi, B. (2010). A review of assessment of student learning
programs at AACSB schools: A dean’s perspective. Journal of Education for Business,
85(5), 299-306.
Kuh, G. D., & Ewell, P. T. (2010). The state of learning outcomes assessment in the United
States. Higher Education Management and Policy, 22(1), 1-20.
Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes
assessment in American higher education. Champaign, IL: National Institute for Learning
Outcomes Assessment. Retrieved from
http://www.learningoutcomeassessment.org/documents/niloafullreportfinal2.pdf
Lind, C. J., & McDonald, M. (2003). Creating an assessment culture: A case study of success
and struggles. In S. E. Van Kollenburg (Ed.), A collection of papers on self-study and
institutional improvement, 3: Promoting student learning and effective teaching (pp. 21-
23). (ERIC Document Reproduction Service No. ED 476 673). Retrieved from
http://files.eric.ed.gov/fulltext/ED476673.pdf#page=22
Maki, P. L. (2010). Assessing for learning: Building a sustainable commitment across the
institution (2nd ed.). Sterling, VA: Stylus Publishing.
Martell, K. (2007). Assessing student learning: Are business schools making the grade?
Journal of Education for Business, 82(4), 189-195. Retrieved from
http://libproxy.usc.edu/login?url=http://search.proquest.com.libproxy.usc.edu/docview/62
049345?accountid=14749
Martell, K., & Calderon, T. (Eds.). (2005). Assessment in the disciplines, vol. 1, no. 2:
Assessment of student learning in business schools: Best practices each step of the way.
Tallahassee: Association for Institutional Research, Florida State University.
Miles, J. A. (2012). Jossey-Bass business and management reader: Management and
organization theory. Hoboken, NJ: Wiley.
National Advisory Committee on Institutional Quality and Integrity. (2012). Higher education
accreditation reauthorization policy recommendations. Retrieved from
http://www2.ed.gov/about/bdscomm/list/naciqidir/naciqi_draft_final_report.pdf
Obama, B. (2013a, February 12). State of the Union Address. The White House. Retrieved from
http://www.whitehouse.gov/the-press-office/2013/02/12/president-barack-obamas-
stateunion-address
Obama, B. (2013b, February 12). The President’s Plan for a Strong Middle Class and a Strong
America. The White House. Retrieved from
http://www.whitehouse.gov/sites/default/files/uploads/sotu_2013_blueprint_embargo.pdf
Obama, B. (2013c, August 22). Fact Sheet on the President’s Plan to Make College More
Affordable: A Better Bargain for the Middle Class. The White House. Retrieved from
http://www.whitehouse.gov/the-press-office/2013/08/22/fact-sheet-president-s-planmake-
college-more-affordable-better-bargain
Orlans, H. O. (1974). Private accreditation and public eligibility: Volumes 1 and 2. Retrieved
from ERIC database. (ED097858)
Orlans, H. O. (1975). Private accreditation and public eligibility. Lexington, MA: D.C. Heath
and Company.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing,
and improving assessment in higher education. San Francisco, CA: Jossey-Bass.
Perrault, A. H., Gregory, V. L., & Carey, J. O. (2002). The integration of assessment of student
learning outcomes with teaching effectiveness. Journal of Education for Library and
Information Science, 43(4), 270-282.
Pokharel, A. (2007). Assurance of learning (AoL) methods just have to be good enough.
Journal of Education for Business, 82(4), 241-243. Retrieved from
http://libproxy.usc.edu/login?url=http://search.proquest.com.libproxy.usc.edu/docview/20
2821103?accountid=14749
Pringle, C., & Michel, M. (2007). Assessment practices in AACSB-accredited business
schools. Journal of Education for Business, 202-211.
Procopio, C. H. (2010). Differing administrator, faculty, and staff perceptions of organizational
culture as related to external accreditation. Academic Leadership Journal, 8(2), 1-15.
Ratcliff, J. L. (1996). Assessment, accreditation, and evaluation of higher education in the US.
Quality in Higher Education, 2(1), 5-19.
Rhodes, T. L. (2012). Show me the learning: Value, accreditation, and the quality of the degree.
Planning for Higher Education, 40(3), 36-42.
Roberts, W. A., Jr., Johnson, R., & Groesbeck, J. (2006). The perspective of faculty hired after
AACSB accreditation on accreditation's impact and importance. Academy of Educational
Leadership Journal, 10(3).
Roller, R. H., Andrews, B. K., & Bovee, S. L. (2003). Specialized accreditation of
business schools: A comparison of alternative costs, benefits, and motivations.
Journal of Education for Business, 78(4), 197-205.
Shaw, R. (1993). A backward glance: To a time before there was accreditation. North Central
Association Quarterly, 68(2), 323-335.
Shibley, L. R., & Volkwein, J. F. (2002, June). Comparing the costs and benefits of re-
accreditation processes. Paper presented at the annual meeting of the Association for
Institutional Research, Toronto, Ontario, Canada.
Suskie, L. (2004). Assessing student learning: A common sense guide. San Francisco, CA:
Jossey-Bass.
Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San
Francisco, CA: Jossey-Bass.
Trivett, D. A. (1976). Accreditation and institutional eligibility. Washington, DC: American
Association for Higher Education.
U.S. Department of Education. (2006). A test of leadership: Charting the future of U.S. higher
education. A report of the commission appointed by Secretary of Education Margaret
Spellings. Washington, DC: Author. Retrieved from
http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf
Volkwein, J. F., Lattuca, L. R., Harper, B. J., & Domingo, R. J. (2007). Measuring the impact of
professional accreditation on student experiences and learning outcomes. Research in
Higher Education, 48(2), 251-282.
Weissburg, P. (2008). Shifting alliances in the accreditation of higher education: self-regulatory
organizations. Dissertation Abstracts International, DAI-A 70/02, August 2009.
ProQuest ID 250811630.
Wergin, J. F. (2005). Waking up to the importance of accreditation. Change, 37(3), 35-41.
Wergin, J. F. (2012). Five essential tensions in accreditation. In M. LaCelle-Peterson & D.
Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality
assurance (pp. 27-38). Washington, DC: Teacher Education Accreditation Council.
Western Association of Schools and Colleges. (2009). WASC resource guide for ‘good
practices’ in academic program review. Retrieved from
http://www.wascsenior.org/findit/files/forms/WASC_Program_Review_Resource
_Guide_Sept_2009.pdf
Wheeling, B., Miller, D., & Slocombe, T. (2015). Assessment at AACSB schools: A survey of
deans. Journal of Education for Business, 90(1), 44-49. doi:10.1080/08832323.2014.973824
Wiedman, D. (1992). Effects on academic culture of shifts from oral to written traditions: The
case of university accreditation. Human Organization, 51(4), 398-407.
Wolff, R. A. (2005). Accountability and accreditation: Can reforms match increasing demands?
In J. C. Burke (Ed.), Achieving accountability in higher education: Balancing public,
academic, and market demands (pp. 78-105). San Francisco, CA: Jossey-Bass.
Woolston, P. J. (2012). The costs of institutional accreditation: A study of direct and indirect
costs (Doctoral dissertation). University of Southern California, Los Angeles, CA.
Zammuto, R. (2008). Accreditation and the globalization of business [Electronic
Version]. Academy of Management Learning & Education, 7(2), 256-268.
Abstract
In recent years, higher education has moved into an era focused on accountability and student outcomes at accredited institutions. Specialized program accreditation is viewed as an additional external review that emphasizes field-specific education. However, there is limited research on the assessment practices of AACSB-accredited business schools. This study therefore identifies the most commonly used direct and indirect measures within the Assurance of Learning (AoL) requirements of AACSB-accredited business schools and examines the effectiveness of these measures in improving student learning outcomes. In addition, the study sought to identify the specific resources and support required to maintain the AACSB accreditation standards. This mixed-methods study surveyed the primary Accreditation Liaison Officer (ALO) or dean at 77 AACSB-accredited business schools. The data were collected and analyzed to present an overview of the current state of assessment practices at AACSB-accredited business schools. The findings on the most commonly used direct and indirect measures, and on the improvements in student learning that resulted from them, were consistent with the previous literature on the subject. The individuals in charge of the assessment programs at AACSB-accredited schools have not deviated from past trends. Finally, the financial costs associated with assessing the AoL requirements continue to trend upward.
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
Learning outcomes assessment at American Library Association accredited master's programs in library and information studies
Assessment, accountability & accreditation: a study of MOOC provider perceptions
Priorities and practices: a mixed methods study of journalism accreditation
Perspectives on accreditation and leadership: a case study of an urban city college in jeopardy of losing accreditation
The effects of accreditation on the passing rates of the California bar exam
The costs of institutional accreditation: a study of direct and indirect costs
An evaluation of nursing program administrator perspectives on national nursing education accreditation
The goals of specialized accreditation: A study of perceived benefits and costs of membership with the National Association of Schools of Music
A descriptive analysis focusing on similarities and differences among the U.S. service academies
An exploratory, quantitative study of accreditation actions taken by the Western Association of Schools and Colleges' Accrediting Commission for Community and Junior Colleges Since 2002
States of motivation: examining perceptions of accreditation through the framework of self-determination
The benefits and costs of accreditation of undergraduate medical education programs leading to the MD degree in the United States and its territories
The predictive validity of DIBELS oral reading fluency on the Smarter Balanced Assessment Consortium
Examining the implications of return on investment using the gap analysis framework on the executive master of business administration program at a four year research university
An exploratory study on flipped learning and the use of self-regulation amongst undergraduate engineering students
Institutional student loan cohort default rates by institution type
Examining the use of online storytelling as a motivation for young learners to practice narrative skills
Assessing and addressing random and systematic measurement error in performance indicators of institutional effectiveness in the community college
The efficacy of regional accreditation compared to direct public regulation of post-secondary institutions in the United States
Examining the use of mnemonic devices in instructional practices to improve the reading skills of third grade public school students with learning disabilities
Asset Metadata
Creator
Tesoro, Kristopher
(author)
Core Title
An examination of the direct/indirect measures used in the assessment practices of AACSB-accredited schools
School
Rossier School of Education
Degree
Doctor of Education
Degree Program
Education (Leadership)
Publication Date
09/24/2015
Defense Date
06/02/2015
Publisher
University of Southern California
(original),
University of Southern California. Libraries
(digital)
Tag
AACSB,accreditation,assessment,assurance of learning,direct measures,indirect measures,OAI-PMH Harvest
Format
application/pdf
(imt)
Language
English
Contributor
Electronically uploaded by the author
(provenance)
Advisor
Keim, Robert (
committee chair
), Tobey, Patricia (
committee member
), Woolston, Paul (
committee member
)
Creator Email
kktesoro@usc.edu,ktesoro@gmail.com
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c40-187141
Unique identifier
UC11275521
Identifier
etd-TesoroKris-3950.pdf (filename),usctheses-c40-187141 (legacy record id)
Legacy Identifier
etd-TesoroKris-3950.pdf
Dmrecord
187141
Document Type
Dissertation
Rights
Tesoro, Kristopher
Type
texts
Source
University of Southern California
(contributing entity),
University of Southern California Dissertations and Theses
(collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA