Assessment and Teaching Improvement in Higher Education:
Investigating an Unproven Link
by
Elizabeth Marshall Holcombe
A dissertation presented to the
FACULTY OF THE USC GRADUATE SCHOOL
in partial fulfillment of the requirements for the degree
DOCTOR OF PHILOSOPHY
(URBAN EDUCATION POLICY)
December 2018
TABLE OF CONTENTS
LIST OF TABLES 5
LIST OF FIGURES 6
COMMITTEE MEMBERS 7
DEDICATION 8
ACKNOWLEDGEMENTS 9
ABSTRACT 11
CHAPTER 1: INTRODUCTION 12
Study Focus and Research Questions 17
Significance 18
Organization 20
CHAPTER 2: LITERATURE REVIEW AND THEORETICAL FRAMEWORK 21
What Is Assessment? 22
Brief History of Assessment in Higher Education 24
Beginning of the Modern Era 27
Assessment Today: Contemporary Landscape 30
Differences by Discipline and Institution Type 32
Challenges to Assessment 35
Assessment and Teaching Effectiveness and Improvement 36
Teaching Effectiveness in Higher Education 36
Teaching Improvement Initiatives 38
Assessment and Teaching Improvement 44
Theoretical Framework 44
Systems Theory 45
Individual Level 47
Departmental Level 52
Institutional Level 56
External Level 59
Types of Outcomes Assessment and Measures 60
Summary and Conclusions 62
CHAPTER 3: STUDY DESIGN AND METHODS 63
Overview of Case Study 63
Site Selection Criteria 64
Site Selection and Sample Description 66
Valley University 66
University of the Pines 67
Data Collection 68
Document Analysis 68
Survey 69
Interviews 71
Data Analysis 74
Validity and Trustworthiness 75
Limitations 77
CHAPTER 4: ASSESSMENT AND TEACHING AT THE INSTITUTIONAL LEVEL 79
University of the Pines: Institutional Level 80
Cultural Change 81
Changes to Policies and Structures 86
Changes to Curriculum and Teaching 88
How and Why Change Happened 90
Faculty-Driven Process 91
Role of Institutional Structures 92
Intentional Messaging around Assessment for Improvement 94
Attitude towards Accreditation 96
Support and Training 98
Leadership Support 100
Valley University: Institutional Level 101
Lack of Change to Teaching and Learning Environment 102
How and Why Little Institution-Level Change Occurred 104
Administratively-Driven Process 104
Attitude towards Accreditation 106
Lack of Structural Supports and Training 107
Lack of Leadership Support and Alignment around Assessment for Improvement 109
Summary and Conclusions 110
CHAPTER 5: ASSESSMENT AND TEACHING AT THE DEPARTMENTAL LEVEL 112
University of the Pines Departments 114
UP Social Science 114
UP STEM 119
UP Humanities 126
Valley University Departments 132
VU Humanities 132
VU STEM 139
VU Social Science 147
Summary and Conclusions 151
CHAPTER 6: ASSESSMENT AND TEACHING AT THE INDIVIDUAL LEVEL 154
How Assessment Type Influenced Individuals’ Teaching 154
Formal Assessment/Program Assessment 156
Informal Assessment/Classroom Assessment 160
Themes Associated with Relationship at Individual Level 162
Attitudes towards Assessment 163
Favorable Attitudes 164
Antagonistic Attitudes 167
Neutral or Ambivalent Attitudes 171
Multiple Simultaneous Attitudes 173
Prior Experiences with Teaching and Assessment 174
Career Stage 176
Summary and Conclusions 178
CHAPTER 7: LOOKING ACROSS THE SYSTEM: ALIGNMENT AND TENSIONS 180
Alignment 180
Alignment in Action: Close-Up of Craig 183
Departmental Level 183
Institutional Level 185
Individual Level 186
Tension: Influence of Assessment Type 187
Summary and Conclusions 190
CHAPTER 8: DISCUSSION, IMPLICATIONS, AND CONCLUSIONS 191
Review and Summary of Findings 191
Complex Relationship between Assessment and Teaching Improvement 197
Using Systems Theory to Understand the Connection between Assessment and Teaching 200
Institutional Logics 202
Organizational Cultures 204
Professionalization Theories 208
Assessment as Reflective Practice 210
Cognitive Theories of Belief Change 211
Implications for Policy and Practice 213
Suggestions for Future Research 219
Conclusions and Reflections 222
REFERENCES 225
APPENDICES 249
Appendix A: Approaches to Teaching Inventory (ATI) 249
Appendix B: Demographic Survey Questions Appended to ATI 250
Appendix C: Interview Protocols 252
Appendix D: Code List 258
Appendix E: Participation Request Letters 263
LIST OF TABLES
Table 1: Ewell’s (2008) Two Paradigms of Assessment 30
Table 2: Levels of the Teaching System and Levers that Influence Relationship between Assessment and Teaching 47
Table 3: Interview Sample 73
Table 4: Overview of Changes to Teaching and Learning Environment at Institution Level 79
Table 5: Differences in Institutional Language, Norms, and Values around Teaching and Assessment 103
Table 6: Overview of Changes at Department Level 113
Table 7: Assessment Type and Individual-Level Changes 155
Table 8: Individual Attitudes towards Assessment 164
Table 9: Presence of Change Levers Suggested by Theory 201
LIST OF FIGURES
Figure 1: Types of Institution-Level Change 81
Figure 2: Values, Language, and Norms at University of the Pines 82
Figure 3: Institution-Level Change Levers 90
Figure 4: Department-Level Change Levers 113
Figure 5: Interplay between Department and Institution Levels 153
Figure 6: Individual-Level Change Levers 163
Figure 7: Diagram of the System 197
Dissertation Committee Members
Dr. Adrianna Kezar (chair)
Rossier School of Education
Dr. Estela Mara Bensimon
Rossier School of Education
Dr. Nelson Eugene Bickers
Dornsife College of Arts and Sciences
Dedication
This dissertation is dedicated to the hard-working faculty, staff, and administrators who
graciously agreed to participate in my study. Their willingness to share their time, experiences,
and opinions allowed this research to occur, and I will be forever grateful for their openness.
I would also like to dedicate this work more broadly to two groups of people: faculty and
assessment professionals (who are sometimes one and the same). The work that faculty do to
teach undergraduates is at the heart of what a college education is about. It is difficult, sometimes
thankless, and infrequently rewarded, but it is critically important work. I am honored to
contribute a small bit of knowledge about undergraduate teaching, and I hope that some faculty
ultimately find it useful. Finally, I believe that assessment professionals are some of the most
beleaguered people working in higher education. Just in the last 7 months while I was conducting
this study, prominent op-eds and opinion pieces attacking assessment (and the people who do
assessment work) have appeared in the Chronicle of Higher Education, Inside Higher Ed, and
even the New York Times. I hope the results of this study can help demonstrate assessment’s
role in improving the teaching and learning environment, as well as contribute complexity and
nuance to conversations about assessment and teaching.
Acknowledgements
This work would not have been possible without the support of my friends, family, and
colleagues over the last four years of this PhD journey. First, I’d like to thank my advisor Dr.
Adrianna Kezar for all your support, encouragement, high expectations, and confidence in my
abilities. I have learned so much from you and truly enjoyed our work together. Thank you for
all the opportunities you have given me and for exposing me to such a broad range of projects,
studies, and people over the last four years. You have been an incredible role model and I feel
very fortunate to have had the opportunity to work with you.
Thank you also to my committee members, Dr. Estela Bensimon and Dr. Gene Bickers,
for your insights, advice, and feedback on this study. Dr. Bensimon’s thoughtful, probing
questions always pushed me in exactly the right directions, and Dr. Bickers’ generosity and
openness helped me to pilot parts of this study three years ago. Your contributions have
improved this work and helped me learn and develop as a scholar.
I also want to acknowledge my friends and colleagues in the Urban Education Policy
PhD program for their support since the program began. Thanks especially to the fantastic 2014
cohort! I have learned so much from you in our classes and beyond. Our quals and dissertation
proposal prep meetings helped me feel less alone and more confident as I embarked upon this
often isolating process. Thank you for the coffees, lunches, drinks, camping trips, and words of
encouragement along the way.
I am also grateful for the funding I received from USC’s Graduate School and from the
Rossier School of Education to sponsor this research. The material support has been extremely
helpful and allowed me to both conduct a stronger study and finish it in a more expeditious manner
than I would have otherwise been able to.
I would be remiss if I didn’t acknowledge the role my family has played in helping me
achieve this milestone. Thank you to my sister, Anna, who has been a model of perseverance and
hard work in her own educational journey. Your relentless drive to complete your degree and
certifications has been an inspiration. I’m so proud of you. I am also thankful for my parents,
who raised me with a love of learning and spent so much time nurturing my curiosity, answering
my incessant questions, countering my sometimes obnoxious attempts at debating every issue,
and sacrificing to provide me with an excellent education. You have always believed I could do
anything I wanted to, and you helped me believe it was possible, too. Thank you for giving me
the confidence to always push myself further.
The ones who lived with me over the last four years probably know better than anyone
else the amount of effort, mental gymnastics, and anxiety that went into this dissertation. Matt,
thank you for taking a leap of faith with me when I decided to pursue a PhD. You were willing to
uproot everything and move across the country for my education (twice!), and you’ve never once
questioned my decision or my ability to do this. I am so lucky to have you in my life and on my
team. And finally, thanks to my sweet little pup Rufus, the best dissertation-writing companion
around, for his silliness and for giving me an excuse to get out of the house and take breaks from
writing.
Abstract
Calls for evidence of what students know and are able to do as a result of their college
experiences have grown over the last thirty years. As a result, assessment activity on campuses
has increased dramatically. Campuses often undertake assessment with the hope of both meeting
accountability demands and improving student learning. Underlying these hopes for
improvement is an often implicit assumption that assessment of student learning will lead to
instructional improvement. However, there is very little empirical evidence that assessment does
actually improve teaching, and little is known about the relationship between assessment and
teaching more broadly. This study was designed to examine how assessment shapes teaching at
two research universities. Using a systems theory perspective, this multiple-case study examined
the link between assessment and teaching improvement among individual faculty, departments,
and institutions. The findings provide an overview of assessment activity at each level of the
system and how it shaped change (or lack of change) to the teaching and learning environment.
The study also includes a detailed overview of levers for change at each level (individual,
departmental, and institutional), as well as relationships and influences across levels. Based on
these findings it appears that assessment has the potential to improve teaching at research
universities, but it is complicated. Teaching improvement does not automatically result once
institutions engage in assessment. Rather, a particular approach to assessment can foster teaching
improvement, and attention to multiple supportive levers across all levels of the system is
necessary to drive change.
Assessment and Teaching Improvement in Higher Education: Investigating an Unproven
Link
Chapter 1: Introduction
As access to higher education has increased over the last 70 years, more students are
going to college in the United States than ever before. Nearly 90 percent of high school
graduates will enroll in some sort of postsecondary education over the course of their lives
(American Academy of Arts and Sciences, 2017; National Center for Education Statistics, 2016).
As more students enter institutions of higher education, concerns about the quality of the
education they receive have emerged (American Academy of Arts and Sciences, 2017; Arum &
Roksa, 2011). Despite improvements in access to higher education, less than two-thirds of
students who enroll in college end up completing a bachelor’s degree (National Center for
Education Statistics, 2016). Those students who do complete a degree often lack key skills and
knowledge that employers or graduate school faculty expect (American Academy of Arts and
Sciences, 2017; Arum & Roksa, 2011).
Increasing numbers of students in college and concerns about the quality of
undergraduate education have led to an increased focus on assessing what students are actually
learning in their courses and majors. As a result, assessment activity on campus has
increased dramatically. Assessment is the measurement of what students know and are able to do
as a result of their experience with the educational environment (Banta & Palomba, 2014;
Entwistle, 1984; Marsh, 2007). Also termed student learning outcomes assessment (SLOA), this
measurement of what students have learned can occur at the classroom level, the department or
program level, or even the institutional level, as with assessments of learning in general
education programs (Banta & Palomba, 2014).¹
Assessment can take many different forms: formal (tests, exams, written assignments)
and informal (classroom questions or checks for understanding, reflections on learning);
summative (at the end of a class or program) and formative (during the learning process);
embedded (integrated into regular coursework) or add-on (ungraded assessments that are outside
of course requirements); direct (tangible evidence of student learning) or indirect (“proxy signs
that students are probably learning,” such as course grades or student self-reports of learning)
(Suskie, 2009, p. 20). These dichotomies are helpful frameworks for conceptualizing different
methods of conducting assessment. However, these categories are not mutually exclusive in
practice; rather, most institutions use multiple methods to assess student
learning at various levels. At the classroom level, for example, faculty use both formal and
informal assessments. Assessments at the program level may use both embedded and add-on
methods. Institution-level assessments may use both indirect and direct measures of student
learning. Some well-known instruments for assessing undergraduate student learning include
externally-created performance assessments or tests of subject-specific knowledge such as the
Diagnostic of Undergraduate Chemistry Knowledge (DUCK) in Chemistry; general knowledge
measures such as the Collegiate Learning Assessment (CLA) or the Proficiency Profile; rubrics
that are used to assess authentic student work like course papers; classroom-based performance
assessments; capstone projects; and portfolios of student work (Jankowski, Timmer, Kinzie, &
Kuh, 2018). Assessment thus encompasses a wide spectrum of different approaches, methods,
and purposes.

¹ I will use "assessment," "student learning outcomes assessment," and "SLOA" interchangeably throughout this study. While assessment is sometimes also used more broadly to describe evaluation of non-academic programs and support services (Banta & Palomba, 2014), my work will focus only on assessment as it relates to undergraduate student learning.
Campuses often undertake assessment with the hope of both meeting external
accountability demands and improving student learning (Ewell, 2002). As noted above,
stakeholders both inside and outside higher education have raised concerns about the quality of
the undergraduate experience and called for increasing evidence of student learning.
Accountability pressures have emanated from both state and federal policymakers, with some
states moving towards performance-based funding policies based on assessment results,
generally using large-scale, externally created standardized assessments (Banta, Ewell, &
Cogswell, 2016; Ewell, 2008). Despite the growing prevalence of these accountability policies,
regional accreditors remain the primary external drivers of assessment activity on college and
university campuses. Decentralized just like the American higher education system, the six
regional accreditors examine and certify institutions as being of sufficient quality to continue
receiving federal funding, so their influence is fairly significant (Rice, 2006). Since the early
years of the twenty-first century, “each of the regional accreditors has updated and strengthened
standards for assessment” (Provezis, 2010, p. 9). Campus leaders have taken notice of
accreditors’ increased emphasis on SLOA: a 2013 survey of provosts and other academic leaders
at over 1200 colleges and universities across the U.S. found that external accountability, such as
accreditation, is the primary driver of assessment activity on most campuses (Kuh, Jankowski,
Ikenberry, & Kinzie, 2014).
As accountability pressures from external stakeholders have accelerated, some
stakeholders within the academy have made a parallel push for increased assessment activity, but
with a different primary purpose: to improve student learning and faculty teaching practice.
Ewell (2008) terms this movement the improvement paradigm. As opposed to the accountability
paradigm described above, the improvement paradigm is focused on improving student learning
rather than proving that students are learning. The push to use assessment for improvement arose from
discussions about curricular reform and other major changes to educational practice, especially
in general education. These discussions were centered on development of “coherent curricular
experiences which could best be shaped by ongoing monitoring of student learning and
development” (Ewell, 2002, p. 6). Some institutions, such as Alverno College, began voluntarily
experimenting with different forms of SLOA focused on providing direct feedback to faculty on
their students’ learning.
Underlying the assessment for improvement paradigm is an assumption that assessment
of student learning will lead to instructional improvement through improved faculty
understanding of student learning or a shift in faculty focus from teaching to learning (Barr &
Tagg, 1995; Hutchings, 2010). While faculty and institutional leaders find the most value in the
idea of using assessment to improve their practice (Jankowski, Timmer, Kinzie, & Kuh, 2018),
there is actually very little empirical evidence of assessment’s efficacy for improving teaching
and learning. There are a plethora of books and articles promoting the use and value of
assessment (e.g. Astin & antonio, 2012), several how-to guides for campus stakeholders (e.g.
Suskie, 2009), and many single-institution case studies of either challenges or best practices in
implementing assessment (see Kezar, 2013a). However, there are almost no studies explicitly
examining the relationship between assessment and improved teaching (Provezis, 2010;
Cogswell, 2016). In fact, there is minimal research on the characteristics of the relationship
between assessment and teaching generally, much less on assessment’s potential to improve
teaching.
In order to better understand the potential role of assessment in improving teaching, it is
helpful to consider existing evidence on teaching improvement in higher education. Improving
teaching has been a particular concern in STEM disciplines, as poor teaching has been linked to
declining interest in and increased attrition from STEM majors (Seymour & Hewitt, 1997).
Fairweather (2008) describes several assumptions that undergird most approaches to
instructional improvement in STEM. Most relevant for this study, he identifies the assumptions
a) that simply exposing faculty to empirical evidence of the efficacy of certain pedagogical
approaches is sufficient to change teaching practice, and b) that “the instructional role can be
addressed independently from other aspects of the faculty position…and from the larger
institutional context” (Fairweather, 2008, p. 3). A mounting body of evidence demonstrates that
these assumptions are misguided. While certain highly motivated individual faculty members
may change their teaching practice under such conditions, the majority of faculty do not. Rather,
faculty adoption of more effective teaching methods is influenced by a multitude of other factors
such as rewards, workload, curriculum, leadership, and resources (Fairweather, 1996). For
example, faculty who do not perceive that instructional improvement is valued by leadership or
that they will be rewarded for it are unlikely to take steps to improve their teaching, regardless of
their interest in or commitment to good teaching (Leslie, 2002).
The assumption that mere exposure to or engagement in assessment will lead to
instructional improvement is similarly misguided. There are a variety of aspects of both the
faculty role and the departmental and institutional environments that influence how faculty
engage in assessment and may differently influence its relationship with teaching (Banta, 2007;
Guetterman & Mitchell, 2016; Kezar, 2013a). For example, if assessment is primarily conducted
at the institutional level and faculty get very little actionable feedback on their teaching, it is
unlikely that assessment will lead to teaching improvement regardless of faculty members’ level
of exposure. Similarly, if incentives or rewards structures do not support faculty engagement
with assessment, faculty are unlikely to spend much time assessing and therefore also unlikely to
get much helpful information about their students’ learning (Kezar, 2013a).
A more effective and comprehensive approach to teaching improvement and the role that
assessment might play in improvement should focus on teaching systems, or the complex and
interrelated contextual factors that influence the teaching and learning process, rather than solely
on faculty knowledge and experiences (Austin, 2011; Fairweather, 2008). The extent to which
assessment influences and is influenced by various aspects of the teaching system in higher
education will determine how it can shape improvements to the teaching and learning
environment.
Study Focus and Research Questions
There are a handful of studies that demonstrate the potential for classroom-based
assessment to give faculty effective feedback on elements of their teaching practice that both
facilitate and hinder student learning (Angelo & Cross, 1993; Steadman, 1998). And a few
studies show that assessment at the program level can lead to improved student learning (Francis
& Steven, 2003; Kandlbinder, 2015; Volkwein, Lattuca, Harper, & Domingo, 2006). However,
given the extensive assessment work that has occurred on campuses across the country over the
last three decades and the assumption that assessment can improve teaching, there is remarkably
little empirical evidence linking assessment at any level to improved teaching and learning. The
studies that do examine the effects or outcomes of assessment in practice at American
universities tend to be descriptive, atheoretical, single-institution case studies (see Kezar, 2013a;
Provezis, 2010; Cogswell, 2016). Further, we know very little about how assessment shapes and
is shaped by various aspects of the teaching system, from faculty beliefs, values, and actions to
departmental policies, norms, and practices (Kezar, 2013a). No studies have examined the
relationship between assessment and teaching while also examining the broader context within
which instructional improvement occurs, as Fairweather (2008) and others recommend. And no
studies have examined this issue within the context of research universities, where one-third of
all baccalaureate degrees are awarded (Kuh, 2004). This study aims to examine this complex
relationship by answering the following research questions:
1. In what ways does assessment shape teaching and learning environments at research
universities?
a. How does assessment shape teaching and learning at the individual level?
b. How does assessment shape teaching at the departmental level?
c. How does assessment shape teaching and learning environments at the
institutional level?
2. How do departmental and institutional policies and supports shape assessment’s ability to
improve teaching and learning contexts? How do external factors shape the relationship?
3. How do different forms of assessment shape teaching and learning contexts differently?
I frame these questions using a systems theory approach, which I describe in greater detail in the
next chapter. By taking a broader view of teaching in higher education, this study elucidates the
relationship between assessment and teaching in new ways and provides a more complex and
rich way of thinking about this work.
Significance
This research was designed to enhance our understanding of a topic that has been
underrepresented in the empirical research literature: the link between assessment and teaching
improvement (Angelo & Cross, 1993; Cogswell, 2016; Kezar, 2013a; Provezis, 2010; Steadman,
1998). While contributing to an empirical and theoretical understanding of this issue, my study
also has several areas of practical significance. First, it is important to understand
whether assessment actually fosters teaching improvement. If it does, institutional leaders will
want to know how they can best use assessment to improve teaching on their campuses. For
example, which types of assessments are most valuable for improving the teaching and learning
environment? Is there a particular level of the system in which this link is strongest or most
meaningful? Leaders may want to reflect on better ways to link assessment to the teaching
system, such as changing policies and structures or incentives and rewards to better reflect the
value of assessment for improving teaching. If, however, assessment does not contribute to
teaching improvement, leaders should think critically about asking faculty members to spend
their already limited time on an activity that does not further their professional development. If
assessment is merely an exercise in accountability, stakeholders should be clear about that fact
and not frame it as a way to improve teaching and learning.
Finally, since assessment is already happening in some form at over 84% of colleges and
universities across the United States, if there is evidence that it can improve teaching, assessment
could be leveraged to make large-scale, significant changes to teaching and learning in higher
education (Kuh, Jankowski, Ikenberry, & Kinzie, 2014). And since it already exists on most
campuses, building off existing assessment processes to enhance teaching and learning could
represent a significant cost savings to institutions. In a time of strapped budgets and increasing
concern about educational quality for an ever-larger population of college students, this potential
is especially meaningful.
Organization
In the remaining chapters of this study, I describe in more detail the existing literature on
this issue, the theoretical framework that guided the study, the study’s methodology, results, and
implications. Specifically, in Chapter 2 I provide a more detailed definition of assessment and a
description of various forms it takes; an overview of its history in higher education; and
contemporary issues in assessment, including challenges to its implementation. I then briefly
review the literature on teaching improvement in higher education and narrow in on links
between assessment and teaching improvement in existing research. In the second half of
Chapter 2, I explain how systems theory provides an overarching framework for understanding
the relationship between assessment and teaching improvement. Chapter 3 describes the study’s
methodology: a multi-method, multi-site embedded case study. I describe the site selection and
data collection methods, as well as data analysis and strategies for validity and trustworthiness.
The study’s findings are presented in Chapters 4, 5, 6, and 7. In Chapter 4, I discuss the ways in
which assessment shaped changes to the teaching and learning environment at the institutional
level at the two universities I studied. Chapter 5 provides an overview of the departmental level
and a description of each of the six departments that were included in this study. Chapter 6
describes the ways in which assessment changed teaching among individuals in the study. In
Chapter 7, I discuss several cross-level issues that emerged in my findings, namely issues around
alignment and tensions. Finally, Chapter 8 describes the implications for theory and practice.
Chapter 2: Literature Review and Theoretical Framework
Assessment is the measurement of what students know and are able to do as a result of
their experience with the educational environment (Banta & Palomba, 2014; Marsh, 2007;
Entwistle, 1984). The knowledge and skills that students are expected to demonstrate as a result
of their college courses or programs are also known as student learning outcomes (SLOs).
Assessment of SLOs for groups of students at an institution is called student learning outcomes
assessment (SLOA). While assessment is sometimes also used more broadly to describe
evaluation of campus programs in student affairs and support services at any level or even
institutional effectiveness (Banta & Palomba, 2014), my work will focus only on assessment as it
relates to student learning at the undergraduate level.
In the first section of the chapter, I will first briefly define assessment and discuss
common methods and measures and then delve into the history of assessment in higher
education. I will then describe in more detail the contemporary landscape of SLOA across the
United States, as well as challenges to its implementation. I conclude the first section of the
chapter with an overview of literature on teaching improvement in higher education and how
assessment may be linked to teaching improvement. An understanding of these areas of
empirical research is critical for framing my study and understanding how and why assessment
could shape changes to the teaching and learning environment in the contexts I investigated. The
second section applies a systems theory framework to this problem and examines how the
relationship between assessment and teaching improvement may influence or be influenced by
various levels of the teaching system—individual, departmental, institutional, and external.
These sections offer a theoretical grounding for my work and provide additional lenses for
interpreting and framing my findings.
What Is Assessment?
In the previous chapter, I mentioned several ways that assessment can be conducted
in higher education (formal/informal, summative/formative, embedded/add-on, direct/indirect).
There are also a number of different types of learning that should be considered when conducting
assessment in higher education. Shavelson (2010) describes these types of learning as falling into
three major categories. The first is domain-specific subject matter, such as “declarative,
procedural, and schematic knowledge” in an academic major (p. 18). The second area consists of
broad abilities that are often associated with liberal arts or general education, such as critical
thinking, problem-solving, and verbal or quantitative reasoning. The third area includes “so-
called soft skills,” such as creativity, civic or moral responsibility, teamwork, or intercultural
competence (p. 18). Shavelson (2010) claims that direct assessment of all three areas of learning
is critical for measuring student learning that accounts for the diverse goals and missions of
higher education institutions. These areas can be measured at a variety of levels, from the
individual course to the program (major or general education), school, or institutional level.
Learning can also be measured at different points in the process and for different purposes.
Assessment of learning can be formative (measurement in the middle of the course or program,
results used for improvement) or summative (measurement at the end of a student’s experience
in a course or program, results used to determine whether a certain standard was met) (Brown &
Knight, 1994; Provezis, 2010).
Given the diversity of outcomes that are associated with SLOA, the methods or
instruments used to measure them are varied. Assessment in general is often associated with
standardized tests; in higher education there are a number of standardized tests used to assess
student learning. These include the Collegiate Assessment of Academic Proficiency (CAAP), the
Collegiate Learning Assessment (CLA), and the Proficiency Profile (formerly the Measure of
Academic Proficiency and Progress, or MAPP) (Delaney, 2015; Shavelson, 2010). CAAP and
the Proficiency Profile are both multiple-choice tests that measure reading, writing, mathematics,
and critical thinking (Shavelson, 2010). Both tests are designed to measure student learning
outcomes in general education. The CLA differs somewhat from these other two standardized
assessments in that it is not a multiple-choice test; rather, it uses real-world perform ance tasks
and open-ended questions to assess general education competencies such as analytic reasoning
and problem-solving in addition to critical thinking and writing. Because the CLA is not a
multiple-choice test yet still allows for comparison across programs and institutions, it is seen as
a more authentic measure of student learning in college than some of the other standardized
assessments (Shavelson, 2010). All of these tests, however, are generally used to assess student
learning outcomes at the program or institution level and offer summative judgement on the
efficacy of a program or institution. While they are helpful for determining what a program or
institution is doing well and useful for making comparisons across institutions, they offer little
actionable feedback to faculty members who want to use assessment results to improve their
teaching. Additionally, many faculty are skeptical of standardized assessments and view them as
an infringement on academic freedom or an attempt to narrow the curriculum (Kramer, 2008).
Further, these tests are often not part of regular classroom work so motivation for students to
take them seriously is often lacking, calling into question the validity of their results (Benjamin
et al., 2012).
By contrast, other methods of assessing student learning outcomes aim to use authentic
examples of student work to provide feedback to students and faculty on student learning and
offer actionable information to help faculty improve their teaching. These embedded assessments
often use rubrics to appraise student papers or presentations. Rubrics can measure student
learning in a single class, but they can also capture learning across courses or programs, such as
the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics created by the
Association of American Colleges and Universities (AAC&U) (Banta & Palomba, 2014; Palmer,
2012; Rhodes, 2008). Electronic portfolios of student work and performance assessments are
other direct measures of student learning that are more focused on giving feedback to faculty and
students than on providing a summative judgement of student or faculty performance (Banta &
Palomba, 2014). These authentic assessment methods are embedded in the regular work of
faculty, and they can easily incorporate the specific disciplinary perspectives that faculty value.
The downside of these measures is that they make comparability across programs difficult or
impossible, and their development and use are more time-consuming than those of standardized
instruments. Clearly, there are a variety of methods and instruments currently used to assess
undergraduate student learning in higher education, often with competing purposes and uses.
This abundance of measures arose out of assessment’s complex history and diverse influences,
which I describe in the next section.
Brief History of Assessment in Higher Education
Evaluation of student learning has been a part of the educational process in higher
education since at least the early 1900s, though it did not become a major focus of public
attention until the 1980s (Carey & Gregory, 2003; Ewell, 2008; Shavelson, 2010). Shavelson
(2010) describes three eras of assessment in American higher education that spanned the first
three-quarters of the twentieth century. The first era, encompassing the first thirty years of the
century, saw the origins of standardized testing of learning. Two landmark studies, one
conducted by researchers with the Carnegie Foundation for the Advancement of Teaching and
one called the Pennsylvania Study, used multiple-choice examinations to test students on their
content knowledge in various subjects such as English, mathematics, or physics. The latter study
was unique in that it distinguished between achievement, “the accumulation of knowledge and
reasoning capacity at a particular point in time,” and learning, the “change in knowledge and
reasoning over the college years” (Shavelson, 2010, p. 25). This distinction presages our modern
debates about achievement at a discrete point in time versus growth in knowledge over the
course of a student’s college career, or the “value-added” by college (Astin, 1977; Ewell, 2002).
The second era of assessment in higher education, which Shavelson (2010) places from
1933-1947, saw the advancement of general education and assessment of learning in general
education programs, as well as development of the Graduate Record Examination (GRE) as a
summative measure of student learning at the baccalaureate level and gateway for admittance to
graduate school. Assessment in general education was particularly innovative, as scholars and
practitioners attempted to measure not only the cognitive outcomes of general education but also
the “personal, social, and moral” outcomes (Shavelson, 2010, p. 26). This effort has reemerged
in the twenty-first century through such initiatives as VALUE, which identified 16 learning
outcomes and offered accompanying rubrics to measure student learning along both cognitive
and personal, social, and moral dimensions (Rhodes, 2008). The third era of assessment, which
Shavelson (2010) terms the “rise of the test providers,” began after World War II and lasted until
the late 1970s (p. 29). This era is marked by the massification of American higher education in
the post-war period as a result of the GI Bill and the subsequent emergence of the Educational
Testing Service (ETS) in 1949 and the American College Testing (ACT) program in 1959.
While Shavelson (2010) focuses on testing and instrumentation in his history of
assessment in higher education, other scholars have traced a somewhat broader historical
trajectory. Ewell (2002), for example, notes the emergence of four separate intellectual traditions
around student learning and evaluation in the middle of the twentieth century that together
presaged the current assessment movement. These emerging areas of scholarship arose partly in
response to the growth of large-scale, standardized, multiple choice assessments and partly in
response to the shifting purposes and populations of higher education institutions at mid-century.
The first tradition evolved out of the focus on student learning in college, which can be traced
back to the Pennsylvania Study that Shavelson (2010) identified as part of the first era of
assessment and evolved with the work of Chickering (1969), Astin (1977), Bowen (1977), and
Pace (1979), as these scholars developed ideas about cognitive, developmental, and attitudinal
growth that should occur in college. The second tradition, closely related to research on student
learning, focused specifically on student retention in college and behaviors that influenced
retention (Tinto, 1975). This tradition gave rise to new methodological approaches, such as
longitudinal study designs, student experience surveys, and action research, which continue to
inform assessment scholarship and practice today. The third and fourth influential traditions that
Ewell (2002) identifies represent a departure from the first two, in that they did not originate in
higher education but were later applied. These are the rise of scientific management and program
evaluation, which focused primarily on quantitative measures and introduced ideas such as cost-
benefit analysis and return on investment to the higher education setting; and mastery and
competency-based learning approaches, which originated in K-12 education but quickly found a
footing in higher education in certain types of “alternative” colleges and universities. These
intellectual traditions identified by Ewell (2002), despite their very different disciplinary
influences, goals, and purposes, intersected to influence the modern era of higher education
assessment that began in the 1980s.
Beginning of the Modern Era
Despite these slightly different foci in terms of historical precedent, scholars agree that
the modern era of assessment in higher education began in the 1980s, when the Department of
Education and several national higher education associations began linking assessment with
accountability at major conferences and in published reports (Banta & Palomba, 2014; Ewell,
2002, 2008; Shavelson, 2010). These reports argued that “systematic evidence about what and
how much students learn is an essential prerequisite for systematically improving undergraduate
curricula and pedagogy” (Ewell, 2008, p. 8). Given the lack of a national higher education
coordinating body in the United States and the resultant limited role of the federal government in
higher education policy, the first government mandates for assessment in higher education came
from state legislatures and higher education governing boards (Banta, Ewell, & Cogswell, 2016).
Some states, such as Texas and Florida, used standardized tests whose results could be easily
compared across institutions, such as MAPP, CLA, and the Proficiency Profile. Other states,
such as Virginia, Missouri, and Colorado, allowed institutions to “establish their own student
learning outcomes, choose valid and reliable methods for gathering evidence that these
objectives were being met, and periodically report on their results” (Ewell, 2008, p. 8). And a
few states, most notably Tennessee, began to experiment with tying small amounts of funding
directly to institutional assessment performance (Banta, Ewell, & Cogswell, 2016; Ewell, 2008).
These moves can be linked back to the scientific management and program evaluation traditions
that Ewell (2002) identified.
Around the same time that voices external to higher education began clamoring for
increased accountability and states began mandating various forms of assessment activity,
stakeholders within the academy were simultaneously calling for curricular reform and other
major changes to educational practice, especially in general education. These discussions were
centered around development of “coherent curricular experiences which could best be shaped by
ongoing monitoring of student learning and development” (Ewell, 2002, p. 6). Some institutions,
such as Alverno College, began voluntarily experimenting with different forms of SLOA, aimed
at improving student learning rather than proving that students were learning. These alternative
forms of assessment often used qualitative methods, were embedded into courses, and evoked
the mastery learning and student growth and development traditions identified by Ewell (2002).
By the 1990s, after a recession diminished states’ ability and willingness to back up their
assessment mandates with funding, the locus of influence for external accountability gradually
shifted to regional accreditors, with “an occasional, and frequently disruptive, foray by the
federal government” (Banta, Ewell, & Cogswell, 2016, p. 6). For example, the 1992
reauthorization of the Higher Education Act contained new accountability provisions that
ordered regional accreditors to be more proactive in requiring evidence of student achievement
as part of the accreditation process (Ewell, 2008). A significant 2006 report
by the Secretary’s Commission on the Future of Higher Education (more commonly known as
the Spellings Commission) found that accreditation did not provide adequate or reliable
information to the public about “performance outcomes, including completion rates and student
learning” and should focus on “results and quality rather than dictating…process, inputs, and
governance, which perpetuates current models and impedes innovation” (Spellings, 2006, pp. 24,
20). While a reauthorization of the Higher Education Act in 2007 limited the scope of the
changes recommended by the Spellings Commission the year prior, the Commission had a
lasting impact on the assessment landscape in higher education by focusing public attention on
student outcomes and promoting the use of standardized assessments of student learning for
accountability purposes. Specifically, CAAP and the CLA, along with the National Survey of
Student Engagement (NSSE), were described prominently as assessment exemplars that could be
used to compare college performance (Ewell, 2008). Additionally, the Spellings Commission
thrust accreditors into the accountability spotlight, where they assumed the predominant role in
driving SLOA.
The rise of the modern assessment movement and its diverse influences gave birth to the
major tension that has defined SLOA throughout the end of the twentieth century and into the
twenty-first century. Scholars and practitioners have termed this the tension between assessment
for accountability and assessment for improvement. As noted in the previous chapter, Ewell
(2008) describes these as two distinct paradigms; he traces the accountability paradigm to state
and federal policies and the improvement paradigm to institution-centered assessment efforts
(see Table 1). The accountability paradigm is driven by external players and assumes an
objective stance; the purpose of assessment under the accountability paradigm is to prove to
these external stakeholders that higher education institutions are effective and deserve continued
public confidence and funding support. Data gathered for accountability purposes are often
quantitative in nature and assessment instruments are primarily standardized for comparability
across institutions. Instruments such as the CLA, CAAP, and the Proficiency Profile would fall
under this category. In contrast, the improvement paradigm is driven by internal stakeholders
who are interested in enhancing teaching and learning at their institutions; evidence is gathered
directly by practitioners (generally faculty) and can be quantitative or qualitative. Generally, this
type of assessment is embedded into the everyday work of faculty and includes such instruments
as capstone projects, portfolios, or rubrics assessing authentic student work such as research
papers or essays. While external players who push for assessment for accountability often claim
to value its improvement purposes, in practice these paradigms often conflict with one another
and drive different behaviors on campuses. These frequently conflicting paradigms animate the
current assessment landscape, which I describe below.
Table 1: Ewell’s (2008) Two Paradigms of Assessment
                             Assessment for Improvement     Assessment for Accountability
                             Paradigm                       Paradigm
Strategic Dimensions
  Intent                     Formative (Improvement)        Summative (Judgment)
  Stance                     Internal                       External
  Predominant Ethos          Engagement                     Compliance
Application Choices
  Instrumentation            Multiple/Triangulation         Standardized
  Nature of Evidence         Quantitative and Qualitative   Quantitative
  Reference Points           Over Time, Comparative,        Comparative or Fixed Standard
                             Established Goal
  Communication of Results   Multiple Internal Channels     Public Communication
                             and Media
  Use of Results             Multiple Feedback Loops        Reporting
Assessment Today: Contemporary Landscape
Over the last decade or so, SLOA has become pervasive in institutions throughout the
United States. Currently, over 84% of colleges and universities have some sort of stated learning
outcomes for their students, and institutions are using more types of assessment measures than in
years past (Kuh, Jankowski, Ikenberry, & Kinzie, 2014). As mentioned above, regional
accreditors have become the primary drivers of assessment activity and require more meaningful
evidence of assessment activity than in years past. Decentralized just like the American higher
education system, the six regional accreditors examine and certify institutions as being of
sufficient quality to continue receiving federal funding, so their influence is fairly significant
(Rice, 2006). Since the early years of the twenty-first century, “each of the regional accreditors
has updated and strengthened standards for assessment” (Provezis, 2010, p. 9). Accreditors have
developed new assessment resources to underscore the importance they now place on SLOA as
part of the reaccreditation process. Most prominently, the Western Association of Schools and
Colleges (WASC) Senior College Commission holds an Assessment Leadership Academy
(ALA) each year to provide faculty, staff, and administrators with “foundational knowledge of
the history, theory, and concepts of assessment” and develop their “expertise in training and
consultation, campus leadership for assessment, and the scholarship of assessment” (Assessment
Leadership Academy, n.d.). Campus leaders have taken notice of accreditors’ increased
emphasis on SLOA: a 2013 survey of provosts and other academic leaders at over 1200 colleges
and universities across the U.S. found that external accountability, such as accreditation, is the
primary driver of assessment activity on most campuses (Kuh, Jankowski, Ikenberry, & Kinzie,
2014).
At the same time that accreditors have accelerated their demands for evidence of SLOA,
a number of other influential higher education organizations have also increased their advocacy
for assessment. Funders such as the Lumina Foundation, Teagle Foundation, and Carnegie
Corporation of New York have all sponsored assessment initiatives. These include the Degree
Qualifications Profile (DQP) and Tuning USA, which brought groups of faculty together to
define common learning outcomes and measures for disciplines and degree programs; consortial
assessment projects like the Multi-State Collaborative (MSC), which convened institutions
across multiple states to experiment with new assessment systems; and the VALUE initiative, the
AAC&U project described above that created rubrics to measure a variety of identified student
learning outcomes (Banta, Ewell, & Cogswell, 2016). Additionally, new organizations have
formed that focus explicitly and exclusively on SLOA, most prominently the National Institute
on Learning Outcomes Assessment (NILOA).
Interestingly, assessment-relevant organizations and even accreditors themselves have
shifted their focus in recent years from merely examining whether students have attained certain
learning outcomes to whether institutions are improving student learning by using prior
assessment evidence (Russell & Markle, 2017). In other words, even those players most
instrumental in pushing assessment for accountability have grown to embrace the power of
assessment for improving teaching and learning, signifying the potential for harmony between
assessment’s competing purposes (Banta, 2007). Although assessment for accountability may not
ever go away, perhaps its improvement functions can be emphasized or integrated into existing
accountability frameworks for a more meaningful, holistic approach to assessing student
learning.
Differences by Discipline and Institution Type
Despite the seeming convergence of major higher education stakeholders around the
importance of assessment, there are discrepancies in the degree to which institutions have
implemented SLOA throughout their programs and departments and the degree to which they
use assessment results to improve teaching and learning. Both institution type and discipline can
affect the extent to which SLOA is implemented. For example, institutions with a teaching focus
tend to have a more sophisticated assessment infrastructure. Community colleges, baccalaureate-
and master’s-level institutions (liberal arts colleges and regional comprehensive schools) all have
higher levels of assessment activity than doctoral institutions (Jankowski, Timmer, Kinzie, &
Kuh, 2018). Higher proportions of these institutions have established learning outcomes at the
program and institutional levels and use multiple types of assessments to measure student
learning outcomes (ibid). Doctoral degree-granting institutions are least likely of all institution
types to report having stated learning outcomes or SLOA that is aligned across courses,
programs, and departments, while larger and more selective institutions report lower assessment
activity than less selective institutions across every category measured (Kuh, Jankowski,
Ikenberry, & Kinzie, 2014; Jankowski, Timmer, Kinzie, & Kuh, 2018). This outcome is likely a
result of the diminished focus on instruction at research universities and corresponding over-
valuing of research compared to their regional comprehensive university or community college
counterparts, as well as a perception among research university faculty and leaders that their
prestige should exempt them from being subject to external accountability (Kember, 1997;
Fairweather & Rhoads, 1995; Fairweather & Beach, 2002).
Assessment activity also varies depending on the discipline (Swarat et al., 2017). Indeed,
many national assessment efforts have been organized around the disciplines, as faculty
members’ identity and affinity are often linked more to their discipline than to their institution
(Pallas, 2011; Pitt & Tepper, 2012; Becher, 1987). For example, the Degree Qualifications
Profile (DQP) project, which specifies what students should know and be able to do upon
completion of a bachelor’s degree in a particular field of study, emphasizes the importance of
specialized or disciplinary-specific knowledge in establishing learning outcomes and
benchmarks (Adelman, Ewell, Gaston, & Schneider, 2014). The linked Tuning project, which
convenes groups of faculty to establish discipline-specific learning outcomes and assessments,
has fleshed out that specialized knowledge for degree programs at over 340 institutions
(Marshall, Jankowski, & Vaughan, 2017). A project called Measuring College Learning (MCL)
brought together faculty from six disciplines (biology, business, communication, economics,
history, and sociology) to articulate discipline-specific learning outcomes and assessment
principles (Roksa, Arum, & Cook, 2016). The Association of Institutional Research (AIR)
created a series of papers called Assessment in the Disciplines, which examined best practices in
assessment in four different disciplines (business, engineering, math, and writing) (Martell &
Calderon, 2005; Kelly, 2008; Madison, 2006; Paretti & Powell, 2009).
Additionally, several disciplinary associations have engaged with assessment, either in
connection with one of the previously mentioned projects or of their own accord. For example,
the American Historical Association held two forums on assessment in 2009 and 2015
(American Historical Association, n.d.). The American Political Science Association (APSA)
and the Modern Language Association (MLA) have both released books on assessment in their
discipline (Deardorff, Hamann, & Ishiyama, 2009; White, Lutz, & Kamusikiri, 1996). The
American Sociological Association and the American Psychological Association both convened
task forces on assessment (APA Project Assessment, n.d.; ASA Task Force on Assessment of the
Undergraduate Sociology Major, 2005). And several of the scientific disciplinary societies (e.g.
physics, chemistry, biology) have created content assessments or featured them prominently on their
websites.
These projects all reflect the primacy of the discipline in academic work. Further,
disciplinary differences in terms of epistemology, ontology, and culture may shape distinct
approaches to assessment (Biglan, 1973; Becher, 1987; Jessop & Maleckar, 2014; Kolb, 1981;
Neumann, 2001; Neumann, Parry, & Becher, 2002; Pryor & Crossouard, 2010). For example,
many scientific disciplines, which have established bases of knowledge that students must build
on in order to get to the next level, often advocate the use of multiple-choice instruments such as
concept inventories to assess student learning (Biglan, 1973; Libarkin, 2008). Conversely,
humanities and social science disciplines, in which the knowledge base is more contested and
interpretive, tend to encourage the use of more qualitative assessments, such as essays,
presentations, or portfolios (Jessop & Maleckar, 2014). Further, disciplinary cultures and ways of
thinking may also shape faculty members’ receptivity to and willingness to engage in
assessment. STEM faculty, for instance, may be more open to the measurement and
quantification of student learning, whereas humanities or social science faculty may have
philosophical objections to the entire endeavor.
Challenges to Assessment
Perceptions about the competing purposes of assessment, institution type, and discipline
can all lead to differential levels of SLOA implementation, as well as differential levels of
engagement with assessment among faculty (Hutchings, 2010). Faculty engagement is especially
important if assessment results are to be used for improvement of teaching and learning.
Previous research suggests that faculty tend to be skeptical of assessment (MacDonald, Williams,
Lazowski, Horst, & Barron, 2014; Ebersole, 2009; Grunwald & Peterson, 2003), especially if
they perceive SLOA as being externally driven or primarily an exercise in compliance (Kramer,
2008). Reasons for this skepticism include faculty perceptions of low utility value and high cost
of implementing assessment (MacDonald, Williams, Lazowski, Horst, & Barron, 2014); lack of
incentives or rewards for implementing SLOA (Carey & Gregory, 2003); few opportunities to
engage in professional development to gain a better understanding of SLOA (Peterson &
Einarson, 2001); and even concerns about assessment being a threat to faculty autonomy or a
violation of academic freedom (Kramer, 2008).
On the other hand, faculty who perceive assessment’s value for helping them improve
their practice are much more receptive to it than those who feel forced to use it for accountability
purposes. Those faculty who actually engage with assessment report that they are supportive of it
and find it useful for their practice (Ebersole, 2009; Kramer, 2008). Further, a majority of
campus leaders also consider internal use of SLOA to drive improved teaching and learning to be
far more valuable than SLOA driven by external stakeholders for accountability purposes (Kuh,
Jankowski, Ikenberry, & Kinzie, 2014). While faculty and institutional leaders find the most
value in the idea of using SLOA to improve their practice (and external stakeholders are
beginning to agree), there is actually very little empirical evidence of assessment’s efficacy for
improving teaching and learning. As noted in Chapter 1, there are almost no studies explicitly
examining the relationship between assessment and improved teaching (Provezis, 2010;
Cogswell, 2016).
Assessment and Teaching Effectiveness and Improvement
In order to understand how assessment might improve teaching in higher education, it is
first important to understand what researchers have found about effective teaching and teaching
improvement generally. In this section, I will briefly review the literature on effective teaching
and teaching improvement in higher education before discussing some reasons why assessment
might lead to improved teaching.
Teaching Effectiveness in Higher Education
The dominant mode of teaching in the modern history of higher education has been the
lecture, in which a faculty member dispenses information or delivers content with little or no
input or participation from students (Freeman et al., 2014; Kramer, 2017; Lederman, 1992). With
some exceptions for humanities disciplines in which small discussion-based seminars are the
norm, the lecture prevails. Studies in mathematics (Bressoud, 2011), economics (Watts &
Schaur, 2011), and the sciences (Deslauriers, Schelew, & Wieman, 2011) demonstrate the
persistent popularity of the lecture method in undergraduate teaching. The lecture’s popularity
corresponds with the traditional view of the faculty member as subject matter expert and
instruction as information transmission (Barr & Tagg, 1995; Halpern, 1994; Mellow, Woolis,
Klages-Bombich, & Restler, 2015; Trigwell, Prosser, & Waterhouse, 1999).
Despite the continued prevalence of the lecture, research has shown consistently that
lecture-based or information transmission modes of instruction are less effective for promoting
deep learning and long-term retention of information than discussions or other more active,
student-centered methods of instruction (Bane, 1925; Kember & Gow, 1994; McKeachie, 1990;
Samuelowicz & Bain, 2001; Trigwell & Prosser, 2004; Trigwell, Prosser, & Waterhouse, 1999).
Effective undergraduate teaching engages students in extensive interactions with faculty
members and fellow students, as well as in activities such as problem-based or inquiry learning,
group work or collaborative projects, peer instruction, and hands-on experiments or
demonstrations (Chickering & Gamson, 1991; Kuh, Kinzie, Schuh, & Whitt, 2005; Pascarella &
Terenzini, 2005). These active approaches correspond with a view of faculty as facilitators of
student learning and instruction as supporting student learning and conceptual change (Barr &
Tagg, 1995; Blumberg, 2009; Lueddeke, 1999; Tagg, 2003; Trigwell, Prosser, & Waterhouse,
1999). A growing body of studies conducted in recent years in STEM fields demonstrate that
students learn more in courses taught using active approaches (Deslauriers, Schelew, & Wieman,
2011) and that they pass courses at significantly higher rates and with better grades if the courses
use active pedagogies rather than lectures (Freeman et al., 2014).
The focus on teaching effectiveness in the United States and the motivation behind it has
paralleled the assessment movement in some ways. In the 1970s and 1980s, as accountability
pressures confronted higher education, focus on teaching effectiveness intensified, with an eye
towards proving that teaching was effective (Rice, 2006). In response, Ernest Boyer’s (1990)
seminal work, Scholarship Reconsidered, called attention to the concept of teaching as
scholarship and attempted to make teaching a scholarly enterprise and engage faculty in teaching
improvement rather than simply being at the mercy of external accountability pressures (Rice,
2006). Boyer’s work, along with his successor at the Carnegie Foundation for the Advancement
of Teaching, Lee Shulman, led to the creation of a scholarly movement known as the scholarship
of teaching and learning, or SoTL (Hutchings, Huber, & Ciccone, 2011; Pallas, Neumann, &
Campbell, 2017). The ascendance of SoTL saw the creation of numerous journals, conferences,
books, and studies on teaching and learning in higher education. Pressure both from within and
outside the academy, along with mounting evidence on what constitutes effective college
teaching, has led to increased attention on the importance of undergraduate teaching and how it
can be improved.
Teaching Improvement Initiatives
A plethora of teaching improvement initiatives have arisen in response to concerns about
the quality of undergraduate teaching. Authors of a recent report on undergraduate teaching
improvement commissioned by the American Academy of Arts and Sciences (AAAS) categorize
teaching improvement initiatives into internal initiatives, based on campuses, and external
initiatives, based in organizations or communities beyond individual campuses (Pallas,
Neumann, & Campbell, 2017). There is actually a great deal of overlap between these categories,
as many initiatives that began on individual campuses have grown to be national in scale, and
most external initiatives have at least some foothold on individual campuses. Regardless, it
remains a helpful way to conceptualize and organize some of the many teaching improvement
initiatives that have formed in the United States over the last several decades.²

² This overview is not meant to be a comprehensive account of all teaching improvement initiatives in the United States, but rather a selection of some of the most prominent ones.

Internal teaching improvement initiatives. Internal teaching improvement initiatives
are those that tend to be campus-based, rather than housed at a national association or sponsored
by a foundation. I will briefly describe three of the most prominent initiatives in this section:
Centers for Teaching and Learning, faculty mentoring, and peer review of teaching (including
observations and portfolios). First, Centers for Teaching and Learning (CTLs) are often the locus
for teaching-focused professional development on a campus. They are “typically constituted as
formal (budgeted) organizational units staffed with members of the faculty development
profession” (Pallas, Neumann, & Campbell, 2017, p. 17). Many CTLs are affiliated with a
network called the Professional and Organizational Development Network in Higher Education
(POD), which helps faculty developers and staff of CTLs continue to learn about the best ways
to help faculty improve their teaching (Rice, 2006). CTLs can vary greatly in terms of their
staffing, space, and resources, but they typically offer workshops for groups of faculty on various
teaching-related topics, as well as more personalized services for faculty members who are
struggling or who simply want to improve their teaching (Rice, Sorcinelli, & Austin, 2000; Rice,
2006). Until recently, little research existed demonstrating the effectiveness of CTLs or faculty
developers in helping faculty improve their teaching and, in turn, improve student learning.
However, a recent study does demonstrate the links between faculty development work and
improvement of teaching and learning (Condon, Iverson, Manduca, Rutz, & Willett, 2016), and
prominent voices in the field have called for more research into the most effective methods that
CTLs can use to help improve faculty teaching and student learning (Haras, Taylor, Sorcinelli, &
van Hoene, 2017).
Another key teaching improvement initiative taking place on campuses is faculty
mentoring. Mentoring programs pair new faculty with more senior faculty at an institution
(Pallas, Neumann, & Campbell, 2017). Sometimes these programs are facilitated through CTLs,
pairing expert or master teachers with novice faculty, and sometimes they are run out of
departments, pairing senior scholars in a department with newly hired departmental faculty.
However, there is little evidence that these mentoring programs focus exclusively or even
extensively on teaching improvement. Pallas, Neumann, and Campbell (2017) found that most
mentoring programs “focused on mentoring as general support” and “indicated virtually no
attention to teaching improvement” (p. 19). Further, there is little high-quality research on the
efficacy of faculty mentoring programs in general (Zellers, Howard, & Barcic, 2008).
A more promising approach to teaching improvement, with more research to back its
efficacy, is broadly known as peer review of teaching (Thomas, Chie, Abraham, Jalarajan, &
Beh, 2014). Peer review of teaching consists of peer observations of teaching, teaching
portfolios, and guided reflection (Bernstein, Burnett, Goodburn, & Savory, 2006; Chism, 2007;
Crouch & Mazur, 2001; Hutchings, 1996, 1998; Mazur, 1997; Pallas, Neumann, & Campbell,
2017). Peer observation of teaching involves faculty observing one another as they teach and
offering feedback based on those observations (Chism, 2007; Fullerton, 1999; Kohut, Burnap, &
Yon, 2007). Often, faculty observers will use observation protocols or teaching inventories to
account for the practices that are being used in the classroom they observe; protocols can also
offer a more structured way to deliver feedback to those being observed. Observation protocols
are often internally created, though there are several STEM-specific protocols that have gained
popularity among STEM faculty in recent years (Bandy, n.d.; Osthoff et al., 2009; Piburn et al.,
2000; Smith, Jones, Gilbert, & Wieman, 2013; Wieman & Gilbert, 2014).
Another component of peer review of teaching involves teaching portfolios, in which
faculty gather various types of evidence about their teaching effectiveness into a portfolio that is
then reviewed by one or more peers (Seldin, Miller, & Seldin, 2010). Often, results of classroom
observations go into the portfolio, along with syllabi, student evaluations, sample assignments
and course activities, and evidence of student learning (ibid). Portfolios can be used for
evaluation purposes, or to help faculty members reflect on and improve their teaching.
The final component of peer review of teaching is known as guided reflection (Mazur,
1997; Pallas, Neumann, & Campbell, 2017; Schön, 1983, 1987). Guided reflection is a process in
which faculty members reflect on their teaching, both during and after the instruction, with
guidance either from another person or group or from a tool that facilitates reflection (Pallas,
Neumann, & Campbell, 2017; Schön, 1987). Guided reflection can occur in concert with the
teaching observations and portfolios described above. One particularly well-known example of
guided reflection is Mazur’s (1997) peer-instructional model, which guides faculty to teach,
collect data on students’ learning, and reflect on that data to determine the next steps for their
instruction. Mazur’s particular model of guided reflection is backed by a good deal of research
indicating its efficacy for improving instruction and student learning (Crouch & Mazur, 2001;
Lasry, Mazur, & Watkins, 2008; Lorenzo, Crouch, & Mazur, 2006).
External teaching improvement initiatives. A number of teaching improvement
initiatives originated outside of individual institutions through a network, national association, or
intellectual community. These external initiatives fall under two broad categories: projects aimed
at current faculty and projects targeting future/aspiring faculty. First, many national associations,
organizations, networks, and foundations have created projects aimed at improving teaching
among faculty members currently teaching in colleges and universities. One of the earliest of
these initiatives was the Carnegie Academy for the Scholarship of Teaching and Learning
(CASTL), which was a primary driver of the SoTL movement. CASTL also worked with small
groups of faculty members, called CASTL Scholars, to facilitate their learning about pedagogy
and develop teaching improvement projects in their own classrooms (Brint, 2011; Huber &
Hutchings, 2005). While the CASTL Scholars project was fairly small (fewer than 200 scholars
participated over a decade), the faculty who participated in the program demonstrated significant
learning and improvement as a result (Huber & Hutchings, 2005). Additionally, CASTL had an
enormous influence on the broader SoTL movement and the increasing advocacy for considering
teaching as important scholarly work.
An emerging external initiative that has gained prominence in recent years comes from
the Association for College and University Educators (ACUE). ACUE partners with individual
institutions and networks, such as the American Council on Education (ACE) and the Council on
Independent Colleges and Universities (CICU), to offer faculty an online course and a certificate
in evidence-based teaching practices. While the project is still very new (founded in 2014),
recent research suggests that faculty who participated in the course strengthened their use of
evidence-based teaching practices (Morrison, Ross, Morrison, & Reid, 2017).
Another large group of projects aimed at improving teaching for current faculty focus on
those faculty teaching in STEM fields. These projects all take a slightly different approach to
improving teaching. For example, the Bay View Alliance focuses on improving teaching on a
wider scale by leveraging principles of cultural change (Bay View Alliance, n.d.); the Cottrell
Scholars program works with early-career faculty in chemistry, physics, and astronomy who are
excellent in both teaching and scholarship to support innovative education projects both on
individual campuses and in collaborative efforts (Cottrell Scholars, n.d.); Project Kaleidoscope
(PKAL), a project run by AAC&U, empowers a network of faculty, administrators, and other
stakeholders to spread evidence-based teaching practices and promote STEM education within a
liberal arts context (Project Kaleidoscope (PKAL), n.d.). And most recently the Association of
American Universities (AAU), an organization of the most elite research universities in the
country, embarked on a project to improve undergraduate teaching at research universities using
a systems approach to change, targeting faculty on individual campuses, the network of AAU
institutions, and the broader higher education landscape (AAU Undergraduate STEM Education
Initiative, n.d.).³ These projects all demonstrate the growing legitimacy surrounding teaching
improvement in higher education.

³ This is not a comprehensive list of external teaching improvement initiatives, but rather a selection of some of the most well-known.
Another category of external projects is directed at future/aspiring faculty members and
aims to improve training in pedagogy and instruction during graduate education. One of the
earliest and most prominent of these efforts, called Preparing Future Faculty (PFF), worked with
over 45 doctoral-granting institutions to prepare graduate students for the full complement of
faculty tasks and responsibilities (Preparing Future Faculty, n.d.). While PFF did not focus
exclusively on teaching improvement, teaching and pedagogy were a significant component of
the project’s work. The program’s work continues with projects now housed out of individual
institutions. A current STEM-specific project comes from the Center for the Integration of
Research, Teaching and Learning (CIRTL). CIRTL works with 41 research universities to
promote enhanced training in pedagogy and instruction for graduate students in STEM fields
(Center for the Integration of Research, Teaching and Learning, n.d.).
The above review of teaching improvement initiatives outlines just some of the numerous
projects aimed at improving undergraduate instruction. Stakeholders on campus and across the
broader higher education landscape are starting to take teaching more seriously and invest the
time, funds, and intellectual capacity necessary to make change. Assessment has been positioned
within this landscape as another approach to improving teaching, but we still know very little
about the mechanisms of this relationship.
Assessment and Teaching Improvement
The SLOA movement implicitly realigns norms and values around teaching by creating a
system of measurement that prioritizes student learning over delivery of content. By focusing on
the outcomes of students’ classroom experiences, SLOA reinforces a shift towards the more
active, learner-centered approaches to teaching that produce more learning for students. There is
a limited body of evidence that showcases this relationship; a few studies in higher education
demonstrate that classroom assessment gives faculty effective feedback on elements of their
teaching practice that both facilitate and hinder student learning (Angelo & Cross, 1993;
Steadman, 1998). And a handful of studies demonstrate that SLOA can lead to improved student
learning (Francis & Steven, 2003; Kandlbinder, 2015; Volkwein, Lattuca, Harper, & Domingo,
2006). However, given the abundance of attention that SLOA has received in American higher
education over the last 30 years or so and the implicit assumption that it leads to improvement,
there is remarkably little empirical evidence linking it to improved teaching and learning. The
studies that do examine the effects or outcomes of assessment in practice at American
universities tend to be descriptive, atheoretical, single-institution case studies (see Kezar, 2013a;
Provezis, 2010; Cogswell, 2016). In the next section of this chapter, I explore a theoretical
approach to this topic, using systems theory as an overarching framework.
Theoretical Framework
Concepts from the organizational change literature can help understand the ways in
which assessment shapes and is shaped by various aspects of the teaching system. Specifically, I
draw on Ann Austin’s (2011) framework examining STEM teaching improvement through
systems theory to ground this study. This framework, which I discuss in more detail below,
describes teaching in higher education as a system composed of multiple levels and levers that
can influence the change process. In contrast to the multitude of research on higher education
teaching and assessment that just examines one level or aspect (e.g. teacher beliefs or faculty
buy-in to assessment), a systems approach examines the various levels and their
interrelationships in tandem with one another, as they all influence and are influenced by each
other. By examining how assessment shapes the teaching system at various levels, we can
develop a more thorough understanding of this relationship rather than a simplistic one focused
on one level.
While systems theory identifies where we should study various factors that may influence
the relationship between assessment and teaching, it does not explain why or how assessment
might influence teaching at each level. Thus, I will also bring in other theories at each level of
the system as explanatory mechanisms to explicate why assessment might affect teaching or why
various aspects of the system may hinder or facilitate this relationship. In this section, I first
briefly describe systems theory in more detail before delving more deeply into the various
aspects of the teaching and learning system that both affect and are affected by assessment.
Systems Theory
A system is “an organized whole that has two or more interdependent parts (or
subsystems) and is separated from its environment by a boundary” (Birnbaum, 1988, p. 30).
Colleges and universities are complex systems with multiple subsystems that interact but are
often loosely coupled with one another (Austin, 1996; Birnbaum, 1988; Cohen & March, 1991;
Weick, 1976). A systems theory approach to examining assessment and teaching improvement
in higher education takes these various subsystems or levels into account, rather than focusing
solely on an individual faculty member’s beliefs or behaviors. This approach fits with a growing
body of work in K-12 education that focuses on “teaching [as] a system, [where] the teacher is
only one component of the system” (Hiebert & Stigler, 2017, p. 170). Changing merely one
component of the system (i.e. the behaviors or beliefs of individual faculty members) is unlikely
to lead to broader change without attention to changes to other components. Yet, because of
loose coupling within institutions of higher education, a change to one component of the system
will not automatically result in a subsequent change to another component (Birnbaum, 1988).
Thus, organizational change in higher education often works best when change agents take both
a bottom-up and top-down approach, targeting multiple actors and components of the system
concurrently (Kezar, 2013a, b).
The teaching system in higher education is made up of such elements as faculty beliefs
and practices, doctoral and disciplinary socialization around teaching, incentives, promotion and
tenure structures, training and professional development, and disciplinary, departmental, and
institutional cultures around teaching. Austin (2011) conceptualizes the various components of
the teaching system as occurring at different levels: the individual level, the departmental level,
the institutional level, and the external level. Multiple factors at these levels “simultaneously
influence faculty members’ choices about their teaching practice” (Austin, 2011, p. 3).
As described above, while systems theory identifies where we should study various
aspects of the relationship between assessment and teaching, it does not explain why or how
assessment might influence teaching at each level. As each level and its elements differ, there are
likely different theoretical mechanisms operating at each level which explain the ways in which
assessment may influence teaching. Thus, examining the relationship between SLOA and
teaching must necessarily involve attention to various elements of the teaching system and their
explanatory mechanisms in tandem. In the following sections, I present an adapted version of
Austin’s (2011) model; I describe various components within each level that may shape the
relationship between SLOA and teaching (see Table 2); and I overlay theoretical mechanisms
onto each level that explain how and why the relationship between assessment and teaching may
operate within that space.
Table 2: Levels of the Teaching System and Levers that Influence the Relationship between
Assessment and Teaching

Individual: prior educational experiences and beliefs about teaching and assessment; socialization (disciplinary/doctoral) around teaching and assessment; career stage; appointment type; motivation

Departmental: culture (value placed on teaching, value on assessment, departmental norms); disciplinary cultures (norms and values around teaching and assessment); leadership related to assessment and teaching (both chair and faculty leadership); policies and practices (rewards and incentives around teaching and assessment use, curriculum, teaching assignments and course scheduling); professional development around teaching and assessment use

Institutional: institutional structures around assessment and teaching; institution-level policies around assessment and teaching (rewards, incentives, P&T); institutional cultures in support of assessment and teaching improvement

External: accreditation; state-level accountability policies
Individual level. Various components at the individual level can explain the potential
relationship between SLOA and teaching. First, faculty members’ doctoral and disciplinary
socialization affect their attitudes towards teaching and the extent to which SLOA represents a
potentially helpful tool for gauging teaching effectiveness. During doctoral education, students
apprentice with more senior scholars in the field and, in the process, learn about the implicit and
explicit norms and values of their discipline and the academic profession as a whole (Austin,
1994; 2002; Becher & Trowler, 2001). These norms and values often prioritize research over
teaching and gradually push students to deemphasize their teaching practice and any related
activities (i.e. assessment) that take time away from research. Additionally, pedagogical
strategies tend to vary by discipline, as they are aligned with curricular goals and disciplinary
content (Austin, 2011). These experiences all influence faculty members’ current teaching
approaches, as well as their receptivity to assessment’s potential to help them improve their
practice.
Second, career stage influences faculty members’ willingness to engage thoughtfully
with their teaching practice and perhaps also to engage in SLOA. Newer faculty members on the
tenure track are frequently overwhelmed with the demands of doing enough research and
publishing enough to earn tenure, leaving them with little time and energy to dedicate to teaching
(Austin, Sorcinelli, & McDaniels, 2007). These faculty members may view SLOA as a
distraction, as it could take time away from the work they need to earn tenure. Post-tenure
faculty may thus be more receptive to SLOA, and its influence on teaching may be more
pronounced for these more senior faculty. Conversely, newer faculty may have received more
pedagogical training than more senior faculty, given the wide variety of teaching initiatives for
future faculty described above. It is thus possible that junior faculty may have more of an interest
in teaching, learning, and assessment, and their motivation to engage in this work could
outweigh any sense of overwhelm they might feel.
Relatedly, appointment type may influence the extent to which SLOA shapes teaching.
The growing ranks of non-tenure-track faculty (NTTF), who now compose nearly 70% of
instructional faculty in the United States, have taken on a massive share of undergraduate
teaching responsibilities (National Center for Education Statistics, 2016). Unlike tenured or
tenure-track faculty, NTTF typically focus primarily on teaching and may thus be more
interested in SLOA as a helpful tool to improve their teaching. However, at many institutions
NTTF are evaluated based on student ratings rather than assessment of student learning (Langen,
2011). Thus, while it is likely that appointment type influences the relationship between SLOA
and teaching, it is also likely that departmental or institutional policies play a role in the extent to
which NTTF are able to engage with SLOA in ways that might improve their teaching.
Finally, an individual faculty member’s experiences (socialization, career stage, and
appointment type) will shape his beliefs about teaching and assessment and influence the degree
to which he is receptive to assessment processes and the extent to which information about
student learning may affect his teaching practice. Most faculty do not receive formal instruction
on how to teach and typically have little knowledge of pedagogy or learning theory (Austin,
2002; Golde & Dore, 2001). Rather, they tend to model their teaching after their own educational
experiences, “either emulating what they remember as effective teaching approaches or trying to
avoid doing what they found less helpful” (Austin, 2011, p. 5). Further, their often unconscious
beliefs or conceptions about what constitutes effective teaching shape their own teaching
practices (Gow & Kember, 1993; Johnston, 1996; Kane, Sandretto, & Heath, 2002; Kember &
Gow, 1994; Samuelowicz & Bain, 2001; Trigwell & Prosser, 1996; Trigwell & Prosser, 2004).
These beliefs generally range from a view of teaching as teacher-centered, content-
focused, and focused on knowledge transmission, to a view of teaching as student-centered,
learning-focused, and facilitative (Eley, 2006; Gow & Kember, 1993; Kember, 1997; Kember &
Gow, 1994; Postareff, Lindblom-Ylänne, & Nevgi, 2008; Samuelowicz & Bain, 2001). Those
faculty whose conception of teaching is more student-focused tend to teach in more active,
student-centered ways, whereas those who conceive of teaching as more teacher-centered tend to
use more passive, lecture-based approaches (Kember, 1997; Trigwell, Prosser, & Waterhouse,
1999; Gibbs & Coffey, 2004). Faculty members’ beliefs about teaching are not fixed; rather, they
can change in ways that are linked to improved student performance (Ho, Watkins, & Kelly,
2001). I posit that the growing prevalence of SLOA and increasing practice of using SLOA
results to drive improvement could shift faculty conceptions of teaching to a more student-
centered and facilitative view, in turn impacting their teaching practice.
To understand the ways in which faculty beliefs may change as a result of their
engagement with SLOA, it is instructive to examine theories of teacher belief change from the
educational psychology literature. Specifically, Gregoire’s (2003) Cognitive-Affective Model of
Conceptual Change (CAMCC) explains how teachers who are introduced to new information
about their teaching practice may change their beliefs as a result. For example, if a reform
message about a new teaching practice is perceived as threatening to a teacher’s “core beliefs
about herself or her teaching,” implicitly indicating that the way she has been teaching is
ineffective, she may feel anxious or fearful (p. 163). Negative emotions are associated with
higher levels of engagement with the message, which can lead to more effortful processing and
conceptual change. If another teacher perceives the new teaching practice as congruent with his
existing practice, he may feel more positively about it but engage with it less deeply, process it
more superficially, and not change his beliefs about teaching (Gregoire, 2003).
SLOA could facilitate belief change among faculty if assessment results provide new or
discrepant information about student learning to faculty members; for example, if a professor
assesses student learning outcomes for a course that she thought she taught effectively but found
that learning outcomes were worse than expected, these results could provoke a negative emotion
or dissatisfaction with her teaching or threaten her identity as a teacher (Gregoire, 2003).
Depending on the strength, coherence, and commitment of her existing conceptions of teaching,
she may question elements of her teaching and subsequently change her beliefs. By focusing on
the learning outcomes of students’ classroom experiences, I suggest that SLOA will encourage a
shift towards a more student-centered conception of teaching among faculty who engage with it.
It is also likely that faculty members’ prior beliefs about assessment will interact with
their existing beliefs about teaching when faced with this new information. Prior beliefs about
assessment may influence faculty motivation (Blackburn & Lawrence, 1995): is a particular
faculty member motivated to use his assessment results to change his practice? Do they, in fact,
provoke dissatisfaction? Does he feel they are personally relevant, or was the assessment forced
down on him from his department chair or dean? The answers to these questions will all
influence whether he processes his SLOA information deeply or superficially and whether it
starts to shift his beliefs about teaching.
Various factors at the individual level all shape faculty beliefs and teaching practices and
the extent to which SLOA may change those beliefs and practices. However, these factors alone
only provide a partial explanation for how SLOA could shape teaching. Faculty do not perform
their work in isolation, and their departmental context is a crucial influence on both their
approach to teaching and their engagement with assessment.
Departmental level. Dill (2003) suggests that the department is a critical unit of analysis
when examining issues of “academic quality” (p. 7). The department is the basic unit of
academic work at colleges and universities where many decisions about curriculum, instruction,
and assessment are made (Becher & Kogan, 1992). It is thus a crucial level in the teaching
system and one in which many elements could shape assessment’s ability to influence teaching.
First, departmental cultures have a major influence on the teaching and learning environment. A
department’s culture consists of the values, beliefs, and norms that its members hold and can be
expressed both explicitly through policies or verbal statements and implicitly or tacitly through
behaviors or mindsets (Kezar, 2013a; Schein, 1985). The values, beliefs, and norms around
teaching and assessment in a department will significantly influence the ways in which
assessment and teaching are related and the potential for assessment to improve teaching. For
example, in departments where teaching and learning is valued, faculty may dedicate more time
to planning for and reflecting on their teaching practice than in departments where research is
more valued. Faculty may be more willing to experiment with assessment in departments that
have supportive teaching cultures. Additionally, the norms, values, and beliefs around
assessment in a department will also shape the extent to which it can impact teaching. Trust is a
key issue here; faculty must trust that any assessment data gathered will not be misused for
accountability purposes without their permission or to penalize them (Banta & Palomba, 2014;
Magruder, McManis, & Young, 1997; Peterson, Einarson, Trice, & Nichols, 1997).
Relatedly, disciplinary cultures are also enacted at the departmental level, as departments
are organized around academic disciplines. The extent to which a discipline values teaching and
assessment often plays out at the departmental level. For example, some disciplines such as physics
have devoted increasing attention to research on teaching over the last several decades;
departments that support this type of discipline-based educational research (DBER) may prove
more hospitable to faculty interested in experimenting with assessment to improve their teaching
practice (Austin, 2011).
Leadership is another key element that influences the relationship between assessment
and teaching at the departmental level. Department chairs have what Austin (2011) calls a
"'linking pin' role, connecting institutional priorities and faculty work, translating messages
from senior institutional leaders, and interpreting questions, issues, and concerns expressed by
faculty members” (p. 9). The extent to which departmental leaders emphasize teaching and
teaching improvement will influence the extent to which faculty feel comfortable dedicating their
time and energy to teaching (Bensimon, Ward, & Sanders, 2000; Chu, 2006; Leaming, 1998).
Departmental leaders who signal their valuation and prioritization of student learning and
measurement of student learning are more likely to foster environments in which assessment can
contribute to teaching improvement. In addition to department chairs, faculty leadership is
critical for successful implementation of assessment and for its potential to influence teaching
(Hadden & Davies, 2002; Peterson & Augustine, 2000). If respected faculty within the
department engage with assessment to improve their teaching and demonstrate their perception
of its value, other faculty may be more likely to do the same (Banta & Palomba, 2014).
Policies and practices at the departmental level also influence the relationship between
assessment and teaching. For example, faculty are initially evaluated for promotion and tenure at
the departmental level (Austin, 2011). The extent to which teaching quality and student learning
outcomes factor into these promotion and tenure decisions is greatly influenced by what happens
at the department level. Additionally, many assessment policies are set at the departmental level.
Department chairs and faculty can often choose the type of assessment they will emphasize, the
format and frequency of reporting assessment results, and the ways in which they engage
individually and collectively with assessment data (Nichols & Nichols, 2000). Often, each
department has its own assessment coordinator, who can potentially play a major role in the
ways that assessment is used and linked to teaching at the departmental level. Curricular
decisions are also made at the departmental level (Dill, 2003). The extent to which syllabi,
courses, and programs of study are oriented around student learning outcomes will influence the
prominence and value of assessment at the departmental level and its relationship to teaching.
Finally, professional development and support is critical both for teaching improvement
and for successful use of assessment (Austin, 2011). This type of development and support often
happens at the departmental level. In departments with a strong value or commitment to
professional development around assessment and teaching, faculty are more likely to feel the
confidence and support they need to experiment with assessment and to see its potential for
helping them improve their teaching. Further, lack of professional development is a key reason that
faculty do not engage more extensively with assessment (Hutchings, 2010). As most faculty are
not trained in assessment during their doctoral education, professional development is a critical
means for introducing faculty to assessment and its potential value for improving instruction.
Several theoretical mechanisms could be operating at the departmental level and
influencing the relationship between teaching and assessment. First, professionalization theories
explain the extent to which disciplinary cultures and professional identity influence the
department’s approach to assessment and teaching (Austin, 1994, 1996; Sullivan, 2004).
Professionalization theories posit that “due to the unique definition of the professions, the
processes and phenomenon related to the work of professionals will take on a distinctive
character that reflects these professional ideals and characteristics” (Kezar & Lester, 2009, p.
720). The academic profession is even more distinctive than other professions, in that faculty
members are influenced by both their disciplinary or professional ideals and their employer
institution (Bruffee, 1999; Hart, 2008). Faculty often identify more with their discipline than with
their home institution, as professionals who go through long periods of training and develop a
significant body of expertise in a particular discipline (Gerber, 2014). Professional norms and
values around teaching and assessment will likely play a significant role in the extent to which
faculty engage in assessment and the ways in which they understand its relationship to
instruction, as noted above.
However, faculty members cannot completely ignore the pressures and structures of their
employer institution that may either conflict or comport with their professional ideals, and they
do not always have the same degree of professional autonomy as lawyers or doctors, for
example. Institutional theory (IT) explains why certain factors at the departmental level that are
not necessarily associated with the discipline, such as policies and practices, leadership, and
professional development, may compete with professional norms to influence the relationship
between assessment and teaching. IT attempts to explain how institutions develop a distinctive
character and become infused with value over time (DiMaggio & Powell, 1991; Selznick, 1947,
1957). These values (for example, a value on producing research) get embedded into norms and
structures at both the departmental and institutional levels (Scott, 2014). If research production is
an institutional value, that could play out in departmental policies on workload (i.e. teaching time
vs. research time), through professional development (i.e. focused on grant-writing rather than
teaching), and through leadership behaviors (i.e. signaling that teaching is less important). These
values, norms, and structures may either constrain or promote faculty members’ ability to enact
various aspects of their professional norms and values, as noted above; they will certainly
influence the extent to which faculty engage with assessment and the relationship that
assessment has with their teaching practice.
The various contextual factors at the department level all influence the extent to which
faculty can assess in meaningful ways and use information about student learning to guide their
teaching practice. If these factors are not supportive of either instructional improvement or
assessment, it is unlikely that faculty will see a link between assessment and teaching, see the
value in devoting significant time to assessing and improving their instruction, or change their
beliefs about teaching and assessment. Additionally, departmental context is critical for
supporting change at scale; as Fairweather (2008) and Austin (2011) emphasize, pushing for
individual faculty members to change their beliefs and practices one by one will never lead to
meaningful engagement with assessment or instructional improvement. Contextual supports at
the departmental level would allow greater numbers of faculty to see a link between assessment
and teaching and potentially make changes to their practice.
Institutional level. Norms, values, and beliefs at the institutional level also influence the
relationship between SLOA and teaching. Institutional theory plays out to an even greater extent
at this level and explains how and why various elements at the institutional level may influence
assessment’s ability to improve teaching, so I discuss it throughout this section. As at the
departmental level, institutional norms and values also get embedded into institutional policies
and structures and guide or constrain the actions of individuals operating within that institution
(Scott, 2014). Values are “conceptions of the preferred or the desirable together with the
construction of standards to which existing structures or behaviors can be compared and
assessed” and norms “specify how things should be done…[and] define legitimate means to
pursue valued ends” (Scott, 2014, p. 64). Norms and values are often embodied in institutional
mission and culture, which can influence the extent to which assessment shapes teaching. While
institutions of all types have teaching as a part of their mission, it is more central at baccalaureate
universities, liberal arts colleges, and community colleges than at research universities (Gappa,
Austin, & Trice, 2007). Faculty members at these institutional types may thus perceive that the
institution values their teaching more than faculty at research universities. Norms around the
amount of time and effort that faculty dedicate to teaching versus research can also play out at
this level in addition to at the departmental level.
The norms and values represented in the institutional mission and culture become
formalized in institution-level policies (Scott, 2014). These policies also shape the teaching and
learning environment. For example, while the first steps towards promotion and tenure are taken
at the departmental level, policies that establish criteria for promotion and tenure are generally
set at the institutional level and tend to align with institutional mission (Austin, 2011). Thus,
while a department’s culture and leadership may value teaching and the use of assessment to
both prove student learning and improve teaching, institutional policies prioritizing research in
promotion decisions could hinder departments’ abilities to act on these values. Conversely, at
institutions that reward and incentivize quality teaching and teaching improvement in some way,
faculty members may direct more attention to their instructional practice and the ways in which
SLOA can inform that practice.
Additionally, institutional norms and values get embedded into institutional structures
that both reflect institutional priorities and guide individual action (DiMaggio & Powell, 1991;
Scott, 2014). For example, institutions generally have structures around assessment that may or
may not overlap with structures at the departmental level. While some institutions may have a
decentralized assessment infrastructure that designates an assessment coordinator or chair in
each department, others have a dedicated assessment office that supports
SLOA institution-wide (Kuh, Jankowski, Ikenberry, & Kinzie, 2014; Peterson, Einarson, Trice,
& Nichols, 1997). In some places, this office may be stand-alone; it may encompass assessment
in both student affairs and academic affairs; or it may be embedded within a Center for Teaching
and Learning or an office of institutional effectiveness or accreditation/compliance. The office
could consist of one analyst, one director, or a combination of several faculty and staff at varying
levels of seniority. These varying structures could have differing influences on the relationship
between SLOA and teaching. If an assessment office is tightly linked to a Center for Teaching
and Learning, for example, it is likely that assessment would play a more meaningful role in
shaping teaching practice. Conversely, if assessment is housed within an accreditation or
compliance office, it is more likely that the focus would be on assessment for accountability and
the link with teaching would likely be weaker. If there is no institution-wide assessment office,
that may be a signal that the campus does not place a strong value on assessment. Relatedly,
existing institutional structures around teaching, such as a robust Center for Teaching and
Learning, will shape the teaching environment regardless of whether they are linked with
assessment infrastructures.
These elements at the institutional level all affect the ways in which assessment is
practiced and its ability to shape teaching. Factors at the institutional level interact with factors at
other levels to shape the teaching system within a particular department or on a particular
campus. It is thus crucial to be mindful of institution-level influences when studying the
relationship between assessment and teaching.
External level. Colleges and universities are open systems, meaning they are influenced
by various factors in their external environment (Birnbaum, 1988). In terms of assessment and
teaching, the most prominent external factors are accreditation and state-level accountability
policies. As described in earlier sections of this paper, accreditation plays a major role in how
faculty and institutional leaders perceive assessment. At institutions where leaders and faculty
perceive that accreditation is the primary motivation for engaging in assessment, SLOA is more
likely to be used for accountability purposes and less likely to directly influence teaching (Kuh,
Jankowski, Ikenberry, & Kinzie, 2014). Similarly, at institutions with state-level accountability
policies, assessment may also be perceived as primarily accountability-focused rather than
improvement-focused. While all institutions will contend with external accountability pressures
to some extent, they may be more or less influential depending on the institution and on its
political context.
Institutional theory also explains how these external factors operate to influence the
relationship between assessment and teaching improvement. At this level, the IT concept of
field-level actors is most relevant. Some institutional theorists contend that significant pressures
for either change or stability come from the field level, or beyond individual institutions (Scott,
2014). There are two types of fields: organizational fields and societal fields. An organizational
field consists of “those organizations that, in the aggregate, constitute a recognized area of
institutional life: key suppliers, resource and product consumers, regulatory agencies, and other
organizations that produce similar services and products” (DiMaggio & Powell, 1983, p. 148). A
societal field is broader and further removed from organizational life and generally consists of
such forces as the market or government (Scott, 2014). In American higher education, the
organizational field consists of, among others, individual students and faculty members, the
population of colleges and universities, accreditors, disciplinary societies, funders (e.g. the
National Science Foundation or the Lumina Foundation), national associations such as AAC&U,
and even technology partners that operate learning management systems, for example (Scott,
2014). It is these field-level actors that exert various pressures on other organizations within the
field (colleges and universities) and drive either stability or change. The extent to which field-
level actors push assessment as either a tool for accountability or a mechanism for instructional
improvement will affect institutional leaders' perceptions and, in turn, institutional policies and
faculty response.
Types of Outcomes Assessment and Measures
In addition to the various levels of the teaching system, some research suggests that
different types of assessment may play different roles in faculty teaching improvement. For
example, faculty who engage in assessment that is closely tied to their daily work of teaching
generally have more positive experiences with assessment than faculty at institutions where
assessment is used for accountability purposes and is consciously kept separate from regular
classroom work (Hutchings, 2010). Sometimes called authentic or embedded assessment, this
form of classroom-based assessment uses examples of student work to provide feedback to
students and faculty on student learning and offers actionable information to help faculty
improve their teaching. These embedded assessments often use rubrics to appraise student papers
or presentations. Rubrics can measure student learning in a single class, but they can also capture
learning across courses or programs, such as the VALUE rubrics created by AAC&U (Banta &
Palomba, 2014; Palmer, 2012; Rhodes, 2008). Electronic portfolios of student work and
performance assessments are other direct measures of student learning that are more focused on
giving feedback to faculty and students than on providing a summative judgement of student or
faculty performance (Banta & Palomba, 2014). At departments or institutions that emphasize
these methods of assessment, faculty may be more receptive to assessment and may see its value
and relation to teaching more strongly. Additionally, in accordance with professionalization
theory, faculty may be more open to using assessments that they played a part in creating, or that
were created by their professional peers or other experts in their field, rather than by some
external entity.
Assessment in general is often associated with standardized tests, however. Standardized
assessments are tests in which “the questions, the scoring procedures, and the interpretation of
results are consistent and which are administered and scored in a manner allowing comparisons
to be made across individuals and groups” (Benjamin et al., 2012, p. 7). As described earlier in
this chapter, some prominent standardized tests used at the undergraduate level include the
Collegiate Assessment of Academic Proficiency (CAAP), the Collegiate Learning Assessment
(CLA), and the Proficiency Profile (formerly the Measure of Academic Proficiency and
Progress, or MAPP) (Delaney, 2015; Shavelson, 2010). All of these tests are generally used to
assess student learning outcomes at the program or institution level and offer summative
judgement on the efficacy of a program or institution; they often offer little actionable feedback
to faculty members who want to use assessment results to improve their teaching. If these
assessments are the dominant ones or only ones used at an institution or department, faculty may
not get much useful information from them and assessment may have little meaningful impact on
their teaching. Further, faculty may feel that externally-created standardized assessments conflict
with or erode their professional autonomy, which may make them more resistant to engaging
with assessment in all forms.
Thus, assessment type likely also plays a role in determining the nature of the
relationship between assessment and teaching. When examining how assessment influences
teaching at each level of the system, it is important to examine whether different assessment
types may affect this relationship in different ways.
Summary and Conclusions
Assessment has become increasingly important in higher education over the last 30 years,
but its competing purposes of accountability and improvement have made it a complex and
controversial endeavor. A variety of different instruments and methods for measuring
undergraduate student learning exist and are often affiliated with one of the assessment
paradigms. Standardized assessments are typically used for accountability purposes, while more
authentic assessments such as rubrics and portfolios are more typically associated with efforts to
improve. Despite a growing emphasis on teaching improvement in higher education generally—
and an increasing focus on assessment’s potential for improving teaching—remarkably little is
known about whether or how assessment actually leads to teaching improvement. In order to
study this issue, it is helpful to put teaching into the broader context of the higher education
system; teaching improvement is about much more than just faculty members’ actions in the
classroom. A variety of factors at the individual, departmental, institutional, and external levels
all influence the process of teaching improvement, as well as the potential ability of assessment
to contribute to that process. In the next chapter, I describe how I have operationalized these
ideas into a study of this complex phenomenon among dozens of faculty in six departments at two
research universities.
Chapter 3: Study Design and Methods
To answer my research questions and examine the relationship between assessment and
teaching at various levels of the system, I conducted an embedded multiple-case study of
departments within research universities (Yin, 2014). Given that systems theory embraces
complexity and suggests that numerous interrelated levers and levels shape assessment’s ability
to improve teaching, a multi-site study helped me to gain a full appreciation of various elements
at the individual, departmental, institutional, and external levels that influence this relationship.
By examining six departments within two universities, I maximized my ability to capture the
dynamics and interactions both within and between levels of the system.
Overview of Case Study
Case study research is “an intensive description and analysis of a phenomenon or social
unit such as an individual, group, institution, or community” (Merriam, 1998, p. 8). Case studies
investigate phenomena within their real-world or naturalistic settings. Case study is thus an
appropriate methodology for studying something like assessment, as context is crucial for
understanding how faculty engage with assessment and how it relates to their teaching. A critical
characteristic of a case is that it is a bounded system; that is, cases are delimited or contained in
some way, and their boundaries can be clearly defined (Merriam, 1998; Stake, 2005). By
examining the relationship between assessment and teaching within six departments at two institutions, which
I describe in greater detail below, I had multiple cases that were clearly bounded by both
academic departments and their larger university communities. I was thus able to compare
findings across these cases, looking for similarities and differences and testing or building theory
in an iterative process (Eisenhardt, 1989). Additionally, multiple sites of study strengthened the
robustness of my conclusions and allowed me to test various theoretical explanations against one
another in a way that a single-site study could not (Yin, 2014). Using what Yin (2014) calls
replication logic, I created selection parameters for my cases that allowed me to see how the
relationship between assessment and teaching may be similar across departments (literal
replication) or different (theoretical replication) based on disciplinary and institutional
differences. It is possible that the relationship between assessment and teaching looks different at
different institutions or in different departments, and that there are multiple ways in which
assessment could shape teaching and learning environments. By conducting a comparative case
study, I widened my analytical lens and have attempted to build a fuller explanation for this
phenomenon.
Site Selection Criteria
In order to investigate the relationship between assessment and teaching, I examined
three departments at each of two research universities in the United States, for a total of six
departments. These sites of study are appropriate for several reasons. First, departmental and
institutional boundaries clearly delimited each case, limiting the number of individuals involved
in the study and allowing me to build an in-depth understanding of the assessment processes and
teaching and learning environments of each case. Second, by studying multiple departments
within the same university, I was able to tease out departmental or disciplinary differences that
may have influenced the relationship between assessment and teaching improvement in
divergent ways for faculty in different departments. Third, by examining departments across
multiple research universities, I was also able to compare the ways in which institutional
contexts and assessment type interacted to shape changes to the teaching and learning
environment (Stake, 2005). Additionally, Eisenhardt (1989) notes that “while there is no ideal
number of cases, a number between 4 and 10 usually works well” for building theory that has
some complexity, while also maintaining a manageable amount of data for analytic purposes (p.
545).
For this study, I decided to focus on one STEM department, one social sciences
department, and one humanities department at each institution. A comparative case study
provides an opportunity to compare STEM and non-STEM departments and examine similarities
and differences by discipline and field (Biglan, 1973). Faculty in STEM tend to take different
approaches to teaching than their counterparts in the humanities or social sciences (Lindblom-
Ylanne, Trigwell, Nevgi, & Ashwin, 2006). STEM and non-STEM faculty may also have
different levels of receptivity to assessment based on their disciplinary training and socialization.
Further, different disciplines have differential track records of engagement with assessment,
which played out differently within each department (N. Jankowski, personal communication,
August 9, 2017).
As mentioned above, these departments are situated in research universities that have
been identified as having high levels of assessment activity by national assessment experts. In
order to understand the link between assessment and teaching, I needed to study places that are
actually engaged in assessment. If I had selected a site with low levels of assessment activity, I
would be much less likely to see evidence of a relationship with teaching of any kind, much less
learn about the specific dynamics of that relationship. My selection of research universities
stems from the dearth of research on how assessment plays out within these contexts. As Kuh and colleagues (2014) found in their survey of assessment activity across the United States,
research universities are much less likely to be engaged in meaningful assessment activity than
other types of postsecondary institutions. Additionally, the teaching taking place in
undergraduate classrooms at research universities has been characterized as subpar overall, despite
the fact that research universities award one-third of all baccalaureate degrees (Boyer
Commission on Educating Undergraduates in the Research University, 1998; Kuh, 2004).
Teaching and assessment are thus both critical issues at research universities. By studying
departments at research universities that have high levels of assessment activity, I hoped to learn
valuable lessons that could inform other research universities as they work to implement
assessment and improve teaching.
Site Selection and Sample Description
I will briefly describe the two research universities and departments I selected for my
study in this section (more in-depth case portraits of the sites are in Chapters 4 and 5). Both
universities fall within the purview of the same regional accreditor, which I refer to throughout
the rest of this study as Regional Accrediting Commission (RAC).
Valley University. Valley University (VU) is a public university with a Carnegie
classification of doctoral university with highest research activity and over 20,000 students. VU
has an assessment director who oversees learning assessments on campus. In addition to
assessment, the director is also responsible for a number of other research and evaluation
projects across campus, including student evaluations of teaching (SETs). In terms of assessment
work, the assessment office primarily supports program-level assessment and works with
designated assessment coordinators in each academic department. These assessment coordinators
work with the director and their departmental colleagues to create student learning outcomes
(SLOs) for their majors and programs and assessment plans to measure these SLOs. Each year,
the departmental assessment coordinator is responsible for submitting an assessment report to the
assessment office with evidence of the department’s progress on their assessment plan. The
director reviews each report and sends it back to the assessment coordinators with feedback and
a grade that is meant to indicate the quality of their report. Other than this office, there are no
other formal structures at Valley University with oversight for assessment. No faculty
committees with assessment oversight exist, other than a now-defunct ad hoc committee that was
formed several years ago in response to an accreditation report.
Within VU, I selected three departments: a social science department (“Social Science”), a
humanities (“Humanities”) department, and a STEM (“STEM”) department. These three
departments were selected with the help of my primary contact at VU, the assessment director.
He connected me with representatives from 2-3 departments within each disciplinary category
(social science, humanities, and STEM). I reached out to each of these contacts. In humanities
and STEM, the first departments we contacted were willing to participate and became my cases.
In the social sciences, I initially attempted to work with another department, but only one faculty
member agreed to be interviewed. A second department declined to participate before I was able
to recruit the final social science department. All departments that are included in the sample
were recommended by the assessment director because they had undertaken some sort of
meaningful assessment project; they were among the top performers in terms of assessment, in
his judgment.
University of the Pines. University of the Pines (UP) is also a public research university
with the highest Carnegie classification. UP also has around 20,000 students. At UP, there is an
assessment office with two full-time employees who oversee just learning assessment. These
employees include a director of assessment and an assessment specialist. Both of these positions
are categorized as tenure-track faculty roles by the University. Like at VU, the assessment office
at UP works with departmental assessment coordinators to create SLOs and assessment plans.
Assessment plans are submitted to the office and returned with feedback, though not a grade like
at VU. After several years of requiring yearly assessment reports from departments, the office
recently moved to a policy requiring reports only every other year, in order to focus more on
institution-level assessments. UP established Institutional Learning Outcomes (ILOs) about five
years ago and has since been taking steps to create assessments to measure them. In addition to
the assessment office, there is a permanent faculty committee that has oversight for assessment,
as well as a newer ILO-specific committee.
At UP, I also selected departments with the guidance of the director of assessment. We
similarly reached out to 2-3 departments in each disciplinary category and ultimately selected
one social science (“Social Science”), one humanities (“Humanities”), and one STEM
(“STEM”). One other social science department declined to participate before the final
department ultimately agreed. Like at VU, the departments at UP were selected as examples of
departments that are engaged in meaningful assessment activity, where I was most likely to gain
an understanding of the ways in which assessment influences and is influenced by teaching.
Data Collection
In order to fully describe these cases, I employed multiple methods of data collection, as
is common in case study research (Stake, 2005; Merriam, 1998). Specifically, I conducted
document analysis to learn more about the context of assessment at both the departmental and
university levels; a survey of faculty teaching beliefs and practices; and semi-structured
interviews to learn about faculty experiences with teaching, learning, and assessment, as well as
departmental cultures, policies, and structures.
Document Analysis
First, I searched for and analyzed departmental and university policies around assessment
for each site. In addition to providing a foundation for understanding policies and practices at
each site, this exercise served as a form of triangulation (Yin, 2014); perhaps SLOA in practice
looks very different from what is indicated in the official statements, for example. Additionally, I
reviewed all aspects of each university’s assessment website. These websites were different but
each had an overview of the assessment process at each campus, as well as at least a few
examples of assessment reports. In cases where I could not find the assessment reports for the
departments in my study on the university website, I was able to procure them from a
departmental contact. These reports provided a snapshot into each department’s assessment
work. UP’s website also had a large amount of material on prior workshops that the assessment
office had held, so I reviewed all of those, as well. These documents gave me a broad sense of
the assessment process at each campus. I also examined other university and departmental
policies and practices around teaching and assessment, including promotion and tenure
guidelines, teaching awards, teaching evaluation processes, and teaching assignment guidelines.
These policies provided a glimpse into the culture around teaching at each site. In addition, I
analyzed program reviews of the departments in my study to determine the extent to which they
included indicators of student learning. At UP I was also able to get reports and minutes from
assessment committee meetings, which I analyzed to determine oversight activity at the
institutional level. I found accreditation reports, both formal 10-year reviews and interim
reports, which gave me a sense of how each institution was responding to RAC’s assessment
requirements.
Survey
Next, I used a survey instrument called the Approaches to Teaching Inventory (ATI) to
examine faculty beliefs about teaching and learning (See Appendix A). The ATI provides an
opportunity for faculty to report various teaching practices they use, as well as certain forms of
reflective practice (Williams, Walter, Henderson, & Beach, 2015). In its most recent form, the ATI has two scales with 11 items each; these scales measure the degree to which faculty take an
Information Transmission-Teacher Focused (ITTF) approach to teaching or a Conceptual
Change-Student Focused (CCSF) approach to teaching (Trigwell, Prosser, & Ginns, 2005).
Faculty who conceive of teaching as primarily teacher-centered and focused on information
transmission tend to score higher on the ITTF scale, while faculty who conceive of teaching as
student-centered and learning-focused tend to score higher on the CCSF scale (Postareff,
Lindblom-Ylanne, & Nevgi, 2008). Originally developed to measure the teaching approaches of physical science faculty, the ATI was subsequently revised several times by Prosser and Trigwell to make it more broadly applicable to faculty from a variety of disciplines, making it an appropriate
instrument for my multiple-case study of several departments from different disciplines
(Trigwell & Prosser, 2004; Trigwell, Prosser, & Ginns, 2005). The ATI has now been used in
numerous studies across multiple disciplinary, institutional, linguistic, and national contexts, and
its validity has been repeatedly confirmed (Trigwell, Prosser, & Waterhouse, 1999; Trigwell &
Prosser, 2004; Trigwell, Prosser, & Ginns, 2005; Prosser & Trigwell, 2006; Lindblom-Ylanne,
Trigwell, Nevgi, & Ashwin, 2006; Stes, Coertjens, & Van Petegem, 2010; Stes & Van Petegem,
2014).
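Although ATI scores did not ultimately figure in my analyses (as described below), a brief illustration of how such scale scores are typically computed may be useful to readers unfamiliar with the instrument. The following is a minimal, hypothetical sketch in Python; it assumes the standard 22-item instrument with a five-point response scale and a simple mean-per-scale scoring approach, and the item-to-scale groupings shown are placeholders rather than the published ATI scoring key.

    # Minimal, hypothetical sketch of ATI scale scoring (illustrative only).
    # Assumes 22 items rated from 1 ("only rarely") to 5 ("almost always"),
    # with 11 items per scale; the item groupings below are placeholders,
    # not the published ATI scoring key.
    from statistics import mean

    ITTF_ITEMS = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21]   # placeholder grouping
    CCSF_ITEMS = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22]  # placeholder grouping

    def score_ati(responses):
        """Return (ITTF, CCSF) scale means for one respondent.

        `responses` maps item number (1-22) to a rating from 1 to 5.
        """
        ittf = mean(responses[i] for i in ITTF_ITEMS)
        ccsf = mean(responses[i] for i in CCSF_ITEMS)
        return ittf, ccsf

    # Example: a respondent who endorses the student-focused items more strongly.
    example = {i: (2 if i in ITTF_ITEMS else 4) for i in range(1, 23)}
    print(score_ati(example))  # -> (2, 4)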
I administered the ATI through an online survey to faculty members in all departments in
the study at the beginning of the study. At VU, the total number of faculty members sent the
survey was 97, and at UP it was 66. The survey was administered using the Qualtrics software platform, and weekly follow-up reminders were sent via email to each potential respondent for six
weeks. In addition to the 22-item ATI, I appended several questions to the survey designed to
assess various characteristics of participants that could influence their engagement with teaching
and assessment (see Appendix B). These questions included such measures as faculty rank,
teaching experience, experience with professional development around teaching and learning,
educational background, and demographic characteristics. I had hoped that this information
would allow me to analyze across- or within-group differences on the ATI.
Unfortunately, I got very low response rates on the ATI and ultimately decided to
exclude this data from analysis for the study. At VU, only 7 individuals responded to the survey
(6 of whom were from the STEM department), representing a response rate of 7%. This response
rate was simply too low to provide trustworthy data, and the sample was not representative of the departments in my study. While the response rate at UP was better (30%, with 20
individuals responding), given the low response rate at VU there was no way to make
meaningful cross-institutional comparisons. I noted the difference in institutional response rates and later reflected on it in light of my other qualitative findings. It turned out that the low
response rate at VU became more meaningful once I understood more about assessment and
teaching on that campus.
Interviews
Partially because of the low response rates on the ATI, the interviews I conducted
became especially important and provided a key source of data for this study. I selected a
purposeful sample of respondents to interview, based on both recommendations from key
departmental informants and on snowball sampling, in which one interviewee gave me the name
of one or several additional faculty members (Lincoln & Guba, 1985; Patton, 2002). These
interviews were semi-structured. Semi-structured interviews have been used to elicit teacher
beliefs in studies in the K-12 education literature (Spillane, Reiser, & Reimer, 2002), as well as
in studies about faculty beliefs about teaching and learning in higher education (Eley, 2006; Gow
& Kember, 1993; Kember & Gow, 1994; Kember, 1997; Light, Calkins, Luna, & Drane, 2009;
Postareff, Lindblom-Ylanne, & Nevgi, 2008; Samuelowicz & Bain, 2001). Interviews also
allowed faculty members to reflect on their beliefs about assessment and the nature of the
relationship between assessment and teaching, as well as various departmental factors that
influence this relationship. The semi-structured nature of these interviews allowed me to create
protocols with common questions informed by theory, while also leaving space for probes and
follow-up questions based on the particularities of each interviewee’s experiences (Merriam,
1998).
The interview protocols were developed using the literature on teaching systems and on
faculty beliefs about teaching and learning, with additional questions around assessment (see
Appendix C). Specifically, I asked faculty about how they define good teaching and student
learning and the different components of teaching and learning (Prosser & Trigwell, 1999; Light,
Calkins, Luna, & Drane, 2009). I also asked about their perceptions of and experiences with
assessment, and what it means for their practice (MacDonald, Williams, Lazowski, Horst, &
Barron, 2014; Ebersole, 2009). Additionally, I asked them how they thought their colleagues, departmental leadership, and the university as a whole perceived assessment (Peterson &
Augustine, 2000). I also included several questions designed to get at departmental culture,
policies, and leadership. I interviewed most faculty members once, in person, and recorded each interview; eight interviews were conducted over the phone with faculty members who could not travel to campus on the days I was able to visit. Each interview lasted
approximately one hour, though they ranged in length from around 40 minutes to almost 90
minutes. Audio from each interview was transcribed using a professional transcription service.
I also interviewed several other stakeholders on campus, including the director of the
center for teaching and learning at UP, assessment directors at each campus, and other senior-
level academic administrators. These interviews gave me additional context about various
elements at the departmental and institutional levels of the system, such as professional
development opportunities, promotion and tenure policies, incentive and reward structures,
curriculum, assessment policies, and departmental and institutional cultures (see Appendix C).
Each of these interviews also lasted approximately one hour.
My total interview sample included between 5 and 9 individual faculty in each
department, as well as 4-5 teaching and assessment staff members and administrators per
campus. The total number of interviewees at VU was 24, and at UP it was 25, for a total sample
of 49 people. This number gave me enough faculty interviews to get a sense of each department while keeping the amount of data manageable. Table 3 shows a breakdown of the sample at each site.
Table 3: Interview Sample

                                                     Departmental Interviews    Institutional Interviews
                                                     (Faculty)                  (Administrators and Staff)
Valley University, Humanities Department             9
Valley University, STEM Department                   6
Valley University, Social Science Department         5
Total, Valley University                             20                         4
University of the Pines, Humanities Department       8
University of the Pines, STEM Department             6
University of the Pines, Social Science Department   6
Total, University of the Pines                       20                         5
Total Interviews                                     40                         9
Data Analysis
Given that this study had both multiple methods of data collection and multiple levels of
analysis, data analysis was an ongoing process that began as I collected my first data (Miles &
Huberman, 1994; Hatch, 2002; Stake, 1995). By beginning to analyze data early, I quickly came
to understand whether I needed to make any corrections or additions to my data collection methods
(Miles & Huberman, 1994). For example, several months after initially disseminating the ATI, I
concluded that response rates were not high enough to allow me to use ATI results to inform my
sampling strategy. Similarly, because of the low numbers of ATI responses, I determined that I
could not rely on ATI scores as a key data point and would instead depend on qualitative data in
my analyses.
Once I began collecting qualitative data, I created both reflective and analytic memos
from my field notes after conducting document analysis, which I used to generate inductive
codes (Hatch, 2002). I also read the transcript of each interview at least twice. First, I
conducted an inductive analysis, allowing common patterns to emerge from the interview data,
as previously mentioned with my reflective notes and analytic memos (Hatch, 2002). Then, I
read through the interviews again using a typological analysis, in which I used codes generated
from my theoretical framework (Hatch, 2002). These theoretically-based codes included
elements from systems theory, as well as from the various explanatory theories I used at each
level of the system. By using both typological and inductive analyses, I both built on existing
theory and captured “ideas that don’t fit into existing organizational or theoretical categories”
which “may get lost, or never developed” without a preliminary inductive analysis (Maxwell,
2013, p. 108). I also searched for both within-case and cross-case patterns to determine
similarities and differences across settings and to see whether I could begin to generalize across
settings in any way (Eisenhardt, 1989). I created separate spreadsheets comparing themes across
departments, across disciplines, and across institutions. These approaches to analysis represent
what Maxwell (2013) terms “categorizing strategies,” or looking for similarities and differences
in the data and creating categories to distinguish between them (p. 105). A list of themes is
available in Appendix D.
Another approach to data analysis uses “connecting strategies” to take a more holistic
approach to examining data in the context of the larger text or case (Maxwell, 2013, p. 112).
These strategies look at each case as a whole unit rather than decomposing each case into
discrete themes or codes. This strategy was also important for me because I used an embedded multiple-case study design with multiple units of analysis nested within one another—
individual faculty members within departments within disciplinary categories within institutions
(Yin, 2014). In addition to the thematic analysis described above, I created individual narratives
or case studies for each department and for each institution, which ensured that I did not miss the
broader relationships or contexts that are central to my systems approach to examining this
problem (Maxwell, 2013). My findings chapters represent a hybrid of these two approaches,
which I felt best reflected the data and the complexity of my approach.
Validity and Trustworthiness
Notions of validity and trustworthiness are contested within qualitative research; some
researchers believe validity is inappropriate for qualitative research given its interpretive nature
(Ely, Anzul, Friedman, Garner, & Steinmetz, 1991), while others believe that alternative terms or
conceptions are a better fit (Lincoln & Guba, 1985). For the purposes of this study, I used
Maxwell’s (2013) conception of validity as “the correctness or credibility of a description,
conclusion, explanation, interpretation, or other sort of account” (p. 122). There are numerous
strategies a qualitative researcher could employ to strengthen the validity of her study. In order
to ensure that the interpretations I drew from my analyses were essentially correct or credible, I
focused on four strategies that both Maxwell (2013) and Creswell (2007) recommend. They are:
clarifying researcher bias and positionality, triangulation, using rich description, and member
checking.
Because in qualitative research the researcher is the instrument, it was necessary to reflect
on how my own identity, position, and prior experiences could impact the research process and
my analysis (Maxwell, 2013; Miles & Huberman, 1994). First, it was critical to explore and
reflect upon my biases and subjectivities from the outset, as they could influence the ways in
which I approached the study and the ways in which I interpreted data (Maxwell, 2013; Peshkin,
1988). For example, I previously held an interim role as an assessment director in student affairs,
and I also used assessment (both formally and informally) on a regular basis during my time as a
classroom teacher. Therefore, I have fairly positive views about assessment and its potential to
improve teaching and learning. However, I also had direct experience with large-scale
standardized assessments and the potential drawbacks of assessment used solely for
accountability purposes. In order to ensure that I fairly interpreted the data I collected in this
study, I constantly reflected on how these experiences could be obscuring conclusions that did
not fit with my past experiences.
Additionally, various aspects or characteristics of my identity may have also influenced
the ways in which participants responded to me. Glesne (2011) terms this concern positionality
and notes that “both ascribed characteristics (nationality, ancestry) and achieved characteristics
(educational level, economic level, institutional affiliation, etc.)” can influence the ways in which
participants respond to the researcher (p. 157). I needed to remain conscious of how my identity
as a younger female doctoral student studying education could affect my ability to build rapport
with or gain respect of faculty, especially the older and often male faculty at research universities
(Xu, 2008). Reflecting on these issues throughout the research process assisted me as I
conducted the study, interpreted data, and wrote up results (Glesne, 2011).
Next, as described above, I used multiple sources of data to triangulate my findings
(Creswell, 2007). For example, I asked participants whether their experiences with assessment
aligned with departmental or institutional policy statements. I also examined historical
documents to determine whether faculty participants’ descriptions of past experiences with
assessment aligned with what is in the documents.
Third, I used the multiple sources of data I collected to create rich descriptions of various
aspects of the teaching system and their relationship with assessment. My findings include
multiple direct quotes from my faculty interviews (Maxwell, 2013). These detailed descriptions
help the reader get a sense of what really occurred in the setting, as well as “determine whether
the findings can be transferred” to other situations or settings (Creswell, 2007, p. 209).
Finally, I used member checking or respondent validation to clarify any lingering
misunderstandings or confusion I had after completing data collection (Creswell, 2007; Maxwell,
2013). This process involved reaching out to the faculty I interviewed to determine whether the
preliminary interpretations I drew aligned with their intentions when discussing the issue at hand.
It is important to note that I only sought to correct misinterpretations or clarify uncertainties and
not to elicit feedback on how to present the participants in the most flattering light.
Limitations
This study had a number of limitations. First, perhaps the most significant limitation was
the low response rates to the ATI and subsequent exclusion of this data source from analysis.
While I made every effort to encourage faculty to respond, including getting department chairs
or respected faculty to send out email reminders, I simply could not get adequate responses to
make these data usable. This unfortunate fact limited my analyses at the individual level, as I
lacked an expected data source on faculty beliefs.
Another limitation was the difficulty I had in recruiting social science departments to join
the study at Valley University. As I mentioned above, I initially recruited one department, but
my contact in that department was the only person who agreed to an interview with me; he was
unable to persuade his colleagues to give up the time for an interview or, in some cases, to even
respond to my emails. After this turn of events, I worked with the assessment director to recruit
an alternative social science department that had been doing some kind of significant assessment
work. We reached out to a second department, but they declined to participate. Finally, the third
Social Science department agreed to participate. However, this department was not as far along
in their assessment work, in the opinion of the assessment director, as the other departments from
VU that I included. This introduced a dissimilarity across departments, even though I had been trying to select departments that were similarly engaged in assessment.
Finally, I did not conduct observations of faculty actually using assessment or teaching in
the classroom. Through my interviews, I was only able to collect faculty perceptions or self-
reported accounts of their experiences with teaching and assessment. Direct observation could
have strengthened my conclusions and provided more detail about how assessment shapes
teaching, especially at the classroom level.
Chapter 4: Assessment and Teaching at the Institutional Level
The two institutions in this study provide distinctly different examples of the influence of
assessment on teaching and learning environments at research universities. At University of the
Pines (UP), there were notable changes to institutional culture (language, norms, and values),
policies and structures, and curriculum as a result of assessment activity. Conversely, at Valley
University (VU), there were minimal noticeable changes at the institutional level. In this chapter,
I first describe the changes that occurred at UP, elaborating on how and why those changes
occurred. Then, I describe the lack of institution-level change at VU, along with the themes that
suggest why little change occurred. Table 4 provides an overview and comparison of the two
universities in the study.
Table 4: Overview of Changes to Teaching and Learning Environment at Institution Level

                                        University of the Pines                        Valley University
Institution-level changes to            Yes                                            No
teaching and learning environment?

Type of change to teaching and          Cultural change                                NA
learning environment                    Changes to policies and structures
                                        Changes to curriculum and teaching

Institution-level change levers         Faculty-driven process                         Administratively-driven process
or barriers                             Supportive institutional structures            Lack of supportive structures
                                        Intentional messaging around                   No messaging around
                                        assessment for improvement                     assessment for improvement
                                        Facilitative attitude towards accreditation    Antagonistic attitude towards
                                        (internal motivation for assessment)           accreditation
                                        Support and training                           Lack of support and training
                                        Leadership support                             Lack of leadership support
University of the Pines: Institutional Level
We perceive academic assessment to be the collection and use of systematic information about
student learning—and the factors that contribute to it—for the purposes of understanding and
improving our educational practices and programs. At UP, assessment occurs for many
purposes and across multiple levels of intended use, from decisions about individual learners, to
the improvement of courses and degree program curricula, to campus-wide articulation and
awareness-raising in pursuit of UP’s unique educational mission.—statement from University of
the Pines task force on assessment
As described in the previous chapter, there is significant assessment activity occurring at
the institutional level at UP. The driver of this activity is the assessment office, which has two full-time employees who hold tenure-track faculty appointments. These faculty members’ sole role is to
support and conduct assessment at UP. They support assessment within departments, programs,
colleges/schools, and across the university at large. University-level assessment includes general
education assessment, as well as assessment of the relatively newly created institutional learning
outcomes (ILOs). Examination of policy documents and reports, along with interviews with
faculty and administrators at UP, indicates that there are several types of changes to teaching and
learning environments at the institutional level as a result of assessment work. These include
cultural changes (differences in language, norms, and values), changes to institutional policies
and structures related to teaching and assessment, and changes to curriculum. Figure 1 illustrates
these institution-level changes.
Figure 1: Types of Institution-Level Change [figure depicting three types of change: culture change; changes to policies and structures; changes to curriculum and teaching]
Cultural Change
The first type of change to the teaching and learning environment associated with
assessment at UP was to institutional cultures. Institutional cultures around teaching and
assessment consist of language (words used to describe teaching and assessment), norms
(accepted ways of behaving with regards to teaching and assessment), and values (importance of
teaching and assessment). While UP is a research university and research remains highly valued
at the institutional level, there was evidence that assessment contributed to a cultural shift toward a culture in which teaching is perceived as a student-focused endeavor and assessment as a tool to determine whether students are meeting course or program learning goals. Additionally,
assessment activity on campus resulted in a greater awareness among individuals of teaching and
education as a collective, institutional endeavor rather than solely the province of individual
faculty in individual classes. These cultural shifts were evident primarily through the language
that faculty used as well as the messages institutional leaders delivered about teaching, learning,
and assessment. Figure 2 shows the relationship between changes to language, norms, and values
at UP.
Figure 2: Values, Language, and Norms at University of the Pines [figure summarizing language that is outcomes-oriented, student-focused, and learning-focused; norms of teaching as a collective endeavor and assessment for improvement; and values in which teaching and undergraduate education are more valued]
One indicator of cultural change at the institutional level was evident in the language that
faculty and administrators in the study used to talk about teaching and assessment. Participants
across the institution used a particular type of language that was student-focused, outcome-
focused, and learning-focused. This language became especially notable when compared with
that used by participants at VU, which indicated, on the whole, a more traditional faculty-driven
and content-focused approach to teaching and a compliance-focused view of assessment
(described below). For example, one faculty member remarked: “One thing I appreciate about
assessment is it puts the focus on students and student learning as essential. I think that's a lesson
that's good for all of us, that student learning is essential.” Another faculty member mentioned
that assessment “should be a tool. It's not a goal. Our most important goal should be focused on
the students learning. Anything else is secondary….the focus should be the students.” Further,
numerous faculty spoke of their courses in terms of outcomes and what they hoped students
learned from the class, both disciplinary content and more transferable skills such as critical
thinking, writing, collaboration, or ethical reasoning. Rather than focusing on what they did as
instructors, they focused on what students got out of the course. One faculty member described
this learning-oriented approach and how he sees assessment as a way to help students constantly
reflect upon and improve their learning:
Because those are important opportunities for you to reflect and also seek help. The
reason I give you so many opportunities to improve your grade because any assessment
should be formative not finalized. We are lifelong learners and a lot of the skills they
learn should be transferrable.
Some faculty and administrators at UP specifically pointed to assessment’s role in changing the
language they and their colleagues used to discuss teaching and learning. One faculty member
mentioned that her colleagues “found a new language for talking about it [teaching and learning]
or framing it,” while several faculty mentioned the “common language” that they had developed
as a result of assessment. This common language included the focus on outcomes and learning
mentioned above, as well as the language of assessment itself, including such topics as
curriculum mapping, alignment, rubrics, and various other methods of assessing student learning.
Another indicator of cultural change was a shift in norms around teaching, from teaching
as an individual effort focused solely on a particular faculty member and his or her class to
teaching as a more collective endeavor involving multiple courses, faculty, and programs. One
participant described a shift away from the “lone genius model” of the faculty teaching role, with
little intentional connection between courses or reflection on how a program’s curriculum ties
together. Participants from all three departments described this sort of shift to a more collective
orientation towards teaching; one faculty member mentioned that assessment “helped me…think
about my course on a broader scale,” while another remarked that assessment helped her
understand what other people are doing in their classes in a way that enhances the
curriculum and enhances our instruction. So if I know that my class is linking to these
other classes based…on the dialogue that comes from the assessment, which is the whole
point of this, right?
Relatedly, faculty spoke about becoming more cognizant of teaching and learning across the
campus as a whole; notably, one faculty member said:
In some ways, it's caused our faculty to become more aware of campus. Things, like I
was involved in a number of these kinds of campus teaching and assessment workshops,
but a lot of faculty are completely disconnected from the larger picture, and they're very
oriented towards the department's interests and needs, and their students. Which isn't bad,
but I think faculty are more aware of the larger picture [now].
Changing norms around teaching was also part of an intentional strategy on the part of the
assessment director, who described her efforts to:
appeal to their sense of the collective good. It's not just about you in your classroom. It's
about you as part of a degree program, as part of an institution, as part of higher
education in general, because I think for too long, we just had this sense of, "Oh, I'm here
standing in front of my class. I'm gonna do what I want." It's very selfish. Right? So I try
to appeal to their collective sense. Like, if we all collectively have the same goals for our
students, they're more likely to get there, and that's true, right? 'Cause learning is hard,
and if only one professor is teaching students about writing, those students are not gonna
graduate being good writers. Everybody gotta contribute, so I appeal to that collective
sense.
This strategy appeared to resonate with faculty at UP, who generally described assessment as a
collective, collaborative activity, as well.
In a similar vein, assessment contributed to a greater awareness among faculty of the institution
as a whole, beyond just their courses and departments. Part of this broader awareness came from
the establishment of Institutional Learning Outcomes (ILOs), which have become increasingly
visible within individual departments and to individual faculty over the last few years. The
committee with responsibility to drive implementation of ILOs on campus reported that the
implementation process has provoked new conversations about undergraduate teaching and
learning and general education.
Finally, some participants at UP explicitly mentioned cultural changes occurring as a
result of assessment activity on campus. One faculty member described the changes in this way:
But I do think the culture is changing. I really do. I feel like from when I first started
hearing about assessment, you know…10 years later, there's more of a buy-in, and
acceptance of it. Not everyone of course, but more than before. People are understanding
what it is, and its value.
Indeed, the goals of the original committee’s proposal to create a more faculty-driven assessment
process included cultural changes; they hoped to “develop and sustain a culture within which
assessment of student learning would lead to important decisions and improvements in our
educational efforts.” While UP is a research university and research remains a top institutional
priority, there is some evidence of a cultural shift in which teaching is starting to gain more
equal footing with research. This shift has begun to be enacted in new policies and structures on
campus, which I describe next.
Changes to Policies and Structures
Some institutional policies and structures also changed as a result of assessment activity
at UP. These policies and structures affect the teaching and learning environment both directly
and indirectly. One policy that changed as a result of assessment was the course and program
approval process. Approvals for new courses and new programs now require clear links to both
institutional and program learning outcomes, as well as detailed plans for assessing whether
those outcomes are met. New courses and programs are approved on a temporary basis at first
and must demonstrate implementation of assessment plans, as well as assessment results, in
order to obtain permanent approval. An administrator at UP mentioned several examples of
courses and programs that were unable to obtain permanent approval because of their lack of
attention to assessment and student learning. One faculty member demonstrated that this policy
is making inroads into faculty consciousness:
I know when I design a new course, first the curriculum committee of the department, we
need to agree on it. But then there's the university-wide curriculum committee that
ensures that any new courses that get added actually map on to the university overall
objectives and that make sure that it's appropriate that you're doing enough or you're not
doing too much. There's oversight anytime a new course is added to the books. There are
people from outside of [our department] who will look at it and comment on it and make
sure that it fits in with the university's overall vision.
This policy has forced faculty to not only consider what students should learn as a result of
enrolling in a new course or program, but also to articulate these learning outcomes, measure the
extent to which such outcomes are met, and reflect on how these outcomes fit within broader
institutional goals.
Another policy that changed as a result of assessment at UP relates to the program review
process. Programs must now include evidence of student learning and results of assessment in
their regular reviews, and reviewers are encouraged to pay attention to these aspects of the
programs’ reports. Additionally, the schedule and process of program reviews changed once
assessment became a regular feature of campus life. Instead of the traditional process of
reviewing individual programs every 7 years on a rotating schedule, academic administrators
decided to set up a process to review every program within a college or school at the
same time every 5 years. An administrator described the impetus behind this change:
We had one way of doing program review that was ineffective for a number of reasons.
We were reviewing programs in departments separate from the overall structure,
budgeting was separate from quality of academic program. We wanted reviews to have
teeth and wanted recommendations to be implemented, so we modified the review
process to look at all programs in a college at the same time. We [now] look at
assessment across the college vs. program by program. This helped colleges leverage
resources and share information across departments in terms of how they were doing
assessment.
This decision was intended to make it easier for programs to share information about assessment
of student learning, as well as to enable changes that require school/college resources that might
be necessitated by program assessment results.
Finally, assessment also now features in the language of promotion and tenure guidelines
at UP, as well as in descriptions of teaching responsibilities in the faculty union contract. The
union contract describes assessment as a key part of instructional work, which is defined
expansively to include many instructional activities besides classroom teaching. Campus
promotion and tenure guidelines indicate that in order to gain tenure, faculty must demonstrate
how their classes contribute to learning outcomes at both the program and institutional levels. It
is important to note, however, that despite changes to the language in these policies emphasizing
the importance of assessment for teaching, nearly everyone I spoke with at UP remarked that
student evaluations of teaching (SETs) remain the primary mode through which teaching is
assessed, and, further, that research is still far and away the primary criterion that counts for
tenure and promotion.
Changes to Curriculum and Teaching
A final change to the teaching and learning environment at the institutional level can be
seen in changes to curriculum and teaching, driven primarily by implementation of ILOs, as well
as intentional efforts to rethink the general education curriculum. As ILOs were introduced and
implemented, the ILO committee worked with leaders of the general education (GE) program to
ensure that the ILOs mapped onto general education requirements. In general, the outcomes
aligned with GE requirements; however, one of the ILOs was not represented by current GE
requirements. In this case, a working group was formed several years ago with faculty and staff
from across the university to think through how the university’s curriculum could ensure that
students received adequate opportunities to build skills aligned with that ILO. A year after its
formation, the working group presented a proposal for a new foundations course aligned with
that ILO, along with guidelines for the sorts of assignments and topics that courses meeting the requirement should include.
Additionally, as the ILO committee has worked to map ILOs to program and course LOs,
they have worked with the assessment office to set common standards of performance for the
ILOs using the AAC&U VALUE rubrics. For example, the assessment office held workshops for
over 50 faculty in which they collected and scored student work for three ILOs. Together, the
faculty worked to norm their evaluations and rubric scores and come to a conclusion about what
an acceptable standard of performance would look like for each ILO. Minimum scores,
explanations, rubrics, and examples of student work were distributed to leaders in the GE
program, as well as departments and programs. The assessment office’s report on the ILO
workshops described the (self-reported) influence that these sessions had on participating faculty
members’ teaching. Over 90% of faculty who participated in the workshops noted that their
views of assessment changed or their teaching changed as a result of attending the workshop.
One faculty member I interviewed participated in the workshops and also mentioned their value
in terms of gaining a better understanding of how to align course activities and assessments with
ILOs.
Additionally, several programs in STEM have recently put together joint grant proposals
focusing on improving undergraduate teaching and learning through evidence-based pedagogies,
while another campus workgroup is developing new online, interactive, open textbooks for
students in some introductory courses. One administrator linked these efforts to increased
assessment activity on campus, saying that while they “aren't the direct, "How do we assess
student learning?" [they] are building towards a more rich culture of student learning and
learning effectiveness, so that's pretty exciting.” This administrator elaborated by saying:
I think that given what I just said about what's happening in STEM, about the ways I've
seen programs rethink their curriculum, [assessment] is having, whether people want to
acknowledge it or not, a positive effect. I think the institutional learning objectives have
huge potential for shaping what it means to get a degree from UP, and that becomes sort
of a way of narrating this experience that we still have a lot of work to do on, but it's kind
of exciting work.
These institution-level changes to the teaching and learning environment—culture,
policies, curriculum and teaching—were driven by a variety of different experiences and
practices. I describe these phenomena in more detail in the next section.
How and Why Change Happened
Six themes explained how and why these changes occurred. While these themes were
most salient for institution-level changes, they also had influence at other levels of the system,
and I will come back to some of them again in later chapters to describe those influences. Figure
3 depicts a graphic representation of these themes, which I also refer to as levers for change.
Figure 3: Institution-Level Change Levers
[Figure depicts six levers feeding into institution-level change: faculty-driven process, supportive institutional structures, intentional messaging around improvement, neutral attitude towards accreditation, support and training, and leadership support.]
Faculty-driven process. First, assessment at UP was an intentionally faculty-driven
process. The leadership role that faculty played in implementation of SLOA fostered a tighter
connection between assessment and teaching than at VU. As noted previously, the first attempt at
implementing assessment at UP was not faculty-driven and did not get widespread buy-in or
have a meaningful link to teaching and learning. Initially driven by accreditor demands, this first
round of assessment was led by administrators, with implementation delegated to several faculty
senate committees but little faculty ownership. In particular, according to one interviewee, one
key member of the provost’s office “had a very rigid notion of what assessment was: a
standardized test…[to]…compare ourselves to other universities.” As a result of concerns about
this approach to assessment, a faculty task force was convened to determine how faculty could
“take ‘ownership’ of assessment.” The recommendations of the task force established the current
structure for assessment on campus: an assessment office with two tenure-track faculty positions
dedicated full time to assessment work; a newly constituted permanent senate committee with
oversight for assessment; and dedicated resources from the provost’s office to fund the
assessment office, its personnel, and its resources.
The philosophy and strategy behind the task force’s recommendations reflect a view of
assessment as linked with teaching and learning and thus requiring faculty leadership. In their
report to the senate, the task force explicitly laid out assessment’s purpose as helping the
university understand and improve its educational practices and programs. The task force also
noted that since assessment’s purpose is to help improve educational experiences (namely
classroom experiences), faculty must be involved in all aspects of assessment. Less than five
years after the task force released its report, a formal university policy establishing faculty
governance of assessment had been approved by the provost’s office and the faculty senate.
The participants I interviewed reflected this idea of assessment as a faculty-driven
endeavor and spoke to the importance of faculty buy-in. For example, the assessment director
noted that assessment “should be faculty-supervised [because] it’s a curriculum issue.” By
connecting assessment with curriculum and focusing on the faculty role in overseeing,
conducting, and using assessment, UP was able to foster an atmosphere of acceptance or even
enthusiasm around assessment, contributing to the changes in the teaching and learning
environment described above. This point was especially noteworthy given the campus’s initial
administratively-driven approach to assessment. Participants described that approach very
negatively, whereas they were much more positive and affirming about the current approach to
assessment and the role that faculty were able to play.
Role of institutional structures. Faculty were able to drive the assessment process at UP
because of the structures created by the task force, which provided ongoing support at a variety
of different levels. In contrast to VU, where a single staff member handles assessment as just one piece of his job and no committees hold assessment responsibility, UP has two full-
time, tenure-track faculty focused exclusively on assessment in a formalized assessment office,
as well as multiple committees with some responsibility for assessment and links to other key
offices on campus such as the center for teaching and learning. As noted above, the committee
with primary oversight for assessment is the UP assessment committee, which was formed after
the task force report. This committee is responsible for coordinating and monitoring assessment
activities and developing assessment policy directed at understanding and improving educational
effectiveness. The committee’s charter states that it works closely with the provost’s office, the
assessment office, the center for teaching and learning, and the general education office and
associated committees. In addition, a working group was created about five years ago to focus on
implementing ILOs. The faculty and administrators I spoke with reinforced this description of
multiple offices and committees working together to support assessment. One participant
described working with the GE office and committees during the ILO implementation process,
while several administrators mentioned the partnership between the assessment office and the
center for teaching and learning.
The assessment office, in addition to having a formalized structure and two dedicated
faculty employees, has an extensive website with numerous resources on assessment, teaching,
and learning. Copies of previous years’ assessment reports for each program are housed on the
site, as well as basic information about assessment, calendars of professional development
sessions and repositories of professional development materials, FAQs, and even a page on myths about assessment, aimed at dismantling common misconceptions. One faculty member spoke to the importance of the assessment office and its work:
I think even with the small assessment office that we have here, they've done a lot of
good work to try and convince people that it is important, and it's not just for show, or
just so that they can check something off of the accreditation requirements.
One key to the success of these structures, in particular the assessment office, was the people
who inhabited them. Without an effective and trusted director of assessment, the office itself
would not have been nearly as successful in supporting and growing assessment. The director of
assessment that UP hired about ten years ago was a longtime campus employee who had
experience with assessment in writing. She had existing relationships across campus, along with
the confidence and respect of faculty members across departments. She also had a carefully
designed strategy for building faculty buy-in for assessment over time and for keeping the focus
on assessment for learning rather than assessment for accountability. The structures that were put
in place by the faculty-driven process at UP, along with the people who inhabit them, have
helped support and sustain a link between assessment and teaching on campus and promote
institution-level changes.
Intentional messaging around assessment for improvement. Another factor that
contributed to institution-level changes to the teaching and learning environment was intentional
messaging from assessment leaders around assessment as primarily about teaching and for
improvement purposes, rather than primarily for accreditation and accountability purposes. The
director of assessment spoke repeatedly about her philosophy of assessment, and she emphasized
her belief that assessment holds great potential for helping faculty improve teaching and
learning. She described her approach to assessment in this way:
I had a particular philosophy that I wanted to move forward with for assessment that I
thought would work, and it was based on utilization-focused evaluation, participatory
evaluation, and…empowerment evaluation….So then I moved forward with those ideas,
and central to a lot of, like, participatory evaluation and utilization-focused evaluation is
bringing in the people, the client, so to speak, and in this case that’s the faculty…and
helping them understand how to carry out evaluation….So we just started this massive
capacity-building effort that we still do today….we said “everybody [each degree
program] should have a person we can work with, and we’re gonna invite them to these
capacity-building workshops, and we’re gonna teach them, ‘why do we do assessment?
How can it help your students? How can it help you be a better faculty member?’ And we
started with learning outcomes, assessment planning, etcetera, etcetera.
She mentioned that “one good use of assessment is to help faculty understand the principles of
good teaching.” She further elaborated on the beliefs and messaging that she drew upon when
working to get faculty to see the value of assessment:
I have the firm belief that most faculty care about students, and they really want students
to learn, and they want to be proud of their students. They want to be able to go to
cocktail parties, and when somebody says, "Oh, I hired a graduate from [UP], and that
graduate is amazing," that's what we all want, right? I think that's what ... I'm assuming
that's what faculty want, and so we try repeatedly to say, "Hey, the tools of assessment
allow you to do that, allow you to develop your curriculum, understand what students are
learning and not learning, and then to take action on that so that we can be proud of
students," so I really appeal to that sense of them, and then we also bring in the fact that if
we don't do this, our students can't come here because we'll lose accreditation and there's
no financial aid. And you want to get research grants; we want to be accredited.
Upper-level administrators also framed assessment as first about improving teaching and
learning and only secondarily about accreditation. For example, one administrator remarked that:
we went to every single college and department and talked about the ILOs and why we
needed to do them and tried to frame that not just in the "[accreditor] makes us do it,"
which is, I feel ... and I know that [our assessment faculty] feel very strongly that that's
not the first level of analysis, and I agree. You want some intrinsic reason to do it besides
"Our accreditation agent makes us," but it's a good last-resort argument. It's like, "Well,
okay. You can think what you want, but we won't be accredited if you guys don't do this."
While assessment leaders on campus did not ignore the imperatives of accreditation, they
deemphasized it in their messaging. This intentional focus on language helped make assessment
seem less threatening to faculty who might otherwise be resistant to engaging with it and also
helped them see the potential value for their own classes and programs.
Faculty seemed to have internalized the messaging from assessment leaders on campus
when I spoke with them. While nearly all faculty mentioned accreditation as a driving factor
behind assessment, they also mentioned its value for teaching and learning, as evidenced by this
faculty member’s remark:
I'm very cynical about this, about the goals of the administrators, but I try not to let that
creep into assessment, because it's actually been really useful for me. Primarily
assessment has been really useful for me refining what it is I really want students to know
and then really focusing on assessing them on those things at the end, so that's been
useful for me. I'm not sure that it's ... I haven't quite figured out if I'm doing a better job
of teaching them because of that, but I certainly think I'm doing a better job focusing,
making those connections between what I want and then what I actually test them on….
Accreditation’s role in impelling assessment activity remained present in the consciousness of
stakeholders across campus, but intentional messaging about its value for teaching and learning
helped faculty become more receptive to it and reflect on how to use it to improve teaching.
Attitude towards accreditation. The intentional messaging around assessment as being
primarily for improving teaching and learning led to a more facilitative attitude towards
accreditation, especially when compared with VU. Faculty at UP seemed much less threatened
by accreditation and less hostile towards accreditor demands. This attitude of begrudging
acceptance rather than active resistance can be traced back as far as the task force report over ten
years ago, when task force members wrote of being proactive towards assessment rather than letting the terms of engagement be dictated by their accreditor:
Either assessment will be done to us, or we will take control of assessment and ensure
that it is designed, implemented, and used as it was intended to be: that is, as an integral
component of effective educational practice, alongside and in support of curriculum and
instruction.
The members of the task force argued passionately for taking control of assessment from campus
administrators and accreditors in order to ensure that it was used as a tool for improvement—
specifically for improvement of teaching and learning. This attitude led to a more internally-
driven motivation to engage in assessment at UP. The task force members generally viewed
assessment as a useful tool but also recognized that some faculty may have different motivations
for engaging with assessment. While they emphasized the “positive possibilities that assessment
affords,” they also highlighted ongoing external pressures for assessment and the need to treat
assessment proactively: “A rod is being fashioned for us and the only way to keep it off our
backs is to place it firmly in our hands.”
The faculty and administrators I interviewed echoed this attitude towards accreditation—
one of resignation or acceptance. One administrator described the attitude in this way:
what I've tried to do and what I try to explain is this doesn't have to be something we
resist, but it's an opportunity to really be clear about what we think students ought to
know and what we think students should know when they have left higher ed.
While the attitude towards the regional accreditor and its push for assessment was not necessarily
enthusiastic or positive, there was a notable lack of hostility and resistance and a desire to make
assessment meaningful and useful rather than simply about compliance. This attitude was
particularly evident compared to VU, as I will describe later in this chapter.
Support and training. Another institutional factor that contributed to assessment’s
ability to shape the teaching and learning environment at UP is the support and training that the institution provides to faculty engaging in assessment. This support is delivered through the
assessment office as well as other offices on campus. First, the assessment office holds numerous
professional development workshops each year on a variety of different assessment- and
teaching-related topics. The assessment office’s website has workshop titles and materials for
nearly 100 different sessions over the last 10 years, ranging from general topics such as an
introduction to learning outcomes and creating assessment plans to more specific assessment
approaches like using rubrics for program assessment or using Excel to analyze assessment data.
Further, many workshops are related to teaching, such as one on how to use assessment to
promote reflective teaching and scholarship. There have also been workshops facilitated in
partnership with the center for teaching and learning. Handouts and PowerPoint slides from most
sessions are available on the website. The assessment office also offers consultations and
workshops to individual faculty or departments to provide more personalized support tailored to
specific needs.
Additionally, the assessment office hosts poster sessions every year modeled after poster
sessions at academic conferences. At these sessions, departments present their assessment work
and share ideas with one another and the assessment director gives out several awards for the
strongest posters and best assessment work. Several faculty I interviewed mentioned
participating in these sessions as an opportunity to share what their department had learned as a
result of engaging in assessment.
Perhaps the most influential training activity that the assessment office facilitates is a
three-day intensive summer assessment academy for those faculty leading assessment work in
each program/department. This event is a sort of crash course in how to do assessment, as well as
how to build buy-in and participation among departmental colleagues. One faculty participant
noted that it was “quite intense” and that before she attended she was “clueless as to what
assessment was supposed to be.” Other faculty also mentioned the value of this summer academy
in terms of giving them a better understanding of what assessment should look like and how to
carry it out in their departments in ways that would help improve curriculum and student
learning. An administrator commented on the various supports that this summer institute
provides:
Okay, all right, so one of the things that their office has done is created this summer
academy, which has been ... I think it's really a good model of developing across the
campus supporters for what you're trying to do, and they take on the critics and they give
'em the little iPad incentive, and so people come, and it's made a dramatic difference in
terms of the willingness of the people charged with assessment in these different
departments to develop assessment plans, do assessment, and then sort of followup with
results, and so they had, I think, a 97% assessment plan submission last year, and for the
most part, they've got ... and you'll get from them models of what's really great, who's
doing a really good job. I've seen some of this in terms of the poster sessions that they do,
which are also really cool.
The result of all this training and support is the development of a fairly large cadre of faculty
across the institution who are now familiar with assessment and its use in supporting
improvements to teaching and learning. Additionally, faculty at UP have the knowledge and
expertise to effectively create assessment plans and carry out assessment activities.
Leadership support. Finally, underscoring the other themes in promoting institution-
level change was leadership support. Upper-level leaders, including the president and provost,
regularly spoke about assessment, teaching, and student learning. They also worked directly with
the assessment office to align messaging to faculty in departments and across the institution. This
alignment meant that administrators would also emphasize assessment’s value for improving
teaching and learning when they spoke about it, rather than emphasizing accreditation and
compliance. As the assessment director noted:
I feel that I can always go to the administration… and say I need your help. Can you help
with this? They will at least get the message out. They might not get the money out, but
they'll at least help to get the message out. And that is powerful.
Faculty noticed that administrators were prioritizing assessment and student learning. For
example, one faculty member described her impressions of the university president’s attitude:
And I haven't talked with him about assessment, but I've talked with him about other
things and I just, I do believe that he cares about Student Learning Outcomes, that he
wants students to actually graduate and be successful as opposed to say that we have
good numbers.
Further, upper-level administrators take time to attend assessment events, such as the poster
sessions and the summer institute:
So, like I said, I think that for the most part, the administration has been supportive. They
give us time at deans and directors meetings ... they come to our events, and they, you
know, they give out the awards that [we] have. They go talk to people about their posters
at the poster session. They come to our assessment academy, and they welcome people.
In that sense, they've always been supportive.
One administrator I spoke with referred to this as “symbolic support,” as he understood the
import of his mere presence at assessment-related events. The support of upper-level leadership
at the institution indicated that assessment was an institutional priority. The fact that they aligned
their messaging with the assessment office, emphasizing assessment’s value for improving
teaching and learning, helped drive the changes to culture, policy, and curriculum that UP
experienced.
Valley University: Institutional Level
[Assessment Director] does his thing. He does it well. It has almost zero impact on what goes on
in this university, other than occupying people’s time generating results for our accreditor.—
senior-level administrator at Valley University
Unlike University of the Pines, Valley University (VU) experienced very little institution-
level change to the teaching and learning environment as a result of assessment activity. As
explained in the previous chapter, at the institutional level VU has an assessment office staffed
by a director. The former director (who left the institution in the middle of my study) had a PhD
but held a staff/administrative position. The director is responsible for assessment, conducting
studies of the effectiveness of various undergraduate programs, supporting the online Student
Evaluation of Teaching (SET) system, and accreditation. On one of the multiple occasions when
I spoke with the former director, he mentioned that assessment comprised approximately one-third of
his time and work. There are no committees dedicated to overseeing assessment at VU, other
than an ad hoc committee that no longer exists, nor was there a formal center for teaching and
learning until very recently.
In this section, I describe the weak links between assessment and the teaching and
learning environment at VU, focusing first on institutional culture, then policies and practices,
and curriculum. (There was more evidence of change at the departmental and individual levels at VU, which I describe in later sections.) Because I observed few changes, this section is significantly shorter than the one on UP.
Lack of Change to Teaching and Learning Environment
Whereas UP had many indicators of changes to language, norms, and values around
teaching and learning at the institutional level, data from VU showed little evidence of a more
student-centered or collective approach to teaching or that assessment has had much of an impact
on the institution’s culture at all. The language used to talk about assessment on campus was
nearly always compliance-oriented and rarely had to do with teaching and learning. One
administrator I interviewed noted explicitly that any changes to teaching and learning that were
happening on campus were not driven by assessment: “I would say not institutionally, I think
there are pockets across campus where it is [teaching improvement is driven by assessment], but
not as a cultural driver. I think people see it as something necessary that we need to do [for
accreditation].” One participant noted that at teaching-focused institutions, assessment
can be a bit more of a driver because people are just one, really focused on undergraduate
curriculum and sometimes they have a leadership group that is just so much more
invested and well embedded across the colleges and the campus. It can be more of a
cultural driver. I would say here, and at many research universities, I don’t think it is.
There was a recognition that assessment has not been a driver of cultural change at VU.
Additionally, nearly all faculty I spoke with at Valley spoke about assessment as a requirement
or for compliance purposes, as this was the norm at the institution. The attitude from
administrators at VU was that “the goals of assessment are to meet the requirement of
assessment—keep us accredited,” as another senior administrator remarked. Faculty “more see it
[assessment] as something necessary that we need to do” for accreditation, rather than a practice
to help improve teaching and learning. Table 5 provides a comparison of the differences in
language, norms, and values at VU and UP.
Table 5: Differences in Institutional Language, Norms, and Values around Teaching and Assessment

Language. University of the Pines: student-focused, outcome-focused, learning-focused. Valley University: compliance-oriented.
Norms. University of the Pines: teaching as a collective endeavor involving multiple courses, faculty, and programs; assessment for improvement. Valley University: teaching as more individualistic; assessment for accountability, as a requirement.
Values. University of the Pines: teaching and undergraduate education more valued. Valley University: research more valued.
In terms of policies and practices at VU, there is no mention of student learning or assessment in teaching policies; there are no institutional learning outcomes (ILOs) or assessment processes, no general education (GE) learning outcomes or assessments, and no attempts to link learning across programs within colleges or schools. The director of
assessment noted that no conversations had taken place within upper administration about
aligning GE with the learning outcomes advanced by the regional accreditor, nor had there been
any meaningful conversations about changing promotion and tenure standards to reflect an
emphasis on teaching and learning. The only real change to policy and practice that accompanied
assessment at VU was a requirement for departments to submit assessment reports. These reports
were how the director determined whether each department was meeting assessment demands as
set forth by RAC.
Finally, there were no discernable changes to curriculum and teaching at the institutional
level as a result of assessment at VU. Some participants mentioned changes to curriculum or
teaching but attributed them to other initiatives or programs besides assessment of student
learning. For example, an institution-wide initiative to promote graduation in four years led to
analysis of so-called bottleneck courses that have high failure rates. At VU, the courses with the
highest failure rates were introductory math courses. As a result of this initiative and this
analysis, institutional leaders sponsored a project to redesign the introductory math curriculum
and promote the use of evidence-based pedagogies in these courses. While this initiative
represents a significant change to curriculum and teaching that started at the institutional level, it
had nothing to do with student learning outcomes assessment on campus.
How and Why Little Institution-Level Change Occurred
In order to understand why assessment has had little impact on the teaching and learning
environment at VU, I will describe four themes that became evident as I analyzed data at the institutional level. These themes largely represent the inverse of the themes I identified at UP. The only UP theme without a counterpart at VU was intentional messaging—there was no real institution-level messaging or communication from leadership about assessment at VU outside of appeals to comply for accreditation purposes.
Administratively-driven process. Interestingly, the assessment implementation process
at VU mirrored UP’s in many ways at first, though VU began its work about ten years later. Like UP’s, Valley’s first foray into assessment was heavily driven by administrators responding to RAC directives, and, as at UP, there was something of a revolt from faculty once they realized what was happening. As one participant recalled:
I mentioned that the university rolled this assessment learning outcome program out,
initially, and then had to backtrack, because precisely for this reason—that it was
presented as a fiat, that this is how you'll do things. And then faculty and the academic
representatives were like, excuse me? You cannot actually tell us this. You cannot
actually dictate this. And that I think was actually very damaging to the whole process,
because from that moment on, these become administrative, bureaucratic requirements
that are forced down the throat of faculty, and which must be resisted on principle, not
even on the basis of the content, but on the principle. Not all campuses have done it that
way. And that's one reason why we ended up backtracking is because other campuses
formed joint committees at the outset of senate, administration, and staff to develop guidelines [for how] these requirements would be implemented. And that's what we ended
up trying to do retrospectively when they discovered that.
Unlike UP, however, when Valley had to “backtrack” on their assessment process, they did not
build in the sort of sustained faculty oversight and leadership that UP did. They created a
temporary ad hoc committee rather than a permanent faculty senate committee; assessment
remained under the purview of one staff member in the assessment office. Rather than pushing
for faculty ownership, they simply tried to pump the brakes on the entire process.
This sense of assessment as an administratively-driven process lingered in the minds of
faculty at VU. One faculty member mentioned that:
the documentation [about assessment that we received] was very much the administrator's
speak around it, so we need to comply with this for accreditation. It didn't take too much
time but it just seemed extremely...like an administrative issue more than one that has to
do with pedagogy.
Assessment was not faculty-driven at VU and was not seen as linked to teaching or pedagogy as
a result. Many faculty at VU remained resistant “on principle” because of the initial push from
administrators to implement assessment.
Attitude towards accreditation. Another theme that was associated with lack of
institution-level change at VU is the general attitude towards accreditation. This theme is related
to the fact that much of the assessment activity on campus was driven by administrators and thus
had more of a compliance focus than a focus on improving teaching and learning. In contrast to
UP, in which faculty were frustrated by accreditation demands but decided to get out in front of
them and “take control” of assessment to make it meaningful and useful, faculty at VU tended to
be hostile and dismissive of RAC’s demands to engage in assessment and generally proved more
resistant to assessment. One administrator noted that
Sooner or later, this is gonna hit us over the head, but the question is are we gonna lead
the way, or are we gonna be led kicking and screaming…I think I know the answer to
that question.
He implied here that VU would go “kicking and screaming” rather than “lead the way,” given
the culture of the institution and history around assessment and accreditation. A faculty member
further elaborated on assessment demands from RAC around ten years ago that laid the
groundwork for much of the resistance:
We got a very imprudent letter from RAC along the lines of “The administration is going
to have to force the faculty to follow through on this learning assessment stuff.” This did
not go well…Yeah we actually had, then the vice provost create a committee, of course,
is what you do when you have this kind of situation. They created a committee [ad hoc]
to discuss assessment and so forth, and accreditation. You know, the vice provost’s job
was to facilitate accreditation, so he was concerned. Like, “Look, we have to take this
seriously. It’s part of accreditation.” Until someone said, “Are they going to deny our
accreditation because we’re not meeting expectations on this?” Well, no. Then we had
someone come from RAC, talk about it all, and someone asked, “Could you deny us
accreditation if we’re not doing what you expect?” [He said] “Oh no, we would never do
that.” That’s entirely aside from the politics of, are they really going to deny [this]
diverse campus accreditation? Which is doing very well graduating students with equal
graduation rates across all ethnic groups—are we really going to deny [this campus] accreditation? No.
This quote demonstrates the hostility that faculty developed towards RAC, which was
exacerbated by the accreditor’s seeming lack of understanding of faculty ownership of
“curriculum and assessment of curriculum and design of curriculum and changes to curriculum”
in their decision to emphasize administrative force over faculty buy-in. It also shows the
dismissive attitude that many faculty at VU had towards RAC once they realized that it was
unlikely accreditation would be denied for lack of compliance with assessment demands. This
hostility and dismissal meant that faculty on the whole were not inclined to expend a lot of time
and energy on assessment for compliance purposes. The lack of leadership alignment around
assessment for improvement and lack of structural supports meant that faculty were also not
disposed to spend a lot of time and energy on assessment for purposes of internal improvement,
either.
Lack of structural supports and training. Institutional structures (or lack of certain
structures) at VU hindered change, devalued teaching and assessment, and weakened any
potential relationship between the two. First, the assessment office at VU has just one full-time
employee who has been able to spend only about one-third of his time on assessment and
supporting faculty. Faculty have recognized this lack of structural support for assessment, calling
“the operation…completely undermanned” and noting the near impossibility of one person who
spends only part of his time on assessment being able to make meaningful inroads in promoting
and supporting assessment and teaching improvement. While the director was able to host some
assessment workshops for faculty, sustained, ongoing support for those who attended the
workshops was difficult to provide simply because of the other demands on the director’s time.
Faculty mentioned that they did not know how to do the type of program assessment that
RAC and the University were requiring of them:
But my frustration is, one, we’re not trained to do that type of program assessment so
personally, I would like to see the university be more hands-on involved in that
somehow….even if they could simply provide a template and say hey, here are five types
of data you might collect as a department. And if you can provide us these data, we can
then carry out the analysis and help you come up with a way to summarize what was the
effectiveness of your program. I would be ecstatic if they could do that. But I think they
feel, well maybe one, they probably don’t have the resources to do it or enough people to
help carry that out.
Faculty were looking for more support in conducting assessment, but the structures in place at
VU made it difficult to provide the support they wanted and needed. While UP also had a small
assessment office—with only two employees—each member of the UP assessment office is able to
dedicate all of their time to supporting assessment across the university.
Further, unlike UP, which also has a permanent senate committee dedicated to overseeing
assessment as well as a newer committee for supporting institutional learning outcomes, VU had
only a temporary ad hoc committee with oversight for assessment that formed after attention
from RAC on the lack of assessment activity on campus. At VU, ad hoc committees “are created
for very specific tasks” and “generally exist for a short amount of time, until their task is dealt
with.” At VU, assessment was seen as a “very specific task” that was time-limited, rather than an
ongoing concern of the university requiring sustained oversight and management. The decision
to manage assessment with an ad hoc committee reflects institutional culture around assessment
at VU, as well as lack of structural support at the faculty/governance level for assessment.
Additionally, the lack of a formal center for teaching and learning at VU until very recently
reflects the lack of emphasis and value on undergraduate instruction at the institutional level, as
well as a lack of structural support for those faculty who have a personal interest in improving
their teaching.
Unlike UP, there were no real changes to any institutional structures at VU to
institutionalize assessment, other than the creation of the assessment office. One faculty member
remarked upon the “add-on” nature of assessment:
Then it might ... So it's an add-on that doesn't require changing the existing structure. I
think really that's the challenge of this assessment thing, learning outcomes thing, is that
it can't [change anything]... unless and until a professor of a department decides that there
is a real need for change, or unless there are incentives, financial or otherwise, like course
releases to redo your pedagogy in response to this new model.
There were no structural supports or incentives for faculty to get involved in assessment at VU
and, as a result, there was little meaningful change to teaching and learning environments, as this
faculty member noted.
Lack of leadership support and alignment around assessment for improvement.
Finally, there is currently little to no attention to assessment from institutional leadership
outside of the office responsible for overseeing undergraduate education. While a former vice
provost in that office was supportive and was the leader who really kick-started assessment on
campus, after he stepped down a few years ago the most senior champion of assessment on
campus was gone and no one of equivalent seniority stepped up to take his place. The person
who replaced him did not see assessment as a top priority. Though he is “generally supportive”
of the work of the assessment office, the assessment director mentioned that on the whole,
administrative leaders are “not super true believers or willing to expend their own capital and
have never said that it is one of their top 2 or 3 goals.” Further, he said, “our office of the
President has said nothing about assessment of student learning.” One top administrator
confirmed this perception when mentioning his opinion that assessment is largely “a hoop-
jumping exercise” and has little value for improving teaching and learning.
Faculty picked up on this lack of support and leadership alignment across multiple levels
of the administration. One faculty member mentioned that she “never heard anything beyond [the
assessment director] about the value or importance of assessment…every so often people will
talk about accreditation.” Other faculty mentioned that their deans said they had to do assessment
in order to comply with RAC requirements. No one mentioned that campus leaders promoted
assessment as an activity to improve teaching and learning.
Summary and Conclusions
University of the Pines had notable changes to the teaching and learning environment at
the institutional level as a result of their assessment activity. The institutional culture changed to
be more supportive of undergraduate teaching and learning, as evidenced by language, values,
and norms; policies and practices changed to support assessment and its link to teaching; and
curricular changes were made to ensure that Institutional Learning Outcomes were met. A
number of distinct institution-level factors or levers drove these changes. Assessment was a
faculty-driven process at UP, which drove buy-in and emphasized assessment’s curricular links;
supportive institutional structures facilitated assessment’s link to teaching; there was intentional
messaging around assessment for improvement; there was a facilitative attitude towards
accreditation and an internal motivation to use assessment; support and training were extensive
and effective; and leadership at all levels publicly supported assessment as a key way to enhance
the teaching and learning environment.
Compared to University of the Pines, Valley University saw minimal assessment activity at the institutional level beyond the minimum necessary to complete and submit assessment reports. There were few, if any, links between assessment and improvement of the teaching and learning environment at the institutional level at VU. Essentially, the inverses of the institution-level levers that facilitated change at UP were present at VU. The assessment process was administratively-driven; there were few supportive structures; there was no
messaging around assessment for improvement; an antagonistic attitude towards accreditation
prevailed; there was little support and training due to inadequate institutional infrastructure; and
there was a lack of leadership support at the highest levels for assessment as a tool for teaching
improvement. These institutional differences also played out in the departments I studied, which
I describe in the following chapter.
Chapter 5: Assessment and Teaching at the Departmental Level
At both VU and UP, as at many colleges and universities across the country, the locus of
much assessment activity, along with responsibility for teaching, is in departments. As I noted in
Chapter 2, the department is often considered the basic unit of the postsecondary organization.
Because universities are structured in this way, with programs and majors organized into
departments based on academic discipline, accreditors often focus on departments as key sites of
assessment and demand departmental or program assessments as a key indicator of compliance
for accreditation.
Departmental assessment activity is also influenced by institution-level policies,
practices, and cultures. At UP, which had more supportive structures, policies, and cultures at the
institutional level, departments typically demonstrated more meaningful change than at VU,
which lacked this supportive institutional infrastructure. However, there are also often distinct
influences at play within different departments on the same campus. For this reason, I also
examined the influence of assessment on the teaching and learning environment at each of the
six departments in my study, as I theorized that I might see differences in the link between
teaching and assessment at this level of the system compared to the institutional or individual
levels. I did, in fact, see differences at this level, as well as wide variation across the departments
in terms of what type of assessment was taking place, how much assessment was taking place,
and the strength of its link with teaching and learning in each department. This variation
occurred both within and across the two institutions I studied.
In this chapter, I will first describe the assessment activity and process within each
department, as well as changes or lack of change to the teaching and learning environment
within each department. Table 6 provides an overview of changes or lack of change in each
department. I then elaborate on the levers that influenced change or lack of change at the
departmental level, including faculty champions, departmental leadership, culture, and
disciplinary influences (where evident). Figure 4 shows these department-level change levers.
Table 6: Overview of Changes at Department Level

University of the Pines
Social Science: change. Type of change: curricular change, pedagogical change.
STEM: change. Type of change: curricular change, pedagogical change.
Humanities: little change. Type of change: changes to language about teaching and learning.

Valley University
Social Science: no change. Type of change: none.
STEM: some change. Type of change: new conversations about student learning and mastery in the major.
Humanities: some change. Type of change: new conversations about teaching and learning.
Figure 4: Department-Level Change Levers
[Figure depicts four levers feeding into department-level change: faculty champions, supportive departmental leadership, a departmental culture that values teaching, and disciplinary influences.]
University of the Pines Departments
Overall, departments at University of the Pines demonstrated stronger ties between
assessment and teaching and more changes to departmental teaching and learning environments
as a result. These changes were driven by the supportive institutional environment at UP.
However, I also observed differences among the departments at UP, indicating that there were
additional factors unique to the department level that could be driving these differences.
UP Social Science
At University of the Pines, the Social Science department had strong links between
assessment and teaching and notable changes to the teaching and learning environment in the
department as a result of assessment activity. In this department, responsibility for program
assessment was delegated to departmental education committees. The heads of each of these
committees were responsible for overseeing assessment activity and reporting for the
undergraduate and graduate programs in Social Science. (While some references to graduate education came up throughout my interviews, I focused on undergraduate education and assessment for my study.) The department uses e-portfolios to
collect student work and assess their programmatic student learning outcomes (SLOs). In these
e-portfolios, they gather artifacts of student work, along with the senior capstone project. Each
year, they convene an advisory board of faculty and alumni to score a random sample of the e-
portfolios against each of the SLOs using rubrics. The chair of the committee then writes up a
report using the results of these assessments.
Nearly all the faculty I spoke with in this department mentioned changes to the
undergraduate curriculum (what is taught or when something is taught) that resulted from their
assessment work. For example, one faculty member described how faculty were able to “make
small tweaks” to their courses to “link them more readily to other classes” once they saw how
students were performing on various SLOs. This faculty member noted one particular instance of
learning that emerged from assessment and the steps the department had taken to address gaps in
the curriculum:
For example we found in our 2013-14 assessment…we thought it was odd that one of the
areas that we did not have enough evidence for was ... The one place we fell short I
thought was ethics, which kind of startled me because I teach an ethics class and I feel
like students should have artifacts from other classes for that. But what we did was on
some of the, some of our doctoral students who are teaching sophomore- or junior-level
required classes, we asked them to add a few smaller assignments or discussions about
that earlier on. So that later when they take the senior-level classes, which m ost students
take but not everyone, they will already have some awareness of that and some
experience.
During this assessment, the department found that students did not have enough artifacts demonstrating the ethical reasoning LO. The faculty concluded that they were not offering enough
opportunities for students to master this objective throughout the curriculum, so they “tried to
include more ethics-driven assignments,” as another interviewee noted. One faculty member,
reflecting on how this assessment outcome had influenced her teaching, remarked that:
I don’t want to just change my class to make it easier for assessment. But on the other
hand, it made me think maybe there’s some dimensions of the ethics that I’m not
capturing in my class that are important. In some cases I’ve looked at that and I’ve said,
“No, what I’m doing is exactly what I want to do.” In other cases, I’ve actually thought—I
did a big change on one assignment last year where I was having them write short essays
about ethics, ethical learning during the semester and then we’d have discussion. And
instead I structured it…so they’re writing more critical questions for the discussion. I
found the discussions improved.
This quote indicates how faculty used assessment to change not only what they teach, but also
how they teach. Another faculty member emphasized this point and remarked that “assessment
helps you to modify the course while it’s still going on….you can change the content and
delivery method of the course if it’s not really working.” Assessment helped Social Science
faculty reflect on their teaching and make changes to both curriculum and pedagogy when
necessary.
Several of the Social Science faculty wrote and published an article for a disciplinary
pedagogical journal about their experiences with assessment—specifically with e-portfolio.
Many faculty mentioned how they felt empowered after doing the work on that article and
having the recognition of getting it published. One faculty member mentioned how his
experience working on the article helped him understand the potential value of the process:
So yeah, I guess through all that stuff, I became—yeah, it really changed my attitude
toward assessment and its importance. I don’t know, if I didn’t have that experience with
[my colleagues], maybe I wouldn’t have felt this way….I think it’s really important and I
think we should understand what our students are coming away with.
This quote demonstrates not only the growing acceptance of assessment in the department, but
also an increased conception of assessment as an activity that is intimately connected to teaching
and learning.
Faculty champions. Another faculty member, Pat (a pseudonym), was a major champion in the department and university-wide for assessment and its ability to improve teaching and learning.
Pat led the charge among departmental colleagues to write a journal article about their
experience and pushed for them to participate in the university-wide assessment poster sessions
every year to celebrate their work. Pat also informally mentored new faculty about departmental
expectations around assessment and teaching. Everyone I spoke with in the department
mentioned Pat’s enthusiasm and leadership as infectious, as did this faculty member:
Pat’s really into it…does presentations, and…these posters, and stuff like that. And Pat’s
a good friend of mine. So, I think that enthusiasm kind of has rubbed off on me. We both
were doing an assessment at the same time, going through these workshops.
Additionally, nearly all the faculty I spoke with in the department were knowledgeable about the
assessment process and the departmental learning outcomes (which was not the case in every
department I studied). Although the department chair claimed that “maybe half [of the faculty in
the department] have any awareness of learning objectives and their implications in terms of
what we do and how we do it,” faculty painted a more optimistic picture. One faculty member
described an experience with his colleagues in this way:
…in a recent faculty meeting I brought up the fact that we should consider doing another
assessment now that we haven’t done one in a couple of years. I was just like, “Can I get
a volunteer that can help me put this together?” And all the faculty raised their hand,
except one person who wasn’t there, and I was surprised to see that type of buy-in. So I
mean, we have added new faculty since that article came out, so I guess I was very
pleased that we have added newer faculty that really do…seem to value this as well. I
think this is a really good thing.
(UP recently moved from requiring departmental assessment reports yearly to requiring them only every other year. This decision was made in part to free up faculty in the assessment office and across the university to expand their focus on assessing ILOs and GE.)
Departmental leadership. In addition to faculty support for assessment, the department
chair also demonstrated that he values both the assessment process and high-quality teaching,
and he linked the two:
On our level, at the department level, I think it [assessment] is a useful reminder for
faculty about what the program is about, and what we’re trying to achieve by these
various course offerings that we do. On a simple level, what are our goals in teaching this
particular course? In effect, it translates the traditional course objectives to student
learning outcome objectives….And that’s good. That’s useful, in terms of doing a course
and measuring whether you’re achieving what the course goals are.
Faculty in the department also noted that the current chair is supportive of their efforts to
promote assessment and that he values assessment and teaching, as this faculty member noted:
I would say there’s a value placed there [on teaching and assessment], too, an
understanding that it’s important, and yeah, I’ve talked about the assessment project that
we want to do this semester and he’s been very supportive about getting it off the ground
and going.
Departmental culture. Departmental leaders who value assessment and teaching foster a
culture in which these things are valued more widely by the faculty. In the Social Science
department, several faculty mentioned the pride they take in teaching undergraduates and the
strong value that the department as a whole places on teaching. This quote exemplifies the ways
in which this culture is transmitted to new members:
This department has a culture of if your [teaching evaluation scores] are good enough, as
soon as you’re eligible they will nominate you for a teaching award. Because it’s a point
of pride. It’s like how many in this department have won teaching awards? I think after
[you’ve] been here three years, [you’re] eligible to go up for a teaching award. Then in
order to fill out that application, [you] need to demonstrate how [you] meet the overall
university objectives in a creative and engaging way….There’s big shoes to fill, but that’s
the mentorship I’ve been given that I should strive to want to do that.
A departmental culture that values teaching helped build a stronger link between assessment and
teaching in the Social Science department. References to assessment’s accountability purposes or
mandates to do assessment for accreditation were minimal in this department and far outweighed
by remarks linking assessment to teaching and broader curricular improvement.
UP STEM
The STEM department at UP also had a high level of assessment activity and strong links
between assessment and teaching. In this department, assessment is the responsibility of a tenure-track, non-instructional faculty member whose role is to oversee assessment and advising for the undergraduate and graduate STEM programs, along with the department’s curriculum committee. The assessment faculty position was created at the behest of a department chair approximately six years ago. I interviewed her, as well as five other faculty members in the STEM
department. One faculty member described the beginning of the assessment process for the
undergraduate program:
With our undergraduate degree, what happened is we brought the assessment office in
within a few months of me starting, we brought them to a faculty retreat, and they guided
an activity to kind of come to at least a general set of learning outcomes for the
undergraduate degree programs. We have the same outcomes for all of our programs
right now. Then those outcomes were honed by the curriculum committee over the next
few months. Then from there initially we had a workshop that was for people to create
learning outcomes for their courses, because a lot of people [didn’t] have them on their
courses still. We then at that workshop…had people start coming up when they were
done making their outcomes, and they had to have them map to the program outcomes.
That started the curriculum mapping process….We cleaned it up quite a bit, and then I
started taking notes as to like what types of learning evidence they were actually
collecting for each of those things, just so we had a better idea. That actually I think
helped move the conversation. Like, “oh, I don’t actually have a way to know what they
learned.”
This process helped faculty think deeply about what students should learn in their courses and
how that linked with the broader curriculum. They had to reflect on whether students were
actually getting enough opportunities to master each outcome, as well as whether they had
adequate ways of assessing whether students were learning. Several faculty remarked upon the value of
this curriculum mapping process and the changes that were made to the curriculum as a result—
before any formal assessment even took place. The assessment planning process alone proved
remarkably useful for the UP STEM department:
I think it’s really good that there’s some oversight of curriculum and the goals of the
curriculum, and that people have to sit back and think about them every once in a while.
I think it’s also led to more integration across the curriculum. Having faculty think about,
“Okay, well what do they need to know when they come in to my class?” and “What
should they be able to do when they go out?” Trying to link those to the other courses
better.
The curriculum mapping process also led the department to streamline some of the upper-level
major requirements. Specifically, faculty worked to create a more consistent experience for
students by adding a required course and decreasing the number of electives that students could
take. Additionally, the department used the curriculum mapping process to advance changes to
the introductory course sequence, which they had been discussing for some time even before
departmental assessment began. This curriculum mapping process took several years. After it
was completed, the department began conducting assessment in various classes.
There are two different undergraduate degree programs/specializations in the department,
which have distinct curricula and slightly different assessment approaches. One specialization
focuses assessment in the capstone course:
And how that works is they go to what is supposed to be the capstone course in
whichever program they’re focusing on that year and ideally there’s something, a paper
or a certain set of exam questions from that class that are meant to map to the learning
outcomes for that class. And they go through and assess them to see what’s happening.
So that’s been going on really just the last couple of years [after the curriculum mapping
process finished].
The other specialization has significantly more students and does not have a capstone course, so
assessment is focused there on two different required courses. For these assessments, the
curriculum committee uses the AAC&U VALUE rubrics to assess student work. In addition to
these assessments, the curriculum committee and assessment faculty member often target other
classes for assessment, especially newly created or reconstituted courses, or courses that are
targets for potential redesign.
Some faculty remarked on changes to their own courses or teaching approaches as a
result of assessment. For example, one faculty member discussed the ways that assessment has
changed what he does in the classroom:
…it’s actually been really useful for me. Primarily assessment has been really useful for
me refining what it is I really want students to know and then really focusing on assessing
those things at the end, so that’s been useful for me. I’m not sure that it’s…I haven’t
quite figured out if I’m doing a better job of teaching them because of that, but I certainly
think I’m doing a better job focusing, making those connections between what I want and
then what I actually test them on….So it’s been helpful for me personally as a teacher….I
really now take every lecture and look at those learning objectives and say “How does
this contribute to that?”…I like that, so I think that’s been the big change, and then at the
end with tests, quizzes, and exams, really focusing on the bigger picture of these
objectives.
This quote shows how this faculty member used assessment to change the way he structures his
courses, focusing on specific outcomes for the course and the program and ensuring that he is
constantly tracking back to those outcomes throughout the class and measuring progress towards
those outcomes at the end of the course. This faculty member also mentioned that he does
various forms of assessment throughout the class, including with audience response systems
(clickers).
Faculty champions. Many faculty in the department noted the value of having a tenure-
track faculty member dedicated to assessment and advising. Having this position served as a
concrete indicator to other faculty that the department values and respects assessment and
undergraduate education. She was a champion whom one faculty member described as the "assessment queen," noting that the department "would be rudderless without her." Her background
as a science PhD with expertise in assessment gave her credibility with other faculty, and her
position in the department gave her specific knowledge of the department’s needs, as described
by this faculty member:
…she was embedded in our department, knew all our curricula and at least understood it
enough to say, “So…I know your course builds on this, this is also covered in Frank’s
course, have you guys ever talked about how that works?” No, we should.
[Footnote 8: One recent development in this department is that the assessment faculty member has been forced to focus much more heavily on her advising role over the last year or so, removing a key assessment champion and facilitator in the department. It is unclear whether the department will be able to sustain its high level of activity without her leadership.]
Departmental culture. Additionally, faculty in the department were fairly receptive to
assessment in general, which one faculty member attributed to a broader feeling of responsibility
to undergraduate students and reflective of a departmental culture that values undergraduate
teaching:
…certainly at the faculty meetings there’s been good receptivity to the whole thing, I’d
say. I think everybody’s attitudes have been quite good about it. I don’t know what they
say when they go back to their office, whether they mumble into their computer screens
or whatever, but at the faculty meetings, I think everybody understands the importance, I
think, of [assessment]. Then in this department, it probably is really important because
it’s a big department at least in terms of undergrad students, and we have big
responsibilities because a lot of kids are taking [one of our] courses because they need the
gen ed requirement or something. I think it’s important.
Other faculty described their perceptions that the UP STEM department values and supports
teaching and assessment, pointing to the assessment faculty position as one piece of evidence:
Pretty much every faculty meeting, there is some discussion of the undergraduate
program. The fact that we have a specialist tenured faculty member whose focus is
assessment and advising. They’re not a staff member, they’re a tenured faculty. That, the
last few years, we’ve put a lot of effort across the faculty into curriculum redesign and
assessment. We talk about hiring decisions, we talk about not just the research side, but
the teaching side as well. Yeah. I feel like, even though we are a research university, the
teaching matters a lot.
All the faculty I interviewed in this department mentioned the strong teaching culture and
offered additional evidence of that culture. For example, one faculty member started a “graduate
seminar that’s actually on teaching methods, so active learning methods, scientific teaching and
really trying to do STEM pedagogy well.” In addition to graduate students who take the course,
new faculty members typically sit in on classes, do the readings, and participate in discussions
and class activities, as well. The impetus for starting this seminar was the faculty member’s
participation in a Howard Hughes Medical Institute (HHMI) summer institute workshop on
research-based teaching in STEM and his subsequent increased interest in research on effective
undergraduate teaching in the sciences. Another example of the value the department places on undergraduate teaching was its interest in hiring a discipline-based education researcher, or DBER; this type of faculty position is for scholars whose research is on teaching
and education in the discipline rather than basic science. DBERs are often hired to help revamp
undergraduate curriculum or support other faculty in improving their teaching. The strong value
placed on teaching in the STEM department helped department faculty see links between
assessment and teaching and, indeed, led them to see assessment as just one piece of good
teaching practice.
Departmental leadership. Departmental leadership was supportive of both assessment
and undergraduate teaching and drew links between the two, emphasizing assessment’s role in
teaching and curricular improvement rather than its accountability role. While I did not get the
chance to interview the current department chair (who was new), several faculty spoke about the
support of both the current and former chairs, remarking that “I think our current chair and the
last few we’ve had all take it really seriously. I think they take it seriously in that they view it as
being important.” Another faculty member noted that “I think our department leadership does
value it, does take it seriously, I’ve heard nothing but support about this [pedagogy] course that
we’ve started and things like that….” Leadership support helped facilitate assessment activity in
the department by indicating to faculty that assessment is something to be taken seriously and
valued and that it has a significant role in undergraduate teaching.
Disciplinary influences. The unique disciplinary influences of the sciences were evident
in this department. First, the national push to improve undergraduate teaching in STEM fields
was evident in this department, through faculty participation in national programs like the HHMI
summer teaching institute as well as through discussions on flipping classrooms and changing
the introductory course sequence to include active learning approaches. One faculty member
mentioned the growing emphasis on teaching and learning in disciplinary societies:
I mean, there are workshops. There are even symposia on teaching, especially teaching in
science. You know, a lot of young people want to see that kind of stuff being addressed at
scientific meetings. Inquiry based teaching, that kind of stuff, yeah. In the last four or five
years, that has really picked up.
Additionally, several faculty mentioned the role of the discipline in influencing attitudes
towards assessment. While generally faculty in the department now see a link between
assessment and quality teaching, at first some of them had reservations about assessment because
they felt it wasn’t “scientific” enough. One faculty member described a colleague’s initial
reaction to some of the assessment work in the department:
His feedback was always the same, which was he felt like when we were talking about
learning outcomes and curriculum mapping, and collecting student evidence, that he felt
it was all inconcrete [sic]. He would say, “They say you need to collect quality learning
evidence—show me quality learning evidence. Or, you need to show learning gains—
show me learning gains.” I think that’s maybe the challenge you run into in the hard
sciences is like, show me something tangible. Don’t make me feel like we’re talking in
the theoretical forever, because you’re not selling me on value.
While a disciplinary focus on teaching and learning perhaps made faculty more receptive to
assessment, at the same time disciplinary ways of thinking and knowing made some faculty
skeptical of assessment methods and evidence.
UP Humanities
In the Humanities department at University of the Pines, assessment activity is
happening, but, compared with the other two departments at UP, it has probably driven the least change (though still more than in the departments at VU). Assessment is perceived as
more accountability-focused in this department than in the other departments I studied. There
was also more resistance in the Humanities department than in other departments I studied at UP.
The assessment committee in the department drives assessment work, and few people outside of
the committee expressed much knowledge or interest in what the committee does.
The Humanities department at UP has created program learning outcomes (PLOs) and
encouraged faculty to create SLOs for each of their courses. The department has also created a
curriculum map that identifies which courses target which PLOs, and they have identified certain
skills that students should be learning at various levels of the curriculum (i.e. introductory level,
sophomore, junior, or senior level). Most assessment in the Humanities department is focused on
the capstone course/senior thesis. Each semester, the committee selects a random sample of
papers from the capstone course and assesses one of the program learning outcomes on a rotating
basis using a rubric. The committee planned a five-year cycle to complete assessment of all their
PLOs. One faculty member described the rationale behind the process:
[The committee] picked a five year plan because [they] thought that would drag things
out as long as possible to cover ourselves without having to do anything too dramatic. Let
assessments settle. Let people realize that it’s not going anywhere unless there’s some
sort of upheaval. Then at the end of five years we’ll see where we’re at.
This quote indicates the relative reluctance of faculty in this department to engage in assessment
compared with their peers in other departments on campus. Another faculty member describes
some of the initial resistance:
There was controversy about our SLOs. Not everybody was happy with them. There was
great skepticism across the department as to what it means for them. I think the baseline
resistance comes from teachers who have taught for a long time, or even not, right? Who
don’t want somebody else sticking their nose into the classroom basically.
While these perceptions of assessment as interference in the classroom did come up in the other
departments at UP from time to time, they were much more common and pronounced within the
Humanities department.
There were relatively few changes to the teaching and learning environment that
Humanities department faculty attributed to assessment. I will briefly describe the changes that
did occur. Several faculty mentioned that assessment gave them a “common language” to use to
talk about courses and curriculum, while others mentioned that “being forced to articulate what
you’re doing [through the curriculum mapping process and assessment] is always helpful.”
Another faculty member remarked:
One thing I appreciate about assessment is it puts the focus on students and student
learning as essential. I think that’s a lesson that’s good for all of us, that student learning
is essential. [But] I don’t think the way that the actual assessment process works benefits
student learning. It seems much more institutional than student-based.
While describing the value of assessment, this quote simultaneously demonstrates this faculty
member’s skepticism of the assessment process. This skepticism came up again and again in my
interviews with UP Humanities faculty, even among those who were most enthusiastic about
assessment.
Partly because of this skepticism, there were not many changes to the curriculum in the
Humanities department as a result of their assessment. Several faculty remarked on the fact that
the assessment report did not really lead to any tangible changes, such as in this quote:
[The] assessment report was exhaustive, our chair was appreciative and pleased that [the
committee] did the job. [There was] a follow-up discussion with the chair about the
report, but has it had a tangible impact on curricula? I’m not so sure.
One faculty member attributed this lack of department-wide impact to the fact that the committee
is more “outwardly facing” rather than focused on internal improvement:
…my impression so far, at least with [Humanities], is that the assessment committee’s
purpose has been more outwardly facing. So kind of collecting things from faculty but
then representing that to the [administration], as opposed to being primarily a tool to
guide faculty teaching.
Further, some faculty mentioned that this lack of change is actually intentional. Rather than
making recommendations on “how to change or revise” the curriculum or certain courses, the
assessment committee’s goal is to conduct assessment and create a report “just for information
purposes.” One faculty member noted that the committee was “trying to be cautious and
conservative. [They] don’t want to act like—antagonize our colleagues. And some people just
guard their own courses as their home turf. They don’t want anyone to mess with it.” The
assessment committee in this department was very wary of doing anything that could be
construed as telling their peers what or how to teach—much more so than either of the other
departments I studied at UP, where assessment and curricular change were seen as more
collective endeavors. This wariness seemed to signify a unique aspect of departmental culture
that was not evident elsewhere at UP.
Departmental culture. The Humanities department does have a culture of support for
undergraduate teaching. Several faculty remarked with pride on the fact that their department
does not use adjuncts and that only tenured or tenure-track faculty teach undergraduate courses.
There are also many members of the department who have won college-wide and university-
wide teaching awards. One faculty member remarked that “I think that both as individuals and as
a collective, that people take teaching very seriously in this department.” Despite this
departmental value of undergraduate teaching, assessment was not linked with teaching and
learning in this department in the same ways that I saw in other departments. A few individual
faculty remarked on its value for their teaching, but as a whole the department did not make as
many connections between assessment and teaching.
Faculty champions. One reason that this department may have had fewer links between
assessment and teaching and fewer changes to the teaching and learning environment is the lack
of true champions in the department. While the Humanities department did have faculty who led
assessment, their enthusiasm and support for assessment was muted compared with their
colleagues in the other departments I studied at UP. Even those faculty who had played
leadership roles in implementing assessment in the Humanities department demonstrated some
ambivalence or skepticism towards its value. One individual who had been mentioned as a “true
believer” by a few other faculty members self-identified as an assessment “agnostic”—not
against it, but also not convinced of its worth. One caveat about this point is that I was unable to
meet with all the “true believers” in the department. It is possible that these faculty may have
expressed more enthusiasm or less ambivalence towards assessment if I had spoken with them.
However, they were not spoken of by their peers (or even mentioned by many) in the way that
departmental champions in other departments were. In those departments, key assessment champions and their work promoting assessment came up repeatedly across
multiple interviews. That same type of champion was not evident in the Humanities department.
Departmental leadership. Over the decade since assessment started in the department,
departmental leadership, while “pretty encouraging and supportive” of assessment, allowed
assessment to remain the province of the assessment committee rather than pushing or expending
political capital to help institutionalize it more deeply across the department. This approach may
be changing, however. While assessment has not driven many changes to the teaching and
learning environment in the Humanities department at UP, there are some indications that
departmental leadership is interested in pursuing an approach that is more linked to teaching,
improvement-oriented, and internally-driven. The fact that the department decided to participate
in this study was framed by leadership as an effort to better understand and improve upon their
assessment activities and determine more effective ways to link assessment to teaching.
Department leaders also expressed interest in prioritizing and supporting assessment work
through more explicitly supportive actions and words.
Disciplinary influences. Several faculty attributed resistance to assessment to the
disciplinary influences of the humanities: “To be honest, our department, and maybe [our
discipline] in general, tend to be a little bit skeptical of the word assessment.” Some participants
expressed their opinion that assessment is “too quantitatively oriented” or “too institutional” and
not focused enough on their discipline. This disciplinary way of thinking persisted despite
acknowledgement that their discipline has actually embraced assessment. Several faculty
mentioned that they had looked to their disciplinary society for guidance on various assessment-
related tasks. One faculty member acknowledged that “now [our disciplinary society] has its own
journal [that focuses on teaching and assessment] and some of my colleagues bring my attention
to assessment-related articles…assessment is more and more important to make teaching
effectiveness more tangible…[but] I still don’t feel quite that the word assessment plays a major
role in how I develop courses or teach my students.” Despite knowing the importance that the
disciplinary association places on assessment as a tool for improving teaching and learning,
faculty in the UP Humanities department still engage with assessment grudgingly and more as a
matter of compliance.
Summary. This department proved an especially interesting case, as it differed from the
other two at UP in notable ways. While assessment activity was happening in the UP Humanities
department, it was not as strongly linked to the teaching and learning environment as it was in
the other departments I studied and was more accountability-focused. This case shows the power
of departmental culture even in an institutional environment that strongly values and supports
assessment.
Valley University Departments
By contrast, at Valley University, where there was a general lack of institutional and
cultural support, I did not see links between assessment and teaching that approached the level of
those in the UP Social Science or STEM departments. However, there has been some activity in
departments and some emerging evidence of change, despite the institutional environment.
VU Humanities
The VU Humanities department is one such example. Because there were fewer
meaningful changes to the teaching and learning environment at VU generally, I will organize
the following sections slightly differently than the sections for the UP departments. First, I
describe assessment work in the department, along with the few changes to the teaching and
learning environment that I observed. Then, rather than going through all the themes that help
explain the relationship between assessment and teaching (champions, departmental leadership,
culture, and discipline), I first describe aspects of these themes that facilitated the link in this
department and then describe aspects that served as barriers.
In the VU Humanities department, the education committee oversees assessment work,
with the chair of this committee leading and managing the effort. A former chair of this
committee, Taylor (a pseudonym), was among the first people in the department to see value in assessment and
take steps towards creating an authentic assessment program in the department. Prior assessment
efforts had been more tokenistic and compliance oriented: “my predecessor [on the committee]
told me basically, kind of drink a glass of wine and fill in random numbers.” Due to Taylor’s
conscientious or “obsessive” nature, she was uncomfortable with that approach. She did her own
research into how to make assessment meaningful and “ended up being convinced it could be
useful for our departments. And for our majors, in particular, to know what is expected of them
at particular moments in their trajectory.” Using resources from their disciplinary association and
from other “comparable institutions”, she “mapped out…a chart of learning outcomes at the
lower division, upper division, and capstone levels, how each of those learning outcomes should
progress.” Nearly all of the eight faculty I interviewed in the department mentioned this chart
document when asked about assessment, so it seems that it has made some inroads into the
collective consciousness of the department. In addition to the chart, members of the education
committee created rubrics to assess the learning outcomes in the three capstone courses that are
required of Humanities majors at VU. Each year, the committee focuses on a single learning
outcome in the capstone courses and rotates through all the LOs:
[We built] a rubric for each of the learning outcomes so that every year we can distribute
that rubric and ask instructors of the capstone to score students. What we ask them to do
is at random select three papers from their 22 [total]. Three pieces of different work, and
then apply the rubric to those three pieces of written work. And the idea was that
members of the education committee would get this sample of…three from each, and
then we’d go through and see if it looks like it was calibrating, and then we would,
ideally, sort of be able to identify patterns. Like where are we strongest, where are we
weakest?
This quote describes the details of the assessment process in the department. Another faculty
member described the department’s eventual philosophy around assessment, after grappling with
it for years:
Over the course of the years, we realized that assessment was really a capstone effort. We
are not assessing individual students in each class, we are assessing whether over the
course of the major, over the four years on campus, majors in our field demonstrate
having learned what we intend to teach them….It has forced the departments to articulate
in a slightly more explicit way, what it is [our] major should be able to do.
This assessment process happened simultaneously with the implementation of some curricular
changes (the department began to require three capstone seminars instead of just one). While the
faculty I interviewed did not describe any additional curricular changes as a result of assessment,
they did note that assessment had sparked new conversations. One faculty member explained his
perceptions of these conversations and changes:
On the other hand, we have done a few things that have at least felt somewhat meaningful
and have generated some interesting discussion. And I think in some ways have really
encouraged new approaches to at least some individual teachers’ courses.
Specifically, he mentioned that faculty who teach in the capstone course are thinking more
intentionally about what the outcomes of their courses should be: “it captures what we all think
we are doing and distills it very nicely.” Faculty who teach the capstone, and even some who
teach in other courses, are talking about the rubrics and the chart of learning outcomes, even if
they have not necessarily made any concrete curricular or pedagogical changes as a result. There
is a general awareness of departmental assessment activity, its purposes, and value, even if it has
not yet provoked major changes to the teaching and learning environment. Only one faculty
member I interviewed was unfamiliar with the departmental assessment process, whereas
everyone else had some knowledge of it even if they did not typically teach a capstone course.
Faculty champion. There were several factors in the department that served as
facilitators for assessment and its link with teaching. One key facilitator for the spread of
assessment in the Humanities department was the champion it had in Taylor. Taylor was not
initially supportive of assessment and described her initial reaction at a meeting with
administrators from the undergraduate education office to publicize the assessment imperative at
VU:
So in the first meetings I remember I pushed back quite a bit…challenging them on
exactly what the benefit for the department is. I mean the sorts of…and I still feel like
this is not their fault, but there’s an essential difficulty which is there’s this new layer of
pedagogically-oriented policy that is top-down because it’s coming from
[administration], with a whole new mode of thinking about how to assess pedagogy that
has nowhere to sit within the existing structure and modes of operation.
As mentioned above, however, after Taylor began doing research into assessment, she became
convinced of its potential value for the department and began actively creating resources and
advocating for assessment among her colleagues. She, in turn, convinced fellow faculty to
consider the value of assessment for helping them think more carefully about major requirements
and student performance. Taylor’s name was mentioned to me by nearly everyone I spoke with
in the Humanities department, and she seemed to have a significant impact on colleagues’
willingness to entertain assessment.
Departmental culture. Additional elements of the VU Humanities department’s culture
have influenced the degree to which assessment is linked with the teaching and learning
environment. Nearly all faculty I spoke with indicated their perception that the department values
and supports teaching, as evidenced by this faculty member’s quote:
I do get the feeling that my colleagues do take teaching very, very seriously. I hear it
from students, I get small windows into teaching any time we’re assessing another faculty
member’s file, looking at one another’s student evaluations….Also, my colleagues are
quite excited about what they’re doing in the classroom so that will become a topic of
conversation, too, what it is we’re doing, what it is we’re trying.
This willingness and excitement to discuss teaching and pedagogical experiments have perhaps
made faculty more open to discussing the potential of learning outcomes and assessment to
influence teaching. Further, various members of the department are engaged in other innovative
teaching projects across and beyond campus. For example, one faculty member is deeply
engaged with online learning and innovative digital pedagogies. Another faculty member is
engaged with active learning and flipped classrooms, while a third has been engaged in learning
communities and writing-intensive courses. These faculty in particular were very open to the
idea that assessment could help them improve their practice.
Disciplinary influences. Additionally, various aspects of the discipline were facilitators
of assessment’s ability to influence teaching in this department. Many faculty expressed
knowledge of their disciplinary association’s engagement with assessment initiatives, and they
described how they had adapted some of that work to build their own LOs and rubrics. Further,
local and national trends of declining humanities majors have encouraged conversations about
how to improve teaching and program offerings to attract more students:
…there’s a nationwide trend of declining numbers among [humanities] majors. And quite
significant declines, so [humanities] departments across the country are reporting losses
of about a third of their numbers among…..We have another discussion happening, again
kind of parallel to these others, about how to sell ourselves, how to attract majors, how to
encourage majors to think the field is still relevant, how to deal with some of the obvious
economic considerations that are driving some of this trend.
Faculty indicated that awareness of this trend has perhaps made them more open to considering
innovations in teaching and learning than they might otherwise have been.
Barriers. Despite the nascent changes and facilitators of this change, ultimately there
was not the same level of change or influence in departments at VU as there was at UP. Along
with the barriers to linking assessment with teaching improvement that I described at the
institutional level at VU, there were also some barriers unique to the department level. These
barriers were variants of the same themes that served as facilitators: leadership, culture, and
discipline.
Departmental leadership. While departmental leaders in Humanities were “open to the
process…not resistant to it…and certainly open to rethinking our undergraduate curriculum,”
they have not championed assessment or advocated for its use to improve curriculum or
instruction in the same way that Taylor has. No “chair [has decided] to expend political capital
on” redesigning the Humanities major around learning outcomes or using assessment data to
make broader changes to teaching in the department. Moreover, there is a longer history of
departmental leadership antagonism towards assessment, as this faculty member explained:
The first time I heard about student outcomes and learning assessments was from a chair
of my department, this was probably 8 years ago, maybe 10 years ago—gosh, it’s been
that long—who described it in completely derogatory terms, saying “we have to do this,
this is a stupid bureaucratic thing, but our grades should be enough of an assessment of
what we do.” I remember clearly because I was very confused about what he meant. It
seemed like, “Yeah, I guess grades are the ultimate assessment of a student, what else
could we possibly have?” This was early on in my career as an instructor so to me it was
like…We had some documentation that was distributed. The documentation was very
much the administrator’s speak around it, so we need to comply with this for our
accreditation. It didn’t take too much time but it just seemed extremely…like an
administrative issued more than one that has to do with pedagogy. It was actually the way
pedagogy is never used, ever.
This initial attitude towards assessment—as a task for compliance that is completely unrelated to
pedagogy and teaching—was embedded in the departmental culture early on and has proven
difficult to overcome. Taylor’s grassroots leadership efforts have helped alleviate this attitude
somewhat, but suspicions remain. Additionally, there are key structural barriers that she alone
cannot overcome. There are no “incentives, financial or otherwise, like course releases to redo
your pedagogy in response to this new model.”
Disciplinary culture. Some aspects of disciplinary culture also served as a barrier to
assessment’s ability to improve teaching in this department. Because the humanities have a
primarily qualitative orientation to the world, there was a deep-seated sense of skepticism about
assessment even among those faculty who were supportive:
There is a sense that assessment is something tied to the social sciences in ways that
make humanists uneasy, and if I had to put my finger on where I see the tension, I think
it’s maybe…maybe it centers on the fear that assessment encourages a cookie cutter
approach to learning that humanists find counter to their enterprise.
This fear that assessment would force a one-size-fits-all or standardized approach to teaching and
learning came up across multiple interviews. Additionally, there was a sense that assessment was
not commensurate with a humanistic orientation and was inextricably linked with the social
sciences:
…we’re teaching humanities, where it’s not really about facts, or information, or things
that are easy to measure. That they’re not easy to measure doesn’t mean you shouldn’t
measure them, but it means you need to think very carefully about how you measure
them. Because learning outcomes assessment is closely associated with social
scientists…
The epistemological and ontological orientations of these faculty as scholars in the humanities
created a psychological barrier to assessment for some faculty.
Summary. Overall, the department champion seemed to play the most significant role in
more meaningful engagement with SLOA in the VU Humanities department. There is
department-wide conversation about learning and assessment in Humanities and faculty seem
open and even enthusiastic about thinking through program outcomes and assessment. However,
these conversations have not yet translated into meaningful action and change the way they have
in departments at UP. Taylor’s work was not quite enough to overcome the lack of institutional
support and other departmental barriers.
VU STEM
The STEM department at VU also has some assessment activity that is linked to teaching
and learning, but its impact tends to be narrow and focused only on the core group of faculty
who engage with it. The VU assessment director highlighted this department as one of the most active and accomplished in its assessment work. In STEM, assessment is led by a teaching-only tenured faculty member, Chris (a pseudonym), with the support of the
undergraduate education committee. Chris teaches double the number of courses that traditional
research faculty in the department teach, runs the departmental assessment program, and also
does research on the Scholarship of Teaching and Learning (SoTL), as well as some basic STEM
research with undergraduate students. The major assessment activity in this department includes
a standardized disciplinary exam, as well as assessment in a capstone course:
So for instance, we started doing, using...[an exam created by our disciplinary society].
So we started using that as a tool for measuring how our students are performing by the
time they leave. Yeah, so we’ve chosen a course that is required for all…majors. They
usually take it in their senior year. So we’ve been administering it over in that
course….But then when we, we’ve also assessed specific courses. So we’ve used a
capstone course…It’s a lab-based course that usually students take as a junior or senior.
All of our majors have to take it. And it involves a lot of writing, so they do a lot of lab
report writing. They do poster presentations….So we use that as one of our measures. So,
we look for student writing, for critical thinking. We use their lab reports as a way to kind
of try and assess that. And so there, the instructors of the course are deeply involved in
helping us do that….Creating a rubric for how the lab reports were assessed, the
instructors were involved in that.
The assessments are department-wide and focused on direct evidence of student learning and
mastery, rather than indirect evidence like surveys or grades. Chris was very knowledgeable
about these assessments and their results and mentioned that the department is learning more
about its students and their level of mastery as a result of assessment. On the disciplinary exam,
for example, the department was able to compare its students’ scores to national averages and
found that they were performing slightly below those averages. However, the department has
not yet figured out how to use those results to make changes:
That’s obviously a multi-course problem, right? Cause this exam covers things that come
from five or six different courses. So, having some guidance on what would be a useful
intervention to address that problem….Do they just need a review session at some point?
Maybe they know the material, they just haven’t seen it in a while, they need a little bit of
review? Or do we really need to change drastically what we’re doing in specific
courses?....And getting our department to actually think about that and do something, that
is another challenges.
The department has struggled to make program-wide curricular changes as a result of this
assessment. While Chris has a lot of skill and experience conducting assessment in his own
classroom and making changes based on results, he acknowledged that he was less certain of
how to effectively assess student learning across multiple courses or the entire program and
make changes. He saw this type of program assessment as very different from his classroom
assessment, and he wished that there were more resources available at the institutional level to
help support him in program assessment:
My frustration with that is…even for me, like I [do a lot with] scholarship of teaching
and learning, but I don’t feel like I’m qualified to run an assessment of a program.
Program assessment is much different than doing some experiment with two courses. But
yet, faculty have to drive this, or at least that’s what our campus has decided that the
faculty within the departments are gonna have to do this. So we’ve kind of been forced to
do this, and obviously it’s important, we can’t just ignore it. But my frustration is, one,
we’re not trained to do that type of program assessment so personally, I would like to see
the university…more hands-on involved in that somehow.
On the other hand, there were numerous examples in this department of individual faculty using
assessment in their classrooms to change or improve their teaching. These efforts were typically
not connected to program-level assessment efforts, however.
Aside from Chris, most of the other faculty I spoke with were not especially familiar with
departmental assessment and said it does not inform their teaching practice in any meaningful
way. They knew assessment was happening and referred to it as “RAC assessment,” as they
associate it with accreditation, but it had little influence on their own work. One faculty member
described his impressions of the departmental assessment process:
It has certainly come up, but I think there’s a couple of people that are in charge of
making sure that those documents get submitted. I’m pretty sure that the person in charge
sent out our responses to certain prompts and…[sent] that out to everyone and then asked
for feedback. I mean, I read it and provided some feedback. Kind of forget what it was all
about though. I think that there are people, that that [assessment] is definitely something
they think about. It’s on our radar but I would say the faculty at large, not so much.
This quote demonstrates the limited reach of program assessment, which did not trickle beyond the few faculty members involved in creating the reports. Another faculty member noted that
“the results don’t get communicated to me, and so it never has an opportunity to influence my
teaching.”
Faculty champion. Chris’s role as champion was the major facilitator for assessment activity in the department and a primary reason that assessment had any links to
teaching and learning. He remarked upon his impression that he had been hired, in part, to handle
the department’s assessment program. Every person I interviewed mentioned him as an exemplar
in terms of both teaching and using assessment. He described his perceptions of assessment:
I definitely recognize it’s important. I think, as much as we can, we want to have
evidence that shows what we’re doing is effective or not, and it could help us make
decisions. And so, of course, when I do these innovative approaches to teaching, I wanna
try to collect some type of evidence to show that it’s having a positive impact. And if not,
then I should no longer use that intervention.
His views of assessment as a tool to improve teaching and determine the impact of various
teaching strategies resonated with his colleagues in the department, who spoke of going to him
for advice and mentoring on teaching. They recognized his efforts to champion assessment as
part of his efforts to champion effective teaching.
Departmental culture. Additionally, the VU STEM department has a culture that is very
supportive of teaching. They were among the first departments on campus to hire a teaching-
focused tenure-track faculty member, and since hiring Chris they have added a second tenure-
track teaching position. They also have a large number of faculty who have won teaching awards
or been recognized for excellence in teaching. One faculty member described her impressions of
the departmental culture around teaching:
…our department has a disproportionate share of teaching awards and recognition for
faculty, related to teaching. So, and I think that’s why they wanted the teaching position
and why they were willing to do it. They recognize that teaching is an important part of
our department. And where that came from, I couldn’t tell you. But that’s just the way it
was. So yeah, we have—there were at least three faculty in our department who have
won the campus-wide distinguished teaching award. And at least two others who have
won the innovative teaching award. Which—so like five or six faculty of our department.
That’s out of about thirty, probably at the time it was out of twenty-five faculty. So pretty
large percentage….So yeah, I think for some reason, that was in our culture as a
department and I think that’s why, generally speaking, folks are open to hey, let’s—if we
can make something better, let’s make it better.
This strong culture of teaching paved the way for Chris’s position to be created and has perhaps
made faculty more willing to experiment with new modes of teaching.
Disciplinary influences. Disciplinary influences also played a role in facilitating the
emerging relationship between assessment and teaching in the STEM department at VU. First,
there are discipline-specific assessment resources that Chris and his colleagues drew upon, such
as the disciplinary exam, when planning and implementing assessment in their own department.
This exam has the legitimacy that comes from being created and recommended by the
disciplinary society, as opposed to an exam created by a testing or measurement organization.
Relatedly, there are numerous teaching improvement initiatives in this STEM discipline and in
the sciences more broadly. Many faculty I interviewed in this department remarked on
participating in professional development opportunities around teaching. As a result of
participating in such workshops, they had gained familiarity with many important pedagogical concepts, as well as basic terminology around assessment. For example, one faculty member I interviewed described his use of formative and summative assessment. These efforts are largely targeted at individual faculty, however, rather than the program or department level. Another
disciplinary influence came from the increasing amount of funding from scientific organizations
for science education projects. Several faculty mentioned applying for and winning NSF grants
for science education projects. As part of these projects, they had to include assessment measures
and often an external professional evaluator to determine whether the project they created was
improving student learning and other measures of success. Advocacy from both the disciplinary
society and funders, along with a proliferation of programs for teaching improvement in the
sciences, had some effect of legitimizing assessment for members of this department. There was
not a lot of outright resistance, though there was a lot of passive disengagement.
Disciplinary influence as a barrier. However, disciplinary influence also provoked a
fair amount of skepticism of assessment. Like faculty in the STEM department at UP, many
members of this department remarked that they felt assessment was not scientific enough. One
faculty member repeatedly mentioned that he was not sure any of their assessments were
“statistically valid.” Even Chris expressed some reservations:
Yeah and that’s kind of frustrating for me as a scientist. So yeah, I wanna be scientific
about trying to measure how my students are doing but then, on the other hand, there is
ultimately a lot of uncertainty that just cannot be controlled or factored.
Their disciplinary training as scientists made the VU STEM faculty uncertain of the value or
trustworthiness of assessment data.
Departmental leadership as barrier. There were some additional barriers at the
departmental level in terms of assessment’s ability to influence the teaching and learning
environment. Like the VU Humanities department, the VU STEM department had a major
faculty champion but departmental leadership that was more neutral in its support. One faculty
member described the leadership of both the current and previous chairs as “utilitarian:”
So he [the former chair] just kinda did it cause it had to be done. So he was obviously
willing and able to do it. I think his perspective was just more of a utilitarian, just get it
done if it needs to be done. I don’t think he was necessarily excited about carrying out
program assessment….so he wasn’t proactive, but he also wasn’t a roadblock.
While having a strong faculty champion matters in terms of getting faculty buy-in, the lack of
enthusiastic departmental leadership support made it difficult for the department to use
assessment as a tool to change the teaching and learning environment. There were also few
incentives for the majority of faculty to engage with assessment. The ones who did engage either
saw its intrinsic value for their own practice or were on the committee that oversees assessment.
Other structural barriers. Finally, while the department has a strong teaching culture,
the reality is that all faculty except Chris and the other tenure-track teaching faculty are
incentivized and rewarded for their work as researchers, not as teachers. The research faculty I
spoke with all mentioned that their teaching just needed to be “adequate” in order to get tenure
and promotions. As scientists, their research burden was also quite heavy; they are expected to
manage labs and publish numerous journal articles in order to advance. With few incentives to
improve their teaching, there were also few incentives to engage in assessment as a means to
improve teaching.
Summary. Overall, while there is some evidence of assessment’s link to teaching in this
department, it does not seem to have spread beyond a small group of faculty who are passionate
and dedicated teachers. Chris is the major champion of assessment in the department. Everyone
respects him as a teacher, sees him as an expert, and brings him up as a resource for teaching—
when he talks about teaching and assessment, people listen. That said, his efforts have still not
trickled beyond a small group. The lack of enthusiastic leadership support at the department
level, as well as the lack of institutional incentives, structures, and supports, outweighed the
efforts of one champion. Chris recognized this reality and expressed his dissatisfaction:
I’m still frustrated. I feel like we could do a much better job. And yet, we’re kind of the
poster child for doing a good job, and that’s what scares me.
VU Social Science
The Social Science department at VU has the most minimal assessment activity of all the
departments I studied and few, if any, links to teaching and learning. It is important to
reemphasize that this was the third social science department I contacted at VU about
participating in the study. My contact in the first department initially agreed to participate, but he
could not get any interest from departmental colleagues, and none of them responded to my
email outreach or agreed to be interviewed. After I dropped that department, I worked with the
assessment director to contact another social science department he perceived as having strong
assessment activity. They also declined to participate. Finally, we reached out to the Social
Science department. They agreed to participate, but my contact there was adamant that they were
not a department where assessment had had any impact on teaching and learning:
We talk about assessment 0% of the time. It is not a factor for most people. We do it on
our own in a corner by ourselves and don’t share, because no one in the department cares
because we have higher teaching evaluations than average.
While the VU assessment director had flagged the Social Science department as in compliance
because they always completed an assessment report that met his guidelines, they were not using
assessment to make changes to their curriculum, pedagogy, or any other aspects of the teaching
and learning environment in the department. This department thus stood out as distinct from my
other cases from the very beginning, as the biggest champion of assessment was telling me how
little impact assessment had had in the department. The other faculty I interviewed in Social
Science all reinforced this perception. Two of the faculty I interviewed in Social Science had
little to no knowledge of departmental assessment. The others had some knowledge but
universally concluded that it had little meaningful connection to the core work of the faculty or
the department.
The VU Social Science assessment process consisted of sampling papers from a selection
of courses and using a rubric to evaluate a set of learning outcomes they had created for the
department. Graduate students are hired to apply the rubrics to the undergraduate papers, working alongside the departmental assessment coordinator. The assessment coordinator then uses these results to compile a report, which is submitted to the assessment office. One
faculty member described their impression of the assessment process in the Social Science
department:
I think it’s kind of a performance to be honest. I think it’s completely performative. It’s
unclear to me why we do it other than the fact that there was a box to check, either
required by the campus or some long-standing practice or norm that the department had.
It’s unclear to me that the actual assessment revealed anything other than a bunch of
quantitative data which, frankly, to me, given my commitments, I’m extremely
suspicious that that tells us anything very substantive. To me, it just felt like a
performance that I had to keep undertaking cause that’s what you did.
Their process was compliance-oriented and not linked to the teaching and learning environment
in the department. Faculty did not perceive the information they got from assessment to be
valuable or useful. Another faculty member noted that “If something came up we certainly, I
think, would have addressed it, but I don’t think anything really dramatic ever stood out.”
Institutional restrictions as barrier. One former assessment coordinator attempted to
change the assessment process to make it more useful for the department:
I tried to take this really earnest approach and develop something meaningful for the
department but failed. [I was told by the assessment office] that if we changed our criteria
then we lose one of our assets, which is tracking progress over several years. But I think
it’s complete bullshit, it’s not measuring anything meaningful—nobody wants to talk
about it…
The department felt stymied by institutional restrictions on what its approach should look like.
These strictures prevented a potential champion from developing in the department. Unlike in
other departments, where faculty champions were able to get colleagues on board, this former
coordinator in Social Science was unable to change the process into something that would generate
meaningful interest or buy-in from her colleagues.
Departmental culture as barrier. Aside from institutional limitations, the department's
culture prevented faculty from seeing assessment as a valuable tool for improving the teaching and
learning environment. The department has a culture of teaching excellence—nearly everyone I
spoke with remarked on the fact that their teaching evaluations are consistently higher than
university averages and that they are frequently held up by the University as an example of
excellent teaching or “top performers”. Thus, even when assessment was framed by the
assessment director or the departmental coordinators as an activity that could help improve
teaching, faculty in the department felt no need to engage with it because of the messaging that
their teaching was already consistently excellent. They saw assessment “as a hollow exercise.
They are committed to teaching and take offense if you say [assessment] is about commitment to
teaching.” Additionally, faculty who knew about the University’s assessment process believed
that it was more of a tool for institutional imperatives rather than their own success as teachers:
“the assessment process is…seen as a tool that a lot of faculty view as inconsistent with their
own interpretation of the educational mission, metrics that allow the institution to present itself a
certain way but not something that lends itself to talking about teaching.” Further, unlike the VU
Humanities department, there were no enrollment imperatives in Social Science encouraging
reflection on teaching and curricular offerings. Social Science is one of the most popular majors
in the University; faculty thus have no indication that what they are doing is not working well
or is in need of improvement.
Departmental leadership as barrier. Departmental leadership also saw assessment as
more of a compliance task than a tool to improve teaching and learning. The chair did not
promote assessment beyond the existing efforts:
I talked to the chair and asked if [assessment] could be brought up in meetings at the
beginning of the year (what the assessment director had asked us to do). The chair didn’t
really respond to that, it was like it didn’t register, it never came up again.
This lack of interest from departmental leadership contributed to the status quo of assessment for
compliance in the Social Science department.
Disciplinary influences. Disciplinary socialization and culture seemed to have less
influence in this department. Social scientists are split epistemologically and methodologically,
with some taking a more positivistic, quantitative approach and others a more interpretive,
qualitative approach to research and knowledge. Thus, I found some faculty who thought
assessment was too quantitative and should be more holistic, and others who thought it was not
rigorous enough and was too qualitative. There was no awareness of disciplinary resources or
projects on assessment, and limited participation in teaching and learning initiatives of the
professional society.
Summary. It was unclear whether the Social Science department was representative of
other departments on campus in terms of assessment and its link with teaching and learning. I
tried to select departments based on their high levels of engagement with assessment, in hopes of
better understanding the link with the teaching environment—the Social Science department
clearly did not fit those parameters. As I only studied three departments at VU, it is impossible to
say which, if any, are representative of all departments across campus. However, one participant
did indicate that there were other departments that found real value in assessment for its ability to
improve teaching and learning:
I was inspired by [the assessment director’s] workshops. Listening to other reps from
other departments like…the sciences—they were so earnest and serious about it, and
interested in how assessment can be integrated into learning process. Someone in a
science was using a survey to track how much students were learning but also alert them
to own gaps in knowledge. I thought that was really inspiring. It was provocative and
made me think about how to deliver quality undergraduate offerings in a research
university. [But it was also] clear to these people who are doing it that you can just do
[assessment] and get away with it without doing it meaningfully.
This quote indicates that there is a wide range of assessment approaches in departments on
campus, so perhaps my sample ended up being representative after all.
Summary and Conclusions
Overall, there were some distinct factors at the departmental level that influenced
assessment’s ability to change the teaching and learning environment. Departmental culture,
faculty champions, leadership, and disciplinary socialization and culture all influenced the extent
to which assessment was linked to teaching in departments. Disciplinary influences drove
specific assessment approaches as well as critiques of particular epistemological or
methodological assumptions inherent in assessment. A departmental culture that was supportive
of teaching and assessment; supportive department chairs who saw the value of assessment for
improving the teaching and learning environment; and the presence of faculty champions who
enthusiastically engaged in assessment and modeled its effective use for their peers were all
necessary to foster assessment’s ability to change teaching in departments. Departments that did
not have all of these levers did not demonstrate the same levels of change to the teaching and
learning environment as those which did. For example, the UP Humanities department did not
have any major faculty champions, nor did they have particularly enthusiastic departmental
leaders; while they had a supportive teaching culture, it was not enough to overcome the absence
of these other two levers.
Departments were also influenced by factors at the institutional level, such as support
structures, incentives and rewards, and institutional culture around assessment and teaching.
Figure 5 shows how departments were influenced by both departmental and institutional levers.
Figure 5: Interplay between Department and Institution Levels
Departments at UP, which had a strong institutional framework connecting assessment to
improvement, tended to have more assessment activity that had stronger links to the teaching and
learning environment, even in the least change-oriented department I studied (Humanities).
There were some notable changes to curriculum and pedagogy in two of the departments I
studied there. The third department had fewer notable changes, but still generally approached
assessment as an instructional or curricular activity. At VU, where there was less institutional
infrastructure and assessment was treated more as a compliance issue, departments tended to
have less assessment activity with a weaker link to teaching. However, there were pockets of
activity within some departments, driven mostly by the presence of enthusiastic faculty
champions who were respected by their colleagues and advocated for assessment’s value based
on their own experiences with it. I describe findings at this individual level in the next section.
[Figure 5 shows department-level change resulting from the interplay of institution-level factors (support structures; incentives and rewards; institutional culture around assessment and teaching) and department-level factors (faculty champions; departmental leadership; departmental culture; disciplinary influences).]
Chapter 6: Assessment and Teaching at the Individual Level
“It's totally based on personal volition. The thing with teaching is that ... the reason why
assessment is kind of important to give us a feedback mechanism. Because for a lot of things if
you don't have a feedback mechanism, you find it very difficult to carry on. That's the reason I
gave the students multiple quizzes, papers, they know where they are in the class, how they are
doing and what they need to improve. It's like a young girl and a young boy in a relationship. If
you have no feedback mechanism it gonna be very difficult. So each of the two need to give the
other some signals. Oh you are making good progress ... going on. Otherwise, one of them would
be hopeless, feel very frustrated, and then quit.”—quote from faculty member at UP
At the individual level, assessment influenced teaching in a number of different ways. In
some cases, formal program assessment led faculty to rethink aspects of their teaching. In more
cases, informal assessment or course/classroom assessment helped faculty understand what their
students were learning and make adjustments to their teaching accordingly. These classroom
assessments may or may not have been linked to program assessments or formal institutional
assessment efforts. Regardless, classroom assessment was an integral part of faculty teaching
practice across all departments and at both institutions that I studied. In this section, I first
describe some representative examples of how different forms of assessment influenced
individual faculty members’ teaching. Then I describe factors or characteristics that influenced
this relationship: individuals’ perceptions or narratives around assessment, prior experiences
with teaching and assessment, career stage, and presence of fellow champions.
How Assessment Type Influenced Individuals’ Teaching
In each interview, I spoke extensively with participants about their teaching, delving into
classroom strategies and activities, teaching philosophy, and assessment techniques. I used very
intentional language when asking about classroom assessment strategies; I asked each faculty
participant how s/he knew when students were learning (or not) in a particular course, rather than
using the word “assessment.” This language choice helped frame assessment as work that faculty
were already doing and helped them think about ways they were assessing that may not “count”
as part of the formal assessment process on campus. Even still, some faculty shared how formal
or program assessments had shaped both their understanding of whether and how students were
learning and their own teaching. In this section, I first describe examples of how formal or program
assessment shaped teaching, followed by examples of how informal or classroom assessment
shaped teaching. It is important to note that there is some overlap between the two types of
assessment—in some cases, for example, department-level assessment occurred in individuals’
classrooms, as when program-level outcomes were assessed as a part of a course. However, it is
still helpful to draw distinctions between the two, as there were often faculty who were quite
opposed to program-level SLOA who still used assessment extensively in their own classrooms.
Table 7 shows the different types of change that each type of assessment fostered among
individual faculty.
Table 7: Assessment Type and Individual-Level Changes
Formal Assessment/Program Assessment: introduction to and adoption of rubrics; shift to a more
outcomes-oriented approach to teaching, with increased attention to course goals, organization,
and alignment of curriculum.
Informal Assessment/Classroom Assessment: provided feedback on student misconceptions or
misunderstandings, which provoked reflections on how to change aspects of teaching to
facilitate greater understanding.
Formal Assessment/Program Assessment
Many faculty members described how the formal program assessment had changed their
own approaches to teaching. These faculty tended to be at UP, though there were a few examples
at VU. Specific types of change or evidence of change fell into two major categories:
introduction to and adoption of rubrics; and a shift to a more outcomes-oriented approach to
teaching, with increased attention to course goals, organization, and alignment.
First, several faculty mentioned that they had been introduced to rubrics as an assessment
tool through the formal SLOA process, yet also found them to be quite helpful for their own
courses. Rubric adoption often led faculty to change their assignment design or clarify their
course or assignment performance expectations, as this VU Humanities faculty member
described:
I've gotten more explicit with the whole learning assessment business. I have gotten more
explicit about laying out my expectations, so that the prompt is pretty clear, "I expect
your essay to do the following things." There is an implicit rubric there. It's not
formalized, but there are certain expectations that are laid out in the essay prompt. I
mean, it's the idea of rubrics, which simply is something I never heard the word 15 years
ago, right? That strikes me as clarifying expectations is a good thing.
While this faculty member said he did not use formal rubrics like those used in his department's
assessment process, he was essentially giving students a narrative version of a rubric by explicitly
laying out different levels of performance for assignments and the criteria that would indicate
that students were meeting a particular level of performance. A faculty member in a UP
department similarly described how rubrics had helped him with assignment design and
clarifying expectations:
I think exposure to rubrics was somewhat useful, but that was almost at the assignment
level. If students knew a rubric for how you're going to assess and grade better work ...
That's maybe another one of the small skills I picked up along the way. I should have ... I
think I forgot to ... That's undergraduate assessment.
Rubrics offered this faculty member a specific tool for both providing students with clearer
expectations and giving himself clearer guidelines for assessing student work. In one UP
department, the assessment coordinator described how even a faculty member who had been
somewhat resistant to assessment ended up finding rubrics to be a particularly useful tool for his
teaching practice:
…three or four semesters ago, he did say, "Oh, didn't you say something about a grading
rubric or something for some of these things?" I sent him the AAC&U link, to which he
said he needed to log in or something and he didn't want to. [Then] I sent him all the
PDFs, and then he mentioned to me the other day when we were talking about this
course, he was like, "Well, I was using those things that you sent me." He was actually
pregiving those students the rubrics and using them for scoring them.
Participation in assessment introduced these faculty members to rubrics, which in turn helped
them think differently about their assignments and clarity of their course expectations.
Another way that formal program assessment influenced faculty teaching at the
individual level is by encouraging a shift to a more outcomes-based approach to teaching. A
number of faculty remarked that engaging in assessment with their departmental colleagues
helped them think more carefully about goals and outcomes for their own courses—how they
connected to other courses in the program, but also how each course activity or class session
contributed to course learning outcomes. One UP faculty member described this as a process of
“jettison[ing]” activities and topics that did not track to course learning outcomes:
I really now take every lecture and look at those learning objectives, and say, "How does
this contribute to that?" And to maintain themes. I really like the idea of themes. I've
always liked that with lectures, especially once PowerPoint came on the scene, because it
made it a lot easier to organize and structure a lecture as a story with a beginning, with a
line and a theme, so I've just sort of made sure that, is this relating back to that? And
realizing, "No, not at all. Maybe I don't need to teach them this." But it's allowed me to
jettison. It's provided a way for me to jettison some stuff and let go of things that I've
liked teaching but I realized they either lead to often topics that are really hard to test
students on, and so just providing the focus for the whole course and creating sort of real
objectives that I'm trying to address in almost every lecture. I like that, so I think that's
been the big change, and then at the end with tests, quizzes and exams, really focusing
those on the bigger picture of these objectives.
Assessment—specifically thinking about learning outcomes—helped this faculty member ensure
that each class he taught was aligned to designated objectives and assess students’ progress
towards those objectives accordingly. Another UP faculty member described how assessment
helped him adopt “student-focused teaching” practices by constantly monitoring what students
were learning in his classes. He had already been primed to think about teaching in a student-
focused way through professional development with a master teacher at a prior institution, but
assessment helped him gain a better understanding of whether his strategies were working:
…I started saying, "Okay, what is my goal for this class that I'm teaching right now? This
lecture that I'm going to be giving, is it for me to look good, to become convincing that
I'm knowledgeable about the material? Or, is it really focused on what the students are
benefiting from taking this class, and sitting there, and listening to this lecture?"…Having
someone who I could tell was a good teacher mentoring me early on, when I was
nervous, and self-conscious, and inexperienced, helped me a lot. Now, later I tied that
assessment part to this. It's like, "How can you tell whether or not students are
benefiting?" That's where the assessment piece comes in. And that part, she didn't tell me.
I mean, it was part of the service that they were providing, because they would do an
assessment for you. They come in, and they do the videotaping, and you sit down, or they
interview some of the students. You know, so they did that for you, but when I got here,
and became really more involved with the assessment process, I realized I can do that
myself, and I should be doing that. So, you can have a consultant come in and help you
give an assessment, and they do it, but I can also do the assessment myself, in different
ways, throughout the semester. And it's a powerful tool, because it helps me feel like I
have a better grip of what's going on with the class.
Learning different ways to assess his students’ learning empowered this faculty member to
continue developing his student-focused approach to teaching. Another UP faculty member
emphasized this outcomes-based approach to teaching:
In your classroom, you need some type of way to understand that they've hit those
objectives. And then, the departmental one, I think it's important, too. Because if we're
going to have learning outcomes, which I think are kind of mandated from above, like we
have these, if we're going to have them, then we should be able to confidently state our
students are hitting these things….
Thinking about assessment and what students should be learning helped some faculty shift from
a more content- or information delivery-oriented approach to a learning- or outcome-focused
approach to teaching.
Informal Assessment/Classroom Assessment
Even in cases where faculty denied that formal SLOA had had any impact on their
teaching, I still found numerous examples of classroom assessment or informal assessment
shaping teaching. There were examples of this across all departments and at both institutions. In
most of the examples here, classroom assessment mainly operated to provide faculty members
with feedback on student misconceptions or misunderstandings and provoked reflection on how
to change aspects of their teaching to facilitate greater understanding. For example, several
faculty spoke about giving mini-quizzes throughout the semester or using clickers to get
immediate feedback on student understanding:
In some of the other classes I have a basic five question did you read it kind of quiz. If
you can't name one character in the short story, then you probably haven't read it. Even
those are surprising to me sometimes. Well what people just don't get as a point from
doing the reading or from a lecture. Now I realize that I think I'm crystal clear in a lecture
but the main argument needs to be brought to the forefront. Students need an outline in
the beginning and all that kind of stuff that I wouldn't know without reading their
responses.
Quizzes helped this VU faculty member identify student misunderstandings and prompted her to
reflect on how to structure her lessons in ways that enhance student learning. A faculty member
at UP similarly described her experience using clickers to assess student understanding in the
moment:
This semester with the iClickers if they're totally off track, if they answer an answer that I
thought was obviously wrong, then that gives me the option to know that I didn't explain
something properly…..Last semester when I gave the lecture on [a particular concept], I
asked them to, in their lecture workbooks, give me an example of [a case that fits the
criteria I laid out]. Most of them said [something that did not fit the criteria]. I knew that I
hadn't done my job and I hadn't explained [the concept] properly.
Results from her clicker questions gave this instructor feedback that students were not
understanding a concept and that she “hadn’t done [her] job,” leading her to try out new
examples to help facilitate student understanding. Another participant at VU described classroom
assessment as a continuous process, both formal and informal, that leads to ongoing reflection
and adjustment:
Well, I think that that's an ongoing process. It's rarely a matter of aha moments, but the
way the students perform, and what they do, and how they produce those papers, projects,
presentations, it's a constant. I'm constantly looking at what they're getting, what's getting
through, and what's not getting through, and I'm constantly adjusting how I lecture, what
things I assign, how I conceptualize the arc of the course. It's hard to say, "This is one
thing." This example, this course that I knew was experimental, I was constantly asking
the TA, I was asking the peer mentors. I was very open, "Are they confused? How
confused are they?" I was paying attention, and I did adjust very much to the feedback I
was getting. That's fundamental professionalism. Of course you pay attention to every
assessment to see how it's working. I mean, not to do that would be irresponsible.
This professor linked classroom assessment and reflection to professionalism, which aligns with
Schon’s (1983) descriptions of professional practice—constant, ongoing reflection leading to
changes in practice. Assessment can be a tool that facilitates reflective practice.
Finally, this faculty member at UP described assessment as a “bridge between teaching
and learning,” which helps facilitate reflection for both him and his students:
I have to reflect what's going on. Is something wrong with my presentation or it's just too
complicated? I need to spend more time on it. So that's the reason why assessment, this
kind of writing assignment is a kind of assessment. It's a bridge between learning and
teaching. Both students and teachers learn something from those kind of assessment
activities. Students know that, OK, I got the most important thing right. My learning
strategy, my studying tactics works. For those who failed miserably they need to reflect.
Obviously it's a red flag so either they improve themselves or they can come to me and
ask for suggestions. I always encourage them to do that. Because those are important
opportunities for you to reflect and also seek help. The reason I give you so many
opportunities to improve your grade because any assessment should be formative not
finalized. We are lifelong learners and a lot of the skills they learn should be
transferrable.
Even informal classroom assessment or assessment that is not necessarily explicitly linked to
learning outcomes or the formal assessment process helped faculty reflect on their teaching and
make changes to adjust and improve.
Themes Associated with Relationship at Individual Level
While nearly all faculty participants described changes to their teaching as a result of
informal or classroom assessment, formal/program assessment seemed to have little impact on
most individual faculty at VU, whereas it did impact individual faculty practice at UP. I
examined the data for themes associated with the relationship between assessment and teaching
at the individual level that could help explain these differences. In this section, I explore several
of the most salient themes that emerged: faculty attitudes towards assessment, prior experiences
with teaching and assessment, and career stage. Figure 6 shows these individual-level influences.
Figure 6: Individual-Level Change Levers (attitudes towards assessment, prior experiences with teaching and assessment, and career stage)
Attitudes towards Assessment
One theme that helped explain the link between assessment and teaching at the individual
level was faculty perceptions of assessment, or the narratives around assessment that they had
constructed. Many faculty also described their interpretations of others’ perceptions of
assessment, even if they did not necessarily agree with those perceptions. These descriptions all
helped form a number of overarching narratives around assessment. These perceptions or
narratives fell into roughly three categories: favorable, antagonistic, and neutral or ambivalent.
These attitudes are displayed in Table 8.
Table 8: Individual Attitudes towards Assessment
Favorable: useful as a feedback mechanism; a key part of professional obligations as teachers;
value at multiple levels (for individuals, programs, and institutions).
Antagonistic: bureaucratic and compliance-oriented, externally driven; an add-on or extra work;
provokes fear; inspires cynicism or skepticism; useless or has no impact.
Neutral/Ambivalent: assessment as an inevitability; profound ambivalence towards assessment;
lack of awareness of assessment, so no opinion.
Favorable attitudes. There were three major categories of favorable attitudes about
assessment described by study participants. First, faculty described assessment as useful for
getting feedback on student performance and on their own performance as teachers, and for
improving teaching and learning. One faculty member mentioned the value he found in
formative assessment as a feedback mechanism:
I know the term "formative assessment", and that is actually exams and quizzes to see
what the students have learned, to assess what they've learned. To me, that is the main
component. That's ultimately what determines what grade they get in the class, and I
would say it is the main feedback mechanism for me to know how the students are doing,
how the students compare to previous students.
Getting information on student learning throughout a course helped this faculty member track
student performance, which he identified as a helpful tool in his teaching practice. Another
faculty member described assessment as useful for helping him determine whether students met
course goals and whether he as an instructor met the goals he had set out for himself:
You assess students to determine whether they learned what you wanted them to learn.
You assess students to determine whether the course did what the course was supposed to
do. I mean, any time you implement an activity with a goal in mind, assessment is
determining to what extent that goal was reached.
Another faculty member also described assessment as a tool for monitoring his own
effectiveness as a teacher: “But beyond that, how do I really know whether I'm effective at it?
You've got to stick that assessment tool in there somewhere to get that.” This theme of seeing
assessment as a tool for teaching was emphasized by multiple participants:
As I said, it's a bridge between learning and teaching. It should be a tool. It's not a goal.
Our most important goal should be focused on the students learning. Anything else is
secondary.
This quote also demonstrates the attitude that many faculty had of assessment as a helpful tool
for understanding and improving student learning. As one faculty member remarked, when
thinking about assessment she focuses on “how it actually changes teaching and learning, which
is the whole point.”
Second, faculty described assessment as a key part of their professional obligations as
teachers. I referred to this in a prior section when describing a professor who thought of
assessment as a tool for facilitating reflective professional practice. It is worth repeating part of
that quote here:
I was paying attention, and I did adjust very much to the feedback I was getting. That's
fundamental professionalism. Of course you pay attention to every assessment to see how
it's working. I mean, not to do that would be irresponsible.
This professor saw assessment as a crucial part of his professional responsibilities as an
instructor. Other faculty also saw assessment as an integral tool for their professional practice as
teachers. For example, one participant explained, “my point of view now is if you don't use
[assessment], it's almost like you don't care whether what you were trying to get across was
effective.” In his mind, assessment is so deeply linked to teaching effectiveness that to not use it
is to show a disregard for the professional obligations of teaching.
Finally, some faculty described assessment as a practice with value at multiple levels—
for individuals, programs/departments, and institutions. This quote shows how faculty
understood assessment to work across levels of the institution:
…assessment, if it's done with the right spirit, is just a powerful tool at all levels of higher
education. At the institutional level on down. From the president’s office, to the dean's
office, to the department, and to the classroom, and I really say that, if I never teach a
class again, I would still use assessment in my community work. If I'm doing a
presentation in the community…I care whether or not I've done a good job, and if I didn't
do a good job, doing an assessment will help me do a better job the next time I do that
presentation.
Faculty who held this attitude saw assessment’s value in the classroom and beyond, for programs
and for the institution as a whole. Some faculty even stated that they understood how assessment
had value at the external level, for accreditors, in addition to its multi-level value within the
institution:
I think it's useful on any number of levels. On a broader scale I can understand why for
accreditation purposes, having assessment statements at least is useful, so when the
accreditation team comes down, they know what your Institutional Learning Objectives
are, and can measure what you're doing against those learning objectives. On our level, at
the department level, I think it is a useful reminder to faculty about what the program is
about, and what we're trying to achieve by these various course offerings that we do. On
a simple level, what are our goals in teaching this particular course? In effect, it translates
the traditional course objectives to student learning objectives. That's why I say, on
syllabi, there's sort of an integration or overlap between the two. And that's good. That's
useful, in terms of doing a course and measuring whether you're achieving whatever the
course goals are.
This attitude was probably the most broadly positive and enthusiastic, with faculty seeing wide
value for assessment in improving the teaching and learning environment. Faculty who saw
assessment as linked to teaching and teaching improvement tended to have one or more of these
favorable attitudes.
Antagonistic attitudes. While there were several positive attitudes that emerged, there
were also numerous negative attitudes that faculty described. More specifically, there were five
categories of antagonistic attitudes around assessment. First, faculty described assessment as
bureaucratic and compliance-oriented. Rather than being an authentic way to determine how
well students were learning and to make improvements to the teaching and learning environment,
assessment was seen by these faculty as a box-checking exercise, as described in the below
quote:
I think it's kind of a performance to be honest. I think it's completely performative. It's
unclear to me why we do it other than the fact that there was a box to check either
required by the campus or by some long-standing practice or norm that the department
had. It's unclear to me that the actual assessment revealed anything other than a bunch of
quantitative data, which frankly to me, given my commitments, I'm extremely suspicious
that that tells us anything very substantive. To me, it just felt like a performance that I
had to keep undertaking 'cause that's what you did.
Related to this idea of assessment as a performance to meet externally set requirements was a
perception that assessment imperatives were being driven by external stakeholders rather than by
faculty:
I can't remember exactly how this was presented, but as something that we have to do.
We have to be able to show that students are coming out differently, that at the end they
know stuff, they can do stuff. You know, this emphasis on what they can do and not what
they know. What they can do. I mean, I think this comes ultimately from the legislature.
Many faculty perceived state policymakers and accreditors as driving much of the assessment
activity on campus. Even some administrators held these attitudes, as evidenced in this quote
from an institution-level administrator: “Alright, I'm gonna be honest. The goals of assessment
are to meet the requirement of the assessment. To keep us accredited. It was a hoop-jumping
exercise.” Faculty and administrators who held this attitude towards assessment saw it as being about
“box-checking” or “hoop-jumping” rather than teaching and learning.
Second, assessment was seen by some faculty as an add-on or extra work, rather than as a
part of their normal workflow as faculty members. This perception typically resulted from
assessment requirements that were mandated at the departmental or university level and divorced
from the normal curriculum, rather than assessments that were designed by faculty themselves as
a part of existing coursework. For example, one faculty member described assessment as: “Busy
work—I just felt like [it’s] this additional thing that is going to take time….I kind of understand
when other faculty are not too enthused about it.” Because faculty were not always involved in
creating assessment plans, as this work was often relegated to assessment professionals or small
committees, they often did not see the connection between assessment and their teaching work.
Another professor described this attitude of assessment as an “add-on”: “Certainly for me, it
started out as [a sense] of frustration, because I didn't really want to do this. It's an add-on, added
burden in that sense, right? I didn't want to do it….” This faculty member saw assessment as
outside the scope of his normal teaching responsibilities. Also linked to this perception of
assessment as extra work was the feeling of faculty being overworked and “underresourced,”
with assessment as “just another thing we have to do.”
Third, a strong attitude of fear towards assessment was evident among many faculty.
This fear took several forms. Some faculty were fearful of being judged for the effectiveness of
their teaching and found wanting, as described by this faculty member:
I think a large part of the friction, when I think about it, is that I think colleagues felt that
they going to be judged for the effectiveness of what they did in the classroom….that's
something that faculty are really, really sensitive about.
Many faculty saw the push for assessment as being a sort of inherent criticism of their teaching
and were fearful about what would happen to them as a result:
…it's very easy for people to feel that they're being put upon or that they're being
investigated. We do have that happening by the way, where I think there's a big fear that
the legislature is trying to do what people are describing as bean counting. Where they're
saying we're not doing enough so the board of regents is now demanding to see our CVs
and what are we actually teaching.
As this quote demonstrates, often this fear of assessment was linked to perceptions of external
interference and concerns that legislators or board members might use assessment results to
attack faculty work or to demand heavier teaching loads. Fear of assessment was also linked to
fears about narrowing the curriculum or teaching to the test, as this professor described:
“assessment, like I'm wary of it just because the whole concept of teaching for the test, we've
seen that go wrong in many different contexts.” Faculty fear existed not just around potential
judgement of their performance or the contours of their jobs, but also around the scope of what
gets taught.
Fourth, many faculty expressed cynicism or skepticism towards assessment and its
purposes and intentions; this cynicism was often linked to mistrust of campus administration.
The words “cynicism” and “skepticism” came up over and over in response to questions about
faculty perceptions of assessment and its uses. For example, one faculty member described his
“complex” views of assessment, which include skepticism and suspicion about its purposes:
I've got a long, complex relationship with assessment. Skepticism, weariness,
acknowledgement of some useful potential if it's intelligently carried out. I associate it
with administrative overreach, with institutional jargon, with the industrial educational
complex, which sort of produces an incredible amount of careers and material, other
things too.
While this professor acknowledges some value in assessment when done well, he primarily
associates it with “administrative overreach.” This suspicion of administrators’ motivations
behind assessment was repeated by several faculty, as evidenced in this quote:
I think assessment, quite frankly, can be seen in a more cynical vein as something that is
imposed upon [us] by administrators as a means of deciding matters such as work load
and other employment issues, so in a very cynical light, it can be seen as a management
tool.
A general mistrust of university administration, along with assessment’s history of initially being
pushed by administrators on both campuses, fostered suspicion and cynicism among this group
of faculty.
A final negative attitude framed assessment as useless and having no impact. One faculty
member described the views of many colleagues who think assessment has little value for
improving student learning:
I'm not sure that most faculty who I have spoken to are convinced of the use of this on the
ground to students. There's the issue of governance and then there's the issue of just
resources. But then there's also the issue of if this actually going to make a difference for
our students? And I don’t think that…I think a lot of people are not convinced that it does
make a difference for students.
In addition to lack of impact for students, many faculty thought that assessment had made little
impact on institutional policy or practice, as noted in this quote: “What is unfortunate is that
people higher up, sort of like, put it, not on the back burner, but they just get it completely off
their mind. It apparently has had no impact.” Another administrator shared his opinion that “at
the end of the day, RAC is never going to read any of these” assessment reports, reinforcing this
perception that assessment has little value or impact, even among the parties who are most
invested in its execution.
Neutral or ambivalent attitudes. Finally, some faculty held more neutral or ambivalent
attitudes toward assessment. Three themes emerged that had a neutral or ambivalent valence.
First, many faculty saw assessment as an inevitability; some viewed this inevitability in a positive or
negative light, but most regarded it as simply a neutral fact, as evidenced in this quote:
…at a certain point, you realize, the arc of politics is moving in a certain direction, and
you either figure out how it has meaning for you or you just become that old curmudgeon
that exists in higher ed and maybe nowhere else in the world that just ... And I'm not the
kind that wants to be the ... What's the word I want? ... the person that stands between
some kind of new idea and the old way of doing things, and so I figured out how it had
value for me.
This participant ultimately found a way to make assessment have value for her because she
viewed its ascendance as inevitable. Other faculty spoke of this sense of inevitability with more
resignation than acceptance.
Second, there was a strand of attitudes expressing a profound ambivalence towards
assessment. In theory, these faculty were fine with assessment, but they had problems with the
way it was being implemented. Some faculty took issue with the methodologies used to assess
student learning, which was often linked to their disciplinary affiliations. For example, science
faculty thought assessment methods were not rigorous and quantitative enough, while humanities
faculty tended to find assessment methods to be too quantitative and reductionist (this
phenomenon is described in more detail in the previous chapter). These faculty accepted the need
for assessment; they just disagreed with the way it was being carried out. Other faculty remarked
that, while they supported assessing student learning, they struggled to accept the efficacy of the
structures that had arisen around the assessment process, as this faculty member described:
I think, in principle, it makes sense that we should use data to make iterative
improvements in how we're teaching, but it's not entirely clear how best to do that, or
whether we need a whole other layer of bureaucracy to achieve that.
This quote captures the ambivalence that many faculty felt about the bureaucracy that has
arisen around assessment. Another professor noted his discomfort with the “institutionality” of
the assessment process:
It seems much more institutional than student-based. I think we all ... I don't want to
speak for everybody, but we all sort of had that inclination as we went forth. There's just
something about the institutionality of it. The ways that all departments essentially come
up with SLOs, curriculum map, it's very cookie cutter approaches.
This participant remarked that he supported assessment and had found value in it, but that he
doubted that the overall institutional process was sophisticated or tailored enough to support
improvements in student learning.
Finally, many faculty just lacked awareness of assessment and so had neither positive nor
negative perceptions of it. For example, many faculty were not sure what I was talking about
when I asked them about assessment. Some conflated it with student evaluations of teaching
(SETs); others remembered some vague information about the departmental assessment process
after I described in more detail the focus of my study. One faculty member commented wryly on
the fact that she could not remember much about her department’s assessment process:
I'm actually pulling up a huge blank on even the paperwork around it. If I had
remembered that you were doing a survey on assessment, I might have looked. That
might be telling…
This participant drew attention to the fact that her lack of awareness was a meaningful data point
in and of itself.
Multiple simultaneous attitudes. While I present these attitudes as discrete categories, it
is important to note that nearly every participant I interviewed held multiple beliefs
simultaneously, including some that competed with one another. For example, many faculty
expressed positive attitudes towards assessing student learning at the individual level (in their
own classes), but were very cynical or fearful of assessment at the programmatic or institutional
level. Some faculty expressed multiple positive attitudes or multiple negative attitudes.
In addition to attitudes or beliefs about assessment, there were several other factors at the
individual level that drove faculty engagement with assessment and its link with teaching. I
describe these factors in the next two sections.
Prior Experiences with Teaching and Assessment
Faculty who had had positive prior experiences with pedagogical training or assessment
tended to be more open to assessment and to see its potential for facilitating teaching
improvement. Many of these faculty learned about assessment in pedagogical workshops or
training experiences and saw it as a useful tool in their collection of teaching techniques. For
example, several science faculty mentioned participating in STEM-specific pedagogical
trainings. One science faculty member described what he had learned about assessment there and
how he put that into practice:
Well, I wish I remembered all the terminology that I've been taught along the way. I
know the term "formative assessment", and that is actually exams and quizzes to see what
the students have learned, to assess what they've learned. To me, that is the main
component. That's ultimately what determines what grade they get in the class, and I
would say it is the main feedback mechanism for me to know how the students are doing,
how the students compare to previous students. And then I always forget the other form
of assessment. Because like I said, I have that one mechanism, asking the students for
feedback about the course in terms of structure and pace and those things, and so I try to
gather some of that information, but I also try to just talk to students and get some
informal feedback.
This faculty member was primed to understand and use assessment from his experience at the
STEM teaching workshop; he not only used assessment in his classes but also was supportive of
departmental and institutional assessment efforts as a result.
Other faculty described their experiences working with instructional coaches, who
introduced them to new ways of thinking about teaching and learning. For example, one
professor described how working with an instructional coach had exposed him to more “student-
focused” approaches to teaching, which in turn made him open to using assessment to track
student learning.
Still other faculty mentioned that they had a personal interest in teaching and pedagogy
and learned about it in a more self-driven way. These faculty read books and articles and sought
out webinars and workshops on research-based pedagogies and ways to improve student
learning. One faculty member, who was an assessment champion, described this sort of self-
directed learning:
I think I had been reading about flipped classrooms at the time, if that's what I had come
across, but just sort of conversations around teaching, and around lecture courses. And in
particular, research showing that learning in a large lecture is obviously not an ideal
environment.
As with the professor who worked with an instructional coach, reading about flipped classrooms
and active learning introduced this instructor to more student-focused approaches to teaching,
which in turn made her more receptive to using assessment.
Finally, some faculty described the influential role of assessment workshops in their
enthusiasm for and use of assessment:
During those workshops, we were given some literature to read, we attended
presentations, we looked at what had already been done, and I just started to find it really
interesting. I thought that it could help me become a better teacher, and I was interested
in whether or not what I thought was important to teach the students is actually getting
across to them.
As this professor explained, his early experiences with assessment helped him see it as a tool to
“help me become a better teacher.”
Faculty who had these positive prior experiences with teaching and assessment tended to be
more supportive of assessment and said they saw it as a useful way of monitoring
and improving their teaching.
Career Stage
A final theme that was associated with assessment’s influence on teaching at the
individual level was career stage. Numerous participants remarked that younger or more junior
faculty tended to be more interested in both undergraduate teaching excellence and in assessment
and its potential to contribute to teaching improvement. For example, one administrator
remarked on the increasing number of new faculty who are interested in undergraduate teaching:
... and some of that's also we've gotten new faculty who are more interested in
undergraduate education, and as we've gotten a more critical mass of people interested in
undergraduate education, we're able to push those conversations, so I have seen progress
even though our actual assessment ...
These new faculty were apparently also more open to assessment due to their interest in
undergraduate teaching. This theme was reiterated by a faculty member who also serves as a
departmental assessment coordinator:
Those two courses are both staffed by some like, I wouldn't say junior because one of
them is now tenured, but some of our newer faculty members who are also really working
hard on implementing modern teaching methods and things like that. I think that they're
gonna be also from a practical standpoint easy to work with…. From a practical
standpoint, they're also just gonna be easier to work with, and more willing to try new
things or get creative when it comes to ways to be sure that we're collecting evidence that
works.
Newer, more junior faculty were more excited about undergraduate teaching, as well as more
willing to experiment with innovative teaching methods and with assessment, as this new faculty
member remarked:
Almost every department in the college had at least one new hire this year. Two weeks
ago, [the Dean] had a welcome back lunch. One of the things that [the Dean] was talking
about is part of the reason why they're doing so much hiring of tenure track, junior
faculty is that they want to refresh the curriculum. They want to make sure that we're
cutting edge [in terms of teaching]. We were hired specifically to shake things up.
One faculty member took a more cynical perspective on the trend of junior faculty being
more receptive to assessment:
That's another factor I think when it comes to assessment. I think you can scare and
intimidate junior faculty easier than you can tenured faculty. I would expect assessment
to be taken more seriously and maybe be more effective at the assistant, untenured
professor level.
This professor attributed junior faculty’s openness to assessment to their subordinate position
and tenuous employment security rather than their innate passion for undergraduate teaching.
Another faculty member offered an inverse perspective, indicating that senior faculty tend to be
most resistant to assessment:
But I do notice that it's usually older faculty that usually have the attitude that they just
know how things work and how to do it properly and since they don't do real assessment
in their classes they don't really know if it's working or not but they tell themselves that
it's working. So yeah I think that's ... I've never heard a real, data based, objection to
doing this it's just either don't have time or you know this doesn't really add anything.
Career stage appeared to influence the extent to which faculty engaged in assessment, as well as
the extent to which they saw it as linked to teaching.
Summary and Conclusions
At the individual level, there was evidence that assessment had influenced teaching in a
number of distinct ways based on assessment type or level. Formal or program assessment
fostered adoption of rubrics, which faculty indicated was helpful for their teaching practice. This
type of assessment also sparked a shift to a more outcomes-oriented approach to teaching, with
increased attention to course goals, organization, and alignment of curriculum. Informal or
classroom assessment provided feedback on student misconceptions or misunderstandings,
which provoked faculty reflection on how to change aspects of their teaching to facilitate greater
student understanding.
Change (or lack of change) was driven by several different individual-level levers. First,
individual attitudes towards assessment influenced the extent to which it shaped faculty teaching.
Favorable attitudes were more facilitative, while antagonistic or neutral/ambivalent attitudes
tended to limit change. Second, prior experiences with teaching and assessment also shaped this
relationship. Faculty who had prior experience with assessment or with professional
development for teaching tended to embrace assessment as a tool to improve their teaching. And
finally, career stage also appeared to influence the link between assessment and teaching. More
junior faculty tended to be more interested in teaching and evidence-based pedagogies, as well as
assessment’s potential to improve teaching.
Chapter 7: Looking across the System: Alignment and Tensions
Each level of the system provided unique insights into how assessment can shape the
teaching and learning environment and contribute to teaching improvement. It is also important
to look at the system as a whole and reflect on issues that affect the entire system. Two such
issues stood out as I examined across levels of the system. First, the issue of alignment came up
repeatedly at University of the Pines. By alignment, I mean that actors at each level of the system
shared a common view of assessment as a tool for teaching improvement. This alignment played
out in distinct ways at each level but created a whole that was greater than the sum of its parts.
There was a distinct lack of alignment at Valley University, by contrast. Second, at both
institutions I noted a tension across levels in which formal or program assessment goals
sometimes conflicted with informal or classroom assessment ideals, even at UP. This chapter
examines these issues of alignment and tension in more depth.
Alignment
As noted above, assessment appeared to have the strongest link to teaching when there
was alignment across multiple levels of the system. Alignment happened around the idea of
assessment as a tool for teaching improvement rather than for compliance or accountability
alone, as well as more generally around perceptions of the value of assessment at each level. This
alignment was more evident at University of the Pines than at Valley University.
At UP, there had been intentional efforts by faculty to reframe assessment at the
institutional level as faculty-driven and linked to teaching and learning after an initial
administratively-driven, accountability-oriented foray. For example, as noted previously, the task
force on assessment defined assessment as “the collection and use of systematic information
about student learning—and the factors that contribute to it—for the purposes of understanding
and improving our educational practices and programs.” In most of the departments I studied at
UP, there was also a focus on assessment for improvement purposes, as well as a connection
with institution-level assessment activity (i.e. incorporating Institutional Learning Objectives
into courses). Several faculty at UP remarked upon the multi-level value of assessment:
No, I think it's useful on any number of levels. On a broader scale I can understand why
for accreditation purposes, having assessment statements at least is useful, so when the
accreditation team comes down, they know what your Institutional Learning Objectives
are, and can measure what you're doing against those learning objectives. On our level, at
the department level, I think it is a useful reminder to faculty about what the program is
about, and what we're trying to achieve by these various course offerings that we do. On
a simple level, what are our goals in teaching this particular course? In effect, it translates
the traditional course objectives to student learning objectives. That's why I say, on
syllabi, there's sort of an integration or overlap between the two. And that's good. That's
useful, in terms of doing a course and measuring whether you're achieving whatever the
course goals are.
This quote demonstrates alignment of a learning- and improvement-focused approach to
assessment across multiple levels at UP. Another faculty member similarly noted the value of
assessment across multiple levels and even posited that this recognition could be leveraged to
help support faculty buy-in:
So, you know, assessment, I think to get the buy-in from the faculty, they have to know
that it can be used for many different purposes. One is the university level, one is the
college level, one is the actual department, and then the individual instructor level. It can
be used for all of those things.
At UP, individual faculty who engaged with assessment saw their efforts reinforced by support
for assessment at the departmental and institutional levels. In the Social Science and STEM
departments, departmental leadership and culture fostered an environment in which assessment
was seen as a valuable activity linked to teaching; faculty champions inspired and motivated
their colleagues to get involved and modeled examples of how assessment could help improve
teaching and learning; and disciplinary influences drove the form and style of assessment. At the
institutional level, leadership promoted assessment as an important part of the work the
university did to improve student learning; an institutional culture that valued teaching meant
that engaging with activities to improve teaching and learning was seen as legitimate and
important; policies and structures made these values concrete; and training and professional
development from the assessment office helped faculty conduct and use assessment more
effectively. Each level was working in support of the others.
At VU, on the other hand, assessment at the institutional level was driven by more
compliance- or accountability-oriented purposes. There was neither a culture nor an
infrastructure in place to support or reward improvement-driven assessment or assessment
focused on teaching and learning. As a result, assessment activity in departments and individual
classrooms was necessarily limited in its scope and impact. For example, one faculty member
who attempted to advocate for assessment in her department found her colleagues to be
uninterested in engaging with assessment, as there was no reinforcement at either the
institutional level or department leadership level for this work. She noted that “people thought
something was wrong with me, [saying] why are you making me talk about this? Everyone is
already a good teacher.” While there were pockets of assessment activity happening in individual
classrooms and departments at VU, the lack of alignment across levels of the system around
assessment’s value and purpose limited the impact that it had on teaching.
Alignment in Action: Close-Up of Craig
In order to better understand what alignment across multiple levels of the system looks
like in practice, it is helpful to consider the case of one individual at University of the Pines,
known here as Craig. In this section, I describe how various levels of the system converged to
influence Craig and his relationship with assessment. His experience illustrates the ways in
which assessment influences the teaching and learning environment at each level of the system. I
do not mention every lever I identified at every level when describing Craig’s case; rather, I
provide a selection of particularly salient levers for Craig that illustrate the influence of
assessment on the teaching and learning environment across the system and show how they work
together to drive change. This example shows that while not every lever must be present to prompt change, the more levers that are present, the more support there is for change.
Departmental level. Craig is a faculty member in the Social Science department. He was
previously a departmental assessment coordinator, which sparked his initial interest in and
engagement with assessment. Craig remarked several times that the presence of a champion in
his department, Pat, helped nurture his nascent interest in assessment and his willingness to get
involved. Pat’s “enthusiasm” was contagious and made Craig excited to learn more about
assessment. He described the value of faculty champions in this way:
The other thing that I think will help make this better is to find other like-minded faculty
members who think assessment is good and important, and sort of not let the naysayers
and the cynics influence you. Because they’re always going to be there. And to just find
other people who are really…excited about it, who want to understand it, and then to say,
share it with each other, and this is how I benefited from it. This is how I think it can be
used for the institution.
Conversations with faculty colleagues who support and use assessment helped facilitate and
spread engagement with assessment through the Social Science department and influenced Craig
personally.
Additionally, Craig noted how the department as a whole was supportive of assessment
and used it to make changes. Departmental leadership was facilitative of the growing use of
assessment, and while not every faculty member in the department was a champion, Craig felt
that acceptance was growing: “You know, I haven't really surveyed faculty members, but I hope,
my hope is that there is a growing acceptance of the importance of assessment.” He also noted
that the culture in the department had changed to become more supportive of assessment. He
described the ways that assessment was used in the Social Science department to make change:
You can learn from an assessment whether or not a very important part of your mission is
being accomplished….And if the assessment shows that they feel like they didn't get any
training, then we've really failed in trying to achieve that mission. You know, it can't be
90 percent of the students didn't get it. You know, so you've got to figure out ... You may
have had intentions, good intentions to educate students on this very important part of
your field, but if they're not getting it, then you've got to ask yourself what are we not
doing to that?....And a good department will then incorporate the feedback, and modify or
revise their curriculum.
Craig saw Social Science as “a good department” that made changes to “incorporate the
feedback, and modify or revise their curriculum” in response to assessment.
Institutional level. Craig also described several ways that institution-level levers
influenced assessment on campus and for him personally. First, he noted the value of
institutional support and training, specifically from the assessment office. He described what he
learned attending workshops put on by the assessment director and her colleagues:
During those workshops, we were given some literature to read, we attended
presentations, we looked at what had already been done, and I just started to find it really
interesting. I thought that it could help me become a better teacher, and I was interested
in whether or not what I thought was important to teach the students is actually getting
across to them.
He further described the ongoing support he received from the assessment director, how she
helped him think through different types of informal and formal assessments, and how she
encouraged him to persevere with assessment and not get caught up in creating a perfect
assessment tool: “early on at one of the workshops, she said, ‘Just do something. Do anything.
It's better than doing nothing.’” This support resonated with Craig and motivated him to keep
trying new ways to assess student learning in his classes.
Additionally, Craig described how the assessment office and the institution displayed an
internal motivation for assessment and a neutral or facilitative attitude towards accreditation:
But I think even with the small assessment office that we have here, they've done a lot of
good work to try and convince people that it is important, and it's not just for show, or
just so that they can check something off of the accreditation requirements.
This internal motivation and improvement focus helped motivate Craig and other colleagues at
UP to engage in assessment and be open to assessment as a tool for teaching improvement.
Craig noted that these institutional levers had helped to spur culture change at University
of the Pines:
But I do think the culture is changing. I really do. I feel like from when I first started
hearing about assessment, you know…10 years later, there's more of a buy-in, and
acceptance of it. Not everyone of course, but more than before. People are understanding
what it is, and its value.
This culture change was felt across the institution, throughout departments, and by individuals
like Craig, who felt empowered and supported in their assessment work.
Individual level. Finally, many levers for change at the individual level were evident in
Craig’s case. He had significant prior experiences with professional development to enhance his
teaching and described himself as “the type of person that likes to take workshops that help me
become a better teacher.” He noted that assessment was the missing piece of the puzzle that
helped him in his “student-focused teaching” approach and that it was “a powerful tool, because
it helps me feel like I have a better grip of what's going on with the class.”
Craig also had favorable or supportive attitudes towards assessment, which he recounted
to me multiple times throughout our interview, as in this quote: “You know, so if you can tell,
I'm a big supporter of assessment.” He saw assessment as a useful feedback mechanism on his
own performance as an instructor and as a part of his professional obligations as a teacher; he
also mentioned that “assessment, if it's done with the right spirit, is just a powerful tool at all
levels of higher education.” Craig was an interviewee who explicitly noted that assessment has
value at multiple levels: “one is the university level, one is the college level, one is the actual
department, and then the individual instructor level.”
Craig’s prior experiences and favorable attitudes, along with the supports at the
departmental and institutional levels, helped facilitate changes to his classroom teaching. He was
already fairly outcomes-oriented in his approach to teaching, so assessment reinforced that
existing inclination rather than changing it. Craig noted that assessment had helped him become
more reflective in his practice, sparking deep deliberation on his teaching:
So, it's really a point of self-reflection, when you get the assessment results back, and to
look at it. I love to look at the results….What is it about the way that I'm teaching this
content that makes them not interested in it? Then, I contrast that with maybe if they
were, I'm teaching something, and they really do seem interested, and okay, so what's the
difference? What is different from when I was teaching it and they were disengaged, and
when I'm teaching it and they're more engaged?
This reflection on what his assessment results meant for his teaching led Craig to constantly
make adjustments and changes to help improve student learning.
Conclusion. Craig’s case demonstrates how levers at multiple levels of the system come
together to influence the relationship between assessment and teaching, as well as the ways in
which assessment can lead to improvements in the teaching and learning environment across the
system. These levers work in concert, and each level is important for driving meaningful change.
Tension: Influence of Assessment Type
My last research question dealt with whether the relationship between assessment and
teaching varies based on assessment type or method. For example, I posited that more authentic
types of assessment, such as rubrics or e-portfolios, might have more of an impact on teaching
than standardized or multiple-choice tests. Based on the literature, I expected to see various types
of assessment in use at the campuses and in the departments I studied. However, there was
actually very little variation in assessment type across the departments and institutions. Most
departments used rubrics of some sort to assess student learning in programs. At UP, rubrics
were also used to assess ILOs. The only examples I found of externally-created or standardized
exams were in the STEM department at VU, which used an exam created by their disciplinary
society, and one mention by an administrator at UP of an instrument called the Student
Assessment of their Learning Gains (SALG) that was being considered for use in the General
Education program. Additionally, at UP much of the resistance to the initial administratively-
driven push for assessment stemmed from a proposal to use a standardized test to measure
student learning at the institutional level (exactly which instrument was unclear). Other than
these three instances, I found very few different types of assessment in use and was thus unable
to satisfactorily investigate and answer this research question.
While I did not see the differences I anticipated by assessment type or method, I did find
tensions based on assessment at different levels of the system. Specifically, I observed a distinct
disconnect between formal/program assessment and informal/classroom assessment, even (or
especially) at UP, where there was alignment across the institutional and departmental levels (this disconnect was also evident at VU, but given the less sophisticated assessment infrastructure at the program and institution levels there, I focus on examples at UP here). I
found that some faculty who happily used classroom assessment were resistant to program
assessment, or sometimes did not even recognize what they were doing in their classrooms as
assessment. This disconnect actually resulted from some of the strategies and levers that
assessment leaders at UP used to build buy-in around assessment as a tool for improving the
teaching and learning environment. Under the leadership and guidance of the assessment
director, assessment was intentionally framed as a collective or collaborative activity at UP:
We promote assessment as a program endeavor not as an individual endeavor, and that
caused people some confusion, so if we start saying that assessment leadership in a
program should be weighted more heavily in the review process, does that mean
assessment is more of an individual and less of a collaborative activity?…We want
professors to treat assessment as a collaborative, departmental- or program-level activity,
so do we want to privilege a single person leading it and doing scholarship in it? Probably
not, but we do want to make it possible for the person's scholarship to include teaching
and learning scholarship, not just subject-area original research but research on teaching,
learning, and the use of assessment in that realm…
While this was a worthy message that helped promote buy-in among faculty who may have
otherwise been skeptical, it may have had the unintended consequence of decoupling classroom
assessment from program assessment. I asked faculty separately about assessment in their classes
and assessment in their programs or departments, and then probed about connections between the
two. While some faculty with the most sophisticated understanding of assessment were able to
draw those connections, most faculty did not see the link between the classroom assessment that
was an integral part of their teaching work and broader departmental assessment efforts. This
exchange with one faculty member clearly demonstrates the disconnect:
Interviewer: Okay. But, you don't really see a connection between like that level of
classroom assessment and the program assessment or departmental assessment that
you've been doing?
Participant: I see a methodological and theoretical connection…
Interviewer: Right. I mean more practically.
Participant: I don't see a practical connection. No.
Because assessment was advocated as a program-level, collective endeavor, explicit ties were not being built between classroom and program assessments, nor was there much work to promote assessment use in individual classes. As a result, some faculty did not even recognize
the work they did to understand what students were learning in their classes as assessment. They
saw assessment as separate from the testing and assignments they gave in their classes. If this
everyday work of faculty to gauge student learning or progress in class had been framed as
assessment, perhaps there would be more opportunities for faculty engagement with assessment,
more opportunities for changes to teaching at the individual level, and more openness to
assessment at other levels of the system.
Summary and Conclusion
This chapter examined two salient cross-level or system-level issues that arose over the
course of this study. Alignment around the idea of assessment for improvement strengthened
assessment’s ability to shape the teaching and learning environment at UP, while lack of
alignment detracted from its impact at VU. There were also evident tensions between
formal/program assessment and informal/classroom assessment at both institutions. These
tensions represent a missed opportunity to draw connections between assessment across levels.
Chapter 8: Discussion, Implications, and Conclusions
This multiple-case study investigated the ways in which assessment shapes teaching and
learning environments in six departments at two research universities. To conduct this case
study, I reviewed hundreds of institutional and departmental documents and conducted extensive
interviews with nearly 50 faculty, staff, and administrators across the two sites. Using a systems
theory perspective, I examined the link between teaching and assessment at multiple levels—
institutional, departmental, and individual—while also examining how different factors or levers
shape this relationship at each level.
The two institutions I studied were very different, yet some common lessons and themes
emerged across both. In this chapter, I first review and summarize the major findings from my
study. Next, I place these findings in the context of existing research and theory, describing how
they confirm, contradict, or extend various areas of the literature. Then, I discuss implications for
policy and practice and conclude by suggesting some directions for future research.
This study was among the first to empirically examine the ways in which assessment
shapes teaching in higher education. Further, it was the first to examine assessment at research
universities specifically, and the first to use a systems framework to examine assessment's relationship to teaching. In
taking this complex approach to such an under-researched issue, this study serves as a strong
addition to the literature on assessment and teaching in higher education.
Review and Summary of Findings
The first research question I posed asked: In what ways does assessment shape teaching
and learning environments at research universities? Specifically, I examined this relationship at
the institutional, departmental, and individual levels at two research universities. The second,
related research question asked how departmental and institutional policies and supports shape
assessment’s ability to improve teaching and learning contexts, as well as how external factors
shape the relationship. As my systems theory framework suggested, I found multiple types of
changes within each level of the system, as well as a multitude of factors associated with change
at each level.
At the institutional level, I observed several different types of changes to the teaching
and learning environment at University of the Pines but a general lack of change at Valley
University. Institution-level changes at UP included cultural changes (differences in language,
norms, and values), changes to institutional policies and structures related to teaching and
assessment, and changes to curriculum. At UP, there was evidence of a culture in which teaching
is perceived as a student-focused endeavor and assessment as a tool to determine whether
students are meeting course or program learning goals. Additionally, assessment activity on
campus resulted in a greater awareness among individuals of teaching and education as a
collective, institutional endeavor rather than solely the province of individual faculty in
individual classes. These cultural shifts were evident primarily through the language that faculty
used as well as the messages institutional leaders delivered about teaching, learning, and
assessment. While UP is a research university and research remains a top institutional priority,
there is some evidence of a cultural shift in which teaching is starting to gain more equal footing
with research. This shift has begun to be enacted in new policies and structures on campus.
Policy changes included changes to the course and program approval process to require clear links
to both institutional and program learning outcomes (ILOs and PLOs), as well as detailed plans
for assessing whether those outcomes were met; changes to the program review process that
require evidence of student learning and assessment plans; and addition of language about
student learning and assessment to the promotion and tenure guidelines. Changes to the
curriculum were driven primarily by implementation of ILOs, as well as intentional efforts to
rethink the general education curriculum, define its intended outcomes, and align them with both
ILOs and accreditor requirements.
Six major themes explain why these changes occurred at UP. First, assessment was a
faculty-driven process. The assessment leaders on campus held tenured faculty positions, and
oversight of assessment was the responsibility of a number of permanent faculty committees.
Second, institutional policies and structures supported assessment and its link to teaching. These
policies included changes to course and program approval processes to incorporate evidence of
student learning and enhanced program review processes that placed assessment data in a more
prominent role. Third, there was intentional messaging around assessment for improvement at
the institutional level from multiple stakeholders. Fourth, the attitude towards accreditation
among stakeholders on campus was not antagonistic. Rather, campus stakeholders tolerated
accreditation and had a more internal motivation for engaging in assessment. Fifth, there was
ample support and training around assessment. The assessment office conducted ongoing
workshops on a variety of topics for various skill levels and hosted an intensive summer
assessment institute to support faculty engagement with assessment. And finally, there was
leadership support for assessment as an activity to improve teaching and learning environments.
Campus leaders talked about the value they placed on assessment and teaching and also showed
this value through symbolic actions like attending assessment training and poster sessions.
At VU, conversely, where the institutional assessment infrastructure was much less
sophisticated than at UP, there was little evidence of change to culture, policy, or curriculum as a
result of assessment. Four primary themes at the institutional level at VU help explain this lack
of change. First, unlike UP, assessment at VU was an administratively-driven process. Second,
there was a lack of leadership support and alignment around assessment for improvement. Third,
there was generally a more negative or antagonistic attitude towards accreditation as driving
assessment. And fourth, there was a lack of structural supports and training.
At the departmental level, assessment shaped changes to the curriculum or program, as
well as changes to teaching approaches or strategies. These changes were more evident in departments at UP, though there were some changes in VU departments as well. Departments in
which assessment influenced the teaching and learning environment shared some common
characteristics. First, they had faculty champions within the department who were both
knowledgeable about assessment and well-respected by their colleagues. These champions’ use
of assessment (and the ways in which they shared their knowledge with colleagues) facilitated
buy-in among other faculty in the department and enabled changes to occur. Second,
departmental leadership played a meaningful role in promoting assessment and its link to
teaching. Supportive department chairs who valued assessment as a key part of the teaching and
learning process promoted its growth across the department. Third, a departmental culture that
valued and supported teaching was important for promoting assessment as a teaching-related
endeavor rather than a compliance-oriented process. And fourth, disciplinary influences shaped
faculty members’ openness to assessment, as well as the types of assessment they used and the
ways in which they used it.
At the individual level, assessment shaped teaching and learning mainly in terms of
introduction to and adoption of rubrics; providing faculty members with feedback on student
misconceptions or misunderstandings and provoking reflection on how to change aspects of their
teaching to facilitate greater understanding; and a shift to a more outcomes-oriented approach to
teaching, with increased attention to course goals, organization, and alignment. Factors
associated with change at this level included attitudes and beliefs about assessment, prior
experiences with teaching and assessment, and career stage. While my systems theory
framework also included motivation and appointment type as two potential influencers of this
relationship, I did not have adequate data to support their inclusion. Individuals’ motivation to
engage with assessment and change their teaching was not really evident in the single hour-long
interviews I conducted. I would need to follow individuals over time and perhaps collect
different types of data to adequately analyze faculty motivation. Unfortunately, I did not get an
adequate response rate on the Approaches to Teaching Inventory (ATI), which could have helped
me learn more about individual faculty members’ motivation, as well as their teaching practices.
Additionally, I was only able to interview two non-tenure-track faculty and so was unable to
determine how appointment type shapes the relationship between assessment and teaching.
Changes to the teaching and learning environment were most meaningful when there was
alignment across levels of the system around the idea of assessment for improvement. This
alignment was evident at University of the Pines but not at Valley University, primarily due to
differences in institution-level factors. While factors at every level were important for promoting
assessment’s ability to influence teaching, institutional factors (e.g. structures, leadership
support, intentional messaging, etc.) seemed to matter more than departmental or individual
ones, either supporting and enhancing or overriding and mediating these other levels. At VU,
conversely, where these factors were present to a much lesser degree, departments and
individuals were constrained in their ability to use assessment to improve teaching across the
system. This conclusion should not be taken to mean that departments and individuals do not
matter; indeed, the major takeaway of this study should be that complexity matters and that
concerted attention at each level is required in order for assessment to meaningfully improve
teaching, as Figure 7 shows. And there were departments and individuals at VU who were using
assessment to make change, most notably in the Humanities and STEM departments I studied.
Faculty champions, departmental leadership, and departmental culture all played a role in
enabling this change. However, the types of changes that occurred in these departments were
more limited in scope than those at UP due to the differences in institutional context and the lack
of alignment around assessment for improvement across the system.
Figure 7 shows the entire system in its complexity, including the types of change
occurring at each level and the levers that facilitated change at each level. The arrows in the
diagram indicate that each level had its own specific change levers, yet levels also influenced one another. For example, institution-level change levers (such as intentional
messaging around assessment for improvement) also influenced change at the department and
individual levels, as this type of messaging drove specific approaches to assessment in
departments and shaped individual attitudes towards assessment. Similarly, department-level
change levers influenced institutional- and individual-level changes. For example, faculty
champions in departments helped contribute to broader culture change at the institutional level
by getting involved in institution-wide assessment committees or activities. And individual-level
change levers also shaped change at other levels, as well; faculty attitudes, for example, while
shaped by institutional messaging and forces, also influenced the extent to which faculty got
involved with institutional and departmental assessment efforts. When all levels were aligned
around assessment for improvement, more changes occurred.
Figure 7: Diagram of the System
Complex Relationship between Assessment and Teaching Improvement
As Figure 7 indicates, this study demonstrated that the relationship between assessment
and teaching improvement is complex. Simply engaging in assessment does not automatically
lead to improvements in the teaching and learning environment. Rather, assessment can work when it is oriented around improvement; when institutional culture, policies, and practices support it; when there is leadership support at all levels; when departmental levers reinforce it; and when it is used as a tool for reflective practice. If most of these levers are not in place, however, or if assessment
is seen as primarily an exercise in compliance, as was the case at Valley University, assessment
work may occur but have little or no impact on the teaching and learning environment.
This finding comports to an extent with the recent work of Fuller, Skidmore, Bustamante,
and Holzweiss (2016) on cultures of assessment, which suggests that while certain factors or
levers can be present and assessment can be happening, what the authors term a “culture of
assessment” still may not exist. The authors define cultures of assessment as “institutional
contexts supporting or hindering the integration of professional wisdom with the best available
assessment data to support improved student outcomes or decision making” (Fuller, Skidmore,
Bustamante, & Holzweiss, 2016, p. 404). These researchers and, in fact, this entire line of
research on “cultures of assessment,” assume that there is a particular institutional culture that
must be in place in order to support assessment’s existence and institutionalization. This line of
research does not, however, explicitly link the existence of cultures of assessment with
improvements to teaching. The assumption underlying this and much of the other research on
assessment is that assessment just automatically leads to improvement, and if we can figure out
ways to effectively implement assessment (motivating faculty buy-in, changing the culture,
instituting new policies and practices, etc.) then improvement to teaching and learning will
follow. Many assessment scholars have not questioned the assumption that assessment leads to
improvement or studied how, exactly, assessment can improve the teaching and learning
environment. Conversely, another strand of commentary on assessment (typically not scholarly
or practitioner literature but rather op-eds or blog posts) assumes that assessment has no link to
teaching improvement whatsoever (Gilbert, 2018; Small, 2018; Worthen, 2018). These
commenters, typically academics in traditional disciplines who do not do assessment research,
offer their personal experiences and perspectives on the assessment movement, claiming that it is
solely an exercise in accountability and compliance and has no role in improvement of teaching
or learning.
The common thread linking these seemingly disparate perspectives is a lack of empirical
evidence on assessment’s connection to teaching improvement. Assessment scholars have
frequently studied the process of implementing assessment and more rarely studied the outcomes
of assessment; those who have attempted to study its outcomes have rarely found evidence of
assessment leading to change or improvement (Blaich & Wise, 2011; Banta & Blaich, 2011).
Anti-assessment voices typically have little or no research evidence to back up their contention
that assessment does not improve teaching—except a lack of affirmative evidence that it does.
This study is a first step towards providing the type of evidence that has been lacking in
this debate. It questions the assumptions of assessment scholars that assessment automatically
leads to improvements in the teaching and learning environment, while also questioning the
assumptions of assessment critics that assessment is entirely driven by compliance and
accountability. My findings show that neither perspective is entirely right or entirely wrong.
Assessment does not automatically lead to changes in the teaching and learning environment, as
Valley University’s experience demonstrated. Assessment can, however, lead to a number of
changes across a variety of levels if certain levers are in place, as evidenced at University of the
Pines. The systems theory framework that grounded this study allowed these findings to emerge,
as I intentionally analyzed data through the interpretive lens of each level of the system (Austin,
2011; Fairweather, 2008). I discuss the utility of this framework in more detail in the next
sections.
Using Systems Theory to Understand the Connection between Assessment and Teaching
Another unique contribution of this study is in its approach to conceptualizing teaching
and teaching improvement as a system with multiple interrelated levels and levers and examining
assessment within that systemic context. Such an approach, which examines teaching as a system rather than focusing on individual teachers, has recently emerged within the K-12 teaching literature (Hiebert & Stigler,
2017) and the literature on teaching improvement in undergraduate STEM (Austin, 2011;
Fairweather, 2008) but remains underexplored. Few empirical studies have examined multiple
levels of the teaching system in higher education simultaneously (Austin, 2011; Fairweather,
2008; Hiebert & Stigler, 2017). And no one has yet examined assessment within the context of
this systems perspective on teaching. Scholars and practitioners have looked at assessment at
different levels in isolation and sometimes even at one level within the context of another, but
never in this kind of multi-level, multi-site study (Drezek McConnel & Doolittle, 2012; Judd &
Keith, 2012). Especially in a decentralized and decoupled environment like the research
universities I studied, it is overly simplistic to think that assessment might influence teaching in
only one way or at one place within the system (Birnbaum, 1988; Kezar, 2013a). Using a
systems theory approach allows for an examination of complexity that has been largely absent
from the assessment literature to date.
As my theoretical framework suggested, various levers at each level of the system
influenced the relationship between assessment and teaching. Table 9 compares the levers
identified through my theoretical framework to the findings of this study. Many of these levers
have been previously identified as important in the literature, but prior studies have typically
only identified one or a few levers in isolation. Conversely, my study examined the “mutual
interaction of [these] elements in an organized whole” (Cummings, 1980, p. 3). Intentionally
studying different levels of the system individually and as a whole helps illuminate both the relationships between assessment and teaching at different levels and the various factors that strengthen or weaken this relationship within and across levels of the system.
Table 9: Presence of Change Levers Suggested by Theory

For each change lever suggested by the theoretical framework, the entry indicates whether the lever was salient in this study.

Individual-Level Levers:
- Prior educational experiences and beliefs about teaching and assessment: Yes
- Socialization around teaching and assessment: Yes
- Career stage: Yes
- Appointment type: NA (sample did not include enough diversity to determine)
- Motivation: NA (study design did not allow me to adequately assess faculty motivation)

Department-Level Levers:
- Departmental cultures: Yes
- Disciplinary cultures: Yes
- Leadership related to assessment and teaching (both chair and faculty leadership): Yes
- Policies and practices: Yes
- Professional development around teaching and assessment use: Not really salient at department level

Institution-Level Levers:
- Institutional structures around assessment and teaching: Yes
- Institution-level policies around assessment and teaching: Yes
- Institutional cultures in support of assessment and teaching improvement: Yes

External Influences:
- Accreditation: Yes
- State-level accountability policies: Yes

In examining this relationship across the system, my study also identifies areas of
teaching improvement that more simplistic studies might miss. Many studies of teaching
improvement have looked solely at classroom behaviors or pedagogical practices of faculty
members (Fairweather, 2008). Such studies ignore other elements of the teaching system that
impact what goes on in the classroom, such as curriculum, policies and regulations, rewards
structures, and culture. Through its broader conception of teaching improvement, this study has
shown that assessment can improve the teaching and learning environment in a number of ways
at the individual, departmental, and institutional levels. Interplay between and across levels is
especially interesting: there was very little perfect alignment, but rather a patchwork of different outcomes at different levels and sites. Despite this patchwork nature, the most meaningful
changes appeared to occur where there was some alignment across levels of the system around
the idea of assessment for improvement (Ewell, 2008).
In addition to highlighting the multitude of levers influencing this relationship at
multiple levels, my study also provides a unique theoretical contribution in that I used additional
theories within each level of the system as explanatory mechanisms. As I mentioned in Chapter
2, systems theory tells us where changes might occur and where this relationship plays out but
not how or why it plays out in certain ways and not others. In order to understand this how and
why, I overlaid a number of different explanatory theories across each level of my systems
theory framework. In the next sections, I discuss these various explanatory theories, exploring
the ways in which they facilitated understanding as well as the ways in which my findings
diverged from what the theories may have suggested.
Institutional Logics
It appears that different, competing institutional logics were operating at the two
institutions I studied, which reflects the broader divide in the assessment community I alluded to
earlier in this study. Ewell (2008) refers to this divide as two different paradigms of assessment;
I contend that these are actually distinct institutional logics that have driven the link between
assessment and teaching across the organizational field and are now playing out on campuses.
Institutional logics are the rules, norms, and values that signal legitimacy and guide behavior
across an organizational field; they “provide a link between individual agency and cognition and
socially constructed institutional practices and structures” (Thornton & Ocasio, 2008, p. 101).
Logics shape beliefs, attitudes, and behavior indirectly by dictating which values, practices, and
structures are legitimate in a particular field (Thornton, Ocasio, & Lounsbury, 2012).
Sometimes multiple logics emerge within a field and compete for dominance. In order for
a new logic to become dominant, it must have compelling content, penetration throughout the
field, compatibility with other institutional beliefs and structures, and exclusivity or lack of
contestation (Kezar & Maxey, 2014; Scott, 2008). If multiple logics exist and none has
compelling content, sufficient penetration, compatibility, or exclusiveness, none will become dominant in the field or guide individual action. In terms of assessment, I argue that the
competing assessment paradigms I described in earlier sections of this study (accountability vs.
improvement) are actually conflicting institutional logics that have competed for dominance over
the last several decades. Assessment’s relationship to teaching improvement has been tenuous
because of these competing logics. Under the logic of assessment for accountability, teaching
improvement is not the primary goal or value of assessment; rather, compliance with accreditor
demands is the animating impulse. Conversely, the logic of assessment for improvement has
teaching improvement at its very core.
These competing logics have been present since nearly the beginning of the assessment
movement in higher education (Ewell, 2008). Neither one has yet established complete
dominance across the field, as evidenced by ongoing debates around assessment in the popular
press and in academic circles. The two universities in this study serve as an interesting example
of how each of these different logics can become dominant at a particular institution and
influence it in distinctive ways. At University of the Pines, the logic of improvement was
dominant, and at Valley University, the logic of accountability was dominant. These logics
shaped distinct beliefs, norms, and practices on each campus, which were evident at each level.
For example, logics shaped individual faculty attitudes towards assessment; the logic of
accountability was associated with antagonistic attitudes, while the logic of improvement was
associated with more favorable attitudes. At the institutional level, logics manifested as
organizational culture. Institutional logics emphasize “the wider contexts within which cultures
originate,” linking external field-level phenomena to internal organizational cultures (Hinings,
2012, p. 99).
Organizational Cultures
In this study, culture emerged as relevant in two ways. First, as I described earlier in this
chapter, a strand of research on “cultures of assessment” posits that a particular culture needs to
be in place at an institution in order to foster effective implementation of assessment (Fuller,
Skidmore, Bustamante, & Holzweiss, 2016). My study took a slightly divergent approach, in that
I examined the extent to which assessment shapes teaching improvement, not just the extent to
which assessment is being implemented. I examined culture insofar as it either hindered or
facilitated assessment’s ability to improve the teaching and learning environment. The second
way in which culture was relevant for this study, which I describe in more detail below, was in
terms of the values that guided institutions as they made decisions and took action (or not) on
assessment and teaching improvement.
An organization’s culture is defined as “a pattern of shared basic assumptions learned by
a group,” which are expressed through a group’s values, beliefs, and norms (Schein, 2010, p. 18).
Different cultures can exist within organizations, either as subcultures or smaller groups
within the larger whole (such as departmental cultures) (Schein, 2010), or as multiple or
competing cultures at the same level of the organization (Martin, 1992). Each university in this
study held distinctive shared basic assumptions around assessment and teaching, which became
evident as I examined the structures, processes, and behaviors and espoused beliefs and values of
the participants at each site (Schein, 2010). These organizational cultures were shaped by
institutional logics—the logic of accountability at VU and the logic of improvement at UP.
First, institutional values and structures played a key role in shaping the influence that
assessment had on teaching. As a reminder, institutional values represent what is important to an
institution; a research university might have a value around excellence in research, for example.
Values are expressed through behaviors, processes, and structures at both the departmental and
institutional levels (Schein, 2010). In this study, I found that assessment both shaped and was
shaped by institutional values around teaching and learning. At UP, there was a stronger value
around the importance of undergraduate teaching; undergraduate education, while central to the
missions of both UP and VU, was more highly valued at UP. It was mentioned much more
frequently by faculty and administrators as something that was important to the institution, not
just to individuals. This value allowed assessment to have a stronger connection to the teaching
and learning process than at VU. At VU, conversely, research was emphasized more and
sometimes at the expense of undergraduate teaching. VU was more of a “striving” university
than UP, working to strengthen its research portfolio and prestige and advance in the rankings
(Morphew, 2002; O’Meara & Bloomgarden, 2011). While interviewees remarked that they had a
responsibility to effectively educate undergraduates, there was less of an institutional emphasis
on undergraduate teaching and learning than at UP.
Different values around the internal importance and nature of assessment were also
evident at each campus, driven by their competing institutional logics. As noted earlier, at UP
faculty had seized control of assessment on campus after an initial administratively-driven effort
in the early 2000s, with the goal of making assessment useful for faculty and using it to make
authentic improvements, rather than merely capitulating to demands of accreditors. This value
was evident in multiple ways—through the university’s assessment website, through frequent,
repeated messaging from campus leaders (both faculty and administrators) about the importance
of assessment for improving instruction, and through the language of faculty members that I
interviewed. Whereas UP had an institutional value around assessment as a faculty-driven
process designed to improve teaching and learning, VU had a value of assessment as an
administratively-driven activity designed for compliance purposes. This value was reflected
through the language of interviewees, as well as the lack of leadership attention to assessment.
These different values were evident in structures and policies at both institutions (Schein,
2010). At UP, stronger norms around teaching and stronger value of assessment as part of the
teaching and learning process were expressed through the assessment structures that were built
on campus, as well as the assessment policies that were established by faculty and
administrators. Having an assessment office staffed by tenured faculty who were dedicated full-
time to assessment made UP’s norms and values concrete. Additionally, the multiple permanent
faculty committees charged with overseeing assessment enacted the institutional values around
assessment as part of the purview of faculty. The fact that assessment was built into the program
review process and the new course approval process further contributed to the sense on campus
that assessment was valued by the institution. Support structures such as the summer assessment
institute, ongoing training sessions, and poster sessions provided tangible evidence that UP not
only valued assessment but also supported it. Faculty who engaged in assessment were
constantly bombarded with both messaging about assessment’s purpose (improvement) and
structures that supported their engagement in it. On the other hand, having an administrative staff
member who only spent about 1/3 of his time on assessment sent a different message about
institutional values around assessment to the VU community. Many faculty commented that the
assessment function was understaffed and that the director needed additional support in order to
make assessment meaningful. Additionally, a temporary, ad hoc committee charged with
overseeing assessment further reinforced notions that assessment was an add-on activity pushed by
external forces (accountability) rather than a core part of faculty work. The scarcity of institutional supports or rewards for engaging in assessment reinforced the notion that assessment was not
institutionally valued and was driven by accountability pressures. These structures and policies
both reflected and shaped organizational cultures.
The logics and cultures at each university were embedded in routines and structures and
became self-reinforcing (Schein, 2010). At UP, as more faculty engaged in assessment, even if
they were not necessarily initially enthusiastic, structures and policies perpetuated and upheld
institutional values around the importance of assessment for improving teaching and learning,
leading to additional, continuing changes to the teaching and learning environment. Conversely,
at VU, faculty who engaged with assessment did not find structures and policies that
supported their work. Rather, those individuals and departments interested in engaging with
assessment as a tool for improving the teaching environment had to fight against institutional
structures that prioritized assessment as a tool for compliance with accreditor demands. These
structures (and the lack of supportive structures) reflected institutional values and led to
continued marginalization of assessment as an influential lever in improving teaching and
learning.
Institutional culture concepts also played out at the departmental level, as evidenced by
the differences I saw across institutions regardless of department. For example, language about
assessment was replicated across departments at each institution. At UP it was more
improvement-oriented, whereas at VU it was more compliance-oriented. Institutional structures
and policies also facilitated or limited what faculty within each department were able to do with
their assessments. At VU, for example, lack of institutional assessment capacity limited the
extent to which departments were able to get personalized meaningful feedback on their
assessment plans or to customize their assessments to best fit the needs of their departments. The
fact that their assessment director could devote only a portion of his time to supporting learning
outcomes assessment meant that he encouraged departments to fit their work rigidly within the
templates that he provided so that they could meet accreditation requirements. As a result,
departments at VU tended to conduct assessment work that was less linked to teaching than
departments at UP.
Professionalization Theories
I also suggested that professionalization theories might help explain how assessment
influences teaching. Professionalization theories explain the extent to which disciplinary cultures
and professional identity influence the department’s approach to assessment and teaching
(Austin, 1994, 1996; Sullivan, 2004). I predicted that professional norms and values around
teaching and assessment might play a role in the extent to which faculty engage in assessment
and the ways in which they understand its relationship to instruction. This explanatory
mechanism was borne out to some extent, though institutional influences appeared to mediate
departmental ones. I did find some meaningful differences in approach to assessment and its link
to teaching by discipline. Specifically, faculty in humanities disciplines felt very differently
about assessment than faculty in the sciences, based on their disciplinary training and
background. Due to their disciplinary socialization and epistemological orientation, humanities
faculty generally felt that assessment ran counter to their identity and responsibility as humanists
(Linkon, 2005; MacDonald, Williams, Lazowski, Horst, & Barron, 2014). They were more open
to holistic or authentic assessments, for example portfolios or rubrics, but many humanities
faculty still felt that assessment was reductionist and encouraged a narrowing of the curriculum.
Faculty in the sciences, conversely, often thought that assessment was not rigorous or scientific
enough. Social scientists were more split on what they viewed as assessment’s strengths and
weaknesses and how it could help their teaching. These findings generally comport with existing
literature, which states that discipline matters for assessment and shapes it in distinct ways
(Ewell, Paulson, & Kinzie, 2011; Guetterman & Mitchell, 2016; Hutchings, 2011). Disciplinary
training shapes professional identity, which influences the ways in which faculty approach all
areas of their work (Clark, 1997). However, this line of research has often been interpreted to
mean that discipline or department matters more than institutions, and that the inherently
discipline-driven nature of the academic profession limits the extent to which institutions can
shape change. I did not find that to be the case. While disciplinary and departmental influences
certainly shaped the ways in which assessment impacted teaching, institutional factors appeared
to be foundational in determining whether assessment was linked with teaching improvement or
whether it was more accountability-oriented.
In addition to disciplinary or departmental influences, professionalization theories
contend that individuals’ status as professionals shapes their identity, beliefs, and behavior
(Austin, 1994, 1996; Sullivan, 2004). Sullivan (2004) notes that professions are “characterized
by three distinctive features: (1) specialized training in a field of codified knowledge; (2) a
measure of status accompanied by the autonomy necessary to independently determine and
regulate standards of practice; and (3) a commitment to support the public good and welfare” (in
Kezar, Maxey, & Holcombe, 2015, p. 39). While the professionalism of the faculty role has been
steadily eroding over time as contingency becomes increasingly common, the faculty role
remains ostensibly a professional one, at least for those faculty who are tenured or on the tenure
track (Kezar, Maxey, & Holcombe, 2015; Parsons, 1968; Sullivan, 2004). Key to assessment’s
link with faculty professionalism is Sullivan’s (2004) second point about autonomy. Many critics
of assessment claim that assessment diminishes faculty professionalism in that external
accountability pressures take away faculty autonomy (Champagne, 2011; Powell, 2011). Indeed,
I identified this belief among some of the faculty I interviewed. However, this belief was driven
by the logic of accountability, where external oversight and compliance are the animating forces
behind assessment. The logic of improvement, conversely, drove a perception of assessment as
part of faculty members’ professional responsibilities as instructors. This perception is more
easily understood through the lens of Schön’s (1983, 1987) theory of the reflective practitioner.
Assessment as Reflective Practice
Schön (1983, 1987) suggests that professionals, in the course of acting out their practice,
can reflect on their work using their expertise and experience, which leads to learning and
potentially to a change in practice. Schön notes that professionals can both “reflect-in-action,”
which happens in the course of practice, and “reflect-on-action,” which happens retrospectively
(Schön, 1987, p. 26). This reflection is sparked by a situation of surprise, unfamiliarity, or
dissatisfaction, in which the professional must grapple with a challenging practice situation. By
revealing what and how much students have learned in a course, assessment provides a mechanism for feedback and reflection on teaching. Assessment could spark surprise
or dissatisfaction if a student did not meet a particular learning outcome, for example, or if
students in a course demonstrated less learning than an instructor expected. These results could
trigger the reflective process for faculty, leading them to experiment with changes to their
teaching practice that may improve student learning. Several existing studies have found that
faculty who engage in reflective practice of their own accord see improvements in their teaching
(Kane, Sandretto, & Heath, 2004; McAlpine & Weston, 2000). One small study found that
classroom assessment facilitates reflective practice among faculty and promotes instructional
change (Steadman, 1998). Similarly, in the K-12 setting, scholars have found that data use,
particularly data on student learning, promotes instructional change (Coburn & Turner, 2012).
This study comports with these findings; I observed numerous examples of classroom-level
assessment provoking reflection and change at the individual level. Future research on
assessment and teaching improvement would benefit from using this theoretical lens.
Cognitive Theories of Belief Change
At the individual level, I had also speculated that engagement with assessment might lead
faculty to change their beliefs about teaching to become more learning-focused. Specifically, I
used the Cognitive-Affective Model of Conceptual Change, which predicted that assessment
would provide discrepant information to faculty that would provoke negative emotions or
threaten their identity as teachers, which would lead them to question elements of their teaching
and subsequently change their beliefs (Gregoire, 2003). This explanatory theory did not turn out
to be a useful way of understanding this issue for two primary reasons, one methodological and
one theoretical.
First, while I was able to elicit faculty beliefs and emotions about assessment and
teaching from my interviews, I was not able to adequately assess the extent to which assessment
results provoked negative emotions and subsequent belief change. In order to more effectively
determine whether this theoretical explanation was appropriate, observations of faculty as they
interpret assessment results or longitudinal interviews before and after conducting assessment
would provide more detailed information about faculty cognition in the moment of interpreting
assessment data, as well as any subsequent changes. However, I did find some evidence that the
CAMCC might not be fully appropriate for explaining what is happening when individual
faculty members engage with assessment and how it influences their teaching. As noted above,
the CAMCC posits that when faculty members receive discrepant information (such as
assessment data showing that students did not perform as well as expected), it triggers negative
emotions, effortful cognitive processing, and potentially dissatisfaction with one’s teaching
(Gregoire, 2003). I found some evidence of this explanation from multiple faculty who reflected
on instances when students did not learn something that faculty believed they had taught. This classroom assessment provoked some dissatisfaction and reflection on what might have gone wrong and how to teach the concept differently, a pattern consistent with the CAMCC's predictions.
However, I also found some instances when faculty received discrepant assessment information
and, rather than engaging with and reflecting on that dissonance, became anxious and fearful and shut down. Rather than using assessment to change their beliefs and practice, these faculty grew more resistant to assessment and its relationship to teaching, the opposite
of what the CAMCC would have predicted. This idea of faculty resistance is well-documented in
the assessment literature but typically is not linked to examinations of faculty cognition
(Hutchings, 2010). I did not have enough data to determine whether this resistant response was
more frequent when faculty dealt with program- or institution-level assessment as opposed to
classroom assessment, or under what circumstances it occurred most frequently. Future studies should examine in more depth the ways in which faculty think about
assessment and how their cognition shapes both their responses to assessment data and their
impressions of its value for teaching. Concerted attention to the level of assessment (classroom,
departmental, institutional) would also likely highlight differences in how assessment affects
faculty beliefs.
Implications for Policy and Practice
These case studies at Valley University and University of the Pines provide many
valuable lessons for faculty and staff responsible for assessment on other campuses, specifically
on research university campuses. This study reinforced the tension between assessment for
accountability and assessment for improvement that scholars and practitioners have long
identified (Ewell, 2008). One campus, University of the Pines, was able to manage and balance this tension effectively by focusing on assessment's potential for teaching improvement and aligning multiple levels of the system around this goal. UP created this alignment by pulling different levers of
influence at different levels of the system. These levers included intentional efforts by faculty to
reframe assessment at the institutional level as faculty-driven and linked to teaching and
learning; similar departmental efforts to communicate that assessment was improvement-
oriented and linked to teaching; leadership support at multiple levels (institutional, departmental,
and grassroots faculty champions); support and training that reinforced these messages; and
policies and structures that formalized these values. Similar levers exist across all campuses and can promote assessment's link to teaching if deployed properly.
First, assessment directors should recruit faculty champions who can give examples of
how assessment has helped them improve their teaching. Numerous articles on how to promote
the spread of assessment on campuses have emphasized the importance of faculty buy-in and the
powerful role that champions can play (Hutchings, 2010). However, many of these articles focus
on recruiting champions to persuade fellow faculty merely to engage in assessment, rather than on champions who can demonstrate concretely how assessment work has affected their teaching.
Intentional recruitment of champions from different disciplines (i.e. humanities, social sciences,
sciences) would be especially valuable, as preferred pedagogical and assessment styles (as well
as beliefs about teaching and assessment) tend to differ by discipline.
Additionally, the assessment planning process can prove nearly as valuable to faculty,
programs, and institutions as the act of actually conducting assessment and collecting data.
For many faculty at the institutions I studied, the curriculum mapping process that followed
the creation of SLOs and preceded actual assessment was one of the most enlightening and engaging
parts of the assessment process. As Jankowski and Marshall (2017) have noted, curriculum
mapping can lead to a more integrative view of student learning, promoting alignment across
previously disparate or unconnected courses. The assessment planning process, including
curriculum mapping, introduced a new way of thinking about curriculum to the faculty in the
study—from one focused on curriculum as content coverage to one focused on curriculum as
learning-oriented. Rather than thinking first about what disciplinary concepts they should teach,
faculty instead were forced to think about what they wanted students to learn—a subtle shift that
nonetheless proved powerful to many who participated. By thoughtfully reflecting on what
students should learn and figuring out where in the curriculum particular concepts or skills are
covered, faculty can discover gaps or redundancies in the curriculum that may have otherwise
gone undetected. While much of the drive for assessment comes from the desire to obtain data
about student learning and make changes based on that data, I suggest that many changes might
be made before any data are collected at all. The assessment planning process should be
recognized as an important end in itself, rather than just the means to conducting assessment.
Next, thoughtful attention to structures and policies at the institutional and departmental
levels is critical for supporting the spread of assessment in a way that explicitly focuses on
connections with teaching improvement (Kinzie & Jankowski, 2015). Concrete actions to change
policies and structures can signal that an institution values assessment and can also clarify how
assessment is defined on campus (for improvement instead of for accountability, for example).
Structures and policies both reflect institutional and departmental cultures and continually
reinforce and shape these cultures. The ways in which assessment responsibilities were
structured and delegated at each institution in this study powerfully shaped campus-wide
approaches to assessment, either enhancing and facilitating meaningful assessment or limiting
assessment’s value and utility for faculty, leaders, and institutions. There are several examples of
structures and policies that institutions could implement to strengthen the connection between
assessment and teaching. First, creating a permanent committee as part of the faculty governance
structure that is responsible for assessment puts it squarely within the purview of faculty, rather
than making it an administrative responsibility. By creating a faculty committee with assessment
oversight, institutions send the message that assessment is an instructional endeavor and thus
should be governed by faculty. Making the committee permanent signals that assessment is an
ongoing and central part of faculty work rather than an activity to be undertaken once a decade
preceding an accreditor's visit. Another interesting and valuable idea is to designate the assessment director position as a faculty role; this could be a tenured, tenure-track, research, or clinical appointment, but having a faculty member in such a position forestalls much of the criticism from campus faculty that assessment is an administrative enterprise run by people with little understanding of teaching or faculty work.
Leaders across the university can also greatly influence the degree to which assessment is
linked to teaching on campus, and the extent to which institutional and departmental cultures
support this link (Ewell & Ikenberry, 2015; Peterson & Augustine, 2000). Taking a systems
approach to change can help leaders at all levels ensure that they are thinking broadly about how
to connect assessment with teaching improvement across multiple levels of the institution. In
addition to supporting and implementing the policies and structures I described above, leaders
should pay careful attention to the messages they communicate about assessment and make sure
to emphasize that it is an integral part of the teaching and learning process rather than a task to
be completed for accreditation. Leaders should communicate clearly that they value
assessment and data about student learning, that they value good teaching, and that they see these
activities as indelibly intertwined. This communication should be explicit, but leaders should
also pay attention to the implicit messages they might be communicating to campus stakeholders
through other actions. For example, university leaders may say they value teaching but then
regularly send emails about research honors and awards, never doing the same for teaching
honors and awards. Leaders should think carefully about how their actions may reinforce or
undermine the messages they communicate to faculty and staff. In terms of reinforcing
messaging around the importance of teaching and assessment, leaders can take such actions as
attending assessment presentations and awards ceremonies (as happened at UP), celebrating
achievements in teaching, and publicizing and rewarding examples of assessment work that has
led to teaching improvement on campus. Messages and actions from campus and departmental
leaders can stimulate and sustain faculty engagement in assessment absent any concrete
incentives or rewards.
The need to engage departments and individual faculty and to support and respect their
professional expertise as they engage with assessment also emerged as a major implication of
this study (Schön, 1983). In order for assessment to be seen as a key element of professionalism
and a tool for reflective practice, it needs to be valued and incentivized by professional peers.
More effort is needed to promote the value of assessment for reflective scholarly work on
teaching and to build its legitimacy as a necessary element of professionalism. Additionally,
assessment leaders should be mindful of epistemological and pedagogical differences among
disciplines that may drive different approaches to teaching, learning, and assessment. They
should also consider how best to support a variety of departmental stakeholders as they engage in
assessment—not just departmental assessment coordinators. Department chairs are a key lever in
determining the extent to which assessment influences teaching at the departmental level.
Campus leaders should be more intentional about getting buy-in from department chairs and
ensuring they are aligned around a vision of assessment for improvement.
This study exposed a critical tension for the assessment community to consider—the
tension between individual/classroom assessment and program/department assessment. Much of
the existing assessment literature treats these different levels of assessment as complementary,
yet I found that they can actually be in conflict with one another. Focusing on program or
departmental assessment emphasizes assessment as a collective activity, which is less threatening
to individual faculty and may promote greater investment and buy-in (Cain & Hutchings, 2015).
There is also considerable value in assessing and changing entire programs rather than just individual
classes. However, when we emphasize program assessment, individual classroom assessment
can be inadvertently deemphasized and devalued, as well as disconnected from the larger
assessment movement. Faculty in this study often did not connect the assessment they did in
their individual classes with the “official assessment” they did for their department or for the
institution. Yet, even those faculty who were vehemently opposed to “official assessment” did
some sort of check on student learning in their own classes, whether formally or informally.
They used the results of these assessments to reflect on their teaching and sometimes make
changes. Assessment professionals should think critically about how to honor and lift up the
work that faculty already do to assess student learning in their classrooms, as well as how to
better reflect and connect this work to larger assessment projects (Cain & Hutchings, 2015). The current disconnect represents a missed opportunity for strengthening the link between assessment and teaching, as
well as a missed opportunity for getting faculty buy-in. The assessment ‘movement’ or
‘discipline’ has often been clumsily interpreted and communicated to faculty in ways that
devalue the assessment that is already happening in classrooms. How can we better
communicate the improvement purpose of assessment to faculty whose views have been soured by the
accountability paradigm? How do we help spread effective practices and approaches across
institutions? How do we support campuses in making big changes? And how can we
communicate to both internal and external stakeholders that assessment is a part of the teaching
and learning process and not just about accountability? I hope that these questions can guide
ongoing conversations and reflections among members of the assessment community as we think
about how to facilitate assessment that supports teaching improvement.
Finally, higher education as a field must resolve, once and for all, the tension between the two assessment paradigms, or institutional logics, of accountability and improvement. This study
shows that the improvement logic supports assessment’s ability to improve teaching, while the
accountability logic detracts from assessment’s role in teaching improvement. In order for
assessment to support teaching improvement, stakeholders across the field must align around the
improvement logic. However, accreditors and policymakers play a significant role in
perpetuating the tension between these logics when they push for increased accountability for
assessment data. These actors in particular should reflect on how accreditation or state funding
requirements currently promote compliance-oriented assessment and how these requirements
could be changed to support more authentic, improvement-oriented assessment. If assessment
becomes increasingly animated by accountability, it is unlikely to be a useful tool to support
teaching improvement in the future.
Suggestions for Future Research
Several potential avenues for further research emerged as a result of this study. First, the
assessment community would benefit from a better and more detailed understanding of exactly what kinds
of assessment are happening at which levels of the system. NILOA conducts a regular survey on
assessment every three to four years, but it does not disaggregate its data to show which institutions are conducting program assessment or institutional assessment, nor does it specify which types of assessment are happening at which levels (classroom, program, institutional, and so on). The survey also generally targets only top academic administrators and thus lacks a faculty perspective on what types of assessment are actually happening on campus and how widespread they are.
While NILOA provides the most comprehensive accounting of the extent to which
assessment is practiced across institutions of higher education in the United States, there is still
a dearth of high-quality multiple-case qualitative studies of how assessment plays out on campus.
NILOA’s studies indicate that assessment is happening, but not the extent to which it is
authentic, meaningful, or linked to teaching. Additional large qualitative studies of multiple
institutions could provide a more comprehensive picture of assessment quality, not just
assessment spread.
Additionally, it would be interesting to examine how assessment influences teaching
within different disciplines, as well as at different types of institutions. A study comparing
multiple science disciplines, for example, might flesh out my findings and lead to deeper insights
into how scientists use assessment in their teaching. The link between assessment and teaching is
almost certainly stronger at more teaching-focused institutions such as community colleges or
liberal arts colleges; a study examining these institutions could reveal additional ways in which
assessment influences teaching and highlight similarities or differences across institution type.
There are also several avenues that quantitative researchers could take to continue this
line of research. Greater use of the Approaches to Teaching Inventory (ATI) could be valuable in
determining whether teaching beliefs are linked to assessment participation. If scholars could
identify a department or institution that is in the beginning stages of implementing assessment, or
one that has received a warning from an accreditor about inadequate assessment, they could
conduct a pre- and post-test and approximate a more causal understanding of this phenomenon.
Quantitative methodologies that take a more complex approach, such as structural equation
modeling (SEM), would be especially useful in exploring the relationship between teaching and
assessment. SEM can determine whether certain factors are more salient than others, how
particular elements are linked, or whether there are indirect relationships between some of the
factors I have identified as influential here.
Future researchers interested in this topic should intentionally study institutions using
different assessment instruments or methods (e.g., the CLA vs. VALUE rubrics) or institutions that
are using multiple methods to see how they use these assessments and data differently. Studies
that intentionally select sites based on assessment instrument/method could facilitate a stronger
understanding of whether certain assessment types support teaching improvement more than
others. Additional study of the tensions between classroom and program assessment would also
help further our knowledge of the ways in which each type of assessment shapes teaching
differently and how they could be connected for greater impact. Further research on the formal
and informal classroom assessments that are already happening in most faculty members’
classrooms would also benefit assessment scholars and practitioners as they work to build faculty
buy-in around assessment’s ability to improve teaching and learning.
Finally, more theoretically-driven empirical studies of assessment in higher education are
necessary. The vast majority of existing assessment research is housed in practitioner journals or
assessment-specific publications, contributing to a widespread belief that assessment is not
scholarly or a legitimate field of study (Gilbert, 2018). The lack of rigorous research on the link
between assessment and teaching improvement means there is a significant gap in our
knowledge about how this important and increasingly prevalent process functions. Additional
studies using systems theory would help support a stronger understanding of the different ways
that assessment shapes teaching across the teaching system, as well as how it compares to other
teaching improvement initiatives. Studies that embrace complexity help ensure that we do not
continue to offer simplistic solutions to complicated challenges such as improving teaching in
higher education (Austin, 2011; Fairweather, 1996; Senge, 1990). Several additional theoretical
approaches also appear to have promise for helping explain the ways in which assessment can
shape teaching at various levels of the institution. As I mentioned, there is already an emerging
body of research on the role of organizational cultures in promoting assessment, but this line of
work is typically limited to understanding how culture promotes implementation of assessment
and not its link to teaching. Additional studies with this specific aim could be helpful in further
honing the types of cultural elements that seem to promote assessment’s ability to improve
teaching. Scholars should also consider using an institutional logics framework to further
examine how field-level influences are hindering or facilitating the link between assessment and
teaching. Further, a reflective practice framework could interrogate the contours of how
assessment provokes reflection on teaching practice and further examine the tensions around
professionalism that assessment surfaces. This topic is ripe for theoretically driven research; more such empirical studies will continue to deepen our understanding of this underexplored link.
Conclusions and Reflections
Despite more than thirty years of assessment's ascendance in higher education, there has
been remarkably little empirical research on assessment’s relationship to teaching improvement.
This study was the first to examine how assessment influences the teaching and learning
environment at research universities—institutions which have typically lagged in both their
attention to undergraduate teaching quality and their implementation of assessment. The
relationship between assessment and teaching improvement looked very different at the two
universities I studied. One institution had a focus on assessment for improvement across all
levels of the system, and multiple changes to the teaching and learning environment resulted.
The other institution was more compliance- and accountability-focused and saw fewer changes
to the teaching and learning environment. These results almost perfectly, if perhaps
inadvertently, capture the heart of assessment’s most enduring challenge—how to balance the
need for accountability with the desire to make assessment meaningful and use it to improve
teaching and learning. Valley University is an example of what happens when the logic of
accountability animates assessment work on campus. Assessment occurs—learning outcomes are
created and measured, reports are written and submitted—but meaningful changes to teaching do
not result. University of the Pines represents what is possible when the logic of improvement is
dominant. Changes to culture, policies and practices, curriculum, and pedagogy can result at the
institutional, departmental, or individual levels.
Based on these results it appears that assessment has the potential to improve teaching at
research universities, but realizing that potential is complicated. Teaching improvement does not automatically result
once institutions engage in assessment. Rather, a particular approach to assessment can foster
teaching improvement, and attention to multiple supportive levers across all levels of the system
is necessary to drive change. University of the Pines leveraged policies and structures, leadership
at all levels (faculty, departmental, institutional), strategic communication, training and support,
and cultures to create an environment in which assessment fosters teaching improvement. The
creation of this environment took ten years. Still, challenges remain, chief among them differential engagement across departments, the tension between program and classroom assessment, and the
pervasive influence of the logic of accountability. The legacy and continued presence of
accreditation as a driver of assessment is a challenge that universities interested in engaging in
this type of assessment must work carefully and thoughtfully to overcome. University of the
Pines has paid repeated, consistent, and focused attention to aligning stakeholders around the
value of assessment for improving teaching and learning to counteract these forces.
Ultimately, while accreditor demands may be driving the spread of assessment into a
majority of institutions of higher education (Jankowski, Timmer, Kinzie, & Kuh, 2018), it is
precisely this driver that may detract from assessment’s ability to improve teaching at scale.
Unless institutions can use the impetus of accreditation as an opportunity to build consensus
around a shared commitment to assessment for improvement, as University of the Pines was able to do,
they are unlikely to see meaningful changes to teaching. Hopefully, lessons from this study can
inform institutions wanting to emulate UP’s model so that more universities can find meaning
and value in assessment as a lever for improving the teaching and learning environment.
References
AAU Undergraduate STEM Education Initiative (n.d.).
https://www.aau.edu/education-service/undergraduate-education/undergraduate-stem-
education-initiative
Adelman, C., Ewell, P., Gaston, P., & Schneider, C. G. (2014). The degree qualifications
profile. Indianapolis, IN: Lumina Foundation for Education.
American Academy of Arts & Sciences (2017). The future of undergraduate education: The
future of America. Retrieved from:
https://www.amacad.org/multimedia/pdfs/publications/researchpapersmonographs/CFUE
_FinalReport/Future-of-Undergraduate-Education.pdf
American Historical Association (n.d.). Other perspectives articles on assessment and outcomes.
Retrieved from: https://www.historians.org/teaching-and-learning/teaching-resources-for-
historians/approaches-to-teaching/other-perspectives-articles-on-assessment-and-
outcomes
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college
teachers. San Francisco: Jossey-Bass.
APA Project Assessment (n.d.). Welcome to Project Assessment. Retrieved from:
http://pass.apa.org/?_ga=2.141106747.821214220.1532039128-1566892648.1532039128
Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses.
University of Chicago Press.
ASA Task Force on Assessing the Undergraduate Sociology Major (2005). Creating an
effective assessment plan for the sociology major. Retrieved from:
http://www.asanet.org/sites/default/files/savvy/images/asa/docs/pdf/Task%20Force%20o
n%20Assessing%20Undergraduate%20Major.pdf
Assessment Leadership Academy (n.d.). Retrieved from: https://www.wscuc.org/ala/overview
Astin, A. W. (1977). Four critical years: Effects of college on beliefs, attitudes, and
knowledge. San Francisco: Jossey-Bass.
Astin, A. W., & antonio, a. l. (2012). Assessment for excellence: The philosophy and practice of
assessment and evaluation in higher education (2nd ed.). Lanham, MD: Rowman &
Littlefield Publishers.
Austin, A. E. (2011). Promoting evidence-based change in undergraduate science education.
Washington, D.C.: National Academies National Research Council Board on Science
Education.
Austin, A. E. (2002). Preparing the next generation of faculty: Graduate school as socialization
to the academic career. The journal of higher education, 73(1), 94-122.
Austin, A. E. (1996). Institutional and departmental cultures: The relationship between teaching
and research. New Directions for Institutional Research, 1996(90), 57-66.
Austin, A. E. (1994). Understanding and assessing faculty cultures and climates. New directions
for institutional research, 1994(84), 47-63.
Austin, A. E., Sorcinelli, M. D., & McDaniels, M. (2007). Understanding new faculty
background, aspirations, challenges, and growth. In The scholarship of teaching and
learning in higher education: An evidence-based perspective, 39-89.
Bandy (n.d.). Peer review of teaching. Retrieved from:
https://cft.vanderbilt.edu/guides-sub-pages/peer-review-of-teaching/
Bane, C. L. (1925). The lecture vs. the class-discussion method of college teaching. School and
Society, 21, 300-302.
Banta, T. (Ed.) (2007). Assessing student learning in the disciplines: Assessment update
collections (Vol. 4). John Wiley & Sons.
Banta, T. W., & Blaich, C. (2011). Closing the assessment loop. Change: The Magazine of
Higher Learning, 43(1), 22-27.
Banta, T. W., & Palomba, C. A. (2014). Assessment essentials: Planning, implementing, and
improving assessment in higher education. San Francisco: Jossey-Bass.
Banta, T. W., Ewell, P. T., & Cogswell, C. A. (2016). Tracing assessment practice as reflected in
Assessment Update (NILOA Occasional Paper No. 28). Urbana, IL: University of Illinois
and Indiana University, National Institute for Learning Outcomes Assessment.
Barr, R. B., & Tagg, J. (1995). From teaching to learning—A new paradigm for undergraduate
education. Change: The magazine of higher learning, 27(6), 12-26.
Bay View Alliance (n.d.). http://bayviewalliance.org/
Becher, T. (1987). Disciplinary discourse. Studies in Higher Education, 12(3), 261-274.
Becher, T., & Kogan, M. (1992). Process and structure in higher education (2nd ed.). London:
Routledge.
Becher, T., & Trowler, P. (2001). Academic tribes and territories: Intellectual enquiry and the
culture of disciplines. Buckingham, UK: McGraw-Hill Education (UK).
Benjamin, R., Miller, M. A., Rhodes, T. L., Banta, T. W., Pike, G. R., & Davies, G. (2012). The
seven red herrings about standardized assessments in higher education. Occasional Paper,
15. Urbana, IL: University of Illinois and Indiana University, National Institute for
Learning Outcomes Assessment.
Bensimon, E. M., Ward, K., & Sanders, K. (2000). The department chair's role in developing
new faculty into teachers and scholars. Bolton, MA: Anker.
Bernstein, D., Burnett, A. N., Goodburn, A. M., & Savory, P. (2006). Making teaching and
learning visible: Course portfolios and the peer review of teaching. San Francisco:
Jossey-Bass.
Biglan, A. (1973). The characteristics of subject matter in different academic areas. Journal of
applied psychology, 57(3), 195.
Birnbaum, R. (1988). How colleges work. San Francisco: Jossey-Bass.
Blackburn, R. T., & Lawrence, J. H. (1995). Faculty at work: Motivation, expectation,
satisfaction. Baltimore, MD: Johns Hopkins University Press.
Blaich, C., & Wise, K. (2011). From gathering to using assessment results. Urbana, IL:
University of Illinois and Indiana University, National Institute for Learning Outcomes
Assessment.
Blumberg, P. (2009). Developing learner-centered teaching: A practical guide for faculty. San
Francisco: Jossey-Bass.
Bowen, H. R. (1977). Investment in learning: The individual and social value of American
higher education. San Francisco: Jossey-Bass.
Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ:
Princeton University Press.
Boyer Commission on Educating Undergraduates in the Research University (1998). Reinventing
undergraduate education: A blueprint for America's research universities. Princeton, NJ:
Carnegie Foundation for the Advancement of Teaching.
Bressoud, D. M. (2011). Historical reflections on teaching the fundamental theorem of integral
calculus. American Mathematical Monthly, 118, 99–115.
Brint, S. (2011). Focus on the classroom: Movements to reform college teaching and learning,
1980-2008. In J.C. Hermanowicz (Ed.), The American academic profession:
Transformation in contemporary higher education, 44-70. Baltimore, MD: Johns
Hopkins University Press.
Brown, S., & Knight, P. (1994). Assessing learners in higher education. London: Kogan Page.
Bruffee, K. A. (1999). Collaborative learning: Higher education, interdependence, and the
authority of knowledge. Baltimore, MD: Johns Hopkins University Press.
Cain, T.R. & Hutchings, P. (2015). Faculty and students: Assessment at the intersection of
teaching and learning. In G.D. Kuh, S.O. Ikenberry, N. Jankowski, T.R. Cain, P. Ewell,
P. Hutchings, and J. Kinzie (Eds.), Using evidence of student learning to improve higher
education, 95-116. San Francisco: Jossey-Bass.
Carey, J. O., & Gregory, V. L. (2003). Toward improving student learning: Policy issues and
design structures in course-level outcomes assessment. Assessment & Evaluation in
Higher Education, 28(3), 215–227.
Center for the Integration of Research, Teaching and Learning (n.d.). https://www.cirtl.net/
Champagne, J. (2011). Teaching in the corporate university: Assessment as a labor issue. AAUP
Journal of Academic Freedom, 2(1), 1-26.
Chickering, A. W. (1969). Education and identity. San Francisco: Jossey-Bass.
Chickering, A. W., & Gamson, Z. F. (1991). Seven principles for good practice in
undergraduate education. New directions for teaching and learning, 1991(47), 63-69.
Chism, N. (2007). Peer review of teaching: A sourcebook (2nd ed.). Bolton, MA: Anker Publishing.
Chu, D. (2006). The department chair primer: Leading and managing academic departments.
Bolton, MA: Anker.
Clark, B. R. (1987). The academic life: Small worlds, different worlds. Princeton, NJ: Carnegie
Foundation for the Advancement of Teaching.
Coburn, C. E., & Turner, E. O. (2011). Putting the “use” back in data use: an outsider's
contribution to the measurement community's conversation about data. Measurement:
Interdisciplinary Research & Perspective, 9(4), 227-234.
Cogswell, C. A. (2016). Improving our improving: A multiple case study of the accreditor-
institution relationship (Ph.D. dissertation), Indiana University. Retrieved from
http://search.proquest.com.libproxy2.usc.edu/docview/1787172792/abstract/D15AC7A3
E5DC4497PQ/1
Cohen, M. D., & March, J. G. (1991). The processes of choice. Organization and governance in
higher education, 175-181.
Condon, W., Iverson, E. R., Manduca, C. A., Rutz, C., & Willett, G. (2016). Faculty
development and student learning: Assessing the connections. Bloomington, IN: Indiana
University Press.
Cottrell Scholars (n.d.). http://rescorp.org/cottrell-scholars
Creswell, J.W. (2007). Qualitative inquiry and research design: Choosing among five
approaches. Thousand Oaks, CA: Sage.
Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and
results. American journal of physics, 69(9), 970-977.
Cummings, T. G. (Ed.). (1980). Systems theory for organization development. San Francisco:
John Wiley and Sons.
Deardorff, M. D., Hamann, K., & Ishiyama, J. T. (2009). Assessment in political science.
Washington, D.C.: American Political Science Association.
Delaney, E. H. (2015). The professoriate in an age of assessment and accountability:
Understanding faculty response to student learning outcomes assessment and the
collegiate learning assessment (Doctoral dissertation, Columbia University). Retrieved
from: https://academiccommons.columbia.edu/catalog/ac:187917
Deslauriers, L., Schelew, E., & Wieman, C. (2011). Improved learning in a large-enrollment
physics class. Science, 332(6031), 862. https://doi.org/10.1126/science.1201783
Dill, D. D. (2003). An institutional perspective on higher education policy: The case of academic
quality assurance. In J. C. Smart (Ed.), Higher Education: Handbook of Theory and
Research, Vol. 18, (pp. 669–699). Dordrecht: Springer Netherlands.
https://doi.org/10.1007/978-94-010-0137-3_12
DiMaggio, P. J., & Powell, W. W. (Eds.). (1991). The new institutionalism in organizational
analysis. Chicago: University of Chicago Press.
DiMaggio, P., & Powell, W. W. (1983). The iron cage revisited: Collective rationality and
institutional isomorphism in organizational fields. American Sociological Review, 48(2),
147-160.
Drezek McConnell, K. & Doolittle, P.E. (2012). Classroom-level assessment: Aligning
pedagogical practices to enhance student learning. In C. Secolsky and D.B. Denison
(Eds.), Handbook on measurement, assessment, and evaluation in higher education, 15-
30. New York: Routledge.
Ebersole, T. E. (2009). Postsecondary assessment: Faculty attitudes and levels of engagement.
Assessment Update, 21(2), 1-13.
Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management
Review, 14(4), 532-550.
Eley, M. G. (2006). Teachers’ conceptions of teaching, and the making of specific decisions in
planning to teach. Higher Education, 51(2), 191-214.
Ely, M., Anzul, M., Friedman, T., Garner, D., & Steinmetz, A. M. (1991). Doing qualitative
research: Circles within circles. London: Falmer Press.
Entwistle, N. J. (1984). Contrasting perspectives on learning. In F. Marton, D. Hounsell
& N. Entwistle (Eds.), The Experience of Learning (pp. 1-18). Edinburgh, Great Britain:
Scottish Academic Press.
Ewell, P. T. (2008). Assessment and accountability in America today: Background and context.
New Directions for Institutional Research, 2008(S1), 7–17. https://doi.org/10.1002/ir.258
Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. Building a
scholarship of assessment, 3-25.
Ewell, P. & Ikenberry, S. (2015). Leadership in making assessment matter. In G.D. Kuh, S.O.
Ikenberry, N. Jankowski, T.R. Cain, P. Ewell, P. Hutchings, and J. Kinzie (Eds.), Using
evidence of student learning to improve higher education, 117-145. San Francisco:
Jossey-Bass.
Ewell, P., Paulson, K., & Kinzie, J. (2011). Down and in: Assessment practices at the program
level. Urbana, IL: University of Illinois and Indiana University, National Institute for
Learning Outcomes Assessment.
Fairweather, J. (2008). Linking evidence and promising practices in science, technology,
engineering, and mathematics (STEM) undergraduate education. Washington, D.C.:
National Academies National Research Council Board of Science Education.
Fairweather, J. S. (1996). Faculty work and public trust: Restoring the value of teaching and
public service in American academic life. Needham Heights, MA: Longwood Division,
Allyn and Bacon.
Fairweather, J. S., & Beach, A. L. (2002). Variations in faculty work at research universities:
Implications for state and institutional policy. The Review of Higher Education 26(1), 97-
115.
Fairweather, J. S., & Rhoads, R. A. (1995). Teaching and the faculty role: Enhancing the
commitment to instruction in American colleges and universities. Educational Evaluation
and Policy Analysis, 17(2), 179-194.
Francis, P. L., & Steven, D. A. (2003). The SUNY assessment initiative: Initial campus and
system perspectives. Assessment & Evaluation in Higher Education, 28(3), 333-349.
Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., &
Wenderoth, M. P. (2014). Active learning increases student performance in science,
engineering, and mathematics. Proceedings of the National Academy of Sciences,
111(23), 8410–8415. https://doi.org/10.1073/pnas.1319030111
Fuller, M. B., Skidmore, S. T., Bustamante, R. M., & Holzweiss, P. C. (2016). Empirically
exploring higher education cultures of assessment. The Review of Higher
Education, 39(3), 395-429.
Fullerton, H. (1999). Observation of teaching. In H. Fry, S. Ketteridge, & S. Marshall (Eds.), A
handbook for teaching and learning in higher education. London: Kogan Page.
Gappa, J. M., Austin, A. E., & Trice, A. G. (2007). Rethinking faculty work: Higher education's
strategic imperative. San Francisco: Jossey-Bass.
Gerber, L. G. (2014). The rise and decline of faculty governance: Professionalization and the
modern American university. Baltimore, MD: Johns Hopkins University Press.
Gibbs, G., & Coffey, M. (2004). The impact of training of university teachers on their teaching
skills, their approach to teaching and the approach to learning of their students. Active
Learning in Higher Education, 5(1), 87–100.
Gilbert, E. (2018, January 12). An insider’s take on assessment: It may be worse than you
thought. The Chronicle of Higher Education. Retrieved from:
https://www.chronicle.com/article/An-Insider-s-Take-on/242235
Glesne, C. (2011). Becoming qualitative researchers: An introduction (4th ed.). Boston: Pearson
Education.
Golde, C. M., & Dore, T. M. (2001). At cross purposes: What the experiences of today's doctoral
students reveal about doctoral education. Philadelphia: Pew Charitable Trust.
Gow, L., & Kember, D. (1993). Conceptions of teaching and their relationship to student
learning. British Journal of Educational Psychology, 63(1), 20–23.
Gregoire, M. (2003). Is it a challenge or a threat? A dual-process model of teachers' cognition
and appraisal processes during conceptual change. Educational psychology review, 15(2),
147-179.
Guetterman, T. C., & Mitchell, N. (2016). The role of leadership and culture in creating
meaningful assessment: A mixed methods case study. Innovative Higher
Education, 41(1), 43-57.
Grunwald, H., & Peterson, M. W. (2003). Factors that promote faculty involvement in and
satisfaction with institutional and classroom student assessment. Research in Higher
Education, 44, 173-204.
Hadden, C., & Davies, T. G. (2002). From innovation to institutionalization: The role of
administrative leadership in the assessment process. Community College Journal of
Research & Practice, 26(3), 243-260.
Halpern, D. F. (1994). Changing college classrooms: New teaching and learning strategies for
an increasingly complex world. San Francisco, CA: Jossey-Bass.
Haras, C., Taylor, S. C., Sorcinelli, M. D., & von Hoene, L. (2017). Institutional commitment to
teaching excellence: Assessing the impacts. Washington, D.C.: American Council on
Education.
Hart, J. (2008). Mobilization among women academics: The interplay between feminism and
professionalization. NWSA Journal, 20(1), 184-208.
Hatch, J. A. (2002). Doing qualitative research in education settings. Albany, NY: SUNY Press.
Hiebert, J., & Stigler, J. W. (2017). Teaching versus teachers as a lever for change: Comparing a
Japanese and a US perspective on improving instruction. Educational Researcher, 46(4),
169-176.
Hinings, B. (2012). Connections between institutional logics and organizational culture. Journal
of management inquiry, 21(1), 98-101.
Ho, A., Watkins, D., & Kelly, M. (2001). The conceptual change approach to improving
teaching and learning: An evaluation of a Hong Kong staff development programme.
Higher Education, 42(2), 143–169.
Huber, M. T., & Hutchings, P. (2005). Surveying the scholarship of teaching and learning. The
advancement of learning: Building the teaching commons, 1-16.
Hutchings, P. (2011). From departmental to disciplinary assessment: Deepening faculty
engagement. Change: The Magazine of Higher Learning, 43(5), 36-43.
Hutchings, P. (2010). Opening doors to faculty involvement in assessment. Urbana, IL:
University of Illinois and Indiana University, National Institute for Learning Outcomes
Assessment.
Hutchings, P. (1998). The course portfolio: How faculty can examine their teaching to
advance practice and improve student learning. Washington, D.C.: American
Association for Higher Education.
Hutchings, P. (1996). The peer review of teaching: Progress, issues and prospects. Innovative
Higher Education, 20(4), 221-234.
Hutchings, P., Huber, M. T., & Ciccone, A. (2011). The scholarship of teaching and learning
reconsidered: Institutional integration and impact. San Francisco: Jossey-Bass.
Jankowski, N. A., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a
learning systems paradigm. Sterling, VA: Stylus.
Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018). Assessment that matters:
Trending toward practices that document authentic student learning. Urbana, IL: National
Institute for Learning Outcomes Assessment.
Jessop, T., & Maleckar, B. (2016). The influence of disciplinary assessment patterns on student
learning: a comparative study. Studies in Higher Education, 41(4), 696-711.
Johnston, S. (1996). What can we learn about teaching from our best university
teachers?. Teaching in Higher Education, 1(2), 213-225.
Judd, T. & Keith, B. (2012). Student learning outcomes assessment at the program and
institutional levels. In C. Secolsky and D.B. Denison (Eds.), Handbook on measurement,
assessment, and evaluation in higher education, 31-46. New York: Routledge.
Kandlbinder, P. (2015). Signature concepts of key researchers in North American higher
education teaching and learning. Higher Education, 69(2), 243–255.
https://doi.org/10.1007/s10734-014-9772-7
Kane, R., Sandretto, S., & Heath, C. (2002). Telling half the story: A critical review of research
on the teaching beliefs and practices of university academics. Review of Educational
Research, 72(2), 177–228. https://doi.org/10.3102/00346543072002177
Kember, D. (1997). A reconceptualization of the research into university academics’
conceptions of teaching. Learning and Instruction, 7(3), pp. 255-275.
Kember, D., & Gow, L. (1994). Orientations to teaching and their effect on the quality of student
learning. The Journal of Higher Education, 65(1), 58.
Kezar, A. (2013a). Institutionalizing student outcomes assessment: The need for better research
to inform practice. Innovative Higher Education, 38(3), 189–206.
https://doi.org/10.1007/s10755-012-9237-9
Kezar, A. (2013b). How colleges change: Understanding, leading, and enacting change. New
York: Routledge.
Kezar, A., & Lester, J. (2009). Supporting faculty grassroots leadership. Research in Higher
Education, 50(7), 715-740.
Kezar, A., & Maxey, D. (2014). Understanding key stakeholder belief systems or institutional
logics related to non-tenure-track faculty and the changing professoriate. Teachers
College Record, 116(10).
Kinzie, J., & Jankowski, N. A. (2015). Making assessment consequential. In G.D. Kuh, S.O.
Ikenberry, N. Jankowski, T.R. Cain, P. Ewell, P. Hutchings, and J. Kinzie (Eds.), Using
evidence of student learning to improve higher education, 73-190. San Francisco: Jossey-
Bass.
Kohut, G. F., Burnap, C., & Yon, M. G. (2007). Peer observation of teaching: Perceptions of the
observer and the observed. College Teaching, 55(1), 19-25.
Kolb, D. A. (1981). Learning styles and disciplinary differences. The modern American
college, 1, 232-255.
Kramer, M. W. (2017). Sage on the stage or bore at the board? Communication
Education, 66(2), 245-247.
Kramer, P. I. (2008). The art of making assessment anti-venom: Injecting assessment in small
doses to create a faculty culture of assessment. Urbana, IL: University of Illinois and
Indiana University, National Institute for Learning Outcomes Assessment.
Kuh, G. D. (2004). The contributions of the research university to assessment and innovation in
undergraduate education. In W.E. Becker & M.L. Andrews (Eds.), The scholarship of
teaching and learning in higher education: Contributions of research universities, (pp.
161-192). Bloomington and Indianapolis, IN: Indiana University Press.
Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know
and can do: The current state of student learning outcomes assessment in US colleges and
universities. Urbana, IL: University of Illinois and Indiana University, National Institute
for Learning Outcomes Assessment.
Kuh, G. D., Kinzie, J., Schuh, J. H., & Whitt, E. J. (2005). Student success in college: Creating
conditions that matter. San Francisco: Jossey-Bass.
Langen, J. M. (2011). Evaluation of adjunct faculty in higher education institutions. Assessment
& Evaluation in Higher Education, 36(2), 185-196.
Lasry, N., Mazur, E., & Watkins, J. (2008). Peer instruction: From Harvard to the two-year
college. American Journal of Physics, 76(11), 1066-1069.
Leaming, D. R. (1998). Academic leadership: A practical guide to chairing the academic
department. Bolton, MA: Anker.
Lederman, L. C. (1992). Communication pedagogy: Approaches to teaching undergraduate
courses in communication. Norwood, NJ: Ablex Publishing Corporation.
Leslie, D. W. (2002). Resolving the dispute: Teaching is academe's core value. The Journal of
Higher Education, 73(1), 49-73.
Lueddeke, G. R. (1999). Toward a constructivist framework for guiding change and innovation
in higher education. The Journal of Higher Education, 70(3), 235-260.
Libarkin, J. (2008). Concept inventories in higher education science. Manuscript prepared for the
National Research Council Promising Practices in STEM Education Workshop 2.
Washington, D.C.: National Research Council.
Light, G., Calkins, S., Luna, M., & Drane, D. (2009). Assessing the impact of a year-long
faculty development program on faculty approaches to teaching. International Journal of
Teaching and Learning in Higher Education, 20(2), 168-181.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry (Vol. 75). Thousand Oaks, CA: Sage.
Lindblom‐Ylänne, S., Trigwell, K., Nevgi, A., & Ashwin, P. (2006). How approaches to
teaching are affected by discipline and teaching context. Studies in Higher Education,
31(3), 285–298.
Linkon, S. L. (2005). Rethinking faculty work: How can assessment work for us? Academe,
91(4), 1-5. Retrieved from http://web2.uconn.edu/assessment/docs/resources/
Lorenzo, M., Crouch, C. H., & Mazur, E. (2006). Reducing the gender gap in the physics
classroom. American Journal of Physics, 74(2), 118-122.
MacDonald, S. K., Williams, L. M., Lazowski, R. A., Horst, S. J., & Barron, K. E. (2014).
Faculty attitudes toward general education assessment: A qualitative study about their
motivation. Research & Practice in Assessment, 9, 74-90.
Madison, B.L. (Ed.) (2006). Assessment of student learning in college mathematics: Toward
improved programs and courses. Tallahassee, FL: Association for Institutional Research.
Magruder, J., McManis, M. A., & Young, C. C. (1997). The right idea at the right time:
Development of a transformational assessment culture. New Directions for Higher
Education, 100, 17-29.
Marsh, H. W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability,
validity, potential biases and usefulness. In R.P. Perry and J.C. Smart (eds.), The
scholarship of teaching and learning in higher education: An evidence-based
perspective (pp. 319-383). Dordrecht, The Netherlands: Springer.
Marshall, D. W., Jankowski, N. A., & Vaughan III, T. (2017). Tuning impact study: Developing
faculty consensus to strengthen student learning. Urbana, IL: University of Illinois and
Indiana University, National Institute for Learning Outcomes Assessment.
Martell, K. D., & Calderon, T. G. (Eds.). (2005). Assessment of student learning in business
schools: Best practices each step of the way (Vol. 1). Tallahassee, FL: Association for
Institutional Research.
Martin, J. (1992). Cultures in organizations: Three perspectives. Oxford, UK: Oxford University
Press.
Maxwell, J. A. (2013). Qualitative research design: An interactive approach (3rd ed.). Thousand
Oaks, CA: SAGE.
Mazur, E. (1997). Peer instruction. Upper Saddle River, NJ: Prentice Hall.
McAlpine, L., & Weston, C. (2000). Reflection: Issues related to improving professors' teaching
and students' learning. Instructional Science, 28, 363-385.
McKeachie, W. J. (1990). Research on college teaching: The historical background. Journal of
Educational Psychology, 82(2), 189.
Mellow, G.O., Woolis, D.D., Klages-Bombich, M., & Restler, S.G. (2015). Taking college
teaching seriously: Pedagogy matters! Sterling, VA: Stylus.
Merriam, S. B. (1998). Qualitative research and case study applications in education. San
Francisco: Jossey-Bass.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: A sourcebook. Beverly
Hills, CA: Sage.
Morphew, C.C. (2002). A rose by any other name: Which colleges became universities. Review
of Higher Education, 25(2), 207-224.
Morrison, J. R., Ross, S. M., Morrison, G. R., & Reid, A. J. (2017). Evaluation study of ACUE’s
collaboration with Miami Dade College: Cohort Two findings. Baltimore, MD: Center
for Research and Reform in Education, Johns Hopkins University.
National Center for Education Statistics (2016). Integrated Postsecondary Education Data
System. Washington, DC: United States Department of Education.
Neumann, R. (2001). Disciplinary differences and university teaching. Studies in Higher
Education, 26(2), 135-146.
Neumann, R., Parry, S., & Becher, T. (2002). Teaching and learning in their disciplinary
contexts: A conceptual analysis. Studies in Higher Education, 27(4), 405-417.
Nichols, J. O., & Nichols, K. W. (2000). The departmental guide and record book for student
outcomes assessment and institutional effectiveness. Edison, NJ: Agathon.
O'Meara, K., & Bloomgarden, A. (2011). The pursuit of prestige: The experience of institutional
striving from a faculty perspective. Journal of the Professoriate, 4(1).
Osthoff, E., Clune, W., Ferrare, J., Kretchmar, K., & White, P. (2009). Implementing immersion:
Design, professional development, classroom enactment and learning effects of an
extended science inquiry unit in an urban district. Madison, WI: University of
Wisconsin–Madison, Wisconsin Center for Educational Research.
Pallas, A. M. (2011). Assessing the future of higher education. Society, 48(3), 213-215.
Pallas, A.M., Neumann, A., & Campbell, C.M. (2017). Policies and practices to support
undergraduate teaching improvement. Cambridge, Mass.: American Academy of Arts &
Sciences.
Palmer, J. C. (2012). The perennial challenges of accountability. In Charles Secolsky & D.
Brian Denison, (Eds.), Handbook on measurement, assessment and evaluation in higher
education (pp. 57-70). New York, NY: Routledge.
Paretti, M. C., & Powell, K. (2009). Bringing voices together: Partnerships for assessing writing
across contexts. Assessment in writing, 1-9.
Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students: A third decade of
research (Vol. 2). San Francisco: John Wiley & Sons.
Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks,
CA: Sage.
Peshkin, A. (1988). In search of subjectivity—one's own. Educational Researcher, 17(7), 17-21.
Peterson, M. W., & Augustine, C. H. (2000). External and internal influences on institutional
approaches to student assessment: accountability or improvement? Research in Higher
Education, 41(4), 443-479.
Peterson, M. W., & Einarson, M. K. (2001). What are colleges doing about student assessment?
Does it make a difference? The Journal of Higher Education, 72(6), 629–669.
Peterson, M. W., Einarson, M. K., Trice, A. G., & Nichols, A. R. (1997). Improving
organizational and administrative support for student assessment: A review of the
research literature. (National Center for Postsecondary Improvement). Stanford, CA:
Stanford University, NCPI.
Piburn, M., Sawada, D., Turley, J., Falconer, K., Benford, R., Bloom, I., & Judson, E. (2000).
Reformed teaching observation protocol (RTOP): Reference manual (ACEPT Technical
Report No. IN00-3). Tempe, AZ: Arizona Collaborative for Excellence in the Preparation
of Teachers.
Pitt, R. N., & Tepper, S. A. (2012). Double majors: Influences, identities, and impacts. Prepared
for the Teagle Foundation, Curb Center, Vanderbilt University.
Postareff, L., Lindblom-Ylänne, S., & Nevgi, A. (2008). A follow-up study of the effect of
pedagogical training on teaching in higher education. Higher Education, 56(1), 29–43.
Powell, J. W. (2011). Outcomes assessment: Conceptual and other problems. AAUP Journal of
Academic Freedom, 2(2), 1-25.
Preparing Future Faculty (n.d.). http://www.preparing-faculty.org/
Project Kaleidoscope (PKAL) (n.d.). https://www.aacu.org/pkal
Provezis, S. J. (2010). Regional accreditation and learning outcomes assessment: Mapping the
territory (Doctoral dissertation, University of Illinois at Urbana-Champaign). Retrieved
from https://www.ideals.illinois.edu/handle/2142/16260
Pryor, J., & Crossouard, B. (2010). Challenging formative assessment: disciplinary spaces and
identities. Assessment & Evaluation in Higher Education, 35(3), 265-276.
Rhodes, T. L. (2008). VALUE: Valid assessment of learning in undergraduate education. New
Directions for Institutional Research, S1, 59–70.
Rice, R. E. (2006). Enhancing the quality of teaching and learning: The U.S. experience. New
Directions for Higher Education, 133, 13-22.
Rice, R. E., Sorcinelli, M. D., & Austin, A. (2000). Heeding new voices: Academic careers for a
new generation. Washington, DC: American Association for Higher Education.
Roksa, J., Arum, R., & Cook, A. (2016). Defining and assessing learning in higher
education. In R. Arum, J. Roksa, and A. Cook (Eds.), Improving quality in American
higher education: Learning outcomes and assessments for the 21st Century, 1-25. San
Francisco: Jossey-Bass.
Russell, J., & Markle, R. (2017). Continuing a culture of evidence: Assessment for improvement.
ETS Research Report Series. Retrieved from: https://doi.org/10.1002/ets2.12136
Samuelowicz, K., & Bain, J. D. (2001). Revisiting academics’ beliefs about teaching and
learning. Higher Education, 41(3), 299–325.
Schein, E. H. (2010). Organizational culture and leadership (Vol. 2). San Francisco: John Wiley
& Sons.
Schein, E. H. (1985). Organizational culture and leadership: A dynamic view. San Francisco:
Jossey-Bass.
Schön, D. A. (1987). Educating the reflective practitioner: Toward a new design for teaching
and learning in the professions. San Francisco: Jossey-Bass.
Schön, D.A. (1983). The reflective practitioner: How professionals think in action. New York:
Basic Books.
Scott, W. R. (2014). Institutions and organizations: Ideas, interests, and identities (4th ed.).
Thousand Oaks, CA: Sage.
Scott, W. R. (2008). Institutions and organizations: Ideas and interests. Thousand Oaks, CA:
Sage.
Seldin, P., Miller, J. E., & Seldin, C. A. (2010). The teaching portfolio: A practical guide to
improved performance and promotion/tenure decisions. San Francisco: John Wiley &
Sons.
Selznick, P. (1957). Leadership in administration: A sociological perspective. New York: Harper
& Row.
Selznick, P. (1947). TVA and the grass roots: A study in the sociology of formal organizations.
Berkeley and Los Angeles: University of California Press.
Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New
York: Currency Doubleday.
Seymour, E., & Hewitt, N. M. (1997). Talking about leaving: Why undergraduates leave the
sciences. Boulder, CO: Westview Press.
Shavelson, R. (2010). Measuring college learning responsibly: Accountability in a new era.
Stanford, CA: Stanford University Press.
Small, A. (2018, July 3). Some questions for assessophiles. Inside Higher Ed. Retrieved from:
https://www.insidehighered.com/views/2018/07/03/professor-questions-current-
approaches-assessment-opinion
Smith, M. K., Jones, F. H., Gilbert, S. L., & Wieman, C. E. (2013). The Classroom Observation
Protocol for Undergraduate STEM (COPUS): a new instrument to characterize university
STEM classroom practices. CBE—Life Sciences Education, 12(4), 618-627.
Spellings, M. (2006). A test of leadership: Charting the future of US higher education.
Washington, DC: US Department of Education.
Spillane, J. P., Reiser, B. J., & Reimer, T. (2002). Policy implementation and cognition:
Reframing and refocusing implementation research. Review of Educational
Research, 72(3), 387-431.
Stake, R. (2005). Qualitative case studies. In N.K. Denzin & Y.S. Lincoln, (Eds.), The Sage
handbook of qualitative research, 443-466. Thousand Oaks, CA: SAGE.
Stake, R. (1995). The art of case study research. Thousand Oaks, CA: SAGE.
Steadman, M. (1998). Using classroom assessment to change both teaching and learning. New
Directions for Teaching and Learning, 75, 23–35.
Stes, A., & Van Petegem, P. (2014). Profiling approaches to teaching in higher education: a
cluster-analytic study. Studies in Higher Education, 39(4), 644–658.
Stes, A., Coertjens, L., & Van Petegem, P. (2010). Instructional development for teachers in
higher education: impact on teaching approach. Higher Education, 60(2), 187–204.
Sullivan, W. (2004). Work and integrity. San Francisco: Jossey-Bass.
Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco:
Jossey-Bass.
Swarat, S., Oliver, P. H., Tran, L., Childers, J. G., Tiwari, B., & Babcock, J. L. (2017). How
disciplinary differences shape student learning outcome assessment: A case study. AERA
Open, 3(1), 2332858417690112.
Tagg, J. (2003). The learning paradigm college. Bolton, MA: Anker Publishing Company.
Thomas, S., Chie, Q. T., Abraham, M., Jalarajan Raj, S., & Beh, L. S. (2014). A qualitative
review of literature on peer review of teaching in higher education: An application of the
SWOT framework. Review of Educational Research, 84(1), 112-159.
Thornton, P. H., & Ocasio, W. (2008). Institutional logics. The Sage handbook of organizational
institutionalism, 99-128.
Thornton, P. H., Ocasio, W., & Lounsbury, M. (2012). The institutional logics perspective: A
new approach to culture, structure, and process. Oxford, UK: Oxford University Press.
Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent
research. Review of Educational Research, 45(1), 89-125.
Trigwell, K., & Prosser, M. (2004). Development and use of the approaches to teaching
inventory. Educational Psychology Review, 16(4), 409-424.
Trigwell, K., & Prosser, M. (1996). Changing approaches to teaching: A relational perspective.
Studies in Higher Education, 21(3), 275–284.
Trigwell, K., Prosser, M., & Ginns, P. (2005). Phenomenographic pedagogy and a revised
Approaches to Teaching Inventory. Higher Education Research & Development, 24(4),
349–360.
Trigwell, K., Prosser, M., & Waterhouse, F. (1999). Relations between teachers’ approaches to
teaching and students’ approaches to learning. Higher Education, 37(1), 57–70.
Volkwein, J. F., Lattuca, L. R., Harper, B. J., & Domingo, R. J. (2007). Measuring the impact of
professional accreditation on student experiences and learning outcomes. Research in
Higher Education, 48(2), 251–282.
Watts, M., & Schaur, G. (2011). Teaching and assessment methods in undergraduate economics:
A fourth national quinquennial survey. The Journal of Economic Education, 42(3), 294-
309.
Weick, K. E. (1976). Educational organizations as loosely coupled systems. Administrative
Science Quarterly, 21(1), 1-19.
Wieman, C., & Gilbert, S. (2014). The Teaching Practices Inventory: a new tool for
characterizing college and university teaching in mathematics and science. CBE—Life
Sciences Education, 13(3), 552-569.
White, E. M., Lutz, W., & Kamusikiri, S. (Eds.). (1996). Assessment of writing: Politics,
policies, practices. New York: Modern Language Association of America.
Williams, C. T., Walter, E. M., Henderson, C., & Beach, A. L. (2015). Describing undergraduate
STEM teaching practices: a comparison of instructor self-report instruments.
International Journal of STEM Education, 2(1), 18. https://doi.org/10.1186/s40594-015-
0031-y
Worthen, M. (2018, February 23). The misguided drive to measure ‘learning outcomes.’ The
New York Times. Retrieved from:
https://www.nytimes.com/2018/02/23/opinion/sunday/colleges-measure-learning-
outcomes.html
Xu, Y. J. (2008). Gender disparity in STEM disciplines: A study of faculty attrition and turnover
intentions. Research in Higher Education, 49(7), 607-624.
Yin, R. K. (2014). Case study research: Design and methods (5th ed.). Thousand Oaks, CA:
Sage.
Zellers, D. F., Howard, V. M., & Barcic, M. A. (2008). Faculty mentoring programs:
Reenvisioning rather than reinventing the wheel. Review of Educational Research, 78(3),
552-588.
Appendix A: Approaches to Teaching Inventory
This inventory is designed to explore the way that academics go about teaching in a specific context or
subject or course. This may mean that your responses to these items in one context may be different to
the responses you might make on your teaching in other contexts or subjects. For this reason, we ask you
to describe your context.
Please name the subject/year of your response here:
For each item please select one response (1-5). The numbers stand for the following responses:
1. This item was only rarely true for me in this subject
2. This item was sometimes true for me in this subject
3. This item was true for me about half the time in this subject
4. This item was frequently true for me in this subject
5. This item was almost always true for me in this subject
Please answer each item. Do not spend a long time on each: your first reaction is probably the best one.
1. In this subject students should focus their study on what I provide them
2. It is important that this subject should be completely described in terms of specific objectives that
relate to formal assessment items.
3. In my interactions with students in this subject I try to develop a conversation with them about
the topics we are studying.
4. It is important to present a lot of facts to students so that they know what they have to learn for
this subject.
5. I set aside some teaching time so that the students can discuss, among themselves, key concepts
and ideas in this subject.
6. In this subject I concentrate on covering the information that might be available from key texts
and readings.
7. I encourage students to restructure their existing knowledge in terms of the new way of thinking
about the subject that they will develop.
8. In teaching sessions for this subject, I deliberately provoke debate and discussion.
9. I structure my teaching in this subject to help students to pass the formal assessment items.
10. I think an important reason for running teaching sessions in this subject is to give students a good
set of notes.
11. In this subject, I provide the students with the information they will need to pass the formal
assessments.
12. I should know the answers to any questions that students may put to me during this subject.
13. I make available opportunities for students in this subject to discuss their changing understanding
of the subject.
14. It is better for students in this subject to generate their own notes rather than copy mine.
15. A lot of teaching time in this subject should be used to question students’ ideas.
16. In this subject my teaching focuses on the good presentation of information to students.
17. I see teaching as helping students develop new ways of thinking in this subject.
18. In teaching this subject it is important for me to monitor students’ changed understanding of the
subject matter.
19. My teaching in this subject focuses on delivering what I know to the students.
20. Teaching in this subject should help students question their own understanding of the subject
matter.
21. Teaching in this subject should include helping students find their own learning resources.
22. I present material to enable students to build up an information base in this subject.
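Scoring note: responses to the inventory are commonly summarized as two subscale means, a conceptual change/student-focused (CCSF) score and an information transmission/teacher-focused (ITTF) score (see Trigwell & Prosser, 2004; Trigwell, Prosser, & Ginns, 2005, in the reference list). The short Python sketch below only illustrates that tabulation for one respondent; the item-to-subscale assignment shown is an assumption based on the commonly reported scoring of the 22-item revised inventory and should be verified against the published scoring key before use.

# Illustrative sketch only: tabulates one respondent's 22 ATI ratings (1-5)
# into the two commonly reported subscale means. The item groupings below
# are assumed, not taken from the published scoring key.

CCSF_ITEMS = [3, 5, 7, 8, 13, 14, 15, 17, 18, 20, 21]  # student-focused items (assumed)
ITTF_ITEMS = [1, 2, 4, 6, 9, 10, 11, 12, 16, 19, 22]   # teacher-focused items (assumed)

def score_ati(responses):
    """responses: dict mapping item number (1-22) to a rating from 1 to 5."""
    ccsf = sum(responses[i] for i in CCSF_ITEMS) / len(CCSF_ITEMS)
    ittf = sum(responses[i] for i in ITTF_ITEMS) / len(ITTF_ITEMS)
    return {"CCSF": round(ccsf, 2), "ITTF": round(ittf, 2)}

# Example: a respondent who rates every item 3 scores 3.0 on both subscales.
print(score_ati({i: 3 for i in range(1, 23)}))  # {'CCSF': 3.0, 'ITTF': 3.0}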
Appendix B: Demographic Survey Questions Appended to ATI
1. Please enter your name and email address.
2. What is your rank/job title?
a. Professor
b. Associate professor
c. Assistant professor
d. Lecturer
e. Clinical professor
f. Other (please describe)
3. Do you work at [INSTITUTION] part-time or full-time?
a. Part-time
b. Full-time
4. How long have you been working at [INSTITUTION]? (will select number of years from drop-down
list)
5. At [INSTITUTION], I primarily teach:
a. Introductory undergraduate courses
b. Upper-level undergraduate courses
c. Graduate-level courses
d. Both a and b (introductory and upper-level undergraduate courses)
e. Both a and c (introductory undergraduate courses and graduate-level courses)
f. Both b and c (upper-level undergraduate courses and graduate level courses)
g. I teach courses at all three levels roughly equally
6. How long have you been a faculty member at any/all institutions? (will select number of years
from drop-down list)
7. Have you had any training on pedagogy or teaching practices in the past year?
a. Yes
b. No
c. Unsure
8. Have you had any training on student learning outcomes assessment in the past year?
a. Yes
b. No
c. Unsure
9. Where did you complete your terminal degree? (plus a text box to fill in the specific institution)
a. An institution in the United States
b. An institution outside the United States
10. Did you attend a non-U.S. institution of higher education for any of your postsecondary degrees
(bachelor’s, master’s, doctorate)?
a. Yes
b. No
11. What is your gender identity?
a. Female
b. Male
c. Other gender identity
d. Prefer not to respond
12. What is your racial/ethnic identity?
a. African American/Black
b. American Indian/Alaska Native
c. Asian American/Asian
d. Native Hawaiian/Pacific Islander
e. Mexican American/Chicano
f. Other Latino
g. White/Caucasian
h. Other
i. I prefer not to respond
Appendix C: Interview Protocols
Protocol 1: Faculty Interviews
So I told you a little bit about my study and its goals and purposes in our initial email exchange,
but I’ll go over it again just to refresh your memory. Specifically, I’m interested in teaching and
assessment at research universities. I’m interested in the ways that faculty think about assessment
and student learning, and whether and how that influences teaching practice. So for this
interview, I’m going to ask you some questions about your experiences with teaching and some
questions about your experiences with assessment. I’ll also ask you some questions about
broader departmental and institutional policies and practices around teaching and assessment.
Does that sound OK?
So, I’d also like to record this interview if you’re OK with that. It will just facilitate my
notetaking and help me to remember everything we talked about today. Just a reminder that your
identity will not be linked with anything specific you say, and all your answers will be kept
confidential. Are you OK with being recorded?
Great, then we can get started.
Individual Level
1. First, tell me a little bit about your role as a faculty member here in [THIS DEPARTMENT]
at [THIS INSTITUTION].
a. POSSIBLE PROBES: Years of experience, position type, responsibilities, etc.
2. Tell me about your experiences with teaching [THIS SUBJECT].
a. What does it mean to be a good teacher in your discipline?
b. How did you learn how to teach?
3. Think about a particular course you taught last year. What strategies or activities did you use
to teach this course?
[NOTE: I am trying to get at specific pedagogical approaches, such as small group work,
lectures, clickers, etc., NOT topics or content]
a. How did you decide upon these particular strategies or activities?
4. What should students learn in [THE CLASS YOU TAUGHT LAST YEAR]? How did you
know that students were learning?
a. Can you give me an example?
ASSESSMENT
5. Assessment means a lot of different things to different people in higher education. When you
hear the word “assessment,” what does it make you think of?
a. What feelings/emotions does it provoke?
b. What types of assessments do you know about? (i.e. CLA, VALUE rubrics, others?)
c. How do you use assessment?
d. How has assessment changed your practices/beliefs? (if at all)
Departmental Level
6. How is assessment used in your department?
a. Are there department-level policies about assessment?
b. What does your department chair say and do about assessment?
c. Is assessment/student learning outcomes a topic of discussion in your department
meetings?
7. What have you heard from your discipline/disciplinary society about assessment (if
anything)?
8. Do you think your department values and supports teaching? Assessment? How do you
know?
a. Are there specific teaching methods/strategies that your department endorses or
promotes?
b. What about specific assessment methods?
9. How are you evaluated on your teaching at the departmental level?
a. What role does teaching play in promotion and tenure decisions?
b. What role does assessment play in teaching evaluation, if any?
10. Are there any professional development opportunities around assessment in your
department? Around teaching?
Institutional Level
11. What sorts of professional development opportunities are available for teaching and
assessment at the institutional level? Have you gotten involved with those at all?
12. What do you know about how assessment is organized at the institutional level? How are you
involved with these assessment efforts, if at all?
Protocol 2: Department Chair Interviews
1. As you know, I’m interested in assessment at [DEPARTMENT] and [INSTITUTION].
Can you tell me about the assessment activity happening here in [DEPARTMENT]?
a. POSSIBLE PROBES: how long has this activity been happening? Did it start with
you or did it precede your tenure as chair?
b. Directed from dean/provost?
2. Tell me a little bit about your own experiences with/philosophy about assessment.
a. How did you learn about it?
b. How do you/how have you used it?
c. How do you talk about it with colleagues in the department?
3. How do the faculty in [DEPARTMENT] engage with assessment? And how do you
know?
a. Do you discuss assessment or assessment results at department meetings?
b. Are there any formal or informal structures for faculty to discuss assessment?
4. Are there professional development opportunities around assessment in
[DEPARTMENT]?
5. What about assessment activity at [INSTITUTION] as a whole?
a. Professional development?
6. How has your discipline engaged with assessment?
7. Are there other groups that have had an impact on assessment at [DEPARTMENT] or
[INSTITUTION]?
a. E.g. accreditors, policymakers, national organizations
8. Are assessment results used to inform teaching at all? If so, how?
Protocol 3: Assessment Staff Interviews
1. Tell me a little bit about your role here at [INSTITUTION] as [ASSESSMENT STAFF
ROLE].
a. POSSIBLE PROBES: What experiences have you had that led you here?
b. What are your primary responsibilities?
2. What do you perceive are [INSTITUTION’S] goals and priorities around assessment?
3. Which colleagues do you work with most frequently? What does that work look like?
a. i.e. departmental assessment coordinators, committees, administrators, faculty
(IF PARTICIPANT DOES NOT GET INTO RELATIONSHIP WITH FACULTY ASK #4)
4. Tell me how/to what extent you work with faculty to create and implement assessments
in their classrooms.
a. How about interpreting assessment data?
b. Making changes based on assessment data?
5. What is your understanding of how assessment data plays into promotion and tenure
decisions at [UNIVERSITY] (if at all)?
a. What about teaching quality?
Protocol 4: Center for Teaching and Learning Staff Interviews
1. Tell me about your role here at [INSTITUTION] as [CENTER FOR TEACHING AND
LEARNING STAFF].
a. POSSIBLE PROBES: What experiences have you had that led you here?
b. What are your primary responsibilities?
2. What role does assessment play in your work?
a. Do you work with your colleagues in the assessment office? If so, how?
b. Do you incorporate assessment into your work with faculty? If so, how?
3. What are the incentives at [INSTITUTION] for faculty to engage with you and the
[CENTER FOR TEACHING AND LEARNING]?
a. How do faculty get involved—referred/required? Only when there is a problem?
4. What is your understanding of how quality teaching plays into promotion and tenure
decisions at [UNIVERSITY]?
Appendix D: Code List
THEMES/CODES FROM LITERATURE AND THEORY
INDIVIDUAL LEVEL
Prior Experiences
  With teaching: Any experiences with teaching in grad school, as a student, or teaching at a prior institution or other career
  With assessment: Any prior exposure to assessment
Beliefs
  About teaching: What faculty think good teaching looks like; what they think their primary responsibility is in terms of teaching
  About assessment: How faculty perceive assessment; any emotions or beliefs about what assessment is or what its purpose is
Socialization—Doctoral/Disciplinary
  Around teaching: Disciplinary norms around teaching; things their advisors or faculty told them during doctoral training about teaching
  Around assessment: Disciplinary norms around assessment; any exposure to assessment in doctoral training
Career Stage: Any mention of career stage (pre- or post-tenure; early, mid, or late career) or how it influences their views on teaching or assessment
Appointment Type: Tenure-track vs. non-tenure-track
Motivation: References to why faculty engage (or don’t engage) with assessment
Individual (Classroom) Assessment: References to how faculty members assess students in their own classes
DEPARTMENTAL LEVEL
Departmental Culture
  Value of teaching: Any reference to or evidence about whether the department values teaching
  Value of assessment: Any reference to or evidence about whether the department values assessment
  Norms around teaching: Expectations for teaching in the department (how much faculty teach, teaching vs. research, how much evaluations matter, whether faculty try to improve or experiment with teaching, etc.)
  Norms around assessment: Expectations for assessment work in the department (is everyone engaged, is it just a few people, how frequently, etc.)
Disciplinary Culture
  Value of teaching: Evidence of disciplinary value of teaching (e.g., conferences, journals, or projects on teaching and pedagogy)
  Value of assessment: Evidence of disciplinary value on assessment (conferences, articles, etc.)
Leadership
  Chair: Evidence that the chair values and supports teaching and/or assessment
  Faculty leaders: Faculty engagement with assessment or teaching improvement
Policies and Practices
  Curriculum: Evidence that the curriculum is aligned with SLOs, assessed regularly, and changed as a result of assessment
  Teaching assignments: Information about teaching loads and expectations for undergraduate vs. graduate courses
  Rewards and incentives: Rewards or incentives at the departmental level for engaging in or leading assessment or teaching improvement work
  Assessment infrastructure: Committees, leadership, and processes at the department level for assessment
  Professional development: Any reference to mentoring, workshops, or other types of development around teaching or assessment that happen at the department level
INSTITUTIONAL LEVEL
Structures
  Assessment structures: Evidence of assessment offices, organization, staffing, infrastructure
  Teaching support structures: Evidence of institution-level supports for teaching (e.g. Center for Teaching and Learning, institution-wide teaching awards, PD)
Policies
  Assessment policies: Policy statements or documents that designate how assessment is to be organized or carried out
  Teaching policies: Policy statements or documents around teaching (e.g. teaching loads/assignments)
  Promotion and tenure policies: Evidence of teaching or assessment in P&T policies
  Reward and incentive policies: Evidence of rewards or incentives for excellent teaching or teaching improvement, use of assessment
Cultures
  Assessment/data use: References to assessment or data being used to make decisions at institutional level
  Teaching/teaching improvement: References to institutional value (or lack of value) around teaching or teaching improvement
EXTERNAL LEVEL
Accreditation: References to accreditation driving assessment process
State Accountability Policies: References to state-level or system-level policies around teaching or assessment
EMERGENT THEMES/CODES
Types of Change (Institution Level)
Curriculum change
Policy/practice change
Culture change
Levers for Institution-Level Change
Faculty-driven process
Supportive institutional structures
Intentional messaging around assessment for improvement
Neutral attitude towards accreditation
Support and training
Leadership support
Types of Change (Department Level)
Curriculum change
Changes to pedagogy
Policy/practice change
Culture change
Levers for Change at Department Level
Faculty champions
Supportive departmental leadership
Departmental culture that supports teaching
Disciplinary influences
Institution-level influences on departments
Types of Change (Individual Level)
Using rubrics
Thinking more about student outcomes
Lack of Change (Institution Level)
Lack of Change (Department Level)
Lack of Change (Individual Level)
Attitudes towards Assessment (eventually narrowed for findings):
Assessment as an add-on/extra (not part of core work)
Assessment as compliance-driven
Assessment imposed from on high
Assessment as violation of academic freedom and faculty autonomy (syllabus as
intellectual property)
Assessment leading to thinking more about outcomes
Assessment as an inevitability (resignation, begrudging acceptance)
Assessment of learning as part of professional obligations
Assessment for accountability
Assessment as jargon that is disconnected from work in the classroom
Assessment helping to ensure a more coherent, standardized experience for majors
Assessment as external interference
Fear of assessment as excuse to get rid of faculty or make them teach more
Assessment as women’s work
Anyone who cares about student learning finds value in assessment
Assessment as something that is meaningful for faculty work
Language used to talk about assessment and teaching
Student-focused
Outcome-focused
Learning-focused
Compliance-focused
Norms around assessment and teaching
Teaching as a collective endeavor
Assessment for improvement
Teaching as individualistic
Assessment for accountability
Values around teaching
Value on teaching
Value on research
Value of backwards design and curriculum mapping process
Appendix E: Participation Request Letters
Dear [NAME],
My name is Elizabeth Holcombe, and I’m a doctoral researcher at the University of Southern
California. I am conducting a study on the relationship between assessment and teaching at
research universities, and [YOUR INSTITUTION/DEPARTMENT] has agreed to participate in
the study. I am writing to ask whether you are willing to participate by taking a short survey
about your teaching experiences. This survey should take no more than 15 minutes to complete,
and every respondent will be entered in a raffle to win one of six $25 Amazon gift cards as a
thank-you for participating. If you agree to participate, all your answers and your identity as a
participant will be kept confidential. This study has been approved by the USC Institutional
Review Board, and I am attaching an information sheet from the IRB to this email. Please let me
know if you have any questions.
If you agree to participate in the survey, I may follow up with you about conducting a short
(approximately hour-long) interview when I make a visit to your campus in [MONTH, DAYS].
You do not have to participate in an interview if you don’t want to.
Thank you so much for your consideration.
Best, Elizabeth Holcombe
Dear [NAME],
My name is Elizabeth Holcombe, and I’m a doctoral researcher at the University of Southern
California. I am conducting a study on the relationship between assessment and teaching at
research universities, and [YOUR INSTITUTION/DEPARTMENT] has agreed to participate in
the study. I am contacting you to see if you would be willing to participate in an interview with
me about your experiences with teaching and assessment when I visit [CAMPUS]. I am hoping
to visit on [DATES].
This study has been approved by the USC Institutional Review Board, and I’m attaching the
information sheet about the study from the IRB. If you agree to participate in the interview, your
identity and answers will all be kept confidential.
If you are willing to participate, would you mind checking ALL the days and times that you are
available for a 1-hour interview at this Doodle poll link: [LINK]? I am coordinating interviews
with many faculty members, and your willingness to indicate several times that you are available
would be very helpful as I coordinate these visits. I will get back to you as soon as possible
confirming an interview day and time.
Thank you again for your consideration!
Best, Elizabeth Holcombe
Abstract
Calls for evidence of what students know and are able to do as a result of their college experiences have grown over the last thirty years. As a result, assessment activity on campuses has increased dramatically. Campuses often undertake assessment with the hope of both meeting accountability demands and improving student learning. Underlying these hopes for improvement is an often implicit assumption that assessment of student learning will lead to instructional improvement. However, there is very little empirical evidence that assessment does actually improve teaching, and little is known about the relationship between assessment and teaching more broadly. This study was designed to examine how assessment shapes teaching at two research universities. Using a systems theory perspective, this multiple-case study examined the link between assessment and teaching improvement among individual faculty, departments, and institutions. The findings provide an overview of assessment activity at each level of the system and how it shaped change (or lack of change) to the teaching and learning environment. The study also includes a detailed overview of levers for change at each level (individual, departmental, and institutional), as well as relationships and influences across levels. Based on these findings it appears that assessment has the potential to improve teaching at research universities, but it is complicated. Teaching improvement does not automatically result once institutions engage in assessment. Rather, a particular approach to assessment can foster teaching improvement, and attention to multiple supportive levers across all levels of the system is necessary to drive change.