A MULTI-PERSPECTIVE EXAMINATION OF DEVELOPMENTAL EDUCATION:
STUDENT PROGRESSION, INSTITUTIONAL ASSESSMENT AND PLACEMENT
POLICIES, AND STATEWIDE REGULATIONS
by
Kristen Erin Fong
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(EDUCATION)
May 2016
Copyright 2015 Kristen E. Fong
Dedication
To my mom, for all the love and support you blessed me with throughout my life. I will
always strive to make you proud. I love you and I miss you.
Acknowledgements
First and foremost, I would like to acknowledge the California community college faculty
and institutional researchers (my now colleagues) with whom I collaborated on this dissertation as
well as throughout my PhD program. Without your willingness to share your data, none of my
work would exist, and more importantly, without your insight my work would be meaningless.
Second, I would like to thank my advisor, Tatiana Melguizo. I am so grateful that I had
you as my guide throughout my entire PhD program. Not only were you by far my toughest critic
and biggest advocate, but a true role model of achieving a work-life balance. Many thanks also to
my dissertation committee members, Gary Painter and Alicia Dowd. Thank you so much for
agreeing to be on my committee; you have both gone above and beyond what was required of
you. Thank you Gary for being my methodological sounding board, and Alicia for always
bringing me back to how my findings can help inform policy. I would also like to acknowledge
the Rossier School of Education and the Pullias Center for Higher Education for providing me
with the resources to pursue my research. I especially want to thank Laura Romero and Diane
Flores, without whom I would never have gotten to graduation. Lastly, I’d like to acknowledge
George Prather for providing practitioner-level insight that assisted in framing my research and
interpreting my findings within the community college environmental context.
Third, I would like to thank my USC peers who made this whole PhD rollercoaster much
more enjoyable. Thank you to Holly Kosiewicz, who paved the way for me and all of Tatiana’s
mentees; you are my role model and forever friend. Also thanks to Federick Ngo for being much
smarter than I am; I have no doubt you will do great things in the community college research
field. Last but not least, thank you to the PICCLES crew: Andrea Bingham, Sean Gehrke, and
Sophie Hiss. No one can get through a PhD program without having a good laugh that you did
this to yourself. We had a lot of laughs (and delicious Korean food!).
Fourth, I’d like to thank my sister Stacy Fong for being my rock and cheerleader
throughout my PhD program and throughout my life. I would also like to thank my dad, Peter
Fong. So much of my career choices have been guided by the love and passion you have for
education. You graduated from a community college, then transferred to and graduated from
UCLA, and eventually returned to the community college system as a counselor, Dean, and Vice
President. You are the reason I believe so dearly in this sector as the onramp to higher education
and second, third, and fourth chances for all students. I could not be more proud to follow in
your footsteps.
Thank you to my newly minted husband, David Beach, for being my teammate and
occasional punching bag. Thank you for piecing and holding me (and us) together when the
pressure and stress threatened to crack my (our) armor. Last but not least, thank you to my
stepdaughters Lauren and Kayla. Miraculous things happen when you become a mother to daughters
and inherit the position of female role model. There were many points along this rocky
dissertation journey that I did not know if I had the energy or emotional and mental capacity to
accomplish all that I had on my plate. What gave me that extra push to the finish line was
needing to show you two that not just anything but everything is possible.
Table of Contents
Dedication………………………………………………………………………………………….ii
Acknowledgements………………………………………………………………………………..iii
List of Tables……………………………………………………………………………………..vii
List of Figures…………………………………………………………………………………..viii
Chapter One: Introduction………………………………………………………………………….1
Chapter Two: Increasing Success Rates in Developmental Math: The Complementary Role of
Individual and Institutional Characteristics……………………………………….9
Literature Review……………………………………………………………………………...13
Methodology…………………………………………………………………………………..20
Results…………………………………………………………………………………………31
Discussion……………………………………………………………………………………..39
Conclusions and Policy Implications………………………………………………………….44
Chapter Three: Utilizing Additional Measures of High School Academic Preparation to Support
Students in their Math Self-Assessment…………………………………………48
Review of Relevant Literature………………………………………………………………...52
Data and Institutional Context………………………………………………………………...62
Main Findings…………………………………………………………………………………74
Discussion……………………………………………………………………………………..86
Policy Implications…………………………………………………………………………....90
Future Research……………………………………………………………………………….91
Chapter Four: Understanding the Relationship between Increasing State-Standards for an
Associate’s Degree and Developmental Math Student Outcomes………………..96
Background and Previous Research………………………………………………………….101
California’s Graduation Requirement Increase for Math and English………………………109
Methodology…………………………………………………………………………………112
Data and Analysis Sample…………………………………………………………………...112
Results………………………………………………………………………………………..119
Discussion……………………………………………………………………………………124
Conclusions and Future Research……………………………………………………………129
Chapter Five: Conclusions and Policy Implications…………………………………………….131
References……………………………………………………………………………………….140
Appendix P1-A…………………………………………………………………………………..159
Appendix P3-A…………………………………………………………………………………..163
Appendix P3-B…………………………………………………………………………………..168
Appendix P3-C…………………………………………………………………………………..171
List of Tables
Table 1.1. Student Demographic Characteristics and Enrollment Status by Placement………22
Table 1.2. Average Institutional and Developmental Math Program Characteristics by
College……………………………………………………………………………..22
Table 1.3. Conceptual Model Fit in Three Stages……………………………………………..34
Table 1.4. Odds Ratio Results from Final Model of each Progression Outcome……………..35
Table 2.1. Selected Student Demographics for College H by Data Subgroup………………..63
Table 2.2. Compliance by Level if Student Attempted Math…………………………………68
Table 2.3. Non-Compliers by Race/Ethnicity if Attempted Math…………………………….69
Table 2.4. Odds Ratio of Success in First Attempted Developmental Math Class……………81
Table 2.5. T-tests Comparing Success Rates between Students Placed by Actual versus
Buffering Criteria…………………………………………………………………..83
Table 3.1. Student Cohort by Semester with the Last Semester of its Outcome Data……….113
Table 3.2. Description of Analysis Sample…………………………………………………..115
Table 3.3. Regression Results of Three Models: Number of Degree-applicable Units Earned,
Odds of Passing IA, and Odds of Earning 60 Degree-applicable Units………….120
Table 3.4. Treatment Coefficient of the IA Math Requirement on Degree-applicable Units
Earned for Students across Math Placements…………………………………….124
Table A1. Conceptual Model Fit in Three Stages……………………………………………160
Table A2. Odds Ratio Results from Final Hierarchical Model of each Progression
Outcome…………………………………………………………………………..161
Table B1. Proportion of College D AAT and AAS Degrees Awarded……………………...165
Table C1. Distribution of Outcomes by Placement Level…………………………………...168
Table C2. Regression Results of Degree-applicable Units Earned across Math Placement
Levels……………………………………………………………………………..169
Table D1. Acceptance Rates of California Residents by Campus and Academic Year……..171
List of Figures
Figure 1.1. Percentage of Students Attempting and Passing each Level of the Developmental
Math Trajectory based on Initial Placement……………………………………….32
Figure 2.1. Math Placement Criteria based on Assessment Sub-test and Score……………….68
Figure 2.2. Multiple Measure Points Diagram…………………………………………………70
Figure 2.3. Student Choice of Assessment Sub-test Compared to Last Passed Math Level.......75
Figure 2.4. Alignment between Student Sub-test Choice and Prior Math Achievement............76
Figure 2.5. Distribution of Multiple Measure Points by Race/Ethnicity……………………….77
Figure 2.6. Distribution of Actual Placements and Buffering Placements…………………….78
Figure 2.7. Distribution by Race/Ethnicity under Actual and Buffering Placement Criteria….79
Figure 3.1a. English Writing Sequence………………………………………………………...111
Figure 3.1b. Math Sequence……………………………………………………………………112
Figure 3.2. Number of Students Assessed and Enrolled in College D………………………..114
Figure 3.3. In-State UC and CSU Acceptance Rates over Time……………………………...128
Figure A1. Percentage of Students Passing each Level of the Developmental Math Trajectory
based on Initial Placement………………………………………………………..159
Figure B1. Resource Allocation over Time…………………………………………………..164
Figure B2. Associate’s Degrees Awarded over Time………………………………………..165
Figure B3. Completion Rate (SPAR) of Unprepared Student Population……………………166
Chapter One
Introduction
Within an increasingly global economy, there is a rising demand for an educated
workforce. Recent estimates indicate that by 2018 there will be a shortage of 3.4 million workers
with a college degree (Carnevale, Smith, & Strohl, 2010). As Goldin and Katz (2008) argue,
technological development is currently outpacing educational achievement in the United States,
and one of the main consequences of this trend is that it exacerbates inequality. In order to
remain globally competitive and secure the future economy, the Obama administration put forth
an educational goal that by 2020, the postsecondary attainment rates among 25-34 year olds will
be increased nationally from 40 percent to 60 percent, and with it, the U.S. will have the highest
proportion of college graduates in the world (Obama, 2009).
To attain this goal, community colleges would have to graduate an additional five million
students, which means focusing on graduating students who have traditionally had the lowest
attainment rates. These students are likely to be from underrepresented racial minority (URM)
backgrounds and low-income populations (Melguizo & Kosiewicz, 2013), groups that are
overrepresented at community colleges, which report lower graduation rates than baccalaureate-
granting institutions. With the spotlight on community colleges to increase college completion,
research and legislation have focused on improving developmental education which has been
demonstrated to be one of the largest barriers to degree completion (Hawley & Harris, 2005–
2006; Horn & Nevill, 2006; Hoyt, 1999; Kelly, 2014) and affects an increasing number of
students entering postsecondary education.
Developmental education is provided to students who are academically underprepared for
college-level work (Merisotis & Phipps, 2000). Academic preparedness to succeed in college-level work has been termed “college readiness.” Though no one definition of college-readiness
has been identified, it has been defined operationally as “the level of preparation a student needs
in order to enroll and succeed—without remediation—in a credit-bearing general education
course at a postsecondary institution that offers a baccalaureate degree or transfer [from a
community college] to a baccalaureate program” (Conley, 2007, p. 5). Over the years,
developmental education has become the primary means utilized by higher education institutions
to ensure that students entering college can succeed in college-level coursework. In 2000,
approximately 75 percent of all postsecondary institutions offered developmental education in
math or English (Parsad, Lewis, & Greene, 2003). Though developmental education, and how it
is delivered, has been an ever-growing source of controversy (Kingan & Alfred, 1993), its intent
is to assist academically underprepared students in filling basic skill deficiencies rather than allowing
them to flounder in college-level courses (Bailey, 2009; Lazarick, 1997).
A substantial proportion of students, nationally, enter higher education underprepared for
college-level work. Data from the National Postsecondary Student Aid Study (NPSAS:04) show
that over half of public two-year college students enroll in at least one developmental education
course during their tenure (Horn & Nevill, 2006). More recent evidence demonstrates that this
percentage is increasing and that approximately 60 percent of students exhibit remedial needs¹
(National Center for Public Policy and Higher Education (NCPPHE) & Southern Regional
Education Board (SREB), 2010). Lastly, in 2014, the ACT reported that among the 1.8 million
students who were assessed for college-readiness, 64 percent were prepared for college-level
English, and only 43 percent were ready for college-level math (ACT, 2014).
¹ Developmental education is also commonly referred to as remediation or remedial education. These terms will be used interchangeably throughout the paper.
Though the purpose of developmental education is to equip initially underprepared
students with the academic basic skills required to successfully pass college-level courses
(Bailey, 2009; Lazarick, 1997; Merisotis & Phipps, 2000), it is increasingly declared a broken
system (Complete College America, 2012). It is not difficult to understand why. The statistics on
student pass rates at community colleges are distressing; just 25 percent of students who begin at
a developmental level attain an associate’s degree within eight years (Bailey, 2009), and only
about 33 percent complete their developmental math sequence (Bailey, Jeong, & Cho, 2010).
Another report, presented by Complete College America (CCA), has indicated that 22 percent of
students who were referred into remedial math or English programs completed their
“gatekeeper” course—the first college-level math or English course—within two years. And, of
those students required to take three developmental math courses, just 16 percent complete their
full sequence within three years (CCA, 2012). Further, traditionally underserved student
populations (defined here as students from Latino or African American backgrounds) are
overrepresented in developmental education (e.g., Attewell, Lavin, Domina, & Levey, 2006;
Bettinger & Long, 2005, 2007), and are less likely to successfully progress through their
sequences (Bahr, 2010; Bailey, Jeong, & Cho, 2010).
While developmental education intends to serve students who enter college
“underprepared” for college-level work, it is instead viewed as a system that decreases college
completion and amplifies racial disparities. Three key statistics currently define developmental
education: (1) the high proportion of students placed into remediation, (2) the low proportion of
these students completing college-level courses and graduating from college, and (3) the
overrepresentation and disproportionate success rates of URM students. Improving
developmental education student outcomes has thus become a focus in education policy,
practice, and research to meet the Obama administration’s attainment goal.
However, while existing statistics paint a dismal picture of developmental education, they
often neglect to account for the individualized and unique nature of delivering developmental
education. The program itself varies by each institution’s specific assessment and placement
policies, classroom instruction, characteristics of students enrolled in developmental education
and their respective educational goals, and oftentimes, state legislation. Stated simply, the
national statistics describing the state of developmental education base the evaluation of its
success or failure solely on outcomes (e.g., success/pass rates) and do not seek to understand
the interactive agents within developmental education programs.
My dissertation engages in and intends to better direct this dialogue by taking an in-depth
examination of developmental education, and describing the importance of its complexities as it
relates to student outcomes. Each of my three studies highlights a different aspect of
developmental education and identifies areas for improvement. In my dissertation, I will be
describing the nuances of specific areas of developmental education, from student progression
through their sequence, institutional assessment and placement policies, and the effect of
statewide mandates on developmental student outcomes. By approaching the study of
developmental education in this way, I will also be able to garner a more thorough understanding
of where and how URM students are most affected, and identify what can be done to lessen the
racial disparities in developmental math education and ultimately degree completion. I
specifically focus on math because a larger proportion of students are
placed into and struggle with developmental math education compared to English (Bailey et al.,
2010).
Much of the existing quantitative research that is cited for low developmental student
success rates (e.g., 16 percent of students placed into remediation complete their full sequence
within three years (CCA, 2012)) calculates success rates by dividing the number of students
passing the gatekeeper course by the total number of students initially placed into levels of
developmental education. That is, success rates are based on the total population of assessed
students placed into remediation, without acknowledging whether the student actually attempted
the course or at what level of remediation they were placed.
In my first paper, I demonstrate an alternative approach to calculating pass rates, one that
is based not on the number of students placed into a developmental math class but on the number
of students who attempted to complete the work of that class. Specifically, I define and track student
progression, how students move through their developmental math trajectory, in terms of both
attempting and passing each level. I then build a comprehensive conceptual framework that
accounts for the multi-leveled factors associated with the likelihood of students’ progression,
utilizing individual-, institutional-, and developmental math program factors. Employing step-
wise logistic regression models, I find that while each additional step improves model fit, the
largest proportion of variance is explained by individual-level characteristics, and more variance
is explained in attempting each level than passing that level. I identify specific individual and
institutional factors associated with higher attempt (e.g., Latino) and pass rates (e.g., small class
size) in the different courses of the developmental math trajectory. Findings from this study
suggest that colleges should implement programs and policies to increase attempt rates in
developmental courses in order to increase pass rates of the math pre-requisite courses for
associate’s degrees and transfer to baccalaureate-granting colleges.
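The difference between the two denominators can be made concrete with a small numeric sketch; the figures below are invented for illustration and are not drawn from any of the datasets used in this dissertation.

```python
# Hypothetical cohort placed into one developmental math level
placed = 1000        # students initially placed into the level
attempted = 450      # students who actually enrolled in (attempted) the course
passed = 315         # students who passed the course

# Conventional rate: denominator is everyone placed, enrolled or not
rate_of_placed = passed / placed            # 315/1000

# Attempt-conditional rate: denominator is only students who attempted
rate_of_attempted = passed / attempted      # 315/450

print(f"Pass rate among placed students:     {rate_of_placed:.1%}")
print(f"Pass rate among attempting students: {rate_of_attempted:.1%}")
```

The same course looks like a 31.5 percent "success rate" under the conventional calculation but a 70 percent pass rate among students who actually attempted it, which is the distinction the first paper exploits.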
After describing students’ placements and illustrating how they progress through their
developmental math sequence, I then turn my focus to understanding how students are placed.
Research has demonstrated the negative consequences of inaccurate placement into
developmental education at community colleges (Scott-Clayton, Crosta, & Belfield, 2014; Ngo
& Melguizo, 2015), the overall confusing and arbitrary nature of the assessment and placement
(A&P) process (Bunch, Endris, Panayotova, Romero, & Llosa, 2011; Fay, Bickerstaff, &
Hodara, 2013; Safran & Visher, 2010; Venezia, Bracco, & Nodine, 2010), and the tendency for
students especially from URM backgrounds to underestimate their preparedness (Gray-Little &
Hafdahl, 2000; Oakes, 1990a, 1990b; Stevens, Olivarez, Lan, & Tallent-Runnels, 2004). It is
thus imperative to conduct research that furthers understanding on the information necessary to
accurately place students into levels of the math trajectory.
In this second paper, I first describe students’ behavior through their actual A&P process.
I then use an A&P policy that includes additional measures such as prior math achievement and
that buffers against students’ lack of confidence or knowledge of the A&P process. I examine
changes in math placement distribution and success rates in students’ first attempted
developmental math course under these two placement criteria. I explore this at a single
California community college that allows its students to choose the assessment sub-test used
to place them. I find evidence of misalignment between students’ choice of assessment sub-test
and what they reported as the highest math course completed with a passing grade. After
correcting for this misalignment through an alternative placement criterion, I find that utilizing an
additional measure that assesses prior math preparedness alongside sub-test choice can increase
students’ access to higher levels of math while also maintaining students’ success in their placed
courses; this is especially beneficial to URM students.
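As a rough sketch of how a buffering criterion of this kind might operate, the toy rule below places a student at the higher of (a) the level implied by the chosen sub-test and (b) the level above the student's self-reported highest passed math course. The level names follow the four-course trajectory discussed in Chapter Two; the rule itself is a hypothetical simplification, not the college's actual placement criteria.

```python
# Ordered developmental math trajectory, lowest to highest
LEVELS = ["arithmetic", "pre-algebra", "elementary algebra", "intermediate algebra"]

def buffered_placement(chosen_subtest: str, highest_math_passed: str) -> str:
    """Placement under a hypothetical buffering criterion: take the higher of the
    student's sub-test choice and the level above the highest course passed."""
    chosen = LEVELS.index(chosen_subtest)
    # A student who passed a course is assumed ready to attempt the next level
    prior = min(LEVELS.index(highest_math_passed) + 1, len(LEVELS) - 1)
    return LEVELS[max(chosen, prior)]

# A student who chose the arithmetic sub-test but reports having passed
# elementary algebra would be buffered up to intermediate algebra:
print(buffered_placement("arithmetic", "elementary algebra"))
```

The rule only ever moves students up, which is why it "buffers" against under-confident sub-test choices without penalizing students whose choice already matches their preparation.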
Whereas the first paper describes how students place and progress through their
developmental math sequence, and the second paper demonstrates the extent to which institutional
A&P policies can play a role in increasing student access and success in higher levels of the math
trajectory, my third and final paper examines the extent to which a statewide policy resulted
in differences in developmental education student outcomes. In response to the national
discourse of remediation being the largest barrier to degree completion and greatest factor
associated with dropout, states have reconsidered and redefined what it means to be “college-
ready” for its students. While several states have recently adopted legislation that utilizes a more
flexible interpretation of college-readiness, a recent California mandate increased its standards in
what students are required to know in order to be deemed college-ready. In 2009, California
increased its math and English requirements for an associate’s degree, specifically increasing the
content level students must reach in order to be college-ready.
In my third paper, I examine whether developmental math students’ degree attainment
changed as a result of this increase in standards. Utilizing student-level data from one California
community college’s Office of Institutional Research, I run several regression models to examine
the extent to which a higher math requirement results in changes in the number of degree-
applicable units students earned, the probability students pass the new math requirement, and the
probability students obtain at least 60 degree-applicable units, which is the minimum number of
credits required for an associate’s degree. I also investigate the equity concern and estimate the
extent to which this additional requirement had any disproportionate relationship on URM
student degree attainment. Analyses revealed that the implementation of a higher math
requirement neither increased nor decreased the number of degree-applicable units students
earned, the probability that students passed intermediate algebra (IA), or students’ odds of
earning at least 60 degree-applicable units. Further, though URM students earned fewer degree-
applicable units and were less likely to earn at least 60 degree-applicable units than White students, the increased
standard did not significantly amplify or diminish this relationship. Findings from this study
suggest that the Title V regulation increased standards without limiting access or equity;
however, these findings should be taken with caution given the external policies and economic
environment that confounded my results.
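The models described above can be sketched as follows on simulated data; the variable names, the simulated outcomes, and the unadjusted odds ratio are illustrative assumptions, not the actual specification or results from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
post = rng.integers(0, 2, n)   # 1 = entered under the higher math requirement
urm = rng.integers(0, 2, n)    # 1 = underrepresented racial minority
units = 30 - 5 * urm + rng.normal(0, 10, n)  # simulated degree-applicable units
passed = rng.random(n) < 0.5                 # simulated pass indicator

# Units earned: OLS with intercept, treatment, URM, and the equity interaction
X = np.column_stack([np.ones(n), post, urm, post * urm])
beta, *_ = np.linalg.lstsq(X, units, rcond=None)
print("treatment coefficient on units:", beta[1])
print("treatment x URM interaction:   ", beta[3])

# Unadjusted odds ratio of passing, post- vs. pre-policy cohorts
odds = lambda p: p / (1 - p)
or_post = odds(passed[post == 1].mean()) / odds(passed[post == 0].mean())
print("odds ratio (post vs. pre):", or_post)
```

The interaction coefficient (`beta[3]`) is the quantity that speaks to the equity concern: it captures whether the post-policy change in units earned differed for URM students.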
In each of my three studies, I utilize administrative student-level data from colleges
within the California Community College (CCC) system. Though results from these studies are
only generalizable to this population, given the variety in how developmental education is
implemented, it is important to look at the particular nuances of developmental education specific to
each of the colleges to truly understand how aspects of this program are helping or failing
colleges and their students. Moreover, the CCC system has a decentralized governance structure, so
each college has great autonomy in developing its own assessment and placement policies
(Melguizo, Kosiewicz, Prather, & Bos, 2014).
The structure of this dissertation is as follows. In Chapter Two, I present my
paper illustrating how students progress through their developmental math sequence, and the
extent to which individual-, institutional-, and developmental math program factors play a role in
students’ successful progression. My second paper is presented in Chapter Three, which offers an
in-depth investigation of student behavior in the assessment and placement process, and
whether including an additional measure to assess student confidence in their math preparedness
results in changes in the distribution of math placements and success rates in these courses. In
Chapter Four, I present my third paper which explores the relationship between the state-
mandated math requirement change of an associate’s degree and developmental math student
degree attainment. In Chapter Five, I conclude this dissertation with recommendations for
practitioners and policymakers.
Chapter Two
Increasing Success Rates in Developmental Math: The Complementary Role of Individual
and Institutional Characteristics²
A substantial proportion of students enter higher education underprepared for college-
level work. Data from the National Postsecondary Student Aid Study (NPSAS:04) show that
over half of public two-year college students enroll in at least one developmental education
course during their tenure (Horn & Nevill, 2006). In California, 85 percent of students assessed
are referred to developmental math with the largest proportion to two levels below college-level
(CCCCO, 2011). The considerable number of students placing into developmental education
courses is associated with significant costs to the students and the states (Melguizo, Hagedorn, &
Cypers, 2008; Strong American Schools, 2008).
Students referred to developmental education bear significant financial costs, which may
serve as extra barriers to degree completion. Being in developmental education costs students
time, money, and often financial aid eligibility when the developmental education coursework is
not degree-applicable (Bailey, 2009a, 2009b; Bettinger & Long, 2007). In the case of California,
since the majority of developmental math students are placed into two levels below college-level,
these associate degree-seeking students must pass two math courses to meet the math
requirement for an associate’s degree. Students who are seeking to transfer to a four-year
institution and who placed two levels below college-level need to pass two courses before they
are permitted to enroll in transfer-level math. Both scenarios equate to at least two extra
semesters of math, lengthening the student’s time to degree. Not only do these additional
² Paper recently published; see: Fong, K. E., Melguizo, T., & Prather, G. (2015). Increasing success rates in developmental math: The complementary role of individual and institutional characteristics. Research in Higher Education, 56(7), 719–749.
semesters consume more school resources and add expenses for the students, but they also take
away from students’ potential earnings (Bailey, 2009a, 2009b; Bettinger & Long, 2007;
Breneman & Harlow, 1998).
Thus, it is unsurprising that research has found that the amount of developmental
coursework community college students are required to complete is associated with student
dropout (Bahr, 2010; Hawley & Harris, 2005–2006; Hoyt, 1999). At the same time, however,
students placed into developmental education exhibit characteristics associated with a higher
likelihood of dropping out prior to enrolling in remediation.³ Studies have suggested that some of
the negative impacts of remediation may be attributable to selection bias (e.g., Attewell, Lavin,
Domina, & Levey, 2006; Bettinger & Long, 2005; Melguizo, Bos, & Prather, 2013). For
example, developmental math students are found to be systematically different from college-
level students in terms of gender, ethnicity, first-generation status, academic preparation and
experiences in high school, and delayed college entry (Crisp & Delgado, 2014). Further,
compared to college-level students, developmental students also tend to enroll part-time given
financial obligations and their external commitments to their family and work (Hoyt, 1999).
These factors have been shown to increase the likelihood of dropout (Crisp & Nora, 2010; Hoyt,
1999; Nakajima et al., 2012; Schmid & Abell, 2003).
Developmental education is not only costly to the student, but is an increasingly
expensive program to operate in general. A decade ago, the yearly cost of providing
developmental education was estimated to be around $1 billion (Breneman & Harlow, 1998).
This figure has been recently revised with national estimates indicating that the annual cost of
developmental education is now at about $2 billion (Strong American Schools, 2008). Further,
³ Developmental education is also commonly referred to as remediation or remedial education. These terms will be used interchangeably throughout the article.
this number may be conservative given that developmental education can act as a barrier to
degree completion and predict student dropout (Hawley & Harris, 2005–2006; Horn & Nevill,
2006; Hoyt, 1999). Indeed, Schneider and Yin (2011) calculated that the five-year costs of first-
year, full-time community college student dropout are almost $4 billion. In California alone, the
costs were about half a billion dollars (Schneider & Yin, 2011). Taking together the cost of
student dropout with the annual cost of developmental education, it is evident that developmental
education is a fiscally-expensive program. However, despite the costs of providing
developmental education, proponents argue that underprepared students are better served in
developmental courses than left to flounder in college-level courses (Lazarick, 1997), so it
remains a core function of community colleges (Cohen & Brawer, 2008). Given the difficult
economic climate in California during the Great Recession and the resulting budget cuts to the
state’s community colleges (LAO, 2012) as well as the national push to increase college degree
attainment, improving success rates among developmental education students has become a top
priority for community colleges.
This descriptive study contributes to the growing literature on developmental education
by providing a more comprehensive framework to study students’ successful progression
through their developmental math trajectory. There are two main objectives of this study: (1) to
track community college students’ progression through four levels (i.e., arithmetic, pre-algebra,
elementary algebra, and intermediate algebra) of their developmental math sequences; and (2) to
build a conceptual model of developmental math education by exploring the extent to which
individual-, institutional-, and developmental math-level factors are related to successful
progression through the sequence. We define progression as a two-step process: attempting the
course and passing the course. This is an important distinction, as other studies have determined
the probability of successful progression based on the entire sample of students initially placed
into specific developmental levels (e.g., Bailey, Jeong, & Cho, 2010). Similar to Bahr (2012),
this study offers a different view by measuring only those students who are actually progressing
(attempting and passing) through each course. The sample is drawn from a Large Urban
Community College District (LUCCD) Office of Institutional Research’s computerized database.
We analyzed student transcript data which enabled us to provide more detailed information on
the initial referral, and on subsequent enrollment and progression patterns for developmental
math students. The institutional variables examined in this study were also from the LUCCD
Office of Institutional Research’s computerized database and the public records obtained from
their website (LUCCD Office of Institutional Research, 2012).
The following research questions guide this study:
1) What are the percentages of students progressing through the developmental math sequence?
Does this vary by student placement level? Does this vary by course-level of the
developmental math trajectory?
2) To what extent do individual-, institutional-, and developmental math-level factors relate to
students’ successful progression through their developmental math sequence?
Defining successful progression as attempting each level and passing each level in the
developmental math trajectory, we found that a major obstacle underlying low pass rates is
actually low attempt rates. Once students attempted the courses, they passed them at
relatively high rates. In terms of the comprehensive model of progression in developmental
math, we found that although individual-level variables explained most of the variance in the
models, the institutional-level and developmental math-level factors also helped explain
progression. Specifically, in terms of developmental math-level factors, we found that, holding all
else constant, class-size was inversely related and type of assessment test was positively related
to passing pre-algebra and elementary algebra (middle of the trajectory and where most students
in the LUCCD are placed). We found that developmental math-level factors along with
individual-level factors such as receiving multiple measure points (i.e., proxy for high school
academic preparation) were positively associated with course success rates, which illustrates the
importance of the role community colleges can play in increasing success rates in developmental
math. These findings have important policy implications related to the need for community
colleges to design assessment and placement policies and support systems for students to attempt
and successfully complete the math pre-requisites for their desired credential or degree.
The structure of this paper is as follows. We first review the current literature addressing
the factors associated with student success in developmental education. Next, we describe the
methodological design and empirical model using data from the LUCCD. We then present
findings from our analysis. We conclude with a discussion of the findings and their implications
for research, policy, and practice.
Literature Review
Although research in higher education has demonstrated a relationship between student
success and various institutional factors (Calcagno, Bailey, Jenkins, Kienzl, & Leinbach, 2008;
Hagedorn, Chi, Cepeda, & McLain, 2007; Wassmer, Moore, & Shulock, 2004), the majority of
existing quantitative literature describing student success in developmental education focuses on
student characteristics (Bettinger & Long, 2005; Glenn & Wagner, 2006; Hagedorn, Siadat,
Fogel, Nora, & Pascarella, 1999; Hoyt, 1999). Few studies have explored the extent to which
developmental education student success is related to varying institutional characteristics and
assessment and placement policies. Notable exceptions include the work by Bahr (2010) and
Bailey, Jeong, and Cho (2010). As research in developmental education continues to refine our
understanding of factors related to student success, it is imperative to recognize the relationship
between students and the institutions they enroll in, and the complexities of both.
Characteristics of Students Enrolled in Developmental Education
Historically-underserved student populations are overrepresented in developmental
education. Students placed into developmental education are more likely to be African American
or Latino (Attewell et al., 2006; Bettinger & Long, 2005, 2007; Crisp & Delgado, 2014; Grimes
& David, 1999; Hagedorn et al., 1999; Perry, Bahr, Rosin, & Woodward, 2010), older
(Calcagno, Crosta, Bailey, & Jenkins, 2007), female (Bettinger & Long, 2005, 2007; Crisp &
Delgado, 2014; Hagedorn et al., 1999), low-income (Hagedorn et al., 1999), or first-generation
students (Chen, 2005; Crisp & Delgado, 2014).
Research has also demonstrated that students enrolled in college-level math courses enter
institutions with many advantages over students enrolled in developmental math (Crisp &
Delgado, 2014; Hagedorn et al., 1999). Compared to college-level students, students placed into
remediation reported lower high school GPAs, earned fewer college credits during high school, took
lower-level math classes in high school, and delayed entry into college (Crisp & Delgado, 2014).
Further, Hagedorn et al. (1999) found that student characteristics that are predictive of
their placement into remediation, such as studying less in high school, extend to their relative
success in developmental courses as well. Examining the outcome trajectories of developmental
education students, Bremer et al. (2013) found that students from White/non-Latino
backgrounds, students who attended tutoring services, and students seeking an occupational
(commonly referred to as vocational) certificate were more likely to persist and exhibit higher
GPAs. Math ability was also identified as a powerful predictor of student success; however,
enrollment in developmental math courses was not a significant predictor for retention, and was
negatively associated with GPA in college-level courses. This finding suggests that the main
factor associated with student success is their initial math ability, and taking the additional
remedial courses did not translate into higher educational outcomes.
One limitation of the Bremer et al. (2013) study is that they define remediation as a
binary treatment: either a student is placed in a college-level course or placed into developmental
education. This is problematic given that developmental education in community colleges is
traditionally delivered as a sequence of courses (Bahr 2008, 2012; Bailey, 2009a, 2009b;
Melguizo, Kosiewicz, Prather, & Bos, 2014). Typically, students assigned to developmental
education must successively pass each assigned course in the sequence before they can enroll in
college-level courses in those subjects.
Student Progression through the Developmental Math Sequence
Recent literature on developmental education has examined success based on multiple
levels of the remedial sequence (Bahr, 2009, 2012; Boatman & Long, 2010; Hagedorn &
DuBray, 2010; Melguizo et al., 2013). Bahr (2012) explored the junctures in developmental
sequences to investigate the extent to which students placed into lower levels experienced
differential attrition compared to students placed into higher levels. He investigated three reasons
that developmental students may elect to drop out of their sequence: nonspecific attrition, skill-
specific attrition, and course-specific attrition. Bahr (2012) found evidence of nonspecific and
skill-specific attrition for remedial math and writing. Regardless of the point of entry, students
experience escalating rates of nonspecific attrition. Therefore, nonspecific attrition partially
explains the college-skill attainment gap since low-skill students have more steps in front of
them, and thus suffer greater total losses. Bahr (2012) also found evidence of course-specific
attrition. Students who progressed to beginning algebra from a lower point of entry (e.g.,
arithmetic or pre-algebra) exhibited a lower likelihood of passing the course on their first attempt
compared to students who advanced to other math courses at the same juncture. Therefore,
beginning algebra is associated with the lowest likelihood of success. This also contributes to the
gap in college-level skill attainment between low- and high-skill developmental math students.
Existing literature furthers our understanding of individual characteristics of the
developmental student population and how these factors relate to student success (e.g., Bremer et
al., 2013). The current research also provides rich detail illustrating students’ behavior
throughout their developmental sequences (Bahr, 2012). However, these previous studies do not
include institutional characteristics, which the literature has demonstrated to be influential on
students’ academic outcomes (e.g., Calcagno et al., 2008; Hagedorn et al., 2007; Wassmer et al.,
2004). Additionally, as Melguizo (2011) argues, traditional models of college completion should
expand to developing conceptual frameworks that apply more specific institutional
characteristics, and include programs like developmental education as influential factors related
to college persistence and attainment.
Developmental Student Outcomes by Individual- and Institutional- Characteristics
A number of descriptive quantitative studies on developmental education have
incorporated both individual- and institutional- characteristics. For example, Bahr (2008) used a
two-level hierarchical multinomial logistic regression to examine long-term academic outcomes
of developmental math students compared to college-level students. He compared students who
“remediate successfully” – meaning they passed college-level math – to students who were
initially referred to college-level math and passed the course, and found the two groups
indistinguishable in terms of credential attainment and transfer. Bahr (2008) interprets these
findings as indicative of developmental math programs resolving developmental students’ skill
deficiencies. Considering that efficacy of remediation may vary across levels of initial
placement, he then categorized students based on the first math course they enrolled in (proxy for
initial math placement). Overall, results from this model supported the findings from his
previous models. Controlling for other covariates, Bahr (2008) concluded that remediation is
equally efficacious in its impact on student outcomes across levels of initial placement.
Bahr (2008) notes, however, that he compared only developmental and college-level
students who completed college-level math, thereby eliminating from the analysis 75 percent of
developmental math students who did not complete the course sequence. Therefore,
developmental math was found to be effective, but only for a small percentage of students who
are not representative of the developmental student population. Further, though the analytical
design nests students within institutions, and allows separation of explained variance by student-
level characteristics and institutional-level characteristics, Bahr (2008) does not explore this
variation, instead choosing to control for institutional characteristics (institutional size, degree of
math competency of entering students, goal orientation of each college) while focusing his
discussion on the student-level variables.
Extending the use of institutional predictors, Bahr (2010) investigated whether the major
racial/ethnic groups (White, African American, Latino, and Asian)4 reap similar benefits from
developmental math education.
4
For the purposes of utilizing consistent language throughout the article, we describe the ethnic/racial categories as
African American, Latino, White, and Asian. When previous studies utilize other commonly referred categories like
Black and Hispanic, we have changed these to African American and Latino, respectively.
He found that though all students who successfully complete
college-level math within six years of enrollment experienced favorable long-term academic
outcomes at comparable rates, a sizable racial gap still existed for African American and Latino
students in the likelihood of successful math remediation. Bahr (2010) concluded that rather than
reducing any existing racial disparities in K-12 math achievement, developmental education
amplified these disparities. The overrepresentation of African Americans and Latinos among
students who performed poorly in their first math course (which dissuaded students from the
pursuit of college-level math skill) exacerbated these racial gaps. Though college racial
concentration played a role in the likelihood of successful remediation, success varied across
racial groups. For example, successful remediation neither increased nor decreased for African
American students enrolled in institutions serving a high proportion of African American
students, though it declined for White, Latino, and Asian students. Latino-majority institutions,
meanwhile, were not associated with better outcomes: counter to findings in Hagedorn et al.
(2007), Bahr (2010) found that Latino students enrolled in Latino-majority institutions were less
likely to successfully remediate than their counterparts in colleges serving a smaller Latino
population.
Bailey et al. (2010) explored student progression through multiple levels of
developmental education, and whether placement, enrollment, and progression varied by student
subgroup and by various institutional characteristics. They found that fewer than half of
developmental students complete the entire math sequence, and that 30 percent of students
referred to developmental education never enroll at all. In terms of student characteristics,
they reported that female, younger, full-time, and White students had higher odds of progressing
through math than male, older, part-time, and African American students. Moreover, African
American students in particular had lower odds of progressing through the math sequence when
placed into lower developmental levels. In terms of institutional characteristics, they found that
the size of the college, student composition, and certificate orientation are associated with
developmental student progression even after controlling for student demographics. The odds of
students passing their subsequent math course were better when they attended small colleges,
while odds were lower when students attended colleges that served high proportions of African
American and economically disadvantaged students, had higher tuition, and were more
certificate-oriented.
Though Bailey et al. (2010) built a comprehensive model to explain student success in
developmental math education, their model does not consider specific institutional assessment
and placement policies. Further, the model is limited in explaining initial math ability – because
the measurement of math ability is solely reliant on placement level, it does not consider that
math skill may vary within each placement level. Qualitative investigations into developmental
education have demonstrated diversity in what type of assessment and placement policies
colleges adopt (Melguizo et al., 2014; Perin, 2006; Safran & Visher, 2010). Melguizo et al.
(2014) investigated the assessment and placement policies at community colleges within a large
urban district, and reported substantial variation in the way the nine colleges of the district were
assessing and placing students for the same developmental math sequence. Thus, simply
measuring placement level across colleges may not be accurately measuring students’ actual
incoming math ability. We build on Bailey et al. (2010) by including specific variables
representing the assessment and placement process including which test students were assessed
with, and the assessment test score used to direct students’ developmental math placement.
This study builds on the previous literature by first expanding the conceptual framework
proposed by Bailey et al. (2010) utilizing individual-, institutional-, and developmental math-
level characteristics. We attempt to incorporate the complexity of and differences in the
assessment and placement processes by including a number of factors such as the test used to
assess and place students in the models. Second, we follow Bahr’s (2012) definition of
successful student progression through their developmental math trajectory. We argue that
community colleges cannot be held accountable if their students are not actually enrolling in the
courses. For this reason we explore the association of the individual-, institutional-, and
developmental math-level factors in two stages. We first explore the association of the factors
with attempting the course, and then we explore the association of the factors with successfully
passing the course. As discussed in Bahr (2013), we propose this definition as a more accurate
way of understanding progression through the developmental math sequence, given that in order
for community colleges to increase success rates they also need to understand the factors
associated with attempting these courses.
Methodology
Data
The data employed in this study are from the LUCCD, which is the largest district in
California, and is one of the most economically and racially diverse. Student data come from the
LUCCD Office of Institutional Research’s computerized system. Student-level information was
gathered for students who were assessed at one of the LUCCD colleges between summer 2005
and spring 2008, and then tracked through spring 2010. Thus, data are based on three cohorts of
students in developmental math education: 1) students assessed during the 2005-06 school year
(five years of data), 2) students assessed in 2006-07 (four years), and 3) students assessed in
2007-08 (three years).
The LUCCD Office of Institutional Research also provides annual statistics regarding
student populations and instructional programs for each of the colleges, and makes them publicly
available through the LUCCD website (LUCCD Office of Institutional Research, 2012).
Through these reports, we collected institutional-level data for enrollment trends, student
characteristics, and service area population demographics. We also accessed data on the
developmental math program, including average class size and proportion of classes taught by
full-time faculty. Data were gathered for summer 2005 to spring 2010 to match the student-level
data, and merged with the student dataset based on college attendance.
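That merge step can be sketched as follows. This is our own minimal illustration, not the
authors' code: the column names are hypothetical, and the class-size and faculty figures are
simply the Table 1.2 averages for colleges A and B.

```python
import pandas as pd

# Hypothetical extracts: student transcript records and annual institutional reports.
students = pd.DataFrame({
    "student_id": [1, 2],
    "college": ["A", "B"],
    "placement": ["arithmetic", "pre_algebra"],
})
institutions = pd.DataFrame({
    "college": ["A", "B"],
    "avg_class_size": [41.2, 36.2],   # Table 1.2 developmental math class-size averages
    "pct_ft_faculty": [56.1, 26.1],   # Table 1.2 full-time faculty percentages
})

# Attach each college's institutional measures to its students, keyed on college attended.
merged = students.merge(institutions, on="college", how="left")
print(merged)
```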
Sample
The initial sample included 62,082 students from eight colleges5 who were assessed and
then placed into developmental math education, and who enrolled in any class at the college
where they were assessed. Students may take classes at multiple community colleges; however,
to predict students' progression based on institutional characteristics and assessment and
placement policies, we eliminated the 1,164 students (2%) who did not enroll at the same campus
where they were assessed.6 We also filtered out 886 students (1%) because they did not follow the
college's placement policy and took a higher or lower math course than their referred level.
Lastly, we removed 4,006 students (7%) who were placed directly into college-level math and
thus never enrolled in developmental math, as well as 1,200 students (2%) who were placed into
a level below arithmetic, because only one college referred students to this level. The final
sample included 54,879 students who were placed into developmental math education. Table 1.1
illustrates the breakdown of the final sample by placement level. As illustrated, higher
proportions of older and female students were placed in lower developmental math levels. Larger
proportions of African American and Latino students, and of students receiving financial aid,
were also placed in lower levels of developmental math.
5
One college was dropped from the analysis because the school lacked data on the multiple measure points
employed to place students into developmental math education.
6
This only applies to having the assessment and placement college match the college in which the student takes
their first math course. This was done to ensure that we did not include students who "gamed" the system and had a
test score that would have placed them into one level, but because they took the class at another college, were able to
enroll in a higher level. Some students placed into arithmetic and enrolled in arithmetic at the same college but then
enrolled in pre-algebra at another college; these students are still included in the sample. However, only a small
percentage of students take their math courses at multiple colleges.
Table 1.1.
Student Demographic Characteristics and Enrollment Status by Placement (N = 54,879)

Variables                      Arithmetic   Pre-Algebra   Elementary Algebra   Intermediate Algebra
Student demographics
  Age at assessment (median)        25.8          24.6                22.0                  21.3
  Female                           60.6%         57.5%               52.4%                 48.8%
  Race/Ethnicity
    Asian/Asian American            5.5%          9.4%               14.8%                 24.8%
    African American               17.0%         15.2%                8.5%                  5.8%
    Latino                         65.5%         58.1%               51.2%                 39.3%
    White/Caucasian                 6.7%         11.0%               17.9%                 21.1%
    Other                           5.4%          6.2%                7.6%                  9.1%
Student enrollment
  Full-time                         8.7%         10.5%               14.1%                 18.3%
  Financial aid                    71.0%         63.4%               57.7%                 53.0%
Observations (N)                  15,106        14,879              14,550                10,344

Source: Assessment and enrollment records, LUCCD 2005/06-2009/10.
Table 1.2 summarizes the institutional and developmental math characteristics of each
college. The institutional characteristics in the tables are averaged across the 2005-06 to 2009-10
school years, while the developmental math factors are averaged across time and math level.
Table 1.2.
Average Institutional and Developmental Math Program Characteristics by College

Variables                                     A       B       C       D       E       F       G       H
Institutional variables
  Full-time equivalent students (thousands)  14.7     5.5    11.6    12.6     6.7     6.5    21.0    13.8
  Median family income (thousand $s)         38.2    87.0    29.2    44.9    48.3    95.3    53.1    70.5
  % African American students                11.9     5.1    30.3     6.7    15.5    45.0     2.2     6.8
  % Latino students                          40.7    74.0    53.7    41.9    44.8    27.3    75.6    31.1
  % Certificates awarded                     39.1    36.6    61.8    30.7    14.0    40.7    61.1    30.0
  Academic Performance Index                617.1   630.8   576.3   651.9   634.9   631.2   661.2   701.1
  % HS graduate                              48.6    60.4    54.0    60.4    64.1    58.5    68.2    64.1
  % Transfer/AA educational goal             31.4    35.6    23.1    44.7    37.6    39.3    37.4    48.3
Developmental math factors
  Assessment test                            ACCU    ACCU    ACCU    ACCU    COMP    COMP    MDTP    MDTP
  Class size                                 41.2    36.2    34.7    38.8    38.2    41.9    37.5    34.8
  % Full-time faculty                        56.1    26.1    67.6    46.8    48.3    41.2    46.3    43.3
Observations (N)                            7,751   4,228   5,236   6,590   4,953   2,689  12,926   8,659

Source: Office of Institutional Research Report, LUCCD 2005/06-2009/10.
Note. ACCU = ACCUPLACER, COMP = COMPASS, MDTP = Mathematics Diagnostic Testing Project.
Variables
This study builds an empirical model to explain the probability of students’ successful
progression (attempting and passing) through the developmental math sequence (arithmetic, pre-
algebra, elementary algebra, and intermediate algebra) as a function of individual-, institutional-,
and developmental math-level factors. We include variables based on findings from existing
empirical research. The following empirical model was specified for each progression outcome:
y = α + βx + γv + δz + e
Where,
y = students’ successful progression
x = vector of individual-level characteristics, including student demographics, enrollment
status, and assessment scores and placement
v = vector of institutional-level variables, including institutional size, goal orientation,
and student composition
z = vector of developmental math-level factors, including assessment process and
developmental course information
e = error term
Individual-level variables. The individual-level variables fall into three main categories:
student demographics, enrollment status, and assessment scores and placement. Student
demographics included in the model are age at the time of assessment, female, and race/ethnicity
(Asian, African American, Latino, Other/Unknown, and the reference group, White). These
variables have consistently been found in the literature to be associated with student academic
success (e.g., Bahr, 2009; Choy, 2002; Crisp & Delgado, 2014; Pascarella & Terenzini, 2005).
The enrollment status variables are dichotomous and describe whether students are
enrolled full-time and their financial aid status, which is used as a proxy for income. Among
others, Bailey et al. (2010) found that part-time students were less likely to progress through
their developmental math sequences. Full-time status is a computed variable and represents the
average number of units a student attempted in his or her fall and spring semesters during their
tenure enrolled in community college. Students who were enrolled a minimum of 12 units per
semester were identified as full-time. The literature has also shown that family income is
associated with student success (Choy, 2002; Pascarella & Terenzini, 2005). Students’ financial
aid status describes whether the student was ever enrolled in financial aid programs. The
majority of students seeking financial aid were granted the Board of Governor’s Fee Waiver
Program (BOGGW) or the Pell grant; however, this variable also includes students who received
financial aid through the Cal grant, the Extended Opportunity Programs and Services (EOPS)
grant and book grant, the Federal Supplemental Educational Opportunity Grant, and federal work
study. Since each grant requires proof of financial need to combat the cost of enrollment and
supplies, this composite financial aid status is our proxy for family income.
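The enrollment-status coding described above can be sketched as follows. This is our own
illustration with made-up records and invented column names, not the district's actual
data-processing code.

```python
import pandas as pd

# Hypothetical fall/spring term records: units attempted per semester per student.
terms = pd.DataFrame({
    "student_id": [1, 1, 2, 2],
    "units":      [12, 15, 6, 9],
})
# Whether the student ever received any need-based award (BOG fee waiver, Pell, etc.).
ever_aided = {1: False, 2: True}

# Full-time = averaging 12+ units across fall/spring semesters of enrollment.
status = terms.groupby("student_id")["units"].mean().to_frame("avg_units")
status["full_time"] = status["avg_units"] >= 12
# The composite aid flag serves as the proxy for family income.
status["financial_aid"] = status.index.map(ever_aided)
print(status)
```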
Finally, the majority of prior research has demonstrated a positive association between
students’ initial ability (e.g., developmental placement) and past academic performance with
student outcomes (e.g., Bailey et al., 2010; Hagedorn et al., 1999; Hoyt, 1999). The student
assessment variables identify the level of developmental math in which the student was placed,
his or her test score, and whether the student was awarded multiple measure points.
The placement and test score variables describe students’ initial math ability. The
placement variable refers to the four levels of developmental math education explored in this
study. The test score variable describes the percent of questions the student answers correctly
within their sub-test and then transformed to incorporate their percentages across sub-tests. This
variable was computed by dividing the student’s score by the maximum points per sub-test of
each assessment test, which generated a value for the percentage of questions they answered
correctly. Then, we evenly weighted these sub-test percentages into thirds creating a continuum
of scores across sub-tests.7 Considering previous studies have relied solely on placement level –
a broad category – to distinguish students’ initial ability (e.g., Bahr, 2012; Bailey et al, 2010;
Hagedorn & DuBray, 2010), the inclusion of this variable is a much better control for
individuals’ initial level of academic preparation.
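As a concrete illustration of this transformation: the authors note the exact computation is
available on request, so the following is only one plausible reading, with invented sub-test
names and maximum scores. Each sub-test's percent correct is placed on its own third of a
0-1 continuum.

```python
# Hypothetical maximum raw scores per sub-test; real values differ by assessment test.
MAX_POINTS = {"arithmetic": 17, "elementary_algebra": 12, "college_level": 20}
SUBTEST_ORDER = ["arithmetic", "elementary_algebra", "college_level"]

def continuum_score(subtest: str, raw_score: float) -> float:
    """Percent correct within the sub-test, mapped onto that sub-test's third
    of an evenly weighted 0-1 continuum across sub-tests."""
    pct_correct = raw_score / MAX_POINTS[subtest]
    band = SUBTEST_ORDER.index(subtest)   # 0, 1, or 2: which third of the continuum
    return (band + pct_correct) / 3

# A perfect arithmetic score sits at the top of the lowest third of the continuum.
print(continuum_score("arithmetic", 17))
```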
Lastly, the multiple measure point variable measures past academic performance. During
the matriculation process in the LUCCD, students provide additional information regarding their
educational background and goals. This information is used to determine whether students
should receive points (multiple measure points) in addition to their placement test score. Multiple
measure points vary across colleges (Melguizo et al., 2014; Ngo & Kwon, 2014) and include,
among others, the highest level of math completed, high school GPA, importance of college, the
amount of time elapsed since their last math course, and hours per week they plan on being
employed while attending classes. Multiple measure points can thus result in a student being
placed into the next higher-level course (Melguizo et al., 2014; Ngo & Kwon, 2014). In this
model, the multiple measure point variable is treated as dichotomous and describes whether the
student earned any multiple measure points in the assessment and placement process.
7
Computation available upon request from the authors.
Institutional-level variables. The institutional-level variables are also comprised of three
main categories demonstrated to be associated with student success: institutional size, goal
orientation, and student composition. The size of the institution has been found to be negatively
associated with student success (Bailey et al., 2010; Calcagno et al., 2008). In this study, as in
previous literature, institutional size is measured by full-time equivalent students (FTES), a
student workload measure in which one FTES represents 525 class hours (CCCCO, 1999).
Community colleges may vary in their institutional goal orientation; for example, some
colleges emphasize the transfer-function while others may be more certificate-oriented.
Certificate-oriented colleges may focus on vocational degrees which may not have a math
requirement, rather than associate degrees that do have a math requirement. Research has
demonstrated that students have lower odds of successful progression in developmental math
when enrolled in certificate-oriented community colleges (Bailey et al., 2010). A certificate
orientation variable is included in the model and represents the proportion of certificates granted
out of all certificates and associate degrees awarded over the period the data were collected.
A large proportion of LUCCD colleges’ student population is comprised of students from
the local and neighboring communities, which results in varying populations across colleges. As
illustrated in the existing literature, the composition of the student population is directly related
to student outcomes (e.g., Calcagno et al., 2008; Hagedorn et al., 2007; Melguizo & Kosiewicz,
2013; Wassmer et al., 2004). Wassmer et al. (2004) examined the effect of student composition
on transfer rates in community colleges and found that colleges serving a larger proportion of
students with a higher socioeconomic status and with better academic preparation have higher
transfer rates. Thus, we include measures of high school achievement, student academic
preparedness, and socioeconomic status. These variables are the college’s Academic
Performance Index (API) score, the percentage of high school graduates, the percentage of
students reporting an educational goal of an associate’s degree and/or transfer to a four-year
institution (percentage of student educational goal), and the college income variable which is a
measure of the median income of families residing in each college’s neighboring area. Lastly,
much research has been conducted on the effect of racial/ethnic composition of community
colleges on student outcomes (Bahr, 2010; Calcagno et al, 2008; Hagedorn et al, 2007; Melguizo
& Kosiewicz, 2013; Wassmer et al., 2004). Findings have been mixed, so it is important to
include these variables for the LUCCD colleges, which serve a very ethnically diverse student
population whose composition varies across campuses. In the model, we include variables for
the percentage of African American students and the percentage of Latino students.
Developmental math variables. The developmental math factors range from the
assessment process to the math classroom. The assessment test variable represents each of the
three assessment tests. At the time the students in this sample were assessed, four colleges were
using the ACCUPLACER test, two were using Mathematics Diagnostic Testing Project
(MDTP), and two were using COMPASS. Research from K-12 literature has demonstrated that
smaller class sizes are positively associated with student success (Akerhielm, 1995; Krueger,
2003; Finn, Pannozzo, & Achilles, 2003), so we also included a variable measuring the average
developmental math class size. Lastly, since research has demonstrated that students whose first
math course in a sequence is with a part-time instructor are less likely to pass the second course
when the instructor is full-time (Burgess & Samuels, 1999), we included the average proportion
of full-time faculty teaching developmental math classes.
Analyses
We first used simple descriptive analyses to track community college students’
progression through four levels of their developmental math sequences. We separated the groups
by initial referral and then calculated their attempt and pass rates at each level of their
developmental math trajectories.
We then used step-wise logistic regressions to build our model of success in
developmental math education in three steps. In the first step, we only included student
demographic, enrollment, and assessment variables. In the second step, we added overall
institutional-level variables that describe each college’s student population as well as
institutional size and certificate-orientation. The final step focuses on developmental math
factors; in this step, we included which assessment test the student took, the average
developmental math class size, and the proportion of these classes taught by full-time faculty.
Each of these steps was applied to each of the eight outcomes we used to examine
the progression of students through the developmental math sequence.
The eight outcomes of interest represent successful progression through the
developmental math sequence. Because this study explored four levels of developmental math
education, each level was treated as two separate dichotomous outcomes: attempting versus not
attempting the level, and passing versus not passing the level. Each of the eight
step-wise logistic regressions analyzed a different sample of students. Since students typically do
not attempt courses below their placement, the sample for each outcome variable represents this
filter. In examining the probability of attempting arithmetic, for instance, students referred to
higher levels of developmental math (i.e., pre-algebra, elementary algebra, intermediate algebra)
are not included in the model. An extra restriction is based on how we define progression. As
students progress through their math trajectory, they may only attempt the subsequent course if
they passed the previous one. In estimating the probability of attempting pre-algebra, for
example, of the students initially placed into arithmetic, only those who passed arithmetic may
attempt pre-algebra. This logic is extended in investigating the probability that students pass their
developmental math course. Since students can only pass a course they attempt, students who do
not attempt the course are not included.
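Because each outcome's sample depends on both placement and prior passes, the filtering logic described above can be sketched in a few lines of Python. The field names and numeric level codes here are hypothetical, chosen only for illustration:

```python
# Sketch of the sample filter behind each progression outcome.
# Field names and level codes are hypothetical: AR = 1, PA = 2, EA = 3, IA = 4.
students = [
    {"id": 1, "placed": 1, "passed_up_to": 1},  # placed in AR, passed AR
    {"id": 2, "placed": 2, "passed_up_to": 1},  # placed directly into PA
    {"id": 3, "placed": 3, "passed_up_to": 2},  # placed in EA, above PA
]

def eligible_to_attempt(level, student):
    """A student enters the 'attempt level' sample if placed directly into
    that level, or placed below it and passed every course up to level - 1."""
    if student["placed"] > level:
        return False  # placed above this level; never takes the course
    if student["placed"] == level:
        return True   # directly placed into this level
    return student["passed_up_to"] >= level - 1  # persisted from below

# Sample for the "attempt pre-algebra" outcome (level 2): directly placed
# students plus arithmetic students who passed arithmetic.
sample = [s for s in students if eligible_to_attempt(2, s)]
print([s["id"] for s in sample])  # [1, 2]
```

The same predicate, applied with a higher `level`, yields the progressively smaller samples used for the elementary and intermediate algebra outcomes.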
Limitations
There are several limitations to this study. First, this is a descriptive study
examining which variables are associated with students attempting and passing the levels in their
developmental math sequence. Though we have a rich set of variables in our model, there is no
reason to assume we have controlled for all possible factors associated with progression through
the sequence. Because of potential omitted variable bias, and thus a correlation with the residual
term, we are unable to make any causal inferences from our findings.
Second, the cohorts used in this study were tracked for between three and five years.
This time frame includes at least six semesters' worth of data, which is enough time
for students who are referred to the lowest developmental math level to progress through their
entire sequence. However, because the failure rate in developmental education courses is higher
than that of college-level courses (U.S. Department of Education, 1994), students may be
attempting courses multiple times which increases the amount of time necessary to complete
their sequence (Bahr, 2012; Perry et al., 2010). Our estimates may, therefore, be conservative if
students continued to attempt previously failed courses and progress through the sequence
beyond the time frame of our study.
A third limitation is that the LUCCD dataset did not include important factors that
research has demonstrated to be associated with student academic success including student
attributes like motivation and perseverance (e.g., Duckworth, Peterson, Matthews, & Kelly,
2007; Hoyt, 1999; Hagedorn et al., 1999), and environmental factors like the number of hours
worked and financial obstacles demonstrated to decrease the likelihood of persistence (Crisp &
Nora, 2010; Hoyt, 1999; Nakajima et al., 2012; Schmid & Abell, 2003). Further, we did not
include the concentration of the underprepared student population, which Bahr (2009) found to be
positively associated with remedial students’ conditional rate of progress. Lastly, though our model is one
of the first to include variables specific to colleges’ developmental math program, we are
missing factors related to faculty education level (among others). Research has found that
compared to developmental math faculty without a graduate degree, those with a graduate degree
are associated with better student outcomes (Fike & Fike, 2007).
There are limitations with the variables included in the model as well. For example,
financial aid status may be time-varying; however, we only had access to student information at
initial assessment. Our student test score variable is also limited. We were able to compute a
variable for student test score to assess students’ math ability prior to beginning their sequence
rather than solely relying on the broad placement variable. However, there are inherent biases
within this variable due to each college’s differing placement criteria. For example, since one of
the colleges had a substantial majority of its students take the lowest sub-test, their scores are
driven downward in comparison to other colleges. Though this variable has limitations, it is the
most accurate measurement for students’ initial math ability we could calculate that controls for
the varying range of scores possible for each sub-test as well as the different levels of sub-tests.
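One plausible way to put scores from different sub-tests on a common scale, in the spirit of controlling for each sub-test's score range, is min-max rescaling. The sketch below is illustrative only; the sub-test names and score ranges are assumptions, not the study's actual values:

```python
# One plausible normalization of raw placement-test scores onto a common
# 0-100 scale. Sub-test names and score ranges here are assumptions for
# illustration; actual ranges differ by test, sub-test, and college.
SUBTEST_RANGES = {
    "arithmetic": (0, 50),
    "elementary_algebra": (0, 45),
}

def normalized_score(subtest, raw):
    """Min-max rescale a raw sub-test score to 0-100 so that scores from
    sub-tests with different possible ranges are comparable."""
    lo, hi = SUBTEST_RANGES[subtest]
    return 100 * (raw - lo) / (hi - lo)

print(normalized_score("arithmetic", 25))          # 50.0
print(normalized_score("elementary_algebra", 45))  # 100.0
```

Rescaling of this kind makes a mid-range arithmetic score and a mid-range algebra score numerically comparable, but it does not remove the compositional bias noted above when colleges route different shares of students to different sub-tests.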
Results
Since this study offers an alternative view of progression through the
developmental math trajectory, we include the traditional understanding and sample definition in
Appendix P1-A as a source of comparison.⁸ The method of measurement utilized in this
study calculated pass rates based on the number of students who attempted the course, and
calculated attempt rates of the subsequent course based on whether students passed the prerequisite;
thus, we only included students who were actually persisting through their math trajectories.
Descriptive Statistics of Student Progression
As illustrated in Figure 1.1, students are relatively evenly distributed across the
developmental math placement levels, though slightly more students are referred to lower levels
of developmental math. Figure 1.1 demonstrates the progression of students through the
developmental math sequence by their initial placement level.⁹ Starting at the lowest level, of the
15,106 students initially referred to arithmetic, 61 percent (n = 9,255) attempted arithmetic, and
64 percent (n = 5,961) of those attempting the course passed it. Students who pass
arithmetic may choose to attempt pre-algebra, which 72 percent (n = 4,310) do and 79 percent
pass (n = 3,412). Of these 3,412 students, 83 percent (n = 2,833) attempt elementary algebra and 75
percent (n = 2,127) pass the course; 65 percent (n = 1,393) attempt intermediate algebra and 72
percent (n = 1,004) pass. Similar patterns exist for the other placement levels. However, it
appears that across initial placement levels, the largest proportions of students stop progressing
by not attempting their initial math course and not passing this course; this is especially apparent
for the lowest levels of math.

⁸ The traditional manner of measuring successful progression through the developmental math trajectory calculates
attempt and pass rates of developmental math students by dividing the number of students who attempt or pass the
course by the total number of students initially placed into each level, thus utilizing the entire sample of students
regardless of their enrollment in developmental math.

⁹ Attempt is defined as enrolling in math and remaining past the no-penalty drop date. When computing the rates for
all those who enroll, whether or not they remain in the course, the attempt percentages decrease though the pass rates
remain stable.
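The conditional attempt and pass rates reported above for the arithmetic cohort can be reproduced directly from the counts in Figure 1.1; each rate is conditional on surviving the prior step, as this short Python sketch shows:

```python
# Conditional progression rates for students initially referred to arithmetic
# (counts from Figure 1.1; each denominator is the pool eligible for that step).
referred = 15106
attempted_ar, passed_ar = 9255, 5961
attempted_pa, passed_pa = 4310, 3412
attempted_ea, passed_ea = 2833, 2127
attempted_ia, passed_ia = 1393, 1004

def rate(numer, denom):
    """Percentage of the eligible pool taking (or passing) the next step."""
    return round(100 * numer / denom)

steps = [
    ("attempt AR", attempted_ar, referred),      # 61%
    ("pass AR",    passed_ar,    attempted_ar),  # 64%
    ("attempt PA", attempted_pa, passed_ar),     # 72%
    ("pass PA",    passed_pa,    attempted_pa),  # 79%
    ("attempt EA", attempted_ea, passed_pa),     # 83%
    ("pass EA",    passed_ea,    attempted_ea),  # 75%
    ("attempt IA", attempted_ia, passed_ea),     # 65%
    ("pass IA",    passed_ia,    attempted_ia),  # 72%
]
for label, numer, denom in steps:
    print(f"{label}: {rate(numer, denom)}%")
```

This conditional chaining is what distinguishes our measurement from the traditional approach described in footnote 8, where every rate is divided by the full initially placed cohort.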
All the patterns illustrated in Figure 1.1 may be a factor of student observable and unobservable
characteristics as well as of each college’s assessment and placement policies and other
institutional factors.

[Figure 1.1. Percentage of students attempting and passing each level of the developmental math
trajectory based on initial placement. Initial referral Ns: Arithmetic N = 15,106; Pre-Algebra
N = 14,879; Elementary Algebra N = 14,550; Intermediate Algebra N = 10,344.]

One important note is that from summer 2005 to spring 2010, the time frame of the dataset, the
math requirement to obtain an associate’s degree
was elementary algebra. So, students’ attempt rates between the elementary and intermediate
algebra levels may be a function of their educational goals. Therefore, two groups may emerge:
students whose educational goal is an associate’s degree and students who are intending to
transfer to a four-year institution.
By unpacking the attempt and pass rates for each level of the developmental math
sequence, we offer a more detailed illustration of developmental student progression. Figure 1
shows that students who enter at lower developmental levels are passing the higher courses at
rates comparable to those who are initially placed in higher levels if they attempt those levels.
However, there are a large number of students who exit their sequence at different points along
the trajectory. Consistent with Bailey et al. (2010), Figure 1 reveals that across levels, most
students exit the sequence by not attempting or not passing their initial course. Though only a
small number of students make it through to the highest levels, Figure 1 suggests that when
students attempt developmental courses, these courses are helping students gain the skills
necessary to successfully pass the course required for an associate degree and the pre-requisite
course for transfer-level courses. In the next section, we examine what factors influence the
probability of successful progression, as measured by these attempt and pass rates.
Model Fit
Though the first stage in the step-wise logistic regressions, which only included
observable student characteristics, explained the most variance in each outcome, we found that
each successive step in model-building statistically improved model fit to the data. The
final model for each outcome demonstrated the best fit; however, the explained variance
remained relatively small, ranging from five percent for attempting arithmetic to 17 percent for
attempting pre-algebra (see Table 1.3). In comparing model fit across progression outcomes, we find relatively
large differences in the variance explained. With the exception of attempting compared to passing arithmetic, the model explained
more variance for attempting each level (pre-algebra, elementary algebra, intermediate algebra) than for passing those levels.
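For readers less familiar with these fit statistics, the quantities reported in Table 1.3 can be computed from model log-likelihoods. The sketch below uses illustrative log-likelihood values, not the study's actual ones:

```python
# How the Table 1.3 fit statistics are computed from log-likelihoods.
# The log-likelihood values below are illustrative, not the study's.
ll_null  = -10000.0  # intercept-only model
ll_step1 = -9500.0   # step 1: student characteristics
ll_step2 = -9450.0   # step 2: + institutional characteristics

def pseudo_r2(ll_model, ll_base):
    """McFadden pseudo R-squared: 1 - (log L_model / log L_null)."""
    return 1 - ll_model / ll_base

def lr_stat(ll_restricted, ll_full):
    """Likelihood-ratio chi-square: 2 * (log L_full - log L_restricted).
    Degrees of freedom = number of parameters added by the step
    (6 for the institutional step, 4 for the developmental math step)."""
    return 2 * (ll_full - ll_restricted)

print(round(pseudo_r2(ll_step1, ll_null), 3))  # 0.05
print(round(pseudo_r2(ll_step2, ll_null), 3))  # 0.055
print(lr_stat(ll_step1, ll_step2))             # 100.0
```

The "Pseudo ΔR²" columns in Table 1.3 are simply the difference between the pseudo R² of consecutive steps, and the LR χ² statistic tests whether that improvement is statistically significant.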
Table 1.3.
Conceptual Model Fit in Three Stages (a)

Outcome         | Model 1: Student Characteristics | Model 2: + Institutional Characteristics | Model 3: + Developmental Math Factors
                | Pseudo R²                        | Pseudo R²  | Pseudo ΔR² | LR χ²(6)       | Pseudo R²  | Pseudo ΔR² | LR χ²(4)
Attempt (b) AR  | 0.0483                           | 0.0507     | 0.0024     | 47.14***       | 0.0524     | 0.0017     | 33.65***
Pass (c) AR     | 0.0896                           | 0.1061     | 0.0165     | 196.61***      | 0.1110     | 0.0049     | 59.24***
Attempt PA      | 0.1669                           | 0.1727     | 0.0058     | 130.89***      | 0.1733     | 0.0006     | 13.04**
Pass PA         | 0.0567                           | 0.0809     | 0.0242     | 414.83***      | 0.0853     | 0.0044     | 76.43***
Attempt EA      | 0.1375                           | 0.1462     | 0.0087     | 187.90***      | 0.1578     | 0.0116     | 252.10***
Pass EA         | 0.0418                           | 0.0567     | 0.0149     | 309.84***      | 0.0619     | 0.0052     | 107.54***
Attempt IA      | 0.1253                           | 0.1494     | 0.0241     | 473.46***      | 0.1500     | 0.0006     | 12.50**
Pass IA         | 0.0383                           | 0.0486     | 0.0103     | 188.31***      | 0.0544     | 0.0058     | 105.15***

a. Source: Assessment and enrollment records, LUCCD, 2005/06-2009/10.
b. Attempt is defined as continuing to be enrolled in the course after the no-penalty drop deadline.
c. Passing is defined as successfully completing a course, meaning receiving a 'C' or better, which is required for advancement to the next-level course.
*p < .10, **p < .05, ***p < .01
Odds Ratio Results from Final Model of Each Progression Outcome¹⁰
Student characteristics. Student characteristics account for the most explained variance
in each regression analysis. Female students have higher odds of progressing throughout the
entire developmental math trajectory. While each additional year of age decreases the odds of
attempting the courses, it increases the odds of passing the courses, though the effect sizes are
relatively small. African American students are less likely to progress through the sequence
compared to White students. Compared to White students, Latino students have higher odds of
attempting each math level but lower odds of passing each level (see Table 1.4).
Students’ enrollment status variables are associated with the probability of successful
progression throughout the entire trajectory. Results indicate that students who are enrolled full-
time and students with financial aid have higher odds of progression compared to part-time
students and those without financial aid. The only exception is that full-time enrolled students
have lower odds of passing arithmetic.
Our transformed student test score variable demonstrates that each additional point
results in higher odds of successful progression; however, test score is most strongly associated
with passing arithmetic and its practical significance weakens at higher levels of math.
Overall, having obtained a multiple measure point significantly increases the odds of passing
each level, though it is not significantly related to attempting the courses.

Table 1.4.
Odds Ratio Results from Final Model of Each Progression Outcome (a)
(standard errors in parentheses)

Variables | Attempt (b) AR | Pass (c) AR | Attempt PA | Pass PA | Attempt EA | Pass EA | Attempt IA | Pass IA

Student Characteristics
Age | 0.975*** (0.002) | 1.036*** (0.003) | 0.976*** (0.002) | 1.040*** (0.003) | 0.987** (0.004) | 1.039*** (0.003) | 0.957*** (0.002) | 1.030*** (0.004)
Female | 1.474*** (0.053) | 1.444*** (0.071) | 1.244*** (0.046) | 1.222*** (0.049) | 1.240** (0.093) | 1.348*** (0.048) | 0.978 (0.039) | 1.214*** (0.046)
Asian American | 1.004 (0.101) | 1.045 (0.154) | 1.029 (0.081) | 1.097 (0.113) | 0.963 (0.119) | 1.070 (0.082) | 0.864* (0.052) | 1.089 (0.076)
African American | 1.004 (0.083) | 0.371*** (0.045) | 0.801** (0.058) | 0.385*** (0.035) | 1.001 (0.132) | 0.423*** (0.034) | 0.740*** (0.061) | 0.431*** (0.040)
Latino | 1.245** (0.092) | 0.636*** (0.066) | 1.252*** (0.077) | 0.674*** (0.051) | 1.332** (0.135) | 0.614*** (0.036) | 1.094 (0.061) | 0.623*** (0.036)
Other | 0.960 (0.096) | 0.698* (0.099) | 1.033 (0.092) | 0.846 (0.092) | 1.133 (0.168) | 0.735*** (0.062) | 0.913 (0.071) | 0.974 (0.073)

Student Enrollment Status
Full-time | 1.477*** (0.097) | 0.845* (0.068) | 1.492*** (0.095) | 1.332*** (0.087) | 1.324* (0.154) | 1.382*** (0.073) | 1.651*** (0.096) | 1.522*** (0.080)
Financial Aid | 1.711*** (0.067) | 1.285*** (0.071) | 1.760*** (0.066) | 1.202*** (0.053) | 1.885*** (0.145) | 1.287*** (0.050) | 1.615*** (0.066) | 1.169*** (0.048)

Student Assessment Variables
Test score | 1.052*** (0.006) | 1.153*** (0.009) | 1.024*** (0.003) | 1.032*** (0.003) | 1.000 (0.005) | 1.016*** (0.002) | 1.001 (0.002) | 1.007*** (0.002)
Multiple measure point | 1.045 (0.044) | 1.272*** (0.067) | 1.013 (0.045) | 1.158** (0.053) | 1.195 (0.118) | 1.234*** (0.050) | 0.955 (0.047) | 1.275*** (0.058)
AR placement | - | - | 31.733*** (3.475) | 1.667*** (0.111) | 0.477 (0.458) | 1.345* (0.170) | 10.642*** (2.007) | 1.097 (0.140)
PA placement | - | - | - | - | 0.265* (0.168) | 1.215* (0.096) | 2.722*** (0.272) | 1.186 (0.114)
EA placement | - | - | - | - | - | - | 2.816*** (0.189) | 1.420*** (0.088)

Institutional Characteristics
FTES | 0.919 (0.041) | 1.121* (0.063) | 1.115* (0.051) | 1.004 (0.046) | 0.877 (0.063) | 0.852*** (0.015) | 0.981 (0.043) | 0.901* (0.040)
%Certificates Awarded | 1.008 (0.005) | 0.998 (0.007) | 0.995 (0.005) | 1.003 (0.005) | 1.007 (0.014) | 1.000 (0.005) | 1.013* (0.006) | 1.023*** (0.006)
%HS graduates | 1.057 (0.033) | 0.983 (0.042) | 1.003 (0.021) | 1.016 (0.025) | 1.259*** (0.079) | 1.034* (0.016) | 1.029 (0.023) | 1.044* (0.021)
%degree goal | 1.042 (0.024) | 0.937* (0.030) | 0.932** (0.020) | 0.928** (0.022) | 0.976 (0.049) | 1.043** (0.015) | 1.045 (0.024) | 1.011 (0.022)
API score | 0.514** (0.124) | 2.019 (0.739) | 1.046 (0.026) | 0.991 (0.025) | 1.095 (0.055) | 0.989 (0.009) | 1.027 (0.021) | 1.004 (0.021)
%African American | 1.025 (0.060) | 1.040 (0.097) | 1.044 (0.035) | 0.981 (0.035) | 1.148 (0.090) | 1.045** (0.014) | 1.081 (0.068) | 1.037 (0.072)
%Latino | 1.062* (0.029) | 1.013 (0.038) | 1.018 (0.025) | 0.966 (0.025) | 1.244*** (0.061) | 1.018 (0.009) | 1.051** (0.017) | 1.020 (0.014)
Median family income | 0.413** (0.127) | 2.119 (0.964) | 1.017 (0.015) | 0.975 (0.014) | 0.905** (0.029) | 0.959*** (0.005) | 0.940 (0.037) | 0.954 (0.042)

Developmental Math Factors
MDTP | 4.7e+17** (6.92e+18) | 0.000 (0.000) | 0.121 (0.148) | 1.110 (1.367) | 0.010* (0.018) | 5.785*** (1.630) | 0.391 (0.284) | 2.059 (1.386)
Compass | 2.0e+25** (3.83e+26) | 0.000 (0.000) | 0.583 (0.462) | 4.157 (3.590) | 0.864 (1.118) | 0.994 (0.156) | 0.369 (0.345) | 1.596 (1.683)
Class size | 0.044** (0.049) | 15.095 (24.915) | 0.949 (0.040) | 0.801*** (0.037) | 0.979 (0.022) | 0.967*** (0.006) | 0.986 (0.100) | 0.904 (0.100)
Taught by full-time faculty | 0.235** (0.127) | 4.214 (3.432) | 1.031 (0.046) | 0.939 (0.045) | 1.019 (0.035) | 0.995 (0.004) | 0.938 (0.061) | 0.981 (0.072)

N | 14,825 | 9,175 | 20,612 | 14,291 | 20,069 | 17,510 | 22,286 | 15,905

a. Source: Assessment and enrollment records, LUCCD 2005/06-2009/10.
b. Attempt is defined as continuing to be enrolled in the course after the no-penalty drop deadline.
c. Passing is defined as successfully completing the course, meaning receiving a 'C' or better, which is required for advancement to the next-level course.
*p < .10, **p < .05, ***p < .01

¹⁰ Because of the statistical power in our models, we only discuss findings significant at the p < .01 level. All results
are provided in Table 1.4.
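To gauge practical significance, an odds ratio from Table 1.4 can be converted into a change in predicted probability. The sketch below applies the 1.474 odds ratio for female students attempting arithmetic to a 61 percent baseline attempt rate; the baseline is illustrative, not model-derived:

```python
# Convert a logistic-regression odds ratio into a shift in predicted
# probability. The 1.474 odds ratio is from Table 1.4 (female, attempt AR);
# the 0.61 baseline probability is illustrative, not model-derived.
def apply_odds_ratio(p_baseline, odds_ratio):
    """Multiply the baseline odds by the odds ratio, then convert back
    to a probability."""
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

p = apply_odds_ratio(0.61, 1.474)
print(round(p, 3))  # roughly 0.697
```

Note that the same odds ratio implies a smaller probability shift when the baseline probability is close to 0 or 1, which is why odds ratios and probability changes should not be read interchangeably.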
The most interesting finding is the relationship between students who are persisting
through the sequence (persisting-students) and students initially placed at the higher
level. Consistent with Bahr (2012), results reveal that the odds of attempting and passing each
subsequent course are higher for persisting-students compared to initially-placed students.¹¹ For
example, students who were initially placed into arithmetic, and attempted and passed arithmetic,
exhibited higher odds of both attempting and passing pre-algebra compared to students who were
initially placed into pre-algebra (see Table 1.4). One exception to these findings is that
persisting-students who are initially placed into pre-algebra have lower odds of attempting
elementary algebra compared to those initially placed into elementary algebra. There are also a
few non-significant results listed in Table 1.4. Overall, findings suggest that students who are
actually persisting are “catching up” and are even more successful than their peers who began
their sequences with higher math ability (as measured by the assessment test).

¹¹ See Appendix P1-A Table A2 for results of progression outcomes which used the traditional selection process.
These results are similar to Bailey et al. (2010).
Institutional characteristics. Most of the institutional variables included in the
regression analyses are not statistically significant; however, those institutional variables that are
significant are concentrated at the elementary algebra level. The significant institutional factors
describe the college student composition. For example, each additional percentage point of an
institution’s student population that is Latino is associated with an increase in students’ odds of
attempting elementary algebra. The same is true for each additional percentage point of high school
graduates at an institution. Surprisingly, we found that for every increase in median family income, the odds
of students passing elementary algebra decrease, though this may be partially due to various
support services specifically for low-income students. The concentration of significant
institutional variables at the elementary algebra level is most likely due to the fact that
elementary algebra was the math requirement for an associate’s degree.
The percent of certificates awarded by an institution is the only institutional variable
associated with passing intermediate algebra. For every one-percentage-point increase in certificate
orientation, students’ odds of passing intermediate algebra increase. This finding alludes to
Calcagno et al.’s (2008) conclusion that well-prepared students will do well regardless of
institution. Thus, students who attempt intermediate algebra may represent a specific student
subgroup defined by motivational characteristics or particular support systems. Another
potential explanation is that this finding may be a function of instruction within higher-level
math courses.
Developmental math factors. Similar to the institutional variables, most of the
characteristics of each institution’s developmental math program did not significantly influence
the probability of students’ successful progression through their trajectory. We did find that
students who were assessed and placed by the MDTP test have higher odds of passing
elementary algebra compared to students who took the ACCUPLACER test. However, further
analysis reveals that colleges using MDTP have a larger percentage of their student population
placing into elementary algebra (56.5 percent) compared to ACCUPLACER colleges (43.5
percent). Thus, this finding may be more a function of student composition and the institutions’
cut-score rather than which assessment test was used. Class size is the only program factor in the
model associated with a decrease in the probability of successful student progression. For every
additional student in a pre-algebra class, students’ odds of passing pre-algebra decrease; similar
results are found for elementary algebra.
Discussion
The final model (including student and institutional characteristics as well as variables related to the
developmental math program at each college) explains only between five and 17 percent of the
variance in the probability of students’ successful progression through their developmental math
sequences in the LUCCD. With the exception of arithmetic, the model explains more variance in
attempting each level compared to passing those levels. We found that though each additional
step significantly improves the model fit, the largest proportion of variance for each progression
outcome is explained by student-level characteristics.
Student Background and Assessment Variables are Related to Successful Student
Progression through the Developmental Math Sequence
In the final logistic model, results show that, on average, female students are more likely
to progress at every stage of the developmental math trajectory compared to their male peers. We
also found that African American students are less likely to successfully progress compared to
White students. These findings are consistent with the current literature (Bailey et al., 2010).
Contrary to existing findings, we found that when progression is disaggregated by attempting
and passing each level, Latino students have higher odds of attempting each level but lower odds of passing.
Partially explaining this finding are the potential environmental pull factors experienced by a
large proportion of community college students, and especially Latino students enrolled in
developmental education. Crisp and Nora (2010) found that for these students, environmental
pull factors, such as the number of hours worked per week, decrease the likelihood of success
(persisting, transferring, or earning a degree). Thus, the competing demands for students’ time
may result in lower success rates in developmental education. Because these students are actually
trying to progress but seem to still have a difficult time passing, it is imperative that future
research seeks to understand teaching pedagogy and strategies employed in the classroom.
Results reveal that students who enter at lower developmental levels are attempting and
passing the higher courses at rates comparable to those who are initially placed in higher levels,
if they attempt those levels. Similarly, Martorell, McFarlin, and Xu (2013), employing a
regression-discontinuity design, found that placement into developmental education did not act as
a discouragement to enrollment. Though only a small number of students make it through to the
highest levels, these progression rates are illustrating that persisting-students are “catching up”
and even exceeding their peers who were initially placed into higher courses.
This finding is similar to Bahr (2008), who compared students who were initially placed
in developmental math and passed college-level math to students who were initially placed into
college-level math and passed the course. He found that these two groups of students had similar
academic outcomes in terms of credential attainment and successful transfer to a baccalaureate-
granting institution. One limitation, though, of this study and Bahr’s (2008) is that we are using
a subset of students (students who persevered through their sequences) who may be highly
motivated and more college-prepared than the “average” developmental student. Due to the
self-selection issue and descriptive nature of our analysis, we cannot conclude whether these highly
motivated students would have done worse, just as well, or better had they been placed in higher
developmental math levels.
However, studies employing quasi-experimental designs have found similar results; this
suggests that developmental education has the potential to help students when they are placed in the
level that matches their ability. Utilizing a regression-discontinuity design within a discrete-time
hazard analysis, Melguizo et al. (2013) estimated the effect of being placed into lower levels of
developmental math on passing the subsequent course and accumulating 30 degree-applicable
and 30 transferable credits. They conducted the analysis for the four levels of the developmental
math sequence (arithmetic, pre-algebra, elementary algebra, and intermediate algebra) and for
seven colleges in the LUCCD. The primary finding that emerged was that while, on average,
initial placement in a lower-level course increases the time until a student completes the higher-
level course by about a year, after this time period, the penalty for lower placement was small
and not statistically significant. Still, this finding is dependent on the course level in which the
student is placed and the college he or she attends. In terms of accumulating 30 degree-
applicable and 30 transferable credits, they concluded that there is little short- or long-term cost to
a lower placement for students at the margin.
We also find that while student test scores increase the odds of progression throughout
the math trajectory, the practical significance decreases with higher levels of math. Similar to the
above findings, this result suggests that students’ initial math ability is less important as they
progress through their sequence. Having obtained a multiple measure point(s) also increases the
odds of passing each level, though it has no significant influence on attempting the courses.
Thus, obtaining multiple measure points is predictive of student success. Other community
college systems, such as Texas, have begun considering multiple measures (e.g., prior academic
coursework, non-cognitive factors) in the assessment and placement process of students
(THECB, 2013). The finding from this study, as well as that of Ngo and Kwon (2014), demonstrates the
importance of these measures. Ngo and Kwon (2014) reported similar findings and concluded
that students who were placed in a higher developmental math level because of multiple measure
points for prior math background and high school GPA performed just as well as their peers.
Institutional Characteristics Were neither Strongly Related to Student Progression nor
Consistent across the Sequence
Compared to student-level variables, the institutional characteristics and factors related
specifically to the institution’s developmental math program are not as strongly related across the
progression outcomes. Overall, it appears to matter little where students enroll in
developmental math, since institutional characteristics have little systematic relationship with students’
probabilities of attempting and passing each level. This finding is similar to the Calcagno et al.
(2008) study which concluded that well-prepared students with economic resources will graduate
regardless of institution and students with many challenges may have difficulty even in strong
colleges. However, placement is directly affected by each college’s placement criteria (Melguizo
et al., 2014); thus, while the observed institutional and developmental math factors may not have
a direct relationship with the progression outcomes, they do directly determine student
placement.
One exception is that institutional size is associated with a decrease in the odds of passing
elementary algebra. This result indicates that smaller institutions provide a more conducive
environment for successfully passing elementary algebra. Another exception is that an increase
in median family income lowers the odds of passing elementary algebra. This finding could be
considered counter-intuitive and is inconsistent with Melguizo and Kosiewicz (2013) who found
lower success rates for students attending colleges that are segregated either by socioeconomic
status or race/ethnicity of the students. However, institutions with higher proportions of low-
income students may have higher-funded special support programs (e.g., Extended Opportunity
Programs and Services). These results may thus be reflecting what Gándara, Alvarado, Driscoll,
and Orfield (2012) highlighted, which is that programs targeting underrepresented and low-
income students are important in increasing their academic success.
The percentage of high school graduates enrolled in the colleges increased the odds of
attempting elementary algebra. Thus, having this type of student population may create a college
culture more driven towards a pure collegiate function (Cohen & Brawer, 2008), since
elementary algebra was the math requirement for an associate’s degree. The percentage of Latino
students enrolled at the college also increases the odds of attempting elementary algebra. This
finding is promising given that Latinos represent the largest (and growing) proportion of students
within California community colleges. Our results reflect the Hagedorn et al. (2007) study, which
found a positive relationship between Latino representation and Latino student outcomes. These
minority-majority colleges appear to create an environment that fosters a collegiate function –
perhaps due to an increase in students’ sense of belonging and/or specific programs aimed at
increasing degree completion for these student populations. However, it is important to note that
while Latino representation increased the odds of attempting developmental math, it had no
significant relationship with passing the courses. Thus, further effort is necessary to ensure
students are passing courses once they are enrolled.
Finally, one counter-intuitive finding was that certificate orientation is associated with an
increase in the odds of passing intermediate algebra. Due to the filtering manner in which the
sample for each outcome was selected, this finding may be illustrative of a dichotomy between
the types of students within each institution, regardless of the institution’s certificate-, degree-, or
transfer-orientation.
Developmental Math Class Size and Assessment Test Variables are Associated with
Student Progression
The developmental math variables were significantly related to successful progression for
passing pre-algebra and elementary algebra. The class sizes of pre-algebra and elementary
algebra courses are associated with a decrease in the odds of passing each course, respectively.
This finding suggests that smaller class sizes are beneficial to developmental math students,
which is consistently demonstrated in K-12 educational research (e.g., Akerhielm, 1995;
Krueger, 2003; Finn, Pannozzo, & Achilles, 2003). Given the increases in enrollment and
developmental math placement, this finding has important implications for community colleges.
Students who were assessed and placed using MDTP rather than ACCUPLACER have
higher odds of passing elementary algebra. However, since further analysis revealed a larger
percentage of MDTP colleges’ student population placing into elementary algebra, this finding
may be a function of student composition and the institutions’ placement criteria rather than
which assessment test was used. Future research may be able to further our understanding of how
these assessment and placement policies play a role in student progression.
Conclusions and Policy Implications
This study contributes to the literature by providing a more detailed and context-based
description of the progression of students through their developmental math sequence. Our
analysis demonstrates that more variance is explained for attempting each developmental math
level compared to passing the level (with the exception of arithmetic). In addition, this study
expands the traditional conceptual framework used to understand student progression by
documenting the importance of including additional variables such as institutional-level
characteristics (e.g., class size) and all the variables used by colleges to assess and place students
(e.g., multiple measures). Below, we summarize the policy implications of some of the main
findings.
1. The largest barrier for students placed into developmental math in the LUCCD appears
to be attempting their initial course. Since the assessment and placement process varies
by state and college, our findings are only generalizable to the LUCCD colleges.
However, this result is consistent with previous research demonstrating large
proportions of students failing to attempt their first remedial math course (Bahr, 2012;
Bailey et al., 2010). The field would benefit from qualitative research exploring what
occurs between the time students are assessed and the time they enroll in math, as well as
from observing the classroom environment and teaching pedagogy within developmental math
courses (Grubb, 2012).
2. Attempt rates are clearly aligned with the required courses to attain a degree, and this
result helps explain the relatively lower attempt rates of intermediate algebra. This is an
important finding that illustrates the need for a clear understanding of the policy
context when conducting studies related to what previous literature has referred to as
“gateway” courses. [Footnote 12: In this context, “gateway” courses relate to the highest
level of math required prior to enrolling in college-level math, or math that is required
for an associate’s degree and/or transfer. While the gateway course for the LUCCD at the
time of this study was elementary algebra (the requirement changed to intermediate algebra
in 2009), this is not always the case.] Degree, transfer, and vocational certificate math
requirements vary by state, and in a decentralized system like California, may vary by
college. Therefore, it is imperative that research align course-taking patterns with
student educational goals.
3. Pass rates for students progressing through the sequence ranged from 64 to 79 percent.
Examining these percentages across levels, it appears that students who are actually
progressing through their sequence are passing courses at comparable rates to their
initially higher placed peers. Though only a small number of students make it through to
the highest levels of developmental math, these findings suggest that developmental
courses are helping students gain the skills necessary to successfully pass them. In fact,
after controlling for other factors, students initially placed in lower levels are attempting
and passing subsequent courses with higher odds compared to their higher-placed peers.
However, it is important to note that these students are those who persevered
through their sequences, and thus represent a subset of students who may be more highly
motivated.
4. Students receiving a multiple measure point are more likely to pass each level. Though
most of the variance is explained by student-level characteristics, one of the strongest
predictors increasing the odds of passing each level is whether students received a
multiple measure point. As policymakers and practitioners refine their assessment and
placement policies, it is important to recognize that a score on a standardized test is only
one factor predicting student success.
Understanding student progression in terms of both attempting and passing courses in the
developmental math sequence has important implications for policymakers and practitioners. By
disaggregating progression as attempting and passing each developmental math level,
practitioners can gain a greater understanding of where their students are exiting the sequence
and focus initiatives at these important junctures. For example, our results indicate that the
largest proportion of students exit the sequence by not attempting or passing their initial course.
It is thus necessary to better understand what occurs between students’ placement and their
first course attempt that results in lower attempt rates, and what occurs in the classroom
that may be discouraging or inhibiting their success.
The results of this study suggest that in order for community colleges to increase their
degree attainment and transfer rates they need to place students correctly, define clear college
pathways, and create programs to make sure that students successfully progress through their
developmental math trajectory. While community colleges cannot be held accountable for the
outcomes of students who do not pursue the courses, they should be held accountable for making
sure the largest possible number of students is attempting and passing the courses needed to
attain their desired educational outcome.
Chapter Three
Utilizing Additional Measures of High School Academic Preparation to Support
Students in their Math Self-Assessment [Footnote 13]
A substantial majority of community colleges require that their students are assessed in
math and English to determine students’ preparation for college-level work in these subjects
(Parsad, Lewis, & Greene, 2003). Traditionally, developmental math education is structured as a
sequence of three or four pre-requisite courses leading to college-level work (Bailey, Jeong, &
Cho, 2010; Melguizo, Kosiewicz, Prather, & Bos, 2014). Of entering community college
students who are assessed, it is estimated that 75 percent, nationally, are placed into
developmental education (National Center for Public Policy and Higher Education (NCPPHE) &
Southern Regional Education Board (SREB), 2010; Bailey, 2009a, 2009b). In California,
approximately 85 percent of community college students are referred to developmental math
with the largest proportion referred to three levels below college-level (California Community
College Chancellor’s Office, 2011).
Though the purpose of developmental education is to equip initially underprepared
students with the academic basic skills required to successfully pass college-level courses
(Bailey, 2009a, 2009b; Lazarick, 1997; Merisotis & Phipps, 2000), developmental education can
also act as a barrier to students’ degree completion or transfer to a four-year college, especially
among students placed into lower levels of the developmental math trajectory. Research has
consistently demonstrated that students placed into lower levels of developmental math
sequences exhibit substantially lower rates of enrollment and of success in passing the
subsequent math courses needed to attain an associate’s degree or transfer (Bahr, 2012; Bailey et
al., 2010; Fong, Melguizo, & Prather, 2015; Jenkins, Jaggars, & Roksa, 2009). [Footnote 13:
First author in co-authored paper with Tatiana Melguizo.] Moreover, this
research has demonstrated that underrepresented racial minority (URM) students exhibit lower
odds of successfully progressing through their developmental math sequences (Bailey et al.,
2010; Fong et al., 2015), which may result in an exacerbation of the racial disparities found in
math achievement, and ultimately, degree completion.
Given the large number of students placed into developmental education, and the low
success rates among those placed, research has recently emerged that takes a closer look at
community colleges’ assessment and placement (A&P) process. The quantitative literature has
focused on examining the validity of standardized tests (e.g., Belfield & Crosta, 2012; Scott-
Clayton, 2012; Scott-Clayton, Crosta, & Belfield, 2014), exploring additional or alternative
measures (e.g., high school grades) to determine placement (e.g., Belfield & Crosta, 2012; Ngo
& Kwon, 2014; Noble, Schiel, & Sawyer, 2004), and testing whether colleges are setting the
cutoff scores of the placement tests correctly (Melguizo, Bos, & Prather, 2013). An underlying
theme of this literature has been the negative consequences, in terms of wasted time and
resources and decreased likelihood of completing college, for students who are inaccurately
placed. Research has shown that as many as one-quarter of community college students are
misassigned via placement tests (Scott-Clayton, Crosta, & Belfield, 2014). Despite these
negative effects, qualitative research has shown the
complex nature of establishing the A&P criteria, and the lack of training and support available
for community college faculty members charged with this task (Melguizo et al., 2014).
Qualitative research has also examined students’ experiences through the A&P process
and found that they lack awareness of the process and the consequences of their performance on
assessment tests (Bunch, Endris, Panayotova, Romero, & Llosa, 2011; Fay, Bickerstaff, &
Hodara, 2013; Safran & Visher, 2010; Venezia, Bracco, & Nodine, 2010). Further, students’ low
expectations of their academic ability may be fostered in high school leading up to the A&P
process (Venezia et al., 2010), and may affect the decisions they make throughout it. Moreover,
research has shown that first-generation college students, who constitute a large proportion of
community college students and are often from URM backgrounds, attempt to sidestep the
humiliation of failure and blow to their self-concept by undermining their educational goals
(Cox, 2009). Given the substantial proportion of students assessed and placed into
developmental education, the negative consequences of inaccurate placement, the overall
confusing and arbitrary nature of the A&P process, and the tendency for students – especially
from URM backgrounds – to behave in ways that underestimate their preparation, it is
imperative to conduct research that furthers understanding of the information necessary to
accurately place students into levels of the math trajectory.
This study takes advantage of a California community college (College H) whose A&P
policy utilizes the Mathematics Diagnostic Testing Project (MDTP) with four “readiness” sub-
tests that vary in content-level, and enables students to choose their initial sub-test. Thus, College
H’s A&P process includes three steps: 1) students choose which assessment sub-test to take, 2)
students’ scores on the assessment sub-test are combined with any points they are awarded via
other measures, and 3) students are placed into a level of math based on their adjusted score.
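College H’s second and third steps can be made concrete with a short sketch. This is a minimal illustration only: the sub-test name, cutoff scores, and multiple-measure point values below are hypothetical and are not College H’s actual placement rules.

```python
# Hypothetical cutoffs for one MDTP "readiness" sub-test: an adjusted
# score at or above a cutoff yields that placement level. These values
# are invented for illustration; they are not College H's actual rules.
ELEMENTARY_ALGEBRA_READINESS_CUTOFFS = [
    (30, "intermediate algebra"),  # highest placement this sub-test allows
    (18, "elementary algebra"),
    (0, "pre-algebra"),            # default placement below all other cutoffs
]

def place_student(raw_score: int, multiple_measure_points: int) -> str:
    """Steps 2 and 3: add multiple-measure points to the sub-test score,
    then map the adjusted score onto a placement level."""
    adjusted = raw_score + multiple_measure_points
    for cutoff, level in ELEMENTARY_ALGEBRA_READINESS_CUTOFFS:
        if adjusted >= cutoff:
            return level
    return ELEMENTARY_ALGEBRA_READINESS_CUTOFFS[-1][1]

# A single multiple-measure point can move a student just below a cutoff
# into the next-higher level.
print(place_student(17, 0))  # pre-algebra
print(place_student(17, 1))  # elementary algebra
```

Note that step 1 (sub-test choice) sits outside this function: the cutoff table itself depends on which sub-test the student chose, which is why that choice caps the highest placement a student can receive.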
Unlike other traditional assessment tests (i.e., ACCUPLACER, COMPASS), the MDTP test does
not include a branching system, so there is no automatic referral to a higher or lower sub-test
depending on how strongly or weakly students perform on their initial sub-test. That is, while
tests with a branching system are able to adjust to students’ ability after they have started
the test, the MDTP does not. Because lower sub-tests lead to placements at lower levels,
College H’s policy of allowing students to choose which sub-test to take has real implications for
the level and accuracy of developmental education placement. College H’s A&P policy thus
allows for the identification of students who may be underestimating their math preparation.
The main contributions of this paper are to determine whether students are
underestimating their math preparation, describe an A&P policy (buffering criteria) that attempts
to control for this potentially detrimental behavior, and determine whether that A&P policy
differs from College H’s current process (actual criteria) in terms of student access and
student success. Specifically, the A&P criteria we propose attempt to buffer against students’
underestimation of their math preparation, as exhibited when students demonstrate a higher
level of math proficiency than the content level of the assessment sub-test they choose to take.
The following research questions direct our study:
1. Are students choosing assessment sub-tests that align with their prior math achievement?
Does this vary by race/ethnicity?
2. How does the inclusion of prior math achievement as an additional measure of assessment
change placement distribution? Does this vary by race/ethnicity?
3. Does student success in developmental math differ between students placed under the
buffering criteria compared to students placed under the actual criteria? Does this vary by
developmental math level?
The structure of this paper is as follows. In the subsequent sections, we review the current
research on the accuracy of assessment instruments (i.e., standardized tests, multiple measures)
employed to direct placement into developmental education, as well as the literature on how
community college students’ confidence interacts with their decision-making and academic
behavior through the A&P process. Next, we describe the institutional context and data,
providing an in-depth description of College H’s A&P process and our buffering A&P criteria.
We then describe our empirical strategy to compare the likelihood of success for students under
each A&P criteria, followed by our main findings. Our findings suggest that including students’
prior math achievement (nested within sub-test choice) as part of the placement criteria increases
access to higher levels of math while also maintaining success rates in their placed courses. We
conclude with a discussion of these findings and their direct implications for policy and practice.
Review of Relevant Literature
The Predictive Validity of Assessment Instruments and Multiple Measures Associated with
Student Success in Developmental Education
Standardized tests. The majority of colleges utilize standardized and commercially-
available tests to assess students’ academic preparation and place students into levels of math
based on their test score (Burdman, 2012; Hughes & Scott-Clayton, 2011; Parsad, Lewis, &
Greene, 2003); presently, the two most commonly used standardized assessment tests are
ACCUPLACER and COMPASS (Primary Research Group, 2008). One of the advantages of
utilizing standardized tests to place students is that they can be less time-consuming and less
resource-intensive to administer than a more holistic approach to placement that includes student
interviews or reviews of individual files and transcripts (Hughes & Scott-Clayton, 2011). While
prior research has demonstrated a positive relationship between these test scores and college
students’ grades in their final math course and overall GPA (Peng, Le, & Milburn, 2011), what is
most useful to practitioners is to determine whether these tests are accurately placing students
into developmental- or college-level math.
Sawyer’s (1996) decision-making framework defines placement accuracy as simply
placing students at the highest level of math where they have the greatest likelihood of success.
These rates have been calculated as “the sum of ‘observed true positives’—students who are
placed at the college level and actually succeed there—and ‘predicted true negatives’—students
who are not predicted to succeed at the college level and are ‘correctly placed into remediation’”
(Scott-Clayton, 2012, p. 7). Research generated by test producers has demonstrated placement
accuracy rates of 73 to 84 percent when the criterion is a grade of C or higher in college-level
math for the ACCUPLACER test (Mattern & Packman, 2009). That is, 73 to 84 percent of
students were either placed into college-level math and passed the course or were correctly
placed into remediation. For the COMPASS test, ACT (2006) reported accuracy rates of 63 to
68 percent when the criterion is a C or higher.
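The accuracy-rate arithmetic can be illustrated with a brief sketch; the cohort counts below are invented for illustration and do not come from the studies cited.

```python
def placement_accuracy(observed_true_positives: int,
                       predicted_true_negatives: int,
                       total_students: int) -> float:
    """Scott-Clayton (2012): accuracy is the share of students who are
    either 'observed true positives' (placed at college level and
    succeeded there) or 'predicted true negatives' (not predicted to
    succeed at college level and correctly placed into remediation)."""
    return (observed_true_positives + predicted_true_negatives) / total_students

# Hypothetical cohort of 1,000 assessed students: 400 placed into
# college-level math who passed it, and 380 correctly placed into
# developmental math.
rate = placement_accuracy(400, 380, 1000)
print(f"{rate:.0%}")  # 78% -- within the 73-84 percent range above
```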
While the ACCUPLACER and COMPASS tests serve as assessments that report
students’ test scores in order for colleges to determine students’ level of math placement, the
MDTP is a test which attempts to “diagnose” rather than “report” students’ math preparation
(Betts, Hahn, & Zau, 2011). The MDTP, a diagnostic test utilized in a number of secondary and
post-secondary institutions in California, offers course-specific assessments that provide
feedback on students’ strengths and weaknesses in math (further description on how it is adopted
in the community college A&P process in the Institutional Context and Data section). Betts et al.
(2011) evaluated the effect of mandatory MDTP testing on San Diego Unified School District
(SDUSD) students’ math achievement, finding that MDTP testing boosted California Standards
Test math scores. [Footnote 14: The California Standards Test (CST) is the main test in
California used to measure student proficiency and to determine whether schools should
receive the interventions and sanctions set out in the federal No Child Left Behind law.]
Betts et al. (2011) determined that these gains arose in part because students were more
accurately placed into math classes that aligned with their ability.
Though the College Board and ACT have reported relatively high accuracy rates for their
ACCUPLACER and COMPASS products, and Betts et al. (2011) have demonstrated MDTP’s
placement accuracy among high school students, independent research examining the
community college student population has reported positive but weak correlations between
placement test scores and student pass rates for developmental and college-level math courses
(Jenkins et al., 2009; Scott-Clayton, Crosta, & Belfield, 2014). Further, research has consistently
demonstrated that when alternative measures of student academic preparation are accounted for,
the positive relationship between test scores and success diminishes (Armstrong, 2000; Belfield
& Crosta, 2012; Scott-Clayton, 2012). This has motivated interest in understanding the
usefulness of employing multiple measures to make placement decisions (Burdman, 2012).
Multiple measures. Multiple measures have been explicitly recommended in the
Standards for Educational and Psychological Testing to improve the quality of high-stakes
decisions. Standard 13.7 states (American Educational Research Association, American
Psychological Association, & National Council on Measurement in Education, 1999):
In educational settings, a decision or characterization that will have major impact on a
student should not be made on the basis of a single test score. Other relevant information should be
taken into account if it will enhance the overall validity of the decision. (pp. 147–148)
The call for multiple measures in assessments has emerged as a result of the increasing use of
high-stakes testing as a policy tool to assist in decisions like whether students should graduate,
teachers should be promoted, and schools should receive funding (Chester, 2003; Henderson-
Montero et al., 2003), and of course, whether students are in need of remedial education
(Burdman, 2012; Hughes & Scott-Clayton, 2011; Parsad, Lewis, & Greene, 2003). While there
has been widespread agreement in the use of multiple measures, “the definition of multiple
measures, criteria to evaluate each measure, and how these measures should be combined for use
in high-stakes decisions are not clear” (Henderson-Montero et al., 2003, p. 7).
Henderson-Montero et al. (2003) defined four primary approaches in which information from multiple
measures can be used: conjunctive, compensatory, mixed conjunctive-compensatory, and
confirmatory. The conjunctive approach requires a minimum level of performance on each
included measure. In the compensatory approach,
poor performance in one measure may be off-set by strong performance in another measure. The
mixed approach uses a combination of conjunctive and compensatory approaches. Lastly, the
confirmatory approach is where information from one measure is utilized to confirm the
information from another measure.
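The conjunctive and compensatory approaches, the two most relevant here, can be contrasted in a brief sketch; the measures, minimums, weights, and threshold below are all hypothetical.

```python
# Two of the four Henderson-Montero et al. (2003) approaches, sketched
# with invented measures, minimums, weights, and thresholds.

def conjunctive_pass(scores: dict, minimums: dict) -> bool:
    """Conjunctive: every measure must clear its own minimum."""
    return all(scores[m] >= minimums[m] for m in minimums)

def compensatory_pass(scores: dict, weights: dict, threshold: float) -> bool:
    """Compensatory: a weighted composite lets strength in one measure
    off-set weakness in another."""
    composite = sum(scores[m] * weights[m] for m in weights)
    return composite >= threshold

student = {"test_score": 55, "hs_gpa": 3.6}   # weak test score, strong GPA
minimums = {"test_score": 60, "hs_gpa": 2.5}
weights = {"test_score": 0.01, "hs_gpa": 0.20}

print(conjunctive_pass(student, minimums))       # False: test score below 60
print(compensatory_pass(student, weights, 1.0))  # True: 0.55 + 0.72 = 1.27
```

The same student fails conjunctively but passes compensatorily, which is why the compensatory approach is the one most naturally suited to multiple-measure placement points.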
Given the definitions put forth by Henderson-Montero et al. (2003), the compensatory
multiple measure approach is the most useful, and perhaps the most likely to be used, in
placement decisions for developmental education. However, the underlying assumption of this
approach is that the measures included assess knowledge, skills, and abilities relevant to success
in college-level coursework (p. 9).
From a simple empirical design argument, using multiple measures for developmental
math placement is beneficial in placement decisions for several reasons: (1) it has the potential of
increasing placement accuracy, because of the additional information utilized to make placement
decisions, (2) it can allow institutions to exempt from testing students who, given prior
achievement and other factors, are predicted to have a high probability of success in a college-
level course, and (3) it can decrease students’ anxiety due to the high-stakes nature of a
placement test (Sawyer, 1996). Noble et al. (2004) found that “using multiple measures to
determine students’ preparation for college significantly increases placement accuracy” (p. 302).
Boylan (2009) advocated for an assessment system that combines cognitive and affective data
along with information regarding students’ personal circumstances (e.g., weekly work hours,
parental status, marital status, financial obligations) to improve placement accuracy and serve
students more effectively. Similarly, Gaertner and McClarty (2015) identified six dimensions
shown to influence college readiness and success: academic achievement (e.g., test scores),
motivation (e.g., academic self-concept), behavior (e.g., absences), social engagement (e.g.,
participation in school clubs), family circumstances (e.g., socioeconomic status), and school
characteristics (e.g., school demography). They found that, among middle-school students, while
the academic achievement dimension explained the most variance in college readiness (17.1
percent), the nonacademic factors of motivation and behavior also explained a substantial
proportion of college readiness (15.3 and 14.1 percent, respectively). In terms of reducing the
racial disparity in success in higher education, Hoffman and Lowitzki (2005) argued that prior
academic achievement is the key for minority student success. For example, Marwick (2004)
found that Latino students at a single community college who were placed into higher-level
courses due to multiple measures achieved equal and sometimes greater outcomes than when
placement was solely determined by their test score.
High School GPA. High school grades have been found to better predict student
achievement in college than typical admissions tests such as the ACT or SAT (Bowen, Chingos,
& McPherson, 2009; Geiser & Santelices, 2007; Geiser & Studley, 2003). This relationship may
be even more pronounced in institutions with lower selectivity and academic achievement
(Sawyer, 2013). Research has commonly included high school GPA in models predicting student
success in developmental education. This literature has consistently demonstrated that high
school GPA is a better predictor of student success than placement test score (Armstrong, 2000;
Belfield & Crosta, 2012; Burrow, 2013; Scott-Clayton, 2012; Jaffe, 2012; Lewallen, 1994; Long
Beach Promise Group, 2013).
In fact, when controlling for high school GPA, the correlation between placement test
score and success disappears (Belfield & Crosta, 2012; Burrow, 2013). With access to a data
system that connects high school transcript information to the local community college, the Long
Beach Promise Group (2013) demonstrated that students who were placed in courses via a
“predictive placement” scheme based on high school grades instead of test scores were more
likely to complete college-level courses in their remediated subject areas. Armstrong (2000)
studied community college students in California and found that test scores and demographic
(e.g., race/ethnicity), dispositional (e.g., past experiences), and situational (e.g., employment
hours) variables contributed significantly to models predicting course grades and student
retention. However, the student dispositional factors tended to explain a greater amount of
variance in student outcomes than did other student-level variables, including test scores.
Specifically, students’ previous performance in school, such as high school GPA, grade in last
English or math course, and number of years of English or math taken in high school were
strongly predictive of college outcomes in both English and math courses. These findings may be
explained by the ability of report card grades to assess motivation and perseverance (Bowen et
al., 2009), or competencies associated with students’ self-control, which can help students study,
complete homework, and have successful classroom behaviors (Duckworth, Quinn, &
Tsukayama, 2012).
Scott-Clayton (2012) evaluated the predictive validity of COMPASS test scores to make
placement decisions (developmental or college-level), compared to the predictive value of other
measures used in place of or in addition to test scores. She found that the incremental
validity [Footnote 15: Incremental validity seeks to answer whether a new test increases
predictive ability beyond an existing test.] of placement tests relative to high school GPA
as predictors of success is weak. However, adding
test scores to a model utilizing high school GPA to predict grades in college-level math increased
the explained variance by six percentage points. Scott-Clayton’s additional simulations also
found that allowing students to test out of remediation either by their placement test scores or
high school GPA resulted in lowering the remediation rate by eight percentage points without
significant decreases in success rates in college-level coursework.
Utilizing multiple measures in the A&P process. While the extant literature has
consistently demonstrated that high school GPA is predictive of student success, there is little
research examining whether the use of these measures in the A&P process is an effective
practice in terms of access and success for community college students. One exception is the
study by Ngo and Kwon (2014) which examined how utilizing multiple measures to assess and
place students is related to access and success. They found that multiple measures increased
access to higher-level math courses, particularly for African American and Latino students.
Specifically, they found that measures of prior math achievement and high school GPA were
useful for making placement decisions. When comparing the outcomes of students whose
placement was increased due to the points they received from multiple measures (referred to as a
multiple measure boost) to students who placed similarly without the multiple measures boost,
Ngo and Kwon found that boosted students performed no differently on course passing rates and
longer-term credit completion.
Similar to Ngo and Kwon (2014), the current study examines the extent to which multiple
measures are effective in the A&P process. However, the current study adds to this literature by
including a multiple measure which attempts to off-set poor and uninformed choices students are
making in their A&P process that may lead to lower and inaccurate placements.
Student Experiences with the A&P Process
Colleges’ A&P processes are meant to sort students into levels of math and English
which best match their academic abilities. However, qualitative research has highlighted student
confusion around this process, which leads to inaccurate placements and decreases in student
motivation. Venezia et al. (2010) found that although research has demonstrated the negative
relationship between remedial placement and student success, student focus groups and
interviews with counselors revealed that entering community college students had little to no
knowledge of what to expect in terms of the A&P process or the consequences of performing
poorly on the test. In their research brief, Fay et al. (2013) highlighted similar patterns in student
perspectives on the A&P process, and reasons why students do not prepare for these high-stakes
tests. Findings revealed that students are ill-prepared because they lack knowledge regarding
available preparation materials, understanding of how to prepare for these tests, and confidence
in their math ability. Fay et al. (2013) concluded that academic confidence can impact student
motivation and academic behaviors related to success.
Research has consistently demonstrated that across populations and subject areas,
students’ academic performance is associated with confidence. [Footnote 16: Much of this
research is from the educational psychology literature on self-efficacy, a construct
developed by Albert Bandura. In his Social Cognitive Theory, self-efficacy (Bandura, 1997)
is the belief people have regarding their ability to succeed in a particular situation. He
theorized that the beliefs people hold regarding their capability and the expected outcomes
of their efforts have a powerful influence over the ways in which they behave. Given this
theoretical definition, self-efficacy has been described as analogous to self-confidence
(Schunk, 2011). Research has consistently demonstrated that, across populations and subject
areas, self-efficacy and confidence are associated with academic performance.] Bickerstaff,
Barragan, and Rucks-Ahidiana (2012) showed how entering college students’ confidence is
shaped by past academic experiences and expectations of college. Cox (2009) examined how
the fear of failure shaped students’ expectations and behaviors, finding that
first-generation college students attempt to sidestep the humiliation of failure and blow
to their self-concept by undermining their
educational goals. Though most of the students she interviewed entered the community college
with aspirations of ultimately attaining a bachelor’s degree, they revealed tremendous anxiety
over assuming the role of “college student,” and feared being unable to succeed in this role. Once
students sensed a mismatch between their ability and their new role, they relied on “strategies for
balancing their hopes and fears,” and this balancing act had a “greater influence on [students’]
approaches to college coursework than did their cognitive-academic preparation” (p. 54).
Confidence has been shown to play a significant role in academic success; further,
students from particular minority groups who exhibit lower math achievement also display lower
confidence in their math ability (Oakes, 1990a, 1990b; Gray-Little & Hafdahl, 2000). Research
has demonstrated that Latino students report less confidence than their White counterparts regarding
their ability to successfully complete math problems (Stevens, Olivarez, Lan, & Tallent-Runnels,
2004). Contrary to previous research which has demonstrated lower confidence in math ability
among underrepresented minority groups, Pajares and Kranzler (1995) found African American
students’ confidence in math ability to be more complex. Compared to White students, African
American students exhibited lower performance scores but higher confidence of their math
ability. Pajares and Kranzler (1995) concluded that the accuracy of self-perception in math
ability was substantially lower for African American students compared to White students.
Taken together, findings from the literature suggest that Latino students are more likely to
underestimate their math ability given their lower confidence, while African American students
are more likely to inaccurately assess their math ability.
The extent to which confidence (or lack thereof) directly plays a role in developmental
math placement has been demonstrated in research on student self-placement, an A&P policy
where students select their own placement. While the majority of community colleges rely on a
standardized, commercially-available assessment test, Kosiewicz (2013) examined one college’s
use of a self-placement mechanism. Kosiewicz examined the effect of being assigned to remedial
math education via a test-placement method (COMPASS) compared to a self-placement method.
She found that under a self-placement regime, students are more evenly distributed in the
developmental math levels compared to similar students under the traditional testing procedure.
However, when she explored differences for particular racial and ethnic groups, she found that
the majority of African American students assigned themselves to the two lowest developmental
math levels, though she asserted this result should be read with caution given the small sample
sizes.
Existing research has examined the A&P process and the extent to which additional
measures may improve accuracy rates of students’ placements; however, it has not accounted for
students’ experiences prior to and behavior during the A&P process, which may lead to
unnecessarily lower-level placements in developmental math. In this descriptive study, we take
an in-depth look at students’ behavior through a single community college’s A&P process. As
identified in Melguizo et al. (2014), in a decentralized community college governance structure
like the California community college (CCC) system, the majority of governing power is
localized to the community college. Therefore, colleges have autonomy in developing their own
A&P policies, and as a result, these policies vary by college. We therefore focus on a single
community college (College H) whose A&P criteria allow students to choose which assessment
sub-test to take.
Data and Institutional Context
Data and Sample
Our data are from the office of institutional research at the large southern California
community college district to which College H belongs. We built the dataset using three different
sets of collected data: the enrollment, term, and assessment files. The enrollment file contains
student-level information related to the courses in which the student enrolled, the dates when a
specific course was added or dropped, and the final grade in the course. The term file contains
information related to the units attempted and dropped in each term along with basic student
demographic characteristics such as age, sex, race/ethnicity, and language. The assessment
file contains detailed information related to the assessment process. This file includes the
assessment sub-test taken, test score, math placement, and information from the students’
Educational Planning Questionnaire (EPQ). During the matriculation process in College H,
students provide additional information regarding their past academic experiences and
commitment to college via the EPQ.
Our sample includes 8,838 first-time students who were enrolled in College H and
assessed between the summer 2005 and spring 2008 semesters. Overall, the students in the
sample are evenly split between females (51 percent) and males. While the mean age of the
sample is slightly higher (21 years), suggesting the presence of non-traditional students, the
median age is 18 years old, indicating that 50 percent of the students are entering the college directly from
high school. The largest proportion of the students in the sample report being Latino (37 percent)
and the second largest being White (30 percent). We utilize this overall sample to address the
first two research questions, which examine the alignment between students’ prior math
achievement and assessment sub-test choice, and assess changes in the placement distributions
under the actual and buffering placement methods.
In order to respond to the final research question, which evaluates differences in students’
success in their first math course based on placement criteria, we utilize a subset of the overall
sample. For this analysis, we remove students who did not attempt a developmental math course
after being assessed. Specifically, we do not include students who attempted college-level math
courses because this level is comprised of a diverse set of courses at varying content levels, and
is thus beyond the scope of this paper. We also remove students who did not attempt any math
course within the sequence because we are interested in student behavior and success based on
two differing placement criteria; students who do not enroll in math are non-compliant with both
criteria. Again, understanding these non-compliant students’ behavior is outside the main
purpose of this study. The median age of the sample subgroup mirrors the overall sample.
However, the filter demonstrates a significant increase in the percentage of females (54 percent).
The proportion of Latino students also increases (44 percent) while the proportions of Asian
American (12 percent) and White (26 percent) students decrease (see Table 2.1).
Table 2.1.
Selected Student Demographics for College H by Data Subgroup

                         Overall                     Subgroup with EPQ data (1)
                         (sample to determine        (sample to determine Pr(success)
                         placement distribution)     for first math class)
Female                   51.11                       54.08***
Age (mean)               20.60                       20.41***
Age (median)             18.47                       18.39
Race/Ethnicity
  Asian American         14.80                       12.26***
  African American        6.94                        7.01
  Latino                 37.07                       43.54***
  White                  29.78                       26.34***
  Other                  11.42                       10.85
Sample size               8,838                       4,192

Notes.
1. Does not include students with missing EPQ data and who placed into college-level math
*p < .10, **p < .05, ***p < .01
College H’s Assessment Test and Placement Criteria
The A&P process in College H includes four general steps which we will detail below:
(1) students choose which assessment sub-test to take, (2) students’ scores on the assessment
sub-test are combined with any points they are awarded via multiple measures, (3) students are
placed into a level of math based on their adjusted score, (4) students decide whether or not to
enroll in math. College H’s A&P policy was unchanged during the time frame of this study, so
students with the same test scores received the same placement throughout.
Step 1: Choice of assessment sub-test. The MDTP test was developed through a joint
program between the University of California and California State University systems (Betts et
al., 2011; Melguizo et al., 2014). Similar to other standardized and commercially-available
instruments such as the College Board’s ACCUPLACER and ACT’s COMPASS test, the MDTP
consists of a set of “readiness” sub-tests that vary in math content.
However, unlike
ACCUPLACER and COMPASS, the MDTP test does not include a computerized branching
system which automatically directs students to higher or lower sub-tests depending on how
strong or weak the students are performing on their initial sub-test. That is, while tests with a
branching system can adjust to students’ ability after they have started the test, the MDTP test
does not.17
Another difference between the MDTP and the more common assessment tests is that
it is a diagnostic test, so each assessment test and overall test score is accompanied with a set of
17. College H used the paper-and-pencil version of the MDTP; however, a computerized version was
validated for use in 2012 by the CCC Chancellor’s Office.
sub-scores which measure students’ ability in areas within each level of math. For example,
students obtain sub-scores on fractions and decimals within the lowest-level test which measures
principles in arithmetic and pre-algebra. While the MDTP offers sub-scores for each assessment
sub-test, College H only utilizes the overall test score in the placement criteria.
College H is also an interesting case because it allows its students to choose which
assessment sub-test to take rather than directing them to a specific test level. Therefore, students
assess their own math preparation prior to the assessment test. All first-time students who have
an educational goal of earning a certificate, associate’s degree, or transfer to a baccalaureate-
granting institution are required to go through the matriculation process. Matriculation at College
H begins with students completing the assessment process. Upon completion of their assessment
and placement, students attend an orientation which provides information about the college. All
matriculating students then meet with their counselors to develop a plan to meet their educational
goal. Thus, students choose their assessment sub-test prior to receiving advisement.
In order of difficulty, College H offers students a choice to take the Algebra Readiness
(ALR), Elementary Algebra (EA), Intermediate Algebra (IA), or Pre-Calculus (PC) sub-test to
assess their math ability. In general, lower sub-tests lead to placements at lower levels. At
College H, students are placed into one of five levels of math: arithmetic (AR), pre-algebra (PA),
elementary algebra (EA), intermediate algebra (IA), or college-level math (CLM).
Step 2: Assessment sub-test + Multiple measure points. At most community colleges
utilizing standardized assessment tests, students’ test score determines placement level via the
college’s pre-established cut-score criteria. In the CCCs, placement into developmental
education is not legally allowed to be based on a single test score. In 1988, the Mexican
American Legal Defense and Education Fund (MALDEF) filed a lawsuit contending that the
outdated assessments used to place students, in lieu of full matriculation services, resulted in
tracking Latino students into required remedial coursework, which prevented their full
participation in the transfer curriculum (Perry, Bahr, Rosin, & Woodward, 2010).
out of court settlement in 1991, Title V regulations were revised to include the validation of
prerequisite courses, assessment using multiple measures, and students’ right to challenge a
prerequisite (Perry et al., 2010).
Therefore, colleges using a standardized assessment test to place students must utilize
“multiple measures,” which are intended to measure other factors demonstrated to be positively
related to student success. These can include such measures as motivation, high school GPA,
prior math experience, and grade in last math course (Ngo & Kwon, 2014; Melguizo et al.,
2014). Due to the decentralized governance of the CCC system, research examining these
colleges’ A&P policies has demonstrated wide variation by college in the use of these multiple
measures: which measures earn additional points (e.g., motivation, high school GPA), how many
additional measures are included, and how many points are provided (this can range from -2 to 5
points within a single district)
(Melguizo et al., 2014; Ngo & Kwon, 2014).
College H utilizes information from students’ EPQ, which includes information on their
past academic experiences and commitment to college, to determine whether students would
receive multiple measure points in addition to their assessment test score to place them into a
math level. College H provides additional multiple measure points for high school GPA: four
points are added to the test score for an “A” average, two additional points for a “B,” and no
additional points are awarded for a “C” or below. Multiple measure points can thus
result in a student being placed into the next higher-level course (Melguizo et al., 2014; Ngo &
Kwon, 2014).
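The GPA-based point scheme just described can be sketched as follows. This is a minimal illustration: the function name and letter-grade encoding are ours, not College H's actual implementation.

```python
def gpa_mm_points(hs_gpa_letter):
    """Multiple measure points added to the assessment sub-test score,
    following the high school GPA rule described above: +4 for an "A"
    average, +2 for a "B", and 0 for a "C" or below."""
    return {"A": 4, "B": 2}.get(hs_gpa_letter, 0)

# A raw sub-test score of 28 with a "B" average yields an adjusted score of 30,
# which may be enough to cross the next placement cut score.
adjusted_score = 28 + gpa_mm_points("B")  # -> 30
```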
Step 3: Assessment sub-test + Multiple measure points = Placement. Under College
H’s placement criteria, students may obtain an adjusted score (assessment sub-test score plus
multiple measure points) that refers them to take a lower-level assessment sub-test; but, students
are not able to obtain a score on a sub-test that refers them to take a higher-level sub-test. For
instance, taking the lowest level sub-test (ALR) will automatically result in placement into AR,
PA, or EA. However, choosing to take the IA sub-test will result in placement in CLM, IA, or a
downward referral to the EA sub-test. See Figure 2.1 illustrating the placement criteria. We find
that though this college employs a referral system, only 2.6 percent (n=234) of students are
referred to a different test and are re-assessed. Therefore, students’ sub-test choice is the initial
factor in determining placement.
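The score-to-placement logic of Steps 1 through 3, including the downward referral, can be sketched as below. The cut scores here are hypothetical placeholders (College H's actual thresholds are not reproduced in this study); only the structure, with each sub-test leading to placements or a downward referral, follows Figure 2.1.

```python
# Hypothetical cut scores for illustration only. Each sub-test maps score
# bands (listed low to high) to outcomes; "referral:X" means the student is
# referred down to re-test on sub-test X.
CRITERIA = {
    "ALR": [(0, "AR"), (12, "PA"), (22, "EA")],
    "EA":  [(0, "referral:ALR"), (10, "EA"), (20, "IA")],
    "IA":  [(0, "referral:EA"), (10, "IA"), (20, "CLM")],
    "PC":  [(0, "referral:IA"), (15, "CLM")],
}

def place(sub_test, adjusted_score):
    """Return the placement (or downward referral) for an adjusted score,
    i.e., sub-test score plus multiple measure points."""
    outcome = None
    for cutoff, result in CRITERIA[sub_test]:
        if adjusted_score >= cutoff:
            outcome = result
    return outcome

print(place("IA", 24))  # -> "CLM"
print(place("IA", 4))   # -> "referral:EA"
```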
Step 4: Students’ decision to enroll in a math course. After receiving their assessed
placement level, students make a decision on whether to: (1) enroll in their placed math course,
(2) enroll in a lower-level course, (3) challenge their placement level and get permission to enroll
in a higher-level course, or (4) not enroll in a math course.
Restricting the sample to students who attempted any math course after their assessment,
we find that on average, only 7 percent of students attempt a course that is located at a different
level than their math placement (93 percent compliance rate). As illustrated in Table 2.2, when
we disaggregate this compliance by placement level, students placed into AR have the lowest
compliance rate (60 percent); however, this is by design since College H specifically permits
students who place into AR the option of enrolling in PA. More surprising, however, is that a
larger proportion of non-complying students enroll in a course that is lower than their placed
level than higher than their placed level.
Table 2.2.
Compliance by Level if Student Attempted Math

                        Complier   Attempted Lower Level   Attempted Higher Level       N
Arithmetic                59.80            0.00                   40.20                102
Pre-Algebra               94.14            3.04                    2.82                887
Elementary Algebra        93.30            4.98                    1.72              2,552
Intermediate Algebra      92.70            4.41                    2.89              2,315
College-level Math        96.45            3.55                    0.00              1,098
Overall                   93.21            4.24                    2.55              6,954
Figure 2.1. Math placement criteria based on assessment sub-test and score.
  PC sub-test:  CLM placement, or referral to the IA sub-test
  IA sub-test:  CLM or IA placement, or referral to the EA sub-test
  EA sub-test:  IA or EA placement, or referral to the ALR sub-test
  ALR sub-test: EA, PA, or AR placement
Looking only at the non-complying group in Table 2.3, we see that nearly two-thirds (63
percent) of these students attempt a course that is located at a lower level than their placement.
We also disaggregate the non-complying group by race/ethnic group and find that a larger
proportion of Latino students attempt a course that is lower than their placement (72 percent)
while the proportion is more evenly split for Asian American and African American students.
Table 2.3.
Non-Compliers by Race/Ethnicity if Attempted Math

          Attempted Lower Level   Attempted Higher Level     N
Asian              53%                    47%                60
Black              52%                    48%                46
Latino             72%                    28%               158
White              63%                    37%               141
Other              55%                    45%                67
Overall            63%                    37%               472
This is important because when addressing the third research question, we restrict the
sample to these non-compliant students who attempted a course located at a higher level than
their actual placement.
Buffering Placement Criteria
As discussed above, College H assesses students’ incoming math ability by their MDTP
sub-test score and high school GPA. Students’ adjusted score is then used for their placement
based on the college’s cut-off scores. Compared to these actual placement criteria, our buffering
placement criteria include students’ prior math achievement. Under the buffering placement
criteria, students earn multiple measure points for high school GPA as they do under the actual
placement criteria, but we also award an additional four points if their prior math achievement
was higher than their chosen assessment sub-test. For example, if students passed IA in high
school but chose to take the EA sub-test, they are given four multiple measure points. Students
receive two additional points if their highest level of math passed is equal to their level of
assessment sub-test, and no additional multiple measure points are awarded if students take a
higher-level assessment sub-test compared to the highest level of math passed (see Figure 2.2).
We choose this point scheme to be consistent with the multiple measure points awarded for high
school GPA.
Figure 2.2. Multiple measure points diagram
Choice of assessment sub-test is an important first step in College H’s A&P process;
however, research has demonstrated that students who lack confidence in their math ability may
underestimate their preparation. Taken together, when students exhibit higher proficiency but
choose a comparatively lower-level assessment sub-test, they may not be placed into the highest
level in which they have the strongest likelihood of succeeding. Under the buffering placement
criteria, we attempt to control for this underestimation by providing additional multiple measure
points to students who report a level of math achievement higher than their choice of assessment
sub-test.
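The buffering point rule can be sketched as follows. The ordinal level mapping below is our own simplification for illustration; geometry and trigonometry are omitted because their alignment with the algebra-focused sub-tests is ambiguous, as discussed in the findings.

```python
# Assumed ordinal positions of the MDTP sub-tests and of the highest math
# course a student reports passing, mapped onto a common scale (our
# simplification; Figure 2.2 encodes the actual correspondence).
SUBTEST_LEVEL = {"ALR": 1, "EA": 2, "IA": 3, "PC": 4}
PRIOR_MATH_LEVEL = {"basic math": 1, "beginning algebra": 2,
                    "intermediate algebra": 3, "pre-calculus or calculus": 4}

def prior_math_mm_points(prior_math, sub_test):
    """Buffering-criteria points: +4 if the student tested below their prior
    achievement, +2 if at the same level, +0 if above it."""
    diff = PRIOR_MATH_LEVEL[prior_math] - SUBTEST_LEVEL[sub_test]
    if diff > 0:
        return 4
    if diff == 0:
        return 2
    return 0

# A student who passed intermediate algebra in high school but chose the
# EA sub-test earns the full four buffering points.
print(prior_math_mm_points("intermediate algebra", "EA"))  # -> 4
```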
Predictor Variables
Demographic characteristics. Student demographic variables include student’s reported
race/ethnicity (Asian/Asian American, Black/African American, Latino, White/Caucasian, and
Other/Unknown), sex, and age at the time of assessment. We utilize weighted effects coding for
race/ethnicity because it allows for the comparison of each race/ethnic group (e.g., Asian
American students) to the weighted average of all students in the sample. That is, the
proportion of students in each group is representative of the corresponding proportion of
students in the college. We do not include the Other/Unknown ethnic group category in the
analysis to avoid perfect multicollinearity, though this group is included to generate the weighted
average. We also include covariates for students’ primary home language and citizenship status
to control for potential language barriers and for students who were not educated in the United
States’ K-12 system and thus may not have followed the math sequence typically offered in U.S.
secondary education.
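A minimal sketch of weighted effects coding follows, using illustrative data rather than the study's sample. The defining property is that each coded column sums to zero over the sample, so each coefficient compares a group to the sample-weighted grand mean.

```python
from collections import Counter

def weighted_effects_codes(groups, reference):
    """Weighted-effects-coded columns for a categorical variable: members of
    group g get 1, members of the reference group get -(n_g / n_ref), and
    everyone else gets 0, so each column sums to zero over the sample."""
    counts = Counter(groups)
    return {
        g: [1.0 if x == g
            else (-counts[g] / counts[reference] if x == reference else 0.0)
            for x in groups]
        for g in counts if g != reference
    }

# Illustrative sample; "Other" plays the role of the omitted reference group,
# as in the analysis described above.
sample = ["Latino"] * 4 + ["White"] * 3 + ["Asian American"] * 2 + ["Other"]
codes = weighted_effects_codes(sample, reference="Other")
print(sum(codes["Latino"]))  # -> 0.0 (the zero-sum property)
```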
Responses from Educational Planning Questionnaire (EPQ). Eight of the ten items from
the questionnaire are included in our models; the two excluded items specifically
addressed preparation in English writing. The included items consist of those that describe
students’ anticipated weekly time commitment to their classes and employment, the length of
time spent away from education and math specifically, the importance of college to themselves
and their friends and family, and past academic experience in general and specific to their
experience with math (i.e., high school GPA and highest level of passed math course). While not
all the items are based on a Likert-like scale – responses ranging from “Not at all important” to
“Very important” – all response items are ordinal.
Level of developmental math placement. The developmental math placement variable
is an ordered categorical variable that identifies which level of the math sequence (AR, PA, EA,
IA, CLM) students were placed into under the actual placement criteria.
Placement criteria. A percentage of students who were assessed and placed into
developmental math choose to attempt a higher course. A proportion of those students who
attempt the higher course, thus, attempt the course they would have been placed into under the
buffering criteria. Our placement criteria variable is a dichotomous variable where our main
group is comprised of students who attempted a course higher than their placed level (buffering
placement criteria), while our comparison group consists of students who attempted a course at
their placed level (actual placement criteria).
Analytical Strategy
To address the first research question, we utilize descriptive statistics to explore the
alignment between students’ choice of assessment sub-test and the highest level of math they
completed with a passing grade. We describe this alignment in terms of the additional multiple
measures included in our buffering placement criteria, and its potential for reducing the racial
disparities in lower-level math placements for students from URM backgrounds. We then
respond to the second research question by using descriptive statistics to illustrate changes in the
distribution of math placements between the actual A&P criteria and the buffering criteria.
We address the final research question by first utilizing a logistic regression model to
examine whether students who attempted a higher or lower developmental math course from
their placement level exhibited the same probability of successfully passing the course compared
to complying students. The following empirical model was specified:
Pr(y = 1 | x) = β₀ + β₁z + βᵢxᵢ + e
Where,
y = whether students passed their first attempted developmental math course
z = buffering placement level (comparison group = actual placement level)
x =vector of student information, including demographic variables, responses to the EPQ,
and level of developmental math placement
e = error term
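The empirical model above is a logistic regression. The sketch below fits such a specification on synthetic stand-in data (not the study's data) by gradient ascent on the log-likelihood, simply to make the specification concrete; the coefficient values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-in data: z indicates the buffering placement criteria,
# x is a single standardized covariate, y is passing the first math course.
z = rng.integers(0, 2, n)
x = rng.normal(size=n)
true_logit = -0.2 + 0.4 * z + 0.8 * x
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit Pr(y = 1 | z, x) = sigmoid(b0 + b1*z + b2*x) by gradient ascent
# on the average log-likelihood.
X = np.column_stack([np.ones(n), z, x])
beta = np.zeros(3)
for _ in range(10000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 1.0 * (X.T @ (y - p)) / n

print(np.round(beta, 1))  # estimates land near the true coefficients
```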
Next, we perform a set of two-group mean-comparison t-tests to compare how students
who placed via the buffering criteria and those placed via the actual criteria performed in their
first attempted math course. We separate these analyses by level of developmental math
placement. The buffering criteria group is comprised of students who “challenged” their actual
placement by enrolling in a higher-level math course. These students, therefore, attempted a
course they would have been placed at via the buffering criteria. We exclude students who were
boosted from IA to CLM because students can take several math courses at the CLM level.
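A two-group mean-comparison t-test of this kind can be sketched as follows, on toy pass/fail data. We use Welch's unequal-variance form and the large-sample normal approximation for significance; both are our assumptions for illustration, not necessarily the study's exact implementation.

```python
from math import sqrt

def welch_t(a, b):
    """Two-group mean-comparison t statistic (Welch's unequal-variance form),
    e.g., comparing pass rates of buffering- vs actual-criteria students."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# Toy outcomes (1 = passed first math course). With large samples,
# |t| > 1.96 is significant at the 5% level by the normal approximation.
buffering = [1] * 60 + [0] * 40   # 60% pass rate
actual    = [1] * 55 + [0] * 45   # 55% pass rate
print(round(welch_t(buffering, actual), 2))  # -> 0.71, not significant
```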
As a final note, it is important to remember that this is an in-depth descriptive study
focusing on College H’s A&P criteria and examining what happens when an additional measure
of assessment is included in the criteria. It is not our intent to make any causal inferences
regarding the effect of the A&P criteria on student placement and success. We focus on changes
in the math placement distribution and success in the developmental courses students are placed
into and in which they enroll. The sample under analysis is thus a sub-group of the entire sample of
College H’s assessed student population. Because we are seeking to identify whether students
have the same rates of success between actual and buffering placements, we define success as
students passing their first attempted developmental math course. By doing this, we are
restricting our data to students who actually attempt a developmental math course, thus only
analyzing students who may be more motivated than the general developmental education
population (see Bahr, 2010; Fong et al., 2015). Drawing conclusions about the A&P criteria
beyond the initial course is problematic since many students do not enroll in developmental
education altogether, and the students who progress through their developmental sequence
become a more select sample at each subsequent step (see Bahr, 2010; Fong et al., 2015).
Main Findings
(Mis)Alignment between Student Choice of Assessment Sub-test and Prior Math
Achievement
The series of descriptive statistics we conduct in this study demonstrates the alignment or
misalignment between students’ choice of assessment sub-test and what they report as the
highest high school math course completed with a passing grade. There are a number of reasons
students may choose sub-tests that are higher or lower than their level of math preparation,
which may derive from a misperception of ability or a misunderstanding of the A&P process. For
example, students may overestimate their math ability and choose a higher-level sub-test, or
students may underestimate their preparation and choose a lower-level sub-test. Also, students
may choose a higher- or lower-level sub-test when they do not understand the A&P process. On
one hand, students may incorrectly believe that a higher-level sub-test, regardless of their score,
will automatically place them into a higher math level; thus choosing a sub-test higher than their
level of math preparation. On the other hand, students may incorrectly believe that if they
perform very well on a sub-test with lower-level content, they will be placed at a higher level.
While we do not have evidence of why these decisions are made, our study addresses
how an A&P policy may protect against misinformed assessment sub-test choices. Under the
buffering placement criteria, students who may have underestimated their ability and chosen a
sub-test below their math preparation are provided with additional points. Figure 2.3 shows a
relatively even split between students who chose a sub-test that aligned with their math
preparation (47 percent) and students who chose a lower sub-test (51 percent).
Figure 2.3. Student choice of assessment sub-test compared to last passed math level.
When this figure is disaggregated by students’ math preparation, we find that the greatest
misalignment is between the math sequence within the K-12 system which includes geometry
and trigonometry and the community colleges’ algebra-focused developmental math sequence.
As Figure 2.4 illustrates, 80 to 81 percent of students whose highest passed math course was
either geometry or trigonometry chose a lower-level sub-test. Because of the confusing
alignment between the two educational systems, it may also be argued that geometry students
should be taking the EA test instead of the IA test, and trigonometry students should take the IA
test instead of the PC test. If this were the case, then these students would be choosing to take the
sub-test at the same level as their high school course, and 20 and 19 percent of them would be
taking sub-tests with a higher content level than the last math course they passed. Regardless of
how this multiple measure point is defined, there may be confusion around how the high school
sequence translates to the community college math sequence. This especially affects 30 percent
of the students in our sample (24 percent reporting geometry as their highest passed math course,
and 6 percent who passed trigonometry).
[Figure 2.3 data: chose lower level (+4 MM points): 51%; chose same level (+2 MM points): 47%; chose higher level (+0 MM points): 2%.]
Figure 2.4. Alignment between student sub-test choice and prior math achievement
While there is confusion for geometry and trigonometry students, there is also a tendency
for a substantial proportion of students who passed beginning algebra, intermediate algebra, and
pre-calculus or calculus to choose to take lower-level assessment sub-tests. About half of these
students choose to take lower-level sub-tests; 53% of beginning algebra students take the ALR
sub-test instead of the EA sub-test, 48% of intermediate algebra students take the EA sub-test
instead of the IA sub-test, and 49% of pre-calculus or calculus students take the IA sub-test
instead of the PC sub-test.
As previously mentioned, as a result of the MALDEF lawsuit, the use of a multiple-
measure assessment and ultimately the creation of multiple measure points was required because
using a single standardized assessment test to place students into developmental education
disproportionately impacted students from URM backgrounds. We therefore examine how
multiple measure points are distributed across racial/ethnic groups to examine whether these
multiple measures actually assist in reducing the racial disparities in lower-levels of
developmental math. Figure 2.5 demonstrates that across race/ethnicity, small percentages of
[Figure 2.4 shows, for each highest passed high school course (basic math, beginning algebra, geometry, intermediate algebra, trigonometry, pre-calculus or calculus), the share of students choosing a higher-, same-, or lower-level sub-test.]
students earn four multiple measure points for having an “A” average high school GPA which is
the policy under the actual College H A&P criteria (61 to 74 percent of students earn zero
multiple measures points, and only 5 to 11 percent earn four points). More students earn multiple
measure points under the buffering placement criteria. Approximately 39 to 62 percent of
assessed students reported taking a lower sub-test than their reported level of math course-taking,
and were therefore given the four multiple measure points. Of these students, the highest
percentages are among URM students, which suggests that these racial/ethnic groups are more
likely to underestimate their math preparation and make choices that potentially hinder success.
Another interpretation may be that other racial groups are more attuned to their math level – 58
percent of Asian American students and 49 percent of White students choose the sub-test that
aligns with the highest level of math completed with a passing grade.
Figure 2.5. Distribution of multiple measure points by race/ethnicity.
[Figure 2.5 shows, for each racial/ethnic group (Asian American, African American, Latino, White, Other), the share of students earning +0, +2, and +4 multiple measure points under the high school GPA measure and under the prior math measure.]
Distribution of Placements: Actual versus Buffering Criteria
Figure 2.6 illustrates the proportion of students placing into the five levels of math in
College H under the actual placement criteria and the buffering placement criteria. This figure
demonstrates that including the highest level of math completed as an additional measure of
ability results in a redistribution of placements into higher levels of math. We see that while the
proportion of placements into EA remain stable, placements into AR and PA significantly
decrease (p < .01) and placements into IA and CLM experience significant increases (p < .01).
Looking only at students who were placed into a higher developmental math level under
the buffering placement criteria, we find that the additional multiple measure points
disproportionately assist students placed at the lower levels of the math sequence. Specifically,
54 percent of students placing into AR under the actual criteria would be placed into PA under
the buffering criteria, 24 percent of PA-placed students would move to EA, and only 8 percent of
EA students and 3 percent of IA students would have a higher placement into IA and CLM,
respectively.
Figure 2.6. Distribution of actual placements and buffering placements.
*p < .10, **p < .05, ***p < .01
Figure 2.7 describes math placement by race/ethnicity under each placement criteria. We
find that while the buffering placement criteria shift the distribution of students across
developmental math levels, the racial disparities within developmental math levels do not
change. More specifically, Asian American and White students continue to be placed into higher
levels compared to African American and Latino students. Overall, this figure suggests that the
inclusion of students’ prior math achievement does not help in reducing the racial disparities in
math placement at the highest levels of the trajectory. However, in examining the placement
distribution under the buffering criteria within each racial/ethnic group, our chi-square analysis
finds that including additional measures results in higher placements for students from each
racial/ethnic group (p < .01). Therefore, while the buffering placement criteria do not decrease
the gap among racial groups in higher- and lower-level placements, they do increase higher-level
placements within each racial group.
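The chi-square comparison described above can be illustrated with a small self-contained example. This is a sketch only: the counts below are invented for illustration and are not the study's data.

```python
def chi_square_statistic(table):
    """Chi-square statistic and degrees of freedom for a two-way
    contingency table given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Hypothetical counts of students placed into lower / middle / higher
# math levels under the actual vs. buffering criteria (invented numbers).
actual_counts = [300, 500, 200]
buffering_counts = [220, 480, 300]
stat, dof = chi_square_statistic([actual_counts, buffering_counts])
print(f"chi2 = {stat:.2f}, dof = {dof}")  # prints: chi2 = 32.72, dof = 2
```

A statistic this large against a chi-square distribution with 2 degrees of freedom corresponds to p < .01, the significance threshold reported in the text.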
Figure 2.7. Distribution by race/ethnicity under actual and buffering placement criteria.
Student Success Rates: Actual versus Buffering Criteria
Results from our logistic model, which examines the likelihood students pass their first
developmental math course, are consistent with previous research. We find that female students
are more likely to succeed in their first math course compared to males. For each year older a
student is at the time of their assessment test, the more likely the student is to pass the first math
course. The more hours students anticipate working per week, the less likely they are to succeed.
Compared to the weighted-average student, Asian American students are more likely to succeed
while African American and Latino students are less likely to pass their first attempted
developmental math course. We also find that both prior achievement items that were utilized as
multiple measures of students’ assessment are statistically significant across models. Students’
high school GPA and highest level of math passed are positively associated with the likelihood
of passing their first developmental math course at College H.
In terms of developmental math level, we find that students placed into IA exhibit lower
odds of passing IA compared to students placing into lower levels of math and passing their
placed courses (i.e., EA, PA, and AR). This finding is inconsistent with previous research that
shows higher success rates at higher levels of the math sequence (Fong et al., 2015). Our finding
suggests that compared to IA, lower levels of developmental math are easier courses to pass.
This may be due to either students in IA being over-placed or students at lower levels being
under-placed. Further, there may be variation in instructor grading practices by level of
developmental math. Research has demonstrated a relatively high degree of variation in grading
practices by instructors; this instability of measurements of success (i.e., final grade) thus makes
accurate developmental placement more problematic (Armstrong, 2000).
Our main independent variable identifies whether students attempted a course under the
actual or the buffering placement criteria. We find that after controlling for placement level and all
other student covariates, students who attempt their buffering placement level had significantly
lower odds of passing compared to students who attempt a course located at their actual
placement level. That is, students who attempt a course at a level higher than their placement
level under the actual criteria perform significantly worse than students who attempt a level of
math that matches their actual placement (p < .01) (see Table 2.4).
Table 2.4.
Odds Ratio of Success in First Attempted Developmental Math Class
Variables OR SE
Student Demographics
Age 1.06*** (0.01)
Female 1.13* (0.08)
Asian American 1.30*** (0.12)
African American 0.61*** (0.08)
Latino 0.83*** (0.04)
White 1.13** (0.07)
Primary Language 1.00 (0.00)
Citizenship code 1.00 (0.02)
Student EPQ
Hours/week plan on attending classes 1.05 (0.04)
Hours/week plan on being employed 0.90*** (0.02)
Time spent away from education 0.94 (0.06)
Importance of college (personal) 1.08 (0.09)
Importance of college (family/friends) 0.93 (0.05)
H.S. GPA 1.30*** (0.07)
Highest level of math completed with passing grade 1.37*** (0.05)
Time since last math class was completed 1.04 (0.05)
Student Placement (IA=comparison)
Elementary Algebra 1.15* (0.10)
Pre-Algebra 1.42** (0.19)
Arithmetic 3.28*** (1.27)
Placement Criteria (Attempted placed course under actual criteria=comparison)
Attempted placed course under buffering criteria 0.22*** (0.06)
N 3902
Pseudo-R² 0.05
*p < .10, **p < .05, ***p < .01
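The odds ratios (OR) in Table 2.4 are exponentiated logistic regression coefficients. The sketch below illustrates that conversion and the resulting predicted pass probabilities; the coefficients are invented for illustration and are not the fitted values from the model.

```python
import math

# Hypothetical log-odds coefficients (NOT the fitted model). Exponentiating
# a coefficient yields the odds ratio reported in tables like Table 2.4.
coefficients = {
    "intercept": -0.50,
    "hs_gpa": 0.26,            # exp(0.26) ~ 1.30: higher GPA raises odds
    "buffering_placed": -1.51, # exp(-1.51) ~ 0.22: lower odds of passing
}

odds_ratios = {name: math.exp(beta) for name, beta in coefficients.items()}

def pass_probability(hs_gpa, buffering_placed):
    """Predicted probability of passing the first developmental math course
    under the hypothetical model: p = 1 / (1 + exp(-log_odds))."""
    log_odds = (coefficients["intercept"]
                + coefficients["hs_gpa"] * hs_gpa
                + coefficients["buffering_placed"] * buffering_placed)
    return 1.0 / (1.0 + math.exp(-log_odds))
```

For example, pass_probability(3.0, 1) is lower than pass_probability(3.0, 0), mirroring the direction of the buffering-placement odds ratio in the table.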
These findings suggest that students who would have been placed at a higher level under
the buffering criteria may be over-placed and would not succeed in their initial developmental
math course. However, this finding may also be a function of College H’s A&P policy that
permits students who place into AR the option of enrolling in PA. Following this policy, we
observe that 40 percent of students under the actual criteria place into AR and enroll in a higher
math course while less than three percent of students placing into PA, EA, or IA attempt a
higher-level math course (refer to Table 2.2). Therefore, there is a disproportionate
representation of students at each placement level under the buffering criteria. Because of this
skewed distribution, as well as evidence from previous research that success varies by level
(Boatman & Long, 2010; Fong et al., 2015; Melguizo et al., 2013), we next examine student
success by developmental math level.
By developmental math level. In the final analysis, we separate students under each
placement criteria and by each developmental math level. The resulting sample sizes are too
small to provide sufficient statistical power for regression models. We therefore utilize a series
of two-group mean-comparison t-tests to examine whether students placed via the actual or buffering
criteria had similar rates of success in their first attempted developmental math course.
Specifically, we compare success rates between students who attempt a higher level math course
and therefore match their buffering placement to students who comply with their actual
placement level. We found no statistically significant differences between the actual and
buffering placed students for passing IA, EA or PA (p > .10), which suggests that students who
do not comply with their actual placement by enrolling in the higher-level courses perform
similarly to students complying with their placement. These results indicate that these buffering
criteria can control for students who initially underestimated their math preparation and should
be placed into higher levels of math.
Restricted to students with similar scores. Because students who are placed at a higher
math level under the buffering criteria are likely more similar to students who are placed into the
higher math level but exhibit lower adjusted placement scores, we restrict the sample of actual
criteria students to those around the cut-off score. Specifically, we compare students who are
placed into a higher math level under the buffering criteria to students who are placed into the
same higher math level under the actual criteria and who score five or fewer points above the
cutoff score. Results from the sample falling within our bandwidth are consistent with the
unrestricted sample. We find no statistically significant differences in success rates between
students placed via the actual criteria compared to the buffering criteria (p > .10) (see Table 2.5).
Table 2.5.
t-tests Comparing Success Rates between Students Placed by Actual versus Buffering Criteria
Overall Sample Restricted Sample†
n M SD t n M SD t
IA
Actual 2146 0.70 0.44 492 0.75 0.43
Buffering 43 0.74 0.46 0.59 43 0.74 0.44 -0.11
EA
Actual 2381 0.66 0.47 537 0.74 0.44
Buffering 21 0.62 0.50 -0.42 21 0.62 0.50 -1.22
PA
Actual 835 0.70 0.46 614 0.72 0.45
Buffering 37 0.73 0.45 0.41 28 0.71 0.46 -0.12
*p < .10, **p < .05, ***p < .01
† Groups are restricted to within five points above cutoff
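The comparisons in Table 2.5 are two-group mean-comparison t-tests on binary pass outcomes. A minimal sketch using Welch's unequal-variance form (the study's exact t-test variant is not specified here, and the outcomes below are invented):

```python
import math

def welch_t(group_a, group_b):
    """Welch's two-sample t statistic for the difference in group means."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Invented pass (1) / fail (0) outcomes: a large actual-placement group and
# a small buffering-placement group with a similar pass rate.
actual_placed = [1] * 70 + [0] * 30     # 70% pass
buffering_placed = [1] * 30 + [0] * 13  # ~70% pass, much smaller n
t_stat = welch_t(actual_placed, buffering_placed)
```

With nearly equal pass rates, t_stat lands close to zero, consistent with the non-significant differences (p > .10) reported in the table.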
When examining the overall sample, results from the logistic regression reveal that
students placed by the buffering criteria perform worse than those placed by the actual criteria.
However, analyzing student success rates by developmental math level, we find no significant
difference in performance between students under each placement criteria. We then check the
robustness of these results by restricting the sample to students around the cutoff and again find
no significant differences in student success between the two criteria. Taken together, the
descriptive results showing a shift of student placements into higher levels of math under the
buffering criteria, along with the empirical findings indicating no significant difference in
student success rates between the actual and buffering criteria, demonstrate how changes to
A&P policies can increase student access without decreasing student success.
Limitations
There are several limitations to this study that warrant discussion as they relate to the
interpretation of our results. First and foremost, this study proposes A&P criteria that
consider students' choice of assessment sub-test in relation to the highest level of math passed in
order to buffer against choices that may have negative consequences in terms of students’
developmental math placement. While we examine student behavior, we did not have
quantitative or qualitative data that would help explain why or how students made their
assessment sub-test choices. Thus, we cannot be certain whether this choice is the result of lower
math confidence, lack of information regarding the structure of the A&P process, or direction
from peers, family, or advisors. However, we did not find any documentation of institutional
policies that sorted students prior to them choosing their assessment sub-test. Additionally,
research has found that although the majority of colleges provide access to sample tests on their
websites to help students prepare for their assessment test, few had systematic practices in place
to direct students to these online materials or other test-prep resources (Hodara, Jaggars, & Karp,
2012). Therefore, assessment sub-test choice is mainly explained at the individual student level.
Second, the results from our logistic regression that examined the probability of success
report a small Pseudo-R², the statistic that describes how well the model fits the data. That is, our
empirical model was not as strong a fit to the data as we would have liked. The most reasonable
explanation for this is simply the lack of additional explanatory variables that relate to students’
success in their first attempted math course. The literature examining the factors related to
student success is expansive; unfortunately, we did not have access to many of these factors,
such as achievement motivation and perseverance (e.g., Duckworth, Peterson, Matthews, &
Kelly, 2007; Hoyt, 1999; Hagedorn, Siadat, Fogel, Nora, & Pascarella, 1999; Robbins et al.,
2004), environmental factors like financial and familial obstacles (Crisp & Nora, 2010; Hoyt,
1999; Nakajima, Dembo, & Mossler, 2012; Schmid & Abell, 2003), or psychosocial factors like
academic self-efficacy demonstrated to be related to student success (Armstrong, 2000; Krumrei-
Mancuso, Newton, Kim, & Wilcox; 2013; Robbins et al., 2004). Further, we did not have access
to high school transcript information (Long Beach Promise Group, 2013), so the control
variables in our model, including previous academic preparation, were all self-reported measures
from the students’ EPQ. Lastly, developmental education research has demonstrated the
relationship between student success and institutional factors, like institutional size and
demography (Bailey et al., 2010; Fong et al., 2015), as well as characteristics of the actual
developmental math course, such as instructor grading practices (Armstrong, 2000).
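McFadden's pseudo-R², the fit statistic referenced above, compares the fitted model's log-likelihood to that of an intercept-only model. A worked sketch with invented log-likelihood values:

```python
def mcfadden_pseudo_r2(ll_model, ll_null):
    """McFadden's pseudo-R2: 1 - (log-likelihood of the fitted model /
    log-likelihood of the intercept-only model)."""
    return 1.0 - ll_model / ll_null

# Invented log-likelihoods; the ratio reproduces the magnitude of fit
# reported in the chapter (a pseudo-R2 of about 0.05).
print(round(mcfadden_pseudo_r2(-2280.0, -2400.0), 3))  # prints: 0.05
```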
A third limitation is that the sample sizes necessarily shrank when examining differences
in success rates at each developmental math level between students enrolled in courses based on
the actual placement criteria and students enrolled in courses based on the buffering placement
criteria. Because of the size limitation, we use mean-comparison t-tests and thus are unable to
control for other characteristics, like the factors included in the logistic regression, that relate
to success in students’ initial developmental math course. As described above, we minimize
some of the effect of these other characteristic differences by narrowing the bandwidth. Results
from this analysis are consistent with findings from the full sample.
Discussion
A substantial proportion of students entering community colleges are assessed and placed
into developmental education. Further, the majority of students entering the CCC system are
placed into lower levels of the developmental math trajectory. While these statistics are dismal,
part of the large proportion of lower-level placements may be due to inaccurate A&P policies.
This study provides an in-depth examination of students' behavior through their A&P process and
the extent to which including prior math achievement (nested within assessment sub-test choice)
as an additional multiple measure may increase students' likelihood of access and success.
Students Don’t Know Which Sub-Test to Choose and Are Likely to Underestimate their
Math Preparation
We first find that when students are granted agency in choosing their level of assessment
sub-test, it appears that: (1) there is confusion about the alignment between the high school and
community college math sequence, and (2) more students choose a lower-level sub-test than a
sub-test at the same or higher-level compared to their level of prior math achievement; this is
especially the case for students from URM backgrounds. These findings are consistent with prior
literature highlighting what students “don’t know” regarding the A&P process (e.g., Venezia et
al., 2010), and may also be a result of students’ lower confidence affecting their academic
choices (e.g., Bickerstaff et al., 2012; Cox, 2009; Pajares & Kranzler, 1995).
The misalignment between the high school and college sequences may also stem from the
many math pathways available to California high school students. Finkelstein, Fong, Tiffany-Morales,
Shields, and Huang (2012) analyzed the math and science course-taking patterns of more than
24,000 students from 24 different California school districts and documented 2,000 course-
taking patterns. In fact, they found that less than one-third of the students in their sample
followed paths in the top 20 most common patterns. Further, given that the existing California
high school graduation requirements do not require students to take math their senior year,
students may not be utilizing their senior year to prepare for college-level math. In California,
there are currently only three math requirements for high school graduation: (1) pass California
High School Exit Exam (CAHSEE) in math, (2) complete two years of math in high school, and
(3) pass Algebra I (EA). Indeed, research has shown that these low high school graduation
requirements mislead students into believing they are college-ready when they are not (Jaffe,
2014; Kirst & Bracco, 2004; Venezia & Kirst, 2005).
Jaffe (2014) found that the high school math paths most frequently traveled by community
college-bound students were less rigorous than those of students entering baccalaureate-granting
institutions: almost 20 percent started in Algebra I (EA) or below and did not progress in
math beyond Algebra 2 (IA). Further, nearly half of community college-bound students took no
math in their senior year of high school. As they entered community colleges, Jaffe found that
their CAHSEE math performance was a significant predictor of placement at all four levels
below college-level math, and that not taking math in their senior year was a significant predictor
for students placing two-, three-, and four-levels below college-level math. Jaffe concluded that
this behavior is consistent with research documenting that students were unaware of the
community college placement process and standards and therefore did not use their high school
years to prepare (Hughes & Scott-Clayton, 2011; Kirst & Bracco, 2004; Venezia et al., 2010;
Venezia & Kirst, 2005).
The Buffering Placement Criteria Increases Access without Decreasing Success
Under the buffering placement criteria, we provide additional multiple measure points to
students whose level of prior math achievement was higher than their choice of assessment sub-
test. By including this measure, we are utilizing information on students’ prior math achievement
as an assessment of math preparation, as well as attempting to control for potential confusion
around the A&P sub-test choice or students underestimating their math ability. We find that the
buffering criteria predict increases in access to higher levels of developmental math without
decreasing success rates.
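The buffering rule described above can be sketched as a simple adjustment to a student's placement score. The level ordering and four-point value follow the chapter's description, but the function and encodings below are illustrative, not College H's actual formula:

```python
# Developmental math levels ordered from lowest (AR) to highest (CLM).
LEVELS = {"AR": 1, "PA": 2, "EA": 3, "IA": 4, "CLM": 5}
BUFFER_POINTS = 4  # extra multiple-measure points under the buffering criteria

def buffered_score(test_score, mm_points, subtest_level, prior_math_level):
    """Add extra multiple-measure points when the highest level of math a
    student passed exceeds the level of the sub-test they chose."""
    if LEVELS[prior_math_level] > LEVELS[subtest_level]:
        mm_points += BUFFER_POINTS
    return test_score + mm_points

# A student who passed IA but chose the PA sub-test receives the buffer:
print(buffered_score(40, 2, "PA", "IA"))  # prints: 46  (40 + 2 + 4)
```

A student whose sub-test choice already matches or exceeds their prior math level receives no extra points.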
The findings suggest that utilizing high school transcript data to inform placement
decisions increases access to higher levels of math for community college students. Next, in the
absence of a counterfactual group that was placed according to the buffering criteria, we examine
students who are non-compliant with their actual placement and essentially attempt the higher
course they would have been placed in under the buffering A&P criteria. We then compare them
to students who comply with their placement and find these two groups of students do not differ
in success rates, even after restricting the sample to a narrow bandwidth around the cut-off.
Additional information, particularly students' prior math experience, which may offset lower
math confidence, is a strong predictor of math success and should be given serious weight in the
placement process as it increases access and maintains success. Together, these two findings
suggest that placement criteria that buffer against misinformed student choice may increase
student access to higher levels of math without decreasing student success. Again, these
results should be interpreted with caution since the sample sizes necessarily shrank as our
questions became more refined and because we utilize simple mean-comparison tests.
Our findings are consistent with previous research comparing placement and success for
students who were assessed and placed via a test-placement or self-placement mechanism.
Kosiewicz (2013) found that students who self-placed enrolled in higher levels of math and exhibited
better outcomes compared to test-placed students. Ngo and Kwon (2014) found that utilizing
relevant multiple measures to place students into levels of math promotes both student access
student success. Further, the Long Beach Promise Group (2013) found that students who were
placed in courses via a “predictive placement” scheme based on high school grades instead of
test scores were more likely to complete college-level math and English courses.
It is important to note, however, that while these additional multiple measures and the
potential of student self-placement are promising for increasing access and promoting success in
developmental math education, utilizing additional measures and self-placement as part of the
A&P process may not reduce the racial disparities existing in math. The intent of the CCC Title
V regulations, which were established in response to the MALDEF lawsuit and require the use
of multiple measures, was to combat the disproportionate impact of lower-level developmental
placements for URM students. This study demonstrates that underestimation of math preparation,
potentially tied to lower confidence, is especially prevalent among students from URM backgrounds;
yet, including these measures does not necessarily have a disproportionate positive impact for these students. Thus, it
is important to highlight the distance between statewide reform efforts and policy
implementation. From a policy perspective, this may simply mean that community colleges
should redesign their placement formulas and give more weight to information from high
school transcripts relative to test scores; it may also mean that there must be more collaboration
between the K-12 and community college systems to incorporate early intervention programs
that attempt to reduce the disadvantages students carry with them from the K-12 system into the
community college system.
Policy Implications
As we explored the actual and buffering placement criteria, we recognized several
aspects of the A&P process that administrators should consider as they reassess their policies.
First, despite the requirement to utilize multiple measures to assess and place students into
developmental education, placement is still largely determined by students’ assessment sub-test
score. Ngo and Kwon (2014) similarly reported that across the colleges within a large urban
district in California, less than five percent of assessed students get “boosted” up into a higher
developmental math level due to multiple measure points. Moreover, these additional points have
no effect on reducing the racial disparities in developmental math placement. Thus, the manner
in which multiple measure points are implemented in these colleges does not conform to the
policy's intent, which is to address the disproportionate impact on underrepresented minority
students. Therefore, colleges should evaluate their weighting schemes, experiment with altering
the number of points awarded for multiple measures, and examine whether doing this increases
access without decreasing success.
Second, as we took an in-depth look at students’ behavior through their A&P process, we
noticed that non-complying students are more likely to attempt a math course that is located at a
level below their placement level than above their placement level. Thus, even when colleges
enforce more inclusive A&P policies, students still have the ability to sabotage their own
success. Early intervention programs through assessment or counseling services may assist in
reducing students’ tendency to create more barriers for themselves by underestimating their own
math preparation.
Lastly, this study shows that a substantial proportion of students are not choosing
assessment sub-tests that align with their level of math preparation. Much of this misalignment
seems due to the differences in structure between the community college and high school math
sequence. Thus, colleges should clearly provide students information comparing the courses in
these two sequences to better inform students which sub-test match their level of math
preparation. Moreover, given that some colleges have experimented with allowing students to
place themselves into levels of math they feel best matches their ability rather than being placed
by a standardized assessment test (Kosiewicz, 2013), these findings suggest that students may
have misperceptions of their own math preparation. Again, clearly articulating information to
students in terms of their own math preparation and how it fits within the community college
math sequence is crucial if students are to accurately choose the highest-level math course where
they have the greatest likelihood of success.
Future Research
Although this study was based on an analysis of students in a single institution, the
question of how adequately students are served by an institutional assessment and placement
policy is common to the majority of community colleges, not just in California but nationwide.
Community colleges, nationally, serve the largest proportion of non-traditional students and
students from URM backgrounds. Research has demonstrated that these students report lower
math self-confidence (e.g., Jameson & Fusco, 2014; Pajares & Kranzler, 1995; Stevens et al.,
2004), and thus may need additional institutional policies to assist in overcoming behavior that
may hinder success. The behavior this study measured was the misalignment between
assessment sub-test choice and prior math preparation. Then, through an institutional-level
policy, we attempted to buffer against any misaligned choice by providing additional multiple
measure points to boost a student's assessment score and potentially their developmental math
placement. This study focused on the specific behavior of sub-test choice; however, there are
many intersecting points at which institutions can systematically intervene to guide students
down a better-informed path.
Multiple measures may be one of the least resource-demanding interventions and have emerged
as a focus of state policies. Though California is one of the only states to have an
existing multiple measure policy, several other states have introduced policies to incorporate use
of multiple measures in their A&P policies for developmental math (Burdman, 2012). North
Carolina, for example, has developed a customized placement assessment that includes gathering
information from multiple measures, such as high school grades and noncognitive measures
(Burdman, 2012). The Texas Success Initiative (TSI) includes the recommendation that
additional multiple measures such as high school GPA, work hours, or noncognitive measures be
considered in conjunction with assessment test scores (Burdman, 2012; Texas Higher Education
Coordinating Board (THECB), 2012). Connecticut’s Senate Bill 40 and Florida’s Senate Bill
1720 have also proposed similar policies to incorporate multiple measures.
Thus, the analyses we conducted in this study serve as useful models for other colleges
and state systems interested in analyzing and experimenting with their own assessment and
placement policies. This paper is intended to stimulate discussion and inquiry among community
college policymakers, practitioners, and researchers seeking to improve the accuracy in the
assessment and placement of students into developmental education. In what follows, we have
outlined three main areas where future research can help flesh out students’ experiences through
the assessment and placement process to increase placement accuracy.
1. Measuring self-efficacy and motivational measures. Research in educational psychology
suggests that noncognitive measures, factors not specifically related to academic content
knowledge or skills, are also predictive of college success and future outcomes
(Duckworth et al., 2007; Heckman, Stixrud, & Urzua, 2006; Krumrei-Mancuso et al.,
2013; Sedlacek, 2004). For example, self-efficacy and motivation have been found to be
strongly predictive of future achievement (Duckworth et al., 2007; Krumrei-Mancuso et
al., 2013; Sedlacek, 2004). While research suggests these noncognitive factors should be
utilized in the A&P process (Boylan, 2009), few institutions use them in practice
(Gerlaugh, Thompson, Boylan, & Davis, 2009; Hughes & Scott-Clayton, 2011). In this
study, we attempted to incorporate this information through the misalignment between
student sub-test choice and prior math achievement. However, to accurately test this
assumption, these constructs should be measured directly. To more clearly identify the relationship between
these constructs and student behavior through the A&P process, future research should
measure students’ motivation and math self-efficacy.
2. Qualitatively documenting students’ experiences through the A&P process. Existing
qualitative research examining students’ experiences through the A&P process has found
that students lack awareness both of the process and the consequences of their
performance on assessment tests (Bunch et al., 2011; Fay et al., 2013; Safran & Visher,
2010; Venezia et al., 2010). Building upon this research, qualitative studies can flesh out
students’ experiences focusing on what is driving certain student choices and behaviors
through their A&P process. Future research should look to examine forms of guidance
students are receiving through counselors, assessment officers, faculty, and peers.
3. Examining the use and success of diagnostic testing. Diagnosing where students'
weaknesses lie has the potential to better inform the P-16 math pathway as well as
placement decisions and classroom instruction. There is some recent work suggesting that
diagnostic tools may improve placement accuracy in middle- and high school
mathematics (Betts et al., 2011; Huang, Snipes, & Finkelstein, 2014) and also in
developmental math in community colleges (Fong et al., 2015; Rodriguez, 2014). Future
research can build off this existing literature by identifying the areas of math, such as
fractions or decimals, where students are experiencing the most difficulty and perhaps
experimenting with placement policies that are determined by math content rather than
math level. Further, research can attempt to bridge the gap between content on the
assessment test and faculty instruction; for example, research could examine whether granting
faculty access to their students' diagnostic results would improve student success rates.
This is an exciting time for research and practice in terms of community college
developmental education programs. As more statewide policies are enacted to improve
developmental education, practitioners are continuously tasked with adopting and adapting to
these policies to systematically assist students through developmental education, while research
is tasked with identifying what, why, and how things are or are not working. As this study has
demonstrated, developmental education at the community colleges begins with the assessment
and placement process. While prior studies have not been able to quantifiably examine students’
choices through this process, in this study, we have attempted to observe student choice within
an institution’s assessment and placement criteria. It is important to recognize that while policies
may be created at the state- or federal-level, they are implemented locally and thus defined
through local agents and participants: community college practitioners and community college
students. Research should thus not only continue to examine these issues as part of the larger
context but also pay heed to the institutional environment in which these policies are
implemented and students’ decisions within that context.
Chapter Four
Understanding the Relationship between Increasing State-Standards for an Associate’s
Degree and Developmental Math Student Outcomes
“Assisting underprepared students to be successful in college-level work is essential to
the mission of the California Community Colleges…More than any other postsecondary segment
in California, the community colleges exemplify the spirit of the California Education Code
Section 66201 which affords each able Californian an unparalleled educational opportunity”
(Fulks & Alancraig, 2008, p. 2). By law, the main responsibility of providing basic skills
education¹⁸ falls under the California Community College (CCC) system. Basic skills are the
foundational skills in English, math, and English as a Second Language (ESL) which are
necessary for students to succeed in the college curriculum (Boroch et al., 2007). Basic skills or
developmental education, then, is meant to prepare academically underprepared students so that
every student has the opportunity to be successful in college-level work. However, a large
proportion of students enter higher education academically underprepared for the college
curriculum (Horn & Nevill, 2006; National Center for Public Policy and Higher Education
(NCPPHE) & Southern Regional Education Board (SREB), 2010), and the lack of these “basic skills” hinders their access and ability to succeed in college-level courses. Research on
developmental education has demonstrated that only a small proportion of students who are assigned to remediation, especially to lower levels, complete college-level work (Bailey, Jeong, & Cho, 2010; Bahr, 2010; Fong, Melguizo, & Prather, 2015).
18. The term “basic skills” is used in California and is synonymous with developmental education and remediation. These terms are used interchangeably throughout this chapter.

Recognizing the importance of providing educational opportunity to underprepared students as well as the current state of developmental education, the California Community Colleges Chancellor’s Office (CCCCO) funded the Basic Skills Initiative (BSI). This multi-year
statewide effort was tasked with improving curriculum, instruction, student services, assessment,
program practices and campus culture in the areas of English as a Second Language (ESL) and
basic skills education (Fulks & Alancraig, 2008). As part of the BSI, the CCC Board of
Governors adopted a regulation that increased the minimum math and English requirements for
an associate’s degree by one level, effective for all students entering the CCC system in fall 2009
and any term thereafter (CCCCO, 2008). Typically, developmental math and English are
delivered as a sequence of courses, each course serving as a pre-requisite to attempt the
subsequent course. This regulation, therefore, only affects students placed into developmental
education. For English, students are now required to successfully complete Freshman
Composition either by taking an assessment test and placing in a level above this course or by
passing the course itself. Similarly, for math, students are now required to pass intermediate algebra
(IA), an increase from the past requirement of elementary algebra (EA). Therefore, this
regulation pushes back the attainment of an associate’s degree at least one semester further for
developmental education students. Moreover, the failure rate in remedial courses is higher than
that of college-level courses (U.S. Department of Education, 1994), so while the requirement
increases by one step, students may be attempting courses multiple times which increases the
amount of time necessary to be able to attempt and pass the additional level (Bahr, 2012; Perry,
Bahr, Rosin, & Woodward, 2010).
The requirement increase only affects developmental education students, who represent a substantial majority of CCC students, and in particular students from underrepresented ethnic minority populations. In California, the system exhibits relative success in developmental English: approximately 62 percent of incoming community college students are placed into
either transfer-level English (27 percent) or one level below (35 percent), which formerly met the
requirement for an associate’s degree. In contrast, about 85 percent of incoming students are
placed into developmental math (CCCCO, 2011). The majority of these students are placed into
two (25 percent) and three (21 percent) levels below transferable math. Further, over half of
these developmental math students report being either Latino or African American, the majority
of whom place into the lowest levels of the math sequence (CCCCO, 2011).
Community colleges are open-access institutions serving the most diverse student population in postsecondary education; they are where the largest proportion of first-generation, low-income, and underrepresented racial minority (URM) students begin higher education (Bueschel, 2003; Cohen & Brawer, 2008). The multiple functions of community colleges reflect the diversity of their students, providing pathways to transfer to baccalaureate-granting institutions,
vocational certificates, basic skills, and community programs (Cohen & Brawer, 2008).
However, with President Obama’s college attainment goal (Obama, 2009) paired with the Great
Recession, the focus of education policy has been on increasing accountability and ensuring
public institutions are economically efficient. Within this era of increased accountability,
community colleges have also experienced pressure to enforce academic standards (Dougherty &
Hong, 2006; Dougherty, Hare, & Natow, 2009). The open-access nature of community colleges
in a climate of increased accountability means community colleges face a constant pressure to
uphold standards while maintaining equal access.
The balance between access and standards frames this study; specifically, I examine what
happens to access when standards are increased. This framework aligns with the debate that
emerged among CCC faculty and administrators when discussing the possible implications of
adopting the Title V regulation that increased requirements for an associate’s degree. An
Executive Committee from the CCC Academic Senate published a report in an attempt to
provide a resource for informed participation of local senates in discussing the implications of
raising standards. Their report was based on forums held throughout the state, provided by the Senate’s Curriculum Committee to foster discussion about the new regulation. They found that,
on the one hand, proponents argued that raising the minimum math and English requirements was necessary given that the existing requirements were high-school-level courses. Thus, keeping the
standards at the current levels meant granting a “college degree for high-school level work”
which “undermines the value of [an associate’s] degree” (ASCCC, 2004, p. 2). Stated differently,
the skills attained in postsecondary education should be higher than that from secondary
education. On the other hand, opponents reasoned that raising the standards for an associate’s
degree may decrease the likelihood for many of the system’s “overburdened and underprepared
students from gaining their degrees” (ASCCC, 2004, p. 2); thus, limiting access to a
postsecondary degree. Further, they pointed out that while “more demanding English and
mathematics course requirements make sense for transfer students,” a large proportion of the
students in community colleges are seeking an associate’s degree for its value in the job market,
and are not intending to transfer to a 4-year university.
While proponents of this regulation view the policy change as an increase of standards
for a CCC associate’s degree (ASCCC, 2004), there may be unintended implications for equity
and access to educational opportunity for a substantial portion of students, particularly students
from low-income and underrepresented backgrounds who are more likely to be placed in
remediation (Shaw, 1997). Understanding the extent to which this policy may have
disproportionately hindered these groups of students is not only important for the equity agenda
of higher education, which has increasingly become the responsibility of community colleges,
but also for the future of California’s economy. The Public Policy Institute of California projects
labor shortages arising from educational attainment not keeping pace with job market needs,
which has shifted toward workers with college degrees (Reed, 2008). Students from URM backgrounds represent the largest proportion of students entering postsecondary education in California; however, their accelerated growth pattern is not mirrored in educational attainment (Moore & Shulock, 2010). Among all the African American and Latino undergraduate students enrolled in California’s public higher education system in 2010, 83 percent of African American and 81 percent of Latino students were enrolled in community colleges, compared to 12 and 14 percent in the California State University system and four percent each in the University of California system (California Postsecondary Education Commission, 2010). Thus, the vitality of California’s
economy rests on URM students enrolled in the CCC system, so it is imperative to explore
changes in education policy that may affect the probability of these students completing their
degree and entering the workforce as college graduates. The main objective of this study is to
understand the extent to which this policy relates to developmental math student degree
attainment in order to better recognize how the increased math standard affected the access
mission. I specifically focus on the math requirement increase because a larger proportion of
students are placed into and struggle with developmental math education compared to English
(Bailey et al., 2010).
The structure of this paper is as follows. Given the dearth of literature examining the
impact of increased standards on community college student outcomes, I first review the K-12
literature examining the relationship between graduation requirements and student outcomes.
Next, I describe the CCC Board of Governors adoption of the Title V regulation that increased
the minimum math and English requirements for an associate’s degree. In the following section,
I describe my methodology to measure the extent to which the requirement change related to
developmental math student degree attainment. Utilizing student-level data from one California community college that implemented the requirement per state policy for its fall 2009 cohort, I ran several regression models to examine the extent to which a higher math requirement related
to changes in the number of degree-applicable units students earned, the probability students pass
the new math requirement, and the probability students obtain at least 60 degree-applicable units,
which is the minimum number of credits required for an associate’s degree. I also investigate the
equity concern and estimate the extent to which this additional requirement had any disproportionate relationship with URM student degree attainment. Analyses revealed that the
implementation of a higher math requirement did not significantly change the number of degree-
applicable units students earn within four years, or the probability they pass IA or obtain at least
60 degree-applicable units at this particular community college. Further, differences between
White and URM students on developmental math student degree attainment did not change after
the IA requirement was implemented. Findings from this study suggest that the Title V
regulation increased standards without limiting access or equity; however, these findings should
be taken with caution given the external policies and economic environment that might have
confounded the results. I therefore conclude this paper with directions for future research.
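The pre/post comparison these regression models perform can be sketched in code. The following is a minimal, purely illustrative example: the simulated data and every variable name (post_policy, urm, placement_level, units_earned) are assumptions made for illustration and are not the study’s actual data or model specification.

```python
# Illustrative sketch (not the study's actual data or specification):
# compare developmental-math cohorts assessed before vs. after the fall 2009
# intermediate algebra (IA) requirement on degree-applicable units earned.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    # 1 = assessed in fall 2009 or later (IA requirement in effect)
    "post_policy": rng.integers(0, 2, n),
    # 1 = underrepresented racial minority (URM) student
    "urm": rng.integers(0, 2, n),
    # placement: 1 (IA) .. 4 (arithmetic) levels below transfer-level math
    "placement_level": rng.integers(1, 5, n),
})
# Simulated outcome: degree-applicable units earned within four years
df["units_earned"] = (
    30 - 4 * df["placement_level"] + rng.normal(0, 10, n)
).clip(lower=0)

# OLS of units earned on the policy indicator; the post_policy:urm
# interaction probes whether the requirement related differently to
# URM students' outcomes (the equity question).
model = smf.ols(
    "units_earned ~ post_policy * urm + C(placement_level)", data=df
).fit()
print(model.params["post_policy"])      # policy coefficient
print(model.params["post_policy:urm"])  # differential URM relationship
```

Analogous logit or probit models (e.g., `smf.logit`) would apply for the binary outcomes of passing IA or earning at least 60 degree-applicable units.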
Background and Previous Research
Across the nation, postsecondary institutions are serving an increasing number of
students deemed underprepared for college-level work, and are simultaneously being pressured
to drastically increase college graduation rates in order to meet the labor market needs for
workers with a college degree. To attend to these demands, states have focused on reforming
developmental education. As policies reforming developmental education have emerged, so has the debate over whether the community college system can meet its dual missions of providing equal access and upholding standards as a postsecondary education institution. Perin (2006) summarizes this
tension between access and standards in developmental education stating that “standard goals are
achieved when all students who need it receive remediation as preparation for college-level
study, and access goals are met when all students are allowed to participate in the college
curriculum irrespective of skills” (pp. 369–370).
Several states, such as Connecticut (SB 40) and Florida (SB 1720), have implemented
policies that allow students who would have been assigned to developmental courses to enroll
directly in college-level math and English courses (Burdman, 2012; Perin, 2002; Kosiewicz et
al., 2014; Mangan, 2014a; Tierney & Duncheon, 2013). These policies are referred to as
mainstreaming and essentially remove the pre-requisites for college-level courses (Fain, 2012,
2013; National Conference of State Legislatures, 2012). While research has yet to explore the
impact of these policy changes, structurally, removing pre-requisites reduces the time students
would have to spend in remediation and provides students with the opportunity to earn degree-
applicable credits sooner. However, it makes the assumption that students are ready for college-
level work and can complete coursework if provided with ongoing support (Mangan, 2014b).
Some practitioners and critics of mainstreaming argue that policies restricting remediation
ultimately restrict access. They contend that a substantial proportion of students, mostly from
underrepresented populations, will continue to enter community colleges with remedial needs,
and by allowing them to enroll in college-level courses, they are likely to fail, which will result
in further stratification in higher education (Fain, 2013). According to Perin (2006) and
proponents of this type of developmental education reform, however, mainstreaming meets the
access mission since all students, regardless of initial ability, are provided the opportunity to
participate in college curriculum. Yet, academic standards may be impaired if the diversity of
student preparation in college-level courses affects instruction such that faculty dilute the
curriculum in order to accommodate all their students (Costrell, 1998; Mangan, 2014b).
While Connecticut and Florida implemented policies that have removed pre-requisites
(effective in 2012 and 2014, respectively) for college-level work, California implemented a
regulation (effective in 2009) that increased the pre-requisites for an associate’s degree.[19] Aligning with Perin’s (2006) summary of the tension between standard goals and access goals,
debates emerged among CCC practitioners which highlighted the intended and potential
unintended consequences of this legislation. As described earlier, proponents of this regulation
argued that an associate’s degree should require higher standards than a high school diploma
while opponents voiced that an increase of standards would further restrict access.
Given the recent implementation of these policies, and thus the limited research
examining the extent to which graduation requirements affect the probability that community college students
attain their degree, I turn to K-12 education which has a longer history of educational reforms
aimed at increasing standards. Like the CCCs, which increased the standard of “what students
with an associate’s degree should know,” secondary institutions have debated the standard of
“what students with a high school diploma should know.” This study thus builds upon the extant
K-12 literature which has sought to understand what happens to student outcomes when policies
increase standards and graduation requirements, and examines the unintended consequences of
these reforms to student access as well as the disproportionate impact on URM and low-income
students.
19. It is important to note that even with the increase in associate’s degree requirements, the CCCs’ requirements are still lower than those of other states (e.g., Florida, Connecticut, Texas, and Washington) that require students to pass college algebra for an associate’s degree.
Since the 1983 publication of A Nation at Risk, policymakers have criticized
the low academic standards in the U.S. public secondary education system (National
Commission on Excellence in Education, 1983). Over the next three decades, claims that too few
students – especially from urban schools – graduate high school with the skills required to be
successful in college and in the workforce continued (Allensworth, Nomi, Montgomery, & Lee,
2009). Educational reforms resulting from A Nation at Risk aimed to increase students’
curricular intensity in terms of academic course taking. To achieve this goal, state policies tended
to implement or increase graduation requirements (Clune & White, 1992). Most recently, the No
Child Left Behind Act of 2001 (NCLB) increased mandated testing and school accountability.
NCLB requires states to test their students on math and reading based on established state
curriculum standards, and to report their overall progress toward meeting state standards.
Additionally, the law called for monitoring progress of students “who are economically
disadvantaged, from racial or ethnic minority groups, have disabilities, or have limited English
proficiency” (Schiller & Muller, 2003, p. 299). The stated goals of this legislation were to raise academic standards, and with them the performance of secondary students, as well as to decrease
gaps in achievement for students from URM backgrounds or low-income families (Schiller &
Muller, 2003). Pairing examination systems with a standardized state curriculum, these reform
efforts essentially redefined and set minimum criteria of what every high school graduate should
know. These reforms are in the same vein as the Title V regulation by specifically
establishing the difference between what an associate’s degree-holder should know (in math and
English) compared to a high school graduate.
These types of educational policies also enforce the ongoing balance between increasing
standards without hindering access, especially for disenfranchised students. As Attewell and
Domina (2008) describe, two concerns emerge in discussing increasing curricular intensity via
increasing graduation requirements, “One is a concern about equity: whether disadvantaged
groups are unfairly excluded from access to more demanding high school courses. The other is a
policy concern: the hope that upgrading high school curricula might yield substantial payoffs in
terms of student skills and college success” (p. 65). In order to forecast the intended and
unintended consequences of the graduation requirement increase in CCCs, in what follows, I
explore these consequences in the context of K-12 educational reforms.
Intended Consequences: The Relationship between Graduation Requirements and
Curricular Intensity on Student Success
In the 1983 A Nation at Risk report, the authors proposed an upgraded curriculum organized around “Five New Basics” – (1) four years of English, (2) three years of math, (3) three years of science, (4) three years of social studies, and (5) a half year of computer science (National Commission on Excellence in Education, 1983). Alexander and Pallas (1984), examining this new core curriculum, found that students who completed it reaped a significant benefit in their test performance. Over three decades
later, there continues to be a strong trend in U.S. secondary education toward curricular
upgrading, instituting graduation requirements, and implementing a standard core curriculum
(i.e., NCLB). Recent research studying these similar and more recent reforms has found that
students in states with more high school graduation requirements tended to enroll in and persist
through higher-level math courses (Schiller & Muller, 2003), and exhibit an increase in the
number of credits across academic subjects and in the level of difficulty of these courses (Clune
& White, 1992). However, research has also demonstrated that increasing curricular intensity via
graduation requirements may only provide modest gains to its students.
The tempering of gains in student achievement branches into two rationales. One,
relatively few students may even be affected by the requirement either because they took more
than the required coursework or they took courses that did not affect their achievement (Chaney,
Burgdorf, & Atash, 1997). Two, only mild effects of graduation requirements on student achievement have been reported; further, while requirements lead to higher rates of course completion, higher requirements also dilute the effectiveness of completing additional courses (Hoffer, 1997). That is, the benefit of completing additional courses diminishes with higher
requirements. Hoffer (1997) examined legislation that required high school students to complete
at least three years of math in order to graduate. While he did not find any effect of this
graduation requirement on the probability students drop out of high school, he also did not find
an effect on achievement gains; moreover, the effect of socioeconomic status was not reduced
in schools enforcing this requirement. Based on these results, Hoffer (1997) argued that efforts to
improve achievement outcomes (or reduce the achievement gap for lower-income students) by
simply raising the number of required math courses was not effective.
While previous research focused on shorter-term outcomes, Adelman (2006) published a
series of studies using national longitudinal student surveys (i.e., High School and Beyond
Longitudinal Study and National Educational Longitudinal Study) and examined factors
contributing to completion of a bachelor’s degree. He consistently found that curricular intensity,
especially taking advanced math courses, was related to students’ odds of degree attainment.
Within these studies, however, Adelman (2006) also found the pattern Hoffer (1997) uncovered
in which increasing requirements dilutes the effectiveness of completing additional courses.
Adelman (2006) observed that as advanced course enrollment became more common among
high school students, the positive relationship of taking higher-level courses and the likelihood of
college degree completion declined. In 1982, he found that students who completed Algebra 2
and trigonometry exhibited better degree prospects. By 1992, however, this relationship was no
longer significant. Instead, a student had to complete an even more advanced curriculum –
precalculus or calculus – to receive the college completion benefit initially observed for Algebra
2 and trigonometry students.
Building on Adelman’s (2006) work, Attewell and Domina (2008) utilized a quasi-
experimental design to account for the selection bias in previous research. Consistent with prior
research, Attewell and Domina (2008) found significant positive effects of taking a more intense
curriculum on 12th grade test scores and the probability of entry to and completion of college.
However, also aligned with previous findings, the effect sizes of curricular intensity on these
outcomes were small. The authors concluded that “policy interpretations drawn from earlier
research that identified curriculum as an important factor for improving student test scores and
college outcomes were correct but were overly optimistic” (Attewell & Domina, 2008, p. 66).
Overall, the stated goals of educational reforms that raise graduation requirements are
two-fold – the first being to increase academic standards and performance, and the second to
decrease gaps in achievement for disadvantaged student populations (Attewell & Domina, 2008;
Schiller & Muller, 2003). While the research reviewed above focused on the former goal, finding that educational reforms intended to increase standards have only modest positive effects
on student outcomes that dilute over time, the field has also examined the unintended
consequences of these policies.
Unintended Consequences: The Relationship between Graduation Requirements and
Dropout
Recent research on course graduation requirements (CGRs) in secondary education has found that they increase high school dropout rates and have an especially negative impact on
students from URM backgrounds and low-income families (Attewell & Domina, 2008; Lillard &
DeCicca, 2001; Plunk, Tate, Bierut, & Grucza, 2014).
Along with finding modest positive effects of taking a more intense curriculum on 12th grade test scores and college completion, Attewell and Domina (2008) also found significant
disparities in access to demanding courses. This gap operated primarily through socioeconomic status (SES). However, the authors noted that because racial and ethnic minorities are greatly
overrepresented at lower levels of SES, students from lower-income families in general take a less intensive curriculum, implying that URM students face a disproportionate disadvantage.
Similarly, Lillard and DeCicca (2001) found that increases in state CGRs were associated
with increases in high school dropout rates, and that the biggest predicted effect was on students
from families characterized as “poor, disrupted, black or Hispanic, with more siblings and whose
parents also tend to be dropouts themselves” (p. 470). Utilizing student-level data and a quasi-
experimental design, Plunk et al. (2014) estimated the effect of state-mandated CGRs and the
size of the requirement (e.g., two vs. six courses) on three educational attainment outcomes: (1)
dropping out of high school, (2) enrolling in college, and (3) if enrolled in college, degree
completion. Similar to Lillard and DeCicca (2001), they found that students in states with the
highest CGR (six courses) compared to students in states with no CGR were more likely to drop
out of high school, with African Americans and Latinos experiencing the greatest increases in
their probability of dropping out. Further, while exposure to the highest CGR did not affect
college enrollment, it did increase the probability of completing a college degree. Lastly, the
probability of enrolling in college for African American women and Latino/a men and women decreased among those in states with the highest CGR; however, among these students who enrolled in college, higher CGR exposure resulted in a higher likelihood that they obtained their
college degree.
K-12 policy research has shown that increasing standards may unintentionally limit
access as students are more likely to drop out; however, students who persist through the
additional standards are granted more access in terms of obtaining a college degree. In my study,
I specifically parse out the access mission by exploring the relationship between increasing
graduation requirements in the community college setting and developmental math student
degree completion.
California’s Graduation Requirement Increase for Math and English
Assisting underprepared students to attain the foundational skills in reading, writing, and
math necessary to succeed in college-level work has been a core function of community colleges
throughout their history (Cohen & Brawer, 2008; Illowsky, 2008). However, efforts to address
the needs of underprepared students have been lacking. Of developmental education students
who enrolled in a basic skills class in the 2001-02 academic year, only 29 percent earned an
associate’s degree or certificate or transferred to a baccalaureate-granting institution (Illowsky,
2008). Though a handful of innovative institutional-level programs have proven successful, they
tend to be small in scale and relatively transient because they are funded through short-term
grants (Illowsky, 2008). The California Community Colleges Chancellor’s Office (CCCCO),
therefore, implemented a statewide effort to address the ongoing needs of CCCs’ underprepared
student population called the California Basic Skills Initiative (BSI).
Three key events led to the adoption of the BSI. First, the CCC Board of Governors
adopted a strategic plan in 2006 to improve student access and success. As part of this
comprehensive strategy, the BSI was formed to address credit and noncredit basic skills
(developmental math and English) and English as a Second Language (ESL), as well as adult
education, and other programs designed to assist underprepared students. The BSI had a two-
pronged approach. One prong centered on supplemental funding to specifically address basic
skills needs. The second prong was designed to train community college faculty and staff in
providing instruction and services for basic skills and ESL students (Academic Senate for
California Community Colleges (ASCCC), 2009).
Second, the CCCCO embraced the BSI and provided funding for the initiative. Since
2007, each CCC has received funding above its general apportionment to improve the success
rates of students in developmental education. The statewide total allocation was approximately
$33 million in 2007-08 and 2008-09, and each college received a minimum of $100,000. The
statewide budget was reduced significantly to $19 million in 2009-10 due to the Great Recession
and resulting fiscal challenges. Though all CCCs continued to receive BSI funds, since 2009-10,
only around 40 percent of the colleges received the $90,000 minimum (CCCCO, 2013).[20] Along with additional funding specific to the BSI, Senate Bill 361 (SB 361) was passed, which provided an enhanced allocation rate for non-credit career-development and college-preparation courses (California State Legislature, 2005).

20. For an in-depth report on BSI expenditures, see CCCCO’s Basic Skills report (2013).

Third, the Board of Governors unanimously approved raising the statewide minimum English and math graduation requirements for an associate’s degree. This third event is the focus of my analysis. Developmental education in community colleges is traditionally delivered as a sequence of one to four stand-alone, semester-long courses (Bailey, 2009; Melguizo, Kosiewicz, Prather, & Bos, 2014; Scaling Innovation, 2012), focused almost exclusively on math and English (Bailey, 2009; Zachry, 2008). Typically, students assigned to developmental education must successively pass each assigned course in the sequence before they can enroll in college-
level (or degree-applicable) courses in those subjects. In most cases within the CCCs, the English
sequence consists of three developmental levels, each course serving as a pre-requisite to attempt the subsequent course: College English Skills, Basic Writing, and Introduction to College Composition, which leads to college-level Freshman Composition (see Figure 3.1a). Though the names of these courses may vary across
colleges, the levels are similar. For English, the regulation increased the requirement for an
associate’s degree by one level, from Introduction to College Composition to Freshman
Composition. Students are now required to successfully complete Freshman Composition either
by taking an assessment test and placing in a level above this course or by passing the course
itself. While the new English requirement for an associate’s degree matches the level of transfer
to a four-year institution, this is not the case for the math requirement.
Figure 3.1a. English writing sequence: Freshman Composition (transfer-level, 0), Introduction to College Composition (-1), Basic Writing (-2), College English Skills (-3)

Typically the developmental math sequence comprises four levels: arithmetic (AR), pre-algebra (PA), elementary algebra (EA), and intermediate algebra (IA). Similar to English, the math requirement increased by one level, requiring students to pass intermediate algebra (IA), an increase from the past requirement of elementary algebra (EA). However, unlike the new English
requirement, IA is still one level below transfer-level (see Figure 3.1b). Therefore, while the new English requirement for an associate’s degree is at transfer-level, the new math requirement remains below transfer-level.[21] Regardless, given the
structure of the math and English sequences, this regulation affects students placed into
developmental education, which is the majority of the CCC student population.
Figure 3.1b. Math sequence: Transfer-Level (0), Intermediate Algebra (college-level, -1), Elementary Algebra (-2), Pre-Algebra (-3), Arithmetic (-4)
Methodology
Data and Analysis Sample
I chose a single California community college, College D, for several reasons. First, it is
important that students were placed along the same criteria before and after the IA math
requirement was implemented. College D utilized the same assessment test and the same cut-
scores throughout the time period of my study. Second, the pattern of student assessments and
Footnote 21: Based on an informal conversation with Ian Walton, then president of the Academic Senate leading the proposal of this requirement, more resistance was received from math faculty, who believed it challenged the open-access mission of CCCs, especially since a larger proportion of students are placed into developmental math than English, and at lower levels of the math sequence than at the comparable levels of the English trajectory.
enrollments also remained stable, where the majority of students took their assessment test in
their first enrolled semester at the college. Lastly, College D is located in the greater Los Angeles area and serves an ethnically and socioeconomically diverse student population; it is also part of the largest urban community college district in California. This is important
when examining heterogeneous relationships with the IA math requirement.
Student-level data were obtained from College D’s Office of Institutional Research. Data
were gathered on all first-time community college students who were assessed and enrolled in
math between the fall 2005 through fall 2009 semesters. I therefore have data on nine cohorts of
students, and have information on students before and after the math requirement increase was
mandated. I gathered data on these students’ assessment and placement into developmental math
as well as outcome data on their degree attainment for up to four years (see Table 3.1).
Table 3.1.
Student Cohort by Semester with the Last Semester of its Outcome Data
Cohort (Semester of Assessment) Last Semester of Outcome Data
Fall 2005 Spring 2009
Spring 2006 Fall 2009
Fall 2006 Spring 2010
Spring 2007 Fall 2010
Fall 2007 Spring 2011
Spring 2008 Fall 2011
Fall 2008 Spring 2012
Spring 2009 Fall 2012
Fall 2009 Spring 2013
The final analytic sample includes 12,806 students assessed and enrolled in College D. Though
College D offers assessments in the summer term, these students are removed from the sample
because students who choose to take their assessment test in the summer may reflect other
underlying motivational factors and/or cultural capital; for example, students who recently graduated high school may take their test while the material is still salient.
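The cohort construction and sample restriction described above can be sketched as follows. This is a minimal illustration only, with hypothetical column names and toy records rather than the study's actual processing code; the four-year outcome windows mirror Table 3.1.

```python
import pandas as pd

# Hypothetical raw assessment records (the real data came from
# College D's Office of Institutional Research).
records = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "assess_term": ["Fall 2005", "Summer 2006", "Spring 2007", "Fall 2009"],
})

# Drop summer assessments: summer test-takers may differ systematically
# in motivation and recency of high school material.
sample = records[~records["assess_term"].str.startswith("Summer")].copy()

# Each cohort is followed for four years of outcome data: a Fall YYYY
# cohort runs through Spring YYYY+4, a Spring YYYY cohort through
# Fall YYYY+3 (cf. Table 3.1).
def last_outcome_term(term: str) -> str:
    season, year = term.split()
    return f"Spring {int(year) + 4}" if season == "Fall" else f"Fall {int(year) + 3}"

sample["last_outcome_term"] = sample["assess_term"].map(last_outcome_term)
print(sample)
```

Applied to the real cohorts, this rule reproduces the windows in Table 3.1 (e.g., fall 2005 assessment, outcomes through spring 2009).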
On average, College D assesses 1,996 students in the fall semester and 560 students in the spring. Figure 3.2 illustrates the sample size distribution across the pre-policy and policy implementation time frames; College D assesses a greater number of students in the fall semester than in the spring.
Table 3.2 presents the summary statistics for students who were assessed in math
between fall 2005 and fall 2009, split by pre-policy (fall 2005–spring 2009) and policy (fall
2009) cohorts. These descriptive statistics show that differences in student demographics and
math placements exist between the two time periods. The policy cohort has a smaller proportion
of female students and students of color, but a larger proportion of White students. Math
placements are statistically different between the two policy time periods as well. The policy
cohort also has a significantly larger proportion of students who were placed into IA and smaller
proportion of students placing into PA. Recognizing that the implementation of the new math
requirement was around an economically tumultuous time for public higher education, the shift
in math placements may be a result of students displaced by the California State University
(CSU) and University of California (UC) systems.
Analytical Strategy
My analysis examines the extent to which increasing the math requirement related to
changes on associate’s degree attainment within four years for students who were assessed and
placed into developmental math between the fall 2005 and fall 2009 semesters. The following
empirical model was specified for each outcome:
Prob(y = 1 | x) = β0 + β1x + β2v + β3z + ε
Where,
y = developmental student degree attainment
x = IA math requirement
v = vector of individual covariates such as student demographics (e.g., sex and race) and
incoming math ability (math placement)
z = semester-cohort fixed effects included to control for cohort-specific time trends
ε = error term
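As an illustrative sketch only, the continuous-outcome version of this model could be estimated with OLS on a student-level data set. All variable names and the data below are hypothetical and synthetic, not the study's actual code or estimates; one pre-policy cohort dummy is dropped because it is perfectly collinear with the policy indicator, as the note to Table 3.3 records for the spring 2009 cohort.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
cohort_labels = ["F05", "S06", "F06", "S07", "F07", "S08", "F08", "S09", "F09"]
df = pd.DataFrame({
    "cohort": rng.choice(cohort_labels, size=n),
    "female": rng.integers(0, 2, size=n),
    "placement": rng.choice(["IA", "EA", "PA", "AR"], size=n),
})
df["ia_requirement"] = (df["cohort"] == "F09").astype(int)  # x: policy cohort
# Synthetic outcome: degree-applicable units earned within four years
df["units_earned"] = (20 + 4 * df["female"]
                      - 10 * (df["placement"] != "IA")
                      + rng.normal(0, 8, size=n)).clip(lower=0)

# Semester-cohort fixed effects (z): dummies for pre-policy cohorts,
# with fall 2009 as the reference; the spring 2009 dummy is dropped to
# avoid perfect collinearity with the policy indicator.
for c in ["F05", "S06", "F06", "S07", "F07", "S08", "F08"]:
    df[f"c_{c}"] = (df["cohort"] == c).astype(int)
fe = " + ".join(c for c in df.columns if c.startswith("c_"))

model = smf.ols(f"units_earned ~ ia_requirement + female + C(placement) + {fe}",
                data=df).fit()
print(model.params["ia_requirement"])
```

For the two dichotomous outcomes, `smf.logit` with the same right-hand side would be the analogous specification, with exponentiated coefficients reported as odds ratios.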
Table 3.2.
Description of Analysis Sample

                        Fall 2005-Spring 2009    Fall 2009
Student Demographics
Female                  0.52***                  0.48
Asian                   0.10***                  0.07
Black                   0.06***                  0.05
Latino                  0.49***                  0.42
White                   0.24***                  0.30
Other/Unknown           0.06***                  0.03
Math Placement
Transfer-level          0.26                     0.28
IA                      0.11***                  0.14
EA                      0.29                     0.28
PA                      0.11***                  0.08
AR                      0.22                     0.23
Number of observations  11,501                   1,305
*p<.05, **p<.01, ***p<.001
I also examined interactions between race/ethnicity and the IA requirement to investigate
whether specific student sub-groups exhibited disproportionately lower success after the
requirement increased.
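Such policy-by-race/ethnicity interactions can be expressed with a `*` term in a regression formula. The sketch below is hedged and illustrative, using synthetic data and hypothetical names, not the study's data; the `*` operator expands to the main effects plus their interactions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "units": rng.normal(25, 10, n).clip(min=0),
    "policy": rng.integers(0, 2, n),
    "ethnicity": rng.choice(["Asian", "Black", "Latino", "White", "Other"], n),
})

# `policy * C(ethnicity)` yields policy, the ethnicity dummies, and their
# interactions; the interaction coefficients test whether the policy
# relates differently to outcomes across racial/ethnic groups.
m = smf.ols("units ~ policy * C(ethnicity)", data=df).fit()
interactions = m.params[[p for p in m.params.index if ":" in p]]
print(interactions)
```

With five ethnicity categories, four interaction coefficients are estimated (one category serves as the reference).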
The three academic outcomes in my study are intended to measure associate’s degree
attainment, since it specifically aligns with the policy.[22]
First, I generated a model where the
academic outcome of interest is the total number of degree-applicable units students earn within
four years of their initial assessment and enrollment. Degree-applicable units earned is a
continuous short-term outcome variable that measures the extent to which students are progressing toward degree attainment. The second and third outcomes measured by this model
predict the probability students pass IA and the probability students obtain 60 degree-applicable
units. The Title V regulation specifically increased the new math requirement for an associate’s
degree from passing EA to passing IA; I thus include passing IA as a dichotomous outcome
variable. I included the last dichotomous outcome variable, accumulation of at least 60 degree-
applicable units, because it is the minimum requirement of units for an associate’s degree.
All outcomes were restricted to within four years of students' initial assessment and enrollment.
Limitations
There are two main limitations of this study, limitations arising from the data, and
limitations arising from the policy environment at the time. The objective of this study is to
estimate the extent to which this policy related to developmental math student degree attainment.
Footnote 22: Another policy that may have affected degree outcomes for CCC students was the Student Transfer Achievement
Reform Act (Senate Bill 1440). This bill was signed into law in 2010 and first implemented in the 2011-12 academic
year, and enabled the CCCs and the California State University (CSU) system to collaborate on the creation of
Associate Degree transfer programs. Upon completion of this Associate Degree for Transfer (AA/S-T), community
college students are guaranteed admission into the CSU system. While this bill may have confounded the results of
the impact of the math requirement change on degree completion outcomes for developmental math students, data
reveals that the number of Associate’s degrees awarded over the time frame of my study remained stable and the
AA/S-Ts represent less than eight percent of these degrees. See Appendix P3-A for a more in-depth summary of SB
1440 as well as a description of other policies occurring in and around the time frame of this study.
However, the data carry three limitations: (1) I did not have access to program award data, (2) the data only allowed me to collect student outcomes for up to four years after assessment, and, relatedly, (3) I could only include one policy cohort.
First, because I did not have access to program award data, I was unable to directly
measure whether a student received his or her associate’s degree which would have been the
most accurate measurement of degree attainment. Instead, I operationalized degree attainment
via three different academic outcomes within four years: degree-applicable units earned, passing
IA, and accumulation of at least 60 degree-applicable units. Second, I restricted my time frame
for each cohort to four years given the years of outcome data I was able to gather. However, four
years is a relatively short time-frame allotted for developmental math students to attain longer-
term outcomes, especially for students placing into lower-levels of the developmental math
sequence. In fact, Melguizo, Hagedorn, and Cypers (2008) found that students with deep
developmental needs averaged five years at a community college prior to transferring. Since their
sample included students that did indeed transfer to a baccalaureate-granting institution,
developmental math students who do not intend on transferring may take even longer. Third, my
study splits the College D sample into two groups: pre-policy cohort and policy cohort.
However, given the recency of the policy implementation and being able to obtain at least four
years of outcome data, I was only able to gather data on one policy cohort (fall 2009). Having a
larger sample of cohorts that were impacted by the IA math requirement would more accurately
assess how this new regulation affected developmental math student outcomes. This would also
allow for more advanced statistical models such as interrupted time-series regression that may be
better equipped to isolate the treatment effect of this type of regulation change. Future research
should include program award data and a larger time-frame to capture true student outcomes and
provide more policy cohorts to accurately assess the extent to which the IA math requirement
related to changes in developmental math student degree attainment. Further explanation is
provided in the concluding sections.
Another limitation of my study was the time period in which the Title V requirement was
implemented. During this time frame, a number of other statewide policies meant to improve developmental student outcomes and associate's degree attainment were also being implemented. For example, in 2006, as part of Senate Bill 361, which established the new
community college resource allocation system, a higher rate per full-time equivalent student for
enhanced noncredit courses (career-development/college prep courses or CDCP) was
established. The majority of these courses are provided as basic skills education; the additional funding may have incentivized colleges to expand their noncredit basic skills offerings, adding diversity to how developmental education is delivered. Another example is Senate Bill 1440, titled the Student
Transfer Achievement Reform Act, which was signed into law in 2010. The CCCs and CSU
system, together, created the Associate’s Degree for Transfer, which upon completion,
guaranteed community college students admission into the CSU system. Therefore, results from
my model may be confounded by other policies within the study’s time period. While I describe
these policies and explain away some of the threats to internal validity, I was unable to
statistically isolate the treatment of the IA math requirement from other policies or secular trends
during this time frame (see Appendix P3-A for further details).
Lastly, because the implementation of the new math requirement coincided with the
Great Recession, public higher education systems were working under major budgetary
constraints. It is probable that the Great Recession, then, also confounded the results of my study
because students who would have been accepted into the CSU and UC systems prior to the
economic crisis were turned away, and instead began their higher education journey at the
community college level. This, in turn, may have superficially increased the average academic
preparation of its student population. The relationship between the Great Recession and student
academic preparation is discussed in greater-depth in the Discussion section.
Results
Results from my three models predicting degree attainment indicate that implementation
of the Title V regulation did not decrease success or the likelihood of success for developmental
math students. Not only was the new requirement not negatively related to student outcomes, but
it also did not further exacerbate the existing racial inequalities in developmental math student
outcomes. These findings suggest that increasing the math standard for an associate’s degree did
not further decrease access to that degree. Interpretation should bear in mind the limitations of
the data and the inconsistency of these findings with the existing literature, which I address in a
subsequent section.
As illustrated in Table 3.3, my analyses revealed that the IA requirement had no
statistical relationship with the number of degree-applicable units earned, or the likelihood of
passing IA or earning at least 60 degree-applicable units. As noted in my Limitations section,
caution should be taken in interpreting the models for passing IA as well as accumulating at least
60 degree-applicable units given the short time-frame of four years allotted for developmental
math students to attain these longer-term outcomes. For example, few if any students who placed
into lower-levels of the math sequence passed IA or earned 60 degree-applicable units (see Table
B1 in Appendix P3-B for distribution of outcomes by placement level). Therefore, the
significance of the Title V requirement increase on passing IA and earning 60 degree-applicable
units was mainly driven by students who were directly placed into higher levels of math.
Table 3.3.
Regression Results of Three Models: Number of Degree-applicable Units Earned, Odds of
Passing IA, and Odds of Earning 60 Degree-applicable Units
Degree-applicable
units earned
Passing IA
Earning at least 60
degree-applicable units
IA requirement 7.75 2.43 3.45
(5.88) (1.75) (2.23)
Female 3.66*** 1.17 1.42***
(0.52) (0.16) (0.10)
Years since H.S. graduation 0.00* 1.00 1.00
(0.00) (0.01) (0.00)
Citizenship -0.13 1.04 0.98
(0.10) (0.05) (0.02)
Age 0.09* 1.01 1.01
(0.05) (0.01) (0.01)
Ethnicity (White=Comparison Group)
Asian -5.63*** 1.22 0.62***
(1.06) (0.42) (0.08)
Black -9.66*** 0.49 0.43***
(1.18) (0.24) (0.09)
Latino -5.22*** 1.14 0.64***
(0.70) (0.24) (0.06)
Other -1.52 1.33 0.91
(1.04) (0.40) (0.12)
Assessment Semester (Fall 2009=Comparison Group)
Fall 2005 8.68*** 0.01*** 2.30**
(1.41) (0.01) (0.59)
Spring 2006 6.19*** -- 1.63
(1.67) (0.49)
Fall 2006 7.97*** 0.01*** 2.11**
(1.40) (0.01) (0.54)
Spring 2007 4.26* -- 1.81*
(1.72) (0.55)
Fall 2007 9.06*** -- 2.48***
(1.41) (0.64)
Spring 2008 1.62 -- 1.46
(1.77) (0.47)
Fall 2008 3.40* 1.87 1.60
(1.49) (0.44) (0.43)
Placement Level (IA=Comparison Group)
EA -13.90*** 2.71*** 0.40***
(0.79) (0.59) (0.03)
PA -22.42*** 0.68 0.14***
(0.99) (0.23) (0.02)
AR -19.37*** 0.97 0.21***
(0.85) (0.25) (0.02)
Policy by Ethnicity Interaction (Pre-Policy=Comparison Group)
IA Requirement*Asian 6.58 0.84 1.47
(6.67) (0.71) (0.99)
IA Requirement*Black 3.85 2.08 0.40
(6.73) (1.93) (0.39)
IA Requirement*Latino -2.44 0.69 0.51
(5.89) (0.50) (0.32)
IA Requirement*White -4.79 0.87 0.50
(5.96) (0.64) (0.32)
R-squared (Pseudo) 0.11 0.27 0.08
Total Observations (N) 9,098 6,012 9,161
*p<.05, **p<.01, ***p<.001
Standard errors in parentheses.
Notes.
Odds ratios are reported for the Passing IA and Earning 60 degree-applicable units logistic models
Across models, Spring 2009 assessment cohort was omitted due to collinearity
Across models, IA Requirement*Other/Unknown ethnicity was omitted due to collinearity
Empty fields appear in the passing IA model where the indicator predicted failure perfectly and was dropped
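Because the two dichotomous outcomes in Table 3.3 are reported as odds ratios, a logistic coefficient on the log-odds scale converts by exponentiation. The small computation below is illustrative only; it uses the reported 1.42 female odds ratio from the 60-unit model, and the 0.20 baseline probability is an assumption for illustration, not a figure from the study.

```python
import math

or_female = 1.42                # reported odds ratio (60-unit model)
beta = math.log(or_female)      # corresponding log-odds coefficient

# Implied probability shift at an assumed baseline probability of 0.20
base_p = 0.20
new_odds = (base_p / (1 - base_p)) * or_female
new_p = new_odds / (1 + new_odds)
print(round(beta, 3), round(new_p, 3))  # prints 0.351 0.262
```

That is, a 1.42 odds ratio at a 20 percent baseline corresponds to roughly a 26 percent predicted probability, a reminder that odds ratios overstate probability differences when read directly as probabilities.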
Earning Degree-Applicable Units
Holding all else constant, results reveal that students in the policy cohort did not earn fewer units than students assessed and enrolled at College D before the IA math requirement. Disaggregating
the cohorts, it appears that each of the pre-policy cohorts earned more degree-applicable units
within four years of assessment. The only exception is that of the spring 2008 cohort since it was
not statistically significant. Students placing into lower levels of the developmental math sequence (e.g., EA, PA, AR) earned significantly fewer degree-applicable units compared to
students who placed into IA. This is not surprising given more degree-applicable courses are
available for students who meet certain math pre-requisites.
Examining student characteristics, female students earn more degree-applicable units
than male students. Lastly, Asian, Black, and Latino students obtained significantly fewer degree-applicable units within four years of assessment compared to White students.
However, examining the interaction between the Title V requirement increase and students’
race/ethnicity, the policy did not have a disproportionate relationship with any specific
racial/ethnic group (see Table 3.3).
Obtaining 60 Degree-Applicable Units
Results from the model estimating the probability of students earning at least 60 degree-
applicable units mirrored that of the first model observing the number of degree-applicable units
earned; meaning, where students earned more units they were also more likely to obtain at least
60 units. The IA math requirement did not significantly increase nor decrease the odds students
earn at least 60 degree-applicable units. Female students and students in each pre-policy cohort
were more likely to earn 60 degree-applicable units compared to male students and the fall 2009
cohort, respectively. Students of color were less likely to earn these units compared to White
students; however, the IA requirement did not exacerbate these racial disparities. Lastly,
compared to IA-placed students, students at lower-levels of math exhibited lower odds of earning
at least 60 degree-applicable units.
Passing Intermediate Algebra
The model predicting the probability students pass the new math requirement, IA, did not
result in very many statistically significant relationships. Like the prior two models, the IA math
requirement did not significantly relate to students' higher or lower odds of passing IA. However,
results from this model did not reveal any significant relationships for student ethnicity, and
similar to previous models, the interaction between racial/ethnic group and the policy was not
statistically significant. Math placement was also not statistically significant, with the exception
that students placing into EA had higher odds of passing IA compared to students who were
directly placed into IA.
Earning Degree-Applicable Units: Disaggregated by Math Placement Level
Because my dataset only included four years of outcome data for each cohort, it was limited in its ability to accurately capture longer-term outcomes; thus, the number of degree-applicable units earned is my preferred model for disaggregation by math placement level.
Though my shorter-term outcome is likely less predictive of students’ degree attainment
compared to the longer-term outcomes (passing IA and earning at least 60 degree-applicable
units), measuring the number of degree-applicable units students earned over four years
encompasses how students are progressing toward degree attainment. Results reveal that across
AR, PA, EA, and transfer-level placements, no statistical relationship was found between the
higher math requirement and number of degree-applicable units earned over four years. The new
IA math requirement exhibited a significant relationship only for students who placed directly into IA: these students at College D earned approximately 39 more degree-applicable units after the policy change (see Table 3.4).[23]
These findings suggest that the IA math requirement only
increased the number of degree-applicable units earned for students placing into IA, while
students placing into other levels of math were unaffected. Based on the descriptive statistics of
my analysis sample, College D experienced an increase in IA placements from the pre-policy
cohort to the policy cohort (see Table 3.2). Therefore, my results may reflect not the IA math requirement so much as the fact that a more academically-prepared student population
Footnote 23: Coefficients of the entire models are available in Appendix P3-B (Table C2).
entered College D in fall 2009. This explanation is described further in the Discussion
section.
Table 3.4.
Treatment coefficient of the IA math requirement on degree-applicable units earned for
students across developmental math placements
AR PA EA IA Transfer-Level
IA Requirement -4.39 -8.70 0.72 39.04** -2.89
(11.24) (10.02) (10.72) (14.95) (7.29)
Adj. R-squared 0.08 0.04 0.03 0.04 0.03
Total Observations (N) 2,714 1,279 3,665 1,440 3,324
*p<.05, **p<.01, ***p<.001
Standard errors in parentheses.
Discussion
The main objective of this study was to estimate the extent to which this policy related to
developmental math student degree attainment to better understand the extent to which the
increased math standard affected the access mission. As students are faced with higher standards,
I examined whether the change in graduation requirements resulted in a change in the number of
degree-applicable units students earned, the probability students pass the new math requirement,
and the probability students complete the minimum number of credits required for an associate’s
degree within four years of their initial enrollment. In this study, I also set out to investigate the
equity concern and estimate the extent to which this additional requirement had any
disproportionate relationship on URM student degree attainment.
Title V Requirement did not Change Developmental Math Student Degree Attainment
Controlling for gender, ethnicity, age, citizenship, developmental math placement level,
assessment semester, and number of years since high school, I found that students in College D
before and after the policy change did not differ in the number of degree-applicable units earned, the probability of passing IA, or the likelihood of earning at least 60 degree-applicable units.
Further, I found that Asian, Black, and Latino students obtained significantly fewer degree-applicable units and exhibited lower odds of earning at least 60 degree-applicable units
within four years of assessment compared to White students; however, my analyses did not
reveal that the IA requirement exacerbated these existing racial disparities. These findings are
inconsistent with K-12 education literature examining the relationship between higher standards
and student outcomes. This research has shown that increasing course graduation requirements
(CGRs) is associated with increases in high school dropout rates and has an especially negative impact on students from URM backgrounds and low-income families (Attewell & Domina, 2008; Lillard & DeCicca, 2001; Plunk, Tate, Bierut, & Grucza, 2014).
While my findings were inconsistent with prior K-12 literature on the relationship
between educational standards and student outcomes, it is important to note the data limitations
of my study. One explanation of my lack of significant findings is that the IA math requirement
would have a more delayed effect on students placing into lower-levels – AR and PA – of the
math trajectory. Therefore, while I did not find a significant relationship between the increased
standard on College D developmental math student degree attainment, there may be a delayed
effect on these students based on placement that is outside the time frame of my study. Similarly,
a higher proportion of URM students placed into lower-levels of the math trajectory. Therefore,
these students may exhibit disproportionately lower success rates when more years of outcome
data are included.
Students Placed into IA Earned More Degree-Applicable Units after Requirement Change
In disaggregating the model estimating the number of degree-applicable units earned by
math placement level, I found that the IA placement model was the only one that exhibited a
significant relationship. The non-significant findings for lower-level math placements, again,
may be symptomatic of not providing a long enough time frame for some of these outcomes to
take effect. Examining students from the policy cohort who placed into IA, I found that these
students earned more degree-applicable units compared to IA-placed students from the pre-
policy cohort. Taking these two findings together, it is possible that my results are driven by
changes in math placements, and specifically a larger proportion of placements into IA. Because
College D did not make any changes to its assessment and placement criteria, increases in math placements may instead be driven by a more academically-prepared student
population. It is important to note, however, that without high school transcript data I am
unfortunately unable to test this assumption.
Explanations for the Changes in Math Placements
The analysis sample of College D represents a more academically-prepared student
population given the differences in math placements between the pre-policy and policy periods.
Between the two time periods, more students placed into IA and fewer students placed into PA at College D. It thus appears that my results may be inflated by a more academically-prepared student population at College D after the IA requirement was implemented.
The assessment and placement criteria at College D did not change over the time frame
of my study. Therefore, while students being assessed and enrolling in College D from fall 2005
to fall 2009 were placed according to the same criteria, College D experienced an increase in IA math placements when the policy was implemented. There are many potential explanations for these placement differences; given the external economic environment of the time, I discuss indirect effects of the Great Recession that may have contributed to the variability in math placements and the lack of relationship between the policy change and degree attainment.
Economic environment: The Great Recession. The variability in math placements may
be attributable to the economic downturn that aligned with the timeframe of my study. This
economic decline is referred to as the Great Recession which officially lasted from December
2007 to June 2009. This decline was set off by the bursting of an eight-trillion-dollar housing bubble, which led to cutbacks in consumer spending and business investment and resulted in massive job loss (Economic Policy Institute, n.d.). Less money in circulation meant large cuts in resources for public education. According to informal conversations with administrators in the CCC system, the decrease in federal, state, and local funding during the
Great Recession resulted in cutbacks in the number of course sections offered, hiring freezes,
larger class sizes and so forth. It is safe to assume that the University of California (UC) and
California State University (CSU) systems responded similarly. In addition, these latter colleges
also admitted fewer students given their decreased resources.
Though resources were declining, the demand for a college education did not. Given the
postsecondary education hierarchy established by the 1960 California Master Plan for Higher
Education,[24]
as the UCs admitted a smaller pool of students, there was a ripple effect on student
enrollment in the CSUs and CCCs. The UCs rejected more students, which created an academically-stronger student population enrolling in the CSU system; the CSUs then had to turn away CSU-qualified students, and thus the CCCs experienced an influx of more
Footnote 24: The 1960 California Master Plan for Higher Education establishes a hierarchically segmented postsecondary
system (Brint & Karabel, 1989). The formation of the Master Plan was directed by the impending arrival of the
postwar baby-boom generation and the inability for the existing college campuses to accommodate the growing
numbers of entering students. Therefore, the Master Plan limited the access to the state colleges and universities by
increasing the academic requirements and restricting admission to a smaller percentage of “top” students. The
community colleges remained the tier which would provide “open access” to all graduating high school students,
and maintained its role as a “shock absorber.”
academically-prepared students being assessed and placed. Figure 3.7 plots the acceptance rates
of first-time freshmen who were residents of California.[25]
Though the assessment and placement criteria did not change, College D’s average
academic preparation of assessed students did change; data demonstrated an influx of placements
at higher levels in the policy period. Given the explanations outlined above, it is not unreasonable to suspect that the higher placements were not driven by the policy, but rather that my analysis sample for College D reflected a more academically-prepared student population. Therefore, while my models revealed no significant relationship between the math requirement increase and degree-applicable units earned, the odds of passing IA, or the odds of obtaining at least 60 degree-applicable units, these results may be shaped by the shift toward higher placements. More research is needed to
identify the impact of higher standards that are implemented in community colleges on
developmental student outcomes.
Footnote 25: See Appendix P3-C for numbers disaggregated by campus.
Conclusions and Future Research
Protecting both the open-access and standards missions of community colleges has been
an underlying struggle throughout community college history. The Title V regulation which
increased the math requirement for an associate’s degree provides a direct link to community
colleges’ balancing act between these two missions. While this regulation increases the standards
of an associate’s degree, it may also result in unintended consequences to community colleges’
defining characteristic of being open-access institutions. This increased requirement specifically
affects students placed into developmental education, the majority of whom are from URM
backgrounds. Therefore, there is also an underlying equity component to this policy and its
potentially disproportionate impact on students of color.
This study examined the potential unintended consequences of this regulation by
specifically examining whether developmental math students who are faced with higher
standards are less likely to earn their associate’s degree. Results revealed no significant
relationship between the requirement increase and developmental math student degree
attainment. Further, there was not a disproportionate relationship based on students’
race/ethnicity. These findings, however, are only suggestive given the data limitations of my
methodological design. My findings should also be taken with caution given the fact that the
policy was implemented during a very volatile time in the economy, which had ripple effects
throughout California’s public postsecondary education systems.
Investigating statewide policies through both the lenses of the intended consequences as
well as the unintended consequences is essential in understanding the balance between
community colleges’ goals of upholding standards and being open-access institutions. However,
in order to meet the assumptions necessary to evaluate these policies, studies are needed that
include a larger sample and variety of colleges as well as a longer time period to measure
outcomes to understand real and lasting changes. In a decentralized system like California's, where each college serves a diverse set of students and implements various institutional policies, a larger sample of colleges would allow a hierarchical linear design suited to controlling for institutional variability. Moreover, studies evaluating policies should also include pre-
and post-time periods that consist of enough time points to accurately assess how colleges are
performing prior to policy implementation, and measure real changes in outcomes after the
policy is implemented; providing a long enough time frame would avoid the risk of measuring a
superficial inflation or deflation brought on by the policy. A comparative interrupted time-series
or difference-in-differences model, if properly specified, could causally identify the effect of the
policy change by controlling for observable and unobservable factors (e.g., other statewide
policies), and secular trends (e.g., changes in incoming student academic preparation) that
correlate with the policy implementation and academic achievement.
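The difference-in-differences logic described above can be sketched with a minimal example. The group means below are purely illustrative and invented for this sketch; they are not results from this dissertation or any college's data:

```python
# Minimal sketch of a 2x2 difference-in-differences estimator.
# All numbers are hypothetical degree-completion rates, invented
# for illustration; they are not findings from this study.

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Return the DiD estimate: the treated group's pre-to-post change
    net of the change experienced by the comparison group."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical completion rates before and after a policy change.
estimate = diff_in_diff(
    treat_pre=0.30, treat_post=0.33,     # colleges subject to the new requirement
    control_pre=0.31, control_post=0.32  # comparison colleges
)
print(round(estimate, 3))  # 0.02, i.e., a two-percentage-point net change
```

The key identifying assumption, that treated and comparison colleges would have followed the same trajectory absent the policy, is exactly what a longer pre-period with multiple time points helps a researcher to assess.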
Future research should advance understanding of how increasing standards affects access
within community colleges and among the developmental student population, specifically
examining how the Title V requirement increase affected developmental math students’
likelihood of graduating. Evaluating policies that affect
this growing student population through multiple lenses is essential, because the success of these
students is at the core of the community college mission.
Chapter Five
Conclusions and Policy Implications
Nationally, one of the largest barriers to degree completion for community college
students is developmental education (Hawley & Harris, 2005–2006; Horn & Nevill, 2006; Hoyt,
1999; Kelly, 2014). A substantial and increasing number of students are entering the community
college system underprepared for the college-level curriculum (ACT, 2014; Horn & Nevill,
2006; NCPPHE & SREB, 2010). This is especially true in our nation’s largest higher education
system, the California community colleges, where approximately 85 percent of students need
remediation and over half of developmental education students are from underrepresented
racial minority backgrounds (CCCCO, 2011). The purpose of my dissertation is to better
understand developmental education in California community colleges. My dissertation makes a
significant contribution to the existing literature on developmental education by describing how
its complexities relate to student outcomes and by disaggregating those outcomes by whom it
helps, whom it hinders, and how colleges may serve those it hinders.
Overall, my research is consistent with previous studies that conclude that well-prepared
students will do well regardless of institution (Calcagno et al., 2008). I found that student-level
characteristics, like level of entering preparedness, account for the largest proportion of
explained variance in developmental student outcomes. Also illustrated in my dissertation,
however, is that within the nuances of providing developmental education, from assessment to
enrollment in remedial math, there are a multitude of decisions students make that directly relate
to their placement and success. Thus, the overall recommendation emerging from my dissertation
is not one of institutional acquiescence or stagnation but one of how best to equip students to
make well-informed decisions and succeed in pursuing their educational goals.
Stated differently, my dissertation points to the intersection of student agency within institutional
policies. In each of my three studies, I highlight different aspects of developmental education and
identify areas for improvement. Here, in my concluding chapter, I first review the important
findings from the three papers individually and describe the main lesson emerging from the three
papers collectively. I conclude this chapter and dissertation by discussing recommendations for
practitioners and policymakers.
Higher Math Placements Relate to Higher Probability of Success Regardless of Graduation
Standards
My third paper examined what happens to developmental student degree attainment when
statewide mandates increase graduation standards. Analyses revealed that the implementation of
a higher math requirement neither increased nor decreased the number of degree-applicable units
students earned, the probability students passed IA, or the students’ odds of earning at least 60
degree-applicable units. Further, compared to White students, Asian, Black, and Latino students
earned fewer degree-applicable units and were less likely to earn at least 60 degree-applicable
units; however, the increased standard did not amplify these racial disparities. Students placing
into lower levels of the developmental math sequence (e.g., EA, PA, AR) earned significantly
fewer degree-applicable units than students who placed into IA. Therefore, students who are
seemingly more academically prepared per their higher placements along the math trajectory
earn more degree-applicable units and exhibit higher odds of earning at least 60 degree-
applicable units. Taken together, increasing standards will not decrease access for academically
prepared students.
These findings mirror what Calcagno et al. (2008) concluded, but also go further: well-prepared
students will do well regardless of institution and regardless of new
state requirements. Further, I found that higher statewide graduation standards did not improve
nor exacerbate the existing racial disparities in developmental math student success. Thus, state-
level policies, especially within a decentralized governing system like the California community
colleges, will not be the main catalyst in improving (or in this case limiting) developmental
student success.
In my first paper, which modeled a variety of student-, institutional-, and developmental
math program-level factors to examine the extent to which these differently-leveled variables
related to the odds students progressed through their developmental math trajectory, I found that
institutionally-controlled factors like class size related to developmental math student success.
However, my models showed that student-level characteristics explained the largest proportion
of variance of students’ successful progression. According to my first and third papers, then,
student-level characteristics are more predictive of developmental math student success
compared to state-, institutional-, and developmental math-level factors. Moreover, my first
paper found that students who were more academically prepared, as measured by their
assessment test score, were more likely to pass each level of developmental math. However,
because I measured student progression as the two-step process of attempting a course and
passing a course, I found that test score had a weaker (if any) relationship with attempting
developmental math courses.
This first take-home message, that more academically prepared students exhibit higher
probabilities of success, is not a new one, nor is it news that student-level characteristics,
rather than state- or institutional-level characteristics, are more predictive of
student success. Another aspect of success in the developmental math trajectory, though, is
whether students actually attempt their math courses. This hints at the fact that it is not simply
test score, or a standardized measurement of preparedness, but other underlying non-cognitive
factors that relate to developmental math student progress and success. Additional findings from
the first and second papers in this dissertation help advance this argument.
The Largest Barrier to Developmental Math Student Progression is Attempting and
Passing their Initial Math Course
In exploring developmental math student progression through their sequence, I found that
the largest barrier to developmental math student progression is actually attempting and passing
their initial math course. Disaggregating this finding into the barrier of attempting their placed
math course and the barrier of passing their placed math course, I believe the former relates to
non-cognitive factors while the latter may relate more to institutional assessment and placement
policies accurately placing students based on multiple measures of their academic preparedness.
If students attempt their developmental math courses, they are actually progressing
through their sequence and passing courses at rates comparable to those of their initially higher-
placed peers. In fact, after controlling for other factors, students initially placed in lower levels
attempt and pass subsequent courses with higher odds than their higher-placed peers. However,
it is important to note that these students are those who persevered through their sequences and
thus represent a subset of students who may be more highly motivated.
The second substantial barrier students face in progressing through their math trajectory
is passing their placed math course, suggesting that institutional assessment and placement
policies may be placing students inaccurately. This assumption is supported by existing research
which has demonstrated the negative consequences of inaccurate placement into developmental
education at community colleges (Scott-Clayton, Crosta, & Belfield, 2014; Ngo & Melguizo,
2015). I contend that understanding the community college’s entering student population is
essential to accurately measuring these students’ academic preparedness. My second paper took
an in-depth examination into a single institution’s assessment and placement policy and students’
behavior within it; it thus provides important insight into how and where institutions may target
changes to improve placement accuracy and thereby help to improve student success in
developmental math.
Misalignment Exists between Students’ Choice of Assessment Sub-Test and their Highest
Passed Math Course
In my second paper I described one California community college’s assessment and
placement policy. In this paper, I highlight points within the institution’s assessment and
placement policy where students’ decisions (and not test scores) directly affected their math
placement. Specifically, when students, often from underrepresented racial minority
backgrounds, underestimated their math preparedness by choosing a lower-level assessment
sub-test relative to their highest passed math course, they did themselves a disservice, since
lower-level sub-tests lead to lower-level placements.
There are several plausible explanations for why students make these uninformed
decisions that hurt their chances of placing into higher levels of math. One that emerged from
this study is the misalignment between students’ high school math sequence and the community
college Algebra-focused math sequence. There were several instances where students’ last math
course was Geometry, and it became clear that they did not know which Algebra-focused
sub-test to choose (often opting for the lower-level Algebra sub-test). Better aligning these
math sequences, then, would drastically reduce confusion over sub-test choice.
It is also important to remember who community college students are. Entering
community college students will have traveled widely varied math pathways prior to taking their
assessment test. First, high school students are not currently required to take a math course in
their senior year (Finkelstein et al., 2012). Second, community colleges serve a large proportion
of older students who are returning to education (Cohen & Brawer, 2008). And third, students
with lower math confidence will likely avoid math altogether (Fay et al., 2013); this lower math
self-confidence may also manifest in lower assessment sub-test choice. Given that math is
a skill that deteriorates over time, it is important not to assess students’ academic preparedness
without first redirecting a math-avoidant mindset and then refreshing their skills.
For such efforts to be effective, students themselves would need to be the catalyst for
changing their own mindset and refreshing their math preparedness; so again, we return to the
well-prepared student performing well regardless of institution and state policy. While these
students will succeed, what happens to the not-so-well-prepared student? In my first paper I
found that students who received additional points via multiple measures exhibited substantially
higher odds of passing each developmental math course. This finding suggests that institutions
may reward multiple measures in ways that both increase access to higher levels of math and
increase pass rates. In my second paper, I created an alternative placement criterion, which
utilized an additional noncognitive measure to place students into a level of math. Specifically, it
assessed prior math preparedness alongside sub-test choice; this additional measure increased
students’ access to higher levels of math while also maintaining students’ success in their placed
courses, and was especially beneficial for underrepresented racial minority students. Therefore,
simply understanding the entering community college student population made it possible to
correct misinformed decisions and created positive changes in placements, especially for
underrepresented racial minority students. Making these additional measures of academic
preparedness salient in students’ placements increased access to higher levels of math.
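To make the logic concrete, a multiple-measures placement rule of the kind described can be sketched as follows. The level names, cut-scores, and bump-up rule are hypothetical illustrations, not the actual criteria used by any college in this dissertation:

```python
# Hypothetical sketch of a multiple-measures placement rule.
# Levels, cut-scores, and the bump-up logic are invented for
# illustration; they do not reproduce any college's actual policy.

LEVELS = ["Arithmetic", "Pre-Algebra", "Elementary Algebra", "Intermediate Algebra"]
CUT_SCORES = {"Arithmetic": 0, "Pre-Algebra": 30,
              "Elementary Algebra": 50, "Intermediate Algebra": 70}

def place_by_test(score):
    """Return the highest level whose cut-score the test score meets."""
    placed = LEVELS[0]
    for level in LEVELS:
        if score >= CUT_SCORES[level]:
            placed = level
    return placed

def place_with_multiple_measures(score, highest_passed=None):
    """Raise the test-based placement by one level when the student's
    highest passed prior math course sits above that placement."""
    test_level = place_by_test(score)
    idx = LEVELS.index(test_level)
    if (highest_passed in LEVELS
            and LEVELS.index(highest_passed) > idx
            and idx + 1 < len(LEVELS)):
        return LEVELS[idx + 1]
    return test_level

# A score of 55 alone places the student into Elementary Algebra...
print(place_by_test(55))                                         # Elementary Algebra
# ...but prior success in Intermediate Algebra raises the placement.
print(place_with_multiple_measures(55, "Intermediate Algebra"))  # Intermediate Algebra
```

A rule of this shape can offset the sub-test under-selection described above: a student whose prior coursework signals readiness above the test-based placement is not locked into the lower level.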
Recommendations for Practitioners and Policymakers
Several recommendations emerge from my dissertation that inform practitioners and
policymakers about structural changes to their developmental math programs that may assist in
breaking down some of the biggest barriers to student success.
1. Student placement should be determined by a holistic assessment of academic preparation.
Assembly Bill 743 was passed in 2011 and established a common assessment system for
California community colleges (California Legislative Information, n.d.). After a slight delay
due to budgetary constraints, the CCCCO responded to the bill by establishing the Common
Assessment Initiative to develop a new assessment for English as a Second Language (ESL),
math, and English. The common assessment instrument is intended to be a consistent tool, so
that students can take their assessment results with them when they transfer to another
college, which would thereby reduce the number of reassessments. Along with establishing a
state-level instrument, the Research and Planning Group created the Multiple Measure
Assessment Project (MMAP) (Research and Planning Group, n.d.) to examine multiple
measures from high school transcripts, such as GPA, that would assist in informing accurate
placement. Campuses are currently piloting the assessment system, and the rollout across the
CCC system is set for fall 2016 through 2018. While the CCCs are establishing a common
assessment, placement will remain at the local level: although students will be able to take
their assessment scores anywhere, their placement will be determined by each college’s
placement criteria in terms of cut-scores and defined multiple measures.
As policymakers and practitioners refine their assessment and placement policies at both
the state- and institutional-level, it is important to recognize that a score on a standardized
test is only one factor predicting student success. Yet in the California community colleges
included in this dissertation, the test score is nearly the sole factor determining student
placement within institutional assessment and placement policies. I found evidence that
noncognitive factors, such as motivation and math confidence, play a role in student success.
Therefore, utilizing meaningful multiple measures of assessment is crucial for accurate
placement. Further, assessment and placement policies should consider who the entering
community college student is: what their last math course was, how long ago they took it,
what their confidence level with math is, and whether it matches their actual behavior in prior
courses. Understanding students and their prior academic behavior is crucial in accurately
placing students into levels of math. It also, perhaps more importantly, guides practitioners
and policymakers on how to direct their students so that they have the greatest chance of
success both on their assessment test and throughout their higher education career.
Some students, for example, may benefit most from a math “boot camp” prior to their
assessment while others may find that adjusting their mindset toward math is more helpful.
2. Clearly align high school math pathways with community college math pathways.
Because the multiple pathways of high school math do not clearly align with the Algebra-
focused developmental math sequence at the community colleges, students are confused about
what their math preparation is and how it fits within this new sequence of courses.
Recognizing this, practitioners should clearly communicate to students what their own math
preparedness is and how it fits within the community college math sequence.
Colleges’ assessment offices and college counselors may consider providing clear information
to students that compares their high school math coursework with the community college math
sequence.
Policymakers in K-12 education have recently specified “the mathematics that all
students should study in order to be college and career ready” (p. 58). As part of the
California Common Core State Standards for Mathematics (CA CCSSM), the state includes
two types of standards: Mathematical Practice Standards and Mathematical Content
Standards. Together, these standards address both “habits of mind” so students may develop
and foster mathematical understanding as well as content-expertise. The CA CCSSM model
courses for higher mathematics are organized into two pathways. In the traditional pathway,
students move through Algebra I, Geometry, and Algebra II, while in the integrated pathway,
students are presented with higher mathematics as a connected subject (Mathematics I, II,
and III) in which each course contains standards from all conceptual categories (number and
quantity, algebra, functions, modeling, geometry, statistics and probability) (California
Department of Education, n.d.). Yet these newly implemented pathways still remain
misaligned to the traditional community college remedial math sequence.
3. Change student behavior by changing student mindset toward math.
The largest barrier to student success in developmental math is that students do not
attempt their initial math course. Students cannot cross the finish line unless they first cross
the start line. My research has demonstrated that students’ low confidence in math is greatly
inhibiting, so much so that they may avoid math altogether. Given that math is a
deteriorating skill, the longer they avoid their first class, the steeper their climb toward a
college degree becomes. Changing this mindset, then, is one of the main keys to changing
behavior toward math (increasing attempt rates) and in math courses (increasing pass rates).
References
Academic Senate for California Community Colleges Executive Committee (2004). Issues and
options for associate degree levels in mathematics and English. Sacramento, CA:
Academic Senate for California Community Colleges. Retrieved from
http://www.asccc.org/node/175004
Adelman, C. (2006). The toolbox revisited: Paths to degree completion from high school through
college. Washington, DC: U.S. Department of Education.
Akerhielm, K. (1995). Does class size matter? Economics of Education Review, 14(3), 229–241.
American Educational Research Association, American Psychological Association, and National
Council on Measurement in Education. (1999). Standards for educational and
psychological testing. Washington, DC: American Educational Research Association.
Armstrong, W. B. (2000). The association among student success in courses, placement test
scores, student background data, and instructor grading practices. Community College
Journal of Research and Practice, 24(8), 681–695. doi: 10.1080/10668920050140837
Attewell, P., & Domina, T. (2008). Raising the bar: Curricular intensity and academic
performance. Educational Evaluation and Policy Analysis, 30(1), 51–71.
Attewell, P., Lavin, D., Domina, T., & Levey, T. (2006). New evidence on college
remediation. Journal of Higher Education, 77(5), 886–924.
Bahr, P. R. (2008). Does mathematics remediation work?: A comparative analysis of academic
attainment among community college students. Research in Higher Education, 49, 420–
450.
Bahr, P. R. (2009). Educational attainment as process: Using hierarchical discrete-time event
history analysis to model rate of progress. Research in Higher Education, 50(7), 691–
714.
Bahr, P. R. (2010). Preparing the underprepared: An analysis of racial disparities in
postsecondary mathematics remediation. Journal of Higher Education, 81(2), 209–237.
doi: 10.1353/jhe.0.0086
Bahr, P. R. (2012). Deconstructing remediation in community colleges: Exploring associations
between course-taking patterns, course outcomes, and attrition from the remedial math
and remedial writing sequences. Research in Higher Education, 53, 661–693.
doi:10.1007/s11162-011-9243-2
Bahr, P. R. (2013). The deconstructive approach to understanding college students’ pathways
and outcomes. Community College Review, 41(2), 137–153.
Bailey, T. (2009a). Challenge and opportunity: Rethinking the role and function of
developmental education in community college. New Directions for Community
Colleges, 145, 11–30.
Bailey, T. (2009b). Rethinking remedial education in community college. New York, NY:
Community College Research Center, Teachers College, Columbia University.
Bailey, T., Jeong, D.W., & Cho, S. (2010). Referral, enrollment, and completion in
developmental education sequences in community colleges. Economics of Education
Review, 29, 255–270. doi: 10.1016/j.econedurev.2009.09.002
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.
Basic Skills Initiative. (n.d.). Effective practices. Retrieved from
http://www.cccbsi.org/effective-practices
Belfield, C., & Crosta, P. M. (2012). Predicting success in college: The importance of placement
tests and high school transcripts (CCRC Working Paper No. 42). New York, NY:
Columbia University, Teachers College, Community College Research Center.
Bertrand, M., Duflo, E., & Mullainathan, S. (2004). How much should we trust differences-in-
differences estimates? The Quarterly Journal of Economics, 119(1), 249–275.
Bettinger, E. P., & Long, B. T. (2005). Remediation at the community college: Student
participation and outcomes. New Directions for Community Colleges, 129, 17–26.
Bettinger, E. P., & Long, B. T. (2007). Institutional responses to reduce inequalities in college
outcomes: Remedial and developmental courses in higher education. In S. Dickert-
Conlin & R. Rubenstein (Eds.), Economic inequality in higher education: Access,
persistence, and success. New York, NY: Russell Sage Foundation Press.
Betts, J. R., Hahn, Y., & Zau, A. C. (2011). Does diagnostic math testing improve student
learning? Sacramento, CA: Public Policy Institute of California.
Bickerstaff, S., Barragan, M., & Rucks-Ahidiana, Z. (2012). ‘I came in unsure of everything’:
Community college students’ shifts in confidence (CCRC Working Paper No. 48). New
York, NY: Columbia University, Teachers College, Community College Research
Center.
Boatman, A., & Long, B. T. (2010). Does remediation work for all students? How the effects of
postsecondary remedial and developmental courses vary by level of academic
preparation (NCPR Working Paper). New York, NY: National Center for Postsecondary
Research.
Boroch, D., Fillpot, J., Hope, L., Johnstone, R., Mery, P., Serban, A., & Gabriner, R. S. (2007).
Basic skills as a foundation for student success in California community colleges.
Sacramento, CA: The Research and Planning Group for California Community Colleges,
Center for Student Success. Retrieved from http://files.eric.ed.gov/fulltext/ED496117.pdf
Boylan, H. R. (2009). Targeted Intervention for Developmental Education Students (T.I.D.E.S.).
Journal of Developmental Education, 32(3), 14–23.
Bremer, C. D., Center, B. A., Opsal, C. L., Medhanie, A., Jang, Y. J., & Geise, A. C. (2013).
Outcome trajectories of developmental students in community colleges. Community
College Review, 41(2), 154–175.
Breneman, D., & Harlow, W. (1998). Remedial education: Costs and consequences. Fordham
Report, 2(9), 1–22.
Brint, S., & Karabel, J. (1989). The diverted dream: Community colleges and the promise of
educational opportunity in America, 1900-1985. New York, NY: Oxford University
Press, Inc.
Bueschel, A. C. (2003). The missing link: The role of community colleges in the transitions
between high school and college. Palo Alto, CA: The Bridge Project: Strengthening K-12
Transition Policies, Stanford University. Retrieved from
http://www.stanford.edu/group/bridgeproject/community_college_rept_for_web.pdf
Bunch, G. C., Endris, A., Panayotova, D., Romero, M., & Llosa, L. (2011). Mapping the terrain:
Language testing and placement for US-educated language minority students in
California’s community colleges. Report prepared for the William and Flora Hewlett
Foundation. Retrieved from
http://www.hewlett.org/uploads/documents/Mapping_the_terrain_2011.pdf
Burdman, P. (2012). Where to begin? The evolving role of placement exams for students starting
college. Washington, DC: Jobs for the Future.
Burgess, L. A, & Samuels, C. (1999). Impact of full-time versus part-time instructor status on
college student retention and academic performance in sequential courses. Community
College Journal of Research and Practice, 23(5), 487–498.
Burrow, S. C. (2013). Standardized testing placement and high school GPA as predictors of
success in remedial math (Unpublished doctoral dissertation). University of Mississippi.
Calcagno, J. C., Bailey, T., Jenkins, D., Kienzl, G., & Leinbach, T. (2008). Community college
student success: What institutional characteristics make a difference? Economics of
Education Review, 27(6), 632–645.
Calcagno, J. C., Crosta, P., Bailey, T., & Jenkins, D. (2007). Does age of entrance affect
community college completion probabilities? Evidence from a discrete-time hazard
model. Educational Evaluation and Policy Analysis, 29, 218–235.
California Community Colleges Chancellor’s Office. (1999). Understanding funding, finance
and budgeting: A manager’s handbook. Retrieved from
http://files.eric.ed.gov/fulltext/ED432331.pdf
California Community Colleges Chancellor’s Office (2008). California Community Colleges
guidelines for Title 5 regulations, chapter 6, part 1. Sacramento, CA: California
Community. Colleges Chancellor’s Office. Retrieved from
http://asccc.org/sites/default/files/t5guidelines.doc
California Community Colleges Chancellor’s Office. (2011). Basic skills accountability:
Supplement to the ARCC Report. Retrieved from
http://californiacommunitycolleges.cccco.edu/Portals/0/reportsTB/2011_Basic_Skills_Ac
countability_Report_[Final]_Combined.pdf
California Community Colleges Chancellor’s Office. (2015). Common Assessment Initiative.
Retrieved from http://cccassess.org/
California Community Colleges Chancellor’s Office. (n.d.). Data Mart [Data file]. Retrieved
from http://datamart.cccco.edu/
California Department of Education. (n.d.). Common core state standards. Retrieved from
http://www.cde.ca.gov/re/cc/
California Legislative Information. (n.d.). AB-743 California Community Colleges: Common
assessment system. Retrieved from
http://leginfo.legislature.ca.gov/faces/billCompareClient.xhtml?bill_id=201120120AB743
California Postsecondary Education Commission (2010). Student snapshots [Data file].
Retrieved from http://www.cpec.ca.gov/StudentData/StudentSnapshot.ASP
California State Legislature (2005). Senate bill no. 361. Retrieved from
http://www.leginfo.ca.gov/pub/05-06/bill/sen/sb_0351-0400/sb_361-bill_20060929_chaptered.pdf
Campbell, D. T. (1969). Reforms as experiments. American Psychologist, 24, 407– 429.
Chaney, B., Burgdorf, K., & Atash, N. (1997). Influencing achievement through high school
graduation requirements. Educational Evaluation and Policy Analysis, 19(3), 229–244.
Chester, M. D. (2003). Multiple measures and high-stakes decisions: A framework for
combining measures. Educational Measurement: Issues and Practice, 22(2), 32–41. doi:
10.1111/j.1745-3992.2003.tb00126.x
Clune, W. H., & White, P. A. (1992). Education reform in the trenches: Increased academic
course taking in high schools with lower achieving students in states with higher
graduation requirements. Educational Evaluation and Policy Analysis, 14(1), 2–20.
Chen, X. (2005). First generation students in postsecondary education: A look at their college
transcripts (NCES 2005-171). U.S. Department of Education, National Center for
Education Statistics. Washington, DC: U.S. Government Printing Office.
Choy, S. P. (2002). Access and persistence: Findings from 10 years of longitudinal research on
students. Washington, DC: American Council on Education, Center for Policy Analysis.
Cohen, A. M., & Brawer, F. B. (2008). The American community college (5th Edition). San
Francisco, CA: Jossey-Bass.
Common Core State Standards Initiative (2015). Preparing America’s students for success.
Retrieved from www.corestandards.org
Cox, R. D. (2009). “It was just that I was afraid”: Promoting success by addressing students’ fear
of failure. Community College Review, 37(52), 52–80. doi: 10.1177/0091552109338390
Crisp, G., & Delgado, C. (2014). The impact of developmental education on community college
persistence and vertical transfer. Community College Review, 42(2), 99–117.
Crisp, G., & Nora, A. (2010). Hispanic student success: Factors influencing the persistence and
transfer decisions of Latino community college students enrolled in developmental
education. Research in Higher Education, 51, 175–194. doi: 10.1007/s11162-009-9151-x
Dougherty, K. J., Hare, R., & Natow, R. S. (2009). Performance accountability systems for
community colleges: Lessons for the voluntary framework of accountability for
community colleges (CCRC Working Paper). New York, NY: Columbia University,
Teachers College, Community College Research Center. Retrieved from
http://files.eric.ed.gov/fulltext/ED507825.pdf
Dougherty, K. J., & Hong, E. (2006). Performance accountability as imperfect panacea. In T.
Bailey & V. S. Morest (Eds.), Defending the community college equity agenda (pp. 68–
103). Baltimore, MD: Johns Hopkins University Press.
Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and
passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087–
1101. doi: 10.1037/0022-3514.92.6.1087
Duckworth, A. L., Quinn, P. D., & Tsukayama, E. (2012). What No Child Left Behind leaves
behind: The roles of IQ and self-control in predicting standardized achievement test
scores and report card grades. Journal of Educational Psychology, 104(2), 439–451. doi:
10.1037/a0026280
Economic Policy Institute. (n.d.). The Great Recession. Washington, DC: Economic Policy
Institute, The State of Working America. Retrieved from
http://stateofworkingamerica.org/great-recession/
Fay, M. P., Bickerstaff, S., & Hodara, M. (2013). Why students do not prepare for math
placement exams: Student perspectives (CCRC Research Brief No. 57). New York, NY:
Columbia University, Teachers College, Community College Research Center.
Fike, D. S, & Fike, R. (2007). Does faculty employment status impact developmental
mathematics outcomes? Journal of Developmental Education, 31(1), 2–11.
Finkelstein, N., Fong, A., Tiffany-Morales, J., Shields, P., & Huang, M. (2012). College bound
in middle school and high school? How math course sequences matter. Sacramento, CA:
WestEd, The Center for the Future of Teaching and Learning.
Finn, J. D., Pannozzo, G. M., & Achilles, C. M. (2003). The “why’s” of class size: Student
behavior in small classes. Review of Educational Research, 73(3), 321–368.
Fong, K. E., Melguizo, T., & Prather, G. (2015). Increasing success rates in developmental math:
The complementary role of individual and institutional characteristics. Research in
Higher Education. doi: 10.1007/s11162-015-9368-9
Fulks, J., & Alancraig, M. (Eds.). (2008). Constructing a framework for success: A holistic
approach to basic skills. Sacramento, CA: Academic Senate for California Community
Colleges. Retrieved from http://basicskills.publishpath.com/basic-skills-handbook
Gaertner, M. N., & McClarty, K. L. (2015). Performance, perseverance, and the full picture of
college readiness. Educational Measurement: Issues and Practice, 34(2), 1–14. doi:
10.1111/emip.12066
Gándara, P., Alvarado, E., Driscoll, A., & Orfield, G. (2012). Building pathways to transfer:
Community colleges that break the chain of failure for students of color. Los Angeles,
CA: The Civil Rights Project, University of California, Los Angeles.
Glenn, D., & Wagner, W. (2006). Cost and consequences of remedial course enrollment in Ohio
public higher education: Six-year outcomes for fall 1998 cohort. Paper presented at the
Association of Institutional Research Forum, Chicago. Retrieved from
http://regents.ohio.gov/perfrpt/special_reports/Remediation_Consequences_2006.pdf
Gray-Little, B., & Hafdahl, A. R. (2000). Factors influencing racial comparisons of self-esteem:
A quantitative review. Psychological Bulletin, 126, 26–54.
Grimes, S. K., & David, K. C. (1999). Underprepared community college students: Implications
of attitudinal and experiential differences. Community College Review, 27(2), 73–92.
Grubb, W. N. (2012). Basic skills education in community colleges: Inside and outside of
classrooms. New York, NY: Routledge.
Hagedorn, L. S., Chi, W., Cepeda, R. M., & McLain, M. (2007). An investigation of critical
mass: The role of Latino representation in the success of urban community college
students. Research in Higher Education, 48(1), 73–91.
Hagedorn, L. S., & DuBray, D. (2010). Math and science success and nonsuccess: Journeys
within the community college. Journal of Women and Minorities in Science and
Engineering, 16(1), 31–50.
Hagedorn, L. S., Siadat, M. V., Fogel, S. F., Nora, A., & Pascarella, E. T. (1999). Success in
college mathematics: Comparisons between remedial and nonremedial first-year college
students. Research in Higher Education, 40(3), 261–284.
Hawley, T. H., & Harris, T. A. (2005–2006). Student characteristics related to persistence for
first-year community college students. Journal of College Student Retention, 7(1–2),
117–142.
Henderson-Montero, D., Julian, M. W., & Yen, W. M. (2003). Multiple measures: Alternative
design and analysis models. Educational Measurement: Issues and Practice, 22(2), 7–12.
doi: 10.1111/j.1745-3992.2003.tb00122.x
Hodara, M., Jaggars, S. S., & Karp, M. M. (2012). Improving developmental education
assessment and placement: Lessons from community colleges across the country (CCRC
Working Paper No. 51). New York, NY: Community College Research Center.
Hoffman, J. L., & Lowitzki, K. E. (2005). Predicting college success with high school grades and
test scores: Limitations for minority students. The Review of Higher Education, 28(4),
455–474. doi: 10.1353/rhe.2005.0042
Horn, L., & Nevill, S. (2006). Profile of undergraduates in U.S. postsecondary education
institutions: 2003–04: With a special analysis of community college students (NCES
2006-184). U.S. Department of Education. Washington, DC: National Center for
Education Statistics.
Hoyt, J. E. (1999). Remedial education and student attrition. Community College Review, 27, 51–
72. doi: 10.1177/009155219902700203
Hughes, K. L., & Scott-Clayton, J. (2011). Assessing developmental assessment in community
colleges. Community College Review, 39(4), 327–351. doi: 10.1177/0091552111426898
Illowsky, B. (2008). The California Basic Skills Initiative. New Directions for Community
Colleges, 144, 83–91.
Jaffe, L. (2014). Mathematics from high school to community college: Using existing tools to
increase college-readiness now (Policy Brief 14-1). Stanford, CA: Policy Analysis for
California Education (PACE).
Jameson, M., & Fusco, B. R. (2014). Math anxiety, math self-concept, and math self-efficacy in
adult learners compared to traditional undergraduate students. Adult Education Quarterly,
64(4), 306–322. doi: 10.1177/074171364541461.
Jenkins, D., Jaggars, S. S., & Roksa, J. (2009). Promoting gatekeeper course success among
community college students needing remediation: Findings and recommendations from a
Virginia study (Summary Report). New York, NY: Columbia University, Teachers
College, Community College Research Center.
Kasworm, C. E. (2008). Emotional challenges of adult learners in higher education. New
Directions for Adult and Continuing Education, 120, 27–34. doi: 10.1002/ace.313
Kirst, M. W., & Bracco, K. R. (2004). Bridging the great divide: How the K-12 and
postsecondary split hurts students, and what can be done about it. In M. W. Kirst & A.
Venezia (Eds.), From high school to college (pp. 1–30). San Francisco, CA: Jossey-Bass.
Kosiewicz, H. (2013). Examining the effects of self-placement in remedial math on achievement:
Evidence from Los Angeles. Paper presented at the Association for Education Finance
and Policy annual conference, New Orleans, Louisiana.
Krueger, A. B. (2003). Economic considerations and class size. The Economic Journal,
113(485), F34–F63.
Krumrei-Mancuso, E. J., Newton, F. B., Kim, E., & Wilcox, D. (2013). Psychosocial factors
predicting first-year college student success. Journal of College Student Development,
54(3), 247–266. doi: 10.1353/csd.2013.0034
Lazarick, L. (1997). Back to the basics: Remedial education. Community College Journal, 11-15.
Legislative Analyst’s Office. (2012). The 2012-13 budget: Analysis of the Governor’s higher
education proposal. Retrieved from
http://www.lao.ca.gov/analysis/2012/highered/higher-ed-020812.aspx
Large Urban Community College District, Office of Institutional Research. (2012). Retrieved
from http://www.LUCCD.edu/ (pseudonym)
Long Beach Promise Group. (2013). 5-year progress report (2008–2013): A breakthrough in
student achievement. Long Beach, CA: Author. Retrieved from
http://www.longbeachcollegepromise.org/wp-content/uploads/2013/03/LBCP-5-Year-
ProgressReport.pdf
Martorell, P., McFarlin Jr., I., & Xue, Y. (2013). Does failing a placement exam discourage
underprepared students from going to college? (NPC Working paper #11-14). University
of Michigan: National Poverty Center.
Marwick, J. D. (2004). Charting a path to success: The association between institutional
placement policies and the academic success of Latino students. Community College
Journal of Research & Practice, 28, 263–280. doi: 10.1080/10668920490256444
Mattern, K. D., & Packman, S. (2009). Predictive validity of ACCUPLACER scores for course
placement: A meta-analysis (Research Report No. 2009-2). New York, NY: College
Board.
Meyer, B. D. (1995). Natural and quasi-experiments in economics. Journal of Business and
Economic Statistics, 13(2), 151–161.
Melguizo, T. (2011). A review of the theories developed to describe the process of college
persistence and attainment. In J. C. Smart & M. B. Paulsen (Eds.), Higher education:
Handbook of theory and research (pp. 395–424). New York, NY: Springer.
Melguizo, T., Bos, H., & Prather, G. (2013). Using a regression discontinuity design to estimate
the impact of placement decisions in developmental math in Los Angeles Community
College District (LACCD). (CCCC Working Paper). Rossier School of Education,
University of Southern California: California Community College Collaborative.
Retrieved from http://www.uscrossier.org/pullias/research/projects/sc-community-
college/
Melguizo, T., Hagedorn, L. S., & Cypers, S. (2008). The need for remedial/developmental
education and the cost of community college transfer: Calculations from a sample of
California community college transfers. The Review of Higher Education, 31(4), 401–
431.
Melguizo, T., & Kosiewicz, H. (2013). The role of race, income, and funding on student success:
An institutional level analysis of California community colleges. In The Century
Foundation Task Force on Preventing Community Colleges from Becoming Separate and
Unequal (Eds.), Bridging the higher education divide: Strengthening community colleges
and restoring the American dream (pp. 137–156). New York, NY: The Century
Foundation.
Melguizo, T., Kosiewicz, H., Prather, G., & Bos, H. (2014). How are community college
students assessed and placed in developmental math? Grounding our understanding in
reality. Journal of Higher Education, 85(5), 691–722. doi: 10.1353/jhe.2014.0025
Merisotis, J. P., & Phipps, R. A. (2000). Remedial education in colleges and universities: What’s
really going on? The Review of Higher Education, 24(1), 67–85. doi:
10.1353/rhe.2000.0023
Moore, C., & Shulock, N. (2010). Divided we fail: Improving completion and closing racial gaps
in California’s community colleges. Sacramento, CA: Institute for Higher Education
Leadership and Policy, California State University.
Nakajima, M. A., Dembo, M. H., & Mossler, R. (2012). Student persistence in community
colleges. Community College Journal of Research and Practice, 36(8), 591–613. doi:
10.1080/10668920903054931
National Center for Public Policy and Higher Education (NCPPHE) & Southern Regional
Education Board (SREB). (2010). Beyond the rhetoric: Improving college readiness
through coherent state policy. Washington, DC: NCPPHE & SREB. Retrieved from
http://publications.sreb.org/2010/Beyond%20the%20Rhetoric.pdf
National Commission on Excellence in Education. (1983). A nation at risk: The imperative for
educational reform. Washington, DC: U.S. Government Printing Office.
Ngo, F., & Melguizo, T. (2015, February). How can placement policy improve math remediation
outcomes? Evidence from community college experimentation. Paper presented at the
annual meeting of the Association for Education Finance and Policy, Washington, DC.
Ngo, F., & Kwon, W. (2014). Using multiple measures to make math placement decisions:
Implications for access and success in community colleges. Research in Higher
Education. doi: 10.1007/s11162-014-9352-9
Noble, J. P., Schiel, J. L., & Sawyer, R. L. (2004). Assessment and college course placement:
Matching students with appropriate instruction. In J. E. Wall & G. R. Walz (Eds.),
Measuring up: Assessment issues for teachers, counselors, and administrators (pp. 297–
311). Greensboro, NC: ERIC Counseling and Student Services Clearinghouse and the
National Board of Certified Counselors (ERIC Document Reproduction Service No.
ED480379).
NonCollege D. (n.d.). Student background and demographic data. Retrieved from
http://www.nontreatedcollege.edu/EnrollmentDevelopment/InstitutionalResearch/Pages/S
tu-Background-Demo.aspx
Oakes, J. (1990a). Lost talent: The underparticipation of women, minorities, and disabled
persons in science. Santa Monica, CA: RAND. Retrieved from
http://files.eric.ed.gov/fulltext/ED318640.pdf
Oakes, J. (1990b). Opportunities, achievement, and choice: Women and minority students in
science and mathematics. Review of Research in Education, 16, 153–222. doi:
10.2307/1167352
Obama, B. (2009, July 14). Remarks by the President on the American Graduation Initiative.
Retrieved from http://www.whitehouse.gov/the_press_office/Remarks-by-the-President-
on-the-American-Graduation-Initiative-in-Warren-MI/
Pajares, F., & Kranzler, J. (1995). Self-efficacy beliefs and general mental ability in
mathematical problem-solving. Contemporary Educational Psychology, 20, 426–443.
doi: 10.1006/ceps.1995.1029
Parsad, B., Lewis, L., & Greene, B. (2003). Remedial education at degree-granting
postsecondary institutions in fall 2000 (NCES 2004-101). Washington, DC: National
Center for Education Statistics.
Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students: A third decade of
research. San Francisco, CA: Jossey-Bass.
Peng, X., Le, T., & Milburn, R. K. (2011). Predicting a student’s success at a post-secondary
institution. Journal of Applied Research in the Community College, 19(1), 42–49.
Perin, D. (2006). The location of developmental education in community colleges: A discussion
of the merits of mainstreaming vs. centralization. Community College Review, 30(1), 27–
44.
Perry, M., Bahr, P. R., Rosin, M., & Woodward, K. M. (2010). Course-taking patterns, policies,
and practices in developmental education in the California community colleges.
Mountain View, CA: EdSource. Retrieved from
http://www.edsource.org/assets/files/ccstudy/FULL-CC-DevelopmentalCoursetaking.pdf.
Primary Research Group, Inc. (2008). Survey of assessment practices in higher education. New
York, NY: Author.
Reed, D. (2008). California’s future workforce: Will there be enough college graduates? San
Francisco, CA: Public Policy Institute of California.
Research and Planning Group for California Community Colleges. (n.d.). Multiple measures
assessment project. Retrieved from http://rpgroup.org/projects/multiple-measures-
assessment-project
Robbins, S. B., Lauver, K., Le, H., Davis, D., Langley, R., & Carlstrom, A. (2004). Do
psychological and study skill factors predict college outcomes? A meta-analysis.
Psychological Bulletin, 130(2), 261–288. doi: 10.1037/0033-2909.130.2.261
Ross-Gordon, J. M. (2003). Adult learners in the classroom. New Directions for Student
Services, 102, 43–52. doi: 10.1002/ss.88
Safran, S., & Visher, M. G. (2010). Case studies of three community colleges: The policy and
practice of assessing and placing students in developmental education courses (NCPR
Working Paper). New York, NY: National Center for Postsecondary Research, Teachers
College, Columbia University.
Sawyer, R. (1996). Decision theory for validating course placement tests. Journal of Educational
Measurement, 33(3), 271–290. doi: 10.1111/j.1745-3984.1996.tb00493.x
Schiller, K. S., & Muller, C. (2003). Raising the bar and equity? Effects of state high school
graduation requirements and accountability policies on students’ mathematics course
taking. Educational Evaluation and Policy Analysis, 25(3), 299–318.
Schmid, C., & Abell, P. (2003). Demographic risk factors, study patterns, and campus
involvement as related to student success among Guilford Technical Community College
students. Community College Review, 31(1), 1–16. doi: 10.1177/009155210303100101
Schneider, M., & Yin, L. (2011). The hidden costs of community colleges. Washington, DC:
American Institutes for Research.
Schunk, D. H. (2011). Learning theories: An education perspective (6th ed.). Upper Saddle
River, NJ: Prentice Hall.
Scott-Clayton, J. (2012). Do high-stakes placement exams predict college success? (CCRC
Working Paper No. 41). New York, NY: Columbia University, Teachers College,
Community College Research Center.
Scott-Clayton, J., Crosta, P. M., & Belfield, C. R. (2014). Improving the targeting of treatment:
Evidence from college remediation. Educational Evaluation and Policy Analysis, 36(3),
371–393. doi: 10.3102/0162373713517935
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental
designs for generalized causal inference. Boston, MA: Houghton-Mifflin.
Shaw, K. M. (1997). Remedial education as ideological battleground: Emerging remedial
education policies in community college. Educational Evaluation and Policy Analysis,
19(3), 284–296.
Stevens, T., Olivarez, A., Lan, W. Y., & Tallent-Runnels, M. K. (2004). Role of mathematics
self-efficacy and motivation in mathematics performance across ethnicity. The Journal of
Educational Research, 97(4), 208–221. doi: 10.3200/JOER.97.4.208-222
Strong American Schools. (2008). Diploma to nowhere. Retrieved from http://www.edin08.com
Texas Higher Education Coordinating Board. (2013). Developmental education best practices.
Retrieved from http://www.thecb.state.tx.us/download.cfm?downloadfile=B85F46FD-
F0A7-A0E7-53324B738D5631E3&typename=dmFile&fieldname=filename
College D. (n.d.). Institutional research. Retrieved from
http://www.treatedcollege.edu/Departments/EPIE/Research/Pages/default.aspx
U. S. Department of Education. (1994). National Center for Educational Statistics, Report on the
state of remedial education, 1992–1993. Washington, DC: U.S. Department of
Education.
Venezia, A., Bracco, K., & Nodine, T. (2010). One-shot deal? Students’ perceptions of
assessment and course placement in California’s community colleges. San Francisco,
CA: WestEd.
Venezia, A., & Kirst, M. W. (2005). Inequitable opportunities: How current education systems
and policies undermine the chances for student persistence and success in college.
Educational Policy, 19(2), 283–307. doi: 10.1177/0895904804274054
Wassmer, R., Moore, C., & Shulock, N. (2004). Effect of racial/ethnic composition on transfer
rates in community colleges: Implications for policy and practice. Research in Higher
Education, 45(6), 651–672.
Appendix P1-A
Tables and figures correspond with those in the previous analyses; however, they employ the traditional measurement and
sample definition used in existing research.
Figure A1. Percentage of students passing each level of the developmental math trajectory based on initial placement.
[Figure: flowchart of pass rates (Pass AR, Pass PA, Pass EA, Pass IA) by initial placement cohort: Arithmetic
(N=15,106), Pre-Algebra (N=14,879), Elementary Algebra (N=14,550), and Intermediate Algebra (N=10,344).]
Table A1. Conceptual model fit in three stages^a

                Model 1:         Model 2: + Institutional Characteristics    Model 3: + Developmental Math Factors
                Student Chars.
Outcome         Pseudo R2        Pseudo R2   Pseudo ΔR2   LR χ2(6)           Pseudo R2   Pseudo ΔR2   LR χ2(4)
Attempt AR^b    0.0483           0.0507      0.0024       47.14***           0.0524      0.0017       33.65***
Pass AR^c       0.0954           0.1135      0.0181       219.73***          0.1157      0.0022       27.10***
Attempt PA      0.0246           0.0313      0.0067       233.01***          0.0359      0.0046       160.91***
Pass PA         0.0766           0.1151      0.0385       795.92***          0.1212      0.0061       126.82***
Attempt EA      0.1552           0.2021      0.0469       1650.45***         0.2059      0.0038       130.98***
Pass EA         0.1212           0.1548      0.0336       1030.15***         0.1619      0.0071       218.95***
Attempt IA      0.0909           0.1323      0.0414       2410.16***         0.1366      0.0043       196.19***
Pass IA         0.1911           0.2297      0.0386       1307.42***         0.2339      0.0042       140.98***

a. Source: Assessment and enrollment records, LUCCD 2005/06-2009/10.
b. Attempt is defined as continuing to be enrolled in the course after the no-penalty drop deadline.
c. Passing is defined as successfully completing a course; that is, receiving a 'C' or better, which is required for advancement to the next-level course.
*p < .10, **p < .05, ***p < .01
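The fit statistics reported in Table A1 follow standard definitions: McFadden's pseudo-R2 compares a model's log-likelihood against an intercept-only model, and the likelihood-ratio (LR) chi-square compares each nested model against the previous stage. A minimal sketch of both calculations (the log-likelihood values below are hypothetical, not drawn from the LUCCD data):

```python
def mcfadden_pseudo_r2(ll_model, ll_null):
    """McFadden's pseudo-R2: 1 - LL(model) / LL(intercept-only model)."""
    return 1.0 - ll_model / ll_null

def lr_statistic(ll_restricted, ll_full):
    """Likelihood-ratio statistic: 2 * (LL(full) - LL(restricted)).

    Under the null it is chi-square distributed with degrees of freedom
    equal to the number of parameters added in the full model.
    """
    return 2.0 * (ll_full - ll_restricted)

# Hypothetical log-likelihoods for two nested stages of one outcome model
ll_null, ll_stage1, ll_stage2 = -10000.0, -9500.0, -9200.0
print(mcfadden_pseudo_r2(ll_stage1, ll_null))  # 0.05
print(lr_statistic(ll_stage1, ll_stage2))      # 600.0
```

The pseudo ΔR2 columns in Table A1 are simply the difference in pseudo-R2 between adjacent stages.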
Table A2. Odds ratio results from final hierarchical model of each progression outcome^a

Variables             Attempt AR^b  Pass AR^c   Attempt PA  Pass PA     Attempt EA  Pass EA     Attempt IA  Pass IA
Student Characteristics
Age                   0.975***      1.035***    1.000       1.033***    1.028***    1.031***    1.006***    1.013***
                      (0.002)       (0.003)     (0.002)     (0.003)     (0.002)     (0.002)     (0.001)     (0.002)
Female                1.474***      1.437***    1.183***    1.228***    1.030       1.423***    1.053*      1.236***
                      (0.053)       (0.071)     (0.032)     (0.046)     (0.032)     (0.044)     (0.024)     (0.039)
Asian American        1.004         1.037       1.024       1.103       1.048       1.068       0.928       1.067
                      (0.101)       (0.150)     (0.067)     (0.103)     (0.077)     (0.067)     (0.039)     (0.057)
African American      1.004         0.374***    0.758***    0.387***    0.726***    0.375***    0.790***    0.364***
                      (0.083)       (0.045)     (0.043)     (0.032)     (0.047)     (0.025)     (0.036)     (0.025)
Latino                1.245**       0.639***    1.043       0.699***    0.840**     0.636***    0.851***    0.606***
                      (0.092)       (0.066)     (0.051)     (0.048)     (0.045)     (0.031)     (0.029)     (0.027)
Other                 0.960         0.701*      0.991       0.846       1.000       0.753***    0.908       0.815**
                      (0.096)       (0.097)     (0.069)     (0.082)     (0.078)     (0.052)     (0.045)     (0.052)
Student Enrollment Status
Full-time             1.477***      0.840*      1.113*      1.268***    1.001       1.416***    1.164***    1.600***
                      (0.097)       (0.067)     (0.052)     (0.076)     (0.051)     (0.065)     (0.040)     (0.068)
Financial aid         1.711***      1.293***    1.357***    1.206***    0.942       1.374***    1.016       1.328***
                      (0.067)       (0.071)     (0.039)     (0.050)     (0.031)     (0.046)     (0.024)     (0.044)
Student Assessment Variables
Test score            1.052***      1.151***    1.022***    1.041***    1.014***    1.017***    1.003*      1.012***
                      (0.006)       (0.009)     (0.002)     (0.003)     (0.002)     (0.002)     (0.001)     (0.002)
Multiple measure pt   1.045         1.267***    1.045       1.153**     1.097*      1.192***    1.015       1.158***
                      (0.044)       (0.067)     (0.033)     (0.049)     (0.039)     (0.043)     (0.027)     (0.043)
AR placement          --            --          1.708***    0.992       0.117***    0.635***    1.181*      0.474***
                                                (0.077)     (0.059)     (0.012)     (0.048)     (0.090)     (0.048)
PA placement          --            --          --          --          0.082***    0.681***    0.740***    0.373***
                                                                        (0.006)     (0.034)     (0.043)     (0.029)
EA placement          --            --          --          --          --          --          0.767***    0.634***
                                                                                                (0.032)     (0.033)
Institutional Characteristics
FTES                  0.919         1.107       1.146***    1.037       0.970       1.027       0.914***    0.868***
                      (0.041)       (0.062)     (0.037)     (0.044)     (0.035)     (0.038)     (0.022)     (0.030)
%Certificates awarded 1.008         0.999       0.994       1.000       0.997       0.988*      1.008*      1.018***
                      (0.005)       (0.007)     (0.004)     (0.005)     (0.004)     (0.005)     (0.003)     (0.005)
%HS graduates         1.057         0.975       0.981       1.036       1.024       0.988       1.066***    1.111***
                      (0.033)       (0.042)     (0.017)     (0.024)     (0.020)     (0.021)     (0.013)     (0.019)
%Degree goal          1.042         0.942       0.940***    0.913***    0.979       0.975       1.031*      1.016
                      (0.024)       (0.030)     (0.015)     (0.020)     (0.016)     (0.017)     (0.013)     (0.018)
API score             0.514**       1.923       1.004       1.011       1.037*      1.070***    1.043***    1.039*
                      (0.124)       (0.688)     (0.016)     (0.023)     (0.015)     (0.016)     (0.012)     (0.017)
%African American     1.025         1.045       0.994       1.015       1.073       1.217***    1.136***    1.124*
                      (0.060)       (0.096)     (0.024)     (0.033)     (0.035)     (0.042)     (0.040)     (0.060)
%Latino               1.062*        1.011       0.988       0.988       0.970       1.003       1.103***    1.101***
                      (0.029)       (0.037)     (0.17)      (0.023)     (0.019)     (0.018)     (0.010)     (0.014)
Median family income  0.413**       2.107       1.030**     0.971*      0.980       0.908***    0.935**     0.921*
                      (0.127)       (0.935)     (0.010)     (0.013)     (0.022)     (0.019)     (0.021)     (0.031)
Developmental Education Factors
MDTP                  4.72e+17**    0.000       0.252       0.337       0.002***    0.015***    0.243**     0.383
                      (6.92e+18)    (0.000)     (0.198)     (0.368)     (0.002)     (0.014)     (0.103)     (0.216)
Compass               2.00e+25**    0.000       1.020       2.668       7.311*      0.880       0.397       0.801
                      (3.83e+26)    (0.000)     (0.588)     (2.100)     (6.287)     (0.707)     (0.210)     (0.655)
Class size            0.044**       15.011      0.855***    0.780***    0.502***    0.632***    0.847**     0.823*
                      (0.049)       (24.174)    (0.027)     (0.034)     (0.065)     (0.079)     (0.050)     (0.072)
Taught by full-time   0.235**       3.871       1.005       0.953       1.036       0.897**     0.995       0.990
faculty               (0.127)       (3.079)     (0.032)     (0.041)     (0.036)     (0.029)     (0.037)     (0.056)
N                     14,825        9,276       29,514      16,119      39,425      22,710      52,259      24,441

a. Source: Assessment and enrollment records, LUCCD 2005/06-2009/10.
b. Attempt is defined as continuing to be enrolled in the course after the no-penalty drop deadline.
c. Passing is defined as successfully completing the course; that is, receiving a 'C' or better, which is required for
advancement to the next-level course.
*p < .10, **p < .05, ***p < .01
Appendix P3-A
Several major California Community College (CCC) policies were implemented during the time
frame of my study, alongside the national movement toward adopting the Common Core State
Standards in the K-12 system. Each of these policies affected CCCs and their students. This
appendix describes how each potentially confounding policy affected College D over time. I
contend, however, that these policies did not directly affect my analysis sample. Rather, the
more academically prepared student population in my analysis sample after the math
requirement increase is better explained by the Great Recession and the trickle-down effect within
the hierarchical higher education system in California (discussed in the manuscript).
A. Senate Bill 361 (SB 361): Community College Finance System
SB 361 established a new California community college funding system; it was signed
into law in 2006 and implemented for the 2007–08 fiscal year. SB 361's resource
allocation model provides a base allocation to each district (with additional resources for
smaller colleges, given their lower economies of scale), equalizes rates of funding per full-time
equivalent student (FTES), and creates three clearly defined categories for funding different
types of FTES (in order of base rate from highest to lowest: credit, Career Development
and College Preparation (CDCP), and regular noncredit courses). Under SB 361, the
calculation of each community college district's revenue for each fiscal year is based on the
level of general apportionment revenues (state and local) the district received the prior year and
on an amount attributed to minimum workload growth, with revenue adjustments for
increases or decreases in workload, program improvement as authorized by Section 84750 or any
other provision of law, inflation, and other purposes authorized by law.
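The allocation logic described above can be sketched as follows. This is a simplified illustration only; the category names follow the paragraph above, but the dollar figures, rates, and adjustment structure are hypothetical stand-ins, not the statutory values in Education Code Section 84750:

```python
def district_apportionment(base_allocation, ftes, rates, cola=0.0,
                           workload_adjustment=0.0):
    """Simplified SB 361-style apportionment sketch.

    base_allocation: fixed amount per district (relatively larger for small
        colleges, reflecting their lower economies of scale)
    ftes: FTES counts by category, e.g. {"credit", "cdcp", "noncredit"}
    rates: per-FTES funding rate by category (credit highest, noncredit lowest)
    cola: inflation (cost-of-living) adjustment applied to the total
    workload_adjustment: dollar adjustment for workload increases/decreases
    """
    ftes_revenue = sum(ftes[cat] * rates[cat] for cat in ftes)
    return (base_allocation + ftes_revenue) * (1.0 + cola) + workload_adjustment

# Hypothetical district: 10,000 credit FTES, 500 CDCP, 300 regular noncredit
rates = {"credit": 4565.0, "cdcp": 3232.0, "noncredit": 2745.0}
ftes = {"credit": 10000, "cdcp": 500, "noncredit": 300}
revenue = district_apportionment(base_allocation=3_300_000, ftes=ftes,
                                 rates=rates, cola=0.02)
```

The ordering of the three rates mirrors the statute's hierarchy of FTES categories; everything else here is illustrative.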
Figure B1 graphs the general fund provided to College D over time. Data were gathered
from College D's website. The general apportionment for College D remained relatively
stable up until the last year of available data, 2012–13.
B. Senate Bill 1440 (SB 1440): The Student Transfer Achievement Reform Act
SB 1440 was signed into law on September 29, 2010 and started being implemented in
the 2011–12 academic year. This bill enabled the CCCs and the California State University
(CSU) system to collaborate on the creation of Associate in Arts (AA) Degree and Associate in
Science (AS) Degree transfer programs. Upon completion of an Associate Degree for Transfer
(AA/S-T), community college students are guaranteed admission into the CSU system. These
students are also given priority consideration when applying to a particular program that aligns
with their community college major.
Figure B2 plots data collected from the CCCCO Data Mart (CCCCO, n.d.) illustrating
the number of degrees awarded by college over the 2002–03 to 2013–14 school years. Figure B2
demonstrates a relatively stable trend over time for College D. This figure covers the
entire time period of my study and suggests that SB 1440 would not confound my results.
Figure B1. Resource allocation over time (in millions). [Line graph; allocations range between roughly $30 and $60 million.]
Disaggregating the numbers further, the data reveal that fewer than 7 percent of the total
number of associate degrees awarded were designated as Associate Degrees for Transfer (see
Table B1). It is thus highly unlikely that this bill affected students in my analysis sample.
Table B1.
Proportion of College D AA-T and AS-T degrees awarded
            2011-12          2012-13          2013-14
            N      %age      N      %age      N      %age
            3      0.42%     21     3.07%     51     6.45%
C. California’s Basic Skills Initiative
The purpose of California's Basic Skills Initiative (BSI) was to support programs
designed to help academically underprepared students succeed. One change that emerged from
the BSI's adoption was the raising of minimum statewide standards for an associate's degree.
Part of the difficulty of my design is therefore to control for other elements of the BSI that may
confound my estimate of the Title V requirement increase.
Figure B2. Associate's degrees awarded over time. [Line graph; annual counts range between 0 and 1,000.]

In Figure B3, I examine the completion rate over time to illustrate whether the BSI
may have directly affected developmental education student outcomes. As described by the
CCCCO (n.d.), completion rate is measured through the Student Progress and Attainment Rate
(SPAR). Specifically, SPAR is based on "the percentage of first-time students with a minimum
of six units earned who attempted any math or English in the first three years" who, within six
years: (1) earned an associate's degree or a Chancellor's Office-approved certificate, (2)
transferred to a baccalaureate-granting institution, or (3) achieved "transfer-prepared" status,
defined as completing 60 UC/CSU-transferable units with a GPA of at least 2.0. As
Figure B3 demonstrates, College D's completion rates over time remained stable.
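The SPAR definition above can be made concrete with a short sketch. The field names here are hypothetical, and the actual CCCCO cohort construction involves additional rules; this only encodes the cohort and completion criteria as just described:

```python
def in_spar_cohort(student):
    # First-time students with at least 6 units earned who attempted
    # any math or English course within their first three years.
    return student["units_earned"] >= 6 and student["attempted_math_or_english"]

def is_spar_completer(student):
    # Completion within six years: an award, a transfer, or
    # "transfer-prepared" status (>= 60 UC/CSU-transferable units, GPA >= 2.0).
    return (student["earned_award"]
            or student["transferred"]
            or (student["transferable_units"] >= 60 and student["gpa"] >= 2.0))

def spar_rate(students):
    cohort = [s for s in students if in_spar_cohort(s)]
    return sum(is_spar_completer(s) for s in cohort) / len(cohort) if cohort else 0.0

# Hypothetical four-student example: the third student never enters the
# cohort, and two of the remaining three meet a completion criterion.
example = [
    {"units_earned": 12, "attempted_math_or_english": True, "earned_award": True,
     "transferred": False, "transferable_units": 0, "gpa": 2.5},
    {"units_earned": 30, "attempted_math_or_english": True, "earned_award": False,
     "transferred": False, "transferable_units": 62, "gpa": 2.1},
    {"units_earned": 3, "attempted_math_or_english": False, "earned_award": False,
     "transferred": False, "transferable_units": 0, "gpa": 0.0},
    {"units_earned": 20, "attempted_math_or_english": True, "earned_award": False,
     "transferred": False, "transferable_units": 40, "gpa": 3.0},
]
rate = spar_rate(example)  # 2 completers out of a 3-student cohort
```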
D. Common Core State Standards (CCSS)
The Common Core State Standards (CCSS) were developed in 2009 by state school chiefs and
governors comprising the Council of Chief State School Officers (CCSSO) and the National
Governors Association (NGA) Center for Best Practices. This state-led effort was developed in
response to the call for consistent learning goals across states. The CCSS are a set of academic
standards in math and English language arts/literacy (ELA). These learning goals outline what a
student should know and be able to do at the end of each grade, from kindergarten through 12th
grade. The standards were created to ensure that all students graduate from high school with the
skills and knowledge necessary to succeed in college, career, and life. Currently, 42 states, the
District of Columbia, four territories, and the Department of Defense Education Activity
(DoDEA) have voluntarily adopted and are moving forward with the CCSS. Though the
standards set grade-specific learning goals, they do not define how the standards should be
taught or which materials should be used to support students.
The California State Board of Education adopted the CCSS in 2010 and fully
implemented them in the 2014-15 school year. Because my analysis sample only includes
students who were assessed, placed, and then enrolled in either the nontreated college or
College D between fall 2005 and fall 2009, it is highly unlikely that the CCSS produced a more
academically prepared student population entering these two community colleges during the
time frame of my study.
Appendix P3-B
Table C1.
Distribution of Outcomes by Placement Level
Fall 2005-Spring 2009 Fall 2009
Degree-applicable units earned (mean) # % # %
IA 1290 33.64 182 41.68
EA 3371 20.31 357 16.05
PA 1217 11.24 91 11.13
AR 2485 14.77 298 11.14
Passing IA
IA 14 1.08% 19 10.38%
EA 104 3.07% 68 18.84%
PA 11 0.88% 6 6.12%
AR 32 1.29% 22 7.33%
Earning at least 60 degree-applicable units
IA 299 23.09% 61 33.33%
EA 343 10.12% 23 6.37%
PA 50 4.02% 2 2.04%
AR 153 6.15% 10 3.33%
Number of observations 11,501 100.00% 1,305 100.00%
Table C2.
Regression Results of Degree-applicable Units Earned across Math Placement Level
AR PA EA IA Transfer
IA requirement -4.39 -8.70 0.72 39.04** -2.89
(11.24) (10.02) (10.72) (14.95) (7.29)
Female 6.09*** 1.13 1.84* 6.46*** 4.33***
(0.86) (1.08) (0.84) (1.64) (0.97)
Years since high school
graduation 0.01 0.00 0.00 0.01 -0.01
(0.00) (0.00) (0.00) (0.01) (0.01)
Citizenship -0.10 0.62 -0.17 -0.13 0.08
(0.12) (0.43) (0.16) (0.59) (0.35)
Age 0.10 0.25** -0.01 -0.25 -0.14
(0.06) (0.08) (0.10) (0.21) (0.13)
Ethnicity (White=Comparison Group)
Asian -1.66 1.37 -5.45** -7.57** -7.54***
(2.14) (2.65) (1.80) (2.36) (1.80)
Black -11.71*** -6.02* -10.22*** -9.37 -12.87***
(1.71) (2.37) (1.98) (5.43) (2.50)
Latino -8.97*** -4.89** -4.43*** -0.11 -6.12***
(1.15) (1.47) (1.14) (2.25) (1.25)
Other -2.04 -0.69 -1.55 -1.19 -2.65
(1.70) (2.44) (1.72) (2.91) (1.86)
Assessment Semester (Fall 2009=Comparison Group)
Fall 2005 10.92*** 2.13 9.84*** 8.96 9.44**
(2.17) (2.43) (2.44) (5.51) (3.03)
Spring 2006 8.72*** 6.32* 2.42 12.23 6.93
(2.48) (2.87) (2.91) (7.14) (3.64)
Fall 2006 9.43*** 2.84 9.20*** 7.56 9.24**
(2.13) (2.44) (2.43) (5.52) (3.03)
Spring 2007 5.96* 1.00 3.79 7.53 7.60*
(2.60) (3.00) (2.95) (7.15) (3.86)
Fall 2007 12.51*** 2.39 9.80*** 8.71 8.78**
(2.18) (2.45) (2.45) (5.51) (3.04)
Spring 2008 3.32 -0.17 2.08 -0.42 -4.97
(2.58) (3.14) (3.13) (7.25) (4.10)
Fall 2008 3.31 0.69 3.21 7.02 4.47
(2.33) (2.69) (2.56) (5.69) (3.15)
IA Requirement by Ethnicity Interaction (Pre-policy=Comparison Group)
IA Requirement*Asian 7.33 -- 13.36 -18.24 24.17**
(14.39) (12.02) (15.17) (8.41)
IA Requirement*Black 12.86 27.64* 13.42 -19.78 18.15
(12.01) (13.15) (12.12) (20.33) (10.27)
IA Requirement*Latino 12.60 10.47 0.93 -25.79 10.43
(11.28) (10.23) (10.68) (14.64) (7.16)
IA Requirement*White 3.66 7.92 1.63 -25.09 6.11
(11.34) (10.48) (10.84) (14.69) (7.30)
Adjusted R-squared 0.08 0.04 0.03 0.04 0.03
Total Observations (N) 2,714 1,279 3,665 1,440 3,324
*p<.05, **p<.01, ***p<.001
Notes.
Across models, the Spring 2009 assessment cohort was omitted due to collinearity.
Across models, IA Requirement*Other/Unknown ethnicity was omitted due to collinearity.
IA Requirement*Asian is empty in the PA column because there are no observations in that cell.
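The models in Table C2 appear to be linear (OLS-style) regressions of degree-applicable units earned on an IA-requirement indicator, student covariates, cohort dummies, and IA-requirement-by-ethnicity interactions, estimated separately by placement level. A minimal sketch of that specification under this assumption, using statsmodels with invented variable names and simulated stand-in data (the dissertation's actual data and estimation code are not reproduced here):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; column names are illustrative, not the study's.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "units_earned": rng.normal(30, 15, n),
    "ia_requirement": rng.integers(0, 2, n),   # 1 = assessed under the IA policy
    "female": rng.integers(0, 2, n),
    "age": rng.integers(18, 40, n),
    "ethnicity": rng.choice(["White", "Asian", "Black", "Latino", "Other"], n),
})

# OLS with a policy indicator, covariates, and policy-by-ethnicity
# interactions, mirroring the structure of Table C2. In the dissertation
# one such model is fit per placement level (AR, PA, EA, IA, Transfer);
# White is the ethnicity comparison group, as in the table.
model = smf.ols(
    "units_earned ~ ia_requirement * C(ethnicity, Treatment('White'))"
    " + female + age",
    data=df,
).fit()
print(model.summary())
```

The `ia_requirement * C(ethnicity, ...)` term expands into the ethnicity main effects plus the interaction coefficients reported in the bottom panel of the table; the cohort dummies shown in Table C2 would enter the formula the same way.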
Appendix P3-C
Table D1.
Acceptance Rates of California Residents by Campus and Academic Year
2007-08 2008-09 2009-10 2010-11 2011-12 2012-13 2013-14 2014-15
California State University (CSU) 76.1% 68.8% 68.5% 64.7% 75.1% 74.4% 74.5% 75.0%
Bakersfield 47.8% 60.8% 71.5% 66.3% 69.9% 68.3% 68.9% 73.1%
Channel Islands 53.3% 40.1% 67.9% 57.5% 54.1% 64.9% 69.3% 73.2%
Chico 81.2% 80.1% 81.4% 62.2% 76.5% 73.1% 67.2% 72.5%
Dominguez Hills 12.1% 49.3% 59.0% 57.9% 58.2% 59.8% 59.5% 47.3%
East Bay 70.6% 71.8% 73.3% 21.9% 35.0% 71.6% 70.4% 72.4%
Fresno 68.8% 70.1% 71.9% 55.9% 60.9% 58.4% 60.9% 59.0%
Fullerton 60.5% 59.6% 55.1% 45.1% 47.4% 46.3% 48.4% 45.9%
Humboldt 82.4% 75.5% 84.0% 82.7% 81.9% 80.5% 76.9% 77.7%
Long Beach 47.0% 42.0% 31.8% 34.4% 30.5% 30.6% 35.3% 36.2%
Los Angeles 61.8% 76.5% 67.2% 57.2% 72.5% 70.8% 57.8% 63.4%
Maritime Academy 79.4% 74.2% 77.9% 69.1% 74.8% 80.3% 72.7% 67.3%
Monterey Bay 68.9% 70.7% 81.5% 52.3% 47.1% 45.3% 46.5% 72.1%
Northridge 68.7% 74.7% 72.5% 73.5% 64.3% 47.3% 63.7% 54.3%
Pomona 68.8% 52.8% 61.3% 44.7% 55.9% 52.9% 53.4% 51.9%
Sacramento 66.6% 66.8% 80.4% 70.9% 68.6% 71.1% 73.5% 76.2%
San Bernardino 61.1% 24.1% 23.3% 19.1% 20.6% 58.8% 55.5% 66.7%
San Diego 44.2% 31.2% 36.4% 30.0% 31.2% 29.6% 35.1% 31.9%
San Francisco 66.8% 66.0% 72.4% 62.6% 66.4% 65.4% 61.6% 68.3%
San Jose 64.3% 19.7% 16.5% 19.4% 76.4% 64.7% 66.1% 61.3%
San Luis Obispo 44.7% 33.3% 37.3% 32.5% 35.2% 28.3% 32.2% 27.9%
San Marcos 71.7% 73.3% 71.4% 40.8% 58.6% 63.2% 67.0% 62.0%
Sonoma 82.9% 81.1% 77.3% 80.9% 85.7% 82.6% 80.9% 81.0%
Stanislaus 65.3% 65.8% 36.6% 33.6% 78.2% 73.3% 76.0% 73.4%
University of California (UC) 77.5% 75.4% 72.5% 71.6% 69.7% 65.8% 63.6% 62.9%
Berkeley 24.5% 23.8% 22.7% 21.5% 19.7%
Davis 44.5% 43.4% 44.5% 39.4% 38.1%
Irvine 45.4% 44.1% 33.6% 39.0% 35.2%
Los Angeles 21.0% 22.5% 17.7% 17.7% 16.7%
Merced 78.0% 78.9% 76.5% 68.7% 69.2%
Riverside 77.4% 61.6% 61.6% 60.2% 58.0%
San Diego 36.8% 30.6% 32.1% 32.8% 30.3%
Santa Barbara 41.7% 43.4% 41.0% 39.5% 37.2%
Santa Cruz 64.9% 68.5% 61.6% 49.5% 53.6%
Asset Metadata
Creator: Fong, Kristen Erin (author)
Core Title: A multi-perspective examination of developmental education: student progression, institutional assessment and placement policies, and statewide regulations
School: Rossier School of Education
Degree: Doctor of Philosophy
Degree Program: Education
Publication Date: 02/22/2016
Defense Date: 02/22/2016
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: assessment and placement, Community Colleges, developmental education, institutional policies, OAI-PMH Harvest
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Melguizo, Tatiana (committee chair), Dowd, Alicia (committee member), Painter, Gary D. (committee member)
Creator Email: kristen.fong@gmail.com, kristenf@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c40-212889
Unique Identifier: UC11278019
Identifier: etd-FongKriste-4137.pdf (filename), usctheses-c40-212889 (legacy record id)
Legacy Identifier: etd-FongKriste-4137.pdf
Dmrecord: 212889
Document Type: Dissertation
Rights: Fong, Kristen Erin
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA