Running head: ACCOUNTABILITY MODELS 1
ACCOUNTABILITY MODELS IN REMEDIAL COMMUNITY COLLEGE
MATHEMATICS EDUCATION
by
Orchid Nguyen
______________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2014
Copyright 2014 Orchid Nguyen
Dedication
I would like to dedicate this dissertation to my parents whose strengths, endless love and
support sustained me throughout the challenges of life. Together, we have fought and won many
battles! For my husband, Han Nguyen, who believed in me when I did not believe in myself.
Your encouragement, patience, reassurance, and most of all, undying love pushed me to the
finish line. Without you, this journey would never have been possible.
Acknowledgements
This study would not have been possible without the guidance and help of my
dissertation committee. I am grateful for the guidance, feedback, and encouragement provided
by my dissertation chair, Dr. Dennis Hocevar. Your commitment, patience and dedication in
helping me with my educational process have been greatly appreciated. I would also like to thank
Dr. Robert Keim and Dr. Robert Pacheco for their assistance with helping define and refine this
study. Your expertise in the community college system was invaluable.
Table of Contents
Dedication
Acknowledgements
List of Tables
List of Figures
Abstract
Chapter One: Introduction
Problem Identification
Purpose of the Study
Research Questions
Summary
Importance of the Study
Organization of the Study
Chapter Two: Literature Review
Remedial Math Education: A Burden for Higher Education
The Need for Remedial/Developmental Math Courses
Student Progression through Developmental Education
Developmental Programs
Mathematics Achievement and Retention
Characteristics of Underprepared Students
Psychological Theories: Goal Orientation Theory, Self-Efficacy and Attribution Theory
Conclusion
Chapter Three: Methodology
Introduction
Population and Sample
Data Sources
Instrumentation
Predictor and Criterion Variables
Theoretical Framework for the Methodology
Limitations
Delimitations (Generalizability)
Chapter Four: Results
Introduction
Findings
Summary
Chapter Five: Summary, Discussion and Implications
Summary of Results
Discussion
Implications for Policy and Practices
Future Research
Limitations to Generalizability
Conclusion
References
LIST OF TABLES
Table 1: Repeat/Success Rate of Cerritos College and Glendale College
Table 2: Mean Differences
Table 3: Descriptive Statistics of Predictor Variables for Status Model
Table 4: Year-to-Year Stability of Status Model
Table 5: Mean Improvement Rates
Table 6: Linear Correlation for Year-to-Year Improvement
Table 7: Means and Standard Deviations by Cohort and Ethnicity
Table 8: Reliability Statistics for 3-Year Average
Table 9: Correlation Matrix of Success Rates for Each Ethnic Group Over 3 Years
Table 10: Descriptive Statistics for Input-Adjusted Scores
Table 11: Stability of Input-Adjusted Model
Table 12: Over-Performing Institutions for the Status Model
Table 13: Under-Performing Institutions for the Status Model
Table 14: Over-Performing Institutions for the Input-Adjusted Model
Table 15: Under-Performing Institutions for the Input-Adjusted Model
Table 16: Over-Performing Institutions: Hispanics
Table 17: Under-Performing Institutions: Hispanics
Table 18: Over-Performing Institutions: Caucasians
Table 19: Under-Performing Institutions: Caucasians
LIST OF FIGURES
Figure 1: The Formula for Pearson's r
Figure 2: Overall Remedial Math Success Rate: 2004-2005 Cohort
Figure 3: Overall Remedial Math Success Rate: 2005-2006 Cohort
Figure 4: Overall Remedial Math Success Rate: 2006-2007 Cohort
Figure 5: NCE 2004-2005
Figure 6: NCE 2005-2006
Figure 7: NCE 2006-2007
Abstract
Measuring the quality of community college performance is not an easy task, yet some
researchers have attempted to use data not only to analyze, but to solve one of the toughest
problems of community colleges: the low success rates in developmental math courses. Since
higher education is multidimensional, it is problematic to observe the entire package of
knowledge and skills possessed by students. Furthermore, it is difficult to compare and contrast
institutions, since postsecondary institutions possess different educational agendas and student
profiles. In the case of community colleges, the diversity of measures employed by individual
institutions does not allow policy makers the opportunity to examine the overall school
performance.
The purpose of the study was to compare four accountability models that have been
proposed for use in community colleges: status, improvement, input-adjusted and disaggregated
models. For the status model, the percentage of credit students tracked for six years for the most
recent three cohorts who started below transfer level in mathematics was retrieved from the
California Community Colleges Student Success Scorecard. From the scorecard, the remedial
math success rates of all 112 colleges were computed and compared. For the improvement
model, the difference between the remedial math success rates over two consecutive years was
calculated and examined for each institution. For the disaggregated model, the remedial math
success rate of each ethnic group was calculated and examined individually. For
the input-adjusted model, the multiple regression technique was used to compute success rates with ethnic group membership as a control variable. The residual (actual success minus predicted success) was standardized to show relative standing among the 112 community colleges. Over- and under-performing institutions were identified to make more meaningful comparisons among institutions.
Several themes are evident in this study. The four accountability models (status, improvement, disaggregated, and input-adjusted) share some similarities as well as differences. The raw overall success rates are extremely stable from year to year. However, examining raw success rates without considering factors beyond an institution's control, such as ethnicity, is not a fair framework; hence the improvement, disaggregated, and input-adjusted models were introduced. The improvement model is a fairer framework for comparing institutions, yet improvement scores are not stable from year to year. The disaggregated and input-adjusted models yield a better solution for comparing students' success rates: both the disaggregated success rates and the input-adjusted success rates are strongly stable from year to year. A combination of the disaggregated and input-adjusted models is recommended as the best way to measure success in future accountability applications.
CHAPTER ONE: INTRODUCTION
Facing increased international competition, the demand for higher skills in the United
States has increased dramatically (Achieve, 2009). Technological advances are expected to
speed up, and thereby increase the demands on business and the workforce (Karoly & Panis,
2004). Workforce competitiveness is a strong motivation for many Americans to increase their education and skill levels. Carnevale (2008) revealed that 60 percent of jobs are held by those with at least some college experience. The economic benefits of possessing a postsecondary education are significant. According to Achieve (2009), an individual with a high
school diploma will earn $1.4 million over his/her lifetime, whereas another with an associate’s
degree will earn $1.8 million, and one with a bachelor’s degree will earn $2.5 million.
Community colleges have done an outstanding job of providing higher education to
diverse populations. According to Scott (2009), over 2.9 million students annually attend
California community colleges. As open access institutions, these community colleges accept
students regardless of their academic background (Achieve, 2009). Unfortunately, access does
not guarantee success for community college students, and more and more students continue to enter college underprepared (Greene & Foster, 2003). Underprepared students, often
referred to as developmental students, are those who lack college-level skills in subjects such as
mathematics or English (Cohen & Brawer, 2003). In higher education, math is the most frequently required developmental subject, ahead of reading, writing, and English (Provasnik & Planty, 2008), making it a growing issue.
According to the National Center for Education Statistics, three million incoming students enter higher education each year, and half take at least one non-credit remedial course while enrolled to catch up with their counterparts, costing nearly $7 billion a year. Unfortunately, this number is not a
surprise to faculty and staff at community colleges, which have the highest percentage of
students enrolled in developmental coursework across all U.S. postsecondary institutions.
According to Roueche and Waiwaiole (2009), 43 percent of community college students needed
developmental math courses, compared to 29 percent at 4-year institutions.
Community college students must pass at least an intermediate-level algebra course in order to transfer to a four-year institution in California, such as a University of California or California State University campus or a private institution (Scott, 2009). Basic math, pre-algebra, and
beginning algebra are identified as developmental math courses. Developmental math courses
do not apply toward degree requirements, but students are required to enroll in them due to their
low assessment test scores (Cohen & Brawer, 2003). In 2007, approximately 61 percent of
California community college students were assessed at a skill level below intermediate algebra,
while 23 percent were assessed at the intermediate level and only 16 percent were assessed
above intermediate algebra (Scott, 2009). California community college students who completed a college-level math course within two years of their initial enrollment were more likely to earn an associate's degree, complete a certificate, or transfer to a four-year institution than those who had not completed a college-level course within two years of their initial enrollment (Greene &
Foster, 2003). While the main goal of developmental math courses is to prepare students for
their college-level coursework, unfortunately, student success rates as measured by college-level
math course completion and transfer completion are below 50 percent (Scott, 2009). This
number demonstrates that the current community college developmental math education in
California is not meeting the needs of underprepared students.
According to Tai et al. (2006), math is essential to student achievement in scientific
disciplines such as biology, chemistry and physics. Furthermore, one of the most important
contributors to strong performance in economics is the student’s success in math (Ballard &
Johnson, 2004). According to Fike and Fike (2008), community college students who completed
a developmental math course are more likely to persist from fall-to-fall or fall-to-spring
semesters than those who took developmental math but did not complete the course. They also
pointed out that those who did not complete the developmental math course were still more
likely to persist than those who did not enroll in a basic-skills math course at all. There is no doubt that completing a developmental math course is critical for community college students, since
success in developmental math courses increases the possibility of students completing college-
level math courses. Success in math improves their academic careers and provides future career
opportunities. Therefore, interventions are definitely needed for developmental math education
at community colleges.
Problem Identification
Professional Accountability
Low retention rates and success rates at community colleges are issues of professional
accountability. Faculty are hired for a variety of duties, yet the most important duty of a
community college faculty member is to educate and bring students up to specific standards
(Adelman, 2006). If students are unable to reach the standard, the instructors must be held
accountable for such failures. Faculty members are considered professionals in the educational
field and are required to obtain a level of professional knowledge --a set of rules and skills--
prior to entering their classes.
Educators are not only helping students, but they are obligated to serve their clients,
namely their students, with the required set of knowledge and skills. In the case of low retention
rates and success rates in developmental math courses, educators have failed to demonstrate
some key elements of professional accountability. According to Goldberg and Morrison (2003),
the elements of professional accountability include working actively to keep up with the latest research, participating in reviews of educational quality with colleagues, employing research-based instructional strategies, continuously improving one's qualifications, and refusing to undertake actions that are at odds with professional standards. Educators need to be held accountable for the results of the crisis in developmental math courses.
A low success rate in developmental math courses indicates a lack of instructional
leadership. There is a substantial relationship between leadership and student achievement
(Waters, Marzano, & McNulty, 2003). Based on the balanced leadership framework of Waters et al., effective leaders understand how to balance pushing for change with protecting the aspects of culture, values, and norms worth preserving. They also know when, why, and how to
create learning environments that support people and provide the knowledge, skills, and
resources they need to succeed (Waters et al., 2003). Educators constantly need to improve
students’ achievement, especially at the basic skills level. In fact, educators are change agents.
They need to develop a set of responsibilities and practices to ensure the success of students
(Waters et al., 2003). They must be held accountable for students' poor performance.
Policy makers have used the high rates of remediation to blame primary and secondary
schools for the failure of students in college-level work. This was a key argument behind the development of the Common Core and other standards-reform initiatives. Administrators and
policy makers focus on placement tests such as Accuplacer as indicators of students' college readiness (Bailey & Xu, 2012), failing to look at other demographic indicators such as income, age, and educational level.
Purpose of the Study
Measuring the quality of college performance is not an easy task, yet some researchers
have attempted to use data not only to analyze, but to solve one of the toughest problems in
community colleges: the low success rates in developmental math courses. An accountability
model is a systematic method of summarizing college performance. In order to choose the right
model to evaluate college performance, valid inferences need to be generated (Goldschmidt,
Roschewski, Choi, Auty, Hebbler, Blank, & Williams, 2005). Since higher education is
multidimensional, it becomes difficult to observe the entire package of knowledge and skills
possessed by students (Bailey & Xu, 2012). Furthermore, it is difficult to compare and contrast
institutions, since postsecondary institutions possess different educational agendas and student
profiles. In the case of community colleges, the diversity of measures employed by individual
institutions does not allow policy makers the opportunity to examine overall school performance
(Bailey & Xu, 2012).
The purpose of this study was to compare four accountability models: status,
improvement, input-adjusted, and disaggregated, to determine which is the best fit for
community colleges. A status model takes a snapshot of a school’s level of student proficiency
at one point in time (or an average of two or more points in time) and compares the proficiency
level with an established target (Goldschmidt et al., 2005). For instance, in Adequate Yearly
Progress (AYP) under No Child Left Behind (NCLB), the established performance target is the
annual measurable objective (AMO), the level of proficiency the state establishes as an annual goal for schools and students. Progress is defined by the percentage of students achieving at the
proficient level for that particular year, and the school is then evaluated based on whether the
students met or did not meet the goal. In addition, status models can be compared at two or more
points in time to provide a measure of improvement. Such a status model is called an
improvement model of accountability (Goldschmidt et al., 2005).
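The status and improvement computations described above reduce to simple arithmetic on cohort success rates. The sketch below illustrates both; the college names, rates, and target value are hypothetical illustrations, not figures from any actual scorecard.

```python
# Remedial math success rates (share of the tracked cohort who completed
# a college-level math course), by college and cohort year. All values
# are hypothetical.
success = {
    "College A": {2005: 0.42, 2006: 0.44},
    "College B": {2005: 0.31, 2006: 0.38},
}

TARGET = 0.40  # assumed performance target, analogous to an AMO under AYP

for college, rates in success.items():
    status = rates[2006]                     # status model: one point in time
    improvement = rates[2006] - rates[2005]  # improvement model: year-to-year change
    verdict = "met" if status >= TARGET else "did not meet"
    print(f"{college}: status {status:.2f} ({verdict} target), "
          f"improvement {improvement:+.2f}")
```

Note that the two models can disagree: a college below the target (status) may still show the largest year-to-year gain (improvement), which is why the study examines both.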
Status models are categorized into unconditional and conditional status models.
Policymakers are generally more interested in results based on a conditional model, since this model attempts to account for factors affecting performance that lie outside a school's control, whereas an unconditional model uses unadjusted mean school performance, or the percentage of students who are proficient, as an indicator of performance (Goldschmidt et al., 2005). In the status
model, all student success is attributable to the current school in the current year. However,
unlike an unconditional status model, conditional status models recognize that students bring
“human capital” inputs with them to the school (Goldschmidt et al., 2005). One major problem
of the unconditional status model is that status scores can be biased in favor of institutions
located in wealthier neighborhoods. Therefore, this model cannot be used to make a fair and
useful analysis of outcomes among institutions.
One type of conditional status model is the input-adjusted model. Unlike the status
model, the input-adjusted model does not ignore important variables such as income, educational
level, and socio-economic status in the analysis. Input-adjusted models involve conducting an
analysis of demographic and institutional characteristics so that reasonable comparisons can be
made among the outcomes of different institutions (Bailey & Xu, 2012). The input-adjusted
model is an attempt to evaluate school performance in a fair and transparent manner by
controlling student and community characteristics because these characteristics are highly
correlated with student performance (Hocevar & Tate, 2012). The most common method in the
input-adjusted model is to regress graduation rates or success rates on a set of variables and
examine the residuals. Positive residuals imply that colleges are doing better than would be
predicted by the variables included in the regression. The colleges with the largest negative
residuals represent the lowest-ranked colleges (Bailey & Xu, 2012).
A viable alternative to the input-adjusted model is the disaggregated model. Typically, student achievement data are reported for the population as a whole, such as an entire state,
school, grade level, or class. Disaggregating the data can bring to light critical problems and
issues, such as the low success rates in developmental math, which otherwise remain invisible
(Bensimon, 2004). In this study, the data were disaggregated for each individual ethnic group.
The data examined in this study was obtained from California’s Accountability Reporting
for Community College (ARCC). Due to low success and transfer rates, California enacted the
ARCC system to establish more rigorous accountability measures for community college
performance (Postsecondary Education Accountability, California Education Code §84754.5).
The ARCC provided a comprehensive report of statewide and individual community college
outcomes. More importantly, the ARCC indicators include all the necessary student and
community characteristics, such as gender, age group, ethnicity, the California Work Opportunity and Responsibility to Kids program (CalWORKs), Disabled Students Programs & Services
(DSPS) and Extended Opportunity Programs & Services (EOPS). Based on the data from the
ARCC, I was able to analyze the success rates of remedial math courses at community
colleges statewide, up to four levels below the transfer courses.
Although status scores are often used in practice, there is a general consensus that they should not be used for evaluation and accountability purposes, because status scores are a function of factors that are beyond an institution's control (e.g., student ethnicity). The overarching question in this study is whether two types of control, input-adjusted scores and disaggregation by ethnicity, yield institutional-level statistics that are stable enough to be useful for evaluation and accountability purposes.
Research Questions
Based on the problem identification and purpose of the study, the following research
questions delimit the scope of the study:
1. How have indicators of math remediation success changed in the past three years? How
are status indicators distributed? How stable are status indicators of success?
2. How have improvement rates changed over the last two years? How stable and reliable
are improvement rates?
3. Have disaggregated scores changed over the past three years? How reliable are
disaggregated status scores? How stable are disaggregated scores for each ethnic group?
4. What are the descriptive characteristics of year-to-year standardized residuals (input-
adjusted scores) when remedial math is regressed on ethnic group membership? Are year-
to-year input-adjusted scores stable?
Summary
The purpose of the study was to compare four accountability models: status, improvement, input-adjusted, and disaggregated. For the status model, the percentage of credit students tracked for six years through 2011-2012 who started below transfer level in mathematics was retrieved from the California Community Colleges Student Success Scorecard. From the scorecard, the success rates of all 112 colleges were examined individually. For the input-adjusted model, multiple regression was used to regress the success rates on ethnic group membership, with ethnic group memberships as the predictor variables. The
residual was standardized to show relative performance among the 112 community colleges. Over- and under-performing institutions were identified to make more meaningful comparisons among institutions. The input-adjusted model was then compared to the disaggregated model.
Importance of the Study
While the mission of community colleges continues to develop and improve (McCabe,
2003), community colleges still fail to prepare students for college-level coursework. Although
researchers have considered several accountability models, such as the improvement model and
the value-added model, these models have been shown to be unreliable and invalid, even though they do address the problem of outside factors. The results of this study could yield performance targets that are tailored to institutions' specific student characteristics. Furthermore, the
framework of this study could be used for goal-setting purposes for not only remedial math
courses but for other subjects as well.
Organization of the Study
This research is presented as a dissertation in five chapters. The background of the
problem, the research topic, the purpose of the study, and the research questions were posed in
this chapter. A review of the relevant research literature is contained in Chapter 2, Literature
Review. Chapter 3, Research Methodology, outlines the inquiry methods utilized in this study to
answer the research questions. The results of the research are offered in Chapter 4, Findings and
Results. In Chapter 5, Summary and Conclusions, the problem is summarized, the results are interpreted, and the implications of the findings and the limitations of the study are discussed.
CHAPTER TWO: LITERATURE REVIEW
Community colleges have been the solution for many underprivileged students, including
minorities, low-income students, and older students (McCabe, 2000). They serve a critical role
in the education of our nation’s workforce; they fulfill the democratic principle of accepting all students to their campuses, regardless of educational background, qualifications, or
credentials. When students begin their college endeavors, they are required to go through the
assessment process. This process consists of testing their knowledge in mathematics and English
in order to place them in the correct courses. Some colleges administer written tests, while others test via computer. Accurate assessment of basic skills is a major concern for most postsecondary
institutions, especially those that have open admission policies that provide access for all
students, without regard to their academic preparedness (Smittle, 1990, p. 22). One of the
missions of community colleges is to offer basic skills courses (Scott, 2009). Developmental
math courses are formulated to provide fundamental academic skills to prepare students for
college-level courses. The number of students enrolled in developmental math courses by far
exceeds their enrollment in other developmental courses. NCES (2003) estimated that, in the fall
of 2003, the number of freshmen enrolled in developmental math courses was double the number
of those in reading and writing courses.
Remedial education is one of the most difficult issues facing community colleges.
Community colleges are charged with the task of teaching students college-level material;
unfortunately, a majority of community college students arrive unprepared to effectively engage
in the core function of the college. What happens to students who enroll in remedial education?
Do they complete the sequence of remedial courses and advance to college-level courses? Only
30 percent pass all of the remedial math courses they enroll in, and this figure covers only the 44 percent who enroll in any remedial math course at all. About 75 percent of all students who were referred to math three levels below college level drop out between their courses (Attewell, Lavin, Domina, & Levey, 2006).
This chapter examines the characteristics of underprepared students in math courses, and
the effect of remedial math education on community college student outcomes. There is a need
to enhance public understanding of the challenges remedial math students face in higher
education. These challenges lead to the low retention rate and success rate that community
colleges face every year (Bailey, Jeong, & Cho, 2010; Melguizo, Kosiewicz, Prather, & Bos, 2013).
Remedial Math Education: A Burden for Higher Education
Financial Issue
Providing a developmental math program at community colleges is a controversial issue.
While prestigious institutions do not offer developmental math courses and often tuck remedial instruction into tutoring centers (Grubb & Worthen, 1999), as open access
institutions, community colleges acknowledge and admit underprepared students. According to
the data collected by the National Center for Education Statistics (2004), freshmen at public community colleges were twice as likely to be enrolled in a developmental education course as
their public four-year counterparts. Although community colleges have been giving hope to
these underprepared students, they cannot celebrate their role as remediation providers. Critics
have raised the issue of excellence by challenging the legitimacy of higher education that
provides less than college level instruction (McCabe, 2000). The lack of academic preparation
originated from the lack of curricular coordination between high schools and post-secondary
institutions as well as the rigor of the students’ high school curriculum (Adelman, 1999). High school graduates do not necessarily have the knowledge to be sufficiently prepared for college, since high school graduation requirements do not align with college entrance requirements (Adelman, 1999); in the case of community colleges, there are no entrance requirements at all. High school preparation plays a significant role in students’ future academic preparedness (Venezia, Kirst, & Antonio, 2004). Therefore, critics often blame the high school curriculum, arguing that providing remedial education is the duty of high schools. Legislators often challenge community colleges for taking money and wasting it on instruction that has already been provided in public high schools (McCabe, 2000).
Students’ Motivation
According to McCusker (1999), developmental math courses have been criticized for
causing anxiety and feelings of discouragement by reinforcing students’ sense that they are at
risk and forcing them to take longer to finish their degrees. Most developmental math courses
are not counted toward college credit. Many developmental math students suffer from a lack of perceived usefulness, which can hinder motivation and effort, affecting self-regulatory strategy use and, ultimately, course outcomes. Critics argue that the lack of perceived utility of math and math anxiety emerge at an early age and should be diagnosed in high school or even earlier (Miller, 2000).
The Need for Remedial/Developmental Math Courses
The Diversity of the Student Body at Community Colleges
As open access institutions, community colleges are known for promoting the diversity of the student body. Ethnic minorities, first-generation, and low-income students are more likely to be underprepared for college than their counterparts (McCabe, 2000; Venezia et
al., 2004). These students are also less likely to have access to college preparatory courses in
high school and to perform well on college assessment exams (Venezia et al., 2004).
Developmental math courses are critical to these diverse groups of students. When students
successfully complete their developmental math courses, their math program completion rates
increase (Provasnik & Planty, 2008). Completing these courses gives more students, especially minorities, a chance to complete their academic requirements and provides the credentials needed for future opportunities and advancement. More than 60% of underprepared students attending community colleges are deficient in math, and more than 75% of ethnic minority students need to enroll in at least one developmental course in their first year (McCabe, 2000). Therefore, eliminating developmental math courses is equivalent to taking away these students’ chances to accomplish their educational or career goals. Hence, in the quest for academic excellence, critics who deny the need for remedial education sacrifice equity.
Economic Troubles
Proponents of remedial instruction in the community colleges argue that neglect of
remedial education in higher education could lead to significant economic troubles for the United
States. Today’s society demands highly skilled and technical workers who require instruction
beyond high school. Success in math is important not only to the United States as a country, but
also to individual students because it gives them a variety of career options, and it increases
prospects for future income. According to Seidman (2005), students who do not accomplish
their personal and academic goals will become a burden to society.
The demands of today’s society and the society of the future keep rising. In the 1950s,
sixty percent of American jobs were filled by unskilled laborers, but today the percentage is only
fifteen (Roueche & Roueche, 1993). Jobs once available to those without college-level
education will no longer exist in the twenty-first century. According to Roueche and Roueche
(1993), the remaining unskilled jobs are simply dead-end jobs that do not provide a route to a
better life as they may have done in the past. Therefore, a new population of students who would
not previously have gone to college will turn to the community colleges to obtain the education
and skills needed for better employment. These students are most likely to be underprepared for
college-level work (McCabe, 2000). Eliminating remedial education thus amounts to taking away
the key to better lives and better careers.
Besides ethnic minority, first generation, and low income students, McCabe (2000)
also highlights the overwhelming number of Americans born between 1945 and 1964 who will
leave the workforce in the coming decades. According to McCabe (2000), this massive wave of
retirees will result in a shortage of American workers who are both available to work and
equipped with the requisite education and skills. Neglecting remedial education in higher
education will therefore create further economic problems for the United States.
Students’ Need
College math is generally a requirement for any degree completion. Therefore,
struggling with developmental and college-level math courses will prevent students from
accomplishing their educational and career goals (Adelman, 2004). Moreover, besides being
required for degree completion, math is one of the most important factors related to an
individual’s success beyond college (Drew, 1996). Millions of jobs require some mathematical
skills (Saffer, 1999). Math is crucial for entry into many careers and is essential for both existing
and emerging occupations in a global, information and technology-based economy (Drew, 1996).
Students need to understand that math is not simply for daily skills such as managing money, but
also for employment in some of the most profitable occupations (Saffer, 1999). It is imperative
that students possess the basic understanding of math and are able to apply math principles to
their daily lives and work.
Course Placement in Developmental Mathematics
While developing and maintaining a good developmental math program remains one of the
toughest issues for community colleges, administrators face another developmental
education issue: the placement test. As students enroll in community college, they are assessed
using a standardized placement test and then placed in one of the college’s developmental math
sequence courses. Unfortunately, researchers have found that the lower the initial placement, the
less likely the student is to attain a degree or transfer (Bailey, Jeong, & Cho, 2010; Melguizo,
Kosiewicz, Prather, & Bos, 2013). Therefore, community colleges in California
need to place students on a trajectory of courses that maximizes opportunities for success.
According to Title 5 of the California Code of Regulations, California community colleges are
required to use multiple measures of their choice to place students in developmental courses.
Unfortunately, in California, it has been deemed an unfair and biased practice to make these
decisions based only on standardized testing instruments. According to Melguizo et al. (2013),
utilizing a multiple measure boost can enable community colleges to increase access while
ensuring student success in developmental math courses. Furthermore, this may also promote
equity, accuracy, and even efficiency in the assessment and placement process (Melguizo,
Bos, & Prather, 2013). Boosting students into higher-level math courses where they are
likely to succeed can accelerate college completion as well as reduce the financial burden of
community college remediation (Melguizo et al., 2013).
Student Progression through Developmental Education
Before we propose a solution for improving remedial education, one important issue that
needs to be addressed is student progression through developmental math. After being assessed
and placed in the remedial math sequence, not all students move forward and enroll in these
courses. According to Bailey, Jeong, and Cho (2010), about 30 percent of students referred to
remedial education do not enroll in any remedial courses, and only about 60 percent of referred
students actually enroll in the remedial course to which they were referred. Furthermore, of
those students who enrolled in a remedial course, 29 percent exited their sequences after failing
or withdrawing from one of their courses, with a further 11 percent exiting their sequence never
having failed a course (Bailey et al., 2010). The goal of developmental education is to prepare
students for college-level courses. In order to move forward in their academic careers, remedial
students need to complete the sequence. However, based on these findings, failure to enroll is a
greater barrier than course failure or withdrawal.
Developmental Programs
In order to help increase retention for students who are in developmental education
mathematics, a good developmental program is necessary. Developmental education
incorporates a wide range of interventions designed to help underprepared students be successful
in higher education. These interventions include tutoring programs, special academic advising
and counseling programs, learning laboratories, and comprehensive learning centers. They also
include developmental courses, which represent the intervention most commonly used in higher
education (Boylan, 1999, p. 2).
As reported by Boylan (1999), “According to the National Center for Education Statistics
(1996b), just over half of the students graduating from high school in 1994 took a complete
battery of ‘core’ or college preparatory courses…nearly two-thirds of current high school
graduates will attempt college at some point” (p. 2). Therefore, according to the report, just
under 50 percent of our graduating high school students have not completed the prerequisite
courses for college. Boylan (1997) found what most already believed: that a program would
have a greater chance of success if the services provided were more complete. This research,
along with the National Study of Developmental Education, has provided the minimum
components required for a successful developmental program. “These components include
centralized or well-coordinated administrative structures, mandatory assessment and placement,
tutoring with tutor training, commitment to faculty and staff development, advising and
counseling, and ongoing and systematic evaluation” (Boylan, 1997, p. 8). Of the components
examined, centralization, tutor training, and evaluation have the strongest positive relationship to
success. Students who were part of centralized developmental programs had higher rates of
retention than those in decentralized programs, but first-term and cumulative GPAs did not
differ between the two. Students in institutions that had mandatory assessment instead of voluntary
assessment for students participating in developmental programs were more likely to pass their
first developmental mathematics class. Tutoring programs were found to have no relationship to
GPA or retention until a training component was added.
St. John (2000) examined the program at Prince George’s Community College and
reported, “[w]hat the program at Prince George’s Community College does is improve retention
rates and graduate students who perform on an equal level – and in many cases, higher – level of
academic competence than students who did not need the ‘developmental assistance’” (p. 1).
According to St. John (2000), a report by the Southern Regional Education Board (SREB) –
Reducing Remedial Education: What Progress Are States Making? recommends that colleges do
three things in order to reduce remediation:
1. Encourage students early in high school to begin academic planning for the skills
needed to complete the college-prerequisite curriculum;
2. Provide guidance to high school students in taking courses that will prepare them for
college-level study;
3. Help students in the application process for college admission and financial
assistance.
St. John (2000) quoted the report as stating that “nearly 80 percent of the students who now
enter four-year colleges in most SREB states are ready for college-level work” and that “the
percentage of entering students needing remediation at two-year institutions is up around 50
percent” (p. 2). At Prince George’s Community College, 1,295 out of 1,663 students needed to
take at least one
developmental course. The findings indicate that students in developmental programs do as well as,
if not better than, students who have never been part of a developmental course. The report also
states that the need for remediation is due to poor preparation.
Mathematics Achievement and Retention
Mathematics achievement and retention are continuously discussed nationwide. One of
the studies presented in this chapter tested a program of teaching lower-level (remedial)
mathematics with the ultimate goal of increased mathematics performance, work, study, and
concentration skills (Hagedorn et al., 2000). Elementary and college algebra were chosen as the
math groups. The control groups retained the existing instruction, while the experimental groups
were taught by teachers who had received training in instructional procedures. Hagedorn et al. (2000)
examined “(a) the percent of students who completed the course (retention), (b) students’
performance in mathematics, and (c) the students’ ability to work with full concentration” (p.
143).
Notably, the retention rates for the students in the treatment group were
found to be higher, even though tests were given during each class and grading was done on an
absolute scale without partial credit. No significant differences were found between the
math performance of the control and treatment groups. One possible reason is
that the control group was self-selected and consisted of students who were planning on
becoming mathematics teachers. Given that comparison group, the absence of a significant
difference suggests that the treatment group had mastered the material at hand.
Those who participated in the teacher-training classes showed large increases on their
arithmetic and reading comprehension tests. Improvement between the pre-test and post-test
comprehension exams showed a correlation of 24 percent. Though the control
groups showed no significant change in concentration, the treatment classes did. This study
suggests that adjustments in mathematics teaching methodology can significantly affect
concentration levels as well as retention rates (Hagedorn et al., 2000).
Characteristics of Underprepared Students
In order to obtain a deeper understanding of the problem behind community college
remedial math education, the characteristics of underprepared students need to be understood and
addressed. Remedial math education involves understanding the student population, faculty
attitudes, psychological theories of learning, and conventional wisdom about higher education.
Most underprepared students are members of ethnic minorities, are first generation students,
come from low socioeconomic backgrounds, are culturally disadvantaged, and were low
achievers in high school (based on their GPAs), with low self-esteem and high failure expectations.
Robinson and Kubala (1999), through their study, found that achievement in courses was
affected by gender, ethnicity, prior exposure to course content, time since high school diploma or
GED, high school experiences, enrollment status, and enrollment in a study skills course. In
examining students between the eighth and tenth grades, Catsambis (1994) found that female
students are not far behind male students in grades and test scores. Also,
“white females are exposed to more learning opportunities in mathematics than are male
students” (Catsambis, 1994, p. 199). Even though females have more exposure, they are less
interested and confident in their mathematics abilities than their male counterparts. Gender
differences were found to be smallest among African American students and largest in the Latino
student population.
Among minority students, regardless of gender, the major obstacles for mathematics
achievement are limited learning opportunities and low levels of achievement, while, for white
female students, it is their attitudes and career choices. When it comes to enrollment in
mathematics classes, African American males had the highest percentage in low-ability classes
while white females had the highest percentage in high-ability classes. “These differences are
important because they show that both female and white students have greater opportunity to
learn mathematics, since they are more often exposed to the rigorous and demanding curricula of
high-ability classes” (Catsambis, 1994, p. 203).
Catsambis (1994) found that within white, Latino, and African American student groups,
there were no significant interactions between socioeconomic status and gender on achievement.
The most significant predictors of mathematics attitudes for Mexican American Hispanics are
ability, the quality of mathematics preparations, college credits, and the perceived importance of
mathematics, while language preference, cultural identification, and socioeconomic status were
not found to affect math attitudes (Ramirez et al., 1990).
Students who were in remedial classes were found to have studied less in high
school, to have lower GPAs, and not to have been strong collaborative participants in college.
Hagedorn et al. (1999, p. 279) also “found overrepresentation by women and minorities in
college remedial math classes.” Therefore, “high school personnel must take the impetus and
provide opportunities and encouragement to ensure that all students, regardless of gender or
ethnicity, are enrolling in higher-level mathematics courses” (Hagedorn et al., 1999, p. 279).
In comparing math achievement for non-remedial and remedial students, Hagedorn et al.
(1999) found that higher income, nonminority status, and having parents with higher education
led to higher levels of math achievement for non-remedial students. Higher levels of high school
mathematics led to higher levels of math achievement for remedial students. Racial composition
of high school and neighborhood was found to be a significant factor for both remedial and non-
remedial students. For both remedial and non-remedial students, a significant connection
between high school study habits and high school GPA was found, with the connection being
stronger for remedial students (Hagedorn et al., 1999).
There is an ongoing debate over who should be responsible for students who need
remedial mathematics. Universities are beginning to put timelines on how long students have
to complete these courses. Students who do not complete the courses in time must turn
elsewhere for help: the community college.
Waycaster (2001) investigated the success of students in the Virginia Community
College system to see if they would pass a college-level mathematics course right after
completing the developmental prerequisite course. The passing percentages were between 29
percent and 64 percent among the system's colleges. Two of the five colleges attained at least a
50 percent passage rate; 10 of the 15 math groups had more than 50 percent passage rates. “In
most cases, students who had taken the prerequisite developmental course did as well as or better
than students who had placed into the course” (Waycaster, 2001, p. 410). Therefore, this
research demonstrated that developmental programs are useful.
The study revealed that the retention rates for developmental students were considerably
higher. The retention rates for developmental students were between 61.9 percent and 80.6
percent, while the range for non-developmental students was 42.1 percent to 61.9 percent. Class
size might have an effect on retention rates because the college with the lowest retention rate for
developmental students had class sizes over 25. Of the students that graduated from the five
community colleges, 40 percent took a developmental course at one time or another.
According to Tinto’s retention model (1975), certain student entry characteristics affect
student attrition and retention. These characteristics include family background
(socioeconomic status, parental educational level), individual attributes (academic ability, race,
and gender), and precollege schooling experiences (high school academic achievement).
Characteristics of underprepared students are negatively linked to each of these three areas.
These students need more attention because underprepared students reported greater
difficulties than college-ready students in each person-environment interaction, with
significantly lower high school GPAs, weaker coursework in some academic areas, lower
self-ratings of ability, and lower predictions of future accomplishments (Grimes & David, 1999).
Moreover, these students provided lower self-ratings than college-ready students on the six
highest-rated abilities, including cooperativeness, drive to achieve, understanding of others,
academic abilities, and intellectual self-confidence. These areas suggest that the problems
underprepared students faced are mostly psychological. Therefore, in order to obtain a deeper
understanding of remedial students, educators should focus on some psychological theories.
Psychological Theories: Goal Orientation Theory, Self-Efficacy
and Attribution Theory
When addressing characteristics of remedial students, it is important to consider several
important psychological theories such as goal orientation theory, self-efficacy, and attribution
theory. Several programs are designed to improve students' learning abilities and outcomes;
however, programs that lack elements of these psychological theories might work for
highly motivated, goal-oriented students with strong support structures but are less likely to be
effective with remedial/underprepared students.
Goal orientation theory is predominantly studied in the domain of education.
Furthermore, the theory is widely accepted as influencing behavior and learning (Pintrich, 2000).
Goal orientation theory examines the reasons students engage in their academic work (Anderman
& Anderman, 1999). Some educators misinterpret students’ motivation and assume that, simply
by enrolling in a postsecondary institution, a student is demonstrating a clear academic goal and
a thirst for learning (Stage, 1996). However, this is not the case for most remedial students.
According to Gardiner (1994), many remedial/underprepared students view college as a gateway
to a better job. In order to apply the appropriate theory to remedial students, it is important to
understand how goals are conceptualized in the research literature (Pintrich, 2000). Two major classes of
goals are classified as mastery goals and performance goals. Students who are mastery-oriented
are concerned with self-improvement, comparing their current level of achievement to their own
prior achievements. In contrast, students who are performance-oriented are interested in
competition. They tend to use other students as points of comparison, rather than themselves
(Pintrich, 2000).
Mastery and performance goals are each divided into subcategories: approach and avoid
goals. Mastery-approach oriented students focus on mastering an academic task whereas
mastery-avoid students focus in avoiding misunderstanding the task. In terms of performance
goals, students hold performance-approach goals are interested in demonstrating that they have
more ability than other students, as opposed to those who are performance-avoid oriented, who
want to avoid appearing incompetent or stupid (Pinrich, 2000). Most remedial students hold
performance goals, since they are more likely to be motivated by extrinsic circumstances,
such as lack of employment and parental encouragement (Stage, 1996).
Some theorists also discuss the culture of the school when they study goal orientation
theory. Students’ goals can be conceptualized at differing organizational levels. Classroom
goal structures refer to students’ beliefs about the goals that are emphasized by their teachers in
their classrooms. Under a mastery goal structure, students believe that instruction in the class
is characterized by an emphasis on improvement, learning material to a level of mastery, and
self-comparison; under a performance goal structure, by contrast, they believe that the class is
characterized by competition and an emphasis on grades and outperforming others (Pintrich, 2000).
Without a clear understanding of students’ goals and motivation, educators might negatively
affect student performance. For example, some educators show disappointment when remedial
students do not live up to their potential (King & Baxter-Magolda, 1996). This leads to the
remedial students’ perception of the school's culture being one that only focuses on grades,
achievement, competition, and outperforming others (Anderman & Johnston, 1998).
Self-Efficacy Theory
Researchers often focus on another important psychological theory called self-efficacy
for understanding underprepared students. According to Bandura (1997), self-efficacy beliefs
are judgments that individuals hold about their capabilities to learn or perform courses of action
at designated levels. Self-efficacy provides the foundation for human motivation, well-being,
and personal accomplishment. Researchers suggest that self-efficacy beliefs shape choices of
activities, careers, environments, and, thereby, lives. Understanding self-efficacy is critical in
the school setting since perceptions of academic ability and expectations for academic
performance are significantly related to college persistence (House, 1992).
Confidence plays an important role in student learning outcomes. Confident students
anticipate successful outcomes. Students who have confidence in their academic skills expect
high marks on exams and expect the quality of their work to reap academic benefits. By
contrast, students who lack confidence in their academic skills expect a low grade even before
they begin an exam or enroll in a course (Bandura, 1997). Students determine the effects of their
actions, and their interpretations of these effects help create their efficacy beliefs. Self-efficacy
plays an important part in remedial students’ success. If they perform well on mathematics
exams and earn high grades, they develop confidence in their mathematics capabilities. High
self-efficacy gives them the boost to finish their remedial math courses in a timely manner and
move on to college-level courses.
According to Graham and Weiner (1996), self-efficacy beliefs, behavior changes, and
outcomes are highly correlated, making self-efficacy one of the best predictors of behavior.
Remedial math teachers need to learn how to nurture the self-efficacy beliefs of their students.
Bandura (1996) suggests that teachers need to provide students with challenging tasks and
meaningful activities and to supervise these efforts with support and encouragement. To be
effective with underprepared students, teachers need to know their students’ capabilities and
assign work that is challenging but that they are confident can be accomplished with proper
effort. According to Bandura (1996), assessing students’ self-efficacy beliefs can provide
teachers with important insights about their students’ motivations.
Conclusion
In the twenty-first century, higher education has become a gateway to a better life. As
open access institutions, community colleges have been one solution to the problems of access to
higher education. The missions of community colleges have expanded over time and become
more comprehensive. According to Cohen and Brawer (1989), community colleges enable
students to enjoy more rewarding employment and fuller participation in life in the United
States. Community colleges have become the face of remedial education. They are the main
providers of remedial education, workforce training and courses to prepare students to transfer to
four-year universities (Cohen & Brawer, 1989). Community colleges will clearly continue,
year after year, to be the gateway for hundreds of thousands of students who require remedial
education. Based on the positive outcomes of a number of studies of developmental math courses,
developmental education has served its purpose, which is to provide students a solid foundation
for college-level courses.
With the changing demographics and the changing economy of the United States, the
number of developmental students will likely increase. Therefore, more research on remedial
education, especially remedial math courses, is needed to improve students' learning outcomes.
Researchers need to pay more attention to psychological theories as indicators in the study of
remedial mathematics students. Furthermore, a remedial education program can serve as a
cost-effective investment: for a modest expenditure of public funds, the data show that students
who have completed developmental math sequences perform as well as or better than
college-ready students in college-level classes. For a better society, community colleges must be
proactive in
demonstrating the worth of remedial education. Stakeholders need to recognize the merit of
developmental programs throughout the United States.
CHAPTER THREE: METHODOLOGY
Introduction
The purpose of this quantitative study was to compare four accountability models using
remedial mathematics courses in California community colleges. In addition, a correlation
analysis was conducted to illuminate the relationship between the success rates of remedial math
students and ethnicity. The aim of this study is to answer the established primary research questions:
1. How have indicators of math remediation success changed in the past three years? How
are status indicators distributed? How stable are status indicators of success?
2. How have improvement rates changed over the last two years? How stable and reliable
are improvement rates?
3. Have disaggregated scores changed over the past three years? How reliable are
disaggregated status scores? How stable are disaggregated scores for each ethnic group?
4. What are the descriptive characteristics of year-to-year standardized residuals (input-
adjusted scores) when remedial math is regressed on ethnic group membership? Are year-
to-year input-adjusted scores stable?
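As a rough illustration of research question 4, input-adjusted scores can be obtained as standardized residuals from an ordinary least squares regression of a college's remedial math success rate on an ethnic-composition predictor. This is only a sketch: the five college records and the single predictor `pct_underrepresented` are hypothetical stand-ins for the CCCCO Scorecard variables, and the actual study may use several group-membership predictors at once.

```python
# Sketch: input-adjusted scores as standardized OLS residuals.
# All numbers are hypothetical; the real analysis uses CCCCO Scorecard data.
from statistics import mean, pstdev

# Hypothetical (success_rate, pct_underrepresented) pairs, one per college.
colleges = [(0.52, 0.40), (0.61, 0.25), (0.47, 0.55), (0.58, 0.30), (0.50, 0.48)]

y = [c[0] for c in colleges]  # outcome: remedial math success rate
x = [c[1] for c in colleges]  # input: ethnic-composition predictor

# Ordinary least squares slope and intercept for a single predictor.
mx, my = mean(x), mean(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

# Raw residuals: observed minus input-predicted success rate.
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

# Standardize (mean 0, SD 1) so scores are comparable year to year.
sd = pstdev(residuals)
z_scores = [r / sd for r in residuals]

print([round(z, 2) for z in z_scores])
```

A positive z-score marks a college performing above what its student composition predicts; stability (the second half of question 4) would then be the year-to-year correlation of these scores.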
Population and Sample
The unit of analysis in this study was the institution. The population
studied consisted of the 112 institutions of the California Community Colleges System (CCCS).
The largest system of higher education in the nation, California Community Colleges serve over
2.7 million students. Community colleges are spread all over the state, from the largest, Los
Angeles Community College District, with 99,000 full-time equivalent students, to the smallest,
Feather River Community College, with 1,800 students. As open access institutions, these 112
colleges offer certificates and degrees for future careers, prepare students for transfer to four-year
universities and colleges, and, most importantly, provide basic skills education (California
Community Colleges Chancellor’s Office [CCCCO], 2013).
Developmental education has been the core function of the California Community
Colleges throughout their history. The complete dataset that contains outcome results for
California’s performance measurement system was provided by the Research, Analysis and
Accountability Unit (RAA) of the CCCCO. The complete dataset maximizes the accessible
population and accurately aligns the study sample with the universe of
colleges within the CCCS. This study focuses on the Basic Skills Accountability report. This
report gives policy makers the opportunity to analyze the system-wide efforts and outcomes in
basic skills, particularly in mathematics.
No college self-selected into the sample of institutions. A correlational, quantitative
research design, based on publicly available data (CCCCO, 2013), was used to determine the
relationships in performance among colleges on the outcome measures. Indicators varied, and
their variation and correlations were examined.
Data Sources
The main source of data accessed and utilized to complete the quantitative analysis for
this study is the CCCCO Data Mart. Under the CCCCO Data Mart, the Basic Skills Progress
Cohort Tracking Tool gives all California community colleges immediate access to data on
student progress through their English, reading, ESL, and math pipelines. In this study, the Basic
Skills Progress Tracker was used to retrieve cohorts of students in Basic Skills Mathematics
statewide (which includes all math courses prior to college level). For the cohort of each specific
college, the report includes the headcount, the count of enrollment by the students, and the count
of successful enrollments (grade of ‘A’, ‘B’, ‘C’, ‘P’, ‘IA’, ‘IB’, ‘IC’, ‘IPP’). The Basic Skills
Progress Tracking tool can be used to disaggregate the report by the demographic and
programmatic categories such as age group, gender, ethnicity, etc.
Instrumentation
The instrument evaluated in this study was the performance measurement
system created for two-year institutions by the State of California, officially referred to as ARCC
2.0. Since this study focused on remedial math students, the Basic Skills Accountability report
was evaluated. Under ARCC 2.0, performance is measured with four categories of metrics:
descriptive metrics or “demographic snapshots,” workload metrics, assessment and placement
metrics, and student progress metrics. The Basic Skills Accountability report has raised
policymakers’ awareness of system-wide efforts and outcomes in basic skills.
For this study, data were retrieved from the Student Success Scorecards and examined for
each of the community colleges included in the sample for three cohorts: year 2004-2005, year 2005-
2006, and year 2006-2007. For each cohort, the aggregate data included:
1. Students: The student number is the headcount for four levels below transfer, three levels
below transfer, two levels below transfer, and one level below transfer.
2. Success: Success is the count of successful enrollments (grade of ‘A’, ‘B’, ‘C’, or ‘P’) for
four levels below transfer, three levels below transfer, two levels below transfer and
one level below transfer.
Based on these data, the success rate of each cohort was calculated for each of the four
levels below transfer level. The data have undergone statewide recoding to standardize
critical course data elements, i.e., the code for levels below transfer level and the Taxonomy of
Programs (TOP) codes for identifying courses. As a result of this standardization, the
comparability of courses across community colleges has increased and some courses that had
ACCOUNTABILITY MODELS 39
previously counted as basic skills courses have lost that designation. Therefore, the data
included the success rates of all colleges; however, not all colleges contain all four levels below
transfer level.
Predictor and Criterion Variables
Predictor Variables
The predictor variable analyzed in this study is ethnicity. Information about students’
ethnicity is provided by the students themselves on the application for admission to college.
Ethnicity is one of the data elements measured by the CCCCO and is defined as the student’s
self-declared racial or ethnic background. In this study, race and ethnicity are categorized into
five groups (African American, Asian, Filipino, Hispanic, and White) plus a residual “Other”
category. For this study, the data were retrieved from the Student Success Scorecard. For each
cohort, the numbers of students, attempts, and successes were retrieved and categorized into
these racial groups.
Performance Indicators (Output Measures)
Successful Course Completion Rate for Credit-Based Basic Skills Math Courses (Basic
Skills Success Rate): The Basic Skills Math Success Rate is calculated as the percentage of
students enrolled in a credit-based basic skills math course over the previous six academic years
who receive:
1. a final course grade of A, B, or C, or
2. a final assignment of Pass (P) for the course.
The Basic Skills Math Success Rate includes the math progression of the cohort of students
who enrolled from Fall 2006 to Spring 2012 and the cohort of students who enrolled from
Spring 2007 to Fall 2012. The Success Rate is the percentage of those students who passed at a
given level. The formula for the success rate at each level is:
Success Rate = Number of Successes ÷ Number of Students
The formula for the repeat rate at each level is:
Repeat Rate = (Number of Attempts − Number of Students) ÷ Number of Attempts
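The two formulas can be sketched directly in code; a minimal illustration using the Cerritos College counts from Table 1 (four levels below transfer):

```python
# Success and repeat rates as defined above; counts taken from Table 1
# (Cerritos College, four levels below transfer).

def success_rate(successes: int, students: int) -> float:
    """Success rate = Number of Successes / Number of Students."""
    return successes / students

def repeat_rate(attempts: int, students: int) -> float:
    """Repeat rate = (Attempts - Students) / Attempts, i.e., the share of
    enrollments that are re-takes rather than first attempts."""
    return (attempts - students) / attempts

print(round(success_rate(475, 832), 3))   # 0.571
print(round(repeat_rate(1062, 832), 3))   # 0.217
```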
The success and repeat rates of Cerritos College and Glendale College are shown in
Table 1; however, this study analyzed only success rates. Cerritos College starts at basic
mathematics, which is four levels below transfer level, whereas Glendale College starts at pre-
algebra, which is three levels below transfer level.
Table 1
Repeat/Success Rate of Cerritos College and Glendale College
Cerritos College
Fall06-Sp2012   Students  Attempts  Success  Success Rate  Repeat Rate
4 levels        832       1062      475      0.571         0.217
3 levels        401       569       284      0.708         0.295
2 levels        264       451       168      0.636         0.415
1 level         137       267       98       0.715         0.487

Glendale College
Fall06-Sp2012   Students  Attempts  Success  Success Rate  Repeat Rate
3 levels        293       364       188      0.642         0.195
2 levels        151       334       117      0.775         0.548
1 level         78        143       68       0.872         0.455
Theoretical Framework for the Methodology
Status Model
From the Student Success Scorecard, the success rates of all 112 colleges were recorded
and examined individually.
Improvement Model
From the Student Success Scorecard, the success rates of all 112 colleges were recorded
for all three cohorts: Year 1 is 2004-2005, Year 2 is 2005-2006, and Year 3 is 2006-2007. The
improvement scores are the difference between the success rates of two consecutive years.
Step 1: The formulae for computing improvement scores are:
Improve 1 = Overall Success Rate of Year 2 − Overall Success Rate of Year 1, for each
college.
Improve 2 = Overall Success Rate of Year 3 − Overall Success Rate of Year 2, for each
college.
Step 2: Repeat the same procedure to calculate the improvement scores of each ethnic
group.
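Step 1 can be sketched in a few lines; the three yearly rates below are hypothetical, not a particular college’s data:

```python
# Improvement scores as year-over-year differences in overall success rates
# for one hypothetical college.
year1, year2, year3 = 0.230, 0.244, 0.249

improve_1 = year2 - year1  # Improve 1 = Year 2 - Year 1
improve_2 = year3 - year2  # Improve 2 = Year 3 - Year 2
print(round(improve_1, 3), round(improve_2, 3))  # 0.014 0.005
```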
Input-Adjusted Model
Success rates alone cannot be used to make fair and useful between-school comparisons
because they are highly correlated with student and community characteristics that are outside
the control of schools and instructors (Hocevar & Tate, 2012). Therefore, for this model, besides
the success rate index, an index of achievement adjusted for ethnic group composition was
calculated so that leaders can assess faculty performance. This was a three-step process.
Step 1: Regress the outcome measure on the predictor variables. The multiple
regression technique was selected for this study, with ethnic groups as independent variables.
The formula used to regress the outcome measure on the independent variables is:
Y′ = b₁x₁ + b₂x₂ + b₃x₃ + b₄x₄ + b₅x₅ + a
where Y′ represents the predicted value of the variable being explained; b₁, b₂, b₃, b₄, and b₅
represent the regression coefficients used as multipliers for the predictor variables; a is the y-
intercept (the value of the performance indicator when all of the predictor variables are 0); and
x₁, x₂, x₃, x₄, and x₅ represent the predictor variables in the study as follows:
x₁ = African American
x₂ = Asian
x₃ = Filipino
x₄ = Hispanic
x₅ = White
The sixth ethnic group, “Other,” served as the reference category and was coded zero on all five dummy-coded ethnic variables.
Step 2: In order to make meaningful comparisons among institutions, differences
were calculated using the residuals produced by the multiple regression equation. The
difference between the actual rate and the predicted rate on the performance indicator produced
a residual value reflecting the error remaining after the combined effect of the input variables is
taken into account. The formula for the residual is
Residual = y − y′
where y represents the actual success score, and y′ represents the predicted score based on the
regression equation. Institutions with positive residual values (i.e., ARCC 2.0 success rate
above the regression line) over-performed on the performance indicator, given the effect of
the inputs. In contrast, institutions with negative residual values (i.e., ARCC 2.0 success rate
below the regression line) under-performed on the performance indicator, given the combined
effect of the independent variables.
Step 3: Standardized residual scores were computed to allow for comparisons among
institutions.
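The three steps can be sketched as follows. The design matrix and success rates below are hypothetical (the study itself used SPSS on the 112-college sample), with the ethnic-composition columns standing in for the five dummy-coded predictors:

```python
# Steps 1-3 of the input-adjusted model: regress success on ethnicity,
# form residuals (actual - predicted), then standardize them.
import numpy as np

# Hypothetical proportions of African American, Asian, Filipino, Hispanic,
# and White enrollment at eight colleges ("Other" is the omitted category).
X = np.array([
    [0.10, 0.30, 0.05, 0.35, 0.15],
    [0.20, 0.10, 0.05, 0.45, 0.15],
    [0.05, 0.40, 0.10, 0.20, 0.15],
    [0.15, 0.15, 0.05, 0.40, 0.20],
    [0.10, 0.20, 0.10, 0.30, 0.25],
    [0.25, 0.05, 0.05, 0.50, 0.10],
    [0.30, 0.10, 0.05, 0.35, 0.15],
    [0.08, 0.25, 0.07, 0.30, 0.20],
])
y = np.array([0.25, 0.20, 0.32, 0.23, 0.27, 0.18, 0.19, 0.29])  # success rates

# Step 1: least-squares fit of Y' = b1*x1 + ... + b5*x5 + a.
A = np.column_stack([X, np.ones(len(y))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Step 2: Residual = y - y' (positive = over-performing).
residuals = y - A @ coef

# Step 3: standardize the residuals for cross-college comparison.
z = (residuals - residuals.mean()) / residuals.std()
print(np.round(z, 2))
```

By construction the standardized residuals have mean 0 and standard deviation 1, which is what makes the cross-college ranking in Step 3 meaningful.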
Ethnic group membership was then correlated with the success rates, along with a visual
representation using a scatterplot.
The multiple regression used the least-squares regression line that minimizes the sum of
the squared errors. The formula is:
min Σ(y − y′)²
The difference between each college’s rate and its predicted rate on the performance
indicators produced a residual value reflecting the error remaining after the effect of the
input variables.
A Pearson product-moment coefficient was calculated as a descriptive measure to
determine the existence and direction (or absence) of correlation between the variables
(Salkind, 2008). The linear correlation coefficient is computed from paired quantitative values
x and y. The formula is shown in Figure 1.

r_xy = [nΣxy − (Σx)(Σy)] / √{[nΣx² − (Σx)²][nΣy² − (Σy)²]}

Figure 1. The formula for Pearson’s r

r_xy is the test-retest reliability correlation coefficient from year to year, n is the number of
paired x and y scores contained in the analysis, Σx and Σy represent the sums of the x and y
values, respectively, and Σxy represents the sum of the products of each paired x value
and y value.
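The computational formula in Figure 1 can be checked with plain Python; the two short series below are illustrative year-to-year success rates, not study data:

```python
# Pearson's r via the computational formula in Figure 1.
import math

x = [0.22, 0.25, 0.19, 0.30, 0.27]  # e.g., one year's success rates
y = [0.24, 0.26, 0.20, 0.31, 0.28]  # the following year's rates

n = len(x)
sx, sy = sum(x), sum(y)
sxy = sum(a * b for a, b in zip(x, y))
sxx = sum(a * a for a in x)
syy = sum(b * b for b in y)

# r_xy = [nΣxy − (Σx)(Σy)] / sqrt{[nΣx² − (Σx)²][nΣy² − (Σy)²]}
r = (n * sxy - sx * sy) / math.sqrt((n * sxx - sx**2) * (n * syy - sy**2))
print(round(r, 4))
```

Because the second series tracks the first almost exactly, r comes out close to +1, the kind of high year-to-year stability reported later for the status model.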
Analysis
The results are presented in a correlation matrix with coefficients ranging from -1.0
to +1.0. Since the relationship between variables can be direct (positive) or indirect (negative), a
two-tailed test was utilized with a significance level of 0.05; given the sample size of
112 colleges, the degrees of freedom are 110 (Salkind, 2008). In order to further
interpret the correlation coefficients, the respective coefficient of determination, i.e., the square
of the correlation coefficient, was also derived to explain the percentage of variance in one
variable that is accounted for by the other. While the goal of this analysis was
to determine the relationship between variables, correlation does not necessarily prove
causation. The relationship between race and success rate is the concern of this study. The
purpose of this particular analysis was to determine whether ethnic group membership is an
indicator of the success rate for remedial math courses statewide.
Disaggregated Model
Although equity is valued in principle at many institutions, it is not measured in relation
to education outcomes for specific groups of students (Bensimon, 2004). In order to bring about
change in an institution, stakeholders must analyze and integrate the meaning of inequalities
among ethnic groups. Since ethnic groups are an important factor in this study, and given that
input-adjusted scores are needed to control for the bias introduced when institutions have
different ethnic compositions, a disaggregated model is a viable alternative to the input-adjusted
model. For the disaggregated model, each ethnic group was analyzed separately.
Step 1: The percentage of credit students tracked for six years through 2011-2012 who
started below transfer level in mathematics was retrieved from the California Community
Colleges Student Success Scorecard.
Step 2: Data were disaggregated by computing and recording the success rate of each of
the five ethnic groups: Hispanic, Caucasian, Asian, African American, and Filipino.
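Step 2 in miniature: success rates computed separately for each ethnic group. The student and success counts below are hypothetical, not Scorecard data:

```python
# Disaggregated success rates: (students, successes) per ethnic group
# at one hypothetical college.
counts = {
    "Hispanic":         (400, 92),
    "Caucasian":        (350, 94),
    "Asian":            (120, 41),
    "African American": (90, 15),
    "Filipino":         (60, 17),
}

rates = {group: round(s / n, 3) for group, (n, s) in counts.items()}
for group, rate in rates.items():
    print(f"{group:<17} {rate:.3f}")
```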
Limitations
This study encompasses a pool of comparable institutions within the 112 public
community colleges in the State of California. The validity of the study is limited by the
reliability of the data provided by the Chancellor’s Office as well as the additional data from the
district offices and other sources. Furthermore, in this study, the sample size of each ethnic
group is missing; therefore, the disaggregated model could not yield more reliable results.
Delimitations (Generalizability)
An abundance of data is available regarding the success rates of remedial math courses at
community colleges. However, variables other than ethnic group, such as age, gender, and
SES, were excluded from this study. The study was limited to the three latest cohorts of remedial
math students. Student retention rate, transfer rate, and program completion are outside the
scope of this study. The data gathered from these sources are not comparable with data from
other postsecondary institutions such as four-year universities (UC and CSU), for-profit
institutions, and private universities and would, therefore, not be an appropriate basis of analysis
for these institutions. This study cannot be generalized to other populations.
CHAPTER FOUR: RESULTS
Introduction
The purpose of this quantitative study was to compare four accountability models using
remedial mathematics courses in California community colleges. These four models are Status,
Improvement, Disaggregated, and Input-Adjusted. In addition, to illuminate the relationship
between the success rates of remedial math students and ethnicity, a correlation analysis was
conducted. For each model, three fundamental quantitative procedures were performed when
warranted: (a) summary descriptive statistics were computed, (b) the reliability of the success
scores of each model was determined, and (c) the stability coefficients for the success scores of
each model were computed and analyzed. To determine the significance of the symmetry and
variability of the frequency distributions, t-values were obtained by dividing the observed
skewness and kurtosis by their respective standard errors.
Findings
Status Model
Research Question: How have indicators of math remediation success changed in the
past three years? How are status indicators distributed? How stable are status indicators of
success?
Measures of central tendency, standard deviation, and sample size were computed for
four sets of success scores of remedial math courses in community colleges. Table 2 shows the
means and standard deviations for the 2004-2005 success rates (cohort 1), 2005-2006 success
rates (cohort 2) and 2006-2007 success rates (cohort 3). The means increase from year to year.
To test the significance of this increase, a repeated measures analysis of variance (ANOVA) was
conducted. The ANOVA yielded a significant linear effect, F(1, 102) = 8.898, p = .004, and post
hoc LSD contrasts indicated that the difference between cohort 1 and cohort 2 was significant
(p = .006), but the difference between cohort 2 and cohort 3 was not significant (p = .159).
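As a rough sketch, each post hoc contrast between consecutive cohorts behaves like a paired comparison on the same colleges’ rates; the six success rates below are hypothetical, not the study’s 103-college sample:

```python
# Paired t statistic for the difference between two cohorts' success rates
# at the same colleges (a stand-in for one LSD contrast).
import math
import statistics

cohort1 = [0.21, 0.25, 0.18, 0.30, 0.22, 0.27]  # e.g., 2004-2005 rates
cohort2 = [0.23, 0.26, 0.20, 0.31, 0.24, 0.28]  # e.g., 2005-2006 rates

diffs = [b - a for a, b in zip(cohort1, cohort2)]
n = len(diffs)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(round(t, 2))  # compare with the critical t of 2.571 (df = 5, alpha = .05)
```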
Table 2
Mean Differences
Mean Std. Deviation N
Overall0405 .23576 .097060 103
Overall0506 .24434 .095113 103
Overall0607 .24904 .089683 105
The frequency distributions for each of the success rates are given in Figures 2 through 4.
Figure 2. Overall Remedial Math Success Rate: 2004-2005 Cohort
Figure 3. Overall Remedial Math Success Rate: 2005-2006 Cohort
Figure 4. Overall Remedial Math Success Rate: 2006-2007 Cohort
The frequency distributions for the success rates of Year 2004-2005 and Year 2005-2006
are similar. Both distributions are positively skewed (skewed to the right), indicating that there
are a few unusually large measurements, i.e., unusually high success rates. The distribution of
the Year 2006-2007 success rates is closer to normal than the distributions of the two
previous years.
To determine the significance of the symmetry and variability of the frequency
distributions, t-values were obtained by dividing the observed skewness and kurtosis by their
respective standard errors. Table 3 displays the t-values for skewness and kurtosis for all three
years. These values also indicated that the sample distributions for the success rates of the Year
2004-2005 (2.61) and the Year 2005-2006 (2.53) were significant and positively skewed.
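The t-values are simple ratios of each shape statistic to its standard error; for example, using the Table 3 values for the 2004-2005 cohort:

```python
# t-value for skewness = observed skewness / standard error of skewness
# (values from Table 3, 2004-2005 cohort).
skewness, se_skewness = 0.622, 0.238
t_skew = skewness / se_skewness
print(round(t_skew, 2))  # 2.61, beyond the ~1.96 two-tailed cutoff at alpha = .05
```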
Table 3
Descriptive Statistics of Predictor Variables for Status Model
                         Overall0405   Overall0506   Overall0607
N Valid                  103           103           105
N Missing                3             3             1
Mean                     .23576        .24434        .24904
Std. Deviation           .097060       .095113       .089683
Skewness                 .622          .602          .407
Std. Error of Skewness   .238          .238          .236
Kurtosis                 -.161         -.286         -.068
Std. Error of Kurtosis   .472          .472          .467
t-value for Skewness     2.61          2.53          1.72
t-value for Kurtosis     -0.34         -0.61         -0.146
The linear correlation coefficients of the predictor variables were computed using
Pearson’s r. Table 4 displays the stability coefficients for the success rates for the status model.
The results indicate that the overall success rates are extremely stable, ranging from r = .865
to r = .949.
Table 4
Year-to-Year Stability of Status Model
Correlations
Overall0405 Overall0506 Overall0607
Overall0405
Pearson Correlation 1 .949 .865
Sig. (2-tailed) .000 .000
Overall0506
Pearson Correlation .949 1 .899
Sig. (2-tailed) .000 .000
Overall0607
Pearson Correlation .865 .899 1
Sig. (2-tailed) .000 .000
Improvement Model
Research Question: How have improvement rates changed over the last two years? How
stable and reliable are improvement rates?
Based on the findings on the overall success rates of remedial math courses at
community colleges over three cohorts, status models are biased in favor of community colleges
in wealthy neighborhoods. Therefore, improvement models are advocated by many politicians
and stakeholders as a way to make more valid comparisons. Unlike status models, improvement
models allow researchers to compare institutions to themselves, preserving fairness to each
institution.
rates of two consecutive years. Improvement one was calculated by subtracting the success rate
of the 2004-2005 cohort from the success rate of the 2005-2006 cohort. Improvement two was
computed by subtracting the success rate of the 2005-2006 cohort from the success rate of the
2006-2007 cohort. Table 5 displays the mean differences for improvement scores of the overall
success rates as well as the success rates of each ethnic group.
Table 5
Mean Improvement Rates
Mean Std. Deviation N
ImproveOverall1 .0086 .03074 103
ImproveOverall2 .0059 .04190 103
ImproveHisp1 .0061 .06328 103
ImproveHisp2 .0107 .05818 103
ImproveAsian1 .0341 .16993 95
ImproveAsian2 .0036 .19573 95
ImproveBlack1 -.0015 .10225 100
ImproveBlack2 .0129 .10759 100
ImproveFil1 .0156 .14251 87
ImproveFil2 .0169 .18467 87
ImproveWhite1 .0034 .05685 104
ImproveWhite2 .0135 .06010 103
The mean overall improvement score decreases from the first interval (year 1 to year 2)
to the second (year 2 to year 3). Among the five ethnic groups, only the mean for Asians
decreases; the means for the other groups all increase. The correlations between the first and
second improvement scores, for the overall success rates and for the success rates of each of
the five ethnic groups, were computed.
Table 6 displays the linear stability coefficients for the improvement scores.
Table 6
Linear Correlation for Year-to-year Improvement
                ImproveOverall  ImproveHispanic  ImproveAsian  ImproveBlack  ImproveFilipino  ImproveWhite
Pearson’s r     -.112           -.528            -.260         -.486         -.516            -.422
Sig. (2-tailed) .260            .001             .012          .001          .001             .001
The results of the correlation analysis of the two improvement scores indicate that
improvement scores showed negative year-to-year correlations for the overall rates and for each
of the ethnic groups, although the overall correlation (r = -.112) was not significant. The
correlations ranged from low for Asians (r = -.260, p = .012) to moderate for Filipinos (r = -.516,
p = .001). Since growth is negatively correlated with the base year, e.g., 2004, the growth scores
are completely unreliable and should not be used for accountability purposes.
Disaggregated Model
Research Question: Have disaggregated scores changed over the past three years? How
reliable are disaggregated status scores? How stable are disaggregated scores for each ethnic
group?
Based on the findings of the status model, there are some biases favoring community colleges
with a predominance of certain ethnic groups. Therefore, to control for ethnic bias, a
disaggregated model was implemented and analyzed for this study. The logic is to compare the
success rate within each individual ethnic group. Table 7 shows the means and standard
deviations for each individual ethnic group throughout three consecutive years from 2004 to
2007.
Table 7
Means and Standard Deviations by Cohort and Ethnicity
Mean Std. Deviation N
HispRATE0405 .21927 .088913 103
HispRATE0506 .22563 .090788 104
HispRATE0607 .23465 .089751 105
AsianRATE0405 .31809 .176454 98
AsianRATE0506 .35231 .170795 96
AsianRATE0607 .34940 .164866 100
BlackRATE0405 .16060 .099188 100
BlackRATE0506 .15752 .100709 101
BlackRATE0607 .16716 .110365 102
FilRATE0405 .27069 .146058 87
FilRATE0506 .28076 .168337 89
FilRATE0607 .29420 .150650 89
WhiteRATE0405 .26446 .107222 104
WhiteRATE0506 .26787 .106300 104
WhiteRATE0607 .27787 .099563 105
The means of each group increase slightly from year to year except for Asians. The
Asian mean increases from year 1 (2004-2005) to year 2 (2005-2006) but decreases slightly
from year 2 to year 3 (2006-2007). The mean success rates of African American and Hispanic
students are alarmingly low compared to those of the other three ethnic groups.
Since the success rates of each ethnic group were examined separately for three years, a
reliability analysis was conducted to determine the reliability of the sum of the three years.
Table 8 presents Cronbach’s alpha, a coefficient of reliability, for each ethnic group.
Cronbach’s alpha can be written as a function of the number of test items (here, years) and the
mean inter-correlation among the indicators.
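Cronbach’s alpha for a three-“item” composite, where the items are the three cohort-year success rates, can be sketched as follows; the five colleges’ rates below are hypothetical:

```python
# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)),
# with the three cohort years as "items" and colleges as "respondents".
import statistics

years = [  # rows: colleges; columns: 2004-05, 2005-06, 2006-07 rates
    [0.21, 0.22, 0.23],
    [0.30, 0.31, 0.31],
    [0.15, 0.16, 0.18],
    [0.26, 0.27, 0.27],
    [0.19, 0.21, 0.22],
]

k = 3  # number of items (years)
item_vars = [statistics.pvariance(col) for col in zip(*years)]
total_var = statistics.pvariance([sum(row) for row in years])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))  # near 1: very high internal consistency
```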
Table 8
Reliability Statistics for 3-year average
                     Cronbach’s Alpha   N of Items
Hispanics .912 3
Asians .557 3
African Americans .708 3
Filipino .688 3
White .937 3
The alpha coefficients for the three-year composites are extremely high in the case of
Hispanics (.912) and Whites (.937), suggesting that the success rates over three years have very
high internal consistency. Asians have the lowest alpha coefficient for the three consecutive
years. This result is consistent with the mean differences in success rates over three years within
the group of Asian students: the mean increases from .318 (2004-2005) to .352 (2005-2006)
but decreases to .349 (2006-2007).
The correlations among the annual overall success rates and the annual success rates
of each of the five ethnic groups were computed using Pearson’s r. Table 9 displays the linear
stability coefficients for the overall success rates and the success rates broken down by
ethnic group. The numerical values in Table 9 represent the correlation coefficients within each
ethnic group over three years (2004-2005, 2005-2006, 2006-2007). The results of the
correlation matrix indicate a strong degree of temporal stability in the success rates of each
ethnic group across the three-year period. As would be expected from the results in Table 7,
Hispanics and Whites have the highest degree of stability. Since Hispanics and Whites comprise
about 67% of students at community colleges in California, this result can be attributed to
sample size.
Table 9
Correlation Matrix of Success Rates for Each Ethnic Group Over 3 Years
Overall0405 Overall0506 overall0607
Overall0405
Pearson Correlation 1 .949 .865
Sig. (2-tailed) .000 .000
Overall0506
Pearson Correlation .949 1 .899
Sig. (2-tailed) .000 .000
Overall0607
Pearson Correlation .865 .899 1
Sig. (2-tailed) .000 .000
HispRATE0405 HispRATE0506 HispRATE0607
HispRATE0405
Pearson Correlation 1 .753 .780
Sig. (2-tailed) .000 .000
HispRATE0506
Pearson Correlation .753 1 .794
Sig. (2-tailed) .000 .000
HispRATE0607
Pearson Correlation .780 .794 1
Sig. (2-tailed) .000 .000
AsianRATE0405 AsianRATE0506 AsianRATE0607
AsianRATE0405
Pearson Correlation 1 .503 .136
Sig. (2-tailed) .000 .185
AsianRATE0506
Pearson Correlation .503 1 .306
Sig. (2-tailed) .000 .003
AsianRATE0607
Pearson Correlation .136 .306 1
Sig. (2-tailed) .185 .003
BlackRATE0405 BlackRATE0506 BlackRATE0607
BlackRATE0405
Pearson Correlation 1 .473 .468
Sig. (2-tailed) .000 .000
BlackRATE0506
Pearson Correlation .473 1 .469
Sig. (2-tailed) .000 .000
BlackRATE0607
Pearson Correlation .468 .469 1
Sig. (2-tailed) .000 .000
FilRATE0405 FilRATE0506 FilRATE0607
FilRATE0405
Pearson Correlation 1 .590 .381
Sig. (2-tailed) .000 .000
FilRATE0506
Pearson Correlation .590 1 .319
Sig. (2-tailed) .000 .003
FilRATE0607
Pearson Correlation .381 .319 1
Sig. (2-tailed) .000 .003
WhiteRATE0405 WhiteRATE0506 WhiteRATE0607
WhiteRATE0405
Pearson Correlation 1 .858 .814
Sig. (2-tailed) .000 .000
WhiteRATE0506
Pearson Correlation .858 1 .829
Sig. (2-tailed) .000 .000
WhiteRATE0607
Pearson Correlation .814 .829 1
Sig. (2-tailed) .000 .000
Input-Adjusted Model
Research Question: What are the descriptive characteristics of year-to-year standardized
residuals (input-adjusted scores) when remedial math is regressed on ethnic group membership?
Are year-to-year input-adjusted scores stable?
According to Hocevar and Tate (2012), unadjusted success rates cannot be used to make
fair and useful institutional comparisons because they are highly correlated with student and
community characteristics that are outside the control of institutions (e.g., ethnicity in this study).
The input-adjusted model is another way to control for the ethnicity bias. Input-adjusted scores
are computed by the regression of success on ethnicity. Table 10 presents the regression results
for input-adjusted scores. This table displays the multiple correlation coefficient (R), the
coefficient for determination (R Square), the adjusted R Square, the ANOVA results, and the
statistical significance of each year. To facilitate the use of raw success rates for accountability
purposes, researchers adjust for factors beyond a school’s or teacher’s control, such as student
socioeconomic status, ethnicity, and age. For this study, the factor adjusted for was
ethnicity.
Table 10
Descriptive Statistics for Input-Adjusted Scores
Model Summary
Model   R      R Square   Adjusted R Square   Std. Error of the Estimate   F       df       Sig
0607    .496   .246       .208                .079806                      6.467   5 (99)   .001
0506    .462   .213       .173                .086509                      5.260   5 (97)   .001
0405    .420   .176       .134                .090320                      4.158   5 (97)   .001
The ANOVA yielded a significant ethnic effect, particularly in the case of year 2006-
2007. The multiple correlation for this year was .496, F(5, 99) = 6.467, p = .001. The R Square
and adjusted R Square were .246 and .208, respectively. All indicators presented in Table 10 are
significant.
The input-adjusted success rates were computed in three steps. After the predicted
achievement and the residuals were obtained using SPSS, the residuals were converted to
percentiles, and the percentiles were then converted to Normal Curve Equivalent (NCE) scores.
Figures 5 through 7 present the histograms of the NCE scores for each school year.
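The residual → percentile → NCE conversion can be sketched as follows (the study used SPSS); the five residuals are hypothetical, and NCE is taken as 50 + 21.06z, the usual Normal Curve Equivalent scaling:

```python
# Convert residuals to midpoint percentile ranks, then to NCE scores
# (NCE = 50 + 21.06 * z, where z is the normal deviate of the percentile).
from statistics import NormalDist

residuals = [-0.04, -0.01, 0.00, 0.02, 0.05]

n = len(residuals)
order = sorted(residuals)
nce = [50 + 21.06 * NormalDist().inv_cdf((order.index(r) + 0.5) / n)
       for r in residuals]
print([round(v, 1) for v in nce])
```

The rescaling centers the scores at 50 and spreads them symmetrically, which is why the NCE histograms in Figures 5 through 7 look approximately normal.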
Figure 5. NCE 2004-2005
Figure 6. NCE 2005-2006
Figure 7. NCE 2006-2007
Unlike Figures 2 through 4, in which the distributions are positively skewed, Figures 5
through 7 display approximately normal distributions for all three years. This indicates that
input-adjusted success scores are a more interpretable indicator of an institution’s performance.
To facilitate interpretation, the NCE scores were converted to T-scores.
To test for correlation among the variables in the input-adjusted model, the success rates
of all three years were correlated using the two-tailed Pearson correlation coefficient. Table 11
displays the correlation coefficients of the input-adjusted scores. These values represent the
input-adjusted rates of the three years correlated with each other, an indicator of stability.
Table 11
Stability of Input-Adjusted Model
Tscore0405 Tscore0506 Tscore0607
Tscore0405
Pearson Correlation 1 .924 .853
Sig. (2-tailed) .000 .000
Tscore0506
Pearson Correlation .924 1 .890
Sig. (2-tailed) .000 .000
Tscore0607
Pearson Correlation .853 .890 1
Sig. (2-tailed) .000 .000
The results of the correlation analysis among the input-adjusted success rates over the
span of three years indicated a very high degree of correlation. The strongest degree of temporal
stability of the success rate residuals was between year 2004-2005 and year 2005-2006
(r = .924, p = .001).
Comparison of Input-Adjusted Model and Disaggregated Model
Tables 12 and 13 display the over-performing and under-performing institutions for the
status model, respectively. The success rates are the raw overall success rates. Tables 14 and 15
display the over-performing and under-performing institutions for the input-adjusted model,
respectively. To get a sense of the extent to which the differences were due to SES, the
percentage of Caucasian students on each campus is indicated.
Table 12
Over-Performing Institutions for the Status Model
College                      Success Rate
De Anza College              0.545
San Joaquin Delta College    0.430
Ohlone College               0.421
Moorpark College             0.420
Irvine Valley College        0.410
Saddleback College           0.391
Orange Coast College         0.382
Mt. San Antonio College      0.379
Mira Costa College           0.371
Cuyamaca College             0.363
Table 13
Under-performing Institutions for the Status Model
College   Success Rate
LB1       0.133
B12       0.131
Sa1       0.124
WH1       0.123
Si1       0.108
Me1       0.108
LAT       0.097
PA1       0.096
LAS       0.095
La1       0.087
Table 14
Over-performing Institutions for the Input-Adjusted Model
College                      Adjusted Score   Caucasians
San Joaquin Delta College    74.39            26.70%
De Anza College              71.40            25.20%
Moorpark College             69.91            57.00%
Cuyamaca College             68.41            49.10%
Mira Costa College           67.42            49.60%
Saddleback College           66.42            60.90%
Imperial Valley              65.43            03.60%
San Bernardino Valley        64.93            18.00%
Berkeley City College        63.93            26.40%
Grossmont College            63.44            45.40%
Table 15
Under-performing Institutions for the Input-Adjusted Model
College   Adjusted Score   Whites
Ch1       36.56            18.90%
PV1       36.06            44.30%
B12       35.06            65.90%
La1       34.56            55.90%
Me1       33.57            59.70%
S12       32.57            26.70%
SJ1       31.58            16.70%
SF1       30.08            26.10%
Mi1       28.59            20.60%
C12       25.60            34.90%
In the input-adjusted model, the list of over-performing institutions includes
institutions with large Caucasian populations. For instance, 60.9% of the student
population of Saddleback College is Caucasian, and Moorpark College has a Caucasian
population of 57%. These two schools appear repeatedly on the over-performing lists for the
input-adjusted and disaggregated models. However, some of the under-performing institutions
also have large Caucasian populations. For instance, Butte College’s student population is
65.9% Caucasian and Mendocino College’s is 59.7% Caucasian, and both appear on the list of
under-performing institutions. Therefore, there is no evidence of ethnic bias in the input-adjusted
model. Both the input-adjusted and disaggregated models could be used to compare institutions
without ethnic bias and for accountability purposes.
Tables 16 and 17 display the over-performing and under-performing institutions for
Hispanics, respectively.
Table 16
Over-performing Institutions-Hispanics
San Joaquin Delta College 0.461
De Anza College 0.431
Saddleback College 0.426
Laney College 0.403
Ohlone College 0.392
Berkeley City College 0.385
Orange Coast College 0.376
Moorpark College 0.372
Napa Valley College 0.354
Mt. San Antonio College 0.352
Table 17
Under-performing Institutions-Hispanics
Co1 0.000
Si1 0.071
M12 0.086
PV1 0.091
SJ1 0.108
B12 0.109
Sa1 0.111
Ch1 0.112
Cu1 0.119
MP1 0.120
Tables 18 and 19 display the over-performing and under-performing institutions for
Caucasians, respectively.
Table 18
Over-performing Institutions-Caucasians
De Anza College 0.560
Evergreen Valley College 0.500
Imperial Valley 0.475
Ohlone College 0.453
College of Alameda 0.451
Moorpark College 0.450
College of San Mateo 0.434
Irvine Valley College 0.428
Glendale College 0.413
Table 19
Under-performing Institutions-Caucasians
PV1 0.049
La1 0.104
CC1 0.105
M12 0.121
WH1 0.125
C12 0.127
LAT 0.128
Si1 0.131
B12 0.134
MJ1 0.154
Hispanics and Caucasians are the two largest ethnic groups; therefore, these two groups
were selected for the analysis of over- and under-performing institutions. Tables 16 through 19
show little overlap among the over-performing institutions, as well as among the under-
performing institutions, for Hispanics and Caucasians. This result indicates that the
disaggregated model is not reliable and suggests that a single index may not be warranted.
Summary
Various themes are evident in this data analysis. The four accountability models (status,
improvement, disaggregated, and input-adjusted) possess some similarities as well as
differences. First, in answer to the first research question, the raw overall success rates are
extremely stable from year to year. However, examining raw success rates without considering
factors beyond institutions' control, such as ethnicity, is not a fair framework. Therefore, the
improvement, disaggregated, and input-adjusted models were introduced. The improvement
model is a fairer framework for comparing institutions, yet improvement scores are not stable
from year to year. The disaggregated and input-adjusted models yield a better
solution for comparing students' success rates. Both the disaggregated and the input-adjusted
success rates show strong stability from year to year.
CHAPTER FIVE: SUMMARY, DISCUSSION AND IMPLICATIONS
Mathematics completion is a persistent problem at community colleges nationwide.
Students, faculty, and institutions struggle to produce student success in math, and this failure
prevents students from moving forward with their career goals. Community colleges nationwide
admit hundreds of thousands of underprepared students annually. As a result, community
colleges enroll the highest number of students in developmental coursework across all U.S.
postsecondary institutions (Provasnik & Planty, 2008; Roueche & Waiwaiole, 2009).
Developmental/remedial education has become one of the most controversial issues in higher
education. Developmental math education has been defined by the National Center for
Education Statistics as mathematics for college students who lack the skills necessary to perform
college-level work at the level required by the institution (U.S. Department of Education, 1996).
Community college students present a number of academic and personal risk factors. In 2007,
approximately 61% of California community college students were assessed at skill levels below
intermediate algebra (Scott, 2009).
According to Melguizo et al. (2013), approximately 50 percent of Los Angeles
Community College District students assessed in math were placed into a developmental math
course between 2005 and 2007. Today, more students are referred to developmental math
than to developmental reading or English. Furthermore, fewer students complete their
coursework in math than in other subjects. Since 2010, California has required students to pass
intermediate algebra, one math level higher than beginning algebra (the previous degree
requirement), to receive an associate's degree (Melguizo et al., 2013). In response, community
colleges are attempting to increase students' preparedness for the college curriculum by
strengthening developmental math courses with greater accountability. This study focused on
four specific accountability models: Status, Improvement, Disaggregated, and the Input-Adjusted
Model.
Community colleges are encouraged to identify students with greater needs for
developmental education, especially for math, and, at the same time, develop strategies to help
them succeed. In order to improve the success rates in developmental education, the right
instruments and accountability models need to be selected. In this study, the success rates of
developmental math were calculated and analyzed through the lens of four accountability
models to examine the pros and cons of each model.
As a result, the following research questions were developed:
1. How have indicators of math remediation success changed in the past three years? How
are status indicators distributed? How stable are status indicators of success?
2. How have improvement rates changed over the last two years? How stable and reliable
are improvement rates?
3. Have disaggregated scores changed over the past three years? How reliable are
disaggregated status scores? How stable are disaggregated scores for each ethnic group?
4. What are the descriptive characteristics of year-to-year standardized residuals (input-
adjusted scores) when remedial math is regressed on ethnic group membership? Are year-
to-year input-adjusted scores stable?
Summary of Results
Status Model
Research Question: How have indicators of math remediation success changed in the past
three years? How are status indicators distributed? How stable are status indicators of success?
A status model is one of the quickest methods for comparing success rates among
institutions. Because the model is so simple, however, it leaves substantial room for error. The
most salient flaw of the status model is that comparisons among institutions are based on raw
scores without considering other factors such as ethnicity and SES. The results of this study
showed that the correlations between ethnicity and the ARCC remedial math status indicators
are high for each of the three years. In fact, they are so high that the remedial success status
indicators cannot be used for accountability, evaluation, or "pay-for-performance" purposes.
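This correlation check can be sketched as follows. The numbers below are fabricated for illustration only; they are not the study's institution-level data, and `success_rate` and `pct_hispanic` are hypothetical vectors.

```python
import numpy as np

# Hypothetical institution-level data (NOT the study's actual figures):
# raw remedial-math success rates and percent-Hispanic enrollment.
success_rate = np.array([0.42, 0.35, 0.28, 0.22, 0.18, 0.40, 0.31, 0.25])
pct_hispanic = np.array([0.10, 0.25, 0.45, 0.60, 0.75, 0.15, 0.35, 0.55])

# Pearson correlation between the status indicator and ethnic composition.
r = np.corrcoef(success_rate, pct_hispanic)[0, 1]

# A strong negative correlation would indicate that raw status scores
# largely reflect demographics rather than institutional effectiveness.
print(round(r, 3))
```

A correlation near -1 in a check like this would mean raw status rankings are essentially a proxy for ethnic composition, which is the study's argument against the status model.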
Improvement Model
Research Question: How have improvement rates changed over the last two years? How
stable and reliable are improvement rates?
To avoid the biases associated with institutional characteristics in the status model,
stakeholders have introduced the improvement model. The logic behind this model is that each
institution is compared with itself, so the comparison is fair for every institution. The problem
with this model is that improvement scores are difference scores and hence are usually very
unstable from year to year (Thorndike, 1977). According to Thorndike (1977), the low reliability
that tends to characterize difference scores becomes a problem whenever we wish to use change
patterns for diagnosis. This was unfortunately true in this study, as the results of the
improvement model show substantially low reliability for improvement scores.
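Thorndike's point can be made concrete with the classical psychometric formula for the reliability of a difference score, r_D = ((r_xx + r_yy)/2 - r_xy) / (1 - r_xy). The sketch below assumes this standard formula; the numeric values are illustrative, not estimates from this study.

```python
def difference_score_reliability(r_xx, r_yy, r_xy):
    """Classical reliability of a difference score D = Y - X.

    r_xx, r_yy: reliabilities of the two separate scores
    r_xy: correlation between the two scores
    """
    return ((r_xx + r_yy) / 2.0 - r_xy) / (1.0 - r_xy)

# Even with highly reliable separate scores (0.90), a high correlation
# between the two years (0.80) leaves the difference score with low
# reliability -- exactly the instability the improvement model exhibits.
r_d = difference_score_reliability(0.90, 0.90, 0.80)
print(round(r_d, 2))  # 0.5
```

The formula shows why year-to-year improvement scores are unstable: the more two yearly success rates have in common (high r_xy), the less reliable their difference becomes.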
Disaggregated Model
Research Question: Have disaggregated scores changed over the past three years? How reliable
are disaggregated status scores? How stable are disaggregated scores for each ethnic group?
There are quite a number of variables on which each indicator could be regressed, such as
educational level, income, age, and gender. However, since ethnicity is considered by many
researchers to be the most important, this study focused only on ethnicity. Disaggregated models
are one way to control for ethnicity bias. The logic is that comparisons are conducted within
each ethnic group, such as White with White and Hispanic with Hispanic, so the comparison is a
fair way to evaluate institutions. In order to be used for accountability, an index has to be stable
from year to year. Based on the results, each ethnic group shows high stability, especially
Hispanics and Whites.
Input-Adjusted Model
Research Question: What are the descriptive characteristics of year-to-year standardized
residuals (input-adjusted scores) when remedial math is regressed on ethnic group membership?
Are year-to-year input-adjusted scores stable?
Another way to control for ethnicity bias is the input-adjusted model. Unadjusted test
scores cannot be used to make fair and useful between-institution comparisons because they are
highly correlated with student and community characteristics that are outside the control of
institutions and instructors (Hocevar & Tate, 2012). The regressions of the input-adjusted
models showed promising results in this study. The residuals, when success is regressed on the
percentage of each ethnic group, are normally distributed and stable. These results indicate that
the input-adjusted model is viable.
The use of input-adjusted indicators requires that the distribution of the residuals be
normally distributed. All three histograms of each of three years display normal distributions.
These findings are significant since the distributions of raw success model scores were skewed to
the right with a high kurtosis.
The results of the correlation analysis among the input-adjusted success rates over the
span of three years showed high year-to-year correlations. The stability coefficients represent a
strong degree of temporal stability in the success-rate residuals for the years 2004-2005 and
2005-2006. Therefore, input-adjusted scores can be potentially very useful for accountability
purposes.
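The input-adjusted computation described above (regress success on ethnic composition, then standardize the residuals) can be sketched as an ordinary least-squares fit. The figures are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical data for 8 institutions (NOT the study's figures):
# raw success rate and percent enrollment for two ethnic groups.
success = np.array([0.42, 0.35, 0.28, 0.22, 0.30, 0.40, 0.31, 0.25])
pct_hispanic = np.array([0.10, 0.25, 0.45, 0.60, 0.40, 0.15, 0.35, 0.55])
pct_white = np.array([0.60, 0.45, 0.30, 0.20, 0.35, 0.55, 0.40, 0.25])

# Regress success on ethnic composition (with an intercept) and keep the
# residuals: what each institution achieves beyond what demographics predict.
X = np.column_stack([np.ones_like(success), pct_hispanic, pct_white])
beta, *_ = np.linalg.lstsq(X, success, rcond=None)
residuals = success - X @ beta

# Standardize so institutions can be ranked on a common z-score scale.
z = (residuals - residuals.mean()) / residuals.std()
print(z.round(2))
```

Positive z-scores correspond to over-performing institutions and negative z-scores to under-performing ones, relative to what their student composition would predict.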
Both the input-adjusted and disaggregated models show promising results. These two
models are stable and viable tools for accountability purposes. In the input-adjusted model,
institutions serving largely Caucasian populations appeared on both the under-performing and
over-performing lists. This result indicates that the input-adjusted model has eliminated the
ethnic bias. In the disaggregated models, separate lists of under-performing and over-performing
institutions are produced for each ethnic group. These results coincided with those of the
input-adjusted models: some institutions appeared on both the input-adjusted and disaggregated
lists. Therefore, both the input-adjusted and disaggregated models can be used for
accountability purposes.
Discussion
The status model is not useful for accountability purposes because status scores are lower
for community colleges that serve Hispanics and African Americans. This finding corresponds to
Pacheco's (2012) study of performance metrics. According to Pacheco (2012), socioeconomic
factors are highly predictive, in the aggregate, of ARCC performance. It is difficult to separate
the location of an institution, the income and educational attainment of its residents, and the
makeup of its student population from what the institution is actually doing to meet its
commitment to academic quality. In this study, the correlations between ethnic group
membership and the status indicators are extremely high. This study also aligns with Bailey and
Xu's (2012) review of input-adjusted scores: it is unfair to evaluate an institution without
accounting for the characteristics of the students who enter it and the resources available to it
(Bailey & Xu, 2012).
The improvement model is unreliable and biased in favor of low-scoring institutions. The
results of this study show substantially low reliability. According to Thorndike (1977), the
difference between two scores has low reliability for two reasons: (1) the errors of
measurement in both separate scores accumulate in the difference score, and (2) whatever is
common to both measures is canceled out in the difference score. This low reliability becomes a
problem, and any difference therefore needs to be interpreted in light of the standard error of
measurement of that difference (Thorndike, 1977). Thus, improvement models cannot be used
for accountability purposes.
The disaggregated model is stable and a viable alternative to the status and improvement
models. The idea behind disaggregated models was adopted from the Diversity Scorecard
created by Bensimon (2004). The Diversity Scorecard is an ongoing initiative designed to foster
institutional change in higher education by helping to close the achievement gap for historically
underrepresented students (Bensimon, 2004). Many institutions obsessively track minor
fluctuations in their success and transfer rates but are rarely aware of the proportion of
underrepresented students who contributed to those rates (Bensimon, 2004). The results of
the present study are quite promising: because this model is stable, institutions can make greater
use of assessment and accountability data.
The input-adjusted model is also very stable; however, controlling for only one factor
(e.g., ethnicity) might be too weak for accountability purposes. In Pacheco's (2012) study, the
author found that the ARCC indicators (SPAR, Persistence, and Thirty-Unit Completion) are
stable and consistent measures of institutional effectiveness. Input-adjusted models were also
implemented in Bahr, Hom, and Perry's (2005) study of college transfer performance. In their
study, Bahr et al. used an input-adjusted method involving major enhancements
to prior efforts in California to implement transfer comparisons, including a less biased definition
of the transfer rate, the use of multiple student cohorts, the use of statistical models to adjust for
exogenous variables observed to affect transfer outcomes, and the inclusion of data on student
transfers to a wider range of four-year institutions (Bahr et al., 2005). As a result, both
Pacheco's study and the study of Bahr et al. yielded better model fit. The R-squares range from
0.712 to 0.755 in Bahr et al.'s (2005) study of transfer rates, while the R-square and adjusted
R-square of the regression of SPAR rates on the predictor variables in Pacheco's (2012) study
were .755 and .729, respectively. These results are much better than those of the present study,
whose R-squares range only from 0.176 to 0.246. Pacheco (2012) and Bahr et al. (2005)
presented a better way of doing an input-adjusted model.
The U.S. Department of Education is developing a Postsecondary Institution Ratings
System (PIRS) to assess the performance of postsecondary institutions, advance institutional
accountability and enhance consumer access to useful information. To assist in this effort, NCES
has invited national experts to provide information about data elements, metrics, methods of data
collection, methods of weighting or scoring, and presentation frameworks for a PIRS. This effort
is a great opportunity for community colleges nationwide to become more aware of
accountability, and, hopefully, PIRS will take advantage of the disaggregated and input-adjusted
models.
Implications for Policy and Practices
This study developed several essential recommendations for improving existing policy
and practice to support the success rates of developmental math in California community
colleges. The stability of both the disaggregated and the input-adjusted indicators was strong
over a span of three years. Therefore, the input-adjusted and disaggregated models are viable
for accountability
purposes. Based on the findings of this research and the existing research literature, institutions
should consider the following:
1) When test scores are available, colleges should adopt Comprehensive Accountability
Profiling (CAP), an annual measurement of the effectiveness of subject-matter teacher teams (in
this case, mathematics instructors) in California community colleges (Hocevar & Tate, 2012).
The core measurements in STAR are 1) raw test scores, with institutions ranked on a scale of
1-10 against all institutions in the state; and 2) adjusted test scores, with schools compared with
similar institutions on a scale of 1 to 10. The adjusted test scores are extremely important in the
case of California community colleges, since the scores are adjusted for factors beyond a
school's or teacher's control, in particular ethnicity, student socioeconomic status, and English
Learner designation. CAP methodology can also be applied to success rates like the one
examined in this study.
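The 1-10 ranking described above can be sketched as a simple decile assignment. The function and scores below are hypothetical, and real rankings would also handle tied scores per the profiling methodology.

```python
# Assign each institution a 1-10 rank by decile of its (raw or adjusted)
# score among all institutions in the state. Scores are hypothetical.
def decile_ranks(scores):
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0] * len(scores)
    for position, i in enumerate(order):
        # Lowest decile receives rank 1, highest decile rank 10.
        ranks[i] = int(position * 10 / len(scores)) + 1
    return ranks

scores = [0.42, 0.35, 0.28, 0.22, 0.18, 0.40, 0.31, 0.25, 0.37, 0.20]
print(decile_ranks(scores))
```

Running the same assignment on raw scores statewide and on adjusted scores within similar-institution groups yields the two CAP-style measurements described above.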
2) Colleges should develop a culture of continuous innovation and quality
improvement in forming developmental programs such as math-intensive programs. These
programs often fall into three categories: boot camps, summer bridges, and accelerated
programs. Boot camp programs are short, target a wide range of students, accommodate large
numbers, and support improved performance on the math placement test; Pasadena City
College's Math Jam is one example. This college also offers a summer bridge program called
Summer Bridge. Summer bridge programs also take place in the summer but run for longer
periods of time (5-10 weeks). They are all math-intensive, and they all focus support on the
transition to college and explicit instruction in study skills. As opposed to the summer programs
aimed at students who want to finish or skip some developmental math credits before they
enroll in the fall, accelerated programs are math
courses offered during the fall and spring terms. Glendale Community College has run an
accelerated program called Fast Track, which has a success rate of about 90%.
3) Colleges with large populations of Hispanic and African American students should
provide a supportive and positive learning experience for these groups, including the
establishment of a network of staff, faculty, and other students. These colleges can foster best
organizational and administrative practices to support Hispanic and African American student
success in mathematics.
4) With tools such as the Student Success Scorecard and the basic skills trackers, colleges
should investigate the large proportion of students who either did not attempt, or attempted but
did not complete, all of their required developmental math coursework. Colleges need to look
into building successful math programs with early intervention in order to assist in making
important gains in student success.
5) Colleges in California should consider the Pathways of the Carnegie Foundation.
STATWAY is a one-year pathway focused on statistics, data analysis, and causal reasoning that
combines college-level statistics with developmental math. QUANTWAY is a pathway focused
on quantitative reasoning that fulfills the developmental requirement with the aim of preparing
students for success in college-level mathematics. The strengths of the Pathways lie in their
curricula and their engagement of students, faculty, administrators, education researchers, and
program designers. The Pathways help students see themselves as capable of mathematical
success through interventions focused on non-cognitive factors and the development of language
and literacy skills. In addition, Pathways instruction utilizes a pedagogical model that supports
ambitious math learning.
6) As described above, the U.S. Department of Education is developing a Postsecondary
Institution Ratings System (PIRS) to assess the performance of postsecondary institutions,
advance institutional accountability, and enhance consumer access to useful information, and
NCES has invited national experts to inform its design. To improve accountability,
administrators and stakeholders of community colleges nationwide should be aware of new
models and tools such as PIRS and the Student Success Scorecard.
7) Community colleges nationwide need to consider a better way of assessing students'
entry level rather than relying solely on a placement test such as ACCUPLACER or COMPASS.
Belfield and Crosta (2012) found that ACCUPLACER scores are not good predictors of course
grades in developmental education classes. Accuracy rates using placement tests are not high,
and in some cases they could be improved by the use of a categorical rule placing all students
into developmental education or directly into college classes (Melguizo et al., 2013).
8) With new and improved tools for analyzing data, community colleges should consider
the process of benchmarking for developmental math education. While there are no absolute
best practices, there are plenty of examples of what most higher education teachers and
departments would consider better than their own. The Student Success Scorecard allows the
public to identify institutions with good success rates and study their practices. Benchmarking
information will help new cadres of academic reviewers make informed judgments in the
setting of standards.
Since the Student Success Scorecard is available to the public, it is possible for California
community colleges to adopt the method of evidence-based benchmarking, a part of the
Comprehensive Accountability Profiling project (Hocevar & Tate, 2012). Low- and
high-performing institutions can be identified, and the top 10 high-performing institutions could
be chosen as the benchmarking institutions (aspirational peers). After benchmarking the other
institutions' success rates against those of the 10 high-performing institutions, the benchmarking
results can be used to set realistic, evidence-based goals for the target institutions in
developmental math.
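The benchmarking procedure can be sketched as follows. The institution names and rates are invented for illustration; real data would come from the public Student Success Scorecard.

```python
# Hypothetical (institution, success_rate) pairs; real figures would come
# from the publicly available Student Success Scorecard.
rates = {
    "College A": 0.45, "College B": 0.41, "College C": 0.38, "College D": 0.36,
    "College E": 0.34, "College F": 0.33, "College G": 0.31, "College H": 0.30,
    "College I": 0.29, "College J": 0.28, "College K": 0.22, "College L": 0.18,
}

# Identify the top-10 performers to serve as aspirational peers.
top10 = sorted(rates, key=rates.get, reverse=True)[:10]
benchmark = min(rates[c] for c in top10)  # entry bar for the peer group

# Each remaining institution's evidence-based gap to the benchmark, which
# can anchor a realistic improvement goal.
gaps = {c: round(benchmark - rates[c], 2) for c in rates if c not in top10}
print(gaps)
```

The gap to the aspirational-peer threshold gives each target institution a concrete, evidence-based goal rather than an arbitrary one.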
9) Accrediting agencies nationwide should be searching for better guidance on how to
incorporate evidence of student learning outcomes as they establish policies, standards, and
approaches to review. Accrediting organizations must be more aggressive and creative in
requiring meaningful data on student learning outcomes as an integral part of their standards
and review processes. There are appropriate differences among accrediting agencies in how they
choose to engage student learning, but not all are doing so in appropriate and rigorous ways.
Accreditors should speak with a common voice when considering evidence of student success.
Furthermore, they must adopt a visible and proactive stance with respect to assuring acceptable
levels of student academic achievement.
10) California needs to adopt a full college- and career-ready (CCR) accountability
system. A CCR accountability system includes a set of indicators that measure college and
career readiness and are used in several ways. California should include the CCR diploma and a
CCR assessment and use either earning college credit while in high school or postsecondary
remediation indicators in its reporting and accountability system. For each CCR indicator, the
state needs to publicly report results and set a statewide performance goal. Furthermore, the
state can provide incentives for improvement or factor improvement into its accountability
formula.
Future Research
1) A replication of this quantitative study should be conducted with other control
variables such as age, gender, and socioeconomic status. Besides ethnicity, these factors are
essential in evaluations of student success. Stability and reliability could be examined with
these variables in the disaggregated and input-adjusted models for accountability purposes.
Pacheco's (2012) study of SPAR rates, persistence, and 30-unit completion and Bahr et al.'s
(2005) study of transfer rates are good examples of research that can be applied to
developmental math.
2) Based on the findings of this study, ethnicity plays an important role in the study of
accountability in community colleges. Therefore, a qualitative study should be considered of the
institutions that over-perform on metrics adjusted for ethnicity. The quantitative nature of this
study only establishes the existence of differences between over- and under-performing
institutions; it does not address the potential reasons behind these differences or provide tools
for benchmarking. A qualitative study providing insight and deep, descriptive accounts of
Hispanic students enrolled in developmental math should be considered.
3) The design of this study and its method of data analysis can be adopted for other
subjects such as ESL and English. The Student Success Scorecard is a helpful tool that provides
not only the success rates for developmental math but also useful, accurate information on ESL
and English students.
Limitations to Generalizability
The method of data analysis used in this study imposes some limits on generalizability.
Because assessment cut scores differ across institutions, a large population of students shops
around for easier cut scores, as well as for math course offerings that fit personal and work
schedules. The
success rates therefore do not represent the actual rate of each specific institution, since some
students attend multiple institutions within or even outside of their districts.
Conclusion
Large numbers of students who enter the community college system are identified as
needing remedial math coursework, and the majority of those students never persist to a degree
(Cho, 2012). Supporting students who struggle in mathematics is one of the toughest challenges
for the community college system, and community colleges nationwide have attempted a wide
range of interventions. Although none of the accountability models that have been implemented
has been proven to be the best, greater accountability for remediation success is needed. By
recognizing the bias of institutional evaluations that do not consider factors beyond schools' and
teachers' control, such as ethnicity, SES, age, and gender, the input-adjusted models provide a
fair and transparent accountability system. With this new and improved accountability model,
administrators and stakeholders can be motivated to devise suitable interventions for specific
groups of students in developmental math courses.
References
Achieve (2008). Benchmarking for success: Ensuring U.S. students receive a world-class
education. Retrieved October 5, 2011, from http://achieve.org/BenchmarkingforSuccess
Achieve (2009). The American Diploma Project: Creating a high school diploma that counts:
The national context. Retrieved October 10, 2012, from
http://achieve.org/BenchmarkingforSuccess
Adelman, C. (2006). Principal indicators of students’ academic histories in postsecondary
education, 1972-2000. Washington, DC: U.S. Department of Education, Institute of
Education Sciences.
Anderman, E. M., & Johnston, J. (1998). Television news in the classroom: What are adolescents
learning? Journal of Adolescent Research, 13(1), 73-100.
Anderman, L. H., & Anderman, E. M. (1999). Social predictors of changes in students’
achievement goal orientations. Contemporary Educational Psychology, 25, 21-37.
Armstrong, W. B. (2000). The association among student success in courses, placement test
scores, student background data, and instructor grading practices. Community College
Journal of Research and Practice, 24(8), 681-695.
Attewell, P., Lavin, D., Domina, T., & Levey, T. (2006). New evidence on college remediation.
Journal of Higher Education, 77(5), 886-924.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.
Ballard, L. C., & Johnson, M. F. (2004). Basic math skills and performance in an
introductory economics class. Journal of Economic Education, 35(1), 3-22.
Bahr, P., Hom, W., & Perry, P. (2005). College transfer performance: A methodology for
equitable measurement and comparison. Journal of Applied Research in the Community
College, 13(1).
Bailey, T., & Xu, D. (2012). Input-adjusted graduation rates and college accountability: What is
known from twenty years of research? Context for Success Working Paper. Available at:
www.hcmstrategists.com/contextforsuccess/papers/LIT_REVIEW.pdf
Belfield, C. and Crosta, P. (2012). Predicting Success in College: The Importance of Placement
Tests and High School Transcripts. Community College Research Center.
Bensimon, E. (2004). The diversity scorecard: A learning approach to institutional change.
Change: The Magazine of Higher Learning, 36(1), 44-52.
doi:10.1080/00091380409605083
Boylan, H. R. (1999). Demographics, outcomes, and activities. Journal of Developmental
Education, 23(2), 2-8.
Boylan, H. R., Bonham, B. S., White, J. R. & George, A. P. (2000). Evaluation of
college reading and study strategy programs. In R. Flippo & D. Caverly (Ed.), Handbook
of college reading and study strategy research (pp. 365-401). Mahwah, NJ: Lawrence
Erlbaum Associates, Inc.
Carnevale, A. P. (2008). College for all? Change, 40(1), 23-29.
Catsambis, S. (1994). The path to math: Gender and racial-ethnic differences in mathematics
participation from middle school to high school. Sociology of Education, 14(3),
199-215.
Cohen, A. M., & Brawer, F. B. (2003). The American community college. San Francisco, CA:
Jossey Bass.
Fike, D. S., & Fike, R. (2008). Predictors of first-year student retention in the community
college. Community College Review, 36(2), 68-88.
Fong, K., Melguizo, T., Prather, G., & Bos, J. M. (2013). A different view of how we understand
progression through the developmental math trajectory. Los Angeles, CA: The University
of Southern California.
Gardiner, L. F. (1994). Redesigning higher education: Producing dramatic gains in student
learning. ASHE-ERIC Higher Education Report, 23(7). Washington, D.C.: George
Washington University, Graduate School of Education and Human Development.
Goldberg, B., & Morrison, D. M. (2003). Co-Nect: Purpose, accountability, and school
leadership. In J. Murphy & A. Datnow (Eds.), Leadership lessons from comprehensive
school reforms (pp. 57-82). Thousand Oaks, CA: Corwin Press.
Goldschmidt, P., Roschewski, P., Choi, K., Auty, W., Hebbler, S., Blank, R., & Williams, A.
(2005). Policymakers’ guide to growth models for school accountability: How do
accountability models differ. Washington, DC: Council of Chief State School Officers.
Graham, S., & Weiner, B. (1996). Theories and principles of motivation. In D. C. Berliner &
R. C. Calfee (Eds.), Handbook of educational psychology (Macmillan Research on
Education Handbook Series) (pp. 63-84). New York, NY: Macmillan.
Greene, J., & Foster, G. (2003). Public high school graduation and college readiness rates in the
United States (Education Working Paper, no. 3). New York, NY: Manhattan Institute for
Policy Research, Center for Civic Information.
Hagedorn, L. S., Siadat, M. V., Fogel, S. F., Nora, A., Pascarella, E. T. (1999). Success
in college mathematics: Comparisons between remedial and nonremedial first-year
college students. Research in Higher Education, 40(3), 261-84.
House, J. D. (1992). The relationship between academic self-concept, achievement-related
expectancies, and college attribution. Journal of College Student Development, 33, 5-10.
Hurtado, S., & Carter, D. F. (1997). Effects of college transition and perceptions of the campus
racial climate on Latino college students’ sense of belonging. Sociology of Education, 70,
324-345.
Karoly, L. A., & Panis, C. W. (2004). The 21st century at work: Forces shaping the future
workforce and workplace in the United States. Santa Monica, CA: RAND Corporation.
Retrieved October 19, 2012, from http://www.rand.org/pubs/monographs/MG164/
McCabe, R. H. (2000). No one to waste: A report to public decision-makers and community
college leaders. Washington, D.C.: Community College Press.
Melguizo, T., Bos, H., & Prather, G. (2011). Is developmental education helping
community college students persist? A critical review of the literature. American
Behavioral Scientist, 55(2), 173-184.
Melguizo, T., Bos, J. M., & Prather, G. (2013). Are Community Colleges Making Good
Placement Decisions In Their Math Trajectories? Los Angeles: University of Southern
California.
Melguizo, T., Kosiewicz, H., Prather, G., & Bos, J. M. (2013). How are community college
students assessed and placed in developmental math: Grounding our understanding in
reality.
National Center for Education Statistics. (2004). The condition of education 2004 (NCES 2004-
077). Washington, D.C.: U.S. Government Printing Office.
Pacheco, R. (2012). Assessing and addressing random and systematic measurement error in
performance indicators of institutional effectiveness in the community college.
University of Southern California, Los Angeles, CA.
Pintrich, P. R. (2000). An achievement goal theory perspective on issues in motivation
terminology, theory, and research. Contemporary Educational Psychology, 25(1),
92-104.
Provasnik, S., & Planty, M. (2008). Community colleges: Special supplement to the condition of
education 2008 (NCES 2008-033). National Center for Education Statistics. Washington,
D.C.: U.S. Government Printing Office. Retrieved November 7, 2012, from
http://nces.ed.gov/programs/coe/2008/analysis/sa01a.asp
Ramirez, O. M., Taube, S. R., & Taube, P. M. (1990). Factors influencing mathematics
attitudes among Mexican American college undergraduates. Hispanic Journal of
Behavioral Sciences, 12(3), 292-298.
Robinson, S. H., & Kubala, T. S. (1999). Critical factors in the placement of community
college mathematics students. Visions: The Journal of Applied Research for the
Florida Association of Community Colleges, 2(2), 45-48.
Roueche, J. E., & Waiwaiole, E. N. (2009). Developmental education: An investment we
cannot afford not to make. Diverse: Issues in Higher Education, 26(16), 16.
Salkind, N. J. (2008). Statistics for people who (think they) hate statistics (3rd ed.). Thousand
Oaks, CA: Sage Publications.
Scott, J. (2009). Chancellor’s office basic skills accountability report. Sacramento, CA:
California Community Colleges. Retrieved October 10, 2012, from
www.cccco.edu/portals/4/Tris/research_Basic_Skills/system.pdf
Stage, F. K. (1996). Setting the context: Psychological theories of learning. Journal of College
Student Development, 37(2), 174-181.
Tai, R. H., Sadler, P. M., & Mintzes, J. J. (2006). Factors influencing college science success.
Journal of College Science Teaching, 36(1), 53-56.
Tinto, V. (1987). Theories of college student departure revisited. In J. C. Smart (Ed.), Higher
education: Handbook of theory and research (Vol. 2; pp. 359-384). New York, NY:
Agathon.
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.).
Chicago, IL: University of Chicago Press.
Tinto, V. (1997). Classrooms as communities: Exploring the educational character of student
persistence. Journal of Higher Education, 68(6), 599-623.
Tinto, V., Goodsell, A., & Russo, P. (1993). Gaining a voice: The impact of collaborative
learning on student experiences in the first year of college. Unpublished manuscript.
Syracuse, NY: Syracuse University.
Umoh, U. J., Eddy, J., & Spaulding, D. J. (1994). Factors related to student retention in
community college developmental education mathematics. Community College Review,
22(2), 37-47.
Waters, J. T., Marzano, R. J., & McNulty, B. (2003). Balanced leadership: What 30 years of
research tells us about the effect of leadership on student achievement (McREL Policy
Brief). Aurora, CO: Mid-continent Research for Education and Learning.
Abstract
Measuring the quality of community college performance is not an easy task, yet some researchers have attempted to use data not only to analyze, but also to solve, one of the toughest problems facing community colleges: low success rates in developmental math courses. Because higher education is multidimensional, it is difficult to observe the entire package of knowledge and skills students possess. Furthermore, it is difficult to compare and contrast institutions, since postsecondary institutions have different educational agendas and student profiles. In the case of community colleges, the diversity of measures employed by individual institutions does not allow policy makers to examine overall school performance.

The purpose of this study was to compare four accountability models that have been proposed for use in community colleges: status, improvement, input-adjusted, and disaggregated models. For the status model, the percentage of credit students who started below transfer level in mathematics, tracked for six years across the most recent three cohorts, was retrieved from the California Community Colleges Student Success Scorecard. From the scorecard, the remedial math success rates of all 112 colleges were computed and compared. For the improvement model, the difference between remedial math success rates over two consecutive years was calculated and examined for each institution. For the disaggregated model, the remedial math success rate of each ethnic group was calculated and examined individually. For the input-adjusted model, multiple regression was used to predict success rates with ethnic group membership as a control variable. The residual (actual success minus predicted success) was standardized to show relative standing among the 112 community colleges, and over- and under-performing institutions were identified to make more meaningful comparisons among institutions.

Several themes are evident in this study.
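The input-adjusted computation described above can be sketched as follows: fit an ordinary least squares regression of institutional success rates on ethnic-group enrollment shares, then standardize the residuals to show each college's relative standing. This is a minimal illustration only — the data are synthetic, and the variable names (`ethnic_shares`, `success_rate`) and the choice of four groups are assumptions, not the study's actual dataset or coding.

```python
import numpy as np

# Synthetic data: each row is one of 112 colleges; columns are the shares
# of four illustrative ethnic groups plus an observed success rate.
rng = np.random.default_rng(0)
n_colleges = 112
ethnic_shares = rng.dirichlet(np.ones(4), size=n_colleges)
group_baseline = np.array([0.45, 0.30, 0.25, 0.40])  # illustrative only
success_rate = ethnic_shares @ group_baseline + rng.normal(0, 0.03, n_colleges)

# Input-adjusted model: regress success on the control variables.
# One share is dropped because the four shares sum to 1 (collinearity).
X = np.column_stack([np.ones(n_colleges), ethnic_shares[:, :-1]])
beta, *_ = np.linalg.lstsq(X, success_rate, rcond=None)
predicted = X @ beta

# Residual = actual success minus predicted success, then standardized
# so each college's relative standing can be compared directly.
residual = success_rate - predicted
z = (residual - residual.mean()) / residual.std()

over_performing = np.flatnonzero(z > 1.0)    # well above expectation
under_performing = np.flatnonzero(z < -1.0)  # well below expectation
```

Because the regression includes an intercept, the residuals average zero by construction; the standardized scores `z` therefore measure how far each college sits above or below the success rate its student profile would predict.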
The four accountability models (status, improvement, disaggregated, and input-adjusted) share some similarities as well as differences. Raw overall success rates are extremely stable from year to year. However, examining raw success rates without considering factors beyond institutions' control, such as ethnicity, is not a fair framework; the improvement, disaggregated, and input-adjusted models were therefore introduced. The improvement model offers a fairer framework for comparing institutions, yet improvement scores are not stable from year to year. The disaggregated and input-adjusted models yield a better solution for comparing student success rates: both the disaggregated success rates and the input-adjusted success rates show strong stability from year to year. A combination of the disaggregated and input-adjusted models is recommended as the best way to measure success in future accountability applications.
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
Input-adjusted transfer scores as an accountability model for California community colleges
The impact of remedial mathematics on the success of African American and Latino male community college students
Examining opportunity-to-learn and success in high school mathematics performance in California under NCLB
Promising practices of California community college mathematics instructors teaching AB 705 accessible courses
Developmental math in California community colleges and the delay to academic success
An exploratory, quantitative study of accreditation actions taken by the Western Association of Schools and Colleges' Accrediting Commission for Community and Junior Colleges Since 2002
Unprepared for college mathematics: an investigation into the attainment of best practices to increase preparation and persistence for first-time California community college freshmen in remedial...
Ready or not? Unprepared for community college mathematics: an exploration into the impact remedial mathematics has on preparation, persistence and educational goal attainment for first-time Cali...
Motivational, parental, and cultural influences on achievement and persistence in basic skills mathematics at the community college
Three essays on the high school to community college STEM pathway
Assessing and addressing random and systematic measurement error in performance indicators of institutional effectiveness in the community college
The effects of a math summer bridge program on college self-efficacy and other student success measures in community college students
College readiness in California high schools: access, opportunities, guidance, and barriers
Oppression of remedial reading community college students and their academic success rates: student perspectives of the unquantified challenges faced
Academic achievement among Hmong students in California: a quantitative and comparative analysis
Examining the faculty implementation of intermediate algebra for statistics: An evaluation study
The effect of site support teams on student achievement in seven northern California schools
Reforming developmental education in math: exploring the promise of self-placement and alternative delivery models
Native Hawaiian student success in the first-year: the impact of college programs and practices
Institutional researchers as agents of organizational learning in hispanic-serving community colleges
Asset Metadata
Creator: Nguyen, Orchid (author)
Core Title: Accountability models in remedial community college mathematics education
School: Rossier School of Education
Degree: Doctor of Education
Degree Program: Education (Leadership)
Publication Date: 04/15/2014
Defense Date: 04/15/2014
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: Community colleges, OAI-PMH Harvest, remedial math
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Hocevar, Dennis (committee chair)
Creator Email: onguyen@lbcc.edu, orchidmath@gmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-378967
Unique identifier: UC11296669
Identifier: etd-NguyenOrch-2362.pdf (filename), usctheses-c3-378967 (legacy record id)
Legacy Identifier: etd-NguyenOrch-2362-0.pdf
Dmrecord: 378967
Document Type: Dissertation
Rights: Nguyen, Orchid
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA