Running Head: OTL AND SUCCESS 1
EXAMINING OPPORTUNITY-TO-LEARN AND SUCCESS IN HIGH SCHOOL
MATHEMATICS PERFORMANCE IN CALIFORNIA UNDER NCLB
by
Daniel Miodrag Gavrilovic
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
August 2013
Copyright 2013 Daniel Miodrag Gavrilovic
Dedication
First and foremost I want to thank the Living God and Lord Jesus Christ who is
Omniscient, Omnipresent, and Omni-benevolent. With God’s power over time, His Grace,
Mercy, Righteousness, and Justice has allowed me to go through life experiences to help teach
me who He is and understand His love, which is beyond my comprehension. He is the Great
Teacher and I still have yet to learn more from Him. He has given me the patience and time to
go through this doctoral experience. Through many frustrations, failures, and successes, He
deserves the Praise. “I have more understanding than all my teachers; for thy testimonies are my
meditation” (Psalm 119:99). First Corinthians 8:1 says, “…Knowledge puffs up, but love builds
up”. God is love and His love was manifested among us in His Son Jesus Christ, who is the
image of the Invisible God (I John 4:8 and Colossians 1:15) and who is God. His Love is
everlasting and I don’t know how else to thank Him but to just give Him the Glory and Praise.
Thank you Lord!
I also want to thank and dedicate this dissertation to my family (Ilija, Nada, Lydia,
Elizabeth, Ilija (Lane), David, Melissa, and Sharon…Gavrilovic) who have been through a lot
with me and have encouraged me and have inspired me in so many ways that only God knows
how, sometimes not even I can understand. Having parents and a family that show love toward
each other and others is a reflection of God’s love toward us. Through prayers and
admonishments my family has been there, even though we are apart geographically. I also want
to thank my extended family (Vujicic’s and Gavrilovic’s – and their extensions) and my family
in Christ with many brothers and sisters in the Lord who have encouraged and challenged me to
help shape who I am today…I guess one can call that a social/spiritual construct. There are
many others who I can say helped me, and to those…thank you. I have much more to learn!
Acknowledgements
My social and spiritual milieu has become more extensive from interacting with
professors, the coursework, and peers while attending the doctoral program at USC. The co-
constructing and challenge of knowledge that occurred between my peers and professors was a
rigorous road but a refining one.
I especially want to thank Dr. Dennis Hocevar for taking the time to work with me and
teach me the ins and outs of quantitative research. The Inquiry course with Dr. Hocevar has
really given me a comfort with both quantitative and qualitative research and the way he
explained it was simple, concise, and effective. I appreciate his knowledge and help on SPSS as
he has taken me to the next level, if not more, of understanding the program and its faculties. His
guidance on not only research, but also his encouragement to spend time with family and friends
was substantial to my finishing this dissertation. I have ALWAYS left meetings with him
feeling energized and comforted that I was on the right track, and that he was exacting about being
clear and true in interpreting data. Thank you, Dr. Hocevar, for knowing my ZPD.
I also would like to thank Dr. Pedro Garcia for serving as a committee member. From his
leadership class to his serving as another lens for my research, he has given me enough challenge to
consider what this study entails and where it can go. Leadership is not always seen forthright but
also done in the background. Thank you for your challenge Dr. Garcia.
I would also like to thank Dr. Angela Hasan for serving as a committee member. She has
opened her heart and mind to letting me observe her online class, giving me a little insight into
what it means to be a teacher educator in a whole new realm, namely, online. Thank you for
your encouragement to finish my research and also for exhorting me to publicize the findings. I
appreciate your input and do not take it lightly.
Lastly, I would like to acknowledge my cohort in the TEMS program (and core classes)
who have challenged me and essentially changed some of my understanding of not only
education but also the world around me. I thank God for our experiences that contributed to my
experience. In addition, many of the professors I have had in my TEP program, specifically Dr.
Halter, the educators I work with and have worked with, and most importantly, the students who
have walked in and out of my classroom for the past ten-plus years were all part of the thoughts
that went into this dissertation and doctoral program. Thank you all and again, I give God the
Glory because He knew it and knows it before I thought of it.
Table of Contents
Dedication 2
Acknowledgements 3
List of Tables 7
List of Figures 10
Abstract 11
Chapter One: Introduction 12
Background of the Problem 12
Statement of the Problem 25
Purpose of Study 26
Organization of the Study 27
Definition of Terms 28
Chapter Two: Literature Review 32
NCLB Theory of Action 32
Curriculum Ideologies 34
Definition of Four Curriculum Ideologies 35
Essentialism and Scholar Academic Ideology 36
Perennialism and Social Efficiency Ideology 37
Progressivism & Learner Centered Ideology 38
Reconstructivism and Social Reconstruction Ideology 39
History of American Education through the Lens of Ideologies 41
Standards Movement 49
Age of Accountability 56
NCLB – No Child Left Behind 56
RTTT – Race To The Top Initiative 60
Performance Data to Measure Schools 61
Value-Added to Measure Schools 64
Chapter Three: Methodology 66
Introduction to Research Design 66
Research Questions 66
Quantitative Research Design 67
Phase One – Trend and Stability of OTL and SS scores 67
Phase Two – Internal Consistency of the OTL and SS Composites 70
Phase Three – Correlation of Input-Unadjusted composites with SCI 70
Phase Four – Reliability/Stability of the Residuals of OTL/SS composites 70
Participants and Setting 72
Instrumentation and Data Collection 73
Limitations of Study 75
Chapter Four: Results 79
Research Question One - Opportunity to Learn Results 80
Research Question Two - Subject Level Success Results 88
Research Question Three - Stability of OTL and SS Scores 94
Research Question Four - Reliability of the Composites of OTL and SS Scores 110
Research Question Five - Correlation of Input-Unadjusted Composites with SCI 116
Research Question Six - Reliability of the Composites of the Residuals of the
OTL and SS scores 119
Research Question Seven - Stability of the Composites of the Residuals of the
OTL and SS Scores 128
Chapter Five: Discussion 140
Summary of the Findings 142
Phase One – Trend and Stability of OTL and SS scores 142
Phase Two – Internal Consistency of the OTL and SS Composites 145
Phase Three – Correlation of Input-Unadjusted composites with SCI 147
Phase Four – Reliability/Stability of the Residuals of OTL/SS composites 148
Discussion 150
Implications 154
Theoretical Implications 154
Practical Implications 155
Recommendations for Further Research 157
References 160
Appendix 178
List of Tables
Table 2.1: CST Cutoff Scores Chart 59
Table 3.1: Calculations of OTL and SS Scores 68
Table 3.2: Computation Formulas with Description for each symbol 71
Table 3.3: Regression and Residual Functions 72
Table 3.4: Types of Threats to Internal Validity 76
Table 3.5: Types of Threats to External Validity 78
Table 4.1: Algebra 1 School Level OTL Descriptive Statistics 81
Table 4.2: Geometry School Level OTL Descriptive Statistics 84
Table 4.3: Algebra 2 School Level OTL Descriptive Statistics 85
Table 4.4: Summative Math School Level OTL Descriptive Statistics 87
Table 4.5: Descriptive Statistics Algebra 1 Success 88
Table 4.6: Descriptive Statistics Geometry Success 90
Table 4.7: Descriptive Statistics Algebra 2 Success 91
Table 4.8: Descriptive Statistics Summative Math Success 93
Table 4.9: Bivariate Correlations for Algebra 1 OTL 95
Table 4.10: Bivariate Correlations for Geometry OTL 97
Table 4.11: Bivariate Correlations for Algebra 2 OTL 99
Table 4.12: Bivariate Correlations for Summative Math OTL 101
Table 4.13: Bivariate Correlations for the Algebra 1 Success Scores 103
Table 4.14: Bivariate Correlations for the Geometry Success Scores 105
Table 4.15: Bivariate Correlations for the Algebra 2 Success Scores 107
Table 4.16: Bivariate Correlations for the Summative Math Success Scores 109
Table 4.17: Reliability Statistics for OTL 111
Table 4.18: Item-Total Statistics for 2004 Composite OTL Scores 112
Table 4.19: Item-Total Statistics for 2008 Composite OTL Scores 112
Table 4.20: Item-Total Statistics for 2011 Composite OTL Scores 113
Table 4.21: Reliability Statistics for SS 114
Table 4.22: Item-Total Statistics for 2004 Composite Success Scores 114
Table 4.23: Item-Total Statistics for 2008 Composite Success Scores 115
Table 4.24: Item-Total Statistics for 2011 Composite Success Scores 115
Table 4.25: Descriptive Statistics for the Unadjusted Composites of
OTL, SS, and SCI 2004 116
Table 4.26: Descriptive Statistics for the Unadjusted Composites of
OTL, SS, and SCI 2008 117
Table 4.27: Descriptive Statistics for the Unadjusted Composites of
OTL, SS, and SCI 2011 117
Table 4.28: Correlations of the Unadjusted Composites of OTL, SS,
and SCI for 2004 118
Table 4.29: Correlations of the Unadjusted Composites of OTL, SS,
and SCI for 2008 118
Table 4.30: Correlations of the Unadjusted Composites of OTL, SS,
and SCI for 2011 119
Table 4.31: Reliability Statistics for the Composites of the
Residuals of OTL Scores 120
Table 4.32: Composites of OTL Residual Scores 2004 – Descriptive Statistics 121
Table 4.33: Composites of OTL Residual Scores 2008 – Descriptive Statistics 121
Table 4.34: Composites of OTL Residual Scores 2011 – Descriptive Statistics 121
Table 4.35: Composites of OTL Residual Scores 2004 – Item Total Statistics 122
Table 4.36: Composites of OTL Residual Scores 2008 – Item Total Statistics 123
Table 4.37: Composites of OTL Residual Scores 2011 – Item Total Statistics 123
Table 4.38: Reliability Statistics for the Composites of the Residuals of SS Scores 124
Table 4.39: Composites of Success Residual Scores 2004 – Descriptive Statistics 125
Table 4.40: Composites of Success Residual Scores 2008 – Descriptive Statistics 125
Table 4.41: Composites of Success Residual Scores 2011 – Descriptive Statistics 125
Table 4.42: Composites of Success Residual Scores 2004 – Item Total Statistics 126
Table 4.43: Composites of Success Residual Scores 2008 – Item Total Statistics 127
Table 4.44: Composites of Success Residual Scores 2011 – Item Total Statistics 127
Table 4.45: Descriptive Statistics for the Composites of OTL Residuals 129
Table 4.46: Bivariate Correlations for the Composites of OTL Residuals 130
Table 4.47: Descriptive Statistics for the Composites of Success Residuals 135
Table 4.48: Bivariate Correlations for the Composites of Success Residuals 136
Table 5.1: Summary of the Percent Increase for OTL and SS 151
List of Figures
Figure 2.1: NCLB Theory of Action (Haertel & Herman, 2005) 31
Figure 2.2: NCLB Theory of Action influenced by curriculum ideologies 32
Figure 4.1a: School Level OTL Algebra 1 Mean Scores Listwise 80
Figure 4.1b: School Level OTL Algebra 1 Mean Scores Variable by Variable 81
Figure 4.2: Graph Means of School Level OTL Geometry Scores 84
Figure 4.3: Graph of the Means of School Level OTL Algebra 2 Scores 86
Figure 4.4: Graph of the Means of School Level OTL Summative Math Scores 87
Figure 4.5: Graph of the Means of Algebra 1 Success Scores 89
Figure 4.6: Graph of the Means of Geometry Success Scores 90
Figure 4.7: Graph of the Means of Algebra 2 Success Scores 92
Figure 4.8: Graph of the Means of Summative Math Success Scores 93
Figure 4.9: Histogram of the Composites of OTL Residuals for 2004 131
Figure 4.10: Histogram of the Composites of OTL Residuals for 2008 132
Figure 4.11: Histogram of the Composites of OTL Residuals for 2011 133
Figure 4.12: Histogram of the Composites of Success Residuals for 2004 137
Figure 4.13: Histogram of the Composites of Success Residuals for 2008 138
Figure 4.14: Histogram of the Composites of Success Residuals for 2011 139
ABSTRACT
The No Child Left Behind Act (NCLB) of 2001 has put many schools under considerable pressure to
meet its high demands. In this quantitative study, the effects that the NCLB has had on
students’ opportunity to learn (OTL) and Subject Level Success (SS) from 2004 to 2012 in 9th,
10th, and 11th grade math coursework (Algebra 1, Geometry, Algebra 2, and Summative Math)
were examined. The California Standards Test (CST) data, which comes from the California
Department of Education (CDE) website, was used to calculate the opportunity to learn and
success rates. Essentially, the unadjusted scores of OTL and SS greatly increased between 2004
and 2012. The magnitude of the OTL changes was very large, ranging from 26 to 49 percent.
Along the same lines, the success changes were very large, ranging from 39 to 70 percent.
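The rate calculations summarized above can be illustrated with a small sketch. The study's exact formulas are given later (Table 3.1) and are not reproduced here; this sketch simply assumes, hypothetically, that OTL is the share of a school's cohort tested in a course and SS the share of tested students scoring proficient or above, with all names and numbers purely illustrative.

```python
# Hypothetical sketch of the OTL and SS rate calculations (assumed
# definitions, not the study's actual Table 3.1 formulas).

def otl_rate(n_tested: int, cohort_size: int) -> float:
    """Opportunity-to-learn: fraction of the cohort tested in the course."""
    return n_tested / cohort_size

def ss_rate(n_proficient: int, n_tested: int) -> float:
    """Subject-level success: fraction of tested students at proficient or above."""
    return n_proficient / n_tested

def percent_change(old: float, new: float) -> float:
    """Percent change in a rate between two years."""
    return (new - old) / old * 100

# Illustrative numbers only:
otl_2004 = otl_rate(n_tested=220, cohort_size=400)   # 0.55
otl_2012 = otl_rate(n_tested=320, cohort_size=400)   # 0.80
change = percent_change(otl_2004, otl_2012)          # about 45.5 percent
```

The same three helpers apply unchanged to the SS rates; only the counts differ.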
Numerous research studies have documented that raw achievement test scores cannot be
used for accountability purposes because they are highly correlated with socioeconomic status.
Input-adjusted scores, introduced by Hocevar and his colleagues at the University of Southern
California, are a promising alternative to value-added measurements. In marked contrast to
value-added measurements, the composite input-adjusted scores for both OTL and SS scores
were internally consistent (α_OTL2011 = 0.79 and α_SS2011 = 0.87) and stable (r_OTL2011 = 0.90
and r_SS2011 = 0.85). Potential uses for input-adjusted scores in practice are discussed.
CHAPTER ONE: INTRODUCTION
Current initiatives and policies in the American educational system are shaped by
historical attempts of the federal government to respond to the needs of society. The problem is
that the federal government attempts to change the current conditions in our schools, whether
fair or unfair. And although we are coming closer to understanding what it
means to educate effectively, we are still only on the path toward understanding what it means to
determine the successful performance of schools, teachers, and students. This chapter begins with
an examination of the historical shifts in educational ideologies in America since the
beginning of the 20th century. Then, a statement of the problem will reveal how current
educational reforms, especially in light of a fast-growing technological era, affect how school,
teacher, and student performance are measured. Finally, how student performance data is used
to evaluate schools and teachers will be discussed.
Background of the Problem
Sputnik
The belief that American schools should be held accountable for what is taught to
students is not a new idea. A number of historical events have taken place to change how
American schools are held accountable, what is taught in our schools, and to what standards
should educators look to for increasing student learning. Looking back, the 1957 launch of the
Russian satellite Sputnik prodded the American government to examine and reconstruct
American education (Jennings, 1987; Bybee, 2007). According to Jennings (1987), the
American perception that the Soviets were inferior to America, both economically and
technologically, changed radically as a result of the launch. In response to the launch of the Sputnik
satellite, the federal government, being compelled to enhance scientific research to launch an
American satellite into space, enacted the National Defense Education Act of 1958, which
mandated schools across the nation to strengthen the instruction of mathematics and science
courses (Bybee & Fuchs, 2006; Jennings, 1987; Stotts, 2011). The National Defense Education
Act (NDEA) of 1958 led to the development of programs in science, mathematics, and foreign
languages in American schools and colleges (Jennings, 1987). Underlying the NDEA was the
concern for national security and around the same time, Massachusetts Senator John F. Kennedy
challenged Dwight D. Eisenhower, the U.S. president at the time of the Sputnik launch, for the
lack of the president’s response to what seemed to be an American national security crisis
(Kennedy, 2006). In his pursuit of the presidency, John F. Kennedy used the issues
surrounding the space race with the Soviets and eventually challenged the nation to respond
by setting a clear goal that was to be fulfilled by the end of the decade; namely, “send a man to
the moon and return him safely” (Bybee, 2007; Kennedy, 2006). Nevertheless, the major issue
that gained ground in political conversations was the need to reform American education and,
specifically, to shift U.S. curriculum toward mathematics and scientific research.
Essentially, the challenge for the federal government, after the uproar of Sputnik, was to
take a closer look into the disparities of the American educational system. Three specific
historical events took place that shifted the government’s role in education: 1) the signing of
the Elementary and Secondary Education Act (ESEA) in 1965, 2) the release of the Coleman
Report (1966), and 3) the release of A Nation at Risk (1983). Each of these events is briefly
described below.
ESEA
Eight years after the launch of Sputnik, President Lyndon Johnson signed the
Elementary and Secondary Education Act (ESEA) in 1965, which was the federal government’s
promise to the nation to support the poorest students as well as students who come from
disadvantaged backgrounds (Edsource, June 2010; Jennings, 1987; Kantor, 1991; Stotts, 2011;
Thomas, 1983). Additionally, according to Kantor (1991), the result of ESEA “was to give the
federal government a distinct new role in defining the nation's educational priorities and to make
federal policy a central focus of the struggles over access to schooling and control of educational
policy that characterized the history of education during the 1960s and early 1970s” (p. 49).
Attached to the ESEA was the promise that federal policymakers would reauthorize (revise and
renew) the law every five to six years (Edsource, June 2010). Eventually, the No Child Left
Behind Act (NCLB) of 2001 became the most recent reauthorization of the ESEA. A brief
discussion of the NCLB will be discussed later in this chapter.
Coleman Report
Less than a year after the signing of ESEA, the Coleman Report (1966), written by James
S. Coleman, maintained that family background contributed more to student achievement than
did schools (Coleman, 1966; Carver, 1975; Ravitch, 1981). However, soon after the report came
out, researchers were quick to discredit Coleman’s assertion (Bowles & Levin, 1968) and
eventually led Coleman to acknowledge that schools do make a difference in student
achievement regardless of family background (Coleman, 1970; Ravitch, 1981). Consequently, a
new report, Public and Private Schools (1981) (which was a major part of a longitudinal study
called “High School and Beyond” (HS&B) financed by the National Center for Education
Statistics) was Coleman’s (1981) attempt to revise his original report and acknowledge that schools
do make a difference in student achievement (Ravitch, 1981). Nonetheless, the claims made by
James Coleman himself (Coleman, 1966; Coleman, 1981), and the debates that transpired as a
result of those reports, were just the beginning of the nation’s concern for education.
A Nation at Risk
According to Schwartz, Robinson, Kirst, and Kirp (2000), the current reform period is
generally acknowledged to have begun with the release of A Nation at Risk (1983). Essentially,
in 1983, the National Commission on Excellence in Education (NCEE) released A Nation at Risk
to shed light on the decline of America’s academic, technological, industrial, mathematical, and
scientific skills and abilities (Stotts, 2011). Like the launch of Sputnik, A Nation at Risk
“focused the nation’s attention to the claims and fears attendant to the Global Economy”
(Johanningmeier, 2010, p. 348) and educational critics and reformers once again called for more
and better mathematics, science, and foreign language in public schools. Additionally, A Nation
at Risk only amplified the federal government’s role in the nation’s educational system.
Furthermore, the National Science Board Commission on Precollege Education in Mathematics,
Science, and Technology (1983) was not far from the truth when it declared that “discrimination
and other disadvantages due to race, gender, ethnic background, language spoken at home or
socioeconomic status and the lingering effects thereof must be eradicated completely from the
American educational system” (p. 14). Thus, in response to the ESEA, Coleman Report, and A
Nation at Risk, a number of state and national organizations were developed in order to hold the
nation’s schools accountable for the success of their students. Organizations, such as the
National Science Foundation (NSF) and the National Council of Teachers of Mathematics
(NCTM), emphasized the need to establish standards and aligned assessments in order to
improve achievement (Johanningmeier, 2010; Stotts, 2011). These organizations, among many
that developed as a result of A Nation at Risk, influenced what is now called the standards
movement.
The Standards Movement
According to Thompson (2001),
“…standards-based reform is fundamentally concerned with equity. It departs radically
from the tracking and sorting carried out by the factory-style school of yore. Instead, it
aims to hold high expectations and provide high levels of support for all students,
teachers, and educational leaders” (p. 1).
Similar to the aftermath of the Sputnik launch, President George H. W. Bush, at a U.S. state
governors meeting held in 1989, supported the development of content standards to ensure that
the “nation’s students would be the first in international competitions” (Johnson, Musial, Hall,
Gollnick, & Dupuis, 2005, p. 391). From that, the National Council of Teachers of Mathematics
(NCTM) developed the Curriculum and Evaluation Standards for School Mathematics (Standards)
and around the same time the National Science Foundation (NSF) developed and funded the
Benchmarks for Science Literacy in 1993 and the National Science Education Standards in 1996
(Leonard & Penick, 2005). Furthermore, with support from the government, other professional
organizations, along with the NCTM and the NSF, the National Research Council (NRC)
developed educational standards for pre-school to 12th grade (P-12) students. Although the
standards before 1987 were more focused on curriculum (Johnson et al., 2005), the standards today
identify what students and teachers should know and be able to do at the end of a course. Simply
put, these performance standards form a standards-based curriculum, aligned with
national, state, and district content, to establish the same expectations for what all
students should learn. Now, the problem is not which standards should be taught in the schools, or
whether the standards themselves convey a discriminatory message. The problem
is how the achievement of these standards is currently being measured. And, if the government
is more involved in subsidizing the educational system, a closer look at the various
accountability systems in place shows that states, districts, and schools face high stakes.
During the Clinton administration, the Goals 2000: Educate America Act was designed to
provide direction to the nation’s educational system, and with it came the push for developing new
academic standards, assessments that measure student progress against those standards, and
accountability systems that would provide annual reports based on school and district
performance (Schwartz & Robinson, 2000). This goal brought challenges to the federal
government, as during the time of Sputnik and the ESEA; however, the focus here shifted from
targeting specific students needing federal protection to targeting the achievement of all students
(Schwartz & Robinson, 2000). But the way to ensure that state schools are doing what it
takes to increase student achievement, in line with implementing standards, is to create
and use standards-based assessments for measuring achievement.
Standards-based Education, High Stakes Testing, and Accountability
Generally, researchers assert that the standards movement challenges educators to clarify
what it is that students should know and be able to do (Ravitch, 1993; Lachat, 1999; Nelson-
Barber, 1999). Lachat (1999) points out that “high standards are as important in education as
they are in the medical profession, in licensing pilots, or in international sports competition such
as the Olympics. They define what is essential for successful performance and encourage people
to strive for the best” (p. 25). In other words, educational content standards provide educators a
coherent framework for what students need to learn in order to become successful. However, the
question is how will student success be measured and according to what standard? Inseparable
to standards-based education are the standards-based assessments. The National Assessment of
Educational Progress (NAEP), first implemented in 1969, is a nationally representative assessment
that monitors students’ knowledge and skills. Influenced by the NCTM around 1990, the NAEP
responded to standards-based education by creating standards-based assessments in order to
assess whether students are learning the national standards and whether teachers and schools are
holding students accountable according to those standards (Wise & Leibbrand, 2001; Nelson-
Barber, 1999).
Additionally, in 1991, Lamar Alexander, the U.S. secretary of Education, persuaded
Congress to establish the National Council on Educational Standards and Testing (NCEST), a
32-member bipartisan body, to consider whether and how to develop new standards and tests
(Ravitch, 1993). As a result, data from these standards-based assessments were used both
nationally and at the state level to monitor student achievement. But what came unforeseen was
how states used those data not only to monitor student achievement, but also to evaluate
districts, schools, and even teachers (Corcoran, 2010; Darling-Hammond, 2000). In the article,
The Authentic Standards Movement and Its Evil Twin, Thompson (2001) argues that the
standards movement is not alone; it has an “evil twin.” What he describes as the evil twin to its
sibling, “the authentic, standards based reform,” is the “high-stakes, standardized, test-based
reform,” and the kernel difference between them lies in their influence on the instructional core of
schooling and on equity issues (Thompson, 2001). In other words, the assessments, along with
the standards, were designed to raise expectations and “provide high levels of support for
all students, teachers, and educational leaders” (p. 1), but the more powerful and visible partner
(the “evil twin”) in fact marginalizes curriculum and instruction and is often misused to evaluate
teachers and schools (Thompson, 2001).
Nevertheless, in light of a fast growing competitive digital world, some researchers claim
that America is falling behind in comparison with other countries (Bybee, 2007; Atkinson &
Mayo, 2010) and the inseparable standards-based education and standards-based assessments are
seen as the remedy. According to a report by the U.S. Department of Education (2006), other
countries have not only followed America’s lead in providing education to all, but are “now
educating more of their citizens to more advanced levels than we are” (p. x). So, in response,
policies issued from the federal and state governments have changed over time in order to hold
states to higher expectations.
Higher Expectations: NCLB, RTTT, and STEM
According to the U.S. Department of Education (2006):
“Every student in the nation should have the opportunity to pursue postsecondary
education. We recommend, therefore, that the U.S. commit to an unprecedented
effort to expand higher education access and success by improving student
preparation and persistence, addressing nonacademic barriers and providing
significant increases in aid to low-income students” (p. 17).
The government’s recommendation seems promising, considering its past contribution
for improving education during the Sputnik era (1957), the Coleman report (1966), and A Nation
at Risk (1983). However, the federal government, under the administration of President George
W. Bush, put extremely high expectations on all states starting in 2001. Namely, President Bush
proposed the No Child Left Behind Act (NCLB) of 2001.
NCLB
In December of 2001, the U.S. Congress approved a reauthorization of the ESEA and
renamed it the No Child Left Behind Act (NCLB). According to Edsource (June, 2010) the
NCLB:
“…conditioned states’ receipt of substantial federal funding on establishing standards,
annually assessing students’ proficiency on those standards, and holding schools
accountable for helping an increasing percentage of students demonstrate proficiency
each year. The federal legislation left it to individual states to determine the focus,
content, and rigor of their K–12 academic content standards” (p. 2).
Additionally, NCLB allowed states to determine what levels of performance a student
must demonstrate to achieve proficiency (Edsource, 2010). At present, the NCLB requires that
every school make adequate yearly progress toward having 100% of students, regardless of race,
gender, ability, or socioeconomic status (SES), demonstrate mastery at or above the
proficiency level on state standards, as measured by state standardized assessments, within 12
years; namely, by the end of the 2013-2014 academic year (Horn, 2005). In essence, the NCLB
is holding states to unrealistically high expectations for all students.
RTTT
What's more, in 2009, President Barack Obama launched the “Educate to Innovate”
campaign, which was a nationwide effort to help move American students from the middle to the
top of the pack in science and math achievement over the next decade. The focus was primarily
for excellence in Science, Technology, Engineering, and Mathematics (STEM) education, which
will be discussed shortly. Nonetheless, from this campaign came the federal program Race To
The Top (RTTT) in 2009. Basically, the RTTT was a federal stimulus program that initially
funded individual states with $4.3 billion from the American Reinvestment and Recovery Act
(ARRA), as an incentive for states to increase student achievement. More recently, President
Obama announced plans to continue the RTTT challenge, requesting $1.35 billion for the
program in his 2011 fiscal year budget.¹ So, according to the U.S. Department of Education (see
footnote 1), the federal government is asking States to advance reforms, through RTTT, around
four specific areas:
1. Adopting standards and assessments that prepare students to succeed in college and the
workplace and to compete in the global economy;
2. Building data systems that measure student growth and success, and inform teachers and
principals about how they can improve instruction;
3. Recruiting, developing, rewarding, and retaining effective teachers and principals,
especially where they are needed most; and
4. Turning around our lowest-achieving schools.
Accordingly, the RTTT awards will go to states that are leading the way with plans for
implementing “coherent, compelling, and comprehensive education reform…and provide
examples for States and local school districts throughout the country to follow as they too are
hard at work on reforms that can transform our schools for decades to come” (U.S. Department
of Education, http://www2.ed.gov/programs/racetothetop/index.html). With STEM
education being at the heart of RTTT, many states face the challenge of moving all schools
toward a STEM approach and, more critically, gearing up all students for achievement in STEM.
However, a discussion of the background of STEM education is necessary before understanding
how schools can receive incentives under RTTT.
STEM Initiative
Recently, a number of initiatives for improving education in America are impacting what
curriculum is taught in our schools (Atkinson & Mayo, 2010; Subotnik, Tai, Rickoff, &
Almarode, 2010; Breiner, Harkness, Johnson, & Koehler, 2012; Williams, 2011). For example, in
2009, the Obama administration launched a nationwide “Educate to Innovate” campaign in
order to push schools to increase the level of rigor in the areas of science, technology,
engineering, and mathematics (STEM) and to move students from the middle to the top of the
pack in STEM-related fields (White House, 2009). In a press release, Obama (2009) stated that
“reaffirming and strengthening America’s role as the world’s engine of scientific discovery and
technological innovation is essential to meeting the challenges of this century” (White House,
2009). In other words, the Obama Administration is seeking to increase the concentration of a
STEM curriculum in order to compete with other countries around the globe in technology.
Alongside the United States, the United Kingdom and South Africa have also made it a
point to coordinate a STEM focus in their schools (Department for Education and Skills, 2006;
Barlex, 2007; Williams, 2011). Williams (2011) posits that both the USA and UK recognize that
“there is a need to be involved in the STEM agenda in order to try and maintain the integrity and
place of technology and engineering education in the movement” (p. 31). Moreover, the shift
toward STEM education underlies the expectations of political leaders in America.
According to a Bureau of Labor Statistics Report titled Employment Projections (2010),
the STEM labor force is expected to grow 19%, almost double the rate for all occupations
(Atkinson & Mayo, 2010). Atkinson and Mayo (2010) also claim that if the United States
were to turn around the economy, it would need to do so largely through science and technology.
Additionally, a press release from The White House (2009) claims that “a growing number of
jobs require STEM skills and America needs a world class STEM workforce to address the grand
challenges of the 21st century, such as developing clean sources of energy that reduce our
dependence on foreign oil and discovering cures for diseases” (para. 13). In other words,
because of the fast growing technological advances in our society, America needs to reform the
present curriculum taught in our schools. According to a number of researchers (Gattie &
Wicklein, 2007; Rogers, 2005; Douglas, Iversen, & Kalyandurg, 2004; Project Lead the Way,
2005, as cited in Williams, 2011; Wicklein, 2006; Barlex, 2008; Williams, 2011), a STEM
approach will:
• Increase interest, improve competence and demonstrate the usefulness of mathematics and
science
• Improve technological literacy which promotes economic advancement
• Improve the quality of student learning experiences
• Prepare students for university engineering courses
• Elevate technology education to a higher academic and technological level
• Improve science and mathematics education in order to increase the flow of STEM people
into the workforce and improve STEM literacy in the population (Williams, 2011, p. 31).
However, reform is not cheap, and in order for the government to push the STEM agenda
nationwide, it needs to find ways to financially support all the programs involved in improving
the science, technology, engineering, and mathematics fields of study.
A number of researchers emphasize that for governments to promote STEM education,
they need to not only provide funds for institutions to move toward STEM best practices, but
also provide incentives for schools and teachers who increase student learning in STEM fields
(Barlex, 2009; Atkinson & Mayo, 2010; Subotnik, Tai, Rickoff, & Almarode, 2010; Hughes &
Bell, 2011). According to Atkinson and Mayo (2010), STEM policy “needs to provide
incentives—both carrots and sticks—for institutions to move to STEM best practice” (p. 10).
Governments have therefore taken some steps toward organizing ways to fund the STEM agenda.
Before President Obama’s “Educate to Innovate” initiative, a 2006 Congressional Research
Service (CRS) Report for Congress stated:
“According to a 2005 Government Accountability Office (GAO) survey of 13 federal
civilian agencies, in FY2004 there were 207 federal education programs designed to
increase the number of students studying in STEM fields and/or improve the quality of
STEM education. About $2.8 billion was appropriated for these programs that year, and
about 71% ($2 billion) of those funds supported 99 programs in two agencies.” (Kuenzi,
Matthews, & Mangan, 2006, p. CRS-18).
Some of the federal agencies, such as the National Institutes of Health (NIH), National Science
Foundation (NSF), National Aeronautics and Space Administration (NASA), U.S. Department of
Education (ED) and the Environmental Protection Agency (EPA), were recruited by the federal
government to help issue funds to districts around the nation in response to President Obama’s
2009 initiative. A more recent venture by the president is the Race to the Top (RTTT) (2009)
grant that promises $4.35 billion to states that commit to improving STEM education (The White
House, 2009). In addition, a coalition of private and public philanthropic organizations
associated with the federal government supports the political agenda. For example, the Bill and Melinda Gates
Foundation (now known as the Gates Foundation) and the Carnegie Corporation of New York
are funding various schools as well as recruiting private sectors to help implement STEM
education at the state level. One high school in San Diego, California (High Tech High), for
instance, has received a $17 million investment since 2000 from the Bill and Melinda Gates
Foundation to enhance STEM-related projects and education (Atkinson & Mayo, 2010).
Looking beyond American borders, two of England’s initiatives, The National STEM
Programme and The Race to the Top: A Review of Government’s Science and Innovation
Policies, have both indicated the need for students to pursue science and mathematics (Barlex,
2009). Sainsbury (2007) claims that there is an increasing demand for STEM skills in the UK
and that, given a 20-year decline in, for example, the number of students taking A-level
Physics, the UK government should step in and provide financial incentives to increase
the number of students studying STEM subjects. In other words, America is not the only nation
whose challenge is to meet the technological growth and competition around the world.
Statement of the Problem
In light of new advances in technology, schools and districts are facing more
challenges than ever before. Technology is now being used to link student and school
data in response to recent accountability measures (NCLB and RTTT) that hold schools
and districts accountable for how students perform. And, since the federal government
has increased its involvement in American education since the ESEA (1965), the Coleman
Report (1966), and A Nation at Risk (1983), schools and districts face the more recent challenge
of carrying out high expectations such as the NCLB, the Race To The Top (RTTT), and STEM.
While there may be some advantages with government intervention, the problem is that
there exists a lack of clarity about how districts, schools, teachers, and students will be measured
to show their compliance to the new initiatives. Essentially, there needs to be a clearer way for
educators and policy makers to communicate how the NCLB, since its implementation, has
affected student achievement. Given that so many states, like California, serve increasingly
diverse student populations (Bennett, 2001; Darling-Hammond, 2001; Garcia, 2002), how will
data be used to determine the effect of NCLB? The belief that STEM education
will improve the quality of student learning experiences and increase student interest and
competence in math and science education (Williams, 2011), as mentioned above, stands out
against the majority of students in the U.S. who still fail to reach adequate levels of proficiency
in math and science (Kuenzi, Matthews, & Mangan, 2006).
Additionally, Kuenzi et al. (2006) posit that a large majority of secondary students are
taught by teachers who lack adequate subject matter knowledge. Darling-Hammond, LaFors,
and Snyder (2001) posit that “teachers who lack certification in their field and those who have
entered through short-term alternative certification programs are less effective in developing
student learning than those who have a full program of teacher education” (p. 2). In contrast,
students who are taught by teachers with certification in their teaching field achieve at higher
levels and are less likely to drop out (Darling-Hammond et al., 2001); in reality, those students
are given more opportunities to learn in math and science
classrooms (Oakes, 1990; Lachat, 1999). Research has shown that low-income and minority
students have less contact with the best-qualified science and mathematics teachers (Oakes,
1986; Oakes, 1990; Lachat, 1999; Kuenzi, Matthews, & Mangan, 2006; Atkinson & Mayo,
2010). Yet, even with recent advances in technology that link student data to teachers, research
has not determined whether the NCLB is leading schools, teachers, and students to greater
success. Hence, this study will document the effect that NCLB has on student success.
Purpose of the Study
The purpose of this study is to determine whether the NCLB has increased the
mathematics performance of 9th, 10th, and 11th grade students in California schools. Specifically,
this study will seek to utilize student math scores from the California Standards Test (CST),
controlling for a number of factors, and determine whether the NCLB has led to greater
opportunities to learn and success for students in math coursework. Accordingly, the primary
over-arching research questions of this study are as follows:
1. Has NCLB led to greater opportunity-to-learn (OTL) in high school math coursework
(Algebra 1, Geometry, Algebra 2, and Summative) in California?
2. Has NCLB led to greater success in math coursework in California?
3. How stable are the eight OTL and SS scores for Algebra 1, Geometry, Algebra
2, and Summative Math?
4. How reliable, or internally consistent, are the math composites of OTL and SS?
5. What is the correlation between the SCI and OTL status composite scores and SS
scores?
6. What are the descriptive characteristics and reliability coefficients (i.e. internal
consistency) of the composites of the residuals of the OTL and SS scores?
7. How stable are the composites of the residuals of the OTL and SS scores?
Organization of the Study
Chapter 1 provides an introduction, the background of the problem, the statement of the
problem, the purpose of the study, and the key definitions of terms used for this particular study.
Chapter 2 will begin with a brief discussion of the NCLB Theory of Action along with the
author’s addition to the NCLB Theory of Action. In addition, Chapter 2 will discuss the
historical shifts in curriculum ideologies and how specific ideologies influence policies in
education. Then, Chapter 2 will close with a discussion of the age of accountability and how
recent accountability systems are used to evaluate schools and teachers. Chapter 3
will describe the methodologies used to address the research questions of this study. Chapter 4
will detail the results of the study, and lastly, Chapter 5 concludes the study with a summary of
the findings, a discussion of the results, implications of the results, and recommendations for
further research.
Definitions of Terms
Academic Performance Index (API). The API is a single number, ranging from a low of
200 to a high of 1000, which reflects a school’s, an LEA’s, or a subgroup’s performance level,
based on the results of statewide testing. Its purpose is to measure the academic performance
and growth of schools. The API was established by the PSAA, a landmark state law passed in
1999 that created a new academic accountability system for K-12 public education in California.
The API is calculated by converting a student’s performance on statewide assessments across
multiple content areas into points on the API scale. These points are then averaged across all
students and all tests. The result is the API. An API is calculated for schools, LEAs, and for
each numerically significant subgroup of students at a school or an LEA (California Department
of Education, 2012a).
API Improvement Scores. The API improvement scores reflect the value of the difference
between the Base API and the Growth API for a school, district, or LEA (California Department
of Education, 2012a).
Adequate Yearly Progress (AYP) Proficiency Scores. AYP proficiency scores represent
the ratio of students scoring proficient or advanced on the state standardized tests in ELA and
math to the total number of students tested in a given year (California Department of Education,
2012b).
Adjusted Achievement (Assessment) Indicators. Adjusted achievement (assessment)
indicators are variables used for evaluating performance, which account for various factors
contributing to differences in student test scores that may or may not be under the control of the
teacher, school, district, or LEA.
Adjusted Normal Curve Equivalent Scores. Adjusted normal curve equivalent (ANCE)
scores are scores that have been scaled in such a way that they have a normal distribution, with a
mean of 50 and a standard deviation of 21.06 in the normative sample for a specific grade.
ANCE scores range from one to 99. They appear similar to percentile scores, but they have the
advantage of being based on an equal interval scale. That is, the difference between two
successive scores has the same meaning throughout the scale. They are useful for making
meaningful comparisons between different achievement tests and for statistical computations,
such as determining an average score for a group of students. In this study, ANCE scores are
used for school level comparison (Hocevar, 2010; Hocevar et al., 2008).
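As an illustration only (the CST reporting pipeline itself is not specified here), the stated parameters above (mean 50, standard deviation 21.06, range 1 to 99) imply a percentile-to-NCE conversion that can be sketched in Python; the function name and the clamping step are the author’s assumptions for this sketch:

```python
from statistics import NormalDist

def percentile_to_nce(percentile: float) -> float:
    """Convert a percentile rank (1-99) to a normal curve equivalent score.

    Uses the parameters stated above: mean 50, standard deviation 21.06.
    """
    z = NormalDist().inv_cdf(percentile / 100)  # z-score at that percentile
    nce = 50 + 21.06 * z                        # rescale to the NCE metric
    return min(99.0, max(1.0, nce))             # NCE scores range from 1 to 99

# The standard deviation of 21.06 is chosen so that percentiles 1, 50,
# and 99 map to NCE scores of approximately 1, 50, and 99.
print(round(percentile_to_nce(50), 1))  # 50.0
print(round(percentile_to_nce(99), 1))  # 99.0
```

The equal-interval property mentioned above follows from this construction: unlike percentile ranks, differences between NCE scores mean the same thing anywhere on the scale, which is what makes averaging them defensible.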
Growth Models. Growth models generally refer to models of education accountability
that measure progress by tracking the achievement scores of the same students from one year to
the next, with the intent of determining whether or not, on average, the students made progress
(Goldschmidt et al., 2005).
Internal Consistency. Internal consistency is the extent of inter-item correlation
(American Psychological Association, 1999; Lissitz & Samuelson, 2007).
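Internal consistency is most commonly reported as Cronbach’s alpha; the sources cited here do not give a formula, but the standard coefficient can be sketched as follows (a minimal illustration, with hypothetical names and data):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a students-by-items score matrix.

    scores[i][j] is student i's score on item j; alpha rises as the
    items correlate more strongly with one another.
    """
    k = len(scores[0])  # number of items

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(var([row[j] for row in scores]) for j in range(k))
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Perfectly inter-correlated items yield alpha = 1.0:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

In this study’s terms, a high alpha across the course-level OTL or SS scores would indicate that they behave as a coherent composite rather than as unrelated measures.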
Opportunity-to-learn (OTL). OTL is equal access and opportunity to effective teachers,
curriculum, and scheduling or high-ability tracking, in order to learn standards (Goals 2000:
Educate America Act). OTL may also be used as a measurement tool for evaluation purposes
(Pub. L. No. 103-227, § 3 [7]) (as cited in Black, 2012).
Reliability. Reliability is a measure’s ability to repeatedly yield consistent results
(American Psychological Association, 1999; Choi, Goldschmidt, & Yamashiro, 2005).
Schools Characteristics Index (SCI). The SCI is a composite measure of a school’s
background characteristics. With the fixed comparison bands method, schools are grouped on
the basis of their location within fixed ranges of the value of the SCI (California Department of
Education, 2012c).
Similar Schools Ranks. A school’s similar schools rank compares its API to the APIs of
100 other schools of the same type that have similar opportunities and challenges (California
Department of Education, 2012a).
Similar Schools Scores. Using the school characteristics index (SCI), a composite of the
school’s demographic characteristics, and the Base API, the similar schools scores are calculated
among a comparison group of 100 similar schools for the varying school types – elementary,
middle, and high school (California Department of Education, 2012a).
Stability. Stability is the degree to which a test measures the same thing at different times
or in different situations (Kurpius & Stafford, 2006). In other words, it is the extent to which the
same results are readily reproduced by an accountability measure. For this study, stability is the
measure of consistency for the OTL and SS scores as correlated from 2004 to 2012.
Status Models. A status model (such as AYP under NCLB) takes a snapshot of a
subgroup or school’s level of student proficiency at one point in time (or an average of two or
more points in time) and often compares that proficiency level with an established target
(Goldschmidt et al., 2005).
Success. For this study, success in a course is defined as a score considered basic or
above on the CST; the basic scale-score range begins at 300.
Unadjusted Achievement (Assessment) Indicators. Unadjusted achievement indicators are
those used in the accountability systems, which do not take into account or control for variations
in student, teacher, or school characteristics that may impede or accelerate academic gain.
Valid. A valid measure is one that measures what it purports to measure (American
Psychological Association, 1999; Choi et al., 2005). In this study, the researcher examined the
information value that is generated from varying accountability indicators and the extent to
which these measures lead to valid conclusions of teacher, grade-level team, and school
effectiveness.
Value-added Models. Value-added models are one type of growth model in which states
or districts use student background characteristics and/or prior achievement and other data as
statistical controls in order to isolate the specific effects of a particular school, program, or
teacher on student academic progress (Goldschmidt et al., 2005).
CHAPTER TWO: LITERATURE REVIEW
In an effort to explore the effects that the No Child Left Behind Act (NCLB) has on
education today, the review of the literature will proceed through two lenses:
first, the NCLB theory of action, proposed by Edward Haertel, a professor at Stanford
University, and second, four curriculum ideologies, discussed below, that have
identifiable roots in influencing American education and various accountability systems.
NCLB Theory of Action
Essentially, the foundation behind NCLB’s theory of action is the academic content
standards. A number of studies claim that standards do improve learning and instruction
(Lachat, 1999; Lauer, Snow, Martin-Glenn, Van Buhler, Stoutemyer, & Snow-Renner, 2005).
Thus, according to Haertel and Herman (2005), the NCLB “continues the policy assumption that
being explicit about standards for student performance and measuring student progress toward
them, coupled with sanctions and incentives, will leverage the improvement of student learning”
(p. 21). In other words, the theory’s foundation utilizes the standards, which fuse into the
curriculum and instructional practices in classrooms. The state-adopted content standards are
assessed two ways: internally and externally. Internally, the standards are assessed at the school
level, using curriculum, or other academic materials, aligned to the content standards.
Externally, the content standards are assessed in the form of state adopted standards-based
assessments (i.e., the California Standards Test). Those assessments yield two kinds of results:
1) student scores, which are assigned numerical values relative to the proficiency levels set by
the state, and 2) evidence from classroom assessments, aligned to the content standards, of
which standards students know and are able to do. Feedback from the two assessments is then
used to improve learning opportunities for students and increase their attainment of standards (Haertel &
Herman, 2005). Also, the results of the assessments are used to gauge strengths and weaknesses
of students’ knowledge of standards, gauge the strengths and weaknesses of instructional
practices, and identify what strategic actions need to take place for providing necessary resources
for students with special needs. Figure 2.1 represents a flowchart of the NCLB theory of action
proposed by Haertel and Herman (2005). However, the approach for this literature review,
represented in Figure 2.2, points to history and the ideological influences that led up to the
NCLB theory of action. Nevertheless, the NCLB theory of action lays the foundation for this
particular study.
Figure 2.1.
NCLB Theory of Action (Haertel & Herman, 2005)
Figure 2.2.
NCLB Theory of Action influenced by Curriculum Ideologies
Note. The curriculum ideologies are the author’s revision to the NCLB Theory of Action of
Haertel & Herman, 2005.
Curriculum Ideologies
There are four ideologies, which will be discussed below, that are behind what
curriculum is taught in our schools and according to Schiro (2008), each of the four visions of
curriculum “embodies distinct beliefs about the type of knowledge that should be taught in
schools, the inherent nature of children, what school learning consists of, how teachers should
instruct children, and how children should be assessed” (p. 2). That latter portion, “how children
should be assessed,” has been the topic of recent debate among educational researchers. Under
the recent accountability systems, like the NCLB and Race to the Top (RTTT), ways in which
student progress is measured is also a discussion of educators and policy makers.
The ongoing debate over whether it is more important to teach knowledge for its own
sake or to teach children the necessary skills and strategies for critically analyzing and
reconstructing society in the future is a challenge for many scholars and policy makers (Moore,
2000; Schiro, 2008).
2000; Schiro, 2008). An abundance of research has been centered on curriculum ideologies and
on the history of American education (Bobbitt 1918; Dewey & Dewey, 1915; Counts, 1932a;
Counts, 1932b; Tyler, 1949; Bruner, 1963; Freire, 1970; Morgan, 1974; Cohen, 1976).
Currently, the ideologies of what American schools ought to look like pose a challenge for
educators, who may actually embrace a very different ideology of curriculum. Moreover,
teachers have been expected to take various viewpoints of education on trust without critically
examining them (Pedretti & Hodson, 1995), which only adds to the challenge. Because of the controversy
over what American education should look like, today’s accountability systems, NCLB and
RTTT, are driving the way schools, students, and teachers are being held accountable.
Accordingly, this review of the literature will start by discussing the four curriculum ideologies
and then a glimpse into the history of American education through the lens of those ideologies.
Furthermore, various policies and laws, which pertain to both civil and educational issues,
implemented throughout the 20th century will be discussed. Subsequently, the discussion will
turn to how recent policies and initiatives have affected schools, teachers, and students and the
way they are being measured.
Definition of the Four Curriculum Ideologies
In effect, the term ideology, as used in Schiro (2008), is defined as “a collection of ideas,
a comprehensive vision, a way of looking at things, or a worldview that embodies the way a
person or a group of people believes the world should be organized and function” (p. 8). Schiro
adds that “the conceptual systems people use are often tied to the role in which they see
themselves functioning” (p. 10). Furthermore, in his book, Schiro uses the word ideology rather
than philosophy as a way to distinguish between motives that underlie behavior and articulated
beliefs. Nevertheless, the following four curriculum ideologies and their emergence in the
history of American education will be discussed. The four curriculum ideologies as Schiro
describes are: Scholar Academic Ideology, Social Efficiency Ideology, Learner Centered
Ideology, and the Social Reconstruction Ideology. Past research referred to the four ideologies
as: Essentialism, Perennialism, Progressivism, and Reconstructivism, respectively (Apps, 1973).
Ultimately, providing a historical landscape of American education, viewed through the lens of
these ideologies, may shed light on the current viewpoints that situate the federal policies that are
affecting states, schools, teachers, and students.
Essentialism & Scholar Academic Ideology
From about 1890 to 1916, during the time of the industrial revolution, much of the
curriculum taught in the schools held an essentialist philosophy. Essentialism, first introduced
by William C. Bagley in 1938 (Apps, 1973), is a philosophy that claims that the “essential
elements of education are available from historical and contemporary knowledge” and the
“content of education is derived from the physical world, including mathematics and the natural
sciences” (Apps, 1973, p. 21). In essence, the approach of an essentialist philosophy is to
educate others using the basic “Three R’s” (reading, writing, and arithmetic). Apps (1973) posits
that essentialists see their role as one that preserves and passes on the classical elements of
knowledge to succeeding generations with a prime emphasis on subject matter. A more recent
term coined to depict the same meaning as essentialism is the scholar academic ideology
(Schiro, 2008). For the purposes of this paper, the two terms will be used interchangeably. Schiro
(2008) posits that the purpose of education, for the scholar academic ideology, is to help children
learn and accumulate the knowledge that has been collected and organized into the academic
disciplines over the centuries. In addition, the aim of education for the scholar academic
ideology, or essentialist, is to extend information from their disciplines to succeeding generations
(Schiro, 2008). In other words, because the essentialist holds the classical elements of
knowledge to utmost importance, the curriculum taught in our schools should mirror that
philosophy in order to acculturate children into society in such a way that they become good
citizens and culturally literate adults (Schiro, 2008).
However, according to Freire (1970/2006), the danger of demanding that schools take only
an essentialist point of view is that students become “containers” and “receptacles” to be filled
by the teacher and “the more completely he fills the receptacles, the better a teacher he is. The
more meekly the receptacles permit themselves to be filled, the better students they are” (Freire,
1970/2006, p. 72). In other words, for the essentialist, students sit passively and are expected to
assimilate all the knowledge passed on from the teacher and then are expected to transfer all that
knowledge for practical purposes, like pursuing a career. The potential problem with
the scholar academic point of view is that it can restrict the curriculum to a specific group of
students (i.e., science- and mathematically-prone, college-bound students): it provides those
students the practical applications they need to produce quality work in their careers, but
leaves out the students who lack the financial, familial, and academic support that
students from more affluent areas have (Oakes, 1985; Oakes, 1990).
Perennialism & Social Efficiency Ideology
Perennialism is similar to essentialism in that it holds subject matter in high
regard. Unlike essentialism, however, Apps (1973) suggests that “perennialists subscribe to the
view that the basic beliefs and knowledge of the ancient culture have as much application today
as they did thousands of years ago” (p. 21) and that the focus of learning lies in activities
designed to discipline the mind. Although perennialism is also thought of as the parent
philosophy of essentialism, the principle idea is that “subject matter of a disciplinary and
spiritual nature such as the content of mathematics, languages, logic, great books, and doctrines
must be studied whether used as such or not” (Johnson, 1969, p. 322). Furthermore, many
perennialists believe that education should be directed toward the intellectually gifted and
talented, following Plato’s suggestion, and that the less gifted should be provided vocational
training (Apps, 1973). All in all, the focus for the perennialist, similar to the essentialist, is that concepts and
subject matters from thousands of years ago can be applied to today’s society.
Another term used in lieu of perennialism is the social efficiency ideology (Schiro, 2008).
Schiro (2008) posits that the social efficiency ideology, first coined by Franklin Bobbitt in 1913,
views “education as a social process that perpetuates existing social functions” and holds that
“society is defined in terms of the ‘affairs of the mature world’ and not in terms of the affairs of
youth” (Bobbitt, 1918, p. 207, as cited in Schiro, 2008, p. 61). In essence, the view of the social
efficiency ideology is to educate people for the development of a future society that would be
superior to the existing one and to prepare the youth to live a meaningful adult life and not as the
proponent of mediocrity (Schiro, 2008).
Progressivism & Learner Centered Ideology
Progressivism (in this section, the terms progressivism, learner-centered, and child-
centered will be used interchangeably) takes a very different perspective than the scholar
academic and the social efficiency ideologies. Growing out of the “pragmatism of Charles S.
Peirce and William James, and further developed for education by John Dewey” (Apps, 1973, p.
22), the primary focus of the progressivist is to use man’s experience as the basis for knowledge;
the needs and interests of the learners, not the interests of members of society (i.e., teachers,
principals, school subjects, parents, or politicians), determine the school program (Apps,
1973; Schiro, 2008). Moore (2000) states that progressivism, or the child-centered
approach, sees education as a “drawing-out” rather than a putting in of knowledge; or in other
words, getting the “society out of the child” rather than getting the “society out of knowledge”
(p. 20), as is the approach of essentialism and perennialism. In his Pedagogic Creed, Dewey
(1897) states that:
“The child's own instincts and powers furnish the material and give the starting point for
all education…” and “…if we eliminate the social factor from the child we are left only
with an abstraction; if we eliminate the individual factor from society, we are left only
with an inert and lifeless mass. Education, therefore, must begin with a psychological
insight into the child's capacities, interests, and habits” (pp. 360-362).
In other words, Dewey’s writing takes on a progressivist view of education in that the child
should be the center of what is taught in schools. Unlike the essentialist and the perennialist
points of view, which deposit knowledge into the child, the progressivist point of view guides
and empowers the child to be able to construct their own meaning of society based on the child’s
own interests.
Reconstructivism & Social Reconstruction Ideology
According to the Oxford Dictionary (retrieved July 12, 2012, from http://oxforddictionaries.com/definition/english/reconstruction), the term reconstruction is “a thing that has been rebuilt after being damaged or destroyed.” In other words, the social reconstruction ideology begins with the assumption that the present society is unhealthy and that the survival of our
society is threatened by many problems but something can be done to keep society from
destroying itself (Schiro, 2008). The problems, according to Schiro (2008), include:
“…racism, war, sexism, poverty, pollution, worker exploitation, global warming, crime,
political corruption, population explosion, energy shortage, illiteracy, inadequate health
care, and unemployment. Underlying many of these problems are deep social
structures—many Eurocentric conceptions of knowledge, culture, and values—that
through the school’s hidden curriculum subtly shape student beliefs and behavior in such
a way that they, as both students and future adults, will contribute to the continuation and
worsening of these problems. If these problems are not resolved, they will threaten the
survival of our society” (p. 133).
In the article, Genres of Research in Multicultural Education, Bennett (2001) claims that
since American history has been deeply rooted in institutional and cultural racism and because
America is a culturally diverse society, the overall purpose for education is to develop
knowledge and awareness of societal inequities in order to bring about change. In essence, a
social reconstructivist views education as the channel through which societal problems can be
solved and can be used as the power to educate people to analyze the current conditions of
society and act so as to bring the visions for change into existence (Schiro, 2008). The objective
under social reconstruction ideology, for American education, is to abolish the injustices and
inequities rooted in our American culture. Accordingly, because inequities are rooted in
American education, the literature on how the aforementioned curriculum ideologies have shifted and shaped American education and policy will now be discussed.
History of American Education Through Ideological Lens
Currently, many schools across America are being held accountable under federal and state policies and initiatives in order to demonstrate student progress and achievement. Namely, the No Child Left Behind (NCLB) Act of 2001 is the country’s most recent mandate, requiring all students, despite their diversity, to reach proficiency by the year 2014. However, the U.S. government establishing federal policies is not new to education.
Since the growth of the country, in the early 1900s, federal policies have changed and shifted in
relation to the viewpoints of ideologies (Schiro, 2008). Essentially, with increasing pressure
from the NCLB, schools, teachers, and students are required to exhibit the performance
necessary to meet the state and federal mandates. In order to better understand how the NCLB is
affecting schools today, a deeper look into the literature about the history of American education
may help us recognize where we are today.
Late 1800s to Early 1900s – Growth of the Country
At the turn of the century, American schools experienced an influx of immigrants from
Europe. Tyack (1974) notes that in December 1908, the U.S. Senate Immigration Commission
tallied more than sixty nationalities in the nation and more than 58% of all students had fathers
who were born abroad. Tyack (1974) further states that some of the largest cities in the U.S.,
such as New York, Chicago, Boston, Cleveland, and San Francisco, had alarming percentages of
immigrant students: 72%, 67%, 64%, 60%, and 58%, respectively. New York alone, as the major port of entry for immigrants, received tens of thousands of newcomers, and in only 15 years, from 1899 to 1914, saw a 60% increase in school enrollment (Tyack, 1974).
Consequently, because schools were unprepared (i.e., lacking space) to admit students, 60 to 70 thousand children were denied admission in 1905 (Tyack, 1974). Seeing this
was a problem, a curious reporter, Adele Marie Shaw, visited 25 New York schools to observe
how the school system was managing the influx of children (Tyack, 1974). Concerned about
how the U.S. school system was to “Americanize” the school population, Shaw suggested that
the school system hold a dichotomous position. Shaw (as cited in Tyack, 1974) states:
“‘To educate the children of our adoption we must at the same time educate their
families, and in a measure the public school must be to them family as well as school.’ If
children come to school dirty, the teacher must teach them how to keep clean. If children
cannot learn because they are hungry, the school must provide cheap lunches and teach
proper nutrition. If students come to class with physical defects, then the system must
provide free medical inspection. If the child learns to be delinquent on the streets, then
the schools should provide playgrounds and vacation schools. If youth or their parents
have no place to study or find recreation in the evenings, then the school should become a
community center after class hours. If grown boys and girls arrive without a knowledge
of English, then special ‘steamer classes’ should be created for them. By and large, the
newcomers flocked to take advantage of the voluntary services as soon as they were
provided early in this century” (pp. 231-232).
In other words, it was the job of the schools not only to provide students with basic daily provisions (i.e., food, water, nutrition, medical care, etc.), but also to teach children the essentials of knowledge. Echoing the above discussion of essentialism, American education sought to
acculturate children into society in such a way that they become good citizens and culturally
literate adults (Schiro, 2008). That is, education, at the turn of the century, provided an
essentialist education, where knowledge was taught in order to teach others to become good
citizens in society.
More than being rooted only in essentialism, the educational system also took on the perennialist viewpoint—that “education be directed toward the intellectually gifted and talented, following Plato’s suggestion, and the less gifted should be provided vocational training” (Apps, 1934).
1934). Since a language barrier was common for many of the immigrants attending the schools,
many were taught vocational skills needed for the purpose of working in the growing
industrialized factories (Leland & Kasten, 2002). Education, essentially, “provided a vehicle for
the efforts of one class to civilize another and thereby ensure that society would remain tolerable,
orderly, and safe’’ (Katz, 1971, p. 9, as cited in Leland & Kasten, 2002, p. 7). In addition,
White, Scotter, Haroonian, and Davis (as cited in Parkay, Hass, & Anctil, 2010) asserted that
they “…believe that only enlightened individuals are capable of carrying out the duties of a
citizen. This ability is acquired through an education based on the powerful ideas that, in a
democracy, must be accessible to all on equitable terms” (p. 27). But, because traditional
curriculum was primarily Anglo Eurocentric in scope, knowledge and perspectives that have
been previously ignored or suppressed (Bennett, 2001) impelled progressive reformers, such as
John Dewey (1859-1952) and William Heard Kilpatrick (1871-1965), to change the purpose and
direction of American education.
1917 to the Late 1950s – Progressivism and Brown vs. Board of Education
Essentially, progressivist ideologies first came into prominence during the late 1890s but moved to the forefront of educational thought as researchers saw the dangers of the factory-based model of education. In addressing the field of science, Dewey (1910) claimed that too
much emphasis was on facts and not enough emphasis on thinking and the attitude of the mind.
Dewey adds that the only prerogative of man was to actively participate in the making of
knowledge and that this participation warrants man’s freedom. Dewey (1916) strongly contested
the essentialist viewpoint and emphasized that the ignorant basis for an essentialist education,
like the factory model, was men and women “doing things which serve ends unrecognized by
those engaged in them, carried on under the direction of others for the sake of pecuniary reward”
(p. 88). In other words, the purpose for education, during the industrial revolution, was for the
benefit of others, which inherently revealed the disparity of social classes. Around 1919, the
Progressive Education Association was established to act as an umbrella organization for many
Learner Centered educators working in private and public schools (Schiro, 2008). During that
time, studies showed that students from learner-centered schools performed better in college
than students from more traditional schools (Aiken, 1942 as cited in Schiro, 2008, p. 114). But,
the problem was that schools were not educating all students equally.
Although progressivism centered on the learner, students of color, minorities, and other students from disadvantaged backgrounds were not receiving as equitable an education as their White counterparts. With the growth of the civil rights movement in the first half of the century, the laws and court rulings then in effect, namely the Plessy vs. Ferguson case of 1896, were only nominal attempts to give students of color an education. In brief, the
Plessy vs. Ferguson case established the “separate but equal” doctrine, which allowed students of color to be educated, but in a separate environment. According to Nichols (2005), Plessy “was the gateway to a line of cases that stressed the government’s role in preserving the separate, but equal, station of blacks in American society” (p. 154). However, the “separate but equal” doctrine was
challenged by the National Association for the Advancement of Colored People (NAACP)
around 1930. Essentially, the NAACP “relied on the ‘separate but equal’ doctrine to force states
to provide better educational opportunities for black students” (Nichols, 2005, p.156).
Following the Plessy vs. Ferguson case (Plessy), the Brown vs. Board of Education court ruling in the 1950s debunked the Plessy case as inherently violating the 14th Amendment. In essence, these cases were attempts by the federal government to provide all students, regardless of race, ethnicity, and social class, an equitable educational experience. However, in
reality, not all students received an equal education. In response to the tragic racial intolerance
of Germany and Italy, Dewey (1938) questioned whether the American democracy was free from
racial intolerance:
“Are we entirely free from that racial intolerance, so that we can pride ourselves upon
having achieved a complete democracy? Our treatment of the Negroes, anti-Semitism,
the growing (at least I fear it is growing) serious opposition to the alien immigrant within
our gates, is, I think, a sufficient answer to that question. Here, in relation to education,
we have a problem; what are our schools doing to cultivate not merely passive toleration
that will put up with people of different racial birth or different colored skin, but what are
our schools doing positively and aggressively and constructively to cultivate
understanding and goodwill which are essential to democratic society” (p. 98)?
In other words, our own country was facing racial calamities that were destroying the learning and education of all students, and it was the schools’ job to change society.
Early 1950s to 1967 – Brown vs. Board of Education to the Sputnik
In 1954, the Brown vs. Board of Education case overturned the Plessy vs. Ferguson
(1896) case, declaring that establishing separate public schools for Black and White students was
unconstitutional and damaging (Bickel, 1998; Nichols, 2005). However, according to Edelman
(1973), ten years after the Brown decision, 99% of the nation’s Black school children were still
in segregated schools. States were thus very slow in carrying out the desegregation process; in fact, some researchers have shown that smaller communities within larger cities were separated by socioeconomic status—including lower-performing and higher-performing schools—up until the 1990s, when desegregation began to turn back (Stephan, 1980; Orfield & Yun, 1999; Holme, 2002). Nevertheless, in 1964, Congress passed the Civil Rights Act, which outlawed
discrimination against race, color, religion, or national origin (Harvard Law Review, 1965). In
Title IV of the Civil Rights Act, the government was to cut off any federal funds to local school
districts that were not in compliance with the Brown decision (Stephan, 1980).
Meanwhile, the same ideologies that had prevailed during the growth of the country again became prominent when Russia launched the first satellite, Sputnik, in 1957. In fear of losing the space race against Russia, America changed the goal of education from progressivism back to essentialism and perennialism. Again, America set out to change the way
curriculum was taught in American schools and pushed for a strong scholar academic ideology
(Schiro, 2008; Kelly, 2004). The federal government encouraged schools to carry out a strong math and science curriculum in order to give children the essential tools required to compete with other countries, such as Russia, in scientific and mathematical advances. Yet this endeavor paralleled an earlier outcome—students of color and students with disadvantages had limited access to the same type of curriculum. Moreover, although states gradually aimed to desegregate schools as a result of the Civil Rights Act, the forms that desegregation took “were often antithetical to achieving equal nondiscriminatory educational systems” (Stephan, 1980, p. 18). Stephan (1980) also writes that Blacks were bussed to different
schools more than Whites because it was usually the Black schools that were closed. Consequently, Black students were able to take advantage of the “Sputnik education” only when given access. But, in most cases, access was denied.
In the meantime, with the federal government’s increasing involvement in schools due to the space race, educational reports emerged that identified the prevailing gap in student achievement and success. In 1966, the publication of the landmark commissioned report,
Equality of Educational Opportunity of 1966, also known as the Coleman Report (Coleman,
1966), argued that the lack of student achievement was more attributed to the family background
than the schools (Carver, 1975; Coleman, 1970; Ravitch, 1981).
1968 to 1974 – Open Education and Back to Progressivism
Noddings and Enright (1983) posit that American education is notorious for its
pendulum-like movement that swings back and forth from different ideological movements. As
mentioned above, the progressivist point of view sees education as coming out of the child (i.e.
interests, needs, and capabilities) rather than the imparting of knowledge into the child. In the
late 1960s, the progressivist movement again surfaced in education, like during the 1920s. Sloan
(1974), like former progressivists John Dewey and William Kilpatrick, summarizes that the
child-centered ideology proposes that children learn at different rates, in different ways, from
different activities, and with different people. Sloan adds that the school experience is the total process and not just the segmented efforts of teaching children. Indeed, during the 1960s,
the National Science Foundation (NSF) funded the Elementary Science Study, which sparked the
resurgence of the child-centered ideology (Schiro, 2008). Using examples of child-centered schools in Britain, the NSF used the project method, promoted by Kilpatrick in the 1920s, as a means to educate children and consequently resurrected progressivism under the name “open education” (Schiro, 2008).
As open education classrooms were being developed, the term “open education” came to describe classrooms as open spaces, places of team teaching, schools without walls, and settings promoting individually guided education (IGE) (Noddings & Enright, 1983). According to Barth (1977),
“Open education had to be created because there were many needs demanding
satisfaction in American education. Dissatisfaction with educational practices—fanned
by the popular critics of the 1960s—could not be relieved until we had a "thing" with
which to replace the “grim, joyless places” (p. 489).
In essence, many proponents of the open education movement saw open education as a way to disentangle the classroom from the more traditional and laissez-faire environments (Barth, 1977). Additionally, Horwitz (1979) revealed that children from the open
schools were less rigid, more cooperative, less competitive, had much more positive attitudes
about school, and were more effective than their counterparts from more traditional schools.
However, Morgan (1974), a critic of the progressivist movement, claimed that the focus of open education seemed to be more on the educative process than on the educational content. In addition, Bennett (1976) highlighted a study that revealed superior attainment for traditional school children and claimed that open education had proven detrimental to achievement. Consequently, after further attacks, by the late 1970s open education gradually withered (Schiro, 2008). Sherman (2009) attributed the lack of staying power of open education to a number of factors, including the political climate.
Specifically, in the early 1980s, a national report, A Nation at Risk, underscored the decline of
achievement in American education.
1975 to Mid-1990s – A Nation at Risk to The Standards Movement (Essentialist)
From around 1975 to about 1989, the federal government again turned its attention to American education when, in 1983, the report A Nation at Risk highlighted the decline of education in the US. In brief, the 1983 report contested the claim of the Coleman (1966) report, which attributed the lack of student achievement more to familial issues than to schools. Instead, A Nation at Risk (1983) put the spotlight back on
the schools as the reason for the decline of student achievement. In light of the claims and fears
attendant to the global economy, the report shifted the government’s attention to changing the
educational system to respond to those fears as well as changing the way math, science, and
engineering were taught in the schools (Johanningmeier, 2010). Also, the role of the federal government increased after A Nation at Risk, as the federal interest was to ensure “providing
and achieving equality of educational opportunity as well as developing citizens capable of
performing effectively in the Global Economy” (Johanningmeier, 2010, p. 348). The National
Science Board Commission on Pre-College Education in Mathematics, Science and Technology
(1983) adds that “discrimination and other disadvantages due to race, gender, ethnic background,
language spoken in the home or socioeconomic status and the lingering effects thereof must be
eradicated completely from the American educational system” (p. 14). In an attempt to eradicate discrimination in schools, educators, policy makers, and national organizations put together a set of academic standards defining what all students, regardless of race, ethnic background, and socioeconomic status, should know and be able to do.
Standards Movement
Since A Nation at Risk, reformers viewed the current educational system as one that
encouraged mediocre and undemanding work (Doan Holbein, 1998). In 1989, under the
administration of President Bush, the National Goals Panel and the National Council on Education Standards and Testing (NCEST) were established with one goal in mind: “to develop
‘world-class’ national standards for schools and students in the United States” (Doan Holbein,
1998, p. 559). The mission of NCEST was to advise and counsel the federal government on the
feasibility of establishing national standards and assessments. Consequently, starting in 1989 to
the mid-1990s, professional organizations, such as the National Council of Teachers of
Mathematics (NCTM), the U.S. Department of Education (USDE), and the National Science
Foundation (NSF), put together model content standards for the content areas of math, science,
language arts, and social studies (Doan Holbein, 1998; Schwartz & Robinson, 2000; Johnson,
Musial, Hall, Gollnick, & Dupuis, 2004 or 2005). A number of researchers point out that the
purpose of standards is to provide a guideline of what students should know and be able to do
(Ravitch, 1993; Ravitch, 1995; Lachat, 1999; Schwartz & Robinson, 2000). According to Lachat
(1999), “much of the work of developing education standards has been accomplished by
individuals who are deeply committed to a vision of society where people of different
backgrounds, cultures, and perceived abilities have equal access to a high quality education” (p.
3). In addition, Thompson (2001) posits that standards-based reform is concerned with equity, high expectations, and the provision of support for all students, and moves away from the tracking and sorting carried out by the factory model of education in the early 20th century. Nevertheless, in conjunction with the standards came assessments that were based on the academic content standards.
In 1990, the NCEST recommended that President Bush and the governors establish a
program that would monitor the progress of states and schools meeting the national standards.
Accordingly, the National Education Standards and Assessment Council (NESAC) was
established as a national organization to create assessments aligned with the national standards.
Standardized Testing
In 1994, the reauthorized Elementary and Secondary Education Act (ESEA) established federal and state roles in policies regarding the content standards. Also called the Improving America’s Schools Act, its role was to ensure that every state across the nation would adopt formal standards in reading and mathematics as a condition for receiving federal funds for Title I students (Lauer, Snow, Martin-Glenn, Van Buhler, Stoutemyer, & Snow-Renner, 2005). According to Ravitch
(1993), “standards and assessments were critical because they would allow teachers, students,
parents, and communities to determine whether they were aiming high enough and whether they
were making progress. Without standards and assessments, the "new American schools" and the
community-based reformers would not know whether their innovative efforts were making a
difference” (pp. 1-2). In other words, standardized assessments were designed to measure school
and student progress in mastering the content standards. Although the goal of the standards movement was to ensure equity for all students, regardless of race, ethnic background, and socioeconomic background, a plethora of research reveals that inequities were still prevalent across the nation (Oakes, 1990; Kozol, 1991; Ravitch, 1993; Ravitch, 1995; Wells & Oakes, 1996; Lachat, 1999). Essentially, the standards movement, along with the standardized testing, drastically highlighted the disparities and inequities in education.
1990s to 1999 – Reconstructivism, Goals 2000, and Opportunities to Learn
The response of the federal government, as with Sputnik, the Coleman Report, and A Nation at Risk, increased all the more with the enactment of Goals 2000, also known as the Educate America Act (1994). Goals 2000 ushered in the era of “systemic reform” in education (Wells & Oakes, 1996; Schwartz & Robinson, 2000). After very contentious debates under the
Clinton administration regarding whether there should be standards for schools and school
systems, as well as for students, the council resolved that both school delivery standards and
system performance standards were needed (Schwartz & Robinson, 2000). However, these
standards were not to be set nationally but were to be developed by individual states, where each
state would “set the criteria that it finds useful for the purpose of assessing a school’s capacity
and performance” (NCEST, 1992). But, one of the outcomes of Goals 2000 was that it renamed the content and performance standards “opportunity to learn” (OTL) standards and required all states, as a condition of federal funding, to take corrective action if OTL standards were not implemented in local school districts (Schwartz & Robinson, 2000). Around the same time, President Clinton asked the National Education Standards and Improvement Council (NESIC), a federal program that helped minimize conflict around testing, to certify the OTL standards. But, opponents of the OTL standards claimed that if NESIC certified the OTL standards, “it would pave the way for federal control in schools and fuel lawsuits from civil rights groups” (Schwartz & Robinson, 2000, p. 194). As a result, the state requirement for developing OTL standards was dropped.
Fundamentally, the OTL standards were “proposed by several groups at the national level
to clarify the conditions in schools necessary for all students to have the opportunity to achieve
the knowledge, skills, and understandings set out in the content standards” (Lachat, 1999, p. 9).
And, at the heart of OTL is equal access to quality instruction—instructional strategies that lead to the development of higher-order thinking, reasoning, and problem-solving skills (Stevens, 1996; Wolf & Reardon, 1993).
However, the “heart” of OTL standards did not go unchallenged. Wells & Oakes (1996)
argued that as more emphasis was placed on state standards as the accountability measures,
“…many educators, researchers, and advocates (see for example, Darling-Hammond
1992) argued that efforts to hold all schools accountable on the basis of standards and
student outcomes were being cast onto an uneven playing field in which some schools
would be less able to provide students with opportunities to learn the curricular content
reflected in the state tests” (p. 135).
According to Wise (1982), educational policy in the early 1960s and 1970s was essentially designed to address equity issues, vanquishing problems like segregation and protecting the rights of disadvantaged students. But, the issues of segregation were still prevalent in schools, just in
another form—tracking. Tracking, according to Oakes (1986), is the “practice of dividing
students into separate classes for high-, average-, and low-achievers; it lays out different
curriculum paths for students headed for college and for those who are bound directly for the
workplace” (p. 13). In one of her books, Keeping Track: How Schools Structure Inequality, Jeannie Oakes (1985), a former presidential professor in educational equity at the University of California, Los Angeles (UCLA) Graduate School of Education and Information Studies and now the Director of Education and Scholarship of the Ford Foundation (retrieved July 26, 2012, from http://www.fordfoundation.org/about-us/grant-maker/jeannie-oakes/), recognizes that one of the reasons school people track students is the assumption that tracking is an integral part of the school and that it belongs to the “natural” order of schools. In other words, because so many poor and minority students were underperforming, schools would “naturally” segregate those students into lower-level classes, hoping to have more time to teach fundamentals in the core content areas. Oakes (1985) found that poor and minority students—
“the very students on whom so many educational hopes are pinned” (p. 103)—have suffered the
most from tracking in schools.
Moreover, working together on A Study of Schooling, Goodlad and Oakes studied three areas: curriculum content, instructional quality, and classroom climate. Essentially, students who were tracked in English and math classes had access to considerably different types of knowledge and skills than those who were not tracked (Oakes, 1986). For
example, students in high-track language arts classes were exposed to content that can be called
“high-status knowledge,” which included topics and skills required for college (Oakes, 1986, p.
15). Students tracked into “high-status” math classes were exposed to the same type of content. That is, high-track classes focused more on the conceptual part of mathematics, whereas the low-track classes focused more on computational skills and strategies (Oakes, 1986). As a result,
students are continuously locked in lower-level classes as they progress in years of schooling and
in the end, are denied the knowledge needed to move into higher-level classes.
Furthermore, students in higher-level classes had better classroom opportunities to learn
for two reasons: more instructional time was allotted for learning and the teacher quality was
better. Teachers set aside more instructional time for student learning and the students were held
to higher expectations, like spending more time on homework (Oakes, 1986). In addition,
teachers who taught the higher-level courses were more enthusiastic and clearer when teaching
content and spent less time disciplining students or going over class routines (Oakes, 1986).
Oakes describes the irony of the situation: “those students who need more time to learn appear to
be getting less; those students who have the most difficulty learning are being exposed least to
the sort of teaching that best facilitates learning” (p. 16).
More recently, a number of researchers have found that tracking still exists across the
nation and internationally as well (Rogers, 1998; Brunello & Checchi, 2007; Van Houtte &
Stevens, 2009; Demanet, Houtte, & Stevens, 2012; Tate, Jones, Thorne-Wallington, & Hogrebe,
Furthermore, some advocates of ability grouping (another name for tracking, with an emphasis on sorting students into academic classes geared to different levels for students of different abilities; Oakes, 1986) posit that high-ability and gifted students actually benefit from the strategy because it provides them with the opportunity to access more advanced knowledge and skills and to practice deeper processing (Rogers, 1998).
Nevertheless, as Wells and Oakes (1996) promoted the need for detracking in schools, there is research claiming that some schools are detracking students (Tice, 1998; Burris, 2010). Overall, the effort of the federal government to ensure that all students have equal access to education has not been entirely successful. With the increasing need to teach all students higher levels of math, science, and technology, due to the growing technological competition with other countries, the federal government, in the late 1990s, set up policies and grants for that competition.
1999 to Present – Ideology Amalgamation, Digital Age, and the Age of Accountability
The ideological landscape of the 20th and the beginning of the 21st century has shifted toward an amalgamation of the four ideologies mentioned at the beginning of this chapter: essentialism, perennialism, progressivism, and reconstructivism. It is important to note that the term amalgamation is not meant to suggest that earlier educational movements throughout the 20th century did not also mingle the different ideologies. Rather, the term amalgamation is used to describe how current educational movements pull together different parts of the four ideologies in order to meet the demands set by the federal government.
Nonetheless, in the competitive technological and digital age, some research claims that the United States is falling behind other countries in STEM education (Bybee, 2007; Atkinson & Mayo, 2010). Bybee (2007), comparing today’s focus on science, technology, engineering, and math (STEM) to the Sputnik era, states that “the United States must again reform science education, in this case because we are losing our competitive edge in the global economy and clearly must attend to environmental and resource issues because they often underlie economic realities” (p. 454). Essentially, the federal government’s call to become more competitive in the global economy is linked to the current policies and accountability systems in place in the United States. Namely, the No Child Left Behind Act (NCLB), established in 2001, is the most recent accountability system; it requires that all students reach proficiency in mathematics and English by the year 2014, which is only two years away. In essence, many schools are being challenged by the extremely high demand that all students reach proficiency in math and English (Edsource, 2010). NCLB has propagated what some call the age of accountability (Guskey, 1998; Wright & Wiley, 2004).
Age of Accountability
Two major pieces of federal legislation are affecting the terrain of the educational system in America: 1) the NCLB Act of 2001 and 2) the Race To The Top (RTTT) initiative of 2009. In the following section, both pieces of legislation and various components of each will be discussed.
No Child Left Behind of 2001
In December of 2001, the U.S. Congress approved a reauthorization of the ESEA and renamed it the No Child Left Behind Act. NCLB covers four basic premises: 1) stronger accountability for schools and teachers, 2) increased flexibility and local control over federal funds, 3) greater schooling options for parents, and 4) a focus on proven, research-based teaching methods (Rush & Scherff, 2012). This section will focus on the accountability measures for schools.
Adequate Yearly Progress (AYP)
Essentially, each state is required to develop and implement a statewide accountability system that will ensure that all schools and districts make Adequate Yearly Progress (AYP) as defined by NCLB.⁵ According to the United States Department of Education (USDE) (2012)⁶:
“AYP is a measure of year-to-year student achievement on statewide assessments. Each state comes up with its own definition on what it means to make AYP. Definitions must answer three questions: the percentage of students that must be proficient or above when tested in reading and mathematics (yearly in grades 3-8 and once in high school); whether or not at least 95 percent of students in those grades participated in the assessments; and, the additional academic indicator (e.g., graduation rates for high schools) that will be measured.”
There are five criteria that schools are required to meet in order to make AYP (USDE, 2012)⁷:
1. Same high standards of academic achievement for all
2. Statistically valid and reliable
3. Continuous & substantial academic improvement for all students
4. Separate measurable annual objectives for achievement
a. All students
b. Racial/ethnic groups
c. Economically disadvantaged students
d. Students with disabilities (IDEA, Sec. 602)
e. Students with limited English proficiency
5. Graduation rates for high school and one other indicator for other schools
⁵ Retrieved on July 27, 2012 from: http://www.cde.ca.gov/nclb/index.asp
⁶ Retrieved on July 27, 2012 from: http://eddataexpress.ed.gov/definitions.cfm
⁷ Retrieved on July 27, 2012 from: http://www2.ed.gov/admins/lead/account/ayp/edlite-slide009.html
Basically, students are tested on the state-adopted standards in the areas of English, math, science, and history. For the purpose of this paper, the math performance-based assessments for California, namely the California Standards Test (CST), will be discussed.
Performance-based assessments are standards-based assessments that attribute success
levels in five performance bands: Far Below Basic (FBB), Below Basic (BB), Basic, Proficient,
and Advanced. In California, the ranges of the mathematics scale scores, or cutoff scores, for elementary through high school in 2011 are summarized in Table 2.1⁸. It is important to note that cutoff scores can vary from state to state and year to year under NCLB. Essentially, NCLB demands that further sanctions be imposed on schools that do not meet AYP requirements. In particular, schools are to have all students score proficient or above in English and math by the year 2014.
According to Linn (2005), the NCLB provided some positive features for education. For example, NCLB is praiseworthy for the special attention it gives to improved learning for children who have been ignored or left behind in the past, its emphasis on closing the achievement gap, the freedom it gives states to adopt ambitious subject matter standards and enhance teacher quality, and its focus on students with low achievement. In addition, the percentage of schools meeting AYP targets increased in 2003-04 from the year before in most states, and recently released National Assessment of Educational Progress (NAEP) long-term trend scores have shown some narrowing of achievement gaps (Linn, 2005).
⁸ Retrieved on July 27, 2012 from: http://help-isi.illuminateed.com/s/dna_isi_help_manual/m/6451/l/61677-star-performance-level-cut-points-scale-score-ranges#!prettyPhoto
Table 2.1
CST Cutoff Scores Chart
Note. Retrieved on July 27, 2012 from: http://help-isi.illuminateed.com/s/dna_isi_help_manual/m/6451/l/61677-star-performance-level-cut-points-scale-score-ranges#!prettyPhoto
However, Linn (2005), among many other researchers, argues that the requirement that 100% of students reach the proficient level or above in math and reading by the year 2014 is not realistic (Lee & Reeves, 2012). In response to this unrealistic target, Linn (2005) offers three suggestions for modifying NCLB: 1) set realistic performance targets for adequate yearly progress, rewarding effort with success, 2) determine AYP by considering growth in achievement and not just status in comparison to a fixed target, and 3) because the current definitions of proficient achievement established by states lack any semblance of a common meaning, consider alternatives for defining proficiency that would provide more meaningful and comparable achievement targets (Linn, 2005). In other words, it is unlikely that all schools across the nation will meet the high demands NCLB requires.
However, the second suggestion made by Linn (2005), that the AYP should be determined
by a consideration of growth in achievement and not just status in comparison to a fixed target,
has not been ignored. In 2009, President Barack Obama launched the “Educate to Innovate”
campaign, which was a nationwide effort to help move American students from the middle to the
top of the pack in science and math achievement over the next decade. Specifically, President
Obama launched the Race To The Top (RTTT) initiative in 2009.
Race To The Top (RTTT) Initiative of 2009
With the focus primarily on excellence in Science, Technology, Engineering, and Mathematics (STEM) education, the federal Race To The Top (RTTT) program of 2009 is a stimulus program that funds individual states as an incentive to increase student achievement. With a $4.3 billion budget from the American Reinvestment and Recovery Act (ARRA), RTTT plans to reward state schools that “commit to pursuing reform in four areas: 1) adopting standards and assessments that prepare students to succeed in college and the workplace and to compete in the global economy, 2) building data systems that measure student growth and success, and inform teachers and principals about how they can improve instruction, 3) recruiting, developing, rewarding, and retaining effective teachers and principals, especially where they are needed most, and 4) turning around our lowest-achieving schools.” More recently, in the Year 1 Annual Performance Report (APR) for the RTTT, twelve states across the nation successfully met all four criteria in the RTTT initiative.⁹ Among those states were Delaware, District of Columbia, Florida, Georgia, Hawaii, Maryland, Massachusetts, New York, North Carolina, Ohio, Rhode Island, and Tennessee. Moreover, President Obama announced plans to continue the RTTT challenge, requesting $1.35 billion for the program in his 2011 fiscal year budget.¹⁰
⁹ Retrieved on July 27, 2012 from: http://www2.ed.gov/programs/racetothetop/index.html
¹⁰ Retrieved on July 27, 2012 from: http://www2.ed.gov/programs/racetothetop/index.html
The fact that only twelve out of fifty states were able to receive federal rewards by meeting the four requirements for RTTT seems problematic. What is happening in the other states that prevents them from receiving the rewards? According to Edsource (2009), California legislators were meeting in a special session to consider measures that would improve the state’s chances of winning a grant from RTTT. However, according to the RTTT annual performance report, California was not on the list. Moreover, schools must demonstrate success based on the four criteria under the RTTT, but some questions to ask are: 1) What type of performance data are schools using to demonstrate success in preparing students to succeed in college and the workplace and to compete in the global economy? 2) How are schools and districts creating and utilizing data systems to measure student performance and growth? and 3) How are schools determining what makes an effective teacher in order to receive the awards that RTTT offers?
Performance Data Used for Evaluation of California Schools
For California, one source of performance data is the California Standards Test (CST). Another type of performance data used to measure student progress comes from district-adopted benchmarks, which in some districts are aligned to the California content standards and are used as formative assessments (Black & Wiliam, 1998; Edsource, 2009). Nevertheless, the CST is one of the chief tests used to report to the state of California. The CST tests students in the subjects of English language arts (ELA), mathematics, science, life science, history, and social science. All of the subjects are tested against the respective California content standards for each subject.
CST Scores to Measure School Quality
Moreover, one of the ways school progress is measured is by the Academic Performance Index (API). According to the California Department of Education (CDE) (2012), the API summarizes a school’s or a local educational agency’s (LEA) academic performance and progress on statewide assessments. In addition, the API is used as an additional indicator for federal Adequate Yearly Progress (AYP) requirements.¹¹ Essentially, both the API and AYP are criteria by which California and the federal government track the progress of schools throughout California. The purpose of the API is to measure the year-over-year growth in academic performance for California schools. Furthermore, the API summarizes a school’s standardized test scores into a single number ranging from 200 to 1000; the statewide API goal is 800 for all schools, and higher numbers generally indicate better performance on the tests.¹² So, what does this mean for schools in California? Even more, what does this mean for teachers in California?
¹¹ Retrieved on July 27, 2012 from: http://www.cde.ca.gov/ta/ac/ap/glossary11b.asp
¹² Retrieved on July 27, 2012 from: http://www.greatschools.org/issues/ca/api.html
CST Scores to Measure Student and Teacher Performance
Currently, because a substantial body of research claims that teaching matters (Ballou, Sanders & Wright, 2004; Hanushek, 1992; Haycock, 1998; Marzano, 2003; McCaffrey, Koretz, Lockwood, & Hamilton, 2003; Mullens, Leighton, Laguarda & O’Brien, 1996; Sanders, 2000; Sanders & Horn, 1998; Sanders & Rivers, 1996; Sanders, Saxon & Horn, 1997; Wenglinsky, 2000; Wright, Horn & Sanders, 1997), teachers are being held responsible for the success of their
students. And, there is debate over how teachers should be evaluated (Corcoran, 2010; Darling-Hammond, 2000) because, up until now, teachers have been evaluated based on a short, one-day formal observation (called the Stull Evaluation) by administrators who have little interaction with the teachers and classrooms they visit. Some schools are including students’ standardized test scores as a way to evaluate teachers. According to Edsource (2009),
“The extent to which districts incorporate student performance on standards-based tests into the teacher-evaluation process is unclear but likely not universal. This is true despite the fact that California law since 1999 has explicitly stated that districts shall evaluate the performance of certificated employees based partly on their students’ progress on the state-adopted academic content standards as measured by the state’s criterion-referenced assessments, the CSTs” (p. 8).
In other words, teachers are being held accountable for the success of their students as measured by standardized assessments. And since one of the components of RTTT is to measure teacher effectiveness, some schools are inadequately using CST scores as the only source for evaluating teachers (Edsource, 2009). Essentially, the term teacher effectiveness refers to the amount of impact a teacher has on student learning, and it is usually measured by student assessment scores. A
number of researchers indicate a strong correlation between teacher effectiveness and student
achievement (Kane, Rockoff, & Staiger, 2008; Rivkin, Hanushek, & Kain, 2005; Nye, Hedges,
& Konstantopoulos, 2004), but there is also a great debate over how to measure teacher
effectiveness (Ellett & Teddlie, 2003; Corcoran, 2010). Since so many factors (e.g., race, gender, ethnicity, socioeconomic status (SES), family, prior achievement) contribute to the way students score on achievement tests, measuring teacher effectiveness becomes a challenge. Retrospectively, the Coleman Report (1966) and A Nation at Risk (1983) highlighted that home and school factors play a part in how students perform in school. So, because of the increasing accountability systems currently in place, NCLB and RTTT, many districts and schools are using various evaluation models to measure teacher and student progress over time. One of these models is the value-added model (VAM).
Value-Added Used to Measure School and Teacher Effectiveness
The value-added model (VAM) is not new to the education field (Sanders & Horn, 1994). Basically, the VAM is a method of measuring student academic progress from year to year, even after the proficient level has been reached; what differentiates the VAM from using a single score to measure student progress is that it accounts for various factors that contribute to performance outcomes. Starting in the 1980s, the VAM was developed by statistician Dr. William Sanders (Hong, 2010). After hearing about the controversies in public education (i.e., A Nation at Risk), Sanders and a few others began to explore the feasibility of using a statistical mixed-model methodology to show that the vertical scaling of test results, or growth modeling, was a great improvement over a single cut-score on a standardized test (Sanders & Horn, 1994; Hong, 2010).
Although the VAM seemed to be appealing for evaluating teachers based on student test
scores, the VAM came under criticism because the model did not control for SES and
demographic factors (Ballou, Sanders, & Wright, 2004). According to the Value-Added
Research Consortium at the University of Florida College of Medicine (University of Florida,
2000a; 2000b), variables such as student income and race were almost always statistically significant in estimating teacher and school effects. Nevertheless, more recent value-added
models, which still require vertical equating, now utilize student background characteristics
and/or prior achievement and other data as statistical controls in order to separate the specific
effects of a particular school, program, or teacher on student academic progress (Goldschmidt,
Roschewski, Choi, Auty, Hebbler, Blank, et al., 2005; Lissitz, Doran, Schafer, Willhoft, 2006).
CHAPTER THREE: METHODOLOGY
Introduction of the Research Design
The purpose of this study is to determine how reliably, validly, and stably the California Standards Test (CST) scores in the subject areas of Algebra 1, Geometry, and Algebra 2 can be used to determine student progress in 9th, 10th, and 11th grade math coursework in California. The data that will be used come from the California Department of Education website (www.cde.ca.gov). The following research questions will be addressed:
1. Has NCLB led to greater opportunity-to-learn (OTL) in high school math coursework
(Algebra 1, Geometry, Algebra 2, and Summative) in California?
2. Has NCLB led to greater success in math coursework in California?
3. How stable are the eight OTL and SS scores for Algebra 1, Geometry, Algebra 2, and Summative Math?
4. How reliable, or internally consistent, are the math composites of OTL and SS?
5. What is the correlation between the SCI and OTL status composite scores and SS
scores?
6. What are the descriptive characteristics and reliability coefficients (i.e. internal
consistency) of the composites of the residuals of the OTL and SS scores?
7. How stable are the composites of the residuals of the OTL and SS scores?
A quantitative approach will be used to gather data and information. For research
questions one through four, data will be collected from CST scores of students who took Algebra
1, Geometry, Algebra 2, and Summative Math between the years 2004 and 2012. However,
because the state of California has not published the 2012 Base Data file, which includes the SCI
for each school, data for research questions five through seven will be collected between 2004
and 2011.
Quantitative Research Design
The following quantitative research design will be organized into four phases, each of
which will utilize CST data from the 2004 to 2012 administrations of the California Standards
Test (CST) and SPSS to calculate the OTL and SS scores, the composites of the OTL and SS
scores, and the composites of the residuals, which account for SCI. Table 3.1 illustrates how the OTL and SS scores will be calculated for each of the courses. For this study, the math courses Algebra 1, Geometry, and Algebra 2 are theoretically taken in the 9th, 10th, and 11th grades, respectively. Phase one will illustrate the descriptive statistics and stability of the OTL and SS scores between 2004 and 2012. Phase two will illustrate the internal consistency of the composites of OTL and SS. Phase three will illustrate the reliability and stability coefficients of the composites of the residuals of OTL and SS. Phase four will illustrate the internal consistency, descriptive statistics, item-total statistics, and stability of the composites of the residuals of OTL and SS.
Phase One – Trend and Stability of OTL and SS Scores
Phase one will summarize data from the first three research questions, which are to examine the descriptive statistics and stability of the OTL and SS scores between 2004 and 2012. Stability is the degree to which a test measures the same thing at different times or in different situations (Kurpius & Stafford, 2006). Essentially, phase one will answer whether NCLB has led to greater opportunities-to-learn in math coursework. Students who are enrolled in Algebra 1 in the 9th grade are on track both for high school graduation and for eligibility to enter a four-year institution. The OTL scores are determined by dividing the total number of students with scores on the Algebra 1 CST in the 9th grade, including 9th graders who are enrolled in Geometry or higher (assuming that Algebra 1 was taken prior to the 9th grade), by the total number of students in the 9th grade cohort, as measured by the number of students with scores on the English Language Arts (ELA) CST. The ratio of the two values yields the opportunity-to-learn (OTL) score. In essence, this score reveals the percentage of students who have the opportunity to learn. The success scores (SS) are determined by dividing the number of students who scored Basic or above on the respective math CST by the total number of students in the respective 9th grade cohort, as measured by the number of students with scores on the English Language Arts (ELA) CST.
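The OTL and SS calculations described above can be sketched as follows. The counts and function names here are illustrative only (a hypothetical school), and the study itself performed these computations in SPSS:

```python
# Sketch of the OTL and SS score calculations described above; all counts
# are hypothetical, and the study itself performed these steps in SPSS.

def otl_score(n_math_scores, n_ela_cohort):
    """OTL = students with scores on the math CST / 9th grade ELA cohort."""
    return n_math_scores / n_ela_cohort

def ss_score(n_basic_or_above, n_ela_cohort):
    """SS = students scoring Basic or above on the math CST / 9th grade ELA cohort."""
    return n_basic_or_above / n_ela_cohort

# Hypothetical school: 450 ninth graders took the ELA CST (the cohort),
# 360 had scores on the Algebra 1 CST, and 234 scored Basic or above.
otl = otl_score(360, 450)  # 0.80: 80% of the cohort had the opportunity to learn
ss = ss_score(234, 450)    # 0.52: 52% of the cohort succeeded
```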
Table 3.1
Calculations of OTL and SS Scores
Phase one:
OTL scores for all math courses: OTL_c = T_c / ELA_yr9
SS scores formula for Algebra 1 and Geometry:
SS scores formula for Algebra 2:
SS scores formula for Summative Math:
Phase two: Composites of OTL and SS
Step one: For each year, add the OTL score for Algebra 1, Geometry, Algebra 2, and
Summative Math. (i.e. CompOTL = Alg1OTL + GeoOTL + Alg2OTL + SummOTL)
Step two: For each year, add the Success scores for Algebra 1, Geometry, Algebra 2, and
Summative Math. (i.e. CompSS = Alg1SS + GeoSS + Alg2SS + SummSS)
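The two composite steps above can be sketched as a simple sum of the four course-level scores for each year (the values below are hypothetical):

```python
# Hypothetical illustration of the phase-two composites: for each year,
# the four course-level scores are summed into a single composite score.

def composite(scores):
    """CompOTL (or CompSS) = Alg1 + Geo + Alg2 + Summ for a given year."""
    return sum(scores)

otl_by_course = {"Alg1": 0.80, "Geo": 0.55, "Alg2": 0.35, "Summ": 0.15}
comp_otl = composite(otl_by_course.values())  # 0.80 + 0.55 + 0.35 + 0.15 = 1.85
```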
Phase Three: Adjusting OTL and SS with SCI
Regression Function:
Y′ = bᵢXᵢ − a, where Y′ is predicted success, bᵢ is the regression coefficient, Xᵢ is the School Characteristics Index (SCI), and a is the y-intercept.
Residual Function:
RES = Y′ − Y, where Y is actual achievement and Y′ is predicted (expected) achievement based on SCI.
Subject Level Equivalent (SLE) Scores
Step one: Convert raw OTL and SS scores to NCEs.
Adjusted Subject Level Equivalent (ASLE) Scores
Step one: Regress the SLE scores on SCI.
Step two: Convert the regression’s standardized residuals to NCEs.
The ASLE scores are essentially the residual OTL and SS scores.
Note: Variables: OTL = Opportunity to Learn; SS = Subject Level Success (as measured by Basic or Above on the CST); CompOTL = the composite score for OTL; CompSS = the composite score for Success; SLE = Subject Level Equivalent; ASLE = Adjusted Subject Level Equivalent; SCI = School Characteristics Index; NCE = Normal Curve Equivalent;
T = Number of students with scores in a specific math course (for CST years ≤ 2006, T = students tested); c = math course (Algebra 1 = 1; Geometry = 2; Algebra 2 = 3; Summative Math = 4); ELA_yr9 = 9th grade cohort of students with scores or tested; P = Sum of the percent of students scoring Basic and above
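The adjustment pipeline in Table 3.1 can be sketched with a simple least-squares regression: regress the scores on SCI, standardize the residuals, and rescale them to NCE units using the standard NCE scaling (50 + 21.06·z). This is only an illustration with hypothetical data; the residuals here are computed as actual minus predicted, and the study’s own computations were carried out in SPSS:

```python
# Sketch of the SLE-to-ASLE adjustment with hypothetical data: regress
# scores on SCI by simple least squares, standardize the residuals, and
# rescale them to NCE units with the standard NCE scaling, 50 + 21.06 * z.
from statistics import mean, pstdev

def simple_ols(x, y):
    """Return slope b and intercept a of the least-squares line y = b*x + a."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return b, my - b * mx

def adjusted_scores(sci, scores):
    """Residuals of scores regressed on SCI (actual minus predicted),
    standardized and expressed in NCE units."""
    b, a = simple_ols(sci, scores)
    resid = [yi - (b * xi + a) for xi, yi in zip(sci, scores)]
    sd = pstdev(resid)
    return [50 + 21.06 * (r / sd) for r in resid]

# Four hypothetical schools: SCI values and composite scores.
nce_adjusted = adjusted_scores([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```

By construction, the adjusted scores have a mean of 50 and a standard deviation of 21.06, so schools above the line (outperforming their SCI prediction) land above 50.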
Phase Two – Internal Consistency of the Composites of OTL and SS scores
In phase two, which will answer research question four, the internal consistency of the composites of OTL and SS will be examined. Controlling for the California School Characteristics Index (SCI), a comparison will be made between the OTL and SS scores of Algebra 1, Geometry, Algebra 2, and Summative Math, as measured by the CSTs. The subject level equivalent (SLE) scores will be computed by converting the raw CST math OTL and SS scores into Normal Curve Equivalent (NCE) scores. The adjusted subject level equivalent (ASLE) scores will be computed by regressing the SLE scores on SCI and converting the regression’s standardized residuals to NCE scores. Furthermore, sixteen characteristics contribute to the School Characteristics Index, which enters as the predictor in the linear regression equation described in Table 3.3. The formulas used in the computations are represented in Table 3.1.
Phase Three – Correlation of Input-Unadjusted Composites with SCI
The goal in phase three is to answer research question five, where the correlation between the SCI and the unadjusted composites of the OTL and SS scores will be examined.
Phase Four – Reliability and Stability of the Residuals of OTL and SS composites
The goal in phase four is to answer research questions six and seven, where the internal
consistency, descriptive statistics, the item-total statistics, and the stability of the composites of
the residuals of OTL and SS will be examined. The residual scores are the composites of the
OTL and SS scores regressed with the SCI.
Table 3.2
Computation Formulas with Descriptions for Each Symbol
Standard Deviation Formula
σ = √( Σ(x − μ)² / N ), where μ is the population mean and N is the population size
Z-Score Formula
z = (x − μ) / σ
Percentile Rank Formula
PR(x) = ((f / 2 + L) / N)100
PR = Percentile Rank
x = Scale score of interest
f = Frequency of the scale score of interest
L = Cumulative frequency associated with the next lowest scale score
N = Population size (number of persons tested)
Normal Curve Equivalent (NCE) Formula (see Appendix)*
* Retrieved on July 31, 2012 from: http://www.ask.com/wiki/Normal_curve_equivalent
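As an illustration of the percentile-rank formula in Table 3.2, here is a minimal sketch with a hypothetical score distribution (the function name is illustrative, not part of the study’s procedure):

```python
# Worked example of the percentile-rank formula from Table 3.2,
# PR(x) = ((f/2 + L) / N) * 100, using a hypothetical score distribution.
from collections import Counter

def percentile_rank(scores, x):
    """f = frequency of x; L = cumulative frequency below x; N = population size."""
    freq = Counter(scores)
    f = freq[x]
    L = sum(count for score, count in freq.items() if score < x)
    return (f / 2 + L) / len(scores) * 100

scores = [300, 300, 325, 350, 350, 350, 400, 425, 450, 500]
pr = percentile_rank(scores, 350)  # (3/2 + 3) / 10 * 100 = 45
```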
Table 3.3
Regression and Residual Functions
Regression Function
Y′ = bᵢXᵢ − a, where Y′ is predicted success, bᵢ is the regression coefficient, Xᵢ is the School Characteristics Index (SCI), and a is the y-intercept.
Residual Function
RES = Y′ − Y, where Y is actual achievement and Y′ is predicted (expected) achievement based on SCI.
In phase one and phase two, the composites of the OTL and SS scores are used to indicate the stability of the research design. In phase three, the intended use of Subject Level Equivalent (SLE) scores is to compare California high schools to each other. However, SLE scores cannot be used on their own to make between-school comparisons because they are highly correlated with student and community characteristics that are beyond the control of schools and teachers. Therefore, phase three also incorporates adjusting the SLE scores, which makes the comparisons fairer. In other words, the Adjusted Subject Level Equivalent (ASLE) scores control for the sixteen factors that contribute to the linear regression equation, which creates the ability to compare schools to one another regardless of their characteristics. These sixteen factors make up a school’s School Characteristics Index (SCI). The SCI will be described in the instrumentation and data collection section below.
Participants and Setting
The participants in this study are all the public high schools in California. According to the California Department of Education (CDE), during the 2010-2011 school year, there were 1,289 public high schools (not including private, continuation, community-day, alternative, and nonpublic nonsectarian schools). The total number of students enrolled in the public high school sector was 1,808,597.¹³ For kindergarten through 12th grade during the 2009-2010 school year, there was a total of 6,189,908 students enrolled in public schools: 50.3% Hispanic or Latino, 27.03% White non-Hispanic, 8.51% Asian, 6.85% African American non-Hispanic, 2.53% Filipino, 1.82% not reported, 1.56% two or more races non-Hispanic, 0.73% American Indian or Alaska Native, and 0.60% Pacific Islander.¹⁴ Additionally, according to the CDE, in 2001 there was a total of 1,511,299 students classified as English Learners (ELs), and in 2002 there were 1,559,248 students classified as ELs. More recently, during the 2010-2011 school year, there were approximately 1.4 million English Learners in California public schools.¹⁵ Essentially, the diversity among students in California is notable.
Instrumentation and Data Collection
SPSS
In order to organize, compute, and analyze the data, the Statistical Package for the Social Sciences (SPSS) software program will be used.
School Characteristics Index (SCI)
The School Characteristics Index (SCI), established in April 2000 (California Department of Education, 2011), is used within the California Public School Accountability Act (PSAA) for the creation of similar-school scores. The current comprehensive list of characteristics used to group varying schools into similar groups comprises: student mobility, student ethnicity, student SES, percent of teachers who are fully credentialed, percent of teachers who hold emergency credentials, percent of students who are English learners (ELs), average class size per grade level, whether the school operates a multi-track year-round educational program, percent of grade span enrollments (grade 2, 3 to 5, 6, 7 to 8, and 9 to 11), percent of students in the gifted and talented education program, percent of students with disabilities (SWDs), percent of reclassified fluent English-proficient (RFEP) students, and percent of migrant education students.
¹³ Retrieved July 31, 2012 from: http://www.cde.ca.gov/ds/sd/cb/cefenrollgradetype.asp
¹⁴ Retrieved July 31, 2012 from: http://www.cde.ca.gov/ds/sd/cb/ceffingertipfacts.asp
¹⁵ Retrieved July 31, 2012 from: http://www.cde.ca.gov/ds/sd/cb/cefelfacts.asp
Of the characteristics included in the calculation of the SCI, data generated by the California Department of Education (2012) indicated that the highest correlations with the PSAA API accountability indicator were the two SES components, average parent education and percent of students participating in free or reduced-price lunch programs, with correlation coefficients of 0.81 and -0.75, respectively (p ≤ 0.05). The next highest correlation with API was the percent of Hispanic or Latino students, r = -0.66, p ≤ 0.05. Furthermore, the correlations between API and the percent of students classified as ELs and reclassified fluent-English-proficient (RFEP) and the percent of Asian students were r = -0.59 and r = -0.44, respectively, with p ≤ 0.05 for both. Aside from the correlation between API and the percent of White students (r = 0.58, p ≤ 0.05), the other variables included in the SCI were correlated with API at or below r = 0.34 on the positive side (percent of Black or African American students) and r = -0.49 on the negative side (school mobility), with p ≤ 0.05 for both.
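Correlations like those reported above can be sketched with the standard Pearson product-moment formula. The data points below are hypothetical, not the CDE values:

```python
# Sketch of the Pearson correlation used to relate SCI components to API.
# The data points are hypothetical, not the CDE values reported above.
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# e.g., average parent education level vs. API for five hypothetical schools
parent_ed = [1.8, 2.4, 2.9, 3.5, 4.1]
api = [620, 680, 710, 790, 840]
r = pearson_r(parent_ed, api)  # strongly positive, in the spirit of the 0.81 reported
```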
Achievement Measures
California Standards Test (CST) data were utilized from the California Department of Education (CDE). Data were used in the areas of mathematics and science from the 2003-2004, 2007-2008, and 2010-2011 administrations of the California Standards Test (CST). Ninth through 11th grade CST data were collected on the groups of students in Algebra 1, Geometry, Algebra 2, and Summative Math. The California Standards Test (CST) is the official measure of school performance as designated by the PSAA.¹⁶ The CSTs are criterion-referenced exams developed from the California academic content standards based on what teachers are supposed to teach and what students are expected to learn. Student performances are rated in five tiers with the following labels: far below basic (FBB), below basic (BB), basic (B), proficient (P), and advanced (A).
Limitations of the Study
According to Creswell (2009), validity in quantitative research “refers to whether one can
draw meaningful and useful inferences from scores on particular instruments” (p. 235). There
are three types of limitations to the validity of a quantitative study: Statistical Conclusion
Validity, Internal Validity, and External Validity. First, threats to statistical validity arise when
experimenters draw inaccurate inferences from the data because of inadequate statistical power
or when assumptions about the statistical data are violated (Creswell, 2009). For the purpose of
this paper, four threats to statistical conclusion validity will be taken into account:
1) Statistical power less than .80 due to insufficient sample size (N < 100-200)
2) Diminished power due to low reliability of measurements (coefficient alpha < .70)
3) Diminished power due to restriction of range on one or more variables
4) Inflated Type I error rate (effective α > .05) due to multiple statistical tests
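The fourth threat can be illustrated numerically. The following sketch is a hypothetical illustration, not part of the study's analysis: it shows how the familywise Type I error rate grows with the number of significance tests, and the Bonferroni adjustment commonly used to control it.

```python
# Hypothetical illustration of threat 4: with many significance tests, the
# chance of at least one false positive (familywise error) far exceeds .05.
alpha = 0.05
n_tests = 36  # e.g., the 36 unique year-pair correlations in a 9 x 9 matrix

# Probability of at least one Type I error across n independent tests
familywise_error = 1 - (1 - alpha) ** n_tests
print(f"Unadjusted familywise error: {familywise_error:.3f}")  # 0.842

# Bonferroni adjustment: test each correlation at alpha / n_tests instead
bonferroni_alpha = alpha / n_tests
print(f"Per-test Bonferroni threshold: {bonferroni_alpha:.5f}")
```

With 36 tests each run at the .05 level, the chance of at least one spurious "significant" result is over 80 percent, which is why the correlation tables in this chapter report significance at the stricter .01 level.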
Second, threats to internal validity will also be taken into account for this study. According to
Creswell (2009), threats to internal validity are “experimental procedures, treatments, or
experiences of the participants that threaten the researcher’s ability to draw correct inferences
from the data about the population in an experiment” (p. 162). Essentially, internal validity has
to do with the credibility of a causal inference: whether the treatment produced the observed
effect, or whether the effect can be explained by one or more uncontrolled factors (Levy &
Hocevar, 2013). The various types of threats to internal validity are described in Table 3.4
(Creswell, 2009), which also lists actions that researchers can take in response to each threat.

16 Retrieved from http://www.cde.ca.gov/ta/ac/pa/cefpsaa.asp
Table 3.4
Types of Threats to Internal Validity

History. Threat: Because time passes during an experiment, events can occur that unduly
influence the outcome beyond the experimental treatment. Response: The researcher can have
both the experimental and control groups experience the same external events.

Maturation. Threat: Participants in an experiment may mature or change during the experiment,
thus influencing the results. Response: The researcher can select participants who mature or
change at the same rate (e.g., same age) during the experiment.

Regression. Threat: Participants with extreme scores are selected for the experiment. Naturally,
their scores will probably change during the experiment; scores, over time, regress toward the
mean. Response: A researcher can select participants who do not have extreme scores as
entering characteristics for the experiment.

Selection. Threat: Participants can be selected who have certain characteristics that predispose
them to have certain outcomes (e.g., they are brighter). Response: The researcher can select
participants randomly so that characteristics have the probability of being equally distributed
among the experimental groups.

Mortality. Threat: Participants drop out during an experiment due to many possible reasons,
so the outcomes are unknown for these individuals. Response: A researcher can recruit a large
sample to account for dropouts or compare those who drop out with those who continue, in
terms of the outcome.

Diffusion of treatment. Threat: Participants in the control and experimental groups communicate
with each other, and this communication can influence how both groups score on the outcomes.
Response: The researcher can keep the two groups as separate as possible during the experiment.

Compensatory/resentful demoralization. Threat: The benefits of an experiment may be unequal
or resented when only the experimental group receives the treatment (e.g., the experimental
group receives therapy and the control group receives nothing). Response: The researcher can
provide benefits to both groups, such as giving the control group the treatment after the
experiment ends or giving the control group some different type of treatment during the
experiment.

Compensatory rivalry. Threat: Participants in the control group feel that they are being
devalued, as compared to the experimental group, because they do not experience the treatment.
Response: The researcher can take steps to create equality between the two groups, such as
reducing the expectations of the control group.

Testing. Threat: Participants become familiar with the outcome measure and remember
responses for later testing. Response: The researcher can have a longer time interval between
administrations of the outcome or use different items on a later test than were used in an
earlier test.

Instrumentation. Threat: The instrument changes between a pre-test and post-test, thus
impacting the scores on the outcome. Response: The researcher can use the same instrument
for the pre-test and post-test measures.

Note. Retrieved directly from Creswell, 2009, pp. 164-165.
Last, the external threats to validity arise when “experimenters draw incorrect inferences
from the sample data to other persons, other settings, and past or future situations” (Creswell,
2009, p. 162). In other words, external validity is the extent to which a study’s results can be
generalized to other samples, settings, measurements, and interventions. Creswell (2009) offers
a chart that describes only three of the threats to external validity along with actions that
researchers can take in response to the external threats. That chart is shown in Table 3.5 below.
Table 3.5
Types of Threats to External Validity

Interaction of selection and treatment. Threat: Because of the narrow characteristics of
participants in the experiment, the researcher cannot generalize to individuals who do not have
the characteristics of participants. Response: The researcher restricts claims about groups to
which the results cannot be generalized, and conducts additional experiments with groups with
different characteristics.

Interaction of setting and treatment. Threat: Because of the characteristics of the setting of
participants in an experiment, a researcher cannot generalize to individuals in other settings.
Response: The researcher needs to conduct additional experiments in new settings to see if the
same results occur as in the initial setting.

Interaction of history and treatment. Threat: Because results of an experiment are time-bound,
a researcher cannot generalize the results to past or future situations. Response: The researcher
needs to replicate the study at later times to determine if the same results occur as in the
earlier time.

Note. Retrieved directly from Creswell, 2009, p. 165.
CHAPTER FOUR: RESULTS
Current educational reforms, such as the NCLB and Race To The Top (RTTT) initiatives,
affect how school, teacher, and student performances are measured, especially in light of a fast
growing technological era. And, as state and national standards are currently transitioning into
the Common Core Standards, states and schools are still looking for better ways to measure
student, teacher, and school performance under the current standards assessments. The purpose
of standards is to provide a guideline of what students should know and be able to do (Ravitch,
1993; Ravitch, 1995; Lachat, 1999; Schwartz & Robinson, 2000). In the past, efforts to hold
schools accountable for student performance played out on an uneven playing field because of
students' unequal opportunities to learn the content reflected on the standards tests (Wells
& Oakes, 1996). In essence, according to Ravitch (1993), without standards, schools and
education reformers would not know whether their efforts were making a difference. The
research in this study aims to reveal whether or not the efforts of schools and education
reformers are making a difference, as addressed by the following research questions:
1. Has NCLB led to greater opportunity-to-learn (OTL) in high school math coursework
(Algebra 1, Geometry, Algebra 2, and Summative) in California?
2. Has NCLB led to greater success in math coursework in California?
3. How stable are the eight OTL and SS scores (four OTL and four SS) for Algebra 1,
Geometry, Algebra 2, and Summative Math?
4. How reliable, or internally consistent, are the math composites of OTL and SS?
5. What is the correlation between the SCI and the OTL status composite scores and SS
scores?
6. What are the descriptive characteristics and reliability coefficients (i.e. internal
consistency) of the composites of the residuals of the OTL and SS scores?
7. What are the descriptive statistics and stability of the composites of the residuals of
the OTL and SS scores?
For the purpose of this study, the unadjusted OTL, Success Scores (SS), and
Composite scores were computed for the years 2004 to 2012, a total of nine years. However,
because the California Department of Education has not published the 2012 Base API Data file,
which discloses the School Characteristics Index (SCI) for each school in the state, the adjusted
OTL, SS, and Composite scores were computed for the years 2004 to 2011, a total of eight years.
The Statistical Package for the Social Sciences (SPSS) software program was used to generate
the data in this chapter.
Research Question 1: Opportunity to Learn Results
Descriptive Statistics – Opportunity to Learn Results at the School Level
In answering research question number one—whether NCLB led to greater opportunity-
to-learn (OTL) in 9th, 10th, and 11th grade math coursework (Algebra 1, Geometry, Algebra 2,
and Summative Math, respectively) in California—the four tables below present a summary of
key descriptive statistics: the minimums, maximums, means, and standard deviations of the OTL
scores for the four math CSTs.
Algebra 1 OTL
In Table 4.1, the school level descriptive statistics show that the mean of the OTL
scores for Algebra 1 increased consistently each year from 2004 to 2012, from 0.68 in 2004 to
0.91 in 2012. Notably, the standard deviations (SD) decreased over the years 2004 to 2012,
from 0.22 to 0.13 (Table 4.1). In other words, as the proportion of students taking Algebra 1
increased from 2004 to 2012, the Algebra 1 OTL scores deviated less from the mean.
Table 4.1
Algebra 1 School Level OTL Descriptive Statistics
N Minimum Maximum Mean Std. Deviation
OTLAlg1Score2004 877 .04 1.00 .6758 .21568
OTLAlg1Score2005 907 .04 1.00 .7177 .21190
OTLAlg1Score2006 953 .11 1.00 .7728 .19509
OTLAlg1Score2007 946 .15 1.00 .8115 .17861
OTLAlg1Score2008 938 .03 1.00 .8280 .17445
OTLAlg1Score2009 926 .06 1.00 .8552 .16398
OTLAlg1Score2010 921 .08 1.00 .8837 .14903
OTLAlg1Score2011 909 .05 1.00 .8998 .13872
OTLAlg1Score2012 908 .07 1.00 .9136 .12748
Valid N (listwise) 803
Note. OTLAlg1Score = Opportunity to Learn score for Algebra 1.
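The descriptive statistics reported in Table 4.1 (N, minimum, maximum, mean, and SD) were generated in SPSS; an equivalent computation can be sketched in Python with pandas. The score values below are invented for illustration, and only the column names mimic the table.

```python
import pandas as pd

# Invented school-level Algebra 1 OTL scores; column names mimic Table 4.1
df = pd.DataFrame({
    "OTLAlg1Score2004": [0.04, 0.55, 0.70, 0.82, 1.00],
    "OTLAlg1Score2012": [0.07, 0.85, 0.92, 0.95, 1.00],
})

# N, minimum, maximum, mean, and (sample) standard deviation per column,
# transposed so each row describes one year's scores, as in Table 4.1
summary = df.agg(["count", "min", "max", "mean", "std"]).T
print(summary)
```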
In Figure 4.1a, below, the means of the OTL scores for Algebra 1 from 2004 to 2012 are
plotted as a line. Whereas the means in Table 4.1 were computed separately for each year,
the line graph in Figure 4.1a excluded cases listwise; that is, its data points represent only
the schools that have Algebra 1 OTL scores for every year from 2004 to 2012. In Figure 4.1b,
by contrast, the line graph excluded cases variable by variable, so each data point reflects the
mean for all schools with an Algebra 1 OTL score in that particular year. For Geometry,
Algebra 2, and Summative Math, the descriptive statistics tables report the means variable by
variable for each year, while the line graphs plot the means for the schools that have OTL and
SS scores for all years 2004 to 2012. The purpose is to reveal any key differences between the
means in the descriptive statistics tables and the means in the line graphs.
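The distinction between listwise and variable-by-variable (pairwise) exclusion can be sketched as follows; the school names and scores are invented, and only the handling of the missing value matters.

```python
import numpy as np
import pandas as pd

# Invented scores: School B is missing its 2004 score
df = pd.DataFrame(
    {"otl2004": [0.60, np.nan, 0.80], "otl2012": [0.90, 0.70, 0.95]},
    index=["School A", "School B", "School C"],
)

# Variable by variable: each year's mean uses every school scored that year
pairwise_means = df.mean()

# Listwise: drop any school missing a score in any year, then average
listwise_means = df.dropna().mean()

print(pairwise_means["otl2012"])   # mean over all three schools
print(listwise_means["otl2012"])   # mean over Schools A and C only
```

The two 2012 means differ (0.85 versus 0.925 here) because listwise exclusion drops School B entirely, which is exactly the discrepancy the paired figures are designed to reveal.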
Figure 4.1a
School Level OTL Algebra 1 Mean Scores Listwise
Figure 4.1b
School Level OTL Algebra 1 Mean Scores Variable by Variable
Geometry OTL
For the OTL scores in Geometry, similar descriptive statistics were computed and
analyzed. The mean of the Geometry OTL scores was 0.49 in 2004, steadily increased to 0.59 in
2008, and finally in 2012, the mean score was 0.66 (Table 4.2).
Table 4.2
Geometry School Level OTL Descriptive Statistics
N Minimum Maximum Mean Std. Deviation
OTLGeoScore2004 827 .05 1.00 .4962 .18081
OTLGeoScore2005 855 .10 1.00 .5199 .17646
OTLGeoScore2006 906 .06 1.00 .5462 .18320
OTLGeoScore2007 906 .08 1.00 .5668 .17785
OTLGeoScore2008 897 .03 1.00 .5901 .16854
OTLGeoScore2009 898 .03 1.00 .6126 .16994
OTLGeoScore2010 889 .03 1.00 .6296 .16312
OTLGeoScore2011 880 .07 1.00 .6511 .16090
OTLGeoScore2012 875 .06 1.00 .6556 .16973
Valid N (listwise) 761
Note. OTLGeoScore = Opportunity to Learn score for Geometry.
Figure 4.2
Graph of the Means of School Level OTL Geometry Scores
Algebra 2 OTL
Likewise, the descriptive statistics for the OTL scores for Algebra 2 also increased.
These scores ranged from 0.40 in 2004 to 0.52 in 2012 (Table 4.3).
Table 4.3
Algebra 2 School Level OTL Descriptive Statistics
N Minimum Maximum Mean Std. Deviation
OTLAlg2Score2004 794 .04 1.00 .4000 .16470
OTLAlg2Score2005 827 .08 1.00 .3984 .17055
OTLAlg2Score2006 864 .05 1.00 .4116 .16836
OTLAlg2Score2007 875 .05 1.00 .4248 .17362
OTLAlg2Score2008 869 .03 1.00 .4412 .17362
OTLAlg2Score2009 870 .04 1.00 .4659 .17031
OTLAlg2Score2010 863 .04 1.00 .4891 .17127
OTLAlg2Score2011 863 .04 1.00 .4931 .17005
OTLAlg2Score2012 859 .04 1.00 .5047 .18612
Valid N (listwise) 744
Note. OTLAlg2Score = Opportunity to Learn score for Algebra 2.
In Figure 4.3, below, a simple line chart illustrates the means of the Algebra 2 OTL
scores from 2004 to 2012. Notably, the mean Algebra 2 OTL score decreased slightly from
2004 to 2005. Possible reasons for the decrease will be discussed in Chapter 5.
Figure 4.3
Graph of the Means of School Level OTL Algebra 2 Scores
Summative Math OTL
As with the previous subjects, the means of the Summative Math OTL scores also
increased from 2004 to 2012. What is different, however, is that the number of schools
administering the Summative Math CST increased by N = 87, whereas the numbers for
Algebra 1, Geometry, and Algebra 2 increased by only N = 31, N = 48, and N = 65,
respectively.
Table 4.4
Summative Math School Level OTL Descriptive Statistics
N Min. Max. Mean Std. Deviation
OTLSummativeMathScore2004 759 .02 1.00 .1833 .12275
OTLSummativeMathScore2005 777 .01 .98 .1873 .12953
OTLSummativeMathScore2006 802 .03 .93 .1940 .12632
OTLSummativeMathScore2007 817 .02 .84 .2032 .13188
OTLSummativeMathScore2008 830 .02 .81 .2150 .13628
OTLSummativeMathScore2009 833 .02 .86 .2288 .14021
OTLSummativeMathScore2010 838 .02 1.00 .2412 .14645
OTLSummativeMathScore2011 837 .02 .92 .2514 .14922
OTLSummativeMathScore2012 846 .02 .95 .2737 .15542
Valid N (listwise) 705
Note. OTLSummativeMathScore = Opportunity to Learn score for Summative Math.
Figure 4.4
Graph of the Means of School Level OTL Summative Math Scores
Research Question 2: Subject Level Success Results
In answering research question number two—whether NCLB led to greater success (SS)
in 9th, 10th, and 11th grade math coursework (Algebra 1, Geometry, Algebra 2, and Summative
Math, respectively) in California—four tables, along with corresponding line graphs of the
means, provide a summary of key descriptive statistics: the minimums, maximums, means, and
standard deviations of the SS scores for the four math CSTs. As mentioned above, students
who scored Basic or above on the Mathematics CSTs are identified as successful.
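The success-score computation just described, the proportion of a school's test takers scoring Basic or above, can be sketched as follows; the performance-band counts are invented for illustration.

```python
# Performance bands counted as "successful" (Basic and above)
SUCCESS_BANDS = {"Basic", "Proficient", "Advanced"}

def success_score(band_counts):
    """Proportion of a school's tested students scoring Basic or above."""
    tested = sum(band_counts.values())
    successful = sum(n for band, n in band_counts.items()
                     if band in SUCCESS_BANDS)
    return successful / tested

# Invented counts for one school's Algebra 1 CST takers
counts = {"Far Below Basic": 20, "Below Basic": 30, "Basic": 60,
          "Proficient": 70, "Advanced": 20}
print(success_score(counts))  # 150 of 200 students -> 0.75
```

A school where no tested student reached Basic would receive a score of 0.00, which is how the minimum values in the success-score tables below should be read.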
Algebra 1 Success Scores
In Table 4.5, the descriptive statistics show that the mean level of the SS scores for
Algebra 1 increased consistently each year from 2004 to 2012. The mean for the Algebra 1 SS
was 0.43 in 2004 and increased to 0.65 in 2012. A minimum value of 0.00 in Table 4.5
indicates a school in which no students scored Basic or above on the Algebra 1 CST that year.
A graphical representation of the data is shown in Figure 4.5.
Table 4.5
Descriptive Statistics Algebra 1 Success
N Minimum Maximum Mean Std. Deviation
Alg1SuccessScore2004 877 .00 .99 .4283 .17779
Alg1SuccessScore2005 907 .00 1.00 .4889 .17766
Alg1SuccessScore2006 953 .03 1.00 .5035 .17856
Alg1SuccessScore2007 946 .01 1.00 .5297 .17551
Alg1SuccessScore2008 938 .01 .99 .5475 .17243
Alg1SuccessScore2009 926 .02 .99 .5672 .17165
Alg1SuccessScore2010 921 .00 1.00 .6011 .17175
Alg1SuccessScore2011 909 .01 .99 .6259 .16451
Alg1SuccessScore2012 908 .00 1.00 .6465 .16273
Valid N (listwise) 803
Figure 4.5
Graph of the Means of Algebra 1 Success Scores
Geometry Success Scores
In Table 4.6, the descriptive statistics show that the mean level of the SS scores for
Geometry has also increased consistently each year from 2004 to 2012. The mean for the
Geometry SS was 0.35 in 2004 and increased to 0.48 in 2012. In other words, the average
proportion of students who were successful on the Geometry CSTs increased by approximately
0.13, or 13 percentage points. A graphical representation of the data is shown in Figure 4.6.
Table 4.6
Descriptive Statistics Geometry Success
N Minimum Maximum Mean Std. Deviation
GeoSuccessScore2004 827 .01 1.00 .3475 .17035
GeoSuccessScore2005 856 .00 .98 .3584 .16859
GeoSuccessScore2006 906 .00 1.00 .3732 .17554
GeoSuccessScore2007 906 .00 1.00 .3767 .17893
GeoSuccessScore2008 897 .01 .98 .3884 .17490
GeoSuccessScore2009 898 .00 1.00 .4074 .17684
GeoSuccessScore2010 889 .00 .95 .4366 .17273
GeoSuccessScore2011 880 .02 .94 .4677 .17381
GeoSuccessScore2012 875 .02 1.00 .4816 .18113
Valid N (listwise) 761
Figure 4.6
Graph of the Means of Geometry Success Scores
Algebra 2 Success Scores
In Table 4.7, the descriptive statistics show that the mean level of the SS scores for
Algebra 2 has also increased consistently each year from 2004 to 2012. The mean for the
Algebra 2 SS was 0.25 in 2004 and increased to 0.37 in 2012. A graphical representation of the
data is shown in Figure 4.7.
Table 4.7
Descriptive Statistics Algebra 2 Success
N Minimum Maximum Mean Std. Deviation
Alg2SuccessScore2004 794 .00 1.00 .2505 .14193
Alg2SuccessScore2005 827 .00 .91 .2536 .15243
Alg2SuccessScore2006 864 .00 .89 .2607 .14791
Alg2SuccessScore2007 875 .00 .92 .2798 .15907
Alg2SuccessScore2008 869 .00 .92 .2976 .16092
Alg2SuccessScore2009 870 .00 .95 .3158 .16122
Alg2SuccessScore2010 863 .01 1.00 .3414 .16349
Alg2SuccessScore2011 863 .01 .91 .3525 .16370
Alg2SuccessScore2012 859 .01 1.00 .3702 .17768
Valid N (listwise) 744
Figure 4.7
Graph of the Means of Algebra 2 Success Scores
Summative Math Success Scores
In Table 4.8, the descriptive statistics show that the mean level of the SS scores for
Summative Math has also increased consistently each year from 2004 to 2012. The mean for the
Summative Math SS was 0.12 in 2004 and increased to 0.21 in 2012. A graphical representation
of the data is shown in Figure 4.8.
Table 4.8
Descriptive Statistics Summative Math Success
N Minimum Maximum Mean Std. Deviation
SUMMSuccessScore2004 760 .00 .77 .1220 .10209
SUMMSuccessScore2005 777 .00 .78 .1298 .10874
SUMMSuccessScore2006 802 .00 .92 .1381 .11172
SUMMSuccessScore2007 817 .00 .84 .1399 .11522
SUMMSuccessScore2008 830 .00 .80 .1531 .12130
SUMMSuccessScore2009 833 .00 .85 .1628 .12483
SUMMSuccessScore2010 838 .00 .85 .1786 .12754
SUMMSuccessScore2011 837 .01 .89 .1866 .13295
SUMMSuccessScore2012 846 .00 .89 .2078 .13816
Valid N (listwise) 705
Figure 4.8
Graph of the Means of Summative Math Success Scores
Research Question 3: Stability of OTL and SS Scores
The third research question was to examine the stability of the four OTL scores and the
four SS scores of the math courses Algebra 1, Geometry, Algebra 2, and Summative Math.
Simple correlations were computed on nine variables (OTL and SS scores for 2004 to 2012) for
each of the math courses. The OTL scores for Algebra 1 were correlated for the years 2004 to
2012. Similarly, the OTL scores for Geometry, Algebra 2, and Summative Math were correlated
for the years 2004 to 2012. According to Salkind (2011), the correlation coefficient is a
numerical index that reflects the relationship between two variables; it is also known as a
bivariate correlation. The output of the correlation coefficient is a number that ranges
between -1 and 1.
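A minimal sketch of such a bivariate (Pearson) correlation between the same schools' scores in two adjacent years follows; the eight school-level scores are invented for illustration, and the year-over-year coefficient is what this chapter treats as a stability coefficient.

```python
import numpy as np

# Invented OTL scores for the same eight schools in two adjacent years
otl_2004 = np.array([0.50, 0.62, 0.71, 0.80, 0.66, 0.58, 0.90, 0.45])
otl_2005 = np.array([0.55, 0.60, 0.75, 0.85, 0.70, 0.57, 0.92, 0.50])

# Pearson product-moment correlation: off-diagonal entry of the 2 x 2
# correlation matrix, bounded between -1 and 1
r = np.corrcoef(otl_2004, otl_2005)[0, 1]
print(f"r = {r:.2f}")  # high r means schools kept their relative standing
```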
Stability of Algebra 1 OTL Scores
In Table 4.9, the bivariate correlations for the OTL scores for Algebra 1 are shown. The
pairing of the Algebra 1 OTL Score for 2005 (OTLAlg12005) with OTLAlg12004 resulted in a
Pearson Product-Moment Correlation of r = 0.77. Likewise, the pairings of the OTLAlg12006
with OTLAlg12005, OTLAlg12007 with OTLAlg12006, OTLAlg12008 with OTLAlg12007,
OTLAlg12009 with OTLAlg12008, OTLAlg12010 with OTLAlg12009, OTLAlg12011 with
OTLAlg12010, and OTLAlg12012 with OTLAlg12011 were highly correlated with coefficients
at r = 0.74, 0.76, 0.79, 0.77, 0.79, 0.81, and 0.83, respectively. All of the correlation coefficients
were significant (Table 4.9).
Table 4.9
Bivariate Correlations for Algebra 1 OTL (Pearson r)

         2004   2005   2006   2007   2008   2009   2010   2011   2012
2004     1      .768   .564   .486   .448   .430   .343   .307   .279
2005     .768   1      .735   .603   .529   .497   .406   .373   .331
2006     .564   .735   1      .756   .630   .549   .497   .418   .353
2007     .486   .603   .756   1      .791   .629   .537   .446   .393
2008     .448   .529   .630   .791   1      .771   .668   .564   .503
2009     .430   .497   .549   .629   .771   1      .792   .638   .579
2010     .343   .406   .497   .537   .668   .792   1      .809   .705
2011     .307   .373   .418   .446   .564   .638   .809   1      .826
2012     .279   .331   .353   .393   .503   .579   .705   .826   1

Note. Rows and columns are OTLAlg1Score2004 through OTLAlg1Score2012. All correlations
are significant at the .01 level (2-tailed). Pairwise Ns range from 829 to 953.
Stability of Geometry OTL Scores
In Table 4.10, the bivariate correlations for the OTL scores for Geometry are
shown. The pairing of the Geometry OTL Score for 2005 (OTLGeo2005) with
OTLGeo2004 resulted in a Pearson Product-Moment Correlation of r = 0.86. Likewise,
the pairings of the OTLGeo2006 with OTLGeo2005, OTLGeo2007 with OTLGeo2006,
OTLGeo2008 with OTLGeo2007, OTLGeo2009 with OTLGeo2008, OTLGeo2010 with
OTLGeo2009, OTLGeo2011 with OTLGeo2010, and OTLGeo2012 with OTLGeo2011
were highly correlated with coefficients at r = 0.84, 0.85, 0.86, 0.85, 0.85, 0.85, and 0.82,
respectively. All of the correlation coefficients were significant (Table 4.10).
Table 4.10
Bivariate Correlations for Geometry OTL (Pearson r)

         2004   2005   2006   2007   2008   2009   2010   2011   2012
2004     1      .860   .783   .741   .683   .666   .629   .594   .540
2005     .860   1      .841   .794   .732   .692   .688   .637   .570
2006     .783   .841   1      .848   .780   .735   .705   .655   .585
2007     .741   .794   .848   1      .863   .790   .750   .702   .619
2008     .683   .732   .780   .863   1      .849   .790   .736   .657
2009     .666   .692   .735   .790   .849   1      .847   .766   .659
2010     .629   .688   .705   .750   .790   .847   1      .847   .724
2011     .594   .637   .655   .702   .736   .766   .847   1      .817
2012     .540   .570   .585   .619   .657   .659   .724   .817   1

Note. Rows and columns are OTLGeoScore2004 through OTLGeoScore2012. All correlations
are significant at the .01 level (2-tailed). Pairwise Ns range from 788 to 906.
Stability of Algebra 2 OTL Scores
In Table 4.11, the bivariate correlations for the OTL scores for Algebra 2 are
shown. The pairing of the Algebra 2 OTL Score for 2005 (OTLAlg22005) with
OTLAlg22004 resulted in a Pearson Product-Moment Correlation of r = 0.88. Likewise,
the pairings of the OTLAlg22006 with OTLAlg22005, OTLAlg22007 with
OTLAlg22006, OTLAlg22008 with OTLAlg22007, OTLAlg22009 with OTLAlg22008,
OTLAlg22010 with OTLAlg22009, OTLAlg22011 with OTLAlg22010, and
OTLAlg22012 with OTLAlg22011 were highly correlated with coefficients at r = 0.90,
0.90, 0.91, 0.91, 0.86, 0.88, and 0.86, respectively. All of the correlation coefficients
were significant (Table 4.11).
Table 4.11
Bivariate Correlations for Algebra 2 OTL (Pearson r)

         2004   2005   2006   2007   2008   2009   2010   2011   2012
2004     1      .881   .863   .817   .800   .768   .685   .705   .669
2005     .881   1      .903   .857   .826   .802   .730   .728   .679
2006     .863   .903   1      .896   .838   .824   .739   .749   .707
2007     .817   .857   .896   1      .910   .866   .796   .788   .759
2008     .800   .826   .838   .910   1      .907   .829   .821   .787
2009     .768   .802   .824   .866   .907   1      .858   .848   .798
2010     .685   .730   .739   .796   .829   .858   1      .884   .868
2011     .705   .728   .749   .788   .821   .848   .884   1      .885
2012     .669   .679   .707   .759   .787   .798   .868   .885   1

Note. Rows and columns are OTLAlg2Score2004 through OTLAlg2Score2012. All correlations
are significant at the .01 level (2-tailed). Pairwise Ns range from 764 to 875.
Stability of Summative Math OTL Scores
In Table 4.12, the bivariate correlations for the OTL scores for Summative
Math are shown. The pairing of the Summative Math OTL Score for 2005 (OTLSUMM2005)
with OTLSUMM2004 resulted in a Pearson Product-Moment Correlation of r =
0.93. Likewise, the pairings of the OTLSUMM2006 with OTLSUMM2005,
OTLSUMM2007 with OTLSUMM2006, OTLSUMM2008 with OTLSUMM2007,
OTLSUMM2009 with OTLSUMM2008, OTLSUMM2010 with OTLSUMM2009,
OTLSUMM2011 with OTLSUMM2010, and OTLSUMM2012 with OTLSUMM2011
were highly correlated with coefficients at r = 0.93, 0.93, 0.93, 0.94, 0.94, 0.94, and 0.93,
respectively. All of the correlation coefficients were significant (Table 4.12).
Table 4.12
Bivariate Correlations for Summative Math OTL (Pearson r)

         2004   2005   2006   2007   2008   2009   2010   2011   2012
2004     1      .926   .900   .873   .850   .846   .818   .814   .803
2005     .926   1      .930   .895   .866   .846   .841   .820   .812
2006     .900   .930   1      .934   .907   .898   .866   .852   .821
2007     .873   .895   .934   1      .932   .905   .881   .865   .844
2008     .850   .866   .907   .932   1      .937   .920   .897   .862
2009     .846   .846   .898   .905   .937   1      .940   .916   .883
2010     .818   .841   .866   .881   .920   .940   1      .938   .902
2011     .814   .820   .852   .865   .897   .916   .938   1      .925
2012     .803   .812   .821   .844   .862   .883   .902   .925   1

Note. Rows and columns are OTLSUMM2004 through OTLSUMM2012. All correlations
are significant at the .01 level (2-tailed). Pairwise Ns range from 734 to 846.
Stability of Algebra 1 Success Scores
In Table 4.13, the bivariate correlations for the success scores (SS) for Algebra 1
are shown. The pairing of the Algebra 1 Success Score for 2005 (A1SS2005) with
A1SS2004 resulted in a Pearson Product-Moment Correlation of r = 0.91. Likewise, the
pairings of the A1SS2006 with A1SS2005, A1SS2007 with A1SS2006, A1SS2008 with
A1SS2007, A1SS2009 with A1SS2008, A1SS2010 with A1SS2009, A1SS2011 with
A1SS2010, and A1SS2012 with A1SS2011 were highly correlated with coefficients at r
= 0.91, 0.91, 0.92, 0.92, 0.90, 0.91, and 0.89, respectively. All of the correlation
coefficients were significant (Table 4.13).
Table 4.13

Bivariate Correlations for the Algebra 1 Success Scores

           2004  2005  2006  2007  2008  2009  2010  2011  2012
A1SS2004   1     .912  .864  .842  .823  .792  .749  .719  .691
A1SS2005   .912  1     .906  .866  .848  .816  .768  .753  .718
A1SS2006   .864  .906  1     .908  .878  .848  .800  .764  .726
A1SS2007   .842  .866  .908  1     .923  .885  .833  .801  .761
A1SS2008   .823  .848  .878  .923  1     .916  .866  .841  .800
A1SS2009   .792  .816  .848  .885  .916  1     .901  .865  .805
A1SS2010   .749  .768  .800  .833  .866  .901  1     .908  .848
A1SS2011   .719  .753  .764  .801  .841  .865  .908  1     .894
A1SS2012   .691  .718  .726  .761  .800  .805  .848  .894  1

Note. Pearson correlations; all coefficients are significant at the 0.01 level (2-tailed). Pairwise Ns range from 829 to 953.
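The stability analyses above amount to correlating each year's school-level scores with the prior year's. A minimal sketch in Python; the simulated data and column names here are illustrative, not the study's actual dataset:

```python
import numpy as np
import pandas as pd

# Simulated school-level Algebra 1 success scores: a stable school effect
# plus independent year-to-year noise (all values are hypothetical).
rng = np.random.default_rng(0)
school_effect = rng.normal(0.5, 0.15, 900)
df = pd.DataFrame({
    f"A1SS{year}": school_effect + rng.normal(0, 0.05, 900)
    for year in range(2004, 2013)
})

# Lag-1 stability: Pearson r between each year and the year before it.
stability = {
    year: df[f"A1SS{year}"].corr(df[f"A1SS{year - 1}"])
    for year in range(2005, 2013)
}
for year, r in stability.items():
    print(f"{year - 1}-{year}: r = {r:.2f}")
```

With a true school effect much larger than the yearly noise, the lag-1 correlations stay close to 0.9, mirroring the pattern in Tables 4.13 through 4.16.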
Stability of Geometry Success Scores
In Table 4.14, the bivariate correlations for the success scores (SS) for Geometry
are shown. The pairing of the Geometry Success Score for 2005 (GeoSS2005) with
GeoSS2004 resulted in a Pearson Product-Moment Correlation of r = 0.93. Likewise, the
pairings of the GeoSS2006 with GeoSS2005, GeoSS2007 with GeoSS2006, GeoSS2008
with GeoSS2007, GeoSS2009 with GeoSS2008, GeoSS2010 with GeoSS2009,
GeoSS2011 with GeoSS2010, and GeoSS2012 with GeoSS2011 were highly correlated
with coefficients at r = 0.91, 0.91, 0.93, 0.93, 0.92, 0.92, and 0.90, respectively. All of
the correlation coefficients were significant (Table 4.14).
Table 4.14

Bivariate Correlations for the Geometry Success Scores

            2004  2005  2006  2007  2008  2009  2010  2011  2012
GeoSS2004   1     .931  .911  .885  .876  .857  .823  .811  .772
GeoSS2005   .931  1     .913  .885  .887  .863  .837  .824  .790
GeoSS2006   .911  .913  1     .912  .898  .886  .857  .843  .810
GeoSS2007   .885  .885  .912  1     .931  .905  .875  .853  .819
GeoSS2008   .876  .887  .898  .931  1     .926  .898  .877  .836
GeoSS2009   .857  .863  .886  .905  .926  1     .922  .895  .843
GeoSS2010   .823  .837  .857  .875  .898  .922  1     .915  .869
GeoSS2011   .811  .824  .843  .853  .877  .895  .915  1     .901
GeoSS2012   .772  .790  .810  .819  .836  .843  .869  .901  1

Note. Pearson correlations; all coefficients are significant at the 0.01 level (2-tailed). Pairwise Ns range from 788 to 906.
Stability of Algebra 2 Success Scores
In Table 4.15, the bivariate correlations for the success scores (SS) for Algebra 2
are shown. The pairing of the Algebra 2 Success Score for 2005 (A2SS2005) with
A2SS2004 resulted in a Pearson Product-Moment Correlation of r = 0.94. Likewise, the
pairings of the A2SS2006 with A2SS2005, A2SS2007 with A2SS2006, A2SS2008 with
A2SS2007, A2SS2009 with A2SS2008, A2SS2010 with A2SS2009, A2SS2011 with
A2SS2010, and A2SS2012 with A2SS2011 were highly correlated with coefficients at r
= 0.93, 0.93, 0.94, 0.94, 0.90, 0.91, and 0.91, respectively. All of the correlation
coefficients were significant (Table 4.15).
Table 4.15

Bivariate Correlations for the Algebra 2 Success Scores

           2004  2005  2006  2007  2008  2009  2010  2011  2012
A2SS2004   1     .935  .910  .907  .879  .868  .827  .826  .791
A2SS2005   .935  1     .925  .910  .895  .878  .835  .821  .767
A2SS2006   .910  .925  1     .925  .899  .890  .828  .844  .778
A2SS2007   .907  .910  .925  1     .940  .910  .864  .872  .824
A2SS2008   .879  .895  .899  .940  1     .937  .881  .893  .840
A2SS2009   .868  .878  .890  .910  .937  1     .896  .908  .850
A2SS2010   .827  .835  .828  .864  .881  .896  1     .913  .892
A2SS2011   .826  .821  .844  .872  .893  .908  .913  1     .913
A2SS2012   .791  .767  .778  .824  .840  .850  .892  .913  1

Note. Pearson correlations; all coefficients are significant at the 0.01 level (2-tailed). Pairwise Ns range from 764 to 875.
Stability of Summative Math Success Scores
In Table 4.16, the bivariate correlations for the success scores (SS) for Summative
Math are shown. The pairing of the Summative Math Success Score for 2005
(SummSS2005) with SummSS2004 resulted in a Pearson Product-Moment Correlation of
r = 0.95. Likewise, the pairings of the SummSS2006 with SummSS2005, SummSS2007
with SummSS2006, SummSS2008 with SummSS2007, SummSS2009 with
SummSS2008, SummSS2010 with SummSS2009, SummSS2011 with SummSS2010, and
SummSS2012 with SummSS2011 were highly correlated with coefficients at r = 0.96,
0.96, 0.96, 0.96, 0.96, 0.95, and 0.95, respectively. All of the correlation coefficients
were significant at p < .01 (Table 4.16).
Table 4.16

Bivariate Correlations for the Summative Math Success Scores

             2004  2005  2006  2007  2008  2009  2010  2011  2012
SummSS2004   1     .951  .942  .938  .928  .920  .905  .889  .876
SummSS2005   .951  1     .955  .946  .937  .930  .916  .899  .888
SummSS2006   .942  .955  1     .956  .948  .941  .922  .913  .895
SummSS2007   .938  .946  .956  1     .960  .949  .933  .923  .905
SummSS2008   .928  .937  .948  .960  1     .956  .945  .929  .914
SummSS2009   .920  .930  .941  .949  .956  1     .956  .942  .927
SummSS2010   .905  .916  .922  .933  .945  .956  1     .951  .941
SummSS2011   .889  .899  .913  .923  .929  .942  .951  1     .948
SummSS2012   .876  .888  .895  .905  .914  .927  .941  .948  1

Note. Pearson correlations; all coefficients are significant at the 0.01 level (2-tailed). Pairwise Ns range from 734 to 846.
Research Question 4: Reliability of the Composites of OTL and SS Scores
Internal Consistency of Composite OTL Scores – Using Cronbach’s Alpha
Before explaining the reliability of the composites of the OTL and SS, a simple
description of the composite scores and the purpose of computing the reliability of the composite
scores is necessary. Essentially, a composite score is the sum of two or more variables that
measure the same content. For example, a student’s grade point average (GPA) is the mean of
the student's grades across the courses in which the student is enrolled. Thus, the GPA
reveals the student’s overall academic standing. In this study, a composite score represents the
sum of the mean scores of each math subject per year. For example, a composite of the OTL
scores for the year 2004 would be the sum of the mean scores of the OTL scores for Algebra 1,
Geometry, Algebra 2, and Summative Math. The same computation applies to the success
scores of each math subject per year. Thus, the composites of OTL or SS measure a school’s
overall OTL or SS score.
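The composite computation described above can be sketched as follows; the scores and column names here are hypothetical, and the composite is taken as the mean of the four subject scores (the study's composite may be computed as a sum instead):

```python
import pandas as pd

# One row per school, one column per math subject's OTL score for a single
# year (all values are made up for illustration).
otl = pd.DataFrame({
    "Alg1OTL": [0.62, 0.48, 0.55],
    "GeoOTL":  [0.50, 0.31, 0.47],
    "Alg2OTL": [0.41, 0.25, 0.39],
    "SummOTL": [0.22, 0.10, 0.30],
})

# Each school's composite OTL combines its four subject scores into one value.
otl["CompOTL"] = otl[["Alg1OTL", "GeoOTL", "Alg2OTL", "SummOTL"]].mean(axis=1)
print(otl["CompOTL"].tolist())  # one overall OTL score per school
```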
To answer the fourth research question, which asks how reliable the math composites of
OTL and SS are, an internal consistency reliability test is necessary.
According to Salkind (2011), an internal consistency reliability test is used to determine whether
the items on a test are consistent with one another. One way to measure internal
consistency reliability is Cronbach's alpha, a value indicating the degree to which the
items being tested measure the same content (Robinson Kurpius & Stafford, 2006). In
other words, a high Cronbach’s alpha for the composites of the OTL scores suggests that the
same content is being measured, namely, the OTL scores. The composites for OTL summarize
one set of scores from the four subjects (Algebra 1, Geometry, Algebra 2, and Summative Math)
into one mean score. The following table reveals the Cronbach’s alpha for the OTL composites
for each year.
Table 4.17

Reliability Statistics for OTL

Year   Cronbach's Alpha   N of Items
2004   .865               4
2005   .856               4
2006   .836               4
2007   .828               4
2008   .832               4
2009   .821               4
2010   .808               4
2011   .820               4
2012   .787               4
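Cronbach's alpha for a four-item composite can be computed directly from its definition (the ratio of summed item variances to the variance of the scale total). A sketch with hypothetical scores, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_cases, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical OTL scores for five schools across the four math subjects.
scores = np.array([
    [0.62, 0.50, 0.41, 0.22],
    [0.48, 0.31, 0.25, 0.10],
    [0.55, 0.47, 0.39, 0.30],
    [0.70, 0.58, 0.52, 0.35],
    [0.33, 0.20, 0.15, 0.05],
])
print(round(cronbach_alpha(scores), 3))  # high alpha: the items move together
```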
Item-Total Statistics for the Composite OTL Scores
An item-total statistics chart is another way to examine how each item in the composite of
the OTL scores compares to the others when that item is deleted. Cronbach's alpha is also used in
this chart to determine how each item correlates with the composite of the other three
items. For example, in 2004, the corrected item-total correlation for the Algebra 1 OTL score
compared to the composite of the other three math OTL scores is 0.533. In the same year, for
Geometry, Algebra 2, and Summative Math, the corrected item-total correlations are 0.870,
0.833, and 0.743, respectively. In 2008, the corrected item-total correlation coefficient for the
Algebra 1 OTL score was 0.338, and for Geometry, Algebra 2, and Summative Math, the
corrected item-total correlation coefficients were 0.829, 0.811, and 0.734, respectively.
Similarly, in 2011, the corrected item-total correlations for the composites of the Algebra 1,
Geometry, Algebra 2, and Summative Math OTL scores were 0.249, 0.817, 0.834, and 0.722,
respectively. The correlation for the Algebra 1 OTL score is the only correlation that drastically
decreased from 2004 to 2011.
Table 4.18

Item-Total Statistics for 2004 Composite OTL Scores

Item                        Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                            Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
OTLAlg1Score2004            1.0959          .183                .533                .366               .921
OTLGeoScore2004             1.2937          .157                .870                .807               .759
OTLAlg2Score2004            1.3931          .174                .833                .847               .780
OTLSummativeMathScore2004   1.6213          .217                .743                .720               .839

Note. Valid cases N = 728; excluded cases N = 254.
Table 4.19

Item-Total Statistics for 2008 Composite OTL Scores

Item                        Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                            Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
OTLAlg1Score2008            1.2502          .180                .338                .227               .919
OTLGeoScore2008             1.4967          .129                .829                .720               .706
OTLAlg2Score2008            1.6522          .124                .811                .842               .712
OTLSummativeMathScore2008   1.8838          .154                .734                .764               .763
Table 4.20

Item-Total Statistics for 2011 Composite OTL Scores

Item                        Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                            Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
OTLAlg1Score2011            1.3967          .178                .249                .171               .912
OTLGeoScore2011             1.6545          .110                .817                .706               .685
OTLAlg2Score2011            1.8138          .099                .834                .818               .672
OTLSummativeMathScore2011   2.0649          .123                .722                .708               .737
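The corrected item-total correlations in the tables above can be reproduced by correlating each item with the sum of the remaining items. A sketch with hypothetical scores (not the study's data):

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> list[float]:
    """Correlation of each item with the total of the remaining items."""
    correlations = []
    for j in range(items.shape[1]):
        rest_total = np.delete(items, j, axis=1).sum(axis=1)  # total excluding item j
        r = np.corrcoef(items[:, j], rest_total)[0, 1]
        correlations.append(r)
    return correlations

# Hypothetical OTL scores for five schools across the four math subjects.
scores = np.array([
    [0.62, 0.50, 0.41, 0.22],
    [0.48, 0.31, 0.25, 0.10],
    [0.55, 0.47, 0.39, 0.30],
    [0.70, 0.58, 0.52, 0.35],
    [0.33, 0.20, 0.15, 0.05],
])
print([round(r, 3) for r in corrected_item_total(scores)])
```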
Internal Consistency of Composite Success Scores – Using Cronbach’s Alpha
As with the composites of the OTL scores, the internal consistency of the composites of
the success scores of the math courses was measured using the same procedure.
However, the corrected item-total correlations for the Algebra 1 SS scores were very different
from the OTL scores. In 2004, the corrected item-total correlations for the Algebra 1, Geometry,
Algebra 2, and Summative Math SS scores were 0.896, 0.946, 0.938, and 0.898, respectively
(Table 4.22). In 2008, the corrected item-total correlations for the Algebra 1, Geometry, Algebra
2, and Summative Math SS scores were 0.870, 0.932, 0.942, and 0.907, respectively (Table 4.23).
Also, in 2011, the corrected item-total correlations for the Algebra 1, Geometry, Algebra 2, and
Summative Math SS scores were 0.838, 0.919, 0.934, and 0.893, respectively (Table 4.24). The
following table reveals the Cronbach’s alpha for the SS composites for each year along with
three tables of the item-total statistics for the composites of the success scores.
Table 4.21

Reliability Statistics for SS

Year   Cronbach's Alpha   N of Items
2004   .956               4
2005   .957               4
2006   .954               4
2007   .956               4
2008   .960               4
2009   .958               4
2010   .949               4
2011   .954               4
2012   .941               4
Table 4.22

Item-Total Statistics for 2004 Composite Success Scores

Item                   Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                       Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
Alg1SuccessScore2004   .7468           .157                .896                .831               .944
GeoSuccessScore2004    .8406           .147                .946                .899               .930
Alg2SuccessScore2004   .9404           .170                .938                .912               .929
SUMMSuccessScore2004   1.0817          .207                .898                .862               .960
Table 4.23

Item-Total Statistics for 2008 Composite Success Scores

Item                   Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                       Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
Alg1SuccessScore2008   .8549           .184                .870                .785               .957
GeoSuccessScore2008    1.0254          .166                .932                .869               .941
Alg2SuccessScore2008   1.1202          .174                .942                .916               .935
SUMMSuccessScore2008   1.2736          .208                .907                .884               .955
Table 4.24

Item-Total Statistics for 2011 Composite Success Scores

Item                   Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                       Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
Alg1SuccessScore2011   1.0176          .190                .838                .731               .954
GeoSuccessScore2011    1.1872          .162                .919                .845               .932
Alg2SuccessScore2011   1.3027          .164                .934                .898               .926
SUMMSuccessScore2011   1.4794          .195                .893                .857               .943
Research Question 5: Correlation of Input-Unadjusted Composites with SCI
For research question five, the correlation of the School Characteristics Index (SCI) and
the composites of the OTL and SS scores for the four math subjects will be discussed. The SCI
for each school in the state has already been computed by the California Department of
Education. Essentially, the SCI for each school is the index of student, teacher, and school
characteristics that may influence achievement assessment results. Following the descriptive
statistics, a table of the correlations between the three variables for three different years, 2004,
2008, and 2011 will be illustrated.
The means for the composites of the OTL scores increased from 0.45 in 2004 to 0.52 in
2008 and 0.58 in 2011. The means for the composites of the SS scores increased from 0.30 in
2004 to 0.36 in 2008 and 0.42 in 2011. Note that the
composites of OTL and SS have not been adjusted for SCI—which will be discussed in the next
research question.
Descriptive Statistics for the Unadjusted Composites of OTL and SS
Table 4.25

Descriptive Statistics for the Unadjusted Composites of OTL, SS, and SCI 2004

              Mean     Std. Deviation   N
CompOTL2004   .4503    .13938           728
CompSS2004    .3008    .13658           728
SCI2004       159.33   9.88             934

Table 4.26

Descriptive Statistics for Unadjusted Composites of OTL, SS, and SCI 2008

              Mean     Std. Deviation   N
CompOTL2008   .5236    .12419           805
CompSS2008    .3562    .14173           805
SCI2008       165.38   9.29             931

Table 4.27

Descriptive Statistics for Unadjusted Composites of OTL, SS, and SCI 2011

              Mean     Std. Deviation   N
CompOTL2011   .5775    .11564           803
CompSS2011    .4156    .13946           803
SCI2011       169.88   8.62             921
Correlations for the Unadjusted Composites of OTL and SS with SCI
Not accounting for the SCI, correlations were computed for the OTL and SS composites
and SCI for three different testing years, 2004, 2008, and 2011. The pairing of the
CompOTL2004 and SCI2004 yielded a Pearson correlation coefficient of 0.549, p < .001
(Table 4.28). The pairing of the CompSS2004 and SCI2004 yielded a correlation coefficient of
0.843, p < .001 (Table 4.28). Similarly, in 2008, the pairings of CompOTL2008 with SCI2008
and CompSS2008 with SCI2008 yielded correlation coefficients of 0.573 and 0.856, respectively,
p < .001 (Table 4.29). Finally, in 2011, the pairings of CompOTL2011 with SCI2011 and
CompSS2011 with SCI2011 yielded correlation coefficients of 0.551 and 0.826, respectively,
p < .001 (Table 4.30). Essentially, the composites of the OTL scores are not as strongly correlated with
SCI as the composites of the success scores are. In other words, a school's overall
success score is highly correlated with its SCI.
Table 4.28

Correlations of the Unadjusted Composites of OTL, SS, and SCI for 2004

              CompOTL2004   CompSS2004   SCI2004
CompOTL2004   1             .816         .549
CompSS2004    .816          1            .843
SCI2004       .549          .843         1

Note. All correlations are significant at the 0.01 level (2-tailed). N = 728 for the OTL-SS pairing, 721 for pairings with SCI, and 934 for SCI alone.
Table 4.29

Correlations of the Unadjusted Composites of OTL, SS, and SCI for 2008

              CompOTL2008   CompSS2008   SCI2008
CompOTL2008   1             .833         .573
CompSS2008    .833          1            .856
SCI2008       .573          .856         1

Note. All correlations are significant at the 0.01 level (2-tailed). N = 805 for the OTL-SS pairing, 796 for pairings with SCI, and 931 for SCI alone.
Table 4.30

Correlations of the Unadjusted Composites of OTL, SS, and SCI for 2011

              CompOTL2011   CompSS2011   SCI2011
CompOTL2011   1             .840         .551
CompSS2011    .840          1            .826
SCI2011       .551          .826         1

Note. All correlations are significant at the 0.01 level (2-tailed). N = 803 for the OTL-SS pairing, 800 for pairings with SCI, and 921 for SCI alone.
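The differing Ns behind these correlations reflect pairwise deletion: each coefficient is computed over only the schools with both variables present. A sketch with simulated data (the variable names mirror the study's; the values and missingness pattern are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
sci = rng.normal(160, 10, 900)
df = pd.DataFrame({
    "CompOTL": 0.3 + 0.007 * sci + rng.normal(0, 0.10, 900),
    "CompSS": -1.0 + 0.008 * sci + rng.normal(0, 0.05, 900),
    "SCI": sci,
})
# Some schools have an SCI but no composite scores.
missing = df.sample(150, random_state=0).index
df.loc[missing, ["CompOTL", "CompSS"]] = np.nan

# DataFrame.corr drops missing values pairwise, so each r has its own N.
print(df.corr().round(3))
pairwise_n = df.notna().astype(int).T @ df.notna().astype(int)
print(pairwise_n)  # the N behind each correlation
```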
Research Question 6: Reliability of the Composites of the Residuals of the OTL and SS
Scores
Internal Consistency of Composites of the Residual OTL Scores
The sixth research question asked how reliable the composites of the residuals of the
OTL and SS scores are. The difference between the composites of the OTL and SS
scores and the composites of the residuals of the OTL and SS scores is that the residuals have
been adjusted using the SCI. In other words, the OTL and SS scores alone can be a good
measure of OTL and SS performance; however, schools cannot be compared fairly to each other
unless specific factors are accounted for, in this case, the SCI.
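Adjusting a score for SCI amounts to regressing it on SCI and keeping the standardized residual (the ZRESID variables in the tables that follow). A minimal sketch with simulated values, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
sci = rng.normal(160, 10, 800)                     # school characteristics index
otl = 0.1 + 0.002 * sci + rng.normal(0, 0.1, 800)  # hypothetical OTL scores

# Regress OTL on SCI; the residual is the part of a school's OTL score
# not explained by its characteristics.
slope, intercept = np.polyfit(sci, otl, 1)
residuals = otl - (intercept + slope * sci)
zresid = (residuals - residuals.mean()) / residuals.std(ddof=1)

# Standardized residuals have mean ~0 and SD ~1, and are uncorrelated with
# SCI, so schools can be compared on OTL net of SCI.
print(round(float(zresid.mean()), 6), round(float(zresid.std(ddof=1)), 6))
print(round(float(np.corrcoef(zresid, sci)[0, 1]), 6))
```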
So, like the fourth research question, the following tables illustrate Cronbach’s alpha of
the composites of the residuals of the OTL scores for all years between 2004 and 2011, and
subsequently, the key descriptive statistics and item-total statistics for the years 2004, 2008, and
2011. Cronbach’s alpha is used to determine the internal consistency of the composites of the
residuals of OTL and SS scores for the four math courses. Table 4.31 displays the values of
Cronbach’s Alpha for the composites of the residuals of the OTL scores for the years 2004 to
2011. Although the Cronbach's alphas for the composites of the residuals of the
OTL scores are high, the values decrease from 0.868 in 2004 to 0.788 in
2011 (Table 4.31).
Table 4.31

Reliability Statistics for the Composites of the Residuals of OTL Scores

Year   Cronbach's Alpha   N of Items
2004   0.868              4
2005   0.844              4
2006   0.846              4
2007   0.815              4
2008   0.809              4
2009   0.793              4
2010   0.762              4
2011   0.788              4
Descriptives of Residuals of the OTL Scores
The key descriptive statistics for the composites of the residuals of the OTL scores were
computed for the years 2004, 2008, and 2011. The following three tables illustrate the
descriptive statistics for the composites of the residuals of the OTL scores.
Table 4.32

Composites of OTL Residual Scores 2004 – Descriptive Statistics

                     N     Minimum    Maximum   Mean        Std. Deviation
ZRESIDAlg1OTL2004    853   -2.76406   2.85685   -.0054165   1.00374069
ZRESIDGeoOTL2004     811   -2.99279   3.81196   -.0178442   .99542817
ZRESIDAlg2OTL2004    785   -3.40754   4.02840   -.0156512   .99354810
ZRESIDSUMMOTL2004    751   -3.01034   4.64540   -.0107038   .99699856
Valid N (listwise)   721

Table 4.33

Composites of OTL Residual Scores 2008 – Descriptive Statistics

                     N     Minimum    Maximum   Mean        Std. Deviation
ZRESIDAlg1OTL2008    913   -2.55897   1.99483   -.0625036   .97252781
ZRESIDGeoOTL2008     882   -2.38079   3.28829   -.0912996   .93423389
ZRESIDAlg2OTL2008    854   -3.01915   3.53815   -.0825044   .93336438
ZRESIDSUMMOTL2008    821   -3.06190   4.00813   -.0651354   .96459335
Valid N (listwise)   796

Table 4.34

Composites of OTL Residual Scores 2011 – Descriptive Statistics

                     N     Minimum    Maximum   Mean        Std. Deviation
ZRESIDAlg1OTL2011    893   -2.71305   1.48203   -.0468216   .98813772
ZRESIDGeoOTL2011     870   -2.41150   2.98981   -.1232345   .94505336
ZRESIDAlg2OTL2011    856   -2.86984   3.20800   -.1730611   .90318188
ZRESIDSUMMOTL2011    831   -2.93484   3.88299   -.1334924   .92347054
Valid N (listwise)   800
Item-Total Statistics for the Composites of the Residuals of the OTL Scores
Additionally, item-total statistics charts were computed for the composites of the
residuals of the OTL scores for 2004, 2008, and 2011. In 2004, the corrected item-total correlation
for the residuals of the Algebra 1 OTL scores compared to the composite of the other three
residualized math OTL scores is 0.579 (Table 4.35). In the same year, for Geometry, Algebra 2,
and Summative Math, the corrected item-total correlations are 0.818, 0.819, and 0.676, respectively
(Table 4.35). In 2008, the corrected item-total correlation coefficient for the residuals of the
Algebra 1 OTL scores was 0.372, and for Geometry, Algebra 2, and Summative Math, the
corrected item-total correlation coefficients were 0.757, 0.778, and 0.633, respectively (Table
4.36). Similarly, in 2011, the corrected item-total correlations for the composites of the Algebra
1, Geometry, Algebra 2, and Summative Math OTL scores were 0.315, 0.750, 0.767, and 0.605,
respectively (Table 4.37). Similar to the unadjusted composites of the OTL scores for the fourth
research question, the correlation for the Algebra 1 OTL score decreased as well from 2004 to
2011.
Table 4.35

Composites of OTL Residual Scores 2004 – Item-Total Statistics

Item                 Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                     Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
ZRESIDAlg1OTL2004    .0223806        6.574               .579                .408               .885
ZRESIDGeoOTL2004     .1032157        5.703               .818                .711               .790
ZRESIDAlg2OTL2004    .0974217        5.810               .819                .737               .791
ZRESIDSUMMOTL2004    .1488359        6.068               .676                .553               .849
Table 4.36

Composites of OTL Residual Scores 2008 – Item-Total Statistics

Item                 Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                     Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
ZRESIDAlg1OTL2008    -.2629919       5.798               .372                .220               .872
ZRESIDGeoOTL2008     -.2149318       4.572               .757                .617               .696
ZRESIDAlg2OTL2008    -.1949648       4.508               .778                .717               .685
ZRESIDSUMMOTL2008    -.1882085       4.736               .633                .577               .757
Table 4.37

Composites of OTL Residual Scores 2011 – Item-Total Statistics

Item                 Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                     Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
ZRESIDAlg1OTL2011    -.4379030       5.637               .315                .153               .870
ZRESIDGeoOTL2011     -.3189646       4.283               .750                .636               .653
ZRESIDAlg2OTL2011    -.2786676       4.372               .767                .713               .648
ZRESIDSUMMOTL2011    -.2797568       4.717               .605                .523               .730
Internal Consistency of Composites of the Residual SS Scores
On the same note, the following tables will illustrate Cronbach’s alpha of the composites
of the residuals of the SS scores for all years between 2004 and 2011, and subsequently, the key
descriptive statistics and item-total statistics for the years 2004, 2008, and 2011. Again,
Cronbach’s alpha is used to determine the internal consistency of the composites of the residuals
of the SS scores for the four math courses. Table 4.38 displays the values of Cronbach’s Alpha
for the composites of the residuals of the SS scores for the years 2004 to 2011. Although the
Cronbach's alphas for the composites of the residuals of the SS scores are
high, the values do not decrease as consistently as those for the composites of the
residuals of the OTL scores.
Table 4.38

Reliability Statistics for the Composites of the Residuals of SS Scores

Year   Cronbach's Alpha   N of Items
2004   0.871              4
2005   0.831              4
2006   0.870              4
2007   0.856              4
2008   0.873              4
2009   0.871              4
2010   0.847              4
2011   0.874              4
Descriptives of Residuals of the SS Scores
Table 4.39

Composites of Success Residual Scores 2004 – Descriptive Statistics

                     N     Minimum    Maximum   Mean        Std. Deviation
ZRESIDAlg1SS2004     853   -4.87124   5.65141   -.0028633   1.01126816
ZRESIDGeoSS2004      811   -3.75755   6.03240   -.0051151   1.00568530
ZRESIDAlg2SS2004     785   -4.19065   5.65956   -.0076689   1.00054349
ZRESIDSUMMSS2004     752   -4.26760   6.55565   -.0120840   .99810499
Valid N (listwise)   721

Table 4.40

Composites of Success Residual Scores 2008 – Descriptive Statistics

                     N     Minimum    Maximum   Mean        Std. Deviation
ZRESIDAlg1SS2008     913   -2.80443   3.70981   -.0138694   .91065933
ZRESIDGeoSS2008      882   -3.57517   4.80708   -.0379159   .92709157
ZRESIDAlg2SS2008     854   -4.15309   5.09036   -.0179621   .95163015
ZRESIDSUMMSS2008     821   -4.20325   3.53249   -.0140565   .96070891
Valid N (listwise)   796

Table 4.41

Composites of Success Residual Scores 2011 – Descriptive Statistics

                     N     Minimum    Maximum   Mean        Std. Deviation
ZRESIDAlg1SS2011     893   -3.16150   3.81463   -.0439820   .90116598
ZRESIDGeoSS2011      870   -2.78217   3.37763   -.0464013   .92004083
ZRESIDAlg2SS2011     856   -4.40675   2.92292   -.0654662   .92738203
ZRESIDSUMMSS2011     831   -3.08388   3.10296   -.0343743   .92469742
Valid N (listwise)   800
Item-Total Statistics for the Composites of the Residuals of the SS Scores
Moreover, item-total statistics charts were computed for the composites of the residuals
of the SS scores for 2004, 2008, and 2011. In 2004, the corrected item-total correlation for the
residuals of the Algebra 1 SS scores compared to the composite of the other three residualized
math SS scores is 0.665 (Table 4.42). In the same year, for Geometry, Algebra 2, and
Summative Math, the corrected item-total correlations are 0.767, 0.806, and 0.678, respectively
(Table 4.42). In 2008, the corrected item-total correlation coefficient for the residuals of the
Algebra 1 SS scores was 0.616, and for Geometry, Algebra 2, and Summative Math, the
corrected item-total correlation coefficients were 0.753, 0.822, and 0.734, respectively (Table
4.43). Similarly, in 2011, the corrected item-total correlations for the composites of the Algebra
1, Geometry, Algebra 2, and Summative Math SS scores were 0.653, 0.748, 0.811, and 0.712,
respectively (Table 4.44). Although the corrected item-total correlations for the residuals of the
Algebra 1 OTL scores decreased between 2004 and 2011, those for the residuals of the
Algebra 1 SS scores did not.
Table 4.42

Composites of Success Residual Scores 2004 – Item-Total Statistics

Item                Scale Mean if   Scale Variance if   Corrected Item-     Squared Multiple   Cronbach's Alpha
                    Item Deleted    Item Deleted        Total Correlation   Correlation        if Item Deleted
ZRESIDAlg1SS2004    .1752769        5.967               .665                .513               .859
ZRESIDGeoSS2004     .2227381        5.294               .767                .629               .818
ZRESIDAlg2SS2004    .1917296        5.216               .806                .670               .802
ZRESIDSUMMSS2004    .3031838        5.204               .678                .545               .859
OTL AND SUCCESS 127
Table 4.43
Composites of Success Residual Scores 2008 – Item Total Statistics
                   Scale Mean     Scale Variance   Corrected Item-    Squared Multiple   Cronbach's Alpha
                   if Item        if Item          Total              Correlation        if Item
                   Deleted        Deleted          Correlation                           Deleted
ZRESIDAlg1SS2008   -.0759872      5.566            .616               .483               .880
ZRESIDGeoSS2008    -.0331173      5.156            .753               .582               .829
ZRESIDAlg2SS2008   -.0478292      4.871            .822               .748               .800
ZRESIDSUMMSS2008   .0697689       4.806            .734               .712               .837
Table 4.44
Composites of Success Residual Scores 2011 – Item Total Statistics
                   Scale Mean     Scale Variance   Corrected Item-    Squared Multiple   Cronbach's Alpha
                   if Item        if Item          Total              Correlation        if Item
                   Deleted        Deleted          Correlation                           Deleted
ZRESIDAlg1SS2011   -.0799616      5.276            .653               .486               .868
ZRESIDGeoSS2011    -.0437425      4.915            .748               .577               .832
ZRESIDAlg2SS2011   -.0537253      4.722            .811               .699               .806
ZRESIDSUMMSS2011   -.0187927      4.840            .712               .624               .847
Research Question 7: Stability of the Composites of the Residuals of the OTL and SS Scores
For the seventh research question, the stability of the residuals of the OTL and SS scores
of the four math courses, Algebra 1, Geometry, Algebra 2, and Summative Math, was examined.
Stability of the Composites of the Residuals of the OTL Scores
A table illustrating the bivariate correlations of the composites of the residuals (Table
4.46) follows a table of the key descriptive statistics (Table 4.45), which includes the means,
minimums, maximums, skewness, standard error of skewness, kurtosis, and the standard error of
kurtosis. The composites of the residuals of the OTL scores for Algebra 1, Geometry, Algebra 2,
and Summative Math were correlated for the years 2004 to 2011. Subsequently, histograms of
the composites of the residuals of the OTL scores were plotted for the years 2004, 2008, and
2011 in Figures 4.9, 4.10, and 4.11, respectively.
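The year-to-year stability correlations reported below can be illustrated with a short sketch. The Python below uses hypothetical composite residuals with pairwise deletion of schools missing in either year, which is why the valid N differs from cell to cell in the correlation tables; all values here are invented for illustration.

```python
from math import sqrt

def pairwise_pearson(x, y):
    # Pearson r using only schools with values in BOTH years
    # (pairwise deletion, so N varies from cell to cell).
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    n = len(pairs)
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sqrt(sum((a - mx) ** 2 for a in xs))
    sy = sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy), n

# Hypothetical composite OTL residuals for six schools; None marks a school
# with no score in that year.
otl_2004 = [0.3, -0.5, None, 0.8, -0.2, 0.1]
otl_2005 = [0.5, -0.4, 0.6, None, -0.1, 0.0]

r, n = pairwise_pearson(otl_2004, otl_2005)
```

Here only four schools have scores in both years, so the correlation is computed over N = 4, just as the SPSS output reports a different N for each pair of years.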
Table 4.45
Descriptive Statistics for the Composites of OTL Residuals
                         2004      2005      2006      2007      2008      2009      2010      2011
N        Valid           721       746       780       793       799       811       807       804
         Missing         261       236       202       189       183       171       175       178
Mean                     .0075     -.0347    -.0068    -.0636    -.0793    -.1034    -.1147    -.1433
Std. Error of Mean       .03183    .03013    .03086    .02919    .02885    .02826    .02685    .02814
Median                   .0085     .0081     .0298     -.0487    -.0754    -.0836    -.0878    -.1346
Mode                     -2.50a    -2.34a    -2.35a    -2.30a    -2.58a    -2.46a    -2.22a    -2.11a
Std. Deviation           .85463    .82306    .86181    .82207    .81542    .80476    .76275    .79781
Variance                 .730      .677      .743      .676      .665      .648      .582      .637
Skewness                 .480      .268      .316      .164      .034      .081      .027      .031
Std. Error of Skewness   .091      .090      .088      .087      .086      .086      .086      .086
Kurtosis                 1.650     .929      .954      .518      .597      .085      .058      .087
Std. Error of Kurtosis   .182      .179      .175      .173      .173      .171      .172      .172
Range                    6.64      6.15      5.71      5.17      6.12      5.44      4.96      4.75
Minimum                  -2.50     -2.34     -2.35     -2.30     -2.58     -2.46     -2.22     -2.11
Maximum                  4.14      3.80      3.35      2.87      3.55      2.98      2.74      2.63
Sum                      5.38      -25.91    -5.32     -50.42    -63.35    -83.84    -92.59    -115.21
Note. Columns are CompResidOTL2004 through CompResidOTL2011. a. Multiple modes exist; the smallest value is shown.
Table 4.46
Bivariate Correlations for the Composites of OTL Residuals
Pearson correlation
                   2004    2005    2006    2007    2008    2009    2010    2011
CompResidOTL2004   1       .874    .787    .726    .667    .627    .588    .555
CompResidOTL2005   .874    1       .870    .779    .716    .660    .617    .587
CompResidOTL2006   .787    .870    1       .900    .813    .742    .699    .668
CompResidOTL2007   .726    .779    .900    1       .897    .798    .749    .702
CompResidOTL2008   .667    .716    .813    .897    1       .886    .812    .751
CompResidOTL2009   .627    .660    .742    .798    .886    1       .885    .806
CompResidOTL2010   .588    .617    .699    .749    .812    .885    1       .898
CompResidOTL2011   .555    .587    .668    .702    .751    .806    .898    1

Pairwise N
                   2004    2005    2006    2007    2008    2009    2010    2011
CompResidOTL2004   721     697     699     695     693     702     696     693
CompResidOTL2005   697     746     733     726     723     730     728     724
CompResidOTL2006   699     733     780     754     754     763     754     749
CompResidOTL2007   695     726     754     793     770     772     766     761
CompResidOTL2008   693     723     754     770     799     779     773     770
CompResidOTL2009   702     730     763     772     779     811     782     779
CompResidOTL2010   696     728     754     766     773     782     807     790
CompResidOTL2011   693     724     749     761     770     779     790     804
Note. All correlations are significant at the 0.01 level (2-tailed).
Figure 4.9
Histogram of the Composites of OTL Residuals for 2004
Figure 4.10
Histogram of the Composites of OTL Residuals for 2008
Figure 4.11
Histogram of the Composites of OTL Residuals for 2011
Stability of the Composites of the Residuals of the SS Scores
A table illustrating the bivariate correlations of the composites of the residuals of
the SS scores (Table 4.48) follows a table of the key descriptive statistics (Table 4.47),
which includes the means, minimums, maximums, skewness, standard error of skewness,
kurtosis, and the standard error of kurtosis. The composites of the residuals of the SS
scores for Algebra 1, Geometry, Algebra 2, and Summative Math were correlated for the
years 2004 to 2011. Subsequently, histograms of the composites of the residuals of the
SS scores were plotted for the years 2004, 2008, and 2011 in Figures 4.12, 4.13, and 4.14,
respectively.
Table 4.47
Descriptive Statistics for the Composites of Success Residuals
                         2004      2005      2006      2007      2008      2009      2010      2011
N        Valid           721       746       780       793       799       811       807       804
         Missing         261       236       202       189       183       171       175       178
Mean                     .0584     .0402     .0530     .0274     .0069     .0029     -.0023    -.0233
Std. Error of Mean       .03032    .02867    .02971    .02799    .02822    .02784    .02654    .02734
Median                   .0128     .0370     .0195     .0272     -.0183    .0092     .0324     -.0080
Mode                     -2.42a    -2.87a    -2.58a    -2.92a    -2.77a    -2.71a    -2.43a    -2.54a
Std. Deviation           .81426    .78301    .82987    .78825    .79782    .79284    .75393    .77510
Variance                 .663      .613      .689      .621      .637      .629      .568      .601
Skewness                 1.130     .433      .676      .405      .195      .172      -.009     .050
Std. Error of Skewness   .091      .090      .088      .087      .086      .086      .086      .086
Kurtosis                 6.885     3.415     2.752     1.495     1.463     .881      .729      .457
Std. Error of Kurtosis   .182      .179      .175      .173      .173      .171      .172      .172
Range                    8.50      7.58      7.03      6.17      6.82      5.79      5.31      5.39
Minimum                  -2.42     -2.87     -2.58     -2.92     -2.77     -2.71     -2.43     -2.54
Maximum                  6.08      4.70      4.45      3.24      4.05      3.08      2.89      2.85
Sum                      42.12     29.98     41.33     21.70     5.49      2.35      -1.84     -18.72
Note. Columns are CompResidSS2004 through CompResidSS2011. a. Multiple modes exist; the smallest value is shown.
Table 4.48
Bivariate Correlations for the Composites of Success Residuals
Pearson correlation
                  2004    2005    2006    2007    2008    2009    2010    2011
CompResidSS2004   1       .765    .706    .656    .607    .587    .549    .520
CompResidSS2005   .765    1       .789    .696    .659    .616    .562    .519
CompResidSS2006   .706    .789    1       .840    .763    .697    .649    .613
CompResidSS2007   .656    .696    .840    1       .854    .755    .692    .656
CompResidSS2008   .607    .659    .763    .854    1       .851    .772    .713
CompResidSS2009   .587    .616    .697    .755    .851    1       .834    .766
CompResidSS2010   .549    .562    .649    .692    .772    .834    1       .852
CompResidSS2011   .520    .519    .613    .656    .713    .766    .852    1

Pairwise N
                  2004    2005    2006    2007    2008    2009    2010    2011
CompResidSS2004   721     697     699     695     693     702     696     693
CompResidSS2005   697     746     733     726     723     730     728     724
CompResidSS2006   699     733     780     754     754     763     754     749
CompResidSS2007   695     726     754     793     770     772     766     761
CompResidSS2008   693     723     754     770     799     779     773     770
CompResidSS2009   702     730     763     772     779     811     782     779
CompResidSS2010   696     728     754     766     773     782     807     790
CompResidSS2011   693     724     749     761     770     779     790     804
Note. CompResidSS = composites of the residuals for the success scores. All correlations are significant at the 0.01 level (2-tailed).
Figure 4.12
Histogram of the Composites of Success Residuals for 2004
Figure 4.13
Histogram of the Composites of Success Residuals for 2008
Figure 4.14
Histogram of the Composites of Success Residuals for 2011
CHAPTER FIVE: SUMMARY AND DISCUSSION
The launch of Sputnik in 1957, the Coleman Report in 1966, A Nation at Risk in
1983, and the most recent reauthorization of the Elementary and Secondary Education Act (ESEA),
known as No Child Left Behind (NCLB), are all historical events that compelled the federal
government to respond to the needs of American society. Now, as NCLB approaches its 2014
endpoint, the federal government is once again responding to America trailing other countries in
science, technology, engineering, and mathematics. In a competitive technological age, the
federal spotlight is centered on the American educational system. And, with yet another
reauthorization of the ESEA on the horizon, one that will involve the Common Core Standards
(CCS), states and schools are still searching for better ways to measure school
quality, teacher effectiveness, and, most importantly, student achievement.
Nevertheless, NCLB still holds states to extraordinarily high expectations for all
students. Essentially, NCLB requires all students, as measured by state standardized tests, to
reach proficiency in mathematics and English by the year 2014, and as a result, many schools are
still being challenged by that demand (EdSource, 2010). Although a great deal of research
criticizes NCLB for its implausible goal, more recent research has shown the positive effects
of NCLB (Chubb, Linn, Haycock, & Wiener, 2005; Maleyko, 2011; Veith, 2013). In particular,
this study answers the question of whether NCLB increased student performance in the state of
California. In short, the answer to that question is an unequivocal yes.
The purpose of this study was to critically examine the effects of NCLB on student
achievement, as measured by the California Standards Test (CST), in four mathematics courses:
Algebra 1, Geometry, Algebra 2, and Summative Mathematics. Much research has criticized the
details of NCLB, but little has been done to show its actual effects. This research aimed to:
1) expose the positive growth in student performance, given the influence of historical events
and federal interest, and 2) expose a more equitable and necessary way to measure school
quality. To that end, this study evaluated the extent to which the achievement accountability
measures, the unadjusted and adjusted subject-level indicators for opportunity to learn (OTL)
and success (SS) in California, changed over time, specifically between 2004 and 2012, and the
extent to which each of these indicators correlated with the School Characteristics Index (SCI).
In addition, this study examined the stability and internal consistency of the input-unadjusted
and input-adjusted OTL and SS scores, the composites of those scores, and the composites of the
residuals of the OTL and SS scores. The research questions examined were:
1. Has NCLB led to greater opportunity-to-learn (OTL) in high school math coursework
(Algebra 1, Geometry, Algebra 2, and Summative) in California?
2. Has NCLB led to greater success in math coursework in California?
3. How stable are the eight OTL and SS scores for Algebra 1, Geometry, Algebra
2, and Summative Math?
4. How reliable, or internally consistent, are the math composites of OTL and SS?
5. What is the correlation between the SCI and OTL status composite scores and SS
scores?
6. What are the descriptive characteristics and reliability coefficients (i.e. internal
consistency) of the composites of the residuals of the OTL and SS scores?
7. How stable are the composites of the residuals of the OTL and SS scores?
Summary of Findings
Seven research questions guided this study. The following discussion is organized into
four phases: 1) phase one addresses the upward trend and stability of the unadjusted OTL and SS
scores, 2) phase two addresses the internal consistency of the composites of the unadjusted OTL
and SS scores, 3) phase three addresses the correlation of the composites of the OTL and SS
scores with SCI, and 4) phase four addresses the reliability and stability of the composites of
the residuals of the OTL and SS scores.
Phase One – Trend and Stability of OTL and SS scores
Phase one summarizes the findings from the first three research questions, which examined
the descriptive statistics and stability of the OTL and SS scores between 2004 and 2012.
OTL - Trend
In brief, the answer to whether OTL increased between 2004 and 2012 is an
unequivocal yes. The descriptive statistics show that the mean OTL for each of the four math
courses increased between 2004 and 2012. For example, the OTL for Algebra 1 increased from 68%
in 2004 to 91% in 2012, the largest increase among the four math OTL scores. Behind Algebra 1,
the OTL for Geometry increased from 49% in 2004 to 66% in 2012, a gain of 17 percentage points.
The OTL for Algebra 2 increased by 11 percentage points between 2004 and 2012, and the OTL for
Summative Math increased by 10. In addition, the number of schools that administered the math
CSTs also increased between 2004 and 2012, meaning that the higher 2012 OTL scores reflect a
larger base of schools than the 2004 scores.
OTL - Stability
In brief, the answer to whether OTL was stable between 2004 and 2012 is also yes, with a
caveat for Algebra 1. For the Algebra 1 OTL, the adjacent-year correlation coefficients were all
greater than .74, and all were statistically significant at p < .05. The coefficients for the
Geometry OTL all fell between .82 and .86, those for the Algebra 2 OTL between .86 and .91, and
those for the Summative Math OTL between .93 and .94; these correlations were likewise
statistically significant at p < .05. However, the correlation of the 2004 Algebra 1 OTL with
the 2012 Algebra 1 OTL is only .28, indicating that the 2004 Algebra 1 OTL scores are not highly
correlated with the 2012 scores. Similarly, the correlation of the 2004 Geometry OTL with the
2012 Geometry OTL is .54, indicating that the 2004 Geometry OTL scores are not highly correlated
with the 2012 scores either. For Algebra 2 and Summative Math, the correlations of the 2004 OTL
scores with the 2012 OTL scores are .67 and .80, respectively. Thus, the greatest decay in the
correlations between the 2004 and 2012 OTL scores was observed for Algebra 1.
Success - Trend
Like the OTL trend, the answer to whether student success (SS) increased between
2004 and 2012 is also an unequivocal yes. The descriptive statistics show that the mean SS for
each of the four math courses increased between 2004 and 2012. For example, the SS for Algebra
1 increased by 22 percentage points from 2004 to 2012, the largest increase among the four math
SS scores. Following Algebra 1, the SS for Geometry increased from 35% in 2004 to 48% in 2012,
a gain of 13 percentage points. Between 2004 and 2012, the SS for Algebra 2 increased by 12
percentage points and the SS for Summative Math by 9. As was the case for OTL, the SS scores in
2012 are larger than the SS scores in 2004.
Success - Stability
In a nutshell, the answer to whether SS was stable between 2004 and 2012 is an
unequivocal yes. In fact, all of the adjacent-year SS correlations for the four math courses
fell between .89 and .96. For the Algebra 1 SS, the coefficients fell between .89 and .92, and
all were statistically significant at p < .001. The coefficients for the Geometry SS fell
between .90 and .93, those for the Algebra 2 SS between .89 and .94, and those for the
Summative Math SS between .95 and .96; all were statistically significant at p < .001. Unlike
the OTL scores, the correlation of the 2004 Algebra 1 SS with the 2012 Algebra 1 SS does not
decrease as sharply; that correlation is .69. The remaining correlations of the 2004 SS scores
with the 2012 SS scores differ by no more than .16. Nevertheless, the greatest decay between
the 2004 and 2012 correlations was again observed for the Algebra 1 SS.
Phase Two – Internal Consistency of the Composites of OTL and SS scores
Phase two summarizes the findings from research question four, which examined the
internal consistency of the composites of OTL and SS.
Internal Consistency of OTL using Cronbach’s Alpha
In short, a composite score is the sum of two or more variables that measure the same
content; in this case, the content being measured is OTL. The findings in chapter 4 suggest
that the composites of the OTL and SS scores for the four math courses are highly internally
consistent. Specifically, Cronbach's alpha for the composites of the OTL scores was no less
than .79 in any year, and the greatest alpha for the OTL composites was .87.
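For illustration, Cronbach's alpha can be computed directly from its definition: the ratio of the summed item variances to the variance of the total score, scaled by the number of items. The Python sketch below uses four hypothetical course-level score lists, not the study's data.

```python
def cronbach_alpha(items):
    # items: list of k item-score lists, one score per school.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / n
        return sum((x - m) ** 2 for x in xs) / (n - 1)

    totals = [sum(row) for row in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical OTL scores for four courses across five schools; the items
# move together across schools, so alpha should be high.
items = [
    [0.9, 0.7, 0.8, 0.6, 0.95],
    [0.8, 0.6, 0.7, 0.5, 0.90],
    [0.7, 0.5, 0.6, 0.4, 0.85],
    [0.6, 0.4, 0.5, 0.3, 0.80],
]
alpha = cronbach_alpha(items)
```

Because the four invented item lists rise and fall together across schools, the resulting alpha is well above .9, the same qualitative pattern reported for the study's composites.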
Internal Consistency of SS using Cronbach’s Alpha
As described above, a composite score is the sum of two or more variables that measure
the same content; in this case, the content being measured is success. Cronbach's alpha for the
composites of the SS scores was no less than .94 in any year, and the greatest alpha was .96.
The reliability for SS is greater than that for OTL, although the coefficients for both are
very high. The reliabilities of the SS composites suggest that success in one course is highly
correlated with success in the next course in the sequence.
Item-Total Statistics of OTL using Cronbach’s Alpha
In addition to examining Cronbach's alpha for the composites of the OTL and SS scores,
item-total statistics were examined to determine the reliability of the composites if an item
were deleted; in other words, the item-total statistics gauge the reliability of the variables
with respect to each other. The least reliable coefficient in the OTL composites belonged to
Algebra 1. In 2004, the coefficient for the Algebra 1 OTL was .53, and in 2008 and 2011 it was
.34 and .25, respectively. The Algebra 1 OTL is the only coefficient that decreased drastically
from 2004 to 2011, suggesting that as the number of schools offering Algebra 1 increased, its
correlation with the rest of the OTL composite decreased. In other words, merely administering
Algebra 1 no longer differentiates a school's OTL, because most schools now offer Algebra 1 in
the 9th grade. Moreover, the coefficients for the Geometry and Algebra 2 OTL were no less than
.78 and no greater than .87 between 2004 and 2011.
Item-Total Statistics of SS using Cronbach’s Alpha
On the other hand, the item-total statistics for the composites of SS were very different
from those for OTL. For example, if the Algebra 1 SS were deleted from the composite of the
math SS scores, the Cronbach's alphas for 2004, 2008, and 2011 would be .896, .87, and .84,
respectively. This suggests that success in Algebra 1 is a reliable indicator of success in the
other three math courses. The coefficients were even higher for the other three courses. If
the Geometry SS were deleted from the composite, the reliabilities of the remaining three items
would be .95, .93, and .92 for 2004, 2008, and 2011, respectively; in other words, success in
Geometry is a good indicator of success in Algebra 2 or Summative Math. Essentially, the
item-total findings suggest that success in one math course is highly correlated with success
in the successive math course. Geometry and Algebra 2 had two of the highest coefficients
across all the years, no less than .91 and no greater than .95.
Phase Three – Correlation of Input-Unadjusted Composites with SCI
Phase three summarizes the findings from research question five, which examined the
correlation of the composites of the OTL and SS scores with SCI. In short, the SCI for each
school is an index of student, teacher, and school characteristics that influence achievement
assessments; it is computed by the California Department of Education. Because the California
Department of Education had not published the 2012 Base API data file, which discloses the
School Characteristics Index (SCI) for each school in the state, the correlations of the
unadjusted and adjusted composite scores with SCI were computed for the years 2004 to 2011.
Accordingly, the findings for research questions five, six, and seven are discussed in phases
three and four.
In phase three, descriptive statistics were computed, and all of the means for the
composites of OTL and SS increased from 2004 to 2011: the mean of the OTL composites rose from
.45 to .57, and the mean of the SS composites rose from .30 to .42. The unadjusted composites
of OTL and SS were then correlated with SCI. The correlations of the OTL composites with SCI
for 2004, 2008, and 2011 were .55, .57, and .55, respectively, while the correlations of the SS
composites with SCI for the same years were .84, .86, and .83, respectively. All of the
correlations were statistically significant at p < .001. Fundamentally, the OTL composites are
not as highly correlated with SCI as the SS composites are; a school's overall success score is
highly correlated with its SCI.
Phase Four – Reliability and Stability of the Residuals of OTL and SS composites
Phase four summarizes the findings from research questions six and seven, for which the
internal consistency, descriptive statistics, item-total statistics, and stability of the
composites of the residuals of OTL and SS were examined. The residual scores are the composites
of the OTL and SS scores after regression on the SCI.
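The adjustment just described can be sketched as a simple ordinary least squares regression of a composite score on SCI, with the residual serving as the input-adjusted score: a positive residual means a school outperforms what its SCI alone would predict. The Python below is a minimal illustration with hypothetical SCI and SS values, not the study's data.

```python
def ols_residuals(x, y):
    # Residuals of y regressed on x via simple OLS: e_i = y_i - (a + b * x_i).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Hypothetical SCI values and composite SS scores for six schools
sci = [110.0, 125.0, 140.0, 155.0, 170.0, 185.0]
ss  = [0.25, 0.30, 0.45, 0.40, 0.55, 0.60]

resid = ols_residuals(sci, ss)
# OLS residuals sum to (numerically) zero by construction, which is why the
# chapter treats nonzero residual-composite means as a computational artifact.
```

This is the sense in which residual means "should equal zero by definition": within a single fitted regression, the residuals are centered at zero exactly.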
Internal Consistency of the Composites of OTL and SS Residuals
The internal consistency of the composites of the residuals was determined using
Cronbach's alpha. In short, the alpha for the composites of the OTL residuals was .87 in 2004
and decreased to .79 in 2011. The alpha for the composites of the SS residuals ranged from .83
to .87, without the consistent downward pattern of the OTL residual composites; the alphas were
nonetheless very high.
Means of the Composites of OTL and SS Residuals
The means of the composites of the OTL residuals were all less than zero, ranging from a
maximum of -0.01 in 2004 to a minimum of -0.17 in 2011. By definition, the mean of regression
residuals should equal zero; because the means of the composites of the OTL residuals were
close to, but not exactly, zero, there was a slight error in the computation of the residuals.
Nevertheless, the means for the composites of the OTL residuals stayed fairly close to zero.
Similarly, the means of the composites of the SS residuals were also negative, ranging
from a maximum of -0.002 in 2004 to a minimum of -0.07 in 2011. The same issue occurred with
the composites of the SS residuals as with the composites of the OTL residuals.
Item-total Statistics of the Composites of OTL and SS Residuals
For the composites of the OTL residuals, Algebra 1 had the lowest coefficients, at .68,
.37, and .32 for 2004, 2008, and 2011, respectively. These findings mirror those for the
unadjusted composites of OTL discussed above. That is, if the Algebra 1 OTL item were deleted
from the composites of the residuals, its effect on the Cronbach's alpha of the remaining three
items would be the smallest. This indicates that by 2011 Algebra 1 had the least effect on the
OTL status of the other three math courses, because more and more schools were administering
the Algebra 1 CST by the 9th grade.
For the composites of the SS residuals, the coefficients for all the items ranged from
.61 to .83. As in the previous paragraph, the coefficient for the Algebra 1 SS residuals
remained the lowest of the four but was still high; in other words, success in Algebra 1 is
highly correlated with the likelihood of success in the other math courses. The coefficients
for the SS residuals of Geometry and Algebra 2 were greater than those for Algebra 1, and that
pair (i.e., Geometry and Algebra 2) had consistently high correlations across all the years
between 2004 and 2011.
Stability of the Composites of OTL and SS Residuals
The results of the seventh research question indicate that the composites of the OTL
residuals were very stable, with adjacent-year correlation coefficients no less than .87 and no
greater than .90. From 2004-2005 through 2010-2011, the coefficients were .87, .87, .90, .90,
.89, .89, and .90, respectively. Similarly, the composites of the SS residuals were also very
stable, with coefficients no less than .77 and no greater than .85; from 2004 to 2011, the
adjacent-year coefficients were .77, .79, .84, .85, .85, .85, and .85, respectively. These
consistently high correlations of the composites of the residuals of both OTL and SS indicate
that SCI substantially shapes both the opportunity-to-learn and success scores of California
schools that offer Algebra 1, Geometry, Algebra 2, and Summative Math. In other words, these
stable adjusted scores show that SCI must be a factor in performance and can be used for
accountability purposes.
Discussion
Essentially, this research aimed to: 1) expose the positive growth in student
performance, given the influence of historical events and federal interest, and 2) expose a
more equitable and necessary way to measure school quality. The results of this study are
consistent with those of previous studies using standardized test scores to measure success
(Black, 2012; Levy, 2011; Veith, 2013). However, most previous studies of success were analyzed
at the district or state level. The significance of this study is that success and students'
opportunity to learn were analyzed at the school and subject levels for the whole state of
California. In the past, studies of student success were viewed through the lens of a student's
opportunity to learn (Oakes, 1990; Oakes, 2005). That is, more students were successful when
given an opportunity to learn higher-level courses; but because students were placed in
lower-level math courses, a practice identified as tracking, students' opportunity to learn,
their success in math courses, and their ability to take subsequent advanced mathematics
courses were negatively affected (Oakes, 1985, 1990, 1992, 1995, 2005; Oakes & Guiton, 1995).
Since NCLB, however, students' opportunity to learn and success in higher-level math courses
(i.e., Algebra 1, Geometry, Algebra 2, and Summative Math) in California have increased, and
this study confirms that the increase is nothing but impressive. Table 5.1 summarizes the
percent increases in both OTL and success, as measured by the CSTs for Algebra 1, Geometry,
Algebra 2, and Summative Math, from 2004 to 2012.
Table 5.1
Summary of the Percent Increase for OTL and SS
CST Math Subject OTL Percent Increase Success Percent Increase
Algebra 1 35.19% 50.95%
Geometry 32.12% 38.59%
Algebra 2 26.18% 47.78%
Summative Math 49.32% 70.33%
Note. Percent increases are from 2004 to 2012.
Remarkably, the percent increase in the OTL means closely parallels the percent increase
in the success means. This similarity does not appear arbitrary: when more students are given
the opportunity to take a higher-level math course, their likelihood of being successful grows
(Oakes, 1986). One may ask what caused the increase in the number of students enrolling in
higher-level math classes. Accountability initiatives such as NCLB, RTTT, and California's
Algebra for All initiative in 2005 essentially opened the door of opportunity for many students
in California, especially English Language Learner (ELL) students (Veith, 2013). The Algebra
for All (2005) initiative directed districts to enroll all students in Algebra 1 by the 8th
grade. Supported by the NCTM, Algebra for All became a goal of school reform (Miller & Mercer,
1997). Some have identified Algebra as a "gatekeeper" because successful completion of an
Algebra course is a prerequisite for college-bound students, for further studies in
mathematics, and for many jobs and later opportunities (Chazan, 1996; Silver, 1997).
Essentially, many California districts and schools have done just that: they require students
to take Algebra 1. The results of this study illustrate the longitudinal positive growth.
Moreover, the reliability and stability data in this study say a great deal about how
well California has done since NCLB. In a study of the impact of NCLB, Maleyko (2011) claims
that multiple years of data are needed to reliably evaluate school performance and adds that
"the measure of cross cohort gains was found to be an unreliable measure, especially when data
was examined from year to year" (p. 317). Congruent with Maleyko (2011) is a more recent study
(Black, 2012) on the unreliability of value-added models, a type of year-to-year growth model
in which states or districts use student background characteristics, prior achievement, and
other data, including state standardized assessments, as statistical controls to isolate the
specific effects of a particular school, program, or teacher on student academic progress
(Goldschmidt et al., 2005; MET Project, 2009 [17]). Black (2012) confirms that because
value-added scores reset to zero and growth is based on the prior year's value-added score,
value-added models are unreliable and unstable. By design, since teachers are evaluated on a
relative scale, there will always be teachers in the bottom tier.
Although there is some debate around the use of standards-based assessments for
measuring content and performance, the efforts to find better ways to measure progress and
success are noticeable. Speaking as one who is still in the classroom teaching mathematics, the
CSTs, although multiple choice, for the most part assess mathematical content. So, when
success comes to mind, it seems unlikely that an "educated guesser" on the math CSTs would
score Basic or above unless the student had some mathematical content knowledge,
¹⁷ Retrieved on May 4, 2013 from: http://www.metproject.org/downloads/Preliminary_Findings-Research_Paper.pdf
including the fundamentals (e.g., the substitution property, solving equations, order of operations,
interpretation of data, and identifying quadratics). Furthermore, schools have spent, and are still
spending, a great deal of time and money prepping students for the CSTs, as well as teaching
test-taking strategies, in order to meet federal and state requirements under NCLB.
In the past, however, the problem was not with what students needed to know for
standardized testing (i.e., content and strategies); it was inequitable access to the resources that
helped students learn what they needed to know. In particular, disadvantaged students who come
from low-income schools have not been given the proper educational resources to succeed in
school. Ample research shows that students from low-income backgrounds lack access to quality
learning resources, in contrast to their more privileged counterparts, who have access to learning
resources at home, such as parental support, internet access, and financial support, as well as
resources that help them perform higher on standardized tests (e.g., tutors, online resources, and
test-prep materials).
Essentially, federal and state mandates have responded to these issues, and schools are
being held accountable for providing those resources to students. In response, the California
Department of Education has identified socioeconomic factors that contribute to student
performance and success, captured in the School Characteristics Index (SCI). In the past, efforts
to hold schools accountable for student performance were cast out onto an uneven playing field
because of the lack of students' opportunities to learn the content reflected on the standards test
(Wells & Oakes, 1996), and the SCI takes some of these factors into account. Indeed, according
to Hursh (2007), "because test scores strongly correlate with a student's family income, a
school's score is more likely to reflect its students' average family income than teaching or the
curriculum" (p. 506). Thus, this study has aimed to demonstrate a more equitable
and necessary way to measure school quality, specifically using residualized OTL and SS scores
to measure success.
Implications
This study generated a number of relevant implications as they relate to the practice of
performance evaluation. The following sections group these implications, which are supported
by the findings of the study, into two categories: theoretical and practical implications.
Theoretical Implications:
The findings of this study have theoretical implications that can not only help change the
common perception of NCLB but also reveal its positive effects on attainment and achievement
in mathematics performance for high schools in California.
Attainment
Historically, increases in students' access to higher levels of mathematics have been
influenced by policies mandated by the federal government. The Coleman Report (1966),
A Nation at Risk (1983), NCLB (2002), and RTTT (2009) influenced how and what is taught
in American schools and ultimately affected the number of students gaining access to education.
More recently, under the NCLB accountability system, which holds American schools to an
extreme standard, students' access to education has increased tremendously over the last decade.
Much research focuses on the implausibility of NCLB's goal and its negative effects (Jennings
& Rentner, 2006; Dee & Jacob, 2010; Rush & Scherff, 2012). However, previous and emerging
research discloses the positive effects NCLB has had on student attainment and performance
(Chubb, Linn, Haycock, & Wiener, 2005; Maleyko, 2011; Veith, 2013).
Two recent studies confirmed that in California, access to early algebra has increased
student performance for minority and English learner (EL) students, especially since the
implementation of the Algebra for All initiative in 2005 (Levy, 2011; Veith, 2013). The two
studies also confirm that the percentage of disadvantaged students with access to the gateway
course (i.e., Algebra 1) has increased. The results of this study corroborate previous findings on
increased levels of attainment in mathematics, namely the percent increases in all of the math
OTL scores discussed above.
Achievement
Simply stated, NCLB has positively affected student performance in mathematics courses
in California since 2004. The results of this study confirm this positive effect; however, previous
research focused on elementary and middle school math and ELA performance (Dee & Jacob,
2010). The literature highlighting NCLB's effect on high school math performance is extremely
scarce, and no studies have focused on NCLB's effect on math performance at both the school
level and the subject level. The results of this study substantiate previous findings about the
increase in math performance and also contribute a more detailed description of school
performance at the subject level.
Practical Implications:
The findings of this study have practical implications that can change the way schools
and teachers are evaluated. Contrary to popular belief, NCLB has generated significant and
positive outcomes in school quality and student performance (Porter, Linn, & Trimble, 2005;
Jennings & Rentner, 2006; Dee & Jacob, 2010; O'Brien, 2010). Also, research suggests that
current measurements, such as the API and AYP, are unreliable measures of school quality.
Maleyko (2011) states:
… that the current accountability provisions in NCLB have not been effective in
evaluating school performance and at providing schools with a failing or successful label.
It is apparent from both this study and the literature that there are numerous problems
with the implementation of AYP in order to motivate effective school improvement
efforts. (p. 323)
Moreover, a recent study uncovered the unreliability of value-added scores (Black, 2012).
Along with Black (2012), other studies have shown that although API scores are stable, they do
not adjust for significant factors that contribute to student performance, such as SES, parental
education, minority status, ELL status, the percent of students participating in free and reduced
lunch, and the percent of teachers with full versus emergency credentials. The API and AYP
scores are an amalgamation of many different factors that affect the score. Further, no prior
research specifically examines NCLB's effect on success in subject-specific mathematics courses
as measured by the CSTs.
Measuring School Quality Appropriately
The results of this study not only highlight the subject-level Success scores in California
from 2004 to 2012 but also show that computing subject-level Success scores while adjusting for
the SCI is a more accurate, reliable, stable, and fair way to measure school quality. Because the
correlations between the SCI (which captures significant factors that contribute to success) and
success are extremely high, it is only fair to evaluate school quality after accounting for the SCI.
In the past, efforts to hold schools accountable for student performance were cast out onto an
uneven playing field because of the lack of students' opportunities to learn the content reflected
on the standards test (Wells & Oakes, 1996). In other words, the measures used to gauge the
success of a school must be brought to an even playing
field before assessing the quality of the school. Thus, a school's score should be adjusted for the
factors that contribute to success and should be measured longitudinally, not on a year-to-year
basis (as value-added measurements are). The results of this study, along with other studies
(Black, 2012; Levy, 2011; Veith, 2013), show that the SCI must be accounted for in order to
measure school quality.
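The SCI adjustment described above can be sketched as a residualization: regress each school's Success score on its SCI and keep the residual, so schools are compared only on the performance their student characteristics do not predict. The sketch below is a minimal illustration under that assumption; the school data and the helper `residualized_scores` are hypothetical, not the study's actual computation.

```python
import numpy as np

def residualized_scores(sci, success):
    """Adjust school Success scores for SCI via simple linear regression.

    A school's residual is its observed Success score minus the score
    predicted from SCI alone, leaving the part of performance that the
    school's characteristics do not explain.
    """
    sci = np.asarray(sci, dtype=float)
    success = np.asarray(success, dtype=float)
    # Ordinary least squares fit: success = intercept + slope * SCI.
    slope, intercept = np.polyfit(sci, success, 1)
    predicted = intercept + slope * sci
    return success - predicted

# Hypothetical schools: higher SCI tends to go with higher raw Success,
# so raw scores alone would favor the advantaged school.
sci = [600.0, 700.0, 800.0]
success = [40.0, 55.0, 65.0]
print(np.round(residualized_scores(sci, success), 2))
```

By construction the residuals sum to zero across schools, which is what puts schools on the "even playing field" the passage describes: a positive residual marks a school outperforming others with a similar SCI.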
Recommendations for Further Research and Conclusion
Several implications were derived from this study for theoretical and practical purposes
as they relate to accountability systems that link student assessment data to measures of school
quality. It is therefore extremely important that this study be replicated, as designed, in other
states to confirm its conclusions. First, since this study focused primarily on success in high
school mathematics courses as measured by the CSTs, the study should be replicated in
California to measure success rates in English Language Arts (ELA) for grades 9 to 12. Given
the current push toward increasing proficiency levels in both English and math (i.e., NCLB and
RTTT), measuring ELA success rates while adjusting for the SCI would substantiate and confirm
not only the results of this study but also the positive effects of NCLB. Although NCLB set an
implausible goal, it raised the bar for teaching and learning, and applying the procedures used in
this study to the standardized tests of other states would add to the literature on the effects of
NCLB on student performance.
Another recommendation for further research would be to examine the rates at which
students enter 4-year institutions after being given the opportunity to learn higher-level math
courses (i.e., Algebra 1, Geometry, Algebra 2, Precalculus, and higher). Given that the
federal government has been continually involved in education since the launch of Sputnik, the
more recent drive toward STEM education will meet a dead end unless research determines how
many students are actually given the opportunity to learn the material needed to create a
workforce that can compete with other countries. As NCLB comes to a close and a new set of
standards (i.e., the Common Core Standards) becomes the framework for our nation's
educational system, the lens of solid and reliable literature will help us understand where our
nation has been and where it is going.
Overdue in its reauthorization, the process of rewriting NCLB has been tedious, but the
latest reauthorization of the ESEA (1965) was drafted just weeks ago by Democratic Senator
Tom Harkin of Iowa. The bill, known as the "Strengthening America's Schools Act of 2013"
(SASA), is not as implausible as NCLB but provides a similar framework and goal: to get all
children to graduate from high school with the knowledge and skills necessary for success in
college and/or a career. The SASA bill promises to: 1) focus greater attention on children in their
early years to ensure they come to school ready to learn, 2) encourage equity through greater
transparency and fair distribution of resources, 3) sustain current state reform efforts and provide
states the flexibility they need to improve their schools, and 4) support great teachers and
principals and ensure that all children receive the best instruction.¹⁸ Moreover, in order to ensure
that there are great teachers and principals at every school, the SASA bill calls for supporting
evaluations that help promote school success. Thus, a recommendation would be to inspect the
SASA bill and its demands more closely and to consider using the results of this study as a more
reliable and fair way to measure school quality, including the quality of teachers and principals.
¹⁸ Retrieved on June 14, 2013 from http://www.help.senate.gov/imo/media/doc/ESEA%20Summary%206.4.13.pdf
Finally, a recommendation for further research would be to examine more closely the
schools in this study at the upper extremes of the normal curve and determine what they are
doing differently from all of the other schools in the state. Using both quantitative and
qualitative methods, in conjunction with the methods of this study, to identify what
over-performing schools are doing compared to underperforming schools will help narrow the
gap in our understanding of what it truly means to educate and how to ensure that all students
have an equitable educational experience. As ideologies in curriculum design continue to change
and influence the educational system, so do the policies driven by those ideologies change the
way we assess, evaluate, and create schema for a new generation of students.
References
Aiken, W. M. (1942). The story of the eight-year study. New York: Harper.
American Psychological Association. (1999). Standards for Educational and Psychological
Tests and Manuals. Washington, DC: American Psychological Association.
Andrews, R. L., & Soder, R. (1987). Principal leadership and student achievement,
Educational Leadership, 44, 9-11.
Atkinson, R. D. and Mayo, M. J. (December 9, 2010). Refueling the U.S. Innovation
Economy: Fresh approaches to science, technology, engineering and
mathematics (STEM) education. The Information Technology & Innovation
Foundation, Forthcoming. Available at SSRN: http://ssrn.com/abstract=1722822
Apps, J. W. (1973). Toward a working philosophy of adult education. Syracuse, N.Y.:
Syracuse University, Publications in Continuing Education.
Baker, E. L., Barton, P. E., Darling-Hammond, L., Haertel, E., Ladd, H. F., Linn, R. L.,
Ravitch, D., Rothstein, R., Shavelson, R. J., & Shepard, L. A. (2010). Problems
with the use of student test scores to evaluate teachers (Briefing Paper No. 278).
Washington, DC: Economic Policy Institute.
Ballou, D., Sanders, W., & Wright, P. (2004). Controlling for student background in value-
added assessment of teachers. Journal of Educational and Behavioral Statistics,
29(1), 37-65.
Barlex, D. (2007). STEM: an important acronym. The Design and Technology
Association. Warwickshire, UK: D&T News (36).
Barth, R. S. (1977). Beyond open education. The Phi Delta Kappan, 58(6), 489-492.
Bennett, N. (1976). Teaching styles and pupil progress. Cambridge, Mass: Harvard
University Press.
Bennett, C. (2001). Genres of research in multicultural education. Review of
Educational Research, 71(2), 171-217.
Bickel, R. D. (1998). A brief history of the commitment to inclusion as a facet of equal
educational opportunity. New Directions for Student Services, 1998(83), 3-13.
Bill & Melinda Gates Foundation (2009). MET Project: Working with Teachers to
Develop Fair and Reliable Measures of Effective Teaching, Seattle, WA: Bill &
Melinda Gates Foundation.
Black, A. (2012). A comparison of value-added, ordinary least squares regression, and the
California STAR accountability indicators. ProQuest, UMI Dissertations Publishing.
Black, P., & Wiliam, D. (1998, Oct.). Inside the black box: Raising standards through
classroom assessment. Phi Delta Kappan, 80, 139-148.
http://www.pdkintl.org/kappan/kbla9810.htm
Bobbitt, F. (1918). The curriculum. Boston: Riverside Press.
Bowles, S. S., & Levin, H. M. (1968). More on multicollinearity and the effectiveness of
schools. The Journal of Human Resources, 3(3), 393-400.
Braun, H. (2005), Using Student Progress to Evaluate Teachers: A Primer on Value-
Added Models, Princeton, NJ: Educational Testing Service.
Breiner, J. M., Harkness, S. S., Johnson, C. C. and Koehler, C. M. (2012), What Is
STEM? A Discussion About Conceptions of STEM in Education and
Partnerships. School Science and Mathematics, 112:3–11.
Brunello, G., & Checchi, D. (2007). Does school tracking affect equality of opportunity?
new international evidence. Economic Policy, 22(52), 781-861.
Bruner, J. S. (1963). Needed: A theory of instruction. Educational Leadership, 20(8), 523-532.
Burris, C. C. (2010). Detracking for success. Principal Leadership, 10(5), 30-34.
Bybee, R. W., & Fuchs, B. (2006). Preparing the 21st century workforce: A new reform
in science and technology education. Journal of Research in Science Teaching,
43(4), 349-352.
Bybee, R. W. (2007). Do we need another sputnik? The American Biology Teacher,
69(8), 454-457.
California Department of Education. (2012a). 2011-12 Academic Performance Index reports:
Information guide. Retrieved June 1, 2013, from
www.cde.ca.gov/ta/ac/ap/documents/infoguide12.pdf
California Department of Education. (2012b). Overview of the 2011-12 accountability progress
report. Retrieved June 1, 2013, from
http://www.cde.ca.gov/ta/ac/ay/documents/overview12.pdf
California Department of Education. (2012c). Construction of school characteristics index and
similar schools ranks. Retrieved June 1, 2013, from
http://www.cde.ca.gov/ta/ac/ap/documents/tdgreport0400.pdf
California Department of Education. (2012). Descriptive statistics and correlation tables for
California's 2011 School Characteristics Index and similar schools ranks. Retrieved
June 1, 2013, from
http://www.cde.ca.gov/ta/ac/ap/documents/tdgreport1112.pdf
Carver, R. P. (1975). The Coleman report: Using inappropriately designed achievement
tests. American Educational Research Journal, 12(1), 77-86.
Chubb, J., Linn, R., Haycock, K., & Wiener, R. (2005). Do we need to repair the monument?
Debating the future of No Child Left Behind. Education Next, 5(2), 9–19.
Cohen, S. (1976). The history of the history of American education, 1900-1976: The uses of the
past. Harvard Educational Review, 46(3), 298.
Coleman, J.S., Campbell, E.Q., Hobson, C.J., McPartland, J., Mood, A.M., Weinfeld,
F.D., & York, R.L. (1966). Equality of Educational Opportunity. Washington,
D.C.: U.S. Government Printing Office. Washington, DC: National Center for
Education Statistics, Institute of Education Sciences, U.S. Department of
Education.
Coleman, J. S. (1970). Reply to Cain and Watts. American Sociological Review, 35(2),
242-249.
Coleman, J. (1981). Public schools, private schools, and the public interest. Public Interest, (64),
19-30.
Corcoran, S. P. (2010). Can teachers be evaluated by their students’ test scores? Should they be?
The use of value-added measures of teacher effectiveness in policy and practice.
Providence, RI: Annenberg Institute for School Reform at Brown University. Retrieved
July 29, 2012, from: http://www.annenberginstitute.org/pdf/valueAddedReport.pdf
Counts, G. S., (1932a). Dare progressive education be progressive? Progressive Education,
9(4), 257-263.
Counts, G. S., (1932b). Dare the school build a new social order? New York: John Day.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods
approaches (3rd ed.). Thousand Oaks, Calif: Sage Publications.
Darling-Hammond, L. (1992). Standards of Practice for Learner-centered Schools. New York:
National Center for Restructuring Schools and Teaching, Teachers College, Columbia
University.
Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state
policy evidence. Arizona State University: Education Policy Analysis Archives.
Available at: http://epaa.asu.edu/epaa/v8n1
Darling-Hammond, L., LaFors, J., & Snyder, J. (2001). Educating teachers for California's
future. Teacher Education Quarterly, 28(1), 9-55.
Darling-Hammond, L. (2001). Educating teachers for California's future. Teacher Education
Quarterly, 28(1), 9-55.
Dee, T., & Jacob, B. (2010). Evaluating NCLB. Education Next, 10(3), 54-61.
Demanet, J., Houtte, M. V., & Stevens, P. A. (2012). Self-esteem of academic and
vocational students: Does within-school tracking sharpen the difference? Acta
Sociologica, 55(1), 73-89.
Department for Education and Skills. (2006) STEM Programme Report. London: Author.
Dewey, J. (1897). My Pedagogic Creed. The School Journal, 54(3), 77-80.
Dewey, J. (1910). Science as subject-matter and as method. Science, 31, 121–127.
Dewey, E. & Dewey, J. (1915). Schools of tomorrow. New York: E. P. Dutton.
Dewey, J. (1916/2008). Democracy and education 1916. Schools: Studies in Education, 5(1/2),
87-95.
Dewey, J. (1938/2012). Education and democracy in the world of today. Schools: Studies
in Education, 9(1), 96-100.
Doan Holbein, M. F. (1998). Will standards improve student achievement? Education,
118(4), 559.
Douglas, J., Iversen, E., & Kalyandurg, C. (2004) Engineering in the K-12 classroom: An
analysis of current practices and guidelines for the future. Washington: ASEE
Engineering K12 Centre.
Edelman, M. W. (1973). Southern school desegregation, 1954-1973: A judicial-political
overview. Annals of the American Academy of Political and Social Science, 407(1), 32-
42.
Edsource, Inc. (2009, October). The new federal education policies: California's Challenge.
http://www.edsource.org/pub_new-fed-policies.html
Edsource, Inc. (2010, June). California and the "Common Core": Will there be a new debate
about K-12 standards?
Ellett, C. D., & Teddlie, C. (2003). Teacher evaluation, teacher effectiveness and school
effectiveness: Perspectives from the USA. Journal of Personnel Evaluation in
Education, 17(1), 101-128.
Freire, P. (1970). Pedagogy of the oppressed. New York: Seabury Press.
Freire, P. (1970/2006). Pedagogy of the oppressed: 30th anniversary edition. New York:
Continuum.
Garcia, G. (2002). Chapter 1 (Introduction). In Student Cultural Diversity:
Understanding and Meeting the Challenge (3rd ed.), Boston, MA: Houghton
Mifflin, 3- 39.
Gattie, D. & Wicklein, R. (2007). Curricular Value and Instructional Needs for Infusing
Engineering Design into K- 12 Technology Education. Journal of Technology
Education, 19(1), 6-18.
Goldschmidt, P., Roschewski, P., Choi, K., Auty, W., Hebbler, S., Blank, R., et al.
(2005). Policymakers' guide to growth models for school accountability: How do
accountability models differ? Washington, D.C.: Council of Chief State School
Officers. Retrieved July 29, 2012, from http://www.ccsso.org/Resources/
Publications/Policymakers%E2%80%99_Guide_to_Growth_Models_for_
School_Accountability_How_Do_Accountability_Models_Differ.html
Guskey, T. R. (1998). The age of our accountability. Journal of Staff Development, 19(4),
36-44.
Haertel, E. H., & Herman, J. L. (2005). A historical perspective on validity arguments for
accountability testing. Yearbook of the National Society for the Study of Education, 104(2),
1-34.
Hanushek, E. A. (1992). The trade-off between child quantity and quality. Journal of
Political Economy, 100, 84–117.
Harvard Law Review (1965). The civil rights act of 1964. Harvard Law Review, 78(3), 684-
696.
Haycock, K. (1998). Good teaching matters... A lot. Thinking K-16, 3(2), 3-14.
Hocevar, D. (2010). Can state test data be used by elementary school principals to make teacher
level and grade-level instructional decisions? Unpublished Paper, University of Southern
California.
Hocevar, D., Brown, R., & Tate, K. (2008). Leveled assessment modeling project. Unpublished
manuscript, University of Southern California.
Holme, J. (2002). Buying homes, buying schools: School choice and the social construction of
school quality. Harvard Education Review, 72(2), 177-205.
Hong, Y. (2010). A comparison among major value-added models: A general model
approach.
Horn, C. (2005). Standardized assessments and the flow of students into the college
admission pool. Educational Policy, 19(2), 331-348.
Horwitz, R. A. (1979). Psychological effects of the "open classroom". Review of Educational
Research, 49(1), 71-85.
Hughes, C., & Bell, D. (2011). Underpinning the STEM agenda through technological
textiles? An exploration of design technology teachers' attitudes. Design and
Technology Education, 16(1), 53-61.
Hursh, D. (2007). Assessing 'no child left behind' and the rise of neoliberal education policies.
American Educational Research Journal, 44(3), 493-518.
Jennings, J. F. (1987). The sputnik of the eighties. The Phi Delta Kappan, 69(2), 104-109.
Jennings, J., & Rentner, D. S. (2006). Ten big effects of the no child left behind act on public
schools. The Phi Delta Kappan, 88(2), 110-113.
Johanningmeier, E. V. (2010). "A nation at risk" and "sputnik": Compared and
reconsidered. American Educational History Journal, 37(2), 347-365.
Johnson, J. A. (1969). Introduction to the foundations of American education.
Boston: Allyn and Bacon.
Johnson, J. A., Musial, D. L., Hall, G. E., Gollnick, D. M., and Dupuis, V. L. (2005).
Introduction to the Foundations of American Education (13th Edition). Allyn & Bacon.
Kane, T. J., Rockoff, J. E., & Staiger, D. O. (2008). What does certification tell us about teacher
effectiveness? evidence from New York City. Economics of Education Review, 27(6), 615-
631.
Kantor, H. (1991). Education, social reform, and the state: ESEA and federal education
policy in the 1960s. American Journal of Education, 100(1), 47-83.
Katz, M. B. (1971). Class, bureaucracy, and schools: The illusion of educational change in
America. New York: Praeger Publishers.
Kelly, A. V. (2004). The curriculum: theory and practice. 5th edition SAGE Publications,
Customer Care.
Kozol, J. (1991). Savage inequalities: Children in America's schools. Ann Arbor, MI:
Crown Publishing.
Kennedy, I. (2006). The Sputnik crisis and America’s response. University of Central Florida.
Kuenzi, J., Matthews, M., & Mangan, B. (2006). Science, Technology, Engineering, and
Mathematics (STEM) Education Issues and Legislative Options. Congressional Research
Report. Washington, DC: Congressional Research Service.
Kurpius, S. E., & Stafford, M. E. (2006). Testing and measurement. A user-friendly guide. Sage
Publications, Inc.
Lachat, M.A. (1999). Standards, equity, and cultural diversity. Providence, RI: LAB at
Brown University.
Lauer, P.A., Snow, D., Martin-Glenn, M., Van Buhler, R.J., Stoutemyer, K. & Snow-
Renner, R. (2005). The influence of standards on K – 12 teaching and student
learning: A research synthesis. Regional Educational Laboratory (Contract #ED-
01-CO-0006). McREL.
Lee, J., & Reeves, T. (2012). Revisiting the impact of NCLB high-stakes school accountability,
capacity, and resources: State NAEP 1990–2009 reading and math achievement gaps and
trends. Educational Evaluation and Policy Analysis, 34(2), 209-231.
Leland, C. H., & Kasten, W. C. (2002). Literacy education for the 21st century: It's time to close
the factory. Reading and Writing Quarterly: Overcoming Learning Difficulties, 18(1), 5-
15.
Leonard, W. H., & Penick, J. E. (2005). Assessment of standards-based biology teaching. The
American Biology Teacher, 67(2), 73-76.
Levy, A. B. (2011). Ready, set, algebra? The impact of mandating Algebra I by grade eight.
Los Angeles: University of Southern California.
Levy, A.B. & Hocevar, D. (2013). The mediating effect of early Algebra on the relationship
between SES and success in Algebra I and Algebra II in California: A Path Analytic
Model. Manuscript in preparation. Los Angeles, California: University of Southern
California.
Linn, R. L., (2005). Fixing the NCLB accountability system. Los Angeles: University of
California, National Center for Research on Evaluation, Standards, and Student
Testing.
Lissitz, R., Doran, H., Schafer, W., & Willhoft, J. (2006). Growth modeling, value added
modeling and linking: An introduction. In R. W. Lissitz (Ed.), Longitudinal and
Value-Added Models of Student Performance (pp. 1-46). Maple Grove, MN: JAM
Press.
Lissitz, R. W., & Samuelsen, K. (2007). A suggested change in terminology and emphasis
regarding validity and education. Educational Researcher, 36(8), 437- 448.
Maleyko, G. (2011). The impact of no child left behind (NCLB) on school achievement and
accountability.
Marzano, R. J. (2003). What works in schools. Alexandria, VA: ASCD.
Mayo, M. J. (2009). Video games: A route to large-scale STEM education? Science
(New York, N.Y.), 323(5910), 79-82.
McCaffery, D.F., Koretz, D.M., Lockwood, J.R., & Hamilton, L.S. (2003). Evaluating
value-added models for teacher accountability. Santa Monica, CA: Rand Corporation.
Miller, S.P. & Mercer, C.D. (1997). Educational Aspects of Mathematics Disabilities.
Journal of Learning Disabilities, 30(1), 47-56.
Moore, R. (2000). For knowledge: Tradition, progressivism and progress in education-
reconstructing the curriculum debate. Cambridge Journal of Education, 30(1), 17-36.
Morgan, K. (1974). Socialization, social models, and the open education movement: Some
philosophical considerations. Studies in Philosophy and Education, 8(4), 278-314.
Mullens, J. E., Leighton, M. S., Laguarda, K. G., & O'Brien, E. (1996). Student Learning,
Teaching Quality, and Professional Development: Theoretical Linkages, Current
Measurement, and Recommendations for Future Data Collection. Working Paper Series.
District of Columbia, Policy Studies Associates, Inc., Washington, DC: 120.
National Science Board Commission of Pre-College Education in Mathematics, Science, and
Technology. (1983). Educating Americans for the 21st Century: A plan of action for
improving mathematics, science, and technology education for all American elementary
and secondary students so that their achievement is the best in the world by 1995.
Washington, DC: National Science Foundation.
National Council on Education Standards and Testing, (January 24, 1992). Raising
Standards for American Education: A Report to Congress, the Secretary of
Education, the National Education Goals Panel, and the American People,
Government Printing Office.
Nelson-Barber, S. (1999). A better education for every child: The dilemma for teachers of
culturally and linguistically diverse students. Including Culturally and Linguistically
Diverse Students in Standards-Based Reform: A report on McREL’s Diversity
Roundtable.
Nichols, D. J. (2005). Brown v. Board of education and the no child left behind act:
competing ideologies. Brigham Young University Education and Law Journal, 2005,
151-261.
Noddings, N., & Enright, D. S. (1983). The promise of open education. Theory into
Practice, 22(3), 182-189.
Nye, B., Hedges, L., & Konstantopoulos, S. (2004). How large are teacher effects?
Educational Evaluation and Policy Analysis, 26(3), 237-257.
O'Brien, R. H. (2010). The effects of NCLB on student performance in Virginia and New York
City. ProQuest, UMI Dissertations Publishing.
Oakes, J. (1985). Keeping track: How schools structure inequality. New Haven: Yale
University Press.
Oakes, J. (1986). Keeping track, part 1: The policy and practice of curriculum inequality. Phi
Delta Kappan, 68(1), 12-17.
Oakes, J. (1990). Multiplying inequalities: The effects of race, social class, and tracking
on opportunities to learn mathematics and science. Santa Monica, CA: Rand Corporation.
Oakes, J. (1992). Can tracking research inform practice? Technical, normative, and political
considerations. Educational Researcher, 21, 12-21.
Oakes, J. (1995). Opportunity to learn: Can standards-based reform be equity-based reform? In
I. M. Carl (Ed.), Prospects for school mathematics (pp. 78-98). Reston, VA: The National
Council of Teachers of Mathematics.
Oakes, J., & Guiton, G. (1995). Matchmaking: The dynamics of high school tracking
decisions. American Educational Research Journal, 32, 3-33.
Oakes, J. (2005). Keeping track: How schools structure inequality. New Haven, Conn.;
London: Yale University Press.
Orfield, G., & Yun, J. T. (1999). Resegregation in American schools. Cambridge, MA: The Civil
Rights Project, Harvard University.
Pedretti, E., & Hodson, D. (1995). From rhetoric to action: Implementing STS education
through action research. Journal of Research in Science Teaching, 32(5), 463-485.
Pitt, J. (2009). Blurring the boundaries: STEM education and education for sustainable
development. Design and Technology Education: An International Journal, 14(1), 37-48.
Ravitch, D. (1981). The meaning of the new Coleman report. The Phi Delta Kappan, 62(10),
718-720.
Ravitch, D. (1993). Launching a revolution in standards and assessments. The Phi Delta
Kappan, 74(10), 767-772.
Ravitch, D. (1995). National standards in American education: A citizen’s guide. Washington,
DC: Brookings Institution Press.
Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic
achievement. Econometrica, 73(2), 417-458.
Rogers, G. (2005). Pre-engineering’s place in technology education and its effect on
technological literacy as perceived by technology education teachers. Journal of
Industrial Teacher Education, 41(3), 6-22.
Rogers, K. B. (1998). Using current research to make "good" decisions about grouping. NASSP
Bulletin, 82(595), 38-46.
Rush, L. S. & Scherff, L. (2012). NCLB 10 years later. English Education, 44(2), 91.
Sainsbury, D. (2007). The race to the top: A review of Government's science and innovation
policies. London: HMSO. Retrieved April 7, 2012, from
http://www.hm-treasury.gov.uk/sainsbury_index.htm
Sanders, W. L., & Horn, S. P. (1994). The Tennessee value-added assessment system
(TVAAS): Mixed-model methodology in educational assessment. Journal of
Personnel Evaluation in Education, 8(3), 299-311.
Sanders, W. & Horn, S. (1998). Research findings from the Tennessee value-added
assessment system (TVAAS) database: Implications for educational evaluation and
research. Journal of Personnel Evaluation in Education, 12(3), 247 – 256.
Sanders, W. L. (2000). Value-added assessment from student achievement data: Opportunities
and hurdles. Journal of Personnel Evaluation in Education, 14(4), 329-339.
Sanders, W. L., & Rivers, J. C. (1996). Cumulative and residual effects of teachers on future
student academic achievement. University of Tennessee Value-Added Research and
Assessment Center.
Sanders, W. L., Saxton, A. M., & Horn, S. P. (1997). The Tennessee value-added Assessment
system: A quantitative outcomes-based approach to student assessment. In J. Millman
(Ed.), Grading teachers, grading schools. Is student achievement a valid evaluation
measure? (pp. 137-162). Thousand Oaks, CA: Corwin Press.
Schiro, M.S. (2008). Curriculum theory: Conflicting visions and enduring concerns. Los
Angeles: Sage.
Schwartz, R. B., Robinson, M. A., Kirst, M. W., & Kirp, D. L. (2000). Goals 2000 and the
standards movement. Brookings Papers on Education Policy, 2000(3), 173-214.
Sherman, S. C. (2009). Haven't we seen this before?: Sustaining a vision in teacher
education for progressive teaching practice. Teacher Education Quarterly, 36(4), 41-60.
Silver, E. A. (1997). On my mind: "Algebra for all": Increasing students' access to algebraic
ideas, not just algebra courses. Mathematics Teaching in the Middle School, 2, 204-207.
Sloan, F. A. (1974). Open education American style. Peabody Journal of Education, 51(2),
140-146.
Stephan, W. G. (1980). A brief overview of school desegregation. In W. G. Stephan &
J. R. Feagin (Eds.), School desegregation (pp. 3-23). New York: Plenum Press.
Stevens, F.I. (1996). Closing the achievement gap: Opportunity to learn, standards, and
assessment. In B. Williams (Ed.), Closing the achievement gap: A vision for
changing beliefs and practices (pp. 77–95). Alexandria, VA: Association for
Supervision and Curriculum Development.
Stotts, J. L. (2011). The STEM initiative: A multiple case study of mission-driven
leadership in two schools implementing STEM in Texas: Successes, obstacles, and
lessons learned.
Subotnik, R. F., Tai, R. H., Rickoff, R., & Almarode, J. (2010). Specialized public high schools
of science, mathematics, and technology and the STEM pipeline: What do we know now
and what will we know in 5 years? Roeper Review, 32(1), 7-16.
Tate, W. F., Jones, B. D., Thorne-Wallington, E., & Hogrebe, M. C. (2012). Science and the city:
Thinking geospatially about opportunity to learn. Urban Education, 47(2), 399-433.
The White House. (2009). Retrieved April 4, 2012, from http://www.whitehouse.gov/the-press-office/president-obama-launches-educate-innovate-campaign-excellence-science-technology-en
Thomas, N. C. (1983). The development of federal activism in education: A contemporary
perspective. Education and Urban Society, 15(3), 271-290.
Thompson, S. (2001). The authentic standards movement and its evil twin. The Phi Delta
Kappan, 82(5), 358-362.
Tice, T.N. (1998). Detracking schools. The Education Digest, 63(7), 44.
Tyack, D. B. (1974). The one best system: A history of American urban education.
Cambridge, Mass: Harvard University Press.
Tyler, R. W. (1949). Basic principles of curriculum and instruction. Chicago: University of
Chicago Press.
Porter, A. C., Linn, R. L., & Trimble, C. S. (2005). The effects of state decisions about NCLB
adequate yearly progress targets. Educational Measurement: Issues and Practice, 24(4),
32-39.
University of Florida. (2000a). Prototype analysis of school effects. Value-Added Research
Consortium.
University of Florida. (2000b). Measuring gains in student achievement: A feasibility study.
Value-Added Research Consortium.
U.S. Department of Education. (2006). A test of leadership: Charting the future of U.S. higher
education. Washington, DC: Author.
Van Houtte, M., & Stevens, P. A. J. (2009). Study involvement of academic and vocational
students: Does between-school tracking sharpen the difference? American Educational
Research Journal, 46(4), 943-973.
Veith, V. (2013). Algebra for all and its relationship to English learner's opportunity-to-learn
and algebra I success rates. Los Angeles: University of Southern California.
Wells, A. S., & Oakes, J. (1996). Potential pitfalls of systemic reform: Early lessons from
research on detracking. Sociology of Education, 69, 135-143.
Wenglinsky, H. (2000). How teaching matters: Bringing the classroom back into discussions of
teacher quality. Princeton, NJ: Educational Testing Service. Available at
http://www.ets.org/research/pic
Wicklein, R. (2006). 5 Good reasons for engineering design as the focus for technology
education. The Technology Teacher, 65(7), 25-29.
Williams, J. (2011). STEM Education: Proceed with Caution. Design and technology
Education, 16(1), 10.
Wise, A. E. (1982). Legislated Learning: The Bureaucratization of the American Classroom
(2nd ed.). Berkeley: University of California Press.
Wise, A. E., & Leibbrand, J. A. (2001). Standards in the new millennium: Where we are, where
we're headed. Journal of Teacher Education, 52(3), 244-255.
Wolf, D.P., & Reardon, S.F. (1993, March). Equity in the design of performance assessments:
A handle to wind up the tongue with? Paper presented at the Ford Foundation National
Symposium on Equity and Education Testing and Assessment, Washington, DC.
Wright, S. P., Horn, S. P., & Sanders, W. L. (1997). Teacher and classroom context effects
on student achievement: Implications for teacher evaluation. Journal of Personnel
Evaluation in Education, 11, 57-67.
Appendix
Normal Curve Equivalents and Percentiles
[Figure: Normal curve equivalents and percentiles conversion chart. Retrieved April 10, 2013, from http://www.dlenm.org/Figures/A.1-%2828%29_FINAL_Percentiles_NCEs_2009-08-23%20copy.pdf]
Abstract
The No Child Left Behind Act of 2001 has put many schools under great pressure to meet its high demands. This quantitative study examined the effects of NCLB on students' opportunity to learn (OTL) and subject-level success (SS) in 9th-, 10th-, and 11th-grade math coursework (Algebra 1, Geometry, Algebra 2, and Summative Math) from 2004 to 2012. California Standards Test (CST) data from the California Department of Education (CDE) website were used to calculate the opportunity-to-learn and success rates. The unadjusted OTL and SS scores increased greatly between 2004 and 2012: the OTL gains were very large, ranging from 26 to 49 percent, and the success gains were likewise very large, ranging from 39 to 70 percent.

Numerous research studies have documented that raw achievement test scores cannot be used for accountability purposes because they are highly correlated with socioeconomic status. Input-adjusted scores, introduced by Hocevar and his colleagues at the University of Southern California, are a promising alternative to value-added measurements. In marked contrast to value-added measurements, the 2011 composite input-adjusted scores were internally consistent (alpha = .79 for OTL and .87 for SS) and stable (r = .90 for OTL and .85 for SS). Potential uses for input-adjusted scores in practice are discussed.