AN ANALYSIS OF THE IMPACT OF THE TOTAL EDUCATIONAL
SUPPORT SYSTEM DIRECT-INSTRUCTION MODEL ON THE
CALIFORNIA STANDARDS TEST PERFORMANCE OF ENGLISH
LANGUAGE LEARNERS AT EXPERIMENTAL ELEMENTARY SCHOOL
by
Terry Leon Nichols
____________________________________________
A Dissertation Presented to the
FACULTY OF THE ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
August 2009
Copyright 2009 Terry Leon Nichols
DEDICATION
This work is dedicated to my beautiful wife, Catherine; my son, Benjamin;
and Steven. Ben, you will always be my baby boy, and the joy you provide is a
blessing. Steven, you were there when I started, and you love the Trojans, so
you will be the first person to whom I will say “Fight On!”
And finally, my wife and soul mate, m’lady Catherine. M’Lady, I found the
love of my life, and those beautiful emeralds that look deep into your soul always
remind me of this. Your never-ending love, patience, and guidance have been the
essence of my strength and willingness to go on. “terry”
ACKNOWLEDGMENTS
I want to thank Dr. Dennis Hocevar, my chair, for his continuous support
and encouragement. Thank you, Dr. “D,” for your ability to make difficult
concepts understandable—the true sign of a great teacher. Your guidance was
irreplaceable. I have learned that numbers do have “heart” and your heart is in
your work.
Thanks, Dr. Margaret Reed, for your assistance during Dr. Hocevar's
absence. Your kindness in a time of turmoil was the calm I needed. Thanks also
go to Dr. Dean Conklin for your support of my work at USC and my work at
Duarte.
Judi Gutierrez deserves special recognition for her comment: “Hey, you
might want to look into this USC cohort in Monrovia.” Thanks, Judi; that
information was the beginning of this journey.
Additionally, I wish to thank the administration and staff at
Experimental Elementary School. The staff and administration have been
scrutinized more than those of any other school in the district. You are resilient and committed
to all kids in your school. Without your hard work this study would not have
happened.
I also want to thank my family, Catherine, Steven, Jennifer, Joshua, and
Benjamin, for all their support and understanding. You have graciously
overlooked the time that I could not spend with you, but you never let me slip too
far from your hearts or minds.
Thanks to my mother and father, and yes, Dad, I do have a real job! The
life-long lessons that you provided concerning goals, values, and perseverance
have helped me obtain this degree in ways that you will never know.
TABLE OF CONTENTS
DEDICATION .................................................................................................. ii
ACKNOWLEDGMENTS .............................................................. iii
LIST OF TABLES ......................................................................................... viii
LIST OF FIGURES .......................................................................................... x
ABSTRACT ..................................................................................................... xi
CHAPTER 1 PROBLEM, ANALYSIS, AND SOLUTION:
ENGLISH LEARNERS’ PERFORMANCE ON THE
CALIFORNIA STANDARDS TEST ................................................... 1
Problem Identification ........................................................................... 3
Problem Analysis and Interpretation ..................................................... 9
School Level Factors ........................................................................... 10
Curriculum and Instruction ..................................................... 10
Instructional Leadership .......................................................... 13
Teacher Level Factors ......................................................................... 15
Research-Based Strategies ....................................... 15
Teacher Expectations .............................................................. 20
Problem Solution ................................................................................. 23
Implementation of Total Educational Support
System ..................................................................................... 23
Knowledge Gaps ..................................................................... 24
Motivational Barriers .............................................................. 25
Organizational Barriers ........................................................... 26
Purpose, Design, and Utility ............................................................... 28
Purpose .................................................................................... 28
Design ..................................................................................... 28
Utility ...................................................................................... 31
CHAPTER 2 REVIEW OF THE LITERATURE ......................................... 33
Direct Instruction ................................................................................ 36
English Language Learner Achievement ................................ 39
Effect of Direct Instruction on the Student
Achievement of ELL ............................................................... 42
Other View Points ................................................................... 44
Summary ............................................................................................. 46
CHAPTER 3 SUMMARY OF RESEARCH DESIGN ................................. 49
Quantitative Evaluation Design .......................................................... 50
Limitations .............................................................................. 53
Qualitative Design ............................................................................... 54
Intervention ............................................................................. 55
Participants and Setting (Experimental Group) ...................... 56
Benchmark School/Comparison Group .............................................. 57
Instrumentation ................................................................................... 58
Achievement ........................................................................... 58
Interviews ................................................................................ 62
Observations ............................................................................ 63
Summative Analysis ........................................................................... 64
CHAPTER 4 RESULTS ................................................................................ 66
Pre/Post Dependent Groups’ Design ................................................... 67
Non-equivalent Comparison Group Design ........................................ 68
Pre/post Dependent Groups’ Results .................................................. 69
Statistical Significance ............................................................ 69
Practical Significance .............................................................. 72
Comparison School Results ................................................................ 77
API Comparisons ................................................................................ 78
Adequate Yearly Progress Comparison .................................. 81
CHAPTER 5 SUMMARY, DISCUSSION, AND
RECOMMENDATIONS .................................................................... 86
Overview ............................................................................................. 86
Purpose and Method ............................................................................ 86
Intervention ......................................................................................... 88
Summary of Findings—Experimental Elementary School ................. 88
Quantitative Findings .............................................................. 89
Statistical Significance ............................................................ 92
Practical Significance .............................................................. 92
Qualitative Findings ................................................................ 95
Interviews ................................................................................ 95
Observations ............................................................................ 99
The Third Grade Dip ............................................................. 100
Summary of Findings of Experimental School and
Benchmark School ............................................................................ 106
Academic Performance Index ............................................... 106
Adequate Yearly Progress ..................................................... 107
Implications for Practice at Experimental Elementary
School ................................................................................................ 108
Implications for Research ................................................................. 114
Gather Quantitative and Qualitative Data ......................................... 119
Standardize Instructional Delivery Through Trainer-of-
trainer Models ................................................................................... 121
“Focused Instruction” Model ............................................................ 123
Increase the Leadership Capacity of the Faculty and Staff ............... 125
Limitations ........................................................................................ 126
External Validity ................................................................... 126
Internal Validity .................................................................... 127
Conclusions ....................................................................................... 129
REFERENCES .............................................................................................. 133
APPENDICES
APPENDIX A. TOTAL EDUCATION SYSTEMS SUPPORT
LESSON TEMPLATE ....................................................... 140
APPENDIX B. TOTAL EDUCATION SYSTEMS SUPPORT
DIRECT INSTRUCTION MODEL ................................ 141
APPENDIX C. DEFINITION OF SAFE HARBOR .................................. 142
LIST OF TABLES
Table 1. School and District Enrollment ....................................................... 4
Table 2. Staff Demographics ......................................................................... 5
Table 3. Percent of All Students Scoring At and Above
Proficient in ELA-CST, District/the Experimental
School .............................................................................................. 6
Table 4. Percent ELL Scoring At and Above Proficient in ELA-CST
District/the Experimental School .................................................... 7
Table 5. Lesson Design Models .................................................................. 37
Table 6. Selection Criteria for Experimental and Control Groups,
2007 School Year ......................................................................... 53
Table 7. CST Quintiles and Quality Values ................................................ 59
Table 8. Pre- Versus Post-Intervention CST ELA Performance Band
Differences for ELL ...................................................................... 70
Table 9. Pre- Versus Post-Intervention CST ELA Performance Band
Differences School-wide ............................................................... 71
Table 10. Pre- Versus Post-Intervention ELL CST ELA Performance
Band Differences: Practical Significance..................................... 72
Table 11. Pre- Versus Post-Intervention ELL CST ELA
Percent Basic and Above .............................................................. 75
Table 12. Pre- Versus Post–Intervention ELL CST ELA
Percent Proficient and Above........................................................ 76
Table 13. Comparison of API Base Scores of Experimental
and Benchmark School from 1999–2008 ...................................... 78
Table 14. API School-Wide Comparisons .................................................... 79
Table 15. API English Language Learner Subgroup Comparison ................ 80
Table 16. Results of the API at Experimental School Utilizing
ELL Results from Students Who Have CST Results
for both the 2007 and 2008 CST in ELA ...................................... 81
Table 17. AYP English Language Arts for the Experimental and
Comparison English Language Learner Students
Percent at or Above Proficient ...................................................... 84
Table 18. Pre- Versus Post-Intervention CST ELA Performance
Band Differences for English Language Learners ...................... 102
Table 19. Pre- Versus Post-Intervention ELL CST ELA Percent
Basic and Above ......................................................................... 102
Table 20. Pre- Versus Post-Intervention ELL CST ELA Percent
Proficient and Above................................................................... 103
Table 21. Example of Safe Harbor ............................................................ 144
LIST OF FIGURES
1. Control Groups’ Design ............................................................................. 51
2. Experimental and Control Groups ............................................................. 51
ABSTRACT
The purpose of this case study was to analyze the impact of direct
instruction on the performance of 151 elementary English Language Learners.
Total Educational Support System (TESS) was the direct instruction model that
was selected as the intervention. Data were drawn from the English Language
Arts portion of the California Standards Test for the 2007 and 2008
administrations. A mixed-methods design was utilized, with a primary focus
on the quantitative portion to determine both statistical and practical significance
using a dependent group design and a non-equivalent control group benchmark
design. Qualitative information was gathered through informal interviews and
observations.
The study found that the 151 English Language Learners demonstrated
statistically and practically significant gains at the school-wide level and at grades
4 and 6. The gains were exemplified by an improved API and an increase in the
number of English Language Learner students who improved their
performance-band placement. The experimental and benchmark groups were
compared on post-test data, with the experimental school demonstrating a greater
raw gain in the percent of English Language Learners scoring proficient and
above, and basic and above, on the California Standards Test English Language
Arts results. The study concludes with a discussion of the theoretical and
practical implications and four site recommendations.
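The pre/post dependent-groups comparison described above can be sketched in a few lines. The following is a minimal illustration only, not the study's actual method or data: the performance bands (coded 1 = Far Below Basic through 5 = Advanced) and the six-student sample are hypothetical, and the paired t statistic stands in for whatever specific test the study employed.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre/post scores from the same students
    (a dependent-groups design: each student serves as their own control)."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

def band_gain(pre_bands, post_bands):
    """Practical significance: share of students who moved up at least
    one performance band between the two test administrations."""
    moved_up = sum(1 for a, b in zip(pre_bands, post_bands) if b > a)
    return moved_up / len(pre_bands)

# Hypothetical CST ELA performance bands for six students, pre vs. post
pre = [1, 2, 2, 3, 1, 2]
post = [2, 2, 3, 4, 2, 3]
print(round(paired_t(pre, post), 2), round(band_gain(pre, post), 2))  # 5.0 0.83
```

In practice the statistical test would be run on scale scores or band frequencies and accompanied by an effect-size measure; the sketch only shows the shape of a dependent-groups comparison.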
CHAPTER 1
PROBLEM, ANALYSIS, AND SOLUTION: ENGLISH LEARNERS’
PERFORMANCE ON THE CALIFORNIA STANDARDS TEST
“We must teach them and we must teach them well. Their lives depend on
it” (Chall, 2002, p. vii). This statement addresses the importance of teachers
and their ability to motivate, inspire, and engage students. A student assigned to a
high-performing teacher will, within 3 years, score significantly higher than a
peer of the same ability assigned to a low-performing teacher (Marzano, 2003).
This difference can be as great as 50 points, and it is compounded when those
same students are Hispanic English language learners (ELL) of low socio-economic
status whose parents have low education levels. The schools that this group of
children attend are more likely to have low-performing and unqualified teachers.
This information establishes an urgency for a method or model that allows
teachers to improve their craft and become practitioners of best practice. There
are some schools that are making this happen (Reeves, 2003).
According to the 2000 census, 47 million people, or 18% of the population
of the United States, speak a language other than English at home. By 2030, this
figure is projected to increase to 40%. California is home to over 40% of the
nation's Spanish-speaking ELL population. The result is an achievement gap for
ELL, as identified by Jack O'Connell in his state of education address. But there
are some schools and districts in California that would be considered islands of
excellence (Togneri & Anderson, 2003). These schools are making a difference,
closing the achievement gap for ELL and overcoming all odds.
The Experimental School had many of the identified characteristics of
low-performing schools, including low socio-economic status, low parent
education, and an ELL population over 50%. But this school had consistently
decreased the number of students scoring Far Below Basic and Below Basic as
measured by the California accountability system instituted in 1999. The school
was considered low performing according to the California State Accountability
System and the No Child Left Behind (United States Department of Education,
2002) criteria established in 2001. The school was in year four of program
improvement and was likely to advance to year five if it followed the trends for
schools across the state of California.
The Experimental School had been a participant in a multitude of
programs created to address the lack of progress of ELL as measured by the
California Standards Test (CST). Programs and changes must be effective,
efficient, and equitable (Clark & Estes, 2002). The ELL had demonstrated
progress, but the programs that had been implemented at the Experimental School
had not been analyzed to determine whether they contributed to the gains in
student achievement that had occurred at the Experimental School. One specific
program that had been the focus over the past 3 years at the Experimental School
was the Total Educational Support System (TESS) (Travernetti & Rodriguez,
2006) model for direct instruction. TESS had been implemented at the
Experimental School campus to address the low performance of all students with
a laser-like focus on the ELL (Reeves, 2006). This direct instruction initiative
was in the third year and there had been an increase in the number of ELL who
were scoring proficient and advanced on the CST in English Language Arts
(ELA). The focus of this study was to analyze the effect of the TESS model of
direct instruction on the performance of ELL in English Language Arts at the
Experimental School as measured by the CST.
Problem Identification
The school district had a student population of approximately 4,000
students. The district was an urban school district on the outskirts of one of the
world’s most populated areas, Los Angeles, California. The district consisted of
five elementary schools, one intermediate school, one continuation school, and a
comprehensive high school. The school district’s enrollment was in steady
decline with an average of 100 students per year exiting for a variety of reasons.
The demographics of the district were similar to those of the
Southern California region: 68% of the students were Hispanic/Latino; 10%,
Caucasian; 8.4%, African American; 6.8%, Asian; and 6.8%, other. The
student population of the Experimental School differed from the district's. At the
Experimental School, 87.5% of the students were Hispanic/Latino; 8.7%, African
American; less than 1%, Caucasian; and 3.2%, other. The differences at the
Experimental School were also reflected in the number and percentage of students
qualifying for free and reduced lunch, 472 (95.2%), and the number and
percentage of ELL, 297 (59.9%) (see Table 1).
Table 1
School and District Enrollment

                                 Experimental School    District Total
Enrollment                       496                    4,366
African American, not Hispanic   43 (8.7%)              366 (8.4%)
Hispanic or Latino               434 (87.5%)            2,954 (67.7%)
White, not Hispanic              3 (0.6%)               438 (10.0%)
Other                            16 (3.2%)              298 (6.8%)
Asian                            0 (0%)                 296 (6.8%)
ELL                              297 (59.9%)            951 (21.8%)
Free and Reduced Lunch           472 (95.2%)            2,674 (61.0%)
The district breakdown of staff differed from the student population. The
staff was 70% Caucasian, 10% African American, and 15% Hispanic or Latino,
with a variety of other ethnicities and races rounding out the remainder. The
Experimental School's staff breakdown was similar to the district's in this regard.
Ninety-nine percent of the teaching staff was fully credentialed and identified as
highly qualified as determined by the No Child Left Behind Act (United States
Department of Education, 2002).
Table 2
Staff Demographics

                                 Experimental School    District Total
Number of Staff                  25                     211
African American, not Hispanic   1 (4%)                 21 (10%)
Hispanic or Latino               5 (20%)                31 (15%)
White, not Hispanic              18 (72%)               148 (70%)
Multiple Responses               1 (4%)                 11 (5%)
The district had demonstrated Adequate Yearly Progress (AYP) and was
not identified as a Program Improvement district. Two schools had been
identified for program improvement. One school was frozen in year one as a
result of demonstrating AYP for 2007. The Experimental School was in year four
of program improvement. The number of students scoring proficient and above in
English Language Arts on the CST had increased district-wide; however, the
students at the Experimental School, including the ELL, were below the district
average.
The district’s Academic Performance Index (API), California’s
accountability index, had increased 79 points since 2002 and stood at 705. The
Experimental School had also increased its API since 1999, from 457 in 1999 to
679 in 2007. This score was below the median for elementary schools statewide.
Table 3 shows the percentage of students who were scoring at the proficient and
advanced level.
Table 3
Percent of All Students Scoring At and Above Proficient in ELA-CST,
District/the Experimental School
Grade 2004 2005 2006 2007 2004-2007
2 34/17 40/20 44/28 47/34 +13/17
3 17/5 27/14 31/19 33/19 +16/14
4 33/20 38/23 42/33 50/43 +17/13
5 32/25 31/17 33/17 38/20 +6/-5
6 21/18 30/27 36/39 33/18 +12/0
The Experimental School underperformed in all grade levels except grade
2. In grades 2 through 4 there had been an increase in the number of students who
were performing at the proficient and advanced level. The 6th-grade students had
shown progress during the years except for one year. The 5th-grade students had
consistently shown less growth across the district and had actually decreased in
the number of students who were proficient and advanced.
Table 4 illustrates ELL progress for the district, grades 2-6, against the
NCLB criteria for schools identified for program improvement. The number of
ELL scoring proficient and advanced in English Language Arts district-wide had
increased since 2004. The number of ELL at the proficient and advanced level
had also increased at the Experimental School, but at lower percentages than the
district. The number of ELL scoring proficient or advanced had increased
minimally over the years, and both the district and the Experimental School still
fell far below the target of 100% proficiency in 2014.
Table 4
Percent of ELL Scoring At and Above Proficient in ELA-CST
District/Experimental School
Grade 2004 2005 2006 2007 2004-2007
2 17/2 25/17 24/20 34/32 +17/30
3 5/2 6/9 20/15 17/12 +12/10
4 11/7 12/6 28/28 39/42 +28/35
5 6/3 2/0 7/0 15/13 +9/10
6 5/8 9/2 5/4 7/2 +2/-6
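The 2004-2007 column in Tables 3 and 4 is the last-year percentage minus the first-year percentage, computed separately for the district and the school. A quick sketch reproducing two rows of Table 4:

```python
# Table 4 values: grade -> {year: (district %, experimental school %)}
ell_proficient = {
    2: {2004: (17, 2), 2007: (34, 32)},
    6: {2004: (5, 8), 2007: (7, 2)},
}

def change_2004_2007(grade):
    """Return the (district, school) change in percent proficient and above."""
    d0, s0 = ell_proficient[grade][2004]
    d1, s1 = ell_proficient[grade][2007]
    return d1 - d0, s1 - s0

print(change_2004_2007(2))  # (17, 30), matching the table's +17/30
print(change_2004_2007(6))  # (2, -6), matching the table's +2/-6
```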
Togneri and Anderson (2003) identified seven principal findings that
districts should exhibit to improve student achievement. Several of these factors
were evident at the district level. The district demonstrated the courage to
acknowledge poor performance, embracing the Stockdale Paradox: “We will
prevail no matter what the circumstances but we must face the brutal facts”
(Collins, 2001, p. 86). These facts are identified through a detailed analysis of
the data. The district believes that in order to find solutions to poor performance,
they must spend time with data. They contract with an outside agency to analyze
and provide reports for administrators at the district and site level, disaggregated
by race, gender, grade, socio-economic status, and language fluency. The district
has also supported the implementation of the EduSoft Data Management system
The EduSoft software puts a workable structure in place for analyzing
data, which is essential for successful schools and districts (Bolman & Deal,
2003). EduSoft allows for the development of formative and summative
assessments. Common benchmark and formative assessments have been designed
by the teachers at grade levels 2–6. These assessments provide staff with the
detailed information needed to modify and improve instruction in Language Arts
for ELL. The data also provide for a more
comprehensive approach to the professional development needs of principals and
teachers in order to improve student achievement with a focus on the ELL
(MacIver & Farley, 2003).
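The subgroup reports described above amount to grouping assessment results by student characteristics and summarizing each group. Below is a minimal sketch of that kind of disaggregation; the records and field names are hypothetical and illustrative only (EduSoft's actual schema is not shown in this document):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical benchmark records; the field names are illustrative only
records = [
    {"grade": 4, "ell": True, "score": 312},
    {"grade": 4, "ell": False, "score": 355},
    {"grade": 4, "ell": True, "score": 338},
    {"grade": 5, "ell": True, "score": 301},
    {"grade": 5, "ell": False, "score": 349},
]

def disaggregate(rows, *fields):
    """Mean score for every combination of the requested student fields,
    mirroring the subgroup reports used for data analysis."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[f] for f in fields)].append(row["score"])
    return {key: round(mean(scores), 1) for key, scores in groups.items()}

print(disaggregate(records, "grade", "ell")[(4, True)])  # 325.0
```

The same pattern extends to any of the dimensions named above (race, gender, socio-economic status, language fluency) by passing different field names.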
All principals set goals that are Specific, Measurable, Attainable,
Results-oriented, and Time-bound (SMART) (DuFour, DuFour, Eaker, & Many, 2006).
Each performance goal contains specific actions for closing the achievement gap
for ELL. The goals are revisited during three follow-up data days with the
Assistant Superintendent of Instruction. Using a cycle of inquiry procedure, a
dialogue is held with the principal to evaluate progress and identify barriers and
needed support from the district office. This theory of action is then put in place,
modified, and refined for continuous improvement for the school and, especially,
for ELL (Agullard & Goughnour, 2006).
The ELL at the Experimental School were not meeting the annual
measurable objectives (AMO) set by the NCLB legislation of 2001. The ELL at
the Experimental School had met their objectives only one year in the last 6 years.
The 5th-grade students had never met the established AMO since its inception.
Problem Analysis and Interpretation
Marzano (2003) specifically identifies teacher-level factors, school-level
factors, and student-level factors as the three main focus points that affect
student achievement. For the purposes of this dissertation, the following were
addressed: school-level organizational factors (guaranteed and viable curriculum
and instructional leadership) and teacher-level factors (research-based strategies
and teacher expectations). These were analyzed from an organizational,
motivational, and knowledge perspective to determine possible barriers that could
contribute to these gaps in the achievement of ELL (Clark & Estes, 2002). This
dissertation does not directly address the student-level factors for the students
who attended the Experimental School.
School Level Factors
Curriculum and Instruction
A guaranteed and viable curriculum has the most impact on student
learning (Marzano, Waters, & McNulty, 2005). Students must be provided with
an opportunity to learn (OTL) the curriculum. In order to meet this obligation for
OTL, instructors and administrators must be cognizant of the fact that the
guaranteed and viable curriculum can be anything but guaranteed. Instruction
can vary from classroom to classroom. The variance in what was being taught at
the Experimental School three years ago was evident. The classrooms were
textbook driven, and most of the student work was commercially produced. In an
initial curriculum calibration by DATA WORKS, it was determined that over
60% of what was being taught in the classrooms was not grade-level appropriate
(Dr. Francisco Rodriguez, personal communication, October 2005). At the
Experimental School, there was no agreement about the guaranteed curriculum.
Marzano (Marzano et al., 2005) has divided the guaranteed and viable
curriculum into three detailed components: the intended curriculum, the
implemented curriculum, and the attained curriculum. The implementation of the
intended curriculum is paramount to student achievement (Marzano et al., 2005).
Decisions about, and a thorough understanding of, these components have a
direct effect on student achievement and students' opportunity to learn (Marzano,
2003).
With Marzano’s (Marzano et al., 2005) work being the catalyst, and the
present reality of the Experimental School’s identification as a program
improvement school in 2002, the staff began to analyze the current curriculum
through the lens of the intended, implemented, and attained curriculum.
Through this analysis came an understanding that, in order to succeed
with students at the Experimental School, the intended curriculum had to be a
collection of the mandated state and local standards and learning objectives.
The Experimental School conducted a curriculum calibration to determine the
implemented curriculum. And finally, the school looked at student results to
determine the attained curriculum or what students actually learned. Teacher
teams analyzed the intended curriculum represented by the state standards and
narrowed the intended curriculum to the essential learnings that would be
guaranteed at the Experimental School. The Experimental School’s narrowing of
this intended curriculum was characteristic of schools which break the ranks in
their ability to perform at a higher than predicted level (McREL, 2005). These
activities took place as a result of the program improvement status and urgency
for change.
An intended curriculum does not guarantee that it is implemented
(Marzano, 2003). Even with this type of collaborative effort, the implemented
curriculum—those standards and concepts that were actually addressed in the
Experimental School classrooms—varied. Studies conducted by Doyle (1992),
Stodolsky (1989), and Yoon, Burstein, and Gold (undated) came to the same
conclusion, that “teachers often make independent and idiosyncratic decisions
around implementation of the intended curriculum” (Marzano, 2003, p. 23).
During classroom observations at the Experimental School, it was
observed that the intended curriculum was not consistently implemented even
though teachers stated their commitment to implement with fidelity. It was
apparent that the teachers believed that the students were not adequately prepared
and that students’ prior knowledge was deficient. Teachers were especially
concerned about the ELL. It was assumed that students were behind or lacking.
Therefore, more time was spent on remediation of students, with less time
devoted to grade-level-appropriate, standards-based instruction. There was also
limited evidence of time spent specifically with ELL.
Pacing guides were provided by the district for the Open Court reading
series, but there was still evidence of inconsistency in implementation. The
teachers believed they were implementing the curriculum, but the other aspect of
the curriculum—the attained curriculum, what students actually learn—demonstrated a
definite gap in learning for all students, especially apparent with ELL, as
evidenced by Tables 3 and 4. There was not a deliberate and intentional process
for implementation of the agreed-upon curriculum (Roberts, Starkman, & Scales,
1999).
Instructional Leadership
The first step in school reform begins with the principal and leadership
(Marzano et al., 2005). “Leadership is a cultural practice” (Elmore, 2003, p. 5).
The principal must provide leadership to maintain a focus on student learning.
This focus must be systematic and inclusive of all departments (McREL, 2005).
A one standard deviation increase in teacher perception of principal leadership is
associated with a 10 percentile-point gain in school achievement (Marzano et al.,
2005). What were the teachers’ perceptions of the principal?
The Experimental School had teachers and certain grade levels that were
demonstrating some success. But these islands of excellence had not expanded to
the entire school (Togneri & Anderson, 2003). There did not seem to be a clear
focus of what needed to be done (Waters & Cameron, 2006). The Experimental
School had multiple variations of required action plans. They had a Single School
Plan for Student Achievement, as well as an action plan that was designed with
the assistance of a Local Assistance Intervention Team. These plans addressed all
of the previously mentioned components but with a key emphasis on the
instructional leadership. There seemed to be a lack of implementation of the
plans. At first glance, the lack of implementation of the various programs and
strategies seemed to fit in the realm of "first order change" (Waters &
Cameron, 2006, p. 65). Four years ago, conversations, interviews, and
classroom observations indicated that teachers viewed the principal as
basically a manager of a safe and orderly environment. The principal's main
focus was the safety of the school. No mention was made of the principal
changing instructional practices. In fact, the staff stated that new
principals were not allowed to change the culture of the school. The principal
was expected to accept the current reality and assimilate. The staff was not
satisfied with student results, especially for the ELL, but what else could
they do? This type of under-confidence was expressed by several teachers
(Clark & Estes, 2002).
This type of change would require a break with the past, or a second-order
change. The principal had to be the key to beginning the change in culture. A
different approach was needed for success, along with assistance in setting
the vision and direction for the school. Since teachers' perceptions of the
principal are so strongly correlated with student achievement, the principal's
skill in providing the vision was crucial (Marzano et al., 2005).
The principal cannot do the work alone. Instructional leadership should be
distributed across stakeholders (Togneri & Anderson, 2003). Four years ago the
leadership team did not extend past the office: the principal, literacy coach,
and the outreach consultant were the leadership team. There was little
opportunity for teachers to be included, which led to seemingly top-down
directions in regards to any type of change at the school. The principal was
continuously being dragged into the management, or first-order change, of the
school (Marzano et al., 2005). This type of focus took the principal away from
the area of greatest need—the achievement of ELL.
Teacher Level Factors
Research-Based Strategies
With the creation of a guaranteed and viable curriculum underway at the
Experimental School, the next step was to analyze the instructional strategies
being used to address the learning needs of the Experimental School student
population, especially ELL. Teachers at the Experimental School did not seem
to take responsibility for the lack of success of the ELL. This was evident 4
years ago in the lack of a consistent time for English Language Development
(ELD) instruction and the lack of specific language-acquisition objectives in
lesson plans. The Experimental School also lacked varied methods for
addressing the diverse needs of its students.
Since the focus of this paper was to analyze the achievement of ELL,
several classroom observations were performed using the Sheltered Instruction
Observation Protocol (SIOP), beginning with instructional rigor. The level of
questions and activities that students were being asked to complete was often at
the lowest cognitive levels (Anderson & Krathwohl, 2001). Some teachers
addressed their ELL using a tiered-questioning technique but it was not evident in
all classes (Hill & Flynn, 2006). All teachers were made aware of the language
level of their students but the knowledge of the students’ language level did not
consistently manifest itself in the modification of the question level in the
classroom. Teachers' level of questioning did not seem to be affected by the
students' language level. Most questions that were observed could be
classified at the remembering and understanding levels of Anderson and
Krathwohl (2001). Scaffolding was seen more as a
separate activity and not connected to the lesson. Teachers who addressed
prerequisite skills were not connecting the skill with the current lesson.
Workshop in Open Court was one of the methods for scaffolding, but workshop
was not consistently used in all classrooms.
The majority of the learning objectives observed fit into cognitive
processes of remember and understand (Anderson & Krathwohl, 2001). There
were few examples of conceptual, procedural and metacognitive knowledge
activities. The teachers employed direct instruction techniques but spent minimal
time in assisting students with the type of strategic knowledge necessary for
solving problems (Anderson & Krathwohl). Even though there was evidence that
students' language ability was considered in the development of the lesson
objective, it was also evident that the language objective was explicitly
addressed only in the pull-out time for ELL and not intentionally integrated
into daily class activities. As previously stated, instructional methods were
limited. Materials and processes for English language instruction were
addressed in the lesson design, but little guided practice and limited
checking for understanding were observed, with very little differentiation
for ELL.
It was assumed that the designed activities were meaningful and understandable
to ELL, but the explicit checking for understanding according to a student's
language level was not consistent across classes. Activities or independent
practice were not always explicitly connected to the learning objective nor was
the practice a match in the cognitive processes (Anderson & Krathwohl, 2001).
For example, students were asked to explain or justify their choices but the
independent practice would be to identify their choices. This was an example of
the disconnect between the learning objective and independent practice.
Examples of methods to focus on key vocabulary were restricted and the
attempt to connect to previous learning consisted of a reference to the previous
day's lesson. Since teachers did not specifically identify the language levels
of their students, adjustments to rate of speech and differentiation of
strategies for student language needs were not apparent. Explanations were
provided by teachers, but no special effort was made to address the language
levels of the students.
Students were directed to ask their partners if they had questions but sometimes
the students who were providing information were not sure, or did not seem
comfortable in explaining the concepts. Comprehensible input, as identified with
the SIOP, was not explicitly addressed. As was previously mentioned, there was
little change made by teachers in their oral presentations, and in most classes it
was not evident at all.
Interaction was limited at the Experimental School. Teachers at schools
that were beating the odds acted as coaches and facilitators to promote more
active involvement of students (McREL, 2005). Teachers at these schools
searched for multiple approaches to solving problems. They constantly probed
and asked questions for clarification and reflection. The Experimental School had
limited examples of questions or responses from students which are important
aspects of language acquisition for ELL. Students were provided limited
opportunities to formulate questions and wait time was negligible when a question
required a response. The lack of oral responses by students did not provide
opportunities for judgment about integrating language skills.
Teachers at the Experimental School were primarily textbook driven and
devoted little time to developing concepts outside the textbooks. Concept
development was limited to exemplifying with textbook examples (Anderson &
Krathwohl, 2001). When groups were observed, the groups were not engaging in
any type of collaboration or accountable talk (Resnick, 2003). Groups had not
designed a protocol for assignments of members to facilitate the collaboration.
The Experimental School teachers were predominantly traditional in their
approach with minimum checking for understanding and little or no modeling.
Finally, hands-on activities were observed but the objectives for the
activities were not always clear. Schools beating the odds have examples of
project-based learning and connection to the curriculum (McREL, 2005). The
practice application focused on worksheet completion in many classrooms with
few examples for implementing the concept beyond examples provided in the
classroom (Anderson & Krathwohl, 2001).
Language objectives, when posted, were assumed to be supported by the
lesson. This assumption was not supported by details of how the connection was
made. Examples of written learning objectives were visible, but when teachers
were questioned about why the independent practice was chosen, most teacher
responses focused on the textbook for justification.
Review of the learning objectives was not explicitly addressed in all
classes. Some of the classes were not finished with the instruction for the day’s
lesson when a transition occurred such as recess, lunch, or transitioning from
English Language Arts to math. One reason for this was that classroom time
was not organized for efficiency.
Successful schools have classroom time allotted in an efficient manner (Marzano,
2003).
Another research-based strategy that was addressed by the administration
and staff was the Professional Learning Community concept. Schools beating
the odds have a focus on Professional Learning Communities. In looking at the
continuum for Professional Learning Communities, there were indicators of the
initiation and developing stages for the Professional Learning Community
concept at the Experimental School in 2004 (DuFour et al., 2006). The principal
and some of the staff had participated in the initial awareness provided by DuFour
et al., but few of the concepts were observable. The importance of collaboration
was not being questioned. Even with this proven practice, there was still a gap
between the expectation of the Professional Learning Community and the reality
of the student achievement at the Experimental School. ELL were consistently
performing at low levels.
Teacher Expectations
Effective schools exhibit several common indicators that include safe and
orderly environment, climate of high expectations, strong instructional leadership
by the principal, frequent monitoring of student progress, opportunities to learn,
and more time on task (Lyman & Villani, 2004). Teachers’ expectations about
students are often based on whatever data are available, which may or may not
be accurate. Incorrect information can affect a teacher's belief about a student for
the long term (Cotton, 1989). The amount of information that is provided to
teachers is basically around the level of achievement that the student has attained
in the past. This information by itself can provide a skewed vision of the child’s
ability (Cotton, 1989).
The Experimental School's staff had access to a great deal of summative
data in a multitude of groupings. CST data, grade data, attendance data,
multiple-measure results, teacher observations, and breakdowns by classroom,
gender, and ethnicity were available or could be created in any combination.
This data was generated to provide teachers with essential information so that
instruction could be customized for individual students. However, Cotton
(1989) explains that data can be used to predetermine perceptions about a
child's ability to learn. Too much reliance on high-stakes results can create
an atmosphere that does not allow for success and can negatively skew teacher
expectations about a student's ability to learn (Cotton, 1989).
Teacher expectations can and do affect student achievement. Individual
teachers’ decisions have a greater impact on student achievement than any school
factor (Marzano, 2003). Four years ago, teachers at the Experimental School
believed that the students as a whole, especially ELL, were not prepared for the
rigor of the standards. The teachers used the low test scores as evidence to
support their belief that the students lacked ability and were not motivated to
learn. How did anyone expect teachers to motivate the unmotivated (Kuykendall,
2004)?
Schools that are beating the odds build and maintain a vision, direction,
and focus for student learning (McREL, 2005). Goal setting and high
expectations are just some of the examples of essential components that can affect
student achievement at a school (Cotton, 1989; Marzano et al., 2005). The
Experimental School Single School Plan for Student Achievement addressed the
vision, mission, and goals for the Experimental School. The document detailed
the importance of success for all students and the belief that all students
can perform at a high level. There was discussion and conversation about the
importance of a college-going culture, and interviews with teachers and school
activities provided evidence of these beliefs. However, there were comments,
especially from some teachers, that did not seem to support the school plan.
Teachers made comments about ELL such as: "these kids will struggle," "the
parents don't know how to assist the child," "what can we do when we only have
the child for six hours," and "how can we expect a child to master the content
when they cannot even master the English language?" These beliefs were counter
to those of staffs at high-performing schools, who stated that "you either
believe your kids can learn and you give the test, or you don't" (Lyman & Villani, 2004,
p. 119). Johnson (2002) states that teachers' beliefs about a child can lower
expectations for that student. Teachers and educators become the gatekeepers
for the success and progress of our students (Blossfeld & Shavit, 1993).
Teachers were not convinced that hard work by the student or themselves could
improve student achievement and labeled the hard-working student as an
overachiever (Resnick, 2003). There seemed to be doubt about this possibility
for ELL at the Experimental School. Some of the staff at the Experimental
School did not accept these myths, but many did not question their own beliefs
or work to debunk the myths around student success (Johnson, 2002).
Problem Solution
This section details the actions that the Experimental School could take in
order to close the gap between the current and desired achievement level for the
ELL in all grades as measured by the English-Language Arts (ELA) portion of the
California Standards Test (CST).
Implementation of Total Educational
Support System
The intervention that was implemented at the Experimental School to
address the gap in ELL achievement was a form of direct instruction referred to as
the Total Education Support System (TESS). Direct instruction was selected as
the intervention since it is a proven method for improving English Language
Learner student achievement (Elva, 1980). The TESS format is modeled after the
Madeline Hunter lesson design (Hunter, 1982).
ELL at the Experimental School participated in 30 minutes of daily ELD
instruction. The students were divided by their California English Language
Development Test (CELDT) levels and homogeneously grouped. The curriculum
for all students consisted of the district-adopted language arts program, the
Open Court ELL component, and the Santillian supplement for English Language
Development. Even with this agreed-upon and common curriculum, there was a
gap in the achievement of the ELL compared to their English-speaking peers.
This gap was attributed to a lack of common vocabulary around instruction and
an inconsistent delivery of the guaranteed curriculum for students at the
Experimental School (Marzano et al., 2005).
TESS was implemented to address these issues. However, the teachers were not
familiar with the TESS lesson design, and there were knowledge, motivation,
and organizational gaps that had to be addressed to ensure fidelity of
implementation.
Knowledge Gaps
According to Clark and Estes (2002), if knowledge gaps are evident, education
and training are often the first steps in addressing them. The staff was
provided a two-day overview of the TESS model. This
overview was followed by individual training provided in three-day cycles. Each
teacher participated in a three-day intensive training that led teachers through a
step-by-step process utilizing the TESS template in a pre-observational
conference. The teachers developed a TESS lesson by completing each
component of the TESS model as well as a language objective for the ELL in the
class. This pre-conference was followed by an observation and post-conference.
The goal was to establish a common vocabulary, which is essential for
identifying best practice, by building consistency around English Language
Learner instruction at the Experimental School (Marzano et al., 2005).
The process exemplified the best adult learning scenario with actual hands-on
experience (Ormrod, 2006). This three-step process provided the necessary steps
for teachers to acquire the knowledge and skills necessary for successful
implementation. The TESS template provided the structure and job aid for a
common lesson design (Clark & Estes, 2002). The observation and
post-conference allowed for immediate and effective feedback on
implementation. Finally, ongoing follow-up and information were provided by
identified staff employing a trainer-of-trainers model.
Motivational Barriers
Teachers at the Experimental School had limited buy-in to the implementation
of the Total Education Support System (TESS). The first wave of teacher
training was not mandated, and only teachers who volunteered were trained.
After that first year, the principal made the TESS model a requirement for all
teachers. Even though all of the staff committed or were directed to take part
in the training, the under-confidence identified by Clark and Estes (2002)
was apparent.
With this in mind, the staff designed time to analyze current data of ELL.
The data provided performance information on the English Language Arts portion
of the CST for ELL at the Experimental School. Data from successful schools
were provided to the staff at the Experimental School to demonstrate that ELL at
schools beating the odds were meeting the goals set by the state and federal
accountability models (McREL, 2005).
Another step in addressing the under-confidence of teachers was
deconstructing state standards in English Language Arts. Standards were
deconstructed into learning objectives, or chunked. Teachers utilized these
learning objectives to develop formative assessments and pacing guides to
determine instructional improvement and intervention. The principal and
leadership constantly voiced the belief that the TESS model would increase the
likelihood of ELL success, thereby boosting teachers' confidence and
self-efficacy, which leads to increased student achievement (Ormrod, 2006).
Organizational Barriers
During the implementation of the direct-instruction model, there arose a
need for teacher collaboration (DuFour et al., 2006). The principal designed time
during the regular school day so that teachers could meet and discuss student
learning once a week as well as plan lessons using the TESS model for lesson
design (Appendix A). Collaboration time during the school day is the first step in
changing a culture from isolated practitioners to collaborative learners (DuFour et
al., 2006). Teachers now had the time and a format for discussing English
Language Learner student success around best practice and lesson design using
the TESS model.
In addition, the principal and leadership team set clear expectations in
regards to the use of the TESS model in the classroom. Principals and leadership
members participated in informal walk throughs to monitor the implementation
and use of direct instruction. The leadership team and principal constantly made a
connection between the expectation of the implementation of TESS and how this
implementation would affect the success of ELL.
Finally, schools beating the odds have a systematic protocol for analyzing
data for instructional improvement (McREL, 2005). The Experimental School
designed time for analysis of data and developed protocols for addressing the
student results. The Experimental School purchased the EduSoft system for data
management so that English Language Learner progress could be monitored and
allow for more individualized intervention.
Other work processes and procedures were analyzed by staff, district
office staff, outside monitoring agencies, and Local Assistance Intervention
Teams to ensure that processes were efficient and properly implemented. The
monitoring of these processes became more of a leadership team responsibility
(Marzano et al., 2005). These processes can affect the culture of the school. This
was one organizational component that had come into question when it came to
identifying areas that were hindering the success of the ELL at the Experimental
School. The principal and the leadership team continued to evaluate and analyze
programs and how the culture could be negatively affecting the success of ELL.
Purpose, Design, and Utility
Purpose
This study employed a pre/post quasi-experimental sequential explanatory
design incorporating both summative and formative evaluations. Its primary
purpose was to determine whether the Total Educational Support System design
for direct instruction was effective in meeting the needs of all ELL at the
Experimental School to become proficient on the English Language Arts portion
of the California Standards Test as required by No Child Left Behind (United
States Department of Education, 2002). The essential question was: Will
English Language Learner students whose teacher uses the Total Education
Support System for lesson design score higher on the ELA portion of the CST?
Design
Benchmark School, an elementary school, was selected as the comparison
school. Benchmark School is in the same 100 Similar Schools grouping as the
Experimental School. More importantly, the Benchmark
School has outperformed the Experimental School in both the API increase and
the percent of ELL who have been identified as proficient and advanced.
Benchmark School and the Experimental School share similar characteristics
in the percentage of free and reduced-price lunch students (the Experimental
School 96, Benchmark School 99), the percentage of students classified as ELL
(the Experimental School 62, Benchmark School 91), and average parent
education level (the Experimental School 1.85, Benchmark School 1.34), as
provided by the California Department of Education (CDE) website. Benchmark
School has almost twice the number of students identified as ELL (the
Experimental School 51, Benchmark School 103). In comparing the two schools,
only 23.8% of the ELL at the Experimental School were identified as proficient
on the 2006-2007 CST, missing the cut point by 0.6%, whereas 36.1% of the ELL
at Benchmark School were proficient or advanced on the ELA portion of the
2006-2007 CST. One point of interest was that the number of ELL at Benchmark
School had increased while the number of ELL at the Experimental School had
declined.
Benchmark School was chosen as the comparison school to attempt to
learn what programs, procedures or processes were making the difference,
especially for ELL. It was hoped to be able to identify specific actions that were
occurring at Benchmark School that were contributing to the success of the ELL.
Through the analysis of the post-test data, 2008 CST, along with informal
interviews and observations, it was intended to determine the actions that could be
replicated at the Experimental School to increase ELL achievement.
A sequential explanatory design was utilized which had a primary focus
on the quantitative data. The results had an impact on the continuation or
refinement of the TESS model at the Experimental School.
The summative data included CST results for students in grade 4 in 2007
compared to CST results for the same students in grade 5 in 2008.
This data was utilized to attempt to determine the effectiveness of the Total
Education Support System. This data was also used to identify strengths and
weaknesses for future modification and refinement.
Formative data included results from the California Standards Test for
English Language Arts and informal interviews and conversations with the
Experimental School principal, teachers, and support staff. The purpose of
gathering this formative data was to improve instruction and services for students
of the Experimental School and to gather information on the fidelity of
implementation.
Experimental Elementary School's goal was to remain at Program
Improvement Level 4 and not advance to Level 5. The Experimental School
intended to exit Program Improvement in 2009, which was a challenge. No Child
Left Behind (United States Department of Education, 2002) requires that the
percentage of students who are proficient or above increase each year by
roughly 11 percentage points. Experimental Elementary had struggled with this,
and the fact that the target would now increase by 11 percentage points added
urgency and concern for improvement.
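To make the pace of that requirement concrete, the following sketch projects annual proficiency targets rising by a flat 11 percentage points toward NCLB's 100% goal in 2014. The 24.4% baseline and the uniform 11-point step are illustrative assumptions for demonstration, not the actual California AMO schedule.

```python
# Illustrative projection of NCLB annual proficiency targets.
# Baseline (24.4%) and the flat 11-point yearly step are assumptions
# for demonstration; the real AMO schedule differed slightly.
target = 24.4  # assumed percent-proficient target for 2007
targets = {}
for year in range(2008, 2015):
    target = min(target + 11.0, 100.0)  # cap at the 100% goal
    targets[year] = round(target, 1)

# Under these assumptions, a school at 23.8% proficient in 2007 would
# need to roughly quadruple its proficiency rate within seven years.
```

The steep slope of these targets is what made exiting Program Improvement such a challenge for a school that had just missed the cut point.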
The summative data were predominantly the CST ELA data for ELL, comparing
the same students' results for two consecutive years. The quantitative
analysis was a pre/post dependent-groups design using California Standards
Test results as compiled by the AIRES student information system and the
EduSoft data management system, comparing the 2007 and 2008 CST data. In
addition, a non-equivalent control group was used to compare the growth of ELL
at the Experimental School to that of ELL at Benchmark School on the
post-test, the 2008 CST English Language Arts scores. Qualitative data were
collected through informal interviews and classroom observation.
The boundary of this study focused on ELL who had taken both the 2007
ELA component of the CST at the Experimental School and the 2008 ELA CST.
The accountability model utilized was the California Standards Test.
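A pre/post dependent-groups analysis of this kind is typically evaluated with a paired comparison of the two years' scores for the same students. The sketch below uses entirely hypothetical scale scores (the actual student data are not reproduced here) to show the form of the computation:

```python
import math
import statistics

# Hypothetical 2007 (grade 4) and 2008 (grade 5) CST ELA scale scores
# for the same eight ELL students; all values are invented for illustration.
pre = [285, 300, 310, 295, 320, 305, 290, 315]
post = [300, 310, 325, 300, 335, 320, 305, 330]

diffs = [b - a for a, b in zip(pre, post)]  # year-over-year gain per student
mean_gain = statistics.mean(diffs)
sd_gain = statistics.stdev(diffs)           # sample standard deviation
t_stat = mean_gain / (sd_gain / math.sqrt(len(diffs)))  # paired t statistic
```

With 7 degrees of freedom, a t statistic exceeding the two-tailed critical value of 2.365 would indicate a statistically significant gain at the .05 level; the non-equivalent control group (Benchmark School) then guards against attributing normal grade-to-grade growth to the intervention.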
Utility
The utility of this study lies in its relevance, practicality, and
applicability to my current work as Assistant Superintendent of Instructional
Services. It is my responsibility to determine the effectiveness of programs
that are implemented at all schools and, especially, the programs that are
being implemented for the ELL. ELL achievement is a major concern in my school
district. It was hoped that this study would provide more insight into these
programs and processes and their impact on ELL.
CHAPTER 2
REVIEW OF THE LITERATURE
Is there valid evidence that one educational practice is more effective than
another for increased student achievement (Chall, 2002)? What really works in
the classroom (Marzano, 2003)? Do innovation, invention, and originality have a
greater impact on student achievement than a skilled delivery of a standard
lesson? What is it that educators can do that will consistently result in an increase
in student achievement? These types of questions have provided the onus for a
multitude of research that range from the effect of preschool programs (Karoly,
2001) to the tenure of superintendents in school districts (Waters & Marzano,
2006).
With the onset of the No Child Left Behind (NCLB) (United States
Department of Education, 2002) legislation of 2001 and the increased number of
students who are identified as ELL, the urgency to identify those variables that
can be associated with a continuous improvement in student achievement has
become even more of a priority. The intent and purpose of NCLB is to address
and devise methods to eliminate the persistent achievement gap between
advantaged and disadvantaged students, including ELL. NCLB requires annual
high-stakes testing of all students to determine progress toward mastery of
rigorous content standards designed by each individual state, with the hope
that these assessments will provide results that can be interpreted and
utilized to improve educational opportunities (Department of Education, 2006).
Direct instruction is one instructional strategy associated with a positive
effect on student achievement (Chall, 2002). Chall refers to direct
instruction as the teacher-centered or traditional approach, contrasted with a
student-centered or progressive style. The teacher-centered approach can be
traced back to the work of Thorndike (1973) and its references to cues,
effective feedback, reinforcement, and engagement. This approach can have a
positive effect on student achievement ranging from .88 to 1.25 standard
deviations. Even though this research was not identified as direct
instruction, the concept of a teacher-centered approach can be easily
connected (Chall, 2002).
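To give effect sizes of this magnitude a concrete interpretation, a standardized mean difference can be translated into an expected percentile gain under a normal distribution. The conversion below is my own illustration of that standard interpretation, not a computation taken from Chall:

```python
import math

def percentile_from_effect_size(d):
    """Expected percentile of an average treated student, assuming
    normally distributed scores: Phi(d) * 100, via the error function."""
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0))) * 100.0

# Under this normal-distribution assumption, a student starting at the
# 50th percentile would be expected to move to roughly:
low = percentile_from_effect_size(0.88)   # about the 81st percentile
high = percentile_from_effect_size(1.25)  # about the 89th percentile
```

In other words, effects of .88 to 1.25 standard deviations correspond to moving an average student from the 50th to roughly the 81st through 89th percentile, which is why such findings attracted attention.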
The California system of public education is attempting to educate the
most diverse student population in the nation to some of the highest content and
performance standards in the country. The number of students for whom English
is not the primary home language has increased exponentially over the last 10
years. Forty percent of the students in the California school system are classified
as ELL (Lachat, 2004). The ELL are experiencing difficulty with the current state
and federal accountability system and there is a definite demonstrated
achievement gap (O'Connell, 2008).
The purpose of this literature review was to identify and summarize recent
research findings concerning direct instruction and its effect on the academic
achievement of ELL. The Experimental School was chosen as the focus for this
study. The Experimental School has a significant number of ELL who have
historically under-performed on the English Language Arts portion of the
California Standards Test and have failed to demonstrate adequate yearly progress
since the inception of the NCLB (United States Department of Education, 2002)
legislation.
Utilizing Creswell’s (2003) model for quantitative and mixed-method
review, the introduction was followed by four segments that begin with a
discussion of the independent variable—direct instruction. Following this
discussion, the dependent variable—student achievement of ELL—was analyzed
to detail the specific issue of underperformance by the ELL at the national, state,
and local level. The next component of the chapter highlights evidence that
illustrates a correlation between the dependent variable—English Language
Learner student achievement—and the independent variable—direct instruction.
The concluding portion of the literature review summarizes the conclusions and
views drawn from the research that includes agreements and disagreements that
arise along with cautions on the transferability of programs from one setting to
another.
Direct Instruction
Direct instruction information was abundant. Selections were narrowed to
three authors (Chall, 2002; Hunter, 1982; Marzano, 2006). These selections
provided a confluence of research studies that described the details of direct
instruction.
Hunter (1982) has often been identified as the founder of the direct-
instruction model with regard to the current pedagogy seen in classrooms
across the United States. Hunter described the direct-instruction model as an
efficient and effective method of instruction focused on the way in which
learning occurs in the classroom (Hunter, 1982). As Gursky stated in Teacher
Magazine in 1991:
She [Hunter] crusades against the intuitive, spontaneous, and
improvisational approach to education (student centered) and spreads her
model of teaching as an applied science in which education is based on
proven research findings that link cause and effect [teacher centered].
(Gursky, 1991, p. 29)
Hunter’s model, also referred to as the instructional theory into practice
model (Hunter, 1986), has been replicated and modified over the years. Different
groups and organizations have modified her design but the basic tenets are still
apparent. The Total Education Support System (TESS) is a derivative of the
Hunter design. Table 5 provides a comparison of four different direct-instruction
models.
Table 5

Lesson Design Models

TESS(a)                       Hunter                          Marzano                              DATA Works
Deconstruct Standards         Standard                        Effective Units                      State Content Standard
Learning Objective            Short-Term Learning Objective   Learning Goals                       Learning Objective; Importance
Preview/Review                Anticipatory Set                Engage Students; Celebrate Success   Activate Prior Knowledge
Concept Development           Instruction                     Deepen Understanding                 Concept Development
Skill Development                                             Interact with New Knowledge          Skill Development
Guided Practice                                               Generate and Test Hypotheses         Guided Practice
Closure                       Checking for Understanding;     Track Student Progress               Closure
                              Closure
Independent Practice          Independent Practice            Practice New Knowledge               Independent Practice
English Language Development                                                                       English Language Development

(a) Checking for understanding occurs within every component of the TESS model.
As Table 5 compares types of lesson designs, it is to be noted that Checking
for Understanding (CFU) is an essential component of all the direct-instruction
models listed. CFU is addressed in two distinct ways. Two of the models list
CFU as a separate component, a discrete activity within the overall lesson:
what did you want the student to know, and how do you check to see whether
they know it (DuFour et al., 2006)? The TESS model is unique in that it
emphasizes that CFU occurs in all components of the lesson
delivery. The TESS model provides for ongoing CFU, which is more aligned with
the effective and timely feedback referred to by Marzano (2003) and with
Dewey's (1938) critical feedback, both of which are necessary for success in
the teacher-centered classroom (Chall, 2002).
Direct instruction has had a varied history when it comes to titles and
descriptions. Chall (2002) categorizes all classroom instruction into two
educational typologies: (a) student centered or progressive and (b) teacher
centered/traditional or direct instruction. The two methods, also identified by
Jackson (1986) as mimetic (teacher centered), or transformative (student centered)
have some overlap but are distinct in the roles of the teacher, selection of the
essential curriculum, utilization of educational materials, range of activities, types
of grouping, learning objectives, amount of time, evaluation of program, and
pacing of the year. The teacher-centered model, in recent years, has become
known as the direct-instruction method of instruction (Chall, 2002).
It was somewhat interesting that, in the readings of Lachat (2004), several
references were made to the importance of a student-centered classroom. This
usage was in contrast to the identical term presented by Chall (2002). Chall
would have considered the term as used by Lachat to be teacher centered, in
that the teacher was the primary determiner of the goals for the classroom,
with the student in mind. Student centered, as defined by Chall, referred more
to a constructivist model directed by the students (Chall, 2002).
Hill and Flynn (2006) were more inclined to discuss the importance of
cooperative learning and less inclined to discuss the importance of a
particular direct-instruction model. But, once again, Hill and Flynn were
consistent about the importance of several of the factors of direct
instruction, including timely feedback, clear learning goals, and
connectedness within the lesson (Hill & Flynn, 2006). Even though there were
some discrepancies in the vocabulary, there was congruence regarding the
components of lesson design and delivery.
In summary, direct instruction has definitive components that include
standards, skill and concept development, and guided and independent practice,
with some type of checking for understanding and learning. It is a teacher-
centered model wherein the teacher designs the lessons and provides
information to the student, centered on a set of agreed-upon standards, with a
focus on the opportunity to learn for all students in the classroom. Even
though the student-centered classroom has advocates, the quantitative data do
not support the success that those advocates claim (Chall, 2002).
English Language Learner Achievement
The vast majority of the ELL in California are low socio-economic
Hispanic/Latino children who are underperforming on the California Standards
Test, especially on the English Language Arts portion. There is a persistent gap
in the student achievement of ELL (O'Connell, 2008). This gap is not just a
California issue but is evident across the nation. In the early 1980s, the gap
had narrowed nationally according to the National Assessment of Educational
Progress (NAEP), but since the 1990s the gap has once again begun to increase.
The College Board's National Task Force on Minority Achievement identified
several convincing examples that the achievement gap is real. The difference
in the achievement of minorities is evident in elementary school and continues
through postsecondary education (Johnson, 2002). Hispanic/Latino students are
consistently outperformed by their Anglo and Asian counterparts on all
standardized forms of assessment. Only one out of ten 12th-grade students
identified as proficient on the NAEP in 1998 was a minority student. The
achievement gap is not only evident on assessments but is also exhibited in
other measures of achievement such as grades and class rank, college entrance,
college completion, and attainment of advanced degrees. All of these results
emphasize the importance of addressing the achievement gap at all levels, but
especially in the elementary grades, with a constant monitoring and follow-up
process for ELL (Johnson, 2002).
At the state level, O'Connell (2008) has declared that closing the
achievement gap for minorities, with an emphasis on ELL, is the primary focus
for California schools. A statewide summit was conducted in Sacramento to
begin educating the state about closing this gap for ELL. Forty percent of
the students in California are ELL, with Spanish being the primary language.
The state is reflective of the national results, with fewer ELL demonstrating
proficiency at all grade levels in English Language Arts. More ELL are
retained in California schools. Fewer ELL are passing the California High
School Exit Exam. Fewer ELL graduate from high school and, therefore, more are
identified as dropouts. There are fewer ELL enrolled in GATE, honors, and
advanced placement courses. Fewer ELL meet the A-G requirements for acceptance
to a University of California or California State University campus
(California Department of Education, 2008).
At the local level, the Experimental School has many of the issues that are
the focus of O'Connell's Close the Achievement Gap initiative for the state.
There has been a lack of consistent performance by the ELL at the Experimental
School. The Experimental School ELL have demonstrated adequate yearly progress
(AYP) only once since 2003, and that was by utilizing the safe harbor
calculation for growth for NCLB (United States Department of Education, 2002)
compliance. In essence, the ELL subgroup has yet to meet the established
benchmark set by NCLB. The ELL subgroup at the Experimental School has fewer
proficient and advanced students on the California Standards Test than any
other significant subgroup on the campus, with ELL consistently
underperforming their peers across grade levels. All of these factors set the
stage for urgency for improvement of the ELL at the Experimental School.
Effect of Direct Instruction on the Student
Achievement of ELL
Several studies have been conducted to compare the outcomes of teacher-
centered and student-centered schools, attempting to control for all variables
except the educational approach (Chall, 2002; Hill & Flynn, 2006; Lachat,
2004). One of the earliest studies was conducted by the Follow Through
organization, beginning at Head Start and ending with the 3rd grade (Watkins,
1997). This study contrasted a direct-instruction model with two separate
approaches, a cognitive model and an affective teacher model, both considered
student centered as described by Chall (2002). The Follow Through (n.d.)
project found a significant difference in student success in reading, math,
and writing. The consensus of the research was that the direct-instruction
model was the most effective model in its effect on student achievement and
that the more directive the teacher, the better the results (Chall, 2002).
The findings were validated in a similar study conducted by Stevens and
Rosenshine in 1981. The researchers concluded that a teacher-centered/direct-
instruction model provided the greatest impact on student achievement,
especially in reading and math. In addition, students' use of higher-order
thinking skills is impacted by direct instruction (Stevens & Rosenshine,
1981).
In 1978, Kennedy of the Office of Education came to the conclusion that the
most successful programs were those in which the teacher, not the student, was
the planner. Programs in which teachers are identified as facilitators are not
as effective, especially with low-income students. The basic skills are
positively affected by the use of direct instruction (Chall, 2002).
Adams and Engleman (1996) came to the same conclusion. They found that
students who were taught using the direct-instruction model did significantly
better than students who were taught by other means. In addition, they found
that students identified as poor who were involved with direct instruction
exhibited positive effects in student achievement as well as in social factors
(Adams & Engleman, 1996).
Gage was one of the first people to conduct what is now called a
meta-analysis on direct instruction in elementary schools. Gage found that
students in traditional (teacher-centered) schools demonstrated more success
than students in an open classroom. Open education was less consistent and was
especially inadequate for low socio-economic and minority students. In fact,
Gage was especially concerned about the negative effects of student-centered
education on low socio-economic and minority students (Gage, 1978). Marzano,
in many of his meta-analytic works, agreed with Gage and advocated for the use
of the Hunter model (Marzano, 2003).
Good and Brophy (1987) agreed with Gage's general finding that students in
traditional schools perform better than students from institutions with a
progressive approach. Even though there may be some students who would benefit
from a student-centered approach, the potential disadvantages outweigh the
advantages of this approach, especially with poor and minority students
(Good & Brophy, 1987).
Successful middle schools with predominantly Latino student populations used
direct instruction and exhibited several of the characteristics of a
traditional, teacher-centered, direct-instruction approach. Most focus was on
the domains measured by the standards-based exam used by Texas, with little
use of interdisciplinary approaches (Waits et al., 2006).
The 90-90-90 schools also exhibited some of the characteristics that were
listed for teacher-centered schools. The essential curriculum was established
using the agreed-upon standards. All schools have standards, including the
90-90-90 schools, but the importance was in the implementation of the
standards. Schools that have implemented the standards with fidelity have
consistently outperformed their peers in the improvement of student
achievement (Reeves, 2003).
Other View Points
Not all studies agreed that direct instruction is the best method for student
success. There is still an ongoing discussion about the roles and
responsibilities of schools: should schools focus on the affect and motivation
of students (student centered) or on knowledge and intellect (teacher
centered)? Some of today's leading researchers also seem to be struggling with
the question of whether students should learn content or interact with content
(DuFour et al., 2008).
Student-centered curriculum has been touted as an alternative method for
student learning since the turn of the century. Dewey, the English Infant
School, and Montessori schools are historical examples of curricula based on
growth and development that have continued to be presented as methods for
student learning (Chall, 2002). One of the largest studies that compared the
student-centered and teacher-centered approaches was Project Follow Through
(n.d.). Project Follow Through continued the debate about which is superior:
the low-structure, education-through-experience approach or the high-structure
model of direct instruction. The "gooeys," as they were called in a 1981 Time
article (McGarth & Di Pietro, 1981, p. 1), stated that students learn to read
best when provided with a rich environment that encourages them to learn what
they need. Learning centers are part of this type of classroom, where students
interact with the content and are able to express their thoughts. This type of
classroom was an example of what Bestor (1985) referred to in his comment that
"the idea that schools had a responsibility to teach something and that
something was thinking" (p. 10). But the low-structure, progressive,
student-centered programs have not shown the continuous improvement that has
been attributed to teacher-centered or direct-instruction models (Chall,
2002).
Upper-middle-class parents prefer the student-centered type of classroom, but
minority students and low socio-economic status students consistently
demonstrate improvement with the direct-instruction model (Chall, 2002). The
direct-instruction model continued to demonstrate significant effects,
especially with minority students. Direct instruction can improve beginning
bilingual children's achievement significantly more than regular bilingual
instruction. Bilingual education may be enhanced by incorporating direct
instruction into its teaching method, and school characteristics may interact
with the effects of any specific teaching method (Duran, 1980). An experiment
conducted in the Chicago Schools in 1992-93 demonstrated a significant change
in the raw scores of students who were instructed using DI over a comparison
group that used other methods (Wrobel, 1996). These same types of quantitative
results have not been consistently evident with the progressive or
student-centered model (Chall, 2002). And as Walberg (1990) observed in his
meta-analysis of classrooms, many nominally student-centered classrooms still
operated in a teacher-centered manner, with students seated in rows of
bolted-down seats.
Summary
The direct-instruction model is an instructional strategy that has
consistently demonstrated a positive effect on student learning, especially
for minority and low socio-economic status students (Chall, 2002). English
Language Learners have historically demonstrated less than stellar performance
on the types of assessment used to determine a school's effectiveness as
measured by the No Child Left Behind initiative of 2001 (United States
Department of Education, 2002). There are two methods of change in a school:
change the instruction or change the motivation (Marzano, 2003). Conclusions
drawn from the studies suggest that direct instruction is a strategy that
helps schools meet ELLs' needs for student achievement (Chall, 2002; Lachat,
2004; Hill & Flynn, 2006; Marzano, 2006).
There is significant evidence that for the lower socio-economic students,
the direct-instruction model has been found to make a statistically significant
impact on student learning (Chall, 2002). Many of today’s minority students,
especially ELL are identified in the lower socio-economic quartile (California
Department of Education, 2008).
But a word of caution is necessary regarding the assumption of the
transferability of any program, methodology, or concept. Transferability
cannot be assumed. Direct instruction does not work when its components are
not adhered to, such as gathering information about the student before
instruction occurs (Amberge, 2002). Ergo, the fidelity of implementation is an
important component in this study, as are the setting, participants, and time.
But it seems that the concept of direct instruction has a high probability of
addressing the lack of student achievement of ELL at Experimental Elementary
and, therefore, was the intervention implemented for this study.
CHAPTER 3
SUMMARY OF RESEARCH DESIGN
The primary purpose of this study was to evaluate the effect of the direct-
instruction model, the Total Education Support System (independent variable),
on the achievement of ELL (dependent variable) as measured by the English
Language Arts (ELA) portion of the California Standards Test (CST) at the
Experimental School. This study did not analyze mathematics achievement, since
the ELL subgroup has, over the years, demonstrated adequate yearly progress
(AYP) in mathematics. But the ELL subgroup at the Experimental School had
consistently been unsuccessful in demonstrating AYP in English Language Arts
(ELA).
A mixed-method approach was used to gather data and information. Data was
collected from 2007-2008 CST results for ELL from the Experimental School and
from a benchmark school, selected for comparison from the similar schools
rankings as defined by the California Department of Education. A sequential
explanatory design was used with a primary focus on
the quantitative data from the CST and limited qualitative data gathered from
informal conversations, interviews, and observations (Creswell, 2003).
Quantitative Evaluation Design
Pre/post dependent groups' design. A quasi-experimental design was used to
analyze the change in the CST ELA results for ELL (dependent variable) at the
Experimental School from 2006-07 (pre-intervention) to 2007-08
(post-intervention). The following statistics were used for each dependent
variable: (a) a dependent groups t-test to assess the statistical significance
of the change (criterion: p < .15), (b) Cohen's d to assess practical
significance (criterion: d > .20), and (c) percentage change to assess
practical significance per NCLB (criterion: 10% improvement).
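The three statistical criteria above can be illustrated with a short Python sketch. The score arrays below are invented for illustration and are not the study's actual CST data; only the formulas (the paired t statistic, Cohen's d for dependent groups, and percentage change) follow the design described here.

```python
import math

# Hypothetical pre/post CST ELA scaled scores for the same eight students;
# these numbers are invented for illustration, not the study's data.
pre  = [310, 295, 342, 288, 305, 330, 299, 315]
post = [325, 301, 350, 300, 322, 341, 310, 333]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in diffs) / (n - 1))

# (a) Dependent groups t statistic, evaluated against the t distribution
#     with n - 1 degrees of freedom (the study's criterion was p < .15)
t_stat = mean_d / (sd_d / math.sqrt(n))

# (b) Cohen's d for paired data: mean difference / SD of the differences
#     (criterion for practical significance: d > .20)
cohens_d = mean_d / sd_d

# (c) Percentage change in the mean score (NCLB criterion: > 10%)
pct_change = 100 * (sum(post) - sum(pre)) / sum(pre)
```

Because the same students supply both scores, the t-test is computed on the within-student differences rather than on two independent samples, which is what makes it a dependent (paired) groups test.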
Non-equivalent control group design. This design includes an experimental
group and a benchmark comparison group that were not randomly assigned. The
experimental and benchmark groups were compared on the post-test data
utilizing the raw change in percent proficient and above, and the raw change
in percent basic and above, using the CST ELA results for ELL at the
Experimental School and Benchmark School. It must be noted that the
non-equivalent control group design lacks internal validity because of
selection bias. Therefore, causal inferences concerning the TESS intervention
and student achievement cannot be drawn. Below is a graphic representation of
the non-equivalent (pre-test/post-test) control group design. In this design,
the X represents the Total Education Support System intervention.
                         2007                       2008

Group A
Experimental School       O ___________X___________ O
- - - - - - - - - - - - - - - - - - - - - - - - - - -
Benchmark School          O _______________________ O
(Control Group)

Figure 1. Control Groups' Design
Data outcomes on the 2006-2007 CST ELA for ELL at the Experimental School
were compared to the 2007-2008 CST ELA outcomes for ELL at the Experimental
School. This allows for analysis of the difference from the pre-test (the
2006-2007 ELA portion of the CST) to the post-test (the 2007-2008 ELA portion
of the CST) at the Experimental School. The same three statistics described
above (a dependent groups t-test, Cohen's d, and percentage change, with the
same criteria) were used for each dependent variable.
A more detailed description of the experimental and control groups is
presented below:
Experimental Group    Experimental School ELL    Pre (2007)    X    Post (2008)
Control Group         Benchmark School ELL       Pre (2007)         Post (2008)

Figure 2. Experimental and Control Groups
Benchmark School
Additionally, data outcomes for 2007-2008 from the Experimental School were
compared to 2007-2008 data outcomes from Benchmark School, the selected
benchmark school. The rationale for this analysis was to compare the
percentage of ELL who scored proficient and above.
Benchmark School was one of the 100 similar schools grouped with the
Experimental School as identified by the California Department of Education.
Benchmark School was selected because of demographic similarities in three
areas: average parent education level, percentage of students who qualify for
free and reduced-price lunch, and percentage of ELL. Table 6 provides a
comparison of the Experimental School and Benchmark School in these areas as
well as the Academic Performance Index for both schools. Benchmark School's
average parent education level is lower than the Experimental School's.
Benchmark School has a greater percentage of students who qualify for free
and reduced-price lunch and a greater percentage of ELL than the Experimental
School. A school that has a lower average parent education level, a greater
percentage of students on free and reduced-price lunch, and a greater
percentage of ELL often has a lower Academic Performance Index than a school
with contrasting numbers (Reeves, 2003). Taking this information into account,
it would be a logical assumption that the Experimental School would
out-perform Benchmark School.
Table 6

Selection Criteria for Experimental and Control Groups, 2007 School Year

                                            Similar             Average     Percent Free   Percent
                                     State  Schools   Grade     Parent      or Reduced     English
School Name      API Score           Rank   Rank      Levels    Education   Price Lunch    Learners
                                                                Level       (STAR)         (STAR)

Experimental     679 (26 points      2      5         K-6       1.85        96%            62%
School           growth from 2006)

Benchmark        763 (30 points      5      10        K-5       1.34        98%            91%
School           growth from 2006)
(Control)
But that is not the case. Benchmark School's Academic Performance Index is
84 points higher than the Experimental School's, and Benchmark School is not
in Program Improvement, whereas the Experimental School is identified as a
year 4 Program Improvement school.
Limitations
Threats to internal validity were selection and sample size. With regard to
selection, the groups were not randomly assigned (represented by the dashes in
Figure 1) and are heterogeneous samples with similar characteristics as
established by the California Department of Education. The experimental group
was analyzed utilizing a dependent groups t-test of students with CST results
for both 2007 and 2008 at the school and a designation as ELL. The small
sample size also affected validity. Therefore, the change in the results from
2007 to 2008 may be attributable to factors other than the TESS
direct-instruction intervention.
It is important to acknowledge that the findings of this study are limited to
the Experimental School. Causation cannot be proven. This study was limited to
ELL who took the CST in both 2007 and 2008 at the Experimental School. This
study should provide possible insight into the effectiveness of the TESS on
the ELL at the Experimental School.
Qualitative Design
Informal interviews and informal observations were conducted at the
Experimental School. An attempt was made to interview all certificated staff
members, support staff, and administration. The rationale for gathering this
qualitative data was to determine the beliefs of the teachers concerning the
impact of the TESS model of direct instruction and the fidelity of
implementation of the intervention. This qualitative data will assist in the
analysis and provide a deeper understanding of the quantitative results.
Champion’s (2002) four-level evaluation model was utilized to evaluate
information concerning the fidelity of implementation of the TESS. Focus was on
three areas when talking to teachers in an attempt to evaluate the major points of
impact of TESS: (a) reaction, (b) learning, and (c) transfer.
1. How do you feel about TESS?
2. What did you learn, or what was validated, from the TESS implementation?
3. What behaviors changed in the classroom for teachers? Students?
The questions focused on the behaviors/experiences, feelings/emotions,
opinions/values, and knowledge of the interviewee in regards to the fidelity of
implementation of the TESS model.
Intervention
Direct instruction is one intervention that has been shown to improve student
achievement (Chall, 2002). Direct instruction, specifically the TESS, was the
intervention used with the experimental group. The TESS model is one variety
of a direct-instruction model that consists of the following 10 components:
1. Unpacking the standard
2. Learning Objective
3. Preview/Review
4. Explain Conceptual Knowledge
5. Explain Procedural Knowledge
6. Guided Practice
7. Independent Practice
8. Closure
9. Checking for understanding (continual)
10. Language Objective
The TESS lesson design instructional model is similar to the models created
by Hunter (Marzano, 2006). One difference between the TESS model and other
direct-instruction models is the Checking for Understanding (CFU) component.
The TESS instructional model emphasizes the importance of CFU during all
aspects of the lesson design, whereas in other models CFU occurs as a separate
component.
Participants and Setting (Experimental Group)
The participants in the experimental group were ELL (in 2008) who
participated in both the 2007 CST and the 2008 CST at the Experimental School.
The group included re-designated students. The experimental group numbered 151
children: 77 male and 74 female. All of the students had been in the United
States for over 12 months, and all had been at the Experimental School for 2
years.
The teaching staff and administration at the Experimental School were also
participants in the research study. Twenty-three certificated staff members
were informally interviewed and observed, with a specific emphasis on
comparing teachers' responses to those of other members of the staff. Support
staff and administrators were also included in the interviews.
The setting for this study was the Experimental School. The Experimental
School is a kindergarten through sixth grade traditional elementary school
with an enrollment of 496 students with the following demographic
characteristics: (a) Hispanic or Latino, 87.5%; (b) African American, 8.7%;
(c) White, 0.6%; and (d) Other, 3.2%. Of the 496 students enrolled at the
Experimental School, 297 students (59.9%) are designated as English learners.
One hundred percent of the student population at the Experimental School
participated in the free and reduced-price lunch program through the National
School Lunch Program. In addition, 95% of the students qualified as
socio-economically disadvantaged.
Benchmark School/Comparison Group
Benchmark School was selected as the benchmark school based on information
about the similarities between the two schools obtained from the 2007 Similar
Schools Report and Academic Performance Index (API) Base Report (California
Department of Education, 2006) and the 2007 School Demographic Characteristics
API Growth Report (California Department of Education, 2006b), both from the
CDE website. The similarities are: (a) API scores, (b) grade levels within the
schools, (c) percentage of students participating in the free or reduced-price
lunch program, (d) percentage of students classified as English learners and
participating in STAR, and (e) parent education level.
It is important to note that even though the benchmark school, Benchmark
School, has a lower parent education level, a higher percentage of students
identified for free and reduced-price lunch, and a higher percentage of ELL,
Benchmark School is out-performing the Experimental School. These three
indicators oftentimes have a high negative correlation with the outcomes of
the California Standards Test as reported by the Academic Performance Index.
Benchmark School is definitely one of the schools that is beating the odds
(McREL, 2005).
Instrumentation
Achievement
Data collected by the California Department of Education (CDE) was utilized,
specifically English-Language Arts results from the 2006-2007 and 2007-2008
administrations of the California Standards Test (CST). Data was collected on
the ELL subgroup, including re-designated fluent English proficient (R-FEP)
students, at the Experimental School and Benchmark School.
The California Standards Test (CST) is the official measure of school
performance as designated by the Public Schools Accountability Act (EdSource,
2007). The CSTs are criterion-referenced exams developed from the California
academic content standards, based on what teachers are supposed to teach and
what students are expected to learn. Student performance is rated in quintiles
with the following labels: far below basic (FBB), below basic (BB), basic (B),
proficient (P), and advanced (A). Table 7 shows the quintiles by grade level,
the "cut points" for each quintile, and the quality value assigned to each
quintile.
Table 7

CST Quintiles and Quality Values

Grade     FBB (200)    BB (500)    B (700)    P (875)    A (1,000)
Second    150-261      262-299     300-349    350-401    402-600
Third     150-258      259-299     300-349    350-401    402-600
Fourth    150-268      269-299     300-349    350-392    393-600
Fifth     150-270      271-299     300-349    350-394    395-600
Sixth     150-267      268-299     300-349    350-393    394-600

Note. The value in parentheses is the quality value assigned to each quintile.
Student scaled scores on the CST are placed in one of the five
performance bands. The percentage of scores in each performance band is then
multiplied by the weight that is assigned to each performance band to calculate
the API. There are two methods for summarizing test scores. The API is the state
measure of accountability. The federal measure that is part of the No Child Left
Behind (United States Department of Education, 2002) reauthorization of the
Elementary and Secondary Education Act is known as Adequate Yearly Progress
(AYP).
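The weighting step described above can be sketched in a few lines of Python. The quality values are those shown in Table 7 (FBB = 200, BB = 500, B = 700, P = 875, A = 1,000); the distribution of students across the bands is hypothetical, not either school's actual data.

```python
# Sketch of the API weighting described above; the band weights are the
# quality values from Table 7, and the student counts are hypothetical.
WEIGHTS = {"FBB": 200, "BB": 500, "B": 700, "P": 875, "A": 1000}

# Hypothetical number of students scoring in each performance band
counts = {"FBB": 20, "BB": 35, "B": 50, "P": 30, "A": 16}

total = sum(counts.values())

# API = sum over bands of (proportion of scores in the band x band weight)
api = sum((counts[band] / total) * WEIGHTS[band] for band in WEIGHTS)

# APIs range from 200 (all far below basic) to 1,000 (all advanced);
# the state expects every school to reach at least 800.
print(round(api))
```

With this hypothetical distribution the computation yields an API of about 654, illustrating how a school can fall well short of the 800 target even when half its scores sit in the basic band or above.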
California uses the API, an improvement growth model, to rank
schools in three different ways. First, schools are divided into types (elementary,
middle, and high school). The API is assigned, and then schools are broken into
deciles that each represent 10% of schools. Decile ranks are assigned from 1
(lowest) to 10 (highest) and provide information on how a school compares to all
other schools in California. Second, schools are placed in groups of 100 similar
schools. Each school is then assigned a rank based on its predicted
performance using a regression analysis procedure. This similar-schools rank is
then posted to provide information on the performance of schools facing some of
the same types of challenges. Third, each school is expected to achieve a target
for improving its API score, and the state tracks the change from year to year.
Starting in 2005, numerically significant subgroups were also assigned an
API for measuring progress. API scores range from 200 (lowest) to 1,000
(highest), and all schools are expected to reach a score of at least 800 (EdSource,
2007).
The federal measure, Adequate Yearly Progress (AYP), originates from
the accountability system under NCLB. AYP is a status model of accountability.
Therefore, the focus of NCLB is on the number of students scoring proficient and
advanced at the school-wide level and for all numerically significant subgroups at
a particular school. A numerically significant subgroup is any racial, ethnic, or
socioeconomic group of students that has a minimum of 100 students or
constitutes at least 15% of the overall enrollment of the school. Additionally,
schools are held accountable for testing 95% of students in each subgroup.
Failure to meet AYP or to reach a minimum participation rate of 95% results in
sanctions. Schools and districts failing to make their AYP for two consecutive
years face consequences, which escalate each year for a school in program
improvement and range from tutoring services to the closing of a school. A
school is designated a program improvement school when the percentage of
students designated proficient and advanced falls below the cut points
established by NCLB. The cut points are set for school-wide attainment as well
as for numerically significant subgroups. These cut points increase annually: the
2008 cut point for ELA was 35.2%, and the 2009 cut point is 46%. Each year the
percentage needed to demonstrate AYP will increase by a minimum of 10%. By
2014, 100% of the children in the United States are to be proficient or advanced.
The dependent variable for this study is the Experimental School ELL
performance on the CST. The ELL subgroup at the Experimental School has not
demonstrated adequate yearly progress for 6 consecutive years. The urgency for
continuous improvement is apparent.
Procedures. The STAR results were compiled for the ELL in the
experimental group who met the criteria established earlier in the study, for the
2007 and 2008 test administrations. These outcomes were analyzed for
percentage gain, Cohen's d (effect size), and statistical significance via a
dependent-groups t-test.
Interviews
Direct instruction has been shown to have a significant effect on ELL
(Lachat, 2004), but the fidelity of implementation can affect the success of a
proven model. Knowing something works and not using it is the same as not
knowing (Reeves, 2006).
Interviews may surface feelings, values, and opinions that may not
otherwise be observed (Patton, 2002). The researcher attempted to focus on the
reaction, learning, and transfer as identified by Champion (2002), but as Patton
(2002) states, "I was prepared to go with the flow" (p. 342). The interviews
focused on teacher perceptions of the TESS and how staff felt TESS had
assisted in the success of ELL. There was no guide, but there were central
themes for the informal conversations.
1. What do teachers think about the TESS model?
2. What new learning occurred or what current practices were
validated?
3. How has TESS affected the students and teachers at the
Experimental School?
Procedures. The entire staff at the Experimental School was interviewed,
including classified staff and administration. The interviews occurred
individually and in small groups. The data were gathered in an attempt to
determine the staff's overall perception of the impact, or possible lack of impact,
of the TESS. The interviews were completed by October 2008.
Interview durations varied from 10 minutes for individual interviews
to 50 minutes for small groups. Teachers at all grade levels were interviewed to
compare possible differences in teacher perception of the effect of TESS. The
information was collected in a note-taking format.
Observations
Observations were used to gather another perspective on the fidelity of
implementation. Filling the gap between goals and reality is the essence of this
work; the knowing-doing gap is one of the major obstacles to change (Reeves,
2006). The observations provided information that may not have been evident in
the interviews and allowed comparison to interview responses that participants
may have been reluctant to discuss. The observations may also reveal aspects of
the implementation that were missed or inaccurately represented in the
interviews (Patton, 2002).
Procedures. Observations in grades 3 through 6 occurred during the
fall during the English Language Arts portion of the instructional day at the
Experimental School. An observational document that specifically addresses the
TESS direct-instruction model was employed (Appendix B). Observations were
10–25 minutes in length. Handwritten notes were used to ensure an
unobtrusive observational method. Because the researcher's current
responsibilities required observations of instruction, the researcher was both a
participant and a non-participant observer.
Summative Analysis
The purpose of this intervention study was the determination of statistical
and practical significance through quantitative analysis utilizing STAR data from
the English Language Arts portion of the CST. The 2007 and 2008 results were
used in the analysis of both a pre/post dependent-groups design and a
non-equivalent control group design.
The pre/post design was used to analyze the change at the Experimental
School from pre-intervention (2007) to post-intervention (2008). The following
statistics were used for each dependent variable outlined above: (a) a dependent-
groups t-test to assess statistical significance of the pre/post change, (b) Cohen's
d to assess practical significance, and (c) percentage gain to assess practical
significance per NCLB.
In addition, an informal comparison was completed regarding growth at
the benchmark school, Benchmark School, using API growth and the raw change
in the percentage of ELL who were proficient and advanced from 2007 to
2008. The groups were not randomly assigned, and the treatment was
administered only to the group at the Experimental School. The intent of the
comparison was to measure the achievement of the Experimental School against
that of a high-performing school, Benchmark School. The comparison should
provide insights into practices that give ELL the greatest opportunity to learn
(Marzano, 2003). The Benchmark School results should provide additional
information on the effectiveness of the TESS model relative to results at a school
without the treatment. As with any benchmarking methodology, the hope is that
best practices can be replicated rather than created anew. There is no need
to reinvent the wheel (Tucker, 1996).
CHAPTER 4
RESULTS
The primary purpose of this study was to determine the impact of direct
instruction, specifically the Total Educational Support System design model, on
the achievement of ELL at Experimental Elementary on the English Language
Arts portion of the California Standards Test. A sequential explanatory design
(Creswell, 2003) was utilized, including a pre/post dependent-groups t-test and a
non-equivalent comparison group for the summative evaluation portion of this
study. As mentioned previously, the comparison group was a benchmark school
according to Tucker's definition in Benchmarking: A Guide for Educators
(Tucker, 1996). Benchmark Elementary was selected from the list of similar
schools provided by the California Department of Education. Benchmark had a
higher Academic Performance Index, state ranking, and similar-schools ranking
than Experimental Elementary. The results of the non-equivalent comparison
group, Benchmark Elementary, were compared to the post-test results of ELL in
grades 3-6 at Experimental Elementary on the English Language Arts portion of
the California Standards Test. Benchmark Elementary was part of the study in
order to improve the generalizability of the results using the concept of proximal
similarity (Creswell, 2003).
The methodology for comparison focused on three dependent variables at
Benchmark Elementary and Experimental Elementary: (a) California Standards
Test (CST) English Language Arts (ELA) performance band scores for ELL,
(b) the percentage of ELL who scored basic and above on the CST ELA, and
(c) the percentage of ELL who scored proficient and above on the CST ELA.
The performance band scores were coded as follows: 0 = Far Below Basic,
1 = Below Basic, 2 = Basic, 3 = Proficient, and 4 = Advanced.
Pre/Post Dependent Groups’ Design
Selection bias was a limitation of the study, because the groups were not
randomly selected, and it can affect both internal and external validity. To
address this limitation and enhance internal validity, a dependent-groups design
was utilized, in which each subject served as his or her own control (King &
Minium, 2003). The dependent group consisted of English Language Learner
students at Experimental Elementary who had taken the English Language Arts
portion of the CST in both 2007 and 2008. These results were analyzed for both
statistical and practical significance.
The dependent-groups design was employed to analyze the change at
Experimental Elementary from 2007 (pre-intervention) to 2008 (post-
intervention) for English Language Learners in grades 3-6 in English Language
Arts (ELA). The following statistics were used to analyze California Standards
Test ELA performance band scores at the experimental school:
1. A dependent-groups t-test to assess the statistical significance of the
change (criterion for statistical significance = p < .15).
2. Cohen's d to assess practical significance (criterion for practical
significance = d > .20).
3. Raw change from 2007 to 2008 to assess practical significance
(criterion for practical significance = 10%).
4. Percentage change to assess practical significance (criterion for
practical significance = 10% improvement).
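The statistics listed above can be sketched in Python with SciPy. The band scores below are hypothetical values on the 0-4 coding, not the study's actual data, and the variable names are illustrative.

```python
import numpy as np
from scipy import stats

# Hypothetical paired performance-band scores (0-4) for ten students.
pre = np.array([1, 2, 0, 3, 2, 1, 2, 3, 1, 2], dtype=float)   # 2007
post = np.array([2, 2, 1, 3, 3, 1, 3, 4, 2, 2], dtype=float)  # 2008

# 1. Dependent-groups t-test (criterion: p < .15).
t, p = stats.ttest_rel(post, pre)

# 2. Cohen's d: mean change over the pre-test SD (criterion: d > .20).
d = (post.mean() - pre.mean()) / pre.std(ddof=1)

# 3. Raw change in the mean band score.
raw_change = post.mean() - pre.mean()

# 4. Percentage change: raw change over the pre-test mean (criterion: 10%).
pct_change = raw_change / pre.mean()

print(f"t = {t:.3f}, p = {p:.4f}, d = {d:.2f}, "
      f"raw = {raw_change:.2f}, pct = {pct_change:.1%}")
```

Computing Cohen's d against the pre-test standard deviation mirrors the effect-size definition given later in the Practical Significance section of Chapter 4.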
Non-equivalent Comparison Group Design
This design included an experimental group and one comparison group,
neither of which was randomly selected. Guided by Tucker's (1996) process for
identifying a benchmark school, Benchmark School was selected as the
comparison school. Benchmark School was identified as a school similar to
Experimental by the California Department of Education: Benchmark
Elementary School and Experimental Elementary School were similar in their
ethnic/racial composition, socioeconomic status, English Learner percentages,
and parent education levels. Benchmark School was selected because of its
performance. Benchmark School had a higher Academic Performance Index
(API) in 2007 than Experimental Elementary (Benchmark's API 765,
Experimental's API 681). Benchmark was not in program improvement, whereas
Experimental was a program improvement school in year 4. Benchmark School
had a higher state rank in 2007 (Benchmark State Rank = 5, Experimental State
Rank = 2) and a higher similar-schools rank in 2007 (Benchmark Similar School
Rank = 10, Experimental Similar School Rank = 5). Both Benchmark and
Experimental Elementary schools were above the similar-schools median growth
score for 2008 (702) as determined by the California Department of Education.
The treatment, a direct-instruction model known as Total Education
Support System (TESS), was administered to the experimental group’s ELL in
grades 2, 3, 4, and 5 during the 2007–2008 school year. The statistical analysis
for the experimental versus comparison group contrast was descriptive rather than
inferential and focused on data gathered from the state and federal accountability
system.
Pre/post Dependent Groups’ Results
Statistical Significance
Table 8 illustrates the pre/post statistical test findings (p < .15) for the
experimental school's ELL, who were in grades 2 through 5 in 2007. In Table 8,
the ELL participants are disaggregated school-wide and by grade.
Table 8
Pre- Versus Post-Intervention CST ELA Performance Band
Differences for ELL

Grouping   Pre N   Post N   Pre M   Post M   Difference   t-ratio   Observed
(ELL)      2007    2008     2007    2008                            Probability
School     151     151      1.83    1.99      .16          2.481    .014*
Grade 3    32      32       1.75    1.59     -.16          -.961    .344
Grade 4    32      32       1.62    2.28      .65          4.71     .000*
Grade 5    44      44       2.27    2.20     -.06          -.62     .538
Grade 6    43      43       1.58    1.86      .27          2.71     .009*

*p < .15
Results in row one indicate that, in the experimental school, there was a
significant increase in ELL CST ELA performance bands school-wide for 2007-
2008: t(150) = 2.481, p = .014. Statistically significant results (p < .150) also
were found for the 4th and 6th grades. The observed gain for students in the 4th
grade was .65, over one-half of a performance band from the 2007 results,
t(31) = 4.715, p = .000. The 6th-grade ELL had a smaller, though still
observable, gain, t(42) = 2.716, p = .009. There was a decrease in the mean for
grade 3, t(31) = -.961, p = .344, and grade 5 also exhibited a decrease in the
mean, t(43) = -.621, p = .538, but neither decrease was statistically significant.
Further disaggregation of the data illustrates similar results for all
students, as shown in Table 9. One noteworthy item in Table 9 is that all
students in the 2008 3rd-grade cohort showed a decrease of .28 of a
performance band between the 2007 (pre-test) mean and the 2008 (post-test)
mean. This school-wide decrease was statistically significant, differing from the
ELL 3rd-grade cohort in its statistical indications, t(49) = -2.527, p = .015.
Also, the 5th-grade school-wide results demonstrated a gain of .05 performance
bands from the 2007 pre-test mean to the 2008 post-test mean, unlike the 5th-
grade ELL subgroup, which demonstrated a decrease in the mean.
Table 9
Pre- Versus Post-Intervention CST ELA Performance Band
Differences for All Students

Grouping        Pre N   Post N   Pre M   Post M   Difference   t-ratio   Observed
(School-wide)   2007    2008     2007    2008                            Probability
School 3-6      226     226      1.85    2.08     +.23          2.73     .007*
Grade 3         50      50       1.88    1.60     -.28         -2.52     .015*
Grade 4         45      45       1.53    2.16     +.63          5.58     .000*
Grade 5         68      68       2.16    2.21     +.05          .49      .625
Grade 6         63      63       1.73    2.33     +.60          1.86     .067*

*p < .150.
Practical Significance
Statistical significance is highly susceptible to sampling error (Creswell,
2003). Ergo practical significance becomes an important aspect of the study.
Practical significance was assessed utilizing three methods: raw change from
2007–2008, effect size (Cohen’s d), and percentage change. The raw change
score is the post-test score minus the pre-test score. Effect size was computed
using the ratio of the change from 2007–2008 to the pre-test standard deviation.
Percentage change was assessed using the ratio of the change from 2007–2008 to
the pre-test mean. Results are shown in Table 10.
Table 10 displays the practical significance for ELL school-wide and by
individual grade levels.
Table 10
Pre- Versus Post-Intervention ELL CST ELA Performance Band Differences:
Practical Significance

Grouping    Pre M   Post M   Pre/Post   Pre    Effect   Percent
(ELL 3-6)   2007    2008     Change     SD     Size     Change
School      1.82    1.99     +.17       1.06   +.15     +.09
Grade 3     1.75    1.59     -.16       1.22   -.13     -.08
Grade 4     1.62    2.28     +.66        .79   +.83     +.40
Grade 5     2.27    2.20     -.07       1.13   -.06     -.03
Grade 6     1.58    1.86     +.28        .91   +.30     +.18

Note. Criteria for practical significance: effect size > .20; percent change > .10.
Raw Change. School-wide and at grades 4 and 6, there was a gain
between the pre-test mean and post-test mean. There was a significant gain by
the 4th-grade cohort (+.66, more than two-thirds of a performance band). The
6th-grade students' gain was smaller, +.28, but still over one-fourth of a
performance band. As a school, the ELL mean improved by .17 from the 2007
pre-test to the 2008 post-test. The 3rd- and 5th-grade means declined from 2007
to 2008, with the 3rd grade having the greater decline (-.16).
Effect Size. Both the 4th-grade ELL (+.83) and the 6th-grade ELL (+.30)
exceeded the pre-study standard of practical significance of .20. The effect size
for the 4th grade was over four-fifths of a standard deviation.
Percent Change. Practical significance was also assessed utilizing the
percentage change in the performance band scores from pre-test (2007) to post-
test (2008). The preset standard for practical significance was 10%. The data in
Table 10 confirm that there was an increase in the percentage change from 2007
(pre-test) to 2008 (post-test) in the school-wide results as well as in two grade
levels, 4th and 6th. The school-wide percentage change increased by .09 but did
not meet the pre-study cut point for practical significance of 10%. The
percentage gain for the 6th-grade students (+.18) exceeded the standard for
practical significance, although their pre-test mean was the lowest of the grade
levels. The 4th-grade percentage gain (+.40) was noteworthy both in its
magnitude and because that grade's pre-test mean (1.62) was the second-lowest
for the 2007 pre-test while its post-test mean (2.28) was the highest of any grade
level. Table 10 provides the details for these comparisons.
Tables 11 and 12 summarize the pre/post results for two additional
indices: the percentage of students who scored basic and above and the
percentage of students who scored proficient and above (per NCLB; United
States Department of Education, 2002) on the California Standards Test in
English Language Arts. The basic-and-above category was added to provide
additional information about the probability that all students would be proficient
and above by 2014. The assumption was that an English Language Learner who
had scored basic on the English Language Arts portion of the California
Standards Test would be more likely to score proficient in the future. Also, with
a possible reauthorization of NCLB, basic and above might become a more
pragmatic target for 2014.
The results in Table 11 illustrate a practically significant (>10%) gain
school-wide (11%) and at grade 4 (39%) for ELL scoring basic and above on the
English Language Arts portion of the CST. Table 11 also shows positive percent
gains at grade 3 (5%), grade 5 (2.5%), and grade 6 (7.1%), although these did
not meet the criterion for practical significance. This is the first analysis that
yielded a positive gain in all areas. The gains at grades 3 and 5, although
minimal and not practically significant, were positive, whereas previous analyses
found negative results for the achievement gains at grades 3 and 5. Table 11
does not show practical significance at grade 6, whereas the earlier analyses in
Tables 8 and 9 found statistically significant results for grade 6.
Table 11
Pre- Versus Post-Intervention ELL CST ELA Percent Basic and Above

Grouping   Pre    Post   Pre/Post   Percent
(ELL)      2007   2008   Change     Change
School     .63    .70    +.07       11%
Grade 3    .59    .62    +.03       5.0%
Grade 4    .56    .78    +.22       39%
Grade 5    .77    .79    +.02       2.5%
Grade 6    .56    .60    +.04       7.1%
The findings in Table 12 mirror the findings reported in Tables 8 and 9.
Practically significant growth in CST ELA performance occurred at the school-
wide level and at grades 4 and 6 for ELL scoring proficient and above. Table 12
also illustrates declines in achievement at grade 3 (-48%) and at grade 5 (-20%)
for ELL scoring proficient and above. Two of the changes in the percent of
students proficient and above were very large from 2007 (pre-test) to 2008
(post-test): grade 4, +388%, and grade 6, +78%. A word of caution is
necessary, however. These results must be qualified to the extent that the 2007
pre-test scores for these same students were low. Only 9% of the 4th-grade
students were proficient or advanced on the ELA CST in 3rd grade, and just
14% of the 6th-grade students were proficient on the 2007 ELA CST in 5th
grade. From such low baselines, gains of this magnitude are plausible, and the
researcher warns against the assumption that the 4th-grade results were
substantially better than those of other grades.
The 4th-grade findings show another noteworthy change from the 2007
pre-test to the 2008 post-test. The percent of 3rd-grade ELL who scored
proficient and above on the 2007 pre-test was the lowest of all grades in 2007
(9%). In 2008, this same dependent group of ELL had the highest percentage of
students scoring proficient and above (44%). Once again, causality could not be
determined: there was no control for the vertical alignment of the California
Standards Test, nor was there a control for teacher efficacy. The findings and
scope of this study were isolated to Experimental Elementary because random
assignment did not occur.
Table 12
Pre- Versus Post-Intervention ELL CST ELA Percent Proficient and Above

Grouping   Pre    Post   Pre/Post   Percent
(ELL)      2007   2008   Change     Change
School     .26    .30    +.04       +15.4%
Grade 3    .31    .16    -.15       -48%
Grade 4    .09    .44    +.35       +388%
Grade 5    .45    .36    -.09       -20%
Grade 6    .14    .25    +.11       +78%
Comparison School Results
The design of this study incorporated the use of a comparison/benchmark
school, Benchmark School. Benchmark School was selected from the similar
schools list for the 2007–2008 Accountability Progress Report (APR). The CDE
classifies similar schools utilizing 17 characteristics that include, but are not
limited to, school population, demographics, free and reduced-price lunch
percentage, English Language Learner percentage, and average level of parent
education. The comparison school was also selected, as previously explained,
because of its outstanding academic performance as measured by the Academic
Performance Index, state school rank, and similar-schools ranking. The
comparison school was rated higher in all of these areas and was not identified
as a program improvement school.
Table 6 illustrated the comparison of similar-school characteristics for the
experimental school and the benchmark/comparison school: API, parent
education level, percent free and reduced-price lunch, and percent ELL.
API Comparisons
In further analysis of the characteristics of Benchmark Elementary School
and Experimental Elementary, Experimental Elementary would have been
expected to have a higher API than Benchmark School. Experimental had a
higher average parent education level, a lower percentage of students qualifying
for free and reduced-price lunch, and a lower percentage of students identified as
ELL than did Benchmark School. With this knowledge, the researcher expected
Experimental's API to be higher than Benchmark's API. However, that was not
the case. Table 13 illustrates the differences in the two schools' API from 1999
to the present.
Table 13
Comparison of API Base Scores of Experimental and Benchmark School from
1999–2008

Year           1999   2000   2001   2002   2003   2004   2005   2006   2007   2008*
Experimental   457    489    515    567    619    640    636    656    681    724
Benchmark      421    490    593    568    648    666    696    733    765    766
Difference     +36    -1     -78    -1     -29    -26    -60    -77    -84    -42

*2008 growth score.
Over this period, Benchmark School's API increased 345 points while
Experimental's API increased by 267 points. Table 14 illustrates the comparison
between the experimental and benchmark schools utilizing the state
accountability system's API for the 2007 and 2008 school years. The
experimental school out-gained the benchmark school by 42 points on the 2008
API (Experimental = +43, Benchmark = +1).
Table 14
API School-Wide Comparisons

School                API 2007   API 2008   Gain
Experimental School   681        724        +43
Benchmark School      765        766        +1
Table 15 illustrates an even greater gain on the API for the experimental
school's ELL subgroup (+48 API points) compared to the benchmark school's
(+1 API point), a 47-point difference. The data in Tables 14 and 15 illustrate
that the experimental school outperformed the benchmark school during the
2007–2008 school year as measured by the API.
Table 15
API English Language Learner Subgroup Comparison

School             API 2007   API 2008   Gain
Experimental ELL   667        715        +48
Benchmark ELL      762        763        +1
The previous API comparisons and analyses were based on the API
components for elementary schools, which are a confluence of the following five
components: English Language Arts, Mathematics, 5th-grade science, the 4th-
grade writing sample, and CAT/6 results. Even though the ELA portion of the
CST is weighted at 48%, the determination of the API is influenced by the
results of the other areas. The previous comparisons are also limited by the fact
that the year-to-year API ratings were calculated with different cohorts of
students.
The researcher therefore calculated an API, shown in Table 16, for
Experimental School utilizing only the English Language Arts results of the
students who were part of the dependent group used for the study. Row 1 of
Table 16 shows the growth of Experimental Elementary using all of the
components for calculating the API, as posted by the California Department of
Education.
Utilizing the API calculation spreadsheet provided by the California
Department of Education, the researcher calculated grade-level APIs utilizing
only the 151 ELL in this study who had scores for both the 2007 English
Language Arts California Standards Test (pre-test) and the 2008 ELA CST
(post-test). The results of these calculations were consistent with previous tables
that demonstrated an increase in student achievement at the school-wide level
and for grades 4 and 6, both for all students and for ELL. Row 2 shows that the
ELL actually grew more than the overall school in English Language Arts and
that grade 3 ELL had a smaller drop in their API than all students. The ELL
API at grade 6 gained more than the grade 6 API for all students.
Table 16
Results of the API at Experimental School Utilizing ELL Results from Students
Who Have CST Results for Both the 2007 and 2008 CST in ELA

English Language Arts Only API
               All Students                  ELL
             Pre 2007   Post 2008   Gain    2007   2008   Gain
School       681        724         +43     667    715    +48
ELA School   677        699         +22     669    701    +32
Grade 3      673        622         -51     647    628    -19
Grade 4      619        745         +126    644    763    +119
Grade 5      731        739         +8      742    740    -2
Grade 6      651        699         +48     627    691    +64
Adequate Yearly Progress Comparison
Adequate Yearly Progress (AYP), unlike the California Academic
Performance Index (API), gives credit only for the percentage of students who are
identified as proficient or advanced on the state-wide achievement tests, known as
the California Standards Test (CST). Each elementary, middle, and secondary
school must demonstrate progress in student achievement school-wide and on the
achievement of numerically significant subgroups of students within schools. The
primary measure of success is the achievement of a specific, and gradually
increasing, percentage of students in each subgroup scoring proficient or
advanced on the CST in English-Language Arts and Mathematics.
The AYP target for percent proficient in English-Language Arts for 2008
was 35.2%. Table 17 shows that Benchmark Elementary surpassed the AYP
target for 2008. As previously stated, Experimental Elementary did demonstrate
AYP even though it did not meet the 35.2% proficient-and-advanced target for
ELL; Experimental Elementary had 32.8% of its ELL identified as proficient or
better. The experimental school met the criteria using an alternative method for
determining AYP known as "safe harbor." A school, LEA, or subgroup meets
the criteria for safe harbor, an alternate method of meeting the annual
measurable objectives (AMOs), if it shows progress in moving students from
scoring below proficient to proficient (Appendix C). Experimental Elementary
School posted a greater increase in ELL identified as proficient and above from
2007 to 2008 than did Benchmark Elementary during the same period.
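The safe-harbor check described above can be sketched in Python. This is a simplified illustration that assumes the standard NCLB safe-harbor rule (the percentage of students scoring below proficient must shrink by at least 10% from the prior year) and uses the ELL percentages reported in Table 17; the full determination also involves participation-rate and other criteria.

```python
def meets_safe_harbor(prior_proficient, current_proficient, threshold=0.10):
    """Check the safe-harbor condition: the share of students below
    proficient must shrink by at least `threshold` year over year.

    Arguments are fractions of students scoring proficient or above.
    """
    prior_below = 1.0 - prior_proficient
    current_below = 1.0 - current_proficient
    reduction = (prior_below - current_below) / prior_below
    return reduction >= threshold, reduction

# Experimental Elementary ELL, per Table 17: 23.8% proficient in 2007,
# 32.8% proficient in 2008.
met, reduction = meets_safe_harbor(0.238, 0.328)
print(met, round(reduction, 3))  # True 0.118
```

Under this assumed rule, the below-proficient share fell from 76.2% to 67.2%, a reduction of roughly 11.8%, which clears the 10% threshold and is consistent with the school meeting AYP through safe harbor.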
These AMOs increase every year until 2014, when all students are
expected to be proficient or advanced in ELA and Mathematics (United States
Department of Education, 2002). In addition, schools are accountable for testing
95% of students within each subgroup.
An elementary school must meet 17 criteria for 2008 in order to
demonstrate AYP. If a school does not meet the AMO in all 17 areas for 2
consecutive years, the school is then identified as a Program Improvement School
with escalating consequences that range from choice for students to opt out of
attending the school, to the school being dissolved, reconfigured, re-staffed, and
reopened as a charter or magnet. In some states, not in California, local control is
dissolved and the school is placed in receivership of a sort with an outside entity
determining the next steps for improvement.
Experimental Elementary was in year 4 of program improvement (PI);
Benchmark School was not a program improvement school. Even though
Experimental School was in year 4 of program improvement, it did meet all 17
criteria for AYP and therefore remained frozen in year 4.
Experimental Elementary had been identified for program improvement
as a result of not meeting the criteria in one of the 17 areas of compliance for 2
consecutive years. A school that is frozen in program improvement status is a
school that has met the criteria for AYP for only 1 year; a school must meet
these criteria for 2 consecutive years to exit program improvement.
Experimental Elementary demonstrated AYP for 2008 and must meet the 17
criteria again in 2009 to exit program improvement. However, Experimental
School did not advance into program improvement year 5 in 2008. These
benchmarks advance every year until 2014, when all students are expected to be
proficient or better in Language Arts and Mathematics.
Table 17 provides information for the experimental and benchmark
schools on the federal accountability measure known as Adequate Yearly
Progress (AYP). AYP originated from the accountability system under No Child
Left Behind (United States Department of Education, 2002).
Table 17
AYP English Language Arts for the Experimental and Comparison English
Language Learner Students: Percent at or Above Proficient

              Experimental AYP          Benchmark AYP
Group         2007    2008    Gain      2007    2008    Gain
ELL           23.8%   32.8%   +9%       36.1%   35.5%   -.6%
School-wide   28.3%   36.9%   +8.6%     37.0%   36.7%   -.3%
Table 17 illustrates that the experimental school outperformed the
benchmark school in the percentage of ELL identified as proficient and
advanced. The experimental school increased the percentage of students in this
category by 9%, whereas the benchmark school actually demonstrated a decline
in its percentage.
CHAPTER 5
SUMMARY, DISCUSSION, AND RECOMMENDATIONS
Overview
The preceding chapters in this study provided the rationale and
methodology for the researcher's evaluation of the direct instruction
intervention and its impact on the achievement of the ELL at Experimental
Elementary School. This chapter concludes the study by elaborating on the
implications of the quantitative findings, illuminating conclusions drawn from the
informal qualitative data, and comparing results to the benchmark school.
Recommendations are provided for the site studied, along with possible
directions for future study.
Purpose and Method
The purpose of this study was to examine the impact of the intervention,
direct instruction, specifically the Total Educational Support System (TESS)
model, on the achievement of English Language Learners (ELL) at Experimental
Elementary. The changes in individual ELL students' performance bands on the
California Standards Test (CST) for English Language Arts from 2007 (pre-
intervention) to 2008 (post-intervention) were analyzed by school and grade
level at Experimental Elementary School with a dependent groups design. A
sequential explanatory design model (Creswell, 2003) was used for the study
with a focus predominantly
on the quantitative data and minimally on the qualitative data. These data were
analyzed to measure the effectiveness of the intervention on the three dependent
variables: (a) percent proficient and above on English Language Arts as
measured by the California Standards Test, (b) percent basic and above on
English-Language Arts as measured by the CST, and (c) proficiency band scores
on English Language Arts as measured by the CST for the ELL subgroup.
The participants in this dependent groups study were ELL at Experimental
Elementary, a traditional K-6 school in a suburban setting 25 miles from
downtown Los Angeles. The dependent groups design was selected in order to
enhance the validity of the results. One limitation of any study is selection
bias; the dependent groups design allows for some control of this issue by
letting participants serve as their own control group (Creswell, 2003).
Therefore, the students in this study were only those ELL who had participated in
both the 2007 (pre-intervention) and 2008 (post-intervention) administrations
of the California Standards Test in English Language Arts. There were 151 ELL
in grades 3–6 who met these qualifications and who had received the direct
instruction intervention, the Total Educational Support System (TESS) model.
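The selection rule for the dependent groups design (only students tested in both 2007 and 2008 are included, so each student serves as their own control) can be sketched in code. This is a minimal illustration; the student IDs and band scores below are invented for demonstration and are not the study's data.

```python
# Sketch of the dependent-groups (paired) selection rule: keep only
# students with both a pre (2007) and post (2008) CST ELA score.
# All values below are hypothetical.

def paired_students(pre_scores, post_scores):
    """Keep only students tested in both years (the study's N = 151 rule)."""
    common = pre_scores.keys() & post_scores.keys()
    return [(pre_scores[s], post_scores[s]) for s in sorted(common)]

# Hypothetical performance-band scores (0 = far below basic ... 4 = advanced)
pre  = {"s1": 1, "s2": 2, "s3": 0, "s4": 3}   # 2007, pre-intervention
post = {"s1": 2, "s2": 2, "s3": 1, "s5": 4}   # 2008; s4 left, s5 arrived

pairs = paired_students(pre, post)            # s4 and s5 are excluded
mean_change = sum(b - a for a, b in pairs) / len(pairs)
print(len(pairs), mean_change)
```

Because the same students appear in both columns, the mean change measures growth rather than a difference between two unrelated cohorts.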
Informal interviews and observations of the Experimental School staff
were conducted in grades 3–6 to obtain formative information about the fidelity
of implementation of the direct instruction intervention model, TESS. Interviews
varied from 10 to 30 minutes in length. Teachers were interviewed individually,
in small groups, and as a whole faculty. The observations were informal and
varied in length from a 2–3 minute walk-through to a 50-minute observation.
Intervention
The Total Educational Support System (TESS) was the name of the direct
instruction model that was utilized for this study. A detailed explanation and
description of the direct instruction strategy was previously outlined in Chapters 2
and 3, including a comparative analysis with other instructional models in Table 5.
Summary of Findings—Experimental Elementary School
The following section provides evidence in answering the research
question that provided the focus for this study: Does the intervention, direct
instruction, have an effect on the academic achievement of English Language
Learners (ELL) on the English Language Arts (ELA) portion of the California
Standards Test (CST)?
Quantitative Findings
Overall, there was a positive gain in the Academic Performance Index and
Adequate Yearly Progress for the ELL at the experimental school. Even though
causality cannot be proven, the direct instruction model intervention (TESS)
seems to have had a positive impact on the overall achievement of the ELL on the
English Language Arts portion of the California Standards Test. As previously
mentioned, a dependent groups design was utilized for analysis of data from the
2007 CST ELA results (pre-test) compared to the 2008 CST ELA (post-test)
results for the 151 ELL. The overall Academic Performance Index (API) score of
the entire school increased from a base (pre-intervention) of 681 in 2007 to 724
(post-intervention) in 2008. This was a growth of 43 API points. As was
illustrated in Table 14 the ELL subgroup grew from a pre-intervention base of
667 to a post-intervention growth API of 715 for a growth of 48 API points. One
of the limitations of the study utilizing the API as a comparison tool was that the
2007 (pre-intervention) school wide API and 2008 (post-intervention) school
wide API were computed using different cohorts of children. In order to address
this concern, the researcher computed an API strictly for the 151 ELL who were
participants in the study utilizing only the English Language Arts results from the
151 participants in the dependent groups’ design for 2007 (pre-intervention)
compared to the 2008 (post-intervention). This comparison is illustrated in
Table 16.
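The cohort-specific API the researcher computed for the 151 participants can be approximated as a weighted mean over CST performance bands. In the sketch below, the band weights are California's published API performance-level weights; the band counts are hypothetical and are not intended to reproduce the study's reported APIs.

```python
# Minimal sketch of a single-content-area API for a fixed cohort.
# Band weights are the CDE's API performance-level weights; the
# counts below are invented for illustration.

API_WEIGHTS = {
    "far below basic": 200,
    "below basic": 500,
    "basic": 700,
    "proficient": 875,
    "advanced": 1000,
}

def cohort_api(band_counts):
    """Weighted mean of band weights over all tested students, rounded."""
    total = sum(band_counts.values())
    points = sum(API_WEIGHTS[band] * n for band, n in band_counts.items())
    return round(points / total)

# Hypothetical distribution of 151 students across bands
counts_2007 = {"far below basic": 18, "below basic": 28, "basic": 55,
               "proficient": 38, "advanced": 12}
print(cohort_api(counts_2007))
```

Computing the same statistic for the same 151 students in both years removes the different-cohorts limitation the researcher identified for the school-wide API.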
Table 16 illustrated that, utilizing solely the English Language
Arts results, the API for all students in grades 3–6 grew from a 2007
(pre-intervention) base API of 677 to a 2008 (post-intervention) growth API of
699, a positive gain of 22 points. For the 151 ELL who were participants in the
study, the API increased during the same assessment period from a 2007
(pre-intervention) base API of 669 to a 2008 (post-intervention) growth API of
701, a gain of 32 points. The results using only the ELA were also consistent
with the other analyses in regard to gains and declines in API at the various
grade levels: grades 4 and 6 posted positive gains, and grades 3 and 5 posted
declines in their grade-level API.
A positive percentage gain in students identified as basic and above was
demonstrated in comparing the pre-intervention (2007) results to the
post-intervention (2008) results at Experimental Elementary School. In 2007
(pre-intervention), 63% of the students at the school were identified as basic
and above on the ELA portion of the California Standards Test (CST); in 2008
(post-intervention), 70% of students were basic and above on the ELA portion of
the CST, a gain of 7 percentage points (an 11% relative gain, consistent with
Table 19). In a further analysis utilizing the ELL results exclusively on the ELA
portion of the CST, every grade level demonstrated a positive percent gain for
ELL scoring basic and above, with the 4th grade ELL subgroup posting a 39%
gain in the percentage of students basic and above.
As previously highlighted in Chapter 4, the percentage of students
identified as proficient and advanced on the ELA portion of the CST increased
from 28.3% in 2007 (pre-intervention) to 36.9% in 2008 (post-intervention), an
increase of 8.6 percentage points. The percent of ELL scoring proficient and
advanced on the ELA portion of the CST increased from 23.8% in 2007
(pre-intervention) to 32.8% in 2008 (post-intervention), an increase of 9
percentage points.
As a result of these increases in proficient and advanced students,
Experimental Elementary School met all criteria for AYP in 2008 (post-
intervention) and froze in program improvement at level 4. A school that freezes
in Program Improvement is a school that has met all of the criteria for AYP for 1
year. It takes 2 years of meeting all criteria for a school to exit Program
Improvement. Experimental Elementary School had failed to demonstrate AYP
for 6 years prior to the intervention (2001-2007). Direct Instruction was the
primary intervention that was implemented in order to address this lack of success
in meeting the adequate yearly progress criteria.
Statistical Significance
In reviewing the findings for statistical significance, as seen in Table 8,
Experimental Elementary School demonstrated statistically significant positive
gains in the following areas: (a) school-wide, (b) 4th grade, (c) 6th grade,
(d) ELL school-wide, (e) ELL grade 4, and (f) ELL grade 6. The pre/post changes
in the means and the increases in these six components played a significant role
in Experimental School's ability to demonstrate AYP in all areas for the 2008
school year and to exceed its growth target on the API in the same time frame.
Table 8 reflected a significant increase in the CST English Language Arts
performance band scores for the ELL subgroup from pre-intervention (2007) to
post-intervention (2008). There was a decrease in the ELL performance bands
for grades 3 and 5, but the decrease was not statistically significant for the
English Language Learner subgroup. Table 9, however, did show a statistically
significant decline in the overall achievement for all students in 3rd grade at
Experimental Elementary.
Practical Significance
A closer look at these data shows that the percentage of ELL scoring far
below basic did decrease nominally, from 18.7% in 2007 (pre-intervention) to
15.8% in 2008. The percentage scoring below basic (BB) remained constant at
25% in 2007 and 2008. The percentage scoring basic increased from 25%
pre-intervention in 2007 to 43.7% post-intervention in 2008. The percent of
ELL who were proficient and advanced decreased, which could be attributable to
the increase in basic students in the 2008 administration of the California
Standards Test. Third grade ELL scoring proficient and advanced on the English
Language Arts portion of the CST decreased from 31% pre-intervention in 2007
to 15.6% post-intervention in 2008. In looking at the 3rd grade results overall,
however, the ELL actually performed as well as or better than the school-wide
results for 3rd grade. The results for all 3rd grade students showed an increase
in far below basic from 16% in 2007 to 22% in 2008. The performance of the
ELL and of the 3rd grade overall was similar for below basic, which decreased
from 24% in 2007 to 20% in 2008. The percentage of students scoring basic
increased overall in the 3rd grade from 24% in 2007 to 36% in 2008. This
increase in the percentage of students scoring basic was similar to the English
Language Learners' results, while the grade as a whole had a decrease in the
percentage of students scoring proficient and advanced, from 36% in 2007 to
22% in 2008. Overall, the dip discussed previously in the percent of students
who were proficient and advanced was apparent at the 3rd grade level. It must
be noted that the percent of ELL who were far below basic decreased from the
2nd to the 3rd grade.
The largest mean change in performance level was at the 4th grade (+.66,
more than one-half of a performance band; Table 10). This change was a result
of 4th grade ELL decreasing the percent of students scoring far below basic and
below basic: the combined level decreased from 43.7% in 2007 (pre-test) to
21.8% in 2008 (post-test). This is a 50% decrease in the number of 4th grade
students scoring far below basic and below basic from pre-intervention 2007 to
post-intervention 2008.
Table 10 illustrated a decrease in the percentage of ELL scoring basic,
from 43.7% in 2007 (pre-test) to 34.7% in 2008 (post-test). Unlike the 3rd
grade results, where students moved to lower quintiles and therefore negatively
impacted the API, the percent of 4th grade ELL scoring proficient and advanced
increased: from 12.5% in 2007 to 43.75% in 2008. This represents more than
a threefold increase, as was earlier detailed.
The 5th-grade ELL had a decline in the mean of .07. In spite of this
decline, the number of ELL who scored far below basic on the CST decreased
from a pre-intervention (2007) percentage of 11% to a post-intervention (2008)
percentage of 6.6%. The dip in the mean was a result of a decline in the
percentage of students scoring proficient and advanced, from 46.6% in 2007 to
35.5% in 2008. This decrease was due solely to the decline in the percent of
proficient ELL from 2007 (35%) to 2008 (24%); the percentage of ELL who
scored advanced remained constant at 11% for both years.
Data for the ELL in the 6th grade demonstrated an increase in the mean
from pre-intervention 2007 to post-intervention 2008 of +.28. This increase was
reflected in the following changes in ELL performance bands. The percentage of
ELL scoring far below basic decreased from 15.9% in 2007 to 9% in 2008. A
decrease was also apparent in the percentage of ELL scoring below basic, which
fell from 27.7% in 2007 to 22% in 2008. The percentage of ELL scoring basic
stayed largely consistent, with a small decrease from 2007 (43%) to 2008
(40.9%) on the English Language Arts CST. There was a 100% increase in the
percent of ELL scoring proficient and advanced, from 13.6% in 2007 to 27.2%
in 2008.
Qualitative Findings
In addition to the quantitative data, informal qualitative data were collected
using interviews and observations. Both participant and non-participant
observations were conducted.
Interviews
Individual and group interviews were conducted, varying in length from
5-minute individual interviews to a 50-minute group/staff interview. Using
Champion's (2002) methods for gathering qualitative data, participants
responded to three types of questions:
1. What were the teachers’ reactions to the implementation and
effectiveness of direct instruction on the achievement of the English
Language Learner?
2. What, if any, new learning resulted from the implementation of direct
instruction?
3. What are they doing differently as a result of the direct instruction
implementation?
The researcher gathered information with a primary focus on teachers'
perceptions of the effectiveness of the direct instruction intervention (TESS)
and its effect on the English Language Learners in their classrooms.
Question 1: What were teachers' reactions to the implementation of the
intervention?
A strong majority of the interviewees responded positively about the direct
instruction model. They honestly admitted to initial resistance, but as
previously noted, the administration began the implementation with volunteers
to build a core support group for the change. This core group became the
cheerleaders for direct instruction and the support for teachers who struggled.
The resistance was stronger at some grade levels, such as 3rd grade, than at
others. Even with this resistance noted, an overwhelming majority agreed that
direct instruction was a major contributor to the increase in the performance of
ELL on the ELA portion of the California Standards Test.
Question 2: What new learning occurred from the implementation of
direct instruction?
Teachers commented that the new learning was the common vocabulary
around direct instruction and the importance of checking for understanding
throughout the lesson. The common vocabulary of the direct instruction model
(TESS) provided teachers a narrowed focus on certain specific portions of the
instructional lesson. For example, when teachers spoke of the skills development
portion of the lesson, they could focus on and discuss its quality instead of just
talking about teaching in general. The teachers were able to specifically discuss
the science of teaching (Marzano, 2006).
This was the first time that the concept of checking for understanding was
presented as something required throughout the lesson and not just at the end
of the lesson. As seen in Table 5, checking for understanding throughout the
lesson was a factor distinguishing the TESS model of direct instruction from
other models, including Hunter's original model (1982). Teachers knew that
checking for understanding was important, but through TESS they had learned
to increase its frequency during the entire lesson.
Teachers also stated that their level of understanding of the sequence of
the standards had increased dramatically since the implementation of the direct
instruction model. Their experience in deconstructing the standards had provided
a very detailed comprehension of each standard and of what they, as instructors,
should do to assist students. Deconstructing standards allowed teachers to
develop learning objectives and to determine the difference between a learning
objective and a standard. That a standard is made up of many learning
objectives was a major realization for all teachers at Experimental Elementary.
Question 3: How did teacher behavior change as a result of the
implementation of the direct instruction intervention?
Every teacher who was interviewed commented on how much more detail
they provided for each lesson that they designed. The greatest change in
behavior came from the amount of required collaboration with colleagues.
Teaching in isolation was no longer possible because of the time spent in lesson
design and in observing other teachers teach. Since the direct instruction model
required specific components and behaviors, these components could be
analyzed and refined.
Interviews were also held with the administration and the direct instruction
consultant regarding their perceptions of the fidelity of implementation of the
direct instruction intervention. There was a general feeling from both the
administration and the consultant that direct instruction was being implemented
with fidelity, and both concurred with the teachers that direct instruction had
been a major contributing factor to the success of the school.
Observations
As previously mentioned, informal observations were conducted to assist
in validating the interviews and to gain perspective on the fidelity of
implementation in the classroom. Transfer, as identified by Champion (2002),
should be evident when observing a teacher's planning and lesson delivery. The
observations were conducted utilizing the TESS lesson feedback form
(Appendix A). The fidelity of the implementation of the direct instruction
intervention was determined by how many of the components of the TESS
lesson were evident in the classroom.
Overall, the level of implementation of the direct instruction model
seemed to be satisfactory. All teachers had some type of learning objective and
were checking for understanding at a pre-determined frequency agreed upon by
the staff and administration; two teachers were checking for understanding
every 5 minutes.
There seemed to be a sense of commitment to the implementation and an
attempt to implement with fidelity. It was interesting to note, however, that the
3rd grade team had the most difficulty providing adequate and accurate
evidence of implementation even though they had received the most assistance
with the development of direct instruction lessons and pacing. There was a
general consensus that the teachers were attempting to implement the technical
aspects of the direct instruction model with fidelity.
The Third Grade Dip
Because of the dip in the 3rd grade scores at Experimental Elementary
School, these results require some additional explanation. The 3rd grade dip in
achievement occurred in all of the statistical and practical significance analyses
conducted in this study, but the dip is consistent with the state-wide data for all
students at this grade level. The following data will show that the English
Language Learners' dip at Experimental Elementary School was smaller than
the dip at other schools and state-wide between the same grade levels. These
data explanations assisted in the evaluation of the impact of the intervention,
direct instruction, on ELL performance on the ELA portion of the CST.
There are different perspectives about the possible cause of the dip
between the 2nd and 3rd grade English Language Arts results on the CST. At
the state level, these perspectives focus on the vertical alignment of the CST
and on test administration. At the local level, the factors of alignment and
administration are taken into account, along with teacher-level factors that do
have an impact on student achievement.
The decrease in student achievement for 3rd grade students is sometimes
attributed to the vertical alignment, or more specifically the lack of vertical
alignment, of the California Standards Test (CST). Lack of vertical alignment is
an issue at all grade levels; there is no indication that vertical alignment was a
focus in the original design of the exam. This lack of consideration of alignment,
therefore, could contribute to the dip between the two grade levels.
Another possible cause of the dip at 3rd grade is the difference in the
administration of the exam. The 2nd grade CST is read to the students, except
for the reading comprehension portion of the exam, whereas the 3rd grade
exam is not read to the students. This difference in test administration is
discussed as a possible cause of the 3rd grade dip.
Table 18 illustrates the dip in students scoring proficient and above at
Experimental Elementary School compared to the state level. Tables 18, 19, and
20 compare the achievement of Experimental School's 3rd grade from the 2007
pre-test to the 2008 post-test against the state results. It must be noted that the
comparison group for Experimental School was a dependent group, whereas the
data for the state are based on independent groups; however, the N for the
state data is very large, lending validity to the analysis. Tables 18, 19, and 20
also illustrate that the dip in the third grade is consistent with the overall dip in
English Language Learner performance at the state level.
Table 18
Pre- Versus Post-Intervention CST ELA Performance Band
Differences for English Language Learners

Grouping (ELL)    Pre N (2007)   Post N (2008)   Pre M (2007)   Post M (2008)   Difference
School            151            151             1.83           1.99            +.16
Grade 3           32             32              1.75           1.59            -.16
State Grade 3     166,178        148,608         1.80           1.56            -.24
Table 19
Pre- Versus Post-Intervention ELL CST ELA Percent Basic and Above

Grouping (ELL)    Pre (2007)   Post (2008)   Pre/Post Change   Percent Change
School            .63          .70           +.07              11%
Grade 3           .59          .62           +.03              5.0%
State Grade 3     .60          .55           -.05              -8.3%
Table 20
Pre- Versus Post-Intervention ELL CST ELA Percent Proficient and Above

Grouping (ELL)    Pre (2007)   Post (2008)   Pre/Post Change   Percent Change
School            .26          .30           +.04              +15.4%
Grade 3           .31          .16           -.15              -48%
State Grade 3     .30          .17           -.13              -43%
In fact, Experimental School's dip was smaller than the state's dip for all
ELL at grade 3 (Experimental School = -.16, California = -.24). Experimental
School also had a positive gain in the percentage of students scoring basic and
above from pre-test to post-test, compared with a state decline (Experimental
School = +5%, California = -8.3%). The only area in which Experimental
School's 3rd grade did not outperform the state was the decrease in ELL who
scored proficient or above (Experimental School = -48%, California = -43%).
This result is indicative of Experimental School's historical growth pattern: the
school has exhibited a history of moving students from far below basic and
below basic to basic, but has had difficulty moving students from basic to
proficient and above. This is illustrated by the fact that Experimental has
demonstrated the greatest growth of any school in the district as measured by
the API (+267), yet Experimental Elementary School is the only school in its
district currently in program improvement.
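The "Percent Change" columns in Tables 19 and 20 are relative changes computed from the pre and post proportions, not simple percentage-point differences. A short sketch reproduces the Grade 3 proficient-and-above figures from Table 20:

```python
# Relative change (percent change) versus percentage-point change,
# using the Grade 3 proficient-and-above proportions from Table 20.

def relative_change(pre, post):
    """Relative change of post versus pre, expressed as a percentage."""
    return (post - pre) / pre * 100

school_g3 = relative_change(0.31, 0.16)   # Experimental School, Grade 3
state_g3  = relative_change(0.30, 0.17)   # statewide, Grade 3
print(round(school_g3), round(state_g3))  # -48 and -43, matching Table 20
```

This distinction explains how a -.15 pre/post change can appear as -48%: the drop is expressed relative to the 2007 starting proportion.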
The state-level factors considered are, for the most part, conjecture and
not within the control of Experimental Elementary, but the teacher- and
organizational-level factors can be influenced by the school. It is possible that
the dip at the 3rd grade at Experimental Elementary could be attributed to the
fidelity of implementation of direct instruction. There seemed to be an
implementation gap in the 3rd grade teachers' preparation, planning, and
collaboration. This gap, addressed earlier in Chapters 1 and 2, was evident in a
lack of knowledge, a lack of motivation, and the lack of an organizational
structure to facilitate the direct instruction implementation.
Informal interviews of the faculty and administration provided some
insight into how well grade levels took advantage of, and embraced, the direct
instruction intervention TESS. These interviews and observations provided more
detail on the extent to which teachers accepted the support that was provided.
The 3rd grade teachers were somewhat resistant to the implementation of the
direct instruction concept and needed additional support in the implementation
phase. The support for the 3rd grade included weekly grade-level articulation
and planning meetings, on-site contextual professional development (Fullan,
2008), as well as assistance and support from the training specialist and site
administrators.
Professional differences and conflicting belief systems existed regarding
what the ELL could or could not achieve. Underconfidence, as described by Clark
and Estes (2002) and previously detailed in Chapter 3, was most evident among
the 3rd grade teachers. Interviews with the principal validated this observation.
The 3rd grade team was strong in its cohesiveness. According to the
principal, the team received the most attention and received specific training
from the consultant that varied in intensity and directives. Teachers at the 3rd
grade level spent much more time in the consulting model than at the cognitive
coaching level (Costa & Garmston, 2005). Each 3rd grade lesson was designed
with the consultant or an administrator present to ensure that the fidelity of the
design was consistent, and the 3rd grade team was assigned a program
specialist whose primary job was to work daily with the team.
These state and school factors, along with other extraneous factors not
accounted for, could have contributed to the dip at the 3rd grade that was not
consistent with the growth demonstrated at the other grade levels. Compared
with other schools in the district and with the state results, however, the 3rd
grade dip for the ELL at Experimental Elementary was commensurate in regard
to the change in performance levels, percent correct, and raw score. In
comparison to the highest-performing school in the district (API 829),
Experimental's 3rd grade ELL posted a greater gain in percent correct and raw
score from the pre-intervention 2007 results to the post-intervention 2008 CST
results. Experimental's ELL also demonstrated a smaller dip in performance
level compared to the highest-performing school's ELL on the ELA portion of the
CST during this time period. Therefore, even though there was a dip at the 3rd
grade level at Experimental Elementary, this dip was less severe than at other
schools in the district and state-wide. The intervention, the direct instruction
model (TESS), could have been a contributing factor to the English Language
Learners' achievement at Experimental Elementary on the English Language
Arts portion of the California Standards Test.
Summary of Findings of Experimental School
and Benchmark School
The experimental and benchmark schools were compared in three areas:
School-wide Academic Performance Index (API); English Language Learners’
API; and English Language Learners’ Adequate Yearly Progress (AYP).
Academic Performance Index
Experimental Elementary and Benchmark Elementary both demonstrated
gains in the 2008 API. Experimental Elementary School posted a 43-point gain
in its API, moving from a 2007 API of 681 to a 2008 API of 724. Benchmark
Elementary grew by 1 point on the API, moving from a 2007 API of 765 to a
2008 API of 766. The API for the English Language Learner subgroup at
Experimental Elementary grew by 48 points, moving from a 2007 API of 667 to
a 2008 API of 715. Benchmark Elementary's ELL grew 1 point, from a 2007 API
of 762 to a 2008 API of 763, as illustrated in Tables 14 and 15. A further
analysis of the ELL subgroup at Experimental Elementary utilizing the English
Language Arts results for the dependent group shows a gain of 32 points in the
API, from a 2007 API of 669 to a 2008 API of 701.
Adequate Yearly Progress
As discussed in Chapter 3, the No Child Left Behind (NCLB) legislation
of 2001 (United States Department of Education, 2002) required that all schools
meet a certain status, or cut point, for students identified as proficient and
advanced, with the goal of 100% of students being proficient or advanced by
2014. NCLB sets an annual measurable objective (AMO) each year that is used
to determine whether a school has demonstrated adequate yearly progress
(AYP). The AMO for English Language Arts in 2008 was 35.2% for elementary
and middle schools.
Table 17 illustrated the percentage of students at the Experimental and
Benchmark schools who were identified as proficient and advanced for 2008.
Both schools met or exceeded the AMO school-wide. Even though Benchmark
Elementary had a decrease in the percentage of students identified as proficient
and advanced, the percentage of its ELL subgroup meeting the criteria exceeded
the required AMO for 2008.
As Table 17 illustrated, Experimental Elementary did not meet the AMO
in the traditional manner; rather, Experimental met the AMO in an alternative
fashion, Safe Harbor, as discussed previously (Appendix C).
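The Safe Harbor route to AYP can be sketched as a simple check: a subgroup that misses the AMO still makes AYP if its percentage of non-proficient students falls by at least 10% from the prior year. The sketch below applies that rule to Experimental's ELL figures from Table 17; it omits Safe Harbor's other conditions (such as the participation rate and the additional academic indicator).

```python
# Simplified Safe Harbor check under NCLB: meet the AMO directly, or
# reduce the percentage of non-proficient students by at least 10%.
# Other AYP conditions (participation rate, etc.) are omitted here.

def makes_ayp(prev_proficient, curr_proficient, amo):
    """True if the AMO is met directly or via the Safe Harbor reduction rule."""
    if curr_proficient >= amo:
        return True
    prev_not = 100 - prev_proficient   # percent not proficient, prior year
    curr_not = 100 - curr_proficient   # percent not proficient, current year
    return curr_not <= prev_not * 0.90  # at least a 10% reduction

# Experimental ELL (Table 17): 23.8% -> 32.8% proficient, 2008 AMO = 35.2%
print(makes_ayp(23.8, 32.8, 35.2))   # True: 67.2 <= 76.2 * 0.9 = 68.58
```

This illustrates why the subgroup made AYP despite falling short of the 35.2% AMO: the non-proficient share dropped from 76.2% to 67.2%, more than the required 10% reduction.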
Table 17 also showed that Experimental Elementary had a greater
school-wide percentage of students achieving proficiency than Benchmark
Elementary (Experimental 36.9%, Benchmark 36.7%). Experimental's ELL
subgroup had a lower percentage of students who were proficient and advanced
compared to the benchmark school (Experimental 32.8%, Benchmark 35.5%),
but as Table 17 illustrated, the gap between the two schools narrowed
considerably.
Implications for Practice at
Experimental Elementary School
There has been a significant change in the academic performance of ELL
at Experimental Elementary School. ELL at Experimental posted a 48-point API
growth from pre-intervention (2007) to post-intervention (2008) overall, and a
32-point API growth during the same time frame in English Language Arts.
Experimental met all 17 of the criteria for Adequate Yearly Progress, including
those for the ELL subgroup, for the first time in 6 years. The ELL subgroup had
historically not demonstrated AYP, especially in the English Language Arts
content area. Experimental's ELL subgroup met its AMO for the first time in
6 years, with 32.8% of the ELL identified as proficient and advanced.
Both the 4th and 6th grade ELL surpassed the standard for practical
significance established prior to the study of p > .10, with the 4th grade more
than tripling the percentage of students who were proficient and advanced from
the 2007 pre-intervention to the 2008 post-intervention results. This study has
provided evidence that the achievement of the English Language Learners made
positive gains and that the direct instruction intervention, the Total Educational
Support System (TESS), could have contributed to that success.
The teacher-level factors of low expectations for students and
underconfidence in themselves, as identified by Clark and Estes (2002), were
addressed using the direct instruction model. As a result of team planning and
discussion, teachers centered on the direct instruction model instead of teaching
in isolation. Teachers' self-efficacy seemed to improve as they focused on direct
instruction and began to determine, discuss, and refine the research-based best
practices and strategies that are essential for continuous improvement.
The design of the lesson is important, but the delivery is a major
contributing factor to student academic success. If there is a best way to
teach, why not consider the direct instruction method as a model to be
replicated across the district at all grade levels? Experimental Elementary had
begun to examine lesson design continually and to research how to identify the
best strategies so they could be reproduced in other classrooms and schools.
Teachers from other schools in the district should be provided time to observe
and discuss the implications of the direct instruction model and its impact on
the achievement of all students, but specifically the English Language Learner.
There could be district-wide implications affecting the allocation of time,
people, and money for support and professional development around the direct
instruction model of instructional design and delivery.
There are implications supporting the research from Marzano, Waters, and
McNulty (2005) on the effect of a good teacher on student achievement and the
concept of expert teaching (Fullan, Hill, & Crevola, 2006). Expert teaching
requires time for reflection and collaboration (Marzano et al., 2005). As a
result of the implementation of the direct instruction model, Experimental
Elementary redesigned the school day for collaboration. During this
collaboration, the protocols or processes could be adapted to allow for better
use of the instructional day. The direct instruction model provided a platform
for discussing the practices that were most effective in assisting student
learning. It also allowed teachers’ underconfidence, and its effect on their
efficacy and expectations for students, to be addressed. Teachers were able to
discuss their concerns more openly and to design methods and strategies, not
previously utilized, to address students’ lack of achievement.
The district has focused on developing best practices at all grade levels,
especially for the English Language Learner, utilizing the work of Marzano
(2003) and of Hill and Flynn (2006). Instructional strategies were identified
that teachers could implement within the direct instruction lesson design.
Previously, these strategies had been implemented randomly, with no specific
protocol or expectation for use; the direct instruction model allowed them to
be strategically implemented, monitored, and refined.
The study implies that direct instruction could possibly be expanded to
include all students and all content areas. Mathematics is the other major
component of Adequate Yearly Progress, and direct instruction could assist in
improving student results on the California Standards Test (CST) and a
school’s ability to help non-proficient students become proficient, as
mandated by the No Child Left Behind legislation of 2001 (United States
Department of Education, 2002).
The success of the English Language Learner could be attributed to other
factors at the school. Earlier in this study, reference was made to the
school- and teacher-level factors that can affect student achievement. A great
deal of attention was paid to Experimental Elementary as a result of its
Program Improvement (PI) status. A myriad of programs had been implemented at
the school, ranging from technology-assisted instruction to a data management
system and an increased urgency for data analysis. But the one factor that had
not been explicitly analyzed was the quality of the instructional design and
delivery.
The ability of a teacher has a direct effect on student performance and
student success (Marzano, 2003). Utilizing the research on direct instruction
(Adams & Engelmann, 1996; Chall, 2002; Duran & Elva, 1980; Hill & Flynn,
2006; Hunter, 1982; Lachat, 2004) and its positive influence on student
achievement, Experimental Elementary, with support and direction from the
central office, decided to implement direct instruction school-wide. The
faculty and staff selected a model for direct instruction, the Total
Educational Support System (TESS), that was implemented with fidelity during
the 2008 school year. This direct instruction model provided a vehicle whereby
the administration and staff were able to address the teacher-level and
school-level issues in a systematic fashion.
As a result of the decision to implement direct instruction, teachers were
able to employ a common vocabulary to address the teacher-level factors of
underconfidence and knowledge of best practice that were hindering student
achievement at Experimental Elementary School (Marzano, 2003).
School factors of opportunity to learn, instructional leadership, and
guaranteed and viable curriculum were addressed with the direct instruction
model as the foundation. The school day was reorganized in order to provide
time for planning and design of lessons for the English Language Learner.
Teachers made collaborative decisions on what must be accomplished during the
school year by eliminating extraneous activities and refining the design of their
day to become more efficient and effective in meeting the needs of the English
Language Learner.
Direct instruction became a pivotal part of the culture at Experimental
Elementary School and had a significant impact on other initiatives at the
school that will provide for continuous improvement. The direct instruction
model provided an opportunity for the planning, doing, checking, and acting
that is crucial to success in the classroom. Teachers were able to discuss
weaknesses not only of the students but also of their own delivery and design.
Because there was a common language around instruction, the discussion of what
was and was not effective allowed for better design (DuFour et al., 2008).
The staff, administration, parents, and students came to believe in their
ability to implement best practices and have transferred their learning to
other initiatives in the school system. The direct instruction model was the
beginning and became the foundation for all instruction that occurred at
Experimental Elementary.
There was considerable work that needed to be done in order to maintain
the momentum at Experimental Elementary, but the idea that teachers and schools
can make a difference in student performance was now part of the school culture.
The process that was used to implement the direct instruction model is now being
used to determine what programs should be implemented at Experimental
Elementary as well as refinement of the programs currently in place.
Implications for Research
Further research should be completed in several areas focused on the teacher-
and school-level factors addressed previously in this study. Experimental
Elementary School had implemented several research-based initiatives, and each
raises questions worth studying: the effect of consistent collaboration time,
frequent and formative assessments, vertical alignment of the California
Standards Test, intervention during the school day, the effect of specialized
programs at the school, first- and second-order change, and the effect of the
principal’s leadership on the change process. This study was limited to the
analysis of direct instruction on the achievement of the English Language
Learner. Each of the previously mentioned change initiatives could
individually be the focus of a study to determine its impact on student
achievement.
Direct instruction was implemented as the intervention focus for this study.
The implementation of direct instruction created a need for an increase in the
amount of time that teachers spent together planning and discussing their next
steps. The administration reconstructed the school day to provide a weekly
early release time for students so that teachers could dialogue and plan
lessons during the school day (DuFour et al., 2006).
The principal and the leadership team met weekly with each teacher for 50
minutes to discuss the teacher’s and students’ progress and next steps for
implementation of the guaranteed curriculum (Marzano, 2003). The school
designed common assessments and pacing guides and increased the frequency of
formative assessments to constantly monitor the progress of the students. All
of these changes and additions were brought about and implemented as a result
of the implementation of the direct instruction model, the Total Educational
Support System. Some schools implement many of the initiatives listed above
without a common lesson design. Research into those schools’ effectiveness in
improving student achievement might add to the generalizability of this study.
There are research implications concerning the vertical alignment of the
California Standards Test. Certain grades had tremendous gains, but the
results imply that there could be a question about the rigor of the
assessments from year to year. The dip at 3rd grade is one example that is
difficult to explain, and research into the causes of this dip would be
beneficial in explaining the success of the direct instruction model and other
interventions in future studies.
The direct instruction model helped focus the interventions. Direct
instruction provided on-time data to enhance focused teaching for individual
students (Fullan, 2008). Interventions were designed during the school day for
some of the most impacted learners, as encouraged by DuFour, DuFour, Eaker,
and Karhanek (2004). The impact of this type of strategy could be the focus of
future research.
Specialized programs were in place on the campus, including the Mind
Institute, a specialized Saturday Academy for ELL, and a World Café parent
involvement activity. Each of these research-based interventions could have
impacted student achievement. Direct instruction was implemented as the
primary instructional delivery method in the CORE program, which consists of
the state-adopted text and is key to successful student achievement (Buffum,
Mattos, & Weber, 2009).
There are implications regarding the research on teacher expectations and
student performance. The underconfidence demonstrated and noted in Chapters 1
and 3 at Experimental had changed, though there are still considerations to be
analyzed that directly address this concept. Teachers made a commitment to
look at all students with a different view, not one that limited the students
or the teachers’ ability to succeed. The collaborative conversations around
direct instruction provided more and more opportunities to look at data about
how students were doing in all classrooms. This process needs to expand to
include more time to discuss, identify, and plan for student success.
Finally, leadership and the magnitude of change are areas that could shape
future research. The leadership at Experimental had taken a proactive stance
in changing the school culture, but the leadership, specifically the
principal, did not change. One of the major factors that usually affects a
change in performance is a change in leadership, specifically the principal
(Marzano et al., 2005). Yet Experimental Elementary School improved its
Academic Performance Index and met its annual measurable objectives without a
change in principal. This is not a common phenomenon: most dramatic changes in
student performance have a conspicuous connection to a change in principal.
This result could influence future research on the demand for a change of
leadership (principal) in order to change results at a school.
Site-Based Recommendations
The primary purpose of this study was to evaluate the effect of direct
instruction on the achievement of ELL, as measured by the English Language
Arts (ELA) portion of the California Standards Test (CST) at Experimental
Elementary School, in order to meet the accountability requirements of the No
Child Left Behind federal legislation of 2001 (United States Department of
Education, 2002). The scope of this study was limited to a case study of
English Language Learner students at Experimental Elementary and to one
intervention, direct instruction. The direct instruction model that was
implemented was known as the Total Educational Support System (TESS). TESS was
implemented as an intervention to improve the percentage of ELL scoring
proficient and advanced on the CST. CST results from pre-intervention (2007)
to post-intervention (2008) were compared for statistical and practical
significance. ELL at the 4th and 6th grades demonstrated both statistically
significant (p < .15) and practically significant (effect size > .20 and
percent change > .10) changes in their results. Qualitative results gathered
through informal interviews and observations supported the conclusion that the
intervention was implemented with fidelity at a majority of the grade levels.
The researcher determined a focus specifically aligned to the success of ELL
on the English Language Arts portion of the California Standards Test. The
researcher utilized methods established by Clark and Estes (2002) to identify
the knowledge, motivation, and organizational performance gaps in two areas:
the school-level factors and teacher-level factors that influence student
achievement, as identified by Marzano (2003). The ability to identify these
gaps allowed the researcher to measure how ELL were performing relative to the
intervention of direct instruction.
The analysis of the data leads to recommendations for Experimental Elementary
regarding the continued refinement, implementation, and possible expansion of
the direct instruction intervention at Experimental Elementary School.
Utilizing Fullan’s (2008) framework for change and his collaborative work with
Hill and Crevola (Fullan, Hill, & Crevola, 2006), the researcher has made
recommendations on how best to continue the fidelity of implementation of the
intervention. These recommendations are designed to assist in addressing each
individual English Language Learner at Experimental Elementary.
Experimental Elementary School should:
1. Continue to gather both qualitative and quantitative data.
2. Continue to standardize the instructional delivery through
collaboration and observation with a trainer-of-trainer model.
3. Explore a focused instructional strategy for all students.
4. Institutionalize methods to improve the leadership capacity of the
entire faculty and staff.
Gather Quantitative and Qualitative Data
Fullan (2008) states that “Learning is the Work” (p. 75). This work begins
and ends with data. Experimental Elementary should continue to gather both
quantitative (CST results) and qualitative (observations and interviews) data
for comparison and analysis. The quantitative data could be utilized to begin
the intellectual stimulation, one of the 21 leadership responsibilities
(McREL, 2005), for the staff. This type of activity will provide the staff
with a greater understanding of the correct actions to take in the future
regarding direct instruction. Experimental School utilized EduSoft as its data
management system and was able to disaggregate the quantitative data by
ethnicity, grade, subgroup, teacher, and student. The qualitative data can
provide additional information concerning the implementation of direct
instruction. The focus of all data collection should be on how it affects the
instruction responding to the learning needs of the ELL (Fullan, Hill, &
Crevola, 2006). This information will better prepare the leadership to
identify the types of performance gaps (Clark & Estes, 2002) and how to close
them. This recommendation includes the development of additional formative
assessments that will provide more frequent and more timely feedback than the
trimester assessments and the final summative assessment, the California
Standards Test. By utilizing EduSoft’s capability to design standards-based
exams, the school will be able to create rigorous exams that will, in turn,
increase the rigor of the instruction in the classroom. A systematic method of
measuring progress will address the lack of rigor that was an issue in the
classrooms. Gathering both quantitative and qualitative data will assist in
addressing the low expectations for students held by many teachers at
Experimental Elementary, as identified previously.
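As a rough sketch of the kind of disaggregation described above (EduSoft's actual export format is not specified here, so the record fields below are hypothetical), quantitative results can be broken down by grade, subgroup, or teacher with a simple grouped count:

```python
from collections import defaultdict

# Hypothetical student records; field names are illustrative only
records = [
    {"grade": 4, "subgroup": "ELL", "level": "Proficient"},
    {"grade": 4, "subgroup": "ELL", "level": "Basic"},
    {"grade": 6, "subgroup": "ELL", "level": "Advanced"},
    {"grade": 6, "subgroup": "ELL", "level": "Below Basic"},
]

def percent_proficient(rows, key):
    """Share of students scoring Proficient or Advanced, grouped by `key`."""
    totals = defaultdict(int)
    proficient = defaultdict(int)
    for r in rows:
        totals[r[key]] += 1
        if r["level"] in ("Proficient", "Advanced"):
            proficient[r[key]] += 1
    return {k: proficient[k] / totals[k] for k in totals}

print(percent_proficient(records, "grade"))  # {4: 0.5, 6: 0.5}
```

The same function applied with a different `key` (e.g., "subgroup" or "teacher") yields the other disaggregations the data system is described as providing.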
Time should be allocated prior to the start of the year to review the results
of the formative and summative assessments created to measure the progress of
ELL. The results should be analyzed and items reviewed to improve the
correlation between the formative assessments and the CST results. This
analysis will allow for a laser-like focus on the specific standards and
strategies needed to improve individual students’ academic success.
The gathering of data should include researching new methods to analyze and
disaggregate the data. A value-added model, such as a Leveled Accountability
System method for analyzing student growth, might be a possibility. This
process would provide one more method to determine growth by students and also
the effect of certain teachers on the students in their classes.
Standardize Instructional Delivery Through
Trainer-of-trainer Models
The second recommendation concerns the standardization and refinement of the
instructional delivery method. An organization must develop a culture of
continuous improvement (Fullan et al., 2009). In order to refine the
instructional delivery model, there must be frequent and formative feedback to
the teacher about the methods or strategies being employed in the classroom.
This type of contextual staff development (Fullan, 2008) is essential for the
continued success of the direct instruction intervention implemented at
Experimental Elementary. There needs to be a method to define and identify the
best methods and then to implement, with little variation, those known to be
effective and crucial to student success (Fullan, 2008). This type of
standardization must include effective and timely feedback (Marzano, 2003).
One method of standardizing the delivery design is to begin implementing a
trainer-of-trainer model for support. Experimental Elementary should identify
those teachers who are most proficient with the direct instruction delivery
model and have them trained by the direct instruction consultants to become
the experts on campus. The literacy coach and Outreach Consultants would be
two of the non-classroom faculty trained as experts in designing, observing,
and debriefing their colleagues, using the Total Educational Support System as
the foundation of the conversation. Other key teachers at each grade level
could be trained to become experts so that peer teaching and learning in
context becomes the focus for the school. Teachers from other school sites
could be part of the expert team as well. Periodically, outside experts,
including teachers from other sites, district office personnel, and outside
TESS consultants, could be brought in to gauge the consistency of the direct
instruction being implemented and practiced.
The trainer-of-trainer professional development model would allow for buy-in
from the staff, since the development is designed to be site-specific and
teachers have “made it their own” (Marzano, 2005, p. 81). There would still be
a need for the consultant or district office personnel to periodically observe
and provide feedback to the coaches to ensure rigor and consistency of the
process. This type of professional development would be the most effective
since it combines the district office, outside consultants, and a
site-specific focus (Guskey, 2000). The trainer-of-trainer model would allow
the staff to “nail down common practices so that innovation is possible”
(Fullan, 2008, p. 79). Once the technical components of direct instruction are
standardized, the conversation can turn to quality of implementation, followed
by research-based best practices and strategies for teaching. The
conversations would focus on how teachers teach, allowing best practices to be
identified and concerns to be addressed in a timely manner.
“Focused Instruction” Model
Fullan, Hill, and Crevola (2006) state that there must be a focus on
precision, not just perfection. They also state that until the day educators
know where each child is performing each day, there will always be a gap in
educators’ ability to properly address students’ needs. Education must become
more personalized (Fullan, Hill, & Crevola, 2006). One method of accomplishing
this is to develop individual learning plans, which Fullan, Hill, and Crevola
(2006) refer to as a Critical Learning Instructional Path (CLIP). A CLIP would
provide on-time data for the teacher to constantly measure a student’s
progress day-to-day and hour-to-hour. Experimental Elementary had successfully
employed a strategy that put a face to the percentages: staff and leadership
constantly looked at students and what it would take to move each child to the
next performance band as measured by the CST. But these examinations and
discussions were normally held for only a few students each week. The students
were grouped for interventions according to common needs, so the interventions
were more focused. Even with this level of detail, there is still a need for
more frequent measurement of and assistance for students. To move to the
focused instructional level, Experimental Elementary needed to design an
Individual Student Plan (ISP) for each student. These plans would include
reading levels, CELDT levels, past CST scores, district scores, and
achievement goals that include, but are not solely based on, the California
Standards Test. The plans would also include local assessments and logs of
every assessment for every child. The goal for the school was to know where
every child was, every day, in regard to progress toward the achievement
goals, which included demonstrating proficiency in all content areas.
The major factor that precludes this type of focused instruction is the lack
of time. DuFour et al. (2004) state that the first and foremost issue the
administrator should address is the matter of time: what could be eliminated
that would allow the school to participate in those practices known to be
beneficial to student success? Experimental Elementary should consider
developing a conceptual model that would allow for a visual progression of the
instructional process.
Increase the Leadership Capacity of
the Faculty and Staff
The final recommendation is based on the importance of leadership and its
effect on student achievement. One of the problems identified earlier in the
study was the ineffectiveness of the leader in moving student achievement.
School leadership must shift from an individual to a team of individuals
(Marzano et al., 2005). Experimental Elementary must continue to build a
culture of collective efficacy whereby the group believes that it can make a
difference in the achievement of ELL at Experimental Elementary. The
intellectual stimulation necessary to challenge beliefs about effective
leadership will be a beginning (Marzano et al., 2005). The staff will need a
special focus on the magnitude of change and on whose primary responsibility
it is to assist with level-one and level-two changes. Staff should be provided
time to identify which of the 21 leadership behaviors are their primary
responsibility. Faculty and staff should then be provided opportunities to
practice their leadership skills by leading collaborative groups, parent
groups, and other traditionally principal-led activities. Providing an avenue
for teachers to practice and implement their leadership abilities will improve
the school and its students.
Limitations
External Validity
A non-equivalent control group design was employed, so there is little to no
transferability or generalizability outside of the experimental group. No
causal inference can be made from this quasi-experiment. The nonequivalent
control group design lacks internal validity because of selection bias;
therefore, causal inferences concerning the direct instruction intervention
and student achievement cannot be drawn.
The aspirational component of this study included a benchmark school in an
attempt to address the external validity issue. The concept of proximal
similarity was used to address the study’s lack of external validity. But even
though Experimental Elementary and Benchmark Elementary were identified as
similar by the California Department of Education, selection bias cannot be
excluded. There could be many unobserved occurrences that had an effect on the
results of the study. No two groups, unless randomly selected, can be
considered exactly alike. Nevertheless, the experimental and benchmark groups
were compared on the post-test data utilizing the raw change in percent
proficient and above and the raw change in percent basic and above, using the
CST ELA results for ELL at Experimental Elementary School and Benchmark
Elementary School.
The study was also limited by its time frame, its participants, and its
location. The schools had different grade configurations, which could lead to
even less transferability, if any. The results can only be used for the school
site, with little or no transferability to any other district, school, grade
level, or student.
Internal Validity
Four threats to internal validity must be considered: non-randomized
identification of participants (selection bias), history, maturation, and
testing. The study was a quasi-experiment; therefore, participants were not
randomly assigned.
A dependent-groups design was utilized, whereby the participants became their
own control group. But even with this design, there was no control for the
experiences different students encountered between pre-test and post-test,
both inside and outside the classroom, during the experimental year. Teacher
efficacy could be placed in this category as well.
Maturation must also be considered as a threat to the internal validity of the
study. ELL are simultaneously learning to master English and the content of
their grade level. Students do not develop at the same rate, either physically
or emotionally, and this difference in maturation could have influenced the
results of the study.
Finally, the testing threat to internal validity must be addressed. Students
are affected by testing differently, especially English Language Learners. The
students in the study had one more year of education and time to develop
English and test-taking skills, which could have affected the results.
Teachers and students also understood that a study was occurring and that the
results would be analyzed.
Another limitation to consider is the fidelity of implementation of the
intervention, which could threaten both internal and external validity. There
were no methods to ensure that each teacher was implementing the intervention
with the same fidelity.
The dependent variable, the California Standards Test, has a limitation in
regard to the vertical alignment of the exams: there is no clear agreement on
the established rigor of the CST from grade level to grade level.
The final limitation is the combination of multiple interventions that were in
place prior to the implementation of direct instruction, specifically the
Total Educational Support System (TESS). The intervention of direct
instruction was the focus of the study, but there could be other factors, both
known and unknown, that contributed to the positive results in student
achievement.
Conclusions
Reeves (2006) states, “we have the sufficient knowledge to do the right things
that would improve student achievement but for some reason we don’t do what we
know” (p. 90). Our goal is for each and every teacher to become consciously
competent (Carter, 2009). That is, teachers should be able to identify what
they are doing that is making a dramatic difference in student achievement
(Marzano et al., 2005). Identifying what works and then sharing it with others
is what Marzano refers to as “collective efficacy” (Marzano et al., 2005, p.
99). This collective efficacy provides the support necessary for teachers to
make that dramatic difference. I must still agree with Fullan, Hill, and
Crevola (2006):
Too much educational research has proceeded on the natural sciences
model and has been preoccupied with the search for grand theory and with
describing what is, rather than focusing on issues of design and focusing
on what ought to be. (p. 38)
As seen in this study, Experimental Elementary School has looked at what was
in regard to English Language Learner achievement. The school established a
vision for what student achievement should look like and is now proceeding to
design action steps to make that vision a reality. Direct instruction was the
intervention selected to make a dramatic difference in ELL achievement. Direct
instruction supports the notion that all students should be successful, and
that is what ought to be.
This study illustrated some significant gains for ELL on the English Language
Arts portion of the California Standards Test. But more work must be done
regarding Marzano’s (2003) focus on effective feedback, opportunity to learn,
and a guaranteed and viable curriculum for all students, especially ELL. The
English Language Learner subgroup is growing geometrically, and the impact
this subgroup will have on our society will be huge unless it is addressed
systematically (Lachat, 2004). The staff at Experimental Elementary School has
the opportunity to analyze their actions and begin the journey to becoming
consciously competent (Carter, 2009). The principal has the opportunity to
become a learning leader (Reeves, 2006). The change in the staff and in the
administration will begin the transformation of Experimental Elementary School
into a learning community where continuous improvement is the culture (DuFour
et al., 2006). Experimental School had been through many changes in the last 7
years but had not lost its desire to make a difference. It had been
scrutinized more than any school in the district and was still closely
monitored for progress and support.
The question: What was the impact of direct instruction on the
performance of ELL on the California Standards Test? In answer to the above
131
question, causality was not established, but direct instruction, specifically the
Total Education Support System (TESS), provided the onus for the first step in
addressing the concerns at the school about their ability as a staff to make a
difference with their students’ achievement. TESS provided a foundation for
conversations that focused on the process (teaching and learning) and not the
person (teacher/school/administrator) (Marzano, 2006). TESS provided a
vocabulary so that teachers could discuss their work in a manner that allowed for
professional conversations around tangible solutions for improving student
achievement (DuFour et al., 2004). Teachers were able to identify their gaps and,
then through collaborative discussion, begin to build their collective efficacy with
the TESS model as the basis for discussion (Marzano, et al., 2005). The staff at
Experimental Elementary School for the first time in years was expressing hope,
confidence, and pride in what they were able to do for students.
At Experimental Elementary, knowing what to do began with the courage of
the faculty and staff to admit there was a problem with student learning,
especially among the English Language Learner population (Reeves, 2006). The
staff soon came to the realization that identifying the issue was not the end of
the journey but only the first step in their efforts to assist students. The
staff began to accept that they had the power to make a dramatic difference in
student achievement and were willing to take the appropriate steps to address
the issue. Experimental Elementary accepted the realization that in order to
stay on the continuous improvement path they had to have a high level of
internal accountability (Fullan, Hill, & Crevola, 2006).
Thomas Friedman writes, "It is easier to act your way into a new way of
thinking than to think your way into a new way of acting" (Friedman, 2008, p.
375). DuFour et al. (2008) put it this way: "grab them by their practices and
their beliefs and values will follow" (p. 108). Direct instruction changed
teacher behavior in the classroom at Experimental Elementary. More importantly,
direct instruction seemed to change the staff's beliefs about English Language
Learners' capacity for success. The quote at the beginning of this study by
Chall said that "we" must teach them and teach them well because lives are at
stake (Chall, 2002, p. vii). Experimental Elementary began to realize that the
"we" cannot be anyone else but themselves. To the researcher, that was the key.
REFERENCES
Adams, G. L., & Engelmann, S. (1996). Research on direct instruction: 25 years
beyond DISTAR. Seattle, WA: Educational Achievement Systems.
Agullard, K., & Goughnour, D. S. (2006). Central office inquiry: Assessing
organization, roles, and actions to support school improvement. San
Francisco: WestEd.
Amberge, C. (2002, Fall). When direct instruction does not work. Journal of
Direct Instruction, 2, 50-51.
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching
and assessing: A revision of Bloom's Taxonomy of Educational
Objectives. San Francisco: Addison Wesley Longman.
Bestor, A. (1985). Educational wastelands: The retreat from learning in our
public schools (2nd ed.). Urbana: University of Illinois Press.
Blossfeld, H., & Shavit, Y. (1993). Persisting barriers: Changes in educational
opportunities in thirteen countries. Boulder, CO: Westview Press.
Bolman, L., & Deal, T. E. (2003). Reframing organizations: Artistry, choice, and
leadership (3rd ed.). San Francisco: Jossey-Bass.
Buffum, A., Mattos, M., & Weber, C. (2009). Pyramid response to intervention:
RTI, professional learning communities, and how to respond when kids
don't learn. Bloomington, IN: Solution Tree.
California Department of Education (2006, May). 2005–2006 Accountability
Progress Reporting System, 2005–2006 Academic Performance Index
Reports Information Guide. Retrieved November 17, 2008, from
http://api.cde.ca.gov/APIBase2006/2005BaseSchSS.
California Department of Education (2008). Statewide graduation rate for
students with UC/CSU course requirements. Retrieved June 24, 2008,
from cde.ca.gov/datastatitics/UC/CSU.
Carter, L. (2009). Five big ideas: Leading total instructional alignment.
Bloomington, IN: Solution Tree.
Chall, J. S. (2002). The academic achievement challenge: What really works in
the classroom? New York: Guilford Publications.
Champion, R. (2002, Summer). Taking measure: Choose the right data for the
job. Journal of Staff Development, 23, 65-66.
Clark, R. E., & Estes, F. (2002). Turning research into results: A guide to Select
the right performance solutions. Atlanta, GA: CEP Press.
Costa, A., & Garmston, R. (2005). Cognitive coaching foundation seminar
learning guide. Highlands Ranch, CO: Center for Cognitive Coaching.
Cotton, K. (1989). Expectations and student outcomes. NW Archives: School
improvement research series (SIRS). Retrieved May 5, 2008, from
http://www.nwrel.org/sepd/sirs/4/cu7.html.
Creswell, J. W. (2003). Research design: Qualitative, quantitative and mixed
methods approaches (2nd ed.). Thousand Oaks, CA: Sage Publications.
Department of Education (2006, July 27). Building partnerships to help English
Language Learners. Retrieved June 2008, from
http://www.ed.gov/nclb/methods/english/lepfactsheet.pdf.
Dewey, J. (1938). Experience and education. New York: Macmillan.
DuFour, R., DuFour, R., & Eaker, R. (2008). Revisiting professional learning
communities at work: New insights for improving schools. Bloomington,
IN: Solution Tree.
DuFour, R., DuFour, R., Eaker, R., & Karhanek, G. (2004). Whatever it takes:
How professional learning communities respond when kids don't learn.
Bloomington, IN: National Educational Service.
DuFour, R., DuFour, R., Eaker, R., & Many, T. (2006). Learning by doing: A
handbook for professional learning communities at work. Bloomington,
IN: Solution Tree.
Duran, E., & Elva, D. (1980, August). Teaching reading to disadvantaged
Hispanic children based on direct instruction. Las Cruces, NM: New
Mexico State University.
EdSource (2007). EdSource: Resource cards on California schools. Mountain
View, CA: EdSource.
Elmore, R. (2003, November). A plea for strong practice. Educational
Leadership, 62(3), 6-10.
Friedman, T. (2008). Hot, flat, and crowded: Why we need a green revolution and
how it can renew America (1st ed.). New York: Farrar, Straus & Giroux.
Fullan, M. (2008). The six secrets of change: What the best leaders do to help
their organizations survive and thrive. San Francisco, CA: Jossey-Bass.
Fullan, M., Hill, P., & Crevola, C. (2006). Breakthrough. Thousand Oaks, CA:
Corwin Press.
Gage, N. L. (1978). The scientific basis of the art of teaching. New York:
Teachers College Press.
Good, T. L., & Brophy, J. E. (1987). Looking in classrooms. New York:
Harper & Row.
Gursky, D. (1991, October). Madeline. Teacher Magazine, 3(2), 28-34.
Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA:
Corwin Press.
Hill, J. D., & Flynn, K. M. (2006). Classroom instruction that works with
English Language Learners. Alexandria, VA: Association for
Supervision and Curriculum Development.
Hunter, M. (1982). Mastery teaching: Increasing instructional effectiveness in
elementary, secondary schools, colleges and universities. El Segundo,
CA: TIP Publications.
Hunter, M. (1986, November). Comments on the Napa County, California
follow-through project. Elementary School Journal, 87(2), 172-179.
Jackson, P. W. (1986) The practice of teaching. New York: Teachers College
Press.
Johnson, R. S. (2002). Using data to close the achievement gap: How to
measure equity in our schools (2nd ed.). Thousand Oaks, CA: Corwin
Press.
Karoly, L. (2001). Investing in the future: Reducing poverty through human
capital investments. In S. H. Danzinger, & R. H. Haveman (Eds.),
Understanding poverty (pp. 314-356). Cambridge, MA: Harvard
University Press.
King, B. M., & Minium, E. M. (2003). Statistical reasoning in psychology and
education (4th ed.). Danvers, MA: John Wiley and Sons.
Kuykendall, C. (2004). From rage to hope: Strategies for reclaiming Black and
Hispanic students (2nd ed.). Bloomington, IN: National Educational
Service.
Lachat, M. A. (2004). Standards-based instruction and assessment for English
Language Learners. Thousand Oaks, CA: Corwin Press.
Lencioni, P. (2002). The five dysfunctions of a team. San Francisco, CA:
Jossey-Bass.
Lyman, L. L., & Villani, C. J. (2004). Best leadership practices for high-poverty
schools. Toronto: Scarecrow Education.
MacIver, M. A., & Farley, E. (2003). Bringing the district back in: The role of
the central office in improving instruction and student achievement
(Report #65). Baltimore, MD: Center for Research on the Education of
Students Placed at Risk, Johns Hopkins University.
Marzano, R. J. (2003). What works in schools. Alexandria, VA: Association of
Supervision and Curriculum Development.
Marzano, R. (2006). The art and science of teaching. Alexandria, VA:
Association of Supervision and Curriculum Development.
Marzano, R. J., Walters, T., & McNulty, B. A. (2005). School leadership that
works: From research to results. Aurora, CO: Mid-continent Research
for Education and Learning.
McGrath, E., Di Pietro, L., & Marie, J. (1981, November 9). "Pricklies" vs.
"Gooeys." Time, pp. 1-2. Retrieved November 2, 2008, from
http://www.time.com/time/magazine/article.
McREL (Mid-continent Research for Education and Learning). (2005). Final
report: high-needs schools—what does it take to beat the odds?
(Technical Report). Aurora, CO: Mid-continent Research for Education
and Learning.
O'Connell, J. (2008). State of education address. Sacramento, CA: California
Department of Education.
Ormrod, J. E. (2006). Educational psychology: Developing learners (5th ed.).
Upper Saddle River, NJ: Pearson Education.
Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.).
Thousand Oaks, CA: Sage Publications.
Project Follow Through. (n.d.). Direct instruction research summary. Retrieved
January 12, 2009, from http://www.projectpro.com.
Reeves, D. (2003). Accountability in action. Toronto: Scarecrow Education.
Reeves, D. B. (2006). The learning leader: How to focus school improvement
for better results. Alexandria, VA: Association of Supervision and
Curriculum Development.
Resnick, L. (2003). Principles of learning: Study tools for educators (3rd ed.).
Pittsburgh, PA: University of Pittsburgh.
Roberts, C., Starkman, N., & Scales, P. C. (1999). Great places to learn: How
asset-building schools help students succeed. Minneapolis, MN: Search
Institute.
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning
organization. New York: Currency Doubleday.
Stevens, R., & Rosenshine, B. (1981). Advances in research on teaching.
Exceptional Education Quarterly, 2(1), 1-9.
Stodolsky, S. S. (1989). Is teaching really by the book? In P. W. Jackson & S.
Haroutunian-Gordon (Eds.), Eighty-ninth yearbook of the National Society for
the Study of Education, Part I (pp. 15–184). Chicago: University of Chicago
Press.
Sousa, D. A. (2006). How the brain learns (3rd ed.). Thousand Oaks, CA:
Corwin Press.
The Center for Comprehensive School Reform and Improvement (2005, May).
Things to remember during the teacher hiring season. Retrieved January
4, 2008, from www.centerforcsri.org.
Thorndike, R. L. (1973). Reading comprehension education in fifteen countries:
International studies in evaluation III. New York: Wiley.
Togneri, W., & Anderson, S. E. (2003). Beyond islands of excellence: What
districts can do to improve instruction and achievement in all schools.
Washington, D.C.: The Learning First Alliance and the Association for
Supervision and Curriculum Development.
Travernetti, G., & Rodriguez, F. (2006). Total Educational Systems Support,
1656 W. Escalon Avenue, Fresno, CA 93711.
Tucker, S. (1996). Benchmarking: A guide for educators. Thousand Oaks, CA:
Corwin Press.
United States Department of Education. (2002). No Child Left Behind Act of
2001. Retrieved May 1, 2008, from
http://www.nclb.gov/next/overview/index.html.
Waits, M. J., Campbell, H. E., Gau, R., Jacobs, E., Rex, T., & Hess, R. K. (2006).
Why some schools with Latino children beat the odds and others don't.
Phoenix, AZ: Morrison Institute for Public Policy.
Walberg, H. J. (1990). Productive teaching and instruction: Assessing the
knowledge base. Phi Delta Kappan, 71, 470-478.
Waters, T. & Cameron, G. (2007). The balanced leadership framework:
Connecting vision with action. Denver, CO: Mid-continent Research for
Education and Learning.
Waters, T. J., & Marzano, R. J. (2006). School district leadership that works:
The effect of superintendent leadership on student achievement. Denver,
CO: Mid-continent Research for Education and Learning.
Watkins, C. L. (1997). Project follow through: A case study of contingencies
influencing instructional practices of the educational establishment.
Cambridge, MA: Cambridge Center for Behavioral Studies.
Wrobel, S. (1996). The effectiveness of direct instruction on the various reading
achievement categories (Report No. CS 012-460). Washington, DC: United
States Department of Education. (ERIC Document Reproduction Service No.
ED 395 292)
Yoon, B., Burstein, L., & Gold, K. (n.d.). Assessing the content validity of
teachers' reports of content coverage and its relationship to student
achievement (CSE Rep. No. 328). Los Angeles: Center for Research on
Evaluation, Standards, and Student Testing, University of California, Los
Angeles.
APPENDIX A
TOTAL EDUCATION SYSTEMS SUPPORT
LESSON TEMPLATE
The template is a four-column planning table: Time, Lesson Plan Component,
Teacher Action, and Check Student Understanding (What questions and/or student
behaviors will inform you that students can move on?). The Teacher Action and
Check Student Understanding columns are left blank for the planner to complete.
The components, in order, are:

Unpacked Standard: What part of this standard can I reasonably expect to teach
in this lesson?

5-8 minutes (prepare students for lesson). Learning Objective: Is it clearly
defined, and does it match independent practice?

20-25 minutes, Teach (provide input). Preview (activate prior knowledge/provide
relevance) and/or Review (necessary sub-skills).

Included in the above time frame, Teach. Explain Procedural Knowledge: How does
the expert complete the task identified in the learning objective? Can include
modeling, graphic organizers, and steps.

10-15 minutes, Release. Guided Practice: Gradually release students toward
independent practice by practicing multiple variations and providing corrective
feedback.

5+ minutes. Independent Practice: Does the students' independent practice
indicate that you met the learning objective?
APPENDIX B
TOTAL EDUCATION SYSTEMS LESSON
DIRECT INSTRUCTION MODEL
Teacher: ___ School: ___ Date: ___

Each lesson component below is rated on a four-point scale: Not observed,
Observed developing, Observed proficient, or Exemplary.

Unpacked Standard.

Learning Objective: Is it clearly defined? Is it made clear to students?

Anticipatory Set: Activate prior knowledge, and/or review necessary sub-skills,
and/or provide relevance.

Explain Conceptual Knowledge: Are students taught the big idea, definition,
rule, etc.?

Explain Procedural Knowledge (Skill): Does the expert show how to complete the
task identified in the learning objective?

Guided Practice: Were students gradually released to independent practice with
necessary corrective feedback?

Closure: Did this serve as a final check before releasing students?

Independent Practice:
1. Does it match the learning objective? Y/N
2. Does it match guided practice? Y/N
3. Are 75%-80% of students able to complete the task successfully? Y/N

Commendations | Recommendations/Notes

Source: Travernetti and Rodriguez, 2006.
APPENDIX C
DEFINITION OF “SAFE HARBOR”
NCLB contains a “safe harbor” provision for meeting annual measurable
objectives (AMO) in some circumstances. The procedure is applied in the 2008
AYP reports when these circumstances occur. Safe harbor is an alternate method
of meeting the AMO. Specifically, if a school, a Local Education Agency (LEA),
or a subgroup does not meet its AMO criteria in either or both content areas and
shows progress in moving students from scoring below the proficient level to the
proficient level or above on the assessments, AYP may be achieved if all of the
following conditions are met:
• The percentage of students in the school, LEA, or subgroup performing
below the proficient level in either ELA or mathematics decreased by at least 10%
of that percentage from the preceding school year.
• The school, LEA, or subgroup had at least a 95% participation rate for
the assessments in ELA and mathematics.
• The school, LEA, or subgroup demonstrated at least a 1-point growth in
the API or had a Growth API of 620 or more.
• The school, LEA, or subgroup met graduation rate criteria, if applicable.
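Taken together, the four conditions above amount to a conjunction of simple checks. The sketch below is illustrative only: the function and parameter names are hypothetical, and the 10% decrease test is shown without the 75% confidence interval adjustment that the official calculation also applies.

```python
def meets_safe_harbor(pbp_prev, pbp_curr, participation_rate,
                      api_growth, growth_api, grad_rate_met=True):
    """Apply the four safe harbor conditions for a school, LEA, or subgroup.

    pbp_prev / pbp_curr: percent of students below proficient in the
    content area for the preceding and current school years.
    All names here are illustrative, not an official API.
    """
    # Condition 1: at least a 10% decrease in the percent below proficient
    decreased_10pct = pbp_curr <= 0.9 * pbp_prev
    # Condition 2: at least 95% participation in ELA and mathematics
    participation_ok = participation_rate >= 95
    # Condition 3: at least 1-point API growth, or a Growth API of 620 or more
    api_ok = api_growth >= 1 or growth_api >= 620
    # Condition 4: graduation rate criteria met, if applicable
    return decreased_10pct and participation_ok and api_ok and grad_rate_met

# 95% below proficient last year, 85% this year (85 <= 85.5), full participation:
meets_safe_harbor(95, 85, 100, api_growth=2, growth_api=650)   # True
```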
New in 2008, safe harbor for LEAs is applied to both grade spans and
numerically significant subgroups within grade spans of an LEA. A confidence
interval adjustment of 75% is applied to safe harbor calculations.
Example of Safe Harbor
In the safe harbor example shown in Table 21, the elementary school
shows 5% of its students scoring at the proficient level or above school-wide in
2007 in ELA (shown as PP07 in row D, column A). In 2008, the school’s percent
at the proficient or above level in ELA increased to 13% (shown as PP08 in row
D, column B). Except for ELA, the school met all the other criteria for making
AYP. (It made its AMO in mathematics, its API was above the target, and the
95% participation rate was met.) The school would not ordinarily make AYP in
2008 because 13% was below the AMO of 35.2% for ELA. However, the
school’s percentage at the below proficient level in ELA decreased by the safe
harbor requirement of at least 10% with the 75% confidence interval adjustment
(shown in the calculation steps in rows E through I). According to safe harbor
rules, the school meets AYP because the percentage of students below the
proficient level decreased by at least 10% from the preceding school year in ELA,
the content area in which AMO was not met, and it met its other AYP criteria.
Table 21

Safe Harbor Example Elementary School

Step                                              2007 ELA (A)   2008 ELA (B)   Calculations (C)
A. Number Proficient or Above (NP)                10 (NP07)      26 (NP08)
B. Number Below Proficient (NBP)                  190 (NBP07)    174 (NBP08)
C. Total Number of Valid Scores (TN)              200 (TN07)     200 (TN08)
D. Percent Proficient or Above (PP)               5 (PP07)       13 (PP08)      (NP/TN) x 100
E. Percent Below Proficient (PBP)                 95 (PBP07)     87 (PBP08)     100 - PP
   The 2008 rate must decrease by at least 10% from the 2007 rate to meet the
   safe harbor criteria.
F. Maximum Percent Below Proficient (MPBP)                       85.5 (MPBP)    0.9 x PBP07
   The maximum 2008 percent below proficient that still meets the safe harbor
   criteria.
G. Minimum Percent Proficient Safe Harbor (PPSH)                 14.5 (PPSH)    100 - MPBP
   The minimum 2008 percent proficient or above necessary to meet the safe
   harbor criteria in 2008.
H. 75% Confidence Interval (CI)                                  1.9911 (CI)    0.68 x SQRT(PP07 x PBP07/TN07 + PPSH x MPBP/TN08)
   The extra margin of error provided to the 2008 percent proficient or above.
I. 2008 Percent Proficient for 2008 Safe Harbor                  14.991 (PPCI)  PP08 + CI
   with 75% Confidence Interval (PPCI)
   If PPCI > PPSH, the safe harbor criteria were met.

Note. This school met the safe harbor criteria for the AMO in ELA because the "2008 Percent Proficient for
2008 Safe Harbor with 75% Confidence Interval" (14.9911) is greater than the "Minimum Percent Proficient
Safe Harbor for 2008" (14.5%).
Source: California Department of Education, 2008, pp. 41-42.
The 75% confidence interval provides an extra margin of error in the
calculations to enhance reliability and is applied to school and Local Education
Agency (LEA) reports, including the LEA grade span reports used to determine
whether an LEA is identified for program improvement.
The school met its 2008 AMO in mathematics school-wide, but the school
missed its 2008 AMO in ELA school-wide. Also in 2008, the school had at least
a 95% participation rate for both ELA and mathematics and a 2008 Growth API
of 620. The school had no numerically significant subgroups in either 2007 or
2008.
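The calculation steps in rows D through I of Table 21 can be reproduced directly from the formulas quoted in the information guide. A minimal sketch (the function name is hypothetical; the formulas are those shown in the Calculations column):

```python
import math

def safe_harbor_ela(np07, np08, tn07, tn08):
    """Reproduce rows D-I of Table 21 for one content area.

    np07/np08: number proficient or above; tn07/tn08: valid scores.
    Returns (criteria_met, ppci, ppsh)."""
    pp07 = np07 / tn07 * 100     # row D: percent proficient or above, 2007
    pp08 = np08 / tn08 * 100     # row D: percent proficient or above, 2008
    pbp07 = 100 - pp07           # row E: percent below proficient, 2007
    mpbp = 0.9 * pbp07           # row F: maximum percent below proficient
    ppsh = 100 - mpbp            # row G: minimum percent proficient (safe harbor)
    ci = 0.68 * math.sqrt(pp07 * pbp07 / tn07 + ppsh * mpbp / tn08)  # row H
    ppci = pp08 + ci             # row I: 2008 percent proficient plus margin
    return ppci > ppsh, ppci, ppsh

met, ppci, ppsh = safe_harbor_ela(10, 26, 200, 200)
# Reproduces Table 21: ppsh = 14.5, ppci is about 14.991, so safe harbor is met.
```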
ABSTRACT
The purpose of this case study was to analyze the impact of direct instruction on the performance of 151 elementary English Language Learners. Total Educational Support System (TESS) was the direct instruction model that was selected as the intervention. Data were drawn from the English Language Arts portion of the California Standards Test for the 2007 and 2008 administration. A mixed-method methodology was utilized with a primary focus on the quantitative portion to determine both statistical and practical significance using a dependent group design and a non-equivalent benchmark design. Qualitative information was gathered through informal interviews and observations.