COGNITIVE AND MOTIVATIONAL FACTORS THAT AFFECT THE
ACHIEVEMENT OF ELEMENTARY STUDENTS ON STANDARDIZED TESTS
by
E Don Kim
A Dissertation Presented to the
FACULTY OF THE ROSSIER SCHOOL OF EDUCATION
University of Southern California
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF EDUCATION
August 2003
Copyright 2003 E Don Kim
UMI Number: 3103915
Copyright 2003 by Kim, E. Don
All rights reserved.
UMI Microform 3103915
Copyright 2003 by ProQuest Information and Learning Company.
All rights reserved. This microform edition is protected against unauthorized copying under Title 17, United States Code.
ProQuest Information and Learning Company
300 North Zeeb Road
P.O. Box 1346
Ann Arbor, MI 48106-1346
University of Southern California
Rossier School of Education
Los Angeles, California 90089-0031
This dissertation, written by E Don Kim under the direction of his Dissertation Committee, and approved by all members of the Committee, has been presented to and accepted by the Faculty of the Rossier School of Education in partial fulfillment of the requirements for the degree of
DOCTOR OF EDUCATION
Date
Dissertation Committee
Chairperson
ACKNOWLEDGMENTS
For the daunting task of this doctoral dissertation, there are some individuals whom
I would like to thank. First, I wish to express my deepest gratitude to my committee
chair Dr. Harry O’Neil. His wisdom, friendship, and counsel carried me through the
process. His humor, enthusiasm, and willingness to assist me in any way needed inspired
me to continue the endeavor. In my opinion, he was the perfect chair. He was detailed
but never overbearing. He taught me much as a chair and as a professor. I also wish to
thank the other members of my committee, Dr. Stu Gothold and Dr. Larry Picus for their
assistance. Dr. Picus’ finance class is still probably the most memorable one of my
U.S.C. career. I learned much from the three courses I had with Dr. Gothold. I also thank
him for his wise guidance as my advisor throughout my years in the doctoral program.
All three gentlemen have my highest admiration and respect.
There are other students and colleagues whom I wish to thank for their assistance.
Dr. Ramona Chang, Dr. Susan Adams, Mrs. Ada Garza, Mrs. Christie Forshey, and Mrs.
Kim Hoops-Goolsby are all administrators in the Torrance Unified School District who
assisted me in this endeavor. They each provided me with data, their dissertations, or
their experiences for me to utilize. Fellow U.S.C. students Dr. Ashley Chen, Ms. Sabrina
Chuang, Dr. Rita Camarillo, and Ms. Lori Marshall each assisted me with sample
dissertations to view. Thank you to each of them. Thanks also go to Mr. John Martois
for his assistance with the statistical analyses that were generated for this dissertation.
His cheerful nature and careful explanations helped greatly.
To my parents, I owe a lifetime of gratitude for their support and guidance. It is
because of their belief in the importance of education that I continue to strive
academically and professionally. Lastly, I would like to express my gratitude to my wife
Rosa. Without her love, support, and sacrifice this task would have been much more
difficult. She endured years without having a husband who was fully mentally present,
as well as years of helping to pay for my schooling. She did much more than that,
however. She also spent dozens of hours organizing a database, entering data, and
generating statistics for this dissertation. Her love and dedication to me will never be
forgotten. She is the greatest.
TABLE OF CONTENTS
Acknowledgments ii
List of Tables vii
ABSTRACT ix
CHAPTER I INTRODUCTION 1
Background of the Problem 1
Statement of the Problem 3
Purpose of the Study 5
Significance of the Study 6
Definition of Terms 7
CHAPTER II LITERATURE REVIEW 9
Standardized Tests 10
Positive Aspects of Standardized Testing 10
Negative Aspects of Standardized Testing 15
Factors That Affect Student Achievement on Standardized Tests 16
Primary Language 17
Student Motivation 20
Anxiety 22
Teacher Attitudes 24
Gender 25
Socioeconomic Status 27
Summary of Literature 30
CHAPTER III METHODOLOGY 33
Research Design 33
Research Hypotheses 33
Pilot Study 34
Purpose of Pilot Study 34
Method of Pilot Study 35
Participants 35
Procedure 35
Categorizing Open Ended Responses 37
Estimation of Interjudge Reliability 39
Revised Estimation of Interjudge Reliability 42
Discussion of Pilot Study 43
Main Study 44
Method of Main Study 44
Participants 45
Procedure 46
Questionnaires 47
CHAPTER IV RESULTS AND FINDINGS 49
Data Analysis 49
Research Hypothesis One 53
First Written Response 53
Estimation of Interjudge Reliability 55
Second Written Response 57
Estimation of Interjudge Reliability 60
Revised Estimation of Interjudge Reliability 61
Research Hypothesis Two 65
Research Hypothesis Three 66
Research Hypothesis Four 67
Research Hypothesis Five 69
Other Findings 70
Gender 70
Primary Language 73
Teacher Attitude 75
Socioeconomic Status 77
Summary of Findings 80
CHAPTER V SUMMARY, DISCUSSION, CONCLUSIONS, RECOMMENDATIONS 83
Summary of Research Hypotheses 84
Discussion of Findings 85
Limitations 92
Recommendations 93
REFERENCES 98
APPENDICES
Appendix A: University Park Institutional Review Board Approval for Review of Research Involving Human Subjects 106
Appendix B: UPIRB Informed Consent for Non-Medical Research 107
Appendix C: UPIRB Assent Form for Research 112
Appendix D: Permission from the Torrance Unified School District Board of Education to Conduct Study 114
Appendix E: Survey About SAT-9/Standardized Tests (Pilot Study) 115
Appendix F: 10 Written Student Responses for Estimation of Interjudge Reliability 116
Appendix G: Questionnaire About SAT-9/Standardized Tests (Main Study) 117
Appendix H: How I Feel Questionnaire STAIC Form C-2 119
Appendix I: Sample Student Responses for the Question: What do you think helps you do better on these tests? 120
Appendix J: Sample Student Responses for the Question: Why do you think it is important to take tests like these? 122
LIST OF TABLES
1. Pilot Study Standardized Tests Questionnaire Results 36
2. Open Ended Question Responses from Pilot Study 38
3. Sample Frequency of Open Ended Question Response Types 40
4. Estimation of Interjudge Reliability (Pilot Study) 41
5. Revised Estimation of Interjudge Reliability After Raters Conferred 42
6. Types of Responses for Question: “What do you think helps you to do better on these (standardized) tests?” 54
7. Estimation of Interjudge Reliability for Question #1 55
8. Chi Square Comparing Cognition and Motivation on “What do you think helps you do better on these (standardized) tests?” Written Responses 57
9. Types of Responses for Question: “Why do you think it is important to take tests like these?” 59
10. Estimation of Interjudge Reliability for Question #2 60
11. Revised Estimation of Interjudge Reliability After Raters Conferred 61
12. Chi Square Comparing Cognition and Motivation on “Why do you think it is important to take tests like these?” Written Responses 64
13. Attitude Scores By Grade Levels 65
14. Anxiety Scores By Grade Levels 66
15. Raw Score Means by Grade Level 68
16. Anxiety and Total Reading and Math Scores by Grade Level 68
17. Attitude and Total Reading and Math Scores by Grade Level 69
18. Gender with Total Reading, Total Math, Anxiety Score and Attitude Score: Group Statistics 71
19. T-test Comparing Gender with Total Reading, Total Math, Anxiety Score and Attitude Score: Independent Samples Test 71
20. English and Non-Native English Speaking with Total Reading, Total Math, Anxiety Score and Attitude Score: Group Statistics 74
21. T-test Comparing English and Non-Native English Speaking with Total Reading, Total Math, Anxiety Score and Attitude Score: Independent Samples Test 75
22. Teacher Attitude with Total Reading, Total Math, Anxiety Score and Attitude Score: Group Statistics 76
23. T-test Comparing Teacher Attitude with Total Reading, Total Math, Anxiety Score and Attitude Score: Independent Samples Test 77
24. Comparing Socioeconomic Status with Total Reading, Total Math, Anxiety Score and Attitude Score: Descriptive Statistics 79
25. ANOVA Comparing Socioeconomic Status with Total Reading, Total Math, Anxiety Score and Attitude Score 80
ABSTRACT
Standardized testing in the American public education system (K-12) has been a
popular and sometimes controversial method to assess the learning completed by
students in schools for decades. However, in recent years the latest wave of reform in American education, often called the “accountability and assessment movement,” has resulted in standardized testing growing in magnitude and importance.
Doing well or poorly on these tests can affect individual students’ growth and opportunities, e.g., retention and gifted and talented programs. These results can also mean that principals get fired or that a school receives hundreds of thousands of
dollars each year. Standardized tests like the Stanford Achievement Test, Version 9
(SAT-9), given to most of the public and private school students in states like California,
could also affect the election or re-election of governors and perhaps even presidents.
The literature review in this dissertation examined why standardized tests are given
to students. There were both positive and negative reasons why they were utilized.
Next, factors that affected the achievement of students were discussed. Many of these
factors have been studied in depth by researchers and educators. Factors that were
reviewed in this dissertation include primary language, student motivation, anxiety,
teacher attitudes, gender, and socioeconomic status.
The central question for this investigation was whether cognition or motivation
plays the greater role in the achievement of elementary students on standardized tests. The
Elementary students were asked to respond to questions regarding these issues. The
methodology for data collection involved the use of questionnaires. A total of 305 elementary
students in grades 3, 4, and 5 answered questions about their attitudes and anxiety about
standardized tests. There was a pilot study and a main study. The purpose of the pilot
study was to test whether or not students of this age group could coherently answer
survey questions, to generate categories for the open ended responses, and to refine the
survey questions for the main study. The pilot was completed and the feasibility of the
main study was established. The main study was conducted in a manner similar to the
pilot study. Interjudge reliability between two separate raters of the responses was
estimated and was satisfactory.
There were five research hypotheses that were tested in this study. Hypothesis 1:
Elementary students will perceive their success on standardized tests to be
predominantly due to motivational factors, not cognitive ones. The first research
hypothesis was partially supported by the data collected, statistics generated, and
analysis completed. For this first hypothesis, there were two different questions that students wrote responses to. For the first question, “What do you think helps you do better on these tests?” the data did not support the hypothesis. However, for the second
question, “Why do you think it is important to take tests like these?” the results did
support it.
Hypothesis 2: Elementary students will have less positive attitudes towards
standardized tests as they are promoted from grades 3 through 5. The second research
hypothesis was not supported. The attitude of students did not change significantly as
they grew older. Hypothesis 3: The anxiety of elementary students will be inversely
related to their attitude about standardized tests. This research hypothesis was not
supported. For the entire sample there was no significant relationship between the
variables.
Hypothesis 4: The anxiety of elementary students will be inversely related to
their Stanford Achievement Test, Version 9 mathematics and language arts test scores.
This research hypothesis was partially supported. There was statistical evidence to
support that there was a relationship between anxiety and mathematics but not with
language arts. Hypothesis 5: The attitudes of elementary students will be positively
related to their SAT-9 test scores in mathematics and language arts. The last research
hypothesis was partially supported. There was statistical evidence to support a
relationship between attitude and language arts but not with mathematics.
There were some other results that came about from questions raised by the
literature review. With respect to gender, girls in this study did not have higher anxiety
than boys overall. With respect to primary language, non-native English speakers had
poorer attitudes and these students performed worse on the language arts portion of
standardized tests. With respect to teacher attitude, students who thought that their
teacher liked standardized tests had a more positive attitude towards these tests also.
Finally, with respect to socioeconomic status, students did not demonstrate a more
positive attitude or higher anxiety whether they were of high, medium, or low SES.
In summary, the central question for this investigation was whether cognition or
motivation played the greater role in the achievement of elementary students on
standardized tests. Elementary students were asked to respond to questions regarding
these issues. After data was collected and analyzed, the conclusion was that both
cognition and motivation play significant roles in determining how well elementary
students perform on standardized tests.
CHAPTER I
INTRODUCTION
Background of the Problem
There have been several waves of reforms for the American public education
system beginning in the eighteenth century. (Cuban, 1992) We are still firmly
entrenched in the assessment and accountability reform movement that began in the
early 1980s. (Odden, 2000) In response to the cries of politicians, the media, and parents, the public education system set a course to prove that it could continue to serve the needs of American children as it had done for hundreds of years. (Colvin, 2001; Elam,
Rose & Gallup, 1992; Hoff, 2001) The assessment and accountability reform began with
the media announcing the decreasing scores by high school students on standardized
tests like the Scholastic Aptitude Test (SAT-I). These scores continued to decline throughout the 1970s. (Odden, 2000) Blame began to focus upon the public education
system, primarily by politicians. At first it was “back to basics” and the “three R’s” that
politicians stated were needed in American schools. Schools were not properly
completing the task of educating America’s children. (Wirt and Kirst, 2000) Claims such
as these were explicitly stated in government-published reports like “It’s Elementary.” (California Department of Education, 1992) Recent presidents, whether Republican or
Democrat, e.g. George H.W. Bush, Bill Clinton, and now George W. Bush, all joined the
call for American schools to be held more accountable. (Odden, 2000)
One of the primary platform planks upon which George W. Bush campaigned for president in 2000 was more accountability in schools. He proposed to bring many
of the reforms that he had championed while Governor of Texas to the federal, national level. His educational program is now being called “No Child Left Behind.” (Bush, 2001) Based upon his election platform, Congress has now completed what has been called “the biggest overhaul of the federal role in education in 25 years” according to one member of the House of Representatives. (Anderson, 2001, p. A1) This
“overhaul” included increased use of standardized testing, improved teacher
qualifications, definitions for adequate yearly progress of low performing schools to
hold them accountable, and standards for learning English for Limited English Proficient
students. President Bush signed this bill into law on January 8, 2002. (Anderson, 2002)
Some of the specifics of this education reform bill are that all states were
required to implement annual standardized reading and math tests for students in grades
three through eight by the 2005-06 school year. States are required to set high targets for
student proficiency in reading and math by 2013. States are required to set benchmarks
for different ethnic groups and non-English speaking students so that they become
proficient in English within three years of attending American schools. States must
submit plans to have all teachers qualified in their subject area within four years. This
may include states requiring teachers to take subject area tests or college classes.
Schools that do not adequately improve after two years will be given federal funds for
assistance. In the 2002 budget year, Congress authorized $26.4 billion for K-12 public education. (Anderson, 2001) Parents of students in schools that do not improve could have the option of sending their children to other local public schools or of receiving
aid for tutoring. Schools that do not meet improvement goals after five years will be
restructured. The faculty or principal may be changed. A state may take over any public
school or convert it to a charter school. (Anderson, 2001) All of the language of the bill will have to be worked out by federal courts eventually. However, it is expected that the use of standardized tests will increase as a result of the new federal law.
Statement of the Problem
There are several different factors that may affect an individual student’s scores
on standardized tests. These factors include primary language, student motivation,
anxiety, teacher attitude, gender, and socioeconomic status. More attention has been paid
to these factors in states where “high stakes testing” has taken place. (Muir, 2001;
Sackett, Schmitt, Ellingson & Kabin, 2001) “High stakes testing” can be defined as
“(testing) programs designed to measure not only the achievement of students, but also
of teachers, principals, and schools. It was also used to describe assessment tools that
can have a variety of consequences.” (DeCesare, 2001, p. 10) For example, in California
all public school students in grades 2 through 11 now take the Stanford Achievement
Test, Version 9 (SAT-9). At this time, from this one test alone, every public school in
California is ranked on an Academic Performance Index (API) which ranges from 200 to
1000 points. Schools with an API of over 800 are considered “High Performing
Schools.” Schools that meet their target improvement goals each year receive money and
all of the employees at the school, whether they are teachers or not, receive cash
bonuses. (Keller, 2000) For the year 2000 test administration, the figures were $591 per
employee and $63 to the school for each student. The employees each received the
bonuses. The school received the money per student. A school with 1,000 students, for example, received $63,000 to do with whatever it wished.
Schools that do not meet their target improvement goals could be “reconstituted,”
where the principal is fired and the state takes control of the school. (DeCesare, 2001)
This “reconstitution” is the extreme and has not yet occurred at any schools in the state of California. (Bushman, Goodman, Brown-Welty, & Dorn, 2001) Poorly performing schools that are ineligible for cash bonuses are, however, eligible to receive state money
to assist in improving. For example, a grant of approximately $200 per student is
available. This means a school with 1,000 students could receive up to $200,000. More
likely, though, what has occurred at a school with poor results is more parental pressure,
school board and administrative pressure, reduced enrollment as parents take their
children to private schools, to charter schools, to home schooling, to other schools in the
district on intradistrict permit, or to other schools outside their district on interdistrict
permit, and a loss of professional prestige for the school and faculty. (DeCesare, 2001)
Intradistrict permits are granted to students who live in the same school district but wish
to attend another school in the same district. Interdistrict permits are granted to students
who live outside a school district to attend a school inside of it.
An “off the shelf” multiple choice, standardized test like the SAT-9 was an
inexpensive and simple tool for politicians to demonstrate how their educational policies
were effective. How inexpensive are they? In 2000, the estimate was that states spent
approximately $105 million on standardized tests. This figure represents only .003% of
the total costs of education spent by the states that year. (Cizek, 1998) The Governor of
California, Gray Davis, stated that he would not run for reelection unless test scores went
up. (Fording, 2001) Since overall scores have increased, the Governor has interpreted
this to claim that his educational policies are effective. He is currently running for
reelection touting that “test scores are up three years in a row.” (Davis for Governor
television advertisement, January, 2002) For most parents, a single number like an API
score is an easy way to tell if their local school is doing well or not. With these federal
and state political and economic issues, it is most likely that standardized testing will
continue to increase in the next few years in California.
Purpose of the Study
The primary purpose of this study was to determine whether cognitive or motivational factors played the greater role in elementary students’ perceptions of achievement on standardized tests. There was good reason to study the factors that affect student scores on standardized tests, particularly in California, where testing has
increased. The Standardized Testing and Reporting (STAR) program officially began there in
1998. (Fording, 2001) There were previous programs and systems that utilized
standardized tests to measure the achievement of students in the state also. (DeCesare,
2001) The STAR utilized the Harcourt Company’s published test, the Stanford
Achievement Test, Version 9 (SAT-9). For the first two years of its administration, 1999
and 2000, the SAT-9 California version was made up of only multiple choice questions.
With the 2001 administration, it also included one essay question prompt for 4th, 7th, and 10th graders to write to. That year there were also changes in the subject matter tested. 4th graders were no longer tested in science and 5th graders were no longer tested in social studies
as they had been in the first two years of the test. Now all elementary students up through fifth grade are tested only in language arts and mathematics in California.
Primary language, student motivation, anxiety, teacher attitude, gender and
socioeconomic factors each plays a role in how students perform on standardized tests.
There have been many studies conducted in these different areas: in primary language
August and Hakuta (1998), Garcia (2000), and Gandara (1997); in student motivation
Bandura (1997) and O’Neil (2000); in anxiety Covington (1992) and Spielberger (1980,
1982); in teacher attitude Woolfolk (2001); in gender Halpern (1992) and Gallagher
(1998), and in socioeconomic status Kohn (2000, 2001). However, relatively little
research has been conducted upon the specific topic of elementary-aged children and
their perceptions of standardized tests. This study investigated the topic.
Specifically, the central research question that was examined in depth in this
study was whether cognitive or motivational factors played the larger role in affecting
elementary students’ achievement on standardized tests. If the answer was cognitive,
then in-class time with their teacher developing their critical thinking and memorization skills would be paramount for students to do well. If the answer was motivational, then
students performed better when given an incentive to strive for.
Significance of the Study
It was the author’s hope that, by contributing to the basic knowledge of the factors that affect students’ achievement on standardized tests, all stakeholders,
including parents, teachers, administrators, school board members, politicians, test
makers, educational researchers, etc., could make better and more well-informed decisions
about these tests. There have been relatively few studies conducted about elementary
students’ perceptions and attitudes towards standardized tests. O’Neil (1992, 2000) has
conducted studies on middle school and high school age students about their
performance on standardized tests. This study involved elementary age students directly
and what they perceived as assisting them in achieving higher scores on these
standardized tests.
Definition of Terms
Standardized tests are ones that are given nationwide under uniform conditions
and scored according to uniform procedures. (Woolfolk, 2001) They can be multiple
choice questions only, or only essay question prompts that students write to, or a
combination of both. For the purposes of clarity, when the term “standardized test” is
used in this dissertation, it means multiple choice only standardized tests unless
otherwise stated. This study dealt with grades 3, 4, and 5 and those students’ multiple choice test scores. Most standardized tests are norm-referenced, which means a student’s individual scores on the test are compared to a national average. (Woolfolk, 2001) The
most common type of standardized test is called an achievement test. It is meant to test
what a student has learned in a specific subject. The best known test in California is the
Stanford Achievement Test. (DeCesare, 2001) It is a norm-referenced, primarily
multiple choice, achievement test. In California, all students in grades two through
eleven in public schools are required to take the Stanford Achievement Test, Version 9.
In grades two through five, elementary students are tested only in language arts and
mathematics. In grades six through eleven, students are tested in language arts,
mathematics, science, and social studies. (California STAR Report, 2001) Many private
schools in the state also utilized the SAT-9 or other well known standardized tests like
the Iowa Test of Basic Skills, Comprehensive Test of Basic Skills, and California
Achievement Test. (Cizek, 1998) Some schools in California also tested kindergarteners,
1st graders, and high school seniors. (Ragland, 2001)
CHAPTER II
LITERATURE REVIEW
The purpose of this chapter is to review the available literature on the subject of
standardized testing for this study. The chapter is divided into
four parts. The first section is information about standardized tests. The second is a
summary of the positive and negative aspects of standardized testing. The third section
concerns the available research on the various factors that can affect the achievement of
students on these types of tests. These include primary language, student motivation,
anxiety, teacher attitude, gender, and socioeconomic status. The final section is a
summary with conclusions and implications.
The majority of information gathered was from the years 1990 to 2002. The databases utilized to search for information were PsycInfo, ERIC, and HOMER. The
keywords specified during the searches include: “standardized tests, achievement tests,
Stanford Achievement Test, public schools, K-12 education, elementary, variables, and
factors.” Journals and periodicals that were hand searched for related articles were the
2001 and 2002 editions of Phi Delta Kappan and Education Week, and other
measurement and policy journals. Several texts, research studies, and other printed
resources utilized for university courses were examined. Several documents from
federal, state, county, and school district sources that are routinely delivered to the author of the study were also utilized.
Standardized Tests
Standardized tests are tests that are given nationwide under uniform conditions
and scored according to uniform procedures. (Woolfolk, 2001) Most standardized tests
are norm-referenced which means any student’s individual scores on the test are
compared to a national average. The most common type of standardized test is called an
achievement test which is meant to test what a student has learned in a specific subject.
The best known test in California is the Stanford Achievement Test. It is a norm-
referenced achievement test. The current version utilized to test students in grades two through eleven is the ninth version; therefore, it is commonly called the SAT-9.
Positive Aspects of Standardized Testing
Each year, approximately three million students in California take the SAT-9
standardized test. (Keller, 2001) There are several reasons why so many students take
this exam and why standardized tests, in general, are very effective assessment tools for
education. There are several different types of standardized tests that are given to
students throughout the United States every school day. Some are utilized to test whether students are intellectually gifted or talented in order to be considered high achieving students. In California this program is called Gifted and Talented Education.
Once qualified, a student has access to a variety of during school, after school, weekend,
and summer programs, each for the purpose of enrichment. Many magnet, private, and
charter schools utilize standardized tests to determine admittance. Performance on standardized tests like the SAT-I or the Graduate Record Examination can mean entrance to or denial from practically all colleges and universities in the United States.
This study specifically examined standardized tests that were utilized for the purpose of keeping schools accountable. After Assembly Bill 65 passed, several changes occurred in California public schools. (DeCesare, 2001) One portion of the law required that school districts track the writing abilities of students. In the school district where the pilot study and main study of this dissertation took place, all
fourth, seventh, and tenth graders were required to take a standardized test in writing.
Because of this requirement, many elementary principals believed that students’ writing
skills have improved. (Chang, 2000)
The Stanford Achievement Test that California students took in 1999-2001 was
approximately twelve hours long. Second through fifth graders answered multiple choice questions about mathematics and language arts. Sixth through eleventh graders answered multiple choice questions about mathematics, language
arts, science, and social studies. (Harcourt, 2001) For most of these students, the test was
entirely multiple choice questions. Only fourth, seventh, and tenth graders had a 30
minute writing section. The reasons why the state of California selected to utilize the
SAT-9 in its testing system, the STAR, were both political and economic. The Stanford
Achievement Test has been in existence for over thirty years; it was well known and
earlier versions were already being utilized in California public schools for various
reasons. (DeCesare, 2001) In addition, there was a lobbying effort by the Harcourt
Company for the state to purchase its test. Harcourt promised to adapt its SAT test
specifically to align with the California state content standards by the 2002
administration. (Marzano, 2001) This was still in the process of occurring with that test
administration. Overall, administering standardized tests like the SAT-9 was
inexpensive for states. Performance assessments would be much more expensive. (Kohn,
2000)
After the results of the students’ performance were known, then each public
school in California was ranked with an Academic Performance Index (API) number
from 200 to 1000. A school with an API above 800 was called a “Higher Performing
School” and was considered doing well. Schools below 400 were considered
“dangerously low.” Schools between 400 and 800 should be doing better according to
the state. (Harcourt, 2001) This was a very simple measurement that all stakeholders
could understand. This included the state department of education, politicians, school
boards, superintendents, principals, teachers, parents, and students. Anyone could
quickly and easily recognize if a school was doing its job of educating its students
properly according to these standards.
Another useful purpose of standardized tests is that a state or country could
determine how its education system compared with other states or nations. For example,
the best known study comparing countries was the Third International Mathematics and
Science Study (TIMSS) which was originally conducted in 1995. There was a follow up
study in 1999 called the TIMSS-R. (TIMSS, 2000) A total of 42 countries participated in the study.
Of the 27 nations that participated in the fourth grade mathematics and science portion
of TIMSS, American students were in the middle, placing 15th. Among eighth graders, American students placed 14th out of 25 nations that participated in that portion. American twelfth graders did not fare well; they placed 19th out of 21 nations that
participated in that portion. (TIMSS, 2000) These statistics could be quite useful in
discovering highlights or lowlights in the American public education system.
The National Assessment of Educational Progress (NAEP) is an assessment program that
has collected achievement data and kept statistics since 1969 about education in the
United States. It calls its annual reports “the Nation’s Report Card.” Its statistics
demonstrate that the mathematics scores of American fourth graders have increased from 213 to 228 in the last ten years. (U.S. Dept. of Education, 2001) Meanwhile, in the last ten years, the reading scores of fourth graders in the U.S. went from 217 to 214 and back to 217. When these data were disaggregated, the results were both interesting and useful. For example, reading scores during this time went up for White students, down for Hispanic students, and stayed the same for Black students. (U.S. Department of Education, 2001)
In math, fourth grade boys increased 15 points (214 to 229) while girls only increased 13
points (213 to 226). Californian fourth graders have improved from 208 to 214 in the last
ten years. However, that was still considerably behind the top states like Massachusetts
and Minnesota at 235 each. This information is another example of the benefits of
standardized testing.
Perhaps the most important reason why standardized tests are utilized so much in
public education is that they are both reliable and valid measures of student learning
and achievement. “Current standardized tests provide exceptionally reliable and valid
information about student achievement.” (Cizek, 1998, p. 33) As stated previously,
standardized tests are administered under uniform conditions and scored according to
uniform procedures. Most standardized tests are norm-referenced, which means any
student’s individual scores on the test are compared to a national average. The process by which standardized tests like the SAT-9 are normed is that each test question is answered by a cross section of same-age students in the United States. Each test question is meant to be missed by half of the test takers. If too many students get a question right or wrong, then it is not utilized in the following year’s administration.
This process, along with the uniform conditions and procedures, assured reliability in
standardized testing. “Reliability refers to the consistency of such measurements when the testing procedure is repeated on a population of individuals or groups.” (American
Educational Research Association, 1999, p. 25)
Standardized tests like the SAT-9 are also valid measures of a student’s progress
in learning. “Validity refers to the degree to which evidence and theory support the
interpretations of test scores entailed by proposed uses of tests.” (American Educational
Research Association, 1999, p. 9) Haertel (1999) stated that standardized tests like the
SAT-9 were valid because they were large scale; millions of students took them every year.
In 1999, the SAT-9 was aligned with national content standards for each of the subjects
of language arts, mathematics, social studies, and science. Beginning with the 2002 test
administration, the SAT-9 will also be aligned with the California state content standards
in each of those subject areas. (Harcourt, 2001) Therefore, California students were
being tested on the topics that they should have learned in class during each of their
school years.
Negative Aspects of Standardized Testing
There is a large body of research about the negative aspects of standardized
testing. There are authors who state that this type of assessment is bad educational
practice. Kohn (2000, 2001) called for educators to cease immediately all standardized
testing practices. He stated various reasons why. They included that schools lower their
standards by having students study only certain topics or types of questions in order to score high on one test. This practice leads to superficial thinking. Another reason was that standardized tests do not measure what was actually learned. A good example of this was that, at the time, students in California still had not taken a standardized test aligned with the California state content standards. (Marzano, 2001) A final reason why standardized tests are bad practice, according to Kohn, was that there were many
other much better measures available like performance assessments.
Linn (2000) stated that the current standardized testing system needed reform.
He called for using multiple measures instead of just one test. No school’s effectiveness could be measured by one test, he stated. Standardized testing is currently utilized to exclude students from opportunities; it should be the opposite, according to Linn (2000). The emphasis should be placed upon year-to-year results instead of the current practice that emphasized school-versus-school results; a school should only be compared to itself in the past. And lastly, all stakeholders should be aware of the uncertain aspects of test results reporting. If one did not understand stanines, percentiles, quartiles, etc., then any statistics could be utilized to prove any point that anyone wished to make. (Linn, 2000)
Other authors like Popham (1999) also discussed the negative aspects of
standardized tests. He stated that test scores did not truly reflect the education that
occurred at a school. Such tests were a misleading estimate of a staff’s effectiveness. The process by which standardized tests were normed was flawed, he stated. For example,
each test question is meant to be missed by half of the test takers. Then when schools’
scores were reported in percentiles, half always would be below average and half above
average. At an individual school, every student in a grade could be above grade level and
getting good grades, but could still be below the 50th percentile. Even if a school
increased substantially in its raw scores from the previous year, it may still decrease in
the percentiles if every other school also performed well. (Popham, 1999)
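A brief numerical sketch may make this point concrete (the figures below are hypothetical illustrations, not data from Popham, 1999): a school whose raw scores rise can still fall in percentile rank whenever the norming group rises by more.
\[
\text{Year 1: school mean} = 60 = \text{state mean} \;\Rightarrow\; \text{at the 50th percentile}
\]
\[
\text{Year 2: school mean} = 66 < 70 = \text{state mean} \;\Rightarrow\; \text{below the 50th percentile}
\]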
Nolen, Haladyna, and Haas (1992) reported the above issues involving norming
and percentiles as some reasons why teachers and principals did not like and did not trust
standardized tests. There were other reasons also according to the researchers. They
reported that in general, educators did not believe that these tests were valid instruments
for measuring a student’s knowledge or a school’s educational effectiveness. Few
educators believe that one test can truly represent what learning took place in a school;
only multiple measures could begin to tell the real story. (Nolen, Haladyna, and Haas,
1992)
Factors Affecting Student Achievement on Standardized Tests
The remainder of this literature review will focus upon factors that influence
student scores on standardized tests. The primary areas of focus were primary language,
student motivation, anxiety, teacher attitudes, gender, and socioeconomic status.
Primary Language
Primary language is an important factor that affects student performance on
standardized tests. Valdes and Figueroa (1994) have reported research in the area of
bilingualism and test taking. They stated “When a bilingual individual confronts a
monolingual test, developed by monolinguistic individuals, and standardized and
normed on a monolingual population, both the test taker and the test are asked to do
something that they cannot. The bilingual test taker cannot perform like a monolingual.”
(Valdes and Figueroa, 1994, p. 11) Their conclusion was that standardized tests were not
an accurate measure of a Limited English Proficient (LEP) student’s knowledge. Their
conclusions were supported by many other authors and experts. (August and Hakuta,
1997, 1998; Garcia, 1991; Gandara, 1997; Willig, 1998)
August and Hakuta (1997) have written much on the subject of assessing how
and what students learn in a second language such as English. They reported that the
ability to read was the key to being able to perform on standardized tests. Even topics
such as social studies and science were language dependent on achievement tests. If a
student could not read proficiently in English, there was little possibility to assess what
he or she knew with this type of assessment. August and Hakuta explained that with
students who were learning English for the first time, reading comprehension and
vocabulary could be problematic. On standardized tests, there were many words that the
native speaker knew but the non-native one may not know. Often limited English
speaking students have not been exposed to macro structures like paragraphs, topic and
supporting sentences, compare and contrast essays, etc.
Garcia (1991) also cautioned against relying on standardized test scores for
limited English speaking students. She identified many testing factors that would hinder
their performance on any reading based test. They included unfamiliar passage
vocabulary, paraphrase vocabulary in test items, unfamiliar passage topics, scriptally
implicit questions, and time limitations. She analyzed test results for Hispanic students
in elementary and middle schools and compared them to English only speaking students.
She also conducted interviews with these students. Her research demonstrated that the
bilingual students had much more knowledge in oral interviews than a standardized test
could measure. As Garcia concluded, information about the home culture and language
of these bilingual students was rarely taken into account for these testing situations.
Because of these factors, students whose first language was not English could not be accurately assessed by an English language standardized test, Garcia (1991) concluded.
Gandara (1997) has written about measurement issues like those on standardized
tests. As she reported, almost all evaluations of programs and assessments of limited English proficient students sought to have them perform at the same level as English-only students. All standardized tests were based on the assumption that the student at the 50th percentile
was properly achieving at grade level. This was not an equitable measurement tool for
students whose primary language was not English. All native speaking students, she argued, have a “head start” in language upon entering school. The L.E.P. students must
“catch up” with them. While limited English proficient students were learning first how
to read and write, their counterparts were learning subject specific vocabulary, rhetorical
modes, and other more advanced language features. L.E.P. students must be offered
more remediation and enrichment in order to achieve at the level of native speakers
according to Gandara (1997).
Willig (1988) has also written on the subject of testing bilingual students. She argued that a primary reason why limited English speaking students do poorly on all tests was the language component of those tests. This included a lack of exposure to the vocabulary used on tests. But more importantly, the entire test was loaded with language written in a way in which only native speakers would understand its nuances and subtleties. She stated that socioeconomic factors also played a role in test taking ability. Bilingual students tended to be of lower class; tests were generally normed with a representative sample of high, middle, and lower class students. This could affect the language utilized for standardized tests also. Willig concluded that there was bias built into this type of testing (1998). This was the view of other researchers like Miller-
Jones (1989), Ogbu and Simmons (1998), and Zehr (2000).
With students who have learned some English, standardized tests could appear to accurately measure depth and breadth of knowledge.
results could be very deceptive according to some authors. Hakuta and Garcia (1989)
and Hakuta, Ferdman, and Diaz (1987) have demonstrated that single language tests
only measure the monolingual part of the bilingual. In that respect, achievement tests
were accurate. However, they did not measure any of the mental content, processes, or
abilities in the second language. There were many other factors involving
conceptualization, translation, associative recall, and retrieval involved that were not
taken into account according to these authors.
Student Motivation
Motivation is another important factor that affects the scores of students on
standardized tests. Woolfolk (2001) defined motivation as an internal state that arouses,
directs, and maintains behavior. If one is motivated to do well in school or any education-related endeavor, then one will. There are two types of motivation. (Gottfried, 1990) There is the intrinsic type, in which the motivation is associated with activities that are their own reward. Then there is the extrinsic type, which is created by external factors like rewards and punishments. Researchers agreed that
intrinsic motivation was better for educational purposes and long range goals and tasks.
(Stipek, 1998)
O’Neil (2000) has reviewed much of the recent literature on motivation. He
asked the research question of what factors other than cognitive ones, e.g. students’ low prior knowledge, academic engagement, teachers’ lack of professional preparation, or lack of standards, could be responsible for low test scores. He stated that motivational factors like effort, self-efficacy, and anxiety must be considered as well. If motivation had as great an effect on test achievement as cognitive factors, then states, districts, and schools should change the way they administer standardized tests, according to O’Neil (2000).
Motivation tended to decrease over time as students got older. This was the conclusion reached by Paris (1991). He studied why elementary students in the United States tended to perform better on standardized tests, in comparison to their peers in other countries, than their American high school counterparts did. He stated that after
repeated administrations of these tests, the cumulative effect upon students was that
there was a negative impact on their motivation to do well. They were less motivated to
try hard, became disillusioned about testing in general, and utilized inappropriate test
taking strategies. Eventually this undermined the validity of the standardized tests; the
tests did not measure actual student learning, according to Paris (1991).
Many other researchers have concluded that motivation decreases as students
become older. Baker (2000) attempted to motivate high school seniors with cash
incentives to try harder on Advanced Placement tests, but most students would not try
harder. Other examples of motivation decreasing as students become older included
research by Barnett and Hixon (1997) and Wentzel, Weinberger, Ford, and Feldman
(1990).
There were many factors, according to Goins (1993), that influenced student
motivation. Self-efficacy was one factor that was closely related to motivation. Woolfolk
(2001) defined self-efficacy as a person’s sense of being able to deal with a particular
task. Perhaps the most important work on this topic has been done by Bandura (1997).
According to his research, if a student believed that he or she could learn how or was capable of completing the actions for finishing a job or reaching a specific goal, then he or she was more likely to achieve. This was true of standardized tests also. If students had a high sense of self-efficacy, then they were much more likely to perform better on any test
according to Bandura (1997). This was also the conclusion reached by Wiggins, Schatz,
and West (1994).
Brookhart and DeVoge (1999) completed a quantitative study on the role of
self-efficacy. They concluded that there is a positive causal relationship between
students’ sense of self-efficacy and completion of tasks, amount of effort, and
achievement in many classroom assessments. Good, clear, and positive communication
from the teacher to his or her students helped to provide them with the feeling that they
could accomplish much and do well. Teacher attitude was important also; this study thus has some significance for the section on teacher attitude discussed later in this chapter. They concluded that there was a positive causal relationship between student self-efficacy and standardized test achievement. (Brookhart and DeVoge, 1999)
Anxiety
Anxiety affects how students perform in school and plays a significant role in how well elementary students perform on standardized tests. It can be described as
“uneasiness, foreboding, or tension.” (Woolfolk, 2001, p. 384) The study of anxiety has
continued for decades. Anxiety is both a cause and an effect of school failure; students
perform poorly because of it and then their poor performance increases their anxiety. It
can affect how students perform in all aspects of school and education. The reason why
anxiety interferes with school achievement is that when a student learns something new
or is being tested, he or she must be concentrating. Instead, students with high anxiety
divide their time between the learning/cognitive skill and the preoccupation with how nervous they feel. (Tobias, 1985)
Standardized tests cause much anxiety among students. The time factor involved
had some effect. For example, anxious elementary age students performed as well as
non-anxious ones when there was no time limit on completing math problems. However,
when there was a time limit, anxious students performed significantly worse than their
non-anxious counterparts. (Hill and Eaton, 1977) Students also knew that tests like the
SAT-9 would be greatly scrutinized by their parents, teachers, and principal, thus
causing more test anxiety. Spielberger (1983) also discussed why anxiety was a factor on
standardized tests for young children. For example, he stated that intellectually gifted
students performed better on tests because they were more emotionally stable and less
nervous.
According to Covington (1992), there were two types of anxiety, trait and state.
Trait anxiety existed when a student was generally anxious about completing a variety of
tasks. State anxiety existed only temporarily in students in certain situations, like taking
a standardized test. Spielberger (1970) has developed a State-Trait Anxiety Inventory for Children (STAIC) to specifically measure whether elementary age students were experiencing
state or trait anxiety. This anxiety inventory was utilized for collecting data for this
dissertation. Costello, Hedl, Papay, and Spielberger (1975); O’Hearn, Spielberger, and Vagg (1980); Hedl and Papay (1982); and Papay and Spielberger (1986) have
completed studies measuring kindergarten through fourth grade students with the
STAIC. They concluded that the older students become, the more anxious they become; this was the case for both the state and trait types of anxiety.
In a recent study completed by Peleg-Popko (2002), the level of students’ test
anxiety was greatly affected by family interaction. This research included hundreds of
parents and their elementary age children. The researcher concluded that students’ level of anxiety was decreased by the encouragement of their parents. This took the form of
autonomy primarily; children who were allowed a degree o f independence and decision
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
24
making capability, along with consistent and clear guidelines, had more self-efficacy.
These parameters led to a less anxious student.
Teacher Attitudes
Teacher attitudes are also closely tied with motivation and anxiety. It is not only
the duty of teachers to teach and of students to learn, but it is also a primary duty of
teachers to motivate their students to achieve at higher levels. Motivation could have a
critical effect on any test. If a teacher made his or her students feel that performing well on a
standardized test was important, then they would perform better. (Frisbie and Andrews,
1990) Along with student self-efficacy, there was also teacher efficacy, according to
Greenwood, Olejnik, and Parkay (1990). They defined it as a teacher's belief that he or
she could reach even the most difficult students to help them learn. It was one of the few
characteristics of teachers that was correlated with student achievement. Teachers with a
high sense of teacher efficacy would try harder and longer with all of their students. This
also correlated with achievement on standardized tests. (Hoy and Woolfolk, 1990)
Differences in teacher attitudes could have a significant effect on student
achievement, as hypothesized by Brown and Walberg (1993). In a quantitative study, the
way teachers gave instructions on how to proceed with a standardized test made a
significant difference in their students' scores. When teachers gave directions in a clear,
fun, and enthusiastic manner, students got higher scores. With monotone and dull
directions, students did not perform as well. This was just one example of teacher
attitude affecting the performance of students on standardized tests.
Gender
There has been some recent research about the effects of gender on standardized
testing. No definitive study has answered the question of whether boys or girls perform
better on standardized tests. Some studies have demonstrated
that males scored higher (Gallagher, 1998) and some have demonstrated that females did
(Slate, Jones, Turnbough, and Bauschlicher, 1994). Some studies have even concluded
that on some types of standardized assessments, females performed better and on other
types, males performed better. In the 1980s and early 1990s, the research supported the
belief that boys performed better. It was widely believed that males outperformed
females in the areas of math and science. (U.S. Department of Education, 2001) This
was supported by large-scale national and international assessments like the TIMSS
reports of 1995 and 1999, and by NAEP mathematics statistics for as long as that
assessment has existed.
The idea that males outperformed females was also demonstrated by many
measures like attendance, classes taken, and state test scores according to Sheehan,
Cryan, Wiechel, and Bandy (1991). In another extensive study, Gallagher (1998)
examined what differences exist involving gender and standardized tests. She stated that
there were factors responsible for why boys performed better on standardized tests while
girls did better on other types of tests. The factors included hard work, persistence, and
initiative, which were each more typically valued by parents in their sons than in their
daughters. This was part of a socialization process that favored boys in performing some
tasks better, according to the author. Biology also made a difference.
According to Gallagher’s research, there was a genetic disposition of boys toward
subjects like math and science. Teacher attitude and feedback to students also mattered.
Comments by teachers to girls in class tended to be academic in nature while feedback to
boys tended to be behavioral in nature. The result was that girls thought they were wrong
in answering questions on subject matter or content more often. Courses taken could be a
factor also. Males on average had taken more math and science courses than females upon
entering college. This could hurt female performance not only because of a lack of prior
knowledge but also in the area of self-confidence. Males tended to be more confident
than females in most areas of schooling, including while taking tests.
(Gallagher, 1998)
From a cognitive processing perspective, work by Halpern (1996) demonstrated
that males were better at certain content domains while females were better at others.
Women excelled at tasks that required rapid access to and retrieval of information from
memory. Men excelled at tasks that required the retention and manipulation of a mental
representation. Standardized tests tend to rely heavily on the quick mental manipulation
tasks at which men excelled. Therefore she concluded that there was a gender difference:
that males performed better than females on standardized tests for the above stated
reasons.
Males performed better in math and science according to national measures like
the NAEP; however, in other subjects, like language arts, females outperformed males. For
the last 15 years, females have outperformed males on the reading portion of the NAEP.
(U.S. Department of Education, 2001) In recent years, research studies have
demonstrated that females outperformed males in many areas of schooling, including on
some types of standardized tests. Slate, Jones, Turnbough, and Bauschlicher's (1994)
research demonstrated that the contention that males performed better on standardized
tests had diminished and actually reversed in recent years. Utilizing the Stanford
Achievement Test with hundreds of students, their results demonstrated that females
performed better especially on the language arts portion. This was consistent with many
other studies. For example, Sheehan, Cryan, Wiechel, and Bandy (1991) demonstrated
that females outperformed males on standardized tests and that this effect was multiplied
when girls began school at an earlier age.
In the most recent study published on the subject, Helwig, Anderson, and Tindal
(2001) demonstrated that gender was not a significant factor in how students performed
on standardized math and reading tests. Originally, the study measured whether gender
influenced teachers’ perceptions of students’ math achievement in the third and fifth
grades. One of the other findings of the study was that gender was not a contributing
factor in how well those elementary school students performed on standardized tests. After
taking into account all of the above research, it is evident at this time that there are
conflicting views on how gender affects student performance on standardized tests.
Socioeconomic Status
Socioeconomic status is defined as relative standing in society based on
income, power, background, education, and prestige. (Woolfolk, 2001) According to
some researchers, socioeconomic status is the primary factor in determining student
performance on standardized tests. (Kohn, 2000, 2001) "Don't let anyone tell you that
standardized tests are not accurate measures. The truth is that they offer a remarkably
precise method for gauging the size of the houses near the school where the test was
administered.” (Kohn, 2001, p. 349) Most educators would recognize the truth in this
statement. According to Kohn, the vast majority of empirical investigations on this topic
had found that socioeconomic status accounts for an overwhelming proportion of the
variance in standardized test results.
The education level of parents is one component of socioeconomic status. The
number of years of education that one had was directly related to compensation. (USA
Today, 8/20/00) The children of well-educated parents would perform higher on
standardized tests according to researchers like Donegan and Trepanier-Street (1998).
They also concluded that ethnic groups like Caucasians and Asians tended to attain
greater levels of education than some other ethnic groups like African Americans and
Hispanics in the United States. Because of the higher education level of their
parents, students from these ethnic groups tended to hold education and testing in higher
regard. (Donegan and Trepanier-Street, 1998) This cultural
perspective, along with parents’ socioeconomic status, explained why some students
performed better than others on standardized tests according to these authors.
In the state of California, the Academic Performance Index (API) was also
related to socioeconomic status. Based on the SAT-9 alone, every public school in
California was ranked with an API index score between 200 and 1000. (Keller, 2001)
Each school was rated by one easily recognizable number, and these scores were
published in local newspapers. What then tended to occur was that wealthier families
moved into areas whose schools already had high test scores, thus ensuring that high-SES
students were grouped together homogeneously even more. (Kohn, 2001)
California's API system did try to take into account socioeconomic status when it
reported the scores of schools, in an effort to differentiate good schools in poor areas.
The API system reported school scores in two different ways. The first was by overall
rank, a comparison with all schools in the state. (Linn, 2001) A school ranked as a "7"
out of 10, for example, was in the 60th to 70th percentile of schools. The second
way schools were ranked was by a similar schools formula. This took into account
socioeconomic status by including parents' education level, use of free transportation,
Title I funding, mobility of students, and students on free and reduced lunch. Each
school was then compared only to the 100 schools closest to it on those
factors. The assumption was that those 100 schools would be similar in socioeconomic
status. Therefore a school with a "4" in the similar schools rank did only as well as the
30th to 40th percentile of the 100 schools closest to it in SES. In this
system there would be some 1/10 schools in California (in the 1st to 10th percentile in
Overall Rank but in the 90th to 99th percentile in Similar Schools) and some 10/1
schools. Unfortunately, many school districts and some newspapers did not publish the
similar schools rankings of the local schools. (Linn, 2000)
The elementary school (School X) that participated in the pilot study, as
described more in Chapter III, provides a good example of the API similar schools
ranking. School X’s API index for 2001 was 832. Any school over 800 was considered a
“High Performing School.” School X was a “9” out of 10 on the Overall Rank. This
meant when compared to all schools in the state, it performed somewhere in the 80th to
90th percentile. However, in the Similar Schools rank, School X was only a "2." This
meant when it was compared to only the 100 schools in the state that were most similar
to it in socioeconomic status, it was somewhere in the 11th to 20th percentile of those
schools.
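For illustration, the arithmetic behind these 1-10 rankings can be sketched in a few lines of Python. This code is an editorial illustration rather than the state's exact formula; the function name and example scores are invented, and the mapping simply follows the percentile bands described above.

def decile_rank(api_score, comparison_scores):
    """Return a 1-10 rank based on the percentile of comparison schools outscored."""
    below = sum(1 for s in comparison_scores if s < api_score)
    percentile = 100 * below / len(comparison_scores)
    # Percentiles 0-9 map to rank 1, 10-19 to rank 2, ..., 90-100 to rank 10.
    return min(int(percentile // 10) + 1, 10)

# A school outscoring 15 of its 100 "similar schools" falls in the 11th to 20th
# percentile of that group, i.e., a similar schools rank of 2, as with School X.
similar_schools = [820.0] * 15 + [850.0] * 85
print(decile_rank(832.0, similar_schools))  # prints 2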
Summary of Literature
The purpose of this chapter has been to summarize the available literature on the
topic of standardized tests and the factors that may affect the performance of elementary
students on them. Standardized tests continued to be inexpensive, simple to understand,
and a useful tool in assessment and measurement. (Cizek, 1998) There were many
authors, however, who believed that these tests were negatively impacting children.
(Fisher, 2001; Stiggins, 2001; Kohn, 2001; Linn, 2000; Popham, 1999)
There are many factors that affect student scores on standardized tests.
According to recent research, there are six factors of critical importance.
Primary language is an important factor that affected standardized test scores. There
have been many studies completed and much literature written to demonstrate that
standardized tests did not fairly or adequately measure limited English proficient
students’ knowledge. (Valdes and Figueroa, 1994; August and Hakuta, 1997; Willig,
1998) Related to this was the idea of cultural bias in the standardized testing process.
Some argued that all of these types of tests were biased in some way.
Student motivation is a very important factor on standardized tests. If a student
was intrinsically motivated to do well, then he or she would achieve higher. (Stipek,
1998; Bandura, 1997) If a student felt that he or she could accomplish a task or could learn
how to, he or she was much more likely to actually complete tasks and perform better on
tests; this was due to self-efficacy. Anxiety is another important factor. It diminished
performance in all aspects of schooling. (Covington, 1992) Both the state and trait
types of anxiety caused elementary students to perform worse on standardized tests.
(Papay and Spielberger, 1986)
Teacher attitude is also critical to the performance of elementary students on
standardized tests. What a teacher said and did in the classroom affected his or her
students greatly. If a teacher conveyed to students that a test was important, then they
would perform better. (Brown and Walberg, 1993) The sense of teacher efficacy
displayed would especially affect the lowest-achieving students in a class. The teacher's
job was to motivate students. Gender was another factor that could affect standardized
test scores. According to researchers there was a gender difference. (Gallagher, 1998)
In some areas of testing, like reading and writing, females outperformed males, while in
other areas, like most multiple-choice standardized tests, males outperformed females.
(Halpern, 1992)
Socioeconomic status is the final critical factor affecting student performance on
standardized tests. According to some authors, it was the primary factor in
determining performance on standardized tests. That there was a direct causal relationship
between SES and test performance had been suggested in many research studies. (Kohn,
2000, 2001) The state of California recognized this and therefore incorporated SES into
the similar schools rankings of its API index. When these six important factors were
looked at together, one could begin to
In this study, the factors that have been discussed in the literature review were
investigated. Elementary students in grades 3, 4, and 5 were asked to answer questions
about each of the above factors: primary language, student motivation, anxiety, teacher
attitude, gender, and socioeconomic status. They were also asked to write about why
they thought standardized tests were important and what they thought assisted them in
performing better on them. A central focus of the study was to attempt to understand
whether cognitive/learning factors or motivational factors played the more important
role in how elementary students performed on standardized tests.
CHAPTER III
METHODOLOGY
Research Design
The primary purpose of this study was to determine whether cognitive or
motivational factors played the greater role in elementary students’ perceptions of
achievement on standardized tests. The primary method of data collection was through
questionnaires. This design was composed of a pilot study and a main study.
Research involving elementary age children is sensitive. The University of
Southern California University Park Institutional Review Board for the Review of
Research Involving Human Subjects granted approval for this dissertation (see Appendix
A). Parental permission was obtained for students who participated in the research (see
Appendix B), as well as permission from the students themselves (see Appendix C). Approval to
conduct this study had also been granted by the board of education of the school district
in which the research was conducted (see Appendix D). Permission of the principals and
teachers at the individual schools was also given. All participation was voluntary. There
was no reward or incentive to participate. All of these steps were taken by the researcher
so as not to place pressure on any institution or individual.
Research Hypotheses
The hypotheses that were tested during this study included the following:
Hypothesis 1: Elementary students will perceive their success on standardized
tests to be predominantly due to motivational factors, not cognitive ones.
Hypothesis 2: Elementary students will have less positive attitudes towards
standardized tests as they are promoted from grades 3 through 5.
Hypothesis 3: The anxiety of elementary students will be inversely related to
their attitude about standardized tests.
Hypothesis 4: The anxiety of elementary students will be inversely related to
their Stanford Achievement Test, Version 9 mathematics and language arts test scores.
Hypothesis 5: The attitudes of elementary students will be positively related to
their SAT-9 test scores in mathematics and language arts.
Pilot Study
Purpose of Pilot Study
There were three primary purposes of the pilot study, which was completed first.
The first purpose addressed feasibility: could elementary students complete a survey
involving yes/no questions on their attitudes and perceptions about standardized tests as
well as their thoughts on how their teachers and parents feel about these types of tests?
There was also an open-ended question that asked the students to identify what they
thought helped them to perform better on standardized tests. The second purpose was to
gather information on the topic of what factors affect elementary students’ perceptions
of their achievement on standardized tests. The third purpose was to conduct a formative
evaluation of the methods and procedures for the main study. The topic of the main
study, as stated in Hypothesis 1, was which type of factor, cognitive or motivational,
played the greater role in elementary students' perceptions of their performance on
standardized tests.
Method of Pilot Study
Participants
The pilot study took place in an elementary school in Southern California. School
X was located in a middle-class, suburban area. It served kindergarten through the 5th
grade. The school's population was 860 students. It was approximately 40% Caucasian,
40% Asian, 10% Hispanic, and 10% other. Approximately 40% of the students were
English Language Learners whose first language was not English. Approximately six
percent of the students were on free and reduced lunch. The majority of students lived in
stable households with nuclear families. School X had a relatively stable faculty and
administration. Nothing unusual or worthy of noting (e.g. violence, teacher strike, forced
administration change, etc.) had occurred there in recent years.
A total of 386 3rd, 4th, and 5th graders at School X were participants in the pilot study.
Responses were gathered from 120 3rd graders, 145 4th graders, and 121 5th graders. All
participants' data were utilized in the analysis; no participants were dropped from the
analysis. In the school district of which School X was a member, all students in grades 2
through 11 took the Stanford Achievement Test, Version 9 (SAT-9) each May, including
special education and Limited English Proficient students.
Procedures
School X administered the SAT-9 from May 14 through May 25, 2001, for
approximately two hours on each of those days. The day following the completion of
all of the tests, in each of the 3rd, 4th, and 5th grade classes, teachers asked students to
complete a survey of ten yes/no questions and an open-ended response question. This
survey is in Appendix E. After the surveys were completed, teachers collected them and
left them in the school’s office. These surveys were then analyzed. An item analysis was
completed on each survey question. The results are in Table 1.
Table 1
Pilot Study Standardized Tests Questionnaire Results

1. Do you like standardized tests like the SAT-9?
   3rd graders: Yes 78 (70%), No 33 (30%); 4th graders: Yes 70 (49%), No 74 (51%); 5th graders: Yes 53 (43%), No 69 (57%)
2. Do you think it is important to take these kinds of tests?
   3rd graders: Yes 109 (99%), No 1 (1%); 4th graders: Yes 140 (95%), No 7 (5%); 5th graders: Yes 116 (95%), No 6 (5%)
3. Do you think you did well on the SAT-9 test?
   3rd graders: Yes 102 (95%), No 5 (5%); 4th graders: Yes 121 (83%), No 25 (17%); 5th graders: Yes 86 (72%), No 33 (28%)
4. Do you think your teacher likes these tests?
   3rd graders: Yes 75 (67%), No 37 (33%); 4th graders: Yes 106 (72%), No 41 (28%); 5th graders: Yes 83 (72%), No 33 (28%)
5. Do you think your parents like these tests?
   3rd graders: Yes 92 (75%), No 30 (25%); 4th graders: Yes 105 (72%), No 40 (28%); 5th graders: Yes 93 (78%), No 26 (22%)
6. Do you speak English to your parents at home?
   3rd graders: Yes 82 (73%), No 31 (27%); 4th graders: Yes 120 (79%), No 32 (21%); 5th graders: Yes 91 (78%), No 25 (22%)
7. Do you live at home with both your parents?
   3rd graders: Yes 91 (81%), No 21 (19%); 4th graders: Yes 121 (85%), No 23 (15%); 5th graders: Yes 95 (83%), No 20 (17%)
8. Did either of your parents finish college?
   3rd graders: Yes 99 (93%), No 8 (7%); 4th graders: Yes 119 (86%), No 19 (14%); 5th graders: Yes 95 (87%), No 14 (13%)
9. Are you a boy or a girl?
   3rd graders: 59 boys, 61 girls; 4th graders: 73 boys, 72 girls; 5th graders: 60 boys, 61 girls
10. What do you think helps you do better on these tests? (open-ended responses)
As may be seen in Table 1, an analysis of the data demonstrates the following
results. Among third graders, 70% stated that they liked standardized tests, while only
43% of fifth graders said they did. This demonstrated the trend that the older students
become, the more negative their attitudes toward standardized tests become. Overall,
98% of the students surveyed in grades three through five thought that standardized tests
were important. Among third graders, 95% thought that they did well on the recent
SAT-9 test, while only 72% of fifth graders thought they did well. Of all of the students
in grades three through five, 70% thought that their teacher liked standardized tests and
75% thought that their parents liked standardized tests. In addition, 24% of the students
spoke to their parents in a language other than English, 83% lived with both of their
parents, and 89% stated that one or both of their parents was a college graduate. Of the
participants, 192 were boys and 194 were girls.
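An item analysis of this kind can be reproduced with a short script. The sketch below (Python with pandas) is offered only as an editorial illustration; the data frame, column names, and responses are invented, not the pilot data. It tabulates yes/no answers by grade and converts the counts to percentages like those in Table 1.

import pandas as pd

# Invented example responses; the real pilot study had 386 students and ten items.
responses = pd.DataFrame({
    "grade": [3, 3, 3, 4, 4, 4, 5, 5],
    "q1_like_tests": ["yes", "no", "yes", "no", "yes", "no", "no", "yes"],
})

# Yes/no counts per grade, then each row expressed as percentages.
counts = pd.crosstab(responses["grade"], responses["q1_like_tests"])
percentages = counts.div(counts.sum(axis=1), axis=0).mul(100).round(1)
print(counts)
print(percentages)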
Categorizing Open Ended Responses
After the tabulation of the yes/no questions from the student questionnaires
shown in Table 1, a method for logging and categorizing responses to the open-ended
question was produced. The question was: What do you think helps you do better on
these (standardized) tests? The researcher looked at 100 of the responses from the
students from each of the grades and sorted them into 17 categories. In order to assist in
this categorization, the researcher wrote down as many possible responses as could be
thought of before the responses were read, 67 in all. In some cases, there was no written
response at all. In some cases, the researcher could not discern a coherent response. And
in some cases, the student wrote much and included information about two, three, or
even more categories. All of the responses were synthesized into 17 types which are
listed in Table 2.
Table 2
Open Ended Question Responses from Pilot Study
“What do you think helps you to do better on these (standardized) test?”
Response Types
1. Anxiety
2. English Language Learner
3. Good/bad test taker
4. Incentives
5. Multiple choice and essay
questions
6. Nothing can help
7. Practice test taking skills
8. Previous education
9. Self-esteem and self-efficacy
10. Sleep and nutrition
11. Studying
12. Study specific subjects
13. Time factor
14. Tutoring/extra classes
15. Multiple responses
16. Unclear response
17. No response
On the open-ended response question, What do you think helps you do better on
these (standardized) tests?, students gave a variety of answers. Some responses were
short, just a few words. Some were long, with students writing beyond the allocated
space. There were also some completely blank surveys (approximately n = 4); it is
possible that some of these came from limited English proficient students not yet able to
write in English.
In order to test the categorization further, a sample of 25 student responses was
placed into these categories. As stated, all of the responses were categorized into the types
listed in Table 2. Some responses were typical for children, like "give us candy." Other
responses clearly demonstrated the higher-level cognitive skills possessed by some
elementary age students. Examples of this higher-level thinking included the ability to
analyze their own test-taking behavior (e.g., rereading questions), processes for organizing
subject matter (e.g., a spelling versus a vocabulary question), scaffolding new knowledge
onto old (e.g., math word problems), note-taking skills (e.g., writing information down
while in class), memorization devices (e.g., times tables), and many other ideas and methods.
Estimation of Interjudge Reliability
Next the researcher randomly selected ten surveys from 5th graders, read the
responses, and logged them into the 17 categories. This is shown in Table 3.
The actual written responses of the students were included in Appendix F.
Table 3
Sample Frequency of Open Ended Question Response Types (n = 10)
Response Types Frequency % of Total Responses
1. Anxiety
2. English Language Learner 1 10
3. Good/bad test taker
4. Incentives 1 10
5. Multiple choice and essay
questions
6. Nothing can help
7. Practice test taking skills
8. Previous education 1 10
9. Self-esteem and self-efficacy 1 10
10. Sleep and nutrition 1 10
11. Studying 2 20
12. Study specific subjects 1 10
13. Time factor
14. Tutoring/extra classes 1 10
15. Multiple responses 1 10
16. Unclear response
17. No response
Totals 10 100%
Of the responses, eight of the ten were each the only one in a particular category.
Two responses fell into the same category, Studying. This demonstrated a
variety of response types. Next the researcher had another individual categorize the same
ten surveys. The second rater, like the first, was an elementary administrator; both
worked in the same school district. The second rater independently categorized the
same responses. This is demonstrated in Table 4.
Table 4
Estimation of Interjudge Reliability (Pilot Study) (n = 10)
Written Response Rater #1 Rater #2 Agreement
1 Category #2 Category #7 No
2 Category #14 Category #14 Yes
3 Category #15 Category #15 Yes
4 Category #11 Category #11 Yes
5 Category #9 Category #9 Yes
6 Category #4 Category #4 Yes
7 Category #10 Category #4 No
8 Category #11 Category #16 No
9 Category #8 Category #11 No
10 Category #12 Category #11 No
Percent Agreement 50%
A comparison of Raters 1 and 2 demonstrated partial agreement: five of the ten responses
were categorized the same by both raters, demonstrating a 50% estimation of interjudge
reliability.
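Percent agreement of this kind is mechanical to compute. As an editorial illustration (Python, not part of the original study), the 50% figure follows directly from the category assignments listed in Table 4.

# Category assignments of the two raters for the ten pilot responses (Table 4).
rater_1 = [2, 14, 15, 11, 9, 4, 10, 11, 8, 12]
rater_2 = [7, 14, 15, 11, 9, 4, 4, 16, 11, 11]

agreements = sum(a == b for a, b in zip(rater_1, rater_2))
percent_agreement = 100 * agreements / len(rater_1)
print(f"{agreements} of {len(rater_1)} agree ({percent_agreement:.0f}%)")  # 5 of 10 agree (50%)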
Revised Estimation of Interjudge Reliability
An interjudge reliability of 50% was not considered high enough for this type of
research. Therefore the following method was utilized to increase the accuracy of the
categorization. Raters 1 and 2 conferred and came to a joint decision about their
results. The revised results are shown in Table 5.
Table 5
Revised Estimation of Interjudge Reliability After Raters Conferred (n = 10)
Written Response Rater #1 Rater #2 Agreement
1 Category # 2 Category #2 Yes
2 Category #14 Category #14 Yes
3 Category #15 Category #15 Yes
4 Category #11 Category #11 Yes
5 Category #9 Category #9 Yes
6 Category #4 Category #4 Yes
7 Category #4 Category #4 Yes
8 Category #11 Category #16 No
9 Category #8 Category #8 Yes
10 Category #12 Category #12 Yes
Percent Agreement 90%
After a ten minute discussion about the meaning of various terms utilized in the category
headings, Rater 1 changed one of his responses to a different category and Rater 2
changed three of her responses to different categories. The result of the changes was that
nine of the ten responses were now categorized into the same type by both raters. This
demonstrated an acceptable 90% revised estimation of interjudge reliability. The same
raters performed similar tasks in the main study.
Discussion of Pilot Study
The pilot study yielded some interesting results. It has been
anecdotally perceived that the older students become, the less they like tests in general.
The results of the survey clearly demonstrated a wide gap among students that were only
two years apart in age. While 70% of third graders stated that they liked standardized
tests, only 49% of fourth graders did, and the figure dropped further to 43% of fifth
graders. It also appeared that as students got older, they perceived that they did not do as
well on standardized tests. This was demonstrated by 95% of third graders believing that
they did well on the recent SAT-9 test, compared with only 83% of fourth graders and
72% of fifth graders. It was obvious that elementary students believed that taking
standardized tests was important. This belief may be due in part to parental, teacher,
school, or peer pressure to do well. Clearly the majority of these students believed that
their teachers (70%) and parents (75%) liked them to take standardized tests. Other
results involving primary language, socioeconomic status, and parents were unremarkable
and expected for the area in which the school was located.
A thorough data analysis of the types of categories for the open ended question
was not completed as its function was to estimate feasibility and suggest changes for the
main study. The analysis that occurred in the pilot study was completed to refine
questions for the survey in the main study and to train raters for the estimation of
interjudge reliability for the main study.
The following changes were made based on the pilot study: (1) a second open-ended
response question was added (Why do you think it is important to take tests like these?);
(2) items measuring anxiety were added from the State-Trait Anxiety Inventory for
Children, Form 2 (STAIC) (Spielberger, 1972); and (3) some of the yes/no questions were
changed to responses on a Likert scale of 1-4 (1 = almost never, 2 = sometimes, 3 =
often, 4 = almost always). The revised questionnaire for the main study can be found in
Appendix G. Appendix H contains a copy of the How I Feel Questionnaire, STAIC Form
C-2, that was also given to the students to complete.
Main Study
Method of Main Study
This dissertation was completed in two different phases. Phase 1, the pilot study
as described above, was completed to improve the feasibility of the main study. In Phase
2, the data collection for the main study occurred in April of 2002. The purpose of the
main study was to attempt to determine whether cognition or motivation played the greater role
in affecting elementary student achievement on standardized tests. The same five
research hypotheses from the pilot study guided the main study:
Hypothesis 1: Elementary students will perceive their success on standardized
tests to be predominantly due to motivational factors, not cognitive ones.
Hypothesis 2: Elementary students will have less positive attitudes towards
standardized tests as they are promoted from grades 3 through 5.
Hypothesis 3: The anxiety of elementary students will be inversely related to
their attitude about standardized tests.
Hypothesis 4: The anxiety of elementary students will be inversely related to
their Stanford Achievement Test, Version 9 mathematics and language arts test scores.
Hypothesis 5: The attitudes of elementary students will be positively related to
their SAT-9 test scores in mathematics and language arts.
Participants
The participants in the main study consisted of 305 students in grades 3 (n = 95),
4 (n = 96), and 5 (n = 114). They attended two different elementary schools in the same
school district as School X in the pilot study. School X did not participate in the main
study. The elementary schools that participated in the main study were similar in
size, socioeconomic status, and student population to each other and to School X. The first,
which will be called School Y, had 679 students. It was approximately 42%
Caucasian, 41% Asian, 8% Hispanic, and 9% other. Approximately 39% of the students
were English Language Learners whose first language was not English. Approximately
five percent of the students were on free and reduced lunch. The second school, which
will be called School Z, had 621 students, of which approximately 31% were Caucasian,
30% Hispanic, 24% Asian, and 15% other. Approximately 51% of the students were
English Language Learners whose first language was not English. Approximately 14
percent of the students were on free and reduced lunch.
Procedure
Permission had been obtained from the board of education of the school district
of which all three participating schools were members. Principals of the
schools volunteered. These principals then asked teachers in their schools who taught
3rd, 4th, and 5th grades to volunteer to participate in this study. Out of a possible 25
classrooms, the teachers of 23 (92%) volunteered. These teachers then asked students to
volunteer to participate in the study. Out of a possible 595 students in those classes, 377
(63%) participated by attempting to complete the questionnaire. The approval of the
University of Southern California University Park Institutional Review Board for the
Review of Research Involving Human Subjects had previously been obtained.
Questionnaires were delivered to the two participating sites to principals who
then distributed them to participating classrooms. Teachers passed out questionnaires to
students with the following instructions:
“Class, for those of you who brought back the permission form
signed by your parents, we are participating in a research project by the
University of Southern California. You know, U.S.C. They want to know
what you think about standardized tests like the SAT-9 that you took last
May and will take again this May. Your answers will be kept confidential,
secret. No one else except U.S.C. will look at your papers. Please write
your name, age, and date on the first page that says “How I Feel
Questionnaire” on the top. Today is _____________ . Do not write any
teacher’s names anywhere on the papers. Then read and answer the 20
questions by circling the answers. On the next page that says
Questionnaire About SAT-9/Standardized Tests on the top, read and circle
answers for the first 12 questions. Then read and think about questions
#13 and #14 and write an answer for each one. Any questions? Go ahead
and get started.”
Students who did not participate in completing the questionnaire performed various
other tasks or assignments while remaining in the classroom. Some read textbooks or a
novel of their choosing, worked on math, vocabulary, or spelling worksheets, researched
in encyclopedias or on a computer, etc.
This data collection occurred in April of 2002. All of the surveys were collected
by the teachers and picked up by the researchers. Each of the public schools in
California, including those that participated in this research, had last administered the
SAT-9 in May of 2001 and again administered it from May 13 through May 24, 2002.
Questionnaires
The questionnaires administered for the main study in classes by teachers were made up
of both the one created by the researchers, which had been refined and revised after the
pilot study (see Appendix G), and the How I Feel STAIC Form C2 (see Appendix H).
The STAIC Form C2 portion of the questionnaire was made up of 20 questions
about students' feelings of anxiety at school in general. Students recorded their responses on
a three-point Likert scale of "hardly ever" = 1 point, "sometimes" = 2 points, or "often" = 3
points. Questions on the researcher-made portion of the questionnaire were answered on
a Likert scale of 1-4 instead of with yes/no answers as in the pilot study. For scoring
purposes, "almost never" = 1 point, "sometimes" = 2, "often" = 3, and "almost always" = 4
points.
There was an additional open-ended response question involving students’
perceptions about the importance of standardized tests: “Why do you think it is
important to take tests like these?” The original question from the pilot study remained
on the questionnaire: “What do you think helps you do better on these tests?” It took
students approximately 20 to 40 minutes to complete the questionnaires.
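The scoring just described amounts to summing item codes. The sketch below (Python) is an editorial illustration in which one student's response values are invented; it assumes the Attitude Score described in Chapter IV (the sum of questions 5-11) and deliberately ignores any reverse-keyed items in the STAIC's published scoring instructions.

# One invented student's coded responses; not actual study data.
staic_responses = [2, 1, 3, 2, 2, 1, 1, 3, 2, 2,
                   1, 2, 2, 3, 1, 2, 2, 1, 3, 2]               # 20 items coded 1-3
attitude_items = {5: 3, 6: 2, 7: 4, 8: 3, 9: 2, 10: 4, 11: 3}  # questions 5-11 coded 1-4

anxiety_score = sum(staic_responses)            # simple sum; possible range 20-60
attitude_score = sum(attitude_items.values())   # possible range 7-28
print(anxiety_score, attitude_score)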
CHAPTER IV
RESULTS AND FINDINGS
This chapter reports the results of the data collection of this research study. Data
collection was completed in April of 2002 at the two previously mentioned elementary
schools with 3rd, 4th, and 5th graders. A total of 377 questionnaires were originally
collected; however, many had to be omitted. Five were omitted because there was no
name on the questionnaire. Three were omitted because very few items were marked.
Sixty-four questionnaires were omitted because the student did not take the SAT-9 the
previous year. Therefore, the study was based upon 305 fully completed questionnaires
from students who had completed the SAT-9.
3rd Grade 4th Grade 5th Grade Total
Questionnaires turned in 106 130 141 377
Complete Questionnaires 95 96 114 305
Data Analysis
The collected data were analyzed. The software program SPSS, Version 10, was
utilized for generating statistics. All of the answers to the researcher-made questionnaire
were tabulated in a format similar to that of the pilot study. Each variable was correlated with
every other one for exploratory purposes. Questions 5-11 on the questionnaire, which
utilized a four-point Likert scale, were summed and constituted an Attitude Score. The
How I Feel Questionnaire, STAIC Form C2, scored utilizing its published
instructions, constituted an Anxiety Score. This questionnaire was made up of 20
questions about anxiety. Students recorded their responses on a three-point Likert scale.
Total scores from the STAIC questionnaire were correlated with the sum of questions 5,
6, 7, 8, 9, 10, and 11 from the researcher-made questionnaire in order to test some
hypotheses. Each of the open-ended responses from all of the questionnaires was
categorized as in the pilot study, with the same two individual raters as before. For
the first open-ended question, "What do you think helps you do better on these tests?", the
responses were tabulated into the same 17 categories. For the second, new question,
"Why do you think it is important to take tests like these?", the responses were tabulated
into 16 categories that the researchers generated. Another estimation of interjudge
reliability was again completed for both questions with the student researcher and a
second rater; this second rater was the same individual as in the pilot study.
Each of the five hypotheses was examined in the main study. The first
hypothesis, that elementary students will perceive their success on standardized tests to be
predominantly due to motivational factors, not cognitive ones, was tested with the survey
questions "Why do you think it is important to take tests like these?" and "What do you
think helps you do better on these tests?" After responses had been categorized, a chi-square
analysis was utilized on each of the two written response sets to test this
hypothesis.
Hypothesis 2, that elementary students will have less positive attitudes towards
standardized tests as they are promoted from grades 3 through 5, was tested with the
sum of questions 5-11 on the researcher-made questionnaire utilizing analysis of
variance. In order to statistically test this hypothesis, grades 3, 4, and 5 were used as the
three levels of a grade factor for a one-way ANOVA.
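As a rough illustration of this analysis (Python with SciPy rather than the SPSS procedure actually used; the Attitude Scores below are invented, not the study's data), a one-way ANOVA across the three grade levels could be run as follows.

from scipy import stats

# Invented Attitude Scores grouped by grade; not the study's data.
grade_3 = [22, 25, 24, 21, 26, 23]
grade_4 = [21, 22, 20, 23, 19, 22]
grade_5 = [18, 20, 19, 21, 17, 20]

f_statistic, p_value = stats.f_oneway(grade_3, grade_4, grade_5)
print(f"F = {f_statistic:.2f}, p = {p_value:.4f}")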
Hypothesis 3, that the anxiety of elementary students will be inversely related to
their attitude about standardized tests, was tested correlationally. The sum of questions
5-11 on the researcher-made portion of the questionnaire (Attitude Score) was correlated,
utilizing the Pearson product-moment correlation, with total scores from the STAIC Form
C2 portion of the questionnaire (Anxiety Score).
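A minimal illustration of this correlation (Python with SciPy, again standing in for the SPSS procedure; the paired scores are invented, not the study's data) follows; the same call applies to the SAT-9 correlations tested under Hypotheses 4 and 5.

from scipy import stats

# Invented paired scores for eight students; not the study's data.
attitude_scores = [22, 25, 18, 27, 20, 24, 19, 26]
anxiety_scores = [34, 28, 41, 25, 38, 30, 40, 27]

r, p_value = stats.pearsonr(attitude_scores, anxiety_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")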
Hypothesis 4, that the anxiety of elementary students will be inversely related to their
Stanford Achievement Test, Version 9 mathematics and language arts test scores, was
tested utilizing the Pearson product-moment correlation between the Anxiety Score (the
total STAIC score) and students' mathematics and language arts raw scores on the 2001
SAT-9. Since anxiety was viewed as a trait, it was considered to be stable, and the
correlation therefore provided an estimation of the relationship between anxiety and
standardized test scores. Since the STAIC anxiety questionnaire was given after the
SAT-9, no causal interpretation was possible. However, such an approach, measuring
trait anxiety after a test, has been used by the Educational Testing Service (ETS) when it
studied the relationship between anxiety and standardized test performance on the
Graduate Record Examination. (Powers, 2001)
The final hypothesis, that the attitudes of elementary students will be positively
related to their SAT-9 test scores in mathematics and language arts, was again tested
with the Pearson product-moment correlation. The Attitude Score (the sum of questions
5-11 on the researcher-made questionnaire) was correlated with students' raw scores from
the 2001 SAT-9 in language arts and mathematics.
Other questions on the researcher-made questionnaire dealt with other factors
besides cognition and motivation that were responsible for students performing well on
standardized tests. They included gender (Are you a boy or girl?), primary language (Do
you speak English with your parents at home?), teacher attitude (Do you think your
teacher likes these kinds of tests?), and socioeconomic status (Who graduated from
college. . . mom and/or dad? and I live at home with. . . mom and/or dad?). The
answers to these questions were correlated with attitudes and the STAIC anxiety
questionnaire responses. The following results were expected concerning students and
standardized tests. On SAT-9 math scores, females will have a negative attitude and
more anxiety and will therefore demonstrate a negative relationship compared to males.
On SAT-9 language arts scores, females will have a positive attitude and less anxiety
and will therefore demonstrate a positive relationship compared to males. Students
whose primary language is not English, will have a negative attitude and more anxiety
and will therefore demonstrate a negative relationship compared with native English
speaking students on SAT-9 scores. Students who believe that their teacher likes
standardized tests will have a positive attitude and less anxiety and will therefore
demonstrate a positive relationship compared to students who think that their teacher
does not like these types of tests on SAT-9 scores. Lastly, students whose parents
graduated from college will have a more positive attitude and be more anxious about
standardized tests. Compared to students who do not live with parents or whose parents
did not attend college, a positive relationship will be demonstrated with attitude but a
negative relationship with anxiety will also be demonstrated.
Research Hypothesis One
Elementary students will perceive their success on standardized tests to be
predominantly due to motivational factors, not cognitive ones.
First Written Response
The responses to the question "What do you think helps you do better on these
tests?" were categorized into 17 different response types, as in the Pilot Study. These data
are presented in Table 6. The same 17 response types were again utilized to categorize
the responses from the Main Study.
Table 6
Types of Responses for Question:
“What do you think helps you to do better on these (standardized) tests?”
Response Types Motivation/Cognition Frequency % of Total Responses
1. Anxiety Motivation 19 6.2
2. English Language Learner Cognition 5 1.6
3. Good/bad test taker Cognition 3 0.9
4. Incentives Motivation 3 0.9
5. Multiple choice and essay questions Cognition 1 0.3
6. Nothing can help Motivation 11 3.6
7. Practice test taking skills Cognition 28 9.2
8. Previous education Cognition 6 1.9
9. Self-esteem and self-efficacy Motivation 35 11.5
10. Sleep and nutrition Motivation 26 9.5
11. Studying Cognition 100 32.8
12. Study specific subjects Cognition 20 6.6
13. Time factor Neither 2 0.6
14. Tutoring/extra classes Cognition 17 5.6
15. Multiple responses Neither 10 3.2
16. Unclear response Neither 18 5.9
17. No response Neither 1 0.3
Totals 305 100
Estimation of Interjudge Reliability
As was done for the Pilot Study, an estimation of interjudge
reliability was again performed. Ten questionnaires were randomly selected and the
same two raters from the Pilot Study again independently sorted responses into
categories. This is demonstrated in Table 7 below.
Table 7
Estimation of Interjudge Reliability for Question #1 (n = 10)
Written Response Rater #1 Rater #2 Agreement
1 Category #1 Category #1 Yes
2 Category #7 Category #7 Yes
3 Category #11 Category #11 Yes
4 Category #11 Category #11 Yes
5 Category #9 Category #9 Yes
6 Category #17 Category #17 Yes
7 Category #10 Category #10 Yes
8 Category #11 Category #12 No
9 Category #8 Category #8 Yes
10 Category #15 Category #15 Yes
Percent Agreement 90%
There was 90% agreement by the two independent raters. This was considered reliable
enough that a revised estimation of interjudge reliability for the total sample was not
necessary. When the two independent raters disagreed on which category an individual
response belonged in, they conferred and reached consensus on one category. This
occurred 30 times with the 305 responses.
Of the 305 student responses, approximately one-third (32.8%) stated that
"Studying" (Item 11) was the method that helped them perform better on standardized
tests. The next highest response was the category "Self-esteem and self-efficacy" (Item
9). Thirty-five students, or 11.5%, gave written responses stating that they performed
better if they thought they could do well and tried hard; in other words, they had
confidence in their own ability. The "Sleep and nutrition" (Item 10) category was next
with 9.5% of the responses. Students stated that they should get a good night's sleep
and/or a good breakfast in order to do well on tests. "Practice test taking skills" (Item 7)
was next with 9.2%. There were fewer responses in the other categories. When the
responses in the "Tutoring/extra classes," "Study specific subjects," "Studying," and
"Previous education" categories are combined, 143 students (46.9%), nearly half, had a
written response that dealt specifically with classroom learning in some form. Examples
of each of the 17 different categories are in Appendix I.
Each of the 305 written responses, in their 17 separate categories, was then
assigned to either a motivational factor or a cognitive factor in order to test the first
research hypothesis. Whether each category was motivational or cognitive is also labeled
in Table 6. Item 10, "Sleep and nutrition," caused some discussion between the researchers.
It was decided that it would be placed in the motivation area because several teachers
involved in the study gave candy to their students before they began the standardized
testing for the day. The reason for the candy was to give the students a "sugar rush" for
the exam. The candy also became motivational for the students because they looked
forward to it and asked for it. There were some responses that could not be assigned as
either motivational or cognitive; these were labeled neither. In total, 180 responses were
labeled as cognitive factors, 94 as motivational, and 31 as neither.
To test the research hypothesis of whether students perceived their success on
standardized tests to be due to motivational rather than cognitive factors, a chi-square
analysis was completed. All categories of written student responses were divided into
Cognition, Motivation, and neither. Those responses in categories that fell under either
Cognition or Motivation were analyzed. The results are in Table 8. The chi-square for
the first question was significant. (X2(180, N = 305) = 13.49, p < .001) However,
cognition, not motivation, was the greater factor, as the results demonstrated. On this first
written response question, the research hypothesis was not supported.
Table 8
Chi Square Comparing Cognition and Motivation on “What do you think helps you do
better on these (standardized) tests?” Written Responses
Frequency
Cognition 180
Motivation 94
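As an editorial illustration of this kind of test (Python with SciPy; computed under an equal-split null hypothesis, so the statistic will not match the value reported above, whose expected frequencies are not specified here), a chi-square goodness-of-fit on the Table 8 frequencies could be run as follows.

from scipy import stats

observed = [180, 94]  # Cognition and Motivation frequencies from Table 8
chi2, p_value = stats.chisquare(observed)  # expected defaults to an even split
print(f"chi-square = {chi2:.2f}, p = {p_value:.4g}")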
Second Written Response
There was also a new written response question for the Main Study: “Why do
you think it is important to take tests like these?” All of the responses were read by the
researcher and then categorized into 16 different types. As with the first question, the
student researcher and the dissertation chair developed these 16 categories. They are
listed in Table 9, along with whether each was a motivational or cognitive response, the
frequency, and the percent of total responses.
Table 9
Types of Responses for Question:
“Why do you think it is important to take tests like these?”
Response Types Motivation/Cognition Frequency % of Total Responses
1. (left blank) Neither 3 0.9
2. Corroborate grades we receive Cognition 7 2.3
3. Measure what we have learned Cognition 47 15.4
4. Prepare us for a future job Motivation 17 5.6
5. To get good grades Motivation 26 8.5
6. Promotion to next grade or retention Motivation 40 13.1
7. Identify the smart students Motivation 23 7.5
8. We learn more during the test Cognition 25 8.2
9. Prepare us for higher grades and college Motivation 34 11.1
10. It's the ultimate test given Cognition 5 1.6
11. Express your feelings Motivation 7 2.3
12. So teachers or parents can see we learned Motivation 30 9.8
13. It's not important Neither 4 1.3
14. Unclear response Neither 20 6.6
15. I don't know why Neither 5 1.6
16. President, governor, state, school board, or district wants to know what you have learned Motivation 12 3.9
Totals 305 100
Estimation of Interjudge Reliability
To determine the reliability of the categorized responses, just as in the Pilot
Study, the researcher completed an estimation of interjudge reliability. The researcher
randomly selected ten questionnaires, read the responses, and logged them into the 16
categories. Independent of the first rater, a second individual, with no training or
discussion specifically about the 16 categories, also categorized the same ten responses.
This person had also rated the written responses in the Pilot Study and those to the first
question in the Main Study. The results are demonstrated in Table 10 below.
Table 10
Estimation of Interjudge Reliability for Question #2 (n = 10)
Written Response Rater #1 Rater #2 Agreement
1 Category #3 Category #3 Yes
2 Category #14 Category #12 No
3 Category #9 Category #9 Yes
4 Category #5 Category #11 No
5 Category #9 Category #14 No
6 Category #2 Category #5 No
7 Category #5 Category #5 Yes
8 Category #8 Category #8 Yes
9 Category #8 Category #10 No
10 Category #16 Category #16 Yes
Percent Agreement 50%
Only five of the ten responses were independently categorized the same by both raters, thus
demonstrating a 50% estimation of interjudge reliability.
Revised Estimation of Interjudge Reliability
A 50% interjudge reliability was not considered high enough for this type of
research. Therefore the following method was utilized to increase the accuracy of the
categorization. Raters 1 and 2 conferred about their results and agreed upon
specific examples. Then they each categorized the student responses independently. The
revised results are demonstrated in Table 11 below.
Table 11
Revised Estimation of Interjudge Reliability After Raters Conferred (n = 10)
Written Response Rater #1 Rater #2 Agreement
1 Category #3 Category #3 Yes
2 Category #12 Category #12 Yes
3 Category #9 Category #9 Yes
4 Category #5 Category #5 Yes
5 Category #14 Category #14 Yes
6 Category #2 Category #2 Yes
7 Category #5 Category #5 Yes
8 Category #8 Category #8 Yes
9 Category #8 Category #10 No
10 Category #16 Category #16 Yes
Percent Agreement 90%
After a discussion about the meaning of various terms utilized in the category
headings, each rater changed some response categories. The result of the changes was
that nine of the ten responses were then categorized into the same type by both raters.
This demonstrated an acceptable 90% revised estimation of interjudge reliability. Next,
each of the 305 written responses was read independently by the two raters and
categorized. After examining the results, there were 45 responses that the two raters
independently disagreed upon. A Kappa statistic was computed as an alternative
estimation of reliability. Abedi (1996) suggested that this statistic was a more
appropriate index of reliability. The Kappa statistics were .76 for Question #1 and .72
for Question #2. These results indicated good reliability for the data sets for these two
questions. The two raters conferred on each of the 45 responses. A consensus was
reached for every one of these, and each response was assigned to one of the 16 categories.
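Cohen's Kappa adjusts raw agreement for the agreement expected by chance. A minimal illustration (Python with scikit-learn; the two rating lists are invented, not the 305 pairs of ratings analyzed here) is shown below.

from sklearn.metrics import cohen_kappa_score

# Invented category assignments from two raters; not the study's data.
rater_1 = [3, 6, 9, 5, 14, 2, 5, 8, 8, 16, 3, 12]
rater_2 = [3, 6, 9, 11, 14, 5, 5, 8, 10, 16, 3, 12]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"kappa = {kappa:.2f}")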
Each of the 305 written responses, in their 16 separate categories, was then
assigned by the researchers as a motivational factor, a cognitive factor, or neither in
order to test the first research hypothesis, as was done with the first question. Whether
each category was motivational or cognitive is also labeled in Table 9. Item 7, "Identify the smart
students," caused some discussion as to whether it was motivational or cognitive. The
researchers decided to place it in the motivational category because the student
responses were primarily about the desire to be on school honor rolls like the
"Principal's List" or the "E Club." Again there were some responses that could not be
assigned as either motivational or cognitive. In all, 84 responses were labeled as a cognitive
factor, 189 were motivational, and 32 were neither.
The category that had the most responses was "Measure what we have learned"
(Item 3) with 47 (15.4%). The next one in number of responses was “Promotion to next
grade or retention” (Item 6) with 13.1%. Students marked in this category wrote a
response that discussed the possibility of either being retained or meeting their grade
level standards in order to be promoted. The third highest category was “Prepare us for
higher grades or college” (Item 9) with 11.1%. Many students wrote about wanting to be
prepared for future grades and even entrance into the college of their choice.
For this written response there was more evidence that learning and cognition
were on the minds of elementary students, not motivation. When the categories
"Measure what we have learned," "We learn more during the test," "So parents and teachers
can see we learned," and "President, governor, state, school board, district wants to know
what you learned" are combined, 110 students (36.0%) gave a response that dealt with
learning and cognition.
Again, to test the research hypothesis that students perceived their success on
standardized tests was due to motivational rather than cognitive factors, a chi square analysis
was completed on this second written response. All categories of written student
responses were divided into Cognition, Motivation, and neither, as with the first written
question. Those responses that fell under Cognition and Motivation were analyzed. The
results are presented in Table 12. This second written response question, like the first,
demonstrated a statistically significant difference between cognition and motivation
in achievement on standardized tests, χ²(84, N = 305) = 20.19, p < .001. However, in
contrast to the first item, the results demonstrated that motivation was the greater factor,
rather than cognition. Therefore on this written response, the research hypothesis was
supported.
Table 12
Chi Square Comparing Cognition and Motivation on "Why do you think it is important to take tests like these?"
              Frequency
Cognition     84
Motivation    189
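As an illustration of how a comparison of two category frequencies can be tested, the sketch below runs a chi square goodness-of-fit test on the Table 12 frequencies using the SciPy library. This is only a generic example against equal expected frequencies; it will not reproduce the chi square value or degrees of freedom reported in the text, which came from the study's own analysis.

from scipy import stats

# Observed frequencies from Table 12: responses classified as cognition vs. motivation.
observed = [84, 189]

# Goodness-of-fit test against equal expected frequencies (illustrative only);
# for these counts the statistic is about 40.4, which differs from the value reported in the text.
chi2, p_value = stats.chisquare(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")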
There were other interesting responses. Several students demonstrated some
knowledge of politics as a determinant of why they took standardized tests; 12 of them
responded in the "President, governor, state, school board, district wants to know what
you learned" (Item 16) category. Other students clearly stated that taking these tests was
not important and did not have value for their education. Four responded firmly that
taking standardized tests was not important (Item 13). Some students just wanted to let
their teachers or parents, and in one case her principal, know that they were learning; 30
responded in the "So teachers and parents can see we learned" (Item 12) category. Some
elementary students have a need to please adults. Examples of each of the 16 categories
are in Appendix J.
Overall, the support for the first research hypothesis was mixed. It was expected
that elementary students would perceive their success on standardized tests to be
predominantly due to motivational factors, not cognitive ones. The chi square analysis
indicated that the hypothesis was supported on the second question (Why do you think it is
important to take tests like these?), but not on the first (What do you think helps you do
better on these tests?). In both cases the results were statistically significant.
Research Hypothesis Two
Elementary students will have less positive attitudes towards standardized tests
as they are promoted from grades 3 through 5. Questions 5 through 12 on the attitude
portion of the questionnaire included questions like “Do you like tests like the SAT-9?”
“Do you think it is important to take these kinds of tests?” “Do you think you do well on
standardized tests like these?” “Do you worry about not doing well on these kinds of
tests?” And “Do you think that if you try really hard that you do better on these kinds of
tests?" The attitude scores of students changed little as they got older. 3rd graders had
a mean attitude score of 23.74. The 4th graders' average was 23.52. The mean of the 5th
graders' attitude scores was the lowest at 23.15. These results are in Table 13.
Table 13
Attitude Scores By Grade Levels
Attitude Score    N      Mean     SD
3rd               95     23.74    3.785
4th               96     23.52    3.569
5th               114    23.15    6.860
Total             305    23.45    5.096
The results of the one-way analysis of variance demonstrated that the attitude of
elementary age students towards standardized tests did not change significantly as they
got older, F(2, 302) = .357, p = .70. Therefore the research hypothesis that elementary
students will have less positive attitudes towards standardized tests as they are promoted
from grades 3 through 5 was not supported.
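A one-way analysis of variance of this kind can be carried out as in the sketch below, which uses the SciPy library and small hypothetical score lists as placeholders; the actual analysis used the attitude scores of all 305 students, which are not reproduced here.

from scipy import stats

# Hypothetical attitude scores for a few students in each grade (placeholders only).
grade3_attitude = [24, 22, 25, 23, 26, 21]
grade4_attitude = [23, 25, 22, 24, 21, 26]
grade5_attitude = [22, 24, 23, 21, 25, 20]

# One-way ANOVA: does mean attitude differ across the three grade levels?
f_stat, p_value = stats.f_oneway(grade3_attitude, grade4_attitude, grade5_attitude)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")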
Research Hypothesis Three
The anxiety of elementary students will be inversely related to their attitude
about standardized tests. In order to test the hypothesis, descriptive and inferential
statistics were completed by grade level. The results of the STAIC Form 2 portion of the
questionnaire were calculated to provide an anxiety score for students in each of the
three grade levels. The mean anxiety score across grades 3, 4, and 5 was 40.11, and the
average anxiety score was relatively consistent across the grades. The mean for
3rd graders was 39.47, for 4th graders 40.54, and for 5th graders 40.29. These results are
in Table 14.
Table 14
Anxiety Scores By Grade Levels
Anxiety Score    N      Mean     SD
3rd              95     39.47    2.858
4th              96     40.54    3.155
5th              114    40.29    3.750
Total            305    40.11    3.326
The statistical results demonstrated that the anxiety of elementary students did
not increase as they became older, F(2, 302) = 2.74, p = .066. However, the items on the
STAIC described anxious feelings in general, not specifically about school or testing. In
his research, Spielberger (1973) reported the Anxiety Score means for 1,554 4th, 5th, and
6th graders. He did not sample 3rd graders. For 4th grade girls the mean was 38.1 and for
boys 36.3. For 5th grade girls the mean was 38.7 and for boys 36.4. Our sample had
higher Anxiety Scores.
To specifically test the research hypothesis, the attitude and anxiety scores of the
sample were correlated. The results of the Pearson Product Correlation indicated a
non-significant correlation of r = -.010, p = .867. With the total sample there was no
significant correlation between attitude and anxiety. However, when the data were
disaggregated by grade level, one group of students demonstrated a
positive correlation between attitude and anxiety. For 3rd graders only, the result of the
Pearson Product Correlation was r = .243, p = .018; therefore there was a correlation for
that sample subgroup. For 4th graders the result was
r = -.037, p = .724, and for 5th graders it was r = -.081, p = .389.
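Correlations of this kind can be computed as in the following sketch, again using SciPy and hypothetical paired scores as placeholders for the students' actual attitude and anxiety scores:

from scipy import stats

# Hypothetical paired scores for a handful of students (placeholders only).
attitude_scores = [23, 25, 21, 24, 22, 26, 20, 23]
anxiety_scores  = [40, 38, 42, 39, 41, 37, 43, 40]

# Pearson product-moment correlation between attitude and anxiety.
r, p_value = stats.pearsonr(attitude_scores, anxiety_scores)
print(f"r = {r:.3f}, p = {p_value:.3f}")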
In summary, when the attitude scores and anxiety scores were
correlated, the results demonstrated that the research hypothesis was not supported by
the data for the entire sample. However, when the scores were correlated within each grade
level, there was a significant correlation for 3rd graders only. Therefore for that sample
subgroup the research hypothesis was partially supported. However, overall the
hypothesis was not supported.
Research Hypothesis Four
The anxiety of elementary students will be inversely related to their Stanford
Achievement Test, Version 9 mathematics and language arts test scores. Their raw score
means are reported in Table 15.
Table 15
Raw Score Means by Grade Level
2001 SAT-9       3rd Grade           4th Grade           5th Grade
Total Math       56.72 (74 items)    56.92 (76 items)    59.89 (78 items)
Total Reading    77.27 (118 items)   50.01 (74 items)    55.61 (84 items)
The mean Total Math scores of the participants increased each year from 56.72
in third grade to 56.92 in fourth grade to 59.89 in fifth grade. However, this was in part
because the number of items on the math portion of the SAT-9 increased at each grade level,
from 74 to 76 to 78. The Total Reading scores varied primarily because the number of
items for each was quite different. They ranged from 77.27 (118 items) in third grade to
50.01 (74 items) in fourth grade to 55.61 (84 items) in fifth grade.
Table 16
Anxiety and Total Reading and Math Scores by Grade Level
                 3rd Grade    4th Grade    5th Grade
Anxiety Score 39.47 40.50 40.97
Total Reading 77.27 50.01 55.61
Total Math 56.72 56.92 59.89
The Anxiety Score was correlated with Total Reading and Total Math scores to
test the hypothesis. Again the significance level utilized was p < .05. The results of the
Pearson Product Correlation between Total Reading and anxiety indicated a
non-significant correlation of r = .028, p = .627. The results of the Pearson Product
Correlation between Total Math and anxiety indicated a significant correlation of r =
.147, p = .010. The research hypothesis was thus tested with correlation coefficients between
the students' Anxiety Score and their math and reading scores. The results indicated that
the research hypothesis was partially supported: there was a significant relationship between
elementary students' anxiety and their performance on the math portion of standardized
tests, but no support for a relationship between anxiety and the reading portion.
Research Hypothesis Five
The attitudes of elementary students will be positively related to their SAT-9 test
scores in mathematics and language arts. To test this hypothesis, the Attitude Score of
the student participants was correlated with the Total Reading and Total Math scores from
the 2001 administration of the SAT-9. The grade-level means are shown in Table 17.
Table 17
Attitude and Total Reading and Math Scores by Grade Level
                 3rd Grade    4th Grade    5th Grade
Attitude Score 23.74 23.52 23.15
Total Reading 77.27 50.01 55.61
Total Math 56.72 56.92 59.89
The Attitude Score was correlated with Total Reading and Total Math scores to
test the hypothesis. Once again the significance level utilized was p < .05. The results of
the Pearson Product Correlation between Total Reading and attitude indicated a
significant correlation of r = .147, p = .010. The results of the Pearson Product
Correlation between Total Math and attitude indicated a non-significant correlation of r
= .005, p = .929.
In summary, the research hypothesis was tested with a correlation coefficient
between the students’ attitude score and their total math and total reading scores. The
results indicated that the research hypothesis was partially supported. There was a
positive relationship between elementary students’ attitude and their performance on the
language arts portion of standardized tests. There was no support for a relationship
between attitude and the mathematics portion however.
Other Findings
Other results included relationships among gender, primary language, teacher
attitude, and socioeconomic status, along with attitude and anxiety.
Gender
As reported in Table 18, boys in this sample scored approximately the same or
slightly higher than the girls on each variable. The boys and girls had approximately the
same Anxiety Score, 40.16 to 40.07. They had approximately the same mean Attitude
Score, 23.82 to 23.05. Boys were slightly higher in both results, but not enough to be
statistically significant. In Total Reading, although it appeared that boys substantially
outperformed girls, 63.56 to 57.45, the results of a t-test (in Table 19) demonstrated that
the difference was not statistically significant. This was also true for Total Math, 58.45 to
57.45.
In order to test our hypothesis, a series of t-tests were completed utilizing each of
the factors Total Reading, Total Math, Anxiety Score and Attitude Score with gender.
After this was completed, the results, indicated in Tables 18 and 19, demonstrated no
statistically significant differences on any of these factors. The original hypothesis that girls
would demonstrate a negative attitude and more anxiety towards math, and therefore perform
lower on the math portion of standardized tests, was partially supported. They did not have a
negative attitude. They did demonstrate more anxiety than the boys. However, for the girls,
this higher anxiety did not hinder their test performance and scores compared to the boys.
Table 18
Gender with Total Reading, Total Math, Anxiety Score and Attitude Score:
Group Statistics
N Mean SD
Total Reading Boys 157 63.56 30.427
Girls 148 57.45 30.188
Total Math Boys 157 58.45 20.547
Girls 148 57.45 20.957
Anxiety Score Boys 157 40.16 3.581
Girls 148 40.07 3.043
Attitude Score Boys 157 23.82 6.165
Girls 148 23.05 3.614
Table 19
T-test Comparing Gender with Total Reading, Total Math, Anxiety Score and Attitude
Score: Independent Samples Test
                                            t        df     Sig. (2-tailed)
Total Reading (equal variances assumed)     1.761    303    .079
Total Math (equal variances assumed)         .420    303    .674
Anxiety Score (equal variances assumed)      .240    303    .810
Attitude Score (equal variances assumed)    1.316    303    .189
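An independent samples t-test of the kind summarized in Table 19 can be run as in the sketch below, using SciPy with hypothetical score lists standing in for the boys' and girls' actual Total Reading scores:

from scipy import stats

# Hypothetical Total Reading scores for a few boys and girls (placeholders only);
# the study compared the actual scores of 157 boys and 148 girls.
boys_reading  = [72, 55, 80, 61, 47, 90, 66, 58]
girls_reading = [68, 50, 74, 58, 45, 83, 60, 52]

# Independent samples t-test assuming equal variances, as in Table 19.
t_stat, p_value = stats.ttest_ind(boys_reading, girls_reading, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")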
Total Reading and Total Math scores on the SAT-9 were then correlated with attitude and
anxiety separately for each gender. For boys there was a statistically significant relationship
between attitude and language arts (Total Reading). The results of the Pearson Product
Correlation indicated a significant correlation of r = .204, p = .011. There was no
significance for boys with language arts and the Anxiety Score (r = .078, p = .332) and
no significance with Total Math and anxiety (r = .127, p = .114) or attitude (r = -.012, p
= .879).
For girls in the sample there was a statistically significant
relationship between anxiety and mathematics. The results of the Pearson Product
Correlation indicated a statistically significant correlation of r = .172, p = .036. The p <
.05 (2-tailed) level continued to be utilized. There was no statistical significance for girls
with Total Math and the Attitude Score (r = .031, p = .707) and no significance with
Total Reading and anxiety (r = -.038, p = .648) or attitude (r = -.038, p = .647).
Primary Language
The statistics for English speaking versus non-native English speaking students
were analyzed. The results can be found in Table 20. Students who marked on their
questionnaire “Almost Always” on the Likert scale to the question “Do you speak
English to your parents at home?” were considered “English Only” speakers. If they
responded in the other three categories (“Often, Sometimes, Almost Never”), then they
were considered “Non-Native” speakers. The original hypothesis of the researchers was
that Non-Native speaking students would have a negative attitude and therefore perform
worse on the SAT-9 than their English Only speaking counterparts. The Anxiety Score
for both groups was nearly identical, 40.06 and 40.21. English Only speakers performed
higher in Total Reading 64.12 to 56.58 and Total Math 59.30 to 56.57 than Non-Native
speakers.
Table 20
English and Non-Native English Speaking with Total Reading, Total Math, Anxiety
Score and Attitude Score: Group Statistics
N Mean SD
Total Reading English Only 155 64.12 29.185
Non-Native 149 56.58 31.094
Total Math English Only 155 59.30 19.242
Non-Native 149 56.57 22.196
Anxiety Score English Only 155 40.06 3.378
Non-Native 149 40.21 3.239
Attitude Score English Only 155 24.89 6.050
Non-Native 149 21.97 3.289
Whether these means were statistically significant was answered with a t-test.
We continued to utilize the p < .05 (2-tailed) level. The results are in Table 21. The Total
Reading, Total Math, Attitude Scores, and Anxiety Scores of non-native English
speakers were compared with those of students who spoke only English. The results
demonstrated that there was a relationship with two of the factors. There was a
significant difference in Total Reading scores in favor of the students who spoke only
English; the significance level was .030. English-only speakers thus performed better on
the language arts portion of standardized tests.
The original hypothesis was partially supported by the statistics.
However, the math portion of the standardized test scores was not similarly affected,
and there was no statistically significant difference.
The second significant relationship was that English only speakers had a more
positive attitude than Non-Native speakers. Students whose first language was not
English tended to have lower attitude scores, which may have manifested themselves in lower
language arts scores as well. The significance level was less than .001. This suggested that the
attitude of students was related in part to speaking only English, or vice versa; the causal
direction cannot be determined. The results involving anxiety were not significant.
Table 21
T-test Comparing English and Non-Native English Speaking with Total Reading, Total
Math, Anxiety Score and Attitude Score: Independent Samples Test
                                            t         df     Sig. (2-tailed)
Total Reading (equal variances assumed)     -2.182    302    .030
Total Math (equal variances assumed)        -1.146    302    .253
Anxiety Score (equal variances assumed)       .396    302    .693
Attitude Score (equal variances assumed)    -5.206    302    .001
Teacher Attitude
Participants who marked either "Often" or "Almost Always" to the question "Do
you think your teacher likes these kinds of tests?" were considered students who believed
that their teacher had a “Positive Attitude” about standardized tests. If they marked
“Almost Never or Sometimes,” then the student was considered one who believed that
their teacher had a “Negative Attitude” about these tests. The attitude and anxiety scores
of students who believed that their teacher liked standardized tests were compared to
those students who believed that their teacher did not like such tests. The original
hypothesis stated that students who believed that their teachers like standardized tests
would have a better attitude and less anxiety about the tests. The results in Table 22
demonstrate that, according to the descriptive statistics, students who thought their teacher
had a "Positive Attitude" fared slightly better on both attitude and anxiety. If they
believed that their teacher liked standardized tests, students had a higher Attitude Score,
24.77 to 22.12. For the Anxiety Score, the statistic, as expected, was the opposite, 39.12
to 41.59.
Table 22
Teacher Attitude with Total Reading, Total Math, Anxiety Score and Attitude Score:
Group Statistics
N Mean SD
Total Reading Positive Attitude 181 62.33 30.930
Negative Attitude 123 58.34 29.540
Total Math Positive Attitude 181 57.55 21.003
Negative Attitude 123 58.65 20.436
Anxiety Score Positive Attitude 181 40.28 3.288
Negative Attitude 123 39.81 3.313
Attitude Score Positive Attitude 181 24.98 5.589
Negative Attitude 123 21.26 3.115
To test whether these means were statistically significant, a t-test was completed
with the Total Reading, Total Math, Anxiety Scores, and Attitude Scores. The results are
indicated in Table 23. There was a significant relationship between the students' Attitude
Score and Teacher Attitude. The significance level was less than .001, with the p < .05
(2-tailed) criterion again being utilized. This supported the contention that when a teacher
had a positive attitude towards standardized tests then so did his or her students. With
anxiety there was no statistically significant relationship. Furthermore, there was no
impact on the actual test scores students received. Students who thought that their teacher
had a positive attitude about standardized tests did not perform any higher on Total
Reading or on Total Math. The original hypothesis of the researchers was partially
supported. Students who believed that their teacher liked standardized tests did
demonstrate a more positive attitude towards the tests. There was no evidence to support
that there was a significant relationship with anxiety however.
Table 23
T-test Comparing Teacher Attitude with Total Reading, Total Math, Anxiety Score and
Attitude Score: Independent Samples Test
                                            t         df     Sig. (2-tailed)
Total Reading (equal variances assumed)     -1.124    302    .262
Total Math (equal variances assumed)          .455    302    .650
Anxiety Score (equal variances assumed)     -1.202    302    .230
Attitude Score (equal variances assumed)    -6.711    302    .000
Socioeconomic Status
The last analysis involved socioeconomic status. It was hypothesized that
students whose parents graduated from college would have a more positive attitude and
would be more anxious about standardized tests than students who did not live with their
parents or whose parents did not attend college. Students who marked that both of their
parents graduated from college
were considered “High SES” and were compared to students who reported that only one
parent graduated from college, called “Middle SES,” and with those who reported that
neither parent graduated from college, called “Low SES.” The assumption was that
students whose parents graduated from college lived in higher socioeconomic conditions
than their counterparts. It was decided by the researchers not to utilize the results of the
question whether the student lived with their parents. Residing with parents is not
considered a measure of socioeconomic status.
As may be seen in Table 24, the results demonstrated that there was little
difference among the students from families with both parents who graduated from
college (High SES), from families with only one parent who graduated from college
(Medium SES), and from families where neither parent graduated (Low SES).
Table 24
Comparing Socioeconomic Status with Total Reading, Total Math, Anxiety Score and
Attitude Score: Descriptive Statistics
N Mean SD
Total Reading Low SES 72 60.68 34.251
Middle SES 60 63.47 26.143
High SES 173 59.56 30.207
Total Math Low SES 72 53.03 22.781
Middle SES 60 60.13 18.849
High SES 173 59.27 20.235
Anxiety Score Low SES 72 39.49 3.076
Middle SES 60 39.95 3.596
High SES 173 40.43 3.307
Attitude Score Low SES 72 22.97 3.768
Middle SES 60 23.08 9.024
High SES 173 23.77 3.442
In order to test the hypothesis that students from higher socioeconomic
conditions demonstrate a more positive attitude and more anxiety towards standardized
tests, a one-way analysis of variance was completed. The results are in Table 25. The
findings demonstrated that the hypothesis of the researchers was not supported. There
were no significant statistical differences among students from High, Medium, or Low
socioeconomic families. There were also no significant statistical differences among the
SES groups in performance on the SAT-9 in either the Total Reading or Total Math
portions.
Table 25
ANOVA Comparing Socioeconomic Status with Total Reading, Total Math, Anxiety
Score and Attitude Score
F df Sig.
Total Reading .366 2/302 .694
Total Math 2.748 2/302 .068
Anxiety Score 2.171 2/302 .116
Attitude Score .822 2/302 .441
Summary
In summary, this chapter reported the results and findings of this study. Of the
five research hypotheses and other findings, some were supported, some partially
supported, and some not at all.
Hypothesis 1: Elementary students will perceive their success on standardized
tests to be predominantly due to motivational factors, not cognitive ones. The first
research hypothesis was supported by some of the data collected, statistics generated,
and analysis completed. For this first hypothesis, there were two different questions that
students wrote responses to. For the first question, "What do you think helps you do
better on these tests?" the data did not support the hypothesis. However, for the second
question, "Why do you think it is important to take tests like these?" the results did
support the hypothesis that motivation was a greater factor than cognition in how
well elementary students performed on standardized tests.
Hypothesis 2: Elementary students will have less positive attitudes towards
standardized tests as they are promoted from grades 3 through 5. The second research
hypothesis was not supported. The attitude of students did not change significantly as
they grew older. Hypothesis 3. The anxiety of elementary students will be inversely
related to their attitude about standardized tests. This research hypothesis was not
supported by the data in this study. For the entire sample there was no significant
relationship. However for the third graders only, the hypothesis was supported.
Hypothesis 4: The anxiety of elementary students will be inversely related to
their Stanford Achievement Test, Version 9 mathematics and language arts test scores.
This research hypothesis was partially supported. There was statistical evidence to
support that there was a relationship between anxiety and mathematics but not with
language arts. Hypothesis 5: The attitudes of elementary students will be positively
related to their SAT-9 test scores in mathematics and language arts. The last research
hypothesis was partially supported by the data collected, statistics generated, and
analysis completed in this study. There was statistical evidence to support a relationship
between attitude and language arts, but not with mathematics.
Of the Other Findings, some hypotheses were supported and some were not.
With respect to gender, girls did not have more anxiety than boys overall. With respect
to primary language, non-native English speakers had poorer attitudes and these students
performed worse on the language arts portion of standardized tests. With respect to
teacher attitudes, students who thought that their teacher liked standardized tests had
more positive attitudes towards these tests also. Finally with socioeconomic status,
students did not demonstrate a more positive attitude or higher anxiety, whether they were
living in high, medium, or low SES conditions.
CHAPTER V
SUMMARY, DISCUSSION, CONCLUSIONS, RECOMMENDATIONS
This chapter summarizes the purpose of this study and the results of the research
hypotheses that it was based upon. It also discusses the limitations of the study and
recommendations by the researcher.
In our current national climate of “high stakes testing,” it was important to
understand the advantages and disadvantages of standardized testing. In California for
example, all public school students in grades 2 through 11 now take the Stanford
Achievement Test, Version 9 (SAT-9) each year in May. From this one test alone, every
public school is ranked on an Academic Performance Index (API). The score a school
receives is critical. It means that a school could receive much extra funding or that
members of its staff could be fired.
The primary purpose of this study was to determine whether cognitive or
motivational factors played the greater role in elementary students’ perceptions of
achievement on standardized tests. Some evidence in this study supported one view and
some supported the other. In answering the written response question “What do you
think helps you do better on these (standardized) tests?" elementary students believed that
cognition was the greater factor. However, on a different question, "Why do you think it
is important to take tests like these?" the students answered that motivation was the
greater factor. Learning, studying, and cognitive abilities were the keys to being
successful on standardized tests according to students, but motivational factors such as
rewards and incentives were important as well.
reported such as primary language, student motivation, anxiety, teacher attitude, gender,
and socioeconomic factors that each played some role in how students performed on
standardized tests.
Summary of Research Hypotheses
Hypothesis 1: Elementary students will perceive their success on standardized
tests to be predominantly due to motivational factors not cognitive ones. The first
research hypothesis was partially supported by the data collected, statistics generated,
and analysis completed. There were two questions that students wrote responses to. For
the first question, “What do you think helps you do better on these tests?” the data did
not support the hypothesis; cognition was the greater factor. The data for the second
question, “Why do you think it is important to take tests like these?” did support the
hypothesis that motivation was the greater factor.
Hypothesis 2: Elementary students will have less positive attitudes towards
standardized tests as they are promoted from grades 3 through 5. The second research
hypothesis was not supported. The attitude of students did not change significantly as
they grew older.
Hypothesis 3. The anxiety of elementary students will be inversely related to
their attitude about standardized tests. Overall this research hypothesis was not
supported by the data in this study. For the entire sample there was no significant
relationship. However for the third graders only, the hypothesis was supported.
Hypothesis 4: The anxiety of elementary students will be inversely related to
their Stanford Achievement Test, Version 9 mathematics and language arts test scores.
This research hypothesis was partially supported. There was statistical evidence to
support that there was a relationship between anxiety and mathematics but not with
language arts.
Hypothesis 5: The attitudes of elementary students will be positively related to
their SAT-9 test scores in mathematics and language arts. The last research hypothesis
was partially supported by the data collected, statistics generated, and analysis
completed in this study. There was statistical evidence to support a relationship between
attitude and language arts, not with mathematics however.
Of the Other Findings, some hypotheses were supported and some were not.
With respect to gender, girls did not have more anxiety than boys overall. With respect
to primary language, non-native English speakers had a poorer attitude and these
students performed worse on the language arts portion of standardized tests. With
respect to teacher attitude, students who thought that their teacher liked standardized
tests had a more positive attitude towards these tests also. Finally with respect to
socioeconomic status, students did not demonstrate a more positive attitude nor higher
anxiety whether they were considered high, medium or low SES.
Discussion of Findings
Hypothesis 1: Elementary students will perceive their success on standardized
tests to be predominantly due to motivational factors not cognitive ones. The fact that
half of the findings did and half did not support the hypothesis was a bit of a surprise. It
was unexpected that the results of only one of the two written response questions
demonstrated that motivation was the greater factor. When discussion of this study first
began, it was believed by the researchers that motivation played a large role in how well
elementary students performed on standardized tests. In the classroom, students always
seem to respond to rewards, incentives, or treats while completing their work including
during testing. Some authors, such as Woolfolk (2001), have discussed how important
motivation is to student learning. Other researchers, such as O'Neil (2000), have completed
studies on how important a factor motivation is in student achievement on standardized
tests.
The fact that the other written response question indicated that students perceived
cognition as the greater factor in assisting them to achieve higher scores on standardized
tests should not be a complete surprise. After all, any student, even the youngest
elementary ones, would state that learning, studying, and remembering what they
have been exposed to in class are the keys to being successful academically at school.
This would apply to testing situations also. Yeh (2001) stated that
state-mandated tests should focus on critical thinking skills, not on rote memorization.
Currently, the critical thinking skills, cognitive abilities, and true learning of students
have been sacrificed for better scores on standardized tests according to the author.
Hypothesis 2: Elementary students will have less positive attitudes towards
standardized tests as they are promoted from grades 3 through 5. The results of the data
were unexpected in that attitude did not change significantly over the grades three
through five. It was believed that the attitude of students would become more negative as
they grew older. Many teachers who have worked with various grade levels could
confirm that students liked standardized tests less and less as they got older. By the time
they were high school students, the majority attempted several different methods in order
to avoid taking these tests. In a related study, researcher Paris (1991) concluded that
motivation and attitude of students tended to become worse with repeated
administrations of standardized tests. The attitudes of the elementary students in this
study were not negatively affected as demonstrated by the data collected. The sample
utilized for this study included only 3rd, 4th, and 5th graders. It is believed that if a larger grade
range were utilized or a longitudinal study were completed, the research hypothesis
would eventually be supported by the data.
Hypothesis 3. The anxiety of elementary students will be inversely related to
their attitude about standardized tests. Again unexpectedly, this research hypothesis was
not supported by the data. For example, Costello, Hedl, Papay, and Spielberger (1975);
O'Hearn, Spielberger, and Vagg (1980); Hedl and Papay (1982); and Papay and
Spielberger (1986) all completed studies measuring kindergarten through fourth
grade students with anxiety measures. They all concluded that the older students
become, the more anxious they become; this was the case for both state and trait
anxiety. It was unknown before this study was completed whether anxiety and
attitude were inversely related, that is, whether as one increased the other decreased. According to
the data in this study, for these students anxiety and attitude about standardized tests were not
correlated.
Hypothesis 4: The anxiety of elementary students will be inversely related to
their Stanford Achievement Test, Version 9 mathematics and language arts test scores.
There have been studies completed on test anxiety that have concluded that this inverse
relationship was true. They include Hill and Eaton (1977), Spielberger (1983), and
Tobias (1985). Each reported on what causes test anxiety. This researcher believed
before this study that anxiety and test achievement would be inversely related. This
research hypothesis was partially supported because there was statistical evidence to
support a relationship between anxiety and mathematics. This was not a surprise to the
researchers. There has also been much anecdotal evidence that math was the school
subject that caused the most anxiety among students. Perhaps this was because it was
stressed more than other subjects. Also, some people are not logic- and numbers-oriented.
Unlike engineers and mathematicians, some people would do anything to
avoid having to perform mathematical computations. The fact that language arts was
not correlated with anxiety was not a surprise for the researcher either. Reading is
constantly stressed in schools as an enjoyable activity. Even poor readers can find a book
at their reading level that interests them. Teachers would state that reading was a fun
activity for their students so there was no cause for anxiety.
Hypothesis 5: The attitudes of elementary students will be positively related to
their SAT-9 test scores in mathematics and language arts. There have been some studies
completed on the related topic of student self-efficacy. Student attitude and self-efficacy
are closely related. Research has been reported by Goins (1993), Bandura (1997), and
Brookheart and DeVoge (1999). They generally agree that students with good attitudes
and high self-efficacy tend to perform better in testing situations. As with research
hypothesis 4, there was statistical evidence to support a relationship between attitude and
language arts, but not with mathematics. As previously mentioned, teachers would
state that reading was an enjoyable activity for their students so it was not surprising that
students had a positive attitude towards language arts and performed better on that
portion of standardized tests. On the math portion though, there was no correlation with
attitude. According to the data in this study, there was no support that a positive attitude
assisted students in their performance in math.
There were other factors that affected student performance on standardized tests
that were examined in this study. They included primary language, teacher attitude,
gender, and socioeconomic status. This was not the first research completed on various
factors affecting performance on standardized tests. Willingham, Pollack, and Lewis
(2002) completed a four-year study on what accounted for grades and test scores in high
school seniors. The factors they reported on were school skills, initiative, competing
activities, family background, student attitudes, and teacher ratings. They concluded that
in varying degrees each of these factors contributed to the differences in students'
academic grades and achievement on standardized tests. This new study by Willingham,
Pollack and Lewis supported the idea in this study that there are other “out of the
classroom” factors that affected how well students achieved on standardized tests.
As expected, with respect to primary language, non-native English speakers had
a poorer attitude and these students performed worse on the language arts portion of
standardized tests. There have been several studies completed on what effect being a
non-native English speaker has upon test performance including ones by Willig (1988),
Garcia (1991), Valdes and Figueroa (1994), August and Hakuta (1997), and Gandara
(1997). These researchers would argue that it is bad educational practice to test non-native
English speakers with these standardized tests. On the math portion, however, there was no
statistically significant difference. This could be explained by the fact that many of the
students in this study whose first language was not English were of Asian ancestry. Of
the two schools that participated in the Main Study, one had an Asian population of 41%
and the other 24%. At least anecdotally, Asians and Asian-Americans seem to do well
particularly in math, but not necessarily in language arts.
As expected with respect to teacher attitude, students who thought that their
teacher liked standardized tests had a more positive attitude towards these tests also.
This was consistent with studies completed by Frisbie and Andrews (1990), Hoy and
Woolfolk (1990), and Brown and Walberg (1993). Many of the things that a teacher did
or said in the classroom seemed to affect his or her students very much. This included
their positive and negative attitudes about standardized tests also.
As expected with respect to gender, girls did have more anxiety than boys in this
study. However, this did not result in lower standardized test scores on the math portion
for girls. Many studies in the past have supported the contention that girls did not
perform as well in math than boys. There have been several studies completed on how
gender affects achievement on standardized tests. These include ones by Halpem (1996)
and Gallagher (1998) that reported how and why boys outperformed girls on portions of
these tests. This study did support the idea that girls have more anxiety about math.
When the test results were examined however, the girls in this sample did not perform
worse. In other studies by Sheehan, Cryan, Wiechel, and Bandy (1991) and by Slate,
Jones, Turnbough, and Bauschlicher (1994), data supported the idea that girls
outperformed boys on many standardized tests. In this study, however, boys and girls
performed virtually the same on both the math and language arts portions of standardized
tests, thus supporting the idea that gender was not an important factor on standardized
tests, at least with the sample in this study.
Lastly with respect to socioeconomic status, it was expected that higher SES
students would have a more positive attitude as well as higher anxiety about
standardized tests. However, the data supported that students did not demonstrate a more
positive attitude nor higher anxiety whether they were identified as high, medium or low
SES. Some authors, including Kohn (2000, 2001) and Linn (2000, 2001), have written
about how important a factor SES is in student achievement on standardized tests.
This researcher speculated that higher SES students would have a more positive
attitude and more anxiety. This was because, typically, education is stressed more in
higher SES families than in lower SES ones. This would make school a positive experience
for students, but also a source of pressure, since education is considered so critical to
success in life that a child is expected to do well. Surprisingly, the expected relationships
regarding attitude and anxiety were not demonstrated by the data in this study. Perhaps the
results did not support the original hypothesis because of the sample. Both schools are
located in what is considered to be a middle-class suburban area with large numbers of
Asian-American students, as previously mentioned. In one of the schools five percent of the
students received free or reduced-price lunch, and in the other 14% did.
Limitations
There were some limitations to this study that should be discussed. First, the
three schools at which the pilot and main studies occurred were fairly homogeneous and
therefore so were the students who participated in the sample. All three were from the
same school district in southern California, a unified district with 17 elementary schools,
all of which could be described as middle class and suburban. The three schools are all within
two miles of each other. The only conditions for a school's inclusion in the study were that
the school in the pilot study was one where a researcher was the principal, and that the
principals of the two schools in the main study volunteered. There was a greater number of
Asian students and fewer African-American students in the sample than if it had been truly
representative of the United States or California.
The researchers could not find a published study similar to this one in regard to
participants and data collection. Therefore
much of the instrumentation had to be invented. Spielberger's How I Feel Questionnaire,
STAIC Form 2 (1973), was the one piece of the instrumentation that was well
established. However, even that assessment is not specifically concerned with test
anxiety. Aside from the Spielberger assessment, most of the rest of the design for data
collection was original work. The researchers attempted to make the data collection and
results as reliable as possible by utilizing such methods as an
estimation of interjudge reliability and a Kappa statistic.
Recommendations
It was the author's hope that, by contributing to the basic knowledge of the
factors that affected students' achievement on standardized tests, all stakeholders,
including parents, teachers, administrators, school board members, politicians, test
makers, and educational researchers, could make better and more fully informed decisions
about these tests.
As a result of this study the recommendation to teachers is that they should not
be stressed about performing well on standardized tests. There are many factors that
have little to do with them and little to do with the learning that occurs in the classroom.
These factors include socioeconomics (Kohn, 2000, 2001), primary language (Willig,
1998; August and Hakuta, 1997; Gandara, 1997; Garcia, 1991), gender (Halpern, 1996;
Gallagher, 1998), motivation (O'Neil, 2000; Paris, 1991), and anxiety
(Spielberger, 1982), according to various researchers. This study supported that the
factors of motivation (Research Hypothesis 1), anxiety (Research Hypothesis 4), and
primary language were factors. One factor that teachers can control is their own attitude.
This study demonstrated that students have a more positive attitude towards
standardized tests when they perceive that their teachers do. Therefore teachers should
exude a positive attitude in the classroom so that their students have the best possible
opportunities to perform well.
In a recent study, Olson (2002) described the measures that the Lincoln
School District in Nebraska has established to improve student learning. Specifically, the
district has turned away from the notion that standardized test scores must be improved;
instead the focus was upon how to improve daily classroom instruction. This was done
by teaching teachers a variety of classroom assessments such as peer review, oral
presentations, projects, and portfolios. All Lincoln teachers attend annual inservice
workshops run by the Assessment Training Institute, where they learn how to develop their
own assessment tools. As an aside, all of the schools in the district have had their state
standardized test results increase since teacher trainings began. The Lincoln School
District has programs that should be studied and replicated by other states and school
districts.
The recommendation for superintendents and principals is that they should not
pressure teachers for results. This sets a climate of stress and fear for the entire school
and district. (Popham, 1999) Instead, principals should stress solid instructional practices
and all of the positive programs and activities at their schools. Schools and districts
should look to utilize more multiple measures and performance assessments. (Linn,
2000) They should look to build more innovative programs like the one in Lincoln,
Nebraska that Olson (2002) reported upon. All state-wide testing should include some
type of performance assessment. As a report from the National Education Association
(1993, p. 14) states: “... rely on multiple indicators, such as weighted combinations of
standardized test scores, essay or performance exams, and teacher judgment of
portfolios... when assessment is triangulated across several indicators, the biases and
errors of each can be resolved.”
The recommendation for parents is not to fall into the idea that standardized tests are
the best way to measure a school. One API score in the newspaper should not be the sole
criterion for parents to decide if their children’s school is worthy or not. (Linn, 2000 and
Popham, 1999) The critical thinking skills, cognitive abilities, and true learning of
students are sacrificed for increasing a few points on a scale like the Academic
Performance Index in California. (Yeh, 2001) The first research hypothesis queried if
cognition or motivation played a greater role in the performance of students on
standardized tests. The data supported that each is important. Instead of looking at scores
in the newspaper, parents should visit and observe classrooms and see first hand how
much learning occurs on a daily basis. Parents should not stress standardized tests with
their children either. These tests can cause much unhealthy anxiety in children as young as
elementary students.
Abolishing all standardized testing is not a recommendation as a result of this
study. There are practical and educational uses for these tests. (TIMSS, 2000 and NAEP,
2001) Perhaps the most important reason why standardized tests were utilized so much
in public education was that they were both reliable and valid measures of student
learning and achievement. (Cizek, 1998) However, a single test such as the SAT-9 should
not be the sole measure utilized in California. Instead, each individual state should produce its own
assessment tool with multiple measures based on its own content standards. This
assessment should not be utilized to compare one school against another. It should only
be used to chart the growth of an individual school longitudinally. The Commission on
Instructionally Supportive Assessment (2001) also recommends that states should allow
test-makers a minimum of three years to produce a state-wide test in order to satisfy the
Standards for Educational and Psychological Testing (AERA, 1999) and similar test
quality guidelines. The statistics garnered from the assessment could still be utilized to
identify at-risk, learning disabled, and gifted students, all useful applications for
standardized tests. (Woolfolk, 2001)
There are some specific recommendations in the area of research. There have
been relatively few studies conducted about elementary students’ perceptions and
attitudes towards standardized tests. O'Neil (1992, 2000) has conducted studies on
middle school and high school age students about their performance on standardized
tests. This study involved elementary age students directly and what they perceived as
assisting them in achieving higher scores on these standardized tests. More research is
needed in the area of standardized testing. As previously discussed, there has been only
one recent study that focused on some of the same issues as this study (Willingham,
Pollack, and Lewis, 2002). Overall, more research is needed in the area of how children
learn, especially during their elementary years.
Another reason for more research is to make standardized tests more equitable
for all students who take them. The next step is to address the current flaws that are
inherent in individual test questions. Many questions are either culturally biased or
grant a definite advantage to students from higher socioeconomic families. "... each
indicator or measure has certain built in weaknesses that can never be technically
resolved." (NEA, 1993, p. 14) Students whose first language was not English, those with
learning disabilities, and pupils who learn best by learning modalities other
than those emphasized by standardized tests may be at a disadvantage. (Valdes and
Figueroa, 1994; August and Hakuta, 1997; Willig, 1998) In this study primary language
did play a role in how students performed on these tests. Students who were
identified as belonging to a lower socioeconomic background were not disadvantaged by
these tests, but there are still many authors and much research that support the contention
that standardized tests are not equitable for these students. (Kohn, 2001)
It is the researcher’s hope that others will continue this area of study and
continue to explore what factors affect student achievement on standardized tests. This
includes research in other states and nations where standardized testing is occurring.
This study focused upon upper elementary grades only. Another study focused upon
primary, middle, and high school students would benefit K-12 educators.
REFERENCES
Abedi, J. (1996). Interrater/test reliability systems (ITRS). Multivariate Behavioral
Research, 31(4), 409-417.
American Educational Research Association. (1999). Standards for educational and
psychological testing. Washington, DC: American Educational Research
Association.
Anderson, N. (2001, November 29). Lawmakers resolve key educational issues. Los
Angeles Times, A38.
Anderson, N. (2001, December 19). Congress oks overhaul of public schools. Los
Angeles Times, A1 & A30.
Anderson, N. (2002, January 9). Bush signs public schools bill into law. Los Angeles
Times, A1 & A32.
August, D. & Hakuta, K. (1997). Bilingualism and second language learning assessment.
Improving Schools for Language Minority Children: A Research Agenda.
Washington, DC: National Academy Press.
August, D. and Hakuta, K. (1998). Cognitive aspects of school learning: Literacy
development and content learning, Improving Schools for Language Minority
Children: A Research Agenda, Washington, DC: National Academy Press.
Baker, E.L. (2000). Focus groups on motivational incentives for low stakes tests with
senior high school students and their parents. Los Angeles: National Center for
Research on Evaluation, Standards, and Student Testing.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Barnett, J.E. & Hixon, J.E. (1997). Effects of grade level and subject on student test
score predictions. Journal of Educational Research, 90(3), 179-174.
Brookhart, S.M. & DeVoge, J.G. (1999). Testing a theory about the role of classroom
assessment in student motivation. Applied Measurement in Education, 12(4),
409-425.
Brown, S.M. & Walberg, H.J. (1993). Motivational effects on test scores of elementary
students. Journal of Educational Research, 86(3), 133-136.
Bush, G.W. (2001). No child left behind. A blueprint for educational reform.
Washington, DC: U.S. Department of Education.
Bushman, J., Goodman, G., Brown-Welty, S., & Dorn, S. (2001). California testing:
How principals choose priorities. Thrust for Educational Leadership, 59(1), 33-
36.
California Department of Education. (1992). It’s elementary: Elementary grades task
force report.
California State Testing and Reporting System Report (2001). Sacramento: California
Department of Education.
Carver, R.P. (1992). What do standardized tests of reading comprehension measure in
terms of efficiency, accuracy, and rate? Reading Research Quarterly, 27(4), 346-
359.
Chang, R.M. (2000). The elementary principal as an instructional leader in improving
student performance. Unpublished doctoral dissertation, University of Southern
California.
Cizek, G.J. (1998). Filling in the blanks: Putting standardized tests to the test. Fordham
Report, 2(11).
Colvin, R.L. (2001, August 3). 4th, 8th grades still come up short in math. Los Angeles
Times, A1 & A25.
The Commission on Instructional Supportive Assessment. (2001). Building tests to
support instruction and accountability: A guide for policy makers. Washington, DC.
Costello, R.J., Hedl, J.J., Papay, J.P. and Spielberger, C.D. (1975). Effects of trait and
state anxiety on the performance of elementary school children in traditional and
individualized multiage classrooms. Journal of Educational Psychology, 67(6),
840-846.
Covington, M.V. (1992). Making the grade: A self-worth perspective on motivation and
school reform. New York: Holt, Rinehart, & Winston.
DeCesare, D. (2001). How high are the stakes in high-stakes testing? Principal, 81(3),
10-12.
Donegan, M.M. & Trepanier-Street, M.L. (1998). Teacher and parent views on
standardized testing: A cross-cultural comparison of the uses and influencing
factors. Journal of Research in Childhood Education, 13(1), 85-93.
EdSource. (2001). Tests and more tests: The road ahead for student assessment. 24th
Annual EdSource Forum on California Schools. Sacramento.
Elam, S.M., Rose, L.C., & Gallup, A.M. (1992). The 24th annual Gallup-Phi Delta
Kappan poll of the public’s attitudes towards public schools. Phi Delta Kappan,
(September), 41-53.
Fisher, M. (2001, May 8). Mountain of tests slowly crushing school quality. Washington
Post, B1.
Fording, L. (2001). A matter of time. Newsweek Web. Retrieved February 5, 2001, from
http://www.msncb.com/new/521.163 .asp/cp 1: = 1.
Forgione, P.D., Jr. (1999). Achievement in the United States: Are students performing
better? Washington, DC: National Center for Education Statistics, Office of
Educational Research and Improvement.
Frisbie, D.A. & Andrews, K. (1990). Kindergarten pupil and teacher behavior during
standardized achievement testing. Elementary School Journal, 90(4), 435-448.
Gallagher, A.M. (1998). Gender and antecedents of performance in mathematics testing.
Teacher College Record, 100(2), 297-314.
Gandara, P. (1997). Review of instruction of limited English proficient students: A
Report to the California Legislature. Los Angeles: The Linguistic Minority
Institute.
Garcia, E.G. (2000). Bilingual children’s reading. Handbook of Reading Research,
Lawrence Erlbaum Company.
Greenwood, G.E., Olejnik, S.F., & Parkay, F.W. (1990). Relationships between four
teacher efficacy belief patterns and selected teacher characteristics. Journal of
Research and Development in Education, 23(2), 102-106.
Goins, B. (1993). ERIC/EECE Report: Student motivation. Childhood Education, 69(5),
316-317.
Gonzales, V. & Schallert, D.L. (1999). An integrative analysis of the cognitive
development of bilingual and bicultural children and adults. Language Cognitive
Development in Second Language Learning: Educational Implications for
Children and Adults. Needham Heights, MA: Allyn & Bacon.
Gottfried, A.E. (1990). Academic intrinsic motivation in young elementary school
children. Journal of Educational Psychology, 82(3), 525-538.
Haertel, E.H. (1999). Validity arguments for high-stakes testing: In search of the
evidence. Educational Measurement: Issues and Practice, Winter 1999, 5-9.
Hakuta, K. & Garcia, E.E. (1997). Bilingualism and education. American Psychologist,
44, 374-379.
Halpern, D.F. (1996). Changing data, changing minds: What the data on cognitive sex
differences tell us and what we hear. Learning and Individual Differences, 8, 73-82.
Harcourt Educational Measurement. (2001). Making California education strong
together: STAR program SAT 9 results show continuous improvements in
student achievement since 1998. Harcourt Assessment Company.
Hedl, J.J. & Papay, J.P. (1982). The factor structure of the state-trait anxiety inventory
for children: Kindergarten through the fourth grades. Personality and Individual
Differences, 3, 439-446.
Helwig, R., Anderson L., & Tindal, G. (2001). Influence of elementary student gender
on teachers’ perceptions of mathematics achievement. The Journal of
Educational Research, 95(2), 93-102.
Hill, K.T. & Eaton, W.O. (1977). The interaction of test anxiety and success-failure
experiences in determining children’s arithmetic performance. Developmental
Psychology, 13, 205-211.
Hoff, D.J. (2001). Progress lacking in US students’ grasp of science. Education Week,
XXI(13), 1, 14.
Hoy, W.K. & Woolfolk, A.E. (1993). Teachers’ sense of efficacy and the organizational
health of schools. Elementary School Journal, 93, 355-372.
Keller, B. (2001). Most California schools to get cash for meeting test targets. Education
Week. Retrieved October 25, 2000, from
http://www.edweek.org/ew/ew printstory.cfm/slug=06calif.h20.
Kohn, A. (2001). Beware of the standards, not just the tests. Education Week, XXI(4),
38-52.
Kohn, A. (2000). The case against standardized testing: Raising the scores, ruining the
schools. Portsmouth, NH: Heinemann.
Kohn, A. (2001). Fighting the tests: A practical guide to rescuing our schools. Phi Delta
Kappan, January 2001, 349-357.
Kohn, A. (1999). The schools our children deserve. Portsmouth, NH: Heinemann.
Linn, R.L. (2000). Assessments and accountability. Educational Researcher, 29(2), 4-
16.
Linn, R.L. (2001). Reporting school quality in standards-based accountability systems.
National Council on Measurement in Education, 9(4), 4-5.
Marzano, R.J. (2001). In search of the standardized curriculum. Principal, 81(3), 6-9.
Miller-Jones, D. (1989). Culture and testing. American Psychologist, 44(2), 30-35.
Muir, M. (2001). When the stakes are high. NWEducation, Fall 2001, 30-35.
National Assessment of Educational Progress. (n.d.). Mathematics 2000 major results.
Retrieved October 17, 2001, from
http://nces.ed.gov/nationsreportcard/mathematics/results.
National Center for Education Statistics. (2001). Fourth grade reading highlights 2000.
(NCES Publication 2001-513). Jessup, MD: ED-Pubs.
National Center for Education Statistics. (2001). Mathematics highlights 2000. (NCES
Publication 2001-518). Jessup, MD: ED-Pubs.
National Center for Research on Evaluation, Standards, and Student Testing. (n.d.).
Retrieved November 23, 2001 from http://www.cse.ucla.edu/CRESST.
National Education Association. (1993). The role of high-stakes testing in school
reform: A report from professional standards and practice. Chicago.
Nolen, S.B., Haladyna, T.M., & Haas, N.S. (1992). Uses and abuses of achievement test
scores. Educational Measurement: Issues and Practice, 11(2), 9-15.
Odden, A.R. (2000). Educational leadership for America’s schools. New York: McGraw-
Hill, Inc.
Ogbu, J. & Simons, D. (1998). Voluntary and involuntary minorities: A cultural-
ecological theory of school performance with some implications for education.
Anthropology and Education Quarterly, 29(2).
O’Hearn, T.P., Spielberger, C.D., & Vagg, P.R. (1980). Is the state-trait anxiety
inventory multidimensional? Personality and Individual Differences, 1, 207-214.
Olson, L. (2002). Up close and personal. Education Week. 21(37), 28-33.
O’Neil, H.F., Jr. (1992). Final report of experimental studies on motivation and NAEP
test performance. Los Angeles: National Center for Research on Evaluation,
Standards, and Student Testing.
O’Neil, H.F., Jr. (2000). Motivation and low stakes testing. Policy Brief for National
Assessment Governing Board.
Papay, J.P. & Spielberger, C.D. (1986). Assessment of anxiety and achievement in
kindergarten and first and second grade children. Journal of Abnormal Child
Psychology, 14(2), 279-286.
Paris, S.G. (1991). A developmental perspective on standardized achievement testing.
Educational Researcher, 20(5), 12-20.
Peleg-Popko, O. (2002). Children’s test anxiety and family interaction patterns. Anxiety
Stress and Coping, 15(1), 45-59.
Popham, W.J. (1999). Why standardized tests don’t measure educational quality.
Educational Leadership, 56(6), 8-15.
Powers, D.E. (2001). Test anxiety and test performance comparing paper-based and
computer-adaptive versions of the Graduate Record Examination general test.
Journal of Educational Computing Research, 24(3), 249-273.
Ragland, J. (2001, October 6). In Santa Paula, kindergarteners put to the test. Los
Angeles Times, B8.
Resnick, L.B. & Resnick, D.P. (1992). Assessing the thinking curriculum: New tools for
educational reform. Changing assessments: Alternative views of aptitude,
achievement, and instruction. Boston: Kluwer Academic Publishers.
Sackett, P.R., Schmitt, N., Ellingson, J.E., & Kabin, M.B. (2001). High-stakes testing in
employment, credentialing, and higher education. American Psychologist, 56(4),
302-318.
Sheehan, R., Cryan, J.R., Wiechel, J., & Bandy, I.G. (1991). Factors contributing to
success in elementary schools: Research findings for early childhood educators.
Journal of Research in Childhood Education, 6(1), 66-75.
Shepard, L.A. & Bliem, C.L. (1995). An analysis of parent opinions and changes in
opinions regarding standardized tests, teacher’s information, and performance
assessment. Summary for Technical Report 397. Retrieved November 23, 2001,
from http://www.cse.ucla.edu/CRESST/Summary/397shepard.htm.
Spielberger, C.D. (1976). The nature and measurement of anxiety. Cross-cultural
research on anxiety. Washington, D.C.: Hemisphere/Wiley.
Slate, J.R., Jones, C.H., Turnbough, R., & Bauschlicher, L. (1994). Gender differences
in achievement test scores. Research in Schools, 1(1), 56-62.
Stiggins, R.J. (1999). Assessment, student confidence, and school success. Phi Delta
Kappan, 81(3), 191-198.
Stiggins, R.J. (2001). The unfulfilled promise of classroom assessment. Educational
Measurement: Issues and Practice, 20(3), 5-15.
Stipek, D. (1998). Motivation to learn: From theory to practice (3rd ed.). Boston: Allyn
and Bacon.
Third International Mathematics and Science Study. (n.d.). Retrieved October 17, 2001,
from http://nces.ed.gov/timss/timss95/index.asp
Tobias, S. (1985). Test anxiety: Interference, defective skills, and cognitive capacity.
Educational Psychologist, 20, 135-142.
Tucker, M.S. & Codding, J.B. (1998). Standards for our schools: How to set them,
measure them, and reach them. San Francisco: Jossey-Bass Publishers.
USA Today. (2000, August 2). A1.
U.S. Department of Education, Office for Civil Rights (2000). The use of tests as part of
high-stakes decision-making for students: A resource guide for educators and
policy-makers. Washington, DC.
Valdes, G. & Figueroa, R. (1994). Bilingualism and testing: A special case of bias.
Ablex Publishing Corporation.
Wentzel, K.R., Weinberger, D.A., Ford, M.E., & Feldman, S.S. (1990). Academic
achievement in preadolescence: The role of motivational, affective, and self-
regulatory processes. Journal of Applied Developmental Psychology, 11(2), 179-
193.
Wiggins, J.D., Schatz, E.X., & West, R.W. (1994). The relationship of self-esteem to
grades, achievement scores and other factors critical to school success. School
Counselor, 41(4), 239-244.
Willingham, W., Pollack, J., & Lewis, C. (2002). Grades and test scores: Accounting for
observed differences. Journal of Educational Measurement, 39(1), 1-37.
Woolfolk, A. (2001). Educational psychology (8th ed.). Needham Heights, MA: Allyn &
Bacon.
Yeh, S. (2001). Tests worth teaching to: Constructing state-mandated tests that
emphasize critical thinking. Educational Researcher, 30(9), 12-17.
Zehr, M.A. (2000). Higher California test scores in bilingual education. Education Week,
20(1).
Appendix A
University Park Institutional Review Board Approval for Review of Research Involving
Human Subjects
UNIVERSITY OF SOUTHERN CALIFORNIA
University Park Institutional Review Board
ADM 300/MC 4019
Tel: (213) 740-6709
Fax: (213) 740-8919
MPA No. M-1299
Review of Research Involving Human Subjects
APPROVAL NOTICE
Office of the Provost, University Park Institutional Review Board (UPIRB)
Date: April 8, 2002
Principal Investigator(s): Harold O’Neil, Ph.D. / E. Don Kim
Project Title: Cognitive and Motivational Factors that Affect Elementary School
Students on Standardized Tests
USC UPIRB #02-01-005
The University Park Institutional Review Board has reviewed the information you
submitted pertaining to the above proposal at its meeting of N/A and has:
□ Approved study
□ Approved the Delegated Review
□ Approved the Claim of Exemption
□ Approved continuation
□ Approved amendment
□ Approved under the expedited review by the chair - 45 CFR 46.110 Category #7
Conditions of Approval:
The Investigators must provide the following requested information prior to proceeding with research (which
includes contacting, recruiting, and enrolling potential subjects):
N/A
IRB APPROVAL EXPIRES: April 7, 2003. Your protocol is approved for a 12-month
period. If this research study continues beyond 12 months, you must request re-approval of this study prior
to the expiration date by submitting an Application for Continuing Review Status Report Form. This form
should also be used when your study is completed to notify the UPIRB.
NOTE: The IRB must review all advertisements and/or recruiting materials. Serious adverse events,
amendments and/or changes in the protocol must be submitted to the UPIRB for approval. Changes may
not be implemented until you have received the Board’s approval. Exception: changes involving subjects’
safety may be implemented prior to notification to the UPIRB.
University of Southern California, Los Angeles, California 90089-4019
Tel: 213 740 6709
Fax: 213 740 8919
e-mail: upirb@usc.edu
Marlene S. Wagner, Ph.D.
Chairperson
USC University of Southern California
Rossier School of Education
Division of Learning and Instruction
Curriculum and Teaching: Waite Phillips Hall 1004, Tel: 213 740 3471, Fax: 213 740 3671
Educational Psychology and Technology: Waite Phillips Hall 600, Tel: 213 740 3465, Fax: 213 740 2367
University of Southern California, Los Angeles, California 90089-0031
www.usc.edu/dept/education
Appendix B
UPIRB Informed Consent for Non-Medical Research
Attachment B
University of Southern California
Rossier School of Education
Division of Learning and Instruction
CONSENT TO PARTICIPATE IN RESEARCH
Cognitive and Motivational Factors That Affect the Achievement of
Elementary Students on Standardized Tests
You are asked to allow your child to participate in a research study conducted by
Mr. E Don Kim, M.A and Harold F. O'Neil, Ph.D., from the Learning and
Instruction Department at the University of Southern California.' The results will
contribute to Mr. Kim’s doctoral dissertation. Your child was selected .as a
possible participant in this study because we are interested in the attitudes and
perceptions of standardized tests by elementary school students. Your school
volunteered to assist us in this study. Approximately 250 participants will be
selected from elementary school students in the area. Your child’s participation is
strictly voluntary. There' will be no repercussion in any way for not participating
PURPOSE OF THE STUDY
The purpose of this study is to learn more about what helps and hinders
elementary age students to do better on standardized tests like the SAT-9 that they
take each May. The first hypothesis or idea that will be tested is whether
motivational factors (like candy, money, or extra recess time) or cognitive factors
(their thinking skills and what they have learned in class) are more important to
students doing better on standardized tests. The question to think about is: will
elementary students do better on standardized tests if during classroom learning,
the primary focus is upon improving thinking skills? Or will elementary students
do better on standardized tests if they are offered rewards and incentives in order
to motivate them? This study will compare motivational factors and
cognitive/learning factors to try to decide which plays the greater role.
There are other ideas or questions that will be tested in this study also. They
include elementary students’ attitudes towards standardized tests: do they like or
dislike them? Another question is: if a student has a good attitude about tests, then
will he or she be less anxious about them? Another question that will be looked at
is: if a student is more anxious about tests (test anxiety), will he or she do worse
on standardized tests like the SAT-9? And the last question in this study is: if
a student has a good attitude about tests, will he or she do better on the SAT-9?
PROCEDURES
If you and your child volunteer to participate in this study, we would ask your
child to do the following things:
A survey has been created by the researchers to collect data about elementary
students’ attitudes and perceptions about standardized tests. This survey utilizes
part of the How I Feel Questionnaire STAIC Form C-2 that is produced by
Mind Garden. At a monthly elementary principals meeting, principals will be asked
to volunteer their school for this research study. Permission forms and the surveys
will be given to principals at those elementary schools that volunteer to participate
in the study. These principals will discuss what the study is about with the
teachers at their schools and ask them to volunteer to participate. The teachers
will tell students in their classes about the study. Your child has been given an
Assent Form for him or her and this Consent Form for you to sign. If you give
permission for your child to participate, then please sign and have your child
return this form and your child’s Assent Form, signed by him or her, to their
teacher.
After permission is received, teachers will ask students to complete the survey.
The survey will take about 20 minutes. When completed it will be collected by
the teacher and then collected by the researchers. Teachers are specifically
requested not to look at the completed surveys. In the survey the students circle
responses to 32 questions about anxiety, parents' education level, languages they
speak, who they live with, and how they feel about tests. They will also write
answers to 2 questions about what they think helps them on tests and why tests
are important.
Information from the surveys will be analyzed by the researchers. This includes a
correlation between attitudes and anxiety along with student scores on last year’s
SAT-9 test. We would also like permission to obtain your child’s previous SAT-9
scores in the 2 areas of total math and total language arts from the school district’s
testing department. By signing this Consent Form, you will also be giving
permission for the researchers to do so. Only the 2 researchers will have access to
these scores and then they will be destroyed.
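For illustration, the following is a minimal sketch of the correlational analysis described above, assuming the de-identified responses were assembled into a simple table with one row per student. It is not the researchers' actual analysis code; the file name and column names (attitude, anxiety, sat9_math, sat9_lang) are hypothetical.

# Minimal sketch of the correlational analysis described above.
# Assumes a hypothetical de-identified table with an attitude score,
# a trait-anxiety score, and prior SAT-9 total math and language scores.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_scores.csv")  # hypothetical file name

pairs = [
    ("attitude", "anxiety"),
    ("attitude", "sat9_math"),
    ("attitude", "sat9_lang"),
    ("anxiety", "sat9_math"),
    ("anxiety", "sat9_lang"),
]
for x, y in pairs:
    r, p = stats.pearsonr(df[x], df[y])  # Pearson correlation and its p-value
    print(f"r({x}, {y}) = {r:.2f} (p = {p:.3f})")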
POTENTIAL RISKS AND DISCOMFORTS
There are minimal potential risks or discomforts.
POTENTIAL BENEFITS TO SUBJECTS AND/OR TO SOCIETY
Your child will not receive any direct benefit from participating in this study.
However, there is the potential benefit to participants as a result of this study,
which is to help them in getting a better overall education in the long run. The
participants will think about and identify what helps them in performing better
on standardized tests. Hopefully, this will aid them in their performance on all
their tests in the future. The data gathered as a result from this study will be
shared with elementary principals and teachers with the goal of assisting them
to provide a better education for their students.
The potential benefit to society that is expected from the research includes
adding to the knowledge about the factors that affect the achievement of
students on standardized tests. This is especially true about the information on
cognitive and motivational factors that will be looked at. This information will
help all educators in providing a better learning environment for all students.
This study with elementary school students will be one of the few about their
perceptions and attitudes towards standardized testing. Thus, it will add to
society’s knowledge on the subject.
PAYMENT FOR PARTICIPATION
None of the participants will receive any payment.
CONFIDENTIALITY
Any information that is obtained in connection with this study and that can be
identified with your child will remain confidential. Names of schools or any
individuals will not be kept. All papers with names or any identifying information
will be destroyed immediately after it is typed into a computer program that only
identifies numbers, not names. During analysis, information used in this study
will not include participants’ names or other identifying information. The
data will be kept in a locked office. Teachers are requested specifically not to look
at the completed surveys before they are collected by the researchers. After it is
collected, only Mr. Kim and Dr. O’Neil will have access to the raw data until it is
destroyed. The office is in an alarmed school administration building. When the
results of the research are published or discussed in conferences, no information
will be included that would reveal your child’s identity or that of any individuals
or schools.
PARTICIPATION AND WITHDRAWAL
You can choose whether your child will participate in this study or not. If you
volunteer to be in this study, you or your child may withdraw at any time without
consequences of any kind. Your child may also refuse to answer any questions
they don’t want to answer and still remain in the study. The investigator may
withdraw you from this research if circumstances arise which warrant doing so.
IDENTIFICATION OF INVESTIGATORS
If you have any questions or concerns about the research, please feel free to
contact:
Student Investigator
E Don Kim
2004 Mathews Ave. #1
Redondo Beach, CA 90278
310-533-4672
dkim@tusd.kl2.ca.us
or Faculty Sponsor
Harold F. O'Neil, Ph.D.
Department of Learning and Instruction
University of Southern California
WPH 600
Los Angeles, CA 90089-0031
(213) 740-2366
honeil@usc.edu
RIGHTS OF RESEARCH SUBJECTS
You or your child may withdraw your consent at any time and discontinue
participation without penalty. You are not waiving any legal claims, rights or
remedies because of your participation in this research study. If you have
questions regarding your rights as a research participant, contact the University
Park IRB, Office of the Vice Provost for Research, Bovard Administration
Building, Room 300, Los Angeles, CA 90089-4019, (213) 740-6709 or
upirb@usc.edu.
SIGNATURE OF RESEARCH SUBJECT, PARENT OR LEGAL REPRESENTATIVE
I understand the procedures described above. My questions have been answered
to my satisfaction, and I agree to participate in this study. I have been given a
copy of this form.
Name of Student Participant
Name of Parent or Guardian
Signature of Parent or Guardian Date
SIGNATURE OF INVESTIGATOR
I have explained the research to the participant or his/her legal representative, and
answered all of his/her questions. I believe that he/she understands the
information described in this document and freely consents to participate.
Name of Investigator
Signature of Investigator Date
j " SIGNATURE GT WETNESS (If an oral trails]
1
My signature as witness certifies that the participant or his/her legal
representative signed this consent form in my presence as his/her voluntary act
and deed.
Name of Witness
Signature of Witness Date
Appendix C
UPIRB Assent Form for Research
Attachment C
University of Southern California
Rossier School of Education
Division of Learning and Instruction
ASSENT FORM FOR RESEARCH
ASSENT TO PARTICIPATE IN RESEARCH
Cognitive and Motivational Factors That Affect the Achievement
of Elementary Students on Standardized Tests
1. Hello, my name is E Don Kim.
2. We are asking you to take part in a research study because we are trying to
learn more about what you think about standardized tests like the SAT-9 you
take every May.
3. If you agree to be in this study you will answer questions in a survey. The
questions will be about how you feel about school, how you feel about tests,
who you live with, your parents' education, and what languages you
speak. On these questions you just circle answers. Then there are two
questions you will be asked to write answers to. They are about what you think helps
you on tests and why tests are important. This will take about 20 minutes.
4. There is no risk or harm that will come to you because you answer the survey
questions.
5. We hope that you benefit from this survey by helping us to help your teachers
find out what helps you to do better on these kinds of tests.
6. Please talk this over with your parents before you decide whether or not to
participate. We will also ask your parents to give their permission for you to
take part in this study. But even if your parents say “yes” you can still decide
not to do this.
7. If you don’t want to be in this study, you don’t have to participate. Remember,
being in this study is up to you and no one will be upset if you don’t want to
participate or even if you change your mind later and want to stop.
8. You can ask me any questions that you have about the study. If you have a
question, you can call me at 310-533-4672.
9. Signing your name at the bottom means that you agree to be in this study.
You and your parents will be given a copy of this form after you have signed it.
Name of Student Date
Student’s Signature
Name of Investigator Date
Investigator’s Signature
Appendix D
Permission from the Torrance Unified School District Board of Education to Conduct
Study
November 19, 2001
(Action)
Board of Education authorization is requested to allow Mr. E Don Kim to survey approximately 250 third through fifth
grade students as part of his dissertation study at the University of Southern California. His dissertation is looking at the
comparative effects of motivation and cognition on student achievement on standardized tests. Mr. Kim would like to
distribute an anonymous written survey to students as to how they feel about standardized tests, and what they think
helps them to be successful. Students would have the option of completing the survey or not completing the survey. Mr.
Kim will not survey students in his own school as that might affect the responses; rather he will utilize the student
populations at other Torrance Unified schools. This survey will be conducted in January 2002, and should have no
impact on student performance on the May SAT9 tests.
Recommendation: Transmitted to the Governing Board recommending that authorization be given to Mr. E Don Kim to
survey approximately 250 third through fifth grade Torrance Unified School District students as part of his dissertation study.
TO: BOARD OF EDUCATION
FROM: SUPERINTENDENT
ASSISTANT SUPERINTENDENT - EDUCATIONAL SERVICES (Elementary)
SUBJECT: AUTHORIZATION TO CONDUCT SURVEY
Appendix E
(Pilot Study)
Survey About SAT-9/Standardized Tests
Grade: 3 4 5 (Circle your answer)
1. Do you like standardized tests like the SAT-9? Yes No
2. Do you think it is important to take these kinds of tests? Yes No
3. Do you think you did well on the SAT-9 test? Yes No
4. Do you think your teacher likes these tests? Yes No
5. Do you think your parents like these tests? Yes No
6. Do you speak English to your parents at home? Yes No
7. Do you live at home with both of your parents? Yes No
8. Did either of your parents finish college? Yes No
9. Are you a boy or girl? Boy Girl
10. What do you think affects how you do on these tests?
(Please write your answer:)
Appendix F
10 Written Student Responses for Estimation of Interjudge Reliability
“What do you think helps you do better on these tests?”
1. I think looking at a dictionary to get more vocabulary is helping, of course before the
test.
2. What would help me on these tests, I going to a tutor where I am going now.
3. To do better on the SAT-9 I think getting sleep early and getting enough rest is
important. Also having a good breakfast. Also I do better if I do not care how many
people finish before me. My teacher made it easier this year.
4. By studying hard and studying before the test.
5. I think it is mostly yourself because you have to want to learn and take in what the
teacher tells you. If you don’t want to, you might be hiding a genius inside.
6. When we did our tests (SAT-9) I think candy helped us think such as Starburst.
7. The candy we get before the test.
8. I think concentrating helps me do better on these tests.
9. I think it helps us to go over all of the things we did in the 4th grade and the
beginning of school. I think it prepares us for the next grade. I also think it helps us
in all subjects to see how you remember and if you have been listening in class.
10. I study math, language, spelling, and the other subjects.
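For illustration, the following is a minimal sketch of how interjudge reliability could be estimated once two judges have independently coded each of the 10 responses above into the response types listed in Appendix I. The judge codings shown are hypothetical placeholders, not the ratings actually used in the study.

# Minimal sketch of estimating interjudge reliability for coded responses.
# The two lists are hypothetical codings of the 10 responses above.
from sklearn.metrics import cohen_kappa_score

judge_a = ["studying", "tutoring", "sleep/nutrition", "studying", "self-efficacy",
           "incentives", "incentives", "anxiety", "previous education", "studying"]
judge_b = ["studying", "tutoring", "sleep/nutrition", "studying", "self-efficacy",
           "incentives", "incentives", "multiple responses", "previous education", "studying"]

# Simple percent agreement between the two judges.
agreement = sum(a == b for a, b in zip(judge_a, judge_b)) / len(judge_a)
print(f"Percent agreement: {agreement:.0%}")

# Cohen's kappa adjusts the agreement rate for chance agreement.
print(f"Cohen's kappa: {cohen_kappa_score(judge_a, judge_b):.2f}")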
Appendix G
(Main Study)
Name
Questionnaire About SAT-9/Standardized Tests
1. I am in Grade: 3 4 5 (Circle your answers)
2. I am a: boy girl
3. I live at home with: both Mom and Dad Mom only Dad only neither Mom or Dad
4. Who graduated
from college?: both Mom and Dad Mom only Dad only neither Mom or Dad
Almost Never          Sometimes          Often          Almost Always
5. Do you like standardized tests like the SAT-9? 1 2 3 4
6. Do you think it is important to take these kinds 1 2 3 4
of tests?
7. Do you think you do well on standardized tests 1 2 3 4
like the SAT-9?
8. Do you worry about not doing well on these 1 2 3 4
kinds of tests?
9. Do you think if you try really hard that you do 1 2 3 4
better on these kinds of tests?
10. Do you think your teacher likes these kinds of 1 2 3 4
tests?
11. Do you think your parents like these kinds of 1 2 3 4
tests?
12. Do you speak English with your parents at 1 2 3 4
home?
13. What do you think helps you do better on these tests? (Please write your answer
below.)
14. Why do you think it is important to take tests like these? (Please write your answer
below.)
Appendix H
How I Feel Questionnaire STAIC Form C-2
HOW-I-FEEL QUESTIONNAIRE
STAIC Form C-2
Name: _______________________  Age: _______  Date: _______
DIRECTIONS: A number of statements which boys and girls use to describe
themselves are given below. Read each statement carefully and decide if it is
hardly-ever, or sometimes, or often true for you. Then for each statement, put an
X in the box in front of the word that seems to describe you best. There are no
right or wrong answers. Don’t spend too much time on any one statement.
Remember, choose the word which seems to describe how you usually feel.
1. I worry about making mistakes .................... □ hardly-ever □ sometimes □ often
2. I feel like crying ............................................ □ hardly-ever □ sometimes □ often
3. I feel unhappy ............................................... □ hardly-ever □ sometimes □ often
4. I have trouble making up my mind ............... □ hardly-ever □ sometimes □ often
5. It is difficult for me to face my problems ...... □ hardly-ever □ sometimes □ often
6. I worry too much .......................................... □ hardly-ever □ sometimes □ often
7. I get upset at home ....................................... □ hardly-ever □ sometimes □ often
8. I am shy ........................................................ □ hardly-ever □ sometimes □ often
9. I feel troubled ............................................... □ hardly-ever □ sometimes □ often
10. Unimportant thoughts run through my mind
and bother me ............................................. □ hardly-ever □ sometimes □ often
11. I worry about school ................................... □ hardly-ever □ sometimes □ often
12. I have trouble deciding what to do .............. □ hardly-ever □ sometimes □ often
13. I notice my heart beats fast ......................... □ hardly-ever □ sometimes □ often
14. I am secretly afraid ..................................... □ hardly-ever □ sometimes □ often
15. I worry about my parents ............................ □ hardly-ever □ sometimes □ often
16. My hands get sweaty .................................. □ hardly-ever □ sometimes □ often
17. I worry about things that may happen ......... □ hardly-ever □ sometimes □ often
18. It is hard for me to fall asleep at night ......... □ hardly-ever □ sometimes □ often
19. I get a funny feeling in my stomach ............ □ hardly-ever □ sometimes □ often
20. I worry about what others think of me ........ □ hardly-ever □ sometimes □ often
Published by Mind Garden, Inc., 1690 Woodside Road, Redwood City, California 94061, (650) 261-3500.
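For illustration, the following is a minimal sketch of totaling the 20 items above into a single trait-anxiety score, assuming the common STAIC convention of scoring hardly-ever as 1, sometimes as 2, and often as 3 and summing to a total between 20 and 60. The official scoring rules are those in the Mind Garden manual; this sketch only shows the idea.

# Minimal sketch of totaling STAIC Form C-2 responses into a trait-anxiety score.
# Assumed scoring convention: hardly-ever = 1, sometimes = 2, often = 3.
SCORE = {"hardly-ever": 1, "sometimes": 2, "often": 3}

def staic_trait_score(responses):
    """Sum the 20 item scores; responses is a list of the 20 circled choices."""
    if len(responses) != 20:
        raise ValueError("STAIC Form C-2 has 20 items")
    return sum(SCORE[r] for r in responses)

# Example: a child who answers "sometimes" to every item scores 40.
print(staic_trait_score(["sometimes"] * 20))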
Appendix I
Sample Student Responses for the Question:
What do you think helps you do better on these tests?
Response Types Student Response Example
1. Anxiety I think that relaxing and thinking about each
question and not worrying about the test so much.
That helps me do better.
2. English Language Learner I think Sat 9 Tests help me do better in english
school and japanese school. These tests are very
important so I will study.
3. Good/bad test taker Sometimes I know it but I still get a bad grade.
4. Incentives When the teacher gives us candy, I try harder.
5. Multiple choice and essay
questions
I will try harder if there are multiple choice
questions.
6. Nothing can help Nothing
7. Practice test taking skills My mom gets me these packets that are supposed
to be like the real test. I try to do them. Sometimes
my mom times me.
8. Previous education I learned a lot in the 3rd grade last year.
9. Self-esteem and self-efficacy If you try your best and be confident that we will
do well.
10. Sleep and nutrition Things that help me do better on these tests are
sleeping well, eating healthy foods, and trying my
best.
11. Studying I do better if I study really hard and paying
attention in class.
12. Study specific subjects I study extra spelling and vocabulary words with
my parents each week.
13. Time factor If I had more time than I could do better on these
tests. Especially the writing parts.
14. Tutoring/extra classes I go to the Kumon by the ice cream store two times
a week. I study English and math. It helps I think.
15. Multiple responses I think studying as hard as I can, get a good sleep,
eat breakfast, also pay attention and try hard.
16. Unclear response I think I do better when I think about my
grandfather.
17. No response
Appendix J
Sample Student Responses for the Question:
Why do you think it is important to take tests like these?
Response Types Student Response Examples
1. (left blank)
2. Corroborate grades we receive So my teacher can see that I deserve the good
grades I got.
3. Measure what we have
learned
It is important to take these tests because then we
can see how much we have learned in the last year.
4. Prepare us for a future job I want to get a good job after I graduate from
college.
5. To get good grades I think it is important to take these kinds of tests
because if I get a bad grade it would make me try
harder and I might get a good grade.
6. Promotion to next grade or
retention
So I can pass and go on to the Sixth grade.
7. Identify the smart students It is important to take these tests because it proves
and shows how smart we are to everybody.
8. We learn more during the test I learn more during the test.
9. Prepare us for higher grades
and college
It is important to take these tests to improve my
mind. It gets me prepared for the SATs I’ll take to
get into a good college.
10. It’s the ultimate test given Because it is the most important test you can take.
You must take it.
11. Express your feelings I can tell how I feel.
12. So teachers or parents can
see we learned
My teacher wants to know where the level is that I
am.
13. It’s not important It’s not important.
14. Unclear response I think it is important for my reasons.
15. I don’t know why I don’t know.
16. President, governor, state,
school board, or district
wants to know what you
have learned
I think it is important to take these tests so that the
people in government can see what we are learning
in class.