THE RELATIONSHIP BETWEEN TEACHER ASSESSMENT PRACTICES,
STUDENT GOAL ORIENTATION, AND STUDENT ENGAGEMENT IN
ELEMENTARY MATHEMATICS
By
Corinne Elizabeth Hyde
A Dissertation Presented to the
FACULTY OF THE ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2009
Copyright 2009 Corinne Elizabeth Hyde
DEDICATION
For my husband, Jonathon, who is the reason I’ve gotten this far; for my family,
who gave me a love of learning and keep me laughing; for my best friend Rajiv, who
reminds me who I am.
ACKNOWLEDGEMENTS
To my dissertation chair, Dr. Robert Rueda, my committee members, Dr.
Kimberly Hirabayashi and Dr. Denzil Suite, my husband, my family, and my friends, I
give my heartfelt thanks for your support, guidance, and encouragement throughout this
process.
TABLE OF CONTENTS
Dedication ...................................................................................................................... ii
Acknowledgements ....................................................................................................... iii
List of Tables.................................................................................................................. vi
Abstract .........................................................................................................................vii
Chapter 1: Introduction .................................................................................................... 1
Purpose of the Study .................................................................................................... 4
Research Question........................................................................................................ 6
Limitations of the Study ............................................................................................... 7
Delimitations of the Study ............................................................................................ 7
Chapter 2: Literature Review ........................................................................................... 8
Introduction ................................................................................................................. 8
Sources Searched ......................................................................................... 9
Review of the Literature ............................................................................................. 10
Types of Assessment .............................................................................................. 10
Impact of State Mandated Testing .......................................................................... 15
Engagement and Motivation ................................................................................... 16
Learning Environments and Motivation .................................................................. 26
Assessment, Motivation, and Engagement .............................................................. 27
Chapter 3: Methodology ................................................................................................ 34
Research Approach and Rationale .............................................................................. 34
Participants ................................................................................................................ 34
Measures .................................................................................................................... 36
Teacher Assessment Type Measure ........................................................................ 37
Student Goal Orientation Measure .......................................................................... 39
Student Engagement Measure ................................................................................. 43
Procedures ................................................................................................................. 46
Data Analysis ............................................................................................................. 49
Chapter 4: Results.......................................................................................................... 50
Correlation ................................................................................................................. 50
Summary.................................................................................................................... 55
Chapter 5: Discussion .................................................................................................... 57
Synthesizing the Results ............................................................................................. 57
Implications ............................................................................................................... 61
Future Research ......................................................................................................... 63
Conclusion ................................................................................................................. 64
References ..................................................................................................................... 66
Appendix A ................................................................................................................... 71
Appendix B ................................................................................................................... 74
Appendix C ................................................................................................................... 80
Appendix D ................................................................................................................... 84
Appendix E ................................................................................................................... 87
Appendix F .................................................................................................................... 90
LIST OF TABLES
Table 1: Description of the Sample Population .............................................................. 35
Table 2: Means and Independent Samples T-Test Results for Gender ............................ 36
Table 3: Questions Included on the Teacher Assessment Type Survey ........................... 39
Table 4: Factor Analysis for Student Goal Orientation Survey ....................................... 42
Table 5: Factor Analysis for Student Engagement Survey .............................................. 46
Table 6: Correlation between Goal Orientation, Engagement, and Assessment Type ..... 50
Table 7: Stepwise Regression for Performance Avoid Orientation ................................. 51
Table 8: Stepwise Regression for Mastery Orientation ................................................... 52
Table 9: Stepwise Regression for Performance Approach Orientation ............................ 53
Table 10: Stepwise Regression for Emotional Engagement ............................................ 53
Table 11: Stepwise Regression for Behavioral Engagement ........................................... 54
Table 12: Stepwise Regression for Cognitive Engagement ............................................. 55
ABSTRACT
This study set out to examine the relationship between the type of assessment a
teacher uses and the goal orientation and engagement of students in fifth grade
mathematics. Prior research on assessment types, goal orientation, and engagement was
examined, and from this research it was predicted that there would be significant
relationships between the type of assessment a teacher uses and the goal orientation and
engagement of his or her students. To determine if a relationship existed, teachers (n = 8)
were given a questionnaire developed by McMillan, Myran, and Workman (2002), which
provided information on whether teachers used constructed response assessment, teacher
made assessment, or objective assessment. Additionally, students (N = 115) were given a
modified form of a portion of the Patterns of Adaptive Learning Survey by Midgley et
al. (2000) designed to measure their performance approach, performance avoid, and
mastery goal orientations. Students were also given a modified form of the Feelings
About School Inventory (Fredricks, Blumenfeld, Friedel, & Paris, 2005) designed to
measure their cognitive, emotional, and behavioral engagement. The results of the
analyses found that there was no significant relationship between teacher assessment type
and student goal orientation or engagement.
CHAPTER 1
Introduction
The current focus of education in the United States is on accountability.
Teachers, schools, and school districts are charged with assessing and evaluating student
learning in more rigorous ways than ever before. The No Child Left Behind Act (U.S.
Department of Education, 2002) requires that states implement achievement testing that
all students must pass in order to be considered “proficient” in a particular subject area.
This increased focus on high-stakes assessment and evaluation has changed the way
many schools do business. Many schools have cut subjects that fall outside the purview
of NCLB, and many believe that NCLB encourages teachers to “teach to the test”
(Abrams, Pedulla, & Madaus, 2003; Loveless et al., 2007; Rose & Gallup, 2007).
One other aspect of the emphasis on assessment in the classroom that needs to be
examined is how student goal orientation and engagement may have changed. While it
would be difficult to determine, due to a lack of prior research, what the baseline goal
orientation and engagement for students was before the implementation of NCLB, we can
examine goal orientation and engagement as they now stand in relation to assessment.
Goal orientation can be defined as the reasons why individuals and organizations
pursue goals, and the way that those individuals and organizations approach tasks
motivationally. Everyone has goals, but how each person goes about achieving those
goals can differ. One person might be motivated by grades, social and academic
comparisons, or achievement; this person would most likely be categorized as having a
performance orientation. Conversely, a person might be motivated by a desire to excel,
regardless of the performance of others. This person, who wished to master the
knowledge or skill, rather than outperform others, would be considered to have a mastery
orientation (Elliot & Dweck, 1988).
Within these orientations, there are further categorizations. Each of these
orientations can be considered either an “approach” orientation or an “avoidance”
orientation. In general, one who is working towards something (e.g. a high grade, a first
place finish, mastery of a language) would be considered to have an approach orientation.
However, one who is looking to avoid failure, or to avoid going below a minimum
standard (not coming in last place, learning enough to “get by” in a language, not getting
the worst grade in the class) would be considered to have an avoidance orientation
(Dweck, 1986; Elliot & Dweck, 1988; Elliot & Harackiewicz, 1996).
Related to specific situations and tasks, each person may have either a
performance approach orientation, a performance avoidance orientation, a mastery
approach orientation, or a mastery avoidance orientation. Many educators assert that a
mastery approach orientation is the desired orientation in most situations. The reasoning
behind this assertion is that if students are motivated to master knowledge and skills,
they will be motivated to achieve their personal best both in and out of the classroom;
they will become “lifelong learners” (Ames, 1992; Ames & Archer, 1988; Barron &
Harackiewicz, 2001; Lau & Nie, 2008).
Additionally, student engagement is a crucial factor in student success.
Engagement can be defined in three ways: behavioral engagement, cognitive
engagement, and emotional engagement. Behavioral engagement is how involved a
student is in the task at hand. Cognitive engagement is how much desire the student has
to become involved in their own learning and to master knowledge. Emotional
engagement is the student’s feelings about the environment and people around them
(Fredricks, Blumenfeld, & Paris, 2004).
In contrast to motivation, engagement requires the student to become actively
involved in his or her learning in some way. Motivation is a prerequisite for engagement
to occur, but motivation can certainly take place without engagement.
Students may participate in activities without being fully engaged (Reeve, Jang, Carrell,
Jeon, & Barch, 2004). However, when a student is truly engaged in an activity or in a
lesson, then learning can occur. Additionally, engagement can affect a student’s
motivation to take part in activities and to master knowledge in a given subject area
(Meece, Blumenfeld, & Hoyle, 1988). Because assessment is such an important part of
classroom instruction, it is vital that researchers also examine whether there is a
relationship between evaluation methods and student engagement.
The current focus in the educational world on accountability can cause friction
between the types of assessment teachers use in the classroom and the types of
assessment students will be required to complete by the state or district. The emphasis on
high stakes testing and achievement of certain levels of performance impacts the kind of
assessments teachers implement in their classrooms (Abrams, Pedulla, & Madaus, 2003;
Loveless et al., 2007; Rose & Gallup, 2007). Some teachers administer frequent teacher-
created assessments, some use projects and observations of group work to assess, and
some do not assess formally at all beyond the testing mandated by the school district and
state. In low income areas, pressures to alter evaluation methods to reflect those used by
district and state agencies may be increased in an effort to raise students’ scores on the
tests mandated by the state and district (Abrams, Pedulla, & Madaus, 2003; Loveless et
al., 2007; Rose & Gallup, 2007).
However, despite any differences in resources, each state requires all schools to
adhere to the same standards and eventually produce the same results. As a result,
teacher assessment practices can be affected, which may be related to student goal
orientation and engagement. There is some indication that accountability pressures may
be detrimental to students and/or teachers, as documented in an article by Abrams,
Pedulla, and Madaus (2003), which describes how teachers alter their classroom
assessment practices, often against their professional opinion of what would actually be
best for the student, in order to reflect the format and content of state mandated tests.
However, one unexplored area is how different types of classroom evaluation systems
and assessment-related practices relate to student engagement and goal orientation.
Purpose of the Study
Given the preceding factors, the purpose of this study was to examine the
relationships among the self-reported assessment practices of the teacher, student
engagement, and student goal orientation. If the ultimate goal of education is to create
academically proficient students who are also lifelong learners, then it would be desirable
to attempt to increase student engagement as well as foster a mastery approach
orientation (while still maintaining a performance orientation to a lesser extent) (Ames,
1992; Ames & Archer, 1988; Barron & Harackiewicz, 2001; Lau & Nie, 2008).
To this end, it will be beneficial to examine what types of evaluation teachers
report using, beyond high-stakes testing. The first step is determining the type of
assessment that is occurring in the classroom. This includes informal and formal
assessments in the classroom, not including district and state mandated testing.
Additionally, this study examines the relationship between student goal
orientation, student engagement, and teacher assessment practices. Understanding the
role that assessment plays in the classroom involves more than just measuring what type
of assessment is used in the classroom, but also how those assessments may relate to
students’ motivation and engagement.
The final product of the educational system is not the teacher but rather the
student. Determining if there is a relationship between self-reported teacher evaluation
practices, student engagement, and student goal orientation may lead to changes that
foster a mastery approach orientation and more cognitive and emotional engagement
among students in our public schools.
Instead of focusing on assessment, goal orientation, and engagement in general,
this study focused on these factors as they relate to elementary mathematics. The reason
for this is twofold. First, there is very little research available on how assessment may
affect elementary-age students. Perhaps because it is easier (for reasons of comprehension
abilities and access to sample populations) to work with older students, the research that
does exist on the topic of assessment focuses almost completely on high school or
college-age students. While it is certainly important to explore the effects of assessment
on older students, elementary age students may react in completely different ways to
various types of assessment.
Additionally, the use of various types of assessment in mathematics is rarely
studied. The research that does exist in this area is exclusively focused on older students
(generally high school or college-age students). Few would argue the importance of
elementary-age students having a thorough understanding of mathematics concepts, yet
this topic has been sorely under-researched. By examining how the types of assessments
that fifth grade teachers give in mathematics are related to their students’ goal orientation
and engagement with mathematics, this study sheds light on how this population of
students and this subject area differs from the published research.
Research Question
This study focused on answering one main research question related to
assessment and goal orientation. This research question was designed to determine if
teacher assessment practices are related to student goal orientation and student
engagement. The following research question was the focus of this study.
• What is the relationship between teachers’ self-reported assessment practices,
student goal orientation, and student engagement in fifth grade mathematics?
The following model (Figure 1) shows the relationships examined in this study.
Figure 1. Model of constructs examined in this study. [Figure: diagram relating Teacher
Assessment Practices (traditional, portfolio, performance), Student Engagement
(behavioral, cognitive, emotional), and Student Goal Orientation (performance approach,
performance avoid, mastery).]
Limitations of the Study
Because this study was based on self-report instruments, it must be understood
that while the teachers indicated that they use certain evaluation methods, this may not
accurately reflect their actual classroom assessment practices. The self-reported nature
of the assessment instruments given to the students to measure student goal orientation
and engagement also means that the results of these instruments may be affected by the
honesty of the students, their ability to interpret the meaning of the questions, or even
their attitude towards the teacher. Additionally, this study only focuses on fifth grade
students, and only assesses evaluation practices in mathematics.
Delimitations of the Study
This study only focused on the relationship between teachers’ self-reported
assessment practice, student goal orientation, and student engagement. For the scope of
this study, there was no measurement or analysis of other motivational constructs. This
study also took place at three school sites, which were extremely similar in population
demographics.
CHAPTER 2
Literature Review
Introduction
To foster the highest level of motivation and engagement possible among
students, it is necessary to examine the relationship between teacher assessment practices,
student motivation, and student engagement. This allows teachers and administrators to
make informed decisions about the type of evaluations to include in classroom
assessments.
The following literature review defines three types of assessment examined in this
study, analyzes research related to these types of assessment, and examines the
relationship between evaluation and assessment methods, student motivation, and student
engagement in mathematics. This literature review addresses the most recent findings
related to student engagement, student motivation, and the effects of assessment methods
on these variables. In addition, this review will define common terms and types of
assessment and evaluation.
In the first section of this literature review, literature related to the various types
of assessment in mathematics is reviewed. The second section of this literature review
focuses on literature directly related to the effects of state-mandated testing or other high-
stakes testing on teacher practices and student motivation. Next, motivation constructs
and engagement constructs are defined and explained. In the next section, findings
related to the effects of learning environments on student engagement and goal
orientation are discussed. Finally, in the last section research related to the relationship
between assessment and student motivation and engagement is reviewed.
Sources Searched
For this literature review, the ERIC online database, PsycINFO, and Google
Scholar were searched using the following terms (“assessment/evaluation” indicates that
two separate searches were performed, first using the term “assessment” and then the
term “evaluation”): assessment/evaluation and elementary, student engagement and
assessment, goal orientation and assessment, mathematics and assessment/evaluation,
elementary and mathematics and assessment/evaluation, motivation and
assessment/evaluation, goal orientation and assessment/evaluation, teacher made
assessment/evaluation, performance assessment/evaluation, constructed response
assessment/evaluation, traditional assessment/evaluation, pencil and paper
assessment/evaluation, engagement and assessment/evaluation, authentic
assessment/evaluation, alternative assessment/evaluation, performance assessment and
mathematics, traditional assessment and mathematics, performance assessment and
engagement, performance assessment and goal orientation, traditional assessment and
goal orientation, and traditional assessment and engagement. Additionally, some
literature included in this study was found by searching the reference lists of other studies
included in this literature review.
The searches were limited to peer-reviewed journals only, and only included
research articles from 1998-2008, with the exception of a few seminal works related
directly to the research question being examined or in cases where no current literature on
the topic was available. Studies included in this literature review are limited to works
focusing on K-12 students, with the exception of a few articles that are either
exceptionally relevant to the topic at hand, or on topics where no research was found that
dealt with K-12 students.
Review of the Literature
Types of Assessment
For the purposes of this study, it is necessary to define some terms related to
assessment, as well as various types of assessment used in the classroom. First, it is
necessary to define and describe the differences between formative and summative
evaluations. It should be noted that for the purposes of this review, the terms
“evaluation” and “assessment” are used interchangeably.
Summative evaluations are evaluations that generally take place either at the end
of a program, after the completion of an intervention strategy, or at the end of a school
year. Currently, as a result of the NCLB legislation, summative evaluations have been
mandated in every state. Summative evaluations are intended to examine the
effectiveness of a particular program or intervention, specifically how much student
learning has taken place over a set period of time (Airasian & Madaus, 1972).
Formative evaluations, on the other hand, take place during, not after, the
implementation of a program or intervention or during a set period of time. The intent of
formative evaluation is to provide an in-progress measure of student learning or the
effectiveness of a program, with the intention of implementing changes based on the
results of the assessment (Airasian & Madaus, 1972).
Currently, all public schools in the state of California are required to implement
state mandated end-of-year pencil and paper assessments, which are considered
summative assessments. However, teachers are not mandated by the state to use a
particular type of formative assessment during the school year to inform their teaching
practice. Thus, the choice of formative assessment varies in each district, school, and
classroom.
Three types of formative assessment that may be implemented in the elementary
school classroom are examined in this study: objective assessment, constructed response
assessment, and teacher made assessment. Various types of assessment can differ greatly
in their objectivity, their method of implementation, and their accuracy in assessing
student progress (Woolfolk, 2004; Pintrich & Schunk, 2002).
Objective assessment is likely the type of assessment most familiar to
non-educators. This type of assessment involves a set of problems or
questions on paper that the student must answer in writing. With objective assessment,
each student is generally given the same test (sometimes multiple forms may be used),
and all tests are commonly graded by the same standard. Numeric scores or letter grades
are often used to indicate whether the student has “passed,” or met the standards
necessary to be considered proficient in a particular knowledge or skill set. Objective
assessment is still the most widely used form of assessment in the classroom (Woolfolk,
2004; Pintrich & Schunk, 2002). No research was found that looked specifically at the
effects of objective assessment; objective assessment is seen as the standard and research
on assessment tends to focus on how alternative types of assessment (including
constructed response assessment) affect student motivation, engagement, and
achievement.
Teacher made assessment is a type of assessment that differs not in its structure or
method of delivery, but in where it originates. Teacher made assessment is any type of
assessment instrument (project, multiple choice test, quiz, etc.) that was designed by the
classroom teacher. Assessments designed by textbook companies, school districts, and
supplementary educational materials publishers are widely available for use in the
classroom. However, the majority of the assessment that teachers do in the classroom is
teacher made (Boothroyd, McMorris, & Pruzek, 1992; Marso & Pigge, 1988). This type
of assessment is important to examine not only because it is so widely used, but also
because it is so under-researched. No recent information was found on the reliability and
validity of teacher made assessments, and studies and articles that do mention teacher
made assessment report that teachers in general feel unsure of their ability to design
rigorous assessments and unsure of how to properly use the data that they glean from
teacher made assessments (Boothroyd et al., 1992; Frey & Schmitt, 2007). Additionally,
many teachers report having little to no training in assessment during their preservice
training or during professional development (Frey & Schmitt, 2007). Though no recent
studies were found that examine the effects of teacher made assessment, teacher made
assessment is the primary type of assessment that goes on in the classroom (Boothroyd et
al., 1992; Frey et al., 2005; Marso & Pigge, 1988). Therefore, it is important to examine
the effects that this type of assessment may have on students.
Constructed response assessment consists of providing students with an
opportunity to solve a problem or apply knowledge to a situation (ideally one that mimics
real life) in order to demonstrate their mastery of a particular skill or body of knowledge.
Students can solve a problem or apply knowledge in their own way, and their work is
judged in a manner that reflects a range of proficiency; answers can be more than just
correct or incorrect. Constructed response assessments are typically judged using a
rubric that delineates the various skills or knowledge to be evaluated during the
assessment, and these skills or knowledge are judged on a continuum. Constructed
response assessments are generally considered to be more objective than a portfolio
assessment, but less objective than an objective assessment (Woolfolk, 2004; Pintrich &
Schunk, 2002).
A study by Fuchs, Fuchs, Karns, Hamlett, and Katzaroff (1999) examined the
impact of implementing performance assessment (which is a type of constructed response
assessment) in sixteen mathematics classrooms across grades two, three, and four.
Teachers were randomly assigned to either performance assessment or non-performance
assessment groups. The teachers assigned to the performance assessment group
completed professional development on performance assessment and administered three
performance assessments over the course of several months. The teachers worked
together to score the performance assessments and provided feedback to the students on
their performance. Teachers were then given questionnaires assessing their knowledge of
performance assessments and how much time they allotted to various types of instruction
in the classroom (word problems, computation, basic facts, problem-solving, and other).
Additionally, instructional planning sheets used by the teachers to describe how they
planned instruction to reflect the content of the performance assessments were collected.
Finally, students were given analogous, related, and novel problem solving tasks (Fuchs
et al., 1999).
The data collected indicated that teachers who were implementing the
performance assessments tended to use more problem solving and word problems in the
classroom and fewer basic facts and computation than the non-performance assessment
teachers. Additionally, teachers’ knowledge about performance assessment and the
implementation of performance assessment did increase. For students who were at or
above grade level, the use of performance assessment increased students’ problem
solving skills for related and analogous tasks. Problem solving skills for below grade
level students were comparable to those of non-performance assessment students. For
students who were above grade level, performance assessment also increased problem
solving for novel tasks (Fuchs et al., 1999). The results of this study show some of the
potential benefits for at grade level and above grade level students due to the
implementation of performance assessment in the classroom.
The few studies previously reviewed were the only current studies found that
dealt with the various types of assessment in mathematics. The vast majority of the
literature on assessment published in the last 10 years has focused either on ways to
implement various types of assessment in the classroom or on the use of portfolio
assessments with preservice teachers; very little research addresses the various types of
assessment in mathematics specifically.
It could be surmised that constructed response assessment would increase student
cognitive and emotional engagement and mastery goal orientation by providing greater
student involvement and an increase in choice. Research articles reviewed in a later
section of this literature review will provide some insight into the relationship between
assessment type, student goal orientation, and student engagement.
In the next section, the review turns to the impact of state mandated testing on
students in our public schools. In light of the constant pressure to perform on state
mandated tests that many schools now feel, it is important to consider how state
mandated testing may have impacted assessment practices at school sites and in
individual classrooms.
Impact of State Mandated Testing
The emphasis on accountability brought forth by the No Child Left Behind
legislation (U.S. Department of Education, 2002) has resulted in states adopting
mandated standardized testing for all K-12 public schools. Students are now tested on a
yearly basis in a variety of subjects. Schools that do not meet the standards set by the
state boards implementing these high-stakes tests have felt pressure to increase test
scores or risk losing funding, losing staff, or losing control over the administration and
curriculum at the school site (Abrams, Pedulla, & Madaus, 2003; English & Steffy,
2001; Jones, Jones, & Hargrove, 2003).
Abrams, Pedulla, and Madaus (2003) analyzed data from the National Survey of
Teachers’ Perceptions of the Impact of State Testing Programs (Pedulla, Madaus, Ramos
& Miao, 2003) to draw conclusions about how state mandated testing is affecting
teachers. This 80-item survey asked teachers a variety of questions about their classroom
teaching practice, the state testing program, and student learning. The results indicated
that teachers are pressured to produce high test scores and spend instructional time on
test preparation (Abrams, Pedulla, & Madaus, 2003). As a result of this pressure, many
teachers alter their teaching practices to reflect the types of assessment that students will
experience when evaluated at the end of the school year. This means that many teachers
rely solely on this type of traditional paper-and-pencil assessment as both summative and
formative assessment. In fact, many teachers choose, in spite of their better knowledge
about what is necessary for quality instruction, to tailor their assessment methods to
reflect the format and content of the state-mandated tests (Abrams, Pedulla, & Madaus,
2003).
Some research concludes that the implementation of state-mandated tests has a
positive effect on student performance. One study in particular (Lane, Parke, & Stone,
2002) administered a series of surveys to teachers, principals, and students over 5 years,
and analyzed student performance data from the Maryland School Performance
Assessment Program and the Maryland Learning Outcomes. This study indicated that
schools where instruction (which includes both lessons and assessments) was altered to
closely match the Maryland School Performance Assessment Program showed greater
gains on the Maryland School Performance Assessment Program than schools where
instruction was not altered (Lane, Parke, & Stone, 2002).
While the authors of the study conclude that this is an indication that student
learning has been positively affected by the implementation of the Maryland School
Performance Assessment Program and the Maryland Learning Outcomes, these results
could be interpreted differently. An increase in scores could be the result of altering
instruction to closely reflect the format and content of the standardized test. Teachers
who teach to the test spend instructional time showing students how to answer questions
in specific content areas for the test, as opposed to helping students gain a deep
knowledge and understanding of the various subject areas.
Engagement and Motivation
The various types of evaluation used in the classroom can have a variety of effects
on the student. Academic performance, interest, self-efficacy, student motivation, and
student engagement are just a sampling of the factors that can be affected by the method
of evaluation used in the classroom (English & Steffy, 2001; Fuchs et al., 1999; Jones,
Jones, & Hargrove, 2003). However, this study focuses only on student goal orientation
and student engagement. Therefore, this literature review examines only studies that
relate to those two constructs.
Student engagement. The following portion of this literature review will discuss
the types of student engagement and review seminal and recent studies related to defining
and conceptualizing student engagement as well as the relationship between student
engagement and achievement.
In common language, engagement is typically thought of as synonymous with
attention or interest. However, within the body of research related to student
engagement, engagement can be quantified in varying ways. The most commonly
accepted conceptualization of engagement includes three different types: behavioral
engagement, cognitive engagement, and emotional engagement (Fredricks, Blumenfeld,
& Paris, 2004).
Behavioral engagement is the aspect of engagement that follows the colloquial
definition of engagement most closely. Behavioral engagement deals with participation
and involvement in activities of both an academic and an extracurricular nature
(Fredricks, Blumenfeld, & Paris, 2004). Behavioral engagement is considered vital to
getting students to attend school regularly, keeping students doing their schoolwork, and
keeping students from dropping out (Fredricks et al., 2004).
Emotional engagement has to do with students’ feelings towards the people
and environment around them (Fredricks et al., 2004). This includes
parents, teachers, other students, the school site, and academic work. Emotional
engagement is considered crucial in creating bonds between the student and the
school, the student and other students, the student and teachers, and the student and his
or her work. This, like behavioral engagement, helps to keep the student in school and
working towards either mastery or performance goals (Fredricks et al., 2004).
Finally, cognitive engagement is vital, but not as visible as behavioral
engagement. A student can exhibit behavioral engagement without being cognitively
engaged, simply by participating (Fredricks et al., 2004). A student who is cognitively
engaged has a desire to become involved in his or her own learning and a desire to
master knowledge (Fredricks et al., 2004).
Engagement can also be conceptualized differently by different teachers. Following a
phenomenographic study of classroom teachers, Harris (2008) identified six qualitatively
different ways of conceptualizing student engagement. Phenomenography uses
qualitative methods to describe the varying understandings of a concept held within a
particular group, rather than focusing on the beliefs or understandings of individuals. In this study,
the six conceptualizations of student engagement that Harris identified included
participating in classroom activities and following school rules, being interested in and
enjoying participation in what happens at school, being motivated and confident in
participation in what happens at school, being involved by thinking, purposefully
learning to reach life goals, and owning and valuing learning. These conceptualizations
reflect the three types of engagement described by Fredricks et al. (2004): behavioral
engagement, cognitive engagement, and emotional engagement.
Student engagement has been widely shown to have a great effect on student
achievement (Finn, 1989; Marks, 2000; Willingham, Pollack & Lewis, 2002). Students
who are engaged in their school experience are much more likely to have higher
achievement. Conversely, students who are disengaged from school or from the classroom
are more likely to have lower achievement. Additionally, various factors, such as age,
gender, socioeconomic status, grade level, and race/ethnicity can affect how engaged or
disengaged a student might be in a particular school environment, in a particular class, or
with a particular activity (Finn, 1989; Marks, 2000; Willingham, Pollack, & Lewis,
2002).
Park (2005) conducted a study using national survey data to determine if student
engagement affects student achievement, after background variables have been taken into
account. The survey data used was gathered from first grade students and their math
teachers. Survey data indicating student engagement was compared to survey data
indicating growth in student achievement over time in math, controlling for factors such
as socioeconomic status, gender, and minority status. The results of the analysis showed
that student engagement does have a positive effect on student achievement in
mathematics. Park concluded that student engagement is a significant predictor of student
academic success.
Marks (2000) conducted a study that also examined how reform initiatives have
affected student engagement and analyzed patterns in student engagement. Data was
collected from 3,669 elementary, middle, and high school students in mathematics and
social studies classrooms within schools that had implemented some kind of reform
initiative designed to increase student engagement. Students were given a survey that
asked about attitudes and experiences related to either social studies or mathematics and
related to school in general. The survey also included questions about students’ personal
lives and families.
Results of the study that are relevant to this literature review showed that as
students advance in grade level, their level of engagement decreases, though students in
mathematics reported greater engagement than students in social studies classes.
Elementary mathematics students also indicated that there was greater classroom support
for social studies than for mathematics, and these students experienced greater parental
involvement with school. Additionally, it was found that a student’s general orientation
towards school had a significant effect on that student’s engagement in a particular
subject area (Marks, 2000).
Student motivation. Motivation, on the other hand, refers to “the process whereby
goal-directed activity is instigated and sustained” (Pintrich & Schunk, 2002, p. 5).
Motivation constructs that are relevant to this study include intrinsic motivation, extrinsic
motivation, and mastery and performance goal orientation. Intrinsic and extrinsic
motivation will be briefly defined, and then goal theory, which is one of the two foci of
this study, will be examined in greater depth.
Intrinsic motivation refers to internal desires or reasons for a person to pursue a
particular goal. A person who is intrinsically motivated for a particular task will pursue
that task without any external motivational factors being applied. Such a person may
pursue the goal for a variety of internal reasons, but the drive to pursue it comes from within
(Pintrich & Schunk, 2002). Extrinsic motivation refers to external influences that can
motivate a person to work towards a goal. When institutions, society, teachers, parents,
coaches, or friends push a person to reach a certain goal, either blatantly or obliquely, this
constitutes extrinsic motivation (Pintrich & Schunk, 2002). These two types of
motivation can exist, and often do, in conjunction with one another; a person can be both
intrinsically and extrinsically motivated to pursue a particular goal. Additionally, these
types of motivation may exist in greater or lesser quantity; a person may be extremely
extrinsically motivated to pursue a goal, but only marginally intrinsically motivated
(Pintrich & Schunk, 2002).
Goal theory. The following portion of this literature review will discuss goal
theory in more detail, because one of the foci of this study is student goal orientation. In
this section, goal theory and the orientations that make up goal theory will be discussed,
and seminal and recent studies relating to the conceptualization and measurement of goal
theory in general will be analyzed.
A person’s general motivational state can be categorized as either focused on
performance or on mastery (Pintrich & Schunk, 2002). These states are referred to,
respectively, as a performance goal orientation and a mastery goal orientation. A
performance goal orientation means that a person is motivated to pursue a goal in order
to be judged positively by others or to avoid failure. A mastery goal orientation, on the
other hand, exists when a person is motivated to pursue a goal in order to master
knowledge or improve himself or herself in some way, without concern for recognition or
comparison to the achievement of others. Like intrinsic and extrinsic motivation, these
goal orientations can exist in conjunction with one another. A person can hold both a
mastery goal orientation and a performance goal orientation for a particular task (Pintrich
& Schunk, 2002).
Additionally, both performance and mastery goal orientations can be described as
either approach or avoidance goals, thus creating performance-approach, performance-
avoidance, mastery-approach, and mastery-avoidance goals (Pintrich & Schunk, 2002).
Performance-approach goals are goals that focus on achieving a certain high standard,
like being the fastest or most proficient at a task. Performance-avoidance goals are goals
that focus on attaining a minimum level of performance, such as “not being last.”
Mastery-approach goals are goals that focus on achieving mastery of knowledge or skills,
regardless of comparison to others. Mastery-avoidance goals focus on achieving a
minimum level of proficiency with a task (Elliot & Harackiewicz, 1996; Elliot & Church,
1997).
Elliot and Harackiewicz (1996) published a study in which they tested the
division of performance goals into both approach and avoidance goals. In this study, the
researchers completed two experiments designed to test their theory that performance
orientations could be defined as either approach or avoidance focused. The first
experiment involved giving 54 female undergraduate students a series of puzzles in
various conditions designed to induce a performance approach orientation, a
performance avoid orientation, a mastery orientation, or a performance neutral
orientation. The students’ performance, time spent on the task, enjoyment of the task,
and involvement in the task were measured. Results of the study supported the
conceptualization of the performance approach goal orientation and performance avoid
goal orientation constructs (Elliot & Harackiewicz, 1996).
The second experiment by Elliot and Harackiewicz (1996) consisted of 47 male
undergraduates and 45 female undergraduates who were given similar puzzles to the
students in the first experiment. This time, however, only three conditions were
used: a performance avoid condition, a performance approach condition, and a mastery
condition, though the conditions were implemented in the second experiment in a more
subtle manner than in the first experiment. The goal of this experiment was to provide
further support for separating performance goal orientations into both performance
approach goal orientations and performance avoid goal orientations. The same measures
were used in the second experiment as in the first experiment. The results of the data
from the second experiment, like the first experiment, supported the separation of
performance goal orientation into both performance approach goal orientation and
performance avoid goal orientation (Elliot & Harackiewicz, 1996).
A study by Elliot and Church (1997) also tested the causes and consequences of
performance approach goal orientation, performance avoid goal orientation, and mastery
goal orientation. In this study, the participants were 82 male and 122 female undergraduate students enrolled
in a personality psychology course. These students were given separate instruments to
measure their fear of failure, achievement motivation, competence expectancy,
achievement goals, competence perceptions, intrinsic motivation, and graded
performance (Elliot & Church, 1997).
The results of the study indicated that both mastery and performance avoid goals
were linked with achievement motivation and fear of failure, respectively. Performance
approach goals were linked with both achievement motivation and fear of failure.
Students with mastery and performance approach goals had high competence
expectancies, while students with performance avoid goals had low competence
expectancies. The results of this research show that students with mastery goal
orientations have high motivation to achieve as well as high expectation that they will
achieve. Students with performance approach orientations had a fear of failure as well as
high expectations for their own competency. Students with performance avoid
orientations had a fear of failure as well as low expectations of competency (Elliot &
Church, 1997).
These results suggest that developing a mastery goal orientation among students
would be the most beneficial, as it leads to students having high motivation to achieve,
and high expectations for their own competence. In fact, it has been generally accepted
among the educational community in recent years that fostering intrinsic motivation and
mastery goal orientations among students is the ideal goal, because it increases the
likelihood that students will become “lifelong learners” who pursue knowledge on their
own throughout their school careers and the rest of their lifetimes (Pintrich & Schunk,
2002).
Dweck (1986) published a seminal meta-analysis that described how motivation
affects children’s acquisition, transfer, and use of knowledge and skills. The findings of
the studies analyzed in this meta-analysis demonstrate how performance or learning
(mastery) goals impact children’s reactions to their successes and failures as well as their
cognition in a particular task. Performance goals tended to cause more negative reactions
to failure than mastery goals. Additionally, cognition was more positively affected by
mastery goals than by performance goals.
A study by Pintrich (2000) examined how mastery and performance goal
orientations are related to motivation, strategy use, affect, and performance. In this study,
150 eighth and ninth grade math students were given questionnaires which were an
adapted version of the Motivated Strategies for Learning Questionnaire (MSLQ)
(Pintrich, Smith, Garcia, & McKeachie, 1993). Additionally, data on actual student
grades earned was collected. This data was collected at the beginning of the eighth grade
year, the middle of the eighth grade year, and the beginning of the ninth grade year.
One of the ways in which Pintrich (2000) examined the data was to determine
the effects of mastery and performance goals over time, as well as the interaction
between mastery and performance goals over time. The results of the study indicated that
students low in both performance and mastery goals had the lowest interest and value in
mathematics. Students with high performance goals had greater interest and value for
mathematics than the students who had low performance and low mastery goals, but this
interest and value decreased over time. Students who had high mastery goals and high
performance goals showed the same high levels of interest and value as the students who
had high mastery goals and low performance goals. However, over the course of the
study, the group with high mastery and low performance goals showed a decrease in
interest and value over time that led them to the same levels of interest and value at the
beginning of ninth grade as the students who began with low mastery goals and either
low or high performance goals (Pintrich, 2000). The results of this study show that while
mastery goal orientations may be the ideal currently put forth by many educators,
performance orientation, in combination with mastery orientation, is still a necessary
component to student success.
The literature on goal orientation in general shows that goal orientation can be
described as performance approach, performance avoid, or mastery. While there is
some support for the delineation of mastery goals into both approach and avoid goal
orientations, there is still confusion regarding exactly how to quantify and measure the
mastery avoid goal orientation. Therefore, for the purposes of this study, goal orientation
will be categorized as performance approach, performance avoid, or mastery.
The following section of the literature review will focus on how the learning
environment (including the various physical and psychological features of the classroom)
may be related to students’ motivation, particularly their goal orientation. Analyzing the
research in this area is important to determine what factors may potentially affect the goal
orientation and engagement of the students in this study.
Learning Environments and Motivation
The learning environment encompasses many factors that can have an effect on
motivation. One framework that defines the various factors that have an effect on student
motivation is the TARGET framework, which was described by Epstein (1988), but was
formatted to the TARGET acronym by Ames (1992). TARGET consists of six
components that can have an effect on student motivation. These factors include task
design, distribution of authority, recognition of students, grouping arrangements,
evaluation practices, and time allocation. Task design refers to the design of the lessons
and assignments that are used in the classroom to facilitate student learning. Distribution
of authority refers to how much, if any, opportunity students have to function in
leadership roles and have control over activities. Recognition of students consists of the
elements in the classroom, such as token economies or praise, which recognize student
achievement. Grouping arrangements refers, as it sounds, to the way in which groups are
composed for students to work with each other. The evaluation practices component,
which is a focus of this literature review, refers to the ways in which teachers
formatively and summatively assess student learning. Finally, time allocation
consists of the amount of time available to complete tasks, the amount of work required
in a given amount of time, and the pace of instruction. Each of these factors alone or in
combination can have an effect on student goal orientation.
Church, Elliot, and Gable (2001) examined the correlation between perceptions of
the classroom environment, student achievement goals, and grades among
undergraduates. The participants in this study were undergraduate students enrolled in a
chemistry course who were assessed using measures of their perceptions of the classroom
environment, measures related to another research project, and then measures of their
achievement goals. These students were assessed before the first formal examination in
the course had taken place to ensure that performance feedback did not play a role in the
results. The results of this study showed that mastery goals were more evident when
there was no harsh evaluation or strong focus on evaluation. Conversely, performance
goals were more evident when there was a strong emphasis on evaluation or when harsh
evaluations were used.
Assessment, Motivation, and Engagement
The research that most closely relates to the subject of this study addresses the
relationship between assessment and student motivation and engagement. A search of
peer-reviewed articles published between 1998 and 2008 in the ERIC and PsycINFO
databases, using the terms assessment/evaluation and motivation and
assessment/evaluation and engagement, returned 360 studies. However, of those studies,
only seven actually examined the relationship between evaluation/assessment and
student motivation or engagement.
One study by Brookhart and DeVoge (1999), conducted among third grade
students, used a combination of classroom assessments and interviews to gauge what
relationship may exist between classroom assessment and student motivation and
achievement. In this study, four different assessment events were observed, and students
were given surveys before and after each assessment event in order to measure their self-
efficacy, perceived characteristics of the task, amount of effort invested, and achievement
on the task. The study concluded that there was a positive relationship between the
variables of task, self-efficacy, effort, and achievement (Brookhart & DeVoge, 1999).
Further, a study that took place in social studies classes in an urban high school
showed that student goal orientation can vary depending upon the type of assessment
used (Brookhart & Durkin, 2003). This study took place in one teacher-researcher’s
classroom, and took the form of a case study describing 12 diverse assessment events.
Students were assessed before and after each assessment event using the Motivated
Strategies for Learning Questionnaire (Pintrich et al., 1993) and the Student Activity
Questionnaire (Meece et al., 1988; Meece & Miller, 2001). The results of this study
showed that student perceptions of self-efficacy and the task, amount of mental effort,
strategy use and goal orientation varied dependent upon the assessment and the type of
student.
Additionally, Brookhart, Walsh, and Zientarski (2006) examined the effect of
classroom assessment on achievement, compared to prior achievement. This study was
conducted with 223 eighth graders in various classroom assessment environments.
Teachers involved in this study selected one pencil and paper test and one performance
assessment for inclusion in this study. At the time each assessment was given, students
were assessed for motivation and effort using the Motivated Strategies for Learning
Questionnaire (Pintrich et al., 1993). It was found that classroom assessment
environment and prior achievement had virtually the same effect on current student
achievement. This means that how students are evaluated in a current class is just as vital
to their current achievement as their previous performance in that subject area.
Stefanou and Parkes’ (2003) study closely resembles the study described in this
dissertation. Students in three fifth grade classes took three types of tests, including a
traditional paper and pencil test, laboratory exercises, and a performance assessment. The
effect of each assessment type was examined in relation to student motivation and
cognitive engagement. Students were assessed using the Science Activity Questionnaire
(reported in Meece, Blumenfeld, and Hoyle, 1988). The results showed, unsurprisingly,
that paper and pencil assessments encouraged a more performance-oriented focus than
did laboratory exercises. However, performance assessments also encouraged a more
performance-oriented focus than laboratory exercises. The results of Stefanou and
Parkes’ (2003) study are extremely important and relevant to this study because they
clearly show that the type of evaluation that is implemented in the classroom can result in
students adopting a performance or mastery orientation.
Haigh (2007) found results that contradicted those of Stefanou and Parkes (2003).
Although the subjects were college students, the results suggest a possible alternative to
the findings of previous research indicating that alternative assessments increase
student motivation more than traditional paper-and-pencil tests. The study examined the
effects of a weekly class quiz on motivation, engagement, and achievement in college
students. The results of this study showed that taking a weekly quiz on the previous
week’s information, and then receiving immediate feedback on performance on the quiz
increased student motivation and engagement. However, the study did not
determine whether this had an effect on student goal orientation. It is possible that these weekly
quizzes were increasing student performance orientation, either encouraging a
performance approach or a performance avoid orientation. The students who experienced
increased motivation and engagement may have only been motivated and engaged in
order to receive passing grades on these weekly quizzes (Haigh, 2007).
Hancock (2007) examined the relationship between assessment type and student
academic achievement and motivation, comparing performance assessments and traditional
pencil and paper assessments with graduate students. This study involved a control
group, which was given traditional paper and pencil assessment, and a treatment group,
which was assessed using performance assessment. Both groups were given the exact
same instruction prior to being assessed. Then, students’ motivation was assessed using
the Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich et al., 1993).
Results of the study indicated students who were given the performance assessments
throughout the course performed higher on the final examination (which was a paper and
pencil assessment) than students who had taken the paper and pencil assessments. The
performance assessment students also showed higher levels of motivation to learn,
according to the MSLQ, than students who took the paper and pencil examinations.
A study that examined the relationship between assessment and motivation among
students of a more comparable age range was published by Maslovaty and Kuzi (2002).
This study took place in four elementary schools in Israel. Two of the schools studied
used traditional paper and
pencil assessment to evaluate student progress, while the other two used alternative
assessment. The results of the study showed large differences in the way students
perceive assessment, as well as large differences in the types of motivational goals
(performance or mastery) adopted by the students. The students at the school where
alternative assessment was used to evaluate student progress were more likely to exhibit
mastery goals than students at schools that used the traditional paper and pencil tests.
Research shows that the type of assessment used in the classroom has a significant
relationship with the motivation and engagement of the students in the classroom. How
teachers choose to evaluate the student, the way in which those evaluations are
administered, and the way in which the results of those evaluations are used are related to
the goal orientation and engagement of the student.
General Findings and Gaps in the Research
The research discussed in this literature review illustrates that state-mandated
testing and high-stakes testing have a great potential impact on how instruction is
implemented in the classroom and on how teachers assess their students. The influence
of state mandated tests has caused many teachers to alter curriculum to reflect format and
content of the high stakes tests (Abrams, Pedulla, & Madaus, 2003; English & Steffy,
2001; Jones, Jones, & Hargrove, 2003). Additionally, the literature reviewed shows a
strong link between self-efficacy and student goal orientation. While this relationship is
not the focus of this study, it would be useful to examine in future research.
With little exception, the studies reviewed here show a significant relationship
between assessment, goal orientation, and engagement. Students who are assessed using
alternative assessments of some kind tend to have higher motivation in general and are
more likely to adopt a mastery goal orientation than students who are evaluated using
traditional paper-and-pencil assessments (Brookhart, Walsh, & Zientarski, 2006;
Hancock, 2007; Stefanou & Parkes, 2003). This could be due to an increased level of
interest among students who take assessments that do not follow the traditional format,
or to students having greater self-efficacy regarding their ability to succeed on
assessments in alternative formats. Another possible reason is that constructed response
assessments, for example, may provide a broader and deeper sampling of students'
knowledge than the relatively narrow slice of work involved in taking a traditional
paper-and-pencil test. This could also result in decreased test anxiety and a greater focus
on mastering the information rather than simply receiving a test score.
It is also possible that objective tests can increase motivation. Based on the
previously reviewed research, however, any increase in motivation resulting from
objective tests, which are typically used to provide numeric scores that rate student
achievement, would likely be an increase in performance goal orientation rather than
mastery goal orientation.
While there seems to be a clear link between the use of traditional paper-and-pencil
tests and both an increased level of performance orientation among students and
decreased student engagement, the current literature is limited: the vast majority of
studies on the link between assessment and student motivation and engagement were
conducted with students at either the secondary or the postsecondary level.
For this reason, it is important to examine what correlation, if any, exists
between the type of assessment given to elementary-age students and those students'
motivation and engagement, and what the nature of that link may be. Additionally, past
studies have not examined this relationship in the context of mathematics curriculum.
Because mathematics is so often taught, practiced, and evaluated using objective
assessment methods, even in the elementary grades, it would be beneficial to examine
what effect this type of instruction and assessment has on student motivation and
engagement (Pintrich & Schunk, 2002; Woolfolk, 2004). Researching this topic can also
provide insights into what kinds of motivation and engagement may be possible for
mathematics students at the elementary level.
If state testing has led teachers to pattern their assessments after state testing
formats, and if the results of this study show that objective assessment practices are
related to student goal orientation and engagement, then conclusions can be drawn
about the indirect effects state-mandated testing has had on student goal orientation and
engagement. Additionally, and perhaps most importantly, determining the nature of the
relationship between student goal orientation and engagement and teacher assessment
practices can inform administrators' and teachers' decisions about how to assess student
learning so as to increase engagement and encourage goal orientations that lead to
student success.
CHAPTER 3
Methodology
Research Approach and Rationale
In order to examine the relationship between assessment methods used in the
classroom, student goal orientation, and student engagement, it was necessary to employ
several measurement tools. The measurement tools used in this study were self-report
instruments administered to both teachers and students. While self-report instruments
can reflect inaccuracies due to intentional or unintentional misrepresentation by the
individual being evaluated, these tools are the best option when surreptitious observation
is impossible and observers in the classroom would alter teaching practices. While
extended observation over a long period of time would be an ideal component of this
study, there is no way to implement this without the presence of the researcher in the
classroom affecting the practices of the teacher and the behavior of the students.
Additionally, due to the legal requirements involved with studying minor students,
observing students in the classroom without their knowledge would be impossible. As a
result, this study consists of three self-report assessment instruments designed to measure
the assessment practices of the teacher, the goal orientation of the student, and the
engagement of the student. The surveys measuring student goal orientation and student
engagement were administered to students. The survey measuring teacher assessment
practices was administered to teachers.
Participants
The sample population of this study consisted of 115 fifth-grade students in eight
classrooms at three Southern California elementary schools. These schools were all part
of the same school district on the outskirts of a major metropolitan area, and were located
in close proximity to each other. The schools were recruited through contact with
district-level administrators. The eight fifth grade teachers at these schools were also part
of this study.
Data was collected from the schools on student age, gender, and ethnicity for the
purpose of describing the sample population. The student population consisted of
approximately 87% Hispanic students, 8% White students, 2% African American
students, and less than 1% each of Asian American, Pacific Islander, and Native
American students. The mean age of the students was 10.2 years, with the youngest
student aged 9 years and the oldest aged 11 years. The sample consisted of 47 male
students and 68 female students (see Table 1).
Table 1
Description of the Sample Population
Students
N= 115
Age
Mean 10.2
Low 9
High 11
Gender
Male 47
Female 68
Ethnicity
Hispanic 100
White 9
African American 2
Asian American 1
Pacific Islander 1
Native American 1
Decline to State 1
Additionally, means were calculated and an independent samples t-test was run to
determine whether there was a significant difference between males and females on any
of the factors examined in this study (see Table 2). Based on the data collected, males
and females differed significantly only in cognitive engagement (p < .05). In the interest
of accurately describing the sample population, this information is included here,
although the research question does not focus on gender differences.
Table 2
Means and Independent Samples T-Test Results for Gender

                 Male            Female          t                    Sig. (2-tailed)
                 Mean    SD      Mean    SD      Equal    Unequal     Equal    Unequal
                                                 var.     var.        var.     var.
P. Avoid         9.17    3.01    9.824   3.52    1.04     1.07        0.30     0.29
P. Approach      16.23   5.26    16.28   4.90    0.05     0.05        0.96     0.96
Mastery          21.89   2.67    22.49   3.29    1.02     1.06        0.31     0.29
Behavioral E.    19.11   4.12    20.34   4.64    1.46     1.50        0.15     0.14
Cognitive E.     17.70   5.65    20.01   5.48    2.20     2.19        0.03*    0.03
Emotional E.     19.21   6.01    20.31   7.16    0.86     0.89        0.39     0.38

*p < .05
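The gender comparison reported in Table 2 can be reproduced in outline with an independent samples t-test. The sketch below uses synthetic cognitive-engagement scores (not the study's actual responses) and SciPy's `ttest_ind`; the group sizes and the equal-variance/unequal-variance pairing mirror the two columns reported for each factor.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic cognitive-engagement scores for 47 males and 68 females
# (illustrative data; these are not the study's responses).
male = rng.normal(17.70, 5.65, 47)
female = rng.normal(20.01, 5.48, 68)

# Table 2 reports each test twice: equal variances assumed (Student's t)
# and not assumed (Welch's t).
t_eq, p_eq = stats.ttest_ind(male, female, equal_var=True)
t_w, p_w = stats.ttest_ind(male, female, equal_var=False)
print(f"equal variances:   t = {t_eq:.2f}, p = {p_eq:.3f}")
print(f"unequal variances: t = {t_w:.2f}, p = {p_w:.3f}")
```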
For this study, eight fifth-grade classrooms at three schools were included in order
to obtain a large enough sample size to determine whether there are significant
relationships between teacher evaluation practices and student motivation and
engagement. The schools chosen for this study follow a traditional calendar (vacation
time is given in the summer rather than distributed throughout the year). Additionally,
the schools sampled serve very similar populations of students, which increases the
generalizability of the findings of this study as much as possible. The schools selected
were as similar as possible on demographic factors such as size, race/ethnicity of
students and faculty, mean age of teachers, and socioeconomic status.
Measures
This study examines the relationship between teacher assessment practices,
student goal orientation, and student engagement. Three separate assessment
instruments were used to measure these three factors. These instruments were scored
using a five-point Likert-type scale (a six-point scale for the teacher survey), with 1
indicating that the statement is "not at all true" and the highest point indicating that it is
"very true" (or from "not at all" to "completely" in the case of the teacher survey).
Information on the age, gender, and ethnicity of the students was collected in order to
provide an accurate description of the sample. Student demographic information was
gleaned from student rosters obtained from teachers and/or administrators.
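Subscale scores of the kind analyzed below (sums of Likert items, with some items reverse-scored) can be computed as in this minimal sketch. The responses and the choice of reversed item are hypothetical, echoing reverse-worded items such as "I feel bored during math time."

```python
import numpy as np

# Hypothetical 5-point Likert responses (rows = students, columns = items).
# The item order and the reverse-scored item are invented for illustration.
responses = np.array([[5, 4, 2, 5],
                      [3, 3, 4, 2],
                      [4, 5, 1, 4]])
reversed_items = [2]  # zero-based index of the reverse-scored item

scored = responses.copy()
# On a 1-5 scale, reverse-scoring maps 1<->5, 2<->4, and leaves 3 alone.
scored[:, reversed_items] = 6 - scored[:, reversed_items]

# A subscale score is the sum of the (recoded) item responses.
subscale = scored.sum(axis=1)
print(subscale)  # → [18 10 18]
```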
Teacher Assessment Type Measure
The first instrument used was a 10-item scale given to the teachers to determine
what types of assessment they used in their classrooms and how much these assessments
determined how students were graded in mathematics. For this purpose, an instrument
designed by McMillan, Myran, and Workman (2002), in which teachers rate the extent to
which each item applies to their practice, was used to analyze teachers' assessment and
grading practices. Each item on the original survey uses the stem "To what extent were
final grades of students in your single class based on," with various items attached to
measure teachers' assessment practices. The scale developed by McMillan, Myran, and
Workman (2002) consists of three subsets of items: factors used in determining grades,
types of assessments used in determining grades, and cognitive level of assessments used
in determining grades. For the purposes of this study, only the subscale measuring the
types of assessments used in determining grades was utilized. This subscale consists of
10 items designed to determine what types of assessment the teacher actually uses in the
classroom (see Appendix A).
McMillan, Myran, and Workman (2002) drew the items on this survey from
previously administered questionnaires, in addition to prior research on assessment and
grading. While only the items related to the type of assessment given were used in this
study, the original study included 47 items that asked teachers about student
performance, cognitive level of the assessments, student improvement, and student
effort. The researchers asked a group of 15 teachers to review the items for clarity and
completeness, then revised the items and pilot tested them a second time with 23
teachers, who retook the survey after a four-week interval in order to determine
reliability. Following the second pilot test, items with low reliability were eliminated.
The final survey consisted of 34 items divided into three categories: factors used to
determine grades, type of assessment, and cognitive level of assessment (McMillan,
Myran, & Workman, 2002). As previously described, only the portion of the scale that
measures teacher assessment practices was used for this study. Additionally, that portion
of the scale was altered to reflect domain specificity.
For this study, there were not enough teacher participants to perform an exploratory
factor analysis (n = 8). However, reliability was calculated for the teacher assessment
measure: teacher made assessment had a reliability of .71, constructed response
assessment had a reliability of .79, and objective assessment had a reliability of .91.
Table 3 shows the altered items that were used for each type of assessment on the
teacher assessment type scale.
Table 3
Questions Included on the Teacher Assessment Type Survey

Stem: "To what extent are final grades in mathematics in your class based on ____?"
(Each item was categorized as Constructed Response, Objective, or Teacher Made.)

• oral presentations?
• performance assessments (e.g., structured teacher observations or ratings of
  performance, such as a speech or paper)?
• essay-type questions?
• pieces of work completed by teams of students?
• projects completed by individual students?
• authentic assessments (e.g., real world performance tasks)?
• objective assessments (e.g., multiple choice, matching, short answer)?
• assessments provided by publishers or supplied to the teacher (e.g., in
  instructional guides or manuals)?
• major examinations?
• assessments designed primarily by yourself?
Student Goal Orientation Measure
Additionally, students were given an assessment to measure their goal orientation.
Goal orientation was measured using portions of the Patterns of Adaptive Learning
Scales (PALS) (Midgley et al., 2000): the Performance-Approach Goal Orientation
(revised), Performance-Avoid Goal Orientation (revised), and Mastery Goal Orientation
(revised) scales.
However, the language of the PALS was altered to reflect student goal
orientation in mathematics only, instead of student goal orientation in general. While
the PALS scales for elementary students are general, the scales for high school students
are domain specific. These domain-specific scales showed equal if not greater reliability
than the general scales (Midgley et al., 2000). Therefore, the language used on the
elementary-level scales in this study matched the language of the middle and high school
scales as much as possible. The items were arranged on the survey tool in random order.
The revised version of the PALS student goal orientation scale consisted of 14 items
using domain-specific language (see Appendix B). Each item was categorized as
indicating mastery goal orientation, performance approach goal orientation, or
performance avoid goal orientation.
The PALS scales have been widely used and shown to have high validity and high
reliability, according to the developers of the PALS instrument (Midgley et al., 2000).
The PALS has been implemented by its designers in a variety of schools in the United
States. The PALS has been given to diverse populations of students, and has been shown
to be reliable with all populations assessed. When tested by the creators of the PALS, the
subscales used to measure performance approach orientation, performance avoid
orientation, and mastery orientation had Cronbach's alphas of .89, .74, and .85,
respectively, indicating that all three scales have high reliability. In order to determine if
these scales were valid, the creators of the PALS performed confirmatory factor analysis
on the student goal orientation scale. Goodness of fit (GFI = .97) indicates that this scale
also has high validity (Midgley et al., 2000).
In order to measure student goal orientation in mathematics, a portion of the
Patterns of Adaptive Learning Scales was included in the student survey instrument
(Midgley et al., 2000). The PALS includes items designed to measure performance
approach orientation, performance avoid orientation, and mastery orientation. However,
the original items were altered to reflect domain specificity. For example, an item that
originally read "One of my goals is to show others that I'm good at my class work" was
altered to read "One of my goals is to show others that I'm good at math." Examples of
items used in the student survey include "One of my goals is to keep others from thinking
I'm not smart in math" and "It's important to me that I don't look stupid in math" to
measure performance avoid orientation; "One of my goals is to look smart in math
compared to the other students in my class" and "One of my goals is to show others that
math is easy for me" to measure performance approach orientation; and "One of my
goals is to master a lot of new math skills this year" and "It's important to me that I
thoroughly understand my math class work" to measure mastery orientation.
Because the items were altered to be domain specific, exploratory factor analysis
was used to determine if the items loaded on each of the three types of student goal
orientation: performance approach, performance avoid, and mastery. Table 4 shows the
results of the factor analysis.
This factor analysis showed that each item loaded on the factor for which it was
intended. However, for the three items that did not load as strongly ("It's important to
me that my teacher doesn't think that I know less than others in math," "One of my goals
during math time is to avoid looking like I have trouble doing the math work," and "It is
important to me that I don't look stupid in math"), reliability was analyzed to determine
what effect removing those items would have.
All three of these items load on the performance avoid factor, which has a
Cronbach's alpha of .55 with all items included. If the item "It's important to me that my
teacher doesn't think that I know less than others in math" is removed, Cronbach's alpha
becomes .62, so this item was removed from the analyses. If the item "One of my goals
during math time is to avoid looking like I have trouble doing the math work" is
removed, Cronbach's alpha becomes .46, and if the item "It is important to me that I
don't look stupid in math" is removed, Cronbach's alpha becomes .44; in both cases, the
reliability of the instrument would decrease. Because removing either of these two items
would decrease reliability, both were retained in the instrument. With the first item
removed, all three goal orientation factors were found to have acceptable reliability
(performance avoid α = .62, performance approach α = .81, mastery α = .76).
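The reliability analysis described above (Cronbach's alpha, including the "alpha if item deleted" check) can be sketched as follows. The responses are synthetic and the three-item subscale is purely illustrative, not the actual PALS data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Synthetic 5-point Likert responses to a hypothetical 3-item subscale:
# a shared latent trait plus item-level noise (not the actual survey data).
latent = rng.normal(0, 1, (115, 1))
responses = np.clip(np.round(3 + latent + rng.normal(0, 1, (115, 3))), 1, 5)

alpha_all = cronbach_alpha(responses)
# "Alpha if item deleted": recompute alpha with each item removed in turn,
# mirroring the item-removal checks described above.
alpha_without = [cronbach_alpha(np.delete(responses, j, axis=1))
                 for j in range(responses.shape[1])]
print(round(alpha_all, 2), [round(a, 2) for a in alpha_without])
```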
Table 4
Factor Analysis for Student Goal Orientation Survey

Item                                                             Factor        Loading
It's important to me that my teacher doesn't think that I
  know less than others in math.                                 P. Avoid      0.216
One of my goals during math time is to avoid looking like I
  have trouble doing the math work.                              P. Avoid      0.253
It is important to me that I don't look stupid in math.          P. Avoid      0.217
One of my goals is to keep others from thinking I'm not
  smart in math.                                                 P. Avoid      0.818
It's important to me that other students in my class think I
  am good at math.                                               P. Approach   0.554
One of my goals is to look smart in math compared to the
  other students in my class.                                    P. Approach   0.662
One of my goals is to show others that I'm good at math.         P. Approach   0.649
One of my goals is to show others that math is easy for me.      P. Approach   0.664
It's important to me that I look smart in math compared to
  others in my class.                                            P. Approach   0.821
It is important to me that I learn a lot of new math
  concepts this year.                                            Mastery       0.718
One of my goals is to master a lot of new math skills this
  year.                                                          Mastery       0.719
One of my goals in math is to learn as much as I can.            Mastery       0.551
It's important to me that I thoroughly understand my math
  class work.                                                    Mastery       0.548
It is important to me that I improve my math skills this
  year.                                                          Mastery       0.615

Note: Each loading is shown on the factor the item was intended to measure.
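An exploratory factor analysis of the kind summarized in Table 4 can be sketched with scikit-learn. The 14-item, three-factor data below are simulated, and the varimax rotation is an assumption on my part; the study does not state which rotation, if any, was used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Simulate 115 students answering 14 items driven by 3 latent factors,
# standing in for the adapted goal-orientation survey (illustrative only).
true_loadings = np.zeros((14, 3))
true_loadings[0:4, 0] = 0.8    # hypothetical performance-avoid items
true_loadings[4:9, 1] = 0.8    # hypothetical performance-approach items
true_loadings[9:14, 2] = 0.8   # hypothetical mastery items
scores = rng.normal(0, 1, (115, 3))
X = scores @ true_loadings.T + rng.normal(0, 0.5, (115, 14))

# Three-factor solution with varimax rotation (rotation choice assumed).
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
loadings = fa.components_.T    # rows = items, columns = factors
print(np.round(loadings, 2))
```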
Student Engagement Measure
Finally, students were given an assessment to determine their level of engagement
in mathematics. Fredricks, Blumenfeld, Friedel, and Paris (2005) created an assessment
to measure students’ behavioral, emotional, and cognitive engagement. This scale was
modified for use in this study. As with the PALS, the language of this scale was altered
to measure only student engagement in mathematics, as opposed to student engagement
in general.
The scale developed by Fredricks, Blumenfeld, Friedel, and Paris (2005) uses a
series of 19 questions to measure a student's behavioral, cognitive, and emotional
engagement. The language of this scale, while directed toward the age group this
researcher studied, does not reflect engagement within a particular domain. Because the
scale measures engagement in school in general, the language of its items was changed
to reflect engagement specifically in mathematics.
Additionally, one item was removed because of its lack of relevance to
determining engagement in mathematics. Responses to the item "I try to watch TV
shows about the things we do in school" would not necessarily reflect a student's lack of
cognitive engagement with mathematics, but rather a lack of mathematics-related
television programming that relates to what the student is learning in school. While it
may be fairly simple to find television programming related to American history or
literary works, mathematics-related programming is much scarcer. Therefore this item,
which has limited relevance to engagement in mathematics and cannot be altered to fit
the purposes of this study, was removed from the questionnaire.
In the Feelings About School Inventory (Fredricks et al., 2005), individual items
represent a student's behavioral, cognitive, and emotional engagement. In the revised
form of this scale used in this study, there are five items that measure behavioral
engagement, six items that measure emotional engagement, and seven items that
measure cognitive engagement. The revised version of the Feelings About School
Inventory consisted of a total of 18 items using domain-specific language (see Appendix
B). Each item was categorized as indicating behavioral, cognitive, or emotional
engagement.
The scale was found to have good face validity and good predictive validity
(Fredricks et al., 2005). This scale was used to conduct a study among a diverse
population of inner-city elementary school students, and was found to be reliable among
that sample of students (Fredricks et al., 2005). Because the present study also took
place with elementary school students from a variety of backgrounds, it was reasonable
to expect that this scale would be reliable for the purposes of this study. In order to
determine the validity and
reliability of this measure, Fredricks et al. (2005) administered the Feelings About School
Inventory in two waves, collecting reliability and validity data with each wave. The
creators of this instrument examined the distribution of the responses to the instruments,
as well as testing the internal consistency of the instruments using Cronbach’s alpha. The
results of these tests indicated that all three measures had high reliability (behavioral α =
.77, cognitive α = .82, emotional α = .86). Additionally, reliability for fifth grade was
found to be .73. The authors also conducted exploratory factor analysis and ran
zero-order correlations between teacher assessments of student behavior and student
reports of engagement (r = .29 to .43) to determine whether the Feelings About School
Inventory was valid.
In order to measure student engagement in mathematics, students were given a
survey instrument which included, in part, the Feelings About School Inventory
(Fredricks et al., 2005). The Feelings About School Inventory was modified to be
domain specific, in order to reflect student engagement only in mathematics. For
example, an item that originally read “I feel excited by my work at school” was altered to
read “I feel excited by my work during math time.”
Because the original items were altered to be domain specific, exploratory factor
analysis was used to determine if the items loaded on each of the three types of student
engagement: emotional, behavioral, and cognitive (see Table 5). Items measuring
emotional engagement included “I like math time” and “I feel happy during math time.”
Items that measured behavioral engagement included “I follow the rules during math
time” and “I pay attention during math time.” Items that measured cognitive engagement
included “I read or use extra math books to learn more about the things we do in school”
and “I talk with people outside of school about what I am learning in math.”
The item "I complete my math work on time" loads on the behavioral engagement
factor. The behavioral engagement items had a Cronbach's alpha of .77 with all items
included. If the item "I complete my math work on time" is removed, Cronbach's alpha
becomes .79, which is not a large enough increase to justify removing the item from the
questionnaire. The item "I check my math schoolwork for mistakes" loads on the
cognitive engagement factor. The cognitive engagement items had a Cronbach's alpha
of .75 with all items included. If the item "I check my math schoolwork for mistakes" is
removed, Cronbach's alpha becomes .73, indicating that the reliability of the instrument
would decrease slightly upon removal of this item. Because removing either of these
items would not meaningfully increase reliability, both items were retained in the
instrument. All three factors were found to have high reliability (emotional α = .87,
behavioral α = .77, cognitive α = .75).
Table 5
Factor Analysis for Student Engagement Survey

Item                                                             Factor       Loading
I like math time.                                                Emotional    0.668
I feel excited by my work during math time.                      Emotional    0.576
My classroom is a fun place to be during math time.              Emotional    0.6
I am interested in the work in math.                             Emotional    0.53
I feel happy during math time.                                   Emotional    0.761
I feel bored during math time. (reversed)                        Emotional    0.453
I follow the rules during math time.                             Behavioral   0.611
I get in trouble during math time. (reversed)                    Behavioral   0.729
When it is math time, I just act as if I am working.
  (reversed)                                                     Behavioral   0.537
I pay attention during math time.                                Behavioral   0.713
I complete my math work on time.                                 Behavioral   0.141
I check my math schoolwork for mistakes.                         Cognitive    0.267
When I read a math problem, I ask myself questions or use
  strategies to make sure I understand what it is about.         Cognitive    0.599
I read or use extra math books to learn more about the
  things we do in school.                                        Cognitive    0.361
If I don't know what a math problem means, I do something
  to figure it out.                                              Cognitive    0.631
If I don't understand a math problem that I read, I go back
  and read it over again.                                        Cognitive    0.589
I talk with people outside of school about what I am
  learning in math.                                              Cognitive    0.372
Procedure
This study was implemented in eight fifth-grade classrooms in three schools,
comprising 115 students and 8 teachers. The instruments were administered
approximately halfway through the school year, in order to give students time to fully
experience their teacher's assessment methods. The student instruments were given in
the individual classrooms and were all administered by the same researcher. Students
were given one test booklet that included both the instrument measuring student goal
orientation and the instrument measuring student engagement.
In order to recruit students for this study, the researcher visited each school site
one week prior to administration of the surveys to distribute parental consent forms (see
Appendix C) and explain the purpose and methods of the study to the fifth-graders at
these school sites. Additionally, students were informed that if they brought back the
parental consent form (whether it was signed or not), they would be entered in a drawing
to win an iPod Shuffle ($49 value). Approximately 240 parental consent forms were
distributed at that time and 115 were returned (signed) one week later, when the student
and teacher assessments were administered. Teachers were asked if there were any
students who did not have the English language proficiency to understand the items on
the survey (so that their responses could be excluded), but no students were identified as
such. Therefore, all 115 student surveys were included in the data analysis.
Parents (through consent forms), teachers, and students were informed that the
responses on the surveys were completely confidential, and that no one at home or at the
school site would see the answers. This was intended to increase the likelihood that
participants would be honest and accurate in their answers, because participants knew
that their results would never be identified with them personally.
Participants were also informed that participation was totally voluntary and that
they could skip any questions they did not want to answer. They were informed that
there were no risks or benefits involved with participating in this study, and that the data
would be stored for 3 years and then destroyed.
On the day that student surveys were given, students were provided with a student
assent form (see Appendix F). Students who opted to participate in the survey were then
asked to sign the assent form, and the teacher was asked to leave the classroom. Students
were then given the survey, and students who chose not to participate read quietly while
the other students worked. The survey took approximately 20 minutes for students to
complete.
Additionally, the assessment booklets were labeled with a number that indicated
to the researcher which student and which teacher each booklet belonged to. Labeling
the booklets with only a number, rather than the student's name, provided students with
an increased sense of privacy and anonymity. This was intended to increase their
willingness to respond honestly and accurately on the measures, because they knew that
their responses could not be linked back to them by anyone except the researcher.
Teachers were not present in the classroom when student assessment instruments
were administered. The teachers were assessed separately for their evaluation practices
before or after school by the same researcher. Teachers were informed that they had been
assigned a particular letter, which would be used to identify their assessment as well as
the assessments given to the students in their class.
Teachers were provided with an information sheet (see Appendix E) on the day
the teacher surveys were given, and were informed that by choosing to take the surveys
they were providing their consent to participate. Teachers were then given the teacher
surveys, which took approximately 15 minutes to complete.
Data Analysis
The data gathered from these assessments was analyzed in several ways in order
to gain a clear picture of the relationships among the variables: student goal orientation,
student engagement, and teacher assessment practices. First, descriptive analysis was
used to describe the sample population of students in terms of gender, age, and ethnicity.
Reliability data was calculated for the teacher survey based on the previous research
(McMillan, Myran, & Workman, 2002). Factor analysis was used to determine whether
the student surveys showed three engagement factors and three goal orientation factors.
Additionally, Pearson's correlations were used to examine whether there were significant
correlations between teacher assessment type and student goal orientation and between
teacher assessment type and student engagement. Finally, stepwise regression was used
to determine whether assessment type was a predictor of student goal orientation or
engagement.
CHAPTER 4
Results
The following chapter presents the results of the research described in the
previous chapter. This chapter will answer the research question presented in Chapter 1:
• What is the relationship between teachers’ self-reported assessment practices,
student goal orientation, and student engagement in fifth grade mathematics?
Correlation
Data was collected using a teacher survey of assessment type and a student survey
of goal orientation and engagement to determine if there is any relationship between the
type of assessment that a teacher chooses to use in the classroom and the goal orientation
and engagement of his or her students. To that end, a Pearson correlation matrix was
created using the data collected from the student engagement, student goal orientation,
and teacher assessment type surveys (see Table 6). These results show no significant
correlations between any of the assessment types and either student goal orientation or
student engagement.
Table 6
Correlation between Goal Orientation, Engagement, and Assessment Type

                     Mean    SD      1       2       3       4       5       6       7      8     9
1. P. Avoid          9.56   3.32     -
2. Mastery          22.32   3.11   -0.05     -
3. P. Approach      16.26   5.02    .495**  0.131    -
4. Emotional Eng.   19.86   6.71   -0.05    .477**  .207*    -
5. Behavioral Eng.  19.83   4.46    0.058   .387**  0.11    .672**   -
6. Cognitive Eng.   21.29   6.42    0.011   .420**  0.12    .692**  .549**   -
7. Con. Response    14.63   4.10    0.067   0.021   0.01    0.115   0.159   0.09    -
8. Teacher Made     11.75   3.24    0.172  -0.1     0.04    0.02    0.151   0.01   .810** -
9. Objective         9.00   2.56    0.024   0.133   0.08    0.024   0.046   0.04  -0.07   0.12  -

**. Correlation is significant at the 0.01 level (2-tailed).
*. Correlation is significant at the 0.05 level (2-tailed).
Note: N = 115
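A correlation matrix like Table 6 can be reproduced from raw survey scores with a few lines of code. This is an illustrative sketch, not the study's actual procedure; the column names are placeholders, and the * / ** significance flags would come from comparing each r against the t distribution:

```python
import numpy as np

def pearson_r_matrix(columns: dict) -> dict:
    """Lower-triangle Pearson correlations for a set of named score
    columns, as reported in a matrix like Table 6."""
    names = list(columns)
    out = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            x = np.asarray(columns[a], dtype=float)
            y = np.asarray(columns[b], dtype=float)
            out[(a, b)] = float(np.corrcoef(x, y)[0, 1])
    return out

# Significance of each r is assessed via t = r * sqrt((n - 2) / (1 - r**2)),
# referred to the two-tailed t distribution with n - 2 degrees of freedom.
```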
These results show that there was no significant relationship between any of the
types of assessment and student goal orientation. However, to further explore the
relationship between teacher assessment practices and student goal orientation, stepwise
regression was used. Backward entry was used instead of the forward method in order to
decrease the likelihood of a Type II error caused by a suppression effect. This procedure
involved testing all of the variables for significance and removing the nonsignificant
variables one by one. All regressions examined the three assessment types and their
influence on performance avoid, performance approach, and mastery orientations.
Table 7 shows the stepwise regression for performance avoid orientation. No type of
assessment significantly predicted performance avoid orientation.
Table 7
Stepwise Regression for Performance Avoid Orientation

                                 Unstandardized Coefficients   Standardized Coefficients
Model                            B        Std. Error           Beta
Model 1   (Constant)             6.932    1.9
          Constructed Response  -0.175    0.129                -0.215
          Teacher Made           0.375    0.169                 0.352
          Objective              0.079    0.145                 0.051
Model 2   (Constant)             7.722    1.234
          Constructed Response  -0.172    0.128                -0.211
          Teacher Made           0.365    0.168                 0.343
Model 3   (Constant)             7.383    1.212
          Teacher Made           0.183    0.099                 0.172

Note: R² = .047 for Step 1, R² = .045 for Step 2, R² = .029 for Step 3.
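The backward-entry procedure described above (fit all predictors, then repeatedly drop the weakest) can be sketched with ordinary least squares. This is an illustration, not the software the study used; `t_crit` is an assumed two-tailed .05 cutoff appropriate for roughly 115 cases:

```python
import numpy as np

def backward_eliminate(X, y, names, t_crit=1.98):
    """Backward stepwise OLS: drop the predictor with the smallest |t|
    until every remaining predictor's |t| meets t_crit."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    keep = list(range(X.shape[1]))
    while keep:
        Z = np.column_stack([np.ones(len(y)), X[:, keep]])  # add intercept
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        df = len(y) - Z.shape[1]
        s2 = resid @ resid / df                             # residual variance
        cov = s2 * np.linalg.inv(Z.T @ Z)
        t = beta[1:] / np.sqrt(np.diag(cov)[1:])            # skip intercept
        weakest = int(np.argmin(np.abs(t)))
        if abs(t[weakest]) >= t_crit:
            break                                           # all remaining significant
        keep.pop(weakest)                                   # remove one predictor
    return [names[j] for j in keep]
```

Each pass corresponds to one "Model" row in Tables 7 through 12: the full model first, then successively reduced models.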
Stepwise regression with backward entry was also used to examine the
relationship between mastery orientation and teacher assessment type. Table 8 shows the
stepwise regression for mastery orientation. This regression shows that teacher made
tests significantly predict mastery orientation, though they are not a strong indicator of
mastery orientation among students.
Table 8
Stepwise Regression for Mastery Orientation

                                 Unstandardized Coefficients   Standardized Coefficients
Model                            B        Std. Error           Beta
Model 1   (Constant)             21.343   1.776
          Constructed Response    0.21    0.12                  0.276
          Objective               0.169   0.135                 0.116
          Teacher Made           -0.305   0.158                -0.305
Model 2   (Constant)             23.023   1.16
          Constructed Response    0.217   0.121                 0.285
          Teacher Made           -0.326   0.158                -.326*

Note: R² = .05 for Step 1, R² = .04 for Step 2. * p < .05
Additionally, stepwise regression was used to examine the relationship between
assessment type and performance approach orientation. Table 9 shows the results of this
stepwise regression. The results show that there was no significant relationship between
assessment type and performance approach orientation.
Table 9
Stepwise Regression for Performance Approach Orientation

                                 Unstandardized Coefficients   Standardized Coefficients
Model                            B        Std. Error           Beta
Model 1   (Constant)             13.645   2.93
          Constructed Response   -0.085   0.199                -0.069
          Objective               0.205   0.223                 0.087
          Teacher Made            0.167   0.261                 0.104
Model 2   (Constant)             13.523   2.905
          Objective               0.2     0.222                 0.085
          Teacher Made            0.077   0.153                 0.048
Model 3   (Constant)             14.557   2.053
          Objective               0.187   0.22                  0.08
Model 4   (Constant)             16.261   0.469

Note: R² = .010 for Step 1, R² = .009 for Step 2, R² = .006 for Step 3, R² = .000 for Step 4.
The results of the correlation also show that there was no significant relationship
between assessment type and student engagement. To further explore the relationship
between teacher assessment practices and student engagement, stepwise regression was
used. Table 10 shows the stepwise regression for emotional engagement. No type of
assessment significantly predicted emotional engagement.
Table 10
Stepwise Regression for Emotional Engagement

                                 Unstandardized Coefficients   Standardized Coefficients
Model                            B        Std. Error           Beta
Model 1   (Constant)             17.851   3.874
          Constructed Response    0.468   0.263                 0.285
          Teacher Made           -0.448   0.345                -0.208
          Objective               0.057   0.295                 0.018
Model 2   (Constant)             18.418   2.513
          Constructed Response    0.47    0.261                 0.286
          Teacher Made           -0.455   0.342                -0.212
Model 3   (Constant)             17.114   2.323
          Teacher Made            0.189   0.154                 0.115
Model 4   (Constant)             19.861   0.626

Note: R² = .029 for Step 1, R² = .029 for Step 2, R² = .013 for Step 3, R² = .000 for Step 4.
Stepwise regression was also used to examine the relationship between behavioral
engagement and teacher assessment type. Table 11 shows the stepwise regression for
behavioral engagement. This regression shows that assessment type does not
significantly predict behavioral engagement.
Table 11
Stepwise Regression for Behavioral Engagement

                                 Unstandardized Coefficients   Standardized Coefficients
Model                            B        Std. Error           Beta
Model 1   (Constant)             15.767   2.573
          Constructed Response    0.11    0.174                 0.101
          Teacher Made            0.109   0.229                 0.076
          Objective               0.128   0.196                 0.061
Model 2   (Constant)             16.176   2.417
          Constructed Response    0.178   0.102                 0.163
          Objective               0.118   0.194                 0.057
Model 3   (Constant)             17.305   1.534
          Constructed Response    0.174   0.101                 0.159

Note: R² = .030 for Step 1, R² = .028 for Step 2, R² = .025 for Step 3.
The correlation matrix also shows that there was no significant relationship
between assessment type and cognitive engagement. Stepwise regression was used to
further examine the relationship between teacher assessment practices and student
engagement. Table 12 shows the stepwise regression for cognitive engagement. No type
of assessment significantly predicted cognitive engagement.
Table 12
Stepwise Regression for Cognitive Engagement

                                 Unstandardized Coefficients   Standardized Coefficients
Model                            B        Std. Error           Beta
Model 1   (Constant)             19.507   3.719
          Constructed Response    0.378   0.252                 0.241
          Teacher Made           -0.382   0.331                -0.185
          Objective               0.088   0.283                 0.029
Model 2   (Constant)             20.383   2.414
          Constructed Response    0.382   0.251                 0.243
          Teacher Made           -0.393   0.328                -0.191
Model 3   (Constant)             19.259   2.227
          Constructed Response    0.139   0.147                 0.089
Model 4   (Constant)             21.287   0.598

Note: R² = .021 for Step 1, R² = .020 for Step 2, R² = .008 for Step 3, R² = .000 for Step 4.
Summary
This study used three different surveys to examine the nature of the relationship
between teacher assessment type and student goal orientation and engagement. One
survey was given to teachers, and consisted of a modified form of a questionnaire
(McMillan, Myran, & Workman, 2002) that measured whether teachers used constructed
response assessment, teacher made assessment, or objective assessment. Students were
given a two-part survey booklet that consisted of a modified form of the Patterns of
Adaptive Learning Survey (Midgley et al., 2000) which examined performance approach,
performance avoid, and mastery goal orientations and an altered version of the Feelings
About School Inventory (Fredricks et al., 2005) that measured student cognitive,
emotional, and behavioral engagement. Exploratory factor analysis and reliability data
were used to ensure that the questions aligned with the factors being studied and that the
instruments were reliable. A Pearson’s correlation matrix and stepwise regression were
used to examine the relationships between the variables. The results of the analyses
showed that there was no significant relationship found between teacher assessment type
and student goal orientation or engagement.
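The exploratory factor analysis mentioned above was presumably performed in a statistics package. A minimal check of how many factors a set of survey items supports is the Kaiser criterion (count the eigenvalues of the item correlation matrix greater than 1), sketched here as a rough stand-in for full factor analysis:

```python
import numpy as np

def kaiser_factor_count(scores) -> int:
    """Number of factors with eigenvalue > 1 (Kaiser criterion),
    computed from the item correlation matrix."""
    corr = np.corrcoef(np.asarray(scores, dtype=float), rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return int((eigvals > 1.0).sum())
```

For the student booklet, a count of three on each subscale set would be consistent with the three goal orientation factors and three engagement factors the instruments were designed to measure.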
CHAPTER 5
Discussion
Synthesizing the Results
This study set out to determine if there was a relationship between the type of
assessment that teachers use in the elementary classroom, student goal orientation, and
student engagement. The prior research suggested that objective assessment would foster
performance orientations (both approach and avoid) among students, while constructed
response would foster a mastery orientation (Elliot & Church, 1997; Dweck, 1986;
Pintrich, 2000). Additionally, prior research suggested that objective assessment would
be related to behavioral engagement, and that constructed response would be related to
cognitive and emotional engagement (Finn, 1989; Fredricks, Blumenfeld, & Paris, 2004;
Marks, 2000; Willingham, Pollack & Lewis, 2002). It was difficult, however, to
hypothesize what type of relationship (if any) teacher made assessment would have with
student goal orientation and student engagement, since teacher made assessment could
include either constructed response assessment or objective assessment.
In order to measure the relationship between these factors, assessments were
given to students (N = 115) and their teachers (n = 8) in eight fifth grade classrooms in
Southern California. Teachers were given a questionnaire created by McMillan, Myran,
and Workman (2002) that was designed to measure what type of assessment teachers use
in the classroom: constructed response, objective, or teacher made. Students were given
a questionnaire that combined two surveys. The first part of the survey was taken from
the Patterns of Adaptive Learning Survey (Midgley et al., 2000), and was designed to
measure performance approach, performance avoid, and mastery goal orientations. The
second part of the survey was taken from the Feelings About School Inventory (Fredricks
et al., 2005), which was designed to measure behavioral, cognitive, and emotional
engagement. The language of each of these surveys was altered to reflect domain
specificity in mathematics, and the surveys were combined into one survey booklet with
two sections. Additionally, age, gender, and ethnicity data were collected on the students
in order to accurately describe the sample population.
The student surveys were analyzed using exploratory factor analysis and were
analyzed for reliability. The teacher survey, because of the small sample size and the
support in previous research, was only analyzed for reliability. Additionally, a Pearson’s
correlation matrix was constructed to determine if there was any relationship between the
type of assessment used by the teacher and the goal orientation and engagement of the
student. The results of this analysis showed that there was no significant relationship
between teacher assessment type and either student goal orientation or engagement.
In the literature review portion of this study, it was surmised based on previous
research that there would be relationships found between the type of assessment given in
the classroom and the goal orientation and motivation of the students in that classroom.
One hypothesis was that students would exhibit mastery orientations when teachers used
constructed response assessment. Increasingly popular in recent years, constructed
assessment allows students the freedom to answer a question or solve a problem in their
own way, through open ended questioning, projects, or performance assessment
(Woolfolk, 2004; Pintrich & Schunk, 2002). According to the findings of previous
research, it would make sense that assessments such as these that offer students more
choice and autonomy and potentially have a higher level of interest would be related to
higher mastery orientation in students (Elliot & Church, 1997; Dweck, 1986; Pintrich,
2000). However, it should be considered that the impact of high stakes testing may mask
any effects of more autonomous, higher interest assessment on student motivation.
Several studies have shown that high stakes testing affects student motivation by
increasing the use of practice tests in the classroom, increasing test anxiety, increasing
pressure from parents and teachers to perform, and encouraging teaching to the test
(Abrams, Pedulla, & Madaus, 2003; Benmansour, 1999; Brookhart & DeVoge,
1999; Davies & Brember, 1998). The results of this study showed no relationship
between constructed response assessment and mastery orientation.
Additionally, while no research was found directly examining the effects of
objective assessment, practical experience suggested that students whose teachers
routinely utilize objective assessment would be more inclined toward performance
orientations (both approach and avoid). The use of number or letter scores on objective
assessments would seem to encourage students to focus on the score received rather than
on the knowledge mastered for any given assessment. It would follow that by using
objective assessment, teachers would be emphasizing the achievement of high scores, and
not necessarily mastery. Additionally, it would seem to follow that objective assessments
(typically multiple choice or short-answer pencil and paper tests) would be of less interest
to students, and would be related to lower levels of cognitive and emotional engagement.
Behavioral engagement, on the other hand, which is defined by participation in an
activity or assessment, doesn’t necessarily involve interest. Therefore, one would think
that objective assessment could be related to behavioral engagement, since it asks the
student to simply complete a task, typically without providing options for creativity or
personal interpretation. However, the results of this study showed no relationship
between objective assessment and student goal orientation or student engagement.
Finally, teacher made assessments were examined in this study. There was
strikingly little research on teacher made assessments, and what research did exist was
outdated. Even this outdated research did not examine what impact teacher made
assessments might have on student motivation or engagement. Teacher made
assessments can be objective or constructed response, and can include questions or tasks
that require either simple fact retrieval or higher order thinking. Teacher made
assessments are the most commonly used form of classroom assessment, and vary widely
in their structure and purpose (Boothroyd et al., 1992; Frey, et al., 2005; Marso & Pigge,
1988). Based on these facts, it was difficult to predict what kind of effect teacher made
assessment would have on student goal orientation and engagement. In the end, teacher
made assessment showed no significant relationship with student goal orientation or
engagement.
There are several possible reasons why no significant relationship was found
among the variables. One possibility is that the
students were not able to accurately comprehend the questions being asked of them on
the student survey. While the PALS (Midgley et al., 2000) was originally tested with
elementary age students, the population with which this scale was tested was likely quite
different from the sample population of this study. Differences in characteristics of the
population could have easily affected the results of this study.
Additionally, the Feelings About School Inventory (Fredricks et al., 2005) was
never previously tested with elementary age students. While the language of the Feelings
About School Inventory was very similar to the language that was used on the PALS
(Midgley et al., 2000), it is possible that the language on the engagement portion of the
student survey was too difficult for the students who participated in this survey to
comprehend.
It is also possible that teachers were not reporting their actual assessment
practices accurately. Teachers may have had different ideas about what actually
constitutes assessment in the classroom. Or, there could have been differences in how
often each teacher assessed learning in mathematics. For example, if two teachers both
indicate that they use objective assessment in the classroom, but one teacher assesses
once a week while the other assesses every two months, there could be a vast difference
in how this assessment could affect students.
The results of this study could have also been affected by the prevalence of
standardized testing in the classroom. Students could be so greatly affected by this
standardized testing that any effects of assessments that teachers are using could be
masked. Also, there could be large differences in how teachers prepare for and
emphasize those standardized tests. The weight that teachers put on standardized testing
might have affected student goal orientation or engagement.
Implications
This study, while modest in scale, provides a small step toward understanding
how the assessments that teachers choose to give in the classroom may affect students. In
light of the findings of this study, what educators should not do is stop thinking about the
impact that their assessments may have on students. Instead, teachers and researchers
need to examine the many other ways in which assessments could affect students. There
are many other possible factors related to assessment, goal orientation, and engagement
that could be influencing students.
Until research is completed that can further support the conclusions of this study,
it is important to continue to improve classroom assessment and provide teachers with the
necessary training and time to develop quality assessment tools and interpret the data
collected from those tools in meaningful ways. Therefore, educators should be focusing
on how to create rigorous assessments that examine student learning on a variety of levels
in reliable ways. In addition, it would be useful to provide professional development to
teachers on how to correctly interpret the results of teacher made tests.
Interpretation, in fact, is a major issue to consider when discussing classroom
assessment. The same assessment could have a vastly different impact in different
classrooms based on the way in which the teacher interprets the results and uses those
results in the classroom. For example, if two teachers each give an identical multiple
choice test, one may choose to post the results with student names on the wall of the
classroom to encourage students to score higher on the next test. The other teacher may
choose to conference with students about their scores and what strategies they used that
were successful, and what strategies they could use in the future. Obviously, these two
teachers would be fostering a very different type of classroom environment, even though
they were using the same type of test. They may also be fostering a different type of
goal orientation and engagement among their students.
Because this study showed no significant relationship between teacher assessment
type and student goal orientation and engagement, it would be easy to say that it does not
matter what type of assessment teachers use in the classroom; students will have their
individual goal orientation and engagement regardless of what type of assessment is used.
However, this study is only a snapshot of how assessment may be affecting students in
the elementary mathematics classroom. While it was surprising that there were no
significant relationships found between these factors, this study was by no means large
enough or in-depth enough to completely discount the possibility that assessment type
can affect the goal orientation and engagement of students. What is important is that
teachers examine the types of tests they create for use in the classroom to determine if the
tests somehow encourage students to do the bare minimum or to just not fail. Could it be
that these assessments are given with a strong emphasis on a minimum passing score?
Could teachers be emphasizing passing the test instead of mastering the knowledge? The
scope of this study doesn’t provide answers for these questions, but these questions can
help teachers guide improvement of classroom assessment.
Future Research
The results of this study showed no significant relationship between teacher
assessment type and student goal orientation and engagement. In order to determine why
no significant relationship was found, it would be useful to determine whether what
teachers report about their assessment and evaluation practices actually carries through to
their practice in the classroom. An important follow-up to this study would therefore be
to observe the assessment practices that actually take place in the classroom and compare
them to what teachers self-report.
In light of the results of this study, it would also be helpful to examine how
assessments are interpreted and used in the classroom. This study asked teachers to
indicate what type of assessment they use, but it did not examine how they interpret the
results or what they do with them, which could have had a great effect on the relationship
between teacher assessment type and student orientation or engagement. Examining how
assessment results are used in the classroom could be of great benefit to educators
looking to provide the highest quality assessment possible in the classroom. For
example, two teachers could use objective assessment but utilize the results in very
different ways. It would be extremely useful to determine what teachers do with results,
and if (and how) they share those results with students.
Conclusion
The implications discussed above suggest that teacher made assessment and its
relationship with student goal orientation need to be examined more closely. Because of
the dearth of research on objective and teacher made assessment,
much more study is needed in order to understand what types of assessments teachers are
creating and how those assessments are being used in the classroom. Without further
research, it is difficult to determine how assessment impacts students in the classroom
and the ways that assessment can be improved.
In the future, it would be useful to replicate this study on a much larger scale in
order to determine if any patterns emerge in student goal orientation and engagement
with a larger sample of students and teachers. Conducting a study with a much larger
population of teachers would be especially beneficial, as the impact of this study was
limited by the small number of teachers involved. It would also be useful to examine
how teachers interpret and use assessment in the classroom, and how they construct
those assessments.
In this environment of constant high stakes testing and pressure to perform, it is
useful to examine how the assessment practices of teachers are affecting their students.
This study examined only one way in which assessment in the classroom affects students.
The push from the U.S. government in recent years has been for objective assessment,
high stakes testing, and hard data reporting. However, it is important to realize that
assessment happens in the classroom in a multitude of ways, and that each assessment
could potentially have an impact on not only a student’s goal orientation or engagement,
but also on a variety of other motivational factors and student achievement. Because
assessment is perhaps more prevalent in U.S. public schools than it has ever been, it is
vital that research is conducted to explore the possible impact that this assessment has on
students. By exploring and understanding assessment in the classroom, teachers and
administrators can make informed decisions about the best course of action for producing
high achieving, confident, and competent students.
REFERENCES
Abrams, L. M., Pedulla, J. J., & Madaus, G. F. (2003). Views from the classroom:
Teachers' opinions of statewide testing programs. Theory into Practice, 42(1), 18-
29.
Airasian, P. W., & Madaus, G. F. (1972). Functional types of student evaluation.
Measurement and Evaluation in Guidance, 4(4), 221-233.
Ames, C. (1992). Classrooms: Goals, structures, and student motivation. Journal of
Educational Psychology, 84(3), 261-271.
Ames, C., & Archer, J. (1988). Achievement goals in the classroom: Students' learning
strategies and motivation processes. Journal of Educational Psychology, 80(3),
260-267.
Barron, K. E., Harackiewicz, J. M., & Tauer, J. M. (2001, April). The interplay of ability
and motivational variables over time: A 5 year longitudinal study of predicting
college student success. Paper presented at the Annual Meeting of the American
Educational Research Association, Seattle, WA.
Benmansour, N. (1999). Motivational orientations, self-efficacy, anxiety and strategy use
in learning high school mathematics in Morocco. Mediterranean Journal of
Educational Studies, 4, 1–15.
Boothroyd, R. A., McMorris, R. F., & Pruzek, R. M. (1992). What do teachers know
about measurement and how did they find out? Paper presented at the Annual
Meeting of the National Council on Measurement in Education, San Francisco,
CA, ERIC Document Reproduction Service No. 351 309.
Brookhart, S. M. (1994). Teachers' grading: Practice and theory. Applied Measurement in
Education, 7(4), 279-301.
Brookhart, S. M., & DeVoge, J. G. (1999). Testing a theory about the role of classroom
assessment in student motivation and achievement. Applied Measurement in
Education, 12(4), 409-425.
Brookhart, S. M., & Durkin, D. T. (2003). Classroom assessment, student motivation, and
achievement in high school social studies classes. Applied Measurement in
Education, 16(1), 27-54.
Brookhart, S. M., Walsh, J. M., & Zientarski, W. A. (2006). The dynamics of motivation
and effort for classroom assessments in middle school science and social studies.
Applied Measurement in Education, 19(2), 151-184.
Church, M. A., Elliot, A. J., & Gable, S. L. (2001). Perceptions of classroom
environment, achievement goals, and achievement outcomes. Journal of
Educational Psychology, 93(1), 43-54.
Davies, J., & Brember, I. (1998). National curriculum testing and self-esteem in Year 2,
the first five years: A cross-sectional study. Educational Psychology, 18, 365–375.
Deemer, S. A. (2004). Classroom goal orientation in high school classrooms: Revealing
links between teacher beliefs and classroom environments. Educational Research,
46(1), 73-90.
Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist.
Special Issue: Psychological Science and Education, 41(10), 1040-1048.
Elliot, A. J., & Church, M. A. (1997). A hierarchical model of approach and avoidance
achievement motivation. Journal of Personality and Social Psychology, 72(1),
218-232.
Elliot, E. S., & Dweck, C. S. (1988). Goals: An approach to motivation and achievement.
Journal of Personality and Social Psychology, 54(1), 5-12.
Elliot, A. J., & Harackiewicz, J. M. (1996). Approach and avoidance achievement goals
and intrinsic motivation: A mediational analysis. Journal of Personality and
Social Psychology, 70(3), 461-475.
English, F. W., & Steffy, B. E. (2001). Deep curriculum alignment. Lanham, MD:
Scarecrow Press.
Epstein, J. L. (1988). The influence of friends on achievement and affective outcomes. In
J. L. Epstein & N. Karweit (Eds.), Friends in school: Patterns of selection and
influence in secondary schools (pp. 177–200).
Finn, J. D. (1989). Withdrawing from school. Review of Educational Research, 59(2),
117-142.
Frary, R.B., Cross, L.H., & Weber, L.J. (1993). Testing and grading practices and
opinions of secondary teachers of academic subjects: Implications for instruction
in measurement. Educational Measurement: Issues and Practice, 12(3), 23-30.
Fredricks, J. A., Blumenfeld, P., Friedel, J., & Paris, A. (2005). School engagement. In
K. A. Moore & L. H. Lippman (Eds.), What do children need to flourish:
Conceptualizing and measuring indicators of positive development (pp. 305-321).
New York, NY: Springer.
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential
of the concept, state of the evidence. Review of Educational Research, 74(1), 59–
109.
Frey, B. B., Petersen, S., Edwards, L. M., Pedrotti, J. T., & Peyton, V. (2005). Item-
writing rules: Collective wisdom. Teaching and Teacher Education, 21(4), 357-
364.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., & Katzaroff, M. (1999). Mathematics
performance assessment in the classroom: Effects on teacher planning and student
problem solving. American Educational Research Journal, 36(3), 609-646.
Greene, B.A., Miller, R.B., Crowson, H.M., Duke, B.L., & Akey, K.L. (2004). Predicting
high school students' cognitive engagement and achievement: Contributions of
classroom perceptions and motivation. Contemporary Educational Psychology,
29(4), 462-482.
Gutman, L. M. (2006). How student and parent goal orientations and classroom goal
structures influence the math achievement of African Americans during the high
school transition. Contemporary Educational Psychology, 31(1), 44-63.
Haigh, M. (2007). Sustaining learning through assessment: An evaluation of the value of
a weekly class quiz. Assessment & Evaluation in Higher Education, 32(4), 457-
474.
Hancock, D. R. (2007). Effects of performance assessment on the achievement and
motivation of graduate students. Active Learning in Higher Education, 8(3), 219-
231.
Harris, L.R. (2008). A phenomenographic investigation of teacher conceptions of student
engagement in learning. The Australian Educational Researcher, 35(1), 57-79.
Jones, M. G., Jones, B. D., & Hargrove, T. Y. (2003). The unintended consequences of
high-stakes testing. Lanham, MD: Rowman & Littlefield.
Lane, S., Parke, C.S., & Stone, C.A. (2002) The impact of a state performance-based
assessment and accountability program on mathematics instruction and student
learning: Evidence from survey data and school performance. Educational
Assessment, 8(4), 279-315.
Lau, S., & Nie, Y. (2008). Interplay between personal goals and classroom goal structures
in predicting student outcomes: A multilevel analysis of person-context
interactions. Journal of Educational Psychology, 100(1), 15-29.
Loveless, T. (2006). The 2006 Brown Center report on American education: How well
are American students learning? With special sections on the nation's
achievement, the happiness factor in learning, and honesty in state test scores
(Vol. II, No. 1). Washington, DC: Brookings Institution Press.
Marks, H. M. (2000). Student engagement in instructional activity: Patterns in the
elementary, middle, and high school years. American Educational Research
Journal, 37(1), 153-184.
Marso, R. N., & Pigge, F. L. (1988). An analysis of teacher made tests: Testing practices,
cognitive demands, and item construction errors. Paper presented at the Annual
Meeting of the National Council on Measurement in Education, New Orleans,
LA, ERIC Document Reproduction Service No. 298 174.
Maslovaty, N., & Kuzi, E. (2002). Promoting motivational goals through alternative or
traditional assessment. Studies in Educational Evaluation, 28(3), 199-222.
McMillan, J. H., Myran, S., & Workman, D. (2002). Elementary teachers' classroom
assessment and grading practices. Journal of Educational Research, 95(4), 203-
213.
Meece, J. L., Blumenfeld, P. C., & Hoyle, R. H. (1988). Students' goal orientations and
cognitive engagement in classroom activities. Journal of Educational Psychology,
80(4), 514-523.
Meece, J.L., & Miller, S.D. (2000). A longitudinal analysis of elementary school
students’ achievement goals in literacy activities. Contemporary Educational
Psychology, 26(4), 454-480.
Midgley, C., Maehr, M. L., Hruda, L. A., Anderman, E., Anderman, L., Gheen, M.,
Kaplan, A., Kumar, R., Middleton, M., Nelson, J., Roeser, R., & Urdan, T.
(2000). Manual for the Patterns of Adaptive Learning Scale. Ann Arbor:
University of Michigan.
Park, S.Y. (2005). Student engagement and classroom variables in improving
mathematics achievement. Asia Pacific Education Review, 6(1), 87-97.
Pedulla, J. J., Abrams, L. M., Madaus, G. F., Russell, M. K., Ramos, M. A., & Miao, J.
(2003). Perceived effects of state-mandated testing programs on teaching and
learning: Findings from a national survey of teachers. Chestnut Hill, MA: Center
for the Study of Testing, Evaluation, and Educational Policy, Boston College.
Pintrich, P. R. (2000). Multiple goals, multiple pathways: The role of goal orientation in
learning and achievement. Journal of Educational Psychology, 92(3), 544-555.
Pintrich, P.R., & Schunk, D.H. (2002). Motivation in education: Theory, research, and
applications. Upper Saddle River, NJ: Merrill Prentice-Hall.
Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and
predictive validity of the Motivated Strategies for Learning Questionnaire
(MSLQ). Educational and Psychological Measurement, 53(3), 801-813.
Reeve, J., Jang, H., Carrell, D., Jeon, S., & Barch, J. (2004). Enhancing students'
engagement by increasing teachers' autonomy support. Motivation and
Emotion, 28(2), 147-169.
Rose, L. C., & Gallup, A. M. (2007). The 39th annual Phi Delta Kappa/Gallup poll of the
public's attitudes toward the public schools. Phi Delta Kappan, 89(1), 33-48.
Stefanou, C., & Parkes, J. (2003). Effects of classroom assessment on student motivation
in fifth-grade science. Journal of Educational Research, 96(3), 152-162.
Stiggins, R. J., & Conklin, N. F. (1992). In teachers' hands: Investigating the practices of
classroom assessment. Albany: State University of New York Press.
Urdan, T., & Schoenfelder, E. (2006). Classroom effects on student motivation: Goal
structures, social relationships, and competence beliefs. Journal of School
Psychology, 44(5), 331-349.
U.S. Department of Education. (2002). Public Law print of PL 107-110, the No Child
Left Behind Act of 2001. Retrieved May 6, 2008, from
http://www.ed.gov/policy/elsec/leg/esea02/107-110.pdf
Willingham, W., Pollack, J., & Lewis, C. (2002). Grades and test scores: Accounting for
observed differences. Journal of Educational Measurement, 39(1), 1-37.
Woolfolk, A. (2004). Educational psychology. Boston, MA: Allyn & Bacon.
Appendix A
TEACHER SURVEY
Please indicate your:
Gender: M or F Date of Birth:_____/_____/_______
For the following items, please circle one number only.
1. To what extent are final grades in mathematics in your class based on oral
presentations?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
2. To what extent are final grades in mathematics in your class based on major
examinations?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
3. To what extent are final grades in mathematics in your class based on pieces of work
completed by teams of students?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
4. To what extent are final grades in mathematics in your class based on performance
assessments (e.g., structured teacher observations or ratings of performance, such as a
speech or paper)?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
5. To what extent are final grades in mathematics in your class based on objective
assessments (e.g., multiple choice, matching, short answer)?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
6. To what extent are final grades in mathematics in your class based on projects
completed by individual students?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
7. To what extent are final grades in mathematics in your class based on authentic
assessments (e.g., real world performance tasks)?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
8. To what extent are final grades in mathematics in your class based on assessments
provided by publishers or supplied to the teacher (e.g., in instructional guides or
manuals)?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
9. To what extent are final grades in mathematics in your class based on assessments
designed primarily by yourself?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
10. To what extent are final grades in mathematics in your class based on essay-type
questions?
1 2 3 4 5 6
NOT AT ALL COMPLETELY
Appendix B
STUDENT SURVEY
THE FIRST QUESTION IS AN EXAMPLE.
I like strawberry ice cream.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
HERE ARE SOME QUESTIONS ABOUT YOURSELF AS A STUDENT IN THIS
CLASS. PLEASE CIRCLE THE NUMBER THAT BEST DESCRIBES WHAT
YOU THINK.
1. It’s important to me that my teacher doesn’t think that I know less than others in math.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
2. It is important to me that I learn a lot of new math concepts this year.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
3. It’s important to me that other students in my class think I am good at math.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
4. One of my goals is to look smart in math compared to the other students in my class.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
5. One of my goals is to master a lot of new math skills this year.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
6. One of my goals in math is to learn as much as I can.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
7. One of my goals during math time is to avoid looking like I have trouble doing the
math work.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
8. One of my goals is to show others that I’m good at math.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
9. It is important to me that I don’t look stupid in math.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
10. One of my goals is to keep others from thinking I’m not smart in math.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
11. One of my goals is to show others that math is easy for me.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
12. It’s important to me that I thoroughly understand my math class work.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
13. It is important to me that I improve my math skills this year.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
14. It’s important to me that I look smart in math compared to others in my class.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
THE FOLLOWING QUESTIONS ARE ABOUT YOU AND YOUR
CLASSROOM. REMEMBER TO SAY HOW YOU REALLY FEEL. NO ONE AT
SCHOOL OR HOME WILL SEE YOUR ANSWERS.
1. I like math time.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
2. I follow the rules during math time.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
3. I check my math schoolwork for mistakes.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
4. I feel excited by my work during math time.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
5. I get in trouble during math time.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
6. I study at home for math even when I don’t have a test.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
7. When it is math time, I just act as if I am working.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
8. My classroom is a fun place to be during math time.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
9. When I read a math problem, I ask myself questions or use strategies to make sure I
understand what it is about.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
10. I pay attention during math time.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
11. I am interested in the work in math.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
12. I read or use extra math books to learn more about the things we do in school.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
13. I complete my math work on time.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
14. I feel happy during math time.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
15. If I don’t know what a math problem means, I do something to figure it out.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
16. I feel bored during math time.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
17. If I don’t understand a math problem that I read, I go back and read it over again.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
18. I talk with people outside of school about what I am learning in math.
1 2 3 4 5
NOT AT ALL TRUE SOMEWHAT TRUE VERY TRUE
Appendix C
University of Southern California
Rossier School of Education
Waite Phillips Hall
3470 Trousdale Parkway
Los Angeles, CA 90089
INFORMED CONSENT FOR NON-MEDICAL RESEARCH
PARENTAL PERMISSION
************************************************************************
CONSENT TO PARTICIPATE IN RESEARCH
The Relationship between Teacher Assessment Practices, Student Goal
Orientation and Student Engagement in Elementary Mathematics
Your child is asked to participate in a research study conducted by Corinne Hyde,
M.S.Ed. and Robert Rueda, Ph.D., from the Rossier School of Education at the
University of Southern California because your child is a fifth grade student in Southern
California School District. Results of the study will be included in Corinne Hyde’s
doctoral dissertation. Your child was selected as a possible participant in this study
because your child is currently a fifth grade student at (Jones/Park/Center) Elementary
school. A total of 60 subjects will be selected from your child’s elementary school to
participate. Your child’s participation is voluntary. You should read the information
below, and ask questions about anything you do not understand, before deciding whether
or not to participate. Please take as much time as you need to read the consent form and
discuss it with your family or friends. If you decide to let your child participate, you will
be asked to sign this form. You will be given a copy of this form.
PURPOSE OF THE STUDY
This study is designed to determine if there is a relationship between the way teachers
assess mathematics and how motivated and engaged students are to learn mathematics.
PROCEDURES
If you volunteer to participate in this study, we would ask you or your child to do the
following things:
□ Your child will be asked to complete a short (approximately 45-60 minute long)
survey asking about their feelings and habits related to math.
□ Your child’s teacher will be asked to provide age, gender, and ethnicity
information on your child.
This survey will take place in your child’s classroom, and your child’s teacher will not be
present during the survey. All students in the classroom who have consented to
participate will take the survey at one time.
POTENTIAL RISKS AND DISCOMFORTS
There are no potential risks involved in this study.
POTENTIAL BENEFITS TO SUBJECTS AND/OR TO SOCIETY
Your child will not benefit from this research study.
PAYMENT/COMPENSATION FOR PARTICIPATION
If you choose to return this consent form, your child will be entered in a raffle to win an
iPod Shuffle ($49 value). You do NOT have to consent to participate in the study in
order to be entered in the raffle; simply return the consent form unsigned if you do not
wish your child to participate.
POTENTIAL CONFLICTS OF INTEREST
There are no potential conflicts of interest for the investigators of this study.
CONFIDENTIALITY
Any information that is obtained in connection with this study and that can be identified
with you will remain confidential and will be disclosed only with your permission or as
required by law.
Only members of the research team will have access to the data associated with this
study. The data will be stored in the investigator’s office in a locked file
cabinet/password protected computer. Data may be released to the dissertation
committee.
The data will be stored for three years after the study has been completed and then
destroyed.
Each school, as well as the district itself, will receive a report outlining the results of the
study, as well as implications and suggestions for practice. These reports will be
distributed following the completion of the study. The reports will not identify
individuals and will only report findings for the group.
When the results of the research are published or discussed in conferences, no
information will be included that would reveal your identity.
PARTICIPATION AND WITHDRAWAL
You or your child can choose whether to be in this study or not. If your child volunteers
to be in this study, s/he may withdraw at any time without consequences of any kind.
Your child may also refuse to answer any questions s/he doesn’t want to answer and still
remain in the study. The investigator may withdraw your child from this research if
circumstances arise which warrant doing so.
RIGHTS OF RESEARCH SUBJECTS
You may withdraw your consent at any time and discontinue participation without
penalty. You are not waiving any legal claims, rights or remedies because of your
participation in this research study. If you have any questions about your rights and/or
your child’s rights as a study subject or you would like to speak with someone
independent of the research team to obtain answers to questions about the research, or in
the event the research staff cannot be reached, please contact the University Park IRB,
Office of the Vice Provost for Research Advancement, Stonier Hall, Room 224a, Los
Angeles, CA 90089-1146, (213) 821-5272 or upirb@usc.edu
IDENTIFICATION OF INVESTIGATORS
If you have any questions or concerns about the research, please feel free to contact:
Corinne Hyde Robert Rueda, Ph.D.
Rossier School of Education Rossier School of Education
Waite Phillips Hall, 601B Waite Phillips Hall, 601B
Los Angeles, CA 90007 Los Angeles, CA 90007
XXX-XXX-XXXX XXX-XXX-XXXX
XXXXX@usc.edu XXXXX@usc.edu
SIGNATURE OF PARENT(S)
I have read (or someone has read to me) the information provided above. I have been
given a chance to ask questions. My questions have been answered to my satisfaction,
and I agree to have our child(ren) participate in this study. I have been given a copy of
this form.
□ I/we agree for the class teacher to provide the researchers with the age,
gender, and ethnicity information on our child.
□ I/we do not want the class teacher to provide the researchers with the age,
gender, and ethnicity information on our child.
Name of Subject
Name of Parent
Signature of Parent Date
SIGNATURE OF INVESTIGATOR
I have explained the research to the subject and his/her parent(s), and answered all of
their questions. I believe that the parent(s) understand the information described in this
document and freely consent for his/her child to participate.
Name of Investigator
Signature of Investigator Date
Appendix D
University of Southern California
Rossier School of Education
3470 Trousdale Parkway
Waite Phillips Hall, Suite 802
Los Angeles, CA 90089-4038
CONSENT TO PARTICIPATE IN RESEARCH
The Relationship Between Teacher Assessment Practices, Student Goal Orientation, and
Student Engagement in the Study of Mathematics
We ask your consent for your child to participate in a research study conducted for the
doctoral work of Corinne Hyde and Robert Rueda, Ph.D. They are from the Rossier
School of Education at the University of Southern California.
Because your child is a fifth-grade student in the Southern California district, he or she is
invited to participate in the doctoral study of the investigator Corinne Hyde. Your child
was selected as a participant because he or she is a fifth-grade student at
(Jones/Park/Center) Elementary School.
A total of 60 students will be selected to participate in this study. Your child's
participation is voluntary. Please read all of the enclosed information, and if you have
any questions, please contact us before giving consent for your child to participate in this
research. If you allow your child to participate, you will need to sign this form.
PURPOSE OF THE STUDY
The purpose of this research is to determine the relationship between giving mathematics
examinations and how children are motivated to want to learn mathematics.
PROCEDURES
If you consent to your child's participation, your child will do the following:
• Your child will complete a short questionnaire (approximately 45-60 minutes)
about his or her feelings toward mathematics.
• Your child's teacher will provide the student's age, gender, and ethnicity
information.
The questionnaire will be taken in your child's classroom. All participating students will
take the questionnaire at the same time.
POTENTIAL RISKS AND DISCOMFORTS
There are no risks in this study.
POTENTIAL BENEFITS TO THE STUDENTS AND THE COMMUNITY
Your child will not receive any benefit from taking the questionnaire.
PAYMENT FOR PARTICIPATION
If your child participates in this study, he or she may enter a raffle to win an iPod Shuffle
($49 value). To enter the raffle, simply sign and return this form to your child's teacher.
CONFIDENTIALITY
Any information obtained in connection with this study will not be identifiable with your
child. It will remain confidential and will be disclosed only with your permission, as
required by law, or to the research team.
Only the members of the research team who work with your child will have access to the
information. It will remain confidential and will be disclosed only with your permission
or as required by law.
The information will be destroyed after three years. All field notes, transcripts, and
surveys will be kept in a locked cabinet in the secured office of the principal investigator,
Corinne Hyde.
When the results of the research are published or presented at conferences, no
information will be included that would reveal your child's identity.
The school will receive a report of the results when the research is completed.
PARTICIPATION AND WITHDRAWAL
You or your child can choose whether or not to participate in this study. If your child
decides to participate, he or she may withdraw at any time without consequence. Your
child may also refuse to answer any question he or she does not want to answer and still
remain in the study.
RIGHTS OF RESEARCH PARTICIPANTS
You may stop participating in the research at any time without consequence. Your child
does not have to answer all of the questions if he or she does not want to. You are not
waiving any legal claims, rights, or remedies because of your participation in this
research.
If you have any questions about your rights, please contact the University Park IRB,
Office of Research Advancement, Stonier Hall, Room 224a, Los Angeles, CA
90089-1146, (213) 821-5272 or upirb@usc.edu
IDENTIFICATION OF INVESTIGATORS
If you have any questions, please contact:
Corinne Hyde Robert Rueda, Ph.D.
Rossier School of Education Rossier School of Education
Waite Phillips Hall, 601B Waite Phillips Hall, 601B
Los Angeles, CA 90007 Los Angeles, CA 90007
XXX-XXX-XXXX XXX-XXX-XXXX
XXXXX@usc.edu XXXXX@usc.edu
SIGNATURE OF PARENT(S)
I understand and agree with everything I have read about this research, and I give my
child permission to participate in this research. The investigator will leave you a copy of
this form for your records.
□ I agree that my child's teacher may give the investigator information about the
age, gender, and ethnicity of my child, the student.
□ No, I do not agree that my child's teacher may give information about our
child.
Name of Student
Name of Parent/Guardian
Signature of Parent/Guardian Date
SIGNATURE OF THE INVESTIGATOR (Corinne Hyde)
I have explained the study to the parents and the student. I have also answered all of
their questions. I believe that the parents understand this information and give consent
for their child to participate.
Name of Investigator
Signature of Investigator Date
Appendix E
University of Southern California
Rossier School of Education
Waite Phillips Hall
3470 Trousdale Parkway
Los Angeles, CA 90089
INFORMATION SHEET FOR NON-MEDICAL RESEARCH
The Relationship between Teacher Assessment Practices, Student Goal
Orientation and Student Engagement in Elementary Mathematics
You are asked to participate in a research study conducted by Corinne Hyde, M.S.Ed. and
Robert Rueda, Ph.D., from the Rossier School of Education at the University of Southern
California because you are a fifth grade teacher in Southern California District. Results of
the study will be included in Corinne Hyde’s doctoral dissertation. You were selected as
a possible participant in this study because you are currently a fifth grade teacher at
(Jones/Park/Central) Elementary school. A total of 2 subjects will be selected from your
school to participate. Your participation is voluntary. You should read the information
below, and ask questions about anything you do not understand, before deciding whether
or not to participate. Please take as much time as you need to read the consent form. You
may also decide to discuss it with your family or friends. You may keep this copy of this
form.
PURPOSE OF THE STUDY
We are asking you to take part in a research study because we are trying to learn more
about the relationship between the ways that teachers assess mathematics and the
motivation and engagement of their students to learn mathematics.
Completion and return of the questionnaire or response to the interview questions
will constitute consent to participate in this research project.
PROCEDURES
You will be asked to complete a short survey that includes questions about your
assessment practices in mathematics. You will also be asked to provide age, gender, and
ethnicity information on your students if parents give permission.
POTENTIAL RISKS AND DISCOMFORTS
There are no anticipated risks to your participation; you may experience some discomfort
at completing the questionnaire or you may be inconvenienced from taking time out of
your day to complete the survey instrument. Any questions that make you uncomfortable
may be skipped and not answered.
POTENTIAL BENEFITS TO SUBJECTS AND/OR TO SOCIETY
You may not directly benefit from your participation in this research study.
This study may provide insight into how to better design assessment to increase student
motivation and engagement to participate in and learn mathematics.
PAYMENT/COMPENSATION FOR PARTICIPATION
You will not receive any payment for your participation in this research study.
POTENTIAL CONFLICTS OF INTEREST
There are no potential conflicts of interest for the investigators of this study.
CONFIDENTIALITY
Any information that is obtained in connection with this study and that can be identified
with you will remain confidential and will be disclosed only with your permission or as
required by law. The information collected about you will be coded using a fake name
(pseudonym) or initials and numbers, for example abc-123, etc. The information which
has your identifiable information will be kept separately from the rest of your data.
Only members of the research team will have access to the data associated with this
study. The data will be stored in the investigator’s office in a locked file
cabinet/password protected computer. Data may be released to the dissertation
committee.
The data will be stored for three years after the study has been completed and then
destroyed.
When the results of the research are published or discussed in conferences, no
information will be included that would reveal your identity.
PARTICIPATION AND WITHDRAWAL
You can choose whether to be in this study or not. If you volunteer to be in this study,
you may withdraw at any time without consequences of any kind. You may also refuse
to answer any questions you don’t want to answer and still remain in the study. The
investigator may withdraw you from this research if circumstances arise which warrant
doing so.
ALTERNATIVES TO PARTICIPATION
Your alternative is to not participate.
RIGHTS OF RESEARCH SUBJECTS
You may withdraw your consent at any time and discontinue participation without
penalty. You are not waiving any legal claims, rights or remedies because of your
participation in this research study. If you have any questions about your rights as a
study subject or you would like to speak with someone independent of the research team
to obtain answers to questions about the research, or in the event the research staff can
not be reached, please contact the University Park IRB, Office of the Vice Provost for
Research Advancement, Stonier Hall, Room 224a, Los Angeles, CA 90089-1146, (213)
821-5272 or upirb@usc.edu
IDENTIFICATION OF INVESTIGATORS
If you have any questions or concerns about the research, please feel free to contact:
Corinne Hyde
Rossier School of Education
Waite Phillips Hall, 601B
Los Angeles, CA 90007
XXX-XXX-XXXX
XXXXX@usc.edu
Robert Rueda, Ph.D.
Rossier School of Education
Waite Phillips Hall, 601B
Los Angeles, CA 90007
XXX-XXX-XXXX
XXXXX@usc.edu
Appendix F
University of Southern California
Rossier School of Education
ASSENT FORM FOR RESEARCH
Page 1 of 2
ASSENT TO PARTICIPATE IN RESEARCH
The relationship between teacher assessment practices, student goal
orientation, and student engagement in elementary mathematics.
1. My name is Corinne Hyde.
2. We are asking you to take part in a research study because we are trying to learn more
about if the ways that teachers test your math knowledge changes how much you
want to learn math.
3. If you agree to be in this study you will be asked to answer some questions about how
you are tested in math and how you feel about math.
4. There are no risks to you if you choose to take part in this study.
5. There are no benefits to you if you choose to take part in this study.
6. Please talk this over with your parents before you decide whether or not to
participate. We will also ask your parents to give their permission for you to take part
in this study. But even if your parents say “yes” you can still decide not to do this.
Please take as much time as you need to read this form. You may also decide to
discuss it with your family or friends. If you decide to participate, you will be asked
to sign this form.
7. If you don’t want to be in this study, you don’t have to participate. Remember, being
in this study is up to you and no one will be upset if you don’t want to participate or
even if you change your mind later and want to stop.
8. You can ask any questions that you have about the study. If you have a question later
that you didn’t think of now, you can call me at XXX-XXX-XXXX or ask me next
time.
9. If you have any questions about your rights as a study subject or you would like to
speak with someone who isn't part of the research team to answer questions about the
research, or in the event the research staff cannot be reached, please contact the
University Park IRB, Office of the Vice Provost for Research Advancement, Stonier
Hall, Room 224a, Los Angeles, CA 90089-1146, (213) 821-5272 or upirb@usc.edu
10. Signing your name at the bottom means that you agree to be in this study. You and
your parents will be given a copy of this form after you have signed it.
_________________________________ ____________________
Name of Subject Date
____________________________________
Subject’s Signature
___________________________________ ____________________
Name of Investigator Date
___________________________________
Investigator’s Signature
Abstract
This study examined the relationship between the type of assessment a teacher uses and the goal orientation and engagement of students in fifth-grade mathematics. Based on prior research on assessment types, goal orientation, and engagement, it was predicted that significant relationships would exist between the type of assessment a teacher uses and the goal orientation and engagement of his or her students. To test this, teachers (n = 8) completed a questionnaire developed by McMillan, Myran, and Workman (2002), which indicated whether they used constructed-response assessment, teacher-made assessment, or objective assessment. Additionally, students (N = 115) completed a modified form of a portion of the Patterns of Adaptive Learning Scales (Midgley et al., 2000) designed to measure their performance-approach, performance-avoid, and mastery goal orientations. Students were also given a modified form of the Feelings About School Inventory (Fredricks, Blumenfeld, Friedel, & Paris, 2005) designed to measure their cognitive, emotional, and behavioral engagement. The analyses found no significant relationship between teacher assessment type and student goal orientation or engagement.
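To illustrate how a goal-orientation measure of the kind described above is typically scored, the sketch below computes per-student subscale means from 1-5 Likert responses to the 14-item student survey in Appendix B. The item-to-subscale mapping is my own reading of the item wording in light of the three-factor PALS structure, not the scoring key actually used in the study.

```python
# Minimal sketch: scoring three goal-orientation subscales from Likert items.
# Item-to-subscale mapping below is an assumption based on item wording.
from statistics import mean

SUBSCALES = {
    "mastery": [2, 5, 6, 12, 13],               # learning/improvement items
    "performance_approach": [3, 4, 8, 11, 14],  # "look smart to others" items
    "performance_avoid": [1, 7, 9, 10],         # "avoid looking stupid" items
}

def score_student(responses):
    """responses: dict mapping item number (1-14) to a 1-5 Likert rating.
    Returns the mean rating on each goal-orientation subscale."""
    return {name: mean(responses[i] for i in items)
            for name, items in SUBSCALES.items()}

# Hypothetical student: high mastery, low performance orientation.
example = {1: 2, 2: 5, 3: 2, 4: 1, 5: 5, 6: 5, 7: 2, 8: 1,
           9: 3, 10: 2, 11: 1, 12: 5, 13: 4, 14: 2}
scores = score_student(example)
```

For this hypothetical response pattern, the mastery mean (4.8) would far exceed the performance-approach (1.4) and performance-avoid (2.25) means; in the study itself, subscale scores like these would then be related to teacher assessment type.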
Asset Metadata
Creator: Hyde, Corinne Elizabeth
Title: The relationship between teacher assessment practices, student goal orientation, and student engagement in elementary mathematics
School: Rossier School of Education
Degree: Doctor of Education
Degree Program: Education (Leadership)
Publication Date: 04/20/2009
Defense Date: 03/27/2009
Publisher: University of Southern California
Language: English
Advisor: Rueda, Robert S. (committee chair); Hirabayashi, Kimberly (committee member); Suite, Denzil (committee member)
Permanent Link (DOI): https://doi.org/10.25549/usctheses-m2094
Repository: Libraries, University of Southern California, Los Angeles, California