The Importance of WHY: Teachers’ Knowledge of Student Misunderstandings of Ratios and
Proportional Relationships
by
John Ezaki
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(URBAN EDUCATION POLICY)
December 2023
Copyright 2023 John Ezaki
TEACHERS’ KNOWLEDGE OF STUDENT MISUNDERSTANDINGS
ii
Acknowledgements
To my advisor Dr. Yasemin Copur-Gencturk, it is hard to express how grateful I have
been for your mentorship and guidance during this whole program and particularly during my
dissertation. It has been a long process and you have continued to support me every step of the
way without ever giving up on me. Without you, this dissertation would not have been possible,
so from the bottom of my heart, thank you so much.
To Dr. Allan Cohen and Dr. Laila Hasan thank you for your patience and expertise. You
have been amazing supports throughout this whole process, and I appreciate you investing in me
and my research.
To my friends, family, and the rest of the Rossier and USC communities thank you for
listening to and supporting me through my classes, a pandemic, and writing this dissertation. I
am so thankful that I never felt like I was doing this alone.
Finally, a huge thank you to the Herman and Rasiej Mathematics Initiative and the
National Science Foundation (Grant Number 1813760) for supporting this work. Any opinions,
findings, and conclusions or recommendations expressed in this material are those of the
author(s) and do not necessarily reflect the views of the Herman and Rasiej Mathematics
Initiative or the NSF.
Table of Contents
Acknowledgements………………………………………………………………………………..ii
List of Tables……………………………………………………………………………………..v
List of Figures…………………………………………………………………………………….vi
List of Abbreviations…………………………………………………………………………….vii
Abstract………………………………………………………………………………………...…ix
CHAPTER ONE: INTRODUCTION AND THEORETICAL FRAMING………………………1
Theoretical Framework……………………………………………………………………3
Content Knowledge……………………………………………………………………..... 3
Pedagogical Content Knowledge……………………………………………………….. 6
Mathematical Pedagogical Content Knowledge……..……………………...….. 7
Conceptualizations of Pedagogical Content Knowledge……….………………………. 7
Mathematical Knowledge for Teaching………………………………………… 8
The Network of Pedagogical Content Knowledge………………………………10
Situated Teacher Knowledge…………………………………………………… 12
Learning to Teach Mathematics – Teacher Education and Development Study...14
Cognitively Activating Instruction……………………………………………… 15
Conceptualization of CK and PCK for this Study……………………………………… 17
Conceptualization of Teachers’ Knowledge of Student Misunderstandings…….19
Ratios and Proportional Relationships…………………………………………………. 21
CHAPTER TWO: LITERATURE REVIEW……………………………………………………24
Teachers’ GLCK…………………………………………………………………………24
Teachers’ PSM………………………………………………………………………….. 25
Teachers’ URSM………………………………………………………………………... 27
Studies Measuring Multiple Knowledge Components……..…………………………... 29
Relating Teachers’ Knowledge of Student Thinking and Misunderstandings to
External Outcomes………………….…………………………………………... 33
Literature Summary…………………………………………………………………….. 35
CHAPTER THREE: METHODOLOGY………………………………………………………. 37
Research Questions…………………………………………………………………….. 38
Construct Development………………………………………………………………… 38
Instrument Creation…………………………………………………………….. 39
Items……………………………………………………………………. 41
Coding URSM Responses……………………………………………… 43
Item Types……………………………………………………………… 47
Study 1………………………………………………………………………………….. 48
Study Context…………………………………………………………………… 48
Participants……………………………………………………………… 49
Testing Teachers’ Knowledge Structure……………………………………….. 51
Confirmatory Factor Analysis………………………………………….. 51
Background Measure…………………………………………………… 56
Analytical Approach……………………………………………………………. 56
Study 2………………………………………………………………………………….. 57
Participants……………………………………………………………………… 57
Measures…………………………………………………………………………58
Instructional Quality…………………………………………………….. 59
Coding Classroom Artifacts……………………………………………...60
Student Pre- and Post-test………………………………………………..62
Analytical Approach……………………………………………………………..62
Linear Regression……………………………………………………….. 62
Hierarchical Linear Modeling…………………………………………... 63
Benjamini-Hochberg Procedure………………………………………….66
CHAPTER FOUR: FINDINGS………………………………………………………………… 68
Research Question 1: What is the structure of teachers’ GLCK, PSM, and URSM of
ratios and proportional relationships? ………………………………………………….. 68
Research Question 2: What GLCK, PSM, and URSM of student misunderstandings of
ratios and proportional relationships do in-service middle school mathematics teachers
have? ……………………………………………………………………………………. 70
Research Question 3: What is the relationship between teachers’ GLCK, PSM, and
URSM of ratios and proportional relationships and their instructional quality?...............74
Research Question 4: What is the relationship between teachers’ GLCK, PSM, and URSM of
ratios and proportional relationships and student achievement? ……………………….. 77
Results Summary………………………………………………………………...79
CHAPTER FIVE: DISCUSSION AND IMPLICATIONS…………………………………….. 80
Discussion of Findings………………………………………………………………….. 80
Limitations and Delimitations……………………………………………………………89
Recommendations for Future Research……………………………………………….....90
Conclusion……………………………………………………………………………..... 91
References………………………………………………………………………………………..93
Appendix A……………………………………………………………………………..107
Appendix B……………………………………………………………………………..114
List of Tables
Table 1: Rubric for Teachers’ Reason for the Student Misunderstanding……………………….45
Table 2: Teachers’ Background Information for Study 1………………………………………..50
Table 3: Item Means and Standard Deviations…………………………………………………..52
Table 4: Teachers’ Background Information for Study 2………………………………………..58
Table 5: Confirmatory Factor Analysis Fit Indices……………………………………………... 68
Table 6: Covariances of GLCK, PSM, and URSM by Factor Model……………………………69
Table 7: Average Scores on GLCK, PSM, and URSM Items…………………………….…….. 71
Table 8: Regression Results of Teachers’ Instructional Practices on their Knowledge
Components………………………………………………………………………………...……76
Table 9: Hierarchical Linear Model Results……………………………………………………..78
List of Figures
Figure 1: Domains of Mathematical Knowledge for Teaching………………………………… 9
Figure 2: The Network of Pedagogical Content Knowledge………………………………….. 11
Figure 3: Teachers’ Knowledge: Developing in Context………………………………......….14
Figure 4: Example GLCK Item………………………………………………………………...41
Figure 5: Example PSM Item…………………………………………………………………..42
Figure 6: Example URSM Item……………………………………………………………….. 43
Figure 7: Distribution of Rationale Types Provided for All Items……………………………..72
Figure 8: Distribution of Rationale Types Provided by Item Type…………………………….72
List of Abbreviations
Abbreviation Meaning Page
CK Content Knowledge 12
PCK Pedagogical Content Knowledge 12
SCK Specialized Content Knowledge 13
KOSM Knowledge of Student Misunderstandings 29
GLCK Grade-Level Content Knowledge 27
PSM Predicting Student Misunderstandings 30
URSM Understanding the Reasons for Student Misunderstandings 30
IST In-service Teachers 36
PST Pre-service Teachers 36
MKT Mathematical Knowledge for Teaching Framework 13
STK Situated Teacher Knowledge Framework 22
COACTIV Professional Competence of Teachers, Cognitively Activating Instruction, and the Development of Students’ Mathematical Literacy 12
TEDS-M Learning to Teach Mathematics – Teacher Education and Development Study 13
HLM Hierarchical Linear Modeling 44
CFA Confirmatory Factor Analysis 59
OLS Ordinary Least Squares Regression 72
CCSS Common Core State Standards 32
RMSEA Root Mean Square Error of Approximation 65
CFI Comparative Fit Index 65
TLI Tucker-Lewis Index 65
Abstract
Teachers’ knowledge of students’ misunderstandings (KOSM) is an essential theoretical
component of teacher knowledge. Yet, the relationship between KOSM and key outcomes such
as instructional quality and student achievement is unclear. For this dissertation project, I
conducted two studies aimed at understanding different facets of teachers’ KOSM.
Study 1 used a national sample of 743 in-service middle school mathematics teachers to
test whether two theoretical components of KOSM, predicting common misunderstandings
(PSM) and understanding the reasons leading to common misunderstandings (URSM), were
distinct constructs. Additionally, Study 1 used descriptive statistics to explore what grade-level content knowledge (GLCK), PSM, and URSM this national sample had. I found that GLCK, PSM, and URSM were distinct components of knowledge and that teachers do possess the grade-level content knowledge and are able to predict the common misunderstandings, but the rationales they provide for why students demonstrate a misunderstanding are largely attributed to procedures rather than the underlying concepts.
The sample for Study 2 included 37 teachers and 1,205 of their students. Linear
regression was used to test the relationship between GLCK, PSM, and URSM and three
measures of instructional quality (task selection, task enactment, and coherence of the
mathematics) and hierarchical linear modeling was used to assess the relationship between these
knowledge components and student achievement. Overall, URSM and PSM were significantly
related to task enactment, coherence of the mathematics, and student learning, suggesting that
URSM and PSM are important components of KOSM and should be considered in future work.
CHAPTER ONE: INTRODUCTION AND THEORETICAL FRAMING
Teacher knowledge is believed to be one of the most important factors in student achievement and effective instruction (e.g., Baumert et al., 2010; Campbell et al., 2014; Darling-Hammond, 1999; Gitomer & Zisk, 2015; Grossman & Richert, 1988; Grossman et al., 2005;
Hiebert et al., 2007; Hill et al., 2005; 2019; Kersting et al., 2012; Mewborn, 2003; National
Commission on Excellence in Education, 1983; National Council of Teachers of Mathematics,
2000; National Mathematics Advisory Panel, 2008). Scholars and policymakers have repeatedly
underscored the importance of having and preparing knowledgeable teachers (e.g., Association
of Mathematics Teacher Educators, 2017; Council for the Accreditation of Educator Preparation
(CAEP), 2018; Darling-Hammond et al., 2005; Hattie, 2003; Metzler & Woessmann, 2012;
National Commission on Teaching and America’s Future, 1996; No Child Left Behind, 2001;
Shulman, 1986). However, a challenge to ensuring all teachers are equipped with the knowledge
to teach students effectively is determining what type of knowledge is most important for
teachers to have. As Grossman, Schoenfeld, and Lee (2005) asked, "What do teachers need to
know about the subject they teach?" (p. 201). Answering this question has been surprisingly
challenging in large part because the priorities for teachers and what constitutes teacher
knowledge have changed over time.
For instance, in Classical times, schooling was only for the elites and spanned content,
morality, and physical training (Pascal, 1984). Teachers were expected to be fully versed in a
broad range of content and skills with much less focus on pedagogy. These schools were only for
the upper echelon of society, who paid to have their children taught, and thus teachers
were expected to be experts in their craft (Pascal, 1984). Much later, during the Industrial
Revolution there was a need for semi-skilled laborers who were "punctual, docile, and sober"
(Mokyr, 2001, p. 10). Schools were seen as a means of preparing large numbers of people for
factory work, and schools adapted accordingly. School days began to follow more factory-like schedules, rote memorization and obedience became key components of education, and teacher
knowledge was heavily pedagogical. Teachers were not necessarily valued for the depth of their
understanding or the ability to convey key ideas but instead expected to be people of high social
and moral standing (Kafka, 2016; Stone, 2012).
Over the last thirty years, scholars have been investigating this changing notion of what
constitutes teacher knowledge (e.g., An et al., 2004; Ball et al., 2008; Blum & Krauss, 2008;
Charalambous & Pitta-Pantazi, 2007; Charalambous et al., 2020; Copur-Gencturk & Tolar, 2022;
Grossman, 1990; Mishra & Koehler, 2006; Rowland & Turner, 2007; Schoenfeld, 2020;
Shulman, 1986), how to measure it (e.g., Copur-Gencturk et al., 2022b; Hill et al., 2004; Izsák et
al., 2010, 2012; Krauss et al., 2008; Orrill et al., 2020; Tchoshanov, 2011), and how it relates to
important outcomes such as instructional quality and student achievement (e.g., Baumert et al.,
2010; Chen et al., 2020; Copur-Gencturk, 2015; Hill & Chin, 2018; Hill et al., 2005; Kersting et
al., 2010, 2012; Sadler et al., 2013; Tchoshanov, 2011).
In the rest of Chapter 1, I provide an overview of the key theories that have been used to
understand teachers' knowledge and teachers' knowledge of student misunderstandings. In
Chapter 2, I explore the research on teachers' knowledge of student misunderstandings
specifically within the realm of ratios and proportional reasoning. Chapter 3 outlines the methods
of my research study that investigated teachers' knowledge of student misunderstandings and its
relation to teachers’ instruction and student achievement. Chapter 4 describes the results of the
study and Chapter 5 comprises the discussion of the results.
Theoretical Framework
Teacher content knowledge has long been considered a key ingredient to students’
educational success (e.g., Baumert et al., 2010; Darling-Hammond, 1999; Hill et al., 2005;
Kersting et al., 2012; Shulman, 1986). As Ball said, "Teachers cannot help children learn things they themselves do not understand" (1991, p. 5); teachers are the guide for students and clearly must possess some knowledge of the content in order to teach it. But it remains unclear
what type of knowledge is necessary for teachers to teach effectively. Many scholars have
created frameworks for teacher knowledge, ranging from general knowledge types that teachers
should have (e.g., Carlson et al., 2019; Gess-Newsome, 2015; Grossman, 1990; Mishra &
Koehler, 2006; Rowland & Turner, 2007; Shulman, 1986) to very specific conceptualizations
teachers of specific content should have (e.g., An et al., 2004; Ball et al., 2008; Blum & Krauss,
2008; Charalambous & Pitta-Pantazi, 2007; Copur-Gencturk & Tolar, 2022; Kunter, 2013;
Lamon, 2005; Magnusson et al., 1999; Pitta-Pantazi & Christou, 2011; Tatto et al., 2008;
Vergnaud, 1988; Weiland et al., 2021). Across these conceptualizations of teacher knowledge
there are two consistent components, pedagogical content knowledge (PCK) and content
knowledge (CK), that are considered necessary for teachers.
Content Knowledge
Content knowledge is typically conceived as the knowledge of the mathematics,
including the underlying concepts behind different mathematical approaches along with the
ability to reason mathematically and make sense of the relationships across concepts (e.g.,
Copur-Gencturk & Tolar, 2022; Kilpatrick et al., 2015; NRC, 2001). That said, the breadth and
depth of CK has varied across conceptualizations. For instance, the theoretical framework for the
Professional Competence of Teachers, Cognitively Activating Instruction, and the Development
of Students’ Mathematical Literacy (COACTIV) study situated CK for secondary mathematics teachers between “(2) the school-level mathematical knowledge that good school students have, and (3) the university-level mathematical knowledge that does not overlap with the content of the school curriculum (e.g., Galois theory or functional analysis)” (Krauss et al., 2008, p. 876). In
terms of depth of CK, the COACTIV study defined CK as extending somewhat beyond the level of mathematics that a given teacher may be expected to teach. Describing the breadth of what CK entails, the authors state that their conceptualization of CK covers “a profound mathematical
understanding of the content of the secondary school mathematics curriculum… defined by the
curriculum and continuously developed on the basis of feedback from instructional practice”
(Baumert & Kunter, 2013, p. 33). Their description of the breadth of CK is vague but suggests
that they conceptualize CK as involving both conceptual and procedural understandings, since teachers should demonstrate “profound” understanding.
In the framework for the Teacher Education and Development Study in Mathematics
(TEDS-M), the researchers focused CK on the content teachers were required to teach, noting specifically that the content extended only three years beyond what the teachers would teach (Tatto et al., 2013). Similar to COACTIV, this centered CK on the grade-level knowledge
with some extension. In contrast to COACTIV, TEDS-M was more prescriptive in what CK
encompassed. They named three cognitive domains (Knowing, Applying, Reasoning) that were
necessary to capture the breadth of teachers’ CK, which aligned closely to the National Research
Council’s (NRC; 2001) five mathematical proficiencies. The mathematical proficiencies along
with the COACTIV framework for CK describe how CK encompasses more than accurately
solving problems, including teachers’ strategic competence (Copur-Gencturk & Doleck, 2021)
and how they justify their approaches (NRC, 2001; Tatto et al., 2013).
The Mathematical Knowledge for Teaching (MKT) framework approached the
conceptualization of CK quite differently, subdividing CK into common content knowledge,
horizon content knowledge, and specialized content knowledge (Ball et al., 2008). Common
content knowledge encompassed the foundational mathematical knowledge that is used by adults
and is not unique to mathematics teachers. Horizon knowledge described “a kind of
mathematical ‘peripheral vision’ needed in teaching, that is, a view of the larger mathematical
landscape that teaching requires” (Hill & Ball, 2009, p. 70). Horizon knowledge was captured
by teachers’ understanding of students’ learning progressions and their mathematical
development over time and has been described as, “having a sense of the larger mathematical
environment of the discipline being taught” (Jakobsen et al., 2013). They argued that teachers
should be aware of the content students will experience after their class and how it relates to the
grade level content, so that teachers can prepare their students to make connections in the future
and integrate current learning with future learning (Ball et al., 2008). The final component of CK
within the MKT framework was specialized content knowledge. Specialized content knowledge
(SCK) referred to the deep understandings of specific mathematical concepts or topics that go
beyond basic knowledge and are used in teaching (Hill et al., 2008). It involved having a rich and
flexible knowledge base about a particular area of mathematics, including its underlying
principles, connections to other mathematical ideas, and various representations and problem-solving strategies associated with that content. SCK enabled teachers to engage with the subject
matter in more sophisticated ways, anticipate and address student difficulties, and make
connections between different mathematical ideas within the domain. To clarify how this
knowledge was unique to teachers, Ball and Bass (2003) described how even mathematicians
approach mathematical situations quite differently from how mathematics teachers would, and
though both can reach correct conclusions, there is a uniqueness to teachers’ knowledge of the
content. The authors contrast SCK and PCK in that SCK is content knowledge that does not
require teachers to rely on their knowledge of teaching or students. So even though SCK is seen
as unique to teachers it is slightly different from PCK, although the distinction is somewhat
murky. In sum, the MKT framework encompassed a wider view of CK, acknowledging the
importance of knowing the foundational concepts and procedures, as well as having a deep
understanding of specific topics and knowing how the content fits in the larger mathematical
landscape.
Pedagogical Content Knowledge
The second major component of teacher knowledge, pedagogical content knowledge, was
introduced by Lee Shulman as part of his new conception of teacher knowledge (Shulman,
1986). He described PCK as “the ways of representing and formulating subject matter that make it comprehensible to others” (Shulman, 1986, p. 9). Shulman believed that there was a unique
type of knowledge that blended both the content knowledge and pedagogical knowledge, and
which was necessary for effective teaching and used strictly by teachers. He argued that
everyday adults and even professionals who use mathematics often in their daily work would still
not need to know the nuances of how to teach the mathematics they use. Shulman’s novel
conception that teachers have a unique form of knowledge deviated from the field as a whole and
reinvigorated research on teachers’ knowledge (Depaepe et al., 2013).
Within Shulman’s conception of PCK, one major component was teachers’ knowledge of
students. Shulman (1986) argued that:
Pedagogical content knowledge also includes an understanding of what makes the
learning of specific topics easy or difficult: the conceptions and preconceptions that
students of different ages and backgrounds bring with them to the learning of those most
frequently taught topics and lessons. If those preconceptions are misconceptions, which
they so often are, teachers need knowledge of the strategies most likely to be fruitful in
reorganizing the understanding of learners. (Shulman, 1986, p. 9)
This conceptualization explicitly prioritizes teachers’ understanding of students and their
misunderstandings. It also explains that students bring preexisting skills and knowledge to the
classroom and that teachers need to be able to make sense of students’ level of understanding and
know how to continue to develop students’ understandings.
Mathematical Pedagogical Content Knowledge
PCK by its very name is rooted in specific content. Shulman (1986) described PCK
generically, yet more content-specific conceptualizations have been shown to generate more
nuanced perspectives of teaching (Mayer, 2004b). As well, research has shown that teachers
have different levels of PCK across domains (e.g., Hadfield et al., 1998; Hill et al., 2004).
Accordingly, to understand PCK, it must be situated in a specific domain. To examine how teachers' PCK, and specifically their knowledge of student thinking and misunderstandings, has been conceptualized, this paper focuses on mathematics. There are many frameworks
useful to understand teachers' knowledge of other content, specifically English and Science (e.g.,
Gess-Newsome, 1999; Grossman, 1990; Veal & MaKinster, 1999), but hereafter, PCK will refer
to mathematical PCK and the focus will be restricted to mathematics.
Conceptualizations of Pedagogical Content Knowledge
Building on Shulman’s work, many mathematics scholars have refined or adapted the
notion of PCK (e.g., An et al., 2004; Ball et al., 2008; Blum & Krauss, 2008; Fennema & Franke,
1992; Kunter, 2013; Tatto et al., 2012). Arguably the most commonly used framework in
mathematics comes from the work of Deborah Loewenberg Ball and colleagues at the University
of Michigan (Ball et al., 2008; Hill et al., 2004; Hill et al., 2005; Hill et al., 2008).
Mathematical Knowledge for Teaching
Through many years of work and studies, Ball’s research team developed the Mathematical
Knowledge for Teaching (MKT) framework, which specifically addressed the knowledge needed
for teachers to teach mathematics. This work adapted Shulman’s framework by focusing on
teacher knowledge specific to teaching mathematics and attempted to make the conceptualization
more applicable and less abstract. Their focus was comprehending the mathematical knowledge
employed in the practice of teaching and identifying the specific contexts and ways in which
teachers' mathematical knowledge is useful in the work of teaching mathematics (Hill et al.,
2008).
Within the MKT framework (Figure 1), teachers' pedagogical content knowledge is subdivided into three categories (knowledge of content and teaching; knowledge of content and
curriculum; knowledge of content and students). One component of PCK, knowledge of content
and teaching was described as "an interaction between specific mathematical understanding and
an understanding of pedagogical issues that affect student learning" (Ball et al., 2008, p. 401).
Specifically, this referred to teachers' knowledge of how to effectively plan lessons and activities
to best support students. Knowledge of content and curriculum was similar to Shulman's
curricular knowledge and underscored the importance of teachers being familiar with different teaching materials and knowing the strengths and limitations of said materials. Knowledge of content and students was described as being able to "anticipate what students are likely to think and what they
will find confusing" (Ball et al., 2008, p. 401). This means teachers should know common
misunderstandings of students for specific content and what makes problems easier or harder.
Figure 1
Domains of Mathematical Knowledge for Teaching
Note. Reprinted from “Content knowledge for teaching: What makes it special?”, by Ball, D. L.,
Thames, M. H., & Phelps, G., 2008, Journal of Teacher Education, 59, p. 403.
In a further discussion of the MKT framework (Hill et al., 2008), the researchers noted
that knowledge of content and students included how students "think about, know, or learn this
particular content" (p. 375). They reference the whole number bias, where students tend to treat
the components of fractions as whole numbers as an example of a common misunderstanding
teachers should be aware of. For instance, when adding two fractions, students often add the numerators together and then the denominators together because they inappropriately apply rules from whole numbers to fractions (Ni & Zhou, 2005). The MKT framework
acknowledges the importance of teachers understanding the specific ways in which students may
struggle or develop misunderstandings in mathematics. This includes recognizing common
misunderstandings, understanding the underlying reasoning behind those misunderstandings, and
being able to address and guide students through them. Teachers with strong MKT should be
able to anticipate and identify students' misunderstandings, diagnose their specific difficulties,
and design instructional strategies that address those misunderstandings effectively. This
knowledge allows teachers to adapt their instruction, provide appropriate scaffolding, and
facilitate meaningful learning experiences for their students, which Hill et al. (2008) argued is
distinct from content knowledge in that teachers can know the content without knowing how
students learn the content or the reverse.
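As an aside not found in the dissertation itself, the whole-number-bias error described above can be made concrete with a short computation. The sketch below, using Python's standard fractions module, contrasts the erroneous student strategy with correct fraction addition; the specific fractions chosen are purely illustrative.

```python
from fractions import Fraction

def whole_number_bias_add(a: Fraction, b: Fraction) -> Fraction:
    """Model the common student error: add numerators and denominators
    separately, treating the parts of each fraction as whole numbers."""
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

# Hypothetical example fractions (not drawn from the dissertation's items).
half, third = Fraction(1, 2), Fraction(1, 3)
print(whole_number_bias_add(half, third))  # 2/5 -- the biased answer
print(half + third)                        # 5/6 -- the correct sum
```

The biased strategy even looks plausible to students because it reuses a rule that works for whole numbers, which is why predicting the misunderstanding (PSM) is easier than explaining its conceptual source (URSM).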
The Network of Pedagogical Content Knowledge
Another model of teacher knowledge that built on Shulman’s pioneering work is from
An, Kulm, and Wu (2004), entitled “The Network of Pedagogical Content Knowledge”. Within
this framework, they subdivide PCK into three components, knowledge of curriculum,
knowledge of teaching, and knowledge of content. They define the components as:
Knowledge of content consists of broad mathematics knowledge as well as specific
mathematics content knowledge at the grade level being taught. Knowledge of
curriculum includes selecting and using suitable curriculum materials, fully
understanding the goals and key ideas of textbooks and curricula (NCTM, 2000).
Knowledge of teaching consists of knowing students’ thinking, preparing instruction, and
mastery of modes of delivering instruction (An et al., 2004, p. 147).
They acknowledge the importance of all three components but explicitly mention the core
component is knowledge of teaching. In their diagram, this central knowledge of teaching interacts with knowledge of students' thinking (see Figure 2). They make the case that to teach for understanding, students must be at the center of teachers' thinking. For instance, teachers should
attend to the challenges students are demonstrating rather than blindly following the textbook or
curriculum. To center the students in teaching practice, An et al. (2004) described four
components of knowing students’ thinking. They were, “building on students’ mathematical
ideas, addressing students’ misconceptions, engaging students in mathematics learning, and
promoting students’ thinking mathematically” (An et al., 2004, p. 148).
Figure 2
The Network of Pedagogical Content Knowledge
Note. Reprinted from “The pedagogical content knowledge of middle school, mathematics
teachers in China and the US”, by An, S., Kulm, G., & Wu, Z., 2004, Journal of Mathematics
Teacher Education, 7(2), p. 147.
The Network of Pedagogical Content Knowledge model adds value in that it clarifies what is encapsulated within knowledge of students’ thinking by providing four explicit indicators of knowing students’ thinking. Compared to the MKT framework, the Network of Pedagogical Content Knowledge includes a specific component of teacher knowledge regarding how teachers address students’ misunderstandings. This inclusion suggests that teachers’ responses are important to consider and may be a distinct type of knowledge. The application of their
framework to their research also provides insights into how it can be used to code data and assess teachers’ knowledge of students’ thinking. A unique feature of the framework is the
centering of students within the framework, indicating their explicit belief that the goal of
teacher knowledge is to improve student learning.
Situated Teacher Knowledge
Fennema and Franke (1992) proposed an earlier model of teacher knowledge tailored
specifically for mathematics teachers, known as Situated Teacher Knowledge (STK). This model
furthered Shulman’s characterization of teacher knowledge by highlighting the “interactive and
dynamic” interplay among its components, as evidenced by the double-headed arrows (see
Figure 3). Fennema and Franke (1992) emphasized that teacher knowledge is constantly
evolving and not fixed in a static state.
The four main components of mathematics teachers’ knowledge in STK are knowledge
of mathematics, pedagogical knowledge, knowledge of learners’ cognitions in mathematics, and
teachers’ beliefs (Fennema & Franke, 1992). The center triangle represents the idea that all the
components are situation-specific and thus vary depending on the context. They note that:
The context is the structure that defines the components of knowledge and beliefs that
come into play. Within a given context, teachers’ knowledge of content interacts with
knowledge of pedagogy and students’ cognitions and combines with beliefs to create a
unique set of knowledge that drives classroom behavior. (Fennema & Franke, 1992, p. 162)
This differs from the previous frameworks in the attention given to the context of each situation
and the notion that knowledge interacts with the given circumstances. Like the other
frameworks, they define knowledge of mathematics as the understanding of the procedures and
concepts of the curriculum and pedagogical knowledge as the knowledge of effective teaching
strategies and classroom routines. Knowledge of learners’ cognitions in mathematics is an
expansion of Shulman’s (1986) knowledge of students with more emphasis on student cognition
and learning theory. The category includes the knowledge of what will be challenging or easy for
students along with the knowledge of how students think and learn mathematics. Their model
grounds itself in psychology, focusing more than the other frameworks on how students make
sense of the mathematics and on the development of their knowledge. This model, like that of An
et al. (2004), also includes teacher beliefs, which these frameworks suggest impact instructional decisions.
Regarding teachers' knowledge of student thinking and misunderstandings, STK
emphasizes that teachers need to understand students’ knowledge and learning trajectory while
simultaneously developing a deep understanding of their specific students' thinking and
reasoning through direct interactions with their students. This includes recognizing and
interpreting students' misconceptions and knowing why they happen when learning mathematics.
According to the STK framework, teachers acquire knowledge of student misunderstandings
through various means, such as observing and analyzing students' work, listening to their
explanations, engaging in one-on-one interactions, and facilitating classroom discussions.
Through these interactions, teachers gain insights into the specific ways in which students make
sense of mathematical concepts and the common misconceptions they may hold.
This knowledge of student thinking and misunderstandings is seen as an integral part of
teachers’ expertise as it enables teachers to tailor their instructional approaches, anticipate and
address potential misunderstandings, and provide appropriate interventions to support students'
learning. This knowledge is not only based on a general understanding of common
misunderstandings, but it is also situated and contextually specific to the particular students and
classroom dynamics.
Figure 3
Teachers’ Knowledge: Developing in Context
Note. Reprinted from “Teachers’ knowledge and its impact”, by Fennema, E., & Franke, M. L.,
1992, In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning: A
project of the National Council of Teachers of Mathematics, p. 167.
Learning to Teach Mathematics – Teacher Education and Development Study
For the Learning to Teach Mathematics – Teacher Education and Development Study
(TEDS-M) and its pilot study, Mathematics Teaching in the 21st Century (MT21), the research
teams created nuanced frameworks for teachers’ PCK (Tatto et al., 2008). For PCK, the TEDS-M framework had three major sub-domains: mathematical curricular knowledge, knowledge of
planning, and enacting mathematics for teaching and learning (Tatto et al., 2013). Mathematical
curricular knowledge covered key ideas such as creating appropriate learning goals and
identifying key mathematical ideas in the curriculum. Knowledge of planning encompassed
selecting appropriate tasks and planning how to represent content along with predicting student
responses. Enacting mathematics for teaching and learning involved analyzing student work,
generating appropriate questions to ask students, and providing useful feedback for their further
development (Tatto et al., 2013).
The three domains in this conceptualization of PCK cover similar concepts as the other
frameworks, but knowledge of student thinking and misunderstandings is embedded within all
domains rather than being its own entity. For instance, in the TEDS-M framework there was no
core component of knowledge of students or knowledge of student misunderstandings. That said,
knowledge of planning had a specific indicator of, “predicting typical students’ responses,
including misconceptions” and enacting mathematics for teaching and learning included,
“diagnosing typical students’ responses, including misconceptions” (Tatto et al., 2008, p. 39),
which demonstrates that knowledge of student misunderstandings was nevertheless embedded
within the conceptualization. The researchers used this framework to create assessments
measuring teachers’ PCK and to systematically define the constructs, thereby bringing more
clarity to their understanding of what PCK entails.
Cognitively Activating Instruction
The Professional Competence of Teachers, Cognitively Activating Instruction, and the
Development of Students’ Mathematical Literacy (COACTIV) study also created frameworks
for mathematics teachers’ PCK (Blum & Krauss, 2008; Kunter, 2013). The conceptualization of
PCK used in the COACTIV study was again derived from Shulman’s (1986) description of
teacher knowledge, but it was designed with German secondary mathematics teachers in mind.
Within the COACTIV framework, PCK was characterized as having three dimensions:
knowledge of tasks, knowledge of students’ misconceptions and difficulties, and knowledge of
explanations and representations (Krauss et al., 2008). Knowledge of tasks was an addition to
Shulman’s conceptualization and was argued to be necessary for mathematics teachers’ knowledge in
particular because of the key role tasks play in mathematics instruction (Christiansen & Walther,
1986). Task selection is critical because there is a significant amount of time allotted to
describing, investigating, and completing tasks in mathematics classrooms which is fundamental
in creating powerful learning opportunities for students (de Corte et al., 1996; Jordan et al., 2008;
Williams, 2002). In the prior frameworks, knowledge of tasks was not its own component but
often embedded within other components such as MKT’s knowledge of content and teaching
(Ball et al., 2008) or TEDS-M’s knowledge of planning mathematics for teaching and learning
(Tatto et al., 2008).
The other two components of COACTIV's conceptualization of PCK mirrored some
components of the former frameworks. Namely, COACTIV’s knowledge of students’
misconceptions and difficulties overlapped closely with Shulman’s (1986) notion of knowledge
of students, STK’s knowledge of learners (Fennema & Franke, 1992), and MKT’s knowledge of
content and students (Ball et al., 2008), with a more direct focus on the misunderstandings
students may have. All focus on teachers understanding how students interact with the content
and acknowledging what will likely present difficulties for the students. This knowledge of
misunderstandings is a critical component of knowledge of students because errors can reveal
insight into students’ knowledge and their reasoning (Vosniadou & Verschaffel, 2004), which
teachers can capitalize on to support their students’ comprehension. COACTIV’s
conceptualization also mentions that teachers should know the “cognitive reasons for a given
problem” regarding students’ understanding, which aligns with Fennema and Franke (1992).
The final component of their conceptualization of PCK was knowledge of explanations
and representations, which related to MKT’s knowledge of content and teaching (Ball et al.,
2008) and TEDS-M’s enacting mathematics for teaching and learning (Tatto et al., 2013). For
example, these components included the ability to explain and represent the mathematical
concepts with a more pedagogical emphasis than other components of teacher knowledge. This is
important, as a large part of the act of teaching is supporting students through purposeful
choices of explanations and representations, and students most often require guidance to fully
understand a concept (Mayer, 2004a).
Conceptualization of CK and PCK for this Study
The mentioned frameworks of teachers' knowledge recognize the necessity of CK and
PCK. The way CK and PCK are conceptualized or subdivided differs, but the core understanding
that knowing mathematics and knowing how to teach mathematics are related yet distinct types
of knowledge is consistent across conceptualizations.
For this study, I used the existing frameworks of teacher knowledge and their
conceptualizations of CK and PCK to inform how the constructs were defined. As in past
studies, CK is conceived as encompassing more than just teachers’ procedural knowledge (e.g.,
Copur-Gencturk & Tolar, 2022; NRC, 2001; Tatto et al., 2008). Drawing from TEDS-M, CK
encompasses teachers “knowing” the content, being able to “apply” their knowledge to solve
problems, and being able to “reason” to solve non-routine problems and effectively justify their
choices (Tatto et al., 2008). That said, the CK measured in this study does not capture the entire
breadth of what CK encompasses, so when discussing the subset of CK that is measured in this
study, it will be referred to as grade-level content knowledge (GLCK). For this study, GLCK
captures teachers’ procedural knowledge as they solve grade-level problems while also tapping
into their ability to make sense of mathematical situations and pick the appropriate method for
routine and non-routine word problems.
Across the conceptualizations mentioned, PCK is defined as having components with
many different elements, yet some core ideas run throughout. Looking at the through lines
between conceptualizations described and drawing from Depaepe et al.’s (2013) review of how
pedagogical content knowledge has been defined in mathematics studies, I define PCK as
including teachers' knowledge of content-specific instructional strategies, knowledge of student
thinking and misunderstandings, and knowledge of the curriculum and content-specific tasks.
Knowledge of content-specific teaching strategies is conceived such that teachers with strong
PCK have a repertoire of effective instructional strategies that are tailored to the particular
content they are teaching. They understand how to present and explain concepts in ways that
facilitate student learning and address common misconceptions. This overlaps with knowledge of
content and teaching from Ball et al. (2008), knowledge of teaching from An et al. (2004),
knowledge of enacting mathematics for teaching and learning from TEDS-M (Tatto et al.,
2013), and knowledge of explanations and representations from COACTIV (Krauss et al., 2008).
Knowledge of student thinking and misunderstandings is understood as a teacher’s awareness
of the typical ways in which students understand and misunderstand specific content areas. It
includes making sense of what students understand from their explanations or their work, as well
as whether teachers can anticipate student difficulties and diagnose the reasons for the
misunderstandings. Ball et al.’s (2008) knowledge of content and students, An et al.’s (2004)
knowledge of student thinking, and COACTIV’s knowledge of students’ misconceptions and
difficulties (Krauss et al., 2008) all cover similar ideas, where teachers need to have a strong
grasp on their students’ mathematical abilities and the typical challenges students may face.
Knowledge of the curriculum and content-specific tasks is in line with Shulman (1986) and
Ball et al.’s (2008) curricular knowledge, TEDS-M’s mathematical curricular knowledge (Tatto
et al., 2013), and COACTIV’s knowledge of tasks (Krauss et al., 2008). It involves teachers not
only being familiar with a variety of instructional materials, resources, and technologies that can
support student learning but also having the ability to select and adapt these materials to meet the
needs of their students and align them with the curriculum goals.
These consistent components of PCK highlight the complex interplay between content
knowledge, pedagogical strategies, and an understanding of students' thinking. They reflect the
holistic nature of teachers' knowledge and the multifaceted expertise required to promote
effective teaching and learning in specific content areas.
Conceptualization of Teachers’ Knowledge of Student Misunderstandings
Although all the components of PCK are important, for this study I focus on a particular
subcomponent of PCK: teachers’ knowledge of student misunderstandings (KOSM). KOSM is
repeatedly incorporated into the frameworks of teacher knowledge as part of teachers’
knowledge of students or student thinking, so when considering my conceptualization of PCK, I
situate KOSM within the broader PCK component of teachers’ knowledge of student thinking and
misunderstandings.
To flesh out the conceptualization of KOSM, I incorporated Even and Tirosh’s (1995)
framework for analyzing and understanding teachers’ knowledge of student thinking into my
conceptualization to define the depth of this component. Rather than define the parts of PCK,
Even and Tirosh distinguished teacher knowledge as either “knowing that” or “knowing why”
(Even & Tirosh, 1995, p. 7-10). They define “knowing that” in the context of teachers’
knowledge of students as, “research-based and experience-based knowledge about students’
common conceptions and ways of thinking in the subject-matter” (Even & Tirosh, 1995, p. 17).
“Knowing why” is defined as the “general knowledge about possible sources of these
conceptions, and also to the understanding of the sources of a specific student’s reaction in a
specific case” (Even & Tirosh, 1995, p. 17). Even and Tirosh (1995) discussed the necessity of
teachers “knowing that,” but also its insufficiency. Their framework focuses on understanding
teachers’ knowledge of students as intrinsically related to knowing the content and students at a
deep level so that teachers can anticipate challenges and truly understand the “why” of the math,
which aligns with policy documents advocating for teachers’ coherent and conceptual
understanding of the content (NRC, 2001). By incorporating this framework of “knowing that”
versus “knowing why,” I was able to consider not only the scope of what is included in teachers’
knowledge of student thinking and (mis)understandings, but also the depth at which it is
considered and assessed.
Thus, for this study, KOSM is operationalized as comprising teachers’ ability to “know that”
and “know why” for student misunderstandings. Specifically, this means teachers being able to
(1) name and predict common student misunderstandings, which will be referred to as predicting
student misunderstandings (PSM; Ball et al., 2008; Krauss et al., 2008; Shulman, 1986; Tatto et
al., 2008), and (2) provide sound mathematical reasons for such misunderstandings, which will be
referred to as understanding the reasons for students’ misunderstandings (URSM; Even & Tirosh,
1995; Fennema & Franke, 1992; Krauss et al., 2008).
In practice, KOSM has been operationalized as the ability of teachers to accurately predict
common student misunderstandings, which only addresses the first component of this
conceptualization of KOSM (Chen et al., 2020; Hill & Chin, 2018; Sadler et al., 2013). URSM is
a necessary component of teachers’ knowledge of student misunderstandings when considering
whether teachers “know why” (Even & Tirosh, 1995) and has been included in subcomponents
of PCK focusing on students’ thinking (Fennema & Franke, 1992; Krauss et al., 2008). PSM
alone can only indicate whether teachers are familiar with misunderstandings without assessing
their knowledge of the underlying mathematical reasons leading to the misunderstanding,
suggesting they are distinct components of teachers’ knowledge of student misunderstandings.
Additionally, previous research has shown empirically that URSM is related to teachers’
reported instructional responses to remediating the misunderstandings (Copur-Gencturk et al.,
under review), making it an important component to consider.
Ratios and Proportional Relationships
Since student misunderstandings are content specific, KOSM must also be content
dependent. This study is situated in ratios and proportional relationships because these
mathematical concepts are foundational topics for middle school mathematics (Cai & Sun, 2002;
Lamon, 2020; Lobato et al., 2010; NRC, 2001; Thompson & Saldanha, 2003) and they represent
a transition from additive to multiplicative reasoning which is often challenging for both students
and teachers (e.g., Copur-Gencturk et al., 2022a; Izsák & Jacobson, 2017; Orrill & Brown 2012;
Post et al. 1988; Sowder et al., 1998; Weiland et al., 2019). Policy and standards documents,
such as The National Council of Teachers of Mathematics (NCTM, 1989, 2000) and the National
Governors Association Center for Best Practices & Council of Chief State School Officers (2010) have
also advocated that these topics are “of such great importance that it merits whatever time and
effort must be expended to assure its careful development” (NCTM, 1989, p. 82). Such
importance is placed on these topics because they are necessary for later success in advanced
science or mathematics (Beckmann & Izsák, 2015; Lobato & Ellis 2010; Siegler et al., 2012).
Although these topics are important, many adults are not fluent in these concepts (e.g., Brown et
al., 2020b; Capon & Kuhn, 1979; Izsák & Jacobson, 2017; Lamon, 2007), making it a critical
area for research.
There are many definitions for these concepts (e.g., Beckmann & Izsák, 2015; Copur-Gencturk et al., 2022a; Lamon, 2005; Thompson & Thompson, 1994; Weiland et al., 2021), so
how they are conceptualized for this study is described below. A widely referenced definition of
ratios comes from the Common Core State Standards (CCSS) progression documents which
notes, “a ratio associates two or more quantities. A ratio is a pair of non-negative numbers, A:B,
which are not both 0.” (CCSS Writing Team, 2022, p. 3). The progression document (2022)
distinguishes ratio and the value of a ratio, which is “the quotient of A/B” (p. 168). Lamon
(2020) defines ratios as, "an ordered pair that conveys the relative sizes of two quantities. A ratio
may compare measures of two parts of the same set (a part-part comparison) or the measures of
two different quantities" (p. 31). Thompson and Thompson (1994) defined a ratio as a
multiplicative comparison of two quantities, distinguishing it from a rate, which was the
"reflectively abstracted constant ratio" (p. 192). These three definitions all acknowledge the
comparison of two quantities but have different levels of specificity. Thompson references the
multiplicative nature of ratios while Lamon discusses the types of ratios, and the CCSS do
neither. For this paper, ratios are defined similarly to Thompson and Thompson, as a comparison
of quantities where there is a multiplicative relationship between the quantities that is invariant
even as the quantities vary.
Proportional relationships are defined by building on Weiland et al.'s (2021) framework
for a robust understanding of proportional reasoning and Copur-Gencturk et al.'s (2022a) work
on teachers' proportional reasoning levels. First, a proportional relationship is understood as a
multiplicative relationship between two quantities (Copur-Gencturk et al., 2022a; Lamon, 2007;
Lobato & Ellis, 2010; Sowder et al., 1998; Van Dooren et al., 2018). Within this relationship
between quantities, teachers must attend to the covariance, meaning they can identify the
relevant quantities and understand how they change in relation to one another. Second, they must
understand the invariance of quantities, such that there is a third abstractable quantity, known as
the intensive quantity (Thompson & Thompson, 1994). For instance, imagine juice made of
water and concentrate. If the ratio of water to concentrate is maintained in a proportional
relationship, regardless of the amount of each quantity, the taste is invariant, making taste the
intensive quantity.
Having defined the theoretical frame (CK, PCK, and specifically KOSM) and the way
that ratios and proportional relationships are defined for this study, I review the literature on
teachers’ knowledge of student thinking and misunderstandings for ratios and proportional
relationships and detail a study I conducted to investigate this knowledge and understand how it
relates to instruction and student learning.
CHAPTER TWO: LITERATURE REVIEW
As a core component of teachers’ PCK, KOSM has strong theoretical importance.
Similarly, ratios and proportional relationships are a centerpiece of middle school mathematics
curricula, yet they pose well-documented challenges. In this chapter, I review the literature
investigating teachers’ knowledge of GLCK and KOSM, namely PSM and URSM, for ratios and
proportional relationships. There were no studies that investigated all three components
simultaneously, and no studies explicitly measured teachers’ PSM for ratios and proportional
relationships. Therefore, the chapter is organized around the three knowledge components,
synthesizing the studies that have captured GLCK, PSM, and URSM for ratios and proportional
relationships, with the exception of PSM, for which I review studies outside of this content area.
Teachers’ GLCK
The studies examining teachers' GLCK about ratios and proportional relationships have
repeatedly shown how challenging the content can be (Ayan & Isiksal-Bostan, 2018; Copur-Gencturk et al., 2022a; Cramer & Lesh, 1988; Ölmez, 2022; Pitta-Pantazi & Christou, 2011; Post
et al., 1988; Simon & Blume, 1994; Van Dooren et al., 2005; Weiland et al., 2019). Many of the
difficulties teachers show mirror those faced by students (e.g., Akar, 2010; Harel & Behr, 1995;
Orrill & Brown, 2012; Post et al., 1988; Riley, 2010). Understanding the covariance and
invariance of two quantities in a proportional relationship, rather than focusing on the additive
comparison (e.g., Copur-Gencturk et al., 2022a; Lamon, 2007; Lobato et al., 2010; Orrill &
Brown, 2012; Simon & Blume, 1994) and determining if situations are directly proportional
(e.g., Arican, 2019; Cramer et al., 1993; Fisher, 1988; Izsák & Jacobson, 2017; Masters, 2012)
are two common challenges with the content. Teachers also tend to rely on procedural strategies
to solve problems, such as using cross multiplication or keyword hunting, which likely contributes to the
common errors (e.g., Berk et al., 2009; Fisher, 1988; Harel & Behr, 1995; Orrill & Brown,
2012). Much of this work has been conducted with pre-service teachers (PSTs; e.g., Arican,
2018; 2019; Izsák & Jacobson, 2017; Ölmez, 2022; Son, 2013) who have different experiences
and relationships with the content compared to in-service teachers (ISTs; Copur-Gencturk &
Doleck, 2021). So, while prior work suggests a clear difference in this knowledge for ISTs is
likely (Copur-Gencturk et al., 2022a), it is worth investigating further.
Similarly, the research on teachers’ knowledge of ratios and proportional relationships has also
largely investigated the strategy types that teachers use (Arican, 2018; Fisher, 1988; Livy &
Vale, 2011), which may not reflect teachers’ level of understanding (Copur-Gencturk et al.,
2022a), making this content worthwhile to explore more deeply.
Teachers’ PSM
When searching the literature, I did not find any studies investigating whether teachers
could predict the common misunderstandings (PSM) for ratios and proportional relationships. In
response, I explored the literature on teachers’ PSM outside of ratios and proportional
relationships to provide more empirical evidence for its importance and to understand how it has
been measured.
The studies that have measured teachers’ PSM have relied on multiple-choice assessments
in which teachers’ PSM is measured by their ability to predict the most common student
misunderstanding for multiple-choice items. For example, Sadler, Sonnert, Coyle, Cook-Smith,
and Miller (2013) surveyed 219 in-service physical science teachers and 9,556 students to
understand teachers’ KOSM for physical science. They first asked teachers to answer the
multiple-choice items, then select the most likely incorrect option students would choose. The
first item of each item set was used to measure the teachers’ GLCK and the second item was the
measure of PSM. Sadler et al. (2013) used multilevel modeling to relate PSM and student
performance and found a positive relation. Chen, Sonnert, Sadler, and Sunbury (2020) used the
same methodology but within the domain of biology. Their sample included 79 biology teachers
and more than 2,700 high school students. Chen et al. (2020) found that students who had
teachers with high levels of both GLCK and PSM demonstrated approximately twice as much
growth on the post-test compared to students whose teachers had high levels of only GLCK, only
PSM, or neither.
To construct their instruments, Sadler et al. (2013) and Chen et al. (2020) only included
PSM items in their analyses that demonstrated “strong misunderstandings” (Sadler et al., 2013).
Only including items with strong misunderstandings was important as KOSM for common
misunderstandings appeared to be more salient for student learning. Items were considered to
have a strong misunderstanding if over 50% of the students who answered incorrectly chose the
same incorrect response. Since Sadler et al. (2013) and Chen et al. (2020) collected student data, they used that
student data to test whether their items demonstrated these strong misunderstandings. For
instance, if 60 students out of 100 responded with the incorrect answer, and 32 of those incorrect
responses selected response B, the item would be considered as having a strong
misunderstanding. If no incorrect answer received 50% or more of the incorrect choices, then the
item was not considered to have a strong misunderstanding and was not included in their
analysis.
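To make this screening rule concrete, it can be sketched in code. This is an illustrative sketch of the criterion as described here, not the analysis code used by Sadler et al. (2013) or Chen et al. (2020); the function name and example data are hypothetical.

```python
from collections import Counter

def has_strong_misunderstanding(responses, correct):
    """Return True if over half of the incorrect responses share one distractor.

    responses: each student's chosen option, e.g. ["A", "B", "C", ...]
    correct: the correct option for the item.
    """
    incorrect = [r for r in responses if r != correct]
    if not incorrect:
        return False  # no incorrect responses, so no dominant distractor
    _, top_count = Counter(incorrect).most_common(1)[0]
    return top_count / len(incorrect) > 0.5

# The worked example above: 100 students, 60 incorrect, and 32 of those
# incorrect responses chose option "B" (32/60 ≈ 53% of the incorrect choices).
responses = ["C"] * 40 + ["B"] * 32 + ["A"] * 18 + ["D"] * 10  # "C" is correct
print(has_strong_misunderstanding(responses, "C"))  # True
```

Under this rule, an item whose incorrect responses are spread evenly across distractors would not count as having a strong misunderstanding and would be excluded from the analysis.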
In contrast to the significant results of Sadler et al. (2013) and Chen et al. (2020), Hill and
Chin (2018) found less conclusive evidence for the relationship between PSM and student
achievement. Hill and Chin (2018) worked with 284 fourth and fifth grade mathematics teachers
and over 9,500 students. Their measure captured PSM in the same way as the prior studies with
teachers answering GLCK questions then anticipating the common misunderstanding (PSM).
When running their models, they found no statistically significant relationship between PSM and
student achievement. They add the caveat that their results may deviate from other studies
because they, “had trouble finding research-based student misconceptions at our grade levels and
thus constructed some items from “craft knowledge,” perhaps leading to the low observed score
reliability and weak connections to practice” (Hill & Chin, 2018, p. 1105). Thus, the
insignificance and deviation from the results of Sadler et al. (2013) and Chen et al. (2020) could
be due to the lack of items with theoretically and research-grounded misunderstandings.
Since two of the three studies reported significant relationships between teachers’ PSM
and student achievement and considering PSM’s role as a key component of KOSM, it presents a
promising opportunity to explore PSM in this content area and its potential relationship with
student learning. Additionally, no work has been done exploring PSM for ratios and proportional
relationships, and this content has clear research-based misunderstandings, making it ideal
content to use.
Teachers’ URSM
The last component of KOSM, URSM, has only been preliminarily explored for ratios
and proportional relationships. Hines and McMahon (2005) and Jacobson et al. (2018) focused
on the process of teachers’ understanding of students’ reasoning to capture URSM. To elaborate,
Hines and McMahon (2005) presented PSTs with five different problems, each with between two
and seven student work samples, and had them rank the student solutions based on
“developmentally advanced reasoning” (p. 90) and explain their decisions. The 11 PSTs were
explicitly told not to grade them, but to sort based on the reasoning demonstrated by the students.
As the student work samples involved solutions showing research-based misunderstandings, the
study captured URSM as the teachers explained how they ranked the samples which often
involved explaining what led to the misunderstanding.
Hines and McMahon (2005) found that PSTs were not very familiar with students using
additive solutions to proportional situations. Six of 11 teachers believed the student work which
incorrectly used additive reasoning was unexplainable, meaning that these teachers could not
conceive of how the student had gotten to the error (Hines & McMahon, 2005). This
demonstrated the challenge that teachers have with “knowing why” students use different
approaches or what is underlying their thinking (Even & Tirosh, 1995) and demonstrated that the
teachers lacked URSM for the common misunderstanding of additive reasoning for proportional
relationships. Given that additive reasoning is well known in the literature as a struggle for
students, it was surprising to see that many teachers faced difficulties when identifying student errors
and providing mathematical reasons for their occurrence. Furthermore, Hines and McMahon
(2005) also found that teachers were often unable to make sense of student answers even if the
strategies were correct, again showing an opportunity to develop teachers’ URSM, or in this
case, their understanding of the reason for the understanding rather than the misunderstanding. If students did
not follow the typical algorithm or procedure, teachers often said the students’ response was not
as developmentally advanced even if it required more sophistication than the conventional
strategy.
Along similar lines, Jacobson, Lobato, and Orrill (2018) gave 12 teachers a problem with
two student solutions that were written explanations of each step of the students' solving process.
Teachers were interviewed and asked, "What do you think of the following work of one of this
teacher's students?" (Jacobson et al., 2018, p. 1548) along with prompting questions to elicit their
URSM (Jacobson et al., 2018, p. 1542). They analyzed the reasoning teachers used as the
teachers assessed the students' solutions, specifically investigating if and how teacher reasoning
changed during the analysis of the same student's work as the teacher made sense of each
subsequent step of the task. They also were investigating if teacher reasoning was consistent
across two different students’ work given that they were cognitively similar situations. The study
by Jacobson et al. (2018) is unique in that it was the only study that used a sample of ISTs, which
they mention was intentional to maximize the probability that teachers would be able to reason
deeply about the student work.
Jacobson et al. (2018) reported results similar to Hines and McMahon's (2005): even
though nine of the 12 ISTs were able to accurately evaluate the student work for
correctness, they often had trouble making sense of the student reasoning, meaning they lacked
deep URSM. These results illustrate the challenge teachers have with URSM for ratios and
proportional relationships, and Jacobson et al.’s finding that teachers were able to effectively
evaluate work but not provide a rationale may suggest that URSM is a distinct type of
knowledge.
Studies Measuring Multiple Knowledge Components
The sections above touched on studies that largely focused on measuring teachers’
GLCK, PSM, or URSM independently. That said, a handful of studies captured multiple of these
components, and these are the most closely related to the current study.
Studies that measured multiple components predominantly measured GLCK along with
something akin to URSM, though again often without an explicit question about the reasons
leading to the misunderstanding. These studies primarily had teachers analyze student work and
respond to different questions to tap into different components of their knowledge.
The measurement of GLCK involved having teachers provide the correct answer to an
item that was later used to measure other knowledge components when teachers analyzed
student work for the same item. For instance, Buforn et al. (2020) and Son (2013) both had their
PSTs provide the correct answer to the question, as a measure of their GLCK, before analyzing
the student work. Buforn et al. (2020) used a single ratio comparison problem and Son
(2013) used an item about scaling a rectangle. Masters (2012) did not have teachers answer the
questions before analyzing the student work and instead provided six items where the ISTs
determined whether the situations were proportional or nonproportional. These questions
measured GLCK because teachers were only selecting an answer to routine situations (i.e., determining if
a table represented a directly proportional situation).
These studies demonstrated that although teachers have some GLCK, there are
opportunities to improve. For instance, Masters (2012) found that the average score for ISTs on
the ratio and proportional relationship items was ~73% and Buforn et al. (2020) found that 65 of
91 teachers (71%) could solve the standard ratio comparison problem. Son (2013) found better
results when it came to PSTs identifying additive situations. Approximately 88% of the sample
correctly answered the problem, indicating that they recognized that similar figures demonstrate
a multiplicative relationship rather than an additive one.
After answering the item themselves, teachers in these studies were often asked to
analyze student work on the same item. Fernández, Llinares, and Valls (2013) had 39 PSTs
analyze four items, each with six samples of written student work, by answering three questions. The
questions were 1) What did the student do? 2) What do you know about the student's
understanding of the mathematical topic? 3) What would you do next? All three questions
measured teachers’ PCK, namely the teachers’ knowledge of student thinking and teachers’
knowledge of content-specific instructional strategies. Relatedly, Son (2013) had PSTs analyze
student work to the same question they had just answered, measuring their knowledge of student
thinking, and decide on their follow-up instructional response, measuring their knowledge of
content-specific instructional strategies. A similar questioning sequence was also used by Buforn
et al. (2020) but the PSTs were asked how they would modify the problem to help students
continue their development instead of what they would do next as the teacher, which taps into
teachers’ PCK but not their KOSM. These questions targeting teachers’ PCK require a different
type of knowledge than the GLCK questions, as they inherently involve
understanding something about the students or the design of the task along with an understanding
of the mathematics. Analyzing student solutions necessitates making sense of the mathematics the
student did and considering how that approach demonstrates some amount of knowing, which is
quite different from the knowledge needed to answer the problem. All three of these studies
explicitly investigated ratios and proportional relationships through teachers' analysis of student
work and used samples of PSTs.
Fernández et al. (2013) grouped teachers into four levels of understanding of student
thinking based on their responses to the three questions. The lowest level included teachers who
were inaccurately interpreting student work, meaning that they said students solved the problem
incorrectly, when in fact the student work was correct. The highest level included teachers who
were able to effectively interpret student work and identify the key mathematical elements of the
student’s understanding. They found that 25 of the 39 PSTs were in the lowest level and only
three in the highest level (Fernández et al., 2013), suggesting that teachers have opportunities to
continue to develop their PCK for ratios and proportional relationships. Buforn et al. (2020)
demonstrated a similar pattern where more than half of the PSTs only provided general
comments when asked about the mathematical understandings that can be derived from the
student's work. An example teacher response was, "Student 1 solved the problem correctly.
Student 2 solves the problem correctly, using the idea of proportionality" (Buforn et al., 2020, p.
434). Even though this PST was able to correctly identify whether the students solved correctly,
when asked to analyze the student's work, there was no attempt to address the mathematical
understandings the student does or does not demonstrate. Thus, these PSTs showed a focus on
correctness and a possible gap in their ability to make sense of student work and provide
mathematical reasons for students’ understanding or lack thereof, an ability closely related to URSM.
After the PSTs in Son’s (2013) study answered the question themselves, they assessed student
work on the same problem. The student work incorrectly used an additive strategy to solve the
problem when it was a proportional situation, and of the PSTs who answered the initial problem
correctly, only 64% saw the student error as conceptually based (results were not provided for
the seven teachers who did not answer the initial GLCK question correctly). An example of the
procedure-based response was, "What Sally did was determine a 6 - 4 = 2cm difference then add
10cm + 2cm = 12cm. This is incorrect not setting up a ratio." (Son, 2013, p. 59). The teacher was
able to identify the error but as with the PSTs in Buforn et al.’s (2020) study, they did not attend
to the underlying mathematical issue, which in this case was that similar figures need to change
by the same factor. The studies show a range of PCK ability, particularly in making
sense of the underlying mathematical reasons behind student work, but as a whole they demonstrate
that none of the three samples had a high percentage of teachers who were able to diagnose the
underlying reasons leading to student responses. They also point to the unique understandings
gathered from having teachers explore students’ thinking. As in Son’s (2013) study, even
teachers who demonstrated GLCK encountered challenges when providing a reason for the
misunderstanding, reinforcing that the items target different components of knowledge and provide
different information. GLCK captures teachers’ computational ability,
while analyzing student understanding provides a look at how teachers make
sense of how students interact with the mathematics.
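The additive error in Son's (2013) scaling item can be made concrete with a short sketch. The function names are my own illustration; the side lengths (4 cm, 6 cm, and 10 cm) come from the quoted teacher response above.

```python
def additive_strategy(old_side, new_side, other_side):
    """The common error: add the same difference to every side."""
    return other_side + (new_side - old_side)

def multiplicative_strategy(old_side, new_side, other_side):
    """Correct reasoning: similar figures change by the same factor."""
    return other_side * (new_side / old_side)

# One side of the rectangle grows from 4 cm to 6 cm; what about the 10 cm side?
print(additive_strategy(4, 6, 10))        # 12, the incorrect additive answer
print(multiplicative_strategy(4, 6, 10))  # 15.0, the proportional answer
```

The two functions return different answers precisely because the relationship is multiplicative, which is the key mathematical idea a concept-based response would name.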
Relating Teachers’ Knowledge of Student Thinking and Misunderstandings to External
Outcomes
Across these studies, with the exception of PSM, which has not been measured for ratios
and proportional relationships, there is a consistent trend. These studies aim to understand what
knowledge teachers have but overlook the connection between teachers’ knowledge and external
outcomes. Since the ultimate goal of educational research is to improve students’ educational
outcomes, it is imperative to understand how teachers’ knowledge relates to key outcomes. The
prior work has established the importance of GLCK, PSM, and URSM for ratios and
proportional relationships. The next step is connecting this knowledge to crucial outcomes to
determine which knowledge types should be prioritized for teachers, considering their limited
time for training and development.
Fernández et al. (2013) did ask teachers how they would respond to the student work but
did not report their results, so the relationship between URSM and instructional response was not
specified. Buforn et al. (2020) had teachers report how they would change a task given a
student’s misunderstanding and found that 33% reported a change with a conceptual focus,
49% a generic edit, and 18% a nonsensical change to the problem, but they
did not test the relationship between URSM and the reported task change. Similarly, Son (2013)
had teachers report how they would support the hypothetical student in overcoming the
misunderstanding, finding that 32% reported a conceptual instructional strategy, 56% a
procedure-based one, and 12% a misguided strategy. Yet, Son (2013) did not test the relationship between
URSM and the instructional response.
Only one study investigating teachers’ URSM of ratios and proportional relationships
related this knowledge to an external outcome measure. Masters (2012) used pre-existing data to
assess the relationship between teachers’ URSM of ratios and proportional relationships and
student achievement. In the study, 137 eighth-grade ISTs took an initial survey that had 10
proportional relationship items; as mentioned, six items measured teachers’ GLCK by
identifying whether a situation was proportional or nonproportional. The remaining four items
measured teachers’ URSM and had teachers examine student work to determine if the student
was correct or incorrect and then to explain why they knew the student was correct or incorrect.
This is similar to how URSM is conceptualized for this study, but the emphasis in Masters
(2012) was not as explicitly on the mathematical reasons underlying the common error and more
focused on how teachers came to their conclusion of correctness.
From the original sample of 137 teachers, 43 administered a pre- and post-test to
their 1,090 students so that the relationship between teacher knowledge and student achievement
could be examined. In the original study, it was not clear how the 43 teachers of the subsample
were selected and whether the student pre- and post-tests used the same items or just the same
number of items, leading to some uncertainty around the sample and interpretation of the results.
To analyze this data, Masters (2012) used a two-level hierarchical linear model (HLM) to test the
relation between teachers’ knowledge and student post-test scores. Teacher knowledge was not
statistically significant at the 0.05 alpha level (b = 0.01, p = 0.21; Masters, 2012).
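Masters's (2012) exact model cannot be reconstructed here, but the logic of a two-level HLM, with students at level 1 nested within teachers at level 2, rests on partitioning outcome variance between the two levels. A minimal sketch of that variance decomposition, using invented scores purely for illustration, is:

```python
from statistics import mean, pvariance

# Hypothetical post-test scores for students nested within teachers
# (invented numbers purely to illustrate the two-level data structure).
classes = {
    "teacher_A": [62, 70, 68, 75],
    "teacher_B": [55, 58, 60, 57],
    "teacher_C": [80, 78, 85, 82],
}

all_scores = [s for scores in classes.values() for s in scores]
grand_mean = mean(all_scores)

# Between-teacher variance: how far classroom means sit from the grand mean
between = pvariance([mean(scores) for scores in classes.values()], grand_mean)
# Within-teacher variance: average spread of students around their own class mean
within = mean(pvariance(scores) for scores in classes.values())

# The intraclass correlation, the share of variance at the teacher level,
# is what motivates entering teacher knowledge as a level-2 predictor.
icc = between / (between + within)
print(round(icc, 2))  # 0.9 for these invented scores
```

This is only the variance-partitioning intuition; the actual model regresses student post-test scores on teacher knowledge at level 2, which the simple decomposition above does not do.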
Masters (2012) chose to create a single measure of teacher knowledge, which conflated
teachers’ GLCK and URSM. The four items in which teachers analyzed student work to
determine correctness and provide a reason for their decision were coded as correct or incorrect
and summed with scores on the six GLCK items to create a measure of teacher knowledge.
Thus, the knowledge score was driven more by GLCK than by PCK, since six of the ten items
measured GLCK.
Literature Summary
The literature on teachers' knowledge of student thinking and misunderstandings of ratios
and proportional relationships demonstrated that teachers can often answer grade-level questions
about ratios and proportional relationships (Buforn et al., 2020; Masters, 2012; Son, 2013) but
may be unaware of the reasons for common student misunderstandings or unable to make sense of
student work (Hines & McMahon, 2005; Jacobson et al., 2018; Son, 2013), meaning they may
lack URSM, although it has not been measured explicitly. The studies specifically measuring
PSM did not focus on ratios and proportional relationships, yet they have shown somewhat positive
relationships between PSM and student learning, suggesting it may be an important component of
teacher knowledge (Chen et al., 2020; Sadler et al., 2013).
Most of these studies have been qualitative in nature and used small convenience samples
of PSTs to understand teachers' knowledge of students' mathematical thinking of ratios and
proportional relationships. This may be a concern, as PSTs have not been classroom teachers and
have been shown to be demonstrably different from ISTs, suggesting that their knowledge of
students' thinking may differ from that of ISTs, who have experience teaching students and
engaging with misunderstandings (e.g., Charalambous, 2016; Kleickmann et al., 2013).
Similarly, most of the studies exploring URSM have been more exploratory in nature, meaning
that the relationship between URSM and important outcomes such as student learning has been
largely unaddressed.
Although there is a theoretical foundation arguing the importance of this type of teachers'
knowledge, empirical evidence demonstrating the impact is lacking, which I aim to address.
Thus, I investigate what GLCK, PSM, and URSM in-service middle school mathematics teachers
have and test the relationship between these knowledge components and key educational outcomes
such as instructional quality and student learning.
CHAPTER THREE: METHODOLOGY
Based on the literature reviewed, there is a need to more clearly understand teachers’
PSM and URSM within the realm of ratios and proportional relationships, particularly using
more generalizable samples of in-service teachers. There is also an opportunity to further
develop instruments to explicitly measure this theoretically relevant type of knowledge and to
see how it relates to external outcomes such as instructional quality and student achievement, so
teacher preparation and professional development programs can determine to what extent it
should be prioritized. Building from the previously mentioned frameworks of teacher knowledge
and my conceptualization of KOSM along with the prior research on teachers’ KOSM for ratios
and proportional relationships, I created a new scale to measure teachers’ GLCK, PSM, and
URSM of ratios and proportional relationships. Then I conducted two studies to investigate
whether GLCK, PSM, and URSM are unique components of teacher knowledge, what GLCK,
PSM, and URSM teachers have for ratios and proportional relationships, and the relationship
between the different components of this knowledge and three measures of instructional quality
along with student learning.
In the rest of the chapter, I describe the process of item development and delve into the
methodology of the two studies conducted with the new measure. The first study tests the newly
developed measure to understand the structure of teachers’ GLCK, PSM, and URSM for ratios
and proportional relationships and then explores these knowledge components for a national
sample of ISTs. The second study builds on the first, using the same measure with a second
sample of ISTs and their students to relate GLCK, PSM, and URSM to three metrics of
instructional quality (cognitive demand of the task, the cognitive demand of the task in the
enactment, and the coherence between how well the learning opportunities and teachers’
expectations focused on building students’ robust mathematical knowledge) and student
achievement. The prior literature investigated how teachers would edit tasks (Buforn et al., 2020)
and measured whether teachers’ instructional responses to misunderstandings were procedural or
conceptual (Son, 2013) without relating this to teachers’ knowledge, which I attempt to capture
through teachers’ task selection and enactment of the task.
Research Questions
Thus, the research questions guiding these studies are:
1. What is the structure of teachers’ GLCK, PSM, and URSM of ratios and proportional
relationships?
2. What GLCK, PSM, and URSM of ratios and proportional relationships do in-service
middle school mathematics teachers have?
3. What is the relationship between teachers’ GLCK, PSM, and URSM of ratios and
proportional relationships and their instructional quality (task selection, enactment of the
task, and coherent focus on building conceptual understanding)?
4. What is the relationship between teachers’ GLCK, PSM, and URSM of ratios and
proportional relationships and student achievement?
Construct Development
To answer my research questions, I developed a scale to capture teachers’ GLCK, PSM,
and URSM. I used the prior KOSM studies as a foundation to begin developing the scale
regardless of content because the extant literature measuring PSM and URSM of ratios and
proportional relationships is scarce. Then I relied on the prior literature of ratios and proportional
relationships to develop items with clear research-based misunderstandings.
KOSM, as mentioned previously, can be conceived as "an understanding of what makes
the learning of specific topics easy or difficult: the conceptions and preconceptions that students
of different ages and backgrounds bring with them to the learning of those most frequently
taught topics and lessons. If those preconceptions are misconceptions, which they so often are,
teachers need knowledge of the strategies" (Shulman, 1986, p. 9). Additionally, Ball et al.
defined it as being able to "anticipate what students are likely to think and what they will find
confusing" (Ball et al., 2008, p. 401). Based on the conceptualizations of teacher knowledge
discussed earlier (An et al., 2004; Ball et al., 2008; Fennema & Franke, 1992; Krauss et al., 2008;
Shulman, 1986; Tatto et al., 2008), KOSM is a theoretically important subdomain of teachers'
knowledge. Since ratios and proportional relationships have demonstrated challenges for
students and teachers alike and are considered some of the most important topics in middle
school mathematics, it is imperative to investigate this topic and doing so through the lens of
KOSM may provide new insights into how to support teachers’ and students’ development of
these challenging concepts.
Instrument Creation
For these studies, I chose to create a new scale to capture KOSM for ratios and
proportional relationships as there was no instrument that captured teachers’ PSM and URSM for
this content. The initial design was similar to prior KOSM instruments (Chen et al., 2020; Hill &
Chin, 2018; Sadler et al., 2013) where there were two multiple choice items in each item set. The
first item measured teachers’ GLCK by asking them to select the correct answer and the second
item measured their PSM by having teachers anticipate the common misunderstanding for the
proportional relationship item. Based on the theoretical framing of “knowing why” versus
“knowing that” (Even & Tirosh, 1995), I developed a third open response item for each item set.
This additional question tapping into “knowing why” is also supported by other teacher
knowledge frameworks and empirical work. For instance, Fennema and Franke (1992) include in
their conceptual framework the knowledge of learners’ cognition, which directly acknowledges
that teachers need to know the why behind students’ thinking, understanding what is leading to
the misunderstanding rather than just being able to predict the common misunderstanding.
Empirically, there is support for investigating “knowing why,” as Copur-Gencturk et al. (under
review) had teachers answer, “what do you find are typically the reasons for their [students’]
misunderstanding?” in their study of common fraction misunderstandings. They demonstrated
that teachers who could provide a conceptual reason for common misunderstandings were more
likely to remediate the misunderstanding using a specific and conceptual instructional response,
indicating that whether teachers have a mathematical reason leading to the misunderstanding is
important for their instruction. The prior studies measuring teachers’ knowledge of student
thinking and misunderstandings for ratios and proportional relationships frequently measured
URSM tangentially, such as having teachers describe what they believe the student understands
based on the student work (Buforn et al., 2020; Fernández et al., 2013) but did not relate it to
external outcomes (with the exception of Masters, 2012, who combined GLCK and PCK
components, obscuring their separate effects), so doing so could be useful in
understanding the importance of this knowledge component. Therefore, the third item for each
set of items asked teachers to provide a mathematical reason for what they believe is leading to
the common misunderstanding as it is theoretically important (e.g., Even & Tirosh, 1995;
Fennema & Franke, 1992; Krauss et al., 2008) and appears to be empirically relevant to teachers
providing conceptual instructional responses specific to the misunderstanding (Copur-Gencturk
et al., under review).
As mentioned, one issue that arose for Hill and Chin (2018) was that their items did not
have theoretically grounded common misunderstandings. They surveyed students and were able
to remove items that did not demonstrate a “strong misunderstanding,” but initially, their items
did not include questions that specifically targeted common misunderstandings. To address this,
the items included in the survey for this study were related to specific misunderstandings that are
commonly identified by teachers and in the literature. For example, problems that look
proportional but are nonproportional, such as linear situations, are widely regarded as causing
common misunderstandings because they are treated like directly proportional situations (e.g.,
Van Dooren et al., 2005). In addition to creating items based on the prior research-based
misunderstandings, approximately half of the items in this study were previously administered to
students and demonstrated strong misunderstandings using the criteria from the previous studies.
The remaining items were tested with teachers and shown to have acceptable psychometric
properties (Copur-Gencturk et al., in progress). For a full list of the items, see Table A1 in the
Appendix A.
Items
For the first item of each item set, the teachers (1) selected the correct answer to the
question as a measure of their GLCK (see Figure 4). These items were multiple choice, as in the
previous studies, and were created to capture students’ common research-based
misunderstandings for ratios and proportional relationships. Each item had one correct answer
and the remaining multiple-choice answers were created to represent viable incorrect strategies
to solve the problem. These items are referred to as the GLCK items for the remainder of the
paper.
Figure 4
Example GLCK Item
Sara and Lucy both work at the local department store and make the same amount of money per
hour. Sara starts working before Lucy, so when Sara made $25, Lucy had only made $8. When
Sara makes $75, how much money will Lucy have made? (assume they do not take breaks)
________________________________________________________________________
$24
$25
$50
$58
I don’t know
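The arithmetic behind the answer choices in Figure 4 can be checked directly. The variable names are mine; the $24 distractor corresponds to treating the relationship as directly proportional.

```python
sara_then, lucy_then, sara_now = 25, 8, 75

# Correct: equal hourly rates mean the $17 gap between them stays constant.
lucy_now = lucy_then + (sara_now - sara_then)

# Common error: treating the earnings as directly proportional (8 scaled by 3).
proportional_answer = lucy_then * (sara_now / sara_then)

print(lucy_now)             # 58
print(proportional_answer)  # 24.0
```

The situation is linear but not proportional, so the additive strategy is correct here, the reverse of the similar-figures item discussed earlier.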
Then teachers were asked, (2) “What would you anticipate being the most common
incorrect answer students would come up with when solving the problem?” The question does
not refer to teachers’ own students, since students vary across classrooms and content. Teachers
may know their own students more intimately and be better able to gauge what those students might
struggle with, but based on the conceptualization of KOSM, teachers should know what is more or
less challenging for students generally (Ball et al., 2008; Krauss et al., 2008; Shulman, 1986).
The second question for each item is hereafter referred to as the predicting student
misunderstandings (PSM) question as it measures whether teachers can predict the common
student misunderstandings for ratios and proportional relationships.
Figure 5
Example PSM Item
Sara and Lucy both work at the local department store and make the same amount of money per
hour. Sara starts working before Lucy, so when Sara made $25, Lucy had only made $8. When
Sara makes $75, how much money will Lucy have made? (assume they do not take breaks)
_____________________________________________________________________________
What would you anticipate being the most common incorrect answer students would come up
with when solving the problem?
$24
$25
$50
$58
I don’t know
The third item of each set will be referred to as the understanding reason(s) for student
misunderstanding (URSM) item and has not been used in conjunction with multiple choice PSM
items in the prior literature for ratios and proportional relationships (see Figure 6). Teachers were
asked, (3) “What is the underlying mathematical reason leading students to the most common
incorrect answer you chose?" The question explicitly asked for a mathematical reason to guide
teachers to demonstrate their mathematical understanding rather than attending to superfluous
information such as the context of the problem or the wording of the question. This third item
was constructed response as open-ended items offer a plethora of rich data that can provide a
more detailed understanding of teachers' thinking (e.g., Copur-Gencturk, 2021; Copur-Gencturk
et al., 2022a; Creswell & Poth, 2017; Swanson & Holton, 2005; Tatto, 2013).
Figure 6
Example URSM Item
Sara and Lucy both work at the local department store and make the same amount of money per
hour. Sara starts working before Lucy, so when Sara made $25, Lucy had only made $8. When
Sara makes $75, how much money will Lucy have made? (assume they do not take breaks)
______________________________________________________________________
What is the underlying mathematical reason leading students to the most common incorrect
answer you chose? Explain your thinking.
Coding URSM Responses
A rubric was created to code teachers’ responses to the open-ended URSM items (the
questions asking for the underlying mathematical reason leading students to the most common
incorrect answer the teacher chose). A single rubric for all the URSM questions, rather than a rubric
unique to each item, was developed so that the coding across items was more consistent in terms
of what was being measured. For each specific item, decision nodes were created so that the
overarching concepts were the same but the way in which they were demonstrated was specific
to each item.
The rubrics were adapted from the “mathematical descriptive” rubrics created by Hiebert,
Miller, and Berk (2017) and the rubrics created to assess teachers’ rationale for fraction
misunderstandings (Copur-Gencturk et al., under review). The rubric was created through an
iterative process that began with a three-category scale (generic, procedural, and conceptual), but
was expanded to have seven categories. These extra categories were necessary to capture the
breadth of how teachers described the reasons leading to the misunderstandings. A sample rubric
along with examples of each URSM category is included below for the item from Figure 6.
The seven categories for teachers’ rationales spanned from providing an incorrect or no
rationale to providing a mathematically grounded rationale that addressed the key underlying
concept. None meant that a teacher did not provide a rationale for the misunderstanding and
incorrect meant that the teacher provided a rationale that was mathematically inaccurate. The
generic category encompassed nonmathematical rationales and overly broad rationales, whereas
task-specific rationales were very specific in mentioning that some component(s) of the task led
to the student misunderstandings. Procedural description meant that the teacher merely wrote
out the steps a student would take to get whatever they chose as the common incorrect answer
without discussing why they would inaccurately use that method or why it was problematic. The
next category, procedural-specific, was similar to procedural description in that the focus was
largely on the methods students might use, but for procedural-specific the rationale also had to
explain why that procedure demonstrated inaccurate thinking or the teacher had to contrast the
incorrect procedure with the correct procedure. The final category, concept-specific, meant the
teacher described the problematic nature of the students’ solution or understanding by linking
how students make sense of the situation to a key mathematical concept or by describing the key
mathematical idea that was lacking that led to the common incorrect answer.
Table 1
Rubric for Teachers’ Reason for the Student Misunderstanding (URSM)
Reason
Type
Description Example
None • There is no attempt to analyze the underlying reason
for the misunderstanding
• Reason is unrelated to their answer choice
“I don't know what mistake students would
make for this one.”
Incorrect
Reason
• Reason is correct, but teacher claims it is incorrectly
leading students to an incorrect answer
• Reason includes mathematically incorrect statements
“I think students would see 25 plus 50 is 75, so
they would just add $50 to Lucy's 8, leading to
$58. Once again, I think this is because
students don't understand that if they work at
the same hourly RATE, their payment is
proportional to the hours they work.”
Generic
Reason
• Reason is focused on nonmathematical external factors
such as parents or time spent in class
• Reason is mathematical but overly broad
• Reason is that the student guessed
“Students do not read word problems typically
and just extract the numbers to try and solve
the problems.”
Taskspecific
Reason
• Reason is specifically related to the tasks such as the
numbers, expressions, or words given in the problem
without explicitly addressing a key mathematical topic
in the issue
• The distinction between task-based and generic is that
task-based is localized and only applies for the set
values or context of the individual situation
“The underlying mathematical reason leading
students to the most common incorrect answer
I chose is that they think they need to know the
hourly pay rate.”
Procedural
Description
• Reason describes the specific actions leading to the
incorrect answer (emphasis is describing the action not
the thinking)
• Reason describes the error in the students’ procedure
(e.g., put values in the wrong place for cross
multiplication) with no attempt to explain why it is
leading students to the common incorrect answer
“The students could say that $75 and $25 is a
difference of $50, so $50 more than $8 would
be $58.”
Procedurespecific
Reason
• Reason describes the specific actions leading to the
incorrect answer AND explains why they lead to
students’ incorrect answers, or contrasts the specific
actions leading to the incorrect answer with the correct
procedure
• The underlying reasoning is the lack of key skills and
content central to the problem
“Students will assume since you have to
multiply by 3 to get from 25 to 75 then they
would do the same with 8. They would
multiply 8 x3 and get 24, instead of just taking
the difference between the starting amount and
then deduct that from the last amount.”
Concept-specific
Reason
• Reason describes the problematic nature of the
students’ solution or understanding by linking how
students make sense of the situation to a key
mathematical concept
• Reason describes key mathematical understanding(s)
or concept(s) that are lacking
“Students will assume the relationship is
proportional when in fact Sara and Lucy have
different starting points but earn at the same
rate going forward. Therefore the rate is the
same but the relationship is not proportional.”
The same process was used to code each item. The raters (the author and an expert in
mathematics education) began by coding a small number of responses together to norm on the
rubric and how to apply it to the specific item, documenting any decision points for that item.
After both coders were clear on how to code for that item and there was substantial agreement,
the raters independently coded the rest of the item. After coding all the responses, the raters met
and reconciled any differences through discussion. On average the interrater agreement was 86%
with a range from 81% to 91% (average Kappa statistic of 0.81, range of 0.74-0.88), suggesting
there was substantial agreement between the raters.
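For illustration, the agreement statistics reported above (percent agreement and Cohen’s kappa) can be sketched in Python. The rater codes below are hypothetical stand-ins for the URSM rubric categories, not the actual study data:

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of responses the two raters coded identically."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if the raters coded independently at their base rates
    p_e = sum(counts_a[c] * counts_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for ten responses (shortened rubric labels)
a = ["concept", "generic", "task", "procedure", "concept",
     "none", "generic", "concept", "task", "procedure"]
b = ["concept", "generic", "task", "concept", "concept",
     "none", "generic", "concept", "task", "procedure"]
print(percent_agreement(a, b))        # 0.9
print(round(cohens_kappa(a, b), 3))   # 0.87
```

A kappa near 0.8, as in the study, indicates agreement well beyond what the raters’ base rates alone would produce.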
After the qualitative coding was completed, numerical scores were assigned to the URSM
items based on how they were coded so they could be used in the subsequent analyses. We
scored the items on a five-point scale from 0 to 1 (0, 0.25, 0.5, 0.75, 1). None and incorrect
reason were grouped together and scored as 0 because neither provided a mathematically
accurate rationale. Generic reason and task-specific reason were grouped together and scored as
0.25 because, although they did not engage with the key mathematics of the problem, they were
plausible, which distinguished them from none and incorrect reason.
Procedural description, procedural-specific reason, and concept-specific reason were scored as
0.5, 0.75, and 1 respectively and were all kept separate. The choice to keep them separate was
largely due to the qualitative differences in the responses and prior qualitative coding schemes.
As seen in Copur-Gencturk et al. (under review), procedural and conceptual reasons for the
misunderstanding were separated as one focused on the skill and the other on the underlying
concept. This precedent along with the fact that concept-specific reasons captured rationales that
explicitly focused on understanding why a student struggled by linking the misunderstanding to
their understanding of the concept rather than the procedure, made it a unique rationale type.
Procedural description and procedural-specific reason are more related to one another in that
they both attend to the steps or approach a student could take that would lead to the common
misunderstanding. The key difference was that procedural description was heavily focused on
“knowing that” (Even & Tirosh, 1995), meaning they described a student’s action rather than the
student’s thinking. For instance, many reasons coded as procedural description were strictly a
retelling of the steps that would lead to the misunderstanding. In contrast, procedural-specific
reasons involved an added layer that moved the response more towards “knowing why” (Even &
Tirosh, 1995) by demonstrating the teacher was somewhat attending to student thinking. This
often took the form of describing the steps while also discussing why they were incorrect,
suggesting the teacher had considered the student’s thinking and diagnosed the issue as a
lack of key skills. This breakdown led to five numerical categories representing the unique
perspectives on the underlying mathematical reasons why students make common
misunderstandings.
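As a minimal sketch (not the study’s actual scoring code, and with shortened, hypothetical category labels), the scoring scheme described above can be expressed as a simple mapping:

```python
# Mapping from qualitative URSM codes to the 0-1 numerical scale described
# above; the label strings are illustrative shorthand, not the rubric's text.
URSM_SCORES = {
    "none": 0.0,
    "incorrect_reason": 0.0,            # grouped with "none": not mathematically accurate
    "generic_reason": 0.25,             # plausible but not tied to the key mathematics
    "task_specific_reason": 0.25,
    "procedural_description": 0.5,      # retells the steps ("knowing that")
    "procedure_specific_reason": 0.75,  # steps plus why they fail ("knowing why")
    "concept_specific_reason": 1.0,     # links the error to the key concept
}

def score_teacher(codes):
    """Average URSM score across a teacher's coded responses."""
    return sum(URSM_SCORES[c] for c in codes) / len(codes)

print(score_teacher(["concept_specific_reason", "generic_reason",
                     "procedural_description", "none"]))  # (1 + 0.25 + 0.5 + 0) / 4 = 0.4375
```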
Item Types
The instrument was developed specifically for proportional relationships and to cover a
breadth of proportional relationships the survey included eight item sets (three items per set)
across four item types. The four types of items were linear situations, qualitative comparisons,
additive situations, and indirect situations, with two of each item type. The item types represent
some of the common problem types for proportional relationship problems (Lamon, 2005; 2007;
Lobato & Ellis, 2010) and allowed for a broad understanding of teachers’ knowledge of the
common student misunderstandings of ratios and proportional relationships and their knowledge
of identifying these misunderstandings.
Linear situation items represented situations where there was a constant rate, but the
relationship was not directly proportional because there was some initial quantity (y = ax + b,
where a is the slope and b is a nonzero constant). Figures 4-6 show an example of a linear
situation in which there is a constant rate (how much money they make per hour), but it is not
directly proportional because Sara started before Lucy in the example. Indirect situation items
were similar in that they are often assumed to be directly proportional but instead of having a
relationship modeled by y = kx, where k is the constant of proportionality, indirect situations are
modeled by x = k/y. For both the additive items and qualitative comparison items, teachers were
given two ratios and asked to determine their relative size. The difference between these item
types was that the additive items provided values in the problems, whereas the
qualitative comparison items provided enough information to compare the ratios but did not
provide specific numbers that teachers could use to calculate. If they wanted to compute an
answer, they had to generate their own values and test them.
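The linear-versus-proportional distinction can be checked numerically. The sketch below reuses the dollar amounts from the rubric examples above ($25 and $75, with Lucy’s $8); the exact framing of the Sara and Lucy task is assumed here for illustration:

```python
def is_directly_proportional(points, tol=1e-9):
    """True if every (x, y) pair shares the same ratio y/x, i.e., y = kx."""
    ratios = [y / x for x, y in points]
    return max(ratios) - min(ratios) < tol

# Linear but NOT proportional: a constant rate with different starting amounts.
# Using the rubric's example values (hypothetical framing): Sara has $75 when
# Lucy has $25; how much does Sara have when Lucy has $8?
sara_lead = 75 - 25             # constant difference of $50
correct = 8 + sara_lead         # 58: add the fixed head start
misconception = 8 * (75 // 25)  # 24: wrongly scaling by the factor of 3

print(correct, misconception)                        # 58 24
print(is_directly_proportional([(25, 75), (8, 58)])) # False: y = x + 50, not y = kx
print(is_directly_proportional([(25, 75), (8, 24)])) # True: the misconception IS proportional
```

The common misunderstanding, applying the multiplicative factor of 3, is exactly the directly proportional reasoning that the linear items are designed to elicit.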
Study 1
Study Context
I conducted the first study to answer the first two research questions: (1) What is the
structure of teachers’ GLCK, PSM, and URSM of ratios and proportional relationships? (2)
What knowledge of student misunderstandings (PSM and URSM) of ratios and proportional
relationships do in-service middle school mathematics teachers have? Confirmatory factor
analysis (CFA) was used to test the theorized structure of teachers’ knowledge of GLCK, PSM,
and URSM of ratios and proportional relationships. Then, descriptive statistics were computed to
understand whether a national sample of in-service teachers was able to do the math correctly
(GLCK), predict the common student misunderstandings (PSM) and provide a mathematical
reason leading to the research-based misunderstandings (URSM). Specifically, I investigated the
types of knowledge teachers had and how it may have related to item type (linear situations,
qualitative comparison, additive situations, indirect situations) or question type (GLCK, PSM,
URSM).
Participants
The data for Study 1 came from a national sample of 743 in-service middle school
mathematics teachers. To gather data from the sample, 49,189 unique and publicly available
teachers’ emails were purchased from an educational database company (K12-data). Each email
was sent an invitation to complete an online survey for a larger National Science Foundation
(NSF) project investigating teachers’ knowledge of ratios and proportional reasoning (Grant
Number 1813760). Our emails had a bounce-back rate of 10.21% (5,020 emails), meaning these
emails never reached teachers’ inboxes. Teachers were required to complete the survey on a
computer, not a mobile phone, and were able to take breaks and pick up where they left off.
Approximately 29.82% of teachers opened the invitation email, and of those who opened the
email and were eligible to participate, 1,072 completed the survey, an 8.16% completion
rate. At the end of this initial NSF survey, teachers were awarded a $100 Amazon e-giftcard for
their participation.
After completing the initial NSF survey, teachers were immediately linked to the landing
page for this dissertation survey. The landing page included a brief introduction mentioning that
this was for my dissertation and that they could complete my dissertation survey for an
additional e-giftcard but were informed it was not required and would have no bearing on their
previous participation. The landing page also noted that the survey would need to be completed
in one sitting as there was no way for them to access the survey if they left the site. For the NSF
survey, there were 1072 in-service middle school mathematics teachers who completed the
survey entirely and subsequently 910 teachers moved past the landing page of my dissertation
survey. Of those, 743 of them fully completed my dissertation survey (69% of the NSF
completers, 82% of those who clicked past the landing page), which led to their inclusion in
Study 1.
In Table 2 below, the teachers in Study 1 are contrasted with the full sample from the
NSF study to see if there was attrition of a specific group based on their background
characteristics. Teachers in Study 1 were not statistically different from teachers in the full
sample regarding their advanced degree attainment, Χ²(1, N = 1072) = 0.03, p = 0.87, credential
in mathematics, Χ²(1) = 0.34, p = 0.56, and alternative certification, Χ²(1) = 1.69, p = 0.19.
There were significant differences based on gender, Χ²(3) = 9.22, p = 0.03, and ethnicity, Χ²(5) =
16.58, p = 0.01. Teacher demographics for the entire U.S. (U.S. Department of Education,
National Center for Education Statistics, 2021) are also included to determine how representative
Study 1 is compared to the US teaching force.
Table 2
Teachers’ Background Information for Study 1
Teacher Information Study 1 (%) NSF (%) U.S. (%)
Gender
Female 70.79 70.43 76.80
Male 27.46 27.33 23.20
Other 0.27 0.28 N/A
Prefer Not to Respond 1.48 1.96 N/A
Ethnicity
White 78.20 76.77 79.90
Hispanic 4.98 4.76 9.40
Black 5.25 5.69 6.10
Asian 4.17 4.94 2.40
Other 3.36 3.26 2.20
Prefer Not to Respond 4.04 4.57 N/A
Master’s degree or Higher (yes) 73.89 73.97 51.20
Credential (Mathematics) 78.60 78.73 N/A
Alternatively Certified (yes) 18.44 19.03 N/A
Note. For Study 1, N = 743. For NSF Sample, N = 1,072.
Most teachers in Study 1 were White and female, which is similar to the demographics of
U.S. teachers (U.S. Department of Education, National Center for Education Statistics, 2021).
Teachers had 15.68 years of experience on average with an SD of 7.78 years. Additionally, the
majority of teachers in Study 1 had advanced degrees (73.89%) and were traditionally certified
(81.66%, versus alternatively certified).
Testing Teachers’ Knowledge Structure
Once the data were collected and the URSM items were coded, I tested the scale to
understand the nature of teachers’ KOSM. To do this, I used CFA to determine if the
URSM items represented a novel construct distinct from GLCK and PSM. I used CFA instead of
exploratory factor analysis (EFA) because of the pre-specified theoretical structure for the items.
EFA is typically used when exploring and identifying underlying factors in a dataset without a
pre-specified theory, while CFA is used to confirm or validate a hypothesized factor structure
based on prior knowledge or theoretical frameworks (Brown, 2015). As such, CFA allowed for
the testing of the factor structure, whereas EFA cannot test a specific hypothetical structure. CFA
was also used to test which items should be retained to effectively measure the constructs.
Confirmatory Factor Analysis
Before testing the structure of the items, descriptive statistics were used to identify any
items that did not fit well within the instrument. Two GLCK items (Qualitative1GLCK,
Additive2GLCK) were removed due to having extreme values of skewness and/or kurtosis. The
threshold for removal was a skewness value outside the range of −2 to 2 (Tabachnick & Fidell, 2013) and a
kurtosis value outside the range of −7 to 7 (Hair et al., 2010). The first item removed
(Qualitative1GLCK) had a skewness of -4.86 and a kurtosis of 24.57 and the second
(Additive2GLCK) also had values of -4.86 and 24.57 respectively. These two items both had
very high mean scores of 96%. This ceiling effect likely contributed to more extreme skewness
values and made them unlikely to differentiate teachers effectively.
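The screening logic described above can be sketched with a simple moment-based estimator (an illustrative simplification; the study’s software may apply small-sample corrections, so exact values can differ slightly):

```python
def moments(xs):
    """Moment-based sample skewness and excess kurtosis (biased g1, g2)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3
    return skew, excess_kurtosis

def flag_item(xs, skew_limit=2.0, kurt_limit=7.0):
    """Flag an item whose distribution breaches the cited thresholds."""
    skew, kurt = moments(xs)
    return abs(skew) > skew_limit or abs(kurt) > kurt_limit

# A near-ceiling binary item: 96 of 100 hypothetical teachers answered correctly
ceiling_item = [1] * 96 + [0] * 4
balanced_item = [1] * 50 + [0] * 50

print(flag_item(ceiling_item))   # True: ceiling effect produces extreme skew/kurtosis
print(flag_item(balanced_item))  # False
```

A binary item with a 96% mean necessarily has strongly negative skew and large kurtosis, which is why the two near-ceiling GLCK items breached both thresholds.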
The item-rest correlations were also examined to understand which items were highly
correlated with other items within the same scale. As a rule of thumb, items with an item-rest
correlation <0.20 are often removed (Mellenbergh & Van den Brink, 1998), so that was my
threshold. Two PSM items, Additive1PSM and Additive2PSM, had item-rest correlations of 0.13
and 0.08, respectively, which fell below the 0.20 threshold, so they were removed. These values suggested that
the items were not well aligned with the rest of the PSM scale and therefore not a good measure
of the underlying construct. Thus, for the following analyses, 20 items (6 GLCK items, 6 PSM
items, and 8 URSM items) were analyzed. The average score for each item along with its
standard deviation is reported in Table 3.
Table 3
Item Means and Standard Deviations
Mean SD
GLCK Items
Additive1GLCK 0.87 0.34
Additive2GLCK* 0.96 0.19
Qualitative1GLCK* 0.96 0.19
Qualitative2GLCK 0.70 0.46
Linear1GLCK 0.49 0.50
Linear2GLCK 0.80 0.40
Indirect1GLCK 0.48 0.50
Indirect2GLCK 0.66 0.47
PSM Items
Additive1PSM* 0.45 0.50
Additive2PSM* 0.82 0.38
Qualitative1PSM 0.77 0.42
Qualitative2PSM 0.72 0.45
Linear1PSM 0.53 0.50
Linear2PSM 0.70 0.46
Indirect1PSM 0.51 0.50
Indirect2PSM 0.63 0.48
URSM items
Additive1URSM 0.64 0.26
Additive2URSM 0.58 0.23
Qualitative1URSM 0.63 0.29
Qualitative2URSM 0.52 0.29
Linear1URSM 0.44 0.34
Linear2URSM 0.61 0.33
Indirect1URSM 0.52 0.33
Indirect2URSM 0.54 0.32
Note. N = 743. Each item had a range from 0-1. *Indicates the item was removed from the final
instrument.
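The item-rest screening described above can be sketched as follows; the score matrix and the flagged item are hypothetical, chosen only to illustrate the 0.20 cutoff:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def item_rest_correlations(score_matrix):
    """score_matrix[i][j] = teacher i's score on item j.
    Correlate each item with the sum of the REMAINING items on the scale."""
    n_items = len(score_matrix[0])
    out = []
    for j in range(n_items):
        item = [row[j] for row in score_matrix]
        rest = [sum(row) - row[j] for row in score_matrix]
        out.append(pearson(item, rest))
    return out

# Hypothetical 0/1 scores for eight teachers on four items; the last item is noise
scores = [
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 1],
]
flagged = [j for j, r in enumerate(item_rest_correlations(scores)) if r < 0.20]
print(flagged)  # [3]: only the noise item falls below the 0.20 threshold
```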
After removing the four poor fitting items, CFA was conducted to examine the structure
of the remaining items. Based on my conceptualization of teachers’ knowledge, I hypothesized
that the three items of each item set (GLCK, PSM, URSM) would be best modeled by three
latent factors, one each for GLCK, PSM, and URSM. GLCK is not a
part of teachers’ PCK, suggesting that it should be represented by a unique factor. According to
my conceptualization, PSM and URSM tap into different knowledge which would lead to a 3-
factor model. PSM is more aligned with “knowing that” in terms of teachers being aware of the
misunderstanding, whereas URSM connects to “knowing why” as teachers must provide a valid
mathematical reason for the misunderstanding (Even & Tirosh, 1995). These differences in
knowledge led me to believe that GLCK, PSM, and URSM would represent separate knowledge
constructs, supporting a 3-factor model.
One benefit of CFA is that it allows for the testing of specific hypothetical structures. I
tested how well the hypothesized 3-factor model fit the data and was able to directly compare the
model fit to competing theoretical models. The most compelling alternative model was a 2-factor
model where GLCK items loaded onto one factor and the PSM and URSM items loaded onto a
single separate factor. Theoretically this model was justified since PSM and URSM are both
conceptualized as part of teachers’ KOSM and as such could have been represented as a single
factor. That said, Jacobson et al.’s (2018) finding that even teachers who could correctly
evaluate student work struggled to provide valid mathematical reasoning for students’ challenges
suggests that there are differences between the knowledge involved in knowing and identifying
misunderstandings and that involved in understanding the mathematical reasons for them.
Finally, a 1-factor model was examined to compare whether all three components of
teacher knowledge might be best represented by a generic knowledge factor. This was the least
established theoretical model as previous work has shown the distinctiveness of teachers' CK and
PCK (Copur-Gencturk & Tolar, 2022), so seeing all the items best load onto one factor was
unlikely.
To account for the shared context within each set of GLCK, PSM, and URSM items, I
allowed for the correlation of the errors in the models. For instance, the GLCK, PSM, and
URSM items within the same item set (i.e., Indirect1GLCK, Indirect1PSM, and Indirect1URSM)
had correlated errors to account for the testlet effect. From the CFA models, I also generated
factor scores for teachers’ GLCK, PSM, and URSM. Factor scores are composite scores of the
original observed variables (teachers’ scores on the items), with weights determined by the factor
loadings obtained from the factor analysis (Brown, 2015). The factor scores account for the
range of item loadings making the scores more representative of teachers’ knowledge of the
constructs.
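The weighting idea behind factor scores can be illustrated with a loading-weighted composite. This is a deliberate simplification: true CFA factor scores (e.g., the regression method) also use the model-implied covariances, and the loadings below are hypothetical:

```python
def weighted_factor_score(item_scores, loadings):
    """Loading-weighted composite of a teacher's item scores.
    Items that load more strongly on the factor contribute more to the score,
    which is the intuition behind CFA factor scores (a simplification of the
    regression-method scores actually produced by CFA software)."""
    num = sum(l * s for l, s in zip(loadings, item_scores))
    return num / sum(loadings)

# Hypothetical standardized loadings for four URSM items, and one teacher's scores
loadings = [0.7, 0.6, 0.8, 0.5]
teacher = [1.0, 0.5, 0.75, 0.25]
print(round(weighted_factor_score(teacher, loadings), 3))  # 0.663
```

Compared with a plain average (0.625 here), the composite weights the strongly loading third item more heavily, so the score better reflects the latent construct.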
To test how well the different models fit the data, common model fit indices were used:
root mean square error of approximation (RMSEA; Steiger, 1990; Steiger & Lind, 1980),
comparative fit index (CFI; Bentler, 1990), Tucker–Lewis index (TLI; Bentler & Bonett, 1980;
Tucker & Lewis, 1973). RMSEA represents an absolute fit index, meaning that the RMSEA
measures how different the current model is from a perfect model. In contrast, the CFI and TLI
are incremental fit indices, in that they are used to compare the fit between different models
where the higher value represents a better fit. The likelihood ratio test (Brown, 2015; Satorra &
Saris, 1985) was used to directly compare whether there was statistical improvement of model fit
between the 1-factor and 2-factor model and between the 2-factor and 3-factor model. Finally,
the Akaike information criterion (AIC; Akaike, 1987) and the Bayesian information criterion
(BIC; Schwarz, 1978) were used to compare fit across the models, with lower values
indicating better fit.
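The arithmetic behind these comparisons is straightforward to sketch. The log-likelihoods and parameter counts below are hypothetical, used only to show how the LRT, AIC, and BIC would adjudicate between nested 2-factor and 3-factor models:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln(n) - 2 ln L (lower is better)."""
    return k * math.log(n) - 2 * loglik

def lrt_statistic(loglik_reduced, loglik_full):
    """Likelihood-ratio chi-square for nested models: 2(lnL_full - lnL_reduced)."""
    return 2 * (loglik_full - loglik_reduced)

# Hypothetical values for nested 2-factor vs. 3-factor models, n = 743 teachers
n = 743
ll_2factor, k_2 = -8120.0, 63
ll_3factor, k_3 = -8095.0, 65

chi_sq = lrt_statistic(ll_2factor, ll_3factor)  # df = k_3 - k_2 = 2
print(chi_sq)  # 50.0, well beyond the df = 2 critical value of 5.99
print(aic(ll_3factor, k_3) < aic(ll_2factor, k_2))       # True: 3-factor preferred
print(bic(ll_3factor, k_3, n) < bic(ll_2factor, k_2, n)) # True: 3-factor preferred
```

BIC penalizes the extra parameters more heavily than AIC (ln(743) ≈ 6.6 per parameter versus 2), so agreement between the two criteria is stronger evidence for the more complex model.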
Background Measure
Along with the 20 items measuring their GLCK, PSM, and URSM for ratios and
proportional relationships, the last portion of the online survey asked about teachers’
backgrounds. Many studies have included teacher background characteristics to try and uncover
what preparation and experiences may be associated with increased knowledge and student
achievement (e.g., Monk 1994, Hill et al., 2019). Although not as consistent across studies, many
studies investigate degree type, whether teachers have a bachelor’s or master’s (e.g., Clotfelter et
al., 2007; Harris & Sass, 2011; Hill et al., 2019; Wayne & Youngs, 2003), and the field in which
it was received (e.g., Aaronson et al., 2007; Harris & Sass, 2011; Rowan et al., 2002).
Certification status has also been commonly captured (Copur-Gencturk et al., 2022a; for a
review see Cochran-Smith et al., 2012) and a new variable of interest has been certification type,
as alternative certification programs have shown dramatic growth in recent years (Grossman &
Loeb, 2021; Walsh & Jacobs, 2007). Because of the focus the prior literature has placed on these
teacher characteristics, participating teachers provided whether they have an advanced degree
(past a bachelor’s degree), certification field, and type of certification program they attended.
Analytical Approach
To answer research question 1 (What is the structure of teachers’ GLCK, PSM, and
URSM of ratios and proportional relationships?) I ran multiple CFA models to see which model
fit the data best. To answer research question 2 (What knowledge of student misunderstandings
(PSM and URSM) of ratios and proportional relationships do in-service middle school
mathematics teachers have?), I computed the descriptive statistics for all the items overall and by
item type to understand broadly what knowledge teachers have of student misunderstandings of
ratios and proportional relationships.
Study 1 assessed whether GLCK, PSM, and URSM were distinct components of teacher
knowledge while providing an overview of a national sample of teachers’ GLCK, PSM, and
URSM for proportional relationships. Past work (Copur-Gencturk et al., under review)
demonstrated that providing a mathematical rationale for student misunderstandings is associated
with teachers’ self-reported instructional responses, and Study 2 investigated that relationship in practice.
Study 2 built on Study 1 to explore how teachers’ GLCK, PSM, and URSM as measured by this
new instrument relates to their selection of cognitively demanding tasks, the enactment of the
task, and the coherent focus on developing students’ conceptual understanding. Student level
data was also used to understand the relationship between teachers’ GLCK, PSM, and URSM
and student achievement.
Study 2
Participants
The sample for Study 2 included 37 in-service middle school mathematics teachers who
were a part of a larger study measuring the impact of a virtual professional development on their
knowledge of ratios and proportional relationships (Copur-Gencturk & Atabas, under review).
There were 53 teachers in the larger study who had been involved in the project over the course
of the 2021-2022 school year. These teachers provided student data and instructional artifacts
which were used as a secondary data source for this study. As part of the larger study from which
these teachers were recruited, the teachers were sorted into a treatment and control group, where
the treatment group participated in an online professional development focused on improving
their CK and PCK for ratios and proportional relationships.
All 53 teachers from the original study were invited to complete the additional survey
measuring their GLCK, PSM, and URSM for an additional e-giftcard. Of the 53, 40 teachers
opened the survey (75.47%), and 37 responded and completed the survey in its entirety (69.81%
of the total invitations, 92.5% of those who opened the survey), leading to their inclusion in
Study 2. Their background and demographic information are included in Table 4. Only two of
the 53 initial email invitations bounced (3.77%), but I was not able to determine
how many of the other email invitations were received by the teachers.
Table 4
Teachers’ Background Information for Study 2
Teacher Variable Study 2 (%)
Gender
Female 86.49
Male 10.81
Other 0.00
Prefer Not to Respond 2.70
Ethnicity
White 67.57
Hispanic 2.70
Black 2.70
Asian 5.41
Other 13.52
Prefer Not to Respond 8.11
Master’s degree or Higher (yes) 83.78
Credential (Mathematics) 56.76
Alternatively Certified (yes) 13.51
Note. N = 37.
Measures
As part of the larger study (meaning the data were collected prior to this study), these 37
teachers submitted classroom artifacts, which were used as a proxy measure of their instructional
quality. Additionally, they had administered pre- and post-surveys to their students (N = 1,205)
measuring the growth of students’ knowledge of proportional relationships before and after their unit
on proportional relationships.
Once they were recruited for this study, the 37 teachers in Study 2 completed the same 20
item survey as the teachers in Study 1 to measure their GLCK, PSM, and URSM of ratios and
proportional relationships. The coding process for the URSM items was the same as in Study 1
with an average agreement rate of 84% (range of 68% - 95%) and an average Kappa statistic of
0.79 (range of 0.61-0.92), again suggesting substantial agreement between the two raters.
Instructional Quality
Teachers’ instructional quality was captured through a set of classroom artifacts that
teachers submitted four times throughout their unit on ratios and proportional relationships. Each
round of artifact collection included the tasks the teacher used for the lesson, six students’
solutions representing different levels of understanding, and a reflection completed by the
teacher after the lesson. The reflection contained guiding questions to uncover why teachers
selected the specific tasks, the concepts they hoped to address in the tasks, and their process for
determining the student’s understanding from the student work.
There were three metrics for instructional quality: the cognitive demand of the task, the
enactment of the task, and the coherence of the math for conceptual understanding. The quality
of the task (potential cognitive demand) was a priority because research has demonstrated that
effective tasks can provide opportunities for students to develop robust understandings of the
content (e.g., Boston 2012; Boston & Smith, 2011). Additionally, selecting appropriate tasks is
included as a specific component in Krauss et al.’s (2008) conception of teacher knowledge and
embedded within other frameworks of teacher knowledge (Ball et al., 2008; Tatto et al., 2008),
reinforcing its importance. Buforn et al. (2020) even included the modification of the task as a
measure in their study, suggesting knowledge of effective tasks is important for teachers.
Task enactment is also theoretically important as conceptions of teacher knowledge have
repeatedly included components relating to effective explanations and representations (e.g.,
knowledge of content and teaching from Ball et al., 2008; knowledge of explanations and
representations from Krauss et al., 2008; enacting mathematics for teaching and learning from
Tatto et al., 2008). The repeated inclusion speaks to the importance of effectively implementing
and facilitating tasks.
The final instructional outcome, coherence of the mathematics, assessed the level of
conceptual focus across teachers’ planning and execution. It captured consistency in teachers’
task selection and interpretations of student work, revealing the extent to which teachers provided a
conceptual focus to their students across a lesson cycle. This was included as a metric of
instructional quality as addressing conceptual understanding is a necessary component for
effective teaching (Bellert, 2015). Additionally, Son (2013) believed a conceptual focus to be
important as she tested how teachers would respond to misunderstandings and coded the
responses for whether they were conceptual or procedural, tying in both the enactment of the task
and the conceptual focus. Thus, there was both theoretical and empirical support for
understanding these three components of instruction and they helped to illuminate the extent to
which teachers did or did not provide an effective learning experience for their students.
Coding Classroom Artifacts
The coding of the artifacts was done for the larger project, and 208 artifacts were
coded in total. A brief overview of the coding is provided below, but for more technical
details, see Copur-Gencturk and Atabas (under review). The classroom artifacts were coded
along three dimensions: (1) potential cognitive demand of the task, (2) the cognitive demand of
the task in the enactment, and (3) how well the learning opportunities and teachers’ expectations
focused on building students’ robust mathematical knowledge (i.e., coherent mathematics for
conceptual understanding).
For the first outcome, the tasks were coded for cognitive demand on a four-point scale
using the Instructional Quality Assessment (IQA) protocol (see Boston, 2012 for a description of
each scoring level). On the scale, a higher score meant that the task itself could provide an
opportunity for students to develop their conceptual knowledge of the content.
A second rubric from the IQA was used to code the student work that was submitted
along with the task. By coding student work, it allowed for the understanding of how much of
the potential cognitive demand of the task was required during the actual enactment. Again, a
higher score meant that during enactment there were more opportunities for students to deepen
their conceptual understanding.
Finally, the third outcome focused on how consistent and coherent teachers’ planning,
enactment, and expectations of the task were. Meaning that teachers who had coherent levels of
planning (i.e., the task they selected) and implementation (i.e., how students completed the task)
scored higher on this 4-point scale. This scale was created by Copur-Gencturk and Atabas (under
review). Examples and descriptions of the coding scheme for each outcome are included in
Appendix B. The authors found that there was substantial agreement between the raters for each
outcome (kappa statistics of .72, .75, and .72 for cognitive demand of the task potential, the
cognitive demand of the task in the enactment, and the coherence of the mathematics,
respectively) and that the reliability of each scale was acceptable, .74, .78, and .88, respectively
(Copur-Gencturk & Atabas, under review).
Student Pre- and Post-test
To measure student achievement, teachers in Study 2 administered a pre- and post-test to
their students before and after their unit on proportional relationships. The pre- and post-test used
11 multiple choice items to measure the students’ proportional relationship knowledge (see Table
A2 in Appendix A for a list of the topics covered by each item). The 11 items covered relevant
topics such as indirect proportions and linear situations (e.g., Brown et al., 2020a; Lamon, 2020)
and the content covered was highly related to the content covered by the items the teachers
completed, but they were not identical. The reliabilities (Cronbach’s alpha) of the pre- and post-test were 0.71 and 0.73, respectively.
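For illustration, Cronbach’s alpha can be computed directly from an item-response matrix. The responses below are hypothetical, not the study’s student data:

```python
def cronbach_alpha(score_matrix):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total-score variance),
    where k is the number of items."""
    k = len(score_matrix[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in score_matrix]) for j in range(k)]
    total_var = var([sum(row) for row in score_matrix])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 0/1 responses from six students on four items
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(responses), 3))  # 0.738
```

Alpha rises when the items covary (so the total-score variance exceeds the sum of item variances), which is why it serves as an internal-consistency estimate for the 11-item test.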
Analytical Approach
Linear Regression
To understand the relationship between GLCK, PSM, and URSM and instructional
quality, linear regression was used (RQ3). The three instructional quality outcomes were
cognitive demand of the task potential, the cognitive demand of the task in the enactment, and
the coherence of the mathematics. GLCK, PSM, and URSM were the predictors for the three
outcomes, but they had such high correlations that they were not included in the linear models
together. Instead, they were run in models individually, meaning there were three models for
each outcome, leading to nine total models.
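The modeling decision above, entering one knowledge predictor per model rather than all three collinear predictors at once, can be sketched with a one-predictor OLS fit. The data are hypothetical, and the covariate (mathematics certification) is omitted for brevity:

```python
def simple_ols(xs, ys):
    """Slope and intercept for a one-predictor OLS model, y = a + b*x.
    Each knowledge measure (GLCK, PSM, or URSM) would be entered in its own
    model like this, mirroring the decision to avoid collinear predictors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical teacher-level data: URSM factor score -> cognitive demand rating
ursm = [0.2, 0.4, 0.5, 0.6, 0.8]
demand = [1.0, 2.0, 2.0, 3.0, 3.0]
intercept, slope = simple_ols(ursm, demand)
print(round(intercept, 3), round(slope, 3))  # 0.45 3.5
```

Running three such models per outcome (one for each knowledge measure) yields the nine models described above; the cost of this approach is that the predictors’ shared variance cannot be partitioned.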
Linear regression was used to test this relation because theoretically teacher instructional
quality should be predicted by their knowledge as Copur-Gencturk et al., (under review) found
that the way teachers characterized student misunderstandings and the rationale for the
misunderstandings predicted teachers’ reported instructional responses. It was anticipated that
higher URSM and PSM scores would be associated with higher instructional quality and that
GLCK would not be as predictive. This was hypothesized because URSM and PSM are
components of teachers’ PCK and PCK has been shown to be more related to classroom
instruction and student learning when compared to CK (e.g., Baumert et al., 2010; Bruckmaier et
al., 2016; Campbell et al., 2014; Chen et al., 2020; Hill et al., 2005; Kunter et al., 2013; Sadler et
al., 2013). As mentioned in Study 1, key background characteristics that have frequently been
used as covariates in prior studies were collected. Of those, whether teachers had a certification
in mathematics was included in the models as a covariate. Other characteristics, such as holding
an advanced degree or being alternatively certified, were considered but not added: most of the
teachers held advanced degrees (84%) and were not alternatively certified (86%), leaving too few
teachers in the minority categories for these variables to be informative.
Hierarchical Linear Modeling
Hierarchical linear modeling (HLM) was used to investigate the relationships between
teachers’ GLCK, PSM, URSM and student achievement (RQ4). HLM is similar to linear
regression since they both model the relationship between an outcome variable and a set of
predictor variables, but HLM is preferred for nested data structures (Rabe-Hesketh & Skrondal,
2008). In this case, HLM accounted for the nested structure of students within classrooms (or
teachers) while modeling the individual and classroom (or teacher) level effects simultaneously.
Students within the same classroom are subjected to many shared unobserved experiences that
could potentially influence them to act similarly, so using a nested model was appropriate to
investigate the relationship between student achievement and teacher knowledge. HLM has an
advantage over ordinary least squares (OLS) regression in this case because OLS assumes
independence of residuals, an assumption that may be violated when students share classes and
schools, potentially biasing the results. HLM accounts for this by allowing the residuals to be
correlated, meaning students within the same class can have correlated residuals (Raudenbush &
Bryk, 2002).
HLM was conducted in Stata 16.1 using the xtmixed command (StataCorp, 2019).
Two-level models were used to investigate the relationships between teacher knowledge and
student post-test score. The first level was the student-level and included controls for students’
pre-test score and grade level.
The second level was the classroom level and included teacher variables such as the class
pre-test mean, certification type, and whether the teacher participated in the online professional
development. The pre-test class mean variable was calculated by averaging the pre-test scores of
all students taught by the same teacher. Students' pre-test deviation scores were centered around
the pre-test class mean, so that a deviation score of 0 meant the student scored exactly at the
average for that teacher's students. Group-mean centering can be particularly useful when there
are meaningful between-group differences, which is likely true here given the notable differences
across teachers (Raudenbush & Bryk, 2002). As mentioned, as part of the larger study from
which these 37 teachers were drawn, teachers were assigned to a treatment or control condition,
with the treatment group receiving online professional development. Here, 23 teachers
participated in the treatment and 14 did not, and treatment status was controlled for in the model.
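The centering described above can be sketched as follows, using hypothetical scores (the names and values are illustrative, not the study's data):

```python
# Illustrative sketch of group-mean centering with hypothetical scores
students = [
    {"teacher": "A", "pretest": 10},
    {"teacher": "A", "pretest": 14},
    {"teacher": "B", "pretest": 8},
    {"teacher": "B", "pretest": 12},
]

# Class mean: average pre-test score of all students taught by the same teacher
class_means = {}
for s in students:
    class_means.setdefault(s["teacher"], []).append(s["pretest"])
class_means = {t: sum(v) / len(v) for t, v in class_means.items()}

# Deviation score: 0 means the student scored exactly at the class average
for s in students:
    s["pretest_dev"] = s["pretest"] - class_means[s["teacher"]]

print(class_means)                            # {'A': 12.0, 'B': 10.0}
print([s["pretest_dev"] for s in students])   # [-2.0, 2.0, -2.0, 2.0]
```

By construction, the deviation scores sum to zero within each class, which separates the within-class variation from the between-class variation carried by the class mean.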
For the two-level models, the dependent variable was students' post-test scores. The main
predictors were teachers' GLCK, PSM, and URSM factor scores, computed from the 3-factor
CFA model in Study 1. The factor scores were used because they account for the different factor
loadings, likely making teachers' GLCK, PSM, and URSM scores more representative of their
knowledge of each construct (Brown, 2015). The three predictors (GLCK, PSM, URSM) were
kept separate because Study 1 indicated that the structure was best modeled with each component
as its own latent factor, and they were not included in models simultaneously because of their
high correlations. Thus, three models were run to predict students' post-test scores.
In these models, students (s) are nested within teachers (t), and the outcome variable is
the post-test score. At level one, the outcome (PostTest) is modeled as a function of the fixed
effects (β coefficients) for the level-one predictors, along with a random effect (r_st) representing
the unexplained variation at the student level. At level two, the fixed effects (γ coefficients)
represent the influence of the level-two predictors on the intercept (β0t) and the coefficients (βqt)
for the level-one predictors. The random effect (u0t) captures the unexplained variation at the
teacher level.
The two-level model shown below was used to explore these relations:

Level 1 (Student Model):
PostTest_st = β0t + β1t(PreTestDeviation_st) + β2t(Grade_st) + r_st

Level 2 (Teacher Model):
β0t = γ00 + γ01(Condition_t) + γ02(KnowledgeComponent_t) + γ03(ClassMean_t) + γ04(MathCert_t) + u0t
β1t = γ10
β2t = γ20

where PostTest_st represents the post-test performance of student s with teacher t, and
KnowledgeComponent_t represents the knowledge component included in each model (GLCK,
PSM, or URSM). The equations above contain the following:

PreTestDeviation_st = group-mean-centered variable representing the deviation between the pre-test performance of student s and the average pre-test score of all students within teacher t
Grade_st = grade level for student s within teacher t
Condition_t = whether teacher t received the online intervention
KnowledgeComponent_t = the GLCK, PSM, or URSM factor score for teacher t
ClassMean_t = average pre-test class performance for the students within teacher t
MathCert_t = whether teacher t is certified in mathematics or not
u0t = unexplained variation at the teacher level
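A minimal sketch of fitting a random-intercept model of this form follows, using simulated data and statsmodels' MixedLM rather than the study's Stata xtmixed command; all variable names and effect values are made up for illustration.

```python
# Sketch of a two-level (random-intercept) model with simulated data,
# assuming statsmodels is available; not the study's actual analysis code
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
teachers, per_class = 40, 30

rows = []
for t in range(teachers):
    glck = rng.normal()          # hypothetical teacher-level knowledge score
    u_t = rng.normal(scale=0.5)  # teacher-level random effect (u0t)
    for _ in range(per_class):
        pre_dev = rng.normal()   # group-mean-centered pre-test deviation
        post = 2.0 + 0.5 * pre_dev + 4.0 * glck + u_t + rng.normal()
        rows.append({"teacher": t, "pre_dev": pre_dev, "glck": glck, "post": post})
df = pd.DataFrame(rows)

# Random intercept for teacher; fixed effects for level-1 and level-2 predictors
model = smf.mixedlm("post ~ pre_dev + glck", df, groups=df["teacher"]).fit()
print(model.params["glck"])  # should land near the true effect of 4.0
```

The `groups` argument is what gives each teacher their own random intercept, mirroring the u0t term in the equations above.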
Benjamini-Hochberg Procedure
Given that nine statistical tests were computed to understand the relationship between
teachers' GLCK, PSM, and URSM and the three metrics of instructional quality, and three tests
were run to investigate the relationship between these knowledge components and student
learning, I used the Benjamini-Hochberg procedure with a 0.05 false discovery rate (Benjamini
& Hochberg, 1995) to control the rate of false positives (Type I errors). The Benjamini-Hochberg
procedure is a statistical method for multiple hypothesis testing that addresses the issue that, by
chance alone, some tests may yield statistically significant results even when there is no true
effect. The correction adjusts the p-values obtained from multiple tests to account for the
increased likelihood of false positives. It controls the false discovery rate, which is the expected
proportion of false discoveries among all the rejected hypotheses. The procedure involves
ranking the p-values for the predictors from smallest to largest. Critical values are then calculated
by multiplying the quotient of the p-value's rank (l) and the number of tests run (m) for the
outcome by the false discovery rate (Q), so the critical value = (l ÷ m) × Q. After calculating the
critical values, one compares each observed p-value to its critical value, finding the largest
p-value that is smaller than its critical value. That p-value and those ranked above it (lower
ranks) are considered statistically significant. The results of the Benjamini-Hochberg procedure
are reported along with the regression and HLM results in Chapter 4.
CHAPTER FOUR: FINDINGS
In this chapter, I describe the results from the two studies conducted. The results are
organized by the research questions guiding the study, with the results from Study 1 answering
research questions 1 and 2 and the results from Study 2 answering research questions 3 and 4.
Research Question 1: What is the structure of teachers’ GLCK, PSM, and URSM of ratios
and proportional relationships?
The results of the three CFA models testing the structure of teachers' knowledge are
shown below. The fit of all the models was quite good, as all had RMSEA values below 0.05
(Steiger, 1990) and CFI and TLI values above 0.90. That said, only the 3-factor model had a CFI
above 0.95, the level Bentler (1990) suggested as demonstrating good fit to the data, and the
3-factor model had the highest CFI and TLI values and the lowest RMSEA. The likelihood ratio
tests corroborated these results, showing a significant improvement in model fit when comparing
the 3-factor model to the 1- and 2-factor models. The AIC and BIC values also suggested that the
three-factor model, with each question type (GLCK, PSM, URSM) loading onto its own factor,
was the best fit for the data, as it had the lowest AIC and BIC (see Table 5). Thus, the 3-factor
model was the best fit to the data, confirming that the URSM items measured a construct distinct
from PSM and GLCK.
Table 5
Confirmatory Factor Analysis Fit Indices

Model            χ²       df    p     CFI   TLI   RMSEA   AIC        BIC
Unidimensional   426.41   153   0.00  0.93  0.92  0.05    10432.93   10791.69
2-factor (a)     374.80   152   0.00  0.95  0.93  0.04    10383.32   10746.75
3-factor (b)     341.41   150   0.00  0.95  0.94  0.04    10353.93   10726.67

Model comparisons:
                 vs. Unidimensional       vs. 2-factor
                 Δχ²     Δdf   p          Δχ²     Δdf   p
2-factor         51.60   1     0.00
3-factor         85.00   3     0.00       33.40   2     0.00

(a) The 2-factor model had the PSM and URSM questions loading onto the same factor. (b) The
3-factor model had GLCK, PSM, and URSM questions loading onto their own latent factors.
Although they proved to be modeled best by three separate factors, the CFA results
showed that the three individual constructs had high covariances, with GLCK and PSM having a
covariance of 0.88, GLCK and URSM having a covariance of 0.82, and PSM and URSM having
a covariance of 0.87, as seen in Table 6.
Table 6
Covariances of GLCK, PSM, and URSM by Factor Model

Variable    GLCK    PSM
3-Factor Model
PSM         0.88
URSM        0.82    0.87
2-Factor Model
KOSM (1)    0.88

(1) KOSM here refers to a single factor with the PSM and URSM items loading onto it.
Cronbach's alpha was used to test the reliability of the instrument empirically. The
reliability was tested for each latent factor (GLCK, PSM, and URSM) separately. The
reliabilities were 0.68, 0.61, and 0.60, respectively, for GLCK, PSM, and URSM, indicating low
to acceptable reliability. Overall, these checks provided evidence that, although not perfect, the
new instrument appeared to measure the expected constructs and that the structure is best
modeled with the theoretical three factors.
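Cronbach's alpha can be computed directly from the item variances and the variance of the total score; a small illustrative sketch with made-up responses (not the study's data) follows.

```python
# Illustrative computation of Cronbach's alpha:
# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
def cronbach_alpha(scores):
    # scores: list of respondents, each a list of k item scores
    k = len(scores[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three perfectly consistent items yield alpha = 1.0
data = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(cronbach_alpha(data))  # 1.0
```

Real item sets, like those here with alphas of 0.60 to 0.68, sit well below this ceiling because items correlate imperfectly.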
Research Question 2: What GLCK, PSM, and URSM of student misunderstandings of
ratios and proportional relationships do in-service middle school mathematics teachers
have?
Table 7 provides the overall performance of the teachers on all the GLCK, PSM, and
URSM items and has the average score subdivided by the item type. On average teachers
correctly answered 67% of the GLCK questions and correctly predicted the misunderstanding for
64% of the PSM questions. As a reminder, after being qualitatively coded, the URSM items were
assigned numerical scores between 0 and 1 to keep the scale the same as for the GLCK and PSM
items: none and incorrect reason were scored 0; generic reason and task-specific reason, 0.25;
procedural description, 0.50; procedure-specific reason, 0.75; and concept-specific reason, 1. The
average URSM score across the items was 0.56, meaning the average rationale score was closest
to procedural description.
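The rubric above maps cleanly onto a lookup table; the following small sketch is my own illustration, not the study's scoring code.

```python
# The URSM scoring rubric as a lookup table (category names follow the rubric;
# the averaging function and example codes are illustrative)
URSM_SCORES = {
    "none": 0.0,
    "incorrect reason": 0.0,
    "generic reason": 0.25,
    "task-specific reason": 0.25,
    "procedural description": 0.50,
    "procedure-specific reason": 0.75,
    "concept-specific reason": 1.0,
}

def average_ursm(rationale_codes):
    # Averaging a teacher's rationale codes gives an URSM score on the same 0-1 scale
    return sum(URSM_SCORES[c] for c in rationale_codes) / len(rationale_codes)

print(average_ursm(["procedural description", "procedure-specific reason",
                    "generic reason", "concept-specific reason"]))  # 0.625
```

Scoring the qualitative codes numerically is what lets the URSM items be averaged and compared against the binary-scored GLCK and PSM items.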
Items involving indirect situations appeared to be the most challenging, as the indirect
problems had the lowest mean score on both the GLCK and PSM questions (57% correct for
each) and the second lowest mean score for URSM. In contrast, additive items had the highest
average GLCK (87%) and URSM (61%) scores, but qualitative comparison problems had the
highest average PSM score (74%). This is likely because the additive items provided values that
allowed teachers to calculate the ratios before comparing them, whereas the qualitative
comparison items did not provide values, which may have led to a much lower GLCK score on
the latter. The URSM items had the lowest average overall, which follows the theoretical
conceptualization, in which URSM requires the most involved teacher knowledge, suggesting it
is likely more difficult than GLCK and PSM.
Table 7
Average Scores on GLCK, PSM, and URSM Items
Item Type Mean SD
All GLCK items 0.67 0.28
GLCK items – Linear Situations 0.64 0.35
GLCK items – Qualitative Comparison 0.70 0.46
GLCK items – Additive Situations 0.87 0.34
GLCK items – Indirect Situations 0.57 0.43
All PSM (predict misunderstanding) items 0.64 0.27
PSM items – Linear Situations 0.62 0.37
PSM items – Qualitative Comparison 0.74 0.33
PSM items – Indirect Situations 0.57 0.43
All URSM (reason for misunderstanding) items 0.56 0.15
URSM items – Linear Situations 0.52 0.26
URSM items – Qualitative Comparison 0.58 0.22
URSM items – Additive Situations 0.61 0.19
URSM items – Indirect Situations 0.53 0.27
Note. N = 743. There were 4 item types with 2 item sets for each category. PSM only has three
item types as both additive PSM items were removed based on their skewness and kurtosis.
Representing the URSM items graphically, Figure 7 shows the distribution of rationale
types used across all the items. A procedural description rationale was provided for over a third
of the URSM questions, followed by a procedure-specific rationale. Figure 8 shows the
breakdown of rationale types by item type.
Figure 7
Distribution of Rationale Types Provided for All Items
Figure 8
Distribution of Rationale Types Provided by Item Type
Similar to the GLCK and PSM items, linear and indirect items were the most challenging, as
evidenced by their having the highest percentages of incorrect rationales (19.18% for linear and
18.51% for indirect). Across all items, teachers most often provided a reason for the student
misunderstanding that was coded as procedural description, followed by procedure-specific.
Additive items had a very low percentage of incorrect rationales (3.30%) and a very high
percentage of procedural description rationales (45.69%), which again may suggest that teachers
are more knowledgeable about this problem type, as the additive items also had the highest
GLCK scores (87%).
In general, it appears that teachers are somewhat proficient at solving proportional
relationship problems, with the exception of indirect problems, and that their ability to anticipate
student misunderstandings is fairly similar to their content knowledge. The rationales
demonstrate a strong skew toward describing students' solutions rather than focusing on the
mathematical underpinnings, as evidenced by the lower frequency of concept-specific rationales.
Research Question 3: What is the relationship between teachers’ GLCK, PSM, and URSM
of ratios and proportional relationships and their instructional quality?
For the sample of 37 teachers in Study 2, teachers on average correctly answered 62% of
the GLCK items and 66% of the PSM items, similar to the scores for the Study 1 sample (67%
for GLCK and 64% for PSM). The average URSM score for these teachers was 0.54, which, as
in Study 1 (average score of 0.56), meant that the average rationale score was closest to
procedural description.
The regression results describing the relationship between teachers’ GLCK, PSM, URSM
and instructional quality as measured by (1) the potential cognitive demand of the task, (2) the
cognitive demand of the task in the enactment, and (3) the coherence of the learning
opportunities and teachers’ expectations focused on building students’ robust mathematical
knowledge are presented in Table 8.
GLCK, PSM, and URSM each showed significant relationships with different measures
of instructional quality. URSM was significantly related to all three indicators of instruction (p <
0.05), while PSM was significantly related to the enactment of the task and the coherence of
mathematics (p < 0.05). On the other hand, GLCK was only associated with the final
instructional outcome, coherent mathematics for conceptual understanding (p < 0.05).
The regression coefficients in Table 8 represent the expected change in the outcome
variable for a one-unit increase in the factor score, holding the other predictors constant (whether
teachers have a mathematics credential and whether they were in the treatment or control group
for the online professional development). Although not statistically significant, a one-unit
increase in a teacher's GLCK factor score was associated with an increase of 2.23 points in the
cognitive potential of the task (effect size of 0.29, p = 0.09). Similarly, a one-unit increase in
teachers' GLCK was associated with a 3.24-point increase in their enactment of the mathematics
score (effect size of 0.34, p = 0.06), which was also not statistically significant. For coherence of
the mathematics, a one-unit increase in teachers' GLCK was associated with a 3.95-point
increase in their coherence score (effect size of 0.35, p = 0.04). In this case, both before and after
the correction for the false discovery rate, GLCK was statistically significant for coherence of
the mathematics (p = 0.04, adjusted significance level of p < 0.05), consistent with the original
regression results.
A one-unit increase in the PSM score was associated with an increase of 2.03 for task
potential (effect size of 0.28, p = 0.10) and it was not significant. In contrast, a one-unit increase
in the PSM score was significantly associated with an increase of 3.62 for task enactment (effect
size of 0.40, p = 0.02). For task enactment, PSM remained a significant predictor after computing
the Benjamini-Hochberg procedure (p = 0.02, adjusted significance level of p < 0.03). For the
final instructional outcome, a one-unit increase in the PSM score was associated with an increase
of 4.30 for coherence of the math (effect size of 0.40, p = 0.02). This also remained statistically
significant after the Benjamini-Hochberg procedure (p = 0.02, adjusted significance level of p <
0.03).
Finally, a one-unit increase in teachers' URSM factor score was associated with a 4.01-point
increase in their task potential score (effect size of 0.35, p = 0.03). Although initially statistically
significant, after the Benjamini-Hochberg procedure URSM was not statistically related to the
cognitive potential of the task (p = 0.03, adjusted significance level p < 0.02). A one-unit
increase in teachers' URSM score was associated with a 5.97-point increase in task enactment
(effect size of 0.42, p = 0.01). This result remained significant after the Benjamini-Hochberg
procedure (p = 0.01, adjusted significance level p < 0.02). For the last instructional indicator, a
one-unit increase in teachers' URSM score was associated with a 6.93-point increase in
coherence of the math (effect size of 0.41, p = 0.01), and after the Benjamini-Hochberg
procedure, URSM was still significant (p = 0.01, adjusted significance level p < 0.02). (For the
linear regression results, the effect sizes for GLCK, PSM, and URSM were calculated by
multiplying the regression coefficient by the quotient of the standard deviation of the sample for
the variable in question and the standard deviation of the outcome for the sample.)
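The effect sizes for these regression models were computed as the unstandardized coefficient times the ratio of the predictor's sample standard deviation to the outcome's sample standard deviation. A minimal sketch follows; the standard deviation values below are hypothetical, chosen only so the result reproduces the reported URSM effect size for task potential.

```python
# Sketch of the effect-size calculation: coefficient * (SD of predictor / SD of outcome)
def regression_effect_size(coef, sd_predictor, sd_outcome):
    return coef * (sd_predictor / sd_outcome)

# Hypothetical SDs chosen to reproduce the reported URSM effect size of ~0.35
# for a coefficient of 4.01 on task potential
print(round(regression_effect_size(4.01, 0.0873, 1.0), 2))  # 0.35
```

This scaling makes coefficients comparable across predictors measured on different scales, which is why the large raw URSM coefficients correspond to effect sizes similar to those of GLCK and PSM.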
By comparing the effect sizes, URSM appears to have the largest effect on task selection,
enactment of the task, and coherence of the math, although the differences in effect size are
relatively small. Regarding teachers' background characteristics, neither being certified in
mathematics nor participating in the online professional development was significant at the 0.05
alpha level for any of the three metrics of effective instruction.
Table 8
Regression Results of Teachers' Instructional Practices on their Knowledge Components

Cognitive Demand of the Task's Potential as the Outcome
                                        Model 1         Model 2         Model 3
GLCK                                    2.23~ (1.27)
PSM                                                     2.03~ (1.19)
URSM                                                                    4.01* (1.80)
Online Professional Development (yes)   0.10 (0.19)     0.11 (0.19)     0.10 (0.18)
Math Credential (yes)                   0.08~ (0.04)    0.09~ (0.04)    0.09* (0.04)
Intercept                               2.44*** (0.25)  2.41*** (0.25)  2.42*** (0.24)

Enactment of the Task as the Outcome
GLCK                                    3.24~ (1.63)
PSM                                                     3.62* (1.48)
URSM                                                                    5.97* (2.28)
Online Professional Development (yes)   -0.07 (0.24)    -0.07 (0.23)    -0.08 (0.23)
Math Credential (yes)                   0.04 (0.06)     0.04 (0.06)     0.05 (0.05)
Intercept                               2.30*** (0.33)  2.29*** (0.31)  2.28*** (0.31)

Coherence of the Mathematics for Conceptual Understanding as the Outcome
GLCK                                    3.95* (1.83)
PSM                                                     4.30* (1.68)
URSM                                                                    6.93* (2.58)
Online Professional Development (yes)   0.21 (0.27)     0.20 (0.27)     0.21 (0.26)
Math Credential (yes)                   0.11~ (0.07)    0.11~ (0.06)    0.12~ (0.06)
Intercept                               1.46*** (0.37)  1.44*** (0.35)  1.43*** (0.35)

Note. The values in this table are unstandardized regression coefficients. Values in parentheses
indicate standard errors. N = 36 for cognitive demand of the task, N = 36 for enactment of the
task, and N = 35 for coherence of the mathematics.
~p < .10. *p < .05. **p < .01. ***p < .001.
Research Question 4: What is the relationship between teachers' GLCK, PSM, and URSM of
ratios and proportional relationships and student achievement?
Table 9 includes the results of two-level HLM analyses in which students’ post-test
scores were predicted by three knowledge components (i.e., teachers’ GLCK, PSM, and URSM).
Based on the recommendations for multi-level modeling, I first ran an unconditional model,
meaning there were no predictors at level one or level two (Hox et al., 2017; Raudenbush &
Bryk, 2002). This unconditional model provided information about the amount of variance in the
outcome, which can be decomposed into the variance at level one and level two. The intraclass
correlation for this model was 0.41, meaning that 41% of the variance in students' post-test
scores was at the teacher level and 59% was at the student level. The chi-squared test was also
significant (p < 0.001), supporting the decision to use a multi-level model to account for the
variation in the data (Raudenbush & Bryk, 2002).
The HLM results showed that PSM and URSM were statistically significant predictors of
student learning but that GLCK was not. Although not significant, a one-unit increase in a
teacher's GLCK factor score was associated with a 4.19-point increase in students' post-test
scores, holding all else constant (effect size of 0.16, p = 0.07).
A one-unit increase in teachers’ PSM score was related to a 4.42-point increase in
students’ post-test scores, all else equal (an effect size of 0.18, p = 0.03). The result stayed
statistically significant after the Benjamini-Hochberg correction (p = 0.03, adjusted significance
level p < 0.03).
A one-point increase in a teacher's URSM factor score was related to a 7.74-point
increase in students' post-test scores (effect size of 0.20, p = 0.01). After computing the
Benjamini-Hochberg procedure, URSM remained significantly related to student learning (p =
0.01, adjusted significance level p < 0.02). Although all three effect sizes would be considered
small by Cohen (1988), they do suggest there is an important relationship between PSM, URSM,
and student learning.
Across the three models, the students’ group mean centered pre-test scores and the class
mean pre-test scores were statistically significant (p < 0.01), but whether teachers had a
credential in mathematics or participated in the online professional development were not.
Table 9
Hierarchical Linear Model Results

Student Post-Test Score as the Outcome
                                        Model 1         Model 2         Model 3
GLCK                                    4.19~ (2.27)
PSM                                                     4.42* (2.00)
URSM                                                                    7.74* (3.04)
Deviation from Group Mean Pre-Test Score 0.53*** (0.02) 0.53*** (0.02)  0.53*** (0.02)
Class Mean                              0.90*** (0.09)  0.89*** (0.09)  0.90*** (0.09)
Grade Taught                            0.09 (0.26)     0.15 (0.25)     0.17 (0.25)
Online Professional Development (yes)   0.31 (0.31)     0.32 (0.30)     0.32 (0.29)
Math Credential (yes)                   0.07 (0.07)     0.07 (0.07)     0.08 (0.07)
Intercept                               1.65** (0.64)   1.66** (0.61)   1.58** (0.58)
Random Effects Parameters
var(_cons)                              0.59 (0.17)     0.55 (0.17)     0.53 (0.16)
var(Residual)                           3.52 (0.15)     3.52 (0.15)     3.52 (0.15)
Level 1 R2 (%)                          29.38           29.35           29.35
Level 2 R2 (%)                          83.34           84.29           85.05
Total R2 (%)                            51.67           52.06           52.38

Note. Values in parentheses indicate standard errors. N = 1,205. For the HLM results, the effect
sizes for GLCK, PSM, and URSM were calculated by multiplying the coefficient by the quotient
of the standard deviation for the sample and the Level 2 standard deviation from the null model,
because the teacher knowledge components are Level 2 variables.
~p < .10. *p < .05. **p < .01. ***p < .001.
Results Summary
Findings from Study 2 indicated that, after correcting for a false discovery rate of 0.05, PSM and
URSM predicted the enactment of the task and the coherence of the math for conceptual
understanding but not task selection. GLCK was significant only when predicting the coherence
of the mathematics. When investigating the relationship between student learning and GLCK,
PSM, and URSM, only PSM and URSM were significant predictors. These results indicate that
PSM and URSM are key components of teachers' usable knowledge.
CHAPTER FIVE: DISCUSSION AND IMPLICATIONS
In this chapter, I discuss the aforementioned results and situate them within the prior
literature. I acknowledge some of the limitations of this work while also suggesting future areas
for research.
Discussion of Findings
As teacher knowledge is theoretically necessary for quality instruction (An et al., 2004;
Ball et al., 2008; Copur-Gencturk & Tolar, 2022; Li & Kaiser, 2011), and knowledge of students
and their misunderstandings is a key component of conceptualizations of teacher knowledge
(e.g., Fennema & Franke, 1992; Krauss et al., 2008; Shulman, 1986; Tatto et al., 2008),
understanding whether and to what extent teachers have this knowledge, and how it relates to
important outcomes, is prudent. This study provides empirical evidence on whether GLCK,
PSM, and URSM are unique constructs and the extent to which they are associated with
teachers' instructional quality and student learning, evidence that is necessary so that teacher
education programs can allocate time to the most salient aspects of teacher knowledge.
In developing and testing the measure, it became apparent that item type and structure
had a large influence on teachers' responses to the GLCK, PSM, and URSM questions. For
instance, indirect problems proved much more difficult than the other question types on the
GLCK and PSM questions, with consistently low performance. This is in line with previous
research (e.g., Fisher, 1988; Jacobson & Izsák, 2014; Izsák & Jacobson, 2017) showing that
indirect problems are particularly challenging for teachers. Additionally, indirect problems had
the second-highest share of rationales coded as incorrect (18.78%) and the lowest percentage of
conceptual rationales (14.49%), which shows that teachers not only had more difficulty
answering indirect problems correctly and predicting the common student misunderstanding, but
also gave rationales that were more often mathematically incorrect, suggesting teachers have less
URSM for indirect problems. For indirect problems, teachers provided the lowest percentage of
concept-specific rationales, meaning they did not discuss the conceptual underpinnings leading
to the common misunderstandings, which reinforces the consistent difficulty with indirect items.
This pervasive difficulty across GLCK, PSM, and URSM indicates that indirect problems are
uniquely challenging compared to the other item types.
The first research question investigated whether GLCK, PSM, and URSM were distinct
components of teacher knowledge. After testing multiple models, the 3-factor model was found
to fit the data best, which supported the conceptualization of PSM and URSM as unique
components of teacher knowledge. Prior KOSM studies (Chen et al., 2020; Hill & Chin, 2018;
Sadler et al., 2013), which used a similar method of multiple-choice questions with research-based
misunderstandings and asked teachers to predict the common misunderstanding, may have
missed an opportunity to explore a key component of KOSM by not investigating the rationale
for what leads students to the common misunderstandings. Similarly, Masters (2012) created a
single measure of teacher knowledge combining GLCK and PCK, which this research suggests
is not advisable, since they are distinct constructs with differing relationships to student learning
and instructional metrics. Finding that GLCK, PSM, and URSM represent three unique factors
reinforces the notion that studies investigating teachers' KOSM should consider including both
PSM and URSM, as they are distinct components.
For the national sample in Study 1, teachers demonstrated that they are somewhat
proficient in GLCK and PSM for proportional relationships. The overall results are in line with
prior work investigating teachers' knowledge of student thinking and misunderstandings of
proportional relationships, in that Masters (2012), Buforn et al. (2020), and Copur-Gencturk et
al. (2022) all found in-service teachers scored between 62% and 73% on content knowledge
items. The average score on the GLCK questions in this study was 0.67, which is right in line
with that prior work. While this score means that teachers on average answered about two thirds
of the questions correctly, these questions were set at a middle school level, so this result speaks
to the pervasive difficulty of ratios and proportional relationships even for teachers.
As mentioned, the prior literature on ratios and proportional relationships has not
investigated PSM, so the PSM results are put in conversation with other studies measuring the
construct. In this study, teachers had a much higher level of PSM than in prior research. The
average score across all PSM items was 0.64 in this study, whereas Hill and Chin (2018), Chen
et al. (2020), and Sadler et al. (2013) reported PSM averages of 0.55, 0.31, and 0.43,
respectively. Understanding why teachers performed much better than expected on the PSM
items was outside the scope of this study, but part of the higher performance relative to Hill and
Chin (2018) specifically may be due to grade level. Hill and Chin studied fourth- and fifth-grade
teachers, and the content they targeted was at those grade levels. Past research has shown middle
school teachers have higher proportional reasoning levels than elementary teachers
(Copur-Gencturk et al., 2022a), so using a sample of strictly middle school teachers, who
specialize in teaching mathematics and may have deeper knowledge, may explain some of this
difference.
I also found much more similar levels of GLCK and PSM than prior research has. In
previous studies relating PSM to student achievement, Chen et al. (2020) found an average
difference of 0.62 (mean GLCK = 0.93, mean PSM = 0.31) and Sadler et al. (2013) found an
average difference of 0.42 (mean GLCK = 0.85, mean PSM = 0.43). Hill and Chin (2018) did
not report their mean content score because the MKT items they used are specifically not to be
used to assess individual teachers, but their average PSM score was 0.55. In those cases, the
disparity between GLCK and PSM was very large, whereas in this study the difference was only
0.03 (mean GLCK = 0.67, mean PSM = 0.64).
Part of the reason the differential here was so small is the items removed from the scale.
The two removed GLCK items had mean scores of 0.96, meaning that with their inclusion the
average GLCK score would have increased from 0.67 to 0.74. The two removed PSM items had
mean scores of 0.44 and 0.82, so their inclusion would not have changed the average PSM score
from 0.64. Although the adjusted GLCK mean is still not as high as the GLCK scores from the
previous studies, this demonstrates a clearer difference between GLCK and PSM average scores.
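The adjustment described above is simple weighted-average arithmetic and can be checked directly. The sketch below assumes six retained items per scale; only the PSM count of six is reported, so the retained GLCK item count is an illustrative assumption:

```python
# Reconstructing the scale means with the removed items added back in.
# Assumes six retained items per scale: the PSM count of six is reported,
# but the retained GLCK item count is an illustrative assumption.
N_RETAINED = 6

def mean_with_removed(retained_mean, removed_scores, n_retained=N_RETAINED):
    """Scale mean after adding the removed items' scores back in."""
    total = retained_mean * n_retained + sum(removed_scores)
    return total / (n_retained + len(removed_scores))

# GLCK: retained mean 0.67, two removed items each averaging 0.96.
print(round(mean_with_removed(0.67, [0.96, 0.96]), 2))  # 0.74

# PSM: retained mean 0.64, two removed items averaging 0.44 and 0.82.
print(round(mean_with_removed(0.64, [0.44, 0.82]), 2))  # 0.64
```

This reproduces the pattern in the text: re-including the near-ceiling GLCK items widens the GLCK-PSM gap, while the removed PSM items leave the PSM mean essentially unchanged.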
This more consistent knowledge could also relate to the content investigated
(mathematics vs. science) or to the difficulty of ratios and proportional relationships, as the mean
GLCK score was lower than the mean GLCK reported by Chen et al. (2020) and Sadler et al.
(2013), but future research is needed to understand why GLCK and PSM were much more
similar for ratios and proportional relationships. Even though the difference between GLCK and
PSM was small, it was statistically significant (p = 0.01, alpha = 0.05), meaning that for teachers
in this sample there was a statistical difference between their GLCK and PSM, though this may
be due to the size of the sample.
Regarding URSM, across all the items, around 59% of the rationales teachers provided
for what led to the misunderstandings were attributed to the steps the students would take to get
the wrong answer (procedural description category) or to providing the method students would
use and referencing how it was inaccurate (procedure-specific category). This indicates that there
is an opportunity to deepen teachers’ understanding of the conceptual issues that typically lead to
common errors. The heavy focus on procedure is not unexpected, as Fernández et al. (2013)
found only three of their 39 pre-service teachers could discriminate proportional and
nonproportional situations and interpret students’ solutions in a way that addressed their
conceptual understanding of proportional relationships. Son (2013) found a similar result where
56% of the pre-service teachers identified a conceptual student error procedurally. It is a bit
surprising that in-service teachers performed similarly to the pre-service teachers in Son’s (2013)
sample, given that in-service teachers likely have much more exposure to these common
misunderstandings. This finding may indicate that understanding why students make errors does
not necessarily become more apparent with teaching experience. If that is the case, it would
suggest that more focus should be put into developing training to support teachers’ knowledge of
the underlying mathematical reasons for common misunderstandings, since teachers may not
develop their URSM simply through learning while teaching. As URSM was related to the
enactment of tasks and the coherence of the mathematics, along with student learning, investing
in developing teachers’ URSM seems worthwhile.
Comparing this sample to another sample of in-service teachers, Copur-Gencturk et al.
(under review) found that around 25% of the rationales provided for common fraction
misunderstandings were conceptual in nature, a higher percentage than in this study (17% of the
rationales provided here were concept-specific). That difference could be because, in the prior
study, teachers chose common misunderstandings they had noticed in their own classrooms
regarding fractions and thus may have reported misunderstandings that they understood more
deeply and were able to provide a rationale for. It could also point to the difficulty of
proportional relationships in that teachers may struggle more to provide a rationale for
proportional relationship misunderstandings compared to fraction misunderstandings. As there is
an increased focus on conceptual understanding in mathematics (Common Core State
Standards Initiative, 2016; NRC, 2001), it appears that this has not extended to conceptual
understanding of why students make common errors. Further developing this understanding
could be particularly important given that I found rationales to be related to instructional quality
and student learning.
Study 2 used a second sample of in-service teachers and their students to understand how
GLCK, PSM, and URSM related to instruction and student learning. Linear regression was used
to test whether teachers’ knowledge components were predictive of their task selection in terms
of the cognitive potential of the task, enactment of the task, and the coherence of the
mathematics across planning, execution, and analysis of student work. The linear regression
models relating GLCK, PSM, and URSM to instructional quality (cognitive demand of the task
potential, the cognitive demand of the task in the enactment, and the coherence of the
mathematics) demonstrated some relationship between teachers’ knowledge and
instructional quality.
After correcting for the false discovery rate using the Benjamini-Hochberg procedure
(Benjamini & Hochberg, 1995), URSM was significantly related to two of the three metrics of
instructional quality (enactment of the task and coherence of the mathematics) (p < 0.05). This
indicated that teachers who provided more concept-specific rationales also had higher cognitive
demand during enactment and greater coherence across planning and execution. Teachers with higher
URSM demonstrate knowledge of the reasons leading to the misunderstanding, and this
knowledge could prime the teachers to more intentionally engage with the misunderstanding and
in doing so, improve their instruction. Copur-Gencturk et al. (under review) found similar
results, where more conceptual rationales for misunderstandings were associated with more
conceptual and targeted remediation strategies, reinforcing that being able to provide a
mathematical rationale for misunderstandings is related to effective instruction.
As with URSM, after running the Benjamini-Hochberg procedure, PSM was significantly
related to enactment of the task and coherence of the mathematics. PSM and URSM measured
whether teachers could predict the common misunderstanding and what type of reason they
believe leads to the misunderstanding, which makes it interesting that neither was associated
with the selection of tasks with higher cognitive potential. Theoretically, knowing the common
misunderstanding and what conceptually causes it, could allow a teacher to select a task that will
allow students to engage with the misunderstanding so it can be unearthed and addressed. One
reason PSM and URSM may not have been significantly related to task selection is that the
scoring for the cognitive potential of the task focused on the potential to build students’
conceptual understanding. Therefore,
teachers could have known the common misunderstanding and selected rudimentary tasks that
provided many opportunities to practice the procedural skill repeatedly without selecting tasks
that truly addressed the underlying conceptual issues. At the same time, because they could
predict the common misunderstanding and knew the underlying reason for it, they may have
pushed students to make their thinking more visible even on these more rote problems. This
would have led to their being scored more highly on enactment of the task, which PSM and
URSM were significantly related to, and could explain why the two constructs were not related
to the cognitive potential of the selected tasks. Yet, more work
understanding this dynamic is needed to fully unpack why PSM and URSM were not associated
with cognitive potential of the task but were related to enactment and coherence of the
mathematics. Although the previous literature did not investigate the relationship between
teachers’ URSM and instructional quality, it is worth noting that prior studies found that teachers
largely could not provide conceptual instructional responses (Son, 2013) or conceptual
modifications to problems (Buforn et al., 2020) when confronted with student
misunderstandings. These results demonstrate that responding to misunderstandings effectively
is challenging, so finding that there is a relationship between PSM and URSM and task
enactment and coherence of the math reinforces that they are valuable aspects of teacher
knowledge.
GLCK was statistically significant only for coherence of the mathematics for conceptual
understanding. This suggests that GLCK is less related to the instructional outcomes
compared with PSM and URSM which were predictive of two of the three outcomes. Although
not by a very large margin, across all three instructional outcomes, URSM had the largest effect
size for each outcome. This, along with the finding that GLCK, PSM, and URSM are best
modeled as three unique factors, supports the notion that URSM is important to consider when
investigating teachers’ knowledge of student thinking and misunderstandings, even though it has
not always been assessed.
The final analysis investigated the relationship between teachers’ GLCK, PSM, and
URSM and student achievement. Based on the hierarchical linear models, I was able to reject the
null hypothesis of no relationship between teachers’ PSM and URSM and student learning
(p < 0.05) after applying the Benjamini-Hochberg procedure. GLCK was not significantly
related to student learning, which continued the pattern seen for the instructional outcomes, where
GLCK was frequently not a significant predictor. Finding that GLCK was less predictive of
student learning compared to PCK knowledge components is in line with the prior literature
showing that although a theoretically important component of teacher knowledge, GLCK often
has smaller effect sizes on student learning when compared to PCK components (e.g., Baumert
et al., 2010; Campbell et al., 2014; Kunter et al., 2013). Reinforcing this point, Sadler et al.
(2013) and Chen et al. (2020) found similar results: students of teachers with both GLCK and
PSM were much more likely to answer questions correctly than students of teachers with only
GLCK. So although it is important that teachers can do the mathematics, that ability alone is not
enough to maximize student learning.
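The case for hierarchical models here is that students are nested within teachers, so scores within a classroom are correlated. The following sketch illustrates why that nesting matters, using simulated data with hypothetical variance components (not the study’s actual data, model, or estimates):

```python
import random
import statistics

random.seed(1)

# Simulated students nested within teachers: each teacher contributes a
# classroom-level effect, and each student adds individual noise. All
# sample sizes and variance components are hypothetical illustration values.
n_teachers, students_per_teacher = 37, 20
teacher_effects = [random.gauss(0, 1.0) for _ in range(n_teachers)]
scores = [[effect + random.gauss(0, 2.0) for _ in range(students_per_teacher)]
          for effect in teacher_effects]

# Naive intraclass correlation: the share of score variance lying between
# classrooms rather than within them. A nonzero ICC is what motivates a
# multilevel (hierarchical) model over ordinary single-level regression.
between = statistics.variance([statistics.mean(s) for s in scores])
within = statistics.mean([statistics.variance(s) for s in scores])
icc = between / (between + within)
print(f"naive ICC = {icc:.2f}")
```

When the ICC is nonzero, treating students as independent observations understates standard errors for teacher-level predictors such as GLCK, PSM, and URSM, which is why the multilevel structure is needed.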
PSM was related to student learning, which supports the work on PSM in other fields, as
Sadler et al. (2013) and Chen et al. (2020) both found significant relationships between teachers’
PSM and student learning. Hill and Chin (2018) did not find a statistically significant
relationship between PSM and student achievement, but, as mentioned previously, they did not
use research-based common misunderstandings, which could have obscured their results because
the misunderstandings may not have been as common across all teachers.
As with the instructional outcomes, URSM had the largest effect size, suggesting that
being able to provide mathematical rationales for common misunderstandings has the strongest
relationship with student learning. This consistent trend of URSM being the most predictive
could be due to the depth of knowledge that URSM requires compared to GLCK and PSM.
URSM taps into the “knowing why” (Even & Tirosh, 1995), requiring teachers to consider not
only what students might do but also their own understanding of mathematical concepts and
how students interact with the content. GLCK requires no knowledge of students, and PSM does
require knowledge of how students interact with problems, but it does not require teachers to
investigate further. For instance, teachers could see a pattern in their class where students assume
every situation is directly proportional and be able to identify that common misunderstanding
without understanding why students make that assumption. This depth of thinking could be an
explanation for why URSM seems to have
a slightly larger impact on the outcomes measured compared to PSM and GLCK. Similarly, it
could explain why PSM was related to enactment of the task and student learning while GLCK
was not, as PSM required more depth of knowledge about students than GLCK.
These findings are important to consider in contrast to Masters (2012), who found that
teachers’ URSM was not related to student learning. As a reminder, Masters (2012) combined
teachers’ GLCK and URSM into a single measure of teacher knowledge and then used that
combined metric to predict student learning. Study 1 demonstrated that GLCK and URSM are
better represented by separate constructs, suggesting that combining the results may have hidden
the effect of URSM. Thus, finding that URSM does have a significant relationship with student
learning while GLCK does not suggests that Masters’s (2012) null result likely stems from
combining the two; had they been separated, URSM may have been predictive, as seen here.
As a whole, these studies demonstrated that PSM and URSM represent distinct
components of KOSM as evidenced by the confirmatory factor analysis and that even though
distinct, both PSM and URSM are related to how teachers enact their tasks, the coherence across
planning and enactment, and how much their students learn.
Limitations and Delimitations
As with any study, there are some limitations. A limiting factor of this study was that
some student-level covariates used in the prior literature could not be included in the HLMs. The
student-level data had been collected previously, and to protect the privacy of the students, they
were not recontacted to gather more data. Therefore, it was not feasible to retrieve some
covariates that were anticipated for use and that have been used in the past literature, such as the
highest level of parental education (Chen et al., 2020; Hill & Chin, 2018;
Sadler et al., 2013). A similar concern is the small sample size with which to investigate the
relationship between GLCK, PSM, and URSM and task selection, enactment of the task,
coherence of the mathematics, and student achievement. With only 37 teachers, there was not as
much spread across their knowledge and outcomes as there likely would be with a larger sample.
Although a larger pool of diverse teachers would have been ideal, this study can provide
preliminary evidence for the relationship between teachers’ GLCK, PSM, and URSM and
instructional quality and student achievement, which can be further explored in more depth when
more teachers and students can be recruited.
Another limitation is that I ran many statistical tests, which increases the risk of a Type I
error. To account for this, I used the Benjamini-Hochberg method to control the false discovery
rate at the 0.05 level and noted in the results that two coefficients were no longer significant after
this correction.
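For reference, the step-up rule behind this correction can be sketched in a few lines; this is the standard Benjamini-Hochberg procedure in general form, not the study’s actual analysis code:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return booleans marking which hypotheses are rejected while
    controlling the false discovery rate at level q."""
    m = len(p_values)
    # Rank p-values from smallest to largest, remembering positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Step-up rule: find the largest rank k with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            k_max = rank
    # Reject every hypothesis whose p-value ranks at or below k_max.
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            rejected[idx] = True
    return rejected

# Illustration: six tests, of which only the two smallest p-values survive.
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60]
print(benjamini_hochberg(pvals))  # [True, True, False, False, False, False]
```

Note that a p-value can fall below the unadjusted 0.05 threshold yet fail the step-up comparison, which is how coefficients lose significance after the correction, as happened for two coefficients here.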
A final limitation of the study is the instrument used. The instrument was created for this
study, and even though it was grounded in research-based misunderstandings and its items were
developed based on past literature, items had to be removed based on skewness and kurtosis.
This led to an incomplete picture of teachers’ knowledge of additive situations, as both additive
PSM items were removed. That said, there were still six items capturing teachers’ PSM for ratios
and proportional relationships, but in future work creating additive PSM items could be
beneficial so that all item types could be captured for each knowledge component.
Recommendations for Future Research
These studies provided an overview of the knowledge teachers have about student
misunderstandings for ratios and proportional relationships and explored the relationships to key
educational outcomes (i.e., task selection, task enactment, coherence of the math, and student
achievement). As mentioned, increasing the sample size of teachers when investigating the
relationship with instructional quality and student achievement would be beneficial, as it could
provide more differentiation of teachers’ knowledge, which would be fruitful in exploring the
relationship between these knowledge components and these outcomes.
Another area of future research could be investigating these relationships for other critical
mathematics content. Testing whether URSM is predictive of these outcomes in other domains is
necessary to understand whether URSM is an essential understanding for all mathematics
content; it is theoretically important, and this study provides further empirical evidence of its
value. It could be the case that specific topics have a stronger relationship with URSM, or
URSM may have relatively similar effect sizes across content. This knowledge would allow for
a more informed decision when determining how much time and energy to spend on teaching
and training teachers in the reasons underlying common student misunderstandings.
A final area for research is determining how to develop teachers’ PSM and URSM. Prior
work has explored how teachers develop knowledge (e.g., Copur-Gencturk & Li, 2023;
Kleickmann et al., 2017) but it has not explicitly investigated interventions that might improve
teachers’ PSM and URSM specifically. Thus, a future stream of research could be developing a
program that attempts to improve teachers’ PSM and URSM for ratios and proportional
relationships and then testing the effect of the intervention.
Conclusion
Measuring teachers' knowledge effectively and understanding what components may
relate to effective instruction and student achievement is imperative in making decisions about
how to best prepare teachers and support all students. Theoretically, teachers’ KOSM should be
related to student achievement since it is included in nearly all conceptions of teacher knowledge
(e.g., An et al., 2004; Ball et al., 2008; Fennema & Franke, 1992; Shulman, 1986; Tatto et al.,
2012), yet the empirical results are not so conclusive, particularly for mathematics (Chen et al., 2020; Hill
& Chin, 2018; Sadler et al., 2013). Through this work I constructed a scale to capture teachers’
GLCK, PSM, and URSM for ratios and proportional relationships. Using confirmatory factor
analysis, I verified the structure of these knowledge components and found that like the
theoretical understanding, they represented three distinct components. Then I explored what
GLCK, PSM, and URSM in-service middle school mathematics teachers have for ratios and
proportional relationships. I demonstrated that teachers have a fairly strong grasp on the grade
level content knowledge of proportional relationships and with some accuracy can predict the
common misunderstandings their students will have. Teachers also show a reliance on providing
a procedural description when describing the reasons leading to the common student
misunderstandings.
After describing the broad trends in teachers’ GLCK, PSM and URSM, linear regression
and hierarchical linear modeling were used to understand how these types of knowledge are
related to three measures of instructional quality and student learning. Independently, URSM and
PSM were related to two of the three measures of instructional quality (the cognitive demand of
the task in the enactment, and the coherence of the mathematics) and to students’ mathematics
performance, suggesting they are crucial components of teachers’ knowledge. GLCK was
significantly related to coherence of the mathematics and was not related to students’ learning,
reinforcing that GLCK is important but might not be as salient for effective teaching compared
to PSM and URSM, which were significant across multiple outcomes (task enactment, coherence
of the mathematics, and student learning). Overall, this work provides more information on the
measurement and conceptualization of what is considered teachers' knowledge of student
misunderstandings by introducing understanding the reason for student misunderstandings
(URSM) as a novel component and empirically demonstrating its importance.
References
Aaronson, D., Barrow, L., & Sander, W. (2007). Teachers and student achievement in the
Chicago public high schools. Journal of Labor Economics, 25, 95-135.
Akaike, H. (1987). Factor analysis and AIC. Psychometrika, 52, 317–332.
Akar, G. K. (2010). Different levels of reasoning in within state ratio conception and the
conceptualization of rate: A possible example. In P. Brosnan, D. B. Erchick, & L.
Flevares (Eds.), Proceedings of the 32nd annual meeting of the North American Chapter
of the International Group for the Psychology of Mathematics Education (pp. 711-719).
Columbus, OH: The Ohio State University.
An, S., Kulm, G., & Wu, Z. (2004). The pedagogical content knowledge of middle school,
mathematics teachers in China and the US. Journal of Mathematics Teacher Education,
7(2), 145-172.
Arican, M. (2019). Preservice mathematics teachers’ understanding of and abilities to
differentiate proportional relationships from nonproportional relationships. International
Journal of Science and Mathematics Education, 17(7), 1423-1443.
Arican, M. (2018). Preservice middle and high school mathematics teachers’ strategies when
solving proportion problems. International Journal of Science and Mathematics
Education, 16(2), 315–335. https://doi.org/10.1007/s10763-016-9775-1
Association of Mathematics Teacher Educators. (2017). Standards for Preparing Teachers of
Mathematics. Available online at amte.net/standards.
Ayan, R., & Isiksal-Bostan, M. (2018). Middle school students’ proportional reasoning in real
life contexts in the domain of geometry and measurement. International Journal of
Mathematical Education in Science and Technology, 50(1), 65–81.
https://doi.org/10.1080/0020739x.2018.1468042
Ball, D. L. (1991). Research on teaching mathematics: Making subject matter knowledge part of
the equation. In J. Brophy (Ed.), Advances in research on teaching: Vol. 2, Teachers’
subject matter knowledge and classroom instruction (pp. 1–47). Greenwich, CT: JAI
Press.
Ball, D. L., & Bass, H. (2003). Toward a practice-based theory of mathematical knowledge for
teaching. In Proceedings of the 2002 annual meeting of the Canadian Mathematics
Education Study Group, Edmonton, AB, CMESG (pp. 3-14).
Ball, D. L., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes
it special? Journal of Teacher Education, 59, 389–407.
Baumert, J., & Kunter, M. (2013). The COACTIV model of teachers’ professional competence.
In Cognitive activation in the mathematics classroom and professional competence of
teachers: Results from the COACTIV project (pp. 25-48). Boston, MA: Springer US.
Baumert, J., Kunter, M., Blum, W., Brunner, M., Voss, T., Jordan, A., et al. (2010). Teachers’
mathematical knowledge, cognitive activation in the classroom, and student progress.
American Educational Research Journal, 47, 133–180.
Beckmann, S., & Izsák, A. (2015). Two perspectives on proportional relationships: Extending
complementary origins of multiplication in terms of quantities. Journal for Research in
Mathematics Education, 46(1), 17-38.
Bellert, A. (2015). Effective re-teaching. Australian Journal of Learning Difficulties, 20(2), 163-
183.
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: a practical and
powerful approach to multiple testing. Journal of the Royal statistical society: series B
(Methodological), 57(1), 289-300.
Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107,
238–246. https://doi.org/10.1037/0033-2909.107.2.238
Bentler, P., & Bonett, D. (1980). Significance tests and goodness of fit in the analysis of
covariance structures. Psychological Bulletin, 88, 588–606. https://doi.org/10.1037/0033-
2909.88.3.588
Berk, D., Taber, S. B., Gorowara, C. C., & Poetzl, C. (2009). Developing prospective elementary
teachers' flexibility in the domain of proportional reasoning. Mathematical Thinking and
Learning, 11(3), 113-135.
Blum, W., & Krauss, S. (2008, March). The professional knowledge of German secondary
mathematics teachers: Investigations in the context of the COACTIV project. In
Symposium on the occasion of the 100th anniversary of ICMI, Rome (pp. 5-8).
Boston, M. (2012). Assessing instructional quality in mathematics. The Elementary School
Journal, 113(1), 76-104.
Boston, M. D., & Smith, M. S. (2011). A ‘task-centric’ approach to professional development:
Enhancing and sustaining mathematics teachers’ ability to implement cognitively
challenging mathematical tasks. ZDM, 43(6), 965-977.
Brown, T. A. (2015). Confirmatory factor analysis for applied research. Guilford publications.
Brown, R. E., Epstein, M. L., & Orrill, C. H. (2020a). When constant in a proportional
relationship isn’t constant—A sign of not-so-shared understandings. Investigations in
Mathematics Learning, 12(3), 194-207.
Brown, R.E., Weiland, T. & Orrill, C.H. (2020b). Mathematics teachers’ use of knowledge
resources when identifying proportional reasoning situations. International Journal of
Science and Mathematics Education 18, 1085–1104. https://doi.org/10.1007/s10763-019-
10006-3
Bruckmaier, G., Krauss, S., Blum, W., & Leiss, D. (2016). Measuring mathematics teachers’
professional competence by using video clips (COACTIV video). ZDM, 48, 111-124.
Buforn, À., Llinares, S., Fernández, C., Coles, A., & Brown, L. (2020). Pre-service teachers’
knowledge of the unitizing process in recognizing students’ reasoning to propose
teaching decisions. International Journal of Mathematical Education in Science and
Technology, 1-19.
Cai, J., & Sun, W. (2002). Developing students’ proportional reasoning: A Chinese perspective.
In B. Litwiller & G. Bright (Eds.), Making sense of fractions, ratios, and proportions (pp.
195–205). National Council of Teachers of Mathematics.
Campbell, P. F., Nishio, M., Smith, T. M., Clark, L. M., Conant, D. L., Rust, A. H., ... & Choi,
Y. (2014). The relationship between teachers' mathematical content and pedagogical
knowledge, teachers' perceptions, and student achievement. Journal for Research in
Mathematics Education, 45(4), 419-459.
Capon, N., & Kuhn, D. (1979). Logical reasoning in the supermarket: Adult females' use of a
proportional reasoning strategy in an everyday context. Developmental Psychology,
15(4), 450.
Carlson, J., Daehler, K. R., Alonzo, A. C., Barendsen, E., Berry, A., Borowski, A., ... & Wilson,
C. D. (2019). The refined consensus model of pedagogical content knowledge in science
education. In A. Hume, R. Cooper & A. Borowski (Eds.). Repositioning pedagogical
content knowledge in teachers’ knowledge for teaching science (pp. 77-94). Springer,
Singapore.
Charalambous, C. Y. (2016). Investigating the knowledge needed for teaching mathematics: An
exploratory validation study focusing on teaching practices. Journal of Teacher
Education, 67(3), 220-237.
Charalambous, C. Y., Hill, H. C., Chin, M. J., & McGinn, D. (2020). Mathematical content
knowledge and knowledge for teaching: Exploring their distinguishability and
contribution to student learning. Journal of Mathematics Teacher Education, 23(6), 579-
613.
Charalambous, C. Y., & Pitta-Pantazi, D. (2007). Drawing on a theoretical model to study
students’ understandings of fractions. Educational Studies in Mathematics, 64, 293-316.
Chen, C., Sonnert, G., Sadler, P. M., & Sunbury, S. (2020). The impact of high school life
science teachers’ subject matter knowledge and knowledge of student misconceptions on
students’ learning. CBE—Life Sciences Education, 19(1), ar9.
Christiansen, B., & Walther, G. (1986). Task and activity. In B. Christiansen, A. G. Howson, &
M. Otte (Eds.), Perspectives on mathematics education (pp. 243–307). Dordrecht, the
Netherlands: Reidel.
Clotfelter, C. T., Ladd, H. F., & Vigdor, J. L. (2007). Teacher credentials and student
achievement: Longitudinal analysis with student fixed effects. Economics of Education
Review, 26, 673-682.
Cochran-Smith, M., Cannady, M., McEachern, K., Mitchell, K., Piazza, P., Power, C., & Ryan,
A. (2012). Teachers’ education and outcomes: Mapping the research terrain. Teachers
College Record, 114(10), 1-49.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences, 2nd ed. Hillsdale, NJ:
Erlbaum.
Common Core State Standards Initiative. (2016). Key shifts in mathematics.
Common Core Standards Writing Team. (2022). Progressions for the Common Core State
Standards for Mathematics (February 28, 2023). Tucson, AZ: Institute for Mathematics
and Education, University of Arizona.
Copur-Gencturk, Y. (2015). The effects of changes in mathematical knowledge on teaching: A
longitudinal study of teachers' knowledge and instruction. Journal for Research in
Mathematics Education, 46(3), 280-330.
Copur-Gencturk, Y. (2021). Teachers’ conceptual understanding of fraction operations: results
from a national sample of elementary school teachers. Educational Studies in
Mathematics, 107(3), 525-545.
Copur-Gencturk, Y., & Atabas, S. (under review). Scalable Professional Development: The
Impact on Instruction of an Online, Intelligent, Interactive Program with Just-in-Time
Feedback.
Copur-Gencturk, Y., Baek, C., & Doleck, T. (2022a). A Closer Look at Teachers’ Proportional
Reasoning. International Journal of Science and Mathematics Education, 1-17.
Copur-Gencturk, Y., Choi, H. J., & Cohen, A. (2022b). Investigating teachers’ understanding
through topic modeling: a promising approach to studying teachers’ knowledge. Journal
of Mathematics Teacher Education, 1-22.
Copur-Gencturk, Y., & Doleck, T. (2021). Strategic competence for multistep fraction word
problems: An overlooked aspect of mathematical knowledge for teaching. Educational
Studies in Mathematics, 107(1), 49–70.
Copur-Gencturk, Y., Ezaki, J., & Jacobson, E. (under review) Missing Link: How Teachers’
Understanding of Student Misconceptions is Key to their Instructional Response.
Copur-Gencturk, Y., & Li, J. (2023). Teaching matters: A longitudinal study of mathematics
teachers’ knowledge growth. Teaching and Teacher Education, 121, 103949.
Copur-Gencturk, Y., & Tolar, T. (2022). Mathematics teaching expertise: A study of the
dimensionality of content knowledge, pedagogical content knowledge, and content-
specific noticing skills. Teaching and Teacher Education, 114, 103696.
Council for the Accreditation of Educator Preparation (CAEP). (2018). K-6 elementary teacher
preparation standards [Initial Licensure Programs]. Retrieved from
http://caepnet.org/~/media/Files/caep/standards/2018-caep-k-6-elementary-teacherprepara.pdf?la=en
Cramer, K., & Lesh, R. (1988). Rational number knowledge of preservice elementary education
teachers. In Proceedings of the 10th annual meeting of the North American Chapter of
the International Group for Psychology of Mathematics Education (pp. 425-431).
Cramer, K., Post, T., & Currier, S. (1993). Learning and teaching ratio and proportion: Research
implications. In D. Owens (Ed.), Research Ideas For the Classroom (pp. 159-178). New
York: Macmillan Publishing Company.
Creswell, J. W., & Poth, C. N. (2017). Qualitative Inquiry and Research Design: Choosing
Among Five Approaches. SAGE Publications.
Darling-Hammond, L. (1999). Teacher quality and student achievement: A review of state policy
evidence.
Darling-Hammond, L., Hammerness, K., Grossman, P., Rust, F., & Shulman, L. (2005). The
design of teacher education programs. Preparing teachers for a changing world: What
teachers should learn and be able to do, 1, 390-441.
de Corte, E., Greer, B., & Verschaffel, L. (1996). Mathematics teaching and learning. In D. C.
Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 491–549). New
York: Macmillan.
Depaepe, F., Verschaffel, L., & Kelchtermans, G. (2013). Pedagogical content knowledge: A
systematic review of the way in which the concept has pervaded mathematics educational
research. Teaching and Teacher Education, 34, 12-25.
Even, R., & Tirosh, D. (1995). Subject-matter knowledge and knowledge about students as
sources of teacher presentations of the subject-matter. Educational Studies in
Mathematics, 29(1), 1-20.
Fennema, E., & Franke, M. L. (1992). Teachers' knowledge and its impact. In D. A. Grouws
(Ed.), Handbook of research on mathematics teaching and learning: A project of the
National Council of Teachers of Mathematics (pp. 147–164). Macmillan Publishing Co,
Inc.
Fernández, C., Llinares, S., & Valls, J. (2013). Primary school teachers’ noticing of students’
mathematical thinking in problem solving. The Mathematics Enthusiast, 10(1), 441-468.
Fisher, L. C. (1988). Strategies used by secondary mathematics teachers to solve proportion
problems. Journal for Research in Mathematics Education, 19(2), 157-168.
Gess-Newsome, J. (1999). Pedagogical content knowledge: An introduction and orientation. In J.
Gess-Newsome, & N. G. Lederman (Eds.), Examining pedagogical content knowledge
(pp. 3–17). Dordrecht: Kluwer Academic Publishers.
Gess-Newsome, J. (2015). A model of teacher professional knowledge and skill including PCK:
Results of the thinking from the PCK Summit. In A. Berry, P. J. Friedrichsen, & J.
Loughran (Eds.), Reexamining pedagogical content knowledge in science education (pp.
28–42). New York: Routledge.
Gitomer, D. H., & Zisk, R. C. (2015). Knowing what teachers know. Review of Research in
Education, 39(1), 1-53.
Grossman, P. L. (1990). The Making of a Teacher: Teacher knowledge and teacher education.
New York: Teachers College Press.
Grossman, P., & Loeb, S. (Eds.). (2021). Alternative routes to teaching: Mapping the new
landscape of teacher education. Harvard Education Press.
Grossman, P. L., & Richert, A. E. (1988). Unacknowledged knowledge growth: A re-examination
of the effects of teacher education. Teaching and Teacher Education, 4(1), 53-62.
Grossman, P. L., Schoenfeld, A., & Lee, C. (2005). Teaching subject matter. In L. Darling-Hammond
& J. Bransford (Eds.), Preparing teachers for a changing world: What
teachers should learn and be able to do (pp. 201–231). San Francisco: Jossey-Bass.
Hadfield, O. D., Littleton, C. E., Steiner, R. L., & Woods, E. S. (1998). Predictors of preservice
elementary teacher effectiveness in the micro-teaching of mathematics lessons. Journal
of Instructional Psychology, 25(1), 34.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2014). Multivariate data analysis (7th
ed.). Pearson.
Harel, G., & Behr, M. (1995). Teachers' Solutions for Multiplicative Problems. Hiroshima
Journal of Mathematics Education, 3, 31-51.
Harris, D. N., & Sass, T. R. (2011). Teacher training, teacher quality and student achievement.
Journal of Public Economics, 95, 798-812.
Hattie, J. A. (2003). Teachers make a difference: What is the research evidence? Australian
Council for Educational Research Annual Conference on: Building Teacher Quality.
Hiebert, J., Berk, D., & Miller, E. (2017). Relationships between mathematics teacher
preparation and graduates’ analyses of classroom teaching. The Elementary School
Journal, 117(4), 687-707.
Hiebert, J., Morris, A. K., Berk, D., & Jansen, A. (2007). Preparing teachers to learn from
teaching. Journal of Teacher Education, 58(1), 47-61.
Hill, H., & Ball, D. L. (2009). The curious—and crucial—case of mathematical knowledge for
teaching. Phi Delta Kappan, 91(2), 68-71.
Hill, H. C., Ball, D. L., & Schilling, S. G. (2008). Unpacking pedagogical content knowledge:
Conceptualizing and measuring teachers’ topic-specific knowledge of students. Journal
for Research in Mathematics Education, 39(4), 372-400.
Hill, H. C., Charalambous, C. Y., & Chin, M. J. (2019). Teacher characteristics and student
learning in mathematics: A comprehensive assessment. Educational Policy, 33(7), 1103-1134.
Hill, H. C., & Chin, M. (2018). Connections between teachers’ knowledge of students,
instruction, and achievement outcomes. American Educational Research Journal, 55(5),
1076-1112.
Hill, H. C., Rowan, B., & Ball, D. L. (2005). Effects of teachers’ mathematical knowledge for
teaching on student achievement. American Educational Research Journal, 42(2), 371-406.
Hill, H. C., Schilling, S. G., & Ball, D. L. (2004). Developing measures of teachers’ mathematics
knowledge for teaching. The Elementary School Journal, 105(1), 11-30.
Hines, E., & McMahon, M. T. (2005). Interpreting middle school students' proportional
reasoning strategies: Observations from preservice teachers. School Science and
Mathematics, 105(2), 88-105.
Hox, J. J., Moerbeek, M., & Van de Schoot, R. (2017). Multilevel analysis: Techniques and
applications. Routledge.
Izsák, A., & Jacobson, E. (2017). Preservice teachers' reasoning about relationships that are and
are not proportional: A knowledge-in-pieces account. Journal for Research in
Mathematics Education, 48(3), 300-339.
Izsák, A., Jacobson, E., De Araujo, Z., & Orrill, C. H. (2012). Measuring mathematical
knowledge for teaching fractions with drawn quantities. Journal for Research in
Mathematics Education, 43(4), 391-427.
Izsák, A., Orrill, C. H., Cohen, A. S., & Brown, R. E. (2010). Measuring middle grades teachers'
understanding of rational numbers with the mixture Rasch model. The Elementary School
Journal, 110(3), 279-300.
Jacobson, E., & Izsák, A. (2014). Using coordination classes to analyze preservice middle-grades
teachers’ difficulties in determining direct proportion relationships. Research Trends in
Mathematics Teacher Education, 47-65.
Jacobson, E., Lobato, J., & Orrill, C. H. (2018). Middle school teachers’ use of mathematics to
make sense of student solutions to proportional reasoning problems. International
Journal of Science and Mathematics Education, 16(8), 1541-1559.
Jakobsen, A., Thames, M. H., & Ribeiro, C. M. (2013). Delineating issues related to horizon
content knowledge for mathematics teaching. In Eight Congress of European Research in
Mathematics Education (CERME-8). Antalya, Turkey.
Jordan, A., Krauss, S., Löwen, K., Blum, W., Neubrand, M., Brunner, M., et al. (2008).
Aufgaben im COACTIV-Projekt: Zeugnisse des kognitiven Aktivierungspotentials im
deutschen Mathematikunterricht [Tasks in the COACTIV project: Testaments to
cognitive activation potential in German mathematics instruction]. Journal für
Mathematikdidaktik, 29(2), 83–107.
Kafka, J. (2016). In search of a grand narrative: the turbulent history of teaching. Handbook of
research on teaching, 69-126.
Kersting, N. B., Givvin, K. B., Sotelo, F. L., & Stigler, J. W. (2010). Teachers’ analyses of
classroom video predict student learning of mathematics: Further explorations of a novel
measure of teacher knowledge. Journal of Teacher Education, 61(1-2), 172-181.
Kersting, N. B., Givvin, K. B., Thompson, B. J., Santagata, R., & Stigler, J. W. (2012).
Measuring usable knowledge: Teachers’ analyses of mathematics classroom videos
predict teaching quality and student learning. American Educational Research Journal,
49(3), 568-589.
Kilpatrick, J., Blume, G., Heid, M., Wilson, J., Wilson, P., & Zbiek, R. (2015). Mathematical
understanding for secondary teaching: A framework. In M. Heid, P. Wilson, & G. Blume
(Eds.), Mathematical understanding for secondary teaching: A framework and
classroom-based situations (pp. 9-30). Information Age Publishing, Inc.
Kleickmann, T., Richter, D., Kunter, M., Elsner, J., Besser, M., Krauss, S., & Baumert, J. (2013).
Teachers’ content knowledge and pedagogical content knowledge: The role of structural
differences in teacher education. Journal of Teacher Education, 64(1), 90-106.
Kleickmann, T., Tröbst, S., Heinze, A., Bernholt, A., Rink, R., & Kunter, M. (2017). Teacher
knowledge experiment: Conditions of the development of pedagogical content
knowledge. Competence Assessment in Education: Research, Models and Instruments,
111-129.
Krauss, S., Baumert, J., & Blum, W. (2008). Secondary mathematics teachers’ pedagogical
content knowledge and content knowledge: Validation of the COACTIV
constructs. ZDM, 40, 873-892.
Kunter, M. (2013). Cognitive activation in the mathematics classroom and professional
competence of teachers results from the COACTIV Project. Springer-Verlag Berlin
Heidelberg.
Lamon, S. J. (2005). Teaching fractions and ratios for understanding: Essential content
knowledge and instructional strategies for teachers (2nd ed.). Lawrence Erlbaum
Associates.
Lamon, S. J. (2007). Rational numbers and proportional reasoning. In F. K. Lester (Ed.), Second
handbook of research on mathematics teaching and learning (pp. 629-667). Charlotte,
NC: Information Age Press.
Lamon, S. J. (2020). Teaching Fractions and Ratios for Understanding: Essential Content and
Instructional Strategies for Teachers. Routledge.
Li, Y., & Kaiser, G. (2011). Expertise in mathematics instruction: Advancing research and
practice from an international perspective (pp. 3-15). Springer US.
Livy, S., & Vale, C. (2011). First year pre-service teachers’ mathematical content knowledge:
Methods of solution for a ratio question. Mathematics Teacher Education and
Development, 13(2), 22–43.
Lobato, J., & Ellis, A. (2010). Developing essential understanding of ratios, proportions &
proportional reasoning: grades 6–8. Reston, VA: National Council of Teachers of
Mathematics.
Lobato, J., Ellis, A., & Zbiek, R. M. (2010). Developing essential understanding of ratios,
proportions, and proportional reasoning for teaching mathematics: Grades 6-8. Reston,
VA: National Council of Teachers of Mathematics.
Magnusson, S., Krajcik, J., & Borko, H. (1999). Nature, sources, and development of
pedagogical content knowledge for science teaching. In Examining pedagogical content
knowledge: The construct and its implications for science education (pp. 95-132).
Dordrecht: Springer Netherlands.
Masters, J. (2012). Eighth Grade In-service Teachers' Knowledge of Proportional Reasoning and
Functions: A Secondary Data Analysis. International Journal for Mathematics Teaching
& Learning.
Mayer, R. E. (2004a). Should there be a three-strikes rule against pure discovery learning?
American Psychologist, 59, 14–19.
Mayer, R. E. (2004b). Teaching of subject matter. Annual Review of Psychology, 55, 715–744.
https://doi.org/10.1146/annurev.psych.55.082602.133124
Metzler, J., & Woessmann, L. (2012). The impact of teacher subject knowledge on student
achievement: Evidence from within-teacher within-student variation. Journal of
Development Economics, 99(2), 486-496.
Mewborn, D. (2003). Teachers, teaching, and their professional development. In J. Kilpatrick,
W. G. Martin & D. Schifter (Eds.), A research companion to principles and standards for
school mathematics (pp. 45–52). Reston, VA: National Council of Teachers of
Mathematics.
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A
framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.
Mokyr, J. (2001, December). The rise and fall of the factory system: technology, firms, and
households since the industrial revolution. In Carnegie-Rochester Conference Series on
Public Policy (Vol. 55, No. 1, pp. 1-45). North-Holland.
Monk, D. H. (1994). Subject area preparation of secondary mathematics and science teachers
and student achievement. Economics of Education Review, 13(2), 125-145.
National Commission on Excellence in Education. (1983). A nation at risk: The imperative for
educational reform. The Elementary School Journal, 84(2), 113-130.
National Commission on Teaching and America’s Future (1996). What Matters Most: Teaching
for America's Future. Report of the National Commission on Teaching & America's
Future. Woodbridge, VA: National Commission on Teaching and America's Future.
National Council of Teachers of Mathematics (NCTM). (1989). Curriculum and Evaluation
Standards for School Mathematics. Reston, Va.: NCTM.
National Council of Teachers of Mathematics (NCTM). (2000). Principles and standards for
school mathematics. Reston, Va.: NCTM.
National Governors Association Center for Best Practices & Council of Chief State School
Officers. (2010). Common Core State Standards for Mathematics. Washington, DC:
Authors.
National Mathematics Advisory Panel. (2008). Foundations for success: The final report of the
National Mathematics Advisory Panel. Washington, DC: U.S. Department of Education.
National Research Council. (2001). Adding it up: Helping children learn mathematics.
Washington, DC: The National Academies Press. https://doi.org/10.17226/9822
Ni, Y., & Zhou, Y. D. (2005). Teaching and learning fraction and rational numbers: The origins
and implications of whole number bias. Educational Psychologist, 40(1), 27-52.
No Child Left Behind Act of 2001, 20 U.S.C. § 6319 (2008).
Ölmez, İ. B. (2022). Preservice teachers’ understandings of division and ratios in forming
proportional relationships. Mathematics Education Research Journal, 1-25.
Orrill, C. H., & Brown, R. E. (2012). Making sense of double number lines in professional
development: Exploring teachers’ understandings of proportional relationships. Journal
of Mathematics Teacher Education, 15(5), 381-403.
Orrill, C. H., Copur-Gencturk, Y., Cohen, A., & Templin, J. (2020). Revisiting purpose and
conceptualisation in the design of assessments of mathematics teachers’
knowledge. Research in Mathematics Education, 22(2), 209-224.
Pascal, N. R. (1984). The legacy of Roman education. The Classical Journal, 79(4), 351-355.
Pitta-Pantazi, D., & Christou, C. (2011). The structure of prospective kindergarten teachers’
proportional reasoning. Journal of Mathematics Teacher Education, 14(2), 149-169.
Post, T., Harel, G., Behr, M., & Lesh, R. (1988). Intermediate teachers knowledge of rational
number concepts. In Fennema, et al. (Eds.), Papers from First Wisconsin Symposium for
Research on Teaching and Learning Mathematics (pp. 194-219). Madison, WI:
Wisconsin Center for Education Research.
Rabe-Hesketh, S., & Skrondal, A. (2008). Multilevel and Longitudinal Modeling Using Stata,
Second Edition. Stata Press.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical Linear Models: Applications and Data
Analysis Methods. SAGE.
Riley, K. R. (2010). Teachers’ understanding of proportional reasoning. In P. Brosnan, D. B.
Erchick, & L. Flevares (Eds.), Proceedings of the 32nd annual meeting of the North
American Chapter of the International Group for the Psychology of Mathematics
Education (pp. 1055-1061). Columbus, OH: The Ohio State University.
Rowan, B., Correnti, R., & Miller, R. (2002). What large-scale survey research tells us about
teacher effects on student achievement: Insights from the prospects study of elementary
schools. The Teachers College Record, 104, 1525-1567.
Rowland, T., & Turner, F. (2007). Developing and using the ‘Knowledge Quartet’: A framework
for the observation of mathematics teaching. The Mathematics Educator, 10(1), 107-123.
Sadler, P. M., Sonnert, G., Coyle, H. P., Cook-Smith, N., & Miller, J. L. (2013). The influence of
teachers’ knowledge on student learning in middle school physical science classrooms.
American Educational Research Journal, 50(5), 1020-1049.
Satorra, A., & Saris, W. E. (1985). Power of the likelihood ratio test in covariance structure
analysis. Psychometrika, 50, 83-90.
Schoenfeld, A. H. (2020). Reframing teacher knowledge: A research and development
agenda. ZDM, 52(2), 359-376.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464.
Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational
Researcher, 15(2), 4-14.
Siegler, R. S., Duncan, G. J., Davis-Kean, P. E., Duckworth, K., Claessens, A., Engel, M., ... &
Chen, M. (2012). Early predictors of high school mathematics achievement.
Psychological Science, 23(7), 691-697.
Simon, M. A., & Blume, G. W. (1994). Building and understanding multiplicative relationships:
A study of prospective elementary teachers. Journal for Research in Mathematics
Education, 25(5), 472-494.
Son, J. W. (2013). How preservice teachers interpret and respond to student errors: ratio and
proportion in similar rectangles. Educational Studies in Mathematics, 84(1), 49-70.
Sowder, J., Armstrong, B., Lamon, S., Simon, M., Sowder, L., & Thompson, A. (1998).
Educating teachers to teach multiplicative structures in the middle grades. Journal of
Mathematics Teacher Education, 1(2), 127-155.
StataCorp. (2019). Stata statistical software: Release 16. College Station, TX: StataCorp LLC.
Steiger, J. H. (1990). Structural model evaluation and modification: An interval estimation
approach. Multivariate Behavioral Research, 25(2), 173-180.
Steiger, J. H., & Lind, J. C. (1980). Statistically based tests for the number of common factors.
Paper presented at the Annual Meeting of the Psychometric Society, Iowa City, IA.
Stone, D. (2012). Policy paradox: The art of political decision making (3rd ed.). W.W. Norton &
Co.
Swanson, R. A., & Holton, E. F. (2005). Research in Organizations: Foundations and Methods
in Inquiry. Berrett-Koehler Publishers.
Tabachnick, B. G., & Fidell, L. S. (2013). Using Multivariate Statistics (6th ed.). Pearson.
Tatto, M. T., Schwille, J., Senk, S., Ingvarson, L., Peck, R., & Rowley, G. (2008). Teacher
Education and Development Study in Mathematics (TEDS-M): Policy, practice, and
readiness to teach primary and secondary mathematics. Conceptual framework. East
Lansing, MI: Teacher Education and Development International Study Center, College of
Education, Michigan State University.
Tatto, M. T. (2013). The Teacher Education and Development Study in Mathematics (TEDS-M):
Policy, Practice, and Readiness to Teach Primary and Secondary Mathematics in 17
Countries. Technical Report. International Association for the Evaluation of Educational
Achievement. Herengracht 487, Amsterdam, 1017 BT, The Netherlands.
Tatto, M. T., Schwille, J., Senk, S. L., Ingvarson, L., Rowley, G., Peck, R., Bankov, K.,
Rodriguez, M. & Reckase, M. (2012). Policy, Practice, and Readiness to Teach Primary
and Secondary Mathematics in 17 Countries. Findings from the IEA Teacher Education
and Development Study in Mathematics (TEDS-M). Amsterdam: International
Association for the Evaluation of Student Achievement.
Tchoshanov, M. A. (2011). Relationship between teacher knowledge of concepts and
connections, teaching practice, and student achievement in middle grades
mathematics. Educational Studies in Mathematics, 76(2), 141-164.
Thompson, P. W., & Saldanha, L. A. (2003). Fractions and multiplicative reasoning. In J.
Kilpatrick, G. Martin, & D. Schifter (Eds.), Research companion to the principles and
standards for school mathematics (pp. 95–113). National Council of Teachers of
Mathematics.
Thompson, P. W., & Thompson, A. G. (1994). Talking about rates conceptually, part I: A
teacher’s struggle. Journal for Research in Mathematics Education, 25(3), 279-303.
Tucker, L. R., & Lewis, C. (1973). A reliability coefficient for maximum likelihood factor
analysis. Psychometrika, 38, 1–10.
U.S. Department of Education, National Center for Education Statistics. (2021). Table 209.10:
Number and percentage distribution of teachers in public and private elementary and
secondary schools, by selected teacher characteristics: Selected years, 1987-88 through
2017-18. In U.S. Department of Education, National Center for Education Statistics
(Ed.), Digest of Education Statistics (2021 ed.). Retrieved from
https://nces.ed.gov/programs/digest/d21/.
Mellenbergh, G. J., & van den Brink, W. P. (1998). The measurement of individual
change. Psychological Methods, 3(4), 470.
Van Dooren, W., De Bock, D., Hessels, A., Janssens, D., & Verschaffel, L. (2005). Not
everything is proportional: Effects of age and problem type on propensities for
overgeneralization. Cognition and instruction, 23(1), 57-86.
Van Dooren, W., Vamvakoussi, X., & Verschaffel, L. (2018). Proportional reasoning.
Educational Practices Series 30. UNESCO International Bureau of Education.
Veal, W. R., & MaKinster, J. (1999). Pedagogical content knowledge taxonomies. Electronic
Journal of Science Education, 3(4).
Vergnaud, G. (1988). Multiplicative structures. In J. Hiebert & M. Behr (Eds.), Number concepts
and operations in the middle grades (pp. 141-161). NCTM.
Vosniadou, S., & Verschaffel, L. (2004). Extending the conceptual change approach to
mathematics learning and teaching. Learning and Instruction, 14, 445–451.
Walsh, K., & Jacobs, S. (2007). Alternative Certification Isn't Alternative. Thomas B. Fordham
Institute.
Wayne, A. J., & Youngs, P. (2003). Teacher characteristics and student achievement gains: A
review. Review of Educational Research, 73, 89-122.
Weiland, T., Orrill, C., Brown, R., & Nagar, G. (2019). Mathematics teachers’ ability to identify
situations appropriate for proportional reasoning. Research in Mathematics Education,
21(3), 233–250. https://doi.org/10.1080/14794802.2019.1579668
Weiland, T., Orrill, C. H., Nagar, G. G., Brown, R. E., & Burke, J. (2021). Framing a robust
understanding of proportional reasoning for teachers. Journal of Mathematics Teacher
Education, 24(2), 179-202.
Appendix A
Table A1
List of KOSM Items
Qualitative Comparison Item 1
Qualitative Comparison Item 2
Additive Item 1
Additive Item 2
Linear Item 1
Linear Item 2
Indirect Item 1
Indirect Item 2
Table A2
Item Concepts Covered in the Student Pre- and Post-test
Item 1: Identifying proportional relationships
Item 2: Describing proportional relationships through equations and other representations
Item 3: Graphing proportional relationships
Item 4: Calculating and interpreting unit rates
Item 5: Ratio operations
Item 6: Scaling appropriately
Item 7: Scaling appropriately
Item 8: Calculating and identifying equivalent ratios
Item 9: Calculating and identifying equivalent ratios
Item 10: Identifying whether situations are proportional or not (looks proportional but is not)
Item 11: Identifying whether situations are proportional or not (indirect proportion)
Appendix B
Classroom artifact rubrics for instructional quality
Table B1
Potential of the Task
Score 1
Brief description: The task’s potential is limited as it only requires students to reproduce answers through definitions, facts, or memorization.
Score justification (sample task from the data set): The task simply has students reproduce the definition of a ratio and use their knowledge of ratio facts to select the correct answers.

Score 2
Brief description: The task guides students to use a rote procedure to solve rather than providing the potential to refine students’ conceptual understanding.
Score justification (sample task from the data set): The task names the procedure students should use and has them apply the same procedure to similar problems (missing-value problems) without diving into the conceptual components of why proportions are appropriate or how they can represent the situations.

Score 3
Brief description: The task could provide the potential to develop students’ conceptual understanding but lacks a prompt encouraging students to justify or write out their thinking.
Score justification (sample task from the data set): The task offers the opportunity to develop conceptual understanding as it requires students to make sense of the situations (i.e., the task is not prescriptive in what procedure to use), but it does not encourage students to explain their thinking.

Score 4
Brief description: The task provides opportunities for students to develop their conceptual understanding through the inclusion of nonroutine situations and by having students justify and demonstrate their thinking.
Score justification (sample task from the data set): The task promotes students’ conceptual understanding of the multiplicative relationship found in ratios as it prompts students to discuss their thinking and provides the opportunity to explore a common misunderstanding when comparing ratios, namely that a constant difference does not always equate to the same relationship in a ratio.
Note. Images from Copur-Gencturk and Atabas (under review). For further details on scoring,
see Boston (2012).
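The misunderstanding named in the score-4 justification, treating ratios with a constant difference as if they were equivalent, can be made concrete with a brief sketch (illustrative only; the function names are not from the study’s instruments):

```python
# Illustrative sketch: contrasting the additive (incorrect) and
# multiplicative (correct) ways of comparing two ratios a:b and c:d.
from fractions import Fraction

def same_difference(a, b, c, d):
    # The common student error: judge ratios by their differences.
    return b - a == d - c

def same_ratio(a, b, c, d):
    # The multiplicative comparison that defines equivalent ratios.
    return Fraction(a, b) == Fraction(c, d)

# 2:3 and 4:5 share a difference of 1, yet the ratios are not equal.
print(same_difference(2, 3, 4, 5))  # True (the additive error accepts them)
print(same_ratio(2, 3, 4, 5))       # False (2/3 is not 4/5)

# Scaling 2:3 by 2 gives 4:6, which is the same ratio.
print(same_ratio(2, 3, 4, 6))       # True
```

A student reasoning additively would accept 2:3 and 4:5 as "the same" because both pairs differ by 1; the multiplicative check shows why only 4:6 preserves the relationship.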
Table B2
Implementation of the Task
Score 1: The students’ solutions suggested that the level of engagement with the tasks was superficial, such as recalling facts or filling in the blanks.
Score 2: The students’ solutions suggested that the level of engagement focused on repeated practice of the same rote procedure for all students.
Score 3: The students’ solutions suggested that the level of engagement involved students in making sense of and answering varied types of problems, yet there was no clear evidence of their thinking.
Score 4: The students’ work suggested that the level of engagement involved students engaging with cognitively challenging tasks and simultaneously demonstrating their thinking through written explanations.
Note. Adapted from Copur-Gencturk and Atabas (under review).
Table B3
Coherent Mathematics for Conceptual Understanding
Score 1: The overall focus of the lesson goal, task selection (including its cognitive demand), and teachers’ examination of their students’ work reinforced a superficial understanding of the concepts, often through an explicit focus on individual skills without making connections.
Score 2: The overall focus of the lesson goal, task selection (including its cognitive demand), and teachers’ examination of their students’ work centered on procedural understanding, with infrequent opportunities for the development of students’ conceptual understanding of the concepts.
Score 3: The overall focus of the lesson goal, task selection (including its cognitive demand), and teachers’ examination of their students’ work demonstrates that the teacher had a coherent plan for opportunities to support students’ conceptual understanding of the concepts using nonroutine tasks and explicit discussion of the connections across representations and concepts. Although the plan was coherent and focused on conceptual understanding, the teacher’s expectations and interpretation of students’ work suggest that this did not play out in situ. Instead, there appeared to be a transition from the conceptual focus toward students’ procedural mastery and success.
Score 4: The overall focus of the lesson goal, task selection (including its cognitive demand), and teachers’ examination of their students’ work demonstrates that there were repeated and consistent opportunities for the advancement of students’ conceptual understanding of the concepts. The teacher was consistent in this conceptual focus across the planning (goal of the lesson and task selection) and the enactment (teacher expectations and analysis of students’ work).
Note. Adapted from Copur-Gencturk and Atabas, (under review).
Abstract
Teachers’ knowledge of students’ misunderstandings (KOSM) is an essential theoretical component of teacher knowledge. Yet the relationship between KOSM and key outcomes such as instructional quality and student achievement is unclear. For this dissertation project, I conducted two studies aimed at understanding different facets of teachers’ KOSM.
Study 1 used a national sample of 743 in-service middle school mathematics teachers to test whether two theoretical components of KOSM, predicting common misunderstandings (PSM) and understanding the reasons leading to common misunderstandings (URSM), were distinct constructs. Additionally, Study 1 used descriptive statistics to explore the grade-level content knowledge (GLCK), PSM, and URSM this national sample possessed. I found that GLCK, PSM, and URSM were unique components of knowledge and that teachers possessed the grade-level content knowledge and were able to predict the common misunderstandings, but the rationales they provided for why students demonstrate a misunderstanding were largely attributed to procedures rather than to the underlying concepts.
The sample for Study 2 included 37 teachers and 1,205 of their students. Linear regression was used to test the relationship between GLCK, PSM, and URSM and three measures of instructional quality (task selection, task enactment, and coherence of the mathematics), and hierarchical linear modeling was used to assess the relationship between these knowledge components and student achievement. Overall, URSM and PSM were significantly related to task enactment, coherence of the mathematics, and student learning, suggesting that URSM and PSM are important components of KOSM and should be considered in future work.
Asset Metadata
Creator
Ezaki, John (author)
Core Title
The importance of WHY: teachers’ knowledge of student misunderstandings of ratios and proportional relationships
School
Rossier School of Education
Degree
Doctor of Philosophy
Degree Program
Urban Education Policy
Degree Conferral Date
2023-12
Publication Date
10/06/2023
Defense Date
09/29/2023
Publisher
Los Angeles, California (original); University of Southern California (original); University of Southern California. Libraries (digital)
Tag
knowledge of student misunderstandings,OAI-PMH Harvest,pedagogical content knowledge,proportional relationships,ratios,teacher knowledge
Format
theses (aat)
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Copur-Gencturk, Yasemin (committee chair), Cohen, Allan (committee member), Hasan, Angela (committee member)
Creator Email
ezakijohn@gmail.com,jezaki@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-oUC113758750
Unique identifier
UC113758750
Identifier
etd-EzakiJohn-12414.pdf (filename)
Legacy Identifier
etd-EzakiJohn-12414
Document Type
Dissertation
Rights
Ezaki, John
Internet Media Type
application/pdf
Type
texts
Source
20231013-usctheses-batch-1101 (batch); University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright.
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email
cisadmin@lib.usc.edu