Running head: MICRODEVELOPMENTS IN ADAPTIVE EXPERTISE 1
MICRODEVELOPMENTS IN ADAPTIVE EXPERTISE
IN STEM-BASED, ILL-STRUCTURED PROBLEM SOLVING
by
Michelle Viotti
_____________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2015
Copyright 2015 Michelle Viotti
Dedication
To
Kristina the Brave
Acknowledgements
Gratitude goes to the Chair of my committee, Dr. Pedro Garcia, who gave me the
psychological space to conceptualize, matching with his support my intent to begin mastering a
complex topic. At a meta level, the study represented a chance to enhance my own preparation
for future learning, asking questions and discovering in the context of my own novel problem
solving. In fact, this process helped me personally develop an awareness of some of the finer
points of adaptive expertise, persistence perhaps foremost among them.
To be more prepared for future learning is indeed a gift. Dr. Garcia and other members
of my committee, Dr. Rudy Castruita and Dr. Michael Escalente, ushered me along to what I
could achieve on this learning trajectory, which in turn readies me for more authentic, real-world
contributions. They inspire me with what could be named grand noble purpose, but simply
comes from their character and the reality of what they have given to students, teachers, and
administrators in their careers. All of my professors at Rossier will find aspects of their
teachings in this thesis, which reflects in many ways a synthesis of what their mentoring in this
learning experience helped achieve. I intend to “share it forward” as I continue to learn and
grow in my own career and mentor others.
Deep and indescribable appreciation extends first, last, and always to My Family and
Friends, who richly deserve the bolded capitalization for their Continuous Encouragement and
the Power of its Meaning to me. I am always humbled by their belief in me, and would never be
able to reach, achieve, and contribute without the grace and goodness they bring to my life
experience.
Table of Contents
List of Tables 8
List of Figures 9
Abstract 10
Chapter One: Overview of the Study 11
Background of the Problem 13
National Context 13
School Context 16
Statement of the Problem 22
Purpose of the Study 25
Research Question 26
Significance of the Study 27
Potential Contributions to Assessments of Adaptive Expertise 27
Potential Contributions to Instructional Design for Adaptive Expertise 29
Potential Contributions of Value to Stakeholders 31
Limitations 32
Delimitations 33
Definitions 35
Organization of the Study 41
Chapter Two: Literature Review 42
Theoretical Foundations of the Study 42
Preparation for Future Learning 43
Adaptive Expertise 44
Constructivist Pedagogy 44
Dynamic Skill Theory 52
Cognitive Load Theory 57
Synthesis and Implications of the Theoretical Foundations 59
Dependent Variable: Adaptive Expertise in Ill-structured STEM Problem Solving 60
Independent Variables: Dimensions of Preparation for Future Learning 67
Dimension 1: Knowledge and Skills 68
Dimension 2: Inquiry and Innovation 74
Dimension 3: Motivation 80
Synthesis of Independent Variables 86
Research Relating the Dependent Variable to the Independent Variables 87
Adaptive Expertise and Knowledge and Skills 88
Adaptive Expertise and Inquiry and Innovation 90
Adaptive Expertise and Motivation 92
Synthesis of Gaps in the Literature 93
Conclusion 96
Chapter Three: Methodology 97
Sample and Population 98
Description of Site and Participants 98
Instrumentation 100
Instrument Description 100
Conceptual Framework for Instrumentation 103
Process of Instrument Development 108
Field Testing 108
Reliability and Validity of Instrument 110
Data Collection Procedures 114
Participant Demographic Data 114
Participant Learning Data 115
Data Analysis Procedures 118
Ethical Considerations 118
Chapter Four: Findings 119
Finding 1: Complex Individual Patterns Overall 120
Finding 2: Highly Variable, Unstable Trajectories in Motivation 121
Finding 3: Low-to-Average Scores in Knowledge and Skills 124
Finding 4: Modest Between-Problem Improvements and Novice-level Mental Models 125
Discussion 127
Implications for Theory: Finding 1 129
Implications for Theory: Finding 2 131
Implications for Theory: Finding 3 133
Implications for Theory: Finding 4 134
Chapter Five: Conclusions 138
Limitations 139
Implications for Practice 140
Assessing Adaptive Expertise and Diagnosing Learners’ PFL Needs 141
Providing Design Guidelines for CBLE-based Adaptive Expertise 141
Recommendations for Future Research Questions 142
Conclusions 145
References 148
Appendix A Evidence-centered Design Layer 1: Domain Analysis 179
STEM Problem-solving CBLE Framework from Domain Analysis 179
Problem-solving Step 1: Define Problems, Goals & Constraints 186
Problem-solving Step 2: Gather Needed Information & Evidence 188
Problem-solving Step 3: Propose Solutions Based on Evidence 189
Problem-solving Step 4: Justify Solutions Based on Evidence 190
Problem-solving Step 5: Evaluate the Effectiveness of the Solution 191
Summary 192
Appendix B Evidence-centered Design Layer 2: Domain Modeling 194
Appendix C Evidence-centered Design Layer 3: Conceptual Assessment Framework 196
Student Model 197
Task Model 199
Evidence Model 201
Measurement Model 203
Presentation Model 204
Assembly Model 205
Delivery System Model 206
Appendix D Evidence-centered Design Layer 4: Assessment Implications 207
Appendix E Evidence-centered Design Layer 5: Assessment Delivery Infrastructure 208
Appendix F Adaptive Expertise Pre/Post Assessment Items 210
Intervention Part 1: Preliminary Data Collection 210
Intervention Part 2: CBLE Data Collection in STEM Problem Solving 215
Intervention Part 3: Observations 218
Appendix G Participant Patterns 219
Person 1 Data 219
Person 2 Data 225
Person 3 Data 231
Person 4 Data 237
Person 5 Data 243
Person 6 Data 249
Person 7 Data 255
Person 8 Data 261
Person 9 Data 267
Person 10 Data 273
Person 11 Data 279
Person 12 Data 285
List of Tables
Table 1: Estimates of Functional & Optimal Developmental Levels 54
Table 2: Sub-Variables for Dimension 1: Knowledge & Skills 74
Table 3: Sub-Variables for Dimension 2: Inquiry & Innovation 80
Table 4: Sub-Variables for Dimension 3: Motivation 86
Table 5: Characteristics of Participants’ Schools 99
Table 6: Descriptions & Purposes of ECD Layers 105
Table 7: Domain-modeling Design Outline 195
Table 8: Student Model: Sample Taxonomic Alignment 199
Table 9: Sample Mapping Between Standards & Microdevelopmental Level 202
Table 10: CBLE Assembly Model: Steps & Sub-steps 205
Table 11: Likert-like Scales Used in the Study 210
Table 12: Survey Questions: Basic Demographic Data 210
Table 13: Survey Questions: Prior Knowledge 211
Table 14: Pre/Post-Assessment Survey 213
Table 15: Data Collection Embedded in the CBLE 215
List of Figures
Figure 1: 2-Dimensional Model of Adaptive Expertise 61
Figure 2: 4-Dimensional Model of Adaptive Expertise 63
Figure 3: Comparisons of Problem-solving and Conceptual Change Models, STEM Practices, & SRL Processes 181
Figure 4: Task Model Variables 200
Figure 5: CBLE Presentation: Common Elements 204
Figure 6: CBLE Assembly Model 206
Figure 7: Assembly Delivery Infrastructure 208
Figure 8: Pre/Post Concept Maps: Problem-solving Schemata 212
Figures G-1A – G-12F: Participant Data 219
Abstract
The purpose of this exploratory study is to understand how educators in high-need environments
develop adaptive expertise in STEM-related, ill-structured problem solving. The research
question considered what microdevelopmental patterns emerged in three proposed dimensions of
adaptive expertise (knowledge and skills, inquiry and innovation, and motivation) and their
transfer through time. Educator participants were enrolled in a university-level education degree
program and taught underserved students. The research design included a computer-based
learning environment with two STEM-based, ill-structured problems. Data collection occurred
through a pre-study demographics survey, two learning sessions, and observations with note
taking. Evidence-centered design (ECD) guided data analysis. Findings showed that teachers
had highly complex and dynamic individual microdevelopmental patterns, including the most
unstable trajectories in the motivation dimension and low-to-average middle-school-level
performance in problem solving and STEM knowledge and skills. While they largely showed
improvement trends in their problem-solving schemata pre/post-experience, they still performed
at lower than optimal levels. The study underscores the importance of providing educators with
more “preparation for future learning” in STEM-based, ill-structured problem solving so that
they in turn can provide optimal supports for their students. It suggests that motivation is a key
factor to add to models for adaptive expertise. Given the wide variability of learner responses
within and between problems, it also supports the call for microdevelopmental assessments with
scaffolded, adaptive supports. With these improvements, learners in high-need environments can
be supported in their novice-to-expert trajectories, helping to provide educational equity and the
promise of contributing to, and sharing in, 21st-century prosperity.
CHAPTER ONE: OVERVIEW OF THE STUDY
To ensure prosperity, security, and competitiveness in a global, 21st-century economy,
the United States must strengthen citizen capabilities in science, technology, engineering, and
mathematics (STEM) (National Academies [NA], 2010b) and in related 21st-century skills such as
problem solving (National Research Council [NRC], 2010). A national dilemma is that, in the
2006-2010 timeframe, the U.S. spent $2 trillion on K-12 education, while students “remained
mired near the bottom of the developed-world class” (NA, 2010b, p. 34). Because knowledge in
STEM-related fields grows rapidly through research, acquiring content knowledge is necessary,
but no longer sufficient. For a rapidly evolving knowledge-based economy, schools cannot
afford to separate STEM content from practice (e.g., Lave & Wenger, 1991; NRC, 2005; 2007;
Instead, classroom teachers must prepare students through authentic, real-world, problem-solving experiences, using the domain-general and domain-specific tools, language, and
procedures of STEM disciplines (NRC, 2010). Real-world, 21st-century problem solving
requires non-routine, adaptive expertise, in which learners demonstrate a flexible capacity to
alter their skills and strategies in response to a complex problem with inherently random
systemic elements and no one right answer (Hatano & Inagaki, 1986). What counts in this
regard is their preparation for future learning (PFL), which allows them to transfer knowledge,
skills, and abilities (KSAs) in solving complex, novel, open-ended problems requiring
innovative, effective solutions (e.g., Bransford & Schwartz, 1999).
Problem solving is a crosscutting skill for STEM fields, and a general life skill as well
(Asia Pacific Economic Cooperation [APEC], 2008; Jonassen, 2010b; Organization for
Economic Cooperation and Development [OECD], 2010). Recognizing its importance, large-
scale tests such as the Program for International Student Assessment (PISA) and the National
Assessment of Educational Progress (NAEP) have emerging computer-based, adaptive
assessments of problem solving, a measure that potentially serves as one indicator of a nation’s
STEM-related, problem-solving capacity. Teaching and assessing problem solving is important,
given half of OECD-nation students cannot solve problems beyond a basic level (OECD, 2010)
and most students do not have the knowledge and skills to solve complex, real-world problems
found in 21st-century, knowledge-based jobs (Darling-Hammond, 2007; Jonassen, 2011;
McKenna & Hutchison, 2008). Among identified 21st-century skills, complex problem solving
consistently ranks at the top of international needs, as does ensuring that STEM classes mimic
authentic, real-world STEM practices (APEC, 2008; Coalition of the Assessment of 21st Century Skills, 2010; Partnership for 21st Century Skills, 2008). A growing understanding is that learners
must develop adaptive expertise (Hatano & Inagaki, 1986), the ability to modify or invent skills
and models depending on changes in setting, requirements, constraints, and other variables of the
problem to be solved.
The development of this capacity particularly depends on the provision of ill-structured
problems (Jonassen, 2007; Mayer & Wittrock, 2006), or those with multiple solution paths and
without one correct answer, thus mimicking real-world scenarios in STEM-related fields and in
many jobs of the 21st-century knowledge economy. Assessments in non-routine, novel, and
complex problem solving are important to future prosperity as “what is tested gets taught” and as
nations can compare which educational techniques are effective vis-à-vis a common set of
complex tasks students must be able to perform (Schleicher, 2010, p. 433). With information
and communications technology (ICT) ubiquitous, a long-term future likely exists for computer-
based, ill-structured assessments that no longer measure just what students know, but how they
perform tasks in a way that makes their thinking, and particularly their non-routine, out-of-the-box thinking, visible and computer-measured at each problem-solving action on a given task
(Schleicher, 2010). However, many national and local challenges exist in meeting this vision.
Background of the Problem
At the national level, students are ill-prepared for a 21st-century, knowledge-based
economy when compared to peers in other nations, and the performance of traditionally
underserved students is of even greater concern. From a school-based perspective, educators are
essential to improving national prospects, but often do not have the background or training in
STEM content and practices, ill-structured problem solving, or teaching with computers.
National Context
In the effort that produced Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future (NA, 2006), Congress asked the National Academies to investigate priority actions and steps for enhancing the U.S. science and technology enterprise for a global, 21st-century economy (NA, 2006, p. 2). Of the report’s four recommendations, the first priority
was “[to] increase America’s talent pool by vastly improving K-12 science and mathematics
education” (NA, 2006, p. 112). This priority is designed to address the low performance of U.S.
students when compared to other nations, the decline in students going into science, technology,
engineering, and mathematics (STEM) fields, and the lack of qualified K-12 STEM teachers.
Released only five years later, the National Academies’ follow-up study concluded that “the
nation’s outlook has worsened” (NA, 2010b, p. 4), so much so that the subtitle changed to
Rapidly Approaching Category 5, comparing the current state no longer to a gathering storm, but
to the worst kind of hurricane. Among other data, the new report sounds the alarm that the
United States ranks:
• 20th in the high-school completion rate among industrialized nations, and 16th in the college completion rate,
• 27th among developed nations in the proportion of students receiving degrees in science or engineering, and
• 48th worldwide in K-12 science and mathematics education (NA, 2010b, pp. 7-8).
This worsening condition “paints a daunting outlook for America if it were to continue on the
perilous path it has been following” (NA, 2010b, p. 2).
Confirming this conclusion is the President’s Council of Advisors on Science and
Technology (PCAST), which at the same time released its own report on preparing and inspiring
students in STEM for America’s future (PCAST, 2010). Among the concerns are: (a) the United
States is consistently in the middle of the pack or lower when it comes to K-12 students’
performance in STEM, (b) less than one-third of U.S. eighth-graders show proficiency in STEM,
and (c) the problem is not just a lack of proficiency, but a lack of interest in STEM fields among
many students (PCAST, 2010). That last issue is of particular concern. A rare, large-scale,
longitudinal study on persistence in science (Tai, Liu, Maltese, & Fan, 2006) concluded that expectation, not grades, was the greatest predictor of students continuing on in STEM. Eighth
graders who had expectations of entering STEM careers were 1.9 times (life sciences) or 3.4
times (physical sciences and engineering) as likely to do so; average math students with
expectations were 1.8 times as likely to continue as high-achieving students without
expectations (Tai, Liu, Maltese, & Fan, 2006, p. 1144). The finding tracks with studies showing that interest (e.g., Frenzel, Goetz, Pekrun, & Watt, 2010; Hulleman & Harackiewicz, 2009), self-efficacy (e.g., Britner & Pajares, 2006; Chen & Pajares, 2010; Zeldin, Britner, & Pajares, 2008), and self-concept (e.g., Sáinz & Eccles, 2012) closely relate to students’ STEM success.
Responding to a national survey, roughly a third of middle-school and high-school students
signaled an interest in a science career (Project Tomorrow & PASCO Scientific, 2008).
In addition to the general trends, the outlook for underserved and under-represented
students is even more troubling. Critiquing the completeness of its own Gathering Storm
reports, the National Academies produced an additional report with actions specifically focused
on the challenge that those who are under-represented in STEM are also the fastest growing in
the general population (NA, 2010a). The study relates that, among 24-year-olds, only 2.7% of
African Americans, 3.3% of Native Americans/Alaska Natives, and 2.2% of Hispanics/Latinos
have earned a STEM-related degree (NA, 2010a, p. 28) and that at least tripling those numbers is
vital to meet national STEM workforce needs. Barriers to fixing the problem include gaps in: (a)
K-12 preparation, (b) access and motivation, (c) affordability, and (d) academic and social
support (NA, 2010a, pp. 4-5). The report concludes that the United States is at a “crossroads”
and must develop a strategy for cultivating the talents of minorities who “currently embody a
vastly underused resource and a lost opportunity for meeting our nation’s technology needs”
(NA, 2010a, p. 1).
Additionally, while the United States has invested over $90 billion in computer
technologies for schools and provides $2.25 billion a year for affordable internet access,
low-income, minority students in the United States are falling behind in acquiring computer-
based knowledge and skills (Cummins, Brown, & Sayers, 2007). These skills are required not
only for authentic STEM problem solving and 21st-century jobs, but also for full inclusion and
participation in a society in which information and communications technologies (ICT)
increasingly prevail (Warschauer, 2003; 2010). Notably, the concept of a “digital divide” no
longer concerns only low access to computers by low-income, minority students, but also the
less complex use of computers by these students and the resultant lower outcomes for them as
measured by academic achievement, 21st-century skills, and participation in technology-related
careers (emphases per Warschauer & Matuchniak, 2010, p. 181). In addition, while the pressure
to prepare students for the demands of the 21st-century economy grows, poor working conditions
for teachers discourage the most qualified from teaching in urban schools, where large numbers
of students are English Language Learners, and such schools face high drop-out rates (Murnane,
2008). Due to unequal educational access, a growing number of U.S. citizens are thus
unemployable in a 21st-century context given that 70% of U.S. jobs now require specialized
skills and training (Darling-Hammond, 2007) such as computer-based, STEM-related problem
solving. That lack of computer literacy may continue a trend noted by Garcia (2002) that large
populations of immigrants without requisite skills lock themselves into low-wage jobs,
preventing social mobility for their children as well as for themselves. This ICT-based
segregation leads to what Garcia (2002) calls educational vulnerability, the “prospect of losing a
significant body of our country’s human resources at a time when we need all the resources we
can muster” (p. 33).
School Context
In preparing all students for 21st-century assessments and career readiness, teachers can
develop student problem-solving capabilities by providing learners with well-designed, ill-
structured problems (Bottge, Rueda, Kwon, Grant, & LaRoque, 2009) that are STEM-based and
complex in nature. A serious issue is that educators largely do not have expertise or self-efficacy in teaching problem solving and often lack the content-domain knowledge that is
prerequisite to creating STEM-related problems for, and guiding problem solving with, students.
For example, 69% of U.S. fifth- to eighth-grade students are taught mathematics by teachers
without degrees or certificates in that field (NA, 2010b, p. 7); 93% of students in that same
demographic are taught physical sciences by teachers without degrees or certificates in that field
(NA, 2010, p. 8). Vast differences also exist between the way science is taught and the way it is
practiced (Osborne, 2007). According to Jonassen (2010a), most classrooms provide students
with well-structured problems, and most designers of instruction use defined processes and
normative models in their simulations, but these do not support the acquisition of necessary
complex, problem-solving skills. Students mostly engage in laboratory experiences that are
scripted and routine (NRC, 2005), with teaching focused mainly on procedures set to convince
students that what is already known is empirically true, rather than on engaging students in
activities where data are unclear, multiple alternatives are possible, and scientific reasoning is
key to learning outcomes (Osborne, 2007). In a large-scale national study of 25,544 science
teachers from 3,729 schools in 867 districts, only 25% use inquiry-based curricula, and only 16%
assign students problem-solving projects (Project Tomorrow & PASCO Scientific, 2008). Of
those schools, 43% were eligible for Title I funding, and 20% had over 50% minority student
populations (Project Tomorrow & PASCO Scientific, 2008). Per Murnane (2008), since a
curriculum building higher-order learning skills is more difficult to teach effectively, greater
emphasis on pedagogical training in these areas is critical, particularly for urban schools.
Similar to a lack of teacher competency in STEM content and problem-solving skills,
barriers also remain in terms of information and communications technology (ICT) adoption in
the classroom for knowledge-building and critical analysis (Warschauer, Grant, Del Real, &
Rousseau, 2004; Warschauer & Matuchniak, 2010). An issue is educators’ potential lack of ICT
preparedness, as well as instructional design and pedagogical principles related to each (Kim &
Hannafin, 2011). Prensky (2001) states that the greatest problem facing education today is that
teachers are digital immigrants who often resist new tools and competencies, assuming that the
same methods that worked for their teachers will continue to work for their students. As such,
teachers may be identified as “alienated labor” (Carnoy & Levin, 1985), especially when schools task them with curricula in which they may not believe or may not know how to implement. In
essence, many may have a cultural style of teaching and learning that may not be in keeping with
the realities and needs of the 21st-century classroom.
Even when computer-based lessons are offered, one practice negatively impacting low-
income, often minority students is that, with these learners, educators predominantly focus
on “performativity,” which refers to curricula that only teach basic computer skills such as how
to use a particular word-processing, presentation-related, or other software (Cummins, Brown, &
Sayers, 2007, p. 96). Instead, the goal is generativity, the creative capacity to generate new
knowledge and skills (Pérez & Coffin-Murray, 2010) and to extend innovations beyond the
particular circumstances involved in their original creation (Schwartz, Varma, & Martin, 2008).
In generative learning, learners actively construct their own interpretations and inferences,
integrating new and prior concepts and relationships among them to make meaning (Wittrock,
2010). That capacity is important to the development of adaptive expertise, the ability to be
flexible and creative in transferring both STEM- and problem-solving-related schemata to novel
problems (DeHaan, 2009; Hatano & Oura, 2003; Hatano & Inagaki, 1986).
In reviewing twelve studies and then comparing eight low- and high-socioeconomic-status (SES) schools, Warschauer, Knobel, and Stone (2004) found that low-SES schools with high proportions of African American and Hispanic students and high numbers of English Language Learners often provide remedial or vocational (performativity-based) ICT curricula in comparison to more
academically oriented (generativity-based) curricula used by high SES schools with largely
White and Asian students, a situation that remains relatively unchanged (Warschauer &
Matuchniak, 2010). Studying design principles for technology-supported curricula, Cummins,
Brown, and Sayers (2007) note that this focus on software applications distracts from more
powerful ICT-enabled knowledge generation and inquiry, “a distraction that low-income
students can ill afford” (p. 96).
That emphasis on routine use holds true for teachers as well. Despite evidence from
teacher self-reports listing a greater need for developing more expert uses of technology, 81% of
reporting school districts invested in professional development (PD) in the use of technology for
grading and 64% for training on how to locate curricular resources on the Internet (U.S.
Department of Education [USDOE], 2009, p. 14). Seventy-three percent of reporting school districts did offer ICT-based mathematics PD, with only 45% of teachers participating; 58% offered ICT-based science PD, with only 35% of teachers participating (USDOE, 2009, p. 13).
An added barrier to more complex use is a lack of research on specific types of effective
technology-based instruction for guiding practice (USDOE, 2009).
Furthermore, Warschauer (2010) reveals that when teachers offer computer time, it often
comes as a privilege for a few instead of as a necessity for all. That creates not only information-
and skills-based segregation, but also social segregation in and out of school, given that students
without computer learning time are not able to participate in the computer-based social
networking and higher-level learning activities of their more economically and educationally
privileged peers, keeping them from the latter community of practice necessary to learning
(Warschauer, 2003). A qualitative study by Volman, Eck, Heemskerk, and Kuiper (2005) provides
further evidence of this marginalization, revealing that minority students consider themselves
less skilled ICT users than students from the majority population and that they use the computer
at school less for gathering information and preparing projects and more for drill and practice.
Addressing these issues, a federal focus on technology integration is growing,
particularly for high-need schools. The 2005 National Education Technology Plan and the 2009
American Recovery and Reinvestment Act have resulted in over $650 million in investment, and of 45 states with defined technology-literacy standards for 8th-grade students, 43 include
standards for the use of technology for problem solving and decision making (USDOE, 2009, p.
36). The U.S. Department of Education’s Enhancing Education Through Technology (EETT)
Program, the most comprehensive federal program for improving student achievement through
education technology, advocates for technology-integrated teaching that provides learners with
opportunities to use ICT in alignment with learning theories and curricular goals (USDOE,
2009). Recognizing that high-poverty students may only experience ICT in school settings, the
federal goal is to provide students with technology-enhanced, engaging, curricular experiences
that help them learn better and faster in an individualized manner, prepare for formative and
summative assessments, and develop technology literacy that includes accessing and analyzing
information and critical thinking skills (USDOE, 2009).
However, technology itself is not the solution. This study does not challenge research
(e.g., Mayer, 2003) that the instructional method, not the technology medium, is ultimately
responsible for learning gains. Rather, it recognizes the computer as an essential tool of STEM
disciplines that affords more easily accessible ways to simulate the effects of manipulating
multiple variables and strategies, representing complex dynamic conditions and systems, and
creating and testing models, among other authentic tasks central to experimentation, problem
solving, and conceptual change inherent in STEM-domain principles and practices. In providing
these capabilities, along with flexibility and embedded, real-time assessments, educational
technologies must be well-designed and based on learning theory and pedagogical principles
(e.g., Pellegrino & Brophy, 2008). Relevant to this design point and the nature of inquiry-based
problem solving, what enables learners to encounter complex concepts without large amounts of
unproductive mental effort is guided discovery, with an emphasis on the guidance (Clark, Yates,
Early, Moulton, Silber, & Foshay, 2010) and scaffolding (Hmelo-Silver, Duncan, & Chinn, 2007)
as part of a well-designed instructional experience.
Because learners will increasingly be called upon to solve novel, complex, ill-structured problems in inventive, schema-changing ways, technology-based instruction must focus on improving the transfer of key skills and strategies. Research on demonstrating
transfer has largely proven inconclusive (Bransford & Schwartz, 1999). In acknowledging
difficulties in demonstrating transfer in learning, Bransford and Schwartz (1999) raise the
inadequacy of the traditional view of transfer as a significant issue. They argue that a learning
experience does not immediately generate expertise, and that it may be more appropriate to
demonstrate whether students are on a trajectory toward expertise. That can be particularly
important when the target for transfer is related to habits of mind associated with creativity and
innovative risk-taking, both of which serve STEM-discipline-related conceptual change and
adaptive problem solving. Such habits of mind addressed by Bransford and Schwartz include the
willingness (a) to look critically at current knowledge, (b) to seek multiple perspectives, (c) to
allow ambiguity, (d) to reflect on and monitor one’s own comprehension, (e) to let go of old
ideas, and (f) to persist in the face of difficulty. In this view of transfer, the key assessment
target is both teachers’ and students’ preparation for future learning (PFL), as demonstrated by
changes in their problem-solving schemata, which can be characterized by the sophistication of
what students notice about the problem and how they gather and interpret information, and how
they apply the information or past problem-solving strategies when confronting new problems
(Bransford & Schwartz, 1999). In essence, this view of transfer supports the development of an
adaptive expertise (Hatano & Inagaki, 1986) that is so important for teaching and learning
complex, ill-structured problem solving in STEM given the rapidly changing conditions of the
21st-century environment. Research on adaptive expertise and preparation for future learning
serves as the overarching framework of this study, with a proposed model of assessment,
implemented through a computer-based learning environment (CBLE).
Statement of the Problem
Required for complex, dynamic, ill-structured problem solving is adaptive expertise, the
ability to be flexible and creative in transferring both STEM- and problem-solving-related
schemata to novel problems (DeHaan, 2009; Hatano & Inagaki, 1986; Hatano & Oura, 2003).
However, few models exist for measuring developing adaptive capacities equal to the challenge
of STEM-related, ill-structured problem solving, particularly for K-12 teachers and students in
high-need environments.
In general, expert-novice research often examines differences in problem-solving
performance (e.g., how much learners remember about a problem, how closely their
representations and schemata resemble expert understandings, their effective use of strategies,
etc.), and emphasizes the importance of focused practice. However, few research studies center
on either evolving patterns in novices’ current and developing competencies (i.e., their
preparation for future learning) (e.g., Bransford & Schwartz, 1999) or developmental
measurements for them that could help provide empirical diagnoses in support of learning and
improvement (e.g., Fischer, 2009).
To enhance complex, STEM-related problem-solving abilities in high-need school
environments with populations of students who are under-represented in STEM, researchers,
educators, and policymakers face the task of determining effective pedagogical strategies that
synthesize at least three key elements: (a) authentic STEM learning opportunities that leverage
evidence-based learning and motivation theories to guide learners in acquiring domain and
adaptive expertise, (b) complex, ill-structured problems that mimic real-world challenges and
develop both divergent and convergent critical-thinking capabilities, and (c) computers as
essential tools of the discipline, providing rich informational resources, dynamic models and
other visualizations, simulations and virtual testing capabilities, and collaboration possibilities.
These pedagogical strategies hinge on educators' preparation for future learning along their
own novice-to-expert paths in adaptive expertise, so that they are better able to support student
growth in these areas.
The research literature covers multiple pedagogies and frameworks for STEM, 21st-
century skills, problem solving, and ICT literacy separately, but lacks an integrated
conceptual model for linking them to adaptive expertise and guiding the design of a CBLE in a
way that supports evidence-based assessments of developmental learning patterns in teachers’
and students’ integration of all of these elements in a synergistic, real-world manner. Just as
critics suggest that STEM content is often decoupled from STEM practice in classroom
environments, STEM and problem-solving frameworks are often treated separately (e.g., OECD,
2010). Although optimizing instruction to support PFL and to make developing PFL observable
in assessments is key (Mylopoulos, 2012), most studies do not address the concept of transfer in
terms of learners’ PFL and its relationship to adaptive expertise, and many assessment designs
focus more on performance and achievement as summative assessments than on acquiring
competencies as part of formative assessments (Rupp, Gushta, Mislevy, & Schaffer, 2010) that
reveal developmental patterns through time.
Additionally, understanding how to support novice learners through a situated, computer-
based experience is itself a complex, ill-structured problem. Teachers (and thus their students)
are more familiar with classroom exercises with a single, preferred solution path and one
expected right answer (e.g., Jonassen, 2010b). Integrating STEM-related knowledge, adaptive
problem-solving strategies and skills, and computer-based abilities can contribute to providing a
comprehensive, situated learning condition supportive of 21st-century competencies. However,
that integration poses challenges given that teachers and students, particularly in high-need
environments, often have low prior knowledge in all three areas, and cognitive load is therefore
likely to be high. Challenges in adopting computer-based, STEM-related, ill-structured problem-solving
experiences include (a) competing school-performance pressures per the “No Child Left Behind
Act” and other time and resource constraints (i.e., adoptability issues), (b) the lack of coherent,
evidence-based problem-solving frameworks and procedures relevant to all problems and
contexts, and (c) knowledge gaps in how to scaffold problem solving differently for novices and
experts at each problem-solving stage (Kim & Hannafin, 2011).
Designing curricula for complex, ill-structured problems is difficult because each
problem is by nature unique, requires different solution strategies for integrating knowledge and
innovating from it, and has multiple possible outcomes (though some are more optimal than
others). The potential of computer-based learning is hampered given that many
technologies employed in classrooms use traditional instructional design models that emphasize
the acquisition and retention of specific knowledge and skills, rather than build a capacity for
adaptively learning how to learn at the boundaries of one’s current expertise (Bransford,
Slowinski, Vye, & Mosborg, 2008).
Integrating relevant STEM content necessary to formulating ill-structured problem
solutions depends on initial expert definition of domain-specific content, appropriate possible
and iterative sequences of its application, and the characteristics of the problem space itself. It
then depends on well-designed learning progressions geared for novices that manage cognitive
load and motivation-related behaviors (e.g., persistence) at various complex, problem-solving
phases. Assessment techniques to measure dynamic and evolving levels of adaptive expertise,
which occurs along a continuum rather than at discrete points, must also be as sophisticated as
the STEM-related, ill-structured problems themselves in order to diagnose individual learner
characteristics in real time through tailored scaffolds and measurements in the CBLE. Currently,
most assessment models for adaptive expertise focus on knowledge and a capacity for
innovation, but often neither integrate motivation nor consider a common cross-problem
assessment for transfer.
Purpose of the Study
This exploratory study proposes a model for adaptive expertise (the dependent variable)
that potentially illustrates microdevelopmental learning patterns in key KSAs (the independent
variables) that occur during the use of a scaffolded, epistemic computer-based learning
environment instructionally designed to support complex, ill-structured problem solving in
STEM-based topics. For the purposes of this study, the relative level of a learner’s adaptive
expertise (AE) along a novice-to-expert trajectory is considered to be a function of their
preparation for future learning (PFL), which in turn is a function of changes in three dimensions:
knowledge and skills (KS), inquiry and innovation (II), and motivation (M) through the fourth
dimension of time, or, for short-hand:
Level of AE = f(ΔPFL) = f(ΔKS, ΔII, ΔM)
Though utilizing familiar symbols from mathematics, this short-hand description is a conceptual
(i.e., not mathematical) model, and is more fully described in Chapter 2. The general hypothesis
is that differences in separate or combined (clustered) developmental patterns in the three
dimensions over time (i.e., differences in preparation for future learning) influence the
acquisition of higher levels of adaptive expertise along a novice-to-expert continuum.
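As a purely illustrative restatement (this notation is mine, not part of the study's formalism), the conceptual short-hand with the time dimension made explicit could be typeset as:

```latex
% Conceptual short-hand only, not a mathematical claim: a learner's level of
% adaptive expertise (AE) at time t is a function of changes in preparation
% for future learning (PFL), which in turn reflects changes in knowledge and
% skills (KS), inquiry and innovation (II), and motivation (M).
\mathrm{AE}(t) = f\bigl(\Delta \mathrm{PFL}(t)\bigr)
             = f\bigl(\Delta \mathrm{KS}(t),\ \Delta \mathrm{II}(t),\ \Delta \mathrm{M}(t)\bigr)
```

Writing each term as a function of t simply makes visible the "fourth dimension of time" already implicit in the short-hand above.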
The purpose is to explore what these potential patterns might reveal about the trajectory
of learners’ preparation for future learning (PFL), including the emergence and development of
potentially transferrable problem-solving schemata and relevant cognitive, metacognitive, and
motivation-related behaviors and attitudes critical to adaptive expertise in complex, ill-structured
problem solving. The study is exploratory, as variables in adaptive expertise have not been fully
developed as a research construct in the literature, are likely confounded in as yet unknown
ways, and do not completely capture all of the sub-variables that might constitute the three
dimensions and individual differences therein. Additionally, in measuring relative expertise, the
difference between novice and expert levels is not discrete, but rather lies along a continuum.
This continuum is not necessarily on a stable, linear incline, with additive knowledge, skills, and
abilities leading to greater capacity in adaptive expertise. Instead, per constructivism and
Dynamic Skill Theory as addressed in Chapter 2, it is subject to rises and dips as learners
restructure their understandings by returning to prior knowledge and incorporating new
knowledge, all while going through similar undulations in their correct and incorrect applications
of these evolving understandings, ideally leading to conceptual change and the formation of
more expert-like, STEM-epistemic, problem-solving schemata.
Research Question
This exploratory study thus considers the following research question: For educator-
learners in high-need environments, how are levels of adaptive expertise in complex, STEM-
based, ill-structured problem solving (the dependent variable) influenced by evolving
competencies in four proposed dimensions of preparation for future learning (PFL) (the
independent variables):
(a) knowledge and skills (Dimension 1 of the Adaptive Expertise model),
(b) inquiry and innovation (Dimension 2 of the Adaptive Expertise model),
(c) motivation (Dimension 3 of the Adaptive Expertise model), and
(d) transfer of any previously demonstrated capacity in these three dimensions to a novel
complex, ill-structured problem following the initial learning experience (Dimension 4 of
the Adaptive Expertise model)?
Significance of the Study
This study contributes a possible framework for measuring microdevelopments in
adaptive expertise in STEM-based, complex, ill-structured problem solving, particularly in
computer-based learning environments. It includes patterns in motivation as a key, integrated
factor, a variable that has not been built into prior models for adaptive expertise and preparation
for future learning. Addressed largely as implications of this study, findings also potentially
relate to how the interpretation of observed results in learners’ developmental patterns in
knowledge and skills, inquiry and innovation, and motivation through time contributes to:
• developing formative assessment systems that not only measure learners’ level of
problem-solving capacities (e.g., Fischer, 2008), but also diagnose difficulties in need of
additional supports at various stages in STEM-based, complex, ill-structured problem
solving to support their PFL, and
• understanding how CBLE instructional designs for adaptive expertise in STEM-based,
complex, ill-structured problem solving can better support learners' PFL trajectories.
Potential Contributions to Assessments of Adaptive Expertise
Scholarly research is lacking on characterizations of learners’ developmental pathways
toward adaptive expertise in complex, ill-structured problem solving over time (Fischer, 2008).
From a review of the research literature, studies largely focus on needed cognitive capacities,
center on metacognitive assessments to a lesser extent, and pay relatively little attention to
affective capacities associated with motivation-related behaviors key to open-ended problem
solving. More research is needed on embedded assessments of learner mental models at specific
stages of learning progressions in STEM-related, ill-structured problem solving, to ascertain not
only their developmental level, but also to correct misconceptions or mistaken problem-solving
paths in real time (Eseryel, Ge, Ifenthaler, & Law, 2011).
Technology-based assessments involving simulations for conducting experiments and
capturing learner problem-solving actions can potentially enable measurements of problem-
solving capability beyond those of conventional tests (Bennett, Persky, Weiss, & Jenkins, 2007;
Nelson, Erlandson, & Denham, 2011), also allowing for the dips and rises that naturally occur
during the learning process as students return to prior knowledge and lower-order thinking
while seeking to integrate novel concepts, per the conceptual-change and dynamic-skills theories
covered in Chapter 2. Tracking these sorts of microdevelopments (e.g., Fischer, 2008) without
making mistaken assumptions about problem-solving capacities at a single point in evaluation
time is likely to yield more accurate assessments of learners' capacities in complex, ill-structured
problem solving in STEM along their PFL trajectories.
Finally, the development of common, cross-STEM-domain instructional and assessment
models for complex, ill-structured problem solving can potentially serve all stakeholders. While
simplified for this study, the model for measuring developmental patterns in learner progressions
could be enhanced based on exploratory findings, including more complex algorithms for
assessing learners through various complex, ill-structured problem-solving experiences in
STEM. If effective and adoptable, such a domain-general, problem-solving framework for
measuring levels of adaptive expertise based on PFL could provide curriculum designers with a
pedagogically sound infrastructure that they could apply to the creation of multiple, domain-
specific problems ranging from astrobiology to zoology. This framework could be used to
familiarize educators with problem-solving pedagogical approaches based on learning theory as
they also increase their own STEM and problem-solving knowledge. The creation of this
broader pool of novel, domain-specific problems using the same framework would in turn allow
educators and evaluators alike a more robust, systemic ability to measure formatively the
development of adaptive expertise throughout a school year (tailored to standards-aligned topics
in a given district’s or classroom’s scope and sequence), across STEM subject areas, and even
across grade levels, assisting in accountability and in preparing for future summative problem-
solving assessments that, while not common in contemporary standardized testing, are on the
rise, with NAEP and PISA as early examples. It would also potentially be cost-effective for
designers of computer-based problem-solving experiences, given that free, open-source code
could enable practitioners to focus on creating domain-specific content within it, rather than on
the architecture itself.
Potential Contributions to Instructional Design for Adaptive Expertise
In that regard, related research gaps also exist in scaffolded environments supporting both
the development and measurement of adaptive expertise. Computer-based learning
environments potentially serve as situated learning experiences that mimic authentic, real-world,
ill-structured problem solving in STEM domains (e.g., Dede, 2009; Dunleavy & Simmons, 2011;
Ketelhut, Nelson, Clarke, & Dede, 2010). However, while growing evidence shows that such
simulations can advance learners' conceptual understanding of a given STEM topic, the extent
to which they support other learning goals or motivation is not clear, and only limited
and inconclusive research results exist on the effectiveness of games (NRC, 2011b).
Commercial game designers (even of serious games intended to improve learning
outcomes) are not typically cognizant of learning theories and thus do not habitually embed
them, and research on serious games rarely documents (a) learning goals, (b) theories on how
students progressively learn in attaining those goals, and (c) what measures assess progress
(NRC, 2011b). Research often lacks control groups, does not isolate the unique effects of the
simulation or game from the contents of the curriculum, lacks common methods of analyzing
effectiveness, and lacks common terminology (NRC, 2011b). Little evidence exists either on the
effectiveness of technology-enhanced, problem-solving frameworks or on the effectiveness of
computer-based instructional strategies and scaffolds tailored for the dynamics of, and adoptable
in, real-world classrooms (Choi & Lee, 2009; Eseryel, Ge, Ifenthaler, & Law, 2011; Hannafin &
Kim, 2003; Jonassen, 2010b; Kim & Hannafin, 2011; Kim, Hannafin, & Bryan, 2007; Schrader,
Lawless, & McCreery, 2009), particularly those that are high-need in character.
While some studies caution that the use of technology does not necessarily enhance
learning more than traditional methods of instruction and can often increase cognitive load due to
poor design (Belland, Glazewski, & Richardson, 2011; Billing, 2007; Duncan, Schramm,
Thompson, & Dumontheil, 2012; Kalyuga, Renkl, & Paas, 2010; Liu et al., 2009; Nusbaum &
Silvia, 2011; Paas, Renkl, & Sweller, 2003, 2004), properly designed computer-based learning
experiences can (a) provide learners with a large, complex, yet finite problem-solving space,
(b) allow all learner actions to be recorded so assessors can make valid inferences about how
learners use available tools and their own depth of understanding to create solutions, and
(c) replicate real-world situations without the same cost, danger, or inaccessibility
(Vendlinski, Baker, & Nieto, 2008, pp. 6-7).
Potential Contributions of Value to Stakeholders
From the policy-making perspective, this study’s research contributions on
microdevelopments in expertise in STEM-based, ill-structured problems can potentially guide
strategies for improving educator preparedness and student achievement in 21st-century
competencies through scaffolded, formative, CBLE-based, situated assessments. Focusing on
high-need environments potentially contributes to developing pedagogical skills and confidence
in teaching with ill-structured problems, building 21st-century job skills for a key population,
and thus to national wellbeing. Understanding both learners'
patterns in the development of adaptive expertise and common mechanisms for scaffolding and
assessing differences could provide some domain-general infrastructural guidelines for the
development of learning-theory-based CBLEs, offering cost-saving advantages in that each
learning tool would not wholly have to reinvent the wheel with each lesson.
For practitioners, this research addresses several pedagogical and learning needs. To
enhance student capabilities in solving complex, real-world problems, computer-based
experiences designed for developing adaptive expertise in the classroom could potentially offer
(a) supports to teachers for gaps in their STEM-content knowledge, problem-solving, and
scaffolding skills, (b) computer-based tools to diagnose student learning difficulties (Bottge et
al., 2009) at each problem-solving stage, (c) practice and supports to manage cognitive load
for both teachers and students in problem solving that promote eventual automaticity in problem-
solving strategies, and (d) computer-based assessments of STEM-related problem-solving
capacities. Scaffolded, computer-based experiences can also supply real-time, differentiated aids
for individual learners, a personalized mentoring that a teacher cannot often facilitate with a
classroom of multiple students with multiple learning needs at given problem-solving stages (Ge,
Planas, & Er, 2010) or various levels of developmental competencies (Fischer, 2008).
Limitations
One issue related to internal validity is pre-test influence in measuring adaptive expertise.
Cueing and practicing in one problem-solving experience not only has an effect on later
problem-solving performance, but also is actually intended to play that role as part of preparation
for future learning. Learners might well understand that the dependent variable is adaptive expertise
(though perhaps not in that terminology), since that goal would be intentionally presented as a
means of (a) making it an explicit, consciously considered learning outcome for students as they
engage in self-regulated learning, and (b) providing cognitive, metacognitive, and affective
scaffolds supporting adaptive-expertise capacities (e.g., goal representation, mastery orientation,
etc.). All of these explicit supports may thus confound the effects of the independent variables
on the dependent variable, potentially impacting interpretation of the data.
A potential also exists for the Hawthorne Effect, where being in the experiment changes
the response of the learners in it. That is particularly so in that scaffolding is specifically
designed to draw out learners’ metacognitive awareness of their own problem-solving
knowledge, skills, and abilities as part of their preparation for future learning, in support of
explicit problem-solving schemata development. That is, in this case, the intent is exactly to
change responses during the experiment, and mindfully, rather than to avoid that effect.
In terms of measurement, other possible threats are that the accuracy of observed patterns
in adaptive expertise (the dependent variable) might only reflect students’ initial, novice-level,
and thus more variable, engagement (e.g., Fischer, 2008), and changes in knowledge and skills,
inquiry and innovation, and motivation (the independent variables) might not be of sufficient
magnitude and duration to describe accurately developing patterns in adaptive expertise.
Additionally, aggregating scores in a simple model to express changes in learning patterns
(rather than, for example, using a more complex-dynamic-systems approach that was beyond the
scope of this study) might introduce unreliability into the measurements of the observed patterns.
Finally, although the assessment was detailed through rigorous evidence-centered design
(ECD) methods covered in Chapter 3 and Appendices A-E, instrumentation errors remain
inherently possible, though the rigor of that technique minimizes them to the greatest extent possible.
Delimitations
As an exploratory study by design, this research yields findings that are not readily
generalizable, but that rather suggest future experimental research directions for extending results
and implications to wider learner groups. While the research design ensured relatively uniform
conditions for individual participating students, a potential limitation of the study is that the
overall context of the learning environments in which the intervention took place (e.g., teacher,
learner, classroom, and sociocultural characteristics and interactions per Salomon, 2006; Sawyer,
2006) was not formally considered as a factor influencing learner patterns in developing adaptive
expertise, though this context is key in actual classroom learning. Participating teachers had
some limited help from the researcher if questions arose about how to use certain tools in the
CBLE, but the majority of the scaffolding was embedded in the CBLE itself.
With regard to electronic scaffolding, an open question in the research literature concerns
the most effective types and timing of scaffolds (e.g., Lajoie, 2005) in ill-structured problem
solving. While the chosen scaffolds within the model strove to align with prior research-based
evidence on their effectiveness, due to the scope of the study, it was not possible to isolate the
type and timing of each of the scaffolds to test their separate effectiveness on learner patterns.
Results might depend on single components or either additive or interactive effects of multiple
components. Thus, conclusions on the influence of the instructional model on learner patterns
cannot specifically be traced to advantages or disadvantages of any sub-element; this hinders
both deep analysis and targeted future improvement of the tool, but remains an area for future research.
Also not studied was whether the type and timing of chosen scaffolds applied equally well in one
domain vs. the other in the two problem-solving experiences. It is possible that, even with a
common framework, the availability of those choices in domain-specific curriculum design
would be advantageous in assessing PFL and levels of adaptive expertise. In other words, some
elements of the framework may need to be as flexible as the adaptive experts it hopes to produce.
That said, the focus of the exploratory study was not on the relative merits of the
instructional model (i.e., the scaffolded framework supporting the independent variables) and its
improvements, but rather on the developmental learning patterns as part of PFL variables (the
independent variables: knowledge and skills, inquiry and innovation, and motivation, all through
time) that characterize levels of adaptive expertise (the dependent variable). For the scope of this
study, the limited selection of cognitive, metacognitive, and affective indicators for learners’
capacities in these four dimensions might not provide a complete picture of the complex suite of
knowledge, skills, and abilities required for adaptive expertise. In the future, the tool itself could
also be constructed in a more sophisticated manner, in order to reflect the nature of learning
more as the complex adaptive system that it is, particularly in relation to an adaptive expertise in
STEM-based, complex, ill-structured problem solving.
Definitions
Conceptual Change – a process through which people learn new ways to think about classes of
situations (Schwartz, Varma, & Martin, 2008).
Cognitive Load, Extraneous – a load on a learner’s working memory in the presence of activities
that do not contribute to learning (Renkl, Hilbert, & Schworm, 2009).
Cognitive Load, Germane – a load on a learner’s working memory in the construction of
schemata (e.g., problem-solving schemata) or other productive learning processes (based
on Renkl, Hilbert, & Schworm, 2009).
Cognitive Load, Intrinsic – a load on working memory due to the complexity of the learning
content in relation to the learner’s prior knowledge (Renkl, Hilbert, & Schworm, 2009).
Computer-based Learning Environment – offers open-ended learning in complex topics (e.g.,
STEM) through multiple representations (text, audio, visual) that are often presented in a
nonlinear manner and make use of hypermedia affordances (e.g., Zimmerman &
Tsikalas, 2005).
Game, Computer – an interactive computer model, often based on simulations, with user controls
and explicit goals and rules that provide feedback on a player’s progress, which can be
affected by actions and gameplay strategies (NRC, 2011b).
Game, Epistemic – a digitally supported learning environment designed to assist learners in
developing domain-specific expertise under realistic constraints (based on Bagley &
Shaffer, 2009; Shaffer, 2006).
Game, Serious, STEM-related – a learning experience designed to represent accurately authentic
principles, practices, and processes of STEM disciplines (based on NRC, 2011b).
Generativity – in the context of problem solving, refers to learners' creative capacity to generate
new knowledge and skills (Pérez & Coffin-Murray, 2010) and to extend innovations
beyond the particular circumstances involved in their original creation (Schwartz, Varma,
& Martin, 2008), based on the notion that, in generative learning, students actively
construct their own interpretations and inferences, integrating new and prior concepts and
relationships among them to make meaning (Wittrock, 2010).
Habits of Mind – a target for transfer as part of Preparation for Future Learning (PFL), they
include the willingness (a) to look critically at current knowledge, (b) to seek multiple
perspectives, (c) to allow ambiguity, (d) to reflect on and monitor one’s own
comprehension, (e) to let go of old ideas, and (f) to persist in the face of difficulty
(Bransford & Schwartz, 1999).
High Need – per the No Child Left Behind Act of 2001, districts that serve high numbers of poor
students and at least one school requiring academic improvement (i.e., Title I
schools whose students have not made adequate yearly progress for two consecutive
years) and needing assistance in acquiring or using technology.
Hypermedia – a nonlinear, learner-centered CBLE that integrates multiple representations (e.g.,
audio, video, animation, graphics, and text), enabling learners actively to choose, access,
and apply information when, how, and how long they desire in the construction of their
own knowledge (based on Moos & Azevedo, 2008).
Innovation – an outcome or a process (mechanisms related to restructuring concepts or
environments) related to new ideas (Schwartz, Varma, & Martin, 2008).
Instruction, Anchored – teaching that organizes (anchors) learning around solving a complex
(more than 14 steps, requiring several hours), open-ended, ill-structured, authentic
problem (Pellegrino & Brophy, 2008).
Knowledge, Explicit – a learner's ability to find coherence by letting go of a prior inaccurate
concept, by building new understandings with conflicting information, and by comparing
models to identify conflicting conceptions and their predictions (Mayer, 2008, p. 221).
Knowledge, Implicit – a learner's ability to recognize a problem as new information, causing
them to reconsider their current concept and to create an explanation that reconciles the
anomaly (Mayer, 2011, p. 221).
Knowledge, Procedural – an understanding of the way in which to solve problems (Jonassen,
2010a).
Knowledge, Strategic – an understanding of strategies for problem solving such as
search-and-replace, serial elimination, and space splitting (Jonassen, 2010a).
Knowledge, System – a conceptual understanding of the way in which a system works (Jonassen,
2010a).
Learning, Authentic – learning that involves real-world problems, follows the practices of a
discipline, and often occurs in project-based learning scenarios (e.g., NRC, 2011a).
Learning, Situated – a way progressively to gain knowledge, skills, and abilities in the specific
context to which they apply, with new understandings co-constructed in a community of
practice (e.g., Bransford et al., 2010; Güss, Tuason, & Gerhard, 2010).
Learning Progression – a description or map of the likely order and sequence (but not
necessarily rate) of concepts and practices learners must experience in moving from naïve
to sophisticated mental models and reasonings about (a) STEM-based principles and
phenomena, and (b) STEM-related practices such as inquiry or ill-structured problem
solving (Corcoran, Mosher, & Rogat, 2009; Zalles, Haertel, & Mislevy, 2010).
Literacy, Computer – a complex concept, as definitions change rapidly along with the
technology (Hoffman, Blake, McKeon, Leone, & Schorr, 2003; Mason & Morrow, 2006;
Pérez & Coffin-Murray, 2010), giving rise to broad generalities such as that computer literacy is
“whatever a person needs to know and do with computers in order to function
competently in our information-based society” (Sloan & Halaris, 1985, p. 320). Derived
from a synthesis of definitions (Australian Ministerial Council on Education, 2007; ICT
Literacy Panel, 2007; Stiller & LeBlanc, 2006), this study defines computer literacy as
the confidence and ability to know when and how to use ICT: (a) to access, manage,
integrate, and evaluate information appropriately, (b) to develop new understandings by
creating, analyzing, applying, or transforming information, (c) to communicate and
collaborate with others in order to learn, contribute, and participate effectively in society
as informed citizens, and (d) to use cognitive and technological proficiencies for
problem-solving that generates innovation, transformation, and even societal change
(“generativity” per Pérez & Coffin-Murray, 2010; Schwartz, Varma, & Martin, 2008;
Wittrock, 2010).
Literacy, STEM – interdisciplinary knowledge, skills, and abilities in science, technology,
engineering, and mathematics that enable understandings of the natural world and
patterns and relationships within it, as well as the modification of the natural world for
human purposes (based on NRC, 2011a).
Mental Model – a learner’s cognitive representation of essential parts of a system and cause-and-
effect relationships within it (Mayer, 2008, p. 210).
Preparation for Future Learning – a key assessment target related to adaptive expertise,
demonstrated by changes in learners’ problem-solving schemata (Bransford & Schwartz,
1999) along a novice-to-expert trajectory.
Problem Complexity – a problem’s difficulty, based on how much knowledge is required to solve
it, the learner’s prior knowledge, the complexity of solution procedures, and the number
of relations the learner must process at the same time (Jonassen, 2010a).
Problem, Dimensions of – per Jonassen (2007), either (a) internal, addressing the learner’s prior
domain knowledge, reasoning skills and epistemic beliefs, or (b) external, addressing the
problem’s formation and representation (e.g., complexity, structuredness, dynamicity)
and its situated nature (e.g., cultural or other expectations).
Problem Dynamicity – the extent to which relationships among variables or factors change over
time or the extent to which changes in one affect another (Jonassen, 2010a).
Problem-solving Schemata – typically individualized, generally include the types of problem, the
structural elements of problems (e.g., acceleration, distance, velocity), the situation(s) in
which the problems occur, and the processing operations for solving them (Jonassen,
2010a, p. 369).
Problems, Ill-structured – problems that have unclear goals, multiple solution paths, and
multiple potential outcomes (Jonassen, 2007; Mayer & Wittrock, 2006).
Problems, Well-structured – problems that usually expose all aspects of the problem, organize a
small number of rules that work in a predictable manner, and have a clear and preferred
solution path, with one accepted outcome (Jonassen, 2010a).
Problem Solving – “situated, deliberate, learner-directed, activity-oriented efforts to seek
divergent solutions to authentic problems through multiple interactions amongst problem
solver, tools, and other resources” (Kim & Hannafin, 2011, p. 405).
Problem Solving, Complex – thought processes for reducing barriers to achieving complex,
dynamic, nontransparent goals (Güss, Tuason, & Gerhard, 2009).
Problem Solving Competency – an individual’s capacity and willingness “to engage in cognitive
processing to understand and resolve problem situations where a method or solution is
not immediately obvious” (OECD, 2010, p. 12).
Problem-solving Schemata – characterized by the sophistication of what learners notice about the
problem, how they gather and interpret information, and how they apply the information
or past problem-solving strategies when confronting new problems (Bransford &
Schwartz, 1999).
Scaffolding – a sequence of just-manageable learning challenges and learning aids that, ideally,
are adapted to the needs of each learner as learners move from novice to more expert
levels of KSAs (e.g., Kim & Hannafin, 2011).
Self Regulation – the way in which learners knowingly manage and monitor their own learning
process, including planning, setting goals, implementing cognitive strategies (e.g.,
summarizing), and evaluating their learning effectiveness, as guided by learners’ beliefs,
motivation, and reflective processes (based on Nietfeld & Shores, 2011; Schraw, 2007).
Skill – a learner’s ability to control behaviors, thoughts, and feelings while doing a task in a given
environment (derived from Fischer, 1980; 2008), which develops in a self-organizing
manner toward more complex states (Dawson & Stein, 2008).
Simulations – dynamic, interactive computer-based models that provide representations at all
scales of otherwise invisible or hard-to-enact phenomena, processes, and systems,
typically with controls that allow users to understand the implications of their
manipulations and modifications of hypotheses, natural phenomena, and other models
(NRC, 2011b).
Task, Complex – in problem solving, a task with a large number of interrelated factors that
dynamically affect the problem state (Eseryel, Ge, Ifenthaler, & Law, 2011).
Transfer – in problem solving, the ability to apply concepts creatively to novel circumstances
(DeHaan, 2009); from an adaptive expertise view, preparation for future learning (PFL),
as demonstrated by changes in problem-solving schemata, habits of mind, and other
problem-solving capacities (aligned with Bransford & Schwartz, 1999).
Organization of the Study
Chapter 2 provides a literature review for this study. Beginning with a discussion of
theoretical foundations, it then draws upon Creswell’s (2009) suggested model for literature
reviews with separate discussions of the dependent and independent variables followed by
research addressing them in the context of each other. That synthesis is intended to lead
naturally into Chapter 3, which outlines the methodology for answering the research question,
with prior research findings and gaps in mind. Chapter 4 analyzes data collected in the study,
with Chapter 5 considering implications for practice, limitations, and recommendations for
future studies.
CHAPTER TWO: LITERATURE REVIEW
Organization of this section follows Creswell’s (2009) recommendations for literature
reviews. Following an overview of theoretical foundations, discussions of adaptive expertise
(the dependent variable) lead to discussions of three dimensions of preparation for future
learning necessary for developing adaptive expertise: knowledge and skills, inquiry and
innovation, and motivation and their transfer through time (the independent variables). A
synthesis of the two leads to a discussion of methodology in Chapter 3 (Creswell, 2009).
Theoretical Foundations of the Study
Several researchers suggest that the transfer of flexible strategies for complex, ill-
structured problem solving can positively impact learners’ success in developing more expert-
like problem-solving behaviors. This research draws on concepts and theories that include
preparation for future learning; adaptive expertise; constructivism’s cognitive-flexibility,
conceptual-change, situated-learning, and self-regulated-learning theories; and dynamic skill
theory, all constrained by cognitive load theory, particularly for novice learners.
What unites preparation for future learning, adaptive expertise, cognitive flexibility theory, and
conceptual change theory is developing what Bransford et al. (2010) call an openness of mind for
building, deconstructing, and rebuilding mental models suitable to the goal or problem at hand,
which is closely related to generativity (Pérez & Coffin-Murray, 2010; Schwartz, Varma, &
Martin, 2008; Wittrock, 2010). This ability is not only based on flexible thinking and
constructivist principles for building knowledge, skills, and abilities (KSAs), but also on brain-
based research related to dynamic, developmental pathways toward ever higher-order levels of
cognitive, metacognitive, and emotional capacities, per dynamic skill theory. What must be
considered throughout is the significant cognitive load placed on novice learners during complex,
ill-structured problem solving, and its negative effect on both learning and motivation unless
well managed.
Preparation for Future Learning
Bransford and Schwartz (1999) discuss the inadequacy of the traditional view of transfer,
whereby researchers, failing to find immediate expert-like behavior on follow-on novel
problems, may miss capabilities that are still developing. Critiquing studies that look for
a jump from novice to expert levels following a single or short-term intervention, they argue that
a learning experience does not immediately generate expertise, and that, when it comes to
assessing transfer, it may be more appropriate to demonstrate whether students are on a
trajectory toward expertise. In their view of transfer, the key assessment target is students’
preparation for future learning (PFL), as demonstrated by changes in their problem-solving
schemata, which can be characterized by the sophistication of what learners notice about the
problem, how they gather and interpret information, and how they apply the information or past
problem-solving strategies when confronting new problems (Bransford & Schwartz, 1999).
Bereiter and Scardamalia (1993) concur, focusing on the emergence of more expert-like
behaviors in learners who increasingly approach problem solving with executive function and
organized problem-solving schemata, even if they do not yet have the knowledge and skills to
create solutions without assistance. Bransford and Schwartz (1999) add that the target for
transfer should be related to the emergence of habits of mind associated with creativity and
innovative risk-taking such as the willingness (a) to look critically at current knowledge, (b) to
seek multiple perspectives, (c) to allow ambiguity, (d) to reflect on and monitor one’s own
comprehension, (e) to let go of old ideas, and (f) to persist in the face of difficulty.
Adaptive Expertise
This view of transfer supports the development of an “adaptive expertise” (Hatano &
Inagaki, 1986) that is important for complex, ill-structured problem solving in STEM and the
rapidly changing conditions of the 21st-century environment. According to Hatano and Inagaki
(1986), it is important to focus learners on processes for developing adaptive expertise, which
they define as learners’ ability to understand and to explain the meaning of their demonstrated
procedural skills in a given context, as well as the ability to modify or invent skills and models
depending on changes in setting, requirements, constraints, and other variables of the problem to
be solved. By contrast, they note, routine expertise is about the increasingly efficient application
of skills to similar problems whose solution depends on shared characteristics (e.g., sequences of
strategies etc.). To develop adaptive expertise, students need learning opportunities that (a)
allow multiple ways of altering skills and strategies in response to a system with inherently
random factors, (b) do not focus on rewards for being right so that experimentation is less risky
and more playful, and (c) have a supporting educational culture that emphasizes learners’
metacognitive understanding of the system as the primary aim, rather than prompt, accurate, and
efficient performance (Hatano & Inagaki, 1986).
Constructivist Pedagogy
Along with a focus on developing novice-to-expert learning pathways, the generative and
experimental nature of PFL and adaptive expertise share similarities with constructivist
pedagogical approaches. Constructivism is a learning approach often used in STEM domains to
facilitate active, experiential learning in which students take responsibility for the development
of their understandings. Constructivist pedagogical approaches suggest that when learners assess
the way in which a lesson affects their understanding, they are on a pathway to becoming expert
learners (Joyce, Weil, & Calhoun, 2009). According to Jonassen (1994), constructivist learning
environments engage learners reflectively and collaboratively in their own knowledge
construction through learning experiences that provide them with multiple representations of
content in the context of real-world complexities, tasks, and settings (see situated learning
below). Constructivist learning environments thus avoid oversimplification, out-of-context
instruction, pre-determined sequential learning, and simple knowledge reproduction (Jonassen,
1994). Four theories associated with constructivist pedagogical techniques relevant to complex,
ill-structured problem solving are (a) cognitive flexibility theory, (b) conceptual change theory,
(c) situated cognition theory, and (d) self-regulated learning theory.
Cognitive Flexibility Theory. Based on constructivism and related to the above
concepts of adaptive expertise and preparation for future learning, Cognitive Flexibility Theory
(CFT) addresses learning in complex, ill-structured domains where goals, constraints, and
solutions are not pre-defined (Feltovich, Spiro, & Coulson, 1993; Spiro & Jehng, 1990).
Important in CFT is having learners seek multiple perspectives and make interconnections
among them to develop an overall holistic understanding of the complex problem and its parts
(Feltovich, Spiro, & Coulson, 1993). By clustering related concepts, CFT addresses
misconceptions that can arise when novices apply single, often simpler,
concepts to the problem solution. By helping learners make connections among concepts,
particularly in case-based scenarios, CFT helps learners avoid reductive bias, or oversimplifying
the problem or concept (Feltovich, Spiro, & Coulson, 1993). In that respect, CFT has often been
applied in computer-based learning environments where learners act in a non-linear manner by
exploring the concept or problem without following a prescribed sequence, thereby constructing
a deeper and more complex representation of ill-structured concepts and problems (Feltovich,
Spiro, & Coulson, 1993). Essentially, CFT focuses learners on constructing schemata from
different perspectives, representations, and mental models in order to develop inquiry skills for
complex concepts and to use both prior knowledge and new information flexibly to construct
new knowledge, meanings, and representations.
Conceptual Change Theory. The constructive processing nature of cognitive-flexibility
theory resembles aspects of conceptual-change theory (e.g., Limón, 2001), which is also based
on constructivism. Conceptual change theory is often used as a pedagogical practice in science
education to guide authentic learning experiences based on real-world practices of STEM
disciplines. Stemming from Piaget’s (1977) notion that students restructure knowledge when
real-world experiences do not meet their existing mental models, conceptual change theory
suggests that learners intentionally repair or replace their mental models through a motivated,
metacognitive desire to understand and to control their own learning process (Mayer, 2008).
Conceptual change is about restructuring relationships among existing concepts and acquiring
new ones (Chen & She, 2012). Like CFT, conceptual-change theory has cognitive-process steps
intended to move learners from novice to expert knowledge: (a) using a predict-observe-explain
model in situations that create cognitive conflict in order to help them recognize anomalies and
inadequacies of their current mental models and related misconceptions, (b) constructing new
schemata by incorporating new information that better explains observed phenomena, and (c)
applying newly formed mental models and scientific thinking skills to figure out solutions to
problems (Mayer, 2008). By following this process, learners not only replace their naïve mental
models with more accurate and sophisticated ones in the given context, but also potentially
transfer a cognitive and metacognitive awareness of the importance of, and process for, seeking
alternative explanations in future situations when they again find their current understanding is
inadequate for explaining what they observe (Mayer, 2008). Thus, with practice in conceptual
change, learners progressively prepare themselves for future learning by adopting habits of mind
associated with STEM-discipline practices.
However, conceptual change theory does not assume that learners should acquire strict
procedural steps that commonly apply across all problems or questions. A component of
conceptual change theory related to the construction of flexible schemata offered by CFT is
Gentner’s (1983) structure mapping theory, which suggests ways in which learners can gain the
ability to identify the attributes of the problem or concept, the relationships among them, and
alternative perspectives in order to recognize which schemata apply, when, and how well (i.e.,
their generalizability). Key instructional strategies for acquiring this kind of meta-
understanding, some of which are used in this study, include analogical models that have learners (a)
distinguish between relevant and irrelevant information, (b) map parts and relationships, and (c)
use visual illustrations in order to understand underlying structures of problems in a way that
later supports far transfer of problem-solving strategies (Gentner, 1983).
Situated Cognition Theory. Associated with both constructivist pedagogy and
sociocultural theory (e.g., Vygotsky, 1978), situated cognition theory (Brown, Collins, &
Duguid, 1989) argues that complex problem-solving strategies (e.g., information gathering,
planning, decision making, and action taking) develop in a specific social, cultural, and historic
context (Güss, Tuason, & Gerhard, 2010). Situated cognition theory thus recognizes that the
environment in which learning takes place influences learners’ knowledge construction. Situated
cognition theory applies to STEM learning in that these disciplines have their own languages,
symbols, tools, practices, and procedures, to which students should become accustomed for their
potential future success in 21st-century careers. Like conceptual change theory, situated
cognition theory addresses the fact that students rarely receive instruction that is authentic to
STEM-related cultural practices (e.g., following the scientific method) and that school systems
are often not structured to support the development of adaptive expertise (Bransford et al., 2010).
Current teaching approaches that are not authentic to real-world contexts result not only
in the diminished state of STEM knowledge among U.S. students but also in a demonstrated lack
of interest and motivation that is otherwise critical to persistence in mastering complex concepts
(NRC, 2005, 2007, 2010). For example, in a large-scale national study of over 300,000 students,
learners in grades 3-12 reported being motivated by the opportunity to use real tools of STEM
disciplines (laboratory and technological equipment) and to conduct authentic research including
computer research (Project Tomorrow & PASCO Scientific, 2008). In the same study, teachers
desired (a) animations for helping learners visualize complex concepts, (b) computer interactives
that allow learners to engage in STEM practices in building their expertise, and (c) real tools of
the discipline such as microscopes (Project Tomorrow & PASCO Scientific, 2008).
Several studies show the value of learning in authentic conditions (e.g., Hmelo-Silver,
2004; Lee, Shen, & Tsai, 2008), which may be relevant in facilitating future learning (Schwartz
& Martin, 2004). While research generally supports teaching strategies in a natural, authentic
context, Billing (2007) cautions that studies show transfer can be impeded if the contextual
specificity is so great that learners do not associate underlying principles with other problems
and problem contexts. Billing (2007) further attests that focusing on learning crosscutting
principles and concepts helps learners transfer strategies to novel problems because they can create
more flexible mental models. That idea is recognized by next-generation science standards
(NRC, 2011a; NGSS Lead States, 2013), which support the idea of learning discipline-specific
STEM content standards in the context of crosscutting STEM-based practices (e.g., asking
questions and solving problems, engaging in argument from evidence etc.) and crosscutting
concepts (e.g., patterns, systems and system models etc.). In addition, as Ranks (2000) notes,
brain-based instruction should organize content in authentic situations in a way that enables
learners to construct meaning by supporting the brain’s abilities to perceive patterns, create
meanings, integrate sensory experiences, and make connections.
Providing learners with situated practice through multiple examples in varied contexts is
important, and CBLEs can potentially offer such practice in ways learners would not
otherwise experience in their classrooms. CBLEs are essentially cognitive tools that support
learners’ capabilities for knowledge construction during problem solving, thinking, and learning
(Moos & Azevedo, 2008). In a STEM-domain-specific context, they are situated tools of the
discipline, and familiarity with their use may significantly impact complex problem-solving
abilities (e.g., Barr & Stephenson, 2011). Additionally, when well designed, CBLEs can be
instructional in reinforcing STEM-related epistemic beliefs about knowledge creation and
knowledge change (based on fitness with theoretical models, for example, through empirical
research) given CBLE affordances such as the flexible and iterative ability to hypothesize,
model, experiment, and simulate. In presenting real-world cases, CBLEs provide access to rich
information and allow learners to question and to monitor processes and make and correct errors
(Woolf, 2009), and have affordances for inquiry such as manipulable, nonlinear content and
robust collaboration tools (de Jong, Weinberger, Girault, Kluge, Lazonder,
Pedaste, Ludvigsen, et al., 2012). That kind of STEM-domain understanding and adaptability is
important to today’s learners. Research shows, for instance, that learners with well-developed
and flexible epistemic beliefs more ably interact with information in ill-structured tasks (Winne
& Nesbit, 2010); that is, they behave in a more expert-like manner and, from an adaptive
expertise perspective, do not view knowledge as fixed and immutable based on authority-
delivered truths. Instead, they are able to develop and to demonstrate adaptive, STEM-epistemic
capacities such as looking critically at knowledge, collecting alternative perspectives, and letting
go of old ideas (Bransford & Schwartz, 1999).
Self-Regulated Learning. Self-regulated learning (SRL) is a constructive process in
which learners set goals and then actively monitor and adjust their cognitive, metacognitive, and
affective behaviors depending on goals, criteria, constraints, and conditions of the learning
environment (Pintrich, 2000). While students spend large amounts of time in complex, open-
ended game-based learning environments, little research exists on self-regulation aspects of this
environment (Nietfeld & Shores, 2011). Yet, self regulation is important, as studies show a
positive relationship between self regulation and high achievement among learners (Greene &
Azevedo, 2009; Greene, Bolick, & Robertson, 2009; Nietfeld & Shores, 2011). Critiquing
research studies showing no significant learning advantages, Lee, Lim, and Grabowski (2009)
argue that self regulation is a key factor in learner performance. Examining learners’ SRL
skills across low to high generativity levels, they found that the highest level of generativity
augmented factual and conceptual knowledge, but that learners with low SRL skills did not
show such gains.
Bell and Kozlowski (2009) discuss research showing that cognitive, motivational, and
affective self regulation develops adaptive expertise, and that learner-centered, technology-based
training using active, constructivist-based processes stimulates exploration and experimentation
in tasks, allowing learners to make inferences about rules, principles, and strategies for
subsequent and successful adaptive transfer. Particularly as problems grow harder, self
regulation is impacted by learners’ varying capacities in terms of the efficiency of their working
memory, the breadth and depth of their content knowledge, and the automaticity of problem-
solving strategies (Nietfeld & Shores, 2011). Training learners in self-regulation processes raises
both metacognitive competence and self efficacy, particularly in challenging learning
circumstances (Zimmerman & Tsikalas, 2005).
According to Winne and Nesbit (2010), metacognition sets the conditions for self-
regulated learning by involving learners in (a) monitoring their own knowledge, information,
consequences of actions, performance to standards, etc., and (b) controlling the course of their
learning by choosing strategies and other decision making activities. Novices often inaccurately
perceive their learning through such monitoring, but improve accuracy as they interact
meaningfully with content (e.g., generating their own summaries). Such activities can
potentially be associated with progressive movement along Bransford and Schwartz’s (1999)
trajectory toward expertise through PFL.
Critiquing most studies that typically rely on a single conceptual framework, process, or
methodological technique, Azevedo (2005) contends that CBLEs cannot be used to their full
potential in learners’ exploration of complex, challenging subjects without reflecting the
complex nature of self-regulated learning itself. Azevedo (2005) suggests that a CBLE goes
beyond a cognitive tool and becomes a metacognitive tool when it (a) asks learners to assess the
level of support or utility provided by embedded contextual tools and informational resources, to
decide which to use and when in order to optimize their learning goals, (b) provides learners with
models, prompts, and other supports that enable them to build skills in self and collaborative
inquiry-based reasoning, (c) provides either external or embedded agents to help regulate student
learning, and (d) asks students to employ self-regulatory processes at all stages of learning.
Kalyuga, Renkl, and Paas (2010) concur, discussing the importance of teaching learners
generalized, higher order skills such as analyzing functions (what the use is), processes (how it
operates), and structures (of what it consists) of technical elements related to the problem in
order to elicit deeper, more expert-like analysis, the creation of explanatory principles, and thus
the development of flexible, transferrable schemata. Kalyuga et al. also support the idea that
rapid changes in the contemporary environment necessitate the development of learners who
flexibly adapt their strategies to new, non-routine contexts. They point out that acquiring
adaptive expertise relies not only on innovative application of prior knowledge,
skills, and strategies, but also on quickly learning new ones. Such prior experience
in flexible thinking over time could support accelerated future learning (Chi & VanLehn, 2007;
Li, Cohen, & Koedinger, 2010), which occurs faster and more effectively due to prior learning.
Dynamic Skill Theory
While this study does not utilize neuro-imaging methods, brain-based studies of
developmental pathways and dynamic-systems transitions in students’ cognitive, metacognitive,
and affective states have direct bearing on the topic of adaptive expertise and preparation for
future learning through constructivist processes. According to dynamic skill theory, a Skill is a
learner’s ability to control behaviors, thoughts, and feelings while doing a task in a given
environment (Mascolo & Fischer, 1998), and develops in a self-organizing manner toward
increasingly more complex states (Dawson & Stein, 2008). [In this study, Skill is capitalized
when connected to this theory-based definition in order to distinguish it from other usages, such
as in 21
st
-century skills.] Per this theory, learners do not simply progress up a fixed ladder of
Skill acquisition, from simple to more complex, but rather demonstrate qualitative changes in
their abilities to organize cognitive, metacognitive, and emotional skills (Fischer, van Geert,
Molenaar, Lerner, & Newell, 2014). They gain abilities in multiple, branching,
parallel yet separate strands or domains that form a developmental web (Fischer et al., 2014).
Learners may show spurts in some strands due to complex-dynamic-system factors such as
emergence, self-organization, and attractors; when clusters of spurts occur across separate tasks
and domains through dynamic growth, it can indicate acquisition of a new Skill level (Fischer et
al., 2014). That is, when a learner acquires a collection of abilities across tasks and domains, a
higher order Skill level is emergent, and the learner transitions to a new and more stable state or
Skill system, where further development along strands continues until yet another transition to a
higher order Skill level.
Per Fischer et al. (2014), however, growth curves are neither fixed nor one-directional,
but rather show discontinuities such as spurts and drops. With drops, learners return to lower
level Skills they have previously mastered in order to acquire a new Skill. In this sense, regressions may
be a sign of learning, and thus not necessarily a negative or a lack of transfer, as higher order
Skill levels emerge from the consolidation of lower-order and new, more complex Skills (Fischer
et al., 2014). This concept is closely related to learning processes assumed by conceptual change
theory, which suggests learning happens through restructuring relationships among existing
concepts and acquiring new ones (Chen & She, 2012). When learners are in the process of
changing routine practices, they often get worse before better as they explore new options; in the
case of conceptual change, they must become novices again before understanding at a more
sophisticated level and adopting new paradigms for future application (Bransford et al., 2010).
Because dynamic-skills theory also argues that this kind of dynamic learning is highly
contextualized and dependent on supports, it aligns with situated-cognition theory and other
sociocultural, constructivist pedagogical principles.
Fischer’s (1980) sequence of development has three major tiers (sensorimotor,
representational, and abstract), each with the same four cyclical levels (control of a single
element, mapping of two elements, coordination of at least two mappings into a system, and
coordination of two or more systems into a system of systems). For Jacobs (2010), each level in
Fischer’s (1980) tiered structure suggests the development of progressive mental models. Jacobs
describes these mental models as patterns in neural activity that are controlled through cognitive
executive function, which in turn develops through experience, cognitive activity, and reflective
thinking into an integrated, meaningful framework for complex decision making. Jacobs (2010)
attests that such mental models are essential in complex problem solving, as they increasingly
enable problem recognition, pattern matching, response inhibition while considering alternatives
and future implications, systems thinking, information processing, recognition of inconsistencies,
complex rule formation and complex decision making.
Pertinent to problem-solving abilities, both cognitive and cortical research shows that
learners make a shift from representational to abstract thinking, with their capacity for doing so
based on the level of supports they have (Fischer, 2008), as shown in Table 1.
Table 1:
Estimates of Functional & Optimal Developmental Levels, Adapted from Fischer (2008), p. 133.

Level                            | Functional Age Range (without supports) | Optimal Age (with supports)
Single Representations (Rp1)     | 2 – 5                                   | 2
Representational Mappings (Rp2)  | 4 – 8                                   | 4
Representational Systems (Rp3)   | 7 – 12                                  | 6
Single Abstractions (Rp4/Ab1)    | 12 – 20                                 | 10
Abstract Mappings (Ab2)          | 17 – 30                                 | 15
Abstract Systems (Ab3)           | 23 – 40                                 | 20
Single Principles (Ab4)          | 30 – 45                                 | 25
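The optimal-age column of Table 1 can be read as a simple lookup. The following Python sketch is an illustrative encoding of the table only (the function and its name are this sketch's assumptions, not part of Fischer's theory); it returns the highest level notionally attainable with supports at a given age:

```python
# Optimal ages from Table 1 (Fischer, 2008): the age at which each
# level first becomes attainable *with* supports. Abbreviations
# follow the table's level labels.
OPTIMAL_AGE = {
    "Rp1": 2,        # Single Representations
    "Rp2": 4,        # Representational Mappings
    "Rp3": 6,        # Representational Systems
    "Rp4/Ab1": 10,   # Single Abstractions
    "Ab2": 15,       # Abstract Mappings
    "Ab3": 20,       # Abstract Systems
    "Ab4": 25,       # Single Principles
}

def highest_optimal_level(age):
    """Return the highest level attainable with supports at a given
    age per the optimal-age column, or None below the first onset."""
    attained = [level for level, onset in OPTIMAL_AGE.items() if age >= onset]
    return attained[-1] if attained else None
```

The functional (without-supports) ranges are omitted here for brevity; a fuller encoding would carry both columns.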
This transition is key, as these learners often must develop both representational and
abstract system models, among other tools for representing problems, mapping relationships
among elements relevant to specific problems (similar to structure mapping, per Gentner, 1983),
and engaging in more abstract processes such as hypothesis formation. For middle- and high-
school learners in high-need environments, an initial emphasis on representational knowledge
may be necessary given they may not have had the same learning supports as more affluent peers
and thus may be at lower Skill levels, per the discussion on the need for greater educational
equity in Chapter One. The same may be true for middle- and high-school teachers in high-need
environments, who have not had pedagogical supports themselves for the kind of abstract
thinking needed for ill-structured problems, given that (a) learned teaching practices often center
on what is known rather than on problem solving where unclear, multiple alternatives are
possible (Osborne, 2007), (b) many do not have strong backgrounds in either the STEM subject
they are teaching (NA, 2010) or in teaching with inquiry-based or problem-solving curricula
(Project Tomorrow & PASCO Scientific, 2008), and (c) many may be either digital immigrants in the
use of new ICT-based tools and competencies in teaching (Prensky, 2001) or “alienated labor”
(Carnoy & Levin, 1985) who may not believe in the merit of, or know how to implement, ill-
structured problem solving.
While acquisitions of new Skill levels occur in specific age ranges, cognitive and cortical
research associated with dynamic-skills theory indicates that the path and pattern to Skill
acquisition uniquely varies for each learner along a continuum from functional (without support)
to optimal (with support) levels on task and domain developmental webs, with a developmental
range in learning and problem-solving processes (Dawson & Stein, 2008; Fischer, 2008; Fischer
& Biddell, 1998). As learners experience both spurts and drops while organizing capabilities
across task and domain strands, this idea ties to Bransford and Schwartz’s (1999) argument that a
learning experience does not generate immediate expertise and that transfer is better assessed in
terms of where each learner is along a trajectory toward expertise, or PFL level. By analyzing
cognitive, metacognitive, and emotional development, researchers can better understand, predict,
and produce generalization and transfer across domains (Fischer & Bidell, 1998).
In seeking an understanding of individual learners’ ability levels, it is important to create
developmental rulers (Rose & Fischer, 1989; Fischer, van Geert, Molenaar, Lerner, & Newell,
2014) that (a) measure principles underlying long-term changes in learning and problem-solving
variables and (b) correspond with a sequence of developmental stages. Unlike common
assessment measures in standardized tests, these developmental rulers must be more sensitive to
individual differences in learning pathways across a variety of task and domain abilities, which
develop based on person-context factors such as available supports (Dawson & Stein, 2008).
Relevant to providing teachers and students in high-need schools with equitable educational
opportunities, Dawson and Stein (2008) note that changes in person-context factors can impact
changes in ability and Skill levels. Providing CBLEs might be one such change offering support
for current and future learning that sets students on a pathway toward an adaptive expertise in
complex, ill-structured problem solving. Additionally, computer-based programs can
increasingly help assess individual learners’ dynamic growth, as they continue to access and
adapt prior knowledge in the process of their hierarchical construction of ever more
sophisticated, expert-like understandings along developmental webs (Fischer & Paré-Blagoev,
2000).
In this regard, the ability to describe patterns in learners’ microdevelopmental trajectories
in common terms is important. To describe emergent learning patterns, Yan and Fischer (2007)
define three trajectory types, each progressively requiring decreasing levels of learning support,
based on Vygotsky’s (1978) zone of proximal development. They include (a) unstable
trajectories associated with below-ZPD performance, (b) fluctuating trajectories associated with
performing within ZPD, and (c) stable trajectories associated with above-ZPD performance. Yan
and Fischer also define four key trends that represent pattern-based changes in trajectories: (a)
disorganization, when trajectories remain unstable, (b) stabilization, when trajectories remain
stable, (c) improvement, when trajectories move from fluctuating to stable over time, and (d)
regression, when trajectories move from stable to fluctuating over time. All of these pattern-
based descriptions relate to assessing preparation for future learning and the creation of
developmental rulers to measure it.
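Yan and Fischer's trajectory types and trends lend themselves to a computational sketch. In the illustrative Python below, the numeric ZPD band, the use of a mean score, and the early/late two-window comparison are all assumptions made for illustration, not part of Yan and Fischer's formulation:

```python
from statistics import mean

def classify_trajectory(scores, zpd_low, zpd_high):
    """Label a series of performance scores relative to a ZPD band:
    'unstable' (mostly below ZPD), 'fluctuating' (mostly within it),
    or 'stable' (mostly above it), using the mean as a crude summary."""
    m = mean(scores)
    if m < zpd_low:
        return "unstable"
    if m <= zpd_high:
        return "fluctuating"
    return "stable"

def classify_trend(early_scores, late_scores, zpd_low, zpd_high):
    """Label the change between an early and a late window of a
    trajectory as one of Yan and Fischer's four trends."""
    early = classify_trajectory(early_scores, zpd_low, zpd_high)
    late = classify_trajectory(late_scores, zpd_low, zpd_high)
    if early == "unstable" and late == "unstable":
        return "disorganization"
    if early == "stable" and late == "stable":
        return "stabilization"
    if early == "fluctuating" and late == "stable":
        return "improvement"
    if early == "stable" and late == "fluctuating":
        return "regression"
    return "mixed"  # transitions the four named trends do not cover
```

A real developmental ruler would, of course, rest on validated scoring rather than a raw mean, but the sketch shows how trajectory labels could be derived mechanically from repeated measurements.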
Cognitive Load Theory
As transfer tasks in problem solving grow in complexity, learners’ ability to organize
their knowledge in schemata (perhaps for the attainment of higher order Skill levels per
dynamic-skills theory) becomes even more important based on theories about how humans
process information (Billing, 2007). Per cognitive load theory (CLT), humans have (a) limited
working-memory capacity, able to process only a few pieces of information at the same time,
and (b) a vast long-term memory capacity, with retrievable information stored and organized in
schemata (e.g., Paas, Renkl, & Sweller, 2003, 2004; Paas, Van Gog, & Sweller, 2010; Mayer,
2011). Ill-structured problem solving in STEM inherently has high intrinsic load, particularly for
novice learners. Not only must learners understand domain-specific and domain-general
concepts, they also must be able to make choices in how to apply them and when, which is even
more difficult when multiple solutions are possible. The level of difficulty in learning complex
material depends on whether the learner has prior knowledge and the extent to which it is
organized in schemata and automated in long-term memory (e.g., Sweller, 1994; Mayer, 2011).
Yet, for fluid problem solving, learners must also be able to separate readily from specific
schemata and connect with higher-order, generalized problem-solving schemata (Kalyuga,
Renkl, & Paas, 2010). Far transfer to different problem contexts relies on both the general
transfer of knowledge about principles and methods and the specific transfer of similar problem
elements (Billing, 2007). Compared to crystallized intelligence (acquired knowledge), fluid
intelligence (knowledge about solving, reasoning, and handling new situations) allows the
identification of relationships and decreases cognitive load as unfamiliar elements are “chunked”
into subsets of familiar elements (Billing, 2007). Fluid intelligence closely relates to adaptive
expertise, to dynamic-skill theory’s premise that learners often return to prior knowledge in the
process of attaining new Skill levels, and to conceptual-change theory’s idea that learners
restructure knowledge when real-world experiences do not meet existing mental models.
Further brain-based connections also exist. Based on neuro-imaging studies, cognitive
load on working memory in complex problem solving depends on the complexity of the task
instructions (Duncan, Schramm, Thompson, & Dumontheil, 2012). Learners’ neglect of rules
and their applications increases with task complexity, when working memory is insufficient to
keep attention on them. Fluid intelligence, Duncan et al. argue, is helpful in such conditions, given it aids
in building a mental control program that allows novelty in the segregation and construction of
multiple parts of tasks and associated rules. The more a task combines multiple complex novel
parts, the greater the correlation with fluid intelligence. The greater the complexity and novelty,
the more individual differences appear in mental-control programs. Related results show
learners high in fluid intelligence performed at a higher level when given a strategy, since they
could access and use it regardless of interference (Nusbaum & Silvia, 2011).
When Kalyuga, Renkl, & Paas (2010) indicate that a capacity for flexible problem
solving relies on self-regulation and metacognitive skills, they acknowledge that developing
those skills must be managed with cognitive load in mind. Most classroom
instruction focuses on reproducing the outcomes of theories and experiments (i.e., scientific
facts) rather than engaging in the scientific process (i.e., scientific thinking), yet the latter is
increasingly required for raising problem-solving skills that allow comprehension of the complex
and dynamic physical world (Mayer, 2008; Osborne, 2007). Adaptive learning environments
that provide differentiated instructional support in the development of flexible problem-solving
skills and management of cognitive load help move learners back and forth between the general
and specific features of a problem-solving scenario (Kalyuga, Renkl, & Paas, 2010). As learners
grow in knowledge and cognitive demand lessens, learner-controlled environments that allow
self-regulation and metacognition could enhance flexible problem-solving skills (Kalyuga,
Renkl, & Paas, 2010). In short, to avoid non-germane cognitive load, a
CBLE should keep necessary domain-specific content knowledge to only that which would be
relevant to the problem at hand and organize cognitive content from simple to complex.
Synthesis and Implications of the Theoretical Foundations
Transfer of flexible strategies for complex, ill-structured problem solving can assist
learners in developing more expert-like, problem-solving behaviors. Learners must develop an
openness of mind for building, deconstructing, and rebuilding mental models. When transfer is
assessed as changes in learners' preparation for future learning, each problem-solving experience
can build expert-like strategies for analyzing problems, gathering, interpreting, and applying
information, and creating and testing solutions. It can also build habits of mind such as looking
critically at information and understanding its uncertainty, seeking multiple perspectives and
making interconnections, letting go of old ideas, taking risks, altering strategies per the situation,
persisting while struggling to learn, and assessing the way a lesson affects one's understanding.
These strategies and habits of mind can be organized into problem-solving schemata that
support the development of adaptive expertise in complex ill-structured problem solving,
particularly when these schemata and underlying knowledge structures are flexible rather than
fixed. When problem-solving strategies and related adaptive expertise develop in authentic, real-
world settings (or simulations of them), the instruction can facilitate future learning and
motivation-related behaviors relevant to a 21st-century environment. Instruction in
self-regulation in challenging learning circumstances can increase metacognitive capacity,
generalized higher order skills, and self-efficacy. CBLEs can be instrumental in providing
learners with real-world simulations and situations, cognitive and metacognitive tools, real-time
adaptive assessments, and other resources not available in most contemporary classrooms. As
the computer is a tool of STEM disciplines, its use also supports the idea and importance of
learning in situated contexts. However, as transfer tasks grow in complexity, cognitive load for
novice learners cannot be ignored as it affects both learning and motivation.
Dependent Variable: Adaptive Expertise in Ill-structured STEM Problem Solving
Adaptive expertise is the capacity to understand and to explain the meaning of knowledge
and skills in a given context and to modify or invent skills and models depending on changes in
setting, requirements, constraints, and other variables of the problem to be solved (Hatano &
Inagaki, 1986). Adaptive expertise depends upon learners’ evolving preparation for future
learning (PFL), including changes in their problem-solving schemata (Bransford & Schwartz,
1999). A problem-solving schema includes the sophistication of what learners notice about the
problem, how they gather and interpret information, and how they apply the information or past
problem-solving strategies when confronting new problems (Bransford & Schwartz, 1999). The
emergence of more expert-like behaviors is related to developing innovation-related habits of
mind such as looking critically at current knowledge, seeking multiple perspectives, allowing
ambiguity, monitoring understanding, letting go of old ideas, and persisting in the face of
difficulty (Bransford & Schwartz, 1999). Leveraging Bransford and Schwartz’s (1999) idea that
it may be more appropriate to demonstrate whether students are on a trajectory toward expertise
as opposed to demonstrating expert-like mastery when assessing transfer, Bransford et al. (2010)
offer a two-dimensional diagram for the development of adaptive expertise (Figure 1) based on
Schwartz, Bransford, and Sears’ (2005) original conceptualization of its component parts.
Figure 1:
2-Dimensional Model of Adaptive Expertise (adapted from Bransford et al., 2010, p. 829)
In this model, learners must continue to grow both in the efficient use of specific skills and
knowledge (x axis) and in a capacity for inquiry and innovation (y axis) to develop adaptive
expertise. Too much inquiry and innovation without knowledge and skills leads to frustrated
novices (top of y axis), whereas too much reliance on building specific skills and knowledge
without inquiry and innovation develops routine experts who are simply increasingly efficient in
their knowledge use (rightward on x axis). The idea is that, by engaging in both dimensions,
learners can find themselves in the optimal adaptivity corridor, which ultimately leads to
increasing levels of adaptive expertise. This model suggests that learning environments that
provide balance in the two dimensions can best support learners in being on a trajectory toward
adaptive expertise. However, it does not fully take into account metacognitive strategies (e.g.,
self regulation) and motivational factors (e.g., interest, self-efficacy) central to learners’
persistence in ill-structured problem solving or show the way in which continual practice in
problem solving, with many novel problems of both the same and different types, moves learners
progressively toward greater cognitive and metacognitive knowledge, contextual competence,
and self-efficacy. It also does not take into account that any scoring of learners'
problem-solving capacities within an optimal adaptivity corridor would be influenced by the
availability and quality of supports (e.g., scaffolds) learners may or may not have. Based on a
review of the research, the model has not yet been tied to a substantive assessment system that
could evaluate where learners are in this zone, or to indicators that would support that kind of
assessment.
Figure 2 offers a more inclusive model that can assess learner progressions in acquiring
adaptive expertise.
Figure 2:
Proposed 4-Dimensional Model of Adaptive Expertise
This model retains Bransford et al.’s (2010) original two dimensions: skills and
knowledge (x axis) and inquiry and innovation (y axis). Associated with learning in both of
these dimensions are cognitive and metacognitive scaffolds that can provide learners with
assistance in learning just-out-of-reach concepts or skills and in being cognizant of their learning
process and their current state within it. The model adds a third dimension, motivation (z axis),
with supporting affective and metacognitive scaffolds for that development. This dimension is
particularly important in complex, ill-structured problem solving because learners must persist in
the face of emotionally managing cognitive load, uncertainty, the cognitive dissonance that is a
part of conceptual change, and early failures that are part of the iterative learning process in
solving novel problems. Finally, the fourth dimension considers development through time,
reflecting learners’ individual journeys through multiple ill-structured problem-solving
experiences, a path or progression that is likely neither simply additive nor able to be
standardized.
This model applies to both a single ill-structured problem as well as a set of them. That
is, the trajectory of learning can be interpreted either as changes in learner capabilities during
sub-stages or component learning tasks while solving a single problem (p1-pn) or as changes in
learner capabilities across multiple complex ill-structured problems (P1-Pn). These progressions
notionally occur in a four-dimensional space noted as the Preparation for Future Learning (PFL)
Zone. Following dynamic skill theory, in this simple model, each learner will likely have
different plots along their individual learning pathways, depending on their various cognitive,
metacognitive, and emotional ability levels and patterns in returning to prior abilities while
acquiring new ones.
While the simplified four-dimensional model shows a single, composite point to
represent an individual learner’s relative state on a trajectory to adaptive expertise (i.e., some
number representing an ability level in the three dimensions (#x, #y, #z) and in the fourth
dimension of time), each aggregate “score” would be based on sub-indicators for each dimension
and include an understanding of growth patterns behind the scale, per Fischer (2008). For
example, to create a single plotted point representing the learners’ current state, learners might be
measured on such things as changes in (a) discipline-specific, standards-aligned understandings
or competencies, problem-solving procedures, etc. (knowledge and skills), (b) numbers of
alternative perspectives considered, new hypotheses or solutions suggested, etc. (inquiry and
innovation), and (c) time on task as an indicator of interest or persistence, perceived self-efficacy
in problem solving etc. (motivation). For computer-based assessments, these are not simply
points, but would map into a developmental web indicating key desired changes in complex,
ill-structured problem-solving capacities and allow dynamic movement as learners inconsistently go
back and forth between higher and lower levels within and among tasks and domains in the
process of learning.
In that regard, while using the term learning progression, it is important to note that
regressions often occur. In Figure 2, the backwards dip from P2 to P3, for example, takes into
account principles of dynamic systems and conceptual change theories, as well as Bransford et
al.’s (2010) point that when learners are in the process of changing routine practices, they often
get worse before getting better, as they explore new options and, in the case of conceptual
change, must become novices again before understanding at a more sophisticated level and
adopting new paradigms for future application. As Fischer (2008) relates, a new kind of skill
may emerge, but is not organized to produce consistent performance for several years, a
phenomenon called later-level consolidation. Thus, measuring transfer as preparation for future
learning, or perhaps, alluding to dynamic skill theory, preparation for later-level consolidation, is
an important consideration in assessment. Where diagnostics can potentially be especially
helpful is in identifying areas where learners are struggling or lagging, suggesting where
additional supports (e.g., opportunities to practice) might be optimally targeted.
With learning expressed in terms of a complex dynamic system (Fischer, 2008), the
model for a PFL-based trajectory toward adaptive expertise is more complex than the depiction
in Figure 2, with each learner’s individual learning-progression map dependent on the
individual’s initial and fluctuating states in knowledge and skills, inquiry and innovation, and
motivation, or combinations thereof, over the course of problem solving, conceptual change, and
schema formation. When learners attain a certain level in one variable, it might serve as an
attractor for others, affecting the pattern of growth in the overall “adaptive-expertise system” of
each individual learner. For example, a learner who acquires an ability to risk iterative failures
may see increased personal interest (within-variable effect) in pursuing multiple evidence-based
options and/or deepened content knowledge (between-variable effect) as a result. Reaching a
new PFL level on the path to adaptive expertise may be emergent, occurring as learners develop
their abilities in knowledge/skills, inquiry/innovation, and motivation. Emergent outcomes are
structures resulting from self-organizing elements and interactions among them, not simply
based on sub-goals and sub-skills (Scardamalia, Bransford, Kozma, & Quellmalz, 2012). In
other words, adaptive expertise is not the simple sum of each of these variables, an additive
learning process, but rather a pattern (perhaps habits of mind per Bransford & Schwartz, 1999)
that newly arises out of their complex and synergistic interactions. That is also, in essence, the
idea behind learners reaching a new developmental Level per dynamic skill theory.
In addition, in relation to the dynamic-systems concept of attractors, positive or negative
relationships among the three variables at different points may influence the shape and space of
learners’ actual PFL zones. For instance, motivation at one point might accelerate novel
experimentation, but result in misconceptions that set the learner back in terms of knowledge and
skills; at another, only a combination of knowledge and motivation might lead to breakthrough
and accurate conceptual change; at yet another, increased knowledge alone might influence self-
efficacy enough to spark a spurt of motivation-related behaviors. A learner might also remain in
a fairly stable place for a period of time, before continued incremental change creates enough of
an effect to result in breakthrough conceptual change or clearly higher levels of expertise.
Additionally, factors in the wider system in which learning takes place (e.g., classroom climate,
stability in the home, peer interactions, etc.) and learner characteristics (e.g., comfort in
risk-taking) can also impact learners' existing and progressive states. Modeling those dynamic
relationships and structures, however, lies beyond the scope of this study.
Further, in the current proposed model, "reaching a new PFL level" (indicated by the PFL
area) is somewhat ambiguous in that indicators for progressive novice-to-expert "levels" in
ill-structured problem solving have not been specifically articulated either in research or in
national grade-appropriate, standards-aligned problem-solving KSAs. Moreover, a single point in
space and time for each variable (in essence, an aggregate "score") is a function of multiple
sub-variables, alone and in interaction, and likely needs further explication to move from a more
abstract construct to an adoptable, generalizable measure that could represent learners'
capabilities with accuracy.
Instead, for the purposes of this study, simpler functions for adaptive expertise in each of
three dimensions can be expressed. In this study, adaptive expertise is conceptually considered
to be a function of preparation for future learning, which in turn is a function of changes in the
three dimensions (knowledge and skills, inquiry and innovation, and motivation) through the
fourth dimension of time, or:
Level of AE = f(ΔPFL) = f(ΔKS, ΔII, ΔM).
In a complex model, numerous factors could be measured in support of each dimension and
organized into an equally complex developmental web. For this study, a smaller set of measures
used as indicators are characterized in the following discussion of independent variables.
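A minimal sketch can make the function above concrete. Assuming, purely for illustration, that each dimension is scored on a common 0-to-1 scale and that f is an unweighted mean of the deltas (both assumptions of this sketch, not of the study), Level of AE = f(ΔPFL) = f(ΔKS, ΔII, ΔM) could be coded as:

```python
from dataclasses import dataclass

@dataclass
class PFLSnapshot:
    """One measurement of the three dimensions at a point in time.

    The field names and 0-1 scaling are illustrative assumptions; the
    study's actual indicators are described in the surrounding text.
    """
    knowledge_skills: float    # KS: e.g., standards-aligned competencies
    inquiry_innovation: float  # II: e.g., alternative perspectives considered
    motivation: float          # M:  e.g., time on task, perceived self-efficacy

def delta_pfl(earlier, later):
    """Change in each dimension between two points in time (dPFL)."""
    return {
        "dKS": later.knowledge_skills - earlier.knowledge_skills,
        "dII": later.inquiry_innovation - earlier.inquiry_innovation,
        "dM": later.motivation - earlier.motivation,
    }

def adaptive_expertise_level(earlier, later):
    """Level of AE = f(dPFL) = f(dKS, dII, dM), with f a placeholder
    unweighted mean of the three deltas."""
    d = delta_pfl(earlier, later)
    return (d["dKS"] + d["dII"] + d["dM"]) / 3
```

As the surrounding discussion stresses, adaptive expertise is emergent rather than additive, so this linear f is only a placeholder for a richer, pattern-sensitive function over the developmental web.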
Independent Variables: Dimensions of Preparation for Future Learning
Independent variables that together influence changes in preparation for future learning
along a trajectory toward adaptive expertise include changes in the aforementioned three
dimensions of knowledge and skills, inquiry and innovation, and motivation, as measured in an
evolving way through a fourth dimension, time. At this level, these dimensions are fairly high-
level, and thus coarse in defining learner levels of adaptive expertise. Based on the research
literature, some sub-variables are delineated for this study. They cannot represent the totality of
possible measurements, but do provide a finer level of analysis that, in an exploratory way suited
to the purpose of this study, might highlight some of the sub-concepts and sub-skills that support
proficiency in ill-structured problem solving. Per dynamic skill theory, potential patterns in
individual sub-concept and sub-skill development, including potential clusters among them,
might also suggest the acquisition of higher order understandings enabling adaptive expertise.
Dimension 1: Knowledge and Skills
Knowledge relates to a learner’s abilities to comprehend in order to act with purpose,
intention, and reflection, whereas skills relate to a learner’s abilities to perform tasks through
practice (Pérez & Coffin-Murray, 2010).
Knowledge. Cognitive research shows that experts in science organize and represent
knowledge differently than novices, who, for instance, focus on superficial problem features
rather than on principles (Day & Goldstone, 2012; Vosniadou, Ioannides, Dimitrakopoulou, &
Papadimitriou, 2001). Cognitive or knowledge structures describe hierarchies of various ways in
which a learner (a) arranges and reorganizes attributes and relationships among concepts and
procedures in a unified manner (i.e., constructs and adjusts holistic schemata), and (b) uses ad
hoc, dynamic mental models to represent complex concepts in simplified ways to support
understanding (e.g. Ifenthaler, Masduki, & Seel, 2011). To characterize the relative development
of expertise in novices, it is useful to compare their cognitive structures with those of experts, the
latter of which have more integrated, linked, and inter-related concepts and procedures
(Ifenthaler, Masduki, & Seel, 2011). In knowledge representation, learners develop discrete
symbols representing meaningful concepts and combine them in structures that reflect their
relationships (Day & Goldstone, 2012). By understanding deep situational structures and
systems of structures, concepts can be more flexibly recombined (Day & Goldstone, 2012). Per
situated cognition theory, it is important for these representations to be based on real-world
disciplines. The more learners’ representations resemble those of experts, the greater their
progression from novice levels. In the developmental process of conceptual change, however,
learners acquire misconceptions as they come up with alternative ideas to explain phenomena,
often striving to modify pre-existing and often concrete ideas to retain prior understandings
rather than change them completely (Vosniadou et al., 2001). In this regard, a focus on
encouraging learners’ meta-conceptual awareness of their own beliefs as they construct
explanations is key, as is a focus on building models and external representations rather than
memorizing facts and explanations (Vosniadou et al., 2001).
For ill-structured problem-solving, Hill and Hannafin (2001) offer four broad classes of
technology-based scaffolds for acquiring specific kinds of knowledge: (a) conceptual, (b)
procedural, (c) strategic, and (d) metacognitive. According to Saye and Brush (2001), during
problem solving, learners follow expert-like practices by using (a) conceptual scaffolds to find
out what information is necessary, (b) procedural scaffolds to learn how to use resources and
tools helpful to problem solving, (c) strategic scaffolds to raise awareness of the value of
different approaches and techniques, and (d) metacognitive scaffolds to help self-regulated-
learning processes.
Conceptual Knowledge. Conceptual knowledge in problem solving largely references a
learners’ ability to seek and sort necessary information relevant to their problem and solution
path (Saye & Brush, 2001). For problem-solving concepts, such knowledge includes
understandings of crosscutting principles and concepts, which is a transfer strategy for creating
flexible mental models (Billing, 2007), deep situational structures and systems of structures,
which aids in flexible recombinations (Day & Goldstone, 2012), and representations and models
of expert-like problem-solving steps, processes, and strategies (Ifenthaler, Masduki, & Seel,
2011). Conceptual knowledge can also relate to domain-specific understandings of principles
and processes within a given STEM discipline.
Procedural Knowledge. Procedural knowledge helps learners understand how to use
resources and tools helpful to problem solving (Saye & Brush, 2001). According to Hatano and
Inagaki (1986), such procedural knowledge assists learners in integrating prior and new
conceptual knowledge by focusing learners on understanding: (a) why a given procedure
produces the results it does, (b) how it varies when applied to alternative examples due to their
deep features, and (c) what basic models are at work. For example, Ge, Planas, and Er (2010)
found a positive effect when learners procedurally accessed examples of how experts conducted
the task, compared their approach to the more expert one, and revised their problem-solving
paths. Given the nature of STEM-based, ill-structured problem-solving and the use of computers
as situated tools of those disciplines, such epistemic procedural understandings also include
knowledge of when and how to use ICT resources to access, manage, analyze, integrate, apply,
create, transform, and evaluate information (Australian Ministerial Council on Education, 2007;
ICT Literacy Panel, 2007; Stiller & LeBlanc, 2006).
Strategic Knowledge. Strategic learners are motivated problem solvers with the capacity
(a) to use strategies effectively, efficiently, and knowingly, (b) to self-direct and self-regulate
their learning, and (c) to generalize their use of strategies (Montague & Dietz, 2009). For
example, using situated, epistemic cognitive tools, Liu et al. (2009) demonstrate strategic
knowledge gains among learners including capacities related to (a) acquiring just-out-of-reach
information (e.g., measurement instruments such as seismographs and cameras), (b) organizing,
storing, and retrieving information (bookmarking and note booking), (c) acquiring behaviors
helpful to reaching a solution (video experts), and (d) generating hypotheses and solutions (control
room). Strategic knowledge can also include effective ways to use mental models and
approaches (e.g., generating questions, analyzing structural components, representing meaning,
predicting, etc.) that support learners’ own independent thinking and learning in complex topics
(Conley, 2008). However, research is lacking on cognitive-strategy knowledge in secondary
classrooms, particularly in complex learning situations when required knowledge, skills, and
thinking processes are neither fixed nor clearly understood and adolescents’ mastery of cognitive
strategies is needed (Conley, 2008). Due to pressures such as standardized testing that
emphasize rigid, step-by-step cognitive strategies that organize existing knowledge for fixed
tasks, adolescents are often ill-prepared for college and workplace success, which
require strategic knowledge about how to think and to build one’s own learning capacities
through reasoning, generating meaning, and finding solutions through inquiry and studies of
discrepant information and evidence (Conley, 2008).
Metacognitive Knowledge. Metacognition refers to the conscious awareness and control
of one’s own cognitive processes (e.g., Verschaffel, Luwel, Torbeyns, & Van Dooren, 2009).
Eliciting learners’ process-related metacognitive strategies potentially helps bring to
consciousness their current understandings and areas of difficulty. In Billing’s (2007) review
of 700 research studies on problem-solving transfer, learning metacognitive skills increased
student performance. According to Chi and VanLehn (2007), students who used a metacognitive
learning strategy focused on principles showed transfer from one domain to another (probability
to physics) and accelerated learning. Metacognitive knowledge also helps address students’ self-
regulation difficulties, which hinder their learning in complex STEM topics (Azevedo, Cromley,
& Seibert, 2004; Azevedo, Greene, & Moos, 2007; Dabbagh & Kitsantas, 2005; Hmelo-Silver
& Azevedo, 2006; Winne et al., 2006). A metacognitive awareness of SRL strategies such as
checking for one’s own understanding, reflecting, and monitoring comprehension is a key habit
of mind associated with PFL (Bransford & Schwartz, 1999). Metacognition also allows learners
to regulate their selection of strategies in a flexible manner (e.g., Verschaffel, Luwel, Torbeyns,
& Van Dooren, 2009), and is furthered by a meta-conceptual awareness of one’s own beliefs
during explanation construction (Vosniadou et al., 2001).
While learners do not largely seek opportunities to build metacognitive knowledge
(Belland, Glazewski, & Richardson, 2011), combining cognitive and metacognitive learning
prompts can effectively help learners consciously acquire knowledge about which aspects of the
learning material they have already understood well and where they have self-identified
comprehension difficulties (Berthold, Nückles, & Renkl, 2007). For example, combining
generative learning strategies (highlighting, summarizing/note-taking, and adjunct questioning)
and related metacognitive feedback (tips to revise highlights and summaries if the response to the
adjunct question was incorrect) can affect learners’ comprehension and self-regulation; among
other results, learners’ use of self-regulation strategies positively correlate with comprehension
scores in a complex science topic (Lee, Lim, & Grabowski, 2009). Yet, metacognition imposes a
dual-task cognitive load, as learners must not only think through the problem and its solution, but
also self-monitor by thinking about those thoughts (Schwartz et al., 2009). Techniques that
alleviate this dual-task cognitive load (e.g., a “learning by teaching” approach where learners
teach with a self-generated concept map) result in stronger performance (Schwartz et al., 2009),
as do other means of embedding supports for metacognitive knowledge as seamlessly as possible
(Shute & Spector, 2008; Shute, Ventura, Bauer, & Zapata-Rivera, 2008).
Skills. Skills relate to a learner’s abilities to perform tasks through practice (Pérez &
Coffin-Murray, 2010), and are often evaluated on the efficiency or effectiveness of task
performance. When students learn skills in situated contexts, they become linked to similar
situations, facilitating transfer (Vosniadou et al., 2001). Per dynamic-skills theory, a clustered
combination of component skills may reveal movement from one Skill Level to another (e.g.,
Fischer, 2008; Fischer, van Geert, Molenaar, Lerner, & Newell, 2014). Problem solving itself is
a 21st-century skill (APEC, 2008; Coalition of the Assessment of 21st Century Skills, 2010;
NRC, 2010; Partnership for 21st Century Skills, 2008). For adaptive expertise, general overarching
skills include the ability to analyze current knowledge critically, a habit of mind per Bransford
and Schwartz (1999). Kalyuga, Renkl, and Paas (2010) discuss the importance of teaching
learners generalized, higher order skills such as analyzing functions (what the use is), processes
(how it operates), and structures (of what it consists) of technical elements related to the problem
in order to elicit deeper, more expert-like analysis, the creation of explanatory principles, and
thus the development of flexible, transferrable schemata. Essential skills also include adaptive-
expertise-related capabilities such as (a) building models and external representations, rather than
memorizing facts and explanations, and (b) understanding the meaning of procedural skills in a
given context (Hatano & Inagaki, 1986).
Dimension 1 Sub-variables. For this study, the dimension knowledge and skills is
defined as a function of changes in the following elements, as summarized in Table 2, and
described in conceptual short-hand as:
KS = f(ΔPSK, ΔPSS, ΔSTEM)
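The functional notation above can also be read programmatically. The following is a minimal, hypothetical sketch: the study defines each dimension only abstractly as a function f of changes (deltas) in its sub-variables, so the simple mean used here for f, and the example delta values, are illustrative assumptions rather than the study’s method.

```python
from statistics import mean

def dimension_score(deltas):
    """Combine observed sub-variable changes into one dimension-level change.

    Hypothetical operationalization: the dimension is f of its sub-variable
    deltas; a simple mean stands in for the unspecified f.
    """
    return mean(deltas.values())

# KS = f(dPSK, dPSS, dSTEM), with made-up example deltas
ks = dimension_score({"PSK": 0.4, "PSS": 0.2, "STEM": 0.3})
print(round(ks, 2))  # 0.3
```

The same pattern would apply to the other two dimensions, II = f(ΔC, ΔF, ΔG, ΔPN) and M = f(ΔMO, …), by swapping in their sub-variable deltas.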
Table 2:
Sub-Variables for Dimension 1: Knowledge & Skills

PSK: Problem-Solving Knowledge. Problem-solving schemata (Brush & Saye, 2001; Hill & Hannafin, 2001), including:
• conceptual (understanding what information is necessary in problem solving):
a. crosscutting principles and concepts, a transfer strategy for creating flexible mental models (Billing, 2007)
b. deep situational structures and systems of structures, an aid in flexible recombinations (Day & Goldstone, 2012)
c. representations/models of expert-like problem-solving steps, processes, and strategies (Ifenthaler, Masduki, & Seel, 2011)
• procedural (how to use resources and tools helpful to problem solving): understanding of when and how to use ICT resources to access, manage, analyze, integrate, apply, create, transform, and evaluate information (Australian Ministerial Council on Education, 2007; ICT Literacy Panel, 2007; Stiller & LeBlanc, 2006)
• strategic (awareness of different problem-solving approaches and their value): understanding the strategic use of:
a. mental models/approaches that support independent thinking and learning in complex topics, e.g., generating questions, analyzing structure, organizing information, representing meaning, predicting, seeking alternative explanations when understanding differs from the observed, generalizing (Conley, 2008; Mayer, 2008; Schunk, Pintrich, & Meece, 2008)
b. techniques such as search-and-replace, serial elimination, and space splitting (Jonassen, 2010a)
• metacognitive (knowledge of learning how to learn, including SRL strategies):
a. meta-conceptual awareness of one’s own beliefs during explanation construction (Vosniadou et al., 2001)
b. SRL strategies such as checking for understanding, reflecting, and monitoring comprehension; self-regulation as a habit of mind (Bransford & Schwartz, 1999)

PSS: Problem-Solving Skills, as found in expert-like problem solving:
a. to analyze current knowledge critically, a habit of mind (Bransford & Schwartz, 1999)
b. to build models and external representations rather than to memorize facts/explanations, an adaptive-expertise process (Hatano & Inagaki, 1986)
c. to understand the meaning of procedural skills in a given context, an adaptive-expertise process (Hatano & Inagaki, 1986)
d. higher-order skills: to analyze functions (what the use is), processes (how it operates), and structures (of what it consists) of the problem (Kalyuga, Renkl, & Paas, 2010)

STEM: STEM knowledge relevant to the specific complex, ill-structured problem. Understanding of learning objectives defined in the individual learning activities per Anderson and Krathwohl’s (2001) taxonomy; STEM standards alignment (NGSS Lead States, 2013; NRC, 2011a)
Dimension 2: Inquiry and Innovation
Inquiry. Fundamentally, inquiry allows learners to acquire knowledge in any discipline.
In the context of STEM-based, ill-structured problem-solving, inquiry is a constructivist learning
approach that enables learners actively to build their own knowledge by asking questions,
gathering data, making and improving hypotheses, finding solutions and evidence-based
theories, and reflecting. In instruction, scaffolded inquiry guides learners in reflective activities
to enable the learner to construct deep meaning (Peters & Slotta, 2010). A common critique of
traditional science teaching is that instruction often centers on knowledge content
without emphasizing these and other inquiry-based processes (NRC, 2011a). As students ask
questions, generate hypotheses, and collect data, inquiry-based learning often promotes
metacognitive as well as cognitive understandings, and prior research shows that inquiry learning
is crucial to developing critical thinking, problem solving, and reasoning skills (Woolf, 2009).
For adaptive expertise, the ability to ask questions freely and to be flexible in allowing one’s
own conceptual change in making meaning is key (Woolf, 2009).
Curiosity. Important to inquiry in the context of adaptive expertise, curiosity reflects the
level of sophistication learners show in asking exploratory and critical questions freely (Woolf,
2009). Asking questions that can be answered with empirical evidence is a key science and
engineering practice emphasized in next-generation science standards given its importance in
understanding the natural and human-built world, designing inquiry-based experiments, and
constructing solutions (NRC, 2011a). Asking questions is important to adaptive capacities in
refining research questions and problems, determining and modifying constraints and
specifications of solutions, identifying features, patterns, and contradictions in observations,
identifying evidence on which arguments are made, ensuring elaborations, and reflecting on
new questions based on the results of inquiry-based discoveries (NRC, 2011a). It is thus an
indicator for curiosity as a component of inquiry.
Innovation. Innovation is related to generating, synthesizing, implementing, and
improving ideas (Holman, Totterdell, Axtell, Stride, Port, Svensson, & Zibarras, 2012; Peters &
Slotta, 2010), a process that supports knowledge building (Peters & Slotta, 2010). Innovation
can be either an outcome or a process (mechanisms related to restructuring concepts or
environments), including conceptual change as a subset of innovation (Schwartz, Varma, &
Martin, 2008). Transfer in conceptual change involves two challenges: (a) the knowledge
problem, finding ways to craft new knowledge from prior knowledge, and (b) the inertia
problem, rigidity in changing ideas due to the inability to consider alternatives or risk avoidance
(Schwartz, Varma, & Martin, 2008). In the knowledge problem, learners apply prior knowledge
to explain phenomena in new ways (similarity transfer) or coordinate prior KSAs to create novel
concepts and outcomes (dynamic transfer) (Schwartz, Varma, & Martin, 2008). Behavioral and
cognitive learning strategies help learners gain task knowledge by organizing it within existing
schemata or creating new schemata (Holman et al., 2012). Building flexible knowledge through
ill-structured problems ensures integration of cross-domain information in long-term memory for
application in different problem scenarios (Hmelo-Silver & Eberbach, 2012).
The concept of novelty arises in the context of innovation. Looking at the function,
structure, and detail of design alternatives, Lopez-Mesa and Vidal (2006) measure novelty in
terms of the number of alternatives produced by each design team, as well as non-obviousness
(i.e., originality of products among design teams). Srinivasan and Chakrabarti (2010) discuss
different aspects of novelty, including originality, unexpectedness, infrequency, non-
obviousness, and newness. Their research shows that if learners explore more and varied ideas
while designing solutions, the variety of concepts they produce increases, leading to greater
novelty in their concept space. Zahner, Nickerson, Tversky, Corter, and Ma’s (2010) study
cautions, however, that originality is negatively correlated with the correctness of solutions.
That finding relates to Bransford et al.’s (2010) idea that inquiry and innovation alone, without
knowledge and skills, lead to frustrated novices. Thus, learners’ ability levels in the knowledge
and skills dimension must also be sufficient before a converging cluster of skills can lead to a
new Skill Level per Dynamic Systems Theory.
Zahner, Nickerson, Tversky, Corter, and Ma (2010) show positive correlations between
the presentation of an abstract rather than concrete description of a problem and variety of
solutions to it. That is, learners given a specific concrete example prior to creating innovative
ideas had fewer creative concepts. That finding might suggest that worked examples, which often
help in well-structured problems, may not be effective in supporting adaptive expertise in
ill-structured problems relying on innovation. In Zahner et al.’s study,
strategies such as abstraction and re-representation successfully increased the originality of
solutions, suggesting these two processes in combination may assist learners with divergent
rather than convergent thinking. Similarly, Sibley (2009) notes that only occasionally does
retrieval (recognizing elements of a new situation that relate to known representations) invoke a
novel analogy. In considering conceptual change, Schwartz, Varma, and Martin (2008) agree
that learners’ transfer is often on the basis of repetition rather than innovation, particularly in the
context of reproducing rather than changing behaviors. The process of abstraction and
re-representation also aligns with cognitive development per Dynamic Skill theory and with
processes of conceptual change.
Flexibility. Adaptive expertise requires flexibility in making meaning through fluid and
executive abilities and conceptual change (e.g., Hmelo-Silver & Eberbach, 2012; Woolf, 2009),
including the capacities to allow ambiguity, to let go of old ideas, and to take cognitive risks
(habits of mind per Bransford & Schwartz, 1999), as well as to consider alternatives (Schwartz,
Varma, & Martin, 2008). These concepts are related to cognitive flexibility theory, important to
complex, ill-structured problem solving where goals, constraints, and solutions are not pre-
defined (Feltovich, Spiro, & Coulson, 1994; Spiro & Jehng, 1990) and where learners must seek
multiple perspectives and make connections among concepts to develop a holistic understanding
of the complex problem and its parts (Feltovich, Spiro, & Coulson, 1993). Flexible thinking
helps learners avoid both the misconceptions that can arise from applying single, often simple,
concepts to problem solutions and reductive bias, the oversimplification of the problem or
concept itself (Feltovich, Spiro, & Coulson, 1993). Essentially, cognitive flexibility helps
learners construct schemata from different perspectives, representations, and mental models,
and in turn to apply those schemata adaptively to construct new knowledge, meanings, and
representations.
Generativity. Generativity is the creative capacity to generate new knowledge and skills
(Pérez & Coffin-Murray, 2010) and to extend innovations beyond the particular circumstances
involved in their original creation (Schwartz, Varma, & Martin, 2008). In generative learning,
students actively construct their own interpretations and inferences, integrating new and prior
concepts and relationships among them to make meaning (Wittrock, 2010). Required for
creating novel concepts and outcomes, generativity is the adaptive capacity to generate,
synthesize, implement, and improve ideas by fluidly identifying and using executive strategies
(e.g., controlling attention, refining initial ideas, managing interference by inhibiting non-novel
or inappropriate ideas or convergent processes leading to a single solution) (Holman et al., 2012;
Nusbaum & Silvia, 2011; Peters & Slotta, 2010). That capacity is important to the development
of adaptive expertise, the ability to be flexible and creative in transferring both STEM-and
problem-solving-related schemata to novel problems (DeHaan, 2009; Hatano & Ouro, 2003). In
terms of innovation, Silvia and Beaty (2012) argue that generating creative ideas requires fluid
and executive abilities in identifying and using strategies for idea generation (e.g., controlling
attention, refining initial ideas, dampening non-novel or inappropriate ideas). In their study,
people high in fluid intelligence produced more creative metaphors, as did those who took more
time. Nusbaum and Silvia (2011) argue that fluid intelligence has a large effect on creativity,
mediated by the number of times executive switching occurs in idea categories during divergent
thinking processes that generate many possibilities. In their study, learners high in fluid
intelligence performed at a higher level when given a strategy, since they could maintain access
to it and use it despite interference from common responses (i.e., from convergent processes
seeking one answer), even though executive processes may play a greater role in divergent
thinking than the structural, associative processes previously assumed in the research literature.
through SRL augment factual and conceptual knowledge, but learners with low SRL skills do not
acquire such increased knowledge (Lee, Lim, & Grabowski, 2009).
Personal Novelty. For novice learners, the key is not whether concepts are completely
original, but rather whether they are novel to the individual. Shah, Smith, and Vargas-Hernandez
(2003) call this personal novelty. That definition can be a good measure for novices on a path
toward expertise. While the ideas they express, the conceptual change they experience, or results
they discover may not be novel for experts or society at large, personal novelty records a change
in individuals’ awareness or insight on their learning pathways.
Dimension 2 Sub-variables. For this study, inquiry and innovation is defined as a
function of changes in the following elements, as summarized in Table 3, and described in
conceptual short-hand as:
II = f(ΔC, ΔF, ΔG, ΔPN)
Table 3:
Sub-Variables for Dimension 2: Inquiry & Innovation

C: Curiosity. The level of sophistication in asking exploratory and critical questions freely (e.g., Woolf, 2009)

F: Flexibility. Adaptability in making meaning through one’s own fluid and executive abilities and conceptual change (Hmelo-Silver & Eberbach, 2012; Woolf, 2009), including the capacities:
a. to allow ambiguity
b. to let go of old ideas
c. to take cognitive risks
(a-c: habits of mind per Bransford & Schwartz, 1999)
d. to consider alternatives (Schwartz, Varma, & Martin, 2008)

G: Generativity. Required for creating novel concepts and outcomes, the adaptive capacity:
a. to generate, synthesize, implement, and improve ideas by fluidly identifying and using executive strategies, e.g., controlling attention, refining initial ideas, managing interference by inhibiting non-novel or inappropriate ideas or convergent processes leading to a single solution (Holman et al., 2012; Nusbaum & Silvia, 2011; Peters & Slotta, 2010)
b. to modify or invent new knowledge and models depending on changes in setting, requirements, constraints, and other variables of the problem to be solved (an adaptive-expertise process per Hatano & Inagaki, 1986)

PN: Personal Novelty. The awareness and perception of changes in one’s own ideas or results (Shah, Smith, & Vargas-Hernandez, 2003)
Dimension 3: Motivation
Motivation is a process of goal-directed activity that learners begin and maintain
(Schunk, Pintrich, & Meece, 2008). Recent studies have shown how emotions play a significant
role in learning (e.g., D’Mello, Craig, & Graesser, 2009; D’Mello & Graesser, 2010; Moos &
Marroquin, 2010) and in STEM-related learning specifically (Calvo & D’Mello, 2010; Robison,
McQuiggan, & Lester, 2010). Authentic simulations and tasks motivate learners because they
focus on understanding and mastery (Adams et al., 2008a, b). Learning environments based on
project-, problem-, or design-based experiences have motivational advantages given that their
authentic, inquiry-based, technology-rich, and collaborative nature engages learners in deep
thinking, which leads to knowledge construction around domain-specific key ideas (Blumenfeld,
Kempler, & Krajcik, 2006). Other studies indicate that students also believe that technology
enhances their learning experiences (e.g., Lenhart, Madden, & Hitlin, 2005; Sheffield, 2007).
Boyer, Phillips, Wallis, Vouk, and Lester (2008) assert that motivation and affective
processes are as important to learning as cognitive aspects. Their study on computer-science-
based problem solving suggests that instruction that includes positive feedback and greater
autonomy for novice learners is associated with increases in both confidence and learning. Such
motivation is important, as completely restructuring prior knowledge to accommodate counter-
intuitive information in the process of conceptual change is hard work (Vosniadou, Ioannides,
Dimitrakopoulou, & Papadimitriou, 2001).
Güss, Tuason, and Gerhard (2010) concur that problem solving can pose motivation-
related issues, depending on three characteristics of a problem-based scenario: (a) complexity,
or the number of variables in the system, the number of relationships among them, and the
relative linearity or nonlinearity of those relationships, (b) dynamics, or activity in the system
that develops without the intervention of the user, and (c) non-transparency, or degree to which
variables in the system are non-visible, goals are ambiguous, and possible answers are multiple.
Perceptions of these factors may not be absolute, but rather depend on the novice or expert level
of the learner, and may accordingly influence their affective state (Güss et al., 2010). These
concerns relate to cognitive load as well.
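As a rough illustration, the three scenario characteristics above can be captured in a small data structure. This is only a hypothetical sketch: the field names, example values, and the heuristic method are assumptions for illustration, not part of Güss et al.’s (2010) framework.

```python
from dataclasses import dataclass

@dataclass
class ProblemScenario:
    num_variables: int         # complexity: how many variables the system has
    num_relationships: int     # complexity: relationships among those variables
    nonlinear: bool            # complexity: nonlinearity of the relationships
    autonomous_dynamics: bool  # dynamics: system changes without user action
    hidden_variables: int      # non-transparency: variables not visible to the user

    def is_ill_structured(self) -> bool:
        """Illustrative heuristic (an assumption): any hidden state or
        autonomous change marks the scenario as ill-structured."""
        return self.hidden_variables > 0 or self.autonomous_dynamics

# A made-up scenario with hidden variables and autonomous dynamics
scenario = ProblemScenario(8, 12, True, True, 3)
print(scenario.is_ill_structured())  # True
```

As the text notes, how complex, dynamic, or opaque such a scenario feels is not absolute; the same field values may be perceived differently by novices and experts.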
Mastery Orientation. Research shows that self-regulation and mastery orientations
enable learners to persist in challenging learning phases (Billing, 2007). A mastery goal
orientation is a focus on developing new knowledge and skills, improving competence, trying
challenging tasks, and gaining understanding and insight (Schunk, Pintrich, & Meece, 2008).
Research shows that a mastery goal orientation is associated with cognitive strategies important
to problem solving, such as deeper processing of content through elaboration and organizational
strategies, as well as metacognitive and self-regulation strategies such as checking for
understanding and monitoring comprehension (Schunk, Pintrich, & Meece, 2008). A mastery
goal orientation is also associated with affective and motivation-related behaviors and attitudes,
including task value beliefs, persistence, interest, and enjoyment (Schunk, Pintrich, & Meece,
2008). Ways to foster mastery behaviors include setting learning goals, attributing outcomes to
effort and strategy use, and teaching self-regulatory practices (Schunk, Pintrich, & Meece, 2008).
In traditional school environments, students are more accustomed to goal-oriented
activities with one clear, best answer, and often tie their performance and sense of wellbeing to
knowing it (e.g., Jonassen, 2010b). As learners respond to numerous cognitive and
metacognitive prompts intended to support their learning progress, they may experience
learning-inhibiting negative emotions, particularly when the feedback implies they have not
succeeded in mastering a particular aspect. In addition, although CBLEs are designed to
facilitate learners’ understanding of complex instructional material, the overwhelming amount of
multi-representational material can interfere with learners’ self-regulation abilities (Azevedo &
Strain, 2011). Negative emotions associated with this complex processing can be particularly
deleterious if initiated and sustained. For instance, in a CBLE with an Intelligent Tutoring
System (ITS), D’Mello, Craig, and Graesser (2009) found that once learners experienced an
emotional state in one phase of the experience, it was likely to persist.
Emotional Awareness. Given that learners face a great amount of information out of which
they must construct their own learning, affective and motivational processes need monitoring in
addition to cognitive and metacognitive processes (Azevedo & Strain, 2011). Otto and Lanterman
(2006), for example, reported that learners’ ability to detect their emotions and express them
resulted in higher performance in problem solving. Using non-invasive software in their
adaptive experience, Azevedo and Strain (2011) monitored low- and high-achieving learners’
affective progressions during science learning. High performers experienced significantly less
sadness than low performers, though the researchers cautioned that further study is necessary.
Other research suggests that simulations and games can motivate students in STEM and in
STEM-based problem solving (e.g., Adams et al., 2008a, b). Wilson et al. (2009) suggest that
motivation positively relates to a mystery narrative, where learners question or solve the
unknown, something well suited to STEM-based problem solving.
Risk Taking. In complex, ill-structured problem solving, the ability to take risks without
negative implications is also particularly important. Learners must engage in an iterative process
of trying different hypotheses and methods, learning from failing as well as from succeeding.
Bransford et al. (2010) note that learning can be emotionally charged when students must leave
their comfort zones, let go of current beliefs and ways of doing things, be wrong, and even fail.
Maintaining positive emotions while taking risks is key. For example, in a study of young
children’s scientific understanding of floating and sinking, Hsin and Wu (2011) used affective
scaffolds either to control frustration (e.g., verbal prompts making it okay to explore ideas
without worrying about failure or being wrong) or to maintain interest (e.g., verbal prompts
related to encouraging self-efficacy and showing enthusiasm for successful outcomes). In task-
and goal-oriented environments, computer-based assessments that include affective student
modeling can offer important supports in managing student anxiety, frustration, or boredom
(McQuiggan, Lee, & Lester, 2007), particularly if strategic pedagogical measures can be taken to
remove the extraneous load of those emotions. Setting realistic expectations (Mayer, 2011) in
the construction of the part- and whole-task progressions is central to that. In that regard, an
important note is that teachers often include emotional scaffolding to inspire learners’
imaginations and to create an emotional response to the domain (Rosiek, 2003; Rosiek &
Beghetto, 2009). They do not see a dichotomous relationship between cognitive and affective
domains, but rather view them as integrated factors in learning processes (Rosiek & Atkinson,
2005). However, little research relates to affective scaffolds relative to cognitive and
metacognitive supports (Azevedo & Strain, 2011; Brophy, 2008), even though they can be
particularly important for marginalized students, who may need tailored supports that separate
them from habituated ideas, invite them to make new relationships to the content of the
curriculum, and encourage them to take risks (Rosiek & Beghetto, 2009).
Self Concept, Self Efficacy, Interest, Attitude, and Persistence. While separate
measures for each of these sub-variables are in this study, they are often interconnected (e.g.,
Lent, Brown, & Larkin, 1984). In the context of adaptive expertise and epistemic beliefs, self
concept is a multidimensional construct related to learners’ perceptions of their ability to be or
do well (e.g., Wilkins, 2011) in STEM fields and/or problem solving. It relates to the connection
between learners’ identities and attainment/expectancy values in STEM (e.g., Eccles, 2009;
Sáinz & Eccles, 2012) and career paths (e.g., Tai, Liu, Maltese, & Falcon, 2006), and often is
influenced by gender differences (e.g., Tai et al., 2006). Self efficacy relates to learners’
perceived capabilities to learn and to perform actions at a given level of proficiency according to
specific goals (Schunk, Pintrich, & Meece, 2008), in this case in STEM-based, complex, ill-structured problem solving. In a reciprocal relationship, self-efficacy influences motivation-related behaviors such as choice, persistence, and effort in tasks, which in turn re-influence
self-efficacy (Schunk, Pintrich, & Meece, 2008). In this sense, self-efficacy can be a motor for the
emergence of learning and performance gains, and is closely related to students’ STEM success
(e.g., Britner & Pajares, 2006; Chen & Pajares, 2010; Zeldin, Britner, & Pajares, 2008). Interest
and attitude concern learners’ willful engagement (Schraw & Lehman, 2001), whereas
persistence is the capacity to endure in the face of difficulty, a habit of mind per Bransford and
Schwartz (1999) and a motivation-related behavior per Schunk, Pintrich, and Meece (2008).
From a longitudinal perspective, STEM-career expectations among middle-school students often
predict persistence in these fields (e.g., Tai, Liu, Maltese, & Falcon, 2006). However, at an
aggregate level, U.S. students demonstrate a lack of interest and motivation that is otherwise
critical to persistence in mastering complex concepts (NRC, 2005, 2007, 2011a). Thus, these
motivation-related states and behaviors are important not only to adaptive expertise, but also to
national goals.
Dimension 3 Sub-variables. For this study, motivation is defined as a function of
changes in the following elements, as detailed in Table 4 and described in conceptual shorthand
as:
M = f(ΔMO, ΔEA, ΔRT, ΔSC, ΔSE, ΔIA, ΔP)
Table 4:
Sub-Variables for Dimension 3: Motivation
Sub IV | Definition/Indicators | Reference
MO (Mastery Orientation) | Focus on developing new knowledge & skills, improving competence, trying challenging tasks, and gaining understanding and insight | (Schunk, Pintrich, & Meece, 2008)
EA (Emotional Awareness) | Metacognitive ability to detect one's own emotions and express them | (e.g., Otto & Lanterman, 2006)
RT (Risk Taking) | Ability to leave one's comfort zone, let go of current beliefs and ways of doing things, be wrong, and even fail | (e.g., Bransford et al., 2010)
SC (Self Concept) | Perception of one's ability to be in, or do well in, STEM fields or problem solving | (e.g., Wilkins, 2011)
SE (Self Efficacy) | Perception of capability to learn/perform at a given level of proficiency, often according to specific goals | (Schunk, Pintrich, & Meece, 2008)
IA (Interest/Attitude) | Willful engagement | (Schraw & Lehman, 2001)
P (Persistence) | The capacity to persist in the face of difficulty; a habit of mind and a motivation-related behavior | (Bransford & Schwartz, 1999; Schunk, Pintrich, & Meece, 2008)
Synthesis of Independent Variables
While many learner characteristics and capacities influence learners' preparation for future
learning and their trajectories toward increased levels of adaptive expertise, this study selects
some key variables as indicated in the literature. For the simple model used in this study,
independent variables and sub-variables are abbreviated in conceptual shorthand as follows:
Level of AE = f(ΔPFL) = f(ΔKS, ΔII, ΔM)
where
ΔKS = f(ΔPSK, ΔPSS, ΔSTEM)
ΔII = f(ΔC, ΔF, ΔG, ΔPN)
ΔM = f(ΔMO, ΔEA, ΔRT, ΔSC, ΔSE, ΔIA, ΔP)
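Read computationally, the nested model above composes change scores: each dimension aggregates changes in its sub-variables, and the overall level of adaptive expertise aggregates the three dimensions. The study leaves the functional form of f unspecified; the sketch below is illustrative only, assuming (purely for demonstration) an unweighted mean as the aggregation function, with hypothetical change scores for one learner:

```python
# Illustrative sketch only: the study does not define the functional form of f.
# An unweighted mean is assumed here purely for demonstration.

def f(deltas):
    """Aggregate a collection of change scores (here: a simple mean)."""
    deltas = list(deltas)
    return sum(deltas) / len(deltas)

def level_of_ae(ks, ii, m):
    """Level of AE = f(ΔPFL) = f(ΔKS, ΔII, ΔM).

    ks: knowledge & skills changes   -> f(ΔPSK, ΔPSS, ΔSTEM)
    ii: inquiry & innovation changes -> f(ΔC, ΔF, ΔG, ΔPN)
    m:  motivation changes           -> f(ΔMO, ΔEA, ΔRT, ΔSC, ΔSE, ΔIA, ΔP)
    Each argument is a dict mapping sub-variable names to change scores.
    """
    return f([f(ks.values()), f(ii.values()), f(m.values())])

# Hypothetical change scores for one learner across assessment points:
ae = level_of_ae(
    ks={"PSK": 0.4, "PSS": 0.2, "STEM": 0.3},
    ii={"C": 0.1, "F": 0.2, "G": 0.0, "PN": 0.1},
    m={"MO": 0.3, "EA": 0.1, "RT": 0.2, "SC": 0.2,
       "SE": 0.3, "IA": 0.1, "P": 0.4},
)
```

In practice, the aggregation would be criterion-referenced against the rubrics described later, rather than a raw mean; the point here is only the nested, compositional structure of the model.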
While for descriptive purposes the sub-variables are essentially separated into the three main
dimensions, or strands, of PFL (i.e., into three larger independent variables), the sub-variables
likely interact and influence not only one another (both within and among variables), but the
learners' overall performance as well.
Research Relating the Dependent Variable to the Independent Variables
Where the dependent and independent variables come together is in the measurement and
assessment of relative levels of adaptive expertise. In alignment with the research question, the
key is to optimize instruction to support PFL in problem solving and to make developing PFL
observable in assessments (Mylopoulos, 2012). Novice-expert studies related to differences in
knowledge structures and learners’ relative efficiency and effectiveness of their application are
plentiful (e.g., Ge & Hardré, 2010; Hsu, Lin, Wu, Lee, & Hwang, 2012), as are studies on
increasing ill-structured problem solving capacities (e.g., Bottge, Rueda, Kwon, Grant, &
LaRoque, 2009; Jonassen, 2007; Mayer & Wittrock, 2006). However, most studies do not
consider specific measurements of developing strands of relative adaptive expertise as part of an
instructional design, particularly at the micro-development level, which Fischer (2008) describes
as skill changes within a school learning period, ranging from minutes to weeks.
Prior studies across learning domains show a general cyclical learning pattern, in which
novices’ patterns are initially chaotic when their understanding is low, then stabilize into a
repeating, scalloped pattern of growing and collapsing understandings that build from actions to
representations to abstractions over time, but can also collapse with changes in situations
(Fischer, 2008), something that is inherent in ill-structured problems requiring adaptive
expertise. In fact, one argument is that it is the very human ability to drop to low cognitive
levels in learning a new task that allows flexibility and adaptation (Fischer, 2008) so key to
STEM-based, complex, ill-structured problem solving. A few recent studies focus on adaptive
expertise, sometimes with STEM-related content, and typically some but not all elements of the
proposed three dimensions of knowledge and skills, inquiry and innovation, and motivation and
their development through the fourth dimension of time.
Adaptive Expertise and Knowledge and Skills
Because studies are currently few, the lack of a critical mass makes it difficult to identify
common threads among the disparate research targets related to adaptive expertise. They include (a) the
developmental timing of focusing on strategic knowledge vs. domain-specific knowledge for
high-school and college students, (b) the effective use of technology in elevating adaptive
expertise among teenagers, (c) instructional designs related to the effectiveness of scaffolded,
case-based scenarios and the inadequacy of curricula for supporting the development of relevant
information-handling skills, and (d) the possibility of exploring an alternate STEM field to
increase learning in another.
Rayne, Martin, Brophy, Kemp, Hart, and Diller (2006) studied differences in high-school
and college learners’ learning trajectories in adaptive expertise in biomedical engineering ethics,
analyzing learners’ factual content knowledge and adaptive-expertise analysis (but not inquiry,
innovation, or motivation). Comparisons of students at different education levels led to a
conclusion that it is more effective to introduce novice students to strategies supporting adaptive
expertise (e.g., continually assessing information and processes) before they have acquired
deeper content knowledge and potentially routine-expertise behaviors (Rayne et al., 2006). This
potentially suggests a weighting of strategic knowledge for novices, with lighter emphasis on
STEM content in earlier stages, which also might have implications for reducing cognitive load
as well.
With sixteen teenagers on a field trip mimicking real-world geology and ecology studies,
O'Mahony and Baer (2008) designed a study of changes in an ecosystem in which networked
graphing calculators enabled the creation and sharing of mathematical representations, as part
of PFL supporting the development of both adaptive and distributed expertise. Those
using the calculators performed better in procedural and conceptual knowledge and abilities to
connect concepts and to make inferences, with the lowest achieving students making the highest
gains. As part of their ethnographic study, they also observed (a) high levels of interest and
engagement, (b) a shift over time from nervous novice to more adaptive expert roles, (c) greater
risk taking in proposing ideas that might be incorrect, (d) conceptual change as their inquiry
process gradually eliminated preconceived ideas, and (e) greater metacognitive awareness.
While the study was more comprehensive in addressing aspects of the three dimensions, none of these
outcomes were quantified as knowledge gains, and micro-developments were not articulated.
To provide teachers with an instructional design, Choi and Lee (2009) developed a case-based instructional model by integrating ill-structured-problem-solving scaffolds into an
adaptation of Jonassen's (1999) constructivist learning environment. They looked at each stage
of case-based problem solving in their model, and diagnosed how the learning activities and
resources at each stage affected student problem-solving performance. Then, they asked whether
the learning experience generated transferrable skills. They concluded that cases provide an
anchor for knowledge construction and that repeated cycles of providing and removing scaffolds
assist learners in internalizing problem-solving processes and taking on more responsibility over
successive problems and problem-solving phases. In their study, students struggled with their
critical evaluations of multiple perspectives. Also looking at instructional design, Hellen (2011)
reviews curricula related to information handling, finding that current ways of teaching students
to analyze information need to change as they do not support the development of adaptive
expertise and higher order skills needed to evaluate data critically.
Finally, Bell, Horton, Blashki, and Seidel (2010) make a case that including climate
change in medical education may help develop adaptive expertise, but do not run a formal study.
While untested, this idea is intriguing in relation to Rayne et al.’s (2006) finding that too much
domain-specific knowledge stifles adaptive expertise. That is, for already knowledgeable
experts in other STEM fields, learning adaptive-expertise strategies in the context of an
unfamiliar field where they are content novices might be effective in preventing routine thinking.
Along with best practices in instructional design supporting adaptive expertise (e.g., critical
evaluation of multiple perspectives per Choi & Lee, 2009; information handling and critical
analysis of data per Hellen, 2011), common frameworks with multi-domain problem-solving
examples such as Choi and Lee's (2009) case studies (or potentially a future, modified version of
the one in this study) might usefully provide easy access to novel, complex,
ill-structured problems that can serve as worked examples as part of PFL, not for domain-specific knowledge construction, but for constructing adaptive-expertise KSAs.
Adaptive Expertise and Inquiry and Innovation
Two studies that include inquiry and innovation in relation to adaptive expertise focus on
differences in problem-solving strategies between novice and more expert learners. A
commonality appears to be a need to provide learners of all levels with opportunities to develop
greater comfort and ease in facing novel problems with equally innovative, generative ideas.
One notes that it is important to remove a commonly held perception that expertise is deep,
automated, and routine (i.e., traditional measures of efficiency as one component of expertise)
rather than flexible and creative. The other indicates that innovation focused on inventing
cognitive tools for later problem solving is a key generative capability. While not stated as such,
both suggest that the ways in which novice and more expert learners differently approach a
problem at the beginning may serve as relative indicators of where students are on their
developmental trajectory toward adaptive expertise.
In non-routine, novel problem solving, Mylopoulos, Regehr, and Ginsburg (2011) found
that undergraduate medical students and residents perceived expertise more as routine expertise
that develops with accumulated practice than as adaptive expertise with its more reflective,
flexible, and innovative aspects. Those with more years of training understood greater
complexity in routine practice and an awareness of the need to develop an approach for non-
routine problems, including building their own comfort with novelty, iterative processes rather
than single solutions, and avoiding narrowing problems to fit within known-to-them solutions
(i.e., existing mental models or diagnosis-related schemata). However, learners saw these
strategies largely as a way to routinize novel problems rather than be adaptive, even though
having novel problems to solve was simultaneously seen as motivationally interesting. From a
cognitive load perspective, participants saw routinizing basic elements as a way of freeing their
cognitive capacity to understand situational complexities. Overall, they had a metacognitive
awareness of the importance of balancing the efficiency of routine practices with innovation
necessary to non-routine practices, but could be better supported with a curriculum that does not
ignore adaptive expertise in favor of acquiring medical knowledge alone. These findings closely
relate to those of Rayne et al. (2006) in that more knowledgeable learners in both studies
struggled with letting go of their interest and comfort in routine thinking.
Martin and Schwartz (2009) studied learners’ adaptive capacities, in particular their
reactions to difficulty and need (fault-driven adaptations) and proactive reformulations of
complex information (prospective adaptations). In the face of a problem, more novice
undergraduates created visual representations reactively when their initial ideas did not solve
things, while graduate students proactively took the time to make representations first. That is,
novices did not innovate or inquire in new ways until their results didn’t work, while more expert
learners began with greater initiative in invention. Their study suggests that it is effective when
learners first step away from the task at hand to create representations as tools, which later
support the successful accomplishment of the task, particularly when the representations solve a
class of problems rather than individual, case-by-case problems, and when memory burdens (i.e.,
cognitive load) would otherwise be too high. Having different representational tools (e.g., Venn
diagrams, charts etc.) and knowing the pros and cons of their use can, Martin and Schwartz
suggest, transfer to novel problems given learners are prepared for changing conditions. With
greater expertise, not only did the graduate students demonstrate similarity-based transfer, in
which they applied their representations to similar problems, but they also demonstrated dynamic
transfer, in which they modified their representations to better match new contexts. That is, they
demonstrated two generative types of innovation, a key component of adaptive expertise, with
dynamic transfer reflecting perhaps a higher Skill level than similarity-based transfer in a
yet-to-be-articulated developmental ruler per Dynamic Skill theory.
Adaptive Expertise and Motivation
As noted previously, motivation is an under-addressed dimension of adaptive expertise
(e.g., Boyer, Phillips, Wallis, Vouk, & Lester, 2008) and does not appear as a component of
Bransford et al.’s (2010) model, even though it is acknowledged as central to pursuing solutions
in, and enduring difficulties associated with, STEM-based, ill-structured problems (e.g., Güss,
Tuason, & Gerhard, 2010; Vosniadou, Ioannides, Dimitrakopoulou, & Papadimitriou, 2001).
While Martin and Schwartz’s (2009) study above only touched on the motivating effects of novel
problems, Vanasupa, Stolk, and Harding (2010) use a situated, constructivist model in
engineering to inspire motivation and the beginnings of adaptive expertise. They focus on
learners’ engagement in self-determined and self-regulated learning, the development of mastery
in understanding a given learning scenario, and intrinsic motivation (including autonomy,
interest, relevance, and value, modified by self efficacy and social relatedness). Thus, this study
addresses a great number of motivation-related states, traits, and behaviors, making some
meaningful connections to metacognitive knowledge developments. However, the study did not
additionally measure the development of engineering knowledge and skills or inquiry and
innovation, nor their separate or combined relationship to observed levels of intrinsic motivation.
Still vital in filling a research gap in relation to adaptive expertise, this study, in line with the
findings of many studies related to motivation in general, concludes that differences in students'
intrinsic motivation depend on situational/ecological factors related to self-determination and
self-regulation (e.g., having choice in problem-solving tasks based on interest and perceived
value; having a supportive learning environment that accepts failure while encouraging
adjustments in support of mastery development rather than grade-based performance).
Synthesis of Gaps in the Literature
Scholarly research is lacking on characterizations of learners’ developmental pathways
toward adaptive expertise in complex, ill-structured problem solving over time. Studies largely
focus on needed cognitive capacities, center on metacognitive assessments to a lesser extent, and
pay relatively little attention to affective capacities associated with motivation-related behaviors
key to open-ended problem solving (e.g., choice, level of effort, and persistence). These
dimensions and their transfer through time are not collectively considered in the literature. Much
research in adaptive expertise is focused at the higher education level, with fewer studies on
adolescent trajectories toward adaptive expertise, and a lack of focus on novices of that age who
are underserved and underrepresented in STEM fields.
More research is needed on embedded assessments of learner mental models at specific
stages of learning progressions in STEM-related, ill-structured problem solving, particularly for
correcting misconceptions or mistaken problem-solving paths in real time (Eseryel, Ge,
Ifenthaler, & Law, 2011) and for creating developmental rulers for assessing
microdevelopments, diagnosing problems such as misconceptions, establishing Skill levels in
adaptive expertise, and designing scaffolds to support optimal development (Fischer, 2008;
Fischer, van Geert, Molenaar, Lerner, & Newell, 2014). Related research gaps also exist in
scaffolded environments supporting both the development and measurement of adaptive
expertise. Computer-based learning environments can potentially serve as situated learning
experiences that mimic authentic, real-world, ill-structured problem solving in STEM domains
(e.g., Dede, 2009; Dunleavy & Simmons, 2011; Ketelhut, Nelson, Clarke, & Dede, 2010). While
research shows growing evidence that simulations can advance learners' conceptual
understanding of a given STEM topic, it is not clear to what extent they support other
learning goals or motivation, and only limited and inconclusive research results exist on the
effectiveness of games (NRC, 2011b). Commercial CBLE designers are not typically cognizant
of learning theories and thus do not embed them, and research on serious games rarely
documents (a) learning goals, (b) theories on how students progressively learn in attaining those
goals, and (c) what measures assess progress (NRC, 2011b). Research often lacks control groups,
does not isolate the unique effects of the simulation or game from the contents of the curriculum,
and lacks common methods of analyzing effectiveness and common terminology (NRC,
2011b). Little evidence exists either on the effectiveness of technology-enhanced, problem-
solving frameworks or on the effectiveness of computer-based instructional strategies and
scaffolds tailored for the dynamics of, and adoptable in, real-world classrooms (Choi & Lee,
2009; Eseryel, Ge, Ifenthaler, & Law, 2011; Hannafin & Kim, 2003; Kim & Hannafin, 2011;
Kim, Hannafin, & Bryan, 2007; Schrader, Lawless, & McCreery, 2009), particularly those that
are high-need in character.
According to Billing (2007), research on transfer mainly focuses on acquired domain-
specific knowledge rather than on skills for learning how to acquire that knowledge. Choi and
Lee (2009) remark that teachers struggle in creating a good learning environment as few
conceptual or research-based instructional design models or specific instructional strategies and
scaffolds for ill-structured problem solving exist. While CBLEs hold promise, per Schwartz and
Martin (2004), preparation for future learning and adaptive expertise depends on new
instructional methods and assessments. Addressing a lack of research-based instructional
strategies, Azevedo and Jacobson (2008) warn that learners must have well-designed scaffolds
when asked to face the challenges of creating conceptual representations from multiple kinds of
information in hypermedia learning environments. They critique the design of many CBLEs,
noting that they have been designed more on intuition than on empirical research, and do not
provide sufficient scaffolding. Based on mixed results on cognitive tool use, Liu et al. (2009)
also suggest more research is needed on strategies that enable learners' more productive use of
available cognitive tools for problem solving. They recommend that if learners become
knowledgeable about which cognitive tools to use, and how, when, and why, they can be more
strategic in their decision-making and self-regulation through the problem-solving process.
Moos and Azevedo (2008) concur, recommending conceptual scaffolds (e.g., structured
interactive overviews, guiding questions, etc.) in CBLEs that enable learners to face difficulties
in learning complex topics such as understanding conceptual inter-relationships and thus the
organization of a domain. While acknowledging that learners need support in authentic problem-
solving processes, Belland (2011) points out that developing scaffolds for ill-structured problems
can be difficult given that multiple, equally legitimate answers are possible. Thus, scaffolding
designs that typically address student difficulties when they diverge from correct answers along
the problem-solving path are difficult to pre-construct, and often do not apply as well.
Conclusion
An overview of theoretical foundations included concepts and theories such as
preparation for future learning, adaptive expertise, constructivism (including cognitive-
flexibility, conceptual-change, situated-learning theories), and dynamic skill theory, all
influenced by the constraints of cognitive load theory. These discussions guide the design
of the CBLE, grounding it in learning theory. Limited prior research on adaptive expertise (the
dependent variable) and its measurement led to a proposed model of adaptive expertise for use in
this study: Level of AE = f(ΔPFL) = f(ΔKS, ΔII, ΔM). Further research-based discussions on
each of the independent variables, their sub-variables, and their relationship to adaptive expertise
in STEM-based, complex, ill-structured problem solving informed the way in which learner
patterns in each variable, as influences on relative and evolving levels of adaptive expertise,
would be assessed. As a synthesis, what unites adaptive expertise and the independent variables on which
it depends is prior, if limited, research on assessing developments of adaptive expertise in the
context of a CBLE or related technology tools. This research and related gaps in the literature
suggest a need for assessing student learning patterns to characterize preparation for future
learning through time, and through it, adaptive expertise.
CHAPTER THREE: METHODOLOGY
Adaptive expertise (Hatano & Inagaki, 1986) in STEM-based, complex, ill-structured
problem solving is key to learning and success in a 21st-century, knowledge-based economy. To
ensure learners, including teachers at high-need schools, can meet and contribute to future
challenges, a key assessment target is preparation for future learning (PFL), as demonstrated by
changes in their problem-solving schemata and the flexible application thereof. PFL and relative
adaptive expertise can be characterized by the sophistication of what learners notice about the
problem, how they gather and interpret information, and how they apply the information or past
problem-solving strategies when confronting new problems (Bransford & Schwartz, 1999). The
proposed model is that learners’ adaptive-expertise level is a function of PFL, which is in turn a
function of changes in knowledge and skills, inquiry and innovation, and motivation through
time. That is, AE level = f(ΔPFL) = f(ΔKS, ΔII, ΔM).
Using a scaffolded, epistemically situated CBLE and assessed through evidence-centered
design (ECD), the purpose of this exploratory study was to provide a view of patterns in
teachers’ PFL, including the development of potentially transferrable problem-solving schemata
and relevant cognitive, metacognitive, and motivation-related KSAs critical to adaptive expertise
in complex, ill-structured problem solving. These evolving patterns might help provide the
beginnings of developmental rulers (e.g., Fischer, 2008) for the emergence of adaptive expertise
in complex, STEM-based, ill-structured problem solving. The study used a mixed-method
approach, following Creswell's (2009) endorsement of the dual benefits of this inquiry
approach in which quantitative analysis allows the statistical depiction of relationships among
variables, while qualitative analysis supports more descriptive depictions of relationships as
understood by participants.
Sample and Population
Purposive sampling allowed the selection of teachers who represented the targeted
demographic: (a) little background in learning or teaching STEM-based, ill-structured problem
solving, and (b) teaching underserved middle- and high-school students. Given the small sample
and absence of a comparison group, findings were suggestive, limited to the particular site, and
understood in advance as not generalizable given the exploratory nature of this pilot study.
Description of Site and Participants
The site was a computer-learning classroom at the participating student teachers'
university. Participants included twelve learners from an urban, college-level teaching degree
program accredited by the Western Association of Schools and Colleges (WASC) in the United
States. Seven students were female and five students were male. Five self-identified as White,
three as Asian, two as Latino, one as Black, and one as Asian/Pacific Islander. In terms of age
ranges, three were 18-24, seven were 25-34, and two were 35-44. Seven had only student-teaching experience as pre-service teachers, averaging 1.79 years. Five had an average of
4.8 years of teaching experience. Their program of study had coursework related neither to problem-solving
pedagogies nor to STEM topics, even though State teacher standards relevant to their
credentialing included knowing ways of engaging learners in (a) critical thinking, creativity,
collaborative problem solving related to real-world problems, (b) analyzing complexities through
project-based learning, (c) questioning and challenging assumptions and approaches to foster
innovation and problem solving, (d) generating and evaluating new ideas and novel approaches
and solutions to problems, and (e) developing original work. In response to a verbal question by
their professor, none were familiar with 21st-century skills.
Ten participants taught at public schools, and two at private. Seven taught high school,
and five taught middle school. Table 5 reflects the demographics of the school environments
where the participants taught. Note that negative numbers in the proficiency columns (2 and 3)
indicate poorer performance, but that negative numbers in the underserved student columns (4
and 5) indicate fewer high-need students. That is, being higher than the state average
in proficiencies is a positive sign of achievement for the schools, but being higher than the state
average in numbers of underserved students or than the state average of those receiving lunch
support is a potential indicator of socioeconomic stress.
Table 5:
Characteristics of Participants' Schools, per greatschools.net

School | Math Proficiency (state mean: HS 45%, MS 59%) | Reading Proficiency (state mean: HS 69%, MS 72%) | Minority Students (state mean: 49%) | % Free/Reduced Lunch (state mean: 47%)
HS 1* | 33% (-12%) | 57% (-12%) | 81% (+32%) | 63% (+16%)
HS 2* | 39% (-4%) | 76% (+4%) | 44% (-3%) | 47% (=)
HS 3 | 58% (+13%) | 69% (=) | 38% (-11%) | 58% (+11%)
HS 4 | 49% (+3%) | 64% (-5%) | 73% (+24%) | 52% (+5%)
HS 5 | 37% (-8%) | 65% (-4%) | 50% (+1%) | 50% (+3%)
MS 1 | 48% (-11%) | 71% (-1%) | 68% (+19%) | 55% (+8%)
MS 2* | 50% (-9%) | 53% (-19%) | 56% (+7%) | 80% (+33%)
MS 3 | Private school; no data available | Private school; no data available | 63% (+14%) | Private school; no data available
MS 4 | Private school; no data available | Private school; no data available | 60% (+11%) | Private school; no data available
*2 participants taught at these schools
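The parenthetical deltas in Table 5 are each school's reported value minus the applicable state mean (the high-school or middle-school mean for the proficiency columns, a single state mean for the other two). A minimal sketch of that arithmetic, with the helper name `delta` introduced here only for illustration:

```python
# State means from the Table 5 column headers.
# Keys are (column, school level); columns without HS/MS splits use level None.
STATE_MEANS = {
    ("math", "HS"): 45, ("math", "MS"): 59,
    ("reading", "HS"): 69, ("reading", "MS"): 72,
    ("minority", None): 49, ("lunch", None): 47,
}

def delta(value, column, level=None):
    """Signed percentage-point difference from the relevant state mean."""
    return value - STATE_MEANS[(column, level)]

# Examples from Table 5:
hs1_math = delta(33, "math", "HS")    # HS 1 math: 33 - 45 = -12
hs3_minority = delta(38, "minority")  # HS 3 minority: 38 - 49 = -11
ms2_lunch = delta(80, "lunch")        # MS 2 lunch: 80 - 47 = +33
```

Not every printed delta in the table reproduces exactly under this arithmetic (e.g., the HS 2 row), which may reflect rounding or transcription in the source data; the printed values are retained as reported.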
In short, three high schools were below the state average in math and reading; the two
high schools performing above the state mean still had significant numbers of students
performing below desired proficiencies. That is, the highest-performing high school in math had
only 58% of its students performing at or above proficiency. Both middle schools providing
data were below the state average in math and reading. All schools reported greater proficiencies
in reading than in math. Math is a proxy for STEM, as science, technology, and engineering are
not yet evaluated and/or reported. High schools had minority enrollments of at least roughly
one-third, and middle schools had about half minority students on average. All public schools had
around half or more of their students receiving free or reduced-price lunch.
Instrumentation
Instrumentation Description
This study used an online platform for inquiry-based instruction created by adapting
open-source tools for scaffolding and assessing learner understandings (e.g., question prompts, a
drawing tool used to show conceptual relationships in learners’ representational models, a
scoring system for responses etc.). Modifications to the design of the overall structure and
content supported measures of novice outputs in the three dimensions through time, serving as
components of PFL, and were largely criterion-referenced relative to Next Generation Science
Standards and expert models to assess relative mastery in a given area. A number of graphical
icons were included to help learners attend to key conceptual aspects of adaptive expertise in ill-
structured problem solving to support their development of mental models for it. To ensure
learners acted as closely as possible to the way in which domain experts would, data collection
was embedded as “stealthily” as possible (Shute & Spector, 2008; Shute, Ventura, Bauer, &
Zapata-Rivera, 2008) in the expert-like problem-solving steps that were designed to capture key
cognitive, metacognitive, and affective KSAs for the three primary dimensions of preparation for
future learning (i.e., the independent variables).
The CBLE included two ill-structured problem-solving scenarios, both selected because
they had been in use for over a decade as hands-on, science-classroom activities for middle-
school students and had been improved through successive project-level education evaluations. Selecting
middle-school standards helped ensure cognitive load for participants was manageable. To
establish clear performance goals based on learning theory, both instructional designs had strong
alignment among learning objectives (knowledge and cognitive process), the content and
procedures of activities in the subtasks, and rubrics for assessing learner outcomes, as
recommended by Anderson and Krathwohl (2001), as well as alignment with referenced Next
Generation Science Standards (NGSS Lead States, 2013) and its guiding Framework for K-12
Science Education (NRC, 2011a). Both lessons relied on Miller, Linn, and Gronlund's (2009)
methods for (a) creating instructional objectives with specific, supporting, measurable learning
outcomes to ensure objectives are met, and (b) assessing learning outcomes through rubrics and
other measures. Rubrics and scales to support scoring drew on Lantz’s (2004) guide for
assessing science achievement. Thus, this study followed research recommending that CBLEs
have clear and well-defined learning goals (Clark et al., 2010; Clark, Mayer, & Thalheimer,
2003; Parnafes, 2007; Plass et al., 2009) and align learning goals with the assessment design
(Pellegrino & Quellmalz, 2010). These pre-existing attributes allowed an easier construction of
an evidence-centered design (ECD), discussed in greater detail below.
In both scenarios, the original standards-aligned content focused primarily on domain-
specific factual, conceptual, and procedural knowledge tailored to each problem, rather than on
emergent, inquiry-based, crosscutting knowledge structures evaluated to support PFL and
adaptive expertise in STEM. Thus, where relevant per the literature review, KSAs and
instructional materials were modified to include strategic, metacognitive, and motivation-related
goals, including representations of domain-general problem-solving schemata.
Scenario 1. The first problem-solving scenario was a geology-based challenge that
asked students to solve a mystery and infer what geologic processes created landforms seen in
their chosen orbital images. Learners explored standards-aligned concepts related to the rock
cycle, constructive (e.g., volcanism) and destructive (e.g., erosion) forces, and domain-specific
rules and processes such as the law of superposition (i.e., whatever feature lies on top of another
occurred later) and relative age dating. From this exploration, learners then had to create
hypotheses that plausibly explained their observations, elaborate on their findings, and then
evaluate the strength of those hypotheses using their own data as evidence.
Results were evaluated based on both product- and process-based learner outputs and similarities
among them to expert models, along with relevant self-regulatory and motivation-related
measures, per an overall evidence-centered design model discussed below and detailed in
Appendices A-E. Finally, they evaluated their solution, their learning and scientific-inquiry
process, and their perceived ability to solve another science problem of the same type.
Scenario 2. The second problem-solving scenario was a design-based, engineering-style
challenge in which learners began by activating their prior knowledge about sustainability and
technology. This activity included multiple content standards (e.g., science and technology in
society, principles of technological design, Earth and space sciences etc.). They set goals for
designing a sustainable technology, identifying criteria and constraints for their design. They
then created a more detailed model that aligned with their goal and self-determined criteria and
constraints. Then, learners had to adjust their technological design to work on another planet. In
the process, they had to compare conditions on that world with those on Earth (e.g., changes in
gravity, atmospheric composition and pressure, radiation, environmental resources, and various
Earth-science-related concepts). They also had to modify their technological designs based on
these new environmental factors and then to justify the effectiveness of their solution using
evidence. Finally, they evaluated their design, their learning and problem-solving process, and
their perceived ability to solve another design-based technology problem of the same type.
Participants in the study also created pre/post concept maps of their STEM-based
problem-solving processes in order to capture and make explicit their evolving problem-solving
schemata. These maps provided further documentation of within-CBLE learner responses,
helping reinforce proficiency claims per the ECD assessment model.
Conceptual Framework for Instrumentation
The design of an epistemic learning environment involves complex considerations of
alignment between the instructional design and the assessment design (e.g., principled
assessment design, data collection and analysis, score reporting, and formative feedback) (Rupp,
Gushta, Mislevy, & Schaffer, 2010). In terms of instructional design, both activities already
followed a STEM-based, constructivist 5E model (Bybee et al., 2006), making them relatively
straightforward to translate into the proposed scaffolded CBLE framework and its research-
based, expert-like problem-solving steps, per Figure 3 in Appendix A. Both activities aligned
instructional objectives, NGSS standards, knowledge types, scaffolded activity formats, learning
outcomes, and assessment rubrics per Anderson and Krathwohl’s (2001) taxonomy and, where
possible, tied to a dynamic-skills-theory-related scale modeled after Yan and Fischer (2007).
Modifications to this scale largely tied progressive skill levels to progressive standards across
grade levels. While a one-to-one match was not possible, as these developmental tools were
separately created, standards are based on cognitive development and what students at various
ages can do, much as Dynamic Skill theory defines from a brain- and age-based perspective.
Because in a school setting standards are well-defined in accordance with grade-appropriate
cognitive development levels, it is possible to detect when learners return to prior (simpler)
knowledge schemata as defined by the standards, working at the appropriate targeted level,
and/or offering solutions at a higher level of proficiency (i.e., in dynamic-skills terms, going
from representations to abstractions). An example of this mapping is found in Table 9.
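In code terms, such a level-to-standards mapping might be sketched as follows. This is a minimal, hypothetical illustration: the grade bands, level numbers, and labels are assumptions for demonstration, not the study's actual Table 9 mapping.

```python
# Hypothetical sketch: tying ordered skill levels to grade-band standards
# so a learner response can be classified relative to the targeted level.
# The bands, level numbers, and labels are illustrative only.

SKILL_LEVELS = {"elementary": 1, "middle": 2, "high": 3}  # ordered proxy levels

def classify(response_band, target_band="middle"):
    """Label a response's grade band relative to the targeted band."""
    diff = SKILL_LEVELS[response_band] - SKILL_LEVELS[target_band]
    if diff < 0:
        return "prior (simpler) schema"
    if diff == 0:
        return "targeted level"
    return "higher proficiency"

print(classify("high"))  # a response drawing on above-grade-band standards
```

Because the levels are ordered, a single comparison suffices to detect returns to simpler schemata, work at the targeted level, or higher-proficiency solutions.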
Components related to inquiry and innovation and to motivation were often assessed through
Likert-like self-reports embedded in the CBLE, though made more “stealthy” (Shute & Spector,
2008; Shute, Ventura, Bauer, & Zapata-Rivera, 2008) than otherwise through a starred “like”
system similar to those learners might find online in various web and social-media environments.
They were further assessed in terms of performance in the activity and according to self-
reflection narratives that provided qualitative data. In each of the study’s two ill-structured
problems, discipline experts in the fields of engineering design and geology originally identified
the key domain content knowledge, the corresponding authentic STEM and problem-solving
practices, and the iterative sequences thereof. The two selected, situated learning scenarios
allowed the proposed framework to be tested for the extent to which it was applicable to learner
transfer (as defined by preparation for future learning) as part of developing adaptive expertise in
two very different STEM domains (one focused on hypothesis formation and conceptual change,
the other on technology design-based problem solving).
Uniting at the assessment level, adaptive expertise (the dependent variable) and the three
dimensions through time that serve as indicators for preparation for future learning (the
independent variables) share a need to be operationalized in the assessment of learning patterns
that depict evolving learner capacities. In support of that need, this study used evidence-centered
design (ECD) (Mislevy, Steinberg, & Almond, 2003) as a framework for that assessment. ECD
links observations of learner outputs and behaviors to claims about their knowledge, skills, and
abilities, to principles on which the claims are made, and to tasks, responses, rubrics, scores and
other elements that together provide evidence of the learning claims (e.g., Mislevy, Steinberg, &
Almond, 2003). Stemming from research in expert systems, software design, and legal
argumentation, this framework provides evidence of explicit understandings and ensures that
warrants of learner proficiency ranges have corresponding valid, inferential, evidentiary claims
(Mislevy & Riconscente, 2005), a topic discussed further in the section on reliability and validity
in this Chapter and detailed in Appendices A-E.
In essence, ECD allows interactive, performance assessments that measure scientific
inquiry in a manner that reflects relative levels of expert-like, situated, complex learner
performances (Clarke-Midura & Dede, 2010). In a five-layer framework, the technique provides
a way of identifying (a) target knowledge, skills, and abilities (KSAs) for learners, (b)
performance- or behavior-related factors providing acceptable evidence of those competencies,
and (c) learning tasks prompting the demonstration of those competencies (Snow, Fulkerson,
Feng, Nichols, Mislevy, & Haertel, 2010; Behrens, Mislevy, DeCerbo, & Levy 2010). Each
layer supports the provision and collection of evidence for claims about learner proficiencies, as
shown in Table 6, summarized below, and detailed more fully in Appendices A-E.
Table 6:
Descriptions & Purposes of ECD Layers (e.g., Mislevy, Steinberg, & Almond, 2003)

Domain Analysis: Documents
• discipline-based content knowledge (e.g., concepts, language, representational forms, procedures, etc.),
• ways in which such knowledge is constructed, acquired, used, communicated, and assessed, and
• ways in which it is applied to support problem solving.
Domain Modeling: Provides key assessment arguments and relationships based on the domain analysis.
Conceptual Assessment Framework: Organizes assessment characteristics through five measurement models (student, task, evidence, assembly, and presentation).
Assessment Implementation: Includes authoring tasks and responses, often through automated scoring statistical models for reusability.
Assessment Delivery Architecture: Coordinates learner-task interactions through four processes (activity selection, activity presentation, evidence identification, and evidence accumulation), and graphically summarizes learner outcomes.
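The claim-evidence-task linkage at the heart of ECD can be pictured as a simple data model. The following is a hypothetical sketch: the class names, fields, and example claim are illustrative, not drawn from the study's actual instruments.

```python
# Hypothetical data model of the ECD linkage between a proficiency claim,
# the tasks that elicit evidence for it, and the rules that score that
# evidence. All names and the example content are illustrative only.
from dataclasses import dataclass, field

@dataclass
class EvidenceRule:
    observation: str   # observable work product (e.g., an ordering response)
    rubric: str        # how the observation is scored against the claim

@dataclass
class Task:
    prompt: str                                   # activity shown to the learner
    evidence: list = field(default_factory=list)  # EvidenceRule instances

@dataclass
class Claim:
    ksa: str                                   # targeted knowledge, skill, or ability
    tasks: list = field(default_factory=list)  # Task instances eliciting evidence

claim = Claim(
    ksa="Student can use relative age dating to order geologic events",
    tasks=[Task(prompt="Order the landforms from oldest to youngest",
                evidence=[EvidenceRule(observation="ordering response",
                                       rubric="matches expert sequence")])])
print(claim.tasks[0].evidence[0].rubric)
```

Structuring the assessment this way makes each proficiency claim traceable to the specific observations and scoring rules that warrant it.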
Domain Analysis. A domain analysis is critical in linking adaptive expertise (the
dependent variable) with preparation for future learning and its related dimensions (the
independent variables) given it (a) details what knowledge and experiences are important and the
relationships among them (Behrens, Mislevy, DeCerbo, & Levy, 2010), and (b) depicts the ways
learners construct, acquire, use, and communicate the identified knowledge, skills, and abilities
(e.g., discipline-based content, concepts, language, representational forms, procedures, etc.) in
order to solve problems (Snow, Fulkerson, Feng, Nichols, Mislevy, & Haertel, 2010). Typically
relying on instructional specialists in collaboration with discipline experts, domain analysis
organizes discipline-based concepts, knowledge representations, symbols, structures,
relationships, practices, values, and other epistemic elements into an assessable form (e.g.,
knowledge maps) that differentiates among varying levels of expertise (e.g., Mislevy &
Riconscente, 2005). That is, related to situated cognition theory, it describes the real-world work
that is valued in a discipline, what representational forms guide presentations of information and
responses in the discipline, and what provides evidence for, and indicators of success or failure
in, that real-world work (Mislevy, Steinberg, & Almond, 2003). The overall design of the
epistemic CBLE in this study relies on this domain analysis as summarized in the expert-like
problem solving steps outlined in Appendix A. The key knowledge, skills, and abilities for each
step were analyzed and aligned with Next Generation Science Standards (NGSS Lead States,
2013), with particular reference to progressions in science and engineering practices and
crosscutting concepts along with specific domain content targeted in each of the two scenarios.
These new standards and grade-level progressions served as the knowledge map and, to the
degree possible, were aligned with Yan and Fischer’s (2007) developmental scale.
Domain Modeling. At the domain modeling level, design patterns modeled after the
Principled Assessment Designs for Inquiry (PADI) (Mislevy et al., 2003) were constructed to
assess both knowledge outcomes and proficiencies related to adaptive expertise and preparation
for future learning in STEM-related, complex, ill-structured problem solving. Funded by the
National Science Foundation to improve scientific inquiry, PADI meets the challenges associated
with iterative, inquiry-based, and cyclical processes of STEM learning by providing assessment
designers with scaffolds for identifying target learner proficiencies, crafting tasks that are
appropriate use cases for a learner’s demonstration of these proficiencies, selecting psychometric
models that authentically capture the learner’s level of proficiency, and creating decision rules
for scoring that support evidence-based outcomes (Mislevy et al., 2003). Several reusable
design templates found at http://design-drk.padi.sri.com/padi were referenced and modified
based on the content presented in each of the two learning scenarios used in this study. This
level is further detailed in Appendix B.
Conceptual Assessment Framework. All parts of the Conceptual Assessment
Framework are detailed in Appendix C. As an overview, the student proficiency model typically
used NGSS performance targets as the “students are able to” claim, and documented variables
for students’ KSAs in reference to the claim. The task model described what student work
products counted as evidence for the targeted KSAs, as well as how the work products were
captured (e.g., an ordering task, a constructed response, a modeling task etc.). As previously
stated, the component task models aligned defined learning goals (Clark et al., 2010; Clark,
Mayer, & Thalheimer, 2003; Parnafes, 2007; Plass et al., 2009), knowledge and knowledge-
process types (Anderson & Krathwohl, 2001), NGSS standards (NGSS Lead States, 2013), and
appropriate assessment types (Anderson & Krathwohl, 2001; Lantz, 2004; Miller, Linn, &
Gronlund, 2009; Quellmalz et al., 2009). The evidence model demonstrated the way in which
student responses (observable work products) connected to the KSAs documented in the student
model and the way in which proficiencies were measured and scored through potential variables
in their observable work products. The assembly model for each problem-solving scenario
documented the level of accuracy in measuring the variables and the breadth and depth of the
tasks (e.g., numbers of items, item types, difficulty levels) relative to reflecting levels of proficiency in
domain-based KSAs. Finally, as shown in Figure 3 in Appendix A, the overall presentation
model clearly organized the content of the scenarios into five problem-solving steps (shown as
puzzle pieces in problem solving) and provided representational icons for scaffolding help that
were conceptually rooted in NGSS Science & Engineering Practices (NGSS Lead States, 2013)
and Depth & Complexity prompts (Kaplan, 2012).
Process of Instrument Development
Instrument development included several stages: (a) the overall design of the CBLE, (b)
the contents of the problems within the CBLE, and (c) the embedded assessments for
ascertaining students’ perceived or actual proficiencies within it. Under the direction of the
researcher, a web developer modified the open-source code to enable desired features such as the
representational scaffolds based on the research-based recommendations provided by the
researcher. Conversion of the lesson materials by the researcher into fully digital form followed,
with modifications within the lesson to take advantage of certain CBLE affordances (e.g., drag-
and-drop mechanisms etc.).
Field Testing
Instruments. All items in embedded survey questions followed Krosnick and Presser’s
(2010) guidelines in creating items, followed by testing and data analysis. Participants’
individual problem-solving schemata were tested through a concept-mapping exercise, with
measures for complexity, sophistication, and accuracy (e.g., number of conceptual nodes and
connections). Embedded cognitive, metacognitive, and affective scaffolds and quantifiable
assessments were added per the ECD assessment framework. Quantitative answers were scored
within the CBLE. A common open coding scheme for elements needing qualitative
interpretations (e.g., short answers or drawings) guided results also per the ECD assessment
framework. Qualitative assessments using open coding served to triangulate the data from the
embedded quantitative answers.
Curricular Materials. The pedagogical designs of the two selected problem-solving
experiences in this study had been tested over a decade of widespread classroom use by teachers
of middle-school learners, including underserved, high-needs students, with iterative
improvements previously made according to learner experiences and outcomes. Past measures
largely focused on cognitive rather than metacognitive and affective outcomes, and did not
consider adaptive expertise. As an exploratory study, the research conducted here precedes any
broad field testing of the CBLE, which would be dependent on findings and implications.
However, pretesting of the CBLE occurred to assure usability and construct validity, as covered
in data-collection procedures below.
CBLE Usability and Construct Validity Tests. Prior to the experiment, ten students
not related to the later study participated in a lab setting to ensure usability and construct
validity. These participants helped to identify issues related to the clarity or usability of the design or its
components (e.g., confusing prompts, programming bugs, etc.), to ensure that the tasks could be
completed within the given timeframe, and to demonstrate that the CBLE elicited the desired learner data
per the evidence-centered design. All issues found through this testing were corrected prior to
the study.
Reliability and Validity of Instrument
Measuring adaptive expertise (the dependent variable) and PFL (as measured by the three
dimensions through time that serve as the independent variables) poses some unique challenges.
If, at the top level of analysis, AE = f(ΔPFL) = f(ΔKS, ΔII, ΔM), reliable and valid
measurements for each functional component are key to assessing learning progression patterns,
and ultimately, adaptive expertise itself. Using traditional techniques, however, it is difficult to
measure emerging levels of that expertise, and few models exist for monitoring it at the micro-
developmental level. Common classroom assessment tools (e.g., multiple choice tests, surveys,
etc.) geared toward measuring learners’ gains in knowledge and skills are out of context in
immersive learning experiences that continually assess learners’ progress (Nelson, Erlandson, &
Denham, 2011) and have multiple possible answers given the nature of ill-structured problems,
rather than one correct or best answer. Reliable, valid, cost-effective, and scalable assessments
of complex, ill-structured problem solving are scarce (Eseryel, Ifenthaler, & Ge, 2011).
Objective measures of validity involving causality do not work for epistemic computer-based
experiences given complexities in interpretations of learners’ characteristics and their
longitudinally developing expertise (Rupp, Gushta, Mislevy, & Schaffer, 2010).
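To make the top-level relationship AE = f(ΔPFL) = f(ΔKS, ΔII, ΔM) concrete, the following minimal sketch shows how component deltas might be aggregated into a single indicator. The function names, the 0-100 scales, and the equal weights are illustrative assumptions, not the study's actual scoring model.

```python
# Illustrative reading of AE = f(dPFL) = f(dKS, dII, dM): changes over
# time in knowledge and skills (KS), inquiry and innovation (II), and
# motivation (M) feed a single adaptive-expertise (AE) indicator. The
# equal weights and 0-100 scores are assumptions for illustration only.

def deltas(scores):
    """Change in a component score between consecutive time points."""
    return [later - earlier for earlier, later in zip(scores, scores[1:])]

def ae_indicator(ks, ii, m, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted mean of the most recent deltas in the three PFL dimensions."""
    w_ks, w_ii, w_m = weights
    return w_ks * deltas(ks)[-1] + w_ii * deltas(ii)[-1] + w_m * deltas(m)[-1]

# Hypothetical component scores at three time points (0-100 scale)
ks = [40, 55, 70]  # knowledge and skills
ii = [30, 45, 50]  # inquiry and innovation
m = [60, 60, 75]   # motivation
print(round(ae_indicator(ks, ii, m), 2))
```

Any real operationalization would of course rest on the ECD evidence models rather than a simple weighted mean; the sketch only shows why reliable measurement of each delta is a precondition for assessing the composite.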
To inspect what students know, assessment must rely on indirect reasoning from
evidence through models of what learners should master, observations of learners’ performance,
and interpretations of the way in which those observations depict learner KSAs (Clarke-Midura
& Dede, 2010). While subjective measurements of reliability and validity must be defensible as
part of a principled assessment design, disciplinary standards for evidence-based design,
implementation, scoring, and reporting in epistemic CBLEs do not currently exist (Rupp, Gushta,
Mislevy, & Schaffer, 2010). Bransford et al. (2010) additionally relate that most assessments
place learners in sequestered, timed circumstances and test them on their factual or conceptual
knowledge rather than on their capacity in problem solving. That kind of assessment has a large
stifling effect on innovation-related attitudes and behaviors related to adaptive expertise
(Bransford et al., 2010). It also assumes a single developmental pathway, a notion critiqued by
Dynamic Skill Theory, which argues for assessing a branched web of multiple ability states that
can together potentially represent and predict a learner’s individual developmental pathway
(Dawson & Stein, 2008), perhaps including Bransford and Schwartz’s (1999) idea of a trajectory
for adaptive expertise. For all of these reasons, this study largely used evidence-centered design
as an approach to meet these challenges, with traditional methods to ensure reliability and
validity used where possible. The ECD for this study is detailed in Appendices A-E.
Reliability. In traditional studies, reliability is meant to ensure that the same data-based
results would occur in the same population if administered and performed in the same way
multiple times. That premise is antithetical to the very nature of adaptive expertise and the open-
ended, emergent nature of ill-structured problem solving, which has no one right response or
pathway and emphasizes innovation. While students may encounter the same diagnostic tool for
a given variable (e.g., a constructed response), their unique prior choices and experiences in the
activity likely change the potential nature of their response, whether cognitive,
metacognitive, or affective. That is, as with any complex adaptive system (of which this learning
process is one), actions in the system change the system and the responses within it. However, what
aids in reliability is crafting sub-tasks with independent measures, which provides multiple
assessments and ensures that captured learner understandings and abilities are not as
interdependent as they otherwise would be (Clarke-Midura & Dede, 2010). Those allowances
are built into the rigorous structure of the ECD framework, as covered in Appendices A-E.
Validity. Validity in turn is meant to ensure data accurately represent the phenomena of
study. Validity in an epistemic digital learning environment is different from traditional
proficiency assessments such as those used in summative evaluation in high-stakes testing
(Rupp, Gushta, Mislevy, & Schaffer, 2010). More formative in nature, the emphasis is on low-
stakes needs of diagnosing developing expertise within a domain-based experience. In this case,
through learning-theory-based analyses, what is being assessed are relative professional patterns
of thinking and acting, as well as the extent to which those patterns are internalized in an expert-
like manner as they evolve (Rupp et al., 2010).
The key idea behind the use of ECD is that this method meets these challenges by laying
out evidentiary structures that serve as the validity argument, which includes claims about
learners given a targeted proficiency level, data outcomes if achieved, warrants for the scores
representing targeted proficiency levels, and alternative explanations for high or low scores
(Haertel, DeBarger, Villalba, Hamel, & Colker, 2010). As Shute and Zapata-Rivera (2008)
assert, formative assessment instruments such as the one developed for this study support and
diagnose diverse learners in their progress for potential real-time instructional modifications as
well as assessment. In this study, per the ECD framework, in addition to pretesting content with
non-participant teachers for comprehensibility, content validity was supported by involvement of
discipline experts in the design of the problem-solving activities in the CBLE, including the
sequence and content of tasks within it. Substantive validity related to the engagement of
learners’ mental processes in a task was supported through the assessment-related scaffolds that
collected learners’ metacognitive and motivational processes at different points in the problem-
solving process, as well as by capturing representations of learners’ more cognitive problem-
solving schemata. Considerations of content and substantive validity reinforced structural
validity: the high level of alignment with which the CBLE reflected professional cognitive,
metacognitive, and affective practices led to scoring schemes consistent with that strongly
situated linkage. Given the problems in this study are authentic and real-world, predictive
validity for real-world adaptive expertise outside of the CBLE may be supported, but testing for
performance in a follow-up, outside-the-CBLE-context, real-world problem-solving experience
requiring adaptive expertise was beyond the scope of this exploratory research.
Per the discussion of limitations in Chapter 1, internal validity might be impacted by pre-
test influence in measuring adaptive expertise, particularly given the intent is to have an effect on
later problem-solving performance as part of a conscious preparation for future learning through
explicit supports, thereby confounding the effects of the independent variables on the dependent
variable, and potentially impacting interpretation of the data. A potential also exists for the
Hawthorne Effect, where being in the experiment changes the response of the learners in it,
something that, in the case of adaptive expertise, is actually a learning purpose. Other possible
threats are that the accuracy of observed patterns in adaptive expertise (the dependent variable)
might only reflect students’ initial, novice-level, and thus more variable, engagement, and that
changes in knowledge and skills, inquiry and innovation, and motivation (the independent
variables) might not be of sufficient magnitude and duration to accurately describe developing
patterns in adaptive expertise. Additionally, aggregating scores in a simple model to express
changes in learning patterns might introduce unreliability into the measurements of the observed
patterns. Finally, as much as the assessment was detailed through rigorous evidence-centered
design (ECD) methods, instrumentation errors are inherently possible, though minimized to the
greatest extent possible, given the subjective but rigorous nature of that technique.
As covered in Chapter 1 as a part of delimitations, the findings of this exploratory study,
by design, are not readily generalizable, but rather suggest future experimental research
directions for extending results and implications to wider learner groups. While the research
design ensured relatively uniform conditions for individual participating students, a potential
limitation of the study is that the overall context of the learning environment in which the
intervention took place was not formally considered. While the chosen scaffolds within the
model strove to align with prior research-based evidence on their effectiveness, due to the scope
of the study, it was not possible to isolate the type and timing of each of the scaffolds to test their
separate effectiveness on learner patterns, which might affect the learner performance in the
development of adaptive expertise. For the scope of this study, the limited selection of cognitive,
metacognitive, and affective indicators for learners’ capacities as detailed in the discussion of the
independent variables in Chapter 2 might not provide a complete picture of the complex suite of
knowledge, skills, and abilities required for adaptive expertise.
Data Collection Procedures
Participant Demographic Data
Participant demographics were established through integrated online surveys completed
prior to the problem-solving experiences, as shown in Appendix F. All participants established a
secret code name to keep their identities anonymous, and a password-protected database on a
password-protected computer held all data. Data on school demographics included: (a) school
type (e.g., public/private), (b) number of students enrolled, (c) race and ethnicity percentages of
students relative to state average, (d) percentage of students receiving free or reduced-cost lunch
relative to the state average, (e) number of students who took the state proficiency test in math
and reading, and (f) mean proficiency percentages for each school relative to mean state
proficiency percentages. Data on teacher demographics included (a) gender, (b) race/ethnicity,
(c) years of teaching experience, (d) experience in STEM pedagogy, (e) level of STEM content
knowledge, (f) level of computer skills, (g) experience in teaching with computers, (h)
experience in teaching ill-structured problem-solving skills and/or guided or full inquiry, and (i)
self-reported levels of capacity in knowledge and skills, inquiry and innovation, and motivation
related to STEM-based, complex, ill-structured problem solving. See Appendix F.
Participant Learning Data
Data-collection procedures are documented through this study’s ECD, as created by the
researcher, reviewed by the committee, and documented in Appendices A-E. To establish and
monitor participants’ development of problem-solving schemata, a pre- and post-problem-
intervention instrument consisted of a concept-mapping activity depicting students’
externalization of their mental models of problem-solving processes and strategies and a few
short answer questions to establish students’ scientific knowledge, misconceptions, scientific
processing, reasoning, and problem-solving capacity. Concept mapping was used given it
prepares learners for meaningful learning, scaffolds cognitive processing, supports critical
thinking, captures achievements and interests, and supports transfer in problem solving
(Gonzales, Palencia, Umana, Galindo, & Villafrade, 2008; Lim, Lee, & Grabowski, 2009; Mann
& LeClair, 2009; Tseng, Chang, Lou, Tan, & Chiu, 2012; Tseng, Chang, Chen, & Lou, 2010).
Specifically, concept maps help students see the links among scientific concepts (Chiou, 2008).
Concept maps allow insight into an individual’s knowledge structures at different points in the
learning process, and some studies show concept-mapping strategies had higher impact on
learners with the lowest cognitive competence (Tseng et al., 2012). Quantitative analysis
included assigning weights to the appearance of correct, erroneous, and missing concepts and the
depicted relationships among them, as compared to an expert map (Gouli, Gogoulou,
Papanikolaou, & Grigoriadou, 2005; Schaal, 2010; Schaal, Bogner, & Girwidz, 2010), while a
human-based analysis (three staff for inter-rater reliability) further characterized the
sophistication of the propositions.
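The weighted comparison against an expert map described above can be sketched as follows. The data structures, function name, and weight values are hypothetical illustrations of the approach, not the scoring scheme actually used in this study.

```python
# A minimal sketch of weighted concept-map scoring against an expert map.
# Weights and data structures are hypothetical, not those used in this study.

def score_concept_map(student_props, expert_props,
                      w_correct=1.0, w_erroneous=-0.5, w_missing=-0.25):
    """Score (concept, link, concept) propositions against an expert map."""
    student, expert = set(student_props), set(expert_props)
    correct = student & expert      # propositions matching the expert map
    erroneous = student - expert    # propositions absent from the expert map
    missing = expert - student      # expert propositions the student omitted
    score = (w_correct * len(correct)
             + w_erroneous * len(erroneous)
             + w_missing * len(missing))
    return score, len(correct), len(erroneous), len(missing)

expert = [("problem", "requires", "analysis"),
          ("analysis", "informs", "solution")]
student = [("problem", "requires", "analysis"),
           ("problem", "has", "answer")]
score, n_correct, n_erroneous, n_missing = score_concept_map(student, expert)
# score is 0.25 here: one correct (+1.0), one erroneous (-0.5), one missing (-0.25)
```

A human-rated pass, as described above, would still be needed to judge the sophistication of individual propositions, which a set comparison alone cannot capture.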
For other elements, the CBLE itself captured participant responses. Where possible to
quantify, the CBLE automatically scored responses according to the ECD for each scenario.
Where qualitative analyses were necessary (e.g., short answers), collected responses were later
scored against expert models and rubrics created as part of the ECD or through open coding.
Mixed methods for collecting this data included embedded Likert-like surveys, short-answer
questions, model-based assessments, and other measures. The appropriateness of the method
depended on the knowledge type being assessed, per Anderson and Krathwohl’s (2001)
taxonomy. For example, a multiple-choice format is better suited to measuring the recall of
factual knowledge than the generation of hypotheses.
Participants conducted the problem-solving exercises in a computer lab set up with the
problem-solving CBLE. Initial demographic and pre-intervention cognitive, metacognitive, and
affective characteristics were collected, along with concept-map representations of learners’
problem-solving schemata. Instructional time for each problem consisted of a two-and-a-
half-hour session, with sessions set a week apart, during which data was collected according to the ECD.
Post-experience reflections and concept maps were collected, and observations of learner
experiences were also conducted throughout. Following participant involvement, the data was
prepared for an analysis of findings and implications.
Data Analysis Procedures
Embedded assessments within the computer program collected most learner responses.
The exception was the concept-mapping data revealing learners’ problem-solving schemata
prior to and after each problem-solving scenario. Learners drew these concept maps by hand
so that they could create as many nodes and links as possible; hand drawing removed
differences that computer drawing tools, which require more sophisticated computer skills,
might otherwise have imposed on the recording of the potential complexity of the students’
models. Otherwise, embedded
assessments per the ECD diagnosed and scored the learner’s level of knowledge and
performance, along with how well and how fast the learner acquired new concepts and skills.
For example, codes for the amount of time a learner took to perform an action or the number of
iterations attempted allowed inferences about the level of engagement, cognitive load, and/or
comprehension of key underlying concepts. Points for tasks, variables, and sub-variables were
assigned per the ECD. Both cognitive load and motivation were monitored at key moments in the
problem-solving process through student self-reports in Likert-like survey items, followed by
explanatory short answers where relevant.
Representative scores for each student at each of the five problem-solving stages were
individually aggregated according to the ECD strategy. These stage-based measurements served
as sub-scores. They in turn were also individually aggregated for each learner to create an
overall score for each problem according to the measurement model. This process repeated from
one problem-solving scenario to the next to capture changes through time and to assess them in
terms of potential developmental patterns in PFL on the way to adaptive expertise. Statistical
techniques included 2D and 3D plots that tracked performance and changes in independent
variables and sub-variables at each problem-solving stage within a problem as well as between
them. From scored responses for the observable variables in the students’ work products
according to the ECD-based evidence model, individual learning progressions were detailed and
summarized for each student in 2D plots of their progressions, following a similar approach to
Yan and Fischer’s (2007) analysis of learning patterns. Per the proposed model for representing
adaptive expertise in Figure 2, independent variable sub-scores (within-test task-level) were also
combined into overall scores (scenario/test level) to provide a 3D visual representation of student
patterns in the three dimensions through time and transfer, both within and between problems.
For this first exploratory study, scores and sub-scores were not weighted given a lack of prior
evidence regarding the importance of valuing one variable over another with respect to novice-
expert trajectories in adaptive expertise in STEM-based, ill-structured problem solving.
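The unweighted aggregation of task-level scores into stage sub-scores, and of stage sub-scores into an overall problem score, can be sketched as follows. The stage names and score values are hypothetical examples, and the structure illustrates the approach rather than the study’s actual ECD implementation.

```python
# Illustrative sketch of the unweighted aggregation described above: task-level
# scores are averaged into stage sub-scores, and stage sub-scores into an
# overall problem score. Stage names and values are hypothetical examples.

def aggregate(stage_scores):
    """stage_scores: {stage_name: [task-level scores]} for one learner/problem."""
    sub_scores = {stage: sum(vals) / len(vals)
                  for stage, vals in stage_scores.items()}
    overall = sum(sub_scores.values()) / len(sub_scores)
    return sub_scores, overall

# One learner's hypothetical task-level scores (0-6 scale) for one problem.
learner = {"define": [3, 4], "investigate": [2, 3, 4],
           "create": [3], "evaluate": [2, 4], "reflect": [3, 3]}
sub_scores, overall = aggregate(learner)  # overall is 3.1 for these values
```

Because no prior evidence supports weighting one variable over another, as noted above, each task and stage contributes equally; repeating this aggregation across scenarios would yield the per-problem scores tracked through time in the 2D and 3D plots.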
Ethical Considerations
Per Creswell (2009) and IRB rules, participants were fully informed about the research
objectives, data-collection methods, and data protection. Participants could opt out at any time.
While individual student data was collected, anonymity was preserved through code names, as
the intent was not to have personal identifiers.
CHAPTER FOUR: FINDINGS
Adaptive expertise (Hatano & Inagaki, 1986) in STEM-based, complex, ill-structured
problem solving is key to national and individual prosperity in the knowledge economy of the
21st Century and beyond. To ensure learners can meet future challenges, a key transfer
assessment target is preparation for future learning (PFL), as demonstrated by changes in their
problem-solving schemata, which can be characterized by the sophistication of what learners
notice about the problem, how they gather and interpret information, and how they apply
information or past problem-solving strategies when confronting new problems (Bransford &
Schwartz, 1999).
Using a scaffolded, epistemically situated CBLE, the purpose of this exploratory study
was to develop an assessment of emerging patterns in a novice-expert journey toward more
sophisticated, efficient, and effective adaptive expertise in complex, STEM-based, ill-structured
problem solving (the dependent variable). The research question asked: How is the level of
adaptive expertise in complex, STEM-based, ill-structured problem solving (the dependent
variable) influenced by evolving patterns of competencies in four proposed dimensions of
learners’ preparation for future learning (PFL) (the independent variables):
• (a) knowledge and skills (Dimension 1 in the Adaptive Expertise model),
• (b) inquiry and innovation (Dimension 2 in the Adaptive Expertise model),
• (c) motivation (Dimension 3 in the Adaptive Expertise model), and
• (d) transfer of any previously demonstrated capacity in these three dimensions to a
novel complex, ill-structured problem following the initial learning experience
(Dimension 4 in the Adaptive Expertise model)?
The following findings directly relate to the overall research question about how
microdevelopmental patterns in evolving competencies in three dimensions through the fourth
dimension of time (collectively preparation for future learning) describe the development of
adaptive expertise in STEM-based, complex, ill-structured problem-solving (the dependent
variable).
Finding 1: Complex Individual Patterns Overall
Educator-learners exhibited highly complex and dynamic individual microdevelopmental
patterns within and between problems. The twelve learners’ individual microdevelopmental
trajectories depicted in Appendix G show the same kinds of highly complex learning
patterns, and variations among them, found in earlier studies (e.g., Yan & Fischer, 2007).
Participants’ microdevelopmental trajectories varied widely for each dimension and related sub-
dimensions, both within and between problems. The spurts and drops each learner experienced
correspond well with prior studies related to Dynamic Skill Theory, which identifies chaotic
microdevelopmental patterns as a characteristic of novice understandings as learners gain higher-
order Skills (e.g., Fischer, 2008). Learners in this study, over the course of the two problems,
stayed within the same low-to-mid level of scoring per the scale, without any major leaps from
one problem to the next. That result is reasonable given the limited number of problem-solving
opportunities over a relatively short period of time in this study. Yet, the relatively chaotic
patterns give an initial potential window into the microgenesis (e.g., Granott, Fischer, & Parziale,
2002; Fischer, 2008) of KSAs related to adaptive expertise in STEM-based, ill-structured
problem solving (i.e., a view into the time when, and the processes by which, novices transition
from relatively unstable to more stable patterns in their performance, as shown by precise and
dense data points), as well as into the potential processes that influence the development of adaptive
expertise (e.g., development of potential clusters of KSAs that collectively assist in this learning
process). It additionally corresponds with Dynamic Skill Theory’s perspective that tracking
microdevelopments produces more accurate, complex, and comprehensive assessments of
learner proficiencies along a developmental trajectory (e.g., Fischer, 2008). That is also in
keeping with prior research on preparation for future learning (Hatano & Inagaki, 1986) and the
need for making PFL observable in assessments (Mylopoulos, 2012). Data supporting this
finding are in Appendix G.
Finding 2: Highly Variable, Unstable Trajectories in Motivation
The most highly variable and unstable trajectories occurred in the motivation dimension,
followed by the inquiry and innovation dimension. This instability in motivation held for all
participants, both within and between problems.
Individuals differed on which sub-dimension variable(s) for motivation fluctuated most strongly,
when, and in combination with which other variables. In addition to supporting Finding 1 and
potential microgenetic understandings, this result indicates that motivation is likely an area in
which learners may need more supports, particularly in self-regulation practices that are
generally necessary in learning but especially important in ill-structured problem-solving
domains, given their open-ended, complex nature.
In open responses in the CBLE as well as in the study debrief, the majority of participant
comments related to struggles with motivation. Persistence and self-efficacy appeared to be the
most frequently cited topics among respondents. Participants clearly indicated in their responses
an awareness of what they could have done better in terms of self-regulation strategies. For
example, in the debrief following the study, participants expressed emotions related to
motivation, including surprise (e.g., “I just realized how my students feel when learning
completely new things. I haven’t been in that position in a long time.” “I really thought this
would be a piece of cake, but I learned it takes a lot of new thinking.”), frustration (e.g., “At one
point, I was tearing my hair out. I thought it would be easier to do, but I’m used to having the
lesson plan and answers to guide what I’m supposed to be doing.” “I was getting frustrated,
because I didn’t immediately know how to create something that would work.”), or persistence
(“I knew what I needed to focus on, but it was hard for me to cheer myself on. I could tell I
didn’t really feel like learning how, I just wanted to know it already.” “I didn’t want to work to
get to the finish line because it was such a process.” “I wish there was [sic] chocolate and coffee
so I could stay focused!”).
Despite the fact that the content was middle-school-level, ten of twelve participants
expressed doubt and discomfort in providing solutions to real-world problems in the experience
because they weren’t experts (e.g., “I’m not a scientist, so I’m not sure how I or my students can
be expected to contribute to real data analysis like this.” “I know engineering is in the new
standards, but I don’t have a degree or training in that.” “I’d have to learn a lot more about this
subject and also how to teach it. I’ve never done anything like this, and my lack of experience
reduce [sic] my motivation to complete it, because I don’t think I’ll get it right without too much
work and time I don’t have.”) This response reflects a general lack of self-efficacy that was also
reflected in mid-problem self-assessments (e.g., “I’m not an expert in every area of STEM, so
am worried about being able to handle lots of different problems in it.” “no background in this,
am not prepped to learn or definitely teach it, don’t know if my thinking is correct. [sic]” “I
don’t know enough to get whether my answer or the other teachers[’] are good enough. I don’t
know the topic well enough.” “I’m not sure I did a good job in answering the questions. I
usually know at least in general what the answers should be.” “I did my best, but I doubt I was
smart enough about it.” “woa. This is frying my no good today science brain.”)
In the debrief discussion, participants agreed that they thought their motivation would
increase if they had prior expertise, and agreed that a lack of effort and/or persistence (going for
an easy answer rather than engaging in deeper critical thinking) was tied to feeling as if they
were not already subject-matter experts. In that regard, while not using the formal language, the
group indicated a preference for performativity over a motivation for generative
thinking. Qualitative comments from participants suggest that they may not have enough self-
efficacy or may not perceive the value or relevance of learning and/or teaching ill-structured
STEM problem solving in a 21st-century, STEM-situated context.
While not showing as much variability as the motivation dimension, participants in this
study had more and wider spurts and drops in inquiry and innovation than in the knowledge and
skills dimension. Patterns in the sub-dimensions of curiosity and generativity in both problems
varied among the individuals in this study, with no common patterns discernible. Most
participants in both problems scored high in measures for personal novelty, suggesting their
novice level. Eight showed lower scores in flexibility relative to other sub-dimensions
throughout the science problem, and seven demonstrated the same in the technology problem.
Participants reflected in the debrief an unfamiliarity with how to assess qualities related to
inquiry and innovation, and again reflected a need for a higher level of expertise or assessment
supports to be able to make such judgments in STEM-based content, particularly in free-choice
scenarios with no one right solution pathway or outcome.
Finding 3: Low-to-Average Scores in Knowledge and Skills
Educator-learners exhibited low-to-average, middle-school-level KSAs in the knowledge
and skills dimension within and between problems, but had relatively more stable trajectories in
this dimension. The rubric-based and/or Likert-like scoring system reflects middle-school-level
KSAs per NGSS, as correlated with Anderson and Krathwohl’s (2001) taxonomy, and where
possible, Dynamic Skill Theory levels (e.g., Fischer, 2008). Per this scoring system, all
participants performed at a relatively low-to-medium middle-school level in the dimensions of
knowledge and skills, inquiry and innovation, and motivation. Averaged scores for all steps and
sub-steps typically hovered in the 2–4 range (with 6 being the highest possible score).
Compared to the other two dimensions, however, participants were more stable at this lower
level. Performing more stably in this low score range might indicate either the learners’ actual
KSA level or a regression to more assured prior knowledge during the acquisition of new and
necessary KSAs, per Dynamic Skill Theory. Regardless, this finding suggests educators are
likely ill prepared to provide optimal levels of support for their students when they themselves
are at or below the functional level expected for their own age range (see Table 1), probably
without having had the benefit of prior, optimal PFL themselves. The majority of participants
expressed a need for supports as discussed in Finding 2, and per prior research (e.g., Martin &
Schwartz, 2009; Mylopoulos, Regehr, & Ginsburg, 2011), also demonstrated a preference for
routine rather than adaptive-expertise-related techniques (e.g., “My kids could never do this right
– I don’t even have a good sense of what is right, or how I’d help them since I have to create the
solutions….”; “I think I did only somewhat okay, but I’m not sure. I’d want to be able to access
the right answer so I could compare it to my thinking.” “I’d like to have the procedures rather
than coming up with them.”).
While the teacher credentialing system for the participants’ state has language about
pedagogical KSAs related to problem-solving, the pre-experience survey showed none had ever
received any professional development or training in either solving or teaching open-ended
problems or in teaching full scientific inquiry, even though one-quarter indicated they had
led project-based lessons. In informal discussions during the debrief, all were aware of new
NGSS standards, but only two were able to articulate the shift to practices essential to the
domain. None reported confidence that they had sufficient training. While all had taught
STEM-related topics, in the two problem-solving experiences in this study, the presentation of
what one participant called the “somewhat familiar” standards-aligned content in an ill-
structured manner was unfamiliar to every participant.
Finding 4: Modest Between-Problem Improvements and Novice-level Mental Models
Educator-learners demonstrated relatively modest improvement trends in their problem-
solving schemata pre/post experience, but all demonstrated novice-level mental models, below
an optimal level of sophistication for their age and experience levels per Dynamic Skill Theory.
For dimension 4, transfer through time, both within and between problems, eight of twelve
participants demonstrated a recurring metacognitive awareness about varying aspects of their
own problem-solving schemata (i.e., per Bransford & Schwartz, 1999, what they notice about the
problem and how they gather, interpret, and apply information and strategies in solving it) and
potential personal improvements to it. This finding applies to responses collected during the
problem-solving steps, and particularly in the self-evaluation during Step 5. For example, in
answering reflection questions related to problem-solving schemata in the course of solving
problems, participants focused much of their self-assessments on recommending strategies to
themselves (e.g., “I could have done better at taking notes.” “I should have organized the
information differently.” “I didn’t focus on the details as much as I would next time.” “Next
time I’ll wait to get more input from others doing this before I revise my answers.”).
When encountering the second problem, only three of those consciously applied the
changes that they had recommended to themselves following problem 1. Only one among those
three not only applied prior strategies, but also offered completely new self-recommendations on
schema-related strategies at the end of problem 2. The other five largely repeated their prior
recommendations to themselves following problems 1 and 2.
The remaining four participants who did not demonstrate a recurring metacognitive
awareness of their own problem-solving schemata often repeatedly focused on struggles related
to hunger, tiredness, and daily-life-type distractions in the context of their problem-solving
learning rather than on self-recommended strategies. While they did occasionally mention
possible self-recommendations, these responses occurred less frequently and more haphazardly
than those of other participants. All of these participants were among the lowest scoring in all
categories overall. That could reflect low prior PFL levels and/or below-ZPD performance, as
reflected by greater fluctuation in scores in and among steps.
All participants, however, showed improvement trends in the sophistication of their
problem-solving concept maps. These concept maps intentionally helped capture and make
explicit individual learners’ mental models of the problem-solving process. However, per
Fischer’s (1980) Dynamic Skill structure (see Table 1), participants largely demonstrated a
functional ability below their age level, mapping elements into a single representational system.
In Step 5 for both problems, eleven of the twelve participants discussed metacognitive and
motivational improvements they were aware they could make in terms of future problem solving
on a problem of the same nature, but these self-identified improvements rarely made an
appearance in the post-experience concept maps. Two post-experience concept maps explicitly
represented a five-problem-solving-step model similar to the one used in the CBLE for both
problems per theory described in Appendix A, and four additional post-experience concept maps
implicitly included all concepts, reflecting a growing awareness of an expert-like problem-
solving process as laid out in the domain analysis (Appendix A).
Overall, in this two-problem study, two-thirds of participants (eight of twelve) demonstrated a
recurring metacognitive awareness about varying aspects of their own problem-solving
schemata. Three participants (one quarter) then applied self-recommended behaviors from
problem 1 to problem 2. This demonstration reflects near transfer. The sole participant who
both applied self-recommended strategies and generated new ones had a stronger, longer-term
background in both STEM and teaching, reflecting a stronger past PFL. This individual was also
one of the highest scoring participants overall, demonstrating the kind of experimentation with
strategies called for by Bell and Kozlowski (2009) in support of adaptive transfer. Eight of the
final concept maps indicated metacognitive awareness, two of which also demonstrated
integration of self-regulation strategies.
Discussion
This exploratory study provides a first attempt to analyze microdevelopmental patterns
against a scale created to measure adaptive expertise across five problem-solving steps
common to STEM, made further explicit through concept maps. The
study confirms the complexity and uniqueness of microdevelopmental variations for individual
learners, as found by earlier studies. It corroborates the complexity of microdevelopmental
learning patterns in ill-structured, open-ended STEM problems. It also confirms that, at least for
the participants in both the pre-study and study groups, educators in high-need school
environments may not have prior PFL opportunities, limiting their ability to provide appropriate
supports for the optimal rather than functional acquisition of necessary KSAs relevant to STEM-
based, ill-structured problem-solving, and the development of adaptive expertise in it.
The complexity of learner patterns supports the value of diagnostic computer-based
analyses that can help teachers-as-learners, teachers-as-teachers, and students identify areas
where help is needed, and of what type, for optimal learning along a microdevelopmental PFL
trajectory, per each learner’s zone of proximal development (Vygotsky, 1978). It also supports
the importance of learning tools that are not “one size fits all.” These findings are consistent
with the idea that individual learners need different kinds of supports at different times in the
learning process to achieve optimal performance per Dynamic Skill Theory. They also support
Fischer’s (2008) perspective that having a microdevelopmental record of learner progressions
provides more information about the nature of learners’ capabilities than a single assessment, as
well as insight into the microgenesis of, and patterns of microdevelopments in, new KSAs
leading to higher Skill levels. On this basis, real-time computer-based analyses at the
microdevelopmental level can reveal individual learner needs at each step in the learning
process, enabling instructors (real or within the CBLE) to respond better with individually
appropriate scaffolds that enable learning and growth in performance.
As learners experience both spurts and drops while organizing various capabilities across
task and domain strands, this idea additionally ties to Bransford and Schwartz’s (1999) argument
that a learning experience does not generate immediate expertise and that transfer might better be
assessed in terms of where each learner is along a trajectory toward expertise, or their PFL level.
It also supports studies calling for measurements of problem-solving capability beyond those of
conventional tests (e.g., Bennett, Persky, Weiss, & Jenkins, 2007; Nelson, Erlandson, &
Denham, 2011), as well as for the nascent need for microdevelopmental rulers and assessments
(Bennett, Persky, Weiss, & Jenkins, 2007; Nelson et al., 2011).
Implications for Theory: Finding 1
Finding 1 showed that participants in this study demonstrated highly complex and
dynamic individual microdevelopmental patterns within and between problems, and in the KSAs
associated with the sub-variables. That was true even for learners who achieved the same overall
“KSA score” in a given dimension or sub-dimension but with different underpinnings for it. The
diversity of learning trajectories in this study suggests that, at least against a common scoring
system cum microdevelopmental ruler, different learners exhibit unique developmental levels for
both individual and sets (or clusters) of KSAs in achieving a similar novice-to-expert (and
possibly Skill) level. That is, individuals differed in the strengths and weaknesses of their
individual KSA strands (sub-dimensions in this study) and in the combinations of KSAs or
strands of KSAs they applied, yet were equally able to produce an acceptable outcome at a
similar overall novice-to-expert level for a given type of ill-structured STEM-based problem
with no one right answer. More expert-like performance in one sub-dimension might
compensate for weaknesses in another given similarly scored ultimate outcomes. Between PFL-
based problems in this study, individual learners also appeared to sequence or accelerate the
development of some KSAs versus others differently as well, based on their exhibited
microdevelopmental patterns.
These findings raise some interesting questions for theory about whether the clusters of
KSAs needed to develop higher levels of adaptive expertise (and/or Skill levels proposed by
Dynamic Skill Theory) along a trajectory may also be unique per the learner and the learner’s
experience. That is, Dynamic Skill Theory postulates in part that some clusters of KSAs may be
common and prerequisite to achieving a given Skill level, and seeks to measure the development
and emergence of those more accurately. It does not preclude, however, variations in the clusters
that lead to a given Skill level.
This study was designed to assess individually identified, though grouped, KSAs (e.g.,
one or more of the sub-dimensions measured in this study), and did not have enough participants
or length of time across multiple problems to identify definitive classes of clusters leading to a
more expert level of performance or even to define discrete novice-to-expert levels in the context
of adaptive expertise in ill-structured, STEM-based problem solving. Still, variability and
complexity of individual responses raises a question of whether such “cluster classes” might
exist. If several different types of KSA clusters could be identified that lead to a similar PFL
(and/or Skill) level that supports a given novice-to-expert categorization, educators and learning
systems could adapt curricula both to support the individual learner’s strengths and to provide
further scaffolds for areas needing improvement, while also determining the most productive
individualized approaches (that is, all students might not need to achieve the same competencies
in every sub-dimension to be able to demonstrate more-expert-like performance). In that regard,
Dynamic Skill Theory addresses in general the difference between functional (without help) and
optimal (with help) performance. What is not yet addressed is the nature of the help, and
whether it is more optimal to target help to developing and reinforcing a learner’s emerging KSA
strengths (or clusters of strengths) or to develop and reinforce as part of PFL other learner KSAs
that are more functional in nature. Given that the 21st-century trend in problem solving is
toward teaming, it may in fact be important for team members to have different KSAs and
clusters of KSAs than a dominant “one size fits all” KSA mix.
Just as research has developed and articulated classes of microdevelopmental patterns as
described in Chapter 2 (e.g., Yan & Fischer, 2007), longer term studies with more participants
might reveal several possible emergent “classes of clusters” that disrupt tendencies toward the
design of “one size fits all” assessments, even at the formative level, while still providing well-
structured scaffolds per varying novice-to-expert pathways that could instruct CBLE design. In
the abstract, CBLEs may be able to accommodate an infinite number of combinations and
permutations that lead to an overall higher level performance in adaptive expertise in problem
solving by illuminating microdevelopments in “mini” novice-to-expert pathways in individual
KSAs and clusters of KSAs. That is sometimes held out as the promise of CBLE-based
formative assessments in terms of individualized personal learning paths. However, in
developing learning-theory-based curricula that optimize CBLEs and scaffolds within them,
identifying common “classes of clusters” would support the development of a few different
instructional PFL models for developing adaptive expertise as part of each learner’s
individualized PFL plan, along a trajectory in the “optimal PFL zone” proposed as part of this
study’s four-dimensional model of adaptive expertise (Figure 2). That would assist in making
CBLE instructional design efficient.
Implications for Theory: Finding 2
Finding 2 showed that educator learners in this study had the most highly unstable
trajectories occurring in the motivation dimension, closely followed by the inquiry and
innovation dimension. The high level of variability in motivation and inquiry and innovation
supports Dynamic Skill Theory studies that show learning patterns are chaotic before they
stabilize into patterns of growing and collapsing understandings, all as part of a process that
builds toward higher order thinking over time (e.g., Fischer, 2008). The chaotic patterns of
learner responses in this study reflect more novice-level understandings and/or self regulation in
these areas. Since these are areas that traditionally receive less attention in formative and
summative assessments, participants may not have been exposed to supports that would have led
them to more optimal rather than functional performance. Yet, motivation and self-regulation
strategies are important in an ill-structured problem-solving context.
One major theoretical implication from the study is that, given the more disorganized
patterns in motivation, the addition of motivation to current models of adaptive expertise is likely
important in filling a gap in PFL learning requirements in developing adaptive expertise in ill-
structured, STEM problem solving. Measuring novice-to-expert levels of motivation on a
microdevelopmental scale would allow either the CBLE or an instructor to be responsive in a
real-time manner in formative stages. Adding it as a key component to existing models for
problem-solving and/or adaptive expertise would also potentially enhance an understanding of
which motivational factors at which points in PFL trajectories can enhance either performance or
mental models supporting problem solving. As one sub-dimension example, persistence might
be more or less requisite than, say, interest at a given step or for a given type of cognitive difficulty.
Motivational findings thus support the four-dimensional model for adaptive expertise
proposed in Figure 2. Moreover, these findings potentially suggest the inclusion of the concept of
motivational zones of proximal development (e.g., Brophy, 1999) in the creation of
microdevelopmental rulers and even the measurement of motivational microgeneses in the
various motivational sub-dimensions. Any such consideration of motivational zones of proximal
development would concurrently recommend the development of an empirical framework and
measures for relating subjective reports of affective experiences to microdevelopmental patterns
in key motivational areas such as those proposed as sub-dimensions in this study. While some
advocates for related research in affective microdevelopmental assessments are beginning to
appear (e.g., Immordino-Yang, 2010), this area is relatively undeveloped, particularly in the
context of adaptive expertise.
Implications for Theory: Finding 3
Finding 3 showed that educator learners in this study exhibited low-to-average, middle-
school-level KSAs in the Knowledge and Skills dimension, both within and between problems.
From a theoretical perspective, it is unclear whether participant responses reflect either their
limited existing level of KSAs, or a return to lower levels of understanding as they sought to
learn to apply their existing knowledge in unfamiliar ways. The former possibility relates to
reports on their prior experience (e.g., none with degrees in a STEM field; none with pedagogical
training in situated STEM practices), with the latter possibility expected per Conceptual Change
and Dynamic Skill Theories, and appropriate to their novice-level experience. It is also unclear
whether the cognitive load (even if germane) imposed by the new inquiry-and-innovation and motivation aspects of
ill-structured problem solving subdued the expression of ostensibly more familiar knowledge and
skills. That is, in considering Findings 1 and 2, unstable and fluctuating patterns (particularly in
motivation) might initially interfere with performance in knowledge and skills as attention and
cognitive load shifts toward these areas of learning growth. In addition, both in terms of problem
solving and computer use, educators in this study demonstrated more of an inclination toward
performativity than generativity in their approach to the problems and problem-solving steps, so
the relatively stable, though lower, performance in this area may be implicated in this inclination
as well.
In terms of knowledge and skills findings, a framework that helps educators gain both
necessary problem-solving strategies and an awareness of them is important in their own PFL in
a situated context. Many participants reflected on their lack of expertise and their discomfort, which
supports prior studies calling for CBLEs that can serve as situated learning experiences that
mimic authentic, real-world, ill-structured problem solving in STEM domains (e.g., Dede, 2009;
Dunleavy & Simmons, 2011; Ketelhut, Nelson, Clarke, & Dede, 2010). Beyond utility for
educators, it additionally supports Choi and Lee’s (2009) finding that, because few conceptual or
research-based instructional design models, specific instructional strategies, and scaffolds for ill-
structured problem solving exist, teachers struggle in creating a good learning environment for it.
Implications for Theory: Finding 4
As Bransford and Schwartz (1999) discuss, the key assessment target in transfer is
preparation for future learning, as demonstrated by changes in problem-solving schemata (what
learners notice about the problem and how they gather, interpret, and apply information and
strategies in solving it). Making mental models explicit is important for supporting transfer in
problem solving (Gonzales, Palencia, Umana, Galindo, & Villafrade, 2008; Mann & LeClair,
2009; Tseng, Chang, Lou, Tan, & Chiu, 2012), helping learners see conceptual links (Chiou,
2008) and offering insight into individuals’ knowledge structures during learning (Tseng et al.,
2012).  Concept maps additionally help learners be aware of their own beliefs (Vosniadou et al.,
2001). Finding 4 showed that educator-learners in this study demonstrated relatively modest
improvement trends in their problem-solving schemata pre/post experience. Along with some
CBLE responses in each step, these gains can be considered partial evidence for increases in
PFL, as building such mental models is key to developing adaptive expertise (Hatano & Inagaki,
1986). Per schemata construction articulated by Cognitive Flexibility Theory, the modest
participant improvements to their models suggest novice-level capabilities for flexibly
constructing new knowledge, meanings, and representations, which may be tied to their low Skill
and/or adaptive-expertise level per the scoring system used in this study. Still, half included
previously missing steps and strategies corresponding to those used in the CBLE, per expert
models. This result suggests that the structure of the CBLE, even over the course of two
problems, influenced participants’ (re)conceptualization of the problem-solving process to be
more in accord with expert models.
Despite improvements to their mental models of problem solving, all demonstrated
novice-level performance both pre- and post-experience that was below an optimal level of
sophistication for their age and experience levels per what Dynamic Skill Theory would suggest.
Yet, participants’ demonstration of small gains even at this nascent level may also provide
insight into early microdevelopments in (and even microgenesis of) relevant PFL-related KSAs
necessary for adaptive expertise, as well as begin to characterize scaffolding needs for educators
with little prior PFL in their high-need environments. Thus, the gains might reveal early
emergent factors necessary to the development of more expert-like behaviors (e.g., Bereiter &
Scardamalia, 1993), including more enduring habits of mind (Bransford & Schwartz, 1999).
Participants who articulated strategic, metacognitive self-recommendations in both
problems, but did not apply them from one problem to the other, reflect a cognitive awareness
related to habits of mind (Bransford & Schwartz, 1999), but without a demonstration of the
necessary behaviors associated with them. All had relatively low-average scores in procedural
skills, though no claims are made with respect to the relationship, as further studies to isolate this
aspect would be needed across a larger sample.  Broadly speaking in terms of theoretical
implications, however, any such correlations could help identify necessary clusters of KSAs that
support the emergence of adaptive expertise, as measured through higher scores, and thus
support the creation of a developmental ruler that serves as a proxy for assessing PFL (and Skill)
levels, per Dynamic Skill Theory. They could also support the identification of pedagogical
techniques in the design of a CBLE that could support accelerated future learning (Chi &
VanLehn, 2007; Li, Cohen, & Koedinger, 2010), particularly if tailored to learners’ different
pathways, strengths, and weaknesses.
In terms of self-regulation, the four participants who did not focus on self-recommended,
schema-related strategies had low scores for self-regulated learning throughout. That is, they
were less engaged in a constructive process of goal setting and monitoring behaviors in the
process of reaching solutions (e.g., Pintrich, 2000), and per both observations and self-
reflections, were more frequently engaged in off-task activities.  Inversely, this result
aligns with earlier research (Nietfeld & Shores, 2011) showing a positive relationship between
self-regulation and high achievement.  They also had lower scores in generativity and conceptual
knowledge, in alignment with Lee, Lim, and Grabowski (2009).
From a constructivist perspective, an adaptive CBLE that capitalized on that awareness
with future problems might assist at the formative level in acquiring the articulated skill. For
example, for participant 4, who reflected that she “could have done better at taking notes,” the
learning system could employ a note-taking application more frequently within a later problem-
solving unit. While the tool could still include other pertinent aids of which the learner may be
unaware, it could still reinforce learner-identified habits of mind that they see as important to
their own success. By focusing on the ability to deliver common supports, but applying them
specifically when and where the learner needs them, it may be possible to scale personalized
learning across large numbers of participants, providing each with the kind of optimal assistance
called for by Dynamic Skill Theory and supportive of constructivist learning approaches.
Overall, in terms of making conceptualizations of the learning process explicit, what
participants reflected on most avidly in the post-study debrief was how long it had been since
they were in the position of learners, and discussed how that felt in terms of expressed empathy
for what their students’ experience might be. Beyond empathy, allowing educators to gain a
first-hand knowledge of the characteristics of students’ cognitive, metacognitive, and affective
processing might allow them to attend to their development in problem solving more
knowledgeably, improving their ability to guide students in motivation and in transformations in
the levels or quality of their problem-solving KSAs and, beyond that, their mental models of
adaptive problem-solving and related schemata.
CHAPTER FIVE: CONCLUSIONS
Adaptive expertise (Hatano & Inagaki, 1986) in STEM-based, complex, ill-structured
problem solving is key to national and individual prosperity in the knowledge economy of the
21st Century and beyond.  To ensure learners, including teachers and students from high-needs
schools, can meet future challenges, a key assessment target is learners’ preparation for future
learning (PFL), as demonstrated by changes in their problem-solving schemata, which can be
characterized by the sophistication of what learners notice about the problem, how they gather
and interpret information, and how they apply the information or past problem-solving strategies
when confronting new problems (Bransford & Schwartz, 1999).
Using a scaffolded, epistemically situated CBLE, the purpose of this exploratory study
was to assess learners’ developing patterns in their novice-expert journey toward more
sophisticated, efficient, and effective adaptive expertise (the dependent variable) in complex,
STEM-based, ill-structured problem solving. The assessment of learners’ preparation for future
learning consisted of three dimensions serving as independent variables (knowledge and skills,
inquiry and innovation, and motivation), all measured through the fourth dimension of time, and
evaluated through evidence-centered design.
Through the use of a cross-STEM-domain, scaffolded CBLE designed for complex, ill-structured problem solving, the central research question was: How are novice-to-expert levels of
adaptive expertise in complex, STEM-based, ill-structured problem solving (the dependent
variable) affected by microdevelopmental patterns and evolving competencies in four proposed
dimensions of preparation for future learning (PFL) (the independent variables):
• knowledge and skills (Dimension 1 of the Adaptive Expertise model),
• inquiry and innovation (Dimension 2 of the Adaptive Expertise model),
• motivation (Dimension 3 of the Adaptive Expertise model), and
• transfer of any previously demonstrated capacity in these three dimensions to a novel
complex, ill-structured problem following the initial learning experience (Dimension
4 of the Adaptive Expertise model)?
That is, for the purposes of this study, adaptive expertise (AE) is considered as a function of
changes in preparation for future learning (PFL), which in turn is a function of changes in three
dimensions through time: knowledge and skills (KS), inquiry and innovation (II), and
motivation (M), or as a conceptual shorthand:
AE = f(ΔPFL) = f(ΔKS, ΔII, ΔM).
The hypothesis was that differences in separate or combined patterns in knowledge and skills,
inquiry and innovation, and motivation (i.e., differences in preparation for future learning)
influence the acquisition of higher levels of adaptive expertise, as reflected in learners’
preparation for future learning and the extent to which it involves scaffolded help for optimal
acquisition of KSAs and even Skill levels. The study was intended to reveal patterns such as the
development of potentially transferrable problem-solving schemata and relevant cognitive,
metacognitive, and motivation-related behaviors and attitudes critical to adaptive expertise in
complex, ill-structured problem solving.
Limitations
Chapter 1 outlines four key limitations considered pre-study. First, internal validity could
be impacted given that the goal is to make problem-solving schemata explicit, consciously
considered by the learner in the course of their learning experience. Second, intrinsic to adaptive
expertise is the encouragement of changes in learner responses within and between problems and
the allowance for different responses (i.e., the CBLE instructional design necessarily triggers a
Hawthorne Effect, where being in the experience changes the responses). Third, measurements
in two problems likely represent participants’ initial novice-level understandings, so variations
might not be of sufficient magnitude and duration to describe accurately developing patterns in
adaptive expertise. Fourth, the use of a simple model to express microdevelopmental changes
may affect the accuracy of measurements of observed patterns, though the rigorous ECD-based
methods were designed to prevent that to the greatest extent possible.
A further limitation revealed by the study is that the frequency of collecting
microdevelopmental data across so many sub-variables might interrupt the learners’ sense of
flow, if not increase cognitive load.  While metacognitive awareness and self-regulation, for
example, are key habits of mind for expert-level adaptive expertise in STEM-based, ill-structured
problem solving, and can be considered germane load as a result, the frequency of their
assessment might have contributed in unknown ways to the highly unstable patterns in
motivation or other dimensions. However, not capturing microdevelopments in affective realms
would have negatively impacted the completeness of the model.
Implications for Practice
Addressed largely as implications of this study, topics related to the above research question
include interpretations of the way in which observed results in learners’ developmental patterns
in knowledge and skills, inquiry and innovation, and motivation contribute to:
• developing assessment systems that not only measure learners’ microdevelopmental
problem-solving capacities (e.g., Fischer, 2008), but also diagnose difficulties in need of
additional (optimal) supports at various stages in STEM-based, complex, ill-structured
problem solving to support their preparation for future learning, and
• understanding how CBLE instructional designs for adaptive expertise in STEM-based,
complex, ill-structured problem solving can better support learners’ PFL trajectories.
Assessing Adaptive Expertise and Diagnosing Learners’ PFL Needs
This study supports research that CBLEs can play a valuable, long-term role in making
learners’ non-routine performance at each problem-solving step visible (e.g., Schleicher, 2010)
and assessable, detailing and diagnosing learners’ preparation for future learning along novice-
to-expert trajectories associated with adaptive expertise. This ability allows KSA-specific
formative assessments in real- or near-real-time, enabling the provision of appropriate scaffolds
“on demand,” as needed. It also allows an evaluation of patterns and transitions in KSAs
through time, along a novice-to-expert trajectory or set of trajectories. Both types of analyses
can help detect the complex and dynamic acquisition and application of clusters of KSAs that
contribute to more expert-like performance and higher Skill levels per Table 1, both within and
between situated contexts of STEM-based, ill-structured problems.  Such assessment can also track learners’
process of self-organized learning, personalizing both the learning and assessment processes,
rather than providing one-size-fits-all, coarse evaluations of where students are in the spectrum
of their learning in STEM disciplines and practices at one point in time, as occurs in most current
high-stakes testing scenarios.
Providing Design Guidelines for CBLE-based Experiences in Adaptive Expertise
This study underscores the importance of grounding the design of a CBLE for adaptive
expertise in learning theory and in evidence-based practices, particularly those related to the
situated context of STEM disciplines. Given that most problem-solving experts and related
studies largely agree on problem-solving steps, strategies, and schemata as documented in the
domain analysis detailed in Appendix A, framing novel problems within a well-designed system
(e.g., Azevedo & Jacobson, 2008; Bottge, Rueda, Kwon, Grant, & LaRoque, 2009; Liu et al.,
2009; Pellegrino & Brophy, 2008) that consistently delivers novel problems in a common
framework would enable instructional designers to focus on the provision of multiple problems
and problem types of increasing levels of sophistication, giving learners more PFL opportunities.
While problems themselves may be extremely complex and ill-structured, the scaffolds
themselves must be well designed to allow Skill development at optimal levels and to make
explicit learners’ mental models of their problem-solving schemata.  The further development of
common measures and scoring systems that accurately capture KSAs or KSA clusters would
support the development of strong microdevelopmental rulers, as well as identify the type and timing of scaffolds
that best support the acquisition of new KSAs according to a learner’s individual PFL trajectory.
These scaffolds have been difficult to pre-construct (e.g., Belland, 2011), so guidelines based on
records of actual learner performances could be useful to the field.
Recommendations for Future Research Questions
Future research should involve greater numbers of educators and also large numbers of
students, with a greater number of problems over longer periods of time, which was not possible
given the constraints of this exploratory study and the simple model used in it. The integration
of more advanced diagnostic tools and ECD-associated psychometric models (e.g., based on a
variety of commonly used assessment techniques such as Item Response Theory, latent class and
factor analyses, Bayesian networks, Rasch analysis, multi-dimensional random coefficients
multinomial logit models etc.) would also enhance future studies. Assessments might compare
teachers in schools with strong STEM and/or problem-solving or design-based curricula to those
without. Studies could also be conducted to compare student progressions based on the level of
their teachers’ PFL, based on the dynamic-skills-theory-based idea that learners’ progressions are
optimal with supports.
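As a minimal illustration of one of the psychometric techniques named above, a one-parameter (Rasch) item response model estimates the probability that a learner of a given ability answers an item of a given difficulty correctly. The sketch below is purely hypothetical; the ability and difficulty values are invented for illustration and are not drawn from this study's data.

```python
import math

def rasch_probability(theta, difficulty):
    """One-parameter (Rasch) item response model: the probability that a
    learner with ability `theta` answers an item of difficulty `difficulty`
    correctly. Both parameters sit on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# Hypothetical values: a learner slightly above an item's difficulty.
print(round(rasch_probability(theta=0.5, difficulty=0.0), 3))  # 0.622
```

In a CBLE-based assessment, fitted difficulty parameters for problem-solving items would let learner responses be placed on a common novice-to-expert scale, which is what makes such models attractive candidates for microdevelopmental rulers.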
With a larger participant pool and more sequential learning activities, time and transfer
analysis could be further supported through time-series analysis to identify patterns in the data
over time vis-à-vis the trajectories of the independent variables (KS, II, and M, the inputs to
PFL) and in turn to the dependent variable (AE). For example, statistical techniques that detect
repeating patterns and any statistical dependency between values at various times would be of
interest as they might identify when key branches of knowledge, skills, and abilities in a
developmental web converge and cluster to support the development of a higher order Skill
level, per Fischer et al.’s (2009) conceptualization (or when their absence suggests a potential
drop). Statistical methods that estimate trends and generate forecasts and predictions about
students’ novice-to-expert learning trajectories and levels of adaptive expertise could help
characterize the trajectories of KS, II, and M (e.g., changes over a given period of time, rates and
directions of changes etc.) or predict learner progressions and emergent necessary scaffolds.
Other enhancements might include models for analyzing complex dynamic systems, particularly
in looking at how more stable patterns (indicating above ZPD performance) establish themselves
out of unstable, fluctuating patterns (indicating below to within ZPD performance).
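One of the simplest measures of the statistical dependency described above is the lag-k autocorrelation of a learner's score series: values near zero suggest fluctuating, unstable performance, while values approaching one suggest a stabilizing pattern. The sketch below uses invented scores solely to illustrate the computation; it is not an analysis performed in this study.

```python
from statistics import mean

def autocorrelation(series, lag):
    """Lag-k autocorrelation: the dependency between scores taken `lag`
    observations apart, normalized by the overall variance of the series."""
    m = mean(series)
    variance = sum((x - m) ** 2 for x in series)
    covariance = sum((series[i] - m) * (series[i + lag] - m)
                     for i in range(len(series) - lag))
    return covariance / variance

# Hypothetical per-step scores for one learner across ten observations.
scores = [2, 3, 2, 4, 3, 5, 4, 6, 5, 7]
print(round(autocorrelation(scores, lag=1), 3))  # 0.321
```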
Studies that consider what combination of KSAs and what levels among them propel
students toward higher levels of adaptive expertise might offer insights into pedagogical
emphases in scaffolding learners in their PFL, whether for teachers or for their students. That
might also lead to a more precise understanding of clusters of KSAs necessary to different levels
of adaptive expertise and monitoring progress toward them, as well as potentially contribute to
accelerated future learning (Chi & VanLehn, 2007; Li, Cohen, & Koedinger, 2010). It might
also guide curriculum design in terms of what learners must revisit frequently as they reorganize
their understandings at higher, more complex levels. This concept is not new in education and
learning theory, but has not been considered comprehensively in the context of adaptive
expertise, including dynamic, developmental progressions in the creative and affective
dimensions in particular. Other studies could examine whether specific clusters of sub-variables
vary in their importance depending on the problem-solving step. As a notional example, a
combination of information-seeking skills (KS), curiosity (II), and interest/attitude (M) might
have greater weight in Step 2, while strategic knowledge of hypothesis/design formation (KS),
generativity (II), and risk taking (M) might have greater weight in Step 3.
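The notional weighting just described could be expressed as a simple weighted combination of sub-variable scores per problem-solving step. All names and weights below are hypothetical placeholders for illustration, not values derived from this study.

```python
# Hypothetical per-step weights over KS, II, and M sub-variables
# (illustrative placeholders only, not findings from this study).
STEP_WEIGHTS = {
    "step_2": {"information_seeking": 0.40, "curiosity": 0.35, "interest": 0.25},
    "step_3": {"hypothesis_formation": 0.40, "generativity": 0.35, "risk_taking": 0.25},
}

def weighted_step_score(step, sub_scores):
    """Combine a learner's sub-variable scores (on a 0-1 scale) using the
    hypothetical weights assigned to the given problem-solving step."""
    weights = STEP_WEIGHTS[step]
    return sum(weight * sub_scores[name] for name, weight in weights.items())

sub_scores = {"information_seeking": 0.8, "curiosity": 0.5, "interest": 0.6}
print(round(weighted_step_score("step_2", sub_scores), 3))  # 0.645
```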
Future research could also study the role of co-constructive processes. In this study,
learners independently worked through the problems in order to isolate the characterization of
their unique pathways, recognizing that sequences of skill acquisition will differ for individuals,
based on their prior knowledge and experiences, as well as other personal characteristics (e.g.,
Dawson & Stein, 2008; Fischer, 2008; Fischer & Bidell, 1998).  That is important for teaching
and for CBLE design in terms of enhancing the ability to recognize, and subsequently plan
support for, these variations in individual learning needs. In the units for this study, participants
did have opportunities to compare their individual answers to those of others as part of
modifying their original first responses in certain cases, but this limited type of interaction is not
fully situated. Because collaborative learning opportunities are key to learning in general, and
imperative to authentic, real-world problem solving, future studies of co-constructive skill
development would contribute a greater understanding of how the acquisition and convergence
of skills related to adaptive expertise in STEM-based, novel problem solving is optimized
through peer-to-peer interactions, particularly through the sharing of mental models, as well as
problem-solving strategies and flexibility in their use.
Conclusions
For both national well-being and educational equity, this study contributes to two key
reciprocal goals for 21st-century learning: (a) optimizing instruction that supports PFL for
teachers and students, and through it, adaptive expertise in ill-structured STEM problem-solving,
and (b) making PFL observable in assessments (Mylopoulos, 2012). It also supports the call for
microdevelopmental rulers (Fischer, 2008; Fischer, van Geert, Molenaar, Lerner, & Newell,
2014; Jacobs, 2010) that measure the emergence and development of adaptive expertise
(DeHaan, 2009; Hatano & Oura, 2003; Hatano & Inagaki, 1986), as well as for authentic,
situated STEM-domain-based experiences that build KSAs toward higher order mental models
and Skill levels.
Per the findings from this study and prior research described in Chapter 1, a contributing
factor to poor U.S. student performance in solving complex, real-world STEM and other
problems is a lack of teacher preparation (and the nature of their preparation for future
learning in pre- and in-service environments), both in ill-structured problem solving itself, as
well as pedagogical techniques for teaching it. Teachers thus may be ill equipped to provide the
scaffolds students need for optimal vs. functional performance and growth (per Dynamic Skill
Theory) in situated, novel, ill-structured STEM problem solving. This conclusion is less about
the capability and potential of teachers than it is about providing them with resources and learning
tools that allow them to develop expertise in ill-structured problem solving in rapidly evolving
and emerging STEM fields, as well as an expertise in teaching it to their students, who will
require adaptive expertise to meet the needs of a 21st Century, knowledge-based economy.
For the educators in high-need and underserved schools in this study, key findings
include: a) highly complex and dynamic individual microdevelopmental patterns within and
between problems, b) the most highly variable and unstable trajectories in the motivation
dimension, followed by the inquiry and innovation dimension, c) more stable trajectories in the
knowledge and skill dimension, though low-to-average, middle-school KSA demonstrations, and
d) improvement trends in explicit problem-solving schemata, though below an optimal level of
sophistication. Many of these findings reflect the educator participants’ novice level and their
lack of prior “preparation for future learning,” as also evidenced by their self-reports on their
backgrounds and experience. Yet, this lack of prior knowledge and support also makes them
and other educators they represent highly desirable as a developmental audience, not only to
improve their and their students’ learning, but also to expose the microgenesis of KSAs in gap
areas of prior learning, and scaffold to build them further. Studying the developmental patterns
of their growth could clarify what precursor KSAs and clusters of KSAs lead to higher levels of
capabilities and performance along a novice-to-expert trajectory, and even help define measures
(“rulers”) for the levels themselves.
Focusing on how learners acquire KSAs, and how to teach them to acquire KSAs, in
situated experiences is key to developing STEM-domain-specific expertise, preparing them for
the 21st-century, knowledge-based economy.  Situated CBLEs with expansive opportunities for
practice in solving problems of the same or differing types may provide teachers with a key
resource. From a theoretical view, research with CBLEs can close gaps in knowledge about
microdevelopments in learners’ ill-structured problem-solving KSAs and mental models, as well
as in patterns in their clustered development through time. They also have the potential to allow
highly targeted supports for personalized optimal rather than functional learning at a mass scale.
Implications for theory include the confirmation of the dynamic complexity and
uniqueness of learner microdevelopmental patterns in general, and in the specific area of ill-
structured, STEM problem solving. Findings reflect the importance of including motivation in
models of adaptive expertise, and support PFL arguments about the nature of transfer and the
nascent need for microdevelopmental rulers and assessments, particularly in building adaptive
expertise in ill-structured, STEM problem solving.
Implications for practice include the importance of designing scaffolded, evidence-based
instruction within a CBLE to support PFL for learners’ individual novice-to-expert trajectories,
particularly for those teaching and learning in high-need environments. They also suggest the
utility to both teaching and learning of developing CBLE-based assessment systems that measure
microdevelopments in KSAs related to ill-structured problem solving.
In short, for national wellbeing and educational equity, this study contributes to two
reciprocal goals for 21st-century learning (e.g., Fischer, van Geert, Molenaar, Lerner, & Newell,
2014). The first is optimizing instruction that supports adaptive expertise (DeHaan, 2009;
Hatano & Oura, 2003; Hatano & Inagaki, 1986) and preparation for future learning (Bransford &
Schwartz, 1999) in situated, ill-structured problem solving, particularly for teachers and students
in high-need environments where opportunities for PFL may be traditionally limited. The
second is making PFL observable in assessments (Mylopoulos, 2012), responding to the call for
microdevelopmental rulers that help describe the process by which learners’ capabilities emerge
(Fischer et al., 2014; Jacobs, 2010), in this case with a focus on adaptive expertise in STEM-
based, ill-structured problem solving. By developing human competencies and habits of mind
that build higher order understandings about the nature of the world and solutions to complex
21st-century challenges, all learners would have the opportunity to excel along strong personal
learning trajectories, as well as to create strong growth trajectories for greater family,
community, national, and global wellbeing.
References
Adams, W.K., Reid, S., LeMaster, R., McKagan, S.B., Perkins, K.K., Dubson, M., & Wieman,
C.E. (2008a). A study of educational simulations part I: Engagement and learning.
Journal of Interactive Learning Research, 19(3), 397-419.
Adams, W.K., Reid, S., LeMaster, R., McKagan, S.B., Perkins, K.K., Dubson, M., & Wieman,
C.E. (2008b). A study of educational simulations part II: Interface design. Journal of
Interactive Learning Research, 19(4), 551-577.
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing:
A revision of Bloom's taxonomy of educational objectives. New York: Longman.
Asia Pacific Economic Cooperation. (2008). Report of the 2nd APEC education reform
symposium on 21st century competencies. Retrieved from
http://hrd.apec.org/index.php/21st_Century_Competencies
Australian Ministerial Council on Education, Employment, Training and Youth Affairs. (2007).
Information and communication technologies literacy: Years 6 and 10 report 2005.
Retrieved January 31, 2010, from: http://www.mceecdya.edu.au/mceecdya/
nap_ictl_2005_years_6_and_10_reportpress_release,22065.html
Azevedo, R. (2005). Using hypermedia as a metacognitive tool for enhancing student learning?
The role of self-regulated learning. Educational Psychologist, 40(4), 199-209.
Azevedo, R., Cromley, J.G., & Seibert, D. (2004). Does adaptive scaffolding facilitate students’
learning in hypermedia? Contemporary Educational Psychology, 29, 344-370.
Azevedo, R., Greene, J. A., & Moos, D. C. (2007). The effect of a human agent’s external
regulation upon college students’ hypermedia learning. Metacognition Learning, 2, 67-
87.
Azevedo, R., & Jacobson, M. (2008). Advances in scaffolding learning with hypertext and
hypermedia: a summary and critical analysis. Educational Technology Research &
Development, 56(1), 93-100.
Azevedo, R. & Strain, A.C. (2011). Integrating cognitive, metacognitive, and affective
regulatory processes with MetaTutor. In R.A. Calvo & S.K. D’Mello (Eds.), New
perspectives on affect and learning technologies: Explorations in the learning sciences,
instructional systems, and performance technologies (pp. 141-154). New York, NY:
Springer. doi: 10.1007/978-1-4419-9625-1_11
Bagley, E., & Shaffer, D. W. (2009). When people get in the way: Promoting civic thinking
through epistemic gameplay. International Journal of Gaming and Computer-Mediated
Simulations (IJGCMS), 1(1), 36-52.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: what is involved
and what is the role of the computer science education community? ACM Inroads, 2(1),
48-54.
Behrens, J.T., Mislevy, R.J., DiCerbo, K.E., & Levy, R. (2010). An evidence-centered design for
learning and assessment in the digital world. (CRESST Report 778). Los Angeles, CA:
University of California, National Center for Research on Evaluation, Standards, and
Student Testing (CRESST).
Bell, B. S. & Kozlowski, S. W. J. (2009). Toward a theory of learner-centered training design:
An integrative framework of active learning [Electronic version]. In S. W. J. Kozlowski
& E. Salas (Eds.), Learning, training, and development in organizations (pp. 263-300).
New York: Routledge.
Bell, E., Horton, G., Blashki, G., & Seidel, B. M. (2012). Climate change: Could it help develop
‘adaptive expertise’? Advances in Health Sciences Education, 17(2), 211-224.
Belland, B. R. (2010). Portraits of middle-school students constructing evidence-based
arguments during problem-based learning: The impact of computer-based scaffolds.
Educational Technology Research and Development, 58(3), 285-309.
Belland, B. R. (2011). Distributed cognition as a lens to understand the effects of scaffolds: The
role of transfer of responsibility. Educational Technology Research and Development,
58(3), 285-309.
Belland, B.R., Glazewski, K.D., & Richardson, J.C. (2011). Problem-based learning and
argumentation: Testing a scaffolding framework to support middle school students’
creation of evidence-based arguments. Instructional Science, 39, 667-694.
Bennett, R. E., Persky, H., Weiss, A. R., & Jenkins, F. (2007). Problem solving in technology-
rich environments: A report from the NAEP technology-based assessment project. U.S.
Department of Education. Washington, DC: National Center for Education Statistics.
Bereiter, C., & Scardamalia, M. (1993). Surpassing ourselves: An inquiry into the nature and
implications of expertise. Chicago: Open Court.
Berthold, K., Nückles, M., & Renkl, A. (2007). Do learning protocols support learning strategies
and outcomes? The role of cognitive and metacognitive prompts. Learning and
Instruction, 17(5), 564-577.
Billing, D. (2007). Teaching for transfer of core/key skills in higher education: Cognitive skills.
Higher Education, 53(4), 483-516.
Blumenfeld, P. C., Kempler, T. M., & Krajcik, J. S. (2006). Motivation and cognitive
engagement in learning environments. In R. K. Sawyer (Ed.), The Cambridge handbook
of the learning sciences (pp. 475-488). New York: Cambridge University Press.
Bottge, B. A., Rueda, E., Kwon, J. M., Grant, T., & LaRoque, P. (2009). Educational Technology
Research and Development, 57, 529-552.
Boyer, K. E., Phillips, R., Wallis, M., Vouk, M., & Lester, J. (2008). Balancing cognitive and
motivational scaffolding in tutorial dialogue. In Intelligent Tutoring Systems (pp. 239-
249). Berlin, Germany: Springer.
Bransford, J., Mosborg, S., Copland, M., Honig, M., Nelson, H. G., Gawel, D., Phillips, R. S.,
& Vye, N. (2010). Adaptive people and adaptive systems: Issues of learning and design.
In A. Hargreaves, M. Fullan, D. Hopkins, & A. Lieberman (Eds.), The second
international handbook of educational change (pp. 825-856). Dordrecht, the
Netherlands: Springer.
Bransford, J.D. & Schwartz, D.L. (1999). Rethinking transfer: A simple proposal with multiple
implications. Review of Research in Education, 24, 61-100.
Bransford, J., Slowinski, M., Vye, N., & Mosborg, S. (2008). The learning sciences, technology
and designs for educational systems: Some thoughts about change. In Learners in a
Changing Learning Landscape (pp. 37-67). Dordrecht, the Netherlands: Springer.
Britner, S. L., & Pajares, F. (2006). Sources of science self-efficacy beliefs of middle school
students. Journal of Research in Science Teaching, 43(5), 485-499.
Brophy, J. (1999). Toward a model of the value aspects of motivation in education: Developing
appreciation for particular learning domains and activities. Educational Psychologist,
34(2), 75-85.
Brophy, J. (2008). Developing students’ appreciation of what is taught in school. Educational
Psychologist, 43, 132-141.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning.
Educational Researcher, 18(1), 32-42.
Bybee, R. W., Taylor, J. A., Gardner, A., Van Scotter, P., Powell, J. C., Westbrook, A., &
Landes, N. (2006). The BSCS 5E instructional model: Origins and effectiveness.
Colorado Springs, CO: BSCS.
Calvo, R. A., & D'Mello, S. (2010). Affect detection: An interdisciplinary review of models,
methods, and their applications. IEEE Transactions on Affective Computing, 1(1), 18-37.
Carnoy, M., & Levin, H. M. (1985). Schooling and work in the democratic state. Stanford, CA:
Stanford University Press.
Chen, C.-H., & She, H.-C. (2012). The impact of recurrent on-line synchronous scientific
argumentation on students' argumentation and conceptual change. Educational
Technology & Society, 15(1), 197–210.
Chen, J. A., & Pajares, F. (2010). Implicit theories of ability of Grade 6 science students:
Relation to epistemological beliefs and academic motivation and achievement in science.
Contemporary Educational Psychology, 35(1), 75-87.
Cheng, G., & Chau, J. (2013). Exploring the relationship between students' self-regulated
learning ability and their ePortfolio achievement. The Internet and Higher Education, 17,
9-15.
Chi, M., & VanLehn, K. (2007). Accelerated future learning via explicit instruction of a problem
solving strategy. In R. Luckin, K. R. Koedinger, & J. Greer (Eds.), Artificial intelligence
in education (pp. 409-416). Amsterdam, the Netherlands: IOS Press.
Chiou, C. C. (2008). The effect of concept mapping on students’ learning achievements and
interests. Innovations in Education and Teaching International, 45(4), 375-387.
Choi, I., & Lee, K. (2009). Designing and implementing a case-based learning environment for
enhancing ill-structured problem solving: Classroom management problems for
prospective teachers. Educational Technology Research and Development, 57(1), 99-129.
Clark, R. C., Mayer, R. E., & Thalheimer, W. (2003). E-learning and the science of instruction:
Proven guidelines for consumers and designers of multimedia learning. Performance
Improvement, 42(5), 41-43.
Clark, R. E., Yates, K., Early, S., Moulton, K., Silber, K. H., & Foshay, R. (2010). An analysis of
the failure of electronic media and discovery-based learning: Evidence for the
performance benefits of guided training methods. Handbook of training and improving
workplace performance, 1, 263-287.
Clarke-Midura, J., & Dede, C. (2010). Assessment, technology, and change. Journal of Research
on Technology in Education, 42(3), 309-328.
Coalition of the Assessment & Teaching of 21st Century Skills. (2010). Assessment & teaching
of 21st century skills: Status report as of January 2010. Retrieved from
http://atc21s.org/default.aspx
Conley, M.W. (2008). Cognitive strategy instruction for adolescents: What we know about the
promise, what we don’t know about the potential. Harvard Educational Review, 78(1),
84-106.
Corcoran, T., Mosher, F.A., & Rogat, A. (2009). Learning progressions in science: An evidence
based approach to reform. Teachers College-Columbia University, NY: Consortium for
Policy Research in Education.
Creswell, J.W. (2009). Research design: Qualitative, quantitative, and mixed methods
approaches. London and Thousand Oaks: Sage Publications.
Cummins, J., Brown, K. & Sayers, D. (2007). Literacy, technology, and diversity: Teaching for
success in changing times. Boston: Allyn and Bacon.
Dabbagh, N., & Kitsantas, A. (2005). Using web-based pedagogical tools as scaffolds for self-
regulated learning. Instructional Science, 33, 513-540.
Darling-Hammond, L. (2007). The flat earth and education: How America’s commitment to
equity will determine our future. Educational Researcher, 36(6), 318-334.
Dawson, T. L., & Stein, Z. (2008). Cycles of research and application in education: Learning
pathways for energy concepts. Mind, Brain, and Education, 2(2), 90-103.
Day, S. B., & Goldstone, R. L. (2012). The import of knowledge export: Connecting findings
and theories of transfer of learning. Educational Psychologist, 47(3), 153-176.
Dede, C. (2009). Learning context: Gaming, simulations, and science learning in the classroom.
National Research Council. Retrieved from
http://www7.nationalacademies.org/bose/Dede_Gaming_CommissionedPaper.pdf
DeHaan, R. L. (2009). Teaching creativity and inventive problem solving in science. CBE-Life
Sciences Education, 8, 172-181.
De Jong, T., Weinberger, A., Girault, I., Kluge, A., Lazonder, A. W., Pedaste, M., ... & Zacharia,
Z. C. (2012). Using scenarios to design complex technology-enhanced learning
environments. Educational Technology Research and Development, 60(5), 883-901.
D'Mello, S. K., Craig, S. D., & Graesser, A. C. (2009). Multimethod assessment of affective
experience and expression during deep learning. International Journal of Learning
Technology, 4(3), 165-187.
D'Mello, S. K., & Graesser, A. C. (2010). Mining bodily patterns of affective experience during
learning. In R. S. J. D. Baker, A. Merceron, & P. I. J. Pavlik (Eds.), Proceedings of the
3rd international conference on educational data mining (pp. 31-40). Pittsburgh, PA:
International Educational Data Mining Society.
Dohn, N. B. (2007). Knowledge and skills for PISA: Assessing the assessment. Journal of
Philosophy of Education, 41(1), 1-16.
Duncan, J., Schramm, M., Thompson, R., & Dumontheil, I. (2012). Task rules, working
memory, and fluid intelligence. Psychonomic Bulletin & Review, 19(5), 864-870.
Dunleavy, M. & Simmons, B. (2011). Assessing learning and identity in augmented reality
science games. Serious Educational Game Assessment, 221-240. doi: 10.1007/978-94-
6091-329-7_14
Eccles, J. (2009). Who am I and what am I going to do with my life? Personal and collective
identities as motivators of action. Educational Psychologist, 44(2), 78-89.
Eseryel, D., Ge, X., Ifenthaler, D., & Law, V. (2011). Dynamic modeling as a cognitive
regulation scaffold for developing complex problem-solving skills in an educational
massively multiplayer online game environment. Journal of Educational Computing
Research, 45(3), 265-286.
Eseryel, D., Ifenthaler, D., & Ge, X. (2011). Alternative assessment strategies for game-based
learning environments. In D. Ifenthaler, P. Kinshuk, D. G. Isaias, D. G. Sampson, & J.
M. Spector (Eds.), Multiple perspectives on problem solving and learning in the digital
age (pp. 159-178). New York: Springer.
Feltovich, P.J., Spiro, R.J., & Coulson, R.L. (1993). Learning, teaching, and testing for complex
conceptual understanding. In N. Frederiksen, R. J. Mislevy, I. I. Bejar (Eds.), Test theory
for a new generation of test (pp. 181-217). Hillsdale, NJ: Lawrence Erlbaum Associates.
Fischer, K. W. (1980). A theory of cognitive development: The control and construction of
hierarchies of skills. Psychological Review, 87(6), 477-531.
Fischer, K.W. (2008). Dynamic cycles of cognitive and brain development: Measuring growth
in mind, brain, and education. In A.M. Battro, K.W. Fischer, & P. Léna (Eds.), The
educated brain (pp. 127-150). Cambridge, U.K.: Cambridge University Press.
Fischer, K. W. (2009). Mind, brain, and education: Building a scientific groundwork for
learning and teaching. Mind, Brain, and Education, 3(1), 3-16.
Fischer, K. W., & Bidell, T. R. (1998). Dynamic development of psychological structures in
action and thought. In W. Damon and R. M. Lerner (Eds.), Handbook of child
psychology: Theoretical models of human development (5th ed., pp. 467-561). New
York: Wiley and Sons.
Fischer, K. W., & Paré-Blagoev, J. (2000). From individual differences to dynamic pathways of
development. Child Development, 71(4), 850-853.
Fischer, K. W., van Geert, P., Molenaar, P., Lerner, R.M., & Newell, K.M. (2014). Dynamic
development of brain and behavior. Handbook of developmental systems theory and
methodology, 287-315.
Frenzel, A. C., Goetz, T., Pekrun, R., & Watt, H. M. (2010). Development of mathematics
interest in adolescence: Influences of gender, family, and school context. Journal of
Research on Adolescence, 20(2), 507-537.
Frezzo, D. C., Behrens, J. T., & Mislevy, R. J. (2010). Design patterns for learning and
assessment: Facilitating the introduction of a complex simulation-based learning
environment into a community of instructors. Journal of Science Education and
Technology, 19(2), 105-114.
Garcia, G. (2002). Student cultural diversity: Understanding and meeting the challenge (3rd
ed.). Boston, MA: Houghton Mifflin.
Ge, X., & Hardré, P. (2010). Self-processes and learning environment as influences in the
development of expertise in instructional design. Learning Environments Research, 23,
23-41.
Ge, X., & Land, S. M. (2003). Scaffolding students’ problem-solving processes in an ill-
structured task using question prompts and peer interactions. Educational Technology
Research and Development, 51(1), 21-38.
Ge, X., Planas, L.G. & Er, N. (2010). A cognitive support system to scaffold students’ problem-
based learning in a web-based learning environment. Interdisciplinary Journal of
Problem-based Learning, 4(10), 30-56.
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive
Science, 7(2), 155-170.
Gerjets, P., Scheiter, K., & Catrambone, R. (2004). Designing instructional examples to reduce
intrinsic cognitive load: Molar versus modular presentation of solution procedures.
Instructional Science, 32(1-2), 33-58.
Gonzalez, H. L., Palencia, A. P., Umana, L. A., Galindo, L., & Villafrade, M. L. A. (2008).
Mediated learning experience and concept maps: A pedagogical tool for achieving
meaningful learning in medical physiology students. Advances in Physiology Education,
32, 312-316.
Goode, N., & Beckmann, J. F. (2010). You need to know: There is a causal relationship between
structural knowledge and control performance in complex problem solving tasks.
Intelligence, 38(3), 345-352.
Gouli, E., Gogoulou, A., Papanikolaou, K. A., & Grigoriadou, M. (2005). How to qualitatively +
quantitatively assess concept maps: The case of COMPASS. In Artificial Intelligence in
Education, (pp. 804-806). Athens: IOS Press.
Granott, N., Fischer, K. W., & Parziale, J. (2002). Bridging to the unknown: A transition
mechanism in learning and development. In N. Granott & J. Parziale (Eds.),
Microdevelopment: Transition processes in development and learning (pp. 131-156).
Cambridge, England: Cambridge University Press.
Greene, J. A., & Azevedo, R. (2009). A macro-level analysis of SRL processes and their
relations to the acquisition of a sophisticated mental model of a complex system.
Contemporary Educational Psychology 34, 18-29.
Greene, J. A., Bolick, C. M., & Robertson, J. (2009). Fostering historical knowledge and
thinking skills using hypermedia learning environments: the role of self-regulated
learning. Computers & Education, 54(1), 230-243.
Güss, C. D., Tuason, M. T., & Gerhard, C. (2010). Cross-national comparisons of complex
problem-solving strategies in two microworlds. Cognitive Science, 34, 489-520.
Haertel, G., DeBarger, A. H., Villalba, S., Hamel, L., & Colker, A. M. (2010). Integration of
evidence-centered design and universal design principles using PADI, an online
assessment delivery system. Menlo Park, CA: SRI International.
Hannafin, M. J., & Kim, M. C. (2003). In search of a future: A critical analysis of research on
web-based teaching and learning. Instructional Science, 31(4-5), 347-351.
Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K.
Hakuta (Eds.), Child development and education in Japan (pp. 27-36). New York:
Freeman.
Hatano, G., & Oura, Y. (2003). Commentary: Reconceptualizing school learning using insight
from expertise research. Educational Researcher, 32(8), 26-29.
Hellen, M. (2011). Information handling and adaptive expertise. Education and Information
Technologies, 16(2), 107-122.
Hill, J. R., & Hannafin, M. J. (2001). Teaching and learning in digital environments: The
resurgence of resource-based learning. Educational Technology Research and
Development, 49(3), 37-52.
Hmelo-Silver, C. E. (2004). Problem-based learning: What and how do students learn?
Educational Psychology Review, 16(3), 235-266.
Hmelo-Silver, C. E., & Azevedo, R. (2006). Understanding complex systems: Some core
challenges. The Journal of the Learning Sciences, 15(1), 53-61.
Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in
problem-based and inquiry learning: a response to Kirschner, Sweller, and Clark (2006).
Educational Psychologist, 42, 99-107.
Hmelo-Silver, C. E., & Eberbach, C. (2012). Learning theories and problem-based learning. In
S. Bridges et al. (Eds.), Problem-based learning in clinical education (Innovation and
Change in Professional Education 8, pp. 3-17). Dordrecht, the Netherlands: Springer.
Hoffman, M., Blake, J., McKeon, J., Leone, S., & Schorr, M. (2005). A critical computer
literacy course. Journal of Computing Sciences in Colleges, 20(5), 163-175.
Holman, D., Totterdell, P., Axtell, C., Stride, C., Port, R., Svensson, R., & Zibarras, L. (2012).
Job design and the employee innovation process: The mediating role of learning
strategies. Journal of Business and Psychology, 27(2), 177-191.
Hsin, C. T., & Wu, H. K. (2011). Using scaffolding strategies to promote young children’s
scientific understandings of floating and sinking. Journal of Science Education and
Technology, 20(5), 656-666.
Hsu, Y.-S., Lin, L.-F., Wu, H.-K., Lee, D.-Y., & Hwang, F.-K. (2012). A novice-expert study of
modeling skills and knowledge structures about air quality. Journal of Science Education
& Technology, 21, 588-606.
Hulleman, C. S., & Harackiewicz, J. M. (2009). Promoting interest and performance in high
school science classes. Science, 326(5958), 1410-1412.
ICT Literacy Panel. (2007). Digital transformation: A framework for ICT literacy. Princeton,
NJ: Educational Testing Service. Retrieved January 31, 2010, from:
http://www.ets.org/Media/Tests/Information_and_Communication_Technology_Literacy
/ictreport.pdf
Ifenthaler, D., Masduki, I., & Seel, N.M. (2011). The mystery of cognitive structures and how
we can detect it: Tracking the development of cognitive structures over time.
Instructional Science, 39, 41-61.
Immordino-Yang, M. H. (2010). Toward a microdevelopmental, interdisciplinary approach to
social emotion. Emotion Review, 2, 217-220.
Jacobs, T. O. (2010). On becoming more complex (and what to do about it). On the Horizon,
18(1), 62-70.
Jitendra, A. K., Star, J. R., Starosta, K., Leh, J. M., Sood, S., Caskie, G., ... & Mack, T. R.
(2009). Improving seventh grade students’ learning of ratio and proportion: The role of
schema-based instruction. Contemporary Educational Psychology, 34(3), 250-264.
Jonassen, D.H. (1994). Thinking technology. Educational Technology, 34(4), 34-37.
Jonassen, D. H. (1999). Designing constructivist learning environments. Instructional design
theories and models: A new paradigm of instructional theory, 2, 215-239.
Jonassen, D. H. (2007). Learning to solve complex, scientific problems. Mahwah, NJ: Lawrence
Erlbaum Associates.
Jonassen, D. H. (2009). Externally modeling mental models. In L. Moller et al. (Eds.),
Learning and instructional technologies for the 21st century (pp. 1-26). New York:
Springer.
Jonassen, D. H. (2010a). Assembling and analyzing the building blocks of problem-based
learning environments. In K. H. Silber & W. R. Foshay (Eds.), Handbook of improving
performance in the workplace, volume I: Instructional design and training delivery (pp.
361-394). San Francisco, CA: International Society for Performance Improvement.
Jonassen, D. H. (2010b). Research issues in problem solving. Paper presented at the 11th
International Conference on Education Research New Educational Paradigm for Learning
and Instruction. Retrieved from:
http://www.aect.org/publications/whitepapers/2010/JonassenICER.pdf
Jonassen, D. H. (2011). Design problems for secondary students. National Center for
Engineering and Technology Education. Retrieved from:
http://ncete.org/flash/research.php
Joyce, B., Weil, M., & Calhoun, E. (2009). Models of teaching and learning (8th ed.). Boston,
MA: Allyn and Bacon.
Kalyuga, S., Renkl, A, & Paas, F. (2010). Facilitating flexible problem solving: A cognitive
load perspective. Educational Psychology Review, 22, 175-186.
Kaplan, S.N. (2012). Depth and Complexity. In C. M. Callahan, H. L. Hertberg-Davis (Eds.).
Fundamentals of gifted education: Considering multiple perspectives (pp. 277-286). New
York, NY: Routledge.
Ketelhut, D.J., Nelson, B.C., Clarke, J., & Dede, C. (2010). A multi-user virtual environment
for building and assessing higher-order inquiry skills in science. British Journal of
Educational Technology, 41(1), 56-68.
Kim, M.C. & Hannafin, M.J. (2011). Scaffolding problem solving in technology-enhanced
learning environments. Computers & Education, 56, 403-417.
Kim, M. C., Hannafin, M. J., & Bryan, L. A. (2007). Technology-‐enhanced inquiry tools in
science education: An emerging pedagogical framework for classroom practice. Science
Education, 91(6), 1010-1030.
Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In J. Wright & P.
Marsden (Eds.), Handbook of survey research (2nd ed., pp. 263-313). West Yorkshire,
England: Emerald Group.
Kuhn, D., Zillmer, N., Crowell, A., & Zavala, J. (2013). Developing norms of argumentation:
Metacognitive, epistemological, and social dimensions of developing argumentative
competence. Cognition and Instruction, 31(4), 456-496.
Lajoie, S. P. (2005). Extending the scaffolding metaphor. Instructional Science, 33(5-6), 541-
557.
Lantz, H. (2004). Rubrics for assessing student achievement in science grades K-12. Thousand
Oaks, CA: Corwin Press.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New
York: Cambridge University Press.
Laxman, K. (2010). A conceptual framework mapping the application of information search
strategies to well- and ill-structured problem solving. Computers & Education, 55(2),
513-526.
Law, V., Ge, X., & Eseryel, D. (2011). An investigation of the development of a reflective
learning community in an ill-structured domain of instructional design. Knowledge
Management & E-Learning: An International Journal, 3(4), 513-533.
Lee, J., & Spector, J. M. (2012). Effects of model-centered instruction on effectiveness,
efficiency, and engagement with ill-structured problem solving. Instructional Science, 1-
21.
Lee, T. H., Shen, P. D., & Tsai, C. W. (2008). Applying web-enabled problem-based learning
and self-regulated learning to add value to computing education in Taiwan's vocational
schools. Educational Technology & Society, 11(3), 13-25.
Lee, W. L., Lim, K. Y., & Grabowski, B. (2009). Generative learning strategies and
metacognitive feedback to facilitate comprehension of complex science topics and self-
regulation. Journal of Educational Multimedia and Hypermedia, 18(1), 5-25.
Lenhart, A., Madden, M., & Hitlin, P. (2005). Teens and technology: Youth are leading the
transition to a fully wired and mobile nation. Washington, DC: Pew Internet & American
Life Project.
Lent, R. W., Brown, S. D., & Larkin, K. C. (1984). Relation of self-efficacy expectations to
academic achievement and persistence. Journal of Counseling Psychology, 31, 356-362.
Li, N., Cohen, W., & Koedinger, K. (2010). A computational model of accelerated future
learning through feature recognition. In Intelligent Tutoring Systems (pp. 368-370).
Berlin/Heidelberg, Germany: Springer.
Lim, K.Y., Lee, H.W., & Grabowski, B. (2009). Does concept-mapping strategy work for
everyone? The levels of generativity and learners self-regulated learning skills. British
Journal of Educational Technology, 40(4), 606-618.
Limón, M. (2001). On the cognitive conflict as an instructional strategy for conceptual change: A
critical appraisal. Learning and instruction, 11(4), 357-380.
Liu, M., Horton, L. R., Corliss, S. B., Svinicki, M. D., Bogard, T., Kim, J., & Chang, M. (2009).
Students’ problem solving as mediated by their cognitive tool use: A study of tool use
patterns. Journal of Educational Computing Research, 40(1), 111-139.
Lopez-Mesa, B., & Vidal, R. (2006). Novelty metrics in engineering design experiments. In
Proceedings of the 9th International Design Conference DESIGN 2006 (pp. 557-564).
Mann, M. M., & LeClair, J. (2009). Concept mapping for meaningful learning, knowledge
retention, and transfer. Transactions of the American Nuclear Society, 101, 149-150.
Martin, L., & Schwartz, D. L. (2009). Prospective adaptation in the use of external
representations. Cognition and Instruction, 27(4), 370-400.
Mascolo, M. F., & Fischer, K. W. (1998). The development of self through the coordination of
component systems. Self-awareness: Its nature and development, 332-384.
Mason, J., & Morrow, R. M. (2006). YACLD: Yet another computer literacy definition. Journal
of Computing in Small Colleges, 21(5), 94-100.
Mayer, R. E. (2003). Elements of a science of e-learning. Journal of Educational Computing
Research, 29(3), 297-313.
Mayer, R. E. (2008). Learning and instruction. Upper Saddle River, NJ: Pearson Education.
Mayer, R.E. (2010). Fostering scientific reasoning with multimedia instruction. In H.S. Waters
& W. Schneider (Eds.). Metacognition, Strategy Use, and Instruction (pp. 160-175).
New York, NY: The Guilford Press.
Mayer, R. E. (2011). Applying the science of learning. Boston, MA: Pearson Education.
Mayer, R. E. & Wittrock, M. C. (2006). Problem solving. In P. A. Alexander, & P. H. Winne
(Eds.), Handbook of educational psychology (2nd ed., pp. 287-304). New York:
Macmillan.
McKenna, A. F., & Hutchison, M. A. (2008). Investigating knowledge fluency in engineering
design: Two studies related to adaptive expertise. Paper presented at On Being an
Engineer: Cognitive Underpinnings of Engineering Education, Texas Tech University,
Lubbock, TX.
McQuiggan, S., Lee, S., & Lester, J. (2007). Early prediction of student frustration. Affective
Computing and Intelligent Interaction, 698-709.
Miller, M., Linn, R., & Gronlund, N. (2012). Measurement and assessment in teaching (11th
ed.). Columbus, OH: Pearson.
Mislevy, R. J., Chudowsky, N., Draney, K., Fried, R., Gaffney, T., Haertel, G., et al. (2003).
Design patterns for assessing science inquiry (PADI Technical Report 1). Menlo Park,
CA: SRI International.
Mislevy, R., & Riconscente, M. (2005). Evidence-centered assessment design: Layers,
structures, and terminology (PADI Technical Report 9). Menlo Park, CA: SRI
International.
Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational
assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3-62.
Montague, M. & Dietz, S. (2009). Evaluating the evidence base for cognitive strategy
instruction and mathematical problem solving. Exceptional Children, 75(3), 285-302.
Moos, D. C., & Azevedo, R. (2008). Self-regulated learning with hypermedia: The role of prior
domain knowledge. Contemporary Educational Psychology, 33(2), 270-298.
Moos, D. C., & Marroquin, E. (2010). Multimedia, hypermedia, and hypertext: Motivation
considered and reconsidered. Computers in Human Behavior, 26(3), 265-276.
Murnane, R.J. (2008). Educating urban children. Working Paper No. 13791. Cambridge, MA:
National Bureau of Economic Research.
Mylopoulos, M. (2012). Competence as expertise: Exploring constructions of knowledge in
expert practice. In B. D. Hodges, & L. Lingard (Eds.), The question of competence:
Reconsidering medical education in the twenty-first century (pp. 97-112). Ithaca, NY:
Cornell University Press.
Mylopoulos, M., Regehr, G., & Ginsburg, S. (2011). Exploring residents' perceptions of
expertise and expert development. Academic Medicine, 86(10), S46-S49.
National Academies. (2006). Rising above the gathering storm: Energizing and employing
America for a brighter economic future. Retrieved from:
http://www.nap.edu/catalog/11463.html
National Academies. (2010a). Expanding underrepresented minority participation: America’s
science and technology talent at a crossroads. Retrieved from:
http://www.nap.edu/catalog/12984.html
National Academies. (2010b). Rising above the gathering storm, revisited: Rapidly approaching
Category 5. Retrieved from: http://www.nap.edu/catalog/12999.html
National Research Council. (2005). America’s lab report: Investigations in high school science.
Washington, DC: The National Academies Press.
National Research Council. (2007). Taking science to school: Learning and teaching science in
grades K-8. Washington, DC: The National Academies Press.
National Research Council. (2010). Exploring the intersection of science education and 21st
century skills: A workshop summary. Margaret Hilton, Rapporteur. Board on Science
Education, Center for Education, Division of Behavioral and Social Sciences and
Education. Washington, DC: The National Academies Press.
National Research Council. (2011a). A framework for K-12 science education: Practices,
crosscutting concepts, and core ideas. Committee on a Conceptual Framework for New
K-12 Science Education Standards. Board on Science Education, Division of Behavioral
and Social Sciences and Education. Washington, DC: The National Academies Press.
National Research Council. (2011b). Learning science through computer games and
simulations. Committee on Science Learning: Computer Games, Simulations, and
Education, M. A. Honey and M. L. Hilton (Eds.), Board on Science Education, Division
of Behavioral and Social Sciences and Education. Washington, DC: The National
Academies Press.
Nelson, B. C., Erlandson, B., & Denham, A. (2011). Global channels of evidence for learning
and assessment in complex game environments. British Journal of Educational
Technology, 42(1), 88-100.
NGSS Lead States. (2013). Next Generation Science Standards: For States, By States.
Washington, DC: The National Academies Press.
Nietfeld, J., & Shores, L. R. (2011). Self-regulation within game-based learning
environments. Serious Educational Game Assessment, 19-42.
No Child Left Behind (NCLB) Act of 2001, 20 U.S.C.A. § 6301 et seq.
Nusbaum, E. C., & Silvia, P. J. (2011). Are intelligence and creativity really so different?: Fluid
intelligence, executive processes, and strategy use in divergent thinking. Intelligence,
39(1), 36-45.
OECD. (2010). PISA 2012 field trial problem solving framework: Draft subject to possible
revision after the field trial. Retrieved from:
http://www.oecd.org/dataoecd/8/42/46962005.pdf
Oliver, K., & Hannafin, M. J. (2001). Developing and refining mental models in open-ended
learning environments: a case study. Educational Technology Research and
Development, 49(4), 5-33.
O’Mahony, T. K., Baer, T., & Quynn, J. (2008). A visual-spatial learning ecosystem enhances
adaptive expertise with preparation for future learning. Seattle, WA, University of
Washington, College of Education. Retrieved from http://ti-
researchlibrary.com/ListsTI%20Education%20Technology%20%20Research%20Library
/DispForm.aspx?ID=10
Osborne, J. (2007). Science education for the twenty first century. Eurasia Journal of
Mathematics, Science & Technology Education, 3(3), 173-184.
Otto, J., & Lantermann, E.D. (2006). Individual differences in emotional clarity and complex
problem solving. Imagination, Cognition and Personality, 25, 3-24.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent
developments. Educational Psychologist, 38, 1-4.
Paas, F., Renkl, A., & Sweller, J. (2004). Cognitive load theory: Instructional implications of the
interaction between information structures and cognitive architecture. Instructional
Science, 32, 1-8.
Paas, F., Van Gog, T., & Sweller, J. (2010). Cognitive load theory: New conceptualizations,
specifications, and integrated research perspectives. Educational Psychology Review,
22(2), 115-121.
Parnafes, O. (2007). What does “fast” mean? Understanding the physical world through
computational representations. The Journal of the Learning Sciences, 16(3), 415-450.
Partnership for 21st Century Skills. (2008). Results that matter: 21st century skills and high
school reform. Retrieved from: http://www.p21.org/index.php
Pellegrino, J. W., & Brophy, S. (2008). From cognitive theory to instructional practice:
Technology and the evolution of anchored instruction. Understanding models for
learning and instruction, 277-303.
Pellegrino, J. W., & Quellmalz, E. S. (2010). Perspectives on the integration of technology and
assessment. Journal of Research on Technology in Education, 43(2), 119-134.
Pérez, J., & Coffin-Murray, M. (2010). Generativity: The new frontier for information and
communication technology literacy, Interdisciplinary Journal of Information,
Knowledge, and Management, 5, 127-137.
Peters, V. L., & Slotta, J. D. (2010). Scaffolding knowledge communities in the
classroom: New opportunities in the Web 2.0 era. In M. J. Jacobson & P. Reimann
(Eds.), Designs for learning environments of the future: International perspectives from
the learning sciences (pp. 205-232). Secaucus, NJ: Springer.
Piaget, J. (1977). The development of thought: Equilibration of cognitive structures. New York:
Holt.
Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P.
R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451–502). San Diego,
CA: Academic.
Plass, J. L., Homer, B. D., Milne, C., Jordan, T., Kalyuga, S., Kim, M., & Lee, H. (2009). Design
factors for effective science simulations: Representation of information. International
Journal of Gaming and Computer-Mediated Simulations, 1(1), 16-35.
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6.
President’s Council of Advisors on Science and Technology. (2010). Prepare and inspire: K-12
education in science, technology, engineering, and mathematics (STEM) for America’s
future. Retrieved from:
http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-stemed-report.pdf
Project Tomorrow and PASCO Scientific. (2008). Inspiring the next generation of innovators:
Students, parents, and teachers speak up about science education. Irvine, CA. Retrieved
from: http://www.tomorrow.org/SpeakUp/pdfs/Inspiring_the_next_generation_of_
innovators.pdf
Ranks, B. (2000). Brain-compatible instruction. In H. G. Cram & V. Germinario, Leading and
learning in schools: Brain-based practices (pp. 57-84). Lanham, MD: Scarecrow Press.
Rayne, K., Martin, T., Brophy, S., Kemp, N. J., Hart, J. D., & Diller, K. R. (2006). The
development of adaptive expertise in biomedical engineering ethics. Journal of
Engineering Education, 95(2), 165-173.
Rebello, C. M., & Rebello, N. S. (2013). Transfer of argumentation skills in conceptual physics
problem solving. In American Institute of Physics Conference Series (Vol. 1513, pp. 322-325).
Reiser, B. J. (2004). Scaffolding complex learning: The mechanisms of structuring and
problematizing student work. The Journal of the Learning Sciences, 13(3), 273-304.
Renkl, A., Hilbert, T. & Schworm, S. (2009). Example-based learning in heuristic domains: A
cognitive load theory account. Educational Psychology Review, 21, 67-78.
Robison, J., McQuiggan, S., & Lester, J. (2010). Developing empirically based student
personality profiles for affective feedback models. In Intelligent Tutoring Systems (pp.
285-295). Berlin/Heidelberg, Germany: Springer.
Rose, S. P., & Fischer, K. W. (1989). Constructing task sequences: A structured approach to
skill theory (Instructional Manual). Cambridge, MA: Harvard University.
Rosiek, J. (2003). Emotional scaffolding: An exploration of teacher knowledge at the
intersection of student emotion and subject matter content. Journal of Teacher
Education, 54(5), 399-412.
Rosiek, J., & Atkinson, B. (2005). Bridging the divides: The need for a pragmatic semiotics of
teacher knowledge research. Educational Theory, 55(4), 231-266.
Rosiek, J. & Beghetto, R.A. (2009). Emotional scaffolding: The emotional and imaginative
dimensions of teaching and learning. In P.A. Schutz and M. Zembylas (Eds.), Advances
in teacher emotion research: The impact on teachers’ lives (pp. 175-194). Dordrecht,
The Netherlands: Springer.
Rupp, A. A., Gushta, M., Mislevy, R. J., & Shaffer, D. W. (2010). Evidence-centered design of
epistemic games: Measurement principles for complex learning environments. The
Journal of Technology, Learning and Assessment, 8(4). Retrieved from
http://www.jtla.org
Salomon, G. (2006). The systemic vs. analytic study of complex learning environments. In J.
Elen & R.E. Clark (Eds.), Handling complexity in learning environments: Theory and
research (pp. 255-265). Boston, MA: Elsevier.
Sáinz, M., & Eccles, J. (2012). Self-concept of computer and math ability: Gender implications
across time and within ICT studies. Journal of Vocational Behavior, 80(2), 486-499.
Sawyer, R. K. (Ed.). (2006). The Cambridge handbook of the learning sciences (Vol. 2, No. 5).
New York: Cambridge University Press.
Saye, J., & Brush, T. (2001). The use of embedded scaffolds with hypermedia-supported student-
centered learning. Journal of Educational Multimedia and Hypermedia, 10(4), 333-356.
Scardamalia, M., Bransford, J., Kozma, B., & Quellmalz, E. (2012). New assessments and
environments for knowledge building. Assessment and Teaching of 21st Century Skills,
231-300.
Schaal, S. (2010). Enriching traditional biology lectures: Digital concept maps and their influence
on cognition and motivation. World Journal on Educational Technology, 2(1), 42-54.
Schaal, S., Bogner, F. X., & Girwidz, R. (2010). Concept mapping assessment of media
assisted learning in interdisciplinary science education. Research in Science Education,
40(3), 339-352.
Schleicher, A. (2010). Assessing literacy across a changing world. Science, 328(3), 433-434.
Schrader, P. G., Lawless, K. A., & McCreery, M. (2009). Intertextuality in massively
multiplayer online games. In R. E. Ferdig (Ed.), Handbook of research on effective
electronic gaming in education (Vol. III, pp. 791-807). Hershey, PA: Information
Science Reference.
Schraw, G. (2007). The use of computer-based environments for understanding and improving
self regulation. Metacognition and Learning, 2, 169-176.
Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions
for future research. Educational Psychology Review, 13(1), 23-52.
Schunk, D.H., Pintrich, P.R., & Meece, J.L. (2008). Motivation in education: Theory, research,
and applications. Upper Saddle River, NJ: Pearson Education.
Schwartz, D. L., Bransford, J. D., & Sears, D. (2005). Efficiency and innovation in transfer.
Transfer of learning from a modern multidisciplinary perspective, 1-51.
Schwartz, D. L., Chase, C., Chin, D. B., Oppezzo, M., Kwong, H., Okita, S., Biswas, G., Roscoe,
R., Hogyeong, J., & Wagster, J. (2009). Interactive metacognition: monitoring and
regulating a teachable agent. In D. Hacker, J. Dunlosky, & A. Graesser (Eds.), Handbook
of metacognition in education (pp. 340-358). New York: Routledge.
Schwartz, D.L. & Martin, T. (2004). Inventing to prepare for future learning: The hidden
efficiency of encouraging original student production in statistics instruction. Cognition
and Instruction, 22(2), 129-184.
Schwartz, D. L., Varma, S., & Martin, L. (2008). Dynamic transfer and innovation. In S.
Vosniadou (Ed.), International handbook of research on conceptual change (pp. 479-
506). Hillsdale, NJ: Lawrence Erlbaum.
Shaffer, D. W. (2006). Epistemic frames for epistemic games. Computers & Education, 46(3),
223-234.
Shaffer, D.W. (2007). How computer games help children learn. New York: Palgrave
Macmillan.
Shah, J. J., Smith, S. M., & Vargas-Hernandez, N. (2003). Metrics for measuring ideation
effectiveness. Design studies, 24(2), 111-134.
Sheffield, C. C. (2007). Technology and the gifted adolescent: Higher order thinking, 21st
century literacy, and the digital native. Meridian: A Middle School Technologies Journal,
10(2), 1-5.
Shute, V. J., & Spector, J. M. (2008). SCORM 2.0 white paper: Stealth assessment in virtual
worlds. Unpublished manuscript.
Shute, V. J., Ventura, M., Bauer, M., & Zapata-Rivera, D. (2008). Monitoring and fostering
learning through games and embedded assessments (Report No. RR-08-69). Princeton,
NJ: Educational Testing Service.
Shute, V. J. & Zapata-Rivera, D. (2008). Using an evidence-based approach to assess mental
models. In D. Ifenthaler, P. Pirnay-Dummer, & J. M. Spector (Eds.), Understanding
models for learning and instruction: Essays in honor of Norbert M. Seel (pp. 23-41).
New York: Springer.
Sibley, D. F. (2009). A cognitive framework for reasoning with scientific models. Journal of
Geoscience Education, 57(4), 255-263.
Silvia, P. J., & Beaty, R. E. (2012). Making creative metaphors: The importance of fluid
intelligence for creative thought. Intelligence, 40(4), 343-351.
Sloan, L., & Halaris, A. (1985). Towards a definition of computing literacy for the liberal arts
environment. ACM SIGCSE Bulletin, 17(1), 320-326.
Snow, E., Fulkerson, D., Feng, M., Nichols, P., Mislevy, R., & Haertel, G. (2010). Leveraging
evidence-centered design in large-scale test development (Application of evidence-
centered design for large-scale state science assessment, technical report 4). Menlo
Park, CA: SRI International.
Spiro, R.J. & Jehng, J. (1990). Cognitive flexibility and hypertext: Theory and technology for
the non-linear and multidimensional traversal of complex subject matter. In D. Nix & R.
Spiro (Eds.), Cognition, Education, and Multimedia (pp. 163-206). Hillsdale, NJ:
Erlbaum.
Srinivasan, V., & Chakrabarti, A. (2010). Investigating novelty–outcome relationships in
engineering design. Artificial Intelligence for Engineering Design, Analysis and
Manufacturing, 24(2), 161-178.
Stiller, E., & LeBlanc, C. (2006). From computer literacy to cyber-literacy. Journal of
Computing Sciences in Colleges, 21(6), 4-13.
Sweller, J. (1994). Cognitive load, learning difficulty, and instructional design. Learning and
Instruction, 4, 295-312.
Tai, R. H., Liu, C. Q., Maltese, A. V., & Fan, X. (2006). Planning early for careers in science.
Science, 312, 1143-1144.
Tseng, K. H., Chang, C. C., Chen, W. P., & Lou, S. J. (2010). The use of a computer-mediated
environment to promote learning performance through concept mapping in a nanometre
course. World Transactions on Engineering and Technology Education, 8, 50-55.
Tseng, K. H., Chang, C. C., Lou, S. J., Tan, Y., & Chiu, C. J. (2012). How concept-mapping
perception navigates student knowledge transfer performance. Journal of Educational
Technology & Society, 15(1), 102-115.
U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service. (2009). Evaluation of the Enhancing Education Through
Technology Program: Final Report. Washington, DC.
Vanasupa, L., Stolk, J., & Harding, T. (2010). Application of self-determination and self-
regulation theories to course design: Planting the seeds for adaptive expertise.
International Journal of Engineering Education, 26(4), 914.
Vendlinski, T. P., Baker, E.L., & Nieto, D. (2008). Templates and objects in authoring problem
solving. CRESST Report 735. Los Angeles, CA: National Center for Research and
Evaluation, Standards Testing (CRESST).
Verschaffel, L., Luwel, K., Torbeyns, J., & Van Dooren, W. (2009). Conceptualizing,
investigating, and enhancing adaptive expertise in elementary mathematics education.
European Journal of Psychology of Education, 24(3), 335-359.
Volman, M., van Eck, E., Heemskerk, I., & Kuiper, E. (2005). New technologies, new
differences. Gender and ethnic differences in pupils’ use of ICT in primary and secondary
education. Computers & Education, 45, 35-55.
Vosniadou, S., Ioannides, C., Dimitrakopoulou, A., & Papademetriou, E. (2001). Designing
learning environments to promote conceptual change in science. Learning and
instruction, 11(4), 381-419.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes.
Cambridge, MA: Harvard University Press.
Warschauer, M. (2003). Technology and social inclusion: Rethinking the digital divide.
Cambridge, MA: MIT Press.
Warschauer, M., Grant, D., Real, G. D., & Rousseau, M. (2004). Promoting academic literacy
with technology: Successful laptop programs in K-12 schools. System, 32(4), 525-537.
Warschauer, M., Knobel, M., & Stone, L. (2004). Technology and equity in schooling:
Deconstructing the digital divide. Educational Policy, 18(4), 562-588.
Warschauer, M., & Matuchniak, T. (2010). New technology and digital worlds: Analyzing
evidence in equity of access, use, and outcomes. Review of Research in Education,
34(1), 179-225. DOI: 10.3102/0091732X09349791
Wilkins, J. L. M. (2011). Mathematics and science self-concept: An international investigation.
The Journal of Experimental Education, 72(4), 331-346.
Winne, P.H., Nesbit, J.C., Kumar, V., Hadwin, A.F., Lajoie, S.P., Azevedo, R. et al. (2006).
Supporting self-regulated learning with study software: The learning kit project.
Technology, Instruction, Cognition, and Learning, 3, 105-113.
Winne, P. H., & Nesbit, J. C. (2010). The psychology of academic achievement. Annual Review
of Psychology, 61, 653-678.
Wittrock, M. C. (2010). Learning as a generative process. Educational Psychologist, 45(1), 40-
45.
Woolf, B.P. (2009). Building intelligent interactive tutors: Student-centered strategies for
revolutionizing e-learning. Burlington, MA: Morgan Kauffman.
Yan, Z. & Fischer, K. (2007). Pattern emergence and pattern transition in microdevelopmental
variation: Evidence of complex dynamics of developmental processes. Journal of
Developmental Processes, 2(2), 39-62.
Zahner, D., Nickerson, J. V., Tversky, B., Corter, J. E., & Ma, J. (2010). A fix for fixation?
Rerepresenting and abstracting as creative processes in the design of information
systems. Artificial Intelligence for Engineering Design, Analysis and Manufacturing,
24(2), 231-244.
Zalles, D., Haertel, G., & Mislevy, R.J. (2010). Using evidence-centered design to support
assessment, design, and validation of learning progressions (Large-scale technical report
10). Menlo Park, CA: SRI International.
Zeldin, A. L., Britner, S. L., & Pajares, F. (2008). A comparative study of the self-efficacy
beliefs of successful men and women in mathematics, science, and technology careers.
Journal of Research in Science Teaching, 45(9), 1036-1058.
Zimmerman, B., & Tsikalas, K. (2005). Can computer-based learning environments (CBLEs) be
used as self-regulatory tools to enhance learning? Educational Psychologist, 40, 267-
271.
Zimmerman, C., & Croker, S. (2014). A prospective cognition analysis of scientific thinking and
the implications for teaching and learning science. Journal of Cognitive Education and
Psychology, 13(2), 245-257.
Appendix A
Evidence-centered Design: Domain Analysis
STEM Problem-solving CBLE Framework from Domain Analysis
Typically relying on instructional specialists in collaboration with discipline experts,
domain analysis guides the overall design of a learning experience. The domain analysis (a)
details what discipline-based knowledge and experiences are important and the relationships
among them (e.g., Behrens, Mislevy, DeCerbo, & Levy, 2010; Mislevy, Steinberg, & Almond,
2003), and (b) depicts the ways learners construct, acquire, use, and communicate the identified
knowledge, skills, and abilities (e.g., discipline-based content, concepts, language, procedures,
representational forms, relationships etc.) to solve discipline-based problems (e.g., Mislevy,
Steinberg, & Almond, 2003; Snow, Fulkerson, Feng, Nichols, Mislevy, & Haertel, 2010).
With situated cognition theory in mind, the domain analysis describes real-world work
valued in STEM disciplines and problem-solving, provides representational forms guiding
discipline-based information, and articulates what learner responses provide evidence for, and
indicators of success or failure in, that real-world work (Mislevy, Steinberg, & Almond, 2003),
with differentiations along a novice-to-expert continuum (e.g., Mislevy & Riconscente, 2005).
Such discipline-based descriptions often entail commonly used epistemic practices. Although
these practices might be viewed as routine, they can also be considered necessary baseline tools
gained in learners’ PFL trajectories, applied adaptively as learners gain increasing proficiency in
the practices.
Next Generation Science Standards (2013) targeted at the middle-school level (i.e., at or
below the level taught by participants) serve as the basis of discipline-based content knowledge,
with particular reference to progressions in science and engineering practices and crosscutting
concepts along with specific domain content targeted in each of the two scenarios. In this study,
knowledge construction was considered in terms of developing adaptive expertise (i.e., the
flexible capacity to understand, explain, execute, evaluate, and modify approaches given
complex contextual changes and variables in ill-structured problems). Acquiring expert-like
knowledge in STEM-based, complex, ill-structured problem solving depends on learners’
capacity to develop problem-solving schemata. An important part of a learner’s problem-solving
schemata involves explicit knowledge of expert-like problem-solving stages. However,
during problem solving, novice learners experience difficulties in creating external
representations of their internal knowledge and need cognitive-tool supports that allow them to
detect conceptual and other relationships (Liu et al., 2009). In this regard, Greene and Azevedo
(2009) recommend research models that conceptually merge different metacognitive, self-
regulation, and problem-solving models, with specific process models identified for each phase.
Likewise, Bransford et al. (2010) stress that the sequence of instruction matters in understanding
a complex topic or problem. Content must be well organized to ensure meaningful learning for
students (e.g., Tseng, Chang, Lou, Tan, & Chiu, 2012), and the overall structure of an
instructional model must reflect authentic, expert-like sequencing. To those ends, the domain
analysis here includes understanding and organizing commonalities among problem-solving,
conceptual-change, and self-regulation models.
Figure 3 provides a brief comparison among models related to complex, ill-structured
problem solving in STEM and, from that comparison, derives a common, cross-domain model for
sequencing problem-solving phases in a way that supports stages of self-regulated learning and
standards-aligned crosscutting practices of STEM disciplines. Figure 3 shows that recent models
vary in terms of the number of problem-solving phases, but generally agree on the sequence of
content and procedures. Areas of nearly complete alignment (shaded in gray in Figure 3) occur
in the first and last problem-solving stages. While variation among models occurs in the
Figure 3: Comparisons of Problem-solving and Conceptual Change Models, STEM Practices, & SRL Processes
sequence of tasks in the middle phases, the types of thinking required largely show agreement.
For example, Kim and Hannafin (2011) provide a model with five key inquiry-supported
problem-solving activities: (a) problem identification and engagement, (b) evidence exploration,
(c) explanation reconstruction, (d) communication and justification of the explanation, and (e)
revision of, and reflection on, the explanation. The advantage of this model is that it closely
compares to the frequently used 5E model of science inquiry: Engage, Explore, Explain,
Elaborate, and Evaluate (Bybee et al., 2006). In this constructivist instructional model, educators
use the engage stage to introduce a STEM-related learning scenario that inspires student
curiosity, interest, and other motivation-related behaviors, to assess prior knowledge, and to pose
a discrepant problem for students to address (i.e., the problem context). In the explore
stage, learners experiment with models and variables in order to deepen their understanding,
challenge their existing preconceptions and misconceptions, and consider alternative
explanations, all of which help them form new schemata. In the explain stage, students
synthesize their understanding by communicating what they have learned, illustrating initial
conceptual change. The elaborate stage gives students the opportunity to apply their newfound
knowledge to novel situations or to changed variables in relation to the same problem, deepening
their understanding further and supporting the reinforcement of new schemata or their transfer.
The evaluate stage serves as a time for students’ own metacognitive, formative assessment.
While instructors’ analysis of student performance happens at all stages, the evaluate stage
represents an opportunity for educators to diagnose areas of an individual learner’s confusion
and to differentiate further instruction in additional learning experiences. Tenets of conceptual-
change theory parallel the 5E model, with cognitive conflict introduced early on to help learners
recognize the inadequacies of their existing mental models, and from there, construct and apply
new ones based on their observations (e.g., Chen & She, 2012; Limón, 2001; Mayer, 2008).
In contrast to the five-step models, Ge and Land’s (2003) problem-solving model has
four main processes: (a) problem representation (learners identify initial state, goal state,
operators, constraints), (b) solution development (learners consider divergent assumptions,
evidence etc.), (c) justification development (learners explain solution pros and cons), and (d)
monitoring and evaluation (learners engage in self-regulated strategies and iterative
improvements). These stages are similar to Kim and Hannafin’s (2011), with Ge and Land’s
solution-development process essentially rolling up Kim and Hannafin’s evidence-exploration
and explanation-reconstruction phases.
Drawing from prior research, Güss, Tuason, and Gerhard (2010) describe problem-
solving strategies as: (a) problem identification, (b) goal definition, (c) information gathering,
(d) elaboration and prediction, (e) strategic and tactical planning, decision-making, and action,
and (f) evaluation of outcome with self-reflection and modification of strategic approach. As
with the 5E model (Bybee et al., 2006), Güss et al. (2010) point out that problem solving is not
linear, but involves frequent movement back and forth among the steps. While arranged in Figure 3 in
a linear manner, the NRC’s (2011a) definitions of science and engineering practices are also
iterative and crosscut problem-solving steps in their application.
In rough alignment with the aforementioned problem-solving models, the National
Research Council (2010) identifies six components of non-routine problem solving as a 21st-
century skill: (a) narrowing information to diagnose the problem, (b) reflecting on whether
strategies are working and modifying if necessary, (c) generating creative, innovative solutions,
(d) integrating information without clear connections, (e) recognizing patterns not noticeable by
novices, and (f) understanding how knowledge is conceptually related. While these correspond
to earlier phases described in other problem-solving models, they do not address justifying,
monitoring, or evaluating (i.e., the last two problem-solving phases per Ge & Land, 2003; Kim &
Hannafin, 2011).
As an example of summative evaluation, PISA’s (OECD, 2010) test design for assessing
the problem-solving competency of 15-year-old students around the world includes four
problem-solving processes (exploring and understanding, representing and formulating, planning
and executing, and monitoring and reflecting), each with one or more reasoning skills (deductive,
inductive, quantitative, correlational, analogical, combinatorial, and multidimensional
reasoning). The distribution of points across the four problem-solving processes emphasizes
planning and executing. Unlike research that calls for combining content with task (e.g., NRC
2005, 2007), PISA separates the two, given that it has separate tests focused on domain content
knowledge. Thus, students are not assessed in problem solving in a real-world manner that is
authentic to STEM practices. In a critique of PISA, for example, Dohn (2007) asserts that
PISA’s methodology is flawed given (a) it does not adequately represent the domain of the
question of inquiry, and (b) test items are misleadingly formulated, faulty, or culturally biased.
However, given that PISA serves as one primary method of summative evaluation,
understanding its problem-solving structure could inform the instructional strategy of formative
assessments such as the one considered in this study, which focused on learners’ developmental
patterns in their preparation for future learning, including summative test scenarios.
Finally, self-regulated learning is a constructive process in which learners set goals and
then actively monitor and adjust their cognitive, metacognitive, and affective behaviors
depending on goals, criteria, constraints, and conditions of the learning environment (Pintrich,
2000). Considered key to all phases of generative problem solving (Behrens, Mislevy, DeCerbo,
& Levy, 2010; Lee, Lim, & Grabowski, 2009; Snow, Fulkerson, Feng, Nichols, Mislevy, &
Haertel, 2010), self-regulation raises proficiency, metacognitive competence, and self-efficacy in
challenging learning circumstances (Zimmerman & Tsikalas, 2005). The phases of self-
regulation begin and end much the same as the other models, with an emphasis on defining task
problems, resources, constraints, and goals in the initial anticipatory phase and evaluating
performance and beliefs in the final self-reflection phase in order to evaluate and refine
approaches for the ensuing anticipatory phase (Nietfeld & Shores, 2011). Bridging the two is an
enactment phase, when learners manipulate variables, monitor judgments about their
understanding and the effectiveness of their strategy use, control adapting strategies, and seek
supports for optimal performance (Nietfeld & Shores, 2011).
Despite differences, these models of learning have a rough but clear affinity in terms of
overall stages. Though many (e.g., Bybee et al., 2006; Güss, Tuason, & Gerhard, 2010) note that
these stages may be iterative rather than strictly linear as depicted in Figure 3, structuring
complex, open-ended problem-solving in stages can benefit novice learners in making complex
tasks more manageable (Reiser, 2004). Providing such structuring scaffolds reduces complexity
and choice in open-ended tasks, chunking pertinent knowledge and skills in explicit subtasks that
learners may use to create mental models (e.g., Oliver & Hannafin, 2001) of problem-solving
processes and cognitive and metacognitive strategies (Reiser, 2004). This chunking also
manages cognitive load (Billing, 2007).
For this study, the CBLE structured and scaffolded experiences into five problem-solving
steps based on the above research, synthesized and rewritten in middle-school-level terms: (a)
Define Problems and Goals, (b) Gather Information, (c) Propose Solutions Based on Evidence,
(d) Justify Solutions Based on Evidence, and (e) Evaluate How Well Solutions Work. For
domain analysis to be complete, it is necessary to determine what kinds of knowledge are
important in each step, and how that knowledge is constructed. In terms of preparation for future
learning, each of these steps involves key areas of knowledge, skills, and abilities learners need
to acquire on their trajectory toward adaptive expertise, which in turn suggest constructivist,
situated scaffolding in support of their optimal development per dynamic skill theory. While
these types of higher-order thinking crosscut problem-solving phases, each step has certain
requisite capacities that lend themselves to successful performance, as discussed in the literature.
Problem-solving Step 1: Define Problems, Goals, and Constraints
In building adaptive expertise, this step primarily involves the need to understand the
deep rather than superficial characteristics of the problem in order to define it, set goals, and
understand constraints. This problem-definition step involves the need for learners to pose well-
defined questions about the natural and human-constructed world, a key scientific and
engineering situated practice (NRC, 2011a) that leads to goal-oriented empirical investigations
that seek to find evidence-based understandings and solutions. Based on questioning that helps
define sub-elements of problems and investigations, this problem-definition step also depends on
mapping relationships among sub-elements to ensure that the problem and its complexity are properly
understood. This initial stage is critical, as initial misrepresentations of the problem, potential
goal-oriented solutions, and relevant constraints can have negative repercussions in later
problem-solving steps if not identified (Jonassen, 2010a).
Many studies indicate that learners must create a mental model of a problem in order to
solve it, but, compared to experts, novices spend less time on identifying a problem’s nature and
often focus on superficial similarities to those previously solved (Billing, 2007). With ill-
structured problems, learners must determine the nature of the problem, generate a representation
and a rationale for the problem, and draw on prior knowledge and experience to examine
different perspectives in order to generate a mental model of the problem (Lee & Spector, 2012).
Learners’ control of a complex problem-solving task also depends on their knowledge of the
problem’s underlying structure (Goode & Beckmann, 2010; Jonassen, 2010a), an understanding
that is a prerequisite for designing problem-solving strategies and solutions (Laxman, 2010).
According to Jonassen (2010a), a well-developed problem schema must reflect this kind of deeper
understanding, including understandings of (a) the kind of problem that is presented, (b)
structural elements of the problem, (c) situations in which the problem occurs, and (d)
appropriate process options. Sibley (2009) additionally argues that experts construct models
through meaningful relational analogies that include concrete characteristics such as physical
properties, but often must go beyond these to recognize more abstract characteristics such as
underlying processes, behaviors, and causation. This practice not only helps correct novice
learners’ superficial problem characterization, but also supports their development toward
more abstract thinking.
According to Jonassen (2009), meaningful learning is fostered when learners use
computer tools (e.g., concept maps, databases, systems modeling etc.) to construct mental
models of domain knowledge, problems, structures, and processes. Scaffolds that often support
this investigation typically include those that assist with creating problem analogies, in order to
bridge prior knowledge with understanding the full context of the novel problem presented
(Jonassen, 2009). At this stage, recognizing elements of a new situation that relate to known
representations is important (Sibley, 2009). Particularly in complex domains, expert modeling in
this respect can help learners internalize experts’ conceptual knowledge and, in terms of
instructional design, is often placed at the beginning of a lesson for greater instructional
efficiency, effectiveness, and learner engagement (Lee & Spector, 2012). For novice learners,
self-guided modeling is more a trial-and-error process than one guided by insight; for more
experienced learners, however, expert modeling shows an expertise-reversal effect, given the higher
cognitive load of reintegrating already-known schemata (Lee & Spector, 2012). What expert
modeling provides to novices, however, is assistance with noticing patterns they might overlook,
organizing information, and planning strategies (Lee & Spector, 2012). Using schema-based
instruction, semantic cues and schematic diagrams can illustrate the relationship of problem
elements (Jitendra et al., 2009). In terms of constructing a learning experience, Sibley (2009)
joins Gentner (1983) in focusing on five cognitive processes for generating analogies: (a)
retrieval, or recognizing that elements of a new situation relate to known representations; (b)
mapping, or aligning representational characteristics of subject and analog so as to enable
comparisons of similarities and differences that support inferences; (c) evaluation, or determining
through iterative testing and refinement how valid the analogical model and the inferences
resulting from it are; (d) abstraction, or generalizing a causal or other principle shared by
analogical elements; and (e) re-representation, or altering the representation or elements of it to
increase its validity and utility. As Sibley (2009) notes, successfully reasoning through
analogical thinking results in schema changes among learners. This outcome closely compares
to conceptual-change theory and may support implications of dynamic-skills theory as well.
Problem-solving Step 2: Gather Needed Information and Evidence
The sophistication of learners’ information-seeking strategies impacts problem-solving
performance and positive attitudes about the utility of learned information-seeking strategies
(Laxman, 2010). Lest they end up with disorganized and unreliable results, learners need
instruction in efficient and effective information seeking, given the complex mix of skills
involved (e.g., creating a search plan, applying different Internet tools and search techniques
appropriately, organizing search results) (Laxman, 2010). This
need is supported by changes in high-stakes summative assessments as well as the formative
assessments considered in this study. As of 2012, the National Assessment of Educational
Progress (NAEP) includes evaluating student abilities to select correct ICT tools for problem
solving and to determine the relative utility of applying various kinds of information (Pérez &
Coffin-Murray, 2010). Self-regulation skills and motivation dimensions (e.g., persistence) are
key to this step, as learners often seek the most obvious or quickest information they deem
necessary to complete the task, sometimes based on prior, less sophisticated mental models.
Along with self-regulation, important cognitive strategies to develop in this step include
elaboration, organization, and critical thinking (Cheng & Chau, 2013). Curiosity (one of the sub-
dimensions in this study) is critical to information seeking as well (Zimmerman & Croker, 2014).
Problem-solving Step 3: Propose Solutions Based on Evidence
To develop evidence-based solutions, novice problem-solvers need support, as they lack
deep content knowledge and, as a result, evidence-based argumentation skills (Belland, 2010;
Belland, Glazewski, & Richardson, 2011). Carefully planned inquiry-based experiences
(predicting, experimenting, and explaining) and experimenting with models can support novice-
to-expert conceptual change and problem-solving transfer, with the highest gains going to less
knowledgeable learners (Mayer, 2008). What instructional designers must recognize, however,
is that learners without scientific-thinking skills often seek evidence to support their ideas rather
than to discount them (Mayer, 2008), and so need training in looking for alternative explanations.
Given traditional teaching in a high-stakes environment, that tendency is likely exacerbated as
students are also conditioned to seek a correct answer instead of engaging in scientific thinking
by exploring multiple explanations.
Mayer (2010) suggests that learning facts and procedures alone is insufficient for
meaningful learning (the construction of mental models that increases scientific reasoning), and
that learners must also develop conceptual knowledge, strategies, and beliefs. To promote
scientific reasoning, learners must build a mental model, a representation of how a change in one
part causes a change in another, to understand how a system works. He recommends four types
of question prompts for: (a) troubleshooting reasons why a system may not be behaving as
desired, (b) redesigning to meet new criteria or constraints, (c) encouraging learners’
articulations of principles related to a causal link, and (d) generating hypothetical what-if
scenarios that assist learners in making inferences about the consequences of changes. Sibley
(2009) adds that modeling is key to scientific literacy, and argues that constructivist instruction
in analogical thinking supports learners’ abilities to create and to use scientific models, which
serve as complex representations that organize observations, enable prediction or retrodiction
(hypotheses about what happened based on evidence), and allow hypothesis testing. Given that
solutions can have high cognitive demands, Gerjets, Scheiter, and Catrambone (2004) suggest
breaking down solution elements into smaller, meaningful items that can be isolated.
Problem-solving Step 4: Justify Solutions Based on Evidence
In preparation for future learning in complex, STEM-based, ill-structured problem
solving, a key area of competence is evidence-based argumentation. Per Belland, Glazewski,
and Richardson (2011), evidence-based argumentation in ill-structured problem solving is based
on abstract thinking, but learners need scaffolding supports to excel. Necessary abstractions that
learners find difficult include (a) considerations of logic rules, (b) systems thinking, (c) analyses
of audience perspectives relevant to arguing a claim successfully, and (d) understandings of
scientific theories not as absolute truths, but rather as hypotheses that change due to evolving
evidence and argumentation (Belland, Glazewski, & Richardson, 2011). Their computer-based
scaffolding framework thus incorporates problem-solving supports that enable learners to
construct strong arguments and to support their claims with evidence. In a STEM-situated
context, argumentation skills in peer-to-peer interactions are key to defending new science
results, math proofs, or technology- and engineering-based designs and essential to expertise in
the discipline. Kuhn, Zillmer, Crowell, and Zavala (2013) note that metacognition and peer-
related social aspects of argumentation are important for developing competence. In terms of
PFL, from scaffolded training involving peer interactions in argumentation, individuals may
successfully transfer argumentation skills to new tasks, demonstrating the development of a
mental model for argumentation (e.g., Rebello & Rebello, 2014).
Problem-solving Step 5: Evaluate the Effectiveness of Solutions
Evaluating the effectiveness of solutions depends on reflection, which in turn supports
future knowledge construction. Per Hmelo-Silver and Eberbach (2012), reflection supports
understanding and transfer because learners consciously relate their new knowledge to the
questions and learning goals they set for themselves; the effectiveness of their hypotheses,
inferences, and solutions; the applicability of their new knowledge and strategies to future
problems; future goals given identified gaps; and their overall self-regulation process. In
authentic ill-structured problems in CBLEs, novices’ abilities mature through reflective thinking,
which assists in the adoption and improvement of discipline-based practices (Law, Ge, &
Eseryel, 2011). From a self-regulation perspective, the final self-reflection phase allows learners
to evaluate their performance and beliefs in order to evaluate and refine approaches for future
anticipatory phases of new problem scenarios (Nietfield & Shores, 2011). The closer the mental
model is to expert models, the higher the level of expertise. If the solution does not work, then
re-representation of the problem may be necessary (i.e., learners may need to return in an
iterative manner to Step 1).
Summary
The above literature review of problem-solving steps provides the basis on which the
CBLE, the problems, scaffolds, and scoring are crafted. The domain analysis accounts for the
challenges of ill-structured problem solving, recognizing that the assessment delivered through a
situated, constructivist CBLE must analyze patterns as learners build and rebuild mental models
of problems and problem-solving strategies, through multiple representations and tasks, taking
into account limitations such as learners’ developmental levels and cognitive load. It
acknowledges that structuring the CBLE to capture developing capacities for ill-structured
problem solving is key to establishing transfer, defined in terms of preparation for future
learning, with the goal being reinforcement and growth of learners’ higher order understandings,
rather than just immediate, expert-like performance. While learners with good mental models
demonstrate greater learning in STEM, assessing learners’ internal mental models through
external representations (e.g., concept maps) and comparing them with expert models can be
difficult, though indicators of completeness (e.g., numbers of nodes, links, crosslinks, cycles,
structures, and examples), structural mapping, and semantic understandings can assist (Shute &
Zapata-Rivera, 2008). Creating instruction that supports learners in constructing mental models
can also prove difficult, so providing model-based instruction early can potentially help learners
create necessary representations of tasks and relationships (Shute & Zapata-Rivera, 2008).
Experts largely agree on the kinds and sequences of higher order thinking required, and
that agreement can provide such a map. Thus, the common framework used in this study to
support the development of adaptive expertise includes expert-like stages of STEM inquiry and
problem solving, renamed for middle-school student understanding and for clear instructional
“easy step” procedural action as shown in Figure 5. The repeated use of the word “evidence” is
intentional, reinforcing that important element of STEM situated practices. In the CBLE, these
steps remained the same for the two problem-solving experiences, one science-oriented and the
other more engineering-design-oriented.
Appendix B
Evidence-centered Design: Domain Modeling
Typically relying on evaluation specialists in collaboration with discipline experts,
domain modeling provides assessment arguments that relate learner actions, behaviors, and
outputs in situated contexts to inferences about their proficiencies (e.g., Mislevy & Riconscente,
2005). Using the KSAs generally identified in the domain assessment, domain modeling acts as
a scaffold for the conceptual assessment framework (the next ECD layer), where technical
specifications are made for evidence-based assessments of targeted learner proficiencies
(Haertel, DeBarger, Villalba, Hamel, & Colker, 2010; Mislevy et al., 2003).
This study drew upon design patterns developed for use in assessing science inquiry in
the context of challenges associated with iterative and cyclical processes of STEM learning
(Mislevy et al., 2003; Mislevy & Riconscente, 2005), such as (a) design under constraints, (b)
analyzing data quality, (c) model revision, (d) formulating a scientific explanation from a body
of evidence, and (e) self-monitoring. It also referenced and integrated several elements of other
reusable design templates found at http://design-drk.padi.sri.com/padi, as relevant to the content
and learning outcomes in each of the two learning scenarios used in this study, as well as to the
5-step, inquiry- and problem-solving-based framework. Many elements of these design patterns
also corresponded with “science and engineering practices” in NGSS standards (NGSS Lead
States, 2013) and their framework (NRC, 2011a). All were aligned with the specific problem
solving steps. A high-level summary, following the above references, is shown in Table 7.
Table 7
Domain-Modeling Design Outline

Title
• Science problem-solving example
• Engineering-design problem-solving example

Description
• Science: Supports inquiry-based, scientific-method tasks related to an ill-structured geology problem.
• Engineering: Supports engagement in iterative engineering-design tasks related to an ill-structured engineering problem.

Authentic, Situated, Real-World Use
• Science: Like geologists and other STEM professionals using remote-sensing techniques, learners should be able to analyze satellite images of landscapes to interpret their site.
• Engineering: Like engineers and other STEM professionals, learners should be able to create, assess, and modify a proposed design solution, taking into account criteria and constraints and optimizing results.

Primary KSAs (both examples)
Problem-solving KSAs per the dimensions (KS, II, M) and related sub-dimensions of the proposed model, and per the problem-solving steps within it, according to the research-based domain analysis.

Focus on KS-STEM Sub-Dimension as Situated, Novel, Ill-structured Problem
(Directly aligns with national standards and benchmarks, rather than treating standards as a separate category in this domain-modeling design outline.)
• Science (NRC Framework core question/NRC component question/NGSS standard):
ESS1: What is the universe, and what is Earth’s place in it?
ESS1.C: How do people reconstruct and date events in Earth’s planetary history?
MS-ESS1-4: Construct a scientific explanation based on evidence from rock strata for how the geologic time scale is used to organize Earth’s 4.6-billion-year-old history.
ESS2: How and why is Earth constantly changing?
ESS2.A: How do Earth’s major systems interact?
MS-ESS2-2: Construct an explanation based on evidence for how geoscience processes have changed Earth’s surface at varying time and spatial scales.
• Engineering (NRC Framework core question/NRC component question/NGSS standard):
ETS1: How do engineers solve problems?
ETS1.A: What is a design for? What are the criteria and constraints of a successful solution?
ETS1.B: What is the process for developing potential design solutions?
MS-ETS1-1: Define the criteria and constraints of a design problem with sufficient precision to ensure a successful solution, taking into account relevant scientific principles and potential impacts on people and the natural environment that may limit possible solutions.
ETS1.C: How can the various proposed design solutions be compared and improved?
MS-ETS1-2: Evaluate competing design solutions using a systematic process to determine how well they meet the criteria and constraints of the problem.

Additional KSAs
• Science: As appropriate, other NGSS and Common Core standards referenced for MS-ESS1-4 and MS-ESS2-2 above.
• Engineering: As appropriate, other NGSS and Common Core standards referenced for MS-ETS1-1 and MS-ETS1-2 above.

Narrative Structure (both examples)
5-step problem-solving model in the CBLE, with scaffolding for the desired learning outcomes in each step, per the domain analysis.

Common (Well-Structured) Features
• Science: Proposed solutions require analysis of satellite remote-sensing data; use of discipline-related principles (e.g., superposition) and tools; application of cross-cutting concepts (e.g., Patterns; Systems & System Models; Structure & Function; Stability & Change); type and degree of scaffolding; middle-school-level content.
• Engineering: Proposed solutions require design with the same criteria and constraints; use of discipline-related principles (e.g., optimal solution) and tools; application of cross-cutting concepts (e.g., Patterns; Systems & System Models; Structure & Function; Stability & Change); type and degree of scaffolding; middle-school-level content.

Variable (Ill-Structured) Features
• Science: selected geographic area; selected geologic feature; complexity of geologic feature; hypotheses and application of discipline-based theories.
• Engineering: selected mission type; selected engineering components; mass, power, cost, and science value; wild cards forcing redesign (e.g., equipment failures).

Observables/Work Products (both examples)
Tasks and questions elicit responses in KS, II, and M within and between the two problems, based on modified curricula previously developed.

Rubrics (both examples)
Modified application of Lantz (2004) rubrics for scientific literacy.
Appendix C
Evidence-centered Design: Conceptual Assessment Framework
For a given problem-solving scenario, a conceptual assessment framework must be
developed for the targeted knowledge, skills, and abilities (Frezzo, Behrens, & Mislevy, 2010),
one that encourages patterns of behavior that mimic authentic STEM practices (the epistemic frame
of the domain, per Shaffer, 2007). An ECD-based conceptual assessment framework (Rupp,
Gushta, Mislevy, & Shaffer, 2010; Shute & Zapata-Rivera, 2008) consists of five measurement
models: (a) student proficiency (KSAs to be assessed), (b) evidence (relationships among
observable results and related variables), (c) task (types of situations where evidence can be
observed), (d) assembly (rules for creating the assessment out of a collection of the first three
models), and (e) presentation (characteristics of the delivery environment, such as format). The
first three link desired competencies with the kinds of data that provide evidence of relative
expertise levels. In this study, the student model specifies the inquiry- or design-based content to
be measured in the context of adaptive expertise in STEM-based, ill-structured problem solving.
The task model specifies the authentic, real-world problem-based inquiry scenario and what data
would be collected from the student interactions. The evidence model details what observable
learner responses would qualify as evidence of levels of proficiency achieved in adaptive
expertise in STEM-based, ill-structured problem solving.
Through such organization, the conceptual assessment framework determines the design
structure for measuring generative learning (Behrens, Mislevy, DiCerbo, & Levy, 2010; Snow,
Fulkerson, Feng, Nichols, Mislevy, & Haertel, 2010). An additional assembly model supports
the assessment of task outputs by providing an outline of how those three components come
together. The assembly model provides an overview of the task sequences that would mimic
real-world, expert-like domain practices. It identifies which learner-CBLE interactions in each
problem-solving activity were designed to elicit desired evidence and where in the problem-
solving process that evidence was collected. For the assembly model, this study tied this sequencing to the
five problem-solving steps articulated in the domain analysis (Appendix A). Finally, the
presentation model describes changes in task and product presentation over the course of the
problem-solving experience and outlines potential implications of these changes.
Student Model
The student model in this study assesses how well learners demonstrate dimensions of
adaptive expertise (Tables 2, 3, and 4) in STEM-based, ill-structured problem solving, as
organized in the domain model’s five problem-solving steps and aligned with specific NGSS
standards (NGSS Lead States, 2013) specific to the nature of the two problems. Learner
responses in the three dimensions are treated as inferred evidence to support the claim of
proficiency. Learner proficiencies and microdevelopments in them can be assessed both within
and between steps in different problems, as well as rolled up into a problem proficiency
summary for cross-problem comparison.
Steps have both unique and common targeted learning outcomes and measurable
proficiencies. For instance, strategically generating questions (PSK3a from Table 2, aligned
with the NGSS science and engineering practice “Asking Questions”) is common to all steps.
However, the focus in Step 1 is on the ability to ask questions requiring empirical
evidence to answer (i.e., NGSS MS SEP-1E). In Step 2, the focus is on asking questions that
arise from observations of phenomena and models to clarify and seek additional information
(NGSS MS SEP-1A). In Step 3, learners ask questions to describe the evidence supporting the
premise of their arguments (NGSS MS SEP-1B). In Steps 4-5, the focus shifts to asking
questions that challenge the premises of their argument (NGSS MS SEP 1G), to support their
MICRODEVELOPMENTS IN ADAPTIVE EXPERTISE 198 198
justification and evaluation respectively. Similarly, defining criteria and constraints (NGSS-MS-
ETAS1A) is associated with Step 1; analyzing data to compare design solutions and optimal
characteristics (MS-ETAS1C) aligns with Step 2; developing a testable model for an optimal
design (NGSS-MS-ETAS1D) accords with Step 3; and evaluating solutions against criteria and
constraints (NGSS-MS-ETAS 1C) applies to Steps 4 and 5.
With targeted standards (learner proficiencies) determined for each step, the analysis of
them per Anderson and Krathwohl’s (2001) taxonomy serves as a precursor for designing the
task and evaluation models, as it lays out a process for aligning cognitive process and knowledge
types with appropriate evaluative tasks (e.g., multiple choice responses are not appropriate for
evaluating higher order thinking such as generating hypotheses). The taxonomic analysis also
provides backing for the warrant regarding proficiency claims by analyzing what discipline-
based sub-proficiencies are needed to provide evidence for the desired proficiency standard.
Using an example from above, if the learner proficiency is the ability to ask questions that
require empirical evidence to answer (i.e., NGSS MS SEP-1E), learners must
demonstrate the measurable ability to generate a testable hypothesis/research question, which
further depends on measurable demonstrations of abilities shown in the first column of Table 8,
with correlated adaptive-expertise dimensions from Tables 2, 3, and 4 in parentheses.
Table 8
Student Model: Sample Taxonomic (Anderson & Krathwohl, 2001) Alignment

Observable Proficiency | Cognitive Process | Knowledge Type | Recommended Assessment Type
To generate a testable hypothesis/research question | 6.1 Generating/Hypothesizing | Cb: Knowledge of subject-specific techniques & methods | Constructed responses
Step 1 A-1: To summarize the goal of the specific problem-solving challenge* (PSK-1c) | 2.4 Summarizing | Ab: Specific details and elements | Constructed or selected responses
Step 1 A-2: To analyze current knowledge critically (PSS 1a) | 5.2 Critiquing | Db: Contextual and conditional knowledge | Learner-generated critique
Step 1 B-1: To differentiate between everyday and scientific observations based on data (PSK 1c) | 4.1 Differentiating | Ba: Classifications and categories | Constructed responses, selection tasks
Step 1 B-2: To differentiate between testable (scientific) and non-testable questions (PSK 1c) | 4.1 Differentiating | Ba: Classifications and categories | Constructed responses, selection tasks
Step 1 C-1: To use scientific observations in scientific question formation (PSK 2) | 3.2 Implementing/Using | Cb: Subject-specific techniques and methods | Task-oriented procedure
Step 1 C-2: To check whether the research question/hypothesis is testable with available data (PSK 2) | 5.1 Checking | Cb: Subject-specific techniques and methods | Learner-generated verifications
Step 1 C-3: To plan a process for achieving solutions | 6.2 Planning | Cc: Criteria for when to use appropriate procedures | Learner-generated descriptions of plans
Step 1 Gen-1: To recognize discipline-based terminology (e.g., hypothesis, scientific observation) (PSK 1a) | 1.1 Recognizing/Identifying | Aa: Terminology | Verification, matching, forced choice
Step 1 Gen-2: To monitor personal factors (e.g., novelty, motivation) (PSK4a, plus II and M) | 5.1 Checking/Detecting | Dc: Self-knowledge | Learner-generated response
Step 1 Gen-3: To use self-regulation strategies (PSK 4b) | 3.2 Implementing/Using | Da: Strategic knowledge | Learner-generated responses

* In this case, to determine, based on geologic data, the processes that shaped a landscape and the relative timescales/sequences for these processes (based on the problem-specific standards identified by the domain model: NGSS MS-ESS1-4 and MS-ESS2-2); this summary reflects the learner’s mental model/representation of the science-based problem.
A claim about a learner’s relative ability to generate a testable hypothesis/research question is
thus backed by their performance in the sub-areas listed in column 1 above. This particular
approach for the student model is a potential contribution of this study to ECD techniques, as it
has not been used in prior studies.
Task Model
The task model defines the CBLE activities, which are designed to elicit claim-
supporting evidence for learner proficiency variables. It describes what student work products
count as evidence for the targeted KSAs, as well as how the work products are captured (e.g., an
ordering task, a constructed response, a modeling task etc.). In this study, in alignment with the
student model, STEM-practice-based lessons tested and used for more than a decade were
modified for the CBLE to take advantage of its flexible, dynamic affordances (e.g., drawing
tools, drag-and-drop, simulations etc.). The tasks and subtasks aligned with defined learning goals
(Clark et al., 2010; Clark, Mayer, & Thalheimer, 2003; Parnafes, 2007; Plass et al., 2009),
knowledge and cognitive-process types (Anderson & Krathwohl, 2001), NGSS standards
(NGSS Lead States, 2013), and appropriate assessment types (Anderson & Krathwohl, 2001;
Lantz, 2004; Miller, Linn, & Gronlund, 2009; Quellmalz et al., 2009). The appropriate
interactive task presentation was selected based on Anderson and Krathwohl’s (2001) taxonomy
and its correlation of appropriate knowledge, cognitive-process, and assessment types aligned
with relevant NGSS standards. Per the student-model example in Table 8 above, the sequence of
tasks also supplies a scaffolded model for eliciting learner responses, and inferring proficiencies
from them. That is, observable proficiencies in each step/sub-step progressively build up to the
overall standards-based proficiency. Task model variables are shown in Figure 4.
Figure 4
Task Model Variables
Evidence Model
The evidence model demonstrates the way in which student responses (observable work
products) connect to the KSAs documented in the student model and the way in which
proficiencies are measured and scored through potential variables in observable work products.
Divided into two parts (evidence rules and the measurement model), this evidence can be
considered as a probability indicator for adaptive expertise as defined in this study, rather than as
a precise microdevelopmental ruler, since a warrant for the proficiency level is inferred from the
collection of participant data and an evaluation of its correctness or quality.
Evidence rules detail how observable variables (i.e., learner responses) express a
learner’s level of performance in a given task. In this study, tasks were either single or combined
items, depending on the complexity of elements determined necessary for expressing evidence of
a given level of aptitude. In measuring proficiencies, necessary attention was given to
distinguishing between the learner’s representation (schema) and that of an expert community’s
shared representation (paradigm) (Sibley, 2009). How close a learner’s schema is to expert
paradigms is one measure of that learner’s position on the path from novice to expert
understanding. Task-appropriate, standards-aligned rubrics established for each measured
observable variable in learner work products in the CBLE are based on Lantz’s (2004) rubrics
for science assessments. The rubrics serve as a means of assigning points, with lower points
associated with a novice-level understanding, and higher scores signifying greater levels of
expertise. For self reports, Likert-like responses were collected, along with short reflective-style
open-response items where further insight was needed. Where possible, groups of tasks
assessing the same KSA in different ways were used as a triangulation strategy, not only to
provide greater evidence for the claim about learner proficiency, but also to better identify
strengths and weaknesses where microdevelopmental mentoring (by an instructor or by
computer-generated scaffolding) could be targeted for individual learning needs. For example,
for hypothesis generation, learners in the science-centric problem-solving experience had the
opportunity to demonstrate hypothesis formulation through different, highly scaffolded task
formats (e.g., case studies, comparisons/constructive critiques of other learners’ hypotheses,
differentiation between “big picture” questions and evidence-based research questions,
short-answer responses, etc.). Where the research literature suggested it, alternate measures were used.
For example, in some inquiry and innovation measures, Likert-like scales were used per
Nusbaum and Silvia’s (2011) critique of common methods for scoring uniqueness and creativity.
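The rubric logic above can be sketched in code. This is a hypothetical illustration, not the study's actual scoring rules: the feature-matching heuristic, level labels, and thresholds are invented for the example; only the principle (lower points for novice-level responses, higher points for schemas closer to the expert community's shared paradigm) comes from the text.

```python
# Hypothetical rubric levels: low points indicate novice-level understanding,
# high points indicate an expert-like schema (closer to the shared paradigm).
RUBRIC_LEVELS = {
    0: "no evidence of the target proficiency",
    1: "novice: partial, fragmented representation",
    2: "intermediate: mostly correct schema with gaps",
    3: "expert-like: schema aligned with the discipline's shared paradigm",
}

def score_work_product(observed_features, required_features):
    """Assign a rubric score by how many required features the response shows.

    The 0.5 cutoff is an assumed, illustrative threshold, not one from the study.
    """
    matched = sum(1 for f in required_features if f in observed_features)
    fraction = matched / len(required_features)
    if fraction == 0:
        return 0
    if fraction < 0.5:
        return 1
    if fraction < 1.0:
        return 2
    return 3
```

A fuller implementation would carry one such feature list per observable variable, per the task-appropriate, standards-aligned rubrics described above.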
These new standards and grade-level progressions serve as a knowledge map, and where
possible, are compared with Yan and Fischer’s (2007) developmental scale. Table 9 provides an
example of this comparison.
Table 9
Sample Mapping Between Microdevelopmental Level & Standard
Pts | Code | Microdevelopmental Level (Yan & Fischer, 2007) | Grade | NGSS Performance Expectation
8 | AB2 | Abstract Mappings | High School | SEP1.9-12.H Define a design problem that involves the development of a process or system with interacting components and criteria and constraints that may include social, technical, and/or environmental considerations.
7 | AB1 | Systems of Representations or Single Abstract Systems | | [Adds more complex interacting parts with abstract conceptual considerations. Sophistication of work product would determine whether level 7 or 8.]
6 | RP3 | Representational Systems | Middle School | SEP1.6-8.H Define a design problem that can be solved through the development of an object, tool, process, or system and includes multiple criteria and constraints, including scientific knowledge that may limit possible solutions. [Adds scientific knowledge, idea of limiting factors, and greater number of criteria/constraints.]
5 | RP2 | Representational Mappings | Upper Elementary | SEP1.3-5.E Define a simple design problem that can be solved through the development of an object, tool, process, or system and includes several criteria for success and constraints on materials, time, or cost. [Sophistication of work product (creating sets or maps) could determine whether level 4 or 5.]
4 | RP1 | Single Representational Sets or Sensory Motor Systems | Lower Elementary |
3 | SM3 | Sensory Motor Systems | | [Physical use of object/tool in the context of solving a problem.]
While not a direct match, the comparison illustrates how standards, which are created with
cognitive development in mind, follow a similar progression in complexity. Because Dynamic
Skill theory suggests that people often return to earlier levels as they learn, standards
progressions can be used, in part, to judge the quality and sophistication of learner responses
and so help locate learners on their individual learning trajectories. Brain-based definitions for
a microdevelopmental scale, combined with standards used in educational settings, help
determine nuances in the quality and sophistication of learners’ work products; this in turn
shapes the scoring, and thus the developmental ruler, used for the simple model of adaptive
expertise in this study.
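The correspondence in Table 9 can be captured as a simple lookup, for example to flag which grade-band rubric applies to a given microdevelopmental score. A minimal sketch: the data come from Table 9, but the dictionary layout and function name are illustrative, not the study's code.

```python
# Mapping drawn from Table 9: microdevelopmental points/codes (Yan & Fischer, 2007)
# paired with the NGSS grade band used to judge work-product sophistication.
# Levels with no standard listed in Table 9 carry None for the grade band.
MICRODEV_TO_GRADE_BAND = {
    8: ("AB2", "Abstract Mappings", "High School"),
    7: ("AB1", "Systems of Representations or Single Abstract Systems", None),
    6: ("RP3", "Representational Systems", "Middle School"),
    5: ("RP2", "Representational Mappings", "Upper Elementary"),
    4: ("RP1", "Single Representational Sets or Sensory Motor Systems", "Lower Elementary"),
    3: ("SM3", "Sensory Motor Systems", None),
}

def grade_band_for_points(points):
    """Return the NGSS grade band paired with a microdevelopmental score, if any."""
    return MICRODEV_TO_GRADE_BAND[points][2]
```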
Measurement Model
The measurement model synthesizes learners’ performance outcomes in a task (in this
study, a step or sub-step) or across tasks. This model gathers evidence over time to provide
summary scores for learner proficiencies. In this study, summary scores per task (problem
solving step and sub-step) appear in the 2D graphs in Appendix G and summary scores per
problem (for each dimension) appear in the 3D graphs. To ensure the assessment method
matched the available data, the selected measurement model is grounded in classical test theory
(i.e., rubric-based or Likert-like scores as aforementioned). The choice not to use a more
sophisticated psychometric model (e.g., IRT, Rasch, latent-variable models, etc.) was based on
the exploratory nature of the study, the lack of prior models for measuring microdevelopments in adaptive
expertise, and the absence of pre-existing probability distributions for determining learners’
proficiencies in it. While the CBLE was tested, calibrated, and modified accordingly with an
initial learner group, the sample size was not sufficient to provide meaningful item parameters
against which learner responses in the actual test group could be legitimately related.
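The summary-score computation described above (means within each sub-step for the 2D plots, then means across sub-steps for the 3D plots) can be sketched as follows. This is a minimal illustration of the averaging scheme, not the study's actual code; the data shapes are assumed.

```python
from collections import defaultdict

def substep_summaries(item_scores):
    """item_scores: list of ((step, substep), score) pairs -> mean score per sub-step."""
    buckets = defaultdict(list)
    for key, score in item_scores:
        buckets[key].append(score)
    return {key: sum(v) / len(v) for key, v in buckets.items()}

def step_summaries(substep_means):
    """Average sub-step means up to the step level (for problem-level summaries)."""
    buckets = defaultdict(list)
    for (step, _substep), mean in substep_means.items():
        buckets[step].append(mean)
    return {step: sum(v) / len(v) for step, v in buckets.items()}
```

Classical test theory treats each such mean as an estimate of the learner's true proficiency for that step or sub-step, which is consistent with the rubric-based and Likert-like scores used here.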
Presentation Model
The Presentation Model determines how the tasks appear in the CBLE, with common
elements depicted in Figure 5. Each activity then used CBLE templates (Likert-like scales,
short-answer responses etc.) according to the Task and Evidence Models, with aforementioned
taxonomic alignments (Anderson & Krathwohl, 2001) and NGSS alignment. Following beta
testing, some formats changed to reduce cognitive load related to the presentation model.
Figure 5
CBLE Presentation – Common Elements
For the overall framework, both problem-solving experiences had the
same five steps, arranged as interlocking puzzle pieces.
Each opened to show sub-steps.
Sub-steps (1.1, 1.2 etc.) open and close accordion-style, depending on
where the learner is in the problem.
While problem-solving steps can be cyclical and iterative, a simple, linear
approach is used to reduce complexity, and thus cognitive load, given the
novice, introductory level of the participants.
Resources for learners had iconic representations in “My Brain Power Tools,” which were
available to students at all times for reference and for use in given tasks. Where a scaffold is
specifically needed for a problem-solving task, the icon appears with an instruction to consult it.
Assembly Model
The assembly model provides a basic outline that aligns the Student Proficiency Model,
Evidence Model, and Task Model into an assessment whole. It focuses on specifying the order,
mix, and flow of tasks, with both scaffolding and authentic, situated contexts and expert
practices in mind. The assembly model follows five problem-solving steps laid out in the
domain analysis, showing consistency across ECD layers. These five steps are further sub-
divided into sub-steps as shown in Table 10, with activities in each sub-step reflecting the closely
linked student proficiency, evaluation, and task models.
Table 10
CBLE Assembly Model – Steps & Sub-steps
Step Sub-Step
1: Define Problems and
Goals
A: Characterization of the Challenge and Learner Goals
B: Clarification of the Research Question/Design Problem (testability; criteria/constraints)
C: Generation of Testable Question/Problem and Solution Plan
2: Gather Information A: Determination of Problem-Relevant Data and Discipline-based Theories and Procedures
B: Clarification of Data, Theories, and Procedures
C: Application of Data, Theories, and Procedures
3. Propose Solution
Based on Evidence
A: Articulation/Characterization of Potential Model/Solution
B: Experimentation with Potential Model/Solution
C: Elaboration and Revision of Model/Solution
4. Justify Solution
Based on Evidence
A: Characterize Evidence (type, strengths/weaknesses, relationships) and Inferences Related to
Model/Solution
B: Build Logical Evidence-based Argument
5. Evaluate How Well
Solutions Work
A: Targeted Reflection of Learner’s Experience and Self-Identified Improvements (e.g., problem-
solving strategies, newly realized KSAs, acknowledged areas of growth)
B: Short Mini-Case Study of Similar Problem (i.e., asks learners how they’d approach solving a
novel problem of the same type, without asking them to do so in full);
Science: Learners receive image of landscape features different from
those initially delivered, but using the same discipline-based
principles and processes
Engineering Design: Learners receive new technology-based scenario using
the same content, but changing criteria and constraints to
achieve a different design outcome
Based on the discipline-based content of each problem, the assembly model grouped items into
task groups (“TG” in Figure 6), from which scores in the evaluation model are averaged at the
sub-step level for 2D plots and at the step level for 3D plots.
Figure 6
CBLE Assembly Model – Task Groups Per Step/Sub-step
Delivery System Model
The delivery system model outlines how all of the CAF models function as a whole, with
a focus on common, crosscutting elements that support the whole enterprise. For this study, the
CBLE is designed to be cross-platform, able to work in all browsers on major operating systems
on personal computers. All learner work products are collected behind a password-protected
system for the CBLE, with the capability of exporting raw learner responses and providing
automated scores for quantitative responses (e.g., Likert-like scales or answer-key-based
responses) that did not need further processing. Common functional elements include the
organization of the tasks into five problem-solving steps (shown as puzzle pieces in problem
solving) and representational icons for scaffolding help conceptually rooted in NGSS Science &
Engineering Practices and Depth & Complexity prompts (Kaplan, 2012) as documented in the
task model. Computer programming included attention to 508 compliance in order to support
accessibility.
Appendix D
Evidence-centered Design Layer 4: Assessment Implications
Assessment implementation involves authoring tasks, finalizing assessments, creating
measurement models, and establishing states and rules (Behrens, Mislevy, DiCerbo, & Levy,
2010). For this study, task authoring occurred through the authoring section of the CBLE, which
allows the selection of templates (matching, short-answer, mapping etc.) depending on the
knowledge type and cognitive process targeted per Anderson and Krathwohl’s (2001) taxonomy.
To finalize assessments, the CBLE authoring tools allowed automated scoring rules to be
programmed where possible, particularly for quantitative responses such as the Likert-like self
reports, and also allowed rubric-based scores to be entered once calibrated by raters. While
some items were problem-specific, others can be reused across problems as part of a growing
item bank. Reusable items and scoring for them are most often associated with crosscutting
elements such as general standards (i.e., learner proficiencies) for STEM disciplines (e.g.,
“science and engineering practices” and “crosscutting concepts”) and/or adaptive-expertise
indicators in each of the dimensions from Tables 2, 3, and 4 (e.g., rules for scoring items using
concept maps or scientific drawings). The CBLE allows modification of rules and responses per
observations and/or changes in research-based methodologies. In some cases, rules could not be
reliably established, so did not have a scoring schema (e.g., based on initial observations with the
test group, it was unclear whether time on task consistently represented interest, persistence,
confusion, or some other factor; it could also vary for one participant over the course of the
problem-solving experience, as well as among participants). While not scored, the system is
designed to collect and preserve such data for possible future studies.
Appendix E
Evidence-centered Design Layer 5: Assessment Delivery Infrastructure
The assessment delivery infrastructure focuses on overarching aspects for providing the
CBLE to learners. Assessment delivery typically includes four process elements: activity
selection, activity presentation, evidence identification, and evidence accumulation (Behrens,
Mislevy, DiCerbo, & Levy, 2010). Figure 7 provides a brief depiction of the assessment
delivery infrastructure.
Figure 7
Assessment Delivery Infrastructure
Development time involved building the overall CBLE infrastructure for this study by
using and combining various open-source learning tools and making modifications to them,
along with basic HTML. Activity selection has already been described: tasks were drawn from
long-standing, well-tested curricula previously developed to simulate discipline-based, situated
practices in scientific inquiry and engineering design. These activities were modified for the
CBLE, pre-tested with a sample of teachers similar to those participating in the final study, and
modified further based on those outcomes. Activity
presentation relied on the CBLE to display tasks, capture work products (learner responses), and
store them. While numerous programming details were involved, describing that work in depth
is less pertinent to the subject of this study and its results. In general, however, modifications
included such things as placement of task elements (e.g., location of clickable buttons) to make
the experience more intuitive and to reduce extraneous cognitive load. The basic CBLE
environment is shown in Figure 5.
For evidence identification, the CBLE provided autoscoring for learners’ quantitative
work products. Rubric-based scoring for more qualitative responses could be done primitively
within the CBLE’s basic version of an authoring tool, but it proved faster and more effective to
export responses to a spreadsheet, add the rubric, and establish scores through inter-rater
reviews, with the ability to enter the final scores back into the tool. Evidence accumulation
involved synthesizing scores in each sub-step.
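The spreadsheet-based inter-rater step described above can be sketched as follows. This is a hypothetical illustration of reconciling two raters' rubric scores, not the study's documented procedure; the one-point tolerance is an assumed parameter.

```python
def reconcile_scores(rater_a, rater_b, max_gap=1):
    """Return (final_scores, flagged_indices) for paired rubric scores.

    Scores within max_gap points of each other are averaged; larger
    disagreements are flagged (final score left as None) for discussion.
    """
    finals, flagged = [], []
    for i, (a, b) in enumerate(zip(rater_a, rater_b)):
        if abs(a - b) > max_gap:
            flagged.append(i)   # disagreement too large: resolve by discussion
            finals.append(None)
        else:
            finals.append((a + b) / 2)
    return finals, flagged
```

In a workflow like the one described, the reconciled scores would then be entered back into the CBLE tool and accumulated into sub-step summaries.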
Appendix F
Adaptive Expertise Pre/Post Assessment Items
The following shows Likert-like scales used in the study.
Table 11
Likert-like Scales Used in the Study
Amount (AM): Very Low | Low | Somewhat Low | Somewhat High | High | Very High
Frequency (F): Almost Never | Not Very Often | Sometimes | Regularly | Very Often | Almost Always
Importance (IM): Not At All Important | Not Very Important | Not Important | Important | Very Important | Extremely Important
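Autoscoring such Likert-like responses (as mentioned in Appendices D and E) amounts to mapping each label to its position on the scale. A minimal sketch, assuming a 1-6 numeric coding that the study does not actually specify:

```python
# Scale labels from Table 11; the 1-6 point values are an assumed coding,
# shown only to illustrate automated scoring of quantitative responses.
SCALES = {
    "AM": ["Very Low", "Low", "Somewhat Low", "Somewhat High", "High", "Very High"],
    "F": ["Almost Never", "Not Very Often", "Sometimes", "Regularly",
          "Very Often", "Almost Always"],
}

def autoscore(scale, label):
    """Convert a Likert-like response label to a 1-6 score."""
    return SCALES[scale].index(label) + 1
```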
Intervention Part 1: Preliminary Data Collection
Table 12 shows basic demographic information collected prior to the experience.
Table 12
Survey Questions: Basic Demographic Data
1. I describe myself as:
☐ Male ☐ Female
2. If I had to choose, I would primarily describe myself as (check one):
☐ American Indian or Alaska Native
(origins in any of the original peoples of North, Central, and South America and a tribal affiliation or community attachment)
☐ Asian
(origins in any of the original peoples of the Far East, Southeast Asia, or the Indian subcontinent, including, for example, Cambodia, China, India, Japan, Korea, Malaysia, Pakistan, the Philippine Islands, Thailand, and Vietnam)
☐ Black or African American
(origins in any of the Black racial groups of Africa, including Caribbean Islanders)
☐ Native Hawaiian or other Pacific Islander
(origins in any of the original peoples of Hawaii, Guam, Samoa, or other Pacific Islands)
☐ White
(origins in any of the original peoples of Europe, the Middle East, or North Africa)
3. I would also describe myself as (check as many as apply):
☐ American Indian or Alaska Native
(origins in any of the original peoples of North, Central, and South America and a tribal affiliation or community attachment)
☐ Asian
(origins in any of the original peoples of the Far East, Southeast Asia, or the Indian subcontinent, including, for example, Cambodia, China, India, Japan, Korea, Malaysia, Pakistan, the Philippine Islands, Thailand, and Vietnam)
☐ Black or African American
(origins in any of the Black racial groups of Africa, including Caribbean Islanders)
☐ Native Hawaiian or other Pacific Islander
(origins in any of the original peoples of Hawaii, Guam, Samoa, or other Pacific Islands)
☐ White
(origins in any of the original peoples of Europe, the Middle East, or North Africa)
4. Regardless of race (above), I describe myself as Hispanic or Latino.
(A person of Cuban, Mexican, Puerto Rican, South or Central American, or other Spanish culture or origin.)
☐ Yes ☐ No
5. My age range is:
☐ 18-24 ☐ 25-34 ☐ 35-44 ☐ 45-54 ☐ 55-64 ☐ 65-74 ☐ Over 75
6. I teach at: [Name of School]
7. I teach [Grade]:
Table 13 shows data items on learners’ prior knowledge collected prior to the experience
using a Likert-like scale for amount.
Table 13
Survey Questions: Prior Knowledge
My knowledge of . . . is:
science content
technology/engineering
mathematics
computer skills
My experience in teaching . . . is:
my students with computer-based lessons
myself with computer-based lessons
my students how to solve novel, STEM-based problems without one right answer
my students to use full inquiry* in science
*full inquiry=posing questions, problems, or scenarios that require students to hypothesize and test
to discover and to reach new understandings.
My use of teaching techniques that help students question their prior understanding is:
To make learners’ schema explicit, concept maps were collected pre/post experience
through hand-drawn models with instructions per Figure 8.
Figure 8
Pre/Post Concept Maps: Problem-solving Schemata
Date__________________ Code Name_________________
If you had to solve a complex problem without one right answer, many possible solutions, or solutions that
no one has invented yet, what would you do? What process for trying to solve the problem would you
create?
Please draw a diagram of your ideas of how you would solve a complex problem without a single or known
answer.
Use circles, triangles, squares, or other shapes to identify and organize your steps, activities, concepts, or
other efforts you think you would need to do to solve problems.
Use lines with arrows to show connections between these parts and write on the line what the connection is.
Need some examples first? (see reverse)
[blank]
Your task (see reverse) is to draw a diagram of your ideas of how to solve a complex problem without a
single or known answer.
Example:
Let’s take a simple “problem”: how to make a peanut butter and jelly sandwich. While there are many
creative solutions, here are three possible ways to create a diagram showing how to do it.
[3 concept maps presented]
Use circles, triangles, squares, or other shapes to identify and organize your steps, activities,
concepts, or other efforts you think are needed for solving problems.
Use lines with arrows to show connections between these parts and write on the line what the connection is.
Learner self-assessments were collected as shown in Table 14. For convenience of
review, items are organized here by sub-dimensional variables used in this study. In the survey
administered to participants, they appeared in a different order and were broken up into sections
to prevent fatigue and/or excessive cognitive load in responding. No indication of the
independent variables (dimensions or sub-dimensions) was given to participants.
Table 14
Pre/Post Self Assessment Survey
Please answer the following questions as honestly, accurately, and completely as you can. You will not be graded on
your responses. Your answers will potentially be used with others to improve computer-learning experiences for students
in the future.
QUESTIONS [Reference – not indicated in survey] SCALE
[Independent Variable 1: KNOWLEDGE & SKILLS (KS)]
[1A PROBLEM-SOLVING KNOWLEDGE]
• I consciously check my thinking as I learn. [metacognitive]
• When something I believe differs from what I observe, I know ways of seeking other
explanations. [strategic]
• When I solve complex problems,
• I use approaches that enable me to consider alternative explanations. [strategic]
• I keep track of my ability to think creatively. [metacognitive]
• I know steps and strategies for solving problems. [conceptual]
• I am aware of my own beliefs about the nature of things and how they work. [metacognitive]
FREQUENCY
Describe your ability:
• to find information needed to solve problems. [procedural]
• to organize information needed to solve problems. [procedural]
• to apply principles and concepts when solving problems. [conceptual]
• to create good models of problem-solving steps, processes, and strategies. [strategic]
• to create good models that represent the meaning of things. [strategic]
• to keep track of how well I am learning. [metacognitive]
AMOUNT
[1B PROBLEM-SOLVING SKILLS]
• I take time to relate what I already know to what I’m learning.
• I consider which procedures to use depending on the context of a problem.
FREQUENCY
When you learn, describe your ability:
• to combine different ideas in a way that makes sense
• to figure out what something does or is used for (its function)
• to figure out how something works (its mechanics)
• to figure out what something is made of (its parts)
When you learn, describe your tendency:
• to memorize facts and expert explanations
• to create models, pictures, charts, and other visualizations
• to use strategies I know work well
• to invent new strategies that could work well
• to analyze the accuracy of my current knowledge
AMOUNT
Independent Variable 2: INQUIRY & INNOVATION (II)
[2A CURIOSITY]
• I enjoy asking questions.
• I question what experts say are facts.
• I like to wonder why things are the way they are.
• When I learn something new, I follow up to find out more.
• I seek just the basics of what I need to know.
FREQUENCY
When you learn, describe your tendency:
• to try to discover things that aren’t obvious
• to seek information if you don’t understand something
• to look for answers when something is a mystery
AMOUNT
[2B FLEXIBILITY]
• I rely on what I already know to solve problems.
• I hold onto my ideas about things.
When I am learning:
• I experiment.
• I change my ideas to come up with better ideas and solutions.
FREQUENCY
When you learn, describe your tendency:
• to seek evidence for your existing ideas
• to pursue multiple options
• to integrate ideas from others
• to find a solution as quickly as possible
• to rely on proven methods to solve problems
AMOUNT
[2C GENERATIVITY]
• I have trouble coming up with new ideas.
• I create my own examples to help me figure things out.
FREQUENCY
• I invent my own ways of solving things.
When you learn, describe your tendency:
• to come up with new ideas
• to invent ways of improving things
• to rely on ideas known to work
AMOUNT
[2D PERSONAL NOVELTY]
• Most things I hear about or learn are new to me.
• I perceive changes in my own ideas.
FREQUENCY
• My tendency to think “Wow. I didn’t know that!” is: AMOUNT
Independent Variable 3: MOTIVATION (M)
[3A MASTERY ORIENTATION]
• I engage in learning to satisfy my own interests.
• I set my own criteria for success and failure.
• My primary goal in learning is to increase my own understanding.
When I learn, I evaluate myself based on:
• how well I do in relationship to my peers.
• how well I understand a topic.
FREQUENCY
• To me personally, performing to a higher standard than most of my peers is IMPORTANCE
[3B EMOTIONAL AWARENESS]
• I notice my emotions when learning is hard.
• I notice my emotions when learning is easy.
• I consciously encourage myself when learning is frustrating.
FREQUENCY
Describe your awareness of:
• how you feel when learning something new
• how you feel when you make mistakes
AMOUNT
[3C RISK TAKING]
• I don't mind making mistakes as I learn.
• I hesitate to try new things when I think I might fail.
• I answer questions only when I am sure my answers are correct.
• I prefer to rely on known ways of solving problems.
FREQUENCY
• When I learn, my comfort in experimenting is AMOUNT
[3D SELF CONCEPT]
I do well in:
• school.
• math.
• science.
• language arts.
• using computers.
• solving complex problems without one right answer.
FREQUENCY
[3E SELF EFFICACY]
• I am confident I have what it takes to solve complex problems.
• I am not sure I can successfully solve STEM-based challenges.
FREQUENCY
• My ability to learn how to solve complex problems is AMOUNT
[3F INTEREST/ATTITUDE]
• I enjoy books, websites, and tv shows about:
• science, nature, and discovery.
• robotics, electronics, engineering, construction, or “do it yourself” challenges.
• When solving complex problems in learning, I concentrate so much that time seems to pass
quickly.
FREQUENCY
• To me personally, learning problem-solving strategies is IMPORTANCE
• Compared to other topics, my interest in STEM problem-solving is AMOUNT
[3G PERSISTENCE]
• My tendency to keep trying until I understand is
• When learning a complex STEM topic, my tendency to give up is
AMOUNT
• I try things until I succeed.
• When I can’t figure something out, I move on to something else of interest.
• I make an effort to practice in areas where I notice I need it.
FREQUENCY
Intervention Part 2:
CBLE Data Collection in STEM Problem Solving
At various points during the computer-based learning experience, the following participant data
were recorded as indicators of each variable:
Table 15
Data Collection Embedded in the CBLE
INDEPENDENT VARIABLE 1: KNOWLEDGE & SKILLS
Subvariables:
1A Problem-Solving
Knowledge, 1B
Problem Solving
Skills, & 1C STEM
Content
• Items for Factual and Conceptual types of STEM Knowledge and Skills are evaluated in the
computer-based STEM lessons used in this study per rubrics based on well-vetted national
standards (e.g., Next Generation Science Standards, Common Core, 21st-century skills, etc.).
• An example of assessing STEM content knowledge is asking participants to conclude
which life form would be most likely to survive in a given environment following a study of
the characteristics of life and environmental conditions suitable for different kinds of
organisms.
• An example of problem-solving knowledge assessment is a participant-generated concept
map of problem-solving procedures.
• General learning assessments of content were embedded within the interactives.
INDEPENDENT VARIABLE 2: INQUIRY & INNOVATION
Subvariable 2A:
CURIOSITY
Quantitative
Likert-like survey questions:
AMOUNT
• My curiosity about this question/problem was:
FREQUENCY
• I came up with questions about what I wondered about
• When I solved this problem, I actively looked for answers to my own questions.
Computer-based data (passively collected):
• # of questions participant asks about the problem/part of the problem
• # of times participant seeks additional information (supplied through help scaffolds)
without a prompt
Qualitative
Language reflecting curiosity (e.g., “I want to know….” “I wonder about….”) in responses to short
answer/open answer metacognitive reflection question (Take a moment to reflect on the task you
just did. How would you describe your experience?); assessment through open coding
Subvariable 2B:
FLEXIBILITY
Quantitative
Computer-based data (passively collected):
• # of times participant modifies their initial ideas (e.g., changes their model/schema of how
something works)
• # of times participant incorporates new ideas from others
Qualitative
Language reflecting flexibility (e.g., “I changed my ideas because….”) in responses to short
answer/open answer metacognitive reflection question (Take a moment to reflect on the task you
just did. How would you describe your experience?); assessment through open coding
Subvariable 2C:
GENERATIVITY
Quantitative
Likert-like survey questions:
AMOUNT
• The amount of new things I created/contributed in this problem/part of the problem was:
FREQUENCY
• I created/contributed new ideas
Computer-based data (passively collected):
• # of times participant submits new ideas
Qualitative
Language reflecting generativity (e.g., I created a new idea that ….) in responses to short
answer/open answer metacognitive reflection question (Take a moment to reflect on the task you
just did. How would you describe your experience?); assessment through open coding
Subvariable 2D:
PERSONAL
NOVELTY
Quantitative
Likert-like survey questions:
AMOUNT
• The amount of new things I learned in this problem/part of the problem was:
FREQUENCY
• In this problem/part of the problem, I noticed ideas that were new to me:
Qualitative
Language reflecting personal novelty (e.g., I found out something I didn’t know before….) in
responses to short answer/open answer metacognitive reflection question (Take a moment to
reflect on the task you just did. How would you describe your experience?); assessment through
open coding
Responses to additional short answer/open answer questions, assessment through open coding:
• What new things did you learn in this problem/this part of the problem?
• What new idea surprised you in this problem/this part of the problem?
• In solving this problem, what ideas from other participants or from experts were particularly
intriguing to you? Why?
INDEPENDENT VARIABLE 3: MOTIVATION
Subvariable 3A:
MASTERY
ORIENTATION
Quantitative
Likert-like survey questions:
IMPORTANCE
• To me personally, increasing my own understanding while solving this problem/part of the
problem was
• Doing better than my peers on this problem/this part of the problem was
Qualitative
Language reflecting mastery orientation (e.g., I wanted to get really good at this problem-solving
technique….) in responses to short answer/open answer metacognitive reflection question (Take
a moment to reflect on the task you just did. How would you describe your experience?);
assessment through open coding
Subvariable 3B:
Emotional
Awareness
Quantitative
Likert-like survey questions:
AMOUNT
• My awareness of my emotions during this problem/part of the problem was:
FREQUENCY
• I noticed how I felt during this problem/part of the problem.
• I noticed how I felt when making mistakes during this problem/part of the problem.
• I noticed how I felt when learning something new during this problem/part of the problem.
Qualitative
Language reflecting emotional awareness (e.g., I was frustrated I couldn’t figure out x at first….)
in responses to short answer/open answer metacognitive reflection question (Take a moment to
reflect on the task you just did. How would you describe your experience?); assessment through
open coding
Subvariable 3C:
RISK TAKING
Quantitative
Likert-like survey questions:
FREQUENCY
While solving this problem,
• I didn’t mind when I made mistakes.
• I asked questions even when I didn’t know the answer.
• I was afraid I’d fail.
Computer-based data (passively collected):
• # of times participant tries new things/experiments when problem-solving failure is likely
• # of times participant answers when solutions are vague
Qualitative
Language reflecting risk taking (e.g., I tried new things even though I might fail; I was hesitant to try new
things) in responses to short answer/open answer metacognitive reflection question (Take a
moment to reflect on the task you just did. How would you describe your experience?);
assessment through open coding
Subvariable 3D:
SELF CONCEPT
Quantitative
Likert-like survey question:
AMOUNT
• I do/did well in solving this type of problem/part of the problem.
• I do/did well in understanding this type of STEM concept.
Qualitative
Language reflecting self concept (e.g., I do/did well in this task) in responses to short
answer/open answer metacognitive reflection question (Take a moment to reflect on the task you
just did. How would you describe your experience?); assessment through open coding
Subvariable 3E:
SELF EFFICACY
Quantitative
Likert-like survey questions:
AMOUNT
Describe your confidence in your ability to solve:
• this problem/this part of the problem.
• future problems similar to this one.
• any new science or technology problem.
FREQUENCY
In this part of the problem, how often did you feel:
• you had the capability to solve the problem?
• you did not have the capability to solve the problem?
Qualitative
Language reflecting self efficacy (e.g., I am confident now that I can do this because….) in
responses to short answer/open answer metacognitive reflection question (Take a moment to
reflect on the task you just did. How would you describe your experience?); assessment through
open coding
Responses to additional short answer/open answer question, assessment through open coding:
• Now that you have completed this task, imagine that you are asked to join a team of
[scientists/engineers/explorers] who have to [insert relevant problem-solving scenario].
Describe what capabilities you think you have to contribute to succeed in this career. Describe
what capabilities you think you would need to develop to succeed in this career.
Subvariable 3F:
INTEREST/
ATTITUDE
Quantitative:
Likert-like survey questions:
AMOUNT
• My interest in solving this problem/this part of the problem was:
FREQUENCY
• When solving this problem, I felt engaged:
Qualitative:
Language reflecting interest (e.g., I really was intrigued by….) in responses to short answer/open
answer metacognitive reflection question (Take a moment to reflect on the task you just did. How
would you describe your experience?); assessment through open coding
Responses to more short answer/open answer questions, assessment through open coding:
• What are/were the most interesting things about this problem/solving this problem?
• What are/were the least interesting things about this problem/solving this problem?
Subvariable 3G:
PERSISTENCE
Quantitative:
Likert-like survey questions:
FREQUENCY
• When solving this problem/this part of the problem, I wanted to/did not want to keep going:
IMPORTANCE
• How important was it to you to keep going until you solved this problem/this part of the
problem?
Computer-based data (passively collected):
• time spent on each problem subtask
• # of times answers revised until solution reached
Qualitative:
Language reflecting persistence (e.g., I didn’t stop until I solved it….) in responses to short
answer/open answer metacognitive reflection question (Take a moment to reflect on the task you
just did. How would you describe your experience?); assessment through open coding
Responses to more short answer/open answer questions, assessment through open coding:
• List anything you did to keep going until you solved the problem.
• List anything that kept you from solving the problem.
• Describe the kind of effort you put into solving this problem.
• Describe any feelings or thoughts you had that kept you from persisting in finding solutions.
Additional:
Cognitive Load
AMOUNT
• The complexity of this task was
• The mental effort I used was
Intervention Part 3:
Observations
Observations involved handwritten note-taking only, and included:
• Description of the Setting
• Did the participants experience any trouble with the computer-based learning experience?
When/in what context?
• Did the participants have enough time to complete each problem-solving step? Which
ones took more time than anticipated?
• Did participants stay on task? If not, when did they lose focus on the problem-solving
activity?
• Did participants ask for help in:
• Understanding the task
• Using the computer/interactives
• Other
• Descriptions of any unforeseen events related to the setting
• Notes from Study Debrief and Participant Comments
Appendix G:
Participant Patterns
Person 1 Data
Person 1 was a White female in the 18 - 24 age range with 4 years of teaching experience. She
was a science teacher at a middle school with around 350 students, 56% of whom were minority
and 80% of whom received free/reduced lunch. On the 8th-grade state proficiency tests, 50% of
students demonstrated desired proficiency in math and 53% in reading.
In terms of a self-description in the pre-experience survey, Person 1 reported:
• “low” knowledge of science content and technology/engineering
• “somewhat low” mathematics knowledge and computer skills
• “very low” experience in teaching students or herself with computer-based lessons
• “very low” experience in teaching students how to solve novel, STEM based problems or
how to engage in full science inquiry
Based on the pre-experience survey, Person 1’s self-assessment scores were as follows: KS –
2.57; II – 3.42; M – 3.60.
Figure G-1A:
Person 1 2D Composite Score
Figure G-1B:
Person 1 2D Knowledge & Skills: Science and Technology Problems
Figure G-1C:
Person 1 2D Inquiry and Innovation: Science and Technology Problems
Figure G-1D:
Person 1 2D Motivation Patterns: Science and Technology Problems
Figure G-1E:
Person 1 3D Composite Patterns: Science and Technology Problems
Figure G-1F:
Person 1 Pre/Post-Experience Problem Solving Concept Maps
Person 2 Data
Person 2 was an Asian male in the 25 - 34 age range with 8 years of teaching experience. He
was a science teacher at a high school with around 875 students, 81% of whom were minority
and 63% of whom received free/reduced lunch. On the 10th-grade state proficiency tests, 33% of
students demonstrated desired proficiency in math and 57% in reading.
In terms of a self-description in the pre-experience survey, Person 2 reported:
• “somewhat high” knowledge of science content and technology/engineering
• “somewhat low” mathematics knowledge and computer skills
• “somewhat low” experience in teaching students or himself with computer-based lessons
• “low” experience in teaching students how to solve novel, STEM based problems or how
to engage in full science inquiry
Based on the pre-experience survey, Person 2’s self-assessment scores were as follows: KS –
4.78; II – 4.73; M – 4.90.
Figure G-2A:
Person 2 2D Composite Score
Figure G-2B:
Person 2 2D Knowledge & Skills: Science and Technology Problems
Figure G-2C:
Person 2 2D Inquiry and Innovation: Science and Technology Problems
Figure G-2D:
Person 2 2D Motivation Patterns: Science and Technology Problems
Figure G-2E:
Person 2 3D Composite Patterns: Science and Technology Problems
Figure G-2F:
Person 2 Pre/Post-Experience Problem Solving Concept Maps
Person 3 Data
Person 3 was a Black male in the 25-34 age range with 2 years of teaching experience. He was a
math teacher at a high school with around 1400 students, 44% of whom were minority and 47%
of whom received free/reduced lunch. On the 10th-grade state proficiency tests, 58% of students
demonstrated desired proficiency in math and 69% in reading.
In terms of a self-description in the pre-experience survey, Person 3 reported:
• “somewhat low” knowledge of science content
• “somewhat high” knowledge of technology/engineering
• “high” mathematics knowledge
• “somewhat high” computer skills
• “somewhat low” experience in teaching students or himself with computer-based lessons
• “low” experience in teaching students how to solve novel, STEM based problems or how
to engage in full science inquiry
Based on the pre-experience survey, Person 3’s self-assessment scores were as follows:
KS – 4.43; II – 4.12; M – 4.77.
Figure G-3A:
Person 3 2D Composite Score
Figure G-3B:
Person 3 2D Knowledge & Skills: Science and Technology Problems
Figure G-3C:
Person 3 2D Inquiry and Innovation: Science and Technology Problems
Figure G-3D:
Person 3 2D Motivation Patterns: Science and Technology Problems
Figure G-3E:
Person 3 3D Composite Patterns: Science and Technology Problems
Figure G-3F:
Person 3 Pre/Post-Experience Problem Solving Concept Maps
Person 4 Data
Person 4 was a White female in the 35-44 age range with 1 year of student/pre-service teaching
experience. She was a science teacher at a middle school with around 1400 students, 68% of
whom were minority and 55% of whom received free/reduced lunch. On the 8th-grade state
proficiency tests, 48% of students demonstrated desired proficiency in math and 71% in reading.
In terms of a self-description in the pre-experience survey, Person 4 reported:
• “somewhat high” knowledge of science content and technology/engineering
• “low” mathematics knowledge
• “somewhat low” computer skills
• “somewhat low” experience in teaching students or herself with computer-based lessons
• “low” experience in teaching students how to solve novel, STEM based problems or how
to engage in full science inquiry
Based on the pre-experience survey, Person 4’s self-assessment scores were as follows:
KS – 3.83; II – 4.31; M – 4.60.
Figure G-4A:
Person 4 2D Composite Score
Figure G-4B:
Person 4 2D Knowledge & Skills: Science and Technology Problems
Figure G-4C:
Person 4 2D Inquiry and Innovation: Science and Technology Problems
Figure G-4D:
Person 4 2D Motivation Patterns: Science and Technology Problems
Figure G-4E:
Person 4 3D Composite Patterns: Science and Technology Problems
Figure G-4F:
Person 4 Pre/Post-Experience Problem Solving Concept Maps
Person 5 Data
Person 5 was an Asian female in the 18 - 24 age range with 1 year of student/pre-service
teaching experience. She was a science teacher at a high school with around 2500 students, 50%
of whom were minority and 50% of whom received free/reduced lunch. On the 10th-grade state
proficiency tests, 37% of students demonstrated desired proficiency in math and 65% in reading.
In terms of a self-description in the pre-experience survey, Person 5 reported:
• “somewhat low” knowledge of science content
• “very low” knowledge of technology/engineering
• “somewhat low” mathematics knowledge
• “somewhat low” computer skills
• “very low” experience in teaching students or herself with computer-based lessons
• “very low” experience in teaching students how to solve novel, STEM based problems or
how to engage in full science inquiry
Based on the pre-experience survey, Person 5’s self-assessment scores were as follows:
KS – 3.65; II – 3.85; M – 4.70.
Figure G-5A:
Person 5 2D Composite Score
Figure G-5B:
Person 5 2D Knowledge & Skills: Science and Technology Problems
Figure G-5C:
Person 5 2D Inquiry and Innovation: Science and Technology Problems
Figure G-5D:
Person 5 2D Motivation Patterns: Science and Technology Problems
Figure G-5E:
Person 5 3D Composite Patterns: Science and Technology Problems
Figure G-5F:
Person 5 Pre/Post-Experience Problem Solving Concept Maps
Person 6 Data
Person 6 was a Latino male in the 25 - 34 age range with 2 years of student/pre-service teaching
experience. He was a science teacher at a middle school with around 350 students, 56% of
whom were minority and 80% of whom received free/reduced lunch. On the 8th-grade state
proficiency tests, 50% of students demonstrated desired proficiency in math and 53% in reading.
In terms of a self-description in the pre-experience survey, Person 6 reported:
• “somewhat low” knowledge of science content
• “low” knowledge of technology/engineering
• “somewhat low” mathematics knowledge
• “somewhat high” computer skills
• “somewhat low” experience in teaching students with computer-based lessons
• “somewhat high” experience teaching himself with computer-based lessons
• “low” experience in teaching students how to solve novel, STEM based problems or how
to engage in full science inquiry
Based on the pre-experience survey, Person 6’s self-assessment scores were as follows:
KS – 4.00; II – 3.85; M – 5.13.
Figure G-6A:
Person 6 2D Composite Score
Figure G-6B:
Person 6 2D Knowledge & Skills: Science and Technology Problems
Figure G-6C:
Person 6 2D Inquiry and Innovation: Science and Technology Problems
Figure G-6D:
Person 6 2D Motivation Patterns: Science and Technology Problems
Figure G-6E:
Person 6 3D Composite Patterns: Science and Technology Problems
Figure G-6F:
Person 6 Pre/Post-Experience Problem Solving Concept Maps
Person 7 Data
Person 7 was a Latina female in the 25 - 34 age range with 2 years of student/pre-service
teaching experience. She was a math teacher at a high school with around 2000 students, 73% of
whom were minority and 52% of whom received free/reduced lunch. On the 10th-grade state
proficiency tests, 49% of students demonstrated desired proficiency in math and 64% in reading.
In terms of a self-description in the pre-experience survey, Person 7 reported:
• “somewhat high” knowledge of science content
• “very low” knowledge of technology/engineering
• “high” mathematics knowledge
• “somewhat low” computer skills
• “low” experience in teaching students or herself with computer-based lessons
• “low” experience in teaching students how to solve novel, STEM based problems or how
to engage in full science inquiry
Based on the pre-experience survey, Person 7’s self-assessment scores were as follows:
KS – 4.43; II – 4.31; M – 4.80.
Figure G-7A:
Person 7 2D Composite Score
Figure G-7B:
Person 7 2D Knowledge and Skills: Science and Technology Problems
Figure G-7C:
Person 7 2D Inquiry and Innovation: Science and Technology Problems
Figure G-7D:
Person 7 2D Motivation Patterns: Science and Technology Problems
Figure G-7E:
Person 7 3D Composite Patterns: Science and Technology Problems
Figure G-7F:
Person 7 Pre/Post-Experience Problem Solving Concept Maps
Person 8 Data
Person 8 was a White female in the 25 - 34 age range with 1 year of student/pre-service teaching
experience. She was a science teacher at a private middle school with around 450 students, 63%
of whom were minority. Given the school was private, no other demographic data were
available.
In terms of a self-description in the pre-experience survey, Person 8 reported:
• “somewhat high” knowledge of science content and technology/engineering
• “somewhat high” mathematics knowledge and computer skills
• “very high” experience in teaching students or herself with computer-based lessons
• “very low” experience in teaching students how to solve novel, STEM based problems or
how to engage in full science inquiry
Based on the pre-experience survey, Person 8’s self-assessment scores were as follows:
KS – 4.35; II – 3.92; M – 5.27.
Figure G-8A:
Person 8 2D Composite Score
Figure G-8B:
Person 8 2D Knowledge and Skills: Science and Technology Problems
Figure G-8C:
Person 8 2D Inquiry and Innovation Patterns: Science and Technology Problems
Figure G-8D:
Person 8 2D Motivation Patterns: Science and Technology Problems
Figure G-8E:
Person 8 3D Composite Patterns: Science and Technology Problems
Figure G-8F:
Person 8 Pre/Post-Experience Problem Solving Concept Maps
Person 9 Data
Person 9 was a White female in the 25-34 age range with 8 years of teaching experience. She
was a science teacher at a private middle school with around 350 students, 60% of whom were
minority. Given the school was private, no other demographic data were available.
In terms of a self-description in the pre-experience survey, Person 9 reported:
• “high” knowledge of science content, technology/engineering, and mathematics
• “high” computer skills
• “high” experience in teaching students or herself with computer-based lessons
• “very low” experience in teaching students how to solve novel, STEM based problems
• “very high” experience in teaching students how to engage in full science inquiry
Based on the pre-experience survey, Person 9’s self-assessment scores were as follows:
KS – 4.52; II – 4.73; M – 5.33.
Figure G-9A:
Person 9 2D Composite Score
Figure G-9B:
Person 9 2D Knowledge and Skills: Science and Technology Problems
Figure G-9C:
Person 9 2D Inquiry and Innovation: Science and Technology Problems
Figure G-9D:
Person 9 2D Motivation Patterns: Science and Technology Problems
Figure G-9E:
Person 9 3D Composite Patterns: Science and Technology Problems
Figure G-9F:
Person 9 Pre/Post-Experience Problem Solving Concept Maps
Person 10 Data
Person 10 was an Asian male in the 18 - 24 age range with 1 year of student/pre-service teaching
experience. He was a science teacher at a high school with around 1800 students, 38% of whom
were minority and 58% of whom received free/reduced lunch. On the 10th-grade state
proficiency tests, 58% of students demonstrated desired proficiency in math and 69% in reading.
In terms of a self-description in the pre-experience survey, Person 10 reported:
• “high” knowledge of science content, technology/engineering, and mathematics
• “somewhat high” computer skills
• “somewhat high” experience in teaching students or himself with computer-based lessons
• “low” experience in teaching students how to solve novel, STEM based problems
• “very low” experience in teaching students how to engage in full science inquiry
Based on the pre-experience survey, Person 10’s self-assessment scores were as follows:
KS – 4.83; II – 4.73; M – 5.10.
Figure G-10A:
Person 10 2D Composite Score
Figure G-10B:
Person 10 2D Knowledge & Skills: Science and Technology Problems
Figure G-10C:
Person 10 2D Inquiry and Innovation: Science and Technology Problems
Figure G-10D:
Person 10 2D Motivation Patterns: Science and Technology Problems
Figure G-10E:
Person 10 3D Composite Patterns: Science and Technology Problems
Figure G-10F:
Person 10 Pre/Post-Experience Problem Solving Concept Maps
Person 11 Data
Person 11 was an Asian/Pacific Islander female in the 35 - 44 age range with 2 years of teaching
experience. She was a science teacher at a high school with around 1400 students, 44% of whom
were minority and 47% of whom received free/reduced lunch. On the 10th-grade state
proficiency tests, 58% of students demonstrated desired proficiency in math and 69% in reading.
In terms of a self-description in the pre-experience survey, Person 11 reported:
• “very low” knowledge of science content, technology/engineering, and mathematics
• “somewhat high” computer skills
• “somewhat low” experience in teaching students or herself with computer-based lessons
• “very low” experience in teaching students how to solve novel, STEM based problems or
how to engage in full science inquiry
Based on the pre-experience survey, Person 11’s self-assessment scores were as follows:
KS – 5.04; II – 5.08; M – 4.70.
Figure G-11A:
Person 11 2D Composite Score
Figure G-11B:
Person 11 2D Knowledge and Skills Patterns: Science and Technology Problems
Figure G-11C:
Person 11 2D Inquiry and Innovation: Science and Technology Problems
Figure G-11D:
Person 11 2D Motivation Patterns: Science and Technology Problems
Figure G-11E:
Person 11 3D Composite Patterns: Science and Technology Problems
Figure G-11F:
Person 11 Pre/Post-Experience Problem Solving Concept Maps
Person 12 Data
Person 12 was a White male in the 25-34 age range with 2 years of student/pre-service teaching
experience. He was a science teacher at a high school with around 875 students, 81% of whom
were minority and 63% of whom received free/reduced lunch. On the 10th-grade state
proficiency tests, 33% of students demonstrated desired proficiency in math and 57% in reading.
In terms of a self-description in the pre-experience survey, Person 12 reported:
• “somewhat high” knowledge of science content, technology/engineering, mathematics,
and computer skills
• “somewhat low” experience in teaching students or himself with computer-based lessons
• “somewhat low” experience in teaching students how to solve novel, STEM based
problems
• “somewhat high” experience in teaching students how to engage in full science inquiry
Based on the pre-experience survey, Person 12’s self-assessment scores were as follows:
KS – 4.74; II – 4.69; M – 5.37.
Figure G-12A:
Person 12 2D Composite Score
Figure G-12B:
Person 12 2D Knowledge & Skills: Science and Technology Problems
Figure G-12C:
Person 12 2D Inquiry and Innovation: Science and Technology Problems
Figure G-12D:
Person 12 2D Motivation Patterns: Science and Technology Problems
Figure G-12E:
Person 12 3D Composite Patterns: Science and Technology Problems
Figure G-12F:
Person 12 Pre/Post-Experience Problem Solving Concept Maps
Abstract
The purpose of this exploratory study is to understand how educators in high‐need environments develop adaptive expertise in STEM‐related, ill‐structured problem solving. The research question considered what microdevelopmental patterns emerged in three proposed dimensions of adaptive expertise (knowledge and skills, inquiry and innovation, and motivation) and their transfer through time. Educator participants were enrolled in a university‐level education degree program and taught underserved students. The research design included a computer‐based learning environment with two STEM‐based, ill‐structured problems. Data collection occurred through a pre‐study demographics survey, two learning sessions, and observations with note taking. Evidence‐centered design (ECD) guided data analysis. Findings showed that teachers had highly complex and dynamic individual microdevelopmental patterns, including the most unstable trajectories in the motivation dimension and low‐to‐average middle‐school‐level performance in problem solving and STEM knowledge and skills. While they largely showed improvement trends in their problem‐solving schemata pre/post‐experience, they still performed at lower than optimal levels. The study underscores the importance of providing educators with more “preparation for future learning” in STEM‐based, ill‐structured problem solving so that they in turn can provide optimal supports for their students. It suggests that motivation is a key factor to add to models for adaptive expertise. Given the wide variability of learner responses within and between problems, it also supports the call for microdevelopmental assessments with scaffolded, adaptive supports. With these improvements, learners in high‐need environments can be supported in their novice‐to‐expert trajectories, helping to provide educational equity and the promise of contributing to, and sharing in, 21st century prosperity.