INACCURACIES IN EXPERT SELF-REPORT: ERRORS IN THE DESCRIPTION
OF STRATEGIES FOR DESIGNING PSYCHOLOGY EXPERIMENTS
by
David Frank Feldon
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
EDUCATION (EDUCATIONAL PSYCHOLOGY)
August 2004
Copyright 2004 David Frank Feldon
ACKNOWLEDGEMENTS
I would like to gratefully acknowledge a number of people, without whom
this project could never have been completed. First and foremost, I would like to
thank my wife, Colby, for her love and support. The dissertation process is a taxing
one not only for authors, but also for those who ensure that all other facets of life are
not neglected during the project. She has been the greatest joy of my life, and I look
forward to beginning a new chapter with her.
I would also like to express my gratitude to three generations of family who
have provided inspiration, support, assistance, and advice that was always freely
offered. Very few people in this world are blessed with the opportunities from which
I have benefited. I hope that my contributions to research will improve our
understanding of learning processes, and in doing so, help others to efficiently attain
whatever goals they set without depending on access to the various forms of capital
that are inequitably distributed in human society.
Finally, I would like to thank the faculty whose knowledge and passion for
research have guided me through the travails of academe and initiated me into the
discipline of educational research: Dr. Richard Clark has been and will always be an
outstanding mentor and friend whose insight and values lie at the heart of what is
noble in scholarship; Dr. David Marsh for his unfailing guidance through the maze
of academic and university life; Dr. John Horn for inspiring my need to understand
the mechanisms that underlie the tools of research; and Dr. Lee Shulman for his
advocacy of the stewardship of our discipline as a governing principle for the efforts
of those who have earned the Ph.D.
Education is a vital human endeavor. The work presented in this study is
offered with the goal of contributing to the improved understanding and
implementation of its principles and practices.
TABLE OF CONTENTS

Acknowledgements
List of Tables
List of Figures
Abstract
Chapter 1: Review of the Literature
    Purpose of the Study
    Review of the Literature
    Expertise
    Automaticity
    Accuracy of Self-Report
    Scientific Problem Solving
    Summary
    Research Questions
    Hypotheses
Chapter 2: Method
    Design
    Operational Definitions of Key Variables
    Apparatus
    Subjects
    Procedure
    Analysis
Chapter 3: Results
    Reasoning Abilities
    Simulation Performance
    Self-Report Accuracy
    Automaticity and Accuracy
    Problem Solving Processes
    General Qualitative Themes in Strategy Data
Chapter 4: Conclusions
    Findings for Research Question 1
    Findings for Research Question 2
    Findings for Research Question 3
    Summary
    Implications
References
Appendix A: Lawson’s Test of Scientific Reasoning
Appendix B: Maze Task
Appendix C: Back Span Task
Appendix D: Expert A Simulation Data
Appendix E: Expert B Simulation Data
Appendix F: Expert C Simulation Data
Appendix G: Intermediate A Simulation Data
Appendix H: Intermediate B Simulation Data
Appendix I: Intermediate C Simulation Data
Appendix J: Novice A Simulation Data
Appendix K: Novice B Simulation Data
Appendix L: Novice C Simulation Data
LIST OF TABLES

Table 1: Mixed Methodology Rationales
Table 2: Independent Variables Manipulable in Simulated Psychology Lab
Table 3: Critical Decision Method Probes
Table 4: Written Measures—All Subjects
Table 5: Written Measures—By Subject Classification
Table 6: Written Measures—By Individual Subject
Table 7: Density of Verbal Report Data in Relation to Observed Actions
Table 8: Guttman Split-Half Reliability Correlations
Table 9: Pooled Coherence Correlations
Table 10: Intraindividual Coherence-Accuracy Correlation Matrices
Table 11: Rotated Factor Matrix
LIST OF FIGURES

Figure 1: Mixed Method Diagram
Figure 2: Expert Subjects Hypothesis-Outcome Correlations
Figure 3: Intermediate Subjects Hypothesis-Outcome Correlations
Figure 4: Novice Subjects Hypothesis-Outcome Correlations
ABSTRACT
Expertise is developed through the acquisition of domain-specific skills and schemas
that manifest as highly effective problem solving strategies. After extensive
deliberate practice and experience within the domain of expertise, these skills
become automated and impose less cognitive load on a limited short term memory
capacity. Further, as skills automate, fewer decision points require conscious
resolution and consequently are unlikely to be retained in long term memory as
specific episodic representations that could be accurately recalled and articulated.
The purposes of this study were to: (a) understand the strategies used by novices
(undergraduates), intermediates (doctoral students), and experts (tenured professors)
to solve problems in experimental design while utilizing a computer simulation to
evaluate competing hypotheses in psychology; (b) analyze the accuracy of self-report
in relation to the level of skill automaticity; and (c) demonstrate that experimental
design is a viable and specific domain of expertise. Once their strategies were
identified, the accuracy of subjects’ self-report was analyzed as a function of their
levels of cognitive load while performing the task. The study found that regardless
of level of expertise, subjects selected one of three general strategies: (a) begin with
a strong prediction about a theory and interpret results in relation to it; (b) begin with
a highly complex exploratory design to maximize the likelihood that informative patterns in the initial data will suggest a hypothesis to pursue subsequently; or (c) use the least
complex designs possible where each separate experiment explores a different
variable with the intention of interpreting the cumulative results in a linear fashion.
The results also suggested that automaticity and the accuracy of self-report are
negatively correlated and that the criteria for expertise in any domain are under-defined. Implications for expert-based instructional models and future research at
the intersections of these topics are discussed.
CHAPTER I: REVIEW OF THE LITERATURE
There are a number of assumptions that a skilled researcher uses when doing
research. Often, they can't even articulate what they are, but they practice
them. The [expert researcher] model requires a long process of acculturation,
an in-depth knowledge of the discipline, awareness of important scholars
working in particular areas, participation in a system of informal scholarly
communication, and a view of research as a non-sequential, nonlinear process
with a large degree of ambiguity and serendipity. The expert researcher is
relatively independent, and has developed his or her own personal [research]
strategies. (Leckie, 1996, p. 202)
This view of experimental research reflects several critical concerns in the
preparation of Ph.D. students. One of the most challenging aspects of graduate
education in the social sciences is the teaching of research skills (Labaree, 2003;
Schoenfeld, 1999). While there are many instructional texts on the process of
experimental research (e.g. Gall, Borg, & Gall, 1996; McBurney, 1998; Pedhazur &
Schmelkin, 1991) and an emphasis on personal advisement and cognitive
apprenticeship in the advanced stages of graduate study (Brown, Collins, & Duguid,
1989; Golde & Dore, 2001), there have been increasing levels of concern about the
quality of research skills that students develop through their doctoral programs (e.g.
Adams & White, 1994; Holbrook, 2002).
The development of social science research skills at the graduate level often
begins with course-based instruction, which has been found to yield great variation in
the learning outcomes with regard to both skill mastery and self-efficacy
(Onwuegbuzie, Slate, Paterson, Watson, & Schwartz, 2000). The content of these
courses is usually presented through assigned readings and instructors’ lectures. In
each case, the strategies for thinking about the research process are ultimately
mimetic. That is, “when stripped of all their embellishments...their fundamental aim
is to get the student to reproduce or to imitate in his own actions or words a form of
behavior [within a range of acceptability] that has already been settled upon as a
standard, even if only imaginatively, in the mind of his teacher” (Jackson, 1985, p. 72;
italics in original). The establishment of the target performance is thus dependent on
the reflections of a researcher describing his own practice—from the instructor
directly, through the assigned readings, or a combination thereof.
The relevance of this position is clearly evident in the example of medical
education (Maupin, 2003; Velmahos, Toutouzas, Sillin, Chan, Clark, Theodorou, &
Maupin, 2004). In the study of Maupin and his colleagues, medical students were
taught a foundational medical procedure through either (a) a traditional instructional
approach involving explanatory lecture and demonstration by an expert followed by
learner practice or (b) an instructional approach that relied on a carefully conducted cognitive task analysis of the skill to structure the information presented. The students
in each condition were followed during their subsequent hospital work during which
they performed the procedure on multiple live subjects, and the performance
difference between the two groups was striking: Students taught through the method
that was not grounded in cognitive task analysis were more than twice as likely to
commit an error during the procedure and often took considerably longer to complete
it. These findings strongly suggest that the errors and omissions that occur during
experts’ explanations can significantly impact the quality of instruction and
subsequent skill performance.
The critical role played by experts’ explanations is also evident in cognitive
apprenticeships, wherein the mentor’s role is to model and explain his or her own
approach to problems considered together with the student. In their studies of
cognitive apprenticeships, Radziszewska and Rogoff (1988, 1991) report that
successful learning by the student is dependent on accurate, comprehensible
explanations of strategies by the mentor and the opportunity to participate in decisions
during authentic tasks. As the decision-making and strategic components of the
research process are entirely cognitive, direct observation by the student is severely
limited. Thus, at every stage of training in experimental research, the student is
dependent on the self-report of (ideally) an expert in the field.
However, the accuracy of self-report under many conditions is considered
highly suspect (e.g. Nisbett & Wilson, 1977; Schneider & Shiffrin, 1977; Wilson &
Dunn, 2004). Consequently, a deeper, more objective understanding of cognitive
research skills must be developed to assist students in their transition to skilled
researchers. Ultimately, to better scaffold the skill acquisition of developing
researchers, an accurate understanding and representation of expert strategies must
emerge—specifically, the conceptualization of problems for inquiry, the formulation
of experimental designs, and the analytical skills utilized in the interpretation of results
(Hmelo-Silver, Nagarajan, & Day, 2002; McGuire, 1997).
Purpose of the Study
The purpose of this study is threefold:
1. To accurately identify cognitive skills and strategies that research experts use
in experimental research and differentiate them from novice techniques.
2. To evaluate experts’ abilities to accurately report their own problem-solving
processes.
3. To demonstrate that automaticity is a fundamental characteristic of expert
performance in this domain.
Review of the Literature
In order to meaningfully explore the problems posed above, it is first necessary
to ground several implicit assumptions in the foundation of prior empirical research.
Specifically, these include the assertions that (a) scientific research skills are
acquirable and so represent predominantly crystallized intelligence (i.e. expertise-based deductive reasoning; EDR), (b) expertise in scientific research is distinguishable
from lower levels of skill, both in an individual’s ability to successfully solve a
problem and in a qualitative distinction between expert and novice strategies, and (c)
skills that are practiced extensively automate, such that the mental effort necessary to
perform a procedure is minimal.
The support for each assumption will emerge from reviews of the empirical
evidence in the following categories: (a) expertise, (b) automaticity of procedural
knowledge, (c) accuracy in self-report, and (d) scientific problem solving. Because
the arguments draw upon findings from a variety of overlapping research agendas
with different theoretical frameworks, multiple terms will sometimes be used to
describe a single construct and the use of identical terms will sometimes be
differentiated to clarify important distinctions that exist between researchers. Further,
it should be noted that the scope of the literature review exceeds that of the specific
study that is reported here. The range of issues addressed is necessary to provide adequate context for the interpretation of the results, but comprehensive resolution of all issues inherent to the questions is beyond the scope of the study.
Expertise
The past several decades have yielded a burgeoning body of work on the
subject of expertise (Patel, Kaufman, & Magder, 1996). While there is more than a
century of research on skill acquisition (for an extensive review see Proctor & Dutta,
1995), relatively recent emphasis has emerged on the characteristics of experts that are
common across domains. Such work has predominantly emphasized two general
issues in high level performance: the identification of cognitive processes
generalizable to all expert performances and the factors contributing to the acquisition
of expert-level skill.
Despite several ongoing debates in the theory of expertise, a number of reliable
representative characteristics have emerged. Glaser and Chi (1988) elucidated seven
oft-cited attributes that characterize the performance of most experts. These
observations, which are drawn from generalizations of a number of studies in the late
1970s and 1980s, have helped to shape the development of the field, despite a lack of
definitive analysis regarding the extent to which each might be necessary or sufficient
for the expertise construct:
1. Experts excel mainly in their own domains.
2. Experts perceive large meaningful patterns in their domain.
3. Experts are fast; they are faster than novices at performing the skills of
their domain, and they quickly solve problems with little error.
4. Experts have superior short-term and long-term memory.
5. Experts see and represent a problem in their domain at a deeper (more
principled) level than novices; novices tend to represent a problem at a
superficial level.
6. Experts spend a great deal of time analyzing a problem qualitatively.
7. Experts have strong self-monitoring skills.
In more recent research, Ericsson (1996; Ericsson & Lehmann, 1996) has advanced
two additional characteristics of expertise. First, he has noted that expertise in a given
domain typically requires a minimum of ten years of deliberate practice to develop.
This idea extends the original findings of Simon and Chase (1973), which suggested that a decade of experience was the minimum necessary to gain mastery in chess, and it has been further elaborated and supported by the findings of several important investigations in various domains (e.g. Charness, Krampe, & Mayr, 1996; Simonton, 1999). Defined as “the individualized training activities especially designed by a
coach or teacher to improve specific aspects of an individual’s performance through
repetition and successive refinement [that includes] monitor[ing] their training with full concentration, which is effortful and limits the duration of daily training” (Ericsson & Lehmann, 1996, pp. 278-279), deliberate practice is considered to be not inherently motivating and not intended to attain any goal beyond continued skill development (Ericsson & Charness, 1994; Starkes, Deakin, Allard, Hodges, & Hayes,
1996). Second, he has described an expert process as one exhibiting a maximal
adaptation to task constraints. Such constraints include the functional rules that are
associated with the particular task and domain of expertise (e.g., the rules of chess,
established flight paths, etc.), as well as the demands of the laws of physics, physical
limitations of the human body, and the limitations of short-term memory and other
cognitive functions (Casner, 1994; Vicente, 2000).¹ It is the asymptotic approach to
these constraints, Ericsson argues, that allows experts to succeed where others fail.
Due to their extensive practice and skill refinement, experts have, to a great extent,
shaped the development of their physiological (e.g. density of blood vessels in top
athletes; Ericsson, Krampe, & Tesch-Römer, 1993) and cognitive mechanisms (e.g.
working memory limitations; Ericsson & Kintsch, 1995) required for high level
performance in their domains of expertise.
¹ It is notable that task constraints do not include an individual’s intelligence. Data from a number of studies have indicated that expert performance is not significantly related to measures of general or fluid ability (Ceci & Liker, 1986; Doll & Mayr, 1987; Ericsson & Lehmann, 1996; Hulin, Henry, & Noon, 1990; Masunaga & Horn, 2001). However, an extensive treatment of this issue is beyond the scope of the current study.
The Role of Knowledge in Expertise
Historically, several broad aspects of expert performance have been examined,
with each emphasizing a distinct aspect of cognition. One major approach focuses on
experts’ extensive knowledge of domain-relevant information and the ability to recall
it in appropriate situations. Chase and Simon’s (1973) classic work in the memory
performance of chess masters suggested that the quantity and accuracy of knowledge were the foundational components of expertise. Their results indicated that
experts had vastly superior memory for the locations of realistically-placed chess
pieces in briefly presented stimuli relative to novices, but equivalent recall ability for
randomly-placed pieces and chess-unrelated stimuli under equivalent conditions.
From this, they concluded that expert performance on selected tasks depended on
those tasks falling within their domain of mastery and being representative of the tasks
performed during normal participation in the activity. Further, the increased speed
and capacity that experts seemed to demonstrate were attributed to the recognition of
previous situations encountered within the domain that were equivalent to the tasks
presented. This suggested that expertise was in large part a benefit of extensive
experience within a domain from which subjects could recall previously successful
solutions and deploy them quickly and consistently. Such findings have been
consistently replicated in a wide array of domains as diverse as, for example, tennis
(Beilock, Wierenga, & Carr, 2002) and botany (Alberdi, Sleeman, & Korpi, 2000).
Later work has also analyzed the organization of expert knowledge structures
and differentiated them from novice representations on the basis of levels of detail,
differentiation, and level of principled abstraction. For example, Chi, Feltovich, and
Glaser (1981) examined expert and novice performance in physics problem-sorting
tasks and observed that the categories identified by experts were based on fundamental
principles on which the problem solutions relied. In contrast, novices conceptualized
the problems based on their surface-level details, such as the presence of pulleys or
inclined planes. Similarly, Adelson (1981) found that novice programmers
categorized lines of code according to syntax, whereas experts utilized functional or
semantic aspects.
High recall performance in experts has also been linked to principled
conceptual organization. In further chess studies, it has been found that when conceptually descriptive information about the locations of chess pieces in a game is provided before or after the visual presentation of the board, experts generate even higher levels of recall than in a visual-only condition, suggesting
that memory performance is linked to more abstract cognitive representations (Cooke,
Atlas, Lane, & Berger, 1993). The level of conceptual abstraction in expert reasoning
has also been explained as a “comfortable, efficient compromise...that is optimal” for
expert-level performance in a specific domain (Zeitz, 1997, p. 44). This
“compromise” represents a suitable chunking size and schematic framework to
facilitate the establishment of appropriate links between the concrete elements of a
particular problem and the more general concepts and principles that the expert has
acquired through experience in the domain. This framework facilitates a number of
knowledge-related traits associated with expertise, specifically an expert’s ability to
recognize sophisticated patterns and enhanced performance for recall of salient details
in given situations.
The Role of Strategy in Expertise
The second framework for expertise grounds performance in qualitative
differences in problem solving strategies between experts and novices. Consistent
early findings in the study of physics problem-solving skills indicate that experts call
on domain knowledge to approach problems through forward reasoning processes, in which problems are represented conceptually and approached strategically on the basis of the given factors (Chi, et al., 1981; Chi, Glaser, & Rees, 1982; Larkin, McDermott,
Simon, & Simon, 1980a, 1980b). Such “strong methods” involve developing a highly principled representation of the problem that, through manipulation, yields an appropriate solution (Singley & Anderson, 1989). Novices, on the other hand, utilize
“weak method” heuristics that begin with identification of the goal state and reason
backwards to identify relevant given information and approaches that will generate the
necessary outcome (Lovett & Anderson, 1996). Further, the development of expertise
entails a progression from general, “weak-method” heuristics to feedback-refined
procedures that have integrated domain-specific knowledge (Anderson, 1987).
Such differences between expert and novice performance are robust, even
when novices are instructed to develop a strategy before attempting a solution.
Phillips, Wynn, McPherson, and Gilhooly (2001) found that despite extensive
preplanning, novices exhibited no significant differences in their speed or accuracy of
performance when compared with those not provided with the opportunity to develop
a plan before beginning. Further, when asked to report their intermediary sub-goals,
they were able to accurately report only up to two moves ahead, thereby
demonstrating a means-ends approach rather than a forward strategy. Similarly,
Larkin, et al. (1980a, 1980b) described novice physics students reasoning backwards
from the required solution by determining which equation would yield an appropriate
answer and then attempting to utilize the givens provided, whereas physics experts
initiated the problem solving process by formulating a conception of the situation on
the basis of physics principles and available specifics, generating the solution by
manipulating the mental model to yield the appropriate answer.
More recently, an extensive study of individual differences in physics problem
solving replicated and extended these findings, identifying the prominence of principle
identification as a factor in the strategy selection of experts (Dhillon, 1998). Expert
utilization of theoretical conceptualizations had the benefit of activating pre-existing
mental models that represented both the relevant pieces of information that were
presented and the abstract relationships between elements in the mental model (Glaser
& Chi, 1988; Larkin, 1985). This abstract representation also supported the search for
missing information that the model would otherwise incorporate. In contrast, novices
who lacked an adaptive model relied on surface-level details and iterative hypothesis
testing that generated a mental model that was more firmly bound to the concrete
representation of the situation as it was presented (Lamberti & Newsome, 1989).
Even when it has been determined that novices perceive the deeper principles
underlying a problem, their solutions rely nearly exclusively on surface features
(Sloutsky & Yarlas, 2000; Yarlas & Sloutsky, 2000).
Investigations of expert problem-solving processes in scientific
experimentation and design have also provided clear illustrations of this phenomenon
(Hmelo-Silver, Nagarajan, & Day, 2002). In Hmelo-Silver, et al.’s (2002) study,
experts and novices in the domain of clinical trial design used a simulation to
demonstrate a hypothetical drug’s suitability for medical use. Throughout a number of
repeated trials, those subjects with extensive experience in the domain consistently
used analogies to past experiences in their verbal protocols to reason abstractly about
the process and outcomes. Additionally, they were highly reflective about the
effectiveness of particular strategies in relation to their progress toward the goals of
the task. In contrast, novices rarely used analogies and did not typically have the
cognitive resources available for reflection while engaged in the task.
The Role of Working Memory in Expertise
A third account focuses primarily on the superior working memory
performance of experts when working in their domain. Extensive evidence indicates
that experts are able to process much more information in working memory than is
possible under normal circumstances (cf. Baddeley, 1986). In the initial theory of chunking provided by Chase and Simon (1973), experts were believed to represent large, familiar perceptual patterns held in long term memory as single chunks that could be elaborated rapidly in short term memory. Several newer theories have evolved from this account to better explain findings that experts’ extraordinary recall of information from domain-specific problems is not impaired by disruptive short term memory tasks, despite the original theory’s expectation that chunks are held in short term memory (Gobet, 1998; Vicente & Wang, 1998).
term working memory theory (LTWM; Ericsson & Kintsch, 1995), template theory
(Gobet & Simon, 1996), and the constraint attunement hypothesis (CAH; Vicente &
Wang, 1998) have suggested that as a result of continued practice within a domain,
schematic structures within long term memory can be used to not only facilitate access
to existing declarative knowledge as discussed above, but also to functionally augment
the limited capacity of short term memory when considering domain-relevant
problems.
Long term working memory. LTWM suggests that experts develop domain-
specific representation mechanisms in long term memory that reflect the structure of
the primary domain tasks themselves, allowing for the rapid encoding and retrieval of
stimuli from relevant tasks. Such a model can account not only for experts’
exceptional recall of domain-relevant situations in general, but also for
expanded working memory capacity during expert performance (Ericsson & Kintsch,
1995). Gobet (1998) extrapolates two possible manifestations of the LTWM theory in
an attempt to account for a number of empirical findings. The first representation,
referred to as the “square version” (p. 125), suggests that the LTWM structure for
chess experts manifests itself directly in the form of a 64-square schematic chess
board. In this conception, encoding is therefore contingent on appropriately
compatible stimuli for the format. The second possible representation, dubbed the
“hierarchy interpretation” (p. 125), constructs a different conceptualization of the
original theory to allow for encoding that is not contingent on format and establishes
that “in preference to storing pieces in squares, experts store schemas and patterns in
the various levels of the retrieval structure” (p. 125).
Template theory. Contrasting with LTWM theory, template theory (Gobet &
Simon, 1996) does not completely reject the chunking component originally
established by Chase and Simon (1973). Instead, in cases of extensive practice,
associated templates augment a single large chunk with slots that could represent
related but variable items, retrievable through a short term memory trace mechanism.
The creation of these slots occurs when a minimum number of semantically related
elements occur in similar relationships below the node representing the chunk in short
term memory. Thus, slots could be occupied by common component categories that
vary depending on the particular situation, such as strategy features or, in the case of
chess, players associated with the particular approach.
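Gobet and Simon present template theory verbally, but the slot mechanism can be made concrete with a small data-structure sketch. The Python fragment below is a hypothetical illustration only; the class, the field names, and the chess example are invented for this sketch rather than taken from their model.

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    """Toy template: a stable core chunk plus labeled slots that hold
    related but variable elements (hypothetical structure)."""
    core_pattern: frozenset                     # the large, familiar chunk
    slots: dict = field(default_factory=dict)   # slot name -> filler

    def bind(self, slot_name: str, filler: str) -> None:
        # In the theory, fillers are retrieved via a short term memory
        # trace; here, binding is simply a dictionary update.
        self.slots[slot_name] = filler

# Chess-flavored example: the core piece configuration is fixed, while
# strategy features and associated players vary with the situation.
defense = Template(core_pattern=frozenset({"Nf6", "g6", "Bg7", "d6"}))
defense.bind("strategy_feature", "kingside counterattack")
defense.bind("associated_player", "a specialist in this opening")
print(defense.slots)
```

On this rendering, a template behaves like a single chunk until a situation supplies fillers for its slots, which is how the theory reconciles chunk-based recall with variability across encountered positions.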
Constraint attunement hypothesis. The constraint attunement hypothesis
critiques LTWM theory by arguing that it accounts primarily for data that was
generated by experts in domains for which memorization is an intrinsic element. In an
attempt to provide a more generalizable theory, Vicente and Wang (1998) suggest that
the appropriate structures in long term memory to facilitate enhanced working
memory performance are representations of the task constraints that govern
performance within the domain. An abstraction of the task in this format, they argue,
allows the goal structure to serve as the encoding representation for rapid retrieval. In
essence, the hierarchy interpretation of LTWM theory elaborated by Gobet (1998) plays a comparable role, except that the hierarchical representation is structured
according to the goals of the task, rather than the structural features. This allows the
authors to predict the magnitude of expertise-enhanced memory performance on the
basis of the number of available constraints, in that the higher the number of authentic
task constraints, the more optimally a constraint-attuned framework in long term
memory can be utilized to expand working memory capacity.
Domains
Every characterization of expertise reviewed above emphasizes that it is
specific to a particular domain, develops from extensive training and experience, and
does not represent general intelligence or fluid ability. The emphasis on the
specificity of domain is important, because it frames the context in which expert
performance is both executed and evaluated. However, the identification of discrete
domains has been an under-explored facet of the expertise research. While there is
extensive and consistent evidence that the skilled performance constituting expertise is
reliably demonstrated only within the particular domain in which the skills have been
developed (Ericsson, 1996; Glaser & Chi, 1988), there has been very little
examination (empirical or otherwise) of the defining properties of domains
themselves.
In most studies, domains are identified de facto as specific structured endeavors (e.g. chess) or professional fields (e.g. physics) that allow for some sort of objective differentiation among levels of performance. However, these characteristics
represent the priorities of empirical researchability (Ericsson & Smith, 1991), rather
than inherent properties of domains as cognitive or theoretical entities. As noted by
Sternberg (1989):
If domains are not well-specified, and indeed, no one does seem to have
specified very clearly what a domain is, then what, exactly, does domain-specificity mean? One cannot cleanly refer to a set of thoughts or behaviors as
limited to a particular domain if that domain does not itself have clear limits or
boundaries (p. 126; italics in original).
Domains have been primarily conceptualized in two ways, each of which
aligns with the level of analysis at which relevant research was conducted. Within a
behavioral framework, domains have been defined through the observational grouping
of knowledge and performances associated with a socially-defined activity (Wertsch,
1991). School subjects and occupations, for example, are customarily considered
different domains on this basis (Barnett & Ceci, 2002). However, these inexact
differentiations pose a problem when the performances of experts within the same
domain are compared. For example, Wineburg (1998) compared the performances of
two expert historians with different specializations in their interpretation of historical
documents pertaining to Abraham Lincoln. While each was considered to be an
authority on American history, the historian with a Lincoln-specific emphasis in his
work was able to utilize more refined strategies in his analysis and was found to
exhibit a higher level of performance overall. In another example, Barnett and Koslowski (2002) conducted a study in which expert restaurateurs and business consultants without restaurant-specific knowledge or experience participated in a restaurant business simulation that required the use of problem-solving skills “at the intersections
of the [restaurant and business] domains” (p. 237). The study found that the business
consultants exhibited better performance, despite the lack of domain-specific
experience and knowledge. In each instance, the subjects were identified as experts
within domains relevant to the tasks presented but failed to perform at an expert level.
This suggests that either the subjects were misidentified during selection (unlikely
given the extensive credentials and experience that each possessed), or that there was
poor alignment between the subjects’ actual domain of expertise and the task
presented. As each task was considered to be within the appropriate domain on the
basis of the knowledge (Wineburg, 1998) or skills (Barnett & Koslowski, 2002)
required of the socially-constructed domains, there is reason to question the
sufficiency of such an observational approach to the identification of domains.
From a cognitive perspective, a domain can be understood as a critical mass of
interconnected schemas that entail meaningful combinations of acquired declarative
and procedural knowledge for use in specific contexts and types of tasks. As such, the
boundaries of domains in this sense can be represented by the points of performance
failure of domain-specific skills during transfer tasks as the transfer distance increases
(Singley & Anderson, 1989). Expressed another way, domains can be defined as the
area over which spreading activation occurs, wherein the use of a particular
knowledge structure stimulates those knowledge structures with which it has
established connections and interrelationships (Clark & Blake, 1997).
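Expressed as a caricature in code, a domain boundary on this view is the set of schemas that receive above-threshold activation spreading outward from a stimulated knowledge structure. The sketch below is a toy under invented assumptions (the example network, decay rate, and threshold are all hypothetical), not an implementation of Clark and Blake's proposal.

```python
from collections import deque

def spread_activation(graph, start, decay=0.5, threshold=0.1):
    """Spreading activation over a schema network: each hop multiplies
    the source's activation by `decay`, and nodes keep their highest
    value. Nodes that stay above threshold approximate the 'domain'."""
    activation = {start: 1.0}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        for neighbor in graph.get(node, []):
            level = activation[node] * decay
            if level >= threshold and level > activation.get(neighbor, 0.0):
                activation[neighbor] = level
                frontier.append(neighbor)
    return activation

# Hypothetical schema network for experimental-design skills.
schemas = {
    "hypothesis testing": ["experimental design", "statistics"],
    "experimental design": ["counterbalancing", "statistics"],
    "statistics": ["ANOVA"],
}
print(spread_activation(schemas, "hypothesis testing"))
```

Schemas that never receive above-threshold activation fall outside the boundary, mirroring the claim that a domain ends where an individual's established interconnections end.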
The development of domains in this sense is inherently a product of
individuals’ personal experience and skill development. Thus, the generalizability of
the definition of a specific domain is limited to each individual’s personal knowledge
organization. In support of this conception, Acton, Johnson, and Goldsmith (1994)
mapped expert and novice computer programmers’ domain knowledge and analyzed
the structural similarity of those maps within groups. They found that, despite significant variation between experts in the structure of their domain knowledge, a representation generated by averaging all expert subjects’ individual structures could serve as a model against which to compare individual novice structures.
The degree of similarity between novice knowledge maps and the aggregate expert
map provided far more predictive power for levels of novice programming
performance than did similarity with the knowledge map of any individual expert.
From these findings, it can be inferred that representations of domains of expertise vary substantially from expert to expert without inherently losing the stability of the domain as a basis for expertise within a discrete, identifiable knowledge structure.
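The averaging-and-comparison logic of Acton et al.'s analysis can be sketched in a few lines. Here each knowledge map is reduced to a matrix of pairwise concept-relatedness ratings; the numbers and the use of a simple correlation over off-diagonal entries are hypothetical stand-ins for their actual materials and similarity metric.

```python
import numpy as np

# Hypothetical relatedness matrices (rows/columns index the same three
# concepts); each expert rates how related every pair of concepts is.
expert_maps = [
    np.array([[1.0, 0.9, 0.2], [0.9, 1.0, 0.4], [0.2, 0.4, 1.0]]),
    np.array([[1.0, 0.7, 0.1], [0.7, 1.0, 0.6], [0.1, 0.6, 1.0]]),
    np.array([[1.0, 0.8, 0.3], [0.8, 1.0, 0.5], [0.3, 0.5, 1.0]]),
]
aggregate = np.mean(expert_maps, axis=0)   # averaged referent expert map

novice_map = np.array([[1.0, 0.5, 0.4], [0.5, 1.0, 0.6], [0.4, 0.6, 1.0]])

# Compare a novice map to the aggregate by correlating the unique
# off-diagonal entries (a stand-in for a structural similarity index).
iu = np.triu_indices_from(aggregate, k=1)
similarity = np.corrcoef(aggregate[iu], novice_map[iu])[0, 1]
print(f"novice-to-aggregate similarity: {similarity:.2f}")
```

The finding reported above corresponds to this similarity score tracking novice performance better than similarity to any single expert's map would.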
Because domains cannot be reliably identified a priori, Ohlsson (1983)
proposed a set of criteria for differentiating between knowledge structures conducive
to the attainment of technical expertise and other interconnections of schemas that
arise naturally as a result of general human experience but do not represent an
acquirable domain of expertise:
A working definition of a technical knowledge domain:
(a) Technical knowledge is externalized and codified in some culturally
acknowledged form (textbooks, handbooks, manuals, dictionaries, etc.).
(b) For each technical knowledge domain, there is an associated, special-purpose notation or symbolism, sometimes developed into a calculus.
(c) Technical knowledge is developed through study, i.e. by consciously
trying to acquire it, usually while interacting with a teacher.
(d) Technical knowledge is usually acquired late in life.
[In contrast,] a natural knowledge domain is recognized by the opposite
properties:
(e) Natural knowledge is not externalized; it is presupposed, rather than
talked about.
(f) There is no notation or symbolism for dealing with natural knowledge
other than natural language.
(g) Natural knowledge is acquired incidentally, usually through learning by
doing.
(h) Natural knowledge is acquired in the process of growing up. (p. 89;
italics in original)
Domain specificity/generality. The uncertainty about the nature of domains
has also manifested in the inconsistent meanings underlying the description of skills as
domain-specific or domain-general. Generally, the terms have been used in various literatures to refer either to heuristic procedures applicable in multiple domains that have been acquired through experience (e.g. Anderson, 1987; Lovett & Anderson, 1996) or to the application of fluid intelligence to novel situations (e.g. Perkins &
Grotzer, 1997). The distinction emerges as a difference in the sense of the term as
either a transdomain skill, wherein a specific procedure is applied to any of a number
of diverse problems in varying contexts, or a global cognitive process that influences
problem-solving processes in the absence of domain-specific procedures acquired
through specific training or relevant experience. In contrast, domain-specific skills
indicate those crystallized abilities (i.e. EDR; Masunaga & Horn, 2001) that actively
utilize acquired knowledge of a particular domain to generate solutions within a
specific problem space (Anderson, 1993; Newell & Simon, 1972).
Ceci (1989, p. 134) identified the source of these disparate approaches to
domain generality and specificity as “predicated on whether the reference is to
‘processes’ or ‘products’; whether the domain is one having to do with families of
processes or families of outcomes, including knowledge structures.” Although these
descriptors have developed within different psychological traditions and can
accordingly be used as alternative but qualitatively equivalent frameworks, the
process-product distinction may prove to be a false dichotomy. Given that both
declarative and procedural knowledge in problem solving and their reciprocal
interactions are necessary for the attainment of cognitive skills (Anderson, Fincham, &
Douglass, 1997; Rittle-Johnson, Siegler, & Alibali, 2001) and that trait complexes
consisting of “intelligence-as-process, personality, interests, and intelligence-as-knowledge [lead to the]...development of specific expertise and skills in different knowledge domains...as a joint consequence of abilities and non-ability trait complexes” (Ackerman, 2003, pp. 16-17), the distinction is not necessarily helpful for
developing an operational definition of domain specificity and generality.
As domain-general skills may often be indistinguishable from domain-specific
skills within a natural knowledge domain because their acquisition is implicit and to
some extent universal, for the purposes of the current study skills considered to be
inherent to domain expertise will only be those that meet the criteria for a technical
knowledge domain and represent expertise-based reasoning (EDR; Masunaga & Horn,
2001) abilities. Domain-general skills, on the other hand, will be recognized as those
that are generally not learned explicitly and can be applied in diverse problem
contexts.
Because the application of domain-general skills in the solution of a problem
relies on inferences drawn from observations that do not map directly to a
“semantically rich” mental representation (i.e. one that contains extensive, specific
prior knowledge; Bhaskar & Simon, 1977), it is assumed that the effectiveness of
domain-general skills will be significantly impacted by individual differences in fluid
ability (Snow, 1989). In contrast, domain-specific skills are those that actively utilize
acquired technical knowledge and EDR ability within a particular domain to generate
solutions specific to a particular type of problem. Consequently, they will remain
consistent with the understanding that expert skill is uncorrelated with fluid
intelligence.
Summary
Experts are distinguished by their ability to consistently solve nonarbitrary problems, specific to their respective domains, that are beyond the ability of novices. The characteristics of expertise that facilitate this high-level performance
include extensive and highly structured knowledge of the domain, effective strategies
for solving problems within the domain, and expanded working memory that utilizes
elaborated schemas to organize information effectively for rapid storage, retrieval, and
manipulation. In the current study, these characteristics will be identified in the
domain of psychological research.
Automaticity
The three frameworks for expertise discussed in the previous section (knowledge, strategy, and working memory) each yield a common result: Ultimately, each aspect of expert performance improves the cognitive
efficiency of the problem solving process. This phenomenon not only emerges as a
result of acquired expertise, but also further improves performance by freeing up
cognitive resources to accommodate atypical features or other added cognitive
demands that may arise within a task (Bereiter & Scardamalia, 1993; Sternberg &
Horvath, 1998).
In investigations of skill acquisition, individuals with a high level of practice in
a procedure perform it at increasingly high speeds and with minimal mental effort
(Anderson, 1982; Logan, 1988). Thus, highly principled representations of domain-specific problems can be used in fast, effortless performance by a subject with a large and well-structured knowledge base and extensive practice of component skills.
However, the procedure itself becomes more ingrained and extremely difficult to
change to the extent that both goals and processes can manifest without conscious
activation (Bargh & Ferguson, 2000). As such, once a skill has been automated, it no
longer operates in such a way that it is available to conscious monitoring, and it tends
to run to completion without interruption,² further limiting the ability to modify
performance (Wheatley & Wegner, 2001).
² In the automaticity literature, this property is commonly referred to as ballisticity (Hermans, Crombez,
& Eelen, 2000; Logan & Cowan, 1984).
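The practice-driven speedup described in this paragraph is commonly summarized in the skill-acquisition literature by the power law of practice; the formalization below is a standard textbook statement rather than one given in this chapter, with generic symbols.

```latex
% Power law of practice: T(N) is the time to perform a skill after
% N practice trials; a is the asymptotic (irreducible) time, b scales
% initial performance, and c > 0 is the learning rate.
T(N) = a + b\,N^{-c}
```

On this description, automatization corresponds to T(N) approaching the asymptote a, the point at which little controlled processing remains to be eliminated.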
In contrast, adaptive experts are highly successful even under novel conditions
that require departure from routine procedures. Bereiter and Scardamalia (1993)
observed that often when experts have automated procedures within their domain,
their skills are highly adaptable to complex, ill-structured, and novel situations,
because minimal space in working memory is occupied by the process, thereby
allowing mental effort to be reinvested in attending to relevant new details. In one
example, Gott, Hall, Pokorny, Dibble, and Glaser (1993) reported that highly successful Air Force technicians were able to adapt knowledge to novel situations
despite high levels of consistent, effortless performance. This description is
reminiscent of the differences described by supervisors between the experts and super
experts in the Koubek and Salvendy (1991) study. Although their analysis of the data
suggested that there was no difference in the levels of automaticity between the two
groups, it is possible that Bereiter and Scardamalia’s (1993) arguments could have been supported had the data collected been more sensitive to fluctuations in cognitive load and more detailed in their representation of cognitive processes.
Automaticity and Cognition
Human attention is limited by the finite capacity of working memory to retain
and manipulate information (Baddeley, 1986). When mental operations such as
perception and reasoning occur, they occupy some portion of available capacity and
limit the attention that can be dedicated to other concurrent operations. Referred to as
cognitive load, the burden placed on working memory has been found to play a major
role in both learning and the governance of behavior (Goldinger, Kleider, Azuma, &
Beike, 2003; Sweller, 1988; Sweller, Chandler, Tierney, & Cooper, 1990).
During the last century, both behaviorists and cognitive scientists have argued
that many mental and behavioral processes take place without any conscious
deliberation (Bargh, 2000). From these investigations, a dual-process model of
cognition requiring the parallel execution of controlled and automatic processes has
emerged in which conscious functions are relatively slow, effortful, and controllable,
whereas automatic processes are rapid and effortless (Bargh, 1999a; Devine &
Monteith, 1999). Automated procedures can occur without intention, tend to run to
completion once activated, utilize few, if any, attentional resources, and are not
available to conscious monitoring (Wegner & Wheatley, 2001). It is important to
note, however, that the dual-process description can prove misleading, as many
procedures rely on the integration of both conscious and nonconscious thought during
performance (Bargh & Chartrand, 1999; Hermans, et al., 2000). Specific subcomponents can be automated or conscious, but their integration into the larger
production yields a mixed composition.
In the seminal work of Shiffrin and Schneider (1977), the acquisition of
automaticity is said to be achieved through the consistent, repeated mapping of stimuli
to responses. Most commonly associated with skill acquisition, automated procedures
can be consciously initiated for the satisfaction of a specific goal. Thus, primary
attention is paid to the level of ballisticity of the procedure (Logan & Cowan, 1984)
and the ability to maintain performance levels in dual task paradigms (Brown &
Bennett, 2002). Acquired through extensive practice, these goal-dependent
procedures become fluid and require less concentration to perform over time
(Anderson, 1995; Logan, 1988a). Additionally, these procedures evolve to greater
levels of efficiency as automaticity develops by eliminating the need for conscious
intermediate decision points (Blessing & Anderson, 1996). Evidence suggests that
habitual approaches to problems are goal-activated, such that the solution search is
significantly limited by the activation of established patterns of behavior (Aarts &
Dijksterhuis, 2000). Dubois and Shalin (2000) further report that goal choice,
conditions/constraints, method choice, method execution, goal standards, and pattern
recognition are each elements of procedural knowledge that can become automated.
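To make the consistent, repeated stimulus-response mapping described above concrete, the sketch below generates trials for a toy visual search task; the letter sets and display size are hypothetical, chosen only to contrast mapping stability, and are not drawn from Shiffrin and Schneider's materials.

```python
import random

TARGETS = list("BCDF")       # hypothetical target set
DISTRACTORS = list("JKLM")   # hypothetical distractor set

def consistent_mapping_trial():
    """Targets are always targets and distractors always distractors,
    so the stimulus-response mapping is stable and can automate."""
    target = random.choice(TARGETS)
    display = random.sample(DISTRACTORS, 3) + [target]
    random.shuffle(display)
    return target, display

def varied_mapping_trial():
    """Item roles are reassigned on every trial: this trial's target may
    reappear later as a distractor, so the mapping stays inconsistent
    and detection remains slow, controlled processing."""
    pool = TARGETS + DISTRACTORS
    random.shuffle(pool)
    target, display = pool[0], pool[:4]
    random.shuffle(display)
    return target, display
```

In the consistent condition, repeated exposure to the same mapping is what the theory holds allows detection to automate; in the varied condition, no stable mapping ever forms.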
Automaticity and Attribution
Given the extreme limitations on the amount of information that can be
consciously and simultaneously processed (i.e., as few as four chunks; Cowan, 2000)
as well as the persistently high levels of information and sensory input available in
most natural settings, it is necessary that many cognitive functions also take place
outside of conscious awareness and control. Wegner (2002) suggests that as much as
95% of the common actions that we experience as being under conscious control are in
fact automated.
Further, he argues that the cognitive mechanisms which generate these false
impressions of intention are themselves generated by nonconscious processes. Thus,
people tend to claim full knowledge and control of their actions to the point of creating
false memories that provide plausible explanations for their actions. Empirical
evidence of such attribution errors can be found in a variety of studies ranging from
insight problem solving (e.g. Maier, 1931) to judgment biases (Kahneman & Tversky,
1974). For example, Maier’s (1931) classic study of insight problem solving reported
a consistent phenomenon involving the resolution of a particular problem when an
impasse was reached. After the subjects had ceased all attempts to solve a problem
involving the tying together of two strings suspended from a ceiling too far apart for
the subject to hold one and reach the other, the experimenter would enter and during
casual conversation lightly rest his hand on the string that was the key to solving the problem, without offering any comments about the solution. Invariably,
after the experimenter had again left the room, the subjects would realize the solution
and use the previously handled string to complete the puzzle. During retrospective
reports when the subjects were asked how they realized the appropriate solution, more
than two thirds of them failed to attribute their insight to any comment or action on the
part of the researcher and instead identified their continued thought on the problem as
the sole source of success (e.g. “Having exhausted everything else, the next thing was
to swing it. I thought of the situation of swinging across a river. I had imagery of
monkeys swinging from trees. This imagery appeared simultaneously with the
solution. The idea appeared complete”; as cited in Nisbett & Wilson, 1977, p. 241).
Summary
As procedural knowledge is rehearsed, it becomes automated such that skill
can be exhibited consistently and rapidly with a minimal level of conscious attention
(i.e. cognitive load). Because experts engage in deliberate practice and have extensive
experience within their domains (Ericsson, Krampe, & Tesch-Römer, 1993), it has
been theorized that automaticity represents an important aspect of expert performance.
Although empirical evidence has not yet demonstrated the automation of decision
points during the expert performance of complex tasks (Koubek & Salvendy, 1991),
this challenge may be due to methodological limitations. It is hoped that the use of
unobtrusive, highly sensitive, continuous measures of cognitive load in the current
study will provide data of a small enough grain size to capture this phenomenon and
provide evidence for its role in expert-level cognitive performance.
Accuracy of Self-Report
The challenge in collecting valid verbal self-report data lies in the structure of
the human memory system itself. In the traditional cognitive model, the short term
(working) memory acts as a gateway through which all information must pass as it is
encoded and incorporated into schemas in long term memory or retrieved for
manipulation or use in the production of behavior. According to Baddeley (1986, p.
34), short term memory is “a system for the temporary holding and manipulation of
information during the performance of a range of cognitive tasks.” For new
information to be stored retrievably in long term memory, a trace or pathway must be
created to allow the needed information to be activated at an appropriate time. Such
links are more easily and successfully generated when the new information maps well
onto an existing schema. Thus, schemas that have been utilized and refined adaptively
through experience with particular concepts and events serve as stable mental models
for the more efficient encoding and evaluation of specific types of events (Anzai &
Yokoyama, 1984; Bainbridge, 1981; Clement, 1988; Larkin, 1983).
While these refined mental models are highly adaptive for problem solving,
they may interfere with the accurate recall of problem solving situations after the fact.
Because mental models are utilized in the search of a problem space (Larkin, et al.,
1980a), details that are not directly mappable to the representation can fail to be
encoded into long term memory. As a result, a retrospective account of the event may
fall victim to errors of generalizability and rationalization (Nisbett & Wilson, 1977).
“Such reports may well be based more on people’s a priori causal theories about
stimulus effects than on direct examination of their cognitive processes, and will be
inaccurate whenever these theories are inaccurate” (Wilson & Nisbett, 1978, p. 130).
Moray and Reeves (1987) provide direct empirical evidence for the potentially
maladaptive nature of mental models. They presented subjects with a set of eight bar
graphs that changed lengths over time. The lengths of some pairs of graphs were
designed to covary to provide observable subsystems from which it was expected that
a mental model with separate identifiable components would be derived. Participants
in the study were given the task of preventing the graphs from exceeding specified
parameters by changing the location and color of the bars within each graph. Once the
subject had successfully learned to manage the graphs in accordance with the defined
relationships, three “faults” were introduced to the system that prevented certain
relationships from functioning as they had. The authors hypothesized that once the
original model had been developed, the subjects would not recognize the appearance
of the faults. As expected, subjects took significantly longer to recognize that the
relationships among and across subsystem components had changed than they had
taken to discover all of the rules of the original system, thereby demonstrating the
durability of mental models once established.
More recently, Logan, Taylor, and Etherton (1996) reported that “the
representations expressed during automatic performance do not represent all stimulus
attributes uniformly” (p. 636). Instead, only those elements that were attended to
during the task production are specifically encoded in episodic memory. When
subjects were asked to recall the font color in which target words were presented, they
were unable to do so, despite above-chance performance on a recognition task;
superficial item features had thus been encoded successfully but could not be
retrieved during recall. The authors concluded that retrieval of an episode may
stimulate differential partial representations of the specific instance.
The challenge to validity of self-report posed by mental models mirrors the
difficulties inherent in capturing self-reported procedural knowledge. Cooke (1992)
notes that after knowledge of a process becomes proceduralized, subjects may have
difficulty decomposing the mental representation into a declarative form. Further,
subjects with extensive practice solving problems in a domain will have automated
significant portions of their procedures, suggesting that the representations—
presumably those of greatest interest within a domain of expertise—will be even
harder to articulate. Williams (2000, p. 165) explains that “production units are not
interpreted but are fired off automatically in sequences, which produce skilled
performance. They are automatic to the extent that experts at a specific skill may not
be able to recall why they perform the skill as they do.”
For example, Johnson (1983) observed significant discrepancies between an
expert physician’s technique for the diagnosis of medical problems and the technique
that he articulated to his students. The physician’s explanation for this contradiction
was: “Oh, I know that, but you see I don’t know how I do diagnosis, and yet I need
things to teach students. I create what I think of as plausible means for doing tasks
and hope students will be able to convert them into effective ones” (p. 81). It is
important to note that, in contrast to the awareness of implicit knowledge exhibited by
the subject in Johnson’s (1983) study, experts across several domains have failed to
notice discrepancies between their stated problem solving strategies and their actual
performance (Berry, 1987).
Further evidence of the dissociation between articulable knowledge and
automated skills comes from a study of radar system troubleshooters by Schaafstal
and Schraagen (2000, p. 59): “only a small correlation was found between
the knowledge test [of radar systems] and fault-finding performance (Pearson r = .27).
This confirms the . . . gap between theory and practice.”
These findings have also been supported in the study of metacognitive
monitoring and strategy selection. Reder and Schunn (1996; Schunn, Reder,
Nhouyvanisvong, Richards, & Stroffolino, 1997) have demonstrated that subjects’
metacognitive selection of strategies during problem solving occurs implicitly:
strategy choice is better predicted by previous exposure to similar problems,
regardless of whether a solution was obtained, than by deliberate strategy selection.
Subjects were unaware both that (a) learning had occurred during the initial exposure
to previous problems and that (b) no new strategy development had taken place.
Thus, information regarding critical elements of any problem solving procedure can
fail to be directly accessible for verbal report.
The manifestation of these phenomena as a function of expertise has been
collectively described as the “intermediate effect” by Rikers, Schmidt, and Boshuizen
(2000). As discussed earlier, cognitive load theory suggests that those behaviors and
decision-making processes which are automated or occur as a default during cognitive
overload are unlikely to be stored in long term memory. Because novices are most
likely to encounter cognitive overload and highly experienced practitioners are most
likely to have acquired automated processes, it is logical that those individuals of
intermediate experience are likely to demonstrate the most accurate and complete
recall of events and their own decision making processes for specific events within
their domain of expertise. This pattern of recall has been frequently observed in
interviews with experts and non-experts in a range of domains including investment
banking and medicine (Baird, 2001; Hassebrock, Bullemer, Fox, & Moller, 1993;
Schmidt & Boshuizen, 1993).
The domain of teaching provides powerful evidence of this effect. Allen and
Casbergue (1997), for example, conducted a study of expert, intermediate, and novice
teachers’ recall of classroom events. Although there was very little error in the
reporting of sequence of events across the three subject groups, there was a striking
intermediate effect for the completeness of recall. Novice teachers accurately reported
47% of their own behaviors and 40% of their students’ behaviors, intermediate
teachers accurately reported 71% of their own behaviors and 76% of their students’
behaviors, and experts reported 52% of their own behaviors and 48% of their students’
behaviors.
Further, retrospective evaluations of student performance and the attribution of
success or failure have been found to reflect cognitive simplification processes by
which analytical heuristics are employed rather than more complex and effortful
decision-making processes. Cadwell and Jenkins (1986) conducted a study of
teachers’ judgments by asking them to rate hypothetical student profiles. The data
collected indicated that retrospective interpretation involved heavy reliance on
generalized schemas as the basis for identifying deficits in student behavior and
performance. If each profile had been evaluated strictly on its own merits, the ratings
would have been uncorrelated once variation in the component measures was
statistically controlled, because the remaining variability would only have reflected
differences between the profiles. Because the correlations persisted, however, it was
evident that an underlying mental construct drove a substantial proportion of the
ratings irrespective of differences in student performance. Thus, the limitations of
mental models discussed above likewise provide cognitively efficient mechanisms at
the cost of accuracy under higher levels of load.
Summary
The dissociation between expertise and accuracy of self-report has been
demonstrated in a variety of domains in relation to both procedural automaticity and
robust mental models. Although each of these cognitive phenomena has been
theorized to reduce the levels of cognitive load imposed by a relevant task, no study
has yet directly examined the relationship between levels of cognitive load and the
accuracy of self-report. It is toward this end that the current
study was conducted.
Scientific Problem Solving
In the study of problem solving, the search for solutions is often referred to as
the navigation of a problem space, in which the initial state and the goal state are the
starting and ending points, and the space is composed of all possible routes to move
from the former to the latter (Newell & Simon, 1972). However, because scientific
problem solving requires attention to both a hypothesis or research question and an
experimental design that controls sources of variance, Klahr and Dunbar (1988) argue
that there are in fact two adjacent, mutually influential problem spaces which must be
searched.3 After the selection of a particular experimental design and the generation
3 Thagard (1998) advocated the existence of a third problem search space for selection of instruments
used within experiments, given the key role that advances in instrumentation played in the
understanding of bacteria’s role in ulcer formation. However, it can be argued that instrument selection
is encompassed in the attainment of sub-goals within the experimental design problem space. Further,
Schunn and Klahr (1996) present criteria for instances when it might be appropriate to go beyond a two-
space model of scientific problem solving: (a) additional spaces should involve search of different
goals and entities; (b) the spaces should differ empirically from one another; and (c) spaces should be
representable in a computational model that considers each space as distinct. As the task in the present
study does not meet these criteria, further discussion of this issue is beyond the scope of the current
review.
of data, the scientist must evaluate the progress that has been made in the hypothesis
space towards the goal of null hypothesis rejection. Thus, each successful navigation
of the experiment space results in incremental progress within the hypothesis space.
Given the vast number of possible steps within each problem space and the
exponential increase in the size of the search space for each additional step, Klahr and
Simon (2001) note that “much of the training in scientists is aimed at increasing the
degree of well-definedness of problems in their domain” (p. 76). In their study, for
example, subjects were provided with programmable “rocket ships” that included a
button that produced an unknown function. Their task was to
determine the function of this button through experimentation. As the participants
engaged in the activity, they simultaneously worked to design experiments that would
isolate its role as well as generate hypotheses that could be used to describe the rule-
governed function and be tested for confirmation.
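To make the two-space account concrete, a minimal sketch is offered below
(illustrative Python; none of the cited studies published code, and every name in the
sketch is hypothetical). Each pass designs an experiment in the experiment space, and
the resulting data prune the candidates in the hypothesis space:

    # Hypothetical skeleton of a dual-space search in the spirit of
    # Klahr and Dunbar (1988): 'candidates' is the hypothesis space,
    # and design_experiment searches the experiment space.
    def dual_space_search(candidates, design_experiment, run_experiment):
        # (Assumes each experiment is informative enough to eliminate
        # at least one rival hypothesis, so the loop terminates.)
        while len(candidates) > 1:
            experiment = design_experiment(candidates)  # experiment-space step
            outcome = run_experiment(experiment)
            # Hypothesis-space step: retain only hypotheses whose
            # predictions match the observed outcome.
            candidates = [h for h in candidates
                          if h["predict"](experiment) == outcome]
        return candidates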
In the study of high-level scientific problem solving, several significant
qualitative differences have been observed in performance between experts and
novices that have been consistent across studies and specific domains. Specifically,
novices in a given task will rely on heuristic strategies—also called weak methods—
that represent general skills which can be adaptively applied across domains, such as
means-ends analyses and backward reasoning (Lovett & Anderson, 1996). In contrast,
experts utilize domain-specific skills grounded in knowledge of concepts and
procedures acquired through practice within a domain (Anderson, 1993; Newell &
Simon, 1972). Even when an expert encounters novel problems within his domain of
expertise that limit the extent to which strong methods can be used, the heuristic
approaches that are employed are fundamentally more adaptive than novices’ weak
methods, as they utilize an elaborate knowledge base (Schraagen, 1990). For
example, McGuire (1997) provides an extensive list of heuristics that are specific to
the navigation of the hypothesis problem space but require training in experimental
design prior to successful use. Such training at the graduate level was demonstrated
by Lehman, Lampert, and Nisbett (1988) to result in significant improvements in
methodological and statistical reasoning ability across multiple topics (i.e.
behaviorally-defined domains) over the course of graduate programs in psychology at
several universities across the country. Although such improvements appear at face
value to be indicative of a domain-general skill (Perkins & Salomon, 1989), they can
instead be attributed to improvements in the (cognitively-defined) domain-specific
skill of scientific inquiry.
In contrast, true domain-general skills are applied when a structured
knowledge base is not available. Means-ends analysis involves a general attempt to
reduce observed differences between the problem state and the goal state. This
strategy manifests itself as either an attempt to metaphorically move in as straight a
line as possible toward the goal state (i.e., hill climbing; Lovett & Anderson, 1996) or
the identification and pursuit of nested sub-goals that must be met prior to the
attainment of the end goal. For example, studies of novice performance on the Tower
of Hanoi task have reported that subjects will not generate a rule-based strategy for
solving the problem efficiently. Instead, they work backwards from the goal state,
identifying the move necessary to achieve the main goal, then identifying the move
necessary to allow the first (i.e. identify a sub-goal), and so on. Neves (1977; as cited
in Anderson, 1993, p. 37) provides a clear illustration through the verbal reasoning of
one subject: “The 4 has to go to the 3, but the 3 is in the way. So you have to move
the 3 to the 2 post. The 1 is in the way there, so you move the 1 to the 3.” More
recent work by Phillips, Wynn, McPherson, and Gilhooly (2001) further indicated that
even when novice subjects were instructed to preplan a strategy to solve the problem (in
this case, a slight variation of the Tower of Hanoi task, referred to as the Tower of
London) before attempting it, there were no significant differences between their
speed and accuracy of performance and those of subjects not provided with the
opportunity to develop a plan before beginning. Further, when asked to report their
intermediary sub-goals, they were able to accurately report only up to two moves
ahead in a manner similar to the Neves (1977) subject. From this, the authors
conclude that the problem-solving method was indicative of a means-ends approach.
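The nested sub-goaling pattern described above can be made concrete with a short
sketch (illustrative Python, not drawn from any of the cited studies): achieving the
goal of moving disk n requires first adopting the sub-goal of clearing the n - 1 disks
that block it, precisely the structure voiced by Neves’s subject.

    def subgoal_moves(n, source, target, spare, moves=None):
        """Backward sub-goaling for the Tower of Hanoi: 'move disk n
        to target' spawns the nested sub-goal of first moving the
        n - 1 disks stacked on top of it out of the way."""
        if moves is None:
            moves = []
        if n > 0:
            subgoal_moves(n - 1, source, spare, target, moves)  # clear blockers
            moves.append(f"move disk {n} from {source} to {target}")
            subgoal_moves(n - 1, spare, target, source, moves)  # restack
        return moves

    # subgoal_moves(3, "post 1", "post 3", "post 2") yields the 7-move solution.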
The distinction between novice and expert approaches to the conceptualization
of physics problems was also observed in problem sorting tasks. Expert participants
consistently categorized problems according to underlying principles represented in
the prompts in contrast to novices, who paid greater heed to surface features such as
apparatus and available facts (Chi, Feltovich, & Glaser, 1981). More recently, an
extensive study of individual differences in physics problem solving replicated and
extended these findings, identifying the prominence of principle identification as a
factor in the strategy selection of experts (Dhillon, 1998). Expert utilization of theory-
driven conceptualizations had the benefit of activating pre-existing mental models that
represented both the relevant pieces of information that were presented and the
abstract relationships between elements in the mental model (Glaser & Chi, 1988;
Larkin, 1985). This abstract representation also scaffolded the search for missing
information that the model would otherwise incorporate. In contrast, novices who
lacked an adaptive model relied on surface-level details and iterative hypothesis
testing that generated a mental model that was more firmly bound to the concrete
representation of the situation as it was presented (Lamberti & Newsome, 1989).
As these differences are reliable and do not emerge until after high level
attainment within the problem solving domain, it follows that they represent a distinct
set of skills that are not intuitive or available to the novice problem solver. Singley
and Anderson (1989) demonstrated that not only do experts characteristically use
strong (i.e. domain-specific) methods in their problem solving, but less experienced
problem solvers can also learn to use them successfully. In a related study, Schunn
and Anderson (1998) determined that differences in scientific performance were not
significantly attributable to individual differences in fluid intelligence.
This finding replicated the evidence in other skilled domains that after five years of
professional experience, fluid intelligence and performance are not reliably correlated
(Ceci & Liker, 1986; Doll & Mayr, 1987; Ericsson & Lehmann, 1996; Hulin, Henry,
& Noon, 1990; Masunaga & Horn, 2001).
Chen and Klahr (1999) also describe the high level of impact that training in
the strategic control of experimental variables had on science students’ ability to
generate and execute valid scientific inquiries. However, the authors describe the
control of variables strategy (CVS) as a domain-general skill, because it can apply
with equal success to specific studies in physics or any other science content domain.
It is suggested here, however, that experimental design should be recognized as an
independent domain that is specific to endeavors within the experiment problem
space, because it meets Ohlsson’s (1983) criteria for a technical knowledge domain, in
that it is externalized and codified, possesses its own calculus, and can be developed
over time through study.
While domain general skills will certainly impact performance in this area to
the extent that relevant and essential elements of the mental model are not known,
designing informative experiments “requires.. .domain-specific knowledge about the
pragmatic constraints of the particular discovery context” (Klahr, Fay, & Dunbar,
1993, p. 114). For example, Schunn and Anderson (1999) differentiate between
domain experts and “task” experts for their study of scientific reasoning in a memory
experiment design task. Domain experts were university faculty in psychology whose
research agendas were directly related to the problem presented, whereas task experts
were also psychology research faculty but specialized in topics unrelated to memory.
Although there were some significant differences in experimentation strategies
between the two groups of experts (to be expected in light of the inherent confluence
of hypothesis and experiment problem spaces), the performance of both groups was
consistently superior to that of undergraduates who had completed a course in
experimental design in psychology.
Given these findings and the relationship between expertise and practice
discussed above, it is reasonable to conclude that skills and strategies specific to
scientific problem solving are fundamentally acquirable. As such, differences in
performance are expected to be more directly linked to deliberate practice of skill
acquisition than to measures of fluid intelligence.
Summary
Scientific problem solving processes have been found to consistently be
reflective of levels of expertise and retain striking similarities across different
scientific disciplines (e.g. physics, psychology, etc.). Further, forward reasoning skills
in these endeavors improve with practice and the incorporation of knowledge into
relevant mental models. As argued above, this suggests that the navigation of the dual
problem spaces inherent in the design and interpretation of scientific experiments
represents an independent domain in which individuals can develop expertise. The
current study identifies the strategies used at various levels of expertise in the
examination of a novel problem within a domain familiar to the participants.
Summary
As indicated in the literature above, the assumptions on which this study rests
are justified by extant research. Specifically, (a) scientific research skills are
acquirable and represent predominantly crystallized intelligence, (b) expertise in
scientific research is distinguishable from lower levels of skill, both in an individual’s
ability to successfully solve a problem and in a qualitative distinction between expert
and novice strategies, and (c) skills that are practiced extensively automate, such that
the mental effort necessary to perform a procedure is minimal. Further, although there
are significant challenges to the reportability of automated procedural knowledge,
there are certain circumstances under which some level of knowledge can be elicited.
Research Questions
The research questions for this study are:
1. What are the cognitive skills and strategies that research experts use in
experimental research and how do they differ from novice techniques?
2. To what extent can experts and non-experts accurately report their own
problem-solving processes?
3. What is the relationship between the accuracy of self-report and the degree
of automaticity during problem-solving?
Hypotheses
1. Differences in performance between expert, intermediate, and novice subjects
will not be significantly related to individual differences in fluid intelligence
(Masunaga & Horn, 2001; Schunn & Anderson, 1998).
2. Experts will evidence highly developed procedures that yield solutions which
are more systematic, more effective, and qualitatively distinct from non-expert
performances (Hmelo-Silver, et al., 2002; Schunn & Anderson, 1999;
Schraagen, 1993; Voss, Tyler, & Yengo, 1983).
3. Experts and novices will be largely inaccurate in reporting their problem
solving procedures as compared to the computer-based record of their actions,
while intermediate subjects’ reports will have significantly higher accuracy.
4. The accuracy of subjects’ self-report with regard to specific elements of the
problem solving process will be negatively related to the level of automaticity
observed during the point in the process reported.
CHAPTER II: METHOD
The research questions posed in this study represent a multi-tiered
investigation of the interrelationships between expert, intermediate, and novice
strategy use, automaticity, and the accuracy of self-report. As each of these elements
manifests in both qualitatively and quantitatively observable ways, a full examination
of the target phenomena necessitates an integrated capture and analysis of the data
across methodologies. Further, it is necessary to utilize multiple methods of data
collection to verify that the results obtained do not unduly represent artifacts of a
particular measurement technique (Campbell & Fiske, 1959). Although Campbell and
Fiske’s original conception of multitrait-multimethod studies called exclusively upon
quantitative methods, the logic underlying the approach extends to the incorporation
of techniques that cross the traditional lines of quantitative and qualitative
methodologies (Tashakkori & Teddlie, 1998).
Creswell (2003) argues that four criteria must be met to fully justify a mixed
methods approach4. First, a priority for the analytical framework must be selected
such that the epistemic and logical assumptions that accompany each methodological
tradition can be applied consistently and systematically to avoid criticisms of
incoherence in application that have been referred to as “mixed up models” (Datta,
1994). In the current study, a postpositive framework is utilized, because the research
4 Onwuegbuzie and Johnson (2004) suggest that studies which utilize multiple methodologies within or
across stages of research (i.e. defining the research objective, data collection, and data analysis) be
referred to as “mixed models” rather than “mixed methods.” However, to maximize consistency in
terminology across authors, these phrases will be considered interchangeable for the purpose of the
current study.
questions fundamentally call into question the accuracy of individuals’ self-reported
experiences relative to more objective observations. As such, it fundamentally rejects
the validity of an entirely subjective personal interpretation of experiences as the basis
for conclusions about the reality of events (phenomenology).
Second, the foundational framework or theory that serves as the conceptual
underpinning for the study as a whole must be acknowledged to clarify the basis from
which assumptions of causality are drawn. As discussed in the first chapter, it is
assumed that expertise is based in the acquisition of domain-specific skills and
schemas. After extensive deliberate practice and experience within the domain of
expertise, these skills become automated and impose less cognitive load on a limited
short term memory capacity. Further, as skills automate, fewer decision points require
conscious resolution and consequently are unlikely to be retained in long term
memory as specific episodic representations that could be accurately recalled and
articulated.
Third, the sequence of data collection (i.e. quantitative first, qualitative first, or
simultaneous collection) and its rationale must be articulated. In order to best capture
the process data of interest, it was necessary to collect both quantitative and qualitative
data as events occurred. However, in addition to the concurrent data captured in real
time, retrospective reports were also collected sequentially to gain additional insight
without confounding or overshadowing the time-sensitive process data. Thus, each
cycle of data collection began with a concurrent phase followed by a sequential phase,
described by Creswell (2003, p. 218) as a “concurrent nested strategy.”
The fourth criterion is that the placement of data/methods integration must be
established as occurring during specific phases of the research: during data
collection, analysis, interpretation, or some combination of the three. This is a
complex issue given the nested nature of the data collection and analysis for this
study. Greene,
Caracelli, and Graham (1989) articulated a framework of rationales for the
incorporation of multiple methodologies and identified five distinct categories:
triangulation, complementarity, initiation, development, and expansion (see Table 1).
Table 1: Mixed Methodology Rationales5

Triangulation: Consideration of converging results to corroborate the validity of
inferences drawn from different methods
Complementarity: Examination of overlapping and distinct aspects of the phenomena
in question for elaboration and clarification
Development: Formulation of a subsequent method of data collection or analysis on
the basis of results from the initial method
Initiation: Identification of contradictions, paradoxes, and alternative explanations or
perspectives for an observed event
Expansion: Incorporation of additional information to expand the breadth or scope of
the study

5 Adapted from Greene, et al. (1989), as cited by Onwuegbuzie & Johnson (2004).
In each category, the collection of both quantitative and qualitative data informs
the analysis and interpretation of results to provide a more comprehensive, accurate, or
refined account of the phenomena observed. By virtue of the differing natures of the
phenomena captured for the current study, it is necessary to embrace each of these
rationales at different stages of analysis. Reliance on each specific use will be
justified below in the explanation of relevant aspects in the design and analysis
sections. Additionally, in accordance with Morse’s (1991) suggestion that mixed
method research be illustrated graphically to simplify the representation of complex
design issues, a diagram representing the current study is presented in Figure 1.
Design
This study utilized a Single-Subject Multivariate Repeated Measures design
(SSMRM; Nesselroade & Featherman, 1991; Wood & Brown, 1994) for the purpose
of capturing concurrent intraindividual changes in multiple variables over time. In this
case, automaticity, scientific problem-solving behaviors, and self-report accuracy were
recorded through time-locked records of each. The operational definition of each
variable and its measurement is discussed with regard to its methodology in the
following section.
SSMRM was selected as the optimal design for the current study, because the
aggregation of data in a cross-sectional design would have prevented appropriate
analysis of the co-occurrence of changes in key variables (Jones & Nesselroade,
1990). The within-subjects approach allowed for the necessary precision, because
observed values were impacted by both intraindividual changes over time and
interindividual differences in domain knowledge, skills, and reasoning abilities that
would have been obscured with a different approach. Specifically, it was necessary to
record subjects’ use of scientific problem-solving skills over time and synchronize it
with measures of cognitive load to facilitate the evaluation of skill automaticity.
Because individuals utilize different approaches, sequences, and durations when
solving scientific problems (Schraagen, 1990), aggregating the data across subjects at
the level of first-order factors would not have yielded meaningful results when
uniquely individual processes needed to be analyzed in relation to the measures of
other relevant variables (Chen & Siegler, 2000; Morgan & Morgan, 2001).
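For concreteness, the shape of the time-locked, within-subject record implied by this
design can be sketched as follows (illustrative Python; the variable names and
numeric values are placeholders, not data from the study):

    import statistics

    # One subject's time-locked record: each tuple pairs a behavior
    # segment with its cognitive load estimate and self-report accuracy.
    segments = [
        # (segment_id, cognitive_load, self_report_accuracy) -- placeholders
        (1, 0.82, 0.40),
        (2, 0.55, 0.71),
        (3, 0.31, 0.88),
        (4, 0.47, 0.75),
    ]

    loads = [s[1] for s in segments]
    accuracies = [s[2] for s in segments]

    # Intraindividual Pearson r, computed within (never pooled across)
    # subjects at the raw-data level. Requires Python 3.10+.
    r = statistics.correlation(loads, accuracies)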
[Figure 1: Mixed Method Diagram. The diagram traces the QUAL data streams
(behavior, self-report) and QUAN data streams (Gf task scores, time, EEG) through
QUAL analyses (cognitive task analysis; segmentation of time according to behavior
segments; self-report/behavior congruence) and QUAN analyses (conversion of EEG
to cognitive load per segment; quantitizing of congruence data; individual and pooled
correlation analysis), yielding QUAL results (decision rules) and QUAN results
(correlation matrices).]
Operational Definitions of Key Variables
Cognitive load. Cognitive load is considered to be an index of mental effort
and is defined as “the number of non-automatic elaborations necessary to solve a
problem” (Salomon, 1984, p. 231). As skills become less effortful with practice, they
move toward automaticity and impose less cognitive load. Thus, measures of
cognitive load represent the quantifiable inverse of the extent to which a procedure is
automated (Sweller, 1994). Early empirical findings suggested a reliable, linear
correlation between subjects’ self-report of mental effort and other load measurements
(Paas, 1992). However, more recent research (Clark, 1999; Flad, 2002; Gimino, 2000)
has argued that although self-reported mental effort scores continue to increase under
highly taxing conditions, other indicators (e.g. response latency in secondary tasks)
actually indicate diminishing levels of cognitive load occupying attentional resources.
Thus, the current study utilized only directly quantitative measures of load to avoid
this potential confound.
Scientific problem-solving strategies. Scientific problem-solving strategies
manifest as sequences of cognitive and behavioral steps taken to navigate the given
problem space. As such, they were identified qualitatively through the observation of
behavior during the task and the verifiable self-report of cognitive goals and actions.
Although raw data representing this variable was captured qualitatively, secondary
data was numerically represented by quantitizing the data through frequency counts
and the recording of elapsed time for individually identifiable processes.
Accuracy of self-report. The accuracy of self-report represents the correspondence
between the verbal descriptions of strategies provided by subjects and the
captured behavioral data depicting the enactment of those strategies. Although the
initial derivation of this variable was achieved through qualitative comparison, the
analysis relied on both qualitative and quantitative representations. Errors in reporting
accuracy are categorized as errors of commission or errors of omission. An error of
commission is defined as a statement made during subsequent self-report that
misrepresents the objectively recorded order of events by inaccurately reporting the
order of steps taken or the presence of steps that did not occur. In contrast, an error of
omission is defined as a failure to report a step that was taken without otherwise
affecting the congruence of the sequence as observed and verbally reported. This data
is quantitized by determining the percentages of sequential congruence and
thoroughness representing the avoidance of errors of commission and omission,
respectively (Allen & Casbergue, 1997).
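One plausible way to compute these two percentages (a sketch in Python; it assumes
that observed and reported steps have been coded into comparable labels, and it is
offered as an illustration rather than as the coding procedure actually used) treats the
longest common subsequence of the two step sequences as the correctly reported
portion:

    def lcs_length(a, b):
        """Length of the longest common subsequence of two step lists."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else \
                    max(dp[i][j + 1], dp[i + 1][j])
        return dp[-1][-1]

    def report_accuracy(observed, reported):
        matched = lcs_length(observed, reported)
        congruence = matched / len(reported)    # commissions lower this
        thoroughness = matched / len(observed)  # omissions lower this
        return congruence, thoroughness

    # e.g., report_accuracy(["A", "B", "C", "D"], ["A", "C", "E"])
    # -> (0.666..., 0.5): "E" is a commission; "B" and "D" are omissions.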
Fluid and crystallized (EDR) ability. Fluid (Gf) and crystallized (Gc)
reasoning measures represent a quantification of subjects’ ability to solve novel
(inductive) and domain-specific (deductive) problems, respectively. Horn and
Masunaga (2000, p. 127) define fluid ability as “inductive reasoning (conjunctive and
disjunctive), and capacities for identifying relationships, comprehending implications,
and drawing inferences within content that is either novel or equally familiar to all.”
Conversely, crystallized intelligence, or expertise-based deductive reasoning (EDR), is
defined as the deductive reasoning ability of experts in their domain of expertise that
is acquired through experience and deliberate practice. As noted by Masunaga and
Horn (2001, p. 294), “reasoning depends on perceiving relationships in complex sets
of stimuli and drawing inferences from these perceptions in order to estimate
relationships under conditions of lawful change. In this sense, [expert reasoning] is
similar to the reasoning that characterizes Gf; indeed, the reasoning of expertise and
the reasoning of Gf are probably along a continuum of similarity.”
Apparatus
Simulation
Presented on a Dell Latitude D600 laptop computer with an external keyboard
and mouse, subjects used a computer simulation, the Simulated Psychology Lab
(Schunn & Anderson, 1999), to design and interpret the results of a series of
factorial-design experiments with the goal of determining which, if either, of two competing
theories accounted for the memory spacing effect described in the introduction to the
program. The interface allowed subjects to select values for six independent variables
(see Table 2), of which up to four could be manipulated in a particular experiment,
involving the learning and recall of word lists by hypothetical subjects. When the
variable settings for an experiment were determined by the user (though they remained
editable until the command to run the experiment was given), the user was then
required to predict the mean percentages of correct responses that the hypothetical
subjects would produce in each condition.6
Once the predictions were entered and the command was given to run the
experiment, the computer generated data sets that were compatible with real-world
results. After each iteration of the design-hypothesize-and-execute cycle, the results
generated by the computer were available to the user in order to inform experimental
designs in subsequent iterations and modify hypotheses. To facilitate this process, the
results of all previous experimental designs and results were available to the user
through the interface. As each action was taken, the simulation captured and recorded
all user actions, including the viewing of data, with a time stamp to provide an
accurate procedural history for analysis. Additional recording was accomplished
using a videocassette recorder connected to the video output of the computer.
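To illustrate how the prediction requirement scales with design size (a sketch; the
variable names and levels below are hypothetical examples rather than the
simulation’s internal representation), the number of cells requiring predictions is the
product of the numbers of levels of the manipulated variables:

    from itertools import product

    # A hypothetical design manipulating three of the six variables.
    design = {
        "repetitions": [2, 5],
        "spacing": ["1 minute", "20 days"],
        "test": ["free recall", "recognition", "stem completion"],
    }

    conditions = list(product(*design.values()))
    # 2 x 2 x 3 = 12 cells; the user must predict a mean percent
    # correct for every one before the experiment can be run.
    print(len(conditions))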
Although the ecological validity of laboratory experiments in general, and of
simulation apparatuses in particular, has been called into question for the study of
cognitive scientific processes (e.g. Giere, 1993; Klahr & Simon, 1999), simulations
that maintain a high level of fidelity to the complexity of experimental tasks and a
small observational “grain size” can be considered to capture to a great extent the
actual problem-solving processes employed by scientists during the course of their
6 Schunn and Anderson (1999, p. 347-348) note that “although this prediction task is more stringent
than the prediction task psychologists typically give themselves (i.e., directional predictions at best, and
rarely for all dimensions and interactions), we used this particular form of a prediction task because 1)
assessing directional predictions proved a difficult task to automate; 2) numerical predictions could be
made without explicit thought about the influence of each variable and possible interactions, and thus
we thought it was less intrusive; 3) it provided further data about the participants’ theories and beliefs
about each of the variables; and 4) it provided some cost to large experimental designs (i.e., many more
predictions to make) to simulate the increasing real-world cost of larger experimental designs.”
work (Klahr, 2002). Of primary concern is the representativeness of the task,
analogous to the methodological considerations of Ericsson and Smith (1991) in the
laboratory study of expert performance. With regard to the task in the simulation
apparatus used in this study, Schunn and Anderson (1999, p. 346) note that “although
the mapping of the two theories for the spacing effect onto these six variables is not
simple, this relationship between theory and operational variable is typical of most
psychological theories and experiments.” Further, the Simulated Psychology Lab is
specifically cited by other authors as an apparatus that “model[s] essential aspects of
specific scientific discoveries” (Klahr & Simon, 2001, p. 75).
An additional point to consider in the use of a simulation task for the current
study is the nature of the phenomena of interest. Automaticity manifests only in
situations that have been encountered and responded to in a consistent manner,
wherein a familiar stimulus or goal triggers a response unmediated by conscious
mental effort. As such, distinctions between the simulated and in vivo environments
will be more likely to inhibit the automatic performance of a skill, strongly reducing
the likelihood of a Type I error in confirming the hypothesized relationship between
self-report accuracy and automaticity (Hypothesis 4). As Vissers, Heyne, Peters, and Guerts
(2001; see also Mook, 1983) explain, “we may demonstrate the power of a
phenomenon by showing that it happens even under unnatural conditions that ought to
preclude it” (p. 134).
Table 2: Independent Variables Manipulable in Simulated Psychology Lab

Learning phase
Repetitions: the number of times that the list of words was studied. Possible values:
2, 3, 4, 5
Spacing: the amount of time spent between repetitions. Possible values: 1 minute to
20 days
Learning context: whether subjects were in the same context for each repetition or
changed contexts for each repetition. Possible values: Mood, Location (yes/no)

Recall phase
Test: the measure of memory performance. Possible values: Free recall, Recognition,
or Stem completion
Delay: the amount of time from the last learning repetition until the recall test was
given. Possible values: 1 minute to 20 days
Recall context: whether subjects were in the same context for each repetition or
changed contexts for each repetition. Possible values: Mood, Location (yes/no)
Electroencephalogram monitor
Subjects’ cognitive load during experimental tasks was measured using a
MindSet MS-1000 EEG and MindMeld processing software (Nolan Computer
Systems, 2002) on a Compaq Presario 700 desktop computer to dynamically record
and analyze changes in the frequency and amplitude of brainwave activity associated
with attention and mental activity. Electrodes were placed in accordance with the
International 10-20 system using a 16-channel ECI Electro-Cap Electrode System
(Electro-Cap International, n.d.).
After raw EEG data are converted into a frequency × amplitude matrix by a Fast
Fourier Transform (FFT), correlations representing the degree of temporal and spatial
symmetry in electrical activity across hemispheres of the brain have been
demonstrated to be indicative of cognitive load (coherence analysis; Petsche &
Etlinger, 1998; Sarnthein, Petsche, Rappelsberger, Rauscher, Shaw, & von Stein,
1998; Volke, Dettmar, Richter, Rudolf, & Buhss, 2002). Specifically, increases in
cognitive load manifest as dissociations in the synchronization of bilateral brain
function, because they represent the use of neural pathways that are underdeveloped or
inefficient due to a lack of routine use (Dunbar & Sussman, 1995; Klimesch, 1999).
EEG is currently the physiological instrument of choice for the measure of
cognitive workload, due to its ability to provide relatively unobtrusive continuous
monitoring of brain function (Gevins & Smith, 2003). Specifically, this approach has
been found to reliably measure cognitive load during task analyses of naturalistic
human-computer interactions (Raskin, 2000) and, when analyzed in multivariate
combinations, can accurately indicate changes in cognitive tasks (Wilson & Fisher,
1995).
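A minimal sketch of this style of coherence computation follows (Python with
NumPy and SciPy; the MindMeld software’s actual pipeline is proprietary, and the
sampling rate, channel pairing, and frequency band below are assumptions):

    import numpy as np
    from scipy.signal import coherence

    FS = 256  # Hz; an assumed EEG sampling rate

    def interhemispheric_coherence(left_channel, right_channel,
                                   band=(8.0, 12.0)):
        """Mean magnitude-squared coherence between homologous
        electrodes (e.g., F3 vs. F4) within a band such as alpha
        (8-12 Hz). In the coherence-analysis literature cited above,
        reduced bilateral coherence is read as increased cognitive
        load."""
        freqs, cxy = coherence(left_channel, right_channel,
                               fs=FS, nperseg=2 * FS)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(np.mean(cxy[mask]))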
Verbal Protocol for Knowledge Elicitation
In the selection of a verbal protocol to capture the covert decisions of the
subjects during the design process, two general methods of verbal knowledge
elicitation utilized in research of expertise and problem-solving were considered:
protocol analysis (Ericsson & Simon, 1993) and cognitive task analysis (Schraagen,
Chipman, & Shute, 2000). Both approaches have been characterized as valid within
specific constraints (discussed below). However, these considerations have not been
consistently applied in the study of experts’ cognitive processes (Bainbridge, 1999).
During protocol analysis, also referred to as the “think aloud” technique, the
capacity limit of short term memory can prevent sufficient attention from being given
to both the task at hand and the translation of mental symbols to verbal form. While
this does not pose a particular problem for tasks that are verbal in nature or do not
require full attentional capacity to achieve, those elements which are not easily
articulable (like images) require extra attentional resources to translate (Chipman,
Schraagen, & Shalin, 2000). If those resources are not available, subjects will fall
silent during the periods of high load. Likewise, processes which have become mostly
or entirely automated will not be articulated (Ericsson & Simon, 1993).
This eventuality is particularly problematic for cognitive research in expertise,
as automaticity often manifests at moments of particular interest. When studying the
cognitive processes of a task, automated productions are especially important to
capture accurately, because they represent the points of most refined skill in the
procedure. Further, subjects who are required to think aloud during insight problem
solving tasks have revealed performance deficits in the time necessary for task
completion and frequency of correct solutions (Schooler, Ohlsson, & Brooks, 1993).
Similarly, in Chung, de Vries, Cheak, Stevens, and Bewley (2002), subjects who were
required to think aloud while engaging in scientific problem solving tasks needed a
significantly greater number of attempts to solve problems successfully when using a
computer interface.
This finding of verbal overshadowing has been replicated across a number of
experimental and authentic tasks (Meissner & Memon, 2002). “A common
observation is that verbal rehearsal or mediation declines with practice on perceptual-
motor tasks . . . which indicates that at least the form and possibly the amount of
information held in working memory changes” (Carlson, Khoo, Yaure, & Schneider,
1990, p. 195). Thus, while appropriate for gathering process data in certain verbal
types of tasks performed by individuals who have not yet developed automaticity in
their skills, capturing an accurate representation of the highly specialized
performances of experts in complex cognitive tasks utilizing this method remains
problematic.
In contrast, cognitive task analysis techniques represent knowledge of events
and strategies that have been retained by the subject until after the event in question.
While these techniques span a wide range of tools for knowledge elicitation, the data
generated does not represent the contents of working memory in situ. These
approaches have, however, been found to yield highly accurate information about the
processes executed in a wide variety of authentic tasks (Schraagen et al., 2000;
Velmahos, et al., 2004).
Although capturing procedural knowledge is considered to be more
challenging than declarative knowledge within a domain (Hoffman, 1992), a skill-
based cognitive task analysis framework has been established by Seamster, Redding,
and Kaempf (2000) that focuses specifically on the elicitation of five cognitive skill
types: (a) strategies, (b) decision-making skills, (c) representational skills, (d)
procedural skills, and (e) automated skills. While several specific techniques have
been developed within this framework, one of the most successful has been the
Critical Decision Method, in which each of the skill types is elicited through a variety
of probes in semi-structured interviews. Hoffman, Crandall, and Shadbolt (1998)
reviewed reliability studies of the method and reported that there was high test-retest
reliability over time (3 days, 3 months, and 5 months after the incident reported) and
intercoder reliability of .89. With regard to validity of content, they argue that the
memory prompting cues that are incorporated into the method can overcome the
memory errors that are discussed above. Further, studies conducted by Crandall and
his colleagues (e.g. Crandall & Calderwood, 1989; Crandall & Gamblian, 1991) have
demonstrated that the behaviors captured with cognitive task analysis have differed
significantly from the theoretical knowledge generally portrayed in textbooks but
aligned well with the experiences of experts in the field.
Cognitive task analyses conducted with subjects during the study followed the
Critical Decision Method protocol (CDM; Klein & Calderwood, 1996; Klein,
Calderwood, & MacGregor, 1989). The protocol emphasizes the elicitation of both
procedural and conceptual knowledge from subjects through the recall of particular
incidents using a set of specific probes (Table 3; adapted from Klein & Calderwood,
1996, p. 31) following the subject’s report of a general timeline highlighting decision
points occurring during the procedure. The general sequence for knowledge
elicitation first required subjects to articulate a timeline of events and decisions that
occurred during the task. The interviewer then repeated the sequence back to the
subject and asked the subject to correct, clarify, or expand the sequence of cognitive
events to ensure accuracy. After the timeline had been established, then the subject
was asked to respond to the CDM probes. All elements of the CDM were captured
using an audio cassette recorder with a directional microphone.
Table 3: Critical Decision Method Probes

Cues: What were you seeing, thinking...?
Knowledge: What information did you use in making this decision, and how was it
obtained?
Analogues: Were you reminded of any previous experience?
Goals: What were your specific goals at this time?
Options: What other courses of action were considered or were available to you?
Basis: How was this option selected/other options rejected? What rule was being
followed?
Experience: What specific training or experience was necessary or helpful in making
this decision?
Aiding: If the decision was not the best, what training, knowledge, or information
could have helped?
Situation Assessment: Imagine that you were asked to describe the situation to a
partner who would take over the process at this point; how would you summarize the
situation?
Hypothetical: If a key feature of the situation had been different, what difference
would it have made in your decision?
EDR Assessment
Expertise-based deductive reasoning was assessed using Lawson’s Test for
Scientific Reasoning (Lawson, 1978; 2000; see Appendix A). The test of 24 multiple-
choice items assessed subjects’ abilities to separate variables and to use proportional
logic, combinatorial reasoning, and correlational reasoning. It has been found to
adequately represent the scientific reasoning abilities of college students, with a
Cronbach’s alpha reliability coefficient of .81 (Lawson, Alkhoury, Benford, Clark, &
Falconer, 2000).
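For reference, the reported reliability coefficient follows the standard Cronbach’s
alpha formula, sketched below (illustrative Python; a caller would supply examinees’
actual item responses rather than the placeholder matrix implied here):

    import numpy as np

    def cronbach_alpha(scores):
        """scores: 2-D array, rows = examinees, columns = items (0/1)."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)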
Fluid Ability Assessment
Two fluid ability (Gf) tasks were used to assess fluid ability at multiple points
during the procedure. The first task, Maze (Carroll, 1993; Masunaga & Horn, 2001),
presented subjects with two triangular grids—one large and one small—that had 61
and 36 dots, respectively, placed randomly at intersections of the grid lines (see
Appendix B). Subjects were asked to trace a path originating at the vertex of the
triangle and ending at the top edge that passed through as many dots as possible
without reversing direction as it was being drawn within two minutes. Maze paths that
passed through the maximum number of dots were scored with three points. Paths that
passed through one less dot than the maximum were given two points, and paths that
passed through two less dots were scored as one point. Paths that passed through three
or more dots fewer than the maximum were scored zero.
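The scoring rule reduces to a simple shortfall calculation, sketched below (illustrative
Python):

    def maze_score(dots_passed, max_possible):
        """3 points for a path through the maximum number of dots,
        2 for one below the maximum, 1 for two below, 0 otherwise."""
        shortfall = max_possible - dots_passed
        return max(0, 3 - shortfall)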
The second task, BackSpan (Masunaga & Horn, 2001), was a test of backward
memory span, in which subjects were asked to write down strings of random numbers
recited to them in the reverse order in which they were presented (see Appendix C).
Although forward span tasks are known to be indicative of working memory, Horn,
Donaldson, and Engstrom (1981) found that having subjects recreate the strings in
reverse order from presentation was strongly indicative of fluid reasoning ability.
Twelve strings were presented per assessment, with lengths ranging from three to
eight digits; items were scored as correct or incorrect, yielding scores ranging from
0 to 12.
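The item-level scoring is equally mechanical, as the following sketch illustrates
(Python; the example strings are arbitrary): a response is counted correct only if it
reproduces the presented string exactly in reverse.

    def backspan_score(presented, responses):
        """presented: digit strings as read aloud; responses: the
        subject's written strings. Returns the 0-12 total score."""
        return sum(resp == pres[::-1]
                   for pres, resp in zip(presented, responses))

    # backspan_score(["382", "51749"], ["283", "94715"]) -> 2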
Subjects
Participants in this study were recruited from major research universities in
southern California. Each had experience in academic research at an appropriate level
in some area of psychology (e.g. cognitive psychology, educational psychology, or
psychobiology). All were male and between the ages of 20 and 60 to enhance the
reliability of electroencephalographic data (Fisch, 1999), and all were fluent in
English.
Expert Subjects
Three expert subjects were identified and recruited on the basis of the
following criteria:
1. Subjects had attained tenure in the field of psychology or educational
psychology at a Tier I research university and conducted research for at least
10 years. These elements are associated with the attainment of expertise as
recognized by a professional peerage and with the duration typically necessary for its
development (Ericsson & Charness, 1994).
2. Each subject had published peer-reviewed empirical studies within their
domain of expertise utilizing factorial designs equivalent to those available in
the simulation.
3. Subjects’ major lines of research were not in the area of memory to prevent
biasing of the experimental design and analysis tasks based on recall.
Expert A. Expert A was a 37-year old African-American tenured associate
professor of psychology with 3 published studies utilizing factorial designs in
counseling psychology.
Expert B. Expert B was a 57-year-old Caucasian tenured associate professor of psychology with 18 published studies utilizing factorial designs in cognitive decision making and perception.
Expert C. Expert C was a 43-year-old Asian visiting professor of psychology holding tenure at his home institution, with 15 published studies utilizing multivariate designs in large-scale studies of academic achievement.
Intermediate Subjects
Three intermediate subjects who were each enrolled as doctoral students in
psychology were also identified and recruited for the study. Each had completed at
least one course in research methodology and experimental design, but had not yet
conducted their dissertation studies. Additionally, these subjects had not taken
coursework emphasizing research in human memory or participated in research
projects related to memory issues.
Intermediate A. Intermediate A was a 56-year-old Caucasian who was concurrently enrolled in coursework for master’s and doctoral programs in educational psychology. He earned a B.A. in humanities and a teaching certificate in 1971. He had not published any research papers or been involved in the design of any factorial design studies.
Intermediate B. Intermediate B was a 39-year-old African-American who had completed all coursework for a Ph.D. in educational psychology. He graduated in 1990 with a major in psychology and in 1995 with an M.S. in education. He had published empirical studies but had designed only one factorial experiment, which had not been conducted at the time of data collection.
Intermediate C. Intermediate C was a 56-year-old Caucasian Ph.D. candidate in educational psychology. He completed his B.A. in computer science and an M.S. in computer engineering. He had not published any research papers or been involved in the design of any factorial design studies.
Novice Subjects
Three novice subjects, all undergraduates majoring in psychology, were also
identified and recruited for the study. Each had completed at least one course in
research methodology and experimental design. Additionally, these subjects had not
taken coursework emphasizing research in human memory or participated in research
projects related to memory issues.
Novice A. Novice A was a 22-year-old Caucasian who was completing his senior year as a psychology major and graduating with departmental honors. He had just completed designing and conducting the study for his honors thesis, which utilized a 2x2x2 factorial design.
Novice B. Novice B was a 21-year-old Asian student who was completing his senior year as a psychobiology major and graduating with departmental and university honors. He had not published any research papers or been involved in the design of any factorial design studies.
Novice C. Novice C was a 21-year-old Hispanic student who had just completed his sophomore year as a psychology major. He had not published any research papers or been involved in the design of any factorial design studies.
Procedure
The procedure consisted of two phases—preliminary data collection and
primary task data collection. During the preliminary data collection phase, subjects’
expert deductive reasoning (EDR) was assessed, and instrument calibration occurred.
During the primary task data collection, automaticity, problem-solving strategies, self-
report, and fluid ability measures were recorded.
Preliminary Data Collection
Subjects were first asked to read and sign the Informed Consent Form for Non-
Medical Research. Once any initial questions had been answered and the subject was
seated at a desk, the experimenter explained the nature of the EEG equipment and
provided an overview of the tasks to be completed during the two-hour session. While
the subject completed Lawson’s Test of Scientific Reasoning, the experimenter attached the Electro-Cap, inserted conductive gel into the electrodes using a dull syringe, and verified that the impedance of the EEG was no greater than 3 kΩ, per the Electro-Cap guidelines. Once the subject had completed the written test and the impedance level had been verified, the primary data collection began.
Primary Data Collection
Subjects were informed that they were participating in a study to analyze their
mental effort and strategies during a research task that consisted of multiple iterations
of experimental design and data analysis. They were instructed not to concern
themselves with the amount of time necessary to complete the task. Instead, they were
advised to remain focused on the task at hand and not perform extraneous actions
within the interface. Following these brief instructions, they were presented with the
instructions and task description via the computer monitor. These instructions
included an orientation to the Simulated Psychology Lab interface (Schunn &
Anderson, 1999). Questions and requests for clarification were answered by the
researcher.
Once the task began, the subjects’ coherence levels were recorded via the EEG
while their actions were recorded by a video cassette recorder that captured the on-screen activity as the subject manipulated the software. After each experiment was
designed and executed, subjects were interviewed using the Critical Decision Method
of cognitive task analysis to elicit their recollection of the cognitive processes
underlying their keystroke-level behavior data captured during simulation use. After
the CDM, subjects completed the two fluid ability measures described above.
This process (simulation use, CDM, and Gf measures) was repeated until the subject either solved the problem presented by the simulation, as indicated by the attainment of a Pearson correlation of at least 0.9 between the hypothesized values for all available variables and the actual outcome values generated by the simulation, or reached the maximum session time of two hours. It should be noted that only one subject attained a correlation greater than 0.9 with a maximally complex design, and did so only minutes before the two-hour time limit. As such, the simulation task can be considered sufficiently challenging to adequately represent a range of skill and fluctuation in mental effort (i.e. cognitive load) over the course of performance.
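This stopping rule (criterion correlation or time limit, whichever comes first) can be sketched as a simple loop. The run_iteration callable and the loop structure below are illustrative assumptions, not the study’s actual software:

    import time
    from scipy.stats import pearsonr

    def run_session(run_iteration, max_seconds=2 * 60 * 60, criterion=0.9):
        """Repeat the design/CDM/Gf cycle until the hypothesis-outcome
        correlation meets the criterion or the two-hour limit expires."""
        start = time.time()
        while time.time() - start < max_seconds:
            hypothesized, actual = run_iteration()  # one design-run-interview cycle
            r, _ = pearsonr(hypothesized, actual)
            if r >= criterion:
                return r  # problem solved
        return None  # time limit reached without meeting the criterion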
Analysis
The first phase of analysis consisted of synchronizing the EEG data, the
computer interactions captured by the Simulated Psychology Lab, and the events
reported in the cognitive task analysis. The EEG and simulation data were both time-
stamped by the respective components of the recording devices. However, the timeline
generated during each CDM interview required comparison with the recorded
simulation data.
An adaptation of Ericsson and Simon’s (1993) protocol analysis coding was
utilized, wherein the transcribed data was segmented to encompass the processes that
occurred between one identified decision point and the next. The steps described in
each segment were matched to the video-recorded actions within each iteration of the
design-hypothesize-and-execute cycle. One point of departure from the Ericsson and
Simon (1993) technique was to maintain the contextualization of each segment in
accordance with the recommendation of Yang (2003), who argues that in ill-
structured, complex, and knowledge-rich tasks, discrete cognitive steps are inherently
situated in the broader context of ongoing high level cognitive and metacognitive
processes of reasoning and interpretation. “Given the systemic, interrelated, multidimensional nature of the learners’ cognitive processes in this...task, the complexity of the functional model and taxonomy [can cause subjects] to become immersed in the...interwoven processes of searching, interpreting, defining, reasoning, and structuring” (Yang, 2003, p. 102), resulting in an oversimplification of subsequent analyses that loses relevant meaning within the process.
The second phase of analysis consisted of the coding of self-report data
segments for errors of commission and omission in relation to the time-matched
actions in the simulation, as well as calculating percentages as described above. Once
the segments were coded by error type, the categorical variables were represented
numerically for use in the statistical analyses. For each segment, three binary
variables were assigned: correct, omission, and commission. A value of 1 for a
category indicated it applied to the segment, and a value of 0 indicated that it did not.
When the verbal data for a segment was deemed accurate, the other two categories
were marked 0. Likewise, if either of the error conditions was found to be present, the
other was coded 0. As an additional analysis, overall accuracy of self-report was
evaluated by percentages as described previously. The percentage representing the
proportion of errors of omission was subtracted from 100 to determine the density of
data available in the self-report. Iterations with densities less than 20% were excluded
from further statistical analyses, as they yielded too few coded segments to provide
sufficient power.
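These coding and exclusion rules can be expressed compactly; the sketch below is illustrative (the dictionary representation and function names are assumptions):

    def code_segment(status: str) -> dict:
        """One-hot encode a segment as correct, omission, or commission."""
        return {cat: int(cat == status)
                for cat in ("correct", "omission", "commission")}

    def report_density(omission_pct: float) -> float:
        """Density of self-report data: 100 minus the omission percentage."""
        return 100.0 - omission_pct

    def usable(omission_pct: float, threshold: float = 20.0) -> bool:
        """Iterations with density below 20% were excluded from analysis."""
        return report_density(omission_pct) >= threshold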
In the third phase of analysis, the CDM responses were evaluated to identify
consistent decision rules for each subject. The verbal data was analyzed in three ways.
First, individual statements from each subject’s CDM data were categorized as goals,
strategies, decisions, situational assessments, cues, mental modeling, or analogues to
determine conceptual representations of the subjects’ procedures for designing the
experiments. Second, decision rules were derived from those procedures through the
identification of consistent chains of reasoning. Third, temporal sequences of the
procedures were reconstructed from subjects’ verbal reports. Because of its length, the observed sequential data captured during subjects’ use of the simulation is presented for each subject in Appendices D-L.
Categorization of Process Elements
Goal statements were considered to be those that identified an intended outcome or product of action that provided an immediately discernible purpose for a given action or action sequence. Strategy statements identified operating principles that provided a heuristic approach to attaining an explicit or implicit goal or explained the reasoning process that underlay a particular decision. Statements that
were coded as decisions represented cognitive or physical actions that satisfied an
interim goal (explicit or implicit) in the problem solving process. Situational
assessments were any statements reporting observations of or reflections on processes
or outcomes involved in the task. Cues were identified as specific observations that
directly prompted the implementation of a strategy or decision. Mental modeling
represented instances of forward reasoning that called upon specific declarative
knowledge. Analogue statements were those that explained similarities between the
simulation task and previous experiences with regard to any of the previously defined
statement categories. Subjects’ verbalizations in each category were used to inform
the creation of representative decision rules, and general strategic themes were
explored.
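For concreteness, the seven-category coding scheme can be represented as a small data structure; the enum and field names below are illustrative assumptions:

    from dataclasses import dataclass
    from enum import Enum

    class Category(Enum):
        GOAL = "goal"                  # intended outcome or product of action
        STRATEGY = "strategy"          # operating principle toward a goal
        DECISION = "decision"          # action satisfying an interim goal
        ASSESSMENT = "assessment"      # observation of process or outcome
        CUE = "cue"                    # observation triggering a strategy/decision
        MENTAL_MODEL = "mental_model"  # forward reasoning from declarative knowledge
        ANALOGUE = "analogue"          # similarity to prior experience

    @dataclass
    class CodedStatement:
        text: str
        category: Category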
Decision Rules
The second analysis of the verbal data identified consistent sequential patterns
that emerged in the sequencing of the categories discussed above. To yield meaningful decision procedures, these sequences had to satisfy two criteria: First, they required obvious, meaningful interconnections between the coded segments, such that authentic processes were identified and semantically unrelated sequences that occurred randomly were excluded. Second, they had to be generalizable beyond the
immediate situation to provide decision rules likely to manifest in similar, not merely
identical, situations.
Sequencing
The third analysis constructed a linear sequence of events as explained by each
subject. Although a sequential narrative of actions and decisions was requested
immediately upon the completion of each iteration of the simulation, subjects did not
always respond in strict compliance. This resulted in explanations that were
occasionally non-sequential. Further, the CDM probes often elicited additional details
that were not part of the unstructured segment of the interview. As such, all relevant data was compiled into a representation of sequence that was internally consistent with the verbalized temporal order.
In the final phase of analysis, P-technique factor analysis (Cattell, 1973; Jones
& Nesselroade, 1990) was initially attempted in order to identify how the observed
variables in the study (accuracy, action, and automaticity) changed together over time
within individual subjects. However, the number of observations per subject was not
sufficient to yield meaningful factor scores (Thompson, 2004). Based on a Monte
Carlo simulation study, MacCallum, Widaman, Zhang, and Hong (1999) found that
communalities of .60 or higher reproduced accurate population pattern coefficients
with minimum sample sizes of 60 cases. However, no subjects yielded 60 or more
empirically verifiable statements. Consequently, for all intraindividual analyses, data
was compiled into correlation matrices. Because the Pearson product-moment
correlations generated in coherence analysis are not normally distributed, Fisher’s z’
transformation was used to convert the data into a form compatible with assumptions
of normality prior to statistical analysis.
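Fisher’s transformation is z' = arctanh(r) = 0.5 * ln((1 + r) / (1 - r)), which yields an approximately normally distributed quantity; a minimal sketch:

    import numpy as np

    def fisher_z(r):
        """Fisher's z' transformation of Pearson correlations."""
        return np.arctanh(r)  # equivalent to 0.5 * np.log((1 + r) / (1 - r))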
This transformation also permitted analysis of the data across subjects in the
form of a pooled correlation matrix. When any two of the three accuracy
classifications were included in the matrix7, maximum likelihood factor analysis
extracted two orthogonal factors using a varimax rotation. As there were very few
errors of commission even in the pooled data, only errors of omission and accurate
statements were included in the analysis.
7 When all three accuracy categories were used, the linear dependence among them (any two categories determine the third) prevented the matrices from being positive definite.
CHAPTER III: RESULTS
The results of the current study are presented whenever possible at three levels
of analysis: across all subjects, within expertise categories, and intraindividually.
First, the fluid and crystallized (EDR) ability measures are analyzed to detect any
significant intraindividual variations over the course of data collection or individual
differences between subjects that might have implications for the interpretation of
subsequent data (Hypothesis 1). Second, quantitative analysis of the performance on
the simulation task is presented (Hypothesis 2). Third, results of the congruence
analysis between subjects’ self-reported and directly observed process data from
simulation tasks are presented to indicate the levels of accuracy and fidelity that
subjects maintained during their self-reports (Hypothesis 3). Fourth, the analysis of
the covariance between self-report congruence and cognitive load is presented as
correlation matrices (Hypothesis 4). Finally, the results of the CDM cognitive task
analyses are presented for each subject as sets of sequential decision rules that were
extracted from the self-report data, followed by data that identifies trends in higher-
level strategy selection.
Reasoning Abilities
Overall descriptive statistics for the subjects participating in this study were
derived by computing the mean and standard deviation (SD) across all subjects and
within the novice, intermediate, and expert groups. For measures with multiple scores
for each subject (i.e. Maze and BackSpan tasks), the mean score for each task was
computed for each subject and used as a single indicator in the determination of group means. One-way ANOVAs were used to test for significant differences between levels of expertise (expert, intermediate, or novice) on measures of scientific (EDR) and fluid reasoning (Gf).
As predicted by the first hypothesis, there was no significant relationship between subjects’ levels of expertise and measures of their fluid ability. One-way ANOVAs failed to indicate any significant differences between expertise classifications (i.e. expert, intermediate, and novice) in Gf as measured by the Maze task (F=1.00, p=0.42) or the BackSpan task (F=4.36, p=0.07). It was surprising to note, however, that there were also no significant differences between subjects’ pre-determined expertise classifications in their scores on Lawson’s Test of Scientific Reasoning (F=1.10, p=0.39).
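For reference, a one-way ANOVA of this kind is reproducible with scipy; the sketch below uses the per-subject Maze means from Table 6 (rounded) and recovers the reported Maze result of F = 1.00, p = 0.42:

    from scipy.stats import f_oneway

    # Per-subject Maze means by expertise group (see Table 6)
    expert = [5.00, 5.50, 5.33]
    intermediate = [2.00, 4.67, 5.33]
    novice = [5.67, 4.33, 4.40]

    f_stat, p_value = f_oneway(expert, intermediate, novice)
    print(f"F = {f_stat:.2f}, p = {p_value:.2f}")  # F = 1.00, p = 0.42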
Table 4: Written Measures—All Subjects

Measure                                  Mean      SD        Skewness   Kurtosis
Lawson's Test of Scientific Reasoning   19.444    3.1667     -1.8309     3.9784
Maze Task                                4.693    1.1170     -2.0280     4.7641
BackSpan                                 7.122    2.3813     -0.4050    -0.1916
Table 5: Written Measures—By Subject Classification

Measure / Group                           n     Mean      SD
Lawson's Test of Scientific Reasoning (EDR)
  Expert                                  3    21.000    1.7321
  Intermediate                            3    17.333    5.0332
  Novice                                  3    20.000    1.0000
Maze
  Expert                                  3     5.278    0.2546
  Intermediate                            3     4.000    1.7638
  Novice                                  3     4.800    0.7513
BackSpan
  Expert                                  3     5.444    2.1430
  Intermediate                            3     9.500    1.1667
  Novice                                  3     6.422    1.8139
Table 6: Written Measures—By Individual Subject

                  Lawson's Test of              Maze              BackSpan
Subject           Scientific Reasoning   N   Mean    SD       Mean     SD
Expert A                 22              1   5.00    N/A      7.00    N/A
Expert B                 22              2   5.50    0.71     3.00    0
Expert C                 19              3   5.33    0.58     6.33    2.31
Intermediate A           12              3   2.00    2.00    10.00    1.00
Intermediate B           18              3   4.67    2.30    10.33    2.08
Intermediate C           22              6   5.33    0.52     8.17    1.67
Novice A                 19              3   5.67    0.58     4.33    1.53
Novice B                 21              3   4.33    2.08     7.33    1.15
Novice C                 20              5   4.40    0.55     7.60    1.14
Simulation Performance
Performance on the simulation task was evaluated in terms of outcome quality: the attainment of results sufficient to accept or reject the theories presented to the subjects. In the context of the simulation, this was represented by attaining a positive correlation of 0.9 or better between the values hypothesized by subjects each round and the actual (simulation-generated) values, in a design with enough sophistication to represent all factors relevant to the theories. Because subjects completed different numbers of iterations in the experiment design process, aggregate values for the outcome correlations achieved would not provide meaningful data. Thus, the following figures present the number of iterations completed by each subject and the correlations between their respective predicted and actual values for each experiment.
It was hypothesized (Hypothesis 2) that experts would outperform less experienced subjects. However, the data did not demonstrate that experts were better able to meet or exceed the goal correlation value in the time allotted. Only one subject, Novice A, attained a correlation greater than 0.9 with a maximally complex design during the allotted time. Other subjects (Intermediate B, Intermediate C, Novice C) exceeded 0.9 with a succession of very simple designs, but lacked sufficient time to compile their results into a single, complex design that could be tested.
[Figure: line graph of hypothesis-outcome correlation (y-axis) against iterations completed (x-axis) for the expert subjects; image not reproduced in this transcript.]
Figure 2: Expert Subjects Hypothesis-Outcome Correlations
[Figure: line graph of hypothesis-outcome correlation against iterations completed (1-6) for Intermediate Subjects A, B, and C; image not reproduced in this transcript.]
Figure 3: Intermediate Subjects Hypothesis-Outcome Correlations
[Figure: line graph of hypothesis-outcome correlation against iterations completed for Novice Subjects A, B, and C; image not reproduced in this transcript.]
Figure 4: Novice Subjects Hypothesis-Outcome Correlations
Self-Report Accuracy
The accuracy of self-report was determined by first identifying all empirically
verifiable statements made by subjects during the cognitive task analyses. This
included information that was both directly observable on the videotaped computer
interaction (“I ran two repetitions, three repetitions, and five repetitions for the source
learning”; Novice B) and that required limited inferences to link with specific actions
(“When [subjects] were in different rooms and they tested in different rooms... [I
hypothesized that] they would have the best learning there”; Expert A). These
statements were mapped to the smallest possible sequences identifiable in the
transcriptions of the observed sequential data (Appendices D-L) that represented the
full range of actions covered by the statement.
For example, the two statements used as examples above were matched to the
italicized portions of the observed data transcriptions. Underlined segments indicate
errors of commission, where the observed data directly contradicted the subject’s
statement.
From Novice B (Appendix K):
1. Source Repetitions
1.1. 3 conditions
1.1.1. First condition: 2
1.1.2. Second condition: 3
1.1.3. Third condition: 5
2. Source Spacings
2.1. 1 condition
2.1.1. First condition: 5
2.1.2. First condition: minutes
From Expert A (Appendix D):
29. Test Contexts
29.1. 1 condition → 3
29.1.1. Second condition: Rooms
29.1.2. Third condition: mood
30. Hypotheses
30.1. Source context-same/Test context-same: 10
30.2. Source context-rooms/Test context-rooms: 10
30.3. Source context-mood/Test context-mood: 10
30.4. Source context-same/Test context-rooms: 7
30.5. Source context-same/Test context-mood: 7
30.6. Source context-rooms/Test context-rooms: 10 → 12
30.7. Source context-mood/Test context-mood: 10 → 12
30.8. Source context-rooms/Test context-mood: 10
30.9. Source context-rooms/Test context-rooms: 12 → 14
30.10. Source context-rooms/Test context-same: 12
30.11. Source context-mood/Test context-same: 12
30.12. Source context-mood/Test context-room: 14
The density of the verbal data was determined by summing the number of
italicized lines and dividing by the total number of lines in the observational data. The densities for errors of omission and errors of commission were computed in the same manner. Density data for all subjects is reported in Table 7.
The third hypothesis for the current study predicted that intermediate subjects
would be significantly more accurate than either experts or novices in their self-report.
However, one-way ANOVAs failed to indicate any significant differences between expertise classifications (i.e. expert, intermediate, and novice) in correct verbalization (F=0.02, p=0.98), errors of commission (F=0.05, p=0.96), errors of omission (F=0.01, p=0.99), or overall levels of verbalization (F=0.01, p=0.99).
Table 7: Density of Verbal Report Data in Relation to Observed Actions

Subject           Correct   Commission Errors   Omission Errors   Overall Verbalization
Expert A           34.0%          2.1%               63.8%               36.2%
Expert B           34.3%          3.9%               61.9%               38.1%
Expert C           15.3%          0%                 84.7%               15.3%
Intermediate A     33.2%          1.1%               65.8%               34.2%
Intermediate B     21.2%          0%                 78.9%               21.2%
Intermediate C     26.4%          5.7%               67.9%               33.2%
Novice A8          17.1%          5.1%               77.8%               22.2%
Novice B           51.6%          0%                 48.4%               51.6%
Novice C            8.6%          2.7%               88.7%               11.3%

8 This data represents verbal data from only two of the three rounds completed. The third round of the simulation was completed only shortly before the end of the two-hour time limit, and the subject spoke only briefly in general (i.e. not directly mappable) terms about how he had simply reversed his hypotheses from the previous round to convert his second-round correlation of -0.76 to a strong positive correlation before the time limit. Consequently, it was determined that using quantitative analyses exclusively on the basis of the first two rounds would be most representative of the subject’s self-report accuracy.
Automaticity and Accuracy
The fourth and final hypothesis of the study predicted that cognitive load
would be significantly related to the accuracy of self-report. Because increased
coherence indicates a higher level of automaticity, accuracy was predicted to be
negatively correlated with it. Non-significant negative correlations were observed in
three of the six subjects for whom data could be computed9 and the pooled subject data, and Novice A evidenced a significant negative correlation in his frontal lobes, which are commonly associated with problem solving, cognitive decision making, and the working memory associated with these processes (Baddeley, 1986; Dunbar & Sussman, 1995; Sarnthein et al., 1998).
As a corollary, the hypothesis also predicted that the correlations with errors (especially those of omission) would be positive. As seen in Table 10, nearly all intraindividual correlations were positive, though not significantly so. Expert A and Novice A both demonstrated strong positive correlations in the frontal lobe, and the pooled correlations demonstrate significant positive correlations in all lobes (Table 9). Additionally, when data was pooled within subject classifications for experts and intermediates (EEG data from only one of three novices was usable), experts’ coherence in the frontal-parietal lobes was significantly correlated with errors of omission (r=0.38, p<0.05), as was intermediates’ coherence in the parietal lobes (r=0.194, p<0.05). As indicated in Table 8, Guttman split-half reliability measures of both the intraindividual and pooled data yielded highly significant reliability coefficients.
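Guttman’s split-half coefficient compares the variances of two test halves with the variance of their sum, lambda-4 = 2 * (1 - (var(A) + var(B)) / var(A + B)); a minimal sketch, assuming each series has been split into halves A and B:

    import numpy as np

    def guttman_split_half(half_a, half_b):
        """Guttman's lambda-4 split-half reliability coefficient."""
        half_a, half_b = np.asarray(half_a), np.asarray(half_b)
        total_var = (half_a + half_b).var(ddof=1)
        return 2.0 * (1.0 - (half_a.var(ddof=1) + half_b.var(ddof=1)) / total_var)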
Maximum likelihood factor analysis was conducted in an attempt to reduce the data from the pooled correlation matrix. Using a varimax rotation, the solution yielded two factors that accounted for 56.368 and 16.799 percent of the variance, respectively (chi-square = 165.792, df = 26, p < .001). The first loaded predominantly on
hemispheric coherence overall, which is representative of general brain function across all lobes. The second factor loaded predominantly on accurate statements and errors of omission, but also included loadings approaching significance for frontal (F7F8) and occipital lobe (O1O2) coherence. As previously discussed, the frontal lobes are responsible for many functions related to decision making and working memory. The occipital lobes are associated with visual processing; their increased coherence during segments coded as errors of omission could be explained by reduced visual processing during the periods of problem solving least dependent on use of the computer interface (e.g. reasoning and strategy selection).

9 Expert C and Novice C lacked sufficient density in their verbal data, and the EEG data for Novice B had too many artifacts to be interpretable.
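An extraction-and-rotation step of this kind could be reproduced along the following lines with the factor_analyzer package; the random array is only a placeholder for the pooled segment-by-variable data, not the study’s values:

    import numpy as np
    from factor_analyzer import FactorAnalyzer

    # Placeholder: rows = time-matched segments, columns = the accuracy
    # indicators and coherence variables pooled across subjects
    data = np.random.rand(200, 10)

    fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="ml")
    fa.fit(data)
    print(fa.loadings_)                 # rotated loadings (cf. Table 11)
    print(fa.get_factor_variance()[1])  # proportion of variance per factor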
Table 8: Guttman Split-Half Reliability Correlations

Expert A           .8991
Expert B           .9148
Intermediate A     .9071
Intermediate B     .8663
Intermediate C     .8832
Novice A           .9416
Pooled Analysis    .9005
Table 9: Pooled Coherence Correlations

           ACCURATE   OMISSION   COMMISSION
FP1FP2       -.103      .190**      -.122
F7F8         -.129      .211**      -.113
F3F4         -.115      .169*       -.072
T3T4         -.034      .046        -.016
C3C4         -.124      .157*       -.039
T5T6         -.090      .178*       -.125
P3P4         -.091      .199**      -.155*
O1O2         -.112      .187*       -.103

* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).
Table 10: Intraindividual Coherence-Accuracy Correlation Matrices

Expert A: Coherence Correlations
           ACCURATE   OMISSION   COMMISSION
FP1FP2      -0.104     0.64       -0.516
F7F8        -0.089     .701*      -0.594
F3F4         0.118     0.572      -.713*
T3T4         0.157     0.501      -.688*
C3C4         0.059     0.624      -.694*
T5T6         0.035     .669*      -.712*
P3P4         0.099     0.517      -0.635
O1O2        -0.013    -0.1         0.115

Expert B: Coherence Correlations
           ACCURATE   OMISSION   COMMISSION
FP1FP2       0.17      0.397       0.234
F7F8         0.113     0.028       0.08
F3F4         0.002     0.16        0.158
T3T4         0.205     0.115       0.312
C3C4         0.033     0.054       0.022
T5T6         0.076     0.147       0.22
P3P4         0.021     0.189       0.209
O1O2         0.229     0.368       0.149

Intermediate A: Coherence Correlations
           ACCURATE   OMISSION   COMMISSION
FP1FP2       0.212     0.238       0.065
F7F8         0.003     0.036       0.099
F3F4         0.036     0.078       0.125
T3T4         0.156     0.191       0.097
C3C4         0.017     0.044       0.189
T5T6         0.047     0.055       0.021
P3P4         0.232     0.237       0
O1O2         0.267     0.238       0.106

Intermediate B: Coherence Correlations
           ACCURATE   OMISSION   COMMISSION
FP1FP2      -0.113     0.113        +
F7F8        -0.182     0.182        +
F3F4        -0.084     0.084        +
T3T4         0.247    -0.247        +
C3C4        -0.205     0.205        +
T5T6        -0.15      0.15         +
P3P4        -0.116     0.116        +
O1O2         0.02     -0.02         +

Intermediate C: Coherence Correlations
           ACCURATE   OMISSION   COMMISSION
FP1FP2       .046       .154        .158
F7F8         .192       .255        .085
F3F4         .051       .185        .197
T3T4         .052       .133        .118
C3C4         .155       .200        .060
T5T6         .022       .151        .192
P3P4         .124       .257        .192
O1O2         .091       .167        .109

Novice A: Coherence Correlations
           ACCURATE   OMISSION   COMMISSION
FP1FP2      -.101       .196       -.126
F7F8        -.437*      .511**     -0.053
F3F4        -0.351      0.365       0.024
T3T4        -0.188      0.209      -0.007
C3C4        -0.331      .397*      -0.054
T5T6        -0.329      .457*      -0.146
P3P4        -0.237      0.347      -0.132
O1O2        -0.216      .409*      -0.256

* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).
+ No errors of that type were made by the subject.
Table 11: Rotated Factor Matrix

             Factor 1   Factor 2
ACCURATE      -.052      -.794
OMISSION       .112       .993
FP1FP2         .820       .100
F7F8           .853       .116
F3F4           .919       .066
P3P4           .904       .098
O1O2           .565       .124
T3T4           .778      -.041
C3C4           .916       .055
T5T6           .893       .079
Problem Solving Processes
One of the purposes of this study was to identify the cognitive skills and
strategies that experts use and differentiate them from novice techniques. The CDM
cognitive task analyses yielded decision rule sequences for each subject that were
reasonably robust across iterations of the simulation. First, the sequences are reported
for each subject. Then common themes that emerged from the transcription data at a
general level of strategy are presented.
Decision Rules
Expert A:
Task 1. Select overall strategy for navigating problem space
Goal: Establish criteria on which to base design decisions
Action and Decision Steps
Step 1: Identify specific hypotheses to be tested.
IF prior knowledge, experimental data, or inference from context suggests viable hypotheses THEN select them on that basis.
IF the proposed hypotheses are too complex to be considered
simultaneously THEN select a single hypothesis that is more easily
comprehensible or memorable as the basis for future design decisions.
Step 2: Identify variables that are extraneous to the hypothesis selected.
IF variable is not implicated by the hypothesis THEN do not vary it
or use it to establish multiple experimental conditions.
Task 2. Identify parameters for specific experiment.
Goal: Select appropriate experimental conditions and variables
Action and Decision Steps
Step 1: Select experimental conditions to vary that will most likely create
data informative to the validity of the hypothesis.
IF prior knowledge, experimental data, or inference from context suggests relevant conditions THEN select on that basis.
IF complexity of design (i.e. number of different conditions) exceeds
manageable level THEN reduce number of conditions until not
overwhelming.
Step 2: Select predicted values for outcomes from all experimental
conditions.
IF multiple units of measure or analysis are available THEN select the
one most likely to capture change in the phenomenon of interest.
IF predicted outcome values can be identified from the literature or
informed by previous results THEN select them on that basis.
IF previous data is not available THEN preserve relative value
relationships with an arbitrary baseline score.
Task 3. Run experiment.
Task 4. Interpret results.
Goal: Understand relationship between hypotheses and data values
Action and Decision Steps
Step 1: Look for interactions between conditions manipulating variables
of interest.
IF outcome in a given cell is significantly different from the
hypothesized value THEN note the size and direction of difference.
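These listings are condition-action (production) rules and translate directly into code. As one illustration, a minimal sketch of Expert A’s Task 1, Step 1; the dictionary fields and the capacity limit are assumptions:

    def select_hypotheses(candidates, capacity=2):
        """Expert A, Task 1, Step 1, rendered as condition-action rules.

        candidates: hypotheses suggested by prior knowledge, data, or context,
        e.g. {"viable": True, "comprehensibility": 0.8}.
        """
        # IF prior knowledge, data, or context suggests viable hypotheses
        # THEN select them on that basis.
        selected = [h for h in candidates if h.get("viable")]
        # IF the selected hypotheses are too complex to consider simultaneously
        # THEN keep the single most comprehensible or memorable one.
        if len(selected) > capacity:
            selected = [max(selected,
                            key=lambda h: h.get("comprehensibility", 0))]
        return selected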
Expert B:
Task 1. Evaluate complexity of variables to be studied
Goal: Determine constraints on variable manipulation and measurement.
Action and Decision Steps
Step 1: Apply knowledge of theories pertaining to variables.
IF knowledge of the variables to be studied is available THEN
interpret their known properties with regard to capturing the results of
variable manipulation.
Task 2. Select overall strategy for navigating problem space
Goal: Establish constraints on design to be utilized and select variables.
Action and Decision Steps
Step 1: Determine if sufficient knowledge is available to make a strong
prediction.
IF knowledge from existing literature suggests viable hypothesis
THEN determine it on that basis.
Step 2: Construct hypothesis.
IF incomplete knowledge of relevant variables is available THEN
constrain elements of hypothesis to conform to those expectations.
IF knowledge is insufficient to make definitive hypotheses for
predicting outcomes THEN construct exploratory design.
Task 3. Design experiment.
Goal: Select appropriate experimental conditions.
Action and Decision Steps
Step 1: Determine specific conditions for experiment.
IF hypothesis is determined THEN select unit of analysis and design
conditions to demonstrate expected outcomes.
IF exploratory design is used THEN maximize number of conditions
and breadth of units of analysis to capture broadest possible patterns
in data.
Task 4. Run experiment.
Task 5. Interpret results.
Goal: Determine sufficiency of initial hypotheses on the basis of
experimental data.
Action and Decision Steps
Step 1: Determine if results are interpretable.
IF results reflect changes in identified variables THEN
identify disparities between predicted and actual outcomes.
IF results do not reflect changes in identified variables THEN
repeat process at Task 1.
Step 2: Identify disparities between hypothesized outcomes and
experimental results.
IF results depart significantly from expectations THEN
simplify design in Task 3 to allow for more controlled scrutiny
of unexpected outcomes.
IF results reflect only minor differences between expectations
and outcomes THEN refine hypotheses to minimize
differences.
Task 6. Refine hypotheses.
Goal: Adjust hypotheses on the basis of experimental data.
Action and Decision Steps
Step 1: Look for possible sources of discrepancy.
IF different variables change in common ways THEN add
interaction effect to hypothesis.
Step 2: Incorporate results into hypothesis.
IF hypotheses are similar to data THEN change specific
predicted values to more closely match previous results.
Task 7. Repeat process at Task 1.
Expert C:
Task 1. Identify and evaluate variables to be studied
Goal: Determine constraints on variable manipulation.
Action and Decision Steps
Step 1: Determine dependence/independence of variables.
IF changing a variable represents a different condition for a subject
THEN it is considered independent.
IF a variable represents a change in subject performance THEN it is
considered dependent.
Step 2: Identify independent variables of relevance.
IF variable is central to theory being tested THEN determine ways
that they can be manipulated.
Task 2. Select overall strategy for navigating problem space
Goal: Establish constraints on design to be utilized and select variables.
Action and Decision Steps
Step 1: Determine number of conditions to manipulate.
IF theory to be tested suggests a specific hypothesis THEN determine
it on that basis; otherwise, selection is arbitrary.
IF selection is arbitrary THEN maximize the range of possible values.
Step 2: Identify possible confounding or unobserved variables.
IF they can be controlled THEN determine mechanisms for doing so.
IF confounds cannot be controlled THEN apply to subsequent
analyses.
Step 3: Construct hypothesis.
IF knowledge from existing literature suggests viable hypothesis
THEN determine on that basis.
IF incomplete knowledge of relevant variables is available THEN
constrain elements of hypothesis to conform to those expectations.
IF knowledge is insufficient to make definitive hypotheses for
predicting outcomes THEN construct exploratory design.
Task 3. Design experiment.
Goal: Select appropriate experimental conditions.
Action and Decision Steps
Step 1: Determine specific conditions for experiment.
IF hypothesis is determined THEN select unit of analysis and design
conditions to demonstrate expected outcomes.
IF exploratory design is used THEN maximize number of conditions
and breadth of units of analysis to capture broadest possible patterns
in data.
Task 4. Run experiment.
Task 5. Interpret results.
Goal: Identify major factors in experimental data.
Action and Decision Steps
Step 1: Identify disparities between hypothesized outcomes and
experimental results.
IF results suggest major impact of specific factors THEN add
additional conditions to manipulate those factors in Task 2.
IF results reflect only minor differences between expectations
and outcomes THEN refine hypotheses to minimize
differences.
Task 6. Refine hypotheses.
Goal: Adjust hypotheses on the basis of experimental data.
Action and Decision Steps
Step 1: Incorporate results into hypothesis.
IF hypotheses are similar to data THEN expand range of
hypothesized values to improve strength of prediction.
Task 7. Repeat process at Task 2.
Intermediate A:
Task 1. Select overall strategy for navigating problem space
Goal: Establish constraints on design to be utilized and select variables.
Action and Decision Steps
Step 1: Determine number of conditions to manipulate.
IF no experimental data is available THEN select a single
independent variable to manipulate that is central to the theory being
tested.
IF previous data is available THEN select one more independent
variable to manipulate than was done in the previous experiment.
Task 2. Design experiment.
Goal: Select appropriate experimental conditions.
Action and Decision Steps
Step 1: Determine specific conditions for experiment.
IF independent variable to be tested has multiple conditions THEN
maximize number of conditions and breadth of units of analysis to
capture broadest possible patterns in data.
Step 2: Construct hypothesis.
IF knowledge is insufficient to make definitive hypotheses for
predicting outcomes THEN predict wide range of values.
IF some knowledge of relevant variables is available THEN
constrain hypothesized values to conform to those expectations.
Task 3. Run experiment.
Task 4. Interpret results and refine hypotheses.
Goal: Adjust hypotheses on the basis of experimental data.
Action and Decision Steps
Step 1: Identify disparities between hypothesized outcomes and
experimental results.
IF results depart significantly from expectations THEN increase
complexity in Task 1 to allow for broader pattern of data.
IF results reflect only minor differences between expectations and
outcomes THEN refine hypotheses to minimize differences and
increase complexity in Task 1 to create more comprehensive
predictions.
Task 5. Repeat process at Task 1.
Intermediate B:
Task 1. Select overall strategy for navigating problem space
Goal: Establish criteria on which to base design decisions
Action and Decision Steps
Step 1: Identify specific hypothesis to be tested.
IF it is the first time experimenting with a given population THEN empirically verify that prior knowledge of the theory is applicable.
IF previous experimental data with population is available THEN
design simplest experiment possible to detect effect of a single
proposed theory.
Step 2: Construct hypothesis.
IF some knowledge of relevant variables is available THEN
constrain hypothesized values to conform to those expectations.
Task 2. Identify parameters for specific experiment.
Goal: Select appropriate experimental conditions and variables
Action and Decision Steps
Step 1: Select experimental conditions to vary that will most likely create
data informative to the validity of the hypothesis.
IF prior knowledge, experimental data, or inference from the data suggests relevant conditions THEN select on that basis.
Step 2: Select predicted values for outcomes from all experimental
conditions.
IF multiple units of measurement or analysis are available THEN
select the one most likely to generate an increase in variance.
Task 3: Run experiment.
Task 4. Interpret results.
Goal: Determine sufficiency of initial hypotheses on the basis of
experimental data.
Action and Decision Steps
Step 1: Determine if results are interpretable.
IF results reflect changes in identified variables THEN identify
disparities between predicted and actual outcomes.
IF results do not reflect changes in identified variables THEN repeat
process at Task 1 to increase data variance.
Step 2: Identify disparities between hypothesized outcomes and
experimental results.
IF results depart significantly from expectations THEN consider the
likelihood of small effect size or necessary interaction between
independent variables for the variable examined.
IF results reflect only minor differences between expectations and
outcomes THEN refine hypotheses to minimize differences.
Task 5. Refine hypotheses.
Goal: Adjust design on the basis of experimental data.
Action and Decision Steps
Step 1: Look for possible sources of discrepancy.
IF variance is too low to interpret data THEN change unit of analysis
for variable in question.
Step 2: Incorporate results into hypothesis.
IF hypotheses are similar to data THEN change specific predicted
values to more closely match previous results.
Task 6. Repeat process at Task 1.
Intermediate C:
Task 1. Select overall strategy for navigating problem space
Goal: Establish criteria on which to base design decisions
Action and Decision Steps
Step 1: Identify specific hypothesis to be tested.
IF prior knowledge, experimental data, or inference from context suggests viable hypotheses compatible with the theories in question THEN select them on that basis.
Step 2: Construct hypothesis.
IF some knowledge of relevant variables is available THEN
constrain hypothesized values to conform to those expectations.
IF expectations are established THEN design simplest experiment
possible to detect effect of a single variable proposed in the theory.
Task 2. Identify parameters for specific experiment.
Goal: Select appropriate experimental conditions and variables
Action and Decision Steps
Step 1: Select experimental conditions to vary that will most likely create
data informative to the validity of the hypothesis.
IF prior knowledge, experimental data, or inference from the data suggests relevant conditions THEN select on that basis.
Step 2: Select predicted values for outcomes from all experimental
conditions.
IF multiple units of measurement or analysis are available THEN
select the widest possible range to maximize variance.
Task 3: Run experiment.
Task 4. Interpret results.
Goal: Determine sufficiency of initial hypotheses on the basis of
experimental data.
Action and Decision Steps
Step 1: Determine if results are interpretable.
IF results reflect changes in identified variables THEN identify
disparities between predicted and actual outcomes.
IF results do not reflect changes in identified variables THEN repeat
process at Task 1 to increase data variance.
Step 2: Identify disparities between hypothesized outcomes and
experimental results.
IF results depart significantly from expectations THEN consider
the likelihood of small effect size or necessary interaction between
independent variables for the variable examined.
IF results reflect only minor differences between expectations and
outcomes THEN refine hypotheses to minimize differences.
Task 5. Refine hypotheses.
Goal: Adjust design on the basis of experimental data.
Action and Decision Steps
Step 1: Look for possible sources of discrepancy.
IF variance is too low to interpret data THEN change unit of analysis
for variable in question.
Step 2: Incorporate results into hypothesis.
IF hypotheses are similar to data THEN change variable to be
manipulated.
IF changing variable to be manipulated THEN select on the basis of
the smallest increase in complexity.
Task 6. Repeat process at Task 1.
Novice A:
Task 1. Select overall strategy for navigating problem space
Goal: Establish criteria on which to base design decisions
Action and Decision Steps
Step 1: Identify specific hypothesis to be tested.
IF prior knowledge, experimental data, or inference from context suggests a viable hypothesis THEN select it on that basis.
IF proposed hypotheses are not too complex to consider
simultaneously THEN pursue both simultaneously.
IF hypotheses are too complex to test simultaneously THEN select a
single hypothesis that is more easily comprehensible or memorable
and use as the basis for future design decisions.
Step 2: Identify variables that are extraneous to the hypothesis selected.
IF variable is not implicated by the hypothesis THEN do not
vary it or use it to establish multiple experimental conditions.
Task 2. Identify parameters for specific experiment.
Goal: Select appropriate experimental conditions and variables
Action and Decision Steps
Step 1: Select experimental conditions to vary that will most likely create
data informative to the validity of the hypothesis.
IF prior knowledge, experimental data, or inference from context suggests relevant conditions THEN select on that basis.
IF complexity of design (i.e. number of different conditions) exceeds
manageable level THEN reduce number of conditions until not
overwhelming.
Step 2: Select predicted values for outcomes from all experimental
conditions.
IF multiple units of measure or analysis are available THEN select
the ones most likely to capture change in the phenomenon of interest.
IF predicted outcome values can be identified from the literature or
informed by previous results THEN select them on that basis.
IF previous data is not available THEN preserve relative value
relationships with an arbitrary baseline score.
Task 3. Run experiment.
Task 4. Interpret results.
Goal: Understand relationship between hypotheses and data values
Action and Decision Steps
Step 1: Look for interactions between conditions manipulating variables
of interest.
IF outcome in a given cell is significantly different from the
hypothesized value THEN note the size and direction of difference
Task 5. Interpret results and refine hypotheses.
Goal: Adjust hypotheses on the basis of experimental data.
Action and Decision Steps
Step 1: Identify disparities between hypothesized outcomes and
experimental results.
IF results depart significantly from expectations THEN increase
complexity in Task 2 to allow for broader pattern of data.
IF results reflect only minor differences between expectations and
outcomes THEN refine hypotheses to minimize differences and
increase complexity in Task 2 to create more comprehensive
predictions.
Task 6. Repeat process at Task 2.
Novice B:
Task 1. Select overall strategy for navigating problem space
Goal: Establish constraints on design to be utilized and select variables.
Action and Decision Steps
Step 1: Determine number of conditions to manipulate.
IF no experimental data is available THEN select a single
independent variable to manipulate that is central to the theory being
tested.
IF previous data is available THEN select one more independent
variable to manipulate than was done in the previous experiment.
Task 2. Design experiment.
Goal: Select appropriate experimental conditions.
Action and Decision Steps
Step 1: Construct hypothesis.
IF knowledge is insufficient to make definitive hypotheses for
predicting outcomes THEN predict wide range of values.
IF some knowledge of relevant variables is available THEN
constrain hypothesized values to conform to those expectations.
Task 3. Run experiment.
Task 4. Interpret results and refine hypotheses.
Goal: Adjust design on the basis of experimental data.
Action and Decision Steps
Step 1: Incorporate results into hypothesis.
IF hypotheses are similar to data THEN change variable to be
manipulated.
IF changing variable to be manipulated THEN select on the basis of
the smallest increase in complexity.
Step 2: Compile all data collected.
IF data collected is complementary with regard to theory tested
THEN include new results in assessment of theory validity.
Task 5. Repeat process at Task 1.
Novice C:
Task 1. Select overall strategy for navigating problem space
Goal: Establish criteria on which to base design decisions
Action and Decision Steps
Step 1: Identify variables that are extraneous to the hypothesis selected.
IF variable is not implicated by the hypothesis THEN do not
vary it or use it to establish multiple experimental conditions.
Step 2: Identify specific hypothesis to be tested.
IF prior knowledge, experimental data, or inference from context suggests a viable hypothesis THEN select it on that basis.
IF proposed hypotheses are not too complex to consider
simultaneously THEN pursue both simultaneously.
IF hypotheses are too complex to test simultaneously THEN select a
single hypothesis that is more easily comprehensible or memorable
and use as the basis for future design decisions.
IF first hypothesis is deemed confirmed THEN adopt second
hypothesis.
Task 2. Identify parameters for specific experiment.
Goal: Select appropriate experimental conditions and variables
Action and Decision Steps
Step 1: Select experimental conditions to vary that will most likely create
data informative to the validity of the hypothesis.
IF prior knowledge, experimental data, or inference from context suggests relevant conditions THEN select on that basis.
IF complexity of design (i.e. number of different conditions)
exceeds manageable level THEN reduce number of conditions
until not overwhelming.
Step 2: Establish comparison group
IF multiple conditions across multiple independent variables are to be
used THEN maintain a condition where settings for all variables do
not change across experiments.
Task 3. Run experiment.
Task 4. Interpret results.
Goal: Determine sufficiency of initial hypotheses on the basis of
experimental data.
Action and Decision Steps
Step 1: Determine if results are interpretable.
IF results reflect changes in identified variables THEN
identify disparities between predicted and actual outcomes.
IF results do not reflect changes in identified variables THEN
repeat process at Task 1.
Step 2: Identify disparities between hypothesized outcomes and
experimental results.
IF results depart significantly from expectations THEN
simplify design in Task 2 to allow for more controlled scrutiny
of unexpected outcomes.
IF results reflect only minor differences between expectations
and outcomes THEN refine hypotheses to minimize
differences.
Task 5. Refine hypotheses.
Goal: Adjust design on the basis of experimental data.
Action and Decision Steps
Step 1: Incorporate results into hypothesis.
IF hypotheses are similar to data THEN change variable to be
manipulated.
IF changing variable to be manipulated THEN select on the basis of
the smallest increase in complexity.
Step 2: Compile all data collected.
IF data collected is complementary with regard to theory tested
THEN include new results in assessment of theory validity.
Task 6. Repeat process at Task 1.
General Qualitative Themes in Strategy Data
There were three general experimental design strategies that subjects used in
their approaches to confirm or reject the two theories proposed in the simulation task.
Experts B and C attempted to first “get as much information as possible in one quick,
broad experiment” (Expert B) in order to better understand what experiments would
be most beneficial in determining the validity of the hypotheses put forth in the
introduction to the simulation. As explained by Expert C:
Okay, first of all one main dependent variable is the theory of memorization —
that is a dependent variable. An independent variable.. .we can set two or three
different kinds of time periods to code spacing, maybe about three minutes
between instruction and then about three days between instructions in order to
find out the effect of spacing between instructions [learning tasks]. Then we
can start with three different kinds of moods then three different kinds of the
same or likewise so I can find out one or more independent variables, three
different kinds of context source, so in total, the number of cells could be
3x3x3 and then there could be twenty-seven cells, so we can find out a
completely randomized design for each of the subjects, all of the twenty-seven
cells can be equally distributed at random and equally distributed so we can
experiment.
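Expert C’s proposed design corresponds to a 3x3x3 completely randomized factorial with twenty-seven cells. A minimal sketch of how such a design could be enumerated and randomly assigned appears below; the factor names and levels are illustrative paraphrases of the quotation, not the simulation’s actual variables.

# Sketch of the 3x3x3 completely randomized design Expert C describes.
# Factor names and levels are illustrative placeholders.
import itertools
import random

factors = {
    "spacing": ["3 minutes", "3 hours", "3 days"],
    "mood": ["mood 1", "mood 2", "mood 3"],
    "context": ["context 1", "context 2", "context 3"],
}

cells = list(itertools.product(*factors.values()))
assert len(cells) == 27                         # 3 x 3 x 3 cells

subjects = [f"S{i:02d}" for i in range(1, 28)]  # one subject per cell here
random.shuffle(subjects)                        # completely randomized
assignment = dict(zip(cells, subjects))
print(len(assignment), "cells assigned")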
In contrast, Expert A, Novice A, and Novice C each chose to pursue specific
hypotheses by making strong claims in their hypothesized values and adjusting
appropriately as each was supported or undermined by the results of the first round:
I was looking at the results from the first one thinking how I could kind of
narrow down what I was trying to test, so instead of trying to test both theories
at once just focus on one and focus on the context theory and try to get that
design down.... I just realized I just needed to focus; I need to narrow the focus
a little without trying to test score, just focus on one theory rather than trying
to test everything at the same time and confusing myself. That is what
happened the first time. (Novice C)
Similarly, Expert A (Round 1) commented that “initially [he] was going to try to test
the massed versus spaced hypothesis to kind of include... [a second factor], but I
backed away from that early on, and I think it...was making things...too complex.”
It is interesting to note that although all three subjects ended up pursuing single
hypotheses in their designs, each initially chose to pursue both the context and spacing
theories simultaneously as affirmative hypotheses, but found the resulting designs to
be too complicated to meaningfully determine values for each cell in the factorial
design. Consequently, they revised their respective strategies to consider only one
hypothesis and maintain manageable levels of detail. Novice A explained:
Basically...something that...came to me this time around—it is a lot simpler if
you make...heavy predictions...for one theory, and it either is going to be right
on or it is going to be completely off and that helps you say “okay this theory
doesn’t work so maybe I should...make my hypothesis for the other theory....”
It’s more like kind of picking a side I guess.
The intermediate subjects (A, B, and C) and Novice B, however, opted to begin
with experimental designs that were as simple as possible with the expectation that
larger and more elaborate experiments would be manageable and more likely to yield
high correlations if they were based on the results of simpler experiments. As
explained by Intermediate B (Round 1):
I started thinking about an experiment and how I would be able to rule out or
confirm any hypotheses in terms of what would cause the desired outcome and
so initially, my thinking was to build up from small steps to vary as little as
possible, but to vary enough to where I could confirm some element of one
hypothesis over another.
Subject Intermediate C agreed, indicating at the end of Round 1 that “this was the first
of possibly 50 or 100 experiments that theoretically will be conducted.”
Within these general strategies, subjects sought either to maintain the highest
tolerable level of complexity (Expert A, Expert B, Expert C, Novice A, Novice C) in
the number of conditions and resulting data or to maintain the lowest level of
complexity possible (Intermediate A, Intermediate B, Intermediate C, Novice B) by
expanding the design of each successive experiment only enough to generate new data
sufficient to draw a single new inference. Explanations by Expert B and Intermediates
A and B illustrate this contrast:
My initial assumption was that I would get as much information as possible in
one quick broad experiment and that based on what I found I would then
simplify the experiments and sort of follow particular leads in a more concise
way. (Expert B)
So I decided that this time I would go out on a limb and try a much more—a
little bit more complicated...and try the different contexts, the same
repetitions, two, three and four repetitions but now try different rooms,
different contexts and different moods and kind of go for it. (Intermediate A)
There are [elements of the theory] that I’m not as confident in, but this is one
that I am and that I think most people are confident that the more times you do
something, the better you’re going to be at it or the better you’re going to be
able to recall it. So, I sort of see this as a gimmee up front and will build on
the sort of less intuitive ideas as I make the experiment more complex.
(Intermediate B)
In a follow-up analysis, one-way ANOVAs were conducted to determine whether
there was any relationship between fluid ability (Gf) and strategy selection or fluid
ability and approach to complexity. While there was no significant relationship between
mean performance on the maze task and strategy selection (F=1.20, p=0.36) or the level of
complexity (F=2.56, p=0.15), the relationship between mean BackSpan performance
and complexity was significant (F=8.06, p=0.025), with the relationship between
BackSpan performance and strategy selection approaching significance (F=4.63,
p=0.061).
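For reference, a one-way ANOVA of the kind reported above can be computed directly from the group scores. The sketch below uses scipy.stats.f_oneway; the scores are placeholders for illustration and do not reproduce the study’s data.

# Sketch of a follow-up one-way ANOVA (e.g., BackSpan performance grouped
# by approach to complexity). The scores below are placeholders only.
from scipy.stats import f_oneway

high_complexity = [6, 7, 5, 8, 6]   # placeholder BackSpan scores
low_complexity = [4, 3, 5, 4]       # placeholder BackSpan scores

f_stat, p_value = f_oneway(high_complexity, low_complexity)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")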
Regardless of which approach to complexity the subjects took, they
constrained the options they considered through the selective use of prior knowledge
from their respective backgrounds. In some cases, this knowledge pertained directly
to experimental design considerations. For example, Intermediate C explained his
reasoning for not varying multiple variables in his experiments:
As a programmer—I’ve been programming 25 years almost—one thing you
learn is that if you’re going to make changes, make changes to one thing not
ten things, because if you make them to ten it will take forever to figure out
which of the ten caused the change. So, it’s from programming I learned to
limit to one change.
In other cases, the knowledge was used to inform hypotheses:
In fact I was doing research yesterday and came across a web site which spoke
about the context for remembering and actually recall the discussion of this
piece of research in which was it better to—the army has massive training
exercises in the desert let’s say and then so the training exercise is over and then
they debrief on what went right and what went wrong in the training exercise.
So the question was whether or not the debrief ought to occur on the spot
or...talk about it in a classroom and the person that wrote the research paper
believed that the context was important for remembering so and that the
context is not only sight but it is also all your other senses, sense of smell, feel,
touch etc....[T]he conclusion was that the better time for training would be
right after the training exercise ended...in the battlefield. (Intermediate A)
CHAPTER IV: CONCLUSIONS
The purpose of this study was to investigate the nature and manifestations of
expertise in experimental design. Specifically, it sought to understand the interactions
between automaticity, strategy, and the ability to recall the means by which problems
are solved. It was hypothesized that, consistent with other research in the study of
expertise, experts’ performance would be attributable to previous training and
experience rather than to fluid intelligence. Further, the strategies and level of
performance that experts exhibited were expected to be highly systematic, effective,
and qualitatively different from those of non-experts.
It was also expected that because experts acquire their skill through extensive
practice, the procedures that they utilize are less effortful, faster, and more robust,
because they develop automaticity. This property was predicted to reduce the
accuracy of experts’ self-reported problem solving strategies, because automated skills
are not available to conscious monitoring. In contrast, it was expected that
intermediate-level subjects would be highly accurate in their recall, because they do
possess existing mental models of the experimental design process that facilitate
storage and retrieval for relevant information but have not yet automated the
situational assessment and cognitive decision making skills that allow experts to
perform at the highest levels. Conversely, novices were also expected to be less
accurate in their recall, because although they must proceed through tasks in a fully
conscious (i.e. non-automated) way, they have not yet developed the schemas to
effectively organize the information that they need to process. As such, they are
susceptible to cognitive overload as task complexity increases.
What are the cognitive skills and strategies that research experts use in experimental
research and how do they differ from novice techniques?
The findings from the current study demonstrate that although there were
substantial commonalities among scientific problem solving strategies used in the
completion of the presented task, those strategies were not consistently associated with
the category of expertise of the subjects. Further, the time constraints placed on the
completion of the task made it difficult to assess hypothesized properties, such as
systematicity and efficiency. It was notable, however, that the one subject to
successfully complete the task, although classified as a novice, did utilize a strategy
that was very similar to that used by one of the experts in the study.
In addition to high level strategies, subjects also evidenced distinct trends
regarding the complexity of the designs and the data that they yielded. Those subjects
who attempted to sustain maximal levels of complexity included all three expert
subjects and two novices, one of whom successfully completed the task. This suggests
that such an approach is one that holds promise within the constraints of the task as it
was presented in this study, presumably because it yielded maximal information in the
shortest amount of time.¹⁰

¹⁰ The concept of time in this case can represent either the temporal constraints of the task (i.e. no more
than two hours) or the limited number of iterations that the constraints permitted.
The intermediate subjects and one of the novices, on the other hand, elected to
pursue the opposite strategy, maintaining the simplest designs possible. Although this
strategy did yield a higher number of completed iterations for those subjects as
compared with the first group, the informational yield was not as high per round.
However, these subjects were not necessarily concerned about the imposition of a time
constraint and, as one subject noted, “50 or 100 experiments [would] theoretically...be
conducted” (Intermediate C) using the strategy.
One finding that bears further examination in future studies is the significant
relationship between the high complexity strategy and one of the fluid ability tasks.
From the data available, it was neither possible to determine whether this relationship
was one of correlation or causation, nor whether such differences would ultimately
yield a difference in success at research design tasks (assuming that time is not a
condition for success).
To what extent can experts and novices accurately report their own problem-solving
processes?
The findings from this study did not indicate any significant differences
between experts and non-experts in the accuracy of their self-report. Errors of
omission ranged between 48.4% and 88.7%, and errors of commission ranged from
0% to 5.7%. However, because statements needed to be validated against observable
information, statements pertaining to strategy, mental modeling, and analogues were
unverifiable. Similarly, goals, cues, and situational assessments were only verifiable if
they related to directly observable elements of the simulation. Consequently, it is
likely that much of the validated verbal data emphasized declarative, episodic
knowledge rather than the procedural knowledge of primary interest. To further
explore the question of accuracy for procedural knowledge specifically, more
sophisticated methods will need to be developed.
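The omission and commission rates reported above can be expressed as simple proportions of validated process steps. The sketch below shows one plausible formulation; the step names, counts, and choice of denominators are assumptions made for illustration rather than the study’s actual coding scheme.

# Sketch of omission/commission rate calculations for self-report accuracy.
# Step names and the choice of denominators are illustrative assumptions.
def omission_rate(observed, reported):
    """Proportion of observed steps the subject failed to report."""
    return len(observed - reported) / len(observed)

def commission_rate(observed, reported):
    """Proportion of reported steps that were never observed."""
    return len(reported - observed) / len(reported)

observed = {"vary spacing", "set control", "compare cells", "refine hypothesis"}
reported = {"vary spacing", "refine hypothesis"}
print(omission_rate(observed, reported))    # 0.5
print(commission_rate(observed, reported))  # 0.0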
What is the relationship between the accuracy of self-report and the degree of
automaticity during problem-solving?
Two subjects, one expert and one novice, demonstrated significant correlations
between automaticity as detected by EEG-measured coherence and errors of omission.
Additionally, when coherence and accuracy data were pooled across all subjects,
significant correlations emerged across nearly all brain regions for that error type.
This suggests that given greater intraindividual sample sizes, correlations might
emerge more commonly at the individual level. The fact that these correlations
represent a positive relationship between automaticity and failure to recall events
within the domain of expertise is strongly supportive of the hypothesis that experts
(who presumably possess large amounts of automated knowledge) will be less
accurate in providing complete recollections as a function of their automaticity,
because lower level decision points are not consciously resolved.
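A correlation of this kind pairs each coded segment’s coherence value with a recall-accuracy value for the same segment. The sketch below assumes a Pearson correlation over placeholder values; the study’s exact pairing procedure and statistic may differ.

# Sketch of correlating EEG-measured coherence with errors of omission.
# The arrays are placeholders standing in for per-segment values.
from scipy.stats import pearsonr

coherence = [0.42, 0.55, 0.61, 0.48, 0.70, 0.66]   # per-segment coherence
omissions = [1, 2, 3, 1, 4, 3]                     # omitted steps per segment

r, p = pearsonr(coherence, omissions)
print(f"r = {r:.2f}, p = {p:.3f}")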
It is also possible that the lack of common correlations between automaticity
and accuracy of self-report at the level of individual experts is attributable to
limitations of the method employed. First, as was discussed in relation to questions of
validity for laboratory and simulation studies of cognitive processes, it is possible that
this task was sufficiently novel for the experts who participated in the study to limit
the extent to which automated processes could manifest in their performance. Second,
the long durations represented by some segments of EEG data, a consequence of the
limitations of the coding technique utilized in the procedure, could have obscured the
brief periods of automaticity that characterize the phenomenon.
Summary
Overall, the results of this study present a mixed view of the claims made by
the hypotheses. The lack of differentiation among categories of subjects suggests
either that the selection procedure was not sufficient for identifying individuals who
were truly expert in the domain that was being tested or that the domain itself is not
one that can manifest reliable expert performance. In short, further research is needed
to determine which possibility is most likely. In future studies, the methods employed
will need to further decrease the grain size of automaticity measurements during the
completion of tasks and exercise improved selectivity of expert and novice subjects to
draw more robust conclusions.
Implications
There are two primary sets of implications from this study. The first pertains
to research considerations in the area of expertise, and the second pertains to
considerations of expert-based instruction in research methods.
Definitions of Expertise
Establishing a definition of expertise suitable for research has proven to be a
problematic endeavor (Cooke, 1992; Ericsson, Patel, & Kintsch, 2000). As is the case
for many constructs that exist as relative indicators, the challenge has been in
establishing parameters that are neither so broad as to provide little differentiation
among possible instances, nor so specific as to exclude particular fields of endeavor
for which practitioners can be considered “expert” (Cooke, 1992; cf. Pedhazur &
Schmelkin, 1991). Further, Sternberg (1997, p. 158) observes that “criteria may differ
from one field to another, and they may be loosely and even inconsistently applied
from one case to another.”
Ericsson and Smith (1991, p. 2) developed one of the foundational definitions
in the study of expertise by defining the field of study itself. “On the most general
level,” they noted, “the study of expertise seeks to understand and account for what
distinguishes outstanding individuals in a domain from less outstanding individuals in
that domain, as well as from people in general.” Intentionally leaving wide latitude
for interpretation, they nonetheless established certain necessary conditions that limit
the scope of study to examination of those individuals who evidence stable
characteristics and high levels of performance within their domains of expertise but do
not belong to an artificially limited class of participants (e.g. military commanders,
heads of state).
To facilitate the comparative study of expertise, it is further necessary to
restrict the considered domains to those with standardized performance criteria. This
approach allows for the identification of representative tasks from a domain that both
capture the range of performance levels and allow the assessment of “critical
mediating mechanisms” (p. 8).
However, most empirical studies do not rely on domain-specific rankings such
as those found in competitive chess (e.g., master, grandmaster), the military (e.g.,
officer’s rank, training certification), and academia¹¹ (e.g., advanced degree, number
of publications) to operationally define expertise. Instead, they have operationally
identified “experts” either a priori as people who have worked for more than a certain
number of years within a domain or post hoc on the basis of subjects’ relative
performance on an artificial task.

¹¹ As demonstrated by the current study, however, even these ranking tools may not yield consistent,
high-level performance.
Examples from the literature provide very clear illustrations of these
approaches. Using a priori identification, Shafto and Coley (2003) identified expert
subjects for their study in the domain of commercial fishing by requiring participants
to have simply “a minimum of 5 years experience” (p. 643). Similarly, UNIX
operating system experts were identified in another study, because “all had taken an
operating-systems course and had 3 or more years of experience with UNIX” (Doane,
Pellegrino, & Klatzky, 1990, p. 274). In contrast, a number of studies have utilized
the post hoc approach to subject categorization. Hong and Liu (2003) selected
performers in the top and bottom quartiles of a 76-subject pool to respectively
represent expert and novice game players before eliciting verbal protocols to examine
expert thinking processes. In discussing the definition of expertise from a post hoc
approach, they argue that “comparing the novice and expert is an effective way to
determine the level of expertise that classifies a person as novice or expert” (p. 246).
In addition to the obvious problem of circularity in the latter method, there are
significant empirical challenges to both approaches, as a number of studies indicate
that years of experience and isolated instances of high performance are imperfectly
correlated with common conceptions of expertise (Cooke, 1992; Ericsson & Charness,
1994).
In a comprehensive review, Ericsson, et al. (1993) analyzed many experiments
examining training outcomes across a wide range of tasks (e.g., Morse Code
operation, Olympic events) and found strong evidence that years of experience alone
was not sufficient for explaining performance outcomes. Replicating these findings,
their own study demonstrated that expert
professional pianists in some cases had up to six fewer years of experience than their
less-skilled amateur counterparts. Likewise, Reif and Allen’s (1992) study of expert-
novice differences in physics knowledge indicated that while a general difference in
accuracy and strategy adaptivity between groups was evident, there were great
discrepancies among the experts themselves in their ability to apply relevant domain
knowledge. Ericsson and Lehmann (1996, p. 277) concede that such data “raise
troubling issues about the relation of experience-based expertise.”
Similar problems for post hoc selection have also emerged from expert-novice
studies where high-level performance in an experimental task has failed to provide a
consistent representation of expertise. Without clearly defined parameters for expert
performance within the domain and a direct relationship between the representative
and actual tasks, it becomes difficult to reliably extract the key elements of expertise.
When equivalent task constraints are not embodied within a laboratory study, the
elicited performance will not provide adequate manifestations of the maximal
adaptations that develop with expertise (Ericsson & Charness, 1994). For example,
Garb (1989, cited in Camerer & Johnson, 1991) compared the performance of clinical
psychologists and psychology graduate students with the performance of
non-psychology students in interpreting the results of Rorschach and sentence completion
tests. He found that the performance of his “experts” was no better than that of the
“novices.” This result suggests that the task itself did not adequately capture the
high-level performance of which the trained subjects were capable. Thus, those who
performed best on the task were not necessarily those best able to represent high
performers within the domain of psychology.
Additionally, the elaboration of the construct has thus far failed to meet
Pedhazur and Schmelkin’s (1991, p. 167) criterion for a “scientifically meaningful
definition” in that “its relations with other concepts, be they antecedents, consequents,
or concomitant” have not been satisfactorily specified. Although a great deal of
evidence indicates that experts develop from about ten years of consistent, deliberate
practice, it is not yet definitively known what, if any, additional factors specifically
contribute to expertise beyond a general model of skill acquisition. Further,
differentiations in levels of expertise have yet to be adequately explained. As Holyoak
(1991, p. 312) noted, “skill acquisition results in routine expertise; adaptive expertise
requires something else.”
The relationship between expertise and automaticity is a clear illustration of
this problem. Although it is often expected that experts have developed extensive
automated procedures that facilitate complex solutions (Anderson, 1987; Sternberg, et
al., 2002), there has been no demonstrative evidence that this cognitive phenomenon is
a necessary or sufficient condition for expertise. While it is known that extensive
practice leads to a predictable reduction in the amount of mental effort necessary to
complete a procedure, the specific contribution of this phenomenon to the
demonstration of general characteristics of expert performance has yet to be
delineated.
Frequently, studies that evaluate expertise by means of performance levels on
highly controlled specific tasks do not capture all expertise-relevant characteristics of
subjects. As observed by Shalin, Geddes, Bertram, Szczepkowski, and DuBois
(1997), most experimental tasks are relatively simple in comparison to the
problems encountered in actual practice within the domain. Card-sorting, for
example, is valuable for eliciting experts’ underlying semantic
representations of the knowledge within a domain, but is at best a minor element of
domain-specific expertise, and at worst of no value for differentiating between
journeymen and experts. To illustrate this point, in Chi, et al.’s (1981) classic study of
classification of physics problems, undergraduate students with relatively little
experience in physics were found to sort problems according to surface features of
problems, whereas doctoral students grouped them according to the principles on
which the problems were based. While this finding has great significance in the
development of the study of expertise, it de facto positions graduate students as
experts in the field of physics. Although it was quite reasonable to expect them to be
more knowledgeable and skilled than undergraduate subjects, it is doubtful that they
would have been classified as experts outside the context of the study when compared
to others with more accomplishments and experience in the field.
Similarly, the shift from reliance on weak methods to forward reasoning
from an elaborate mental model of the domain, which has often been
ascribed to expert performance, is itself an insufficient measure of expertise. While
many studies have found this to be a consistent difference in subjects’ performance,
there is not yet a satisfactory way to reliably differentiate between the forward
reasoning strategies of subjects who have acquired a moderate level of mastery in the
domain and those who are recognized as experts. Ultimately, such distinctions are
generated by evaluating the sufficiency of solutions as tasks increase in difficulty.
However, this approach still fails to provide generalizable distinctions and non-relative
criteria, because the prediction of subjects’ performance on unattempted tasks is not
reliable.
Despite suggestions in the literature that “expertise is perhaps a prototypically
defined construct where it is quite difficult to specify any one set of characteristics,
each of which is singly necessary and all of which are jointly sufficient for a person in
any field to be labeled an expert,” (Sternberg, et al., 2002, p. 63; see also Sternberg,
1997), the continued pursuit of an objective definition for expertise is necessary for
the advancement of the field. An examination of Kahane’s (1973) criteria for the value
of a definition reveals the limitations in Sternberg’s approach: (1) Definitions must be
neither too broad nor too narrow; (2) they must not contain vague language; (3) they
must not be circular by concept or vocabulary; and (4) they must state a term’s
essential properties. The first two criteria require discriminability among possible
experts. However, if experts in each domain and task can be considered to have
different properties, it is impossible to reliably identify them. The non-circularity
criterion suggests that the definition must establish a clear relationship to other known
constructs. However, without a firm grounding in key cognitive mechanisms, the
definition of an expert reduces, circularly, to one whose performance reflects qualities of expertise.
Lastly, a prototypical conception of the phenomenon fails to establish the essential
properties by which identification can be made.
If this definition were accepted, it would be necessary to consign the construct to
an inconsistent set of loosely coupled phenomena, requiring that research into expert
performance abandon any explanation grounded in a unified cognitive theory, as
common underlying mechanisms could not be generalized to consistently account for
high-level performance across domains. Thus, the construct would lack validity, as
“high construct validity requires rigorous definition of potential causes and effects so
that manipulations or measures can be tailored to the construct they are meant to
represent” (Cook & Campbell, 1976, p. 239).
While there are certain qualities that are necessary to all conceptions of expert
performance, such as speed and accuracy of task performance (Ericsson & Smith,
1991), they are not sufficient, because they do not adequately capture the facile nature
of expert-level skill. As Camp, Paas, Rikers, and van Merrienboer (2001) note, “when
only performance is used as a selection parameter, no difference is made between
people who perform well and indicate a high mental effort and people who also
perform well, but indicate a low mental effort” (pp. 579-580). It is for this reason that
closer attention to automaticity and the measurement of cognitive load during
performance may foster steps in the direction of objective criteria for expertise.
Instructional Considerations
Traditionally, experts are called upon to explain the necessary skills and
knowledge associated with their domains of expertise, either in lecture format or in
written instructional materials. However, this approach is not necessarily a
reliable way to identify and represent the requisite skills in a domain. This limitation
has prompted knowledge elicitation experts to develop highly specified cognitive task
analysis techniques for identifying all of the component skills necessary to complete a
complex task.
Recent research (Lee, 2004; Maupin, 2003; Velmahos, et al., 2004) has
demonstrated the value and powerful instructional effects of these techniques for
representing and teaching the skills necessary for high-level performance.
Specifically, Lee’s (2004) meta-analytic study of the
instructional effectiveness of training based in cognitive task analyses provided
powerful evidence of the technique’s utility. The overall median percentage of
post-training performance gain was 75.2%, with an effect size of 1.72. Further, one of the
reviewed studies (i.e. Maupin, 2003) demonstrated outcomes of major practical
significance: CTA-based training yielded a ratio of four critical
emergency room accidents in the control group to none in the treatment group. It
should further be noted that the errors made by the medical students in the control
group were characterized as errors in cognitive decision making, rather than failure to
appropriately duplicate directly observed techniques, thereby clearly illustrating the
critical role that cognitive decision mechanisms play in the adequate development of
complex skills. Although research methodology is not necessarily as critical an issue
as life-or-death medical practice, finding maximally effective techniques for
developing curricula may help to resolve the challenges currently faced by graduate
education in research design discussed in the introduction to this study.
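For readers unfamiliar with the effect-size metric cited from Lee (2004), a standardized mean difference (Cohen’s d) can be computed as in the sketch below. The group statistics are placeholders chosen only to illustrate the calculation, and the assumption that Lee’s effect size takes precisely this form is mine rather than the source’s.

# Sketch of a standardized mean difference (Cohen's d), the kind of effect
# size reported in meta-analyses. All values below are placeholders.
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Difference of group means divided by the pooled standard deviation."""
    pooled_var = (((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                  / (n_t + n_c - 2))
    return (mean_t - mean_c) / math.sqrt(pooled_var)

print(round(cohens_d(85.0, 60.0, 14.0, 15.0, 30, 30), 2))  # 1.72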
REFERENCES
Aarts, H., & Dijksterhuis, A. (2000). Habits as knowledge structures: Automaticity in goal-directed behavior. Journal of Personality and Social Psychology, 78(1), 53-63.

Ackerman, P. L. (2003). Cognitive ability and non-ability trait determinants of expertise. Educational Researcher, 32(8), 15-20.

Acton, W. H., Johnson, P. J., & Goldsmith, T. E. (1994). Structural knowledge assessment: Comparison of referent structures. Journal of Educational Psychology, 86, 303-311.

Adams, G. B., & White, J. D. (1994). Dissertation research in public administration and cognate fields: An assessment of methods and quality. Public Administration Review, 54(6), 565-576.

Adelson, B. (1981). Problem solving and the development of abstract categories in programming languages. Memory and Cognition, 9, 422-433.

Alberdi, E., Sleeman, D. H., & Korpi, M. (2000). Accommodating surprise in taxonomic tasks: The role of expertise. Cognitive Science, 24(1), 53-91.

Allen, R. M., & Casbergue, R. M. (1997). Evolution of novice through expert teachers’ recall: Implications for effective reflection on practice. Teaching and Teacher Education, 13(7), 741-755.

Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89(4), 369-406.

Anderson, J. R. (1987). Skill acquisition: Compilation of weak-method problem solutions. Psychological Review, 94(2), 192-210.

Anderson, J. R. (1993). Problem solving and learning. American Psychologist, 48(1), 35-44.

Anderson, J. R., Fincham, J. M., & Douglass, S. (1997). The role of examples and rules in the acquisition of a cognitive skill. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4), 932-945.

Anzai, Y., & Yokoyama, T. (1984). Internal models in physics problem solving. Cognition and Instruction, 1, 397-450.
Baddeley, A. (1986). Working memory. Oxford, England: Clarendon Press.

Bainbridge, L. (1981). Mathematical equations or processing routines? In J. Rasmussen & W. B. Rouse (Eds.), Human detection and diagnosis of system failures (NATO Conference Series III: Human Factors, Vol. 15). New York: Plenum Press.

Bainbridge, L. (1999). Verbal reports as evidence of the process operator’s knowledge. International Journal of Human-Computer Studies, 51, 213-238.

Baird, R. R. (2001). Experts sometimes show more false recall than novices: A cost of knowing too much. Learning & Individual Differences, 13(4), 349-355.

Bakeman, R., McArthur, D., & Quera, V. (1996). Detecting group differences in sequential association using sampled permutations: Log odds, kappa, and phi compared. Behavior Research Methods, Instruments and Computers, 28(3), 446-457.

Bargh, J. A. (1990). Auto-motives: Preconscious determinants of social interaction. In R. M. Sorrentino & E. T. Higgins (Eds.), Handbook of motivation and cognition (pp. 93-130). New York: Guilford Press.

Bargh, J. A. (1999). The unbearable automaticity of being. American Psychologist, 54(7), 462-479.

Bargh, J. A., & Ferguson, M. J. (2000). Beyond behaviorism: On the automaticity of higher mental processes. Psychological Bulletin, 126(6), 925-945.

Barnett, S. M., & Ceci, S. J. (2002). When and where do we apply what we learn? A taxonomy for far transfer. Psychological Bulletin, 128(4), 612-637.

Barnett, S. M., & Koslowski, B. (2002). Adaptive expertise: Effects of type of experience and the level of theoretical understanding it generates. Thinking and Reasoning, 8(4), 237-267.

Baumeister, R. F. (1984). Choking under pressure: Self-consciousness and paradoxical effects of incentives on skillful performance. Journal of Personality and Social Psychology, 46, 610-620.

Beilock, S. L., Wierenga, S. A., & Carr, T. H. (2002). Expertise, attention, and memory in sensorimotor skill execution: Impact of novel task constraints on dual-task performance and episodic memory. The Quarterly Journal of Experimental Psychology, 55A(4), 1211-1240.
Bereiter, C., & Scardamalia, M. (1993). Surpassing ourselves: An inquiry into the nature and implications of expertise. Chicago, IL: Open Court.

Berelson, B. (1952). Content analysis in communication research. Glencoe, IL: Free Press.

Berry, D. C. (1987). The problem of implicit knowledge. Expert Systems, 4, 144-151.

Besnard, D. (2000). Expert error: The case of trouble-shooting in electronics. Proceedings of the 19th International Conference SafeComp2000 (pp. 74-85). Rotterdam, Netherlands.

Besnard, D., & Bastien-Toniazzo, M. (1999). Expert error in trouble-shooting: An exploratory study in electronics. International Journal of Human-Computer Studies, 50, 391-405.

Besnard, D., & Cacitti, L. (2001). Troubleshooting in mechanics: A heuristic matching process. Cognition, Technology & Work, 3, 150-160.

Bhaskar, R., & Simon, H. A. (1977). Problem solving in semantically rich domains: An example from engineering thermodynamics. Cognitive Science, 1, 193-215.

Blessing, S. B., & Anderson, J. R. (1996). How people learn to skip steps. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(3), 576-598.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Brookings, J. B., Wilson, G. F., & Swain, C. R. (1996). Psychophysiological responses to changes in workload during simulated air traffic control. Biological Psychology, 42, 361-377.

Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.

Brown, S. W., & Bennett, E. D. (2002). The role of practice and automaticity in temporal and nontemporal dual-task performance. Psychological Research, 66, 80-89.

Bruenken, R., Plass, J. L., & Leutner, D. (2003). Direct measurement of cognitive load in multimedia learning. Educational Psychologist, 38(1), 53-61.
Cadwell, J., & Jenkins, J. (1986). Teachers’ judgments about their students: The effect of cognitive simplification strategies on the rating process. American Educational Research Journal, 23(3), 460-475.

Camp, G., Paas, F., Rikers, R., & van Merrienboer, J. (2001). Dynamic problem selection in air traffic control training: A comparison between performance, mental effort and mental efficiency. Computers in Human Behavior, 17, 575-595.

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 54, 297-312.

Caracelli, V. W., & Greene, J. C. (1993). Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 15(2), 195-207.

Carlson, R. A., Khoo, B. H., Yaure, R. G., & Schneider, W. (1990). Acquisition of a problem-solving skill: Levels of organization and use of working memory. Journal of Experimental Psychology: General, 119(2), 193-214.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press.

Casner, S. M. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36(4), 580-596.

Cattell, R. B. (1973). Factor analysis: An introduction and manual for the psychologist and social scientist. Westport, CT: Greenwood Press.

Ceci, S. J. (1989). On domain specificity...more or less general and specific constraints on cognitive development. Merrill-Palmer Quarterly, 35(1), 131-142.

Ceci, S. J., & Liker, J. K. (1986). A day at the races: A study of IQ, expertise, and cognitive complexity. Journal of Experimental Psychology, 115, 255-266.

Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8, 293-332.
Charness, N., Krampe, R., & Mayr, U. (1996). The role of practice and coaching in entrepreneurial skill domains: An international comparison of life-span chess skill acquisition. In K. A. Ericsson (Ed.), The road to excellence: The acquisition of expert performance in the arts and sciences, sports, and games (pp. 51-80). Mahwah, NJ: Lawrence Erlbaum Associates.

Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55-81.

Chen, Z., & Siegler, R. S. (2000). Across the great divide: Bridging the gap between understanding of toddlers’ and older children’s thinking. Monographs of the Society for Research in Child Development, Serial No. 261, 65(2).

Chi, M. T., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121-152.

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in psychology of human intelligence (Vol. 1, pp. 7-75). Hillsdale, NJ: Erlbaum.

Chipman, S. F., Schraagen, J. M., & Shalin, V. L. (2000). Introduction to cognitive task analysis. In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 3-23). Mahwah, NJ: Lawrence Erlbaum Associates.

Chung, G. K. W. K., de Vries, F. L., Cheak, A. M., Stevens, R. H., & Bewley, W. L. (2002). Cognitive process validation of an online problem solving assessment. Computers in Human Behavior, 18, 669-684.

Clark, R. E. (1999). Yin and yang: Cognitive motivational processes operating in multimedia learning environments. In J. van Merrienboer (Ed.), Cognition and multimedia design. Heerlen, Netherlands: Open University Press.

Clark, R. E., & Blake, S. B. (1997). Designing training for novel problem-solving transfer. In R. Tennyson, D. F. Schott, N. M. Seel, & S. Dijkstra (Eds.), Instructional design: International perspectives, Vol. I: Theory, research, and models (pp. 183-214). Mahwah, NJ: Lawrence Erlbaum Associates.

Clarridge, P. B., & Berliner, D. C. (1991). Perceptions of student behavior as a function of expertise. Journal of Classroom Interaction, 26(1), 1-8.

Clement, J. (1988). Observed methods for generating analogies in scientific problem solving. Cognitive Science, 12(4), 563-586.
Cook, T. D., & Campbell, D. T. (1976). The design and conduct of quasi-experimental and true experiments in field settings. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology (pp. 223-326). Rand McNally Publishing Company.

Cooke, N. J. (1992). Modeling human expertise in expert systems. In R. R. Hoffman (Ed.), The psychology of expertise: Cognitive research and empirical AI (pp. 29-60). Mahwah, NJ: Lawrence Erlbaum Associates.

Cooke, N. J., Atlas, R. S., & Berger, R. C. (1993). Role of high-level knowledge in memory for chess positions. American Journal of Psychology, 106(3), 321-351.

Cowan, N. (2000). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87-185.

Crandall, B., & Calderwood, R. (1989). Clinical assessment skills of neo-natal intensive care nurses (Report Contract 1-R43-NR01911-01, National Center for Nursing, National Institutes of Health, Bethesda, MD). Fairborn, OH: Klein Associates, Inc.

Crandall, B., & Gamblian, V. (1991). Guide to early sepsis assessment in the NICU. Fairborn, OH: Klein Associates, Inc.

Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd ed.). Thousand Oaks, CA: SAGE Publications.

Datta, L. (1994). Paradigm wars: A basis for peaceful coexistence and beyond. In C. S. Rallis & S. F. Rallis (Eds.), The qualitative-quantitative debate: New perspectives (pp. 53-70). San Francisco: Jossey-Bass.

Dey, I. (1993). Qualitative data analysis: A user-friendly guide for social scientists. New York: Routledge.

Dhillon, A. S. (1998). Individual differences within problem-solving strategies used in physics. Science Education, 82, 379-405.

Doane, S. M., Pellegrino, J. W., & Klatzky, R. L. (1990). Expertise in a computer operating system: Conceptualization and performance. Human-Computer Interaction, 5, 267-304.

Doll, J., & Mayr, U. (1987). Intelligenz und Schachleistung—eine Untersuchung an Schachexperten. [Intelligence and achievement in chess—a study of chess masters.] Psychologische Beitrage, 29, 270-289.
Dubois, D., & Shalin, V. L. (2000). Describing job expertise using cognitively oriented task analyses (COTA). In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 41-55). Mahwah, NJ: Lawrence Erlbaum Associates.

Dunbar, K. (1993). Concept discovery in a scientific domain. Cognitive Science, 17, 397-434.

Dunbar, K., & Sussman, D. (1995). Toward a cognitive account of frontal lobe function: Simulating frontal lobe deficits in normal subjects. Annals of the New York Academy of Sciences, 769, 289-304.

Electro-Cap International. (n.d.). Instruction manual for the ECI Electro-Cap Electrode System. Eaton, OH: Electro-Cap International, Inc.

Ericsson, K. A. (Ed.). (1996). The road to excellence: The acquisition of expert performance in the arts and sciences, sports and games. Mahwah, NJ: Lawrence Erlbaum Associates.

Ericsson, K. A. (2000). How experts attain and maintain superior performance: Implications for the enhancement of skilled performance in older individuals. Journal of Aging & Physical Activity, 8(4), 366-372.

Ericsson, K. A., & Charness, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49(8), 725-747.

Ericsson, K. A., Krampe, R. T., & Tesch-Romer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363-406.

Ericsson, K. A., & Lehmann, A. C. (1996). Expert and exceptional performance: Maximal adaptation to task constraints. Annual Review of Psychology, 47, 273-305.

Ericsson, K. A., Patel, V., & Kintsch, W. (2000). How experts’ adaptations to representative task demands account for the expertise effect in memory recall: Comment on Vicente and Wang (1998). Psychological Review, 107(3), 578-592.
Ericsson, K. A., & Smith, J. (1991). Prospects and limits of the empirical study of expertise: An introduction. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 1-38). New York: Cambridge University Press.

Fisch, B. J. (1999). Fisch and Spehlmann’s EEG primer: Basic principles of digital and analog EEG. New York: Elsevier.

Flad, J. A. (2002). The effects of increasing cognitive load on self-report and dual-task measures of mental effort during problem solving. Unpublished dissertation, University of Southern California.

Fournier, L. R., Wilson, G. F., & Swain, C. R. (1999). Electrophysiological, behavioral, and subjective indexes of workload when performing multiple tasks: Manipulations of task difficulty and training. International Journal of Psychophysiology, 31, 129-145.

Frensch, P. A., & Sternberg, R. J. (1989). Expertise and intelligent thinking: When is it worse to know better? In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 5, pp. 157-188). Hillsdale, NJ: Lawrence Erlbaum Associates.

Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman Publishers, USA.

Gardner, M. K., Woltz, D. J., & Bell, B. G. (2002). Representation of memory for order of mental operations in cognitive tasks. American Journal of Psychology, 115(2), 251-274.

Gevins, A., & Smith, M. E. (2003). Neurophysiological measures of cognitive workload during human-computer interaction. Theoretical Issues in Ergonomics Science, 4(1-2), 113-131.

Giere, R. N. (1993). Cognitive models of science. Psycoloquy, 4(56), Scientific Cognition (1). Accessed online at http://psycprints.ecs.soton.ac.uk/archive/00000350/.

Gimino, A. E. (2000). Factors that influence students’ investment of mental effort in academic tasks: A validation and exploratory study. Unpublished dissertation, University of Southern California.
Glaser, R., & Chi, M. T. H. (1988). Overview. In M. T. H. Chi, R. Glaser, & M. J. Farr (Eds.), The nature of expertise (pp. xv-xxviii). Mahwah, NJ: Lawrence Erlbaum Associates.

Gobet, F. (1998). Expert memory: A comparison of four theories. Cognition, 66, 115-152.

Gobet, F., & Simon, H. A. (1996). Templates in chess memory: A mechanism for recalling several boards. Cognitive Psychology, 31, 1-40.

Golde, C. M., & Dore, T. M. (2001). At cross purposes: What the experiences of doctoral students reveal about doctoral education. Philadelphia, PA: A report for The Pew Charitable Trusts.

Gordon, S. E. (1992). Implications of cognitive theory for knowledge acquisition. In R. R. Hoffman (Ed.), The psychology of expertise: Cognitive research and empirical AI (pp. 99-120). Mahwah, NJ: Lawrence Erlbaum Associates.

Gorman, B. S., & Allison, D. B. (1997). Statistical alternatives for single-case designs. In R. D. Franklin, D. B. Allison, & B. S. Gorman (Eds.), Design and analysis of single-case research (pp. 159-214). Mahwah, NJ: Lawrence Erlbaum Associates.

Gott, S. P., Hall, E. P., Pokorny, R. A., Dibble, E., & Glaser, R. (1993). A naturalistic study of transfer: Adaptive expertise in technical domains. In D. K. Detterman & R. J. Sternberg (Eds.), Transfer on trial: Intelligence, cognition, and instruction (pp. 258-288). Norwood, NJ: Ablex.

Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11, 255-274.

Guadagnoli, E., & Velicer, W. (1988). Relation of sample size to the stability of component patterns. Psychological Bulletin, 103, 265-275.

Hankins, T. C., & Wilson, G. F. (1998). A comparison of heart rate, eye activity, EEG and subjective measures of pilot mental workload during flight. Aviation, Space, and Environmental Medicine, 69(4), 360-367.

Harman, H. H. (1967). Modern factor analysis. Chicago: University of Chicago Press.
Hatano, G. (1982). Cognitive consequences of practice in culture specific procedural skills. Quarterly Newsletter of the Laboratory of Comparative Human Cognition, 4, 15-18.

Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan (pp. 262-272). San Francisco, CA: Freeman.

Hatano, G., & Inagaki, K. (2000). Practice makes a difference: Design principles for adaptive expertise. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA, April 2000.

Hermans, D., Crombez, G., & Eelen, P. (2000). Automatic attitude activation and efficiency: The fourth horseman of automaticity. Psychologica Belgica, 40(1), 3-22.

Hmelo-Silver, C. E., Nagarajan, A., & Day, R. S. (2002). “It’s harder than we thought it would be”: A comparative case study of expert-novice experimentation strategies. Science Education, 86, 219-243.

Holbrook, A. (2002). Examining the quality of doctoral research. A symposium presented at the American Educational Research Association Conference, New Orleans, LA, April 1-5, 2002.

Holyoak, K. J. (1991). Symbolic connectionism: Toward third generation theories of expertise. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 301-335). New York: Cambridge University Press.

Hong, J. C., & Liu, M. C. (2003). A study on thinking strategy between experts and novices of computer games. Computers in Human Behavior, 19, 245-258.

Hooker, K., Nesselroade, D. W., Nesselroade, J. R., & Lerner, R. M. (1987). The structure of intraindividual temperament in the context of mother-child dyads: P-technique factor analyses of short-term change. Developmental Psychology, 23(3), 332-346.

Horn, J. L., Donaldson, G., & Engstrom, R. (1981). Apprehension, memory, and fluid intelligence decline in adulthood. Research in Aging, 3, 33-84.

Horn, J. L., & Masunaga, H. (2000). New directions for research into aging and intelligence: The development of expertise. In T. J. Perfect & E. A. Maylor (Eds.), Models of cognitive aging (pp. 125-159). Oxford: Oxford University Press.
Hulin, C. L., Henry, R. A., & Noon, S. L. (1990). Adding a dimension: Time as a factor in the generalizability of predictive relationships. Psychological Bulletin, 107, 328-340.

Hyoenae, J., Tommola, J., & Alaja, A. (1995). Pupil dilation as a measure of processing load in simultaneous interpretation and other language tasks. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 48A(3), 598-612.

Jackson, P. W. (1985). Private lessons in public schools: Remarks on the limits of adaptive instruction. In M. C. Wang & H. J. Walberg (Eds.), Adapting instruction to individual differences (pp. 66-81). Berkeley, CA: McCutchan.

Johnson, P. E. (1983). What kind of expert should a system be? The Journal of Medicine and Philosophy, 8, 77-97.

Jonassen, D. H. (2000). Toward a meta-theory of problem solving. Educational Technology: Research and Development, 48(4), 63-85.

Jones, C. J., & Nesselroade, J. R. (1990). Multivariate, replicated, single-subject, repeated measures designs and P-technique factor analysis: A review of intraindividual change studies. Experimental Aging Research, 16, 171-183.

Kahane, H. (1973). Logic and philosophy: A modern introduction (2nd ed.). Belmont, CA: Wadsworth.

Klahr, D. (2000). Exploring science: The cognition and development of discovery processes. Cambridge, MA: The MIT Press.

Klahr, D., & Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-55.

Klahr, D., Fay, A. L., & Dunbar, K. (1993). Heuristics for scientific experimentation: A developmental study. Cognitive Psychology, 24(1), 111-146.

Klahr, D., & Simon, H. A. (1999). Studies of scientific discovery: Complementary approaches and convergent findings. Psychological Bulletin, 125(5), 524-543.

Klahr, D., & Simon, H. A. (2001). What have psychologists (and others) discovered about the process of scientific discovery? Current Directions in Psychological Science, 10(3), 75-79.
Klein, G. A., & Calderwood, R. (1996). Investigations of naturalistic decision making and the recognition-primed decision model (Research Note 96-43). Yellow Springs, OH: Klein Associates, Inc. Prepared under contract MDA903-85-C-0327 for U.S. Army Research Institute for the Behavioral and Social Sciences, Alexandria, VA.

Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19, 462-472.

Klimesch, W. (1999). Event-related band power changes and memory performance. In G. Pfurtscheller & F. H. Lopes de Silva (Eds.), Handbook of electroencephalography and clinical neurophysiology: Vol. 6. Event-related desynchronization (pp. 161-178). New York: Elsevier.

Kosslyn, S. M., Thompson, W. L., Kim, I. J., & Alpert, N. M. (1995). Topographical representations of mental images in primary visual cortex. Nature, 378, 496-498.

Koubek, R. J., & Salvendy, G. (1991). Cognitive performance of super-experts on computer program modification tasks. Ergonomics, 34, 1095-1112.

Labaree, D. F. (2003). The peculiar problems of preparing educational researchers. Educational Researcher, 32(4), 13-22.

Lamberti, D. M., & Newsome, S. L. (1989). Presenting abstract versus concrete information in expert systems: What is the impact on user performance? International Journal of Man-Machine Studies, 31, 27-45.

Larkin, J. H. (1983). The role of problem representation in physics. In D. Gentner & A. L. Stevens (Eds.), Mental models (pp. 75-98). Hillsdale, NJ: Lawrence Erlbaum Associates.

Larkin, J. H. (1985). Understanding, problem representation, and skill in physics. In S. F. Chipman, J. W. Segal, & R. Glaser (Eds.), Thinking and learning skills (Vol. 2): Research and open questions (pp. 141-160). Hillsdale, NJ: Erlbaum.

Larkin, J., McDermott, J., Simon, D. P., & Simon, H. A. (1980a). Expert and novice performance in solving physics problems. Science, 208, 1335-1342.

Larkin, J. H., McDermott, J., Simon, D. P., & Simon, H. A. (1980b). Models of competence in solving physics problems. Cognitive Science, 4(4), 317-345.
Lavelli, M., Pantoja, A. P. F., Hsu, H., Messinger, D., & Fogel, A. (in press). Using
microgenetic designs to study change processes. In D. G. Teti (Ed.),
Handbook o f Research Methods in Developmental Psychology. Oxford, UK:
Blackwell Publishers.
Lawson, A. E. (1978). Development and validation of the classroom test of formal
reasoning. Journal of Research in Science Teaching, 15(1), 11-24.
Lawson, A. E. (2000). Classroom test of scientific reasoning: Multiple choice
version (Revised Edition). Tempe, AZ: Arizona State University.
Lawson, A. E., Alkhoury, S., Benford, R., Clark, B. R., & Falconer, K. A. (2000).
What kinds of scientific concepts exist? Concept construction and intellectual
development in college biology. Journal of Research in Science Teaching,
37(9), 996-1018.
Leckie, G. J. (1996). Desperately seeking citations: Uncovering faculty assumptions
about the undergraduate research process. The Journal of Academic
Librarianship, 22, 201-208.
Lehman, D. R., Lempert, R. O., & Nisbett, R. E. (1988). The effects of graduate
training on reasoning: Formal discipline and thinking about everyday-life
problems. American Psychologist, 43, 431-442.
Logan, G. (1988a). Toward an instance theory of automatization. Psychological
Review, 95, 492-527.
Logan, G. D. (1988b). Automaticity, resources, and memory: Theoretical
controversies and practical implications. Human Factors, 30(5), 583-598.
Logan, G. D., & Cowan, W. B. (1984). On the ability to inhibit thought and action: A
theory of an act of control. Psychological Review, 91, 295-327.
Logan, G. D., Taylor, S. E., & Etherton, J. L. (1996). Attention in the acquisition
and expression of automaticity. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 22(3), 620-638.
Lovett, M. C., & Anderson, J. R. (1996). History of success and current context in
problem solving: Combined influences on operator selection. Cognitive
Psychology, 31, 168-217.
Lundy, D. H., Wegner, J. L., Schmidt, R. J., & Carlson, R. A. (1994). Serial step
learning of cognitive sequences. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 20, 1183-1195.
MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. (1999). Sample size in
factor analysis. Psychological Methods, 4, 84-99.
Maier, N. R. F. (1931). Reasoning in humans II: The solution of a problem and its
appearance in consciousness. Journal of Comparative Psychology, 12, 181-
194.
Masunaga, H., & Horn, J. (2001). Expertise and age-related changes in components
of intelligence. Psychology and Aging, 16(2), 293-311.
Maupin, F. (2003). Comparing cognitive task analysis to behavior task analysis in
training first year interns to place central venous catheters. Unpublished
doctoral dissertation, University of Southern California.
McBurney, D. H. (1998). Research methods (4th ed.). Pacific Grove, CA:
Brooks/Cole.
McGuire, W. J. (1997). Creative hypothesis generating in psychology: Some useful
heuristics. Annual Review of Psychology, 48, 1-30.
Meissner, C. A., & Memon, A. (2002). Verbal overshadowing: A special issue
exploring theoretical and applied issues. Applied Cognitive Psychology, 16,
869-872.
Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38,
379-387.
Moray, N., & Reeves, T. (1987). Hunting the homomorph: A theory of mental
models and a method by which they may be identified. Proceedings of the
International Conference on Systems, Man, and Cybernetics (pp. 594-597).
Morgan, D. L., & Morgan, R. K. (2001). Single-subject research design: Bringing
science to managed care. American Psychologist, 56(2), 119-127.
Morse, J. M. (1991). Approaches to qualitative-quantitative methodological
triangulation. Nursing Research, 40, 120-123.
Nagai, M., Kazai, K., & Yagi, A. (2001). Lambda response by orientation of striped
patterns. Perceptual and Motor Skills, 93, 672-676.
Nesselroade, J.R., & Featherman, D.L. (1991). Intraindividual variability in older
adults’ depression scores: Some implications for development theory and
longitudinal research. In D. Magnusson, L. Bergman, G. Rudinger, & Y. B.
Torestad (Eds.), Problems and methods in longitudinal research: Stability and
change (pp. 7-66). Cambridge: Cambridge University Press.
Neves, D. (1977). An experimental analysis of strategies of the Tower of Hanoi
(C.I.P. Working Paper No. 362). Unpublished manuscript, Carnegie Mellon
University.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs,
NJ: Prentice-Hall.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal
reports on mental processes. Psychological Review, 84, 231-259.
Nolan Computer Systems. (2002). Mindset software users’ manual. Conifer, CO:
Nolan Computer Systems.
Ohlsson, S. (1983). On natural and technical knowledge domains. Scandinavian
Journal of Psychology, 24, 89-91.
Onwuegbuzie, A. J., & Johnson, R. B. (2004). Mixed method and mixed model
research. In R. B. Johnson & L. B. Christensen, Educational research:
Quantitative, qualitative, and mixed approaches (pp. 408-431). Boston, MA:
Allyn and Bacon.
Onwuegbuzie, A. J., Slate, J. R., Paterson, F. R. A., Watson, M. H., & Schwartz, R.
A. (2000). Factors associated with achievement in educational research
courses. Research in the Schools, 7(1), 53-65.
Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem
solving skill in statistics: A cognitive-load approach. Journal of Educational
Psychology, 84, 429-434.
Patel, V. L., Kaufman, D. R., & Magder, S. A. (1996). The acquisition of medical
expertise in complex dynamic environments. In K. A. Ericsson (Ed.), The
road to excellence: The acquisition of expert performance in the arts and
sciences, sports, and games (pp. 127-165). Mahwah, NJ: Lawrence Erlbaum Associates.
Pedhazur, E. J., & Schmelkin, L. P. (1991). Measurement, design, and analysis: An
integrated approach. Hillsdale, NJ: Lawrence Erlbaum Associates.
Perkins, D. N., & Grotzer, T. A. (1997). Teaching intelligence. American
Psychologist, 52(10), 1125-1133.
Perkins, D. N., & Salomon, G. (1989). Are cognitive skills context-bound?
Educational Researcher, 18(1), 16-25.
Petsche, H., & Etlinger, S. C. (1998). EEG and thinking: Power and coherence
analysis of cognitive processes. Vienna, Austria: Verlag.
Phillips, L. H., Wynn, V. E., McPherson, S., & Gilhooly, K. J. (2001). Mental
planning and the Tower of London task. The Quarterly Journal of
Experimental Psychology, 54A(2), 579-597.
Proctor, R. W., & Dutta, A. (1995). Skill acquisition and human performance.
Thousand Oaks, CA: Sage Publications.
Radziszewska, B., & Rogoff, B. (1988). Influence of adult and peer collaborators
on children’s planning skills. Developmental Psychology, 24(6), 840-848.
Radziszewska, B., & Rogoff, B. (1991). Children’s guided participation in planning
imaginary errands with skilled adult or peer partners. Developmental
Psychology, 27(3), 381-389.
Raskin, J. (2000). The humane interface: New directions for designing interactive
systems. Boston, MA: Addison-Wesley.
Reder, L. M., & Schunn, C. D. (1996). Metacognition does not imply awareness:
Strategy choice is governed by implicit learning and memory. In L. M. Reder
(Ed.), Implicit memory and metacognition (pp. 45-77). Mahwah, NJ:
Lawrence Erlbaum Associates.
Reingold, E. M., Charness, N., Schultetus, R. S., & Stampe, D. M. (2001).
Perceptual automaticity in expert chess players: Parallel encoding of chess
relations. Psychonomic Bulletin & Review, 8(3), 504-510.
Rikers, R., Schmidt, H. G., & Boshuizen, H. (2000). Knowledge encapsulation and
the intermediate effect. Contemporary Educational Psychology, 25(2), 150-
166.
Rittle-Johnson, B., Siegler, R. S., & Alibali, M. W. (2001). Developing conceptual
understanding and procedural skill in mathematics: An iterative process.
Journal of Educational Psychology, 93(2), 346-362.
Rowe, R. M., & McKenna, F. P. (2001). Skilled anticipation in real-world tasks:
Measurement of attentional demands in the domain of tennis. Journal of
Experimental Psychology: Applied, 7(1), 60-67.
Sabers, D., Cushing, K. S., & Berliner, D. C. (1991). Differences among teachers in
a task characterized by simultaneity, multidimensionality, and immediacy.
American Educational Research Journal, 28(1), 63-88.
Salomon, G. (1984). Television is “easy” and print is “tough”: The differential
investment of mental effort in learning as a function of perceptions and
attributions. Journal of Educational Psychology, 76, 774-786.
Sarnthein, J., Petsche, H., Rappelsberger, P., Shaw, G. L., & von Stein, A. (1998).
Synchronization between prefrontal and posterior association cortex during
human working memory. Proceedings of the National Academy of Sciences of
the United States of America, 95, 7092-7096.
Schaafstal, A., & Schraagen, J. M. C. (2000). Training of troubleshooting: A
structured, task analytical approach. In J. M. C. Schraagen, S. F. Chipman, &
V. L. Shalin (Eds.), Cognitive Task Analysis (pp. 57-70). Mahwah, NJ:
Lawrence Erlbaum Associates.
Schmidt, H. G., & Boshuizen, H. (1993). On the origin of intermediate effects in
clinical case recall. Memory & Cognition, 21(3), 338-351.
Schneider, W., & Fisk, A. D. (1982). Concurrent automatic and controlled visual
search: Can processing occur without resource cost? Journal o f Experimental
Psychology: Learning, Memory, and Cognition, 8, 261-278.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human
information processing: I. Detection, search, and attention. Psychological
Review, 84(1), 1-66.
Schoenfeld, A. H. (1999). The core, the canon, and the development of research
skills: Issues in the preparation of education researchers. In E. C. Lagemann
& L. S. Shulman (Eds.), Issues in Education Research: Problems and
Possibilities (pp. 166-202). San Francisco, CA: Jossey-Bass.
Schooler, J. W., Ohlsson, S., & Brooks, K. (1993). Thoughts beyond words: When
language overshadows insight. Journal of Experimental Psychology: General,
122(2), 166-183.
Schunn, C. D., & Anderson, J. R. (1998). Scientific discovery. In J. R. Anderson &
C. Lebiere (Eds.), The Atomic Components of Thought (pp. 385-427).
Mahwah, NJ: Lawrence Erlbaum Associates.
Schunn, C. D., & Anderson, J. R. (1999). The generality/specificity of expertise in
scientific reasoning. Cognitive Science, 23(3), 337-370.
Schunn, C. D., & Klahr, D. (1996). The problem of problem spaces: When and how
to go beyond a 2-space model of scientific discovery. In G. W. Cottrell (Ed.),
Proceedings of the 18th Annual Conference of the Cognitive Science Society
(pp. 25-26). Hillsdale, NJ: Erlbaum.
Schunn, C. D., Reder, L. M., Nhouyvanisvong, A., Richards, D. R., & Stroffolino, P.
J. (1997). To calculate or not to calculate: A source activation confusion
model of problem-familiarity’s role in strategy selection. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 23(1), 3-29.
Schraagen, J. M. C. (1990). How experts solve a novel problem within their domain
of expertise (IZF 1990 B-14). Soesterberg, The Netherlands: Netherlands
Organization for Applied Scientific Research. Prepared under HDO
assignment B89-35 for TNO Institute for Perception, TNO Division of
National Defense Research, Soesterberg, The Netherlands.
Schraagen, J. M. C. (1993). How experts solve a novel problem in experimental
design. Cognitive Science, 17(2), 285-309.
Schraagen, J. M. C., Chipman, S. F., & Shute, V. J. (2000). State-of-the-art review
of cognitive task analysis techniques. In J. M. C. Schraagen, S. F. Chipman, &
V. L. Shalin (Eds.), Cognitive Task Analysis (pp. 467-487). Mahwah, NJ:
Lawrence Erlbaum Associates.
Seamster, T. L., Redding, R. E., & Kaempf, G. L. (2000). A skill-based cognitive
task analysis framework. In J. M. C. Schraagen, S. F. Chipman, & V. L.
Shalin (Eds.), Cognitive Task Analysis (pp. 135-146). Mahwah, NJ: Lawrence
Erlbaum Associates.
Shafto, P., & Coley, J. D. (2003). Development of categorization and reasoning in
the natural world: Novices to experts, naive similarity to ecological
knowledge. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 29(4), 641-649.
Shalin, V. L., Geddes, N. D., Bertram, D., Szczepkowski, M. A., & DuBois, D.
(1997). Expertise in dynamic, physical task domains. In P. J. Feltovich, K. M.
Ford, & R. R. Hoffman (Eds.), Expertise in Context (pp. 195-217). Menlo
Park, CA: American Association for Artificial Intelligence Press.
Shiffrin, R. M. (1988). Attention. In R. C. Atkinson, R. J. Herrnstein, G. Lindzey, &
R. D. Luce (Eds.), Stevens' Handbook of Experimental Psychology (2nd Ed.)
(pp. 739-811). New York: Wiley.
Shiffrin, R. M. & Dumais, S. T. (1981). The development of automatism. In J. R.
Anderson (Ed.), Cognitive Skills and Their Acquisition (pp. 111-140).
Hillsdale, NJ: Erlbaum.
Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human
information processing: II. Perceptual learning, automatic attending, and a
general theory. Psychological Review, 84, 127-190.
Simonton, D. K. (1999). Talent and its development: An emergenic and epigenetic
model. Psychological Review, 106, 435-457.
Singley, M. K., & Anderson, J. R. (1989). Transfer of cognitive skill. Cambridge,
MA: Harvard University Press.
Speelman, C. (1998). Implicit expertise: Do we expect too much from our experts?
In K. Kirsner, C. Speelman, M. Maybery, A. O’Brien-Malone, M. Anderson,
& C. MacLeod (Eds.), Implicit and explicit mental processes (pp. 135-147).
Mahwah, NJ: Lawrence Erlbaum Associates.
Sloutsky, V. M., & Yarlas, A. (2000). Problem representation in experts and novices:
Part 2. Underlying processing mechanisms. Proceedings of the XXII Annual
Conference of the Cognitive Science Society (pp. 475-480). Mahwah, NJ:
Erlbaum.
Snow, R. E. (1989). Aptitude, instruction, and individual development. International
Journal of Educational Research, 13(8), 869-881.
Starkes, J. L., Deakin, J. M., Allard, F., Hodges, N. J., & Hayes, A. (1996).
Deliberate practice in sports: What is it anyway? In K. A. Ericsson (Ed.), The
road to excellence: The acquisition of expert performance in the arts and
sciences, sports, and games (pp. 81-106). Mahwah, NJ: Lawrence Erlbaum
Associates.
Sternberg, R. J. (1989). Domain-generality versus domain-specificity: The life and
impending death of a false dichotomy. Merrill-Palmer Quarterly, 35(1), 115-
130.
Sternberg, R. J. (1997). Cognitive conceptions of expertise. In P. J. Feltovich, K.
M. Ford, & R. R. Hoffman (Eds.), Expertise in Context (pp. 149-162). Menlo
Park, CA: American Association for Artificial Intelligence Press.
Sternberg, R. J., Grigorenko, E. L., & Ferrari, M. (2002). Fostering intellectual
excellence through developing expertise. In M. Ferrari (Ed.), The Pursuit of
Excellence Through Education (pp. 57-83). Mahwah, NJ: Lawrence Erlbaum
Associates.
Sternberg, R. J., & Horvath, J. A. (1998). Cognitive conceptions of expertise and
their relations to giftedness. In R. C. Friedman & K. B. Rogers (Eds.), Talent
in Context (pp. 177-191). Washington, DC: American Psychological
Association.
Sugrue, B. (1994). Specification for the design of problem-solving assessments in
science (CSE Technical Report 387). Los Angeles, CA: CRESST/University
of California, Los Angeles.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning.
Cognitive Science, 12, 257-285.
Sweller, J. (1989). Cognitive technology: Some procedures for facilitating learning
and problem solving in mathematics and science. Journal of Educational
Psychology, 81(4), 457-466.
Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional
design. Learning & Instruction, 4(4), 295-312.
Sweller, J., Chandler, P., Tierney, P., & Cooper, M. (1990). Cognitive load as a
factor in the structuring of technical material. Journal of Experimental
Psychology: General, 119(2), 176-192.
Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative
and quantitative approaches. Thousand Oaks, CA: SAGE Publications.
Thagard, P. (1998). Ulcers and bacteria: I. Discovery and acceptance. Studies in
the History and Philosophy of Biology and Biomedical Sciences, 9, 107-136.
Thompson, B. (2004). Exploratory and confirmatory factor analysis. Washington,
DC: American Psychological Association.
Torff, B. (2003). Developmental changes in teachers' use of higher order thinking
and content knowledge. Journal o f Educational Psychology, 95(3), 563-569.
VanLehn, K. (1996). Cognitive skill acquisition. Annual Review of Psychology, 47,
513-539.
VanLehn, K., Siler, S., Murray, C., Yamauchi, T., & Baggett, W. B. (2003). Why
do only some events cause learning during human tutoring? Cognition &
Instruction, 21(3), 209-249.
Velmahos, G., Toutouzas, K., Sillin, L., Chan, L., Clark, R. E., Theodorou, D., &
Maupin, F. (2004). Cognitive task analysis for teaching technical skills in an
animate surgical skills laboratory. The American Journal of Surgery, 187,
114-119.
Vicente, K. J. (2000). Revisiting the constraint attunement hypothesis: Reply to
Ericsson, Patel, and Kintsch (2000) and Simon and Gobet (2000).
Psychological Review, 107(3), 601-608.
Vissers, G., Heyne, G., Peters, V., & Geurts, J. (2001). The validity of laboratory
research in social and behavioral science. Quality and Quantity, 35, 129-145.
Volke, H., Dettmar, P., Richter, P., Rudolf, M., & Buhss, U. (2002). On-coupling
and off-coupling of neocortical areas in chess experts and novices. Journal of
Psychophysiology, 16, 23-36.
Voss, J. F., Tyler, S. W., & Yengo, L. A. (1983). Individual differences in the
solving of social science problems. In R. F. Dillon & R. R. Schmeck (Eds.),
Individual Differences in Cognition (Vol. 1, pp. 205-232). New York:
Academic.
Wegner, D. M. (2002). The Illusion of Conscious Will. Cambridge, MA: MIT
Press.
Wertsch, J. V. (1991). Voices of the mind: A sociocultural approach to mediated
action. Cambridge, MA: Harvard University Press.
Wheatley, T., & Wegner, D. M. (2001). Automaticity of action, Psychology of. In
N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social
and Behavioral Sciences (pp. 991-993). Oxford, UK: Elsevier Science Limited.
Williams, K. E. (2000). An automated aid for modeling human-computer
interaction. In J. M. C. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.),
Cognitive Task Analysis (pp. 165-180). Mahwah, NJ: Lawrence Erlbaum
Associates.
Wilson, G. F., & Fisher, F. (1995). Cognitive task classification based upon
topographic EEG data. Biological Psychology, 40, 239-250.
Wilson, G. F., Swain, C. R., & Ullsperger, P. (1999). EEG power changes during a
multiple level memory retention task. International Journal of
Psychophysiology, 32, 107-118.
Wilson, T. D., & Dunn, E. W. (2004). Self-knowledge: Its limits, value, and
potential for improvement. Annual Review of Psychology, 55, 493-518.
Wilson, T. D., & Nisbett, R. E. (1978). The accuracy of verbal reports about the
effects of stimuli on evaluations and behavior. Social Psychology, 41(2), 118-
131.
Wineburg, S. (1998). Reading Abraham Lincoln: An expert/expert study in the
interpretation of historical texts. Cognitive Science, 22(3), 269-388.
Wood, P., & Brown, D. (1994). The study of intraindividual differences by means of
dynamic factor models: Rationale, implementation, and interpretation.
Psychological Bulletin, 116(1), 166-186.
Yang, S. C. (2003). Reconceptualizing think-aloud methodology: Refining the
encoding and categorizing techniques via contextualized perspectives.
Computers in Human Behavior, 19, 95-115.
Yarlas, A., & Sloutsky, V. M. (2000). Problem representation in experts and novices:
Part 1. Differences in the content of representation. Proceedings of the XXII
Annual Conference of the Cognitive Science Society (pp. 1006-1011).
Mahwah, NJ: Erlbaum.
Zeitz, C. M. (1997). Some concrete advantages of abstraction: How experts’
representations facilitate reasoning. In P. J. Feltovich, K. M. Ford, & R. R.
Hoffman (Eds.), Expertise in Context (pp. 43-65). Menlo Park, CA: American
Association for Artificial Intelligence.
Appendix A
CLASSROOM TEST OF
SCIENTIFIC REASONING
Multiple Choice Version
Directions to Students:
This is a test of your ability to apply aspects of scientific and mathematical reasoning
to analyze a situation to make a prediction or solve a problem. Make a dark mark on
the answer sheet for the best answer for each item. If you do not fully understand what
is being asked in an item, please ask the test administrator for clarification.
DO NOT OPEN THIS BOOKLET UNTIL YOU ARE TOLD TO DO SO
Revised Edition: August 2000 by Anton E. Lawson, Arizona State University. Based on: Lawson, A. E. (1978).
Development and validation of the classroom test of formal reasoning. Journal of Research in Science Teaching, 15(1),
11-24.
1. Suppose you are given two clay balls of equal size and shape. The two clay
balls also weigh the same. One ball is flattened into a pancake-shaped piece.
Which of these statements is correct?
a. The pancake-shaped piece weighs more than the ball
b. The two pieces still weigh the same
c. The ball weighs more than the pancake-shaped piece
2. because
a. the flattened piece covers a larger area.
b. the ball pushes down more on one spot.
c. when something is flattened it loses weight.
d. clay has not been added or taken away.
e. when something is flattened it gains weight.
3. To the right are drawings of two cylinders filled to the same level with water. The cylinders
are identical in size and shape. Also shown at the right are two marbles, one glass and one steel. The
marbles are the same size but the steel one is much heavier than the glass one.
When the glass marble is put into Cylinder 1 it sinks to the bottom and the water level rises to the 6th
mark. If we put the steel marble into Cylinder 2, the water will rise
a. to the same level as it did in Cylinder 1
b. to a higher level than it did in Cylinder 1
c. to a lower level than it did in Cylinder 1
4. because
a. the steel marble will sink faster.
b. the marbles are made of different materials.
c. the steel marble is heavier than the glass marble.
d. the glass marble creates less pressure.
e. the marbles are the same size.
[Figure: glass marble and steel marble; Cylinder 1 and Cylinder 2]
5. To the right are drawings of a wide and a narrow cylinder. The cylinders have
equally spaced marks on them. Water is poured into the wide cylinder up to the 4th mark (see
A). This water rises to the 6th mark when poured into the narrow cylinder (see B).
Both cylinders are emptied (not shown) and water is poured into the wide cylinder up to the
6th mark. How high would this water rise if it were poured into the empty narrow cylinder?
a. to about 8
b. to about 9
c. to about 10
d. to about 12
e. none of these answers is correct
6. because
a. the answer can not be determined with the information given.
b. it went up 2 more before, so it will go up 2 more again.
c. it goes up 3 in the narrow for every 2 in the wide.
d. the second cylinder is narrower.
e. one must actually pour the water and observe to find out.
7. Water is now poured into the narrow cylinder (described in Item 5 above) up to the 11th mark. How
high would this water rise if it were poured into the empty, wide cylinder?
a. to about 7 1/2
b. to about 9
c. to about 8
d. to about 7 1/3
e. none of these answers is correct
8. because
a. the ratios must stay the same.
b. one must actually pour the water and observe to find out.
c. the answer can not be determined with the information given.
d. it was 2 less before so it will be 2 less again.
e. you subtract 2 from the wide for every 3 from the narrow.
[Figure: wide cylinder (A) and narrow cylinder (B) with equally spaced marks]
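The reasoning required by Items 5 through 8 reduces to a fixed 3:2 ratio between marks in the narrow and wide cylinders. A minimal Python sketch of that conversion, assuming only the quantities stated in the items (the constant and function names are illustrative, not part of the instrument):

    # The ratio stated in Item 5: 4 wide-cylinder marks equal 6 narrow-cylinder marks.
    WIDE_TO_NARROW = 6 / 4

    def narrow_level(wide_marks):
        # Water level in the narrow cylinder for a given wide-cylinder level.
        return wide_marks * WIDE_TO_NARROW

    def wide_level(narrow_marks):
        # Water level in the wide cylinder for a given narrow-cylinder level.
        return narrow_marks / WIDE_TO_NARROW

    print(narrow_level(6))   # 9.0 -> Item 5, answer (b)
    print(wide_level(11))    # 7.33..., i.e., 7 1/3 -> Item 7, answer (d)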
9. At the right are drawings of three strings hanging from a bar. The three strings have
metal weights attached to their ends. String 1 and String 3 are the same length. String 2 is
shorter. A 10 unit weight is attached to the end of String 1. A 10 unit weight is also attached to
the end of String 2. A 5 unit weight is attached to the end of String 3. The strings (and attached
weights) can be swung back and forth and the time it takes to make a swing can be timed.
Suppose you want to find out whether the length of the string has an effect on the time it takes
to swing back and forth. Which strings would you use to find out?
a. only one string
b. all three strings
c. 2 and 3
d. 1 and 3
e. 1 and 2
10. because
a. you must use the longest strings.
b. you must compare strings with both light and heavy weights.
c. only the lengths differ.
d. to make all possible comparisons.
e. the weights differ.
[Figure: Strings 1, 2, and 3 hanging from a bar]
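Items 9 and 10 test the control-of-variables strategy: to isolate the effect of length, one must compare strings that differ in length but carry the same weight. A minimal sketch of that selection logic (the string attributes are transcribed from Item 9; the function name is illustrative):

    from itertools import combinations

    # Attributes of the three strings as described in Item 9.
    strings = {1: {"length": "long", "weight": 10},
               2: {"length": "short", "weight": 10},
               3: {"length": "long", "weight": 5}}

    def controlled_pairs(varied, held_constant):
        # Keep only pairs that differ on the varied attribute
        # while matching on the controlled attribute.
        return [(a, b) for a, b in combinations(strings, 2)
                if strings[a][varied] != strings[b][varied]
                and strings[a][held_constant] == strings[b][held_constant]]

    print(controlled_pairs("length", "weight"))  # [(1, 2)] -> Item 9, answer (e)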
11. Twenty fruit flies are placed in each of four glass tubes. The tubes are sealed.
Tubes I and II are partially covered with black paper; Tubes III and IV are not covered. The
tubes are placed as shown. Then they are exposed to red light for five minutes. The number of
flies in the uncovered part of each tube is shown in the drawing.
[Figure: the four tubes under red light, with the number of flies shown in the uncovered part of each tube]
This experiment shows that flies respond to (respond means move to or away from):
a. red light but not gravity
b. gravity but not red light
c. both red light and gravity
d. neither red light nor gravity
12. because
a. most flies are in the upper end of Tube III but spread about evenly in Tube II.
b. most flies did not go to the bottom of Tubes I and III.
c. the flies need light to see and must fly against gravity.
d. the majority of flies are in the upper ends and in the lighted ends of the tubes.
e. some flies are in both ends of each tube.
13. In a second experiment, a different kind of fly and blue light was used. The results
are shown in the drawing.
[Figure: the tubes under blue light, with fly counts]
These data show that these flies respond to (respond means move to or away from):
a. blue light but not gravity
b. gravity but not blue light
c. both blue light and gravity
d. neither blue light nor gravity
14. because
a. some flies are in both ends of each tube.
b. the flies need light to see and must fly against gravity.
c. the flies are spread about evenly in Tube IV and in the upper end of Tube III.
d. most flies are in the lighted end of Tube II but do not go down in Tubes I and III.
e. most flies are in the upper end of Tube I and the lighted end of Tube II.
15. Six square pieces of wood are put into a cloth bag and mixed about. The six pieces are
identical in size and shape, however, three pieces are red and three are yellow. Suppose
someone reaches into the bag (without looking) and pulls out one piece.
What are the chances that the piece is red?
a. 1 chance out of 6
b. 1 chance out of 3
c. 1 chance out of 2
d. 1 chance out of 1
e. cannot be determined
16. because
a. 3 out of 6 pieces are red.
b. there is no way to tell which piece will be picked.
c. only 1 piece of the 6 in the bag is picked.
d. all 6 pieces are identical in size and shape.
e. only 1 red piece can be picked out of the 3 red pieces.
17. Three red square pieces of wood, four yellow square pieces, and five blue square
pieces are put into a cloth bag. Four red round pieces, two yellow round pieces, and three blue
round pieces are also put into the bag. All the pieces are then mixed about. Suppose someone
reaches into the bag (without looking and without feeling for a particular shape piece) and
pulls out one piece.
[Figure: the square and round pieces, marked R, Y, and B by color]
What are the chances that the piece is a red round or blue round piece?
a. cannot be determined
b. 1 chance out of 3
c. 1 chance out of 21
d. 15 chances out of 21
e. 1 chance out of 2
18. because
a. 1 of the 2 shapes is round.
b. 15 of the 21 pieces are red or blue.
c. there is no way to tell which piece will be picked.
d. only 1 of the 21 pieces is picked out of the bag.
e. 1 of every 3 pieces is a red or blue round piece.
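Items 15 through 18 are counting problems: the probability is the number of favorable pieces over the total number in the bag. A minimal sketch of the arithmetic, with the piece counts transcribed from the items:

    from fractions import Fraction

    # Item 15: three red and three yellow squares.
    print(Fraction(3, 6))  # 1/2 -> answer (c)

    # Item 17: counts of each (color, shape) combination.
    bag = {("red", "square"): 3, ("yellow", "square"): 4, ("blue", "square"): 5,
           ("red", "round"): 4, ("yellow", "round"): 2, ("blue", "round"): 3}
    favorable = bag[("red", "round")] + bag[("blue", "round")]
    print(Fraction(favorable, sum(bag.values())))  # 7/21 = 1/3 -> answer (b)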
19. Farmer Brown was observing the mice that live in his field. He discovered that all of
them were either fat or thin. Also, all of them had either black tails or white tails. This made
him wonder if there might be a link between the size of the mice and the color of their tails. So
he captured all of the mice in one part of his field and observed them. Below are the mice that
he captured.
Do you think there is a link between the size of the mice and the color of their tails?
a. appears to be a link
b. appears not to be a link
c. cannot make a reasonable guess
20. because
a. there are some of each kind of mouse.
b. there may be a genetic link between mouse size and tail color.
c. there were not enough mice captured.
d. most of the fat mice have black tails while most of the thin mice have white tails.
e. as the mice grew fatter, their tails became darker.
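Items 19 and 20 probe correlational reasoning: a link exists if the proportion of black tails differs between fat and thin mice. Because the original drawing of the captured mice is not reproducible here, the sketch below uses hypothetical counts purely to show the comparison:

    # Hypothetical counts; the actual item presents the mice in a drawing.
    fat = {"black": 8, "white": 3}
    thin = {"black": 2, "white": 9}

    def black_tail_rate(black, white):
        # Proportion of black-tailed mice within one size group.
        return black / (black + white)

    print(black_tail_rate(**fat))   # 0.727...
    print(black_tail_rate(**thin))  # 0.181...; unequal rates suggest a link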
21. The figure below at the left shows a drinking glass and a burning birthday candle
stuck in a small piece of clay standing in a pan of water. When the glass is turned upside down,
put over the candle, and placed in the water, the candle quickly goes out and water rushes up
into the glass (as shown at the right).
[Figure: the glass and candle, before and after the glass is inverted over the candle]
This observation raises an interesting question: Why does the water rush up into the glass?
Here is a possible explanation. The flame converts oxygen into carbon dioxide. Because
oxygen does not dissolve rapidly into water but carbon dioxide does, the newly formed carbon
dioxide dissolves rapidly into the water, lowering the air pressure inside the glass.
Suppose you have the materials mentioned above plus some matches and some dry ice (dry
ice is frozen carbon dioxide). Using some or all of the materials, how could you test this
possible explanation?
a. Saturate the water with carbon dioxide and redo the experiment noting the amount of water
rise.
b. The water rises because oxygen is consumed, so redo the experiment in exactly
the same way to show water rise due to oxygen loss.
c. Conduct a controlled experiment varying only the number o f candles to see if that
makes a difference.
d. Suction is responsible for the water rise, so put a balloon over the top of an open-ended
cylinder and place the cylinder over the burning candle.
e. Redo the experiment, but make sure it is controlled by holding all independent
variables constant; then measure the amount of water rise.
22. What result of your test (mentioned in #21 above) would show that your
explanation is probably wrong?
a. The water rises the same as it did before.
b. The water rises less than it did before.
c. The balloon expands out.
d. The balloon is sucked in.
23. A student put a drop of blood on a microscope slide and then looked at the blood
under a microscope. As you can see in the diagram below, the magnified red blood cells look
like little round balls. After adding a few drops of salt water to the drop of blood, the student
noticed that the cells appeared to become smaller.
This observation raises an interesting question: Why do the red blood cells appear smaller?
Here are two possible explanations: I. Salt ions (Na+ and Cl-) push on the cell membranes and
make the cells appear smaller. II. Water molecules are attracted to the salt ions so the water
molecules move out of the cells and leave the cells smaller.
To test these explanations, the student used some salt water, a very accurate weighing device,
and some water-filled plastic bags, and assumed the plastic behaves just like red-blood-cell
membranes. The experiment involved carefully weighing a water-filled bag, placing it in a salt
solution for ten minutes and then reweighing the bag.
What result of the experiment would best show that explanation I is probably wrong?
a. the bag loses weight
b. the bag weighs the same
c. the bag appears smaller
24. What result of the experiment would best show that explanation II is probably
wrong?
a. the bag loses weight
b. the bag weighs the same
c. the bag appears smaller
[Figure: magnified red blood cells after adding salt water]
Appendix B
Instructions:
1. Start at the very bottom of the maze.
2. Keep the drawn line on a dotted line.
3. Keep the line going upward.
4. Go through as many dots as possible in the path through the maze.
[Figure: the dot maze used for the task]
Appendix C
Instructions:
Write down the numbers you hear in the reverse order that they were played.
Heard: 1 2 3 4 5
Written: 5 4 3 2 1
1 )
2)
3)
4)
5)
6)
7)
8)
9)
10)
11)
12)
Appendix D
Expert A Simulation Data
ROUND ONE
1. Source Repetitions 0:11
1.1. 2 conditions 0:18
1.1.1. First condition: 2 0:27
1.1.2. Second condition: 2 1:02
2. Source Repetitions 1:10
2.1. First condition: 2->4 1:14
2.2. Second condition: 2->4 1:16
3. Source Spacings 1:16
3.1. 2 conditions 1:20
3.1.1. First condition: 1 1:32
3.1.2. Second condition: 4 1:35
3.1.3. Second condition: days 1:42
3.1.4. First condition: minutes 1:50
3.1.5. First condition: 1->4 1:56
3.1.6. First condition: minutes->hours 2:00
4. Source Context 2:20
4.1. 2 conditions
5. Source Spacings
5.1. First condition: hours->days
6. Source Context
6.1. First condition: same
6.2. Second condition: rooms
7. Source Repetitions 3:45
7.1. 2 conditions->3 3:58
7.1.1. Third condition: 4 4:42
8. Source Spacings 5:04
8.1. 2 conditions->3 5:07
8.1.1. First condition: days->hours->days 5:29
8.1.2. Third condition: days 5:33
8.1.3. Third condition: 4 5:35
9. Source Contexts 5:51
9.1. 2 conditions->3
9.1.1. Third condition: mood
10. Test Task
10.1. 1 condition
10.1.1. First condition: Free Recall
10.1.2. 1 condition->3
10.1.2.1. Second Condition: Free Recall
10.1.2.2. Third Condition: Free Recall
11. Test Delay
11.1. 3 conditions
11.1.1. First condition: 1
11.1.2. Second condition: 1
11.1.3. Third condition: 1
11.1.4. First condition: minute
11.1.5. Second condition: minute
11.1.6. Third condition: minutes
12. Test Tasks
12.1. 3 conditions -> 1
13. Test Delays
13.1. 3 conditions-> 1
14. Test Context
14.1. 1 condition
14.1.1. First condition: Same
15. Hypotheses
16. Redesign
17. Source Repetitions
17.1. First condition: 4->2
17.2. Second condition: 4->2
17.3. Third condition: 4->2
17.4. 3 conditions->1->3->2->3
18. Hypotheses
19. Redesign
20. Source Repetitions
20.1. 3 conditions->1->2->1
20.2. First condition: 2->3
21. Hypotheses
22. Redesign
23. Source Repetitions
23.1. 1 condition->2->1
24. Hypotheses
25. Redesign
26. Source Spacings
26.1. First condition: 4->1->4
26.2. 3 conditions->1
27. Hypotheses
28. Redesign
29. Test Contexts
29.1. 1 condition->3
29.1.1. Second condition: Rooms
29.1.2. Third condition: mood
30. Hypotheses 23:28
30.1. Source context-same/Test context-same: 10 23:44
30.2. Source context-rooms/Test context-rooms: 10 23:48
30.3. Source context-mood/Test context-mood: 10
30.4. Source context-same/Test context-rooms: 7
30.5. Source context-same/Test context-mood: 7
30.6. Source context-rooms/Test context-rooms: 10->12
30.7. Source context-mood/Test context-mood: 10->12
30.8. Source context-rooms/Test context-mood: 10 26:40
30.9. Source context-rooms/Test context-rooms: 12->14 27:08
30.10. Source context-rooms/Test context-same: 12
30.11. Source context-mood/Test context-same: 12
30.12. Source context-mood/Test context-room: 14
Appendix E
Expert B Simulation Data
ROUND ONE
1. Source Repetitions 1:16
1.1. 3 conditions
1.1.1. First condition: 2
1.1.2. Second condition: 4
1.1.3. Third condition: 5
2. Source Spacings
2.1. 3 conditions
2.1.1. First condition: 5
2.1.2. First condition: minutes
2.1.3. Second condition: 5
2.1.4. Second condition: hours
2.1.5. Third condition: 5
2.1.6. Third condition: days
3. Source Context
3.1. 3 conditions
3.1.1. First condition: same
3.1.2. Second condition: rooms
3.1.3. Third condition: mood
4. Test Task
4.1. 3 conditions
4.1.1. First condition: Free Recall
4.1.2. Second condition: recognition
4.1.3. Third condition: stem completion
5. Test Delay
5.1. 3 conditions
5.1.1. First condition: 5
5.1.2. First condition: minutes
5.1.3. Second condition: 5
5.1.4. Second condition: hours
5.1.5. Third condition: 5
5.1.6. Third condition: days
6. Test Context
6.1. 3 conditions
6.1.1. First condition: Same
6.1.2. Second condition: rooms
6.1.3. Third condition: mood
7. Test Task
7.1. 3 conditions->1
7.1.1. First condition: free recall
8. Source Context
8.1. 3 conditions->1
8.1.1. First condition: same
9. Hypotheses
9.1. Source spacings-5 minutes/Source repetitions-2/Source context-same/Test
delay-5 minutes: 10 11:24
9.2. Source spacings-5 hours/Source repetitions-2/Source context-same/Test
delay-5 minutes: 20
9.3. Source spacings-5 days/Source repetitions-2/Source context-same/Test delay-
5 minutes: 30
9.4. Source spacings-5 minutes/Source repetitions-4/Source context-same/Test
delay-5 minutes: 20
9.5. Source spacings-5 hours/Source repetitions-4/Source context-same/Test
delay-5 minutes: 30
9.6. Source spacings-5 days/Source repetitions-4/Source context-same/Test delay-
5 minutes: 40
9.7. Source spacings-5 minutes/Source repetitions-5/Source context-same/Test
delay-5 minutes: 30
9.8. Source spacings-5 hours/Source repetitions-5/Source context-same/Test
delay-5 minutes: 40
9.9. Source spacings-5 days/Source repetitions-5/Source context-same/Test delay-
5 minutes: 50
9.10. Source spacings-5 minutes/Source repetitions-2/Source context-
same/Test delay-5 days: 10 12:56
9.11. Source spacings-5 hours/Source repetitions-2/Source context-same/Test
delay-5 days: 20
9.12. Source spacings-5 days/Source repetitions-2/Source context-same/Test
delay-5 days: 30
9.13. Source spacings-5 minutes/Source repetitions-4/Source context-
same/Test delay-5 days: 20
9.14. Source spacings-5 hours/Source repetitions-4/Source context-same/Test
delay-5 days: 30
9.15. Source spacings-5 days/Source repetitions-4/Source context-same/Test
delay-5 days: 40
9.16. Source spacings-5 minutes/Source repetitions-5/Source context-
same/Test delay-5 days: 30
9.17. Source spacings-5 hours/Source repetitions-5/Source context-same/Test
delay-5 days: 40
9.18. Source spacings-5 days/Source repetitions-5/Source context-same/Test
delay-5 days: 50
9.19. Source spacings-5 minutes/Source repetitions-2/Source context-
same/Test delay-5 hours: 30 13:22
9.20. Source spacings-5 hours/Source repetitions-2/Source context-same/Test
delay-5 hours: 40
9.21. Source spacings-5 days/Source repetitions-2/Source context-same/Test
delay-5 hours: 50
9.22. Source spacings-5 minutes/Source repetitions-4/Source context-
same/Test delay-5 hours: 40
9.23. Source spacings-5 hours/Source repetitions-4/Source context-same/Test
delay-5 hours: 50
9.24. Source spacings-5 days/Source repetitions-4/Source context-same/Test
delay-5 hours: 60
9.25. Source spacings-5 minutes/Source repetitions-5/Source context-
same/Test delay-5 hours: 50
9.26. Source spacings-5 hours/Source repetitions-5/Source context-same/Test
delay-5 hours: 60
9.27. Source spacings-5 days/Source repetitions-5/Source context-same/Test
delay-5 hours: 70
9.28. Source spacings-5 minutes/Source repetitions-2/Source context-
same/Test delay-5 minutes: 10->50
9.29. Source spacings-5 hours/Source repetitions-2/Source context-same/Test
delay-5 minutes: 20->60
9.30. Source spacings-5 days/Source repetitions-2/Source context-same/Test
delay-5 minutes: 30->70
9.31. Source spacings-5 minutes/Source repetitions-4/Source context-
same/Test delay-5 minutes: 20->60
9.32. Source spacings-5 hours/Source repetitions-4/Source context-same/Test
delay-5 minutes: 30->70
9.33. Source spacings-5 days/Source repetitions-4/Source context-same/Test
delay-5 minutes: 40->80
9.34. Source spacings-5 minutes/Source repetitions-5/Source context-
same/Test delay-5 minutes: 30->70
9.35. Source spacings-5 hours/Source repetitions-5/Source context-same/Test
delay-5 minutes: 40->80
9.36. Source spacings-5 days/Source repetitions-5/Source context-same/Test
delay-5 minutes: 50->90
9.37. Source spacings-5 minutes/Source repetitions-2/Source context-
rooms/Test delay-5 minutes: 60 15:54
9.38. Source spacings-5 hours/Source repetitions-2/Source context-
rooms/Test delay-5 minutes: 70 15:58
9.39. Source spacings-5 days/Source repetitions-2/Source context-
rooms/Test delay-5 minutes: 80 16:00
9.40. Source spacings-5 minutes/Source repetitions-4/Source context-
rooms/Test delay-5 minutes: 70 16:04
9.41. Source spacings-5 hours/Source repetitions-4/Source context-
rooms/Test delay-5 minutes: 80 16:07
9.42. Source spacings-5 days/Source repetitions-4/Source context-
rooms/Test delay-5 minutes: 90 16:09
9.43. Source spacings-5 minutes/Source repetitions-5/Source context-
rooms/Test delay-5 minutes: 80 16:12
9.44. Source spacings-5 hours/Source repetitions-5/Source context-
rooms/Test delay-5 minutes: 90 16:15
9.45. Source spacings-5 days/Source repetitions-5/Source context-
rooms/Test delay-5 minutes: 100 16:18
9.46. Source spacings-5 minutes/Source repetitions-2/Source context-
rooms/Test delay-5 days: 40 16:25
9.47. Source spacings-5 hours/Source repetitions-2/Source context-
rooms/Test delay-5 days: 50 16:30
9.48. Source spacings-5 days/Source repetitions-2/Source context-
rooms/Test delay-5 days: 60 16:33
9.49. Source spacings-5 minutes/Source repetitions-4/Source context-
rooms/Test delay-5 days: 50 16:38
9.50. Source spacings-5 hours/Source repetitions-4/Source context-
rooms/Test delay-5 days: 60 16:41
9.51. Source spacings-5 days/Source repetitions-4/Source context-
rooms/Test delay-5 days: 70 16:44
9.52. Source spacings-5 minutes/Source repetitions-5/Source context-
rooms/Test delay-5 days: 60 16:47
9.53. Source spacings-5 hours/Source repetitions-5/Source context-
rooms/Test delay-5 days: 70 16:50
9.54. Source spacings-5 days/Source repetitions-5/Source context-
rooms/Test delay-5 days: 80 16:51
9.55. Source spacings-5 minutes/Source repetitions-2/Source context-
rooms/Test delay-5 hours: 20 16:57
9.56. Source spacings-5 hours/Source repetitions-2/Source context-
rooms/Test delay-5 hours: 30 16:58
9.57. Source spacings-5 days/Source repetitions-2/Source context-
rooms/Test delay-5 hours: 40 17:01
9.58. Source spacings-5 minutes/Source repetitions-4/Source context-
rooms/Test delay-5 hours: 30 17:05
9.59. Source spacings-5 hours/Source repetitions-4/Source context-
rooms/Test delay-5 hours: 40
9.60. Source spacings-5 days/Source repetitions-4/Source context-
rooms/Test delay-5 hours: 50
9.61. Source spacings-5 minutes/Source repetitions-5/Source context-
rooms/Test delay-5 hours: 40
9.62. Source spacings-5 hours/Source repetitions-5/Source context-
rooms/Test delay-5 hours: 50
9.63. Source spacings-5 days/Source repetitions-5/Source context-
rooms/Test delay-5 hours: 60
9.64. Source spacings-5 minutes/Source repetitions-2/Source context-
mood/Test delay-5 minutes: 60
9.65. Source spacings-5 hours/Source repetitions-2/Source context-
mood/Test delay-5 minutes: 70
9.66. Source spacings-5 days/Source repetitions-2/Source context-mood/Test
delay-5 minutes: 80
9.67. Source spacings-5 minutes/Source repetitions-4/Source context-
mood/Test delay-5 minutes: 70
9.68. Source spacings-5 hours/Source repetitions-4/Source context-
mood/Test delay-5 minutes: 80
9.69. Source spacings-5 days/Source repetitions-4/Source context-mood/Test
delay-5 minutes: 90
9.70. Source spacings-5 minutes/Source repetitions-5/Source context-
mood/Test delay-5 minutes: 80
9.71. Source spacings-5 hours/Source repetitions-5/Source context-
mood/Test delay-5 minutes: 90
9.72. Source spacings-5 days/Source repetitions-5/Source context-mood/Test
delay-5 minutes: 100
9.73. Source spacings-5 minutes/Source repetitions-2/Source context-
mood/Test delay-5 days: 40
9.74. Source spacings-5 hours/Source repetitions-2/Source context-
mood/Test delay-5 days: 50
9.75. Source spacings-5 days/Source repetitions-2/Source context-mood/Test
delay-5 days: 60
9.76. Source spacings-5 minutes/Source repetitions-4/Source context-
mood/Test delay-5 days: 50
9.77. Source spacings-5 hours/Source repetitions-4/Source context-
mood/Test delay-5 days: 60
9.78. Source spacings-5 days/Source repetitions-4/Source context-mood/Test
delay-5 days: 70
9.79. Source spacings-5 minutes/Source repetitions-5/Source context-
mood/Test delay-5 days: 60
9.80. Source spacings-5 hours/Source repetitions-5/Source context-
mood/Test delay-5 days: 70
9.81. Source spacings-5 days/Source repetitions-5/Source context-mood/Test
delay-5 days: 80
9.82. Source spacings-5 minutes/Source repetitions-2/Source context-
mood/Test delay-5 hours: 20
9.83. Source spacings-5 hours/Source repetitions-2/Source context-
mood/Test delay-5 hours: 30
9.84. Source spacings-5 days/Source repetitions-2/Source context-mood/Test
delay-5 hours: 40
9.85. Source spacings-5 minutes/Source repetitions-4/Source context-
mood/Test delay-5 hours: 30
9.86. Source spacings-5 hours/Source repetitions-4/Source context-
mood/Test delay-5 hours: 40
9.87. Source spacings-5 days/Source repetitions-4/Source context-mood/Test
delay-5 hours: 50
9.88. Source spacings-5 minutes/Source repetitions-5/Source context-
mood/Test delay-5 hours: 40
9.89. Source spacings-5 hours/Source repetitions-5/Source context-
mood/Test delay-5 hours: 50
9.90. Source spacings-5 days/Source repetitions-5/Source context-mood/Test
delay-5 hours: 60 18:50
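Entries 9.1 through 9.9 above appear to follow a simple additive rule: a base of 10, plus 10 points per step of source spacing and per step of repetitions. A minimal sketch of that apparent pattern (the rule is inferred by the editor from the listed values, not stated by the expert):

    SPACING_STEP = {"5 minutes": 0, "5 hours": 1, "5 days": 2}
    REPETITION_STEP = {2: 0, 4: 1, 5: 2}

    def predicted_recall(spacing, reps, base=10):
        # Base of 10, plus 10 per index step on each factor (entries 9.1-9.9).
        return base + 10 * SPACING_STEP[spacing] + 10 * REPETITION_STEP[reps]

    # Reproduces entries 9.1-9.9: 10, 20, 30, 20, 30, 40, 30, 40, 50.
    print([predicted_recall(s, r) for r in (2, 4, 5) for s in SPACING_STEP])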
ROUND TWO
1. Source Repetitions 40:31
1.1. 2 conditions
1.1.1. First condition: 2
1.1.2. Second condition: 4
2. Source Spacings
2.1. 3 conditions
2.1.1. First condition: 5
2.1.2. First condition: minutes
2.1.3. Second condition: 5
2.1.4. Second condition: hours
2.1.5. Third condition: 5
2.1.6. Third condition: days
3. Test Task
3.1. 1 condition
3.1.1. First condition: Free Recall
4. Test Delay
4.1. 3 conditions
4.1.1. First condition: 5
4.1.2. First condition: minutes
4.1.3. Second condition: 5
4.1.4. Second condition: hours
4.1.5. Third condition: 5
4.1.6. Third condition: days
5. Test Context 49:23
5.1. 1 condition
5.1.1. First condition: Same
6. Source Context 49:30
6.1. 1 condition
6.1.1. First condition: same
7. Hypotheses 49:54
7.1. Source spacings-5 minutes/Source repetitions-2/Test delay-5 minutes: 10 51:29
7.2. Source spacings-5 hours/Source repetitions-2/Test delay-5 minutes: 40 51:39
7.3. Source spacings-5 days/Source repetitions-2/Test delay-5 minutes: 60 51:43
7.4. Source spacings-5 minutes/Source repetitions-4/Test delay-5 minutes: 15
7.5. Source spacings-5 hours/Source repetitions-4/Test delay-5 minutes: 45
7.6. Source spacings-5 days/Source repetitions-4/ Test delay-5 minutes: 65
7.7. Source spacings-5 minutes/Source repetitions-2/Test delay-5 hours: 10 52:09
7.8. Source spacings-5 hours/Source repetitions-2/Test delay-5 hours: 65 52:28
7.9. Source spacings-5 days/Source repetitions-2/Test delay-5 hours: 75 52:29
7.10. Source spacings-5 minutes/Source repetitions-4/Test delay-5 hours: 15
7.11. Source spacings-5 hours/Source repetitions-4/Test delay-5 hours: 75
7.12. Source spacings-5 days/Source repetitions-4/ Test delay-5 hours: 85
7.13. Source spacings-5 minutes/Source repetitions-2/Test delay-5 days: 10 52:45
7.14. Source spacings-5 hours/Source repetitions-2/Test delay-5 days: 75
7.15. Source spacings-5 days/Source repetitions-2/Test delay-5 days: 85 52:51
7.16. Source spacings-5 minutes/Source repetitions-4/Test delay-5 days: 15
7.17. Source spacings-5 hours/Source repetitions-4/Test delay-5 days: 85
7.18. Source spacings-5 days/Source
Appendix F
Expert C Simulation Data
ROUND ONE
1. Source Repetitions 0:05
1.1. 3 conditions
1.1.1. First condition: 2
1.1.2. Second condition: 2
1.1.3. Third condition: 2
2. Source Spacings
2.1. 1 condition
2.1.1. First condition: Minutes
2.1.2. First condition: Minutes->Hours
2.1.3. First condition: Hours->Days
2.1.4. First condition: 3
2.1.5. First condition: Days->Minutes
2.2. 1 condition->3 conditions
2.2.1. Second condition: 3
2.2.2. Third condition: 3
2.2.3. Second condition: Minutes
2.2.4. Third condition: Minutes
3. Source Spacings
3.1. 3 conditions->1 condition 1:36
4. Source Contexts 1:38
4.1. 3 conditions
4.1.1. Same
4.1.2. Rooms
4.1.3. Moods 1:52
5. Test Tasks 2:00
5.1. 3 conditions
5.2. First condition: Free recall
5.3. Second condition: Recognition
5.4. Third condition: Stem Completion
6. Source Spacings 2:31
6.1. 1 condition->2 conditions
6.1.1. First condition: Minutes->hours
6.1.2. Second condition: Minutes->Days
6.1.3. First condition: Hours->Minutes 2:43
7. Test Delays
7.1. 1 condition
7.1.1. First condition: 1
8. Test Contexts 2:57
8.1. 1 condition
8.1.1. First condition: ?
9. Test Delays
9.1. First condition: Minutes
10. Test Contexts
10.1. First condition: same
11. Source Repetitions
11.1. Second condition: 2 -> 3
11.2. Third condition: 3->4
12. Hypotheses
12.1. Source Spacing-10 min/Source Repetitions-2/Source Context-
same/Task-Free Recall: 20
12.2. Source Spacing-1 day/Source Repetitions-5/Source Context-
same/Task-Free Recall: 60
12.3. Source Spacing-10/Source Repetitions-4/Source Context-same/Task-
Free Recall: 30
12.4. Source Spacing-10/Source Repetitions-2/Source Context-same/Task-
recognition: 20
12.5. Source Spacing-10/Source Repetitions-4/Source Context-same/Task-
recognition: 30
12.6. Source Spacing-10/Source Repetitions-2/Source Context-same/Task-
Stem Completions: 10
12.7. Source Spacing-10/Source Repetitions-4/Source Context-same/Task-
Stem Completions: 15
12.8. Source Spacing-1 day/Source Repetitions-10/Source Context-
same/Task-Stem Completions: 70
12.9. Source Spacing-10 minute/Source Repetitions-5/Source Context-
same/Task-Stem Completions: 75
12.10. Source Spacing-1 day/Source Repetitions-10/Source Context-
same/Task-Stem Completions: 70->25
12.11. Source Spacing-10 minute/Source Repetitions-5/Source Context-
same/Task-Stem Completions: 75->20
12.12. Source Spacing-10 min/Source Repetitions-2/Source Context-
same/Task-Free Recall: 20-> 10
12.13. Source Spacing-10/Source Repetitions-4/Source Context-same/Task-
Free Recall: 30~> 15
12.14. Source Spacing-10 minutes/Source Repetitions-5/Source Context-
same/Task-Free Recall: 20
12.15. Source Spacing-10/Source Repetitions-2/Source Context-same/Task-
recognition: 20
12.16. Source Spacing-10/Source Repetitions-2/Source Context-same/Task-
recognition: 20->12
12.17. Source Spacing-10/Source Repetitions-4/Source Context-same/Task-
recognition: 30-> 15
12.18. Source Spacing-10/Source Repetitions-5/Source Context-same/Task-
recognition: 24
12.19. Source Spacing-10 minute/Source Repetitions-5/Source Context-
same/Task-Stem Completions: 20->27
12.20. Source Spacing-1 day/Source Repetitions-10/Source Context-
same/Task-Stem Completions: 25->0
12.21. Source Spacing-1 day/Source Repetitions-5/Source Context-
same/Task-Free Recall: 60->50
12.22. Source Spacing-1 day/Source Repetitions-5/Source Context-
same/Task-Free Recall: 50->40
12.23. Source Spacing-1 day/Source Repetitions-2/Source Context-
same/Task-Free Recall: 20
12.24. Source Spacing-1 day/Source Repetitions-2/Source Context-
same/Task-Free Recall: 20->10
12.25. Source Spacing-1 day/Source Repetitions-4/Source Context-
same/Task-Free Recall: 7
12.26. Source Spacing-10 minutes/Source Repetitions-2/Source Context-
rooms/Task-Free Recall: 0
12.27. Source Spacing-10 minutes/Source Repetitions-5/Source Context-
rooms/Task-Free Recall: 0
12.28. Source Spacing-1 day/Source Repetitions-5/Source Context-
rooms/Task-Free Recall: 70
12.29. Source Spacing-1 day/Source Repetitions-5/Source Context-
rooms/Task-recognition: 80
12.30. Source Spacing-1 day/Source Repetitions-5/Source Context-
rooms/Task-stem completion: 90
12.31. Source Spacing-1 day/Source Repetitions-5/Source Context-
mood/Task-free recall: 50
12.32. Source Spacing-1 day/Source Repetitions-5/Source Context-
mood/Task-recognition: 60
12.33. Source Spacing-1 day/Source Repetitions-5/Source Context-
mood/Task-stem completion: 70
ROUND TWO
1. Source Repetitions 28:24
1.1. 1 condition
1.1.1. First condition: 2
2. Source Spacings
2.1. 3 conditions 32:06
2.1.1. First condition: 1
2.1.2. Second condition: 2
2.1.3. Second condition: Hours
2.1.4. Third condition: 3
2.1.5. Second condition: 2->1
2.1.6. Third condition: 3-> 1
2.1.7. Third condition: Days
2.1.8. First condition: Minutes 32:21
3. Source Contexts 32:29
3.1. 3 conditions 32:35
3.1.1. Same
3.1.2. Rooms
3.1.3. Moods
4. Test Tasks 32:47
4.1. 3 conditions 32:52
4.1.1. First condition: Free recall
4.1.2. Second condition: Recognition
4.1.3. Third condition: Stem Completion
5. Test Delays
5.1. 3 conditions
5.1.1. First condition: 1
5.1.2. Second condition: 1
5.1.3. Third condition: 1
5.1.4. First condition: Minutes
5.1.5. Second condition: Hours
5.1.6. Third condition: Days 33:17
6. Test Contexts
6.1. 3 conditions 33:25
6.1.1. First condition: same
6.1.2. Second condition: room
6.1.3. Third condition: mood
7. Test Contexts 33:47
7.1. 3 conditions->1 condition
7.1.1. First condition: same
8. Test Delays
8.1. 3 conditions->1 condition
8.1.1. First condition: 1 minute
9. Hypotheses
9.1. Source Spacing-1 day/Source Context-same/Task-Free Recall: 50
9.2. Source Spacing-1 day/Source Context-same/Task-recognition: 60
9.3. Source Spacing-1 day/Source Context-same/Task-stem completion: 70 34:51
ROUND THREE
49:12
1. Source Spacings 49:22
1.1. 3 conditions
1.1.1. First condition: 1
1.1.2. First condition: Minutes
1.1.3. Second condition: 2
1.1.4. Second condition: Hours
1.1.5. Third condition: Days
1.1.6. Third condition: 2->7
1.1.7. Second condition: Hours->Days
1.1.8. Second condition: 2->1
2. Source Contexts 50:15
2.1. 3 conditions
2.2. Third condition: 2
2.2.1. Same
2.2.2. Rooms
2.2.3. Moods
3. Test Tasks
3.1. 3 conditions
3.1.1. First condition: Free recall
3.1.2. Second condition: Recognition
3.1.3. Third condition: Stem Completion
4. Previous Outcomes: Round 1
5. Source Spacings
5.1. 3 conditions
5.1.1. First condition: 1
5.1.2. First condition: Minutes
5.1.3. Second condition: 1
5.1.4. Second condition: Day
5.1.5. Third condition: 7
5.1.6. Third condition: Days
6. Source Contexts
6.1. 3 conditions
6.1.1. Same
6.1.2. Rooms
6.1.3. Moods
7. Source Repetitions
7.1. 1 condition
7.1.1. First condition: 2
8. Test Tasks
8.1. 3 conditions
8.1.1. First condition: Free recall
8.1.2. Second condition: Recognition
8.1.3. Third condition: Stem Completion
9. Test Delay
9.1. 1 condition
9.1.1. First condition: 1 minute
10. Test Delay 52:47
10.1. First condition: 1 minute->3
10.2. 1 condition->3 conditions
10.3. First condition: 3->1
10.4. Second condition: 1 Day
10.5. Third condition: 7
10.6. Third condition: Days 53:04
11. Test Contexts 53:12
11.1. 2 conditions
11.1.1. First condition: Same
11.1.2. Second condition: Rooms
11.1.3. 2 conditions->3 conditions
11.1.3.1. Third condition: Moods
12. Test Contexts
12.1. 3 conditions->1 condition
12.1.1. First condition: same
13. Hypotheses
13.1. Source Spacing-7 days/Test Delay-1 minute/Source Context-
same/Task-Free Recall: 60
13.2. Source Spacing-7 days/Test Delay-1 day/Source Context-same/Task-
Free Recall: 70
13.3. Source Spacing-7 days/Test Delay-7 days/Source Context-same/Task-
Free Recall: 80
13.4. Source Spacing-7 days/Test Delay-1 minute/Source Context-
same/Task-Free Recall: 60->50
13.5. Source Spacing-7 days/Test Delay-1 day/Source Context-same/Task-
Free Recall: 70->55
13.6. Source Spacing-7 days/Test Delay-7 days/Source Context-same/Task-
Free Recall: 80->60
13.7. Source Spacing-7 days/Test Delay-1 minute/Source Context-
same/Task-recognition: 60
13.8. Source Spacing-7 days/Test Delay-1 day/Source Context-same/Task-
recognition: 65
13.9. Source Spacing-7 days/Test Delay-7 days/Source Context-same/Task-
recognition: 70
13.10. Source Spacing-7 days/Test Delay-1 minute/Source Context-
same/Task-stem completion: 65
13.11. Source Spacing-7 days/Test Delay-1 day/Source Context-same/Task-
stem completion: 70
13.12. Source Spacing-7 days/Test Delay-7 days/Source Context-same/Task-
stem completion: 75 55:23
Appendix G
Intermediate A Simulation Data
ROUND ONE
1. Source Repetitions 0:01
1.1. 2 conditions
1.1.1. First condition: 2
1.1.2. Second condition: 3
1.1.3. 2 conditions->3
1.1.3.1. Third condition: 4 0:41
2. Source Spacings 0:49
2.1. 1 condition
2.1.1. First condition: 20
2.1.2. First condition: minutes
2.1.3. First condition: 20->1
2.1.4. First condition: minutes->days 1:40
3. Source Context 1:59
3.1. 1 condition
3.1.1. First condition: Same 2:05
4. Test Task 2:12
4.1. 1 condition
4.1.1. First condition: Free Recall 2:18
5. Test Delay 2:25
5.1. 1 condition
5.1.1. First condition: 1
5.1.2. First condition: day 2:43
6. Test Context 2:46
6.1. 1 condition
6.1.1. First condition: Same 2:50
7. Hypotheses 3:06
7.1. Source repetitions-1: 20 3:35
7.2. Source repetitions-2: 40 3:39
7.3. Source repetitions-3: 6 3:42
7.4. Source repetitions-2: 40->30 3:54
7.5. Source repetitions-3: 6->50 3:57
ROUND TWO
23:55
1. Source Repetitions 25:31
1.1. 3 conditions 25:52
1.1.1. First condition: 2
1.1.2. Second condition: 3 25:54
1.1.3. Third condition: 4 25:51
2. Source Spacings 26:10
2.1. 1 condition
2.1.1. First condition: 1
2.1.2. First condition: day 26:26
3. Source Context 26:29
3.1. 3 conditions 26:38
4. Test Task 27:31
4.1. 1 condition
4.1.1. First condition: Free Recall 27:35
5. Test Delay 27:39
5.1. 1 condition
5.1.1. First condition: 1
5.1.2. First condition: day 27:46
6. Test Context 27:55
6.1. 1 condition
6.1.1. First condition: Same 28:00
7. Source Context 28:22
7.1. 3 conditions->1
7.1.1. First condition: rooms 28:47
7.2. 1 condition->2
7.2.1. Second condition: same
7.2.2. Second condition: same->mood 28:23
7.3. 2 conditions->3 29:31
7.3.1. Third condition: same
8. Hypotheses
8.1. Source repetitions-2/Source context-same: 65
8.2. Source repetitions-3/Source context-same: 65
8.3. Source repetitions-4/Source context-same: 65
8.4. Source repetitions-2/Source context-rooms: 65
8.5. Source repetitions-3/Source context-rooms: 70
8.6. Source repetitions-4/Source context-rooms: 80
8.7. Source repetitions-2/Source context-mood: 65
8.8. Source repetitions-3/Source context-same: 70
8.9. Source repetitions-4/Source context-same: 80
ROUND THREE
44:16
1. Review Results
1.1. Experiment 2
2. Source Repetitions 46:42
2.1. 3 conditions 46:56
2.1.1. First condition: 2
2.1.2. Second condition: 3
2.1.3. Third condition: 4 47:27
3. Source Spacings 47:30
3.1. 3 conditions
3.1.1. First condition: 1 47:49
3.1.2. First condition: day
3.1.3. Second condition: days 47:57
3.1.4. Second condition: 3
3.1.5. Third condition: days
3.1.6. Third condition: 5
3.1.7. Second condition: 3->5
3.1.8. Third condition: 5->10 48:33
4. Source Context 48:54
4.1. 3 conditions
4.1.1. First condition: same
4.1.2. Second condition: rooms
4.1.3. Third condition: mood
5. Test Delay
5.1. 3 conditions
5.1.1. First condition: 1
5.1.2. First condition: day
5.1.3. Second condition: days
5.1.4. Second condition: 5
5.1.5. Third condition: days
5.1.6. Third condition: 5
6. Test Context
6.1. 1 condition
6.1.1. First condition: Same
7. Test Task 50:04
7.1. 1 condition
7.1.1. First condition: stem completion 50:17
8. Hypotheses 50:58
8.1. Source Spacing-1 day/Source repetitions-2/Source context-same/Test delay-1
day: 65
8.2. Source Spacing-1 day/Source repetitions-3/Source context-same/Test delay-1
day: 65
8.3. Source Spacing-1 day/Source repetitions-4/Source context-same/Test delay-1
day: 65
8.4. Source Spacing-5 days/Source repetitions-2/Source context-same/Test delay-1
day: 65
8.5. Source Spacing-5 days/Source repetitions-3/Source context-same/Test delay-1
day: 65
8.6. Source Spacing-5 days/Source repetitions-4/Source context-same/Test delay-1
day: 65
8.7. Source Spacing-10 days/Source repetitions-2/Source context-same/Test delay-
1 day: 65
8.8. Source Spacing-10 days/Source repetitions-3/Source context-same/Test delay-
1 day: 65
8.9. Source Spacing-10 days/Source repetitions-4/Source context-same/Test delay-
1 day: 65
8.10. Source Spacing-1 day/Source repetitions-2/Source context-rooms/Test
delay-1 day: 65
8.11. Source Spacing-1 day/Source repetitions-3/Source context-rooms/Test
delay-1 day: 65
8.12. Source Spacing-1 day/Source repetitions-4/Source context-rooms/Test
delay-1 day: 65
8.13. Source Spacing-5 days/Source repetitions-2/Source context-rooms/Test
delay-1 day: 65
8.14. Source Spacing-5 days/Source repetitions-3/Source context-rooms/Test
delay-1 day: 65
8.15. Source Spacing-5 days/Source repetitions-4/Source context-rooms/Test
delay-1 day: 65
8.16. Source Spacing-10 days/Source repetitions-2/Source context-
rooms/Test delay-1 day: 65
8.17. Source Spacing-10 days/Source repetitions-3/Source context-
rooms/Test delay-1 day: 65
8.18. Source Spacing-10 days/Source repetitions-4/Source context-
rooms/Test delay-1 day: 65
8.19. Source Spacing-1 day/Source repetitions-2/Source context-mood/Test
delay-1 day: 65
8.20. Source Spacing-1 day/Source repetitions-3/Source context-mood/Test
delay-1 day: 65
8.21. Source Spacing-1 day/Source repetitions-4/Source context-mood/Test
delay-1 day: 65
8.22. Source Spacing-5 days/Source repetitions-2/Source context-mood/Test
delay-1 day: 65
8.23. Source Spacing-5 days/Source repetitions-3/Source context-mood/Test
delay-1 day: 65
8.24. Source Spacing-5 days/Source repetitions-4/Source context-mood/Test
delay-1 day: 65
8.25. Source Spacing-10 days/Source repetitions-4/Source context-
mood/Test delay-1 day: 65
8.26. Source Spacing-10 days/Source repetitions-3/Source context-
mood/Test delay-1 day: 65
8.27. Source Spacing-10 days/Source repetitions-2/Source context-
mood/Test delay-1 day: 65
8.28. Source Spacing-1 day/Source repetitions-2/Source context-same/Test
delay-5 days: 65
8.29. Source Spacing-1 day/Source repetitions-3/Source context-same/Test
delay-5 days: 65
8.30. Source Spacing-1 day/Source repetitions-4/Source context-same/Test
delay-5 days: 65
8.31. Source Spacing-5 days/Source repetitions-2/Source context-same/Test
delay-5 days: 65
8.32. Source Spacing-5 days/Source repetitions-3/Source context-same/Test
delay-5 days: 65
8.33. Source Spacing-5 days/Source repetitions-4/Source context-same/Test
delay-5 days: 65
8.34. Source Spacing-10 days/Source repetitions-2/Source context-same/Test
delay-5 days: 65
8.35. Source Spacing-10 days/Source repetitions-3/Source context-same/Test
delay-5 days: 65
8.36. Source Spacing-10 days/Source repetitions-4/Source context-same/Test
delay-5 days: 65
8.37. Source Spacing-1 day/Source repetitions-2/Source context-rooms/Test
delay-5 days: 65
8.38. Source Spacing-1 day/Source repetitions-3/Source context-rooms/Test
delay-5 days: 65
8.39. Source Spacing-1 day/Source repetitions-4/Source context-rooms/Test
delay-5 days: 65
8.40. Source Spacing-5 days/Source repetitions-2/Source context-rooms/Test
delay-5 days: 65
8.41. Source Spacing-5 days/Source repetitions-3/Source context-rooms/Test
delay-5 days: 65
8.42. Source Spacing-5 days/Source repetitions-4/Source context-rooms/Test
delay-5 days: 65
8.43. Source Spacing-10 days/Source repetitions-2/Source context-
rooms/Test delay-5 days: 65
8.44. Source Spacing-10 days/Source repetitions-3/Source context-
rooms/Test delay-5 days: 65
8.45. Source Spacing-10 days/Source repetitions-4/Source context-
rooms/Test delay-5 days: 65
8.46. Source Spacing-1 day/Source repetitions-2/Source context-mood/Test
delay-5 days: 65
8.47. Source Spacing-1 day/Source repetitions-3/Source context-mood/Test
delay-5 days: 65
8.48. Source Spacing-1 day/Source repetitions-4/Source context-mood/Test
delay-5 days: 65
8.49. Source Spacing-5 days/Source repetitions-4/Source context-mood/Test
delay-5 days: 65
8.50. Source Spacing-5 days/Source repetitions-3/Source context-mood/Test
delay-5 days: 65
8.51. Source Spacing-5 days/Source repetitions-2/Source context-mood/Test
delay-5 days: 65
8.52. Source Spacing-10 days/Source repetitions-4/Source context-
mood/Test delay-5 days: 65
8.53. Source Spacing-10 days/Source repetitions-3/Source context-
mood/Test delay-5 days: 65
8.54. Source Spacing-10 days/Source repetitions-2/Source context-
mood/Test delay-5 days: 65
8.55. Source Spacing-1 day/Source repetitions-2/Source context-same/Test
delay-10 days: 65
8.56. Source Spacing-1 day/Source repetitions-3/Source context-same/Test
delay-10 days: 65
8.57. Source Spacing-1 day/Source repetitions-4/Source context-same/Test
delay-10 days: 65
8.58. Source Spacing-5 days/Source repetitions-2/Source context-same/Test
delay-10 days: 65
8.59. Source Spacing-5 days/Source repetitions-3/Source context-same/Test
delay-10 days: 65
8.60. Source Spacing-5 days/Source repetitions-4/Source context-same/Test
delay-10 days: 65
8.61. Source Spacing-10 days/Source repetitions-2/Source context-same/Test
delay-10 days: 65
8.62. Source Spacing-10 days/Source repetitions-3/Source context-same/Test
delay-10 days: 65
8.63. Source Spacing-10 days/Source repetitions-4/Source context-same/Test
delay-10 days: 65
8.64. Source Spacing-1 day/Source repetitions-2/Source context-rooms/Test
delay-10 days: 65
8.65. Source Spacing-1 day/Source repetitions-3/Source context-rooms/Test
delay-10 days: 65
8.66. Source Spacing-1 day/Source repetitions-4/Source context-rooms/Test
delay-10 days: 65
8.67. Source Spacing-5 days/Source repetitions-2/Source context-rooms/Test
delay-10 days: 65
8.68. Source Spacing-5 days/Source repetitions-3/Source context-rooms/Test
delay-10 days: 65
8.69. Source Spacing-5 days/Source repetitions-4/Source context-rooms/Test
delay-10 days: 65
8.70. Source Spacing-10 days/Source repetitions-2/Source context-
rooms/Test delay-10 days: 65
8.71. Source Spacing-10 days/Source repetitions-3/Source context-
rooms/Test delay-10 days: 65
8.72. Source Spacing-10 days/Source repetitions-4/Source context-
rooms/Test delay-10 days: 65
8.73. Source Spacing-1 day/Source repetitions-2/Source context-mood/Test
delay-10 days: 65
8.74. Source Spacing-1 day/Source repetitions-3/Source context-mood/Test
delay-10 days: 65
8.75. Source Spacing-1 day/Source repetitions-4/Source context-mood/Test
delay-10 days: 65
8.76. Source Spacing-5 days/Source repetitions-2/Source context-mood/Test
delay-10 days: 65
8.77. Source Spacing-5 days/Source repetitions-3/Source context-mood/Test
delay-10 days: 65
8.78. Source Spacing-5 days/Source repetitions-4/Source context-mood/Test
delay-10 days: 65
8.79. Source Spacing-10 days/Source repetitions-4/Source context-
mood/Test delay-10 days: 65
8.80. Source Spacing-10 days/Source repetitions-3/Source context-
mood/Test delay-10 days: 65
8.81. Source Spacing-10 days/Source repetitions-2/Source context-
mood/Test delay-10 days: 65 56:28
Appendix H
Intermediate B Simulation Data
ROUND ONE
1. Source Repetitions 0:01
1.1. 3 conditions
1.1.1. First condition: 2 0:14
1.1.2. Second condition: 5 0:17
2. Source Spacings 0:30
2.1. 1 condition
2.1.1. First condition: 1
2.1.2. First condition: minute
3. Test Tasks
3.1. 1 condition
3.2. First condition: Free recall
4. Test Delays 1:29
4.1. 1 condition
4.1.1. First condition: 1
4.1.2. First condition: minute 1:38
5. Test Context 1:40
5.1. 1 condition
5.1.1. First condition: same 1:42
6. Source Context 1:45
6.1. 1 condition 1:47
6.1.1. First condition: same
7. Test Delays
7.1. First condition: 1->5 2:07
8. Hypotheses 2:36
8.1. Source Repetitions-2: 40
8.2. Source Repetitions-5: 90 2:58
ROUND TWO
23:30
1. Source Repetitions 23:50
1.1. 2 conditions 23:59
1.1.1. First condition: 2
1.1.2. Second condition: 5 24:11
2. Source Spacings 24:13
2.1. 1 condition
2.1.1. First condition: 1
2.1.2. First condition: hour 24:24
3. Source Context
3.1. 1 condition
3.1.1. First condition: same
4. Test Tasks
4.1. 1 condition
4.2. First condition: Free recall
5. Test Delays
5.1. 1 condition
5.1.1. First condition: 1
5.1.2. First condition: minute
6. Test Context
6.1. 1 condition
6.1.1. First condition: same
7. Hypotheses
7.1. Source Repetitions-2: 10
7.2. Source Repetitions-2: 10->5
7.3. Source Repetitions-5: 15 25:44
ROUND THREE
50:40
1. Source Repetitions
1.1. 1 condition
1.1.1. First condition: 5
2. Source Spacings
2.1. 2 conditions
2.1.1. First condition: 1
2.1.2. Second condition: 1
2.1.3. Second condition: days
2.1.4. First condition: hours
3. Source Context
3.1. 1 condition
3.1.1. First condition: same
4. Test Tasks
4.1. 1 condition
4.2. First condition: Free recall
5. Test Delays
5.1. 1 condition
5.1.1. First condition: 1
5.1.2. First condition: hour
6. Test Context
6.1. 1 condition
6.1.1. First condition: same
7. Review Findings
7.1. Experiment: 1 52:55
7.2. Experiment: 2 53:07
8. Source Repetitions 53:42
8.1. 1 condition 53:55
8.1.1. First condition: 5 53:58
9. Source Spacings 54:00
9.1. 2 conditions
9.1.1. First condition: 1
9.1.2. Second condition: 1
9.1.3. Second condition: days
9.1.4. First condition: hours
10. Source Context
10.1. 1 condition
10.1.1. First condition: same 54:20
11. Test Tasks 54:21
11.1. 1 condition
11.2. First condition: Free recall 54:23
12. Test Delays 54:28
12.1. 1 condition
12.1.1. First condition: 1
12.1.2. First condition: minute
13. Test Context
13.1. 1 condition
13.1.1. First condition: same
14. Test Delays
14.1. First condition: minutes->hours
15. Source Spacings
15.1. 2 conditions->1
15.1.1. First condition: hours
16. Source Repetitions
16.1. 1 condition->2 56:32
16.1.1. Second condition: 2 56:37
17. Hypotheses
17.1. Source Repetitions-2: 20
17.2. Source Repetitions-5: 15 57:10
Appendix I
Intermediate C Simulation Data
ROUND ONE
1. Source Repetitions 0:04
1.1. 1 condition
1.1.1. First condition: 2 0:10
2. Source Spacings 0:11
2.1. 1 condition
2.1.1. First condition: 1
2.1.2. First condition: minute
3. Source Context
3.1. 1 condition
3.1.1. First condition: Same
4. Test Task
4.1. 1 condition
4.1.1. First condition: Free Recall 0:44
5. Test Delay 0:46
5.1. 1 condition 0:49
5.1.1. First condition: 1
5.1.2. First condition: minute 0:53
6. Test Context 0:53
6.1. 1 condition
6.1.1. First condition: Same
7. Source Spacings 1:10
7.1. 1 condition->2 1:16
7.1.1. Second condition: 12 1:18
7.1.2. Second condition: days 1:20
7.1.3. First condition: minutes->hours 1:24
7.1.4. Second condition: 12->1 1:26
8. Hypotheses 1:31
8.1. Spacing-1 hour: 50
8.2. Spacing-1 minute: 0 1:47
ROUND TWO
11:10
1. Source Repetitions 11:21
1.1. 1 condition
1.1.1. First condition: 5 11:27
2. Source Spacings
2.1. 1 condition
2.1.1. First condition: 1
2.1.2. First condition: minute 11:29
2.1.3. 1 condition->2 conditions 11:31
2.1.3.1. Second condition: 1 11:33
2.1.3.2. Second condition: hour
3. Source Context
3.1. 1 condition
3.1.1. First condition: Same
4. Test Task
4.1. 1 condition
4.1.1. First condition: Free Recall
5. Test Delay
5.1. 1 condition
5.1.1. First condition: 1
5.1.2. First condition: minute 11:50
6. Test Delay 11:51
6.1. First condition: minute->hour->day 11:52
7. Test Context
7.1. 1 condition
7.1.1. First condition: Same
8. Hypotheses
8.1. Spacing-1 hour: 50
8.2. Spacing-1 minute: 0 12:07
ROUND THREE
23:40
1. Source Repetitions
1.1. 1 condition
1.1.1. First condition: 5
2. Source Spacings
2.1. 1 condition
2.1.1. First condition: 1
2.1.2. First condition: minute
2.1.3. First condition: minute->hour
3. Source Context
3.1. 1 condition
3.1.1. First condition: Rooms
4. Source Spacings
4.1. 1 condition->2 conditions
4.1.1. Second condition: 5
4.1.2. Second condition: Hours
4.1.3. Second condition: 5->1
4.1.4. Second condition: Hours->minutes
4.1.5. First condition: hours->minutes
4.1.6. Second condition: minutes-> hours
5. Test Task 24:33
5.1. 1 condition
5.1.1. First condition: Free Recall
6. Test Delay 24:36
6.1. 1 condition
6.1.1. First condition: 1
6.1.2. First condition: hour
6.1.3. First condition: hour->day
7. Test Context 24:46
7.1. 1 condition
7.1.1. First condition: Same
8. Hypotheses 24:57
8.1. Spacing-1 hour: 0
8.2. Spacing-1 minute: 0
ROUND FOUR
34:22
1. Source Repetitions
1.1. 1 condition
1.1.1. First condition: 5
2. Source Spacings
2.1. 1 condition
2.1.1. First condition: 1
2.1.2. First condition: hour
3. Source Context
3.1. 1 condition
3.1.1. First condition: Rooms
4. Source Context
4.1. 1 condition->2
4.1.1. Second condition: Same
5. Test Task
5.1. 1 condition
5.1.1. First condition: Free Recall
6. Test Delay
6.1. 1 condition
6.1.1. First condition: 1
6.1.2. First condition: day
7. Test Context
7.1. 1 condition
7.1.1. First condition: Same
8. Hypotheses
8.1. Source context-same: 50
8.2. Source context-rooms: 0 35:54
ROUND FIVE
51:19
1. Source Repetitions
1.1. 1 condition
1.1.1. First condition: 5
2. Source Spacings 51:39
2.1. 2 conditions 51:40
2.1.1. First condition: 1 51:43
2.1.2. First condition: minute 51:44
2.1.3. 2 conditions->3 conditions 51:45
2.1.3.1. Second condition: 1 51:49
2.1.3.2. Second condition: hour 51:50
2.1.3.3. Third condition: 1 51:51
2.1.3.4. Third condition: day 51:52
3. Source Context
3.1. 1 condition
3.1.1. First condition: Same
4. Test Task
4.1. 1 condition
4.1.1. First condition: Free Recall 51:58
5. Test Delay 51:59
5.1. 1 condition
5.1.1. First condition: 1
5.1.2. First condition: day 52:03
6. Test Context
6.1. 1 condition
6.1.1. First condition: Same 52:16
7. Test Task 52:23
7.1. 1 condition->3 52:29
7.1.1. Second condition: recognition 52:31
7.1.2. Third condition: stem completion 52:33
8. Hypotheses
8.1. Test task-free recall/Spacing-1 day: 50
8.2. Test task-free recall/Spacing-1 hour: 25
8.3. Test task-free recall/Spacing-1 minute: 10
8.4. Test task-recognition/Spacing-1 minute: 10
8.5. Test task-stem completion/Spacing-1 minute: 10
8.6. Test task-recognition/Spacing-1 hour: 25
8.7. Test task-stem completion/Spacing-1 hour: 25
8.8. Test task-recognition/Spacing-1 day: 50
8.9. Test task-stem completion/Spacing-1 day: 50 53:54
ROUND SIX
1:06:59
1. Source Repetitions 1:07:20
1.1. 1 condition 1:07:23
1.1.1. First condition: 5 1:07:24
2. Source Spacings
2.1. 3 conditions
2.1.1. First condition: 1
2.1.2. Second condition: 1
2.1.3. Third condition: 1
2.1.4. First condition: minute
2.1.5. Second condition: hour
2.1.6. Third condition: day
3. Source Context 1:07:40
3.1. 1 condition
3.1.1. First condition: Rooms 1:07:44
4. Test Task 1:07:47
4.1. 3 conditions 1:07:48
4.1.1. First condition: Free Recall 1:07:49
4.1.2. Second condition: Recognition 1:07:50
4.1.3. Third condition: Stem completion 1:07:51
5. Source context 1:07:13
5.1. First condition: Rooms->Same 1:07:15
6. Test Context
6.1. 1 condition
6.1.1. First condition: Rooms 1:07:17
7. Test Delay 1:07:25
7.1. 1 condition 1:07:27
7.1.1. First condition: 1
7.1.2. First condition: day 1:07:30
8. Hypotheses
8.1. Spacing-1 minute/Test-free recall: 10
8.2. Spacing-1 hour/Test-free recall: 25
8.3. Spacing-1 day/Test-free recall: 50
8.4. Spacing-1 minute/Test-recognition: 10
8.5. Spacing-1 hour/Test-recognition: 25
8.6. Spacing-1 day/Test-recognition: 50
8.7. Spacing-1 minute/Test-stem completion: 50
8.8. Spacing-1 hour/Test- stem completion: 50
8.9. Spacing-1 day/Test- stem completion: 50
Appendix J
Novice A Simulation Data
ROUND ONE
1. Source Repetitions 0:30
1.1. 3 conditions
1.1.1. First condition: 4
1.1.2. Second condition: 4
1.1.3. First condition: 4->5
1.1.4. Second condition: 4->5
1.1.5. Third condition: 5 0:55
2. Source Spacings 0:58
2.1. 1 condition
2.1.1. First condition: 1
2.1.2. First condition: minute 1:14
3. Source Spacings 1:18
3.1. 1->3 conditions
3.2. 3 conditions->1
4. Test Tasks
4.1. 3 conditions
4.2. First condition: Free recall
4.3. Second condition: Recognition
4.4. Third condition: Stem Completion
5. Source Spacings
5.1. 1->3 conditions
5.2. 3 conditions->1
6. Source Spacings
6.1. 1->3 conditions
6.2. 3 conditions->1
7. Source Spacings
7.1. 1->2 conditions
7.1.1. Second condition: 30
7.1.2. Second condition: minutes
7.2. 2 conditions->3
7.2.1. Second condition: 30->1
7.2.2. Second condition: minutes->hours
7.2.3. Third condition: 5
7.2.4. Third condition: days
7.2.5. Second condition: 1->5
8. Source spacings
8.1. Second condition: 5->1 4:04
9. Source Contexts 4:12
9.1. 3 conditions
9.1.1. Same
9.1.2. Rooms
9.1.3. Moods 4:21
10. Test Delays 4:30
10.1. 1 condition
10.1.1. First condition: minutes
10.2. 1 condition->2 conditions
10.2.1. Second condition: 5
10.2.2. Second condition: days
10.3. 2 conditions->3 conditions
10.3.1. Second condition: 5->20
10.3.2. Second condition: days->minutes
10.3.3. Third condition: 5
10.3.4. Third condition: days
11. Test Contexts
11.1. 3 conditions
11.1.1. First condition: same
11.1.2. Second condition: room
11.2. 3 conditions->1 condition
12. Test Delays
12.1. 3 conditions->1 condition
13. Hypotheses
13.1. Redesign 8:09
14. Test Tasks 8:20
14.1. 3 conditions->1 8:24
14.1.1. First condition: Free recall->Recognition 8:25
15. Hypotheses
15.1. Source Repetitions-5/Source Context-same/Source Spacing-1 day: 7
15.2. Source Repetitions-5/Source Context-same/Source Spacing-1 day:
7->10
15.3. Source Repetitions-5/Source Context-same/Source Spacing-1 day:
10->75
15.4. Redesign 11:47
16. Test Delays 11:48
16.1. First condition: 1->5
16.2. First condition: Minutes->hours 11:54
17. Hypotheses 11:55
17.1. Source Repetitions-5/Source Context-same/Source Spacing-1 day: 75
17.2. Source Repetitions-5/Source Context-same/Source Spacing-1 day: 50
17.3. Source Repetitions-5/Source Context-same/Source Spacing-1 day: 25
17.4. Source Repetitions-5/Source Context-rooms/Source Spacing-1 day: 75
17.5. Source Repetitions-5/Source Context-rooms/Source Spacing-1 day: 65
17.6. Source Repetitions-5/Source Context-rooms/Source Spacing-1 day: 55
17.7. Source Repetitions-5/Source Context-mood/Source Spacing-1 day: 75
17.8. Source Repetitions-5/Source Context-mood/Source Spacing-1 day: 65
17.9. Source Repetitions-5/Source Context-mood/Source Spacing-1 day: 55
17.10. Source Repetitions-5/Source Context-same/Source Spacing-1 hour: 75
17.11. Source Repetitions-5/Source Context-same/Source Spacing-1 hour: 65
17.12. Source Repetitions-5/Source Context-same/Source Spacing-1 hour: 55
17.13. Source Repetitions-5/Source Context-same/Source Spacing-1 minute: 75
17.14. Source Repetitions-5/Source Context-same/Source Spacing-1 minute: 75
17.15. Source Repetitions-5/Source Context-same/Source Spacing-1 minute: 75
17.16. Source Repetitions-5/Source Context-rooms/Source Spacing-1 minute: 75
17.17. Source Repetitions-5/Source Context-rooms/Source Spacing-1 minute: 75
17.18. Source Repetitions-5/Source Context-rooms/Source Spacing-1 minute: 75
17.19. Source Repetitions-5/Source Context-mood/Source Spacing-1 minute: 75
17.20. Source Repetitions-5/Source Context-mood/Source Spacing-1 minute: 75
17.21. Source Repetitions-5/Source Context-mood/Source Spacing-1 minute: 75
17.22. Source Repetitions-5/Source Context-rooms/Source Spacing-1 hour: 75
17.23. Source Repetitions-5/Source Context-rooms/Source Spacing-1 hour: 65
17.24. Source Repetitions-5/Source Context-rooms/Source Spacing-1 hour: 55
17.25. Source Repetitions-5/Source Context-mood/Source Spacing-1 hour: 75
17.26. Source Repetitions-5/Source Context-mood/Source Spacing-1 hour: 65
17.27. Source Repetitions-5/Source Context-mood/Source Spacing-1 hour: 55
ROUND TWO
1. Source Context 37:11
1.1. 3 conditions 37:22
1.1.1. Same
1.1.2. Rooms
1.1.3. Moods 37:29
2. Source Spacings 37:45
2.1. 3 conditions
2.1.1. First condition: minutes
2.1.2. Second condition: hours
2.1.3. Third condition: days 37:57
3. Source Spacings 38:02
3.1. Third condition: 1->5 38:07
4. Test Contexts 38:28
4.1. 3 conditions
4.1.1. Same
4.1.2. Rooms
4.1.3. Moods 39:08
5. Source Repetitions 39:25
5.1. 3 conditions
5.1.1. 3 conditions->1 39:41
5.1.2. First condition: 5 39:47
6. Test Delays
6.1. 3 conditions
6.1.1. First condition: 5
6.1.2. First condition: minutes
6.1.3. Second condition: 5
6.1.4. Second condition: hours
6.1.5. Third condition: 5
6.1.6. Third condition: days
7. Test Tasks 40:14
7.1. 1 condition 40:31
7.1.1. First condition: Free recall 40:34
8. Test Delays
8.1. 3 conditions->1 condition
8.1.1. First condition: minutes->hours
9. Source Repetitions
9.1. 1 condition->3 conditions
9.1.1. First condition: 5->2
9.1.2. Second condition: 5
9.2. 3 conditions->2 conditions
10. Hypotheses
10.1. Source spacing-1 minute/Source context-same/Source repetition-2/Test
context-same: 99
10.2. Source spacing-1 hour/Source context-same/Source repetition-2/Test
context-same: 80
10.3. Source spacing-1 day/Source context-same/Source repetition-2/Test
context-same: 50
10.4. Source spacing-1 minute/Source context-rooms/Source repetition-
2/Test context-same: 99
10.5. Source spacing-1 minute/Source context-mood/Source repetition-2/Test
context-same: 99
10.6. Source spacing-1 hour/Source context-same/Source repetition-2/Test
context-same: 80->50
10.7. Source spacing-1 day/Source context-same/Source repetition-2/Test
context-same: 50->25
10.8. Source spacing-1 minute/Source context-same/Source repetition-5/Test
context-same: 99
10.9. Source spacing-1 hour/Source context-same/Source repetition-5/Test
context-same: 75
10.10. Source spacing-1 day/Source context-same/Source repetition-5/Test
context-same: 50
10.11. Source spacing-1 minute/Source context-rooms/Source repetition-
5/Test context-same: 99
10.12. Source spacing-1 hour/Source context-rooms/Source repetition-5/Test
context-same: 75
10.13. Source spacing-1 day/Source context-rooms/Source repetition-5/Test
context-same: 50
10.14. Source spacing-1 minute/Source context-mood/Source repetition-5/Test
context-same: 99
10.15. Source spacing-1 hour/Source context-mood/Source repetition-5/Test
context-same: 75
10.16. Source spacing-1 day/Source context-mood/Source repetition-5/Test
context-same: 50
10.17. Source spacing-1 minute/Source context-rooms/Source repetition-
2/Test context-rooms: 99
10.18. Source spacing-1 minute/Source context-same/Source repetition-2/Test
context-rooms: 99
10.19. Source spacing-1 minute/Source context-mood/Source repetition-2/Test
context-rooms: 99
10.20. Source spacing-1 hour/Source context-rooms/Source repetition-2/Test
context-rooms: 75
10.21. Source spacing-1 day/Source context-rooms/Source repetition-2/Test
context-rooms: 75
10.22. Source spacing-1 hour/Source context-same/Source repetition-2/Test
context-rooms: 50
10.23. Source spacing-1 day/Source context-same/Source repetition-2/Test
context-rooms: 25
10.24. Source spacing-1 hour/Source context-moods/Source repetition-2/Test
context-rooms: 50
10.25. Source spacing-1 day/Source context-moods/Source repetition-2/Test
context-rooms: 25
10.26. Source spacing-1 minute/Source context-mood/Source repetition-2/Test
context-mood: 99
10.27. Source spacing-1 hour/Source context-mood/Source repetition-2/Test
context-mood: 75
10.28. Source spacing-1 day/Source context-mood/Source repetition-2/Test
context-mood: 75
10.29. Source spacing-1 minute/Source context-rooms/Source repetition-
2/Test context-mood: 99
10.30. Source spacing-1 hour/Source context-rooms/Source repetition-2/Test
context-mood: 50
10.31. Source spacing-1 day/Source context-rooms/Source repetition-2/Test
context-mood: 25
10.32. Source spacing-1 minute/Source context-same/Source repetition-2/Test
context-mood: 99
10.33. Source spacing-1 hour/Source context-same/Source repetition-2/Test
context-mood: 50
10.34. Source spacing-1 day/Source context-same/Source repetition-2/Test
context-mood: 25
10.35. Source spacing-1 minute/Source context-rooms/Source repetition-
2/Test context-mood: 99
10.36. Source spacing-1 hour/Source context-rooms/Source repetition-2/Test
context-mood: 50
10.37. Source spacing-1 minute/Source context-same/Source repetition-5/Test
context-rooms: 99
10.38. Source spacing-1 minute/Source context-rooms/Source repetition-
5/Test context-rooms: 99
10.39. Source spacing-1 minute/Source context-mood/Source repetition-5/Test
context-rooms: 99
10.40. Source spacing-1 minute/Source context-same/Source repetition-5/Test
context-mood: 99
10.41. Source spacing-1 minute/Source context-rooms/Source repetition-
5/Test context-mood: 99
10.42. Source spacing-1 minute/Source context-mood/Source repetition-5/Test
context-mood: 99
10.43. Source spacing-1 hour/Source context-rooms/Source repetition-5/Test
context-rooms: 75
10.44. Source spacing-1 day/Source context-rooms/Source repetition-5/Test
context-rooms: 50
10.45. Source spacing-1 hour/Source context-mood/Source repetition-5/Test
context-mood: 75
10.46. Source spacing-1 day/Source context-mood/Source repetition-5/Test
context-mood: 50
10.47. Source spacing-1 hour/Source context-rooms/Source repetition-5/Test
context-rooms: 75->90
10.48. Source spacing-1 day/Source context-rooms/Source repetition-5/Test
context-rooms: 50->80
10.49. Source spacing-1 hour/Source context-mood/Source repetition-5/Test
context-mood: 75->90
10.50. Source spacing-1 day/Source context-mood/Source repetition-5/Test
context-mood: 50->80
10.51. Source spacing-1 hour/Source context-same/Source repetition-5/Test
context-rooms: 60
10.52. Source spacing-1 day/Source context-same/Source repetition-5/Test
context-rooms: 40
10.53. Source spacing-1 hour/Source context-mood/Source repetition-5/Test
context-rooms: 60
10.54. Source spacing-1 day/Source context-mood/Source repetition-5/Test
context-rooms: 40
10.55. Source spacing-1 hour/Source context-same/Source repetition-5/Test
context-mood: 60
10.56. Source spacing-1 day/Source context-same/Source repetition-5/Test
context-mood: 40
10.57. Source spacing-1 hour/Source context-rooms/Source repetition-5/Test
context-mood: 60
10.58. Source spacing-1 day/Source context-rooms/Source repetition-5/Test
context-mood: 40
10.59. Source spacing-1 hour/Source context-same/Source repetition-5/Test
context-same: 75->80
10.60. Source spacing-1 hour/Source context-same/Source repetition-5/Test
context-same: 80->90
10.61. Source spacing-1 day/Source context-same/Source repetition-5/Test
context-same: 50->80
10.62. Source spacing-1 hour/Source context-rooms/Source repetition-5/Test
context-same: 75->60
10.63. Source spacing-1 day/Source context-rooms/Source repetition-5/Test
context-same: 50->40
10.64. Source spacing-1 hour/Source context-mood/Source repetition-5/Test
context-same: 75->60
10.65. Source spacing-1 day/Source context-mood/Source repetition-5/Test
context-same: 50->40
10.66. Source spacing-1 hour/Source context-same/Source repetition-2/Test
context-same: 50->75
10.67. Source spacing-1 day/Source context-same/Source repetition-2/Test
context-same: 25->50
10.68. Source spacing-1 hour/Source context-rooms/Source repetition-2/Test
context-same: 50
10.69. Source spacing-1 day/Source context-rooms/Source repetition-2/Test
context-same: 25
10.70. Source spacing-1 hour/Source context-mood/Source repetition-2/Test
context-same: 50
10.71. Source spacing-1 day/Source context-mood/Source repetition-2/Test
context-same: 25 52:22
ROUND THREE
1. Source Context
1.1. 3 conditions
1.1.1. Same
1.1.2. Rooms
1.1.3. Moods
2. Test Contexts
2.1. 3 conditions
2.1.1. Same
2.1.2. Rooms
2.1.3. Moods
3. Source Spacings
3.1. 3 conditions
3.1.1. First condition: minutes
3.1.2. Second condition: 5
3.1.3. Second condition: hours
3.1.4. Third condition: 5
3.1.5. Third condition: days
4. Review Results
4.1. Round 2
5. Test Delays
5.1. 1 condition
5.1.1. First condition: 5
5.1.2. First condition: hours
6. Source Context
6.1. 3 conditions
6.1.1. Same
6.1.2. Rooms
6.1.3. Moods
7. Test Contexts
7.1. 3 conditions
7.1.1. Same
7.1.2. Rooms
7.1.3. Moods
8. Test Task
8.1. 1 condition
8.1.1. Recognition
9. Source Repetitions
9.1. 3 conditions
9.1.1. 3 conditions->2
9.1.1.1. First condition: 2
9.1.1.2. Second condition: 5
10. Source Spacings
10.1. 3 conditions
10.1.1. First condition: 1
10.1.2. Second condition: 1
10.1.3. Third condition: 5
10.1.4. Third condition: days
10.1.5. Second condition: hours
10.1.6. Third condition: minutes
11. Source Spacings
11.1. Second condition: 1->10->5
12. Test Delays
12.1. 1 condition->2 conditions
12.1.1. Second condition: 5
12.1.2. First condition: hours->minutes
12.1.3. Second condition: hours
12.2. 2 conditions->1 condition
13. Hypotheses
14. Redesign
14.1. Test Delays
14.1.1. First condition: minutes->hours
14.2. Source Spacings
14.2.1. Second condition: 5->1
15. Hypotheses
15.1. Source spacing-1 minute/Source context-same/Source repetition-2/Test
context-same: 1
15.2. Source spacing-1 minute/Source context-rooms/Source repetition-
2/Test context-same: 1
15.3. Source spacing-1 minute/Source context-mood/Source repetition-2/Test
context-same: 1
15.4. Source spacing-1 hour/Source context-same/Source repetition-2/Test
context-same: 1
15.5. Source spacing-1 hour/Source context-rooms/Source repetition-2/Test
context-same: 1
15.6. Source spacing-1 hour/Source context-mood/Source repetition-2/Test
context-same: 1
15.7. Source spacing-1 day/Source context-same/Source repetition-2/Test
context-same: 1
15.8. Source spacing-1 day/Source context-rooms/Source repetition-2/Test
context-same: 1
15.9. Source spacing-1 day/Source context-mood/Source repetition-2/Test
context-same: 1
15.10. Source spacing-1 minute/Source context-same/Source repetition-2/Test
context-rooms: 1
15.11. Source spacing-1 minute/Source context-rooms/Source repetition-
2/Test context-rooms: 1
15.12. Source spacing-1 minute/Source context-mood/Source repetition-2/Test
context-rooms: 1
16. Redesign
16.1. Previous Results
16.1.1. Round 2
17. Source Context
17.1. 3 conditions
17.1.1. Same
17.1.2. Rooms
17.1.3. Moods
18. Test Contexts
18.1. 3 conditions
18.1.1. Same
18.1.2. Rooms
18.1.3. Moods
19. Test Delays
19.1. 1 condition
19.1.1. First condition: 5
19.1.2. First condition: hours
20. Test Task
20.1. 1 condition
20.1.1. First condition: Recognition
21. Source Repetitions
21.1. 2 conditions
21.1.1. First condition: 2
21.1.2. Second condition: 5
22. Source Spacings
22.1. 3 conditions
22.1.1. First condition: 1
22.1.2. First condition: minutes
22.1.3. Second condition: 1
22.1.4. Second condition: hours
22.1.5. Third condition: 5
22.1.6. Third condition: days
23. Hypotheses
23.1. Source spacing-1 minute/Source context-same/Source repetition-2/Test context-same: 1
23.2. Source spacing-1 minute/Source context-rooms/Source repetition-2/Test context-same: 1
23.3. Source spacing-1 minute/Source context-mood/Source repetition-2/Test context-same: 1
23.4. Source spacing-1 minute/Source context-same/Source repetition-5/Test context-same: 5
23.5. Source spacing-1 minute/Source context-rooms/Source repetition-5/Test context-same: 5
23.6. Source spacing-1 minute/Source context-mood/Source repetition-5/Test context-same: 5
23.7. Source spacing-1 hour/Source context-same/Source repetition-2/Test context-same: 20
23.8. Source spacing-1 hour/Source context-rooms/Source repetition-2/Test context-same: 20
23.9. Source spacing-1 hour/Source context-mood/Source repetition-2/Test context-same: 20
23.10. Source spacing-1 hour/Source context-same/Source repetition-5/Test context-same: 30
23.11. Source spacing-1 hour/Source context-rooms/Source repetition-5/Test context-same: 30
23.12. Source spacing-1 hour/Source context-mood/Source repetition-5/Test context-same: 30
23.13. Source spacing-1 day/Source context-same/Source repetition-2/Test context-same: 60
23.14. Source spacing-1 day/Source context-rooms/Source repetition-2/Test context-same: 60
23.15. Source spacing-1 day/Source context-mood/Source repetition-2/Test context-same: 60
23.16. Source spacing-1 day/Source context-same/Source repetition-5/Test context-same: 70
23.17. Source spacing-1 day/Source context-rooms/Source repetition-5/Test context-same: 70
23.18. Source spacing-1 day/Source context-mood/Source repetition-5/Test context-same: 70
23.19. Source spacing-1 hour/Source context-same/Source repetition-2/Test context-rooms: 25
23.20. Source spacing-1 hour/Source context-rooms/Source repetition-2/Test
context-rooms: 25
23.21. Source spacing-1 hour/Source context-mood/Source repetition-2/Test
context-rooms: 25
23.22. Source spacing-1 day/Source context-same/Source repetition-2/Test
context-rooms: 60
23.23. Source spacing-1 day/Source context-rooms/Source repetition-2/Test
context-rooms: 60
23.24. Source spacing-1 day/Source context-mood/Source repetition-2/Test
context-rooms: 60
23.25. Source spacing-1 hour/Source context-same/Source repetition-5/Test
context-rooms: 25
23.26. Source spacing-1 hour/Source context-rooms/Source repetition-5/Test
context-rooms: 25
23.27. Source spacing-1 hour/Source context-mood/Source repetition-5/Test
context-rooms: 25
23.28. Source spacing-1 day/Source context-same/Source repetition-5/Test
context-rooms: 65
23.29. Source spacing-1 day/Source context-rooms/Source repetition-5/Test
context-rooms: 65
23.30. Source spacing-1 day/Source context-mood/Source repetition-5/Test
context-rooms: 65
23.31. Source spacing-1 day/Source context-same/Source repetition-2/Test
context-rooms: 60->65
23.32. Source spacing-1 day/Source context-rooms/Source repetition-2/Test
context-rooms: 60->65
23.33. Source spacing-1 day/Source context-mood/Source repetition-2/Test
context-rooms: 60->65
23.34. Source spacing-1 hour/Source context-same/Source repetition-2/Test
context-mood: 25
23.35. Source spacing-1 hour /Source context-rooms/Source repetition-2/Test
context-mood: 25
23.36. Source spacing-1 hour /Source context-mood/Source repetition-2/Test
context-mood: 25
23.37. Source spacing-1 day/Source context-same/Source repetition-2/Test
context-mood: 65
23.38. Source spacing-1 day /Source context-rooms/Source repetition-2/Test
context-mood: 65
23.39. Source spacing-1 day /Source context-mood/Source repetition-2/Test
context-mood: 65
23.40. Source spacing-1 hour/Source context-same/Source repetition-5/Test
context-mood: 25
23.41. Source spacing-1 hour /Source context-rooms/Source repetition-5/Test
context-mood: 25
23.42. Source spacing-1 hour /Source context-mood/Source repetition-5/Test
context-mood: 25
23.43. Source spacing-1 day/Source context-same/Source repetition-5/Test
context-mood: 65
23.44. Source spacing-1 day /Source context-rooms/Source repetition-5/Test
context-mood: 65
23.45. Source spacing-1 day /Source context-mood/Source repetition-5/Test
context-mood: 65
23.46. Source spacing-1 minute/Source context-same/Source repetition-5/Test
context-mood: 0
23.47. Source spacing-1 minute /Source context-rooms/Source repetition-
5/Test context-mood: 0
23.48. Source spacing-1 minute /Source context-mood/Source repetition-
5/Test context-mood: 0
23.49. Source spacing-1 minute /Source context-same/Source repetition-5/Test
context-mood: 0
23.50. Source spacing-1 minute /Source context-rooms/Source repetition-
5/Test context-mood: 0
23.51. Source spacing-1 minute /Source context-mood/Source repetition-
5/Test context-mood: 0
Appendix K
Novice B Simulation Data
ROUND ONE
3. Source Repetitions 0:40
3.1. 3 conditions 0:42
3.1.1. First condition: 2 0:43
3.1.2. Second condition: 3
3.1.3. Third condition: 5
4. Source Spacings 0:47
4.1. 1 condition 0:48
4.1.1. First condition: 5 0:49
4.1.2. First condition: minutes
5. Source Context 0:55
5.1. 1 condition 0:59
5.1.1. First condition: Same 1:00
6. Test Task 1:02
6.1. 3 conditions 1:06
6.1.1. First condition: Free Recall
6.1.2. Second condition: Recognition
6.1.3. Third condition: Stem completion
7. Test Delay 1:12
7.1. 3 conditions 1:16
7.1.1. First condition: 1
7.1.2. First condition: minute
7.1.3. Second condition: 20
7.1.4. Second condition: minutes
7.1.5. Third condition: 1
7.1.6. Third condition: hour
8. Test Context 1:48
8.1. 3 conditions 1:54
8.1.1. First condition: Same
8.1.2. Second condition: Rooms
8.1.3. Third condition: Mood
9. Hypotheses 2:08
9.1. Test delay-1 hour/Test context-same/Test task-recognition/Source repetitions-
2:5 2:46
9.2. Test delay-1 hour/Test context-same/Test task-stem completion/Source
repetitions-2: 5
9.3. Test delay-1 hour/Test context-same/Test task-free recall/Source repetitions-
3: 5
9.4. Test delay-1 hour/Test context-same/Test task-recognition/Source repetitions-
3:10
9.5. Test delay-1 hour/Test context-same/Test task-stem completion/Source
repetitions-3:10
9.6. Test delay-1 hour/Test context-same/Test task-free recall/Source repetitions-
5: 10
9.7. Test delay-1 hour/Test context-same/Test task-recognition/Source repetitions-
5:12
9.8. Test delay-1 hour/Test context-same/Test task-stem completion/Source
repetitions-5: 12
9.9. Test delay-20 minutes/Test context-same/Test task-free recall/Source
repetitions-2: 5
9.10. Test delay-20 minutes/Test context-same/Test task-recognition/Source
repetitions-2: 10
9.11. Test delay-20 minutes/Test context-same/Test task-stem
completion/Source repetitions-2: 10
9.12. Test delay-20 minutes/Test context-same/Test task-free recall/Source
repetitions-3:15
9.13. Test delay-20 minutes/Test context-same/Test task-free recall/Source
repetitions-3: 15->10
9.14. Test delay-20 minutes/Test context-same/Test task-recognition/Source
repetitions-3:15
9.15. Test delay-20 minutes/Test context-same/Test task-stem
completion/Source repetitions-3:15
9.16. Test delay-1 hour/Test context-same/Test task-recognition/Source
repetitions-5: 12->15
9.17. Test delay-1 hour/Test context-same/Test task-stem completion/Source
repetitions-5: 12->15
9.18. Test delay-1 hour/Test context-same/Test task-recognition/Source
repetitions-5: 12
9.19. Test delay-20 minutes/Test context-same/Test task-free recall/Source
repetitions-5: 15
9.20. Test delay-20 minutes/Test context-same/Test task-recognition/Source
repetitions-5: 20
9.21. Test delay-20 minutes/Test context-same/Test task-stem
completion/Source repetitions-5: 20
9.22. Test delay-1 minute/Test context-same/Test task-free recall/Source
repetitions-2: 10
9.23. Test delay-1 minute/Test context-same/Test task-recognition/Source
repetitions-2:15
9.24. Test delay-1 minute/Test context-same/Test task-stem
completion/Source repetitions-2: 15
9.25. Test delay-1 minute/Test context-same/Test task-free recall/Source
repetitions-3:15
9.26. Test delay-1 minute/Test context-same/Test task-recognition/Source
repetitions-3: 20
9.27. Test delay-1 minute/Test context-same/Test task-stem
completion/Source repetitions-3: 20
9.28. Test delay-1 minute/Test context-same/Test task-free recall/Source
repetitions-5:20
9.29. Test delay-1 minute/Test context-same/Test task-recognition/Source
repetitions-5: 25
9.30. Test delay-1 minute/Test context-same/Test task-stem
completion/Source repetitions-5:25
9.31. Test delay-1 hour/Test context-same/Test task-free recall/Source
repetitions-2: 5
9.32. Test delay-1 hour/Test context-same/Test task-free recall/Source
repetitions-3: 5->4
9.33. Test delay-1 hour/Test context-same/Test task-free recall/Source
repetitions-2: 5->1
9.34. Test delay-1 hour/Test context-same/Test task-free recall/Source
repetitions-2: 1->0
9.35. Test delay-1 hour/Test context-same/Test task-free recall/Source
repetitions-3: 4->5
9.36. Test delay-1 minute/Test context-rooms/Test task-free recall/Source
repetitions-2:15
9.37. Test delay-1 minute/Test context-mood/Test task-free recall/Source
repetitions-2:15
9.38. Test delay-1 minute/Test context-rooms/Test task-free recall/Source
repetitions-3: 20
9.39. Test delay-1 minute/Test context-rooms/Test task-free recall/Source
repetitions-5: 25
9.40. Test delay-1 minute/Test context-mood/Test task-free recall/Source
repetitions-3: 20
9.41. Test delay-1 minute/Test context-mood/Test task-free recall/Source
repetitions-5: 25
9.42. Test delay-1 minute/Test context-rooms/Test task-recognition/Source
repetitions-2: 20
9.43. Test delay-1 minute/Test context-rooms/Test task-recognition/Source
repetitions-3: 25
9.44. Test delay-1 minute/Test context-rooms/Test task-recognition/Source
repetitions-5: 30
9.45. Test delay-1 minute/Test context-rooms/Test task-stem
completion/Source repetitions-2: 20
9.46. Test delay-1 minute/Test context-rooms/Test task-stem
completion/Source repetitions-3: 25
9.47. Test delay-1 minute/Test context-rooms/Test task-stem
completion/Source repetitions-5: 30
9.48. Test delay-1 minute/Test context-mood/Test task-recognition/Source
repetitions-2: 20
9.49. Test delay-1 minute/Test context-mood/Test task-recognition/Source
repetitions-3: 25
9.50. Test delay-1 minute/Test context-mood/Test task-recognition/Source
repetitions-5: 30
9.51. Test delay-1 minute/Test context-mood/Test task-stem
completion/Source repetitions-2: 20
9.52. Test delay-1 minute/Test context-mood/Test task-stem
completion/Source repetitions-3: 25
9.53. Test delay-1 minute/Test context-mood/Test task-stem
completion/Source repetitions-5: 30
9.54. Test delay-20 minutes/Test context-rooms/Test task-free recall/Source
repetitions-2: 10
9.55. Test delay-20 minutes/Test context-rooms/Test task-free recall/Source
repetitions-3:15
9.56. Test delay-20 minutes/Test context-rooms/Test task-free recall/Source
repetitions-5:20
9.57. Test delay-20 minutes/Test context-mood/Test task-free recall/Source
repetitions-2: 10
9.58. Test delay-20 minutes/Test context-mood/Test task-free recall/Source
repetitions-3:15
9.59. Test delay-20 minutes/Test context-mood/Test task-free recall/Source
repetitions-5: 20
9.60. Test delay-20 minutes/Test context-rooms/Test task-recognition/Source
repetitions-2: 15
9.61. Test delay-20 minutes/Test context-rooms/Test task-recognition/Source
repetitions-3: 20
9.62. Test delay-20 minutes/Test context-rooms/Test task-recognition/Source
repetitions-5: 25
9.63. Test delay-20 minutes/Test context-mood/Test task-recognition/Source
repetitions-2: 15
9.64. Test delay-20 minutes/Test context-mood/Test task-recognition/Source
repetitions-3: 20
9.65. Test delay-20 minutes/Test context-mood/Test task-recognition/Source
repetitions-5:25
9.66. Test delay-20 minutes/Test context-rooms/Test task-stem
completion/Source repetitions-2: 15
9.67. Test delay-20 minutes/Test context-rooms/Test task-stem
completion/Source repetitions-3: 20
9.68. Test delay-20 minutes/Test context-rooms/Test task-stem
completion/Source repetitions-5: 25
9.69. Test delay-20 minutes/Test context-mood/Test task-stem
completion/Source repetitions-2:15
9.70. Test delay-20 minutes/Test context-mood/Test task-stem
completion/Source repetitions-3: 20
9.71. Test delay-20 minutes/Test context-mood/Test task-stem
completion/Source repetitions-5: 25
9.72. Test delay-1 hour/Test context-rooms/Test task-free recall/Source
repetitions-2: 5
9.73. Test delay-1 hour/Test context-rooms/Test task-free recall/Source
repetitions-3:10
9.74. Test delay-1 hour/Test context-rooms/Test task-free recall/Source
repetitions-5:15
9.75. Test delay-1 hour/Test context-mood/Test task-free recall/Source
repetitions-2: 5
9.76. Test delay-1 hour/Test context-mood/Test task-free recall/Source
repetitions-3: 10
9.77. Test delay-1 hour/Test context-mood/Test task-free recall/Source
repetitions-5: 15
9.78. Test delay-1 hour/Test context-rooms/Test task-recognition/Source
repetitions-2: 10
9.79. Test delay-1 hour/Test context-rooms/Test task- recognition /Source
repetitions-3: 15
9.80. Test delay-1 hour/Test context-rooms/Test task- recognition /Source
repetitions-5: 20
9.81. Test delay-1 hour/Test context-mood/Test task-recognition/Source
repetitions-2: 10
9.82. Test delay-1 hour/Test context-mood/Test task- recognition /Source
repetitions-3:15
9.83. Test delay-1 hour/Test context-mood/Test task- recognition /Source
repetitions-5: 20
9.84. Test delay-1 hour/Test context-rooms/Test task-stem completion/Source
repetitions-2: 10
9.85. Test delay-1 hour/Test context-rooms/Test task- stem completion
/Source repetitions-3: 15
9.86. Test delay-1 hour/Test context-rooms/Test task- stem completion
/Source repetitions-5: 20
9.87. Test delay-1 hour/Test context-mood/Test task-stem completion/Source
repetitions-2: 10
9.88. Test delay-1 hour/Test context-mood/Test task- stem completion
/Source repetitions-3:15
9.89. Test delay-1 hour/Test context-mood/Test task- stem completion
/Source repetitions-5: 20 9:31
ROUND TWO
26:35
1. Source Repetitions
1.1. 1 condition
1.1.1. First condition: 2 27:02
2. Source Spacings 27:03
2.1. 3 conditions 27:10
2.1.1. First condition: 1 27:11
2.1.2. First condition: minutes
2.1.3. Second condition: 20
2.1.4. Second condition: minutes
2.1.5. Third condition: 1
2.1.6. Third condition: hour
3. Source Repetitions 27:34
3.1. First condition: 2->5 27:42
4. Source Context 27:42
4.1. 1 condition
4.1.1. First condition: Same 27:48
5. Test Task 27:52
5.1. 3 conditions
5.1.1. First condition: Free Recall
5.1.2. Second condition: Recognition
5.1.3. Third condition: Stem completion 27:58
6. Test Delay 28:00
6.1. 3 conditions
6.1.1. First condition: 1
6.1.2. Second condition: 20
6.1.3. Third condition: 1
6.1.4. Third condition: hour
6.1.5. Second condition: minutes
6.1.6. First condition: minute 28:12
7. Test Context 28:16
7.1. 3 conditions
7.1.1. First condition: Same
7.1.2. Second condition: Rooms
7.1.3. Third condition: Mood 28:24
8. Hypotheses 28:26
8.1. Test Task-stem completion/Test Delay-1 minute/Source Spacing-1 hour/Test
Context-mood: 100 28:40
8.2. Test Task-recognition/Test Delay-1 minute/Source Spacing-1 hour/Test
Context-mood: 100
8.3. Test Task-free recall/Test Delay-1 minute/Source Spacing-1 hour/Test
Context-mood: 75
8.4. Test Task-stem completion/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-mood: 95
8.5. Test Task-recognition/Test Delay-1 minute/Source Spacing-20 minutes/Test
Context-mood: 95
8.6. Test Task-free recall/Test Delay-1 minute/Source Spacing-20 minutes/Test
Context-mood: 70
8.7. Test Task-free recall/Test Delay-1 minute/Source Spacing-1 minute/Test
Context-mood: 65
8.8. Test Task-recognition/Test Delay-1 minute/Source Spacing-1 minute/Test
Context-mood: 90
8.9. Test Task-stem completion/Test Delay-1 minute/Source Spacing-1 minute/Test
Context-mood: 90
8.10. Test Task-free recall/Test Delay-1 minute/Source Spacing-1
minute/Test Context-rooms: 65
8.11. Test Task-free recall/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-rooms: 70
8.12. Test Task-free recall/Test Delay-1 minute/Source Spacing-1 hour/Test
Context-rooms: 75
8.13. Test Task-recognition/Test Delay-1 minute/Source Spacing-1
minute/Test Context-rooms: 90
8.14. Test Task-recognition/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-rooms: 95
8.15. Test Task-recognition/Test Delay-1 minute/Source Spacing-1 hour/Test
Context-rooms: 100
8.16. Test Task-stem completion/Test Delay-1 minute/Source Spacing-1
hour/Test Context-rooms: 100
8.17. Test Task-stem completion/Test Delay-1 minute/Source Spacing-1
minute/Test Context-rooms: 90
8.18. Test Task-stem completion/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-rooms: 95
8.19. Test Task-stem completion/Test Delay-20 minutes/Source Spacing-1
hour/Test Context-mood: 99
8.20. Test Task-recognition/Test Delay-20 minutes/Source Spacing-1
hour/Test Context-mood: 99
8.21. Test Task-free recall/Test Delay-20 minutes/Source Spacing-1
hour/Test Context-mood: 74
8.22. Test Task-free recall/Test Delay-20 minutes/Source Spacing-20
minutes/Test Context-mood: 69
8.23. Test Task-recognition/Test Delay-20 minutes/Source Spacing-20
minutes/Test Context-mood: 94
8.24. Test Task-stem completion/Test Delay-20 minutes/Source Spacing-20
minutes/Test Context-mood: 94
8.25. Test Task-stem completion/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-mood: 89
8.26. Test Task-recognition/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-mood: 89
8.27. Test Task-free recall/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-mood: 64
8.28. Test Task-free recall/Test Delay-1 hour/Source Spacing-1 minute/Test
Context-mood: 63
8.29. Test Task-free recall/Test Delay-1 hour/Source Spacing-20
minutes/Test Context-mood: 68
8.30. Test Task-free recall/Test Delay-1 hour/Source Spacing-1 hour/Test
Context-mood: 73
8.31. Test Task-recognition/Test Delay-1 hour/Source Spacing-1 minute/Test
Context-mood: 83
8.32. Test Task-stem completion/Test Delay-1 hour/Source Spacing-1
minute/Test Context-mood: 88
8.33. Test Task-recognition/Test Delay-1 hour/Source Spacing-20
minutes/Test Context-mood: 93
8.34. Test Task-stem completion/Test Delay-1 hour/Source Spacing-20
minutes/Test Context-mood: 93
8.35. Test Task-recognition/Test Delay-1 hour/Source Spacing-1 hour/Test
Context-mood: 98
8.36. Test Task-stem completion/Test Delay-1 hour/Source Spacing-1
hour/Test Context-mood: 98
8.37. Test Task-free recall/Test Delay-20 minutes/Source Spacing-1
minute/Test Context-rooms: 64
8.38. Test Task-free recall/Test Delay-20 minutes/Source Spacing-1
minute/Test Context-rooms: 64
8.39. Test Task-recognition/Test Delay-20 minutes/Source Spacing-1
minute/Test Context-rooms: 89
8.40. Test Task-stem completion/Test Delay-20 minutes/Source Spacing-1
minute/Test Context-rooms: 89
8.41. Test Task-recognition/Test Delay-20 minutes/Source Spacing-20
minutes/Test Context-rooms: 94
8.42. Test Task-stem completion/Test Delay-20 minutes/Source Spacing-20
minutes/Test Context-rooms: 94
8.43. Test Task-stem completion/Test Delay-20 minutes/Source Spacing-1
hour/Test Context-rooms: 99
8.44. Test Task-recognition/Test Delay-20 minutes/Source Spacing-1
hour/Test Context-rooms: 99
8.45. Test Task-free recall/Test Delay-20 minutes/Source Spacing-1
hour/Test Context-rooms: 74
8.46. Test Task-free recall/Test Delay-20 minutes/Source Spacing-20
minutes/Test Context-rooms: 69
8.47. Test Task-free recall/Test Delay-1 hour/Source Spacing-1 minute/Test
Context-rooms: 63
8.48. Test Task-recognition/Test Delay-1 hour/Source Spacing-1 minute/Test
Context-rooms: 88
8.49. Test Task-stem completion/Test Delay-1 hour/Source Spacing-1
minute/Test Context-rooms: 88
8.50. Test Task-free recall/Test Delay-1 hour/Source Spacing-20
minutes/Test Context-rooms: 68
8.51. Test Task-recognition/Test Delay-1 hour/Source Spacing-20
minutes/Test Context-rooms: 93
8.52. Test Task-stem completion/Test Delay-1 hour/Source Spacing-20
minutes/Test Context-rooms: 93
8.53. Test Task-free recall/Test Delay-1 hour/Source Spacing-1 hour/Test
Context-rooms: 73
8.54. Test Task-recognition/Test Delay-1 hour/Source Spacing-1 hour/Test
Context-rooms: 98
8.55. Test Task-stem completion/Test Delay-1 hour/Source Spacing-1 hour
/Test Context-rooms: 98
8.56. Test Task-stem completion/Test Delay-1 minute/Source Spacing-1
hour/Test Context-same: 90
8.57. Test Task-recognition/Test Delay-1 minute/Source Spacing-1 hour
/Test Context-same: 90
8.58. Test Task-free recall/Test Delay-1 minute/Source Spacing-1 hour/Test
Context-same: 65
8.59. Test Task-stem completion/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-same: 85
8.60. Test Task-recognition/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-same: 85
8.61. Test Task-stem completion/Test Delay-1 minute/Source Spacing-1
hour/Test Context-same: 90->80
8.62. Test Task-stem completion/Test Delay-1 minute/Source Spacing-1
hour/Test Context-same: 80->90
8.63. Test Task-stem completion/Test Delay-1 minute/Source Spacing-1
minute/Test Context-same: 80
8.64. Test Task-recognition/Test Delay-1 minute/Source Spacing-1 minute
/Test Context-same: 80
8.65. Test Task-free recall/Test Delay-1 minute/Source Spacing-20
minutes/Test Context-same: 60
8.66. Test Task-free recall/Test Delay-1 minute/Source Spacing-1 minute
/Test Context-same: 55
8.67. Test Task-free recall/Test Delay-20 minutes/Source Spacing-1 minute
/Test Context-same: 50
8.68. Test Task-free recall/Test Delay-20 minutes/Source Spacing-20
minutes/Test Context-same: 55
8.69. Test Task-free recall/Test Delay-20 minutes/Source Spacing-1
hour/Test Context-same: 60
8.70. Test Task-recognition/Test Delay-20 minutes/Source Spacing-1 minute
/Test Context-same: 75
8.71. Test Task-recognition/Test Delay-20 minutes/Source Spacing-20
minutes/Test Context-same: 80
8.72. Test Task-recognition/Test Delay-20 minutes/Source Spacing-1
hour/Test Context-same: 85
8.73. Test Task-stem completion/Test Delay-20 minutes/Source Spacing-1
minute/Test Context-same: 75
8.74. Test Task-stem completion/Test Delay-20 minutes/Source Spacing-20
minutes/Test Context-same: 80
8.75. Test Task-stem completion/Test Delay-20 minutes/Source Spacing-1
hour/Test Context-same: 85
8.76. Test Task-free recall/Test Delay-1 hour/Source Spacing-1 minute/Test
Context-same: 45
8.77. Test Task-free recall/Test Delay-1 hour/Source Spacing-20 minutes
/Test Context-same: 50
8.78. Test Task-free recall/Test Delay-1 hour/Source Spacing-1 hour/Test
Context-same: 55
8.79. Test Task-recognition/Test Delay-1 hour/Source Spacing-1 minute
/Test Context-same: 70
8.80. Test Task-recognition/Test Delay-1 hour/Source Spacing-20 minutes
/Test Context-same: 75
8.81. Test Task-recognition/Test Delay-1 hour/Source Spacing-1 hour/Test
Context-same: 80
8.82. Test Task-stem completion/Test Delay-1 hour/Source Spacing-1
minute/Test Context-same: 70
8.83. Test Task-stem completion/Test Delay-1 hour/Source Spacing-20
minutes/Test Context-same: 75
8.84. Test Task-stem completion/Test Delay-1 hour/Source Spacing-1
hour/Test Context-same: 80 33:36
ROUND THREE 46:51
1. Source Repetitions
1.1. 1 condition
1.1.1. First condition: 5
2. Source Spacings
2.1. 1 condition
2.1.1. First condition: 1
2.1.2. First condition: hour 50:46
3. Source Context 50:48
3.1. 3 conditions
3.1.1. First condition: Same
3.1.2. Second condition: Rooms
3.1.3. Third condition: Mood 50:55
4. Test Task
4.1. 3 conditions
4.1.1. First condition: Free Recall
4.1.2. Second condition: Recognition
4.1.3. Third condition: Stem completion
5. Test Delay
5.1. 3 conditions
5.1.1. First condition: 1
5.1.2. First condition: minute
5.1.3. Second condition: 20
5.1.4. Second condition: minutes
5.1.5. Third condition: 1
5.1.6. Third condition: hour
6. Test Context
6.1. 3 conditions
6.1.1. First condition: Same
6.1.2. Second condition: Rooms
6.1.3. Third condition: Mood
7. Hypotheses
7.1. Test Task-free recall/Test Delay-1 hour/Source Context-same/Test Context-
same: 20
7.2. Test Task-recognition/Test Delay-1 hour/Source Context-same/Test Context-
same: 30
7.3. Test Task-stem completion/Test Delay-1 hour/Source Context-same/Test
Context-same: 30
7.4. Test Task-free recall/Test Delay-1 hour/Source Context-rooms/Test Context-
same: 30
7.5. Test Task-recognition/Test Delay-1 hour/Source Context-rooms/Test Context-
same: 40
7.6. Test Task-stem completion/Test Delay-1 hour/Source Context-rooms/Test
Context-same: 40
7.7. Test Task-free recall/Test Delay-1 hour/Source Context-mood/Test Context-
same: 30
7.8. Test Task-recognition/Test Delay-1 hour/Source Context-mood/Test Context-
same: 40
7.9. Test Task-stem completion/Test Delay-1 hour/Source Context-mood/Test
Context-same: 40
7.10. Test Task-recognition/Test Delay-1 hour/Source Context-rooms/Test
Context-same: 40->30
7.11. Test Task-stem completion/Test Delay-1 hour/Source Context-
rooms/Test Context-same: 40->30
7.12. Test Task-stem completion/Test Delay-1 hour/Source Context-
mood/Test Context-same: 40->30
7.13. Test Task-recognition/Test Delay-1 hour/Source Context-mood/Test
Context-same: 40->30
7.14. Test Task-free recall/Test Delay-1 hour/Source Context-mood/Test
Context-same: 30->20
7.15. Test Task-free recall/Test Delay-1 hour/Source Context-rooms/Test
Context-same: 30->20
7.16. Test Task-free recall/Test Delay-1 hour/Source Context-same/Test
Context-same: 20->30
7.17. Test Task-recognition/Test Delay-1 hour/Source Context-same/Test
Context-same: 30->40
7.18. Test Task-stem completion/Test Delay-1 hour/Source Context-
same/Test Context-same: 30->40
7.19. Test Task-stem completion/Test Delay-20 minutes/Source Context-
same/Test Context-same: 50
7.20. Test Task-recognition/Test Delay-20 minutes/Source Context-
same/Test Context-same: 50
7.21. Test Task-free recall/Test Delay-20 minutes/Source Context-same/Test
Context-same: 40
7.22. Test Task-stem completion/Test Delay-20 minutes/Source Context-
rooms/Test Context-same: 40
7.23. Test Task-recognition/Test Delay-20 minutes/Source Context-
rooms/Test Context-same: 40
7.24. Test Task-free recall/Test Delay-20 minutes/Source Context-
rooms/Test Context-same: 30
7.25. Test Task-stem completion/Test Delay-20 minutes/Source Context-
mood/Test Context-same: 40
7.26. Test Task-recognition/Test Delay-20 minutes/Source Context-
mood/Test Context-same: 40
7.27. Test Task-free recall/Test Delay-20 minutes/Source Context-mood/Test
Context-same: 30
7.28. Test Task-stem completion/Test Delay-1 minute/Source Context-
same/Test Context-same: 55
7.29. Test Task-recognition/Test Delay-1 minute/Source Context-same/Test
Context-same: 55
7.30. Test Task-free recall/Test Delay-1 minute/Source Context-same/Test
Context-same: 45
7.31. Test Task-stem completion/Test Delay-1 minute/Source Context-
rooms/Test Context-same: 45
7.32. Test Task-recognition/Test Delay-1 minute/Source Context-rooms/Test
Context-same: 45
7.33. Test Task-free recall/Test Delay-1 minute/Source Context-rooms/Test
Context-same: 35
7.34. Test Task-free recall/Test Delay-1 minute/Source Context-mood/Test
Context-same: 35
7.35. Test Task-recognition/Test Delay-1 minute/Source Context-mood/Test
Context-same: 45
7.36. Test Task-stem completion/Test Delay-1 minute/Source Context-
mood/Test Context-same: 45
7.37. Test Task-free recall/Test Delay-1 minute/Source Context-same/Test
Context-rooms: 70
7.38. Test Task-recognition/Test Delay-1 minute/Source Context-same/Test
Context-rooms: 80
7.39. Test Task-stem completion/Test Delay-1 minute/Source Context-
same/Test Context-rooms: 80
7.40. Test Task-free recall/Test Delay-20 minutes/Source Context-same/Test
Context-rooms: 75
7.41. Test Task-recognition/Test Delay-20 minutes/Source Context-
same/Test Context-rooms: 85
7.42. Test Task-stem completion/Test Delay-20 minutes/Source Context-
same/Test Context-rooms: 85
7.43. Test Task-free recall/Test Delay-1 minute/Source Context-same/Test
Context-rooms: 70->80
7.44. Test Task-recognition/Test Delay-1 minute/Source Context-same/Test
Context-rooms: 80->90
7.45. Test Task-stem completion/Test Delay-1 minute/Source Context-
same/Test Context-rooms: 80->90
7.46. Test Task-free recall/Test Delay-20 minutes/Source Context-same/Test
Context-rooms: 75->85
7.47. Test Task-recognition/Test Delay-20 minutes/Source Context-
same/Test Context-rooms: 85->95
7.48. Test Task-stem completion/Test Delay-20 minutes/Source Context-
same/Test Context-rooms: 85->95
7.49. Test Task-free recall/Test Delay-20 minutes/Source Context-same/Test
Context-rooms: 85->75
7.50. Test Task-recognition/Test Delay-20 minutes/Source Context-
same/Test Context-rooms: 95->85
7.51. Test Task-stem completion/Test Delay-20 minutes/Source Context-
same/Test Context-rooms: 95->85
7.52. Test Task-free recall/Test Delay-1 hour/Source Context-same/Test
Context-rooms: 70
7.53. Test Task-recognition/Test Delay-1 hour/Source Context-same/Test
Context-rooms: 80
7.54. Test Task-stem completion/Test Delay-1 hour/Source Context-
same/Test Context-rooms: 80
7.55. Test Task-free recall/Test Delay-1 minute/Source Context-rooms/Test
Context-rooms: 75
7.56. Test Task-recognition/Test Delay-1 minute/Source Context-rooms/Test
Context-rooms: 85
7.57. Test Task-stem completion/Test Delay-1 minute/Source Context-
rooms/Test Context-rooms: 85
7.58. Test Task-free recall/Test Delay-20 minutes/Source Context-
rooms/Test Context-rooms: 70
7.59. Test Task-recognition/Test Delay-20 minutes/Source Context-
rooms/Test Context-rooms: 80
7.60. Test Task-stem completion/Test Delay-20 minutes/Source Context-
rooms/Test Context-rooms: 80
7.61. Test Task-free recall/Test Delay-1 hour/Source Context-rooms/Test
Context-rooms: 65
7.62. Test Task-recognition/Test Delay-1 hour/Source Context-rooms/Test
Context-rooms: 60
7.63. Test Task-stem completion/Test Delay-1 hour/Source Context-
rooms/Test Context-rooms: 60
7.64. Test Task-free recall/Test Delay-1 minute/Source Context-mood/Test
Context-rooms: 75
7.65. Test Task-recognition/Test Delay-1 minute/Source Context-mood/Test
Context-rooms: 85
7.66. Test Task-stem completion/Test Delay-1 minute/Source Context-
mood/Test Context-rooms: 85
7.67. Test Task-free recall/Test Delay-20 minutes/Source Context-mood/Test
Context-rooms: 65
7.68. Test Task-recognition/Test Delay-20 minutes/Source Context-
mood/Test Context-rooms: 75
7.69. Test Task-stem completion/Test Delay-20 minutes/Source Context-
mood/Test Context-rooms: 75
7.70. Test Task-free recall/Test Delay-1 hour/Source Context-mood/Test
Context-rooms: 60
7.71. Test Task-recognition/Test Delay-1 hour/Source Context-mood/Test
Context-rooms: 75
7.72. Test Task-free recall/Test Delay-1 minute/Source Context-same/Test
Context-mood: 80
7.73. Test Task-free recall/Test Delay-1 minute/Source Context-rooms/Test
Context-mood: 75
7.74. Test Task-free recall/Test Delay-1 minute/Source Context-mood/Test
Context-mood: 75
7.75. Test Task-recognition/Test Delay-1 minute/Source Context-same/Test
Context-mood: 90
7.76. Test Task-recognition/Test Delay-1 minute/Source Context-rooms/Test
Context-mood: 85
7.77. Test Task-recognition/Test Delay-1 minute/Source Context-mood/Test
Context-mood: 85
7.78. Test Task-stem completion/Test Delay-1 minute/Source Context-
same/Test Context-mood: 90
7.79. Test Task-stem completion/Test Delay-1 minute/Source Context-
rooms/Test Context-mood: 85
7.80. Test Task-stem completion/Test Delay-1 minute/Source Context-
mood/Test Context-mood: 85
7.81. Test Task-free recall/Test Delay-20 minutes/Source Context-same/Test
Context-mood: 75
7.82. Test Task-recognition/Test Delay-20 minutes/Source Context-
same/Test Context-mood: 85
7.83. Test Task-free recall/Test Delay-20 minutes/Source Context-
rooms/Test Context-mood: 70
7.84. Test Task-recognition/Test Delay-20 minutes/Source Context-
rooms/Test Context-mood: 80
7.85. Test Task-recognition/Test Delay-20 minutes/Source Context-
mood/Test Context-mood: 75
7.86. Test Task-free recall/Test Delay-20 minutes/Source Context-
mood/Test Context-mood: 65
7.87. Test Task-stem completion/Test Delay-20 minutes/Source Context-
same/Test Context-mood: 85
7.88. Test Task-stem completion/Test Delay-20 minutes/Source Context-
rooms/Test Context-mood: 80
7.89. Test Task-stem completion/Test Delay-20 minutes/Source Context-
mood/Test Context-mood: 75
7.90. Test Task-free recall/Test Delay-1 hour/Source Context-same/Test
Context-mood: 70
7.91. Test Task-recognition/Test Delay-1 hour/Source Context-same/Test
Context-mood: 80
7.92. Test Task-stem completion/Test Delay-1 hour/Source Context-
same/Test Context-mood: 80
7.93. Test Task-free recall/Test Delay-1 hour/Source Context-rooms/Test
Context-mood: 65
7.94. Test Task-recognition/Test Delay-1 hour/Source Context-rooms/Test
Context-mood: 60
7.95. Test Task-stem completion/Test Delay-1 hour/Source Context-
rooms/Test Context-mood: 60
7.96. Test Task-stem completion/Test Delay-1 hour/Source Context-
mood/Test Context-mood: 75
7.97. Test Task-recognition/Test Delay-1 hour/Source Context-mood/Test
Context-mood: 75
7.98. Test Task-free recall/Test Delay-1 hour/Source Context-mood/Test
Context-mood: 60 56:33
Appendix L
Novice C Simulation Data
ROUND ONE 0:08
1. Source Repetitions 0:10
1.1. 3 conditions 0:18
1.1.1. First condition: 2 0:19
1.1.2. Second condition: 4
1.1.3. Third condition: 4
2. Source Contexts 1:03
2.1. 3 conditions 1:13
2.1.1. Same
2.1.2. Rooms
2.1.3. Moods 1:43
3. Source Repetitions
3.1. Second condition: 4->3
3.2. Third condition: 4->3
3.3. First condition: 2->3
4. Source Repetitions
4.1. First condition: 3->2 2:49
4.2. 3 conditions -> 2 conditions 3:04
4.3. Second condition: 3->4
5. Test Tasks
5.1. First condition: Recognition
5.2. 2 conditions
5.3. Second condition: Free recall
6. Test Contexts 3:48
6.1. 3 conditions
6.1.1. First condition: Same
6.1.2. Second condition: Rooms
6.1.3. Third condition: Moods 4:02
7. Test Tasks
7.1. 2 conditions -> 1 condition
7.1.1. First condition: Recognition (no change)
8. Test Tasks
8.1. 1 condition -> 2 conditions
8.1.1. Second condition: Free Recall
9. Test Tasks
9.1. 2 conditions -> 1 condition
9.1.1. First condition: Recognition (no change)
10. Source Spacings
10.1. 1 condition
10.1.1. First condition: 10
10.1.2. First condition: Minutes
11. Test Tasks
12. Test Delays
12.1. 1 condition
12.1.1. First condition: 10
12.1.2. First condition: Minutes
13. Done with design
14. Hypotheses
14.1. Source Context-same/Repetitions-2/Test Context-same: 15
14.2. Source Context-rooms/Repetitions-2/Test Context-same: 25
14.3. Source Context-moods/Repetitions-2/Test Context-same: 25
14.4. Source Context-same/Repetitions-4/Test Context-same: 20
14.5. Source Context-rooms/Repetitions-2/Test Context-same: 40
14.6. Source Context-moods/Repetitions-2/Test Context-same: 40
14.7. Source Context-same/Repetitions-2/Test Context-rooms: 15
14.8. Source Context-rooms/Repetitions-2/Test Context-rooms: 20
14.9. Source Context-moods/Repetitions-2/Test Context-rooms: 15
14.10. Source Context-same/Repetitions-4/Test Context-rooms: 20
14.11. Source Context-rooms/Repetitions-2/Test Context- rooms: 30
14.12. Source Context-moods/Repetitions-2/Test Context-rooms: 20
14.13. Source Context-same/Repetitions-2/Test Context-moods: 15
14.14. Source Context-rooms/Repetitions-2/Test Context-moods: 15
14.15. Source Context-moods/Repetitions-2/Test Context-moods: 20
14.16. Source Context-same/Repetitions-4/Test Context-moods: 20
14.17. Source Context-rooms/Repetitions-2/Test Context-moods: 20
14.18. Source Context-moods/Repetitions-2/Test Context-moods: 30 9:49
ROUND TWO 27:38
1. Source Contexts
1.1. 3 conditions
1.1.1. Same
1.1.2. Rooms
1.1.3. Moods
2. Source Repetitions
2.1. 1 condition
2.1.1. First condition: 5
3. Test Contexts
3.1. 3 conditions
3.1.1. First condition: Same
3.1.2. Second condition: Rooms
3.1.3. Third condition: Moods
4. Source Spacings
4.1. 1 condition
4.1.1. First condition: 5
4.1.2. First condition: minutes
5. Test Delays
5.1. 1 condition
5.1.1. First condition: 5
5.1.2. First condition: minutes
6. Test Tasks
6.1. 1 condition
6.1.1. First condition: Recognition
7. Hypotheses
7.1. Source Context-same/Test Context-same: 25
7.2. Source Context-rooms/Test Context-same: 45
7.3. Source Context-same/Test Context-rooms: 20
7.4. Source Context-same/Test Context-moods: 20
7.5. Source Context-rooms/Test Context-rooms: 50
7.6. Source Context-moods/Test Context-moods: 50
7.7. Source Context-moods/Test Context-rooms: 25
7.8. Source Context-rooms/Test Context-moods: 25
7.9. Source Context-moods/Test Context-same: 45 31:13
ROUND THREE 47:02
1. Source Contexts
1.1. 3 conditions
1.1.1. Same
1.1.2. Rooms
1.1.3. Moods
2. Source Spacings
2.1. 1 condition
2.1.1. First condition: 5
2.1.2. First condition: minutes
3. Source Repetitions
3.1. 2 conditions
3.1.1. First condition: 3
3.1.2. Second condition: 5
3.1.3. First condition: 3->2
4. Test Tasks
4.1. 1 condition
4.1.1. First condition: Free recall
5. Test Delays
5.1. 1 condition
5.1.1. First condition: 15
5.1.2. First condition: minutes
6. Test Contexts
6.1. 1 condition
6.1.1. First condition: same
7. Hypotheses
7.1. Source Context-same/Source Repetitions-2: 20
7.2. Source Context-moods/Source Repetitions-5: 60
7.3. Source Context-same/Source Repetitions-5: 35
7.4. Source Context-rooms/Source Repetitions-2: 30
7.5. Source Context-moods/Source Repetitions-2: 30
7.6. Source Context-same/Source Repetitions-5: 35->40
7.7. Source Context-rooms/Source Repetitions-5: 65
7.8. Source Context-moods/Source Repetitions-5: 60->65 50:10
ROUND FOUR 1:04:04
1. Source Contexts
1.1. 2 conditions
1.1.1. Same
1.1.2. Rooms
2. Source Repetitions
2.1. 2 conditions
2.1.1. First condition: 2
2.1.2. Second condition: 5
3. Source Spacings
3.1. 1 condition
3.1.1. First condition: 10
3.1.2. First condition: minutes
4. Test Contexts
4.1. 1 condition
4.1.1. First condition: same
5. Test Delays
5.1. 1 condition
5.1.1. First condition: 10
5.1.2. First condition: minutes
6. Test Tasks
6.1. 1 condition
6.1.1. First condition: Free Recall
7. Hypotheses
7.1. Source Context-same/Source Repetitions-2: 20 1:04:18
7.2. Source Context-rooms/Source Repetitions-2: 35 1:04:23
7.3. Source Context-same/Source Repetitions-5: 40
7.4. Source Context-rooms/Source Repetitions-5: 65 1:06
ROUND FIVE 1:18:52
1. Source Repetitions
1.1. 2 conditions
1.1.1. First condition: 2
1.1.2. Second condition: 5
2. Source Contexts
2.1. 2 conditions
2.1.1. Same
2.1.2. Rooms
3. Source Spacings
3.1. 1 condition
3.1.1. First condition: 10
3.1.2. First condition: minutes
4. Test Tasks
4.1. 1 condition
4.1.1. First condition: Free Recall
5. Test Delays
5.1. 1 condition
5.1.1. First condition: 10
5.1.2. First condition: minutes
6. Test Contexts
6.1. 1 condition
6.1.1. First condition: same
7. Hypotheses 1:20:36
7.1. Source Context-same/Source Repetitions-2: 20 1:20:54
7.2. Source Context-rooms/Source Repetitions-5: 24 1:21:14
7.3. Source Context-same/Source Repetitions-5: 23 1:21:16
7.4. Source Context-rooms/Source Repetitions-2: 22 1:21:17