Running Head: COGNITIVE TASK ANALYSIS
USING COGNITIVE TASK ANALYSIS TO CAPTURE EXPERT READING
INSTRUCTION IN INFORMATIONAL TEXT FOR STUDENTS WITH MILD TO
MODERATE LEARNING DISABILITIES
By
Diana Zepeda-McZeal
________________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
August 2014
Copyright 2014 Diana Zepeda-McZeal
Dedication
Believe and achieve. Each fall, monarch butterflies in Maine begin an
unbelievable journey to a hilltop in Mexico. How do they do it? They focus on the goal,
not the difficulties. Each day they take their bearings and set off, allowing their instincts
and desire to steer them. They accept what comes: some winds blow them off course,
others speed them along. They keep flying until, one day, they arrive. It is their
determination that makes the difference. – Author Unknown
With that said, I dedicate this journey and arrival to my husband, Ja, and to my
two beautiful children, Idalis and Jaaziah. Because of their unwavering love, support,
and sacrifice, I was able to believe and achieve. This is for you. I love you.
Acknowledgements
First and foremost, I would like to express my sincerest gratitude to Dr. Kenneth
Yates, my dissertation chairperson, mentor, and friend. Thank you for your support,
great kindness, generosity of time, and unending patience. Thank you for sharing your
vast knowledge and expertise, and for your encouragement when I so desperately needed
it. I truly could not have completed this journey without you. I am also grateful to Drs.
Robert Rueda and Pat Gallagher for their time, willingness to serve as committee
members, and invaluable feedback on this research.
I am greatly indebted to the five special education teachers who participated in
this study. Your teaching methodologies and passion for student learning have inspired
and influenced my own instructional practices. Thank you for your generous time and
commitment to this research. I am also thankful to district administrators for allowing
this research to be conducted within the district and for identifying expert special
education teachers for cognitive task analysis.
Last, but not least, I am eternally grateful to my family and friends for their love
and support throughout my participation in this doctoral program. To the ladies in the
Cohort of 2011 with whom I spent so much precious time, thank you for making each
and every class memorable, insightful, and full of laughter. May the REE be with you.
Table of Contents
Dedication
Acknowledgements
List of Tables
List of Figures
List of Abbreviations
Abstract
Chapter One: Overview of the Study
Statement of the Problem
3i+3r Independent and 1i+3r Incremental Methods of Knowledge Elicitation
Purpose of the Study
Research Questions
Question 1
Question 1a
Question 2
Question 3
Definition of Terms
Chapter Two: Literature Review
Addressing the Instructional Needs of Students with Mild to Moderate Learning Disabilities in Reading
Reading Instruction in Informational Text
Strategy Instruction
Text Structures
Beyond the Research-to-Practice Gap in Special Education
The Role of Professional Development
Professional Learning
Knowledge and Skills
Summary
Cognitive Architecture for Learning
Declarative Knowledge
Procedural Knowledge
Conditional Knowledge
Expert Performance and Expertise
Deliberate Practice in Expert Performance
Automaticity in Expert Performance
Automaticity and Expert Omissions
Transfer of Expert Knowledge
Cognitive Task Analysis
Brief History of Cognitive Task Analysis
Cognitive Task Analysis Methodologies
Multistage Process of Conducting Cognitive Task Analysis
Collect Preliminary Knowledge
Identify Knowledge Representations
Apply Focused Knowledge Elicitation Methods
Analyze and Verify Data Acquired
Format Results for the Intended Application
Comparison of Knowledge Elicitation Methods
Concepts, Processes, and Principles
Critical Decision Method
Precursor, Action, Result, and Interpretation
Effectiveness of Cognitive Task Analysis
Meta-Analysis of Studies
Cognitive Task Analysis in Instruction
Instructional Design
Summary
Chapter Three: Methodology
Overview of Method
Research Questions
Question 1
Question 1a
Question 2
Question 3
Task
Participant Selection
Study Design
Instrumentation
Data Collection
Data Analysis
Chapter Four: Results
Overview of Results
Inter-Coder Reliability
Research Questions
Question 1
Total Action and Decision Steps
Expert Contribution of Action and Decision Steps
3i+3r Independent Method Analysis
1i+3r Incremental Method Analysis
Question 1a
Follow-Up Interviews
Preliminary Gold Standard Protocol Review
Question 2
Total Knowledge Omissions
Analysis of 3i+3r Independent Method Omissions
Analysis of 1i+3r Incremental Method Omissions
Question 3
Relationship between Self-Reported Teacher Data and Knowledge Recall
Chapter Five: Discussion
Overview of Study
Process of Conducting Cognitive Task Analysis
Selection of Experts
Collection of Data
Discussion of Findings
Research Question 1
3i+3r Independent Method Versus 1i+3r Incremental Method
Action Steps Versus Decision Steps
Research Question 1a
Expert Review of Preliminary Interview Reports and Preliminary Gold Standard Protocols
Research Question 2
3i+3r Independent Method Versus 1i+3r Incremental Method
Research Question 3
Expert Knowledge Recall
Limitations
Researcher Experience in the Task Domain
Automated Knowledge and Expert Review
Identification of Experts for the Study
Implications and Future Research
Conclusion
References
Appendix A: 3i+3r Independent Method Flow Chart
Appendix B: 1i+3r Incremental Method Flow Chart
Appendix C: Teacher Biographical Data Survey
Appendix D: Teacher Biographical Data Survey: Data Analysis Spreadsheet
Appendix E: Cognitive Task Analysis Interview Protocol
Appendix F: Verification of Gold Standard Protocol Letter
Appendix G: Coding Spreadsheet: 3i+3r Independent Method Gold Standard Protocol Procedures
Appendix H: Coding Spreadsheet: 1i+3r Incremental Method Gold Standard Protocol Procedures
Appendix I: 3i+3r Independent Method Gold Standard Protocol
Appendix J: 1i+3r Incremental Method Gold Standard Protocol
List of Tables
Table 1: Comparison of Overall Action and Decision Steps from 3i+3r Independent Method and 1i+3r Incremental Method Gold Standard Protocols
Table 2: Additional Expert Knowledge Recall During Follow-up Interviews
Table 3: Additional Expert Knowledge Recall During the Preliminary Gold Standard Protocol Review Phase
Table 4: Expert Knowledge Omissions from the 3i+3r Independent Method Gold Standard Protocol
Table 5: Expert Knowledge Omissions from the 1i+3r Incremental Method Gold Standard Protocol
Table 6: Correlations between Self-Reported Expert Biographical Data and Knowledge Recall
List of Figures
Figure 1: Total Expert Knowledge Recall for the 3i+3r Independent Method and 1i+3r Incremental Method
Figure 2: Total Expert Knowledge Omissions for the 3i+3r Independent Method and 1i+3r Incremental Method Gold Standard Protocols
List of Abbreviations
ACT: Adaptive Control of Thought
ACT-R: Adaptive Control of Thought-Rational
CDM: Critical Decision Method
CPP: Concepts, Processes, and Principles
CTA: Cognitive Task Analysis
IRB: Institutional Review Board
NICHD: National Institute of Child Health and Human Development
PARI: Precursor, Action, Result, and Interpretation
SME: Subject Matter Expert
1i+3r: One Independent Interview + Three Reviews
3i+3r: Three Independent Interviews + Three Reviews
Abstract
This study applies cognitive task analysis (CTA) to capture expert reading instruction in
informational text for students with mild to moderate learning disabilities in third through
fifth grade. CTA extracts the highly automated and unconscious knowledge and skills
experts employ to problem solve and perform complex tasks. The purpose of this study
was to compare the relative effectiveness of two knowledge elicitation methods of CTA–
3i+3r independent method and 1i+3r incremental method–in extracting declarative and
procedural knowledge from expert special education teachers as they provide step-by-
step descriptions of a routine reading instruction task. Also, this study sought to examine
knowledge omissions of expert special education teachers to determine if the up-to-70%
rule of expert omissions in other fields of study would be upheld in education. Lastly, the
study explored relationships between expert biographical data and the total number of
critical knowledge steps yielded from expert special education teachers using CTA. This
study employed a mixed-methods approach to capturing and analyzing data collected
from five expert special education teachers during semi-structured CTA interviews.
Findings from this study upheld the up-to-70% rule of expert knowledge omissions and
indicate that the 3i+3r independent method yielded a greater number of critical
knowledge steps than the 1i+3r incremental method. Although not statistically
significant due to the small sample of expert special education teachers, findings suggest
a moderate relationship between total hours of professional development and total
decision steps captured through CTA. This study contributes to research on the
effectiveness of CTA to elicit automated and unconscious expert knowledge across
myriad domains.
CHAPTER ONE: OVERVIEW OF THE STUDY
Statement of the Problem
“Teachers confront complex decisions that rely on many different kinds of
knowledge and judgment and that can involve high-stakes outcomes for students’
futures” (Darling-Hammond & Bransford, 2005, p. 1). These decisions involve complex
processes in the context of teaching that pertain to instruction and judgments about
student learning. To this end, teachers are “a source and creator of knowledge and skills
needed for instruction” (Cohen & Ball, 1999, p. 6). To effectively meet the needs of
students who learn in different ways, teachers must be able to foster student learning in
varied ways across learning domains. Moreover, there are instructional practices and
strategies common among highly effective teachers that improve learning outcomes
for all students (Darling-Hammond & Bransford, 2005; Guskey & Sparks, 2002). While
effective instructional approaches and strategies for differentiating instruction can
address the needs of students with learning disabilities, the extent to which these
instructional practices are broadly implemented in special education classrooms has been
a long-standing concern in academic research (Bondy & Brownell, 2004; Landrum,
Tankersley, & Kauffman, 2003; McLeskey & Billingsley, 2008; Vaughn, Levy,
Coleman, & Bos, 2002). One apparent contributor to this knowledge and
implementation gap is pre-service and in-service teacher preparation (Elmore, 2002;
Sparks, 2002). Despite such knowledge gaps, teachers can implement highly effective
strategies to help students with varied learning needs develop complex reading
comprehension skills, particularly in the domain of informational text, when provided
with targeted professional learning opportunities to
enhance knowledge and skill. As such, professional development must ensure learning
methods for teachers match the learning processes teachers are expected to provide their
students (Sparks, 2002). According to Sparks (2002), “The most powerful forms of
professional development engage teachers in the continuous improvement of their
teaching and expands the repertoire of instructional approaches they use to teach that
content” (p. 99). These professional learning opportunities can optimize novice teacher
learning by incorporating explicit strategies and procedures adopted by expert teachers.
Standards for teaching and learning are at an all-time high, as teachers must equip
all students with the necessary knowledge and skills to become proficient learners
(Darling-Hammond & Bransford, 2005; Elmore, 2002; Sparks, 2002). These standards
exacerbate the need for well-qualified special education teachers to provide high-quality
instruction to students with special needs (McLeskey & Billingsley, 2008). Since teacher
expertise and ability are critical contributors to student learning and achievement gains
(Ferguson, 1991; Sparks, 2002), teachers must become adaptive experts in the content
areas in which they teach (Darling-Hammond & Bransford, 2005). Teachers with subject
matter expertise who employ expert teaching methods are far more effective in positively
impacting student learning outcomes than teachers who are not experts in the domains in
which they teach (Darling-Hammond, 2007). Teacher preparation is among the most
important and demanding kinds of professional preparation (Darling-Hammond &
Bransford, 2005); therefore, ongoing professional development that continues to bolster
novice teacher learning and directly impact student achievement is paramount.
The domain of reading instruction involves complex tasks for expert teachers to
perform and novice teachers to learn. Complex reading tasks deploy both declarative and
procedural knowledge types and therefore require controlled, conscious knowledge and
automated, unconscious knowledge to execute (Clark, 2014; Clark & Elen, 2006; van
Merriënboer, Clark, & de Croock, 2002). In fact, almost all task performance involves
the interaction between declarative and procedural knowledge (Clark & Elen, 2006).
While experts can solve complex problems quickly and efficiently, as a consequence of
their expertise they are unable to completely and accurately recall the essential
knowledge and skills required for novices to replicate and perform these tasks (Clark,
2014). Experts develop such expertise through deliberate and repeated practice within a
particular domain, whereby the knowledge and skills acquired through the repetition of
complex tasks gradually become automated (Ericsson, Krampe, & Tesch-Römer, 1993).
Automated knowledge and expertise limit the demands placed on working
memory during complex task execution. Research suggests that as a result of automated
expertise, experts may omit or distort up to 70% of critical procedural steps when
describing decision-making and problem-solving strategies involved in performing
complex tasks (Clark, 2014; Feldon & Clark, 2006). This high omission rate is of
particular concern when experts teach others how to perform demanding tasks as
omissions in expert knowledge confound the transfer of expertise to novice learners,
particularly during professional development or mentorship opportunities. For the most
part, experts believe that the information they are providing to novices is accurate and
complete; however, research on expertise and the contextualized nature of knowledge
structures indicates that experts may be largely unaware of what they know (Clark, 2014;
Clark & Elen, 2006). When experts attempt to teach their knowledge and skills to
novices, they are unaware of how most of their own decisions are made due to the
automaticity of their decision-making (Ericsson et al., 1993). As a result, the critical
decision-making knowledge novices need to learn and apply to task performance is often
unintentionally omitted from experts’ instruction (Clark, 2014). In effect, when experts
teach novices demanding tasks, they tend to describe what to do, rather than how to do it
(Feldon & Clark, 2006). Consequently, most professional development for teachers does
not focus on content knowledge or instructional practices; instead, teachers tend to
“passively ‘sit and get’ the wisdom of ‘experts’” (Sparks, 2002, p. 23), and do not engage
in active learning. This is a matter of utmost concern as the professional learning of
teachers is a critical factor in the quality of their teaching, particularly for inexperienced,
novice teachers, and in student learning (Sparks, 2002).
Cognitive science and human performance research have resulted in knowledge
elicitation methods that capture the analyses and decision-making processes experts use
to problem solve and perform tasks (Clark, 2014). In the early twentieth century, prior to
the rise of cognitive task analysis (CTA), experts’ observable behavior was captured
through behavioral task analysis and these overt behaviors were used to develop training
programs for novices (Hoffman & Militello, 2009). While this method captured the
observable actions of experts as they executed complex tasks, behavioral task analysis
alone was insufficient in capturing the unobservable and nonconscious decisions experts
employ in complex problem solving. Knowledge elicitation methodologies of CTA
capture experts’ overt and covert declarative and procedural knowledge to develop
instruction for novices that more completely replicates expert task performance (Clark,
2014). The application of CTA provides a systematic approach to capturing expertise
and transferring expert knowledge to novices when incorporated into instruction, training,
and ongoing professional development.
Over 100 knowledge elicitation strategies have been developed to identify,
analyze, and structure the knowledge and skills experts apply to performing complex
tasks (Yates & Feldon, 2011). The knowledge elicitation strategies of interest to the current
study include the 3i+3r independent and 1i+3r incremental methods of CTA. With both
the 3i+3r independent method and 1i+3r incremental method, automated and controlled
knowledge is captured from multiple subject matter experts (SMEs) to define the gold
standard of practice for a set of skills.
3i+3r Independent and 1i+3r Incremental Methods of Knowledge Elicitation
The 3i+3r independent method has a particular application to instructional design
and follows the five-stage process common to most dominant knowledge elicitation
methods. The multistage process of CTA includes (a) collecting preliminary domain-
specific knowledge, (b) identifying the types of knowledge associated with the task, (c)
applying the knowledge elicitation technique in a semi-structured interview, (d) verifying
and analyzing results from interviews, and subsequently, (e) formatting the edited
knowledge for instructional design and job aids (Clark, Feldon, van Merriënboer, Yates,
& Early, 2008). The 3i+3r independent method has its origins in the evidence-based
CTA methods of Precursor, Action, Result, and Interpretation (PARI; Hall, Gott, &
Pokorny, 1995), Critical Decision Method (CDM; Klein, Calderwood, & MacGregor,
1989), and Concepts, Processes, and Principles (CPP; Clark, 2006; Clark et al., 2008).
Flynn (2012) coined the term 3i+3r to describe the knowledge elicitation method in
which a knowledge analyst interviews three SMEs independently–thus the 3i–to elicit the
cognitive strategies experts use to perform the same demanding task. To maintain
fidelity with research on the optimal number of SMEs required to capture a complete
description of a skill set, three experts are selected for participation in knowledge
elicitation interviews (Bartholio, 2010; Chao & Salvendy, 1994). Each of the three semi-
structured interviews is digitally recorded, transcribed, coded, and analyzed to produce
three independent interview reports, which are reviewed for accuracy and completeness
and revised by the respective expert. This review and revision process by each of the
three SMEs constitutes the 3r. In the 3i+3r independent method, the three final
independent interview reports are aggregated into a preliminary gold standard protocol
and once again submitted to the three experts for verification and revision, with the
option of submitting the preliminary gold standard protocol to a fourth and senior expert
for review and revision to generate a final gold standard protocol. A visual
representation of the 3i+3r independent method can be found in Appendix A.
In Flynn’s (2012) research study on the relative efficiency of two knowledge
elicitation methods, the 1i+3r incremental method was introduced as an alternative to the
3i+3r independent method. The 1i+3r term coined by Flynn (2012) represents a method
whereby one expert, rather than three, is interviewed independently to elicit the
knowledge and skills required to perform a task within the expert’s domain of expertise–
thus the 1i. One expert undergoes the same knowledge elicitation process as in the 3i+3r
independent method with an in-depth, semi-structured interview that is recorded,
transcribed, coded, analyzed, and later submitted to the expert in an independent
interview report for review and revision. The final independent interview report is then
presented to a second expert during a semi-structured knowledge elicitation interview.
The second SME is prompted to review the procedures set forth in the interview report
and to revise or make changes as necessary to capture a complete description of the task.
The revised interview report is then presented to a third and final SME for review and
revision. The review and revision process for the three SMEs is represented by the 3r.
The 1i+3r incremental method thus remains consistent with research on the optimal
number of SMEs, as knowledge is still elicited from three experts (Chao & Salvendy, 1994). The
final revised interview report is then formatted into a preliminary gold standard protocol.
Similar to the 3i+3r independent method, the preliminary gold standard protocol is
submitted to the three SMEs for review and revision to generate a final gold standard
protocol. A flowchart of the 1i+3r incremental method can be found in Appendix B.
Knowledge elicited from the 3i+3r independent and 1i+3r incremental methods
can then be formatted and used for instructional design, as well as job aids and worked
examples of the task in training, or professional development.
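The two elicitation workflows described above can be summarized as a small data pipeline. The following Python sketch is purely illustrative: every class and function name is hypothetical, and an expert's review pass is modeled as an identity step rather than the substantive revision an SME would actually perform.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class InterviewReport:
    """Action and decision steps elicited from a CTA interview (illustrative)."""
    action_steps: List[str] = field(default_factory=list)
    decision_steps: List[str] = field(default_factory=list)


def review_and_revise(report: InterviewReport, sme: str) -> InterviewReport:
    # Placeholder for one "r": in practice the SME adds, corrects,
    # or deletes steps; modeled here as an identity pass.
    return report


def three_i_three_r(interviews: List[InterviewReport],
                    smes: List[str]) -> InterviewReport:
    """3i+3r: three independent interviews, each reviewed by its own SME,
    then aggregated into a preliminary gold standard protocol."""
    reviewed = [review_and_revise(r, s) for r, s in zip(interviews, smes)]
    gold = InterviewReport()
    for r in reviewed:  # aggregate unique steps across the three reports
        gold.action_steps += [s for s in r.action_steps
                              if s not in gold.action_steps]
        gold.decision_steps += [s for s in r.decision_steps
                                if s not in gold.decision_steps]
    return gold  # subsequently verified again by all three SMEs


def one_i_three_r(interview: InterviewReport,
                  smes: List[str]) -> InterviewReport:
    """1i+3r: one interview passed incrementally through three SME reviews."""
    report = interview
    for s in smes:  # each SME reviews and revises the running report in turn
        report = review_and_revise(report, s)
    return report  # then formatted into a preliminary gold standard protocol
```

The structural difference between the methods is visible in the two functions: 3i+3r aggregates three parallel reports, whereas 1i+3r refines a single report serially.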
Purpose of the Study
To replicate the expert delivery of reading instruction for informational or
expository text, and thereby increase access to literacy for students with special needs,
teacher training based on demonstrated instructional methods of expert special education
teachers is highly desirable. CTA is a knowledge elicitation method that effectively
captures highly automated, unobservable knowledge and skills experts use to solve
difficult problems and perform complex tasks (Clark et al., 2008; Yates & Feldon, 2011).
Moreover, CTA-informed training has proven more effective in the transfer of expert
skills to novice learners when compared to training where expertise has not been captured
(Tofel-Grehl & Feldon, 2013).
The purpose of this study is to examine the effectiveness of CTA in capturing the
expertise of special education teachers who provide reading instruction in informational
text to students in grades 3-5 with mild to moderate learning disabilities. In doing so, the
study implemented the 3i+3r independent and 1i+3r incremental methods of CTA to
capture the knowledge of expert special education teachers as they describe expository
literacy instruction. The two knowledge elicitation methods were then compared to
determine their relative effectiveness in capturing tacit and underlying cognitive
knowledge structures from expert special education teachers. Similar to Flynn’s (2012)
study, which compared the 3i+3r independent method and 1i+3r incremental method to
determine whether the 1i+3r incremental method yielded more decision steps than its
longstanding counterpart, this present study sought to replicate–in part–Flynn’s
methodology in a K-12 educational setting with expert special education teachers.
Knowledge captured in this process was subsequently analyzed for expert omissions of
critical declarative and procedural knowledge novice teachers must know and understand
to perform complex reading tasks efficiently and effectively.
Research Questions
This current study contributes to the body of existing research on the effectiveness
of CTA to capture both observable, overt behavior and unobservable, covert cognitive
processes from experts, and to research on expert knowledge omissions when recalling
highly automated cognitive processes. Moreover, this study contributes to research
comparing the relative effectiveness of two knowledge elicitation methods in capturing
declarative and procedural knowledge–as measured by critical action and decision steps–
experts employ while executing complex problem-solving tasks. The following research
questions guided this descriptive study.
Question 1. Will the number of action and decision steps in the gold standard
protocol produced by the 3i+3r independent method differ from the number of action and
decision steps in the gold standard protocol produced by the 1i+3r incremental method?
Question 1a. What additional action and decision steps do experts recall during
follow-up interviews to review initial interview reports and during review of the
preliminary gold standard protocols?
Question 2. What percentage of action and decision steps do expert special
education teachers omit when describing how they teach informational literacy to
students with mild to moderate learning disabilities, in grades 3-5, using the 3i+3r
independent and 1i+3r incremental methods?
Question 3. Does an expert special education teacher's self-reported
biographical data relate to the number of action and decision steps yielded when
describing reading instruction in informational text for students in grades 3-5 with mild
to moderate learning disabilities?
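Computationally, the omission percentage in Question 2 reduces to a simple ratio against the final gold standard protocol. The sketch below is a hypothetical illustration (the function name and step labels are not from the study):

```python
def omission_rate(recalled_steps, gold_standard_steps):
    """Percentage of gold-standard action/decision steps an expert omitted.

    Illustrative helper only; steps are represented as simple labels.
    """
    gold = set(gold_standard_steps)
    omitted = gold - set(recalled_steps)
    return 100.0 * len(omitted) / len(gold)


# An expert who recalls 3 of 10 gold-standard steps omits 70%,
# the upper bound suggested by the up-to-70% rule.
rate = omission_rate({"s1", "s2", "s3"}, {f"s{i}" for i in range(1, 11)})
# rate == 70.0
```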
Definition of Terms
Adaptive expertise: The ability to rapidly retrieve and accurately apply
appropriate knowledge and skills to solve problems in one's field of expertise; to possess
cognitive flexibility in evaluating and solving problems (Gott, Hall, Pokorny, Dibble, &
Glaser, 1993; Hatano & Inagaki, 2000).
Automaticity: An unconscious fluidity of task performance following sustained
and repeated execution; results in automated mode of functioning (Anderson, 1996;
Ericsson, 2004).
Automated knowledge: Knowledge about how to do something; operates outside
of conscious awareness due to repetition of task (Wheatley & Wegner, 2001).
Cognitive load: Simultaneous demands placed on working memory during
information processing that can present challenges to learners (Sweller, 1988).
Cognitive tasks: Tasks that require mental effort and engagement to perform
(Clark & Estes, 1996).
Cognitive task analysis: Knowledge elicitation techniques for extracting implicit
and explicit knowledge from multiple experts for use in instruction and instructional
design (Clark et al., 2008; Schraagen, Chipman, & Shalin, 2000).
Conditional knowledge: Knowledge about why and when to do something; a type
of procedural knowledge to facilitate the strategic application of declarative and
procedural knowledge to problem solve (Paris, Lipson, & Wixson, 1983).
Declarative knowledge: Knowledge about why or what something is; information
that is accessible in long-term memory and consciously observable in working memory
(Anderson, 1996; Clark & Estes, 2006).
Expertise: The point at which an expert acquires skills and knowledge essential
for consistently superior performance and complex problem solving in a domain;
typically develops after a minimum of 10 years of deliberate practice or repeated
engagement in domain-specific tasks (Ericsson, 2004).
Procedural knowledge: Knowledge about how and when something occurs;
acquired through instruction or generated through repeated practice (Anderson, 1982;
Clark & Estes, 1996).
Subject matter expert: An individual with extensive experience in a domain who
can perform tasks rapidly and successfully; demonstrates consistent superior performance
or ability to solve complex problems (Clark et al., 2008).
CHAPTER TWO: LITERATURE REVIEW
Addressing the Instructional Needs of Students with Mild to Moderate Learning
Disabilities in Reading
Special education differs from general education for students with learning
disabilities when it is explicit, intensive, and supportive (Torgesen, 1996). Students with
learning disabilities require explicit and systematic strategy instruction with differentiated
learning opportunities closely related to instructional need (Denton, Vaughn, & Fletcher,
2003; Vaughn, Gersten, & Chard, 2000; Vaughn & Linan-Thompson, 2003). Moreover,
instruction should be overt and visible; offered in small, interactive group formats;
formulated in a way that controls task difficulty; and include specific elements of literacy,
as well as the application of metacognitive strategies (Denton et al., 2003). In the same
vein, a meta-analysis conducted by Swanson (1999) on higher order thinking skills of
students with learning disabilities revealed three instructional components that produce
the strongest impact on student learning: control of task difficulty; small interactive
groups of six or fewer students; and directed response questioning. Further, these
students require many opportunities to practice literacy skills and to obtain accurate
performance feedback from teachers (Gersten, Fuchs, Williams, & Baker, 2001). All in
all, the implementation of high-quality, effective remedial interventions in instructional
programs is required to produce significant learning outcomes for students with learning
disabilities (Denton et al., 2003; Vaughn & Linan-Thompson, 2003).
Reading Instruction in Informational Text
Expository literacy skills are central to developing what Duke (2000) coined as
semiotic capital, or the discourse knowledge valued in schools, the community, and
ultimately in the workplace. With the adoption of the Common Core State Standards, the focus on
developing literacy skills with simple narratives has shifted to reading to gain knowledge
and understanding from informational text (Brozo, 2010). That is, students must know
how to read for information and to critically evaluate information presented in text
(Caswell & Duke, 1998; Duke, Bennett-Armistead, & Roberts, 2002; Saenz & Fuchs,
2002). According to research, the so-called slump in fourth grade reading achievement
may be attributed to the scarcity of expository reading instruction and to limited
experiences with informational text in the primary grades (Brozo, 2005; Chall, Jacobs, &
Baldwin, 1990; Saenz & Fuchs, 2002; Williams, Hall, & Lauer, 2004). For decades,
researchers have argued that deficiencies in reading comprehension can be mitigated by
explicitly teaching students, particularly at-risk students and those with learning
disabilities, to interact with expository text in the primary grades and across content
areas, given that most student learning in the upper grades is based upon expository text
(Duke, 2000; Duke et al., 2002; Gersten et al., 2001; Jitendra, Burgess, & Gajria, 2011;
Saenz & Fuchs, 2002; Williams et al., 2004; Williams, Stafford, Lauer, Hall, & Pollini,
2009).
Strategy Instruction
To teach students to actively engage in productive reading, comprehension
instruction should entail: explicit description of the strategy and when and how it should
be used; teacher and/or student modeling of the strategy in action; collaborative use of the
strategy in action; guided practice using the strategy with gradual release of
responsibility; and independent use of the strategy (Duke & Pearson, 2002; Duke,
Pearson, Strachan, & Billman, 2011). Strategy instruction helps students understand
when and how to flexibly apply reading strategies to guide their learning (Duke &
Pearson, 2002; Duke et al., 2011; Dymock & Nicholson, 2010; Vaughn et al., 2000;
Vaughn & Linan-Thompson, 2003). In addition to this approach, establishing effective
instruction specific to expository reading is vital because informational material is
especially difficult for students who struggle with reading (Dymock & Nicholson, 2010;
Jitendra et al., 2011; Saenz & Fuchs, 2002).
Strategies that improve reading comprehension include: (1) setting purposes for
reading (Duke et al., 2011); (2) previewing and predicting (Duke et al., 2011; National
Institute of Child Health and Human Development [NICHD], 2000); (3) activating prior
knowledge (Ambrose, Bridges, Lovett, DiPietro, & Norman, 2010; Duke et al., 2011;
Pressley, 2002); (4) response questioning for comprehension monitoring (Duke &
Pearson, 2002; Gersten et al., 2001; Swanson, 1999); (5) analyzing text structure (Block,
Gambrell, & Pressley, 2002; Duke et al., 2011; Dymock, 2005; Jitendra et al., 2011;
Williams et al., 2004); (6) creating mental images (Duke & Pearson, 2002; Pearson &
Duke, 2002; Pressley, 2002); and (7) summarizing and retelling (Duke & Pearson, 2002;
NICHD, 2000). Vocabulary development and reading fluency are also recognized as
critical components of reading; however, the aforementioned key comprehension
strategies reflect the techniques students use to gather specific information from text
(Dymock & Nicholson, 2010).
Unlike narrative text, which has a single structure, informational text has many structures
(e.g., cause-effect, description, problem-solution, etc.), some of which are complex
(Jitendra et al., 2011; Saenz & Fuchs, 2002). Yet, knowledge of a single text structure
does not transfer to other text structures; therefore, students must learn each text structure
to aid in the process of connecting ideas and abstract logical relations, as well as
constructing meaning from the text (Meyer, Brandt, & Bluth, 1980; Saenz & Fuchs,
2002; Williams, 2005; Williams et al., 2004).
Text structures. As stated, textbooks and instructional materials in later grades
often consist of expository text and this text appears in a variety of single and mixed
organizational structures (Jitendra et al., 2011; Saenz & Fuchs, 2002; Williams et al.,
2004), rather than a simple sequencing of events (Dymock & Nicholson, 2010; Williams et al.,
2004; Williams et al., 2009). To this end, expository text can be more difficult for
students to navigate than narrative text. Therefore, students must first possess knowledge
of text structures to organize and gain meaning from informational text (Gersten et al.,
2001; Williams et al., 2004). This requires direct and systematic instruction on the
recognition of, and interaction with, different text types. The explicit and systematic
teaching of single and mixed text structures in expository text is an effective,
empirically validated approach to reading comprehension instruction for students with
learning disabilities (Gersten et al., 2001). Students with learning disabilities may
possess limited knowledge of the different types of textual organization and structure
(Gersten et al., 2001); hence, student knowledge and use of specific strategies to access
meaning from text is critical (O’Connor, Fulmer, Harty, & Bell, 2005).
Informational or expository text includes many or all of these features:
information about the natural or social world; factual content; timeless verb and generic
noun constructions; technical vocabulary; classificatory and definitional material; text
structures to organize content; repetition of topical theme; and graphical elements (Duke,
2000). Understanding textual information concerns the actual content interwoven in the
organizational patterns necessary to construct meaning from text (Gersten et al., 2001;
Jitendra et al., 2011; Williams et al., 2004; Williams et al., 2009). According to research,
some of the most common structures of expository text include: description; sequencing
of events; explanation of content; compare and contrast; definition and example; and
problem-solution-effect (Anderson & Armbruster, 1984). These three descriptive and
three sequential structures (Calfee & Patrick, 1995; Dymock & Nicholson, 2010) have
less of an influence on the comprehension of proficient readers than on that of struggling readers
(Williams et al., 2004). Because some students with learning disabilities experience
challenges with organizing information, expository text may be particularly difficult for
these students to comprehend (Englert & Thomas, 1987; Wong, 1980). Nonetheless,
research suggests that specific, targeted strategy instruction can improve the
comprehension of expository text for all students (Jitendra et al., 2011; Saenz & Fuchs,
2002; Williams et al., 2004).
Beyond the Research-to-Practice Gap in Special Education
The extent to which research-based instructional practices are routinely used in
special education settings has received criticism and is a source of considerable concern
to special education researchers (Bondy & Brownell, 2004; Landrum et al., 2003;
McLeskey & Billingsley, 2008; Vaughn et al., 2002). Research-based instructional
practices are critical to student learning and may be even more important for students
with learning disabilities (Burns & Ysseldyke, 2009). Special education research has
provided advances leading to an improvement in learning outcomes and academic
competencies of students with learning disabilities (Bondy & Brownell, 2004;
Greenwood, 2001; Greenwood & Abbott, 2001; McLeskey & Billingsley, 2008);
however, bringing knowledge of evidence-based research findings into instructional
practice has been challenging (Greenwood & Abbott, 2001).
From one research perspective, “research knowledge is something teachers
acquire, not generate” and “is for use by practitioners, but generated outside of the
classroom” (Bondy & Brownell, 2004, p. 48). From a different perspective, research is
not separate from practice; thinking in terms of a research-to-practice gap, in which
instructional practices are generated outside of the classroom, may be counterproductive
to the deliberation of knowledge that informs instructional practice in the classroom
(Bondy & Brownell, 2004).
researching against the grain to engage teachers in gathering and debating knowledge of
existing strategies, and in examining alternatives to meet the needs of all students, rather
than the assumption that research should govern practice. Whatever the case may be, the
implementation of results-oriented practices in the special education community can be
achieved in these two ways: (1) pre-service education must better prepare highly
qualified special education teachers; and (2) in-service professional development must be
more effective in meeting the needs of novice and practicing teachers (Denton et al.,
2003; Greenwood, 2001; McLeskey & Billingsley, 2008).
The Role of Professional Development
Teacher quality is a primary factor in, and the single most important measurable
predictor of, student achievement gains (Ferguson, 1991; Sparks,
2002). Hence, subject matter expertise and expert teaching methods are both important
factors in positively impacting student-learning outcomes (Darling-Hammond, 2000;
Darling-Hammond, 2007). Research reveals that teachers fully prepared in pre-service
teacher preparation programs are more confident and successful teaching students when
compared to teachers with little preparation (Darling-Hammond, 2000). Therefore,
novice and practicing teachers in need of support to build their teaching capacity must be
provided with a stronger understanding of how students learn through content and
pedagogy, how specific instructional strategies address student need, and how changes in
instructional practices support student growth and achievement (Darling-Hammond,
2007; Elmore, 2002). To this end, school districts that provide ongoing professional
development to build the capacity of their teachers are better equipped to enhance student
learning and to meet the needs of diverse learners (Darling-Hammond, 2004; Elmore,
2002; Sparks, 2002). Drawing on the original standards for professional development
(Sparks, 1995), the consensus on effective professional development is to improve
student learning through the improvement of teacher knowledge and skills (Elmore,
2002). Ultimately, teacher learning is a critical ingredient in student learning, as teacher
development is inextricably linked to student achievement (Darling-Hammond, 2004).
Professional Learning
According to Elmore (2002), “Professional development is the set of knowledge-
and skill-building activities that raise the capacity of teachers…to engage in the
improvement of practice and performance” (p. 13). Despite its intent, professional
development does not always lead to professional learning (Sparks, 2002). Furthermore,
poorly designed professional development can be a deeply insulting experience for
teachers (Elmore, 2002), while well-designed and rich professional development can
enhance teacher knowledge of the content and produce a sense of efficacy (Darling-
Hammond & Richardson, 2009). Oftentimes, the professional development provided to
teachers is episodic, disconnected from practice, myopic, and meaningless, lacking the
high-intensity, job-embedded, collaborative learning opportunities (Darling-Hammond &
Richardson, 2009; Darling-Hammond, Wei, Andree, Richardson, & Orphanos, 2009;
Knapp, 2003). Professional development is ineffective when approached as a “one-shot”
or “drive-by” workshop instead of an ongoing teacher learning process to build the
knowledge and skills necessary to effect change in student learning (Darling-Hammond &
Richardson, 2009; Elmore, 2002; Guskey & Yoon, 2009; Sparks, 2002). In fact, time is
an essential factor in the success of most professional development (Guskey & Yoon,
2009). Professional development requires considerable time to produce the kinds of
learning opportunities necessary for teachers to integrate knowledge and skills into
practice (Fishman, Marx, Best, & Tal, 2003; Guskey & Yoon, 2009; Penuel, Fishman,
Yamaguchi, & Gallagher, 2007). Indeed, teachers require approximately 50 hours of
professional development in a given domain to bolster their knowledge and skills; yet, on
a national survey, the majority of teachers surveyed reported receiving no more than two
days of professional development, over a 12-month period, in content specific to the
subjects they taught (Darling-Hammond et al., 2009).
Knowledge and Skills
In the status report on teacher development in the United States and abroad,
Darling-Hammond et al. (2009) assert, “No matter what states and districts do to bolster
the education workforce, they will need to do more and better with the talent they have”
(p. 2). With that said, school districts can build the capacity of novice and practicing
teachers and improve student learning outcomes through professional development that:
is intensive, ongoing, and connected to instructional practice; focuses on the teaching and
learning of specific academic knowledge and content; is connected to other school
initiatives; builds strong collaborative working relationships amongst teachers
(Darling-Hammond et al., 2009; Elmore, 2002); embodies a clear model of adult learning
that is made explicit to participants; and is evaluated continuously for its effect on
student achievement (Elmore, 2002).
Effective, curriculum-linked professional development has strong positive effects
on practice when it emphasizes student learning by developing teacher competencies and
pedagogical skills associated with specific content areas (Blank, de las Alas, & Smith,
2007; Fishman et al., 2003; Penuel et al., 2007). Professional development must also
emphasize how teachers learn (Darling-Hammond & Richardson, 2009; Elmore, 2002) by
modeling strategies presented and collaborated upon during trainings, and by providing
teachers with opportunities to practice and reflect upon the implementation of strategies
(Garet, Porter, Desimone, Birman, & Yoon, 2001; Supovitz, Mayer, & Kahle, 2000) to
determine the potential impact of such practices on learners (Bondy & Brownell, 2004).
Teacher learning is a complex system (Opfer & Pedder, 2011), and professional
development that focuses on concrete tasks of teaching is paramount to enhancing teacher
expertise and improving instructional practices that ultimately support an increase in
student learning outcomes (Darling-Hammond et al., 2009; Elmore, 2002). In the same
way, training that includes direct application of knowledge to classroom instruction has a
greater likelihood of influencing teacher practices and leading to student achievement
gains (Darling-Hammond et al., 2009; Sparks, 2002).
Teacher learning is a process, which involves change in knowledge, beliefs,
behaviors, and attitudes (Ambrose et al., 2010; Darling-Hammond & Richardson, 2009;
Elmore, 2002). Professional development should make explicit what new knowledge and
skills teachers will learn, how this will manifest in instructional practice, and what
activities will facilitate this learning (Elmore, 2002). Furthermore, training that results in
significant change to instructional practice should focus on three domains necessary for
successful teaching: (1) deep knowledge of the subject-matter and skills to be taught; (2)
expertise in general content knowledge, or instructional practices for all content areas; and
(3) expertise in pedagogical content knowledge, or instructional practices germane to
specific content areas (Elmore, 2002; Fishman et al., 2003). It is widely understood that
novice and expert teachers vary considerably in their command of these domains; in
their ability to use strategies flexibly and to modify them to meet the needs of students
at various performance levels; and in the fluency and automaticity with which they apply
strategies to instruction (Elmore, 2002). Expertise in teaching exists;
however, experience alone does not equate to expertise (Elmore, 2002). Teachers must
learn component skills necessary to perform complex tasks, practice integrating
knowledge fluently and automatically, and know when and how to apply their knowledge
to instruction (Ambrose et al., 2010). To design professional development around the
authority of expertise, teacher knowledge and skill, mastery of knowledge, and the
resources and capacities required to support this knowledge must be connected to the
practice of improvement to bolster student learning (Elmore, 2002).
Summary
In summary, effective instructional practices for students with special needs are
characterized by explicit and systematic instruction, and teacher understanding of the
critical factors associated with progress in academic areas such as reading (Gersten et al.,
2001). Reading research and consensus reports provide converging evidence about
effective reading instruction for students with learning disabilities (Donovan & Cross,
2002). Most reading required of students throughout schooling is that of expository text;
therefore, students require instruction in informational text in the primary grades to build
their capacity to make meaning from this type of text and to critically analyze
information presented in expository text (Duke, 2000; Gersten et al., 2001; Jitendra et al.,
2011; Williams et al., 2004; Williams et al., 2007; Williams et al., 2009) throughout their
educational careers. Research suggests that a knowledge gap exists between the
documentation of effective instructional practices and their use in special education
settings (Cooper, 1996; Greenwood, 2001; Lloyd, Weintraub, & Safer, 1997);
notwithstanding, teacher knowledge and skills can be bolstered through professional
development that focuses on how teachers and students actually learn in order to effect
change in the classroom.
Cognitive Architecture for Learning
Schneider and Shiffrin (1977) proposed a two-part theory of human information
processing to include automatic detection and controlled search in the cognitive system.
In this classic research, Schneider and Shiffrin (1977) posited that controlled processing
occurs in the short-term store, or working memory, which is limited in its capacity and
thereby requires the active attention of an individual to process elements, or nodes of
information. Controlled processing allows for a temporary activation and retrieval of
information from the long-term store, or long-term memory, and is controlled by the
individual; information active in the short-term store can then be applied to novel
situations. The information processing system is a flow of information into and out of the
short-term store (Schneider & Shiffrin, 1977). Long-term store contains sequences of
informational elements, or nodes with associative connections that have “become
complexly and increasingly interassociated and interrelated through learning” (Schneider
& Shiffrin, 1977, p. 2). With consistent practice and repeated mapping, sequences of
nodes become unitized for automatic processing and when activated by a control process
or environmental cue, place few demands on the limited capacity of the short-term store.
Wegner (1994), in his theory of ironic processes of mental control, proposed a
cooperative interaction between two cognitive processes: an operating process and a
monitoring process. Similar to Schneider and Shiffrin’s (1977) theory, Wegner (1994)
outlined a symbiotic cognitive system with necessary controlled and automated processes
for complex cognition. According to Wegner (1994), the operating process is effortful
and consciously guided (nonautomatic); individuals are aware of and able to report
what they are doing. The operating process depends upon the cognitive resources and
capacity of the working memory, which absorbs and is impacted by attention. As a
result, the operating process requires more effort and is less equipped to compete with
distractions or attention demands that contribute to cognitive load (Wegner, 1994). In
contrast, the monitoring process is unconscious and automated, and is activated by the
operating process. Because the monitoring process is largely unconscious, it is less
demanding of cognitive resources and requires little cognitive effort as it is less impacted
by “variations in the allocation of attention” (Wegner, 1994, p. 39). While this theory
holds for the conscious control of states of mind, the two-process system is also relevant
to the application of cognitive skills to complex problem solving.
The adaptive control of thought (ACT) and ACT—Rational (ACT-R) are based
on a theory of cognitive architecture, which provides insight into the integration of
complex human cognition (Anderson, 1982, 1996; Anderson et al., 2004; Anderson &
Schunn, 2000). This relatively “simple theory of learning and cognition” provides an
analysis of the acquisition and application of cognitive skills (Anderson & Schunn, 2000,
p. 2).
According to ACT-R, complex cognition is composed of intimately connected
declarative and procedural knowledge units in which each component of a knowledge
unit is acquired through learning (Anderson, 1995, 1996; Anderson & Schunn, 2000).
Knowledge units are acquired for declarative knowledge in networks of knowledge
chunks, or schema-like structures that represent factual information. Declarative
knowledge structures result from environmental encodings (Anderson, 1996).
Knowledge units are also acquired for procedural knowledge in condition-action
productions, or production rules, which retrieve declarative knowledge chunks and
determine how to use these chunks to solve problems. Production rules state the
conditions in which a particular action is taken, as well as the action itself (Hall et al.,
1995). Schneider and Shiffrin (1977) refer to these condition-action productions as if-
then decision rules that take place between the cue and the procedure. Procedural
knowledge structures are encoded from examples of transformations of declarative
chunks in the environment (Anderson, 1996). These knowledge units are organized into
goal structures with sequences of subgoals that can be accessed and applied to tasks. A
production responds to specific problem-solving goals and subgoals and can only be
applied to a task when declarative chunks have met the conditions of the production rule
(Anderson, 1996). As Anderson (1996) explained, “The mind keeps track of general
usefulness and combines this with contextual appropriateness to make some inference
about what knowledge to make available in the current context” (p. 360).
The interaction between declarative and procedural knowledge is part of the
knowledge compilation process whereby sequences of productions are consolidated into
single productions along with the declarative information required to perform the task in
order to increase cognitive efficiency and enhance cognitive performance (Anderson,
1982, 1983; Anderson & Schunn, 2000; Neves & Anderson, 1981). This consolidation of
production sequences into single productions is known as composition (Anderson, 1982).
As a skill transitions from declarative to procedural form, declarative knowledge
becomes part of the production and can be executed without conscious awareness or
mental effort; this process is known as proceduralization (Anderson, 1982).
declarative and procedural knowledge that occurs as a result of deliberate practice and
understanding of a task can be flexibly applied to novel situations that share similar
cognitive elements. According to Neves and Anderson (1981), knowledge compilation is
a process in which knowledge moves from an interpretive application of declarative
knowledge to a direct application of procedural knowledge.
Declarative and procedural knowledge will be discussed in greater detail as a
precursor to the discussion on expertise and the effects of automaticity on expert
knowledge recall in subsequent sections.
Declarative Knowledge
Different knowledge structures contribute to skilled performance in complex
problem solving (Hall et al., 1995). Declarative knowledge is conscious and controlled in
that it can be modified quickly and efficiently in working memory (Clark & Elen, 2006;
Clark & Estes, 1996). This type of knowledge, information about why and that, is
consciously accessible; to think or remember is to use declarative knowledge (Clark &
Elen, 2006). Declarative knowledge is referred to as how-it-works knowledge and is the
basis for the adaptiveness that characterizes expertise (Hall et al., 1995). Domain-
specific knowledge is initially acquired through discrete steps in the conscious,
declarative knowledge form and is then transformed to unconscious, automated
procedural knowledge form as a result of repeated mapping or practice (Anderson, 1982;
Neves & Anderson, 1981). Declarative knowledge includes factual information
organized into schemes of task structures and task goals (Anderson, 1996; Anderson &
Schunn, 2000; Gagné, Briggs, & Wager, 1992; Paris et al., 1983). This information can
be used to set goals and to help change the course of action when performing a task to
adapt to changing task conditions (Paris et al., 1983). The total amount of declarative
knowledge possessed by adults is estimated to range from 10 to 30 percent of total knowledge (Bargh &
Chartrand, 1999; Glaser & Chi, 1988). While declarative knowledge is the starting point
of knowledge acquisition, this type of knowledge alone is insufficient to execute skilled
performance (Anderson, 1982). Acquisition of declarative knowledge precedes
procedural knowledge (Hinds, Patterson, & Pfeffer, 2001), and the execution of complex
tasks involves interactions between conscious declarative knowledge and unconscious
procedural knowledge (Anderson, 1996; Clark, 2008a; Sun, Slusarz, & Terry, 2005).
Procedural Knowledge
Declarative knowledge, or conscious action, provides the pathway for procedural
knowledge acquisition of information on how and when to perform a task (Clark & Estes,
1996). It is declarative knowledge that enables the flexible use of procedural knowledge
(Hall et al., 1995). Procedural knowledge includes information about the execution of a
task and is often acquired through direct instruction and repeated practice (Paris et al.,
1983). It is the execution of discrete covert decisions and overt actions based on rules
and strategies required to attain a particular goal (Neves & Anderson, 1981). With each
repeated engagement or practice of a task, procedural knowledge becomes increasingly
more rapid and automated, thereby requiring the use of fewer cognitive resources (Clark
& Elen, 2006; Clark et al., 2008). Procedural knowledge is stored in long-term memory
as productions (Anderson, 1996) and once automated, operates outside of conscious
awareness (Bargh & Chartrand, 1999; Clark et al., 2008). Expert selection of task
procedures optimizes the efficiency of task performance and markedly distinguishes
expert from novice problem solving (Hall et al., 1995). Automated, unconscious
knowledge increases cognitive efficiency beyond the speed and execution of that which
can be accomplished in conscious knowledge; however, with the gradual automatization
of procedural knowledge, comes a largely unconscious and unmodifiable knowledge
structure (Anderson, 1996; Clark & Elen, 2006; Clark et al., 2008; Schneider & Shiffrin,
1977). Most cognitive processes in regard to task performance are automated (Bargh &
Chartrand, 1999). In fact, research indicates that procedural knowledge possessed by
adults is estimated to range from 70 to 90 percent of total knowledge (Bargh &
Chartrand, 1999; Glaser & Chi, 1988).
Declarative and procedural knowledge pertain to the skills required to execute a
task; however, this knowledge alone is not sufficient to address the conditions under
which considerations are made to the selection and application of various actions in
various situations (Hall et al., 1995; Paris et al., 1983). To adjust decision-making to
changing task demands requires the use of conditional knowledge.
Conditional knowledge. Conditional knowledge is a dimension, or subcategory
of procedural knowledge (Clark, 2003; Ormrod & Davis, 2004) and includes the when
and why information of task execution (Paris et al., 1983). Conditional knowledge is
when-to-do-it knowledge (Hall et al., 1995). Conditional knowledge reflects adaptive
knowledge that guides decision-making; it is knowing when to proceed with an action
and when to decide that an alternate action is necessary (Hall et al., 1995; Paris et al.,
1983). In other words, conditional knowledge identifies the point at which decisions are
made in order to move forward with the execution of a task. Hall et al. (1995) refer to
conditional knowledge as “strategic decision factors or processes that are responsible for
knowledge deployment and also serve an executive control function in problem solving”
(p. 6). According to Anderson (1982), all tasks involve an alternate path or choice in the
method used to perform the task; and with practice, these choices are “more likely to lead
to rapid success” (p. 390). In Anderson’s (1996) concept of production, if-then
conditions within productions of procedural knowledge specify when a decision should
be made for a particular course of action; this is the conditional knowledge-processing
model.
Strategic, or conditional, knowledge involves the consideration of multiple
problem-solving paths (Hall et al., 1995). Thus, it is necessary to orchestrate and
modulate declarative and procedural knowledge, and to know when to employ
conditional knowledge to identify the point at which a decision must be made in a
particular task or context (Clark et al., 2008; Paris et al., 1983). While Paris et al. (1983)
refer to conditional knowledge as a third knowledge type, most research recognizes
conditional knowledge as a subcategory of procedural knowledge (Ormrod & Davis,
2004).
Declarative, procedural, and conditional knowledge are all requisites to skillful
problem solving or task performance. As a learner acquires and develops competence in
a specific skill, the learner becomes more adept at selecting useful actions to attain
specific goals (Paris et al., 1983). Over time, the learner can manage available resources
to fit the changing conditions of a task. The development of domain-specific
performance ensues after years of deliberate practice and success in accomplishing highly
challenging tasks, which result in cognitive structures responsible for advanced problem-
solving expertise (Clark & Estes, 1996; Ericsson et al., 1993). The selection of
declarative, procedural, and strategic (conditional) knowledge processes is critical to the
execution of complex problem solving (Hall et al., 1995).
Expert Performance and Expertise
In research on experts across numerous domains, Glaser and Chi (1988)
characterize experts as: (1) excelling primarily in their own domains; (2) perceiving deep,
meaningful patterns in their chosen domain; (3) performing domain-specific skills faster
than novices; (4) having superior short- and long-term memory within a domain; (5)
analyzing and representing problems in their domain at a deeper (more principled) level
than novices; (6) spending a significant amount of time analyzing and examining
problems qualitatively; and (7) effectively self-monitoring performance while problem
solving. A deep understanding of tasks within a specific domain allows experts to
effectively and efficiently solve novel and complex problems (Gagné & Medsker, 1996;
Glaser & Chi, 1988; Hall et al., 1995).
Ericsson and Charness (1994) reported that expert performance is mediated by
complex skill acquisition with corrective feedback and sustained effort in the adaptation
of skills. Through experiences in structured learning activities, experts can acquire
complex, domain-specific skills that circumvent the limited capacity of the short-term
memory (Chase & Ericsson, 1982). Elaborate schema-based knowledge representations
allow experts to accurately retain and recall information (Ericsson & Kintsch, 1995). In
the classic work by Chase and Simon (1973), the authors found that expert performance
depends upon experience in performing tasks within a domain of mastery that generates
high-speed performance and large working memory capacity. In fact, Chase and
Ericsson (1982) noted that years of practice resulted in highly organized long-term
memory knowledge structures in experts. Chase and Ericsson (1982) also reported
exponential increases in short-term memory capacity as a result of extended practice in
domain-specific skills. Research suggests that unlike novices, experts can form mental
representations of problems within their domain that activate the knowledge components
required to reason, plan, and evaluate possible solutions (Ericsson & Charness, 1994).
There are notable differences in the ways experts and novices use their knowledge to
problem solve, with experts using more systematic means of efficiently identifying
solutions and engaging in dynamic, opportunistic reasoning (Hall et al., 1995). These
elaborate schemas are advantageous in that they enhance performance in knowledge
recall to inform decision-making (Feldon, 2007). As a means of comparison, for novices,
the nonautomated process of knowledge recall is a conscious, deliberate decision (Clark
et al., 2008). Thus, the goal in research on expertise is to understand and identify how
experts excel and achieve superior performance to enhance the performance of novices
(Chi, 2006).
Deliberate Practice in Expert Performance
As Anderson (1982) stated, “the ability to perform successfully in novel situations
is the hallmark of human cognition” (p. 391). Ericsson et al. (1993) posited that
characteristics of experts thought to be innate talent are largely acquired through
deliberate practice in which continuous performance adaptations and performance
improvement efforts are made. The effect of practice on superior performance is more profound than was once believed (Ericsson, 2004; Ericsson et al., 1993). In this relative approach to expertise, expert performance is defined in relation to
novice performance and assumes that novices can become expert-like by way of
deliberate practice, rather than arising from exceptional talent alone (Chi, 2006). Such
deliberate practice entails highly structured practice activities in which the goal is to
improve and maintain performance (Ericsson, 2004; Ericsson et al., 1993).
Anderson (1982) posited that proficiency in a domain requires over 100 hours of
learning and practice, while others have argued that high levels of proficiency require at
least 10 years to achieve and maintain expert performance, which corresponds to several
thousand hours of deliberate practice (Chi, 2006; Ericsson, 2004; Ericsson & Charness,
1994; Ericsson et al., 1993; Feldon & Clark, 2006; Kirschner, Sweller, & Clark, 2006).
This 10-year rule of extensive domain-specific experience is a reliable indicator of
expertise; however, experience in and of itself is not indicative of expertise (Ericsson,
2004). To attain a high level of expertise, individuals must refine and improve their skills
through active and mastery learning in a specific domain (Ericsson & Charness, 1994).
Anderson and Schunn (2000) point out that the application of acquired skills requires
extensive practice in a broad range of situations to develop competencies transferable to
novel problem solving. As a result of extensive practice, skills become automated and
experts are less able to consciously access their knowledge (Ericsson, 2004).
Automaticity in Expert Performance
Anderson’s (1982, 1983) research on the acquisition of cognitive skill
corresponds to Fitts’s (1964) work on three stages of skill development: the cognitive stage, the associative stage, and the autonomous stage. As Fitts (1964) posited, the
cognitive stage involves the initial encoding of knowledge often coupled with verbal
mediation to rehearse the skill, followed by the associative stage in which knowledge
connections are strengthened and errors are detected, and lastly the autonomous stage
where the acquisition of skill and the speed of performance is enhanced. For Anderson
(1982, 1983), the cognitive stage is the declarative stage where knowledge is interpreted
in a new domain; the associative stage is called knowledge compilation where declarative
knowledge is converted into procedural knowledge; and the autonomous stage is the
procedural stage in which knowledge undergoes a process of generalization,
discrimination, and strengthening. In essence, these are the stages of automaticity, in which complex skills require fewer and fewer cognitive resources to execute and become increasingly rapid, automated, and efficient (Anderson, 1982).
According to Wheatley and Wegner (2001), automatic processes do not require
constant conscious monitoring and operate independently of conscious control. Skill
acquisition begins with conscious, effortful learning and becomes automated after
consistent repetition and frequent practice (Anderson, 1982; Neves & Anderson, 1981).
Ericsson (2004) noted that knowledge structures become highly automated with increased
expertise. For this reason, automatic processes require minimal mental effort and
attention capacity to perform, and result in an increase in speed and efficiency of task
performance (Bargh & Chartrand, 1999). Wheatley and Wegner (2001) posited that the
automaticity of skills is both a blessing and a curse. Anderson and Schunn (2000) noted,
“As knowledge domains become more advanced, their underlying cognitive structure
tends to become more obscure” (p. 17). Automated task performance is extremely
beneficial to the cognitive system; however, there is an inherent lack of flexibility and
control of our thought processes as a consequence (Wheatley & Wegner, 2001). The
latter presents as a problem when experts provide instruction to novices on how to
problem solve or perform a task because experts are unable to recall their own thought
processes and accurately explain the points at which decisions are made (Clark et al.,
2008; Wheatley & Wegner, 2001).
Automaticity and expert omissions. As expertise increases, the deep,
conceptual understanding and mental representations of processes within a domain can
become abstract and obscure to the expert (Anderson & Schunn, 2000; Hinds et al.,
2001). According to Gagné (1985), domain-specific expertise is difficult for experts to
retrieve and to articulate discrete steps because the information has become highly
automated. Further, research on the role of automaticity in expertise has demonstrated
that experts’ self-reports about their problem-solving strategies are often incomplete or contain knowledge omissions (Chao & Salvendy, 1994). In Feldon’s
(2007) analysis of the research, he reported an increase in errors and omissions in
experts’ self-reports as their skills improved. Expert recall replete with omissions and
inaccurate descriptions of tasks is echoed throughout the research (Clark et al., 2008;
Clark et al., 2011; Feldon, 2007; Feldon & Clark, 2006). In Chao and Salvendy’s (1994)
study on knowledge elicitation methods, single experts omitted up to 71% of the
problem-solving steps required to perform various tasks. In a study examining expert
army trauma surgeons’ recall of critical information necessary for femoral artery shunt
placement, Clark and colleagues (Clark et al., 2011; Clark, Pugh, Yates, & Sullivan,
2008) reported that experts omitted 68.75% of standard procedural steps. In yet another
examination of the accuracy of expert self-reports in an experimental design study,
Feldon and Clark (2006) found expert self-reports highly susceptible to omission errors,
which ranged from 48% to 88%. This sample of studies on self-reported problem-solving
processes demonstrates high omission rates in expert recall of complex cognitive tasks.
When experts are expected to communicate what they know to novices and to
inform instructional design processes (Feldon, 2007), automaticity often impairs the
effectiveness of the transfer of expertise.
Transfer of Expert Knowledge
Experts acquire advanced skills in a domain, which allow for automated, rapid,
and accurate execution of actions in response to a problem-solving situation; however,
these superior skills are limited in their transferability to other domains (Ericsson &
Charness, 1994). Experts may also have difficulty transferring their domain-specific
knowledge and expertise to novices during training and instruction (Hinds et al., 2001).
As a result of their superior knowledge, experts’ ability to predict novice task performance and the knowledge acquisition needs of novice learners is impaired (Hinds, 1999). Because novices and experts organize and process knowledge
differently, when incomplete or inaccurate information is given to a novice learner or
knowledge is communicated to novices abstractly, it may be difficult for novices to
conceptualize and understand (Hinds et al., 2001).
Experts are largely unaware of the cognitive strategies they employ to perform a
task because they have automated their selection of problem-solving strategies and
cannot consciously identify many of the decisions they make (Blessing & Anderson,
1996). According to Clark et al. (2008), automaticity can impair an expert’s articulation
of the cognitive processes required for a learner to successfully replicate an expert’s
performance. While observable psychomotor components of a task may be relatively
simple for a learner to observe and replicate, environmental and situational cues that
trigger decision-making require explanation to replicate (Clark et al., 2008).
In academic and professional fields, accomplished experts are recruited to teach
their domain of expertise to novice learners without regard to the process of
automatization (Ericsson, 2004). As a result of automaticity, expertise is characterized
by abstractions in which the key details and process information necessary to provide instruction on optimal performance are lacking (Clark et al., 2008). Consequently, expertise can degrade an expert’s ability to consciously access his or her knowledge during skill performance or when providing instruction to others (Ericsson, 2004). As Anderson
and Schunn (2000) stated, “For practice to be effective the right chunks and production
rules need to be communicated. For this to happen, the teacher must know in some sense
what these are and communicate them” (p. 17). Hence, cognitive models with underlying
knowledge components should represent and communicate the competencies experts are
to teach (Anderson & Schunn, 2000) to transfer expert knowledge to novices.
Cognitive Task Analysis
CTA is a methodology used to elicit cognitive processes that underlie expert
behaviors and decision-making (Clark et al., 2008; Cooke, 1999; Schraagen et al., 2000;
Yates, 2007). Using various interview and observation strategies, CTA captures the
implicit and explicit knowledge that experts use to perform complex tasks (Clark et al.,
2008). According to van Merriënboer, Clark, and de Croock (2002), complex tasks can
extend over several hours or days and involve both conscious and automated knowledge
to perform. CTA captures both “overt observable behaviors and covert cognitive
functions” to yield information about knowledge structures (Chipman, Schraagen, &
Shalin, 2000, p. 3).
Brief History of Cognitive Task Analysis
Task analysis originated in the late 1800s and evolved from a behavioral approach to analyzing the physical performance of manual labor to a focus on unobservable cognitive performance (Ryder & Redding, 1993). To identify instructional targets, behavioral task analysis focused on the observable steps required to
perform a task (Anderson & Faust, 1973). In traditional task analysis, emphasis was
placed on observable procedural knowledge, rather than unobservable processes such as
declarative and strategic knowledge (Hall et al., 1995). However, with the rise of
cognitive psychology and research on CTA (Annett, 2000; Crandall, Klein, & Hoffman,
2006; Hall et al., 1995), as well as a shift from physical to more mental processes in the
workplace (Clark & Estes, 1996), came the understanding that simple behavioral analysis
was not enough to capture the cognitive processes and knowledge structures underlying
overt physical performance (Greeno, 1976). Complex tasks require the integrated use of
declarative, procedural (van Merriënboer et al., 2002), and conditional (Paris et al., 1983)
knowledge processes for advanced problem solving. The move from behavioral task
analysis to CTA shifted training from the observable behavior model to training based on
the integration of multiple knowledge processes captured from experts for the purpose of
advanced instructional design (Clark & Estes, 1999; Glaser & Bassok, 1989; Ryder &
Redding, 1993; Schneider, 1985). As a result of this shift, CTA began to inform
instructional development in the 1970s (Greeno, 1976).
Cognitive Task Analysis Methodologies
Cooke (1994, 1999) identified more than 100 variations of CTA methods and applications, which can be categorized into three general families of techniques: interviews and observations of experts performing the task, which allow for flexible knowledge elicitation; process tracing, which entails a more structured collection of data as the expert performs the task and thinks aloud about his or her thought processes; and indirect conceptual techniques, which produce a highly structured representation of the task. Wei and Salvendy
(2004) later identified an additional family of techniques: the use of simulations to model knowledge- or rule-based tasks. In order to elicit accurate and complete expert
knowledge descriptions, Cooke (1994, 1999) suggested using multiple knowledge
elicitation techniques to capture a rich representation of the task.
Similar to Cooke (1994, 1999), Yates (2007) identified over 100 different CTA
methods in his examination of empirical literature. Yates and Feldon (2011) noted that
existing classification schemes tend to sort CTA methods by the knowledge elicitation
process, rather than the outcome. As a result, CTA practices can present challenges for
practitioners in the selection of techniques and the ways in which these techniques are
applied to knowledge elicitation (Yates & Feldon, 2011). Because of the vast variations
in the application of knowledge elicitation techniques to the solution of problems, CTA
remains a craft, rather than a technology (Yates & Feldon, 2011).
Multistage process of conducting cognitive task analysis. Crandall et al.
(2006) identified three common approaches that comprise CTA methodologies:
knowledge elicitation, data analysis, and knowledge representation. The methods of
particular interest to this present study pertain to a sequenced five-stage process in which
the knowledge elicited from CTA interviews can be applied to instructional design
through: (1) collection of preliminary knowledge; (2) identification of knowledge
representations; (3) application of focused knowledge elicitation methods; (4) analysis
and verification of data; and (5) formatting of results for the intended application (Chipman et
al., 2000; Clark & Estes, 1996; Clark et al., 2008; Cooke, 1994; Jonassen, Tessmer &
Hannum, 1999).
Collect preliminary knowledge. In preparation for subsequent task analysis,
analysts collect preliminary knowledge by what is known as bootstrapping (Crandall et
al., 2006; Hoffman, Shadbolt, Burton, & Klein, 1995) to gain a general familiarity of the
task and specialized vocabulary within the domain by reading relevant literature and
reviewing documents describing the tasks (Chipman et al., 2000; Clark et al., 2008;
Jonassen et al., 1999). The analyst then outlines a sequence of tasks necessary to perform
the skill, which will become the focus of semi-structured interviews with SMEs. In this
initial stage, experts with a consistent record of exceptional performance at the task are
selected to participate in CTA interviews. Observation and unstructured interviews are
also common to provide a general overview of tasks within the domain (Clark et al.,
2008; Cooke, 1994).
Identify knowledge representations. In the second stage, analysts identify
subtasks within the sequence of each task outlined in the preliminary collection of
information stage. Knowledge representations are organized into concept maps, flow
charts, repertory grids, and protocols to determine the types of knowledge required to
perform the task, along with the knowledge elicitation method best suited to elicit
particular knowledge types (Chipman et al., 2000; Clark et al., 2008). A learning
hierarchy analysis may then be used to order tasks from more complex skills to the most
basic of skills to guide the structure of focused knowledge elicitation methods (Clark et
al., 2008; Jonassen et al., 1999).
Apply focused knowledge elicitation methods. A third stage is to then apply
focused knowledge elicitation methods using various techniques to target and elicit the
specific type of knowledge identified in the secondary stage (Chipman et al., 2000; Clark
et al., 2008). Particular knowledge elicitation methods yield differential knowledge types
(Hoffman, Crandall, & Shadbolt, 1998); therefore, elicitation efforts should involve
multiple techniques to capture in-depth knowledge and cognitive processes for complex
problem solving (Clark et al., 2008). Selected techniques must be structured to elicit the
appropriate knowledge and knowledge abstractions from the expert. The most common
knowledge elicitation methods employ variations of structured and semi-structured
interview techniques (Ericsson & Simon, 1993; Hoffman et al., 1995), as well as
observations of overt, observable components of the task (Chipman et al., 2000). In this
stage, multiple experts are interviewed using CTA methods proven to elicit essential
automated and unconscious knowledge necessary to perform the task. CTA methods
effective at capturing specific declarative and procedural knowledge types involved in
performing a task include CPP (Clark, 2004, 2006), CDM (Klein, Calderwood, &
MacGregor, 1989), and PARI (Hall et al., 1995). Multiple experts participate in
knowledge elicitation to ensure the completeness and accuracy of results captured for the
domain of analysis (Hoffman, 1987).
Specific CTA techniques used to capture cognitive processes relevant to this present study will be discussed in greater detail in sections that follow the five-stage
knowledge elicitation discussion.
Analyze and verify data acquired. As a fourth step, knowledge elicitation data
are coded and formatted for the intended application (Clark et al., 2008). The transcript
from an expert interview is coded for analysis and categorization of data according to
specific knowledge types. Knowledge representations captured in the knowledge
elicitation method are then given to the expert for verification and revision to ensure that
the cognitive processes accurately reflect the skills necessary to perform the task (Clark,
2004, 2006; Clark et al., 2008). Crandall et al. (2006) noted interwoven and
interconnected phases of analysis, which include: preparing data in a structured process
of inquiry; structuring data into discrete elements for examination; discovering meaning
by identifying central questions and issues; and representing findings to give the data
meaning.
The analysis stages in CPP (Clark, 2004, 2006), CDM (Klein et al., 1989), and
PARI (Hall et al., 1995) methods will be discussed in greater detail in the sections that
follow the discussion on the five-stage process of knowledge elicitation.
Format results for the intended application. In the fifth and final step, results are
translated into models of highly complex tasks and tacit knowledge to inform the
instructional design of curriculum and training (Clark et al., 2008; Klein et al., 1989).
Comparison of knowledge elicitation methods. While methods of CTA vary in
their approach to knowledge elicitation and in their overall focus, all CTA methods share
a purpose in capturing the complex processes experts employ to solve novel and routine
tasks. A brief comparison of knowledge elicitation techniques will be described to
provide context for the methodologies and to make clear the utility of the approach
selected for the present study.
Concepts, Processes, and Principles. CPP is a knowledge elicitation method
proposed by Clark (2004, 2006), which captures experts’ automated and tacit knowledge
in performing a task and can be effectively applied to instructional design (Clark et al.,
2008). CPP entails a series of multistage, semi-structured interviews with multiple
experts to capture relevant knowledge descriptions of concepts, processes, and principles
necessary to perform a domain-specific task (Clark et al., 2008). Each expert selected for
an interview is prompted to describe the same task to capture a complete description of
the skill. Experts begin by outlining the major task sequence and proceed to describe
discrete subtasks involved in execution of the procedure. Analysts question experts on
the decisions made to perform the task and alternative sequences to consider when a
decision must be made. During the interviews, experts are also asked to describe routine
to complex authentic problems that novices should be able to solve to demonstrate
mastery of the task (Clark et al., 2008). After the knowledge elicitation interview with an
expert is complete, the analyst prepares an interview report of the procedures yielded,
which is formatted to include the tasks, subtasks, conditions, standards, equipment, and
materials needed to perform the task. Experts are then asked to self- and peer-review
these procedures for accuracy and completeness. Data captured in the initial semi-
structured interviews with experts are aggregated into a preliminary gold standard
protocol of the procedure for final expert approval and to be used in training novices to
perform the task. Because the CPP method captures conceptual knowledge, conditions,
and the action and decision steps necessary to perform a task, the product of CPP can be
incorporated into instructional design.
Critical Decision Method. CDM is a semi-structured interview method to
examine experts’ situational “assessment and decision making during nonroutine
incidents” (Klein et al., 1989, p. 462). CDM is a retrospective strategy in which the expert selects a critical nonroutine incident that required his or her judgment and decision-making. Subsequently, the expert provides a brief overview of the incident defined as critical. CDM uses cognitive probes to prompt experts to reflect upon strategies and
decisions employed during problem solving. The role of decision-making and selection
of strategies is essential to expertise and as Crandall (1989) noted, many of the decisions
made by experts rely upon subtle perceptual cues and tacit knowledge not easily
articulated by these experts. Experts find it difficult “to say what they know” and “need
help telling what they know” (Crandall, 1989, p. 145). CDM techniques capture
knowledge that is tacitly held by experts, and as a result, is resistant to articulation.
Gaining access to expert content knowledge and naturalistic decision-making is the focus
of CDM (Klein et al., 1989). Decision-making emphasizes concurrent deliberation
between two choices (Klein & Calderwood, 1987) when reasonable alternative actions
are possible (Clark et al., 2008). The CDM method captures explicit and tacit
knowledge that can be applied to instruction (Klein et al., 1989).
Precursor, Action, Result, and Interpretation. The PARI method of knowledge
elicitation captures problem-solving expertise using a structured, think-aloud dialogue
approach (Hall et al., 1995). The PARI method comprises nine stages of data collection: in the first stages, experts and problem-solving tasks are selected for task analysis, while
the final stages involve expert and novice structured interviews, follow-up interviews,
and expert review of the data (Hall et al., 1995). The PARI method examines the
cognitive skills that allow for complex problem solving and adaptive expertise, and
identifies the declarative and procedural knowledge required to problem solve routine
and novel tasks (Clark & Estes, 1996). PARI’s structured problem-solving techniques
capture the knowledge components responsible for knowledge deployment (Hall et al.,
1995). Moreover, the situated problem solving and dyadic interaction features of this
method enable the elicitation of procedural, declarative, and strategic knowledge, and
yield an understanding of the coordinated deployment of all three knowledge components
(Hall et al., 1995).
To examine problem-solving expertise, the PARI method grounds knowledge
elicitation in authentic performance contexts through structured interviews designed to
reveal the knowledge, skills, and reasoning employed by experts in situated problem
solving (Hall et al., 1995). Through dyadic interviews with pairs of experts, one expert
poses a problem to a second expert who then generates and describes a step-by-step
solution to the specific problem (Clark & Estes, 1996; Hall et al., 1995). According to
Hall et al. (1995), “An analysis of naïve problem-solving is more fruitful in revealing all
relevant knowledge bases, search procedures, and strategic deployment processes” (p. 4).
PARI procedures yield discrete knowledge components of complex tasks, which are then
incorporated into instruction designed to accelerate flexible and adaptive skill
development (Hall et al., 1995).
Effectiveness of Cognitive Task Analysis
When underlying cognitive processes can be elicited from experts and represented
through CTA, these mental models can be captured and used to develop effective CTA-
based instruction (Clark & Estes, 1996). Strong evidence of the instructional value of
CTA has been demonstrated in multiple studies across various domains spanning from
medicine to the military (Clark, 2014). Meta-analyses of studies (Lee, 2004; Tofel-Grehl & Feldon, 2007) have found that knowledge elicited through CTA provides for highly effective instruction. According to Yates (2007), “Meta-analysis is a technique
of quantitative research synthesis incorporating the findings of different research studies
that can be meaningfully compared using the size of the statistical effect” (p. 5).
Meta-analysis of studies. Lee (2004) performed a meta-analytic review of
studies on the instructional effectiveness of CTA-based training as measured by
performance gains. Lee (2004) analyzed 39 comparison studies for pretest and posttest
differences and found effect sizes ranging from .91 to 1.45 with a large mean effect size
of 1.72 when compared to more traditional instructional design using behavioral task
analysis and other instructional approaches. This accounts for a post-training
performance gain of 75.2%. In fact, effect sizes for CTA-based instruction were triple those for non-CTA-based instruction (Tofel-Grehl & Feldon, 2013).
In Tofel-Grehl and Feldon’s (2013) meta-analysis of CTA studies across various
domains, the researchers reported a mean effect size of .871 for the overall treatment
effect of CTA-based instruction when compared to other instructional approaches such as
behavioral task analysis and unguided expert self-report. This accounts for an overall
post-training learning gain of 31% for CTA-based instruction (Clark, 2014). Tofel-Grehl
and Feldon (2013) also reported varied effect sizes for different knowledge elicitation
methods. Some CTA methods were more effectively applied to instruction as evidenced
by a mean effect size of .329 for CDM and 1.598 for the PARI method (Tofel-Grehl &
Feldon, 2013). The effect sizes demonstrate a low 13% learning gain for CDM and a
high 45% learning gain for the PARI method (Clark, 2014).
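The learning-gain percentages above appear to follow a standard conversion in which an effect size of d standard deviations moves the average trainee from the 50th percentile of the comparison group to the Φ(d) percentile of that group, for a gain of Φ(d) − 50 percentage points. The following brief sketch illustrates that conversion; the premise that the cited figures were derived this way is an assumption made here for illustration, not a method stated in the sources.

```python
from math import erf, sqrt

def percentile_gain(d):
    """Approximate post-training learning gain (in percentage points)
    implied by an effect size d: the standard normal CDF evaluated at d,
    minus the 50th-percentile baseline of the comparison group."""
    phi = 0.5 * (1.0 + erf(d / sqrt(2.0)))  # standard normal CDF at d
    return 100.0 * (phi - 0.5)

# Effect sizes reported by Tofel-Grehl and Feldon (2013)
for label, d in [("overall CTA", 0.871), ("CDM", 0.329), ("PARI", 1.598)]:
    print(f"{label}: d = {d} -> gain of about {percentile_gain(d):.1f} points")
```

Applied to the Tofel-Grehl and Feldon (2013) effect sizes, this conversion yields gains of roughly 31, 13, and 44–45 percentage points, closely matching the 31%, 13%, and 45% figures cited from Clark (2014).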
When CTA is used to capture complete and accurate cognitive processes, the
resulting instruction is consistently more effective than other non-CTA approaches
(Clark, 2014; Tofel-Grehl & Feldon, 2013).
Cognitive Task Analysis in Instruction
According to Anderson and Schunn (2000), CTA receives relatively little
institutional support in education and is not widely recognized for identifying cognitive
structures underlying knowledge components of complex skills. Nonetheless, there is
value in taking a target domain, analyzing its discrete knowledge components, and
finding examples that utilize these components to communicate during instruction
(Anderson & Schunn, 2000). As stated, “Changing the curriculum does not change the
value for componential analysis nor the need of students to master the components”
(Anderson & Schunn, 2000, p. 17). Experts’ cognitive processes can be used to inform
the design of instruction and instructional materials, which has proven more effective
than traditional methods of instruction (Merrill, 2002). Hence, instruction based on
expert knowledge captured via CTA is a promising practice (Clark et al., 2008).
Instructional design. The ultimate purpose of knowledge elicitation is to guide
the design of instructional systems, which target the acquisition and integration of
declarative, procedural, and conditional knowledge components (Hall et al., 1995). The
most effective learning can take place when all necessary information is available to the
learner in the form of instruction and/or prior knowledge (Kirschner et al., 2006).
Therefore, developing instruction for complex problem solving requires analysis of the
cognitive processes and underlying cognitive structures that contribute to task
performance (Hall et al., 1995). Teaching learners only the conceptual knowledge of a task is insufficient for generating highly effective performance outcomes (Clark et al., 2008).
Research indicates that when experts teach the conceptual knowledge on which their
performance is based, they unintentionally provide learners with incomplete or inaccurate
information (Clark et al., 2008). Learners must be equipped with the multiple
declarative, procedural, and conditional knowledge sources that contribute to skilled
performance (Hall et al., 1995). Without CTA to facilitate knowledge elicitation, these
knowledge structures are not made explicit to the learner. Yet, according to Hall et al.
(1995), few instructional designers have attempted to integrate the cognitive
underpinnings of complex task performance needed to understand the problem-solving
process.
Summary
CTA captures underlying cognitive processes in performing complex tasks and
solving complex problems. Experts may omit up to 70% of critical knowledge when
describing how to perform a domain-specific complex task. Teaching of informational
text is a complex task in which knowledge and skills must be adaptive to meet the
instructional needs of students with mild to moderate learning disabilities. As such, the
domain of reading instruction in informational text may benefit from conducting CTA to
inform instruction in pre-service and in-service teacher training programs. Therefore, the
purpose of this present study is to conduct CTA to examine expert knowledge omissions
and recall of the critical steps required to provide reading instruction in informational
text, and to capture a complete description of a reading instruction task described by
expert special education teachers. CTA is recognized as being time and resource
intensive; therefore, this present study will also examine the effectiveness of two
different knowledge elicitation methods–the 3i+3r independent method and 1i+3r
incremental method–in capturing expertise.
CHAPTER THREE: METHODOLOGY
Overview of Method
This present study examined the knowledge and skills of expert special education
teachers who provide reading instruction in informational text to students with mild to
moderate learning disabilities in grades 3-5. The study employed CTA methods (Clark et
al., 2008) to capture the declarative and procedural knowledge of expert special education
teachers as they described reading procedures in informational text. This chapter outlines
the research methodology employed in this mixed-methods descriptive study on CTA and
presents details on the design of the study, participant selection, task, instrumentation,
data collection, and data analysis.
Research Questions
Research questions that guided the design, data collection, and data analysis of
this descriptive study are as follows:
Question 1. Will the number of action and decision steps in the gold standard
protocol produced by the 3i+3r independent method differ from the number of action and
decision steps in the gold standard protocol produced by the 1i+3r incremental method?
Question 1a. What additional action and decision steps do experts recall during
follow-up interviews to review initial interview reports and during review of the
preliminary gold standard protocols?
Question 2. What percentage of action and decision steps do expert special
education teachers omit when describing how they teach informational literacy to
students with mild to moderate learning disabilities, in grades 3-5, using the 3i+3r
independent and 1i+3r incremental methods?
Question 3. Does an expert special education teacher's self-reported
biographical data relate to the number of action and decision steps yielded when
describing reading instruction in informational text for students in grades 3-5 with mild
to moderate learning disabilities?
Task
The complex task of reading instruction in informational text requires both
declarative (what and why of task performance) and procedural (when and how to
perform the steps) knowledge for experts to perform. Because the expert special
education teachers selected for participation in this present study have years of deliberate
practice and expertise in performing the reading procedure, their knowledge and skills in
the discrete steps required to execute the reading procedure were, in part, unconscious
and inaccessible in working memory. To elicit a complete description of the reading
task, multiple experts participated in CTA-guided interviews to capture the automated
declarative and procedural knowledge necessary for a novice teacher to perform the
reading task.
Participant Selection
Expert special education teachers from a school district in Southern California
were selected for participation in this present study. This researcher collaborated with
district-level administration to identify five elementary special education teachers with an
expertise in literacy instruction. Expert teachers identified for the study had at least 10
years of teaching experience in special education, were recognized as having achieved
proficiency and excellence in literacy instruction, had recent field performance of the task
and broad experience performing the task in various contexts, and had not served as
instructors of the task to other teachers.
Special education teachers identified by the school district as experts in reading
instruction were first asked to voluntarily participate in this inquiry study by completing a
biographical data survey to establish years of experience as a special education teacher,
education level, additional training and professional development in expository literacy
instruction, and self-reported student achievement data. The biographical data survey
given to teachers can be found in Appendix C. Teacher-reported data were entered into a
spreadsheet for organization and analysis; see Appendix D for the spreadsheet. Expert
special education teachers were then asked to voluntarily participate in CTA-guided
interviews to capture domain-specific declarative and procedural knowledge–in the form
of action and decision steps–of the reading task for the purpose of developing an
aggregated gold standard protocol.
Study Design
This present study employed two knowledge elicitation methods–the 3i+3r
independent method and 1i+3r incremental method–for conducting CTA to compare their
relative effectiveness in capturing highly automated declarative and procedural
knowledge processes from SMEs. Expert contributions to a gold standard protocol are considered to reach the point of diminishing returns when the knowledge elicited from a SME yields less than 10% in additional knowledge steps (Bartholio, 2010; Chao & Salvendy, 1994). The diminishing-return criterion weighs the investment of time and human effort in conducting CTA against the additional knowledge acquired from each successive SME.
maintain marginal utility value of each SME’s knowledge contributions, research
indicates that three SMEs are optimal for knowledge elicitation (Bartholio, 2010; Chao &
Salvendy, 1994; Crispen, 2010). Therefore, to remain consistent with research on the
optimal number of experts for knowledge elicitation, three SMEs were randomly
assigned to the 3i+3r independent method and 1i+3r incremental method groupings.
Knowledge was elicited from SMEs using a convergence of CPP, CDM, and PARI
methods outlined in Chapter Two: Literature Review of this present study. The
knowledge elicited from a randomly selected SME in the 3i+3r independent method
group served as the base for the 1i+3r incremental method interviews with SMEs. The
1i+3r incremental method was introduced in Flynn’s (2012) research study as an
alternative to the 3i+3r independent method of CTA.
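For illustration, the 10% diminishing-return threshold can be sketched as a short script. The step labels below are hypothetical, and the definition of marginal gain used here (a new SME's previously uncaptured steps as a share of the combined step pool) is one plausible formalization, not the formula used by the cited studies:

```python
def marginal_gain(protocol_steps, new_sme_steps):
    """Fraction of the combined step pool that a new SME's interview adds."""
    added = new_sme_steps - protocol_steps
    return len(added) / len(protocol_steps | new_sme_steps)

# Hypothetical step labels elicited from three successive SMEs.
sme_1 = {"preview text features", "activate prior knowledge", "model a think-aloud"}
sme_2 = {"preview text features", "model a think-aloud", "check comprehension"}
sme_3 = {"preview text features", "check comprehension"}

protocol = set(sme_1)
for label, steps in [("SME 2", sme_2), ("SME 3", sme_3)]:
    gain = marginal_gain(protocol, steps)
    print(f"{label} adds {gain:.0%} new knowledge")
    if gain < 0.10:  # below the 10% threshold: diminishing returns, stop eliciting
        break
    protocol |= steps
```

Under this sketch, a SME whose interview contributes fewer than 10% new steps to the aggregated protocol would end the elicitation, consistent with the three-SME guidance described above.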
Instrumentation
This study used qualitative methods to employ CTA-guided interviews for
knowledge elicitation from five expert special education teachers of students with mild to
moderate learning disabilities in order to generate 3i+3r independent and 1i+3r
incremental method gold standard protocols of reading instruction in informational text.
The elicitation of knowledge from expert special education teachers was achieved via
semi-structured interviews based on the CTA method proposed by Clark et al. (2008),
which includes: (a) collecting preliminary domain-specific knowledge; (b) identifying the
types of knowledge associated with the task; (c) applying the knowledge elicitation
technique in a semi-structured interview; (d) verifying and analyzing the results from the
interviews; and subsequently (e) formatting results for its intended purpose. The CTA
interview protocol is found in Appendix E.
As discussed in Chapter One: Overview of the Study, the 3i+3r independent method
is a knowledge elicitation technique whereby this knowledge analyst interviewed three
SMEs independently to capture the cognitive strategies and processes expert special
education teachers used to perform the same reading instruction task. Each of the three
semi-structured interviews was digitally recorded, transcribed, coded, and analyzed to
produce three independent interview reports, which were reviewed for accuracy and
completeness, and revised by the respective expert. In the 3i+3r independent method, the
final independent interview reports were aggregated into a preliminary gold standard
protocol and once again submitted to the three experts for verification and revision; see
Appendix A for a flowchart of the 3i+3r independent method. For the 1i+3r incremental
method, one final independent interview report from the 3i+3r independent method was
randomly selected to serve as the base for incremental method interviews. The final
independent interview report was then presented to a second expert during a semi-
structured knowledge elicitation interview. The second SME was prompted to review the
procedures set forth in the interview report and to revise or make changes as necessary to
capture a complete description of the reading instruction task. The revised interview
report was then presented to a third and final SME for review and revision. The final
revised interview report was then formatted into a preliminary gold standard protocol.
The preliminary gold standard protocol was submitted to the three SMEs for review and
revision to generate a final gold standard protocol; see Appendix B for a flowchart of the
1i+3r incremental method.
Data Collection
Following Institutional Review Board (IRB) approval, five expert special
education teachers from a school district in Southern California were asked to participate
in approximately two-hour long, semi-structured CTA interviews designed to capture the
implicit and nonobservable decisions, judgments, analyses, and other cognitive processes
employed during a routine reading instruction task (see Appendix E for the interview
protocol). After each initial interview report was drafted, the respective SME was asked
to participate in a follow-up interview with this researcher to review and verify
knowledge captured in the report for accuracy and completeness of the task. Final expert
interview reports were aggregated into 3i+3r independent and 1i+3r incremental method
preliminary gold standard protocols. Experts were then asked to provide a final review
and verification of the preliminary gold standard protocol for the knowledge elicitation
method in which he or she participated.
To limit confirmation bias on the part of this author, a junior researcher who is also a literacy practitioner, a senior researcher who is not a literacy practitioner conducted the first CTA-guided interview with a SME. This author, with oversight and
guidance from the senior researcher, conducted the remaining CTA-guided interviews
with experts. With permission from participating expert special education teachers,
interviews were digitally recorded for edited transcription and coding of knowledge
types. Edited transcription was used in lieu of verbatim transcription. This researcher
drafted edited transcripts for the purpose of representing knowledge and knowledge types
elicited during initial and follow-up interviews with experts. Based on the transcription
and coding of the 3i+3r independent method and 1i+3r incremental method interviews
with SMEs, a preliminary gold standard protocol was generated with step-by-step
procedures for the reading instruction task. All expert special education teachers who
participated in knowledge elicitation interviews and the subsequent follow-up interview
review process were identified only as SME A, SME B, SME C and so forth in the gold
standard protocols.
Data Analysis
To answer the first research question, Will the number of action and decision
steps in the gold standard protocol produced by the 3i+3r independent method differ
from the number of action and decision steps in the gold standard protocol produced by
the 1i+3r incremental method?, the number of action and decision steps captured in the
final 3i+3r independent and 1i+3r incremental method gold standard protocols were
analyzed to determine which knowledge elicitation method yielded a greater number of
critical action and decision steps.
To answer the first subquestion, question 1a, What additional action and decision
steps do experts recall during follow-up interviews to review initial interview reports and
during review of the preliminary gold standard protocols?, additional action and decision
steps contributed to interview reports and preliminary gold standard protocols during
expert review of the reading procedures elicited through CTA were tallied for analysis
and coding.
For the second question, What percentage of action and decision steps do expert
special education teachers omit when describing how they teach informational literacy to
students with mild to moderate learning disabilities, in grades 3-5, using the 3i+3r
independent and 1i+3r incremental methods?, knowledge steps from the 3i+3r
independent and 1i+3r incremental method gold standard protocols were transferred to a
Microsoft Excel spreadsheet for analysis. Using frequency counts, action and decision
steps captured in each expert special education teacher’s interview report were compared
with the total number of steps captured in the respective gold standard protocol to
determine the percentage of knowledge omissions when compared to the gold standard
protocol. A “1” indicated inclusion of knowledge that was in agreement with the gold
standard protocol, while a “0” indicated an omission or exclusion of knowledge. An
omission is defined as a step present in the gold standard protocol that is not included in an expert's interview report of the reading instruction task. Frequency counts were
converted to percentages to represent expert knowledge omissions.
Regarding the third and final question, Does an expert special education teacher's
self-reported biographical data relate to the number of action and decision steps yielded
when describing reading instruction in informational text for students in grades 3-5 with
mild to moderate learning disabilities?, the data were further analyzed to determine the relationship between each expert's experience in reading instruction and his or her recall of critical action and decision steps of the reading instruction task.
CHAPTER FOUR: RESULTS
Overview of Results
This study examined the declarative and procedural knowledge–represented in
action and decision steps–of five expert special education teachers using two knowledge
elicitation methods of CTA to capture their expertise. Following the CTA methods
outlined in Chapter Three: Methodology, three semi-structured interviews using the 3i+3r
independent method and three semi-structured interviews using the 1i+3r incremental
method were conducted with SMEs to elicit descriptions of procedures used to provide
reading instruction in informational text to students with mild to moderate learning
disabilities in grades 3-5. This chapter presents descriptive statistics to discuss the results
of the 3i+3r independent method and 1i+3r incremental method used to capture the
expertise of SMEs for the study. Results of the data analysis are organized by research
question.
Inter-Coder Reliability
Two knowledge analysts coded data from the three 3i+3r independent method
interview reports for the purpose of knowledge type cataloging of action and decision
steps captured by SMEs. To determine inter-coder reliability, the number of coded steps
in agreement between the knowledge analysts was tallied and divided by the total number
of coded action and decision steps. Upon completion of the coding, the knowledge
analysts met to reconcile discrepancies. The mean percentage of consensus, or inter-
coder reliability, was initially established at 97% with 100% inter-coder reliability
established after reconciliation of discrepancies for the 3i+3r independent method
interview reports. One 3i+3r independent method interview report was randomly
selected as the base for the 1i+3r incremental method knowledge elicitation process.
Once the percentage of coder agreement was established, this researcher coded the
remaining two SMEs’ 1i+3r incremental method interview reports and a senior researcher
reviewed each report for accuracy of coding, cohesion of action and decision steps, and
logical progression of procedures. SME interview reports were then aggregated into
3i+3r independent method and 1i+3r incremental method gold standard protocols.
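For illustration, the inter-coder reliability calculation described above amounts to a simple proportion of agreements; the code sequences below are hypothetical examples, not the study's coded reports:

```python
def intercoder_reliability(coder_a, coder_b):
    """Percent of steps that two knowledge analysts assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both analysts must code the same list of steps")
    agreements = sum(x == y for x, y in zip(coder_a, coder_b))
    return 100 * agreements / len(coder_a)

# Hypothetical codes for ten steps: "A" = action, "D" = decision.
analyst_1 = ["A", "A", "D", "A", "D", "D", "A", "D", "A", "A"]
analyst_2 = ["A", "A", "D", "A", "D", "A", "A", "D", "A", "A"]
print(f"{intercoder_reliability(analyst_1, analyst_2):.0f}% agreement")  # 90% agreement
```

After the analysts reconcile any discrepant codes, the same calculation over the reconciled lists yields 100% agreement.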
Research Questions
Question 1. Will the number of action and decision steps in the gold standard protocol
produced by the 3i+3r independent method differ from the number of action and decision
steps in the gold standard protocol produced by the 1i+3r incremental method?
Total action and decision steps. As set forth in Chapter Three: Methodology, to
answer the first study question, the number of action and decision steps captured in an
expert’s interview report was aggregated with two other experts’ interview reports to
form 3i+3r independent method and 1i+3r incremental method procedures for analysis
and coding of action and decision steps. 3i+3r independent method and 1i+3r
incremental method procedures were transferred to Microsoft Excel spreadsheets for
analysis of the action and decision steps captured by SMEs when describing reading
instruction in informational text. Coding spreadsheets are attached for the 3i+3r
independent method in Appendix G and the 1i+3r incremental method in Appendix H.
Using frequency counts, data provided from each expert’s interview report were
compared and analyzed with data from the respective gold standard protocol to determine
the total number of action and decision steps each SME recalled. Each step was coded an
“A” for action step or a “D” for decision step in a column of the spreadsheet. Steps that
were neither an action nor a decision were coded as an “S” for standard and were not
assigned a numerical value. The main procedure for each subtask was coded “O” for
objective and was not assigned a numerical value. Each SME was then assigned a
column in the spreadsheet of the knowledge elicitation method in which he or she
participated to track the inclusion or omission of declarative and procedural knowledge–
as measured by action and decision steps–when compared to the respective gold standard
protocol. A “1” indicated inclusion of knowledge in agreement with the gold standard
protocol and “0” indicated an omission or exclusion of the knowledge. The number of
action and decision steps for each expert was tallied and divided by the total number of
action and decision steps of all experts who participated in the respective knowledge
elicitation method to yield the percentage of knowledge agreements.
The total number of action and decision steps from the 3i+3r independent method
and 1i+3r incremental method was then compared to determine which knowledge
elicitation method yielded a greater number of critical action and decision steps. Table 1
provides a side-by-side comparison of total action and decision steps yielded from the
3i+3r independent method and 1i+3r incremental method gold standard protocols.
Table 1
Comparison of Overall Action and Decision Steps from 3i+3r Independent Method and
1i+3r Incremental Method Gold Standard Protocols

                                              Steps
Gold Standard Protocol    Total Action & Decision    Action    Decision
3i+3r Independent                   179                100        79
1i+3r Incremental                   140                 83        57
Difference                           39                 17        22
% Difference                         24.45              18.58     32.35
Action steps are observable task performances, while decision steps are
unobservable cognitive processes that inform task performance and cue the expert to
execute the action. A comparison of action and decision steps using two knowledge
elicitation methods demonstrated that the 3i+3r independent method yielded a greater
number of overall steps than the 1i+3r incremental method. SMEs collectively recalled
179 nonrepeating action and decision steps with the 3i+3r independent method and 140
nonrepeating action and decision steps with the 1i+3r incremental method. The 3i+3r
independent method yielded a total of 39 more steps, a difference of 24.45%. While the
3i+3r independent method produced 17 more action steps than the 1i+3r incremental
method, an 18.58% difference, the greatest contrast between the two methods was
evident in the number of critical decision steps. The 3i+3r independent method yielded
22 more decision steps and accounted for a 32.35% difference in decision steps when
compared to the 1i+3r incremental method. Percentage difference is the difference between two values (the reference value minus the comparison value), divided by the average of the two values, and multiplied by 100 to
convert to a percentage. The percentage difference formula was applied to the analysis of
action and decision step data when the purpose of the measure was to compare the
difference between steps captured in the 3i+3r independent method and the 1i+3r
incremental method. The formula was used to analyze data pertaining to the first
research question, except where otherwise noted.
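For illustration, the percentage difference formula can be expressed as a short script; the inputs below are the step totals reported in Table 1:

```python
def percentage_difference(reference, comparison):
    """Difference between two counts relative to their average, as a percentage."""
    return 100 * (reference - comparison) / ((reference + comparison) / 2)

# Table 1 comparisons of the two gold standard protocols.
print(round(percentage_difference(179, 140), 2))  # 24.45 (total action and decision steps)
print(round(percentage_difference(100, 83), 2))   # 18.58 (action steps)
print(round(percentage_difference(79, 57), 2))    # 32.35 (decision steps)
```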
Expert contribution of action and decision steps. Figure 1 reports action and
decision steps recalled by each SME for the two knowledge elicitation methods–the 3i+3r
independent method and 1i+3r incremental method.
Total Expert Knowledge Recall for the 3i+3r Independent Method and 1i+3r Incremental
Method

[Bar chart of action and decision steps recalled by each expert:
3i+3r Independent – SME 1: 48 action, 41 decision; SME 2: 24 action, 20 decision;
SME 3: 41 action, 31 decision.
1i+3r Incremental – SME A: 24 action, 20 decision; SME B: 35 action, 31 decision;
SME C: 57 action, 34 decision.]

Figure 1. Total nonrepeating steps captured from the 3i+3r independent method gold standard protocol: action and decision steps – 179; action steps – 100; decision steps – 79. Total nonrepeating steps captured from the 1i+3r incremental method gold standard protocol: action and decision steps – 140; action steps – 83; decision steps – 57.
It should be noted that many procedural steps yielded via CTA are not unique to
one individual SME; therefore, the action and decision steps captured from each
individual SME, as shown in Figure 1, do not add up to the total number of nonrepeating
action and decision steps aggregated into respective gold standard protocols, as shown in
Table 1. It should also be noted that the final, revised version of SME 2’s 3i+3r
independent method interview report served as the base (SME A) of the 1i+3r
incremental method gold standard protocol. The same expert is referred to as SME 2 in
the 3i+3r independent method and SME A in the 1i+3r incremental method.
Across methods, SMEs consistently recalled more action than decision steps.
SMEs who participated in the 3i+3r independent method recalled an average of 20.56%
more action than decision steps. SME 1 recalled 48 action steps and 41 decision steps, a
difference of 15.73% more action steps. SME 2 (SME A) recalled 24 action steps and 20
decision steps, an 18.18% difference. SME 3 recalled 41 action steps and 31 decision
steps, a difference of 27.78%. SMEs who participated in the 1i+3r incremental method
recalled an average of 26.95% more action than decision steps. SME A (SME 2) recalled
24 action steps and 20 decision steps, with 18.18% more action than decision steps. SME
B recalled 35 action steps and 31 decision steps, a difference of 12.12%. SME C recalled
57 action steps and 34 decision steps, a 50.55% difference.
3i+3r independent method analysis. The 3i+3r independent method captured a
total of 100 nonrepeating action steps and 79 nonrepeating decision steps from SMEs 1,
2, and 3. When compared to the 3i+3r independent method gold standard protocol, SME
1 independently recalled the highest percentage of steps with 48.00% of action steps and
51.90% of decision steps, while SME 2 independently recalled the lowest percentage of
steps with 24.00% of action steps and 25.32% of decision steps. SME 3 independently
recalled 41.00% of action steps and 39.24% of decision steps. The percentage difference between the highest (SME 1) and lowest (SME 2) numbers of steps captured for the 3i+3r independent method is 66.67% for action steps and 68.85% for decision steps.
SMEs were ranked according to the total number of action and decision steps
captured during focused 3i+3r independent method interviews to analyze the percentage
increase of each SME’s contributions to the gold standard protocol. SME 2 generated the
fewest number of steps with a total of 44 action and decision steps, while SME 3
generated median results with a total of 72 action and decision steps. SME 3’s
contributions accounted for a 63.64% increase in additional action and decision steps
above SME 2. SME 1 contributed a total of 89 action and decision steps, the highest
number of steps amongst SMEs, with a 23.61% increase in additional knowledge elicited above SME 3. Percentage increase is the change between two values (the comparison value minus the reference value), divided by the reference value, and multiplied by 100 to convert to a percentage. The percentage increase formula was
applied to the analysis of action and decision step data when the purpose of the measure
was to compare an increase in steps captured between SMEs.
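For illustration, the percentage increase formula can be expressed as a short script; the inputs below are the 3i+3r independent method SME step totals reported above (44, 72, and 89 steps):

```python
def percentage_increase(reference, comparison):
    """Increase of one count over a reference count, as a percentage."""
    return 100 * (comparison - reference) / reference

print(round(percentage_increase(44, 72), 2))  # 63.64: 72 steps relative to a 44-step reference
print(round(percentage_increase(72, 89), 2))  # 23.61: 89 steps relative to a 72-step reference
```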
1i+3r incremental method analysis. The 1i+3r incremental method captured a
total of 83 nonrepeating action steps and 57 nonrepeating decision steps from SMEs A,
B, and C. Disaggregation of the data revealed that SME A, who is also SME 2 in the
3i+3r independent method, recalled 28.92% of action steps and 35.09% of decision steps
when compared to the 1i+3r incremental method gold standard protocol. The addition of
steps from SME A’s independent interview report to SME B’s incremental interview
report accounted for percentage differences of 37.29% in action steps and 43.14% in
decision steps. Including the steps in agreement with SME A’s independent interview
report, SME B recalled 42.17% of action steps and 54.39% of decision steps when
compared to the gold standard protocol. The addition of steps from SME B's to SME C's incremental interview report accounted for percentage differences of 47.83% in action steps and 9.23% in decision steps. Including the steps in agreement with SME B's
incremental interview report, SME C recalled 68.67% of action steps and 59.65% of
decision steps when compared to the gold standard protocol. The percentage difference formula was applied to the analysis of these data.
Question 1a. What additional action and decision steps do experts recall during follow-
up interviews to review initial interview reports and during review of the preliminary
gold standard protocols?
Follow-up interviews. To answer this subquestion, the number of action and
decision steps captured during follow-up interviews to review independent and
incremental interview reports was tallied to analyze additional knowledge recall data for
each SME. Table 2 reports the recall of additional action and decision steps during the
initial review phase of CTA and the method of conducting the review.
Table 2
Additional Expert Knowledge Recall During Follow-up Interviews

                                  Steps
Method              SME    Action    Decision    Follow-up Interview
3i+3r Independent    1        4          6       no prior email, met to review interview report
                     2        3          7       emailed report, no changes; then requested in-person interview
                     3       11         16       no prior email, met to review interview report
1i+3r Incremental    A        0          0       emailed report to review, no interview
                     B        1          0       emailed report to review, met for follow-up interview
                     C        8          4       no prior email, met to review interview report

Note. SME 2 and SME A are the same expert, whose individual report was used for both the 3i+3r independent and 1i+3r incremental methods.
Analysis of the data revealed that for the majority of SMEs, the follow-up
interview phase captured additional action and decision steps for reading instruction in
informational text. These data show that when a SME reviewed his or her interview
report of the reading instruction procedure during an in-person, follow-up interview with
this researcher, the process yielded a greater number of action and decision steps as
compared to when a SME reviewed his or her interview report independent of this
researcher. SMEs who reviewed the interview report during an in-person, follow-up
interview with this researcher recalled an average of 6.5 additional action steps and 8.25
additional decision steps, while SMEs who reviewed the interview report without the
benefit of, or prior to, an in-person follow-up interview recalled an average of 0.5 additional action steps and zero additional decision steps.
As noted, one expert teacher’s independent interview report was used for both
gold standard protocols; the expert’s action and decision steps are reported out as SME A
in the 1i+3r incremental method and SME 2 in the 3i+3r independent method. SME A
(SME 2) independently reviewed the interview report for the 1i+3r incremental method
and no revisions or additional steps were added. However, the same person (SME 2;
SME A) later met in-person with this researcher to review the interview report for the
3i+3r independent method and the process yielded three additional action steps and seven
additional decision steps. The additional action and decision steps captured from SME 2
during the follow-up interview to review the 3i+3r independent interview report were
also added to the 1i+3r incremental interview report for SME A.
Preliminary gold standard protocol review. Table 3 lists additional action and
decision steps generated during the preliminary gold standard protocol review phase. All
SMEs were contacted via email to review his or her respective preliminary gold standard
protocol.
Table 3
Additional Expert Knowledge Recall During the Preliminary Gold Standard Protocol
Review Phase

                              Steps
Method              SME    Action    Decision
3i+3r Independent    1        4          0
                     2        0          0
                     3        3          4
1i+3r Incremental    A        0          0
                     B        0          0
                     C        0          0
SMEs who made additions to the 3i+3r independent or 1i+3r incremental method gold standard protocols during the protocol review phase also increased their percentage of knowledge in agreement with the respective gold standard protocol and decreased their percentage of knowledge omissions.
SMEs 1, 2, and 3 responded electronically with changes to the preliminary 3i+3r
independent method gold standard protocol using the track changes feature of Microsoft
Word. SME 1 added four action steps to the gold standard protocol, while SME 3 added
three action steps and four decision steps. SME 2 added comments to the preliminary
gold standard protocol, but added no action or decision steps. SMEs 1, 2, and 3 were
participants in the 3i+3r independent method of CTA.
SMEs A, B, and C reviewed the 1i+3r incremental method gold standard protocol,
but made no revisions or additions to action and decision steps. No changes were made
to the 1i+3r incremental method gold standard protocol during the protocol review phase.
Question 2. What percentage of action and decision steps do expert special education
teachers omit when describing how they teach informational literacy to students with
mild to moderate learning disabilities, in grades 3-5, using the 3i+3r independent and
1i+3r incremental methods?
Total knowledge omissions. To answer the second study question, information
from the 3i+3r independent and 1i+3r incremental method gold standard protocols was
transferred to Microsoft Excel spreadsheets for analysis of the percentage of critical
procedural steps omitted when describing reading instruction in informational text.
Using frequency counts, data provided from each expert’s interview report were
compared and analyzed with the data from the gold standard protocol to determine the
number of action and decision steps included or omitted. Each step was coded an “A” for
action step, a “D” for decision step, “S” for standard, or “O” for objective in a column of
the spreadsheet; see Appendix G for the 3i+3r independent method spreadsheet and
Appendix H for the 1i+3r incremental method spreadsheet. Each SME was then assigned
a column within the spreadsheet of his or her respective knowledge elicitation method to
track inclusion or omission of knowledge when compared to the gold standard protocol.
A “1” in the SME’s column indicated inclusion of knowledge in agreement with the gold
standard protocol. A “0” indicated an omission or exclusion of the knowledge. A
numerical value was not assigned to steps considered a standard, denoted by “S,” or an
objective, denoted by “O.” The number of action and decision steps for each expert was
tallied and divided by the total number of action and decision steps of all experts who
participated in the respective knowledge elicitation method to yield the percentage of
knowledge agreements with the gold standard protocol. To determine knowledge
omissions, this percentage was then subtracted from 100%.
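The tally-and-divide procedure described above can be sketched in code. The following is a minimal illustration with hypothetical step codes and a hypothetical SME inclusion column, not the study's actual spreadsheet data:

```python
# Sketch of the omission analysis described above (hypothetical data).
# Each gold standard step is coded "A" (action), "D" (decision),
# "S" (standard), or "O" (objective); only A and D steps are scored.
# A 1 means the SME included the step; a 0 means it was omitted.

def omission_percentage(step_codes, sme_column):
    """Percentage of action/decision steps the SME omitted,
    relative to the gold standard protocol."""
    scored = [(code, value) for code, value in zip(step_codes, sme_column)
              if code in ("A", "D")]
    included = sum(value for _, value in scored)
    agreement = 100.0 * included / len(scored)   # knowledge agreements
    return 100.0 - agreement                     # knowledge omissions

codes = ["O", "A", "A", "D", "S", "A", "D", "D"]   # hypothetical protocol
sme_1 = [0,    1,   0,   1,   0,   1,   0,   1]    # hypothetical inclusions
print(round(omission_percentage(codes, sme_1), 2))  # 4 of 6 scored steps included
```

Standards ("S") and objectives ("O") are skipped during scoring, mirroring the spreadsheet coding scheme in which those rows received no numerical value.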
Action and decision steps refer to the knowledge elicited from SMEs during semi-
structured CTA-guided interviews and include information on how to perform a task.
These steps are critical to successful task execution. Based upon the data analysis, SMEs
omitted a significant number of action and decision steps during the knowledge
elicitation process for both the 3i+3r independent method and 1i+3r incremental method
of CTA. Figure 2 provides a side-by-side analysis of the action and decision steps
omitted by each SME when compared to the respective knowledge elicitation method
gold standard protocol.
Total Expert Knowledge Omissions for the 3i+3r Independent Method and 1i+3r
Incremental Method Gold Standard Protocols
Figure 2. Total nonrepeating steps captured from the 3i+3r independent method gold standard protocol:
action and decision steps – 179; action steps – 100; decision steps – 79. Total nonrepeating steps captured
from the 1i+3r incremental method gold standard protocol: action and decision steps – 140; action steps –
83; decision steps – 57.
Analysis of 3i+3r independent method omissions. Table 4 and Table 5 represent
knowledge omission data for the 3i+3r independent and 1i+3r incremental method gold
standard protocols. The data contained in these tables were used to conduct a within
method analysis for each knowledge elicitation method. Table 4 shows the action and
decision steps omitted by SMEs 1, 2, and 3 when compared to the 3i+3r independent
method gold standard protocol.
[Figure 2, a bar chart, displays the action and decision steps omitted by each SME. 3i+3r independent method: SME 1, 52 action/38 decision; SME 2, 76/59; SME 3, 59/48. 1i+3r incremental method: SME A, 59/37; SME B, 48/26; SME C, 26/23.]
Table 4
Expert Knowledge Omissions from the 3i+3r Independent Method Gold Standard
Protocol
                    Steps Omitted
Data Source      Total Action & Decision    Action           Decision
SME 1             90    50.28%              52    52.00%     38    48.10%
SME 2            135    75.42%              76    76.00%     59    74.68%
SME 3            107    59.78%              59    59.00%     48    60.76%
Mean Omissions   110.67 61.82%              62.33 62.33%     48.33 61.18%
Range             45                        24               21
SD                22.72                     12.34            10.50
Note. Total nonrepeating steps captured from the 3i+3r independent method gold standard protocol: action
and decision steps – 179; action steps – 100; decision steps – 79.
A total of 179 nonrepeating steps were generated by SMEs 1, 2, and 3 to describe
reading instruction in informational text in the final 3i+3r independent method gold
standard protocol. Of these steps, 100 were action steps and 79 were decision steps. The
total number of action and decision steps included revisions and additions SMEs made to
their independent interview reports and to the preliminary 3i+3r independent method gold
standard protocol during the CTA review phases.
Data shown in Table 4 were analyzed for expert omissions of critical information
when compared to the 3i+3r independent method gold standard protocol. Overall, the
percentage of expert knowledge omissions when describing the reading instruction
procedure for the final 3i+3r independent method averaged: 110.67 steps, or 61.82% (SD
± 22.72) for both action and decision steps; 62.33 steps, or 62.33% (SD ± 12.34) for
action steps; and 48.33 steps, or 61.18% (SD ± 10.50) for decision steps. The percentage
of knowledge omissions varied significantly amongst SMEs when compared to the gold
standard protocol with ranges from: 50.28% to 75.42% for both action and decision steps;
52.00% to 76.00% for action steps; and 48.10% to 74.68% for decision steps.
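The summary statistics reported for Table 4 follow standard formulas, with the standard deviation computed as a sample statistic (n − 1 denominator). A brief sketch using the reported total omission counts:

```python
import statistics

# Total action-and-decision omission counts from Table 4 (SMEs 1, 2, 3).
omissions = [90, 135, 107]
gold_standard_total = 179   # nonrepeating action + decision steps

mean = statistics.mean(omissions)              # mean omissions per SME
rng = max(omissions) - min(omissions)          # range
sd = statistics.stdev(omissions)               # sample standard deviation
mean_pct = 100.0 * mean / gold_standard_total  # mean omission percentage

print(round(mean, 2), rng, round(sd, 2), round(mean_pct, 2))
# 110.67 45 22.72 61.82
```

The same computation applied to Table 5's counts (96, 74, 49 against 140 total steps) reproduces that table's mean, range, and standard deviation as well.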
Analysis of 1i+3r incremental method omissions. Table 5 shows the action and
decision steps omitted by SMEs A, B, and C when compared to the 1i+3r incremental
method gold standard protocol.
Table 5
Expert Knowledge Omissions from the 1i+3r Incremental Method Gold Standard
Protocol
                    Steps Omitted
Data Source      Total Action & Decision    Action           Decision
SME A             96    68.57%              59    71.08%     37    64.91%
SME B             74    52.86%              48    57.83%     26    45.61%
SME C             49    35.00%              26    31.33%     23    40.35%
Mean Omissions    73.00 52.14%              44.33 53.41%     28.67 50.29%
Range             47                        33               14
SD                23.52                     16.80             7.37
Note. Total nonrepeating steps captured from the 1i+3r incremental method gold standard protocol: action
and decision steps – 140; action steps – 83; decision steps – 57.
A total of 140 nonrepeating action and decision steps were generated by SMEs to
describe reading instruction in informational text in the final 1i+3r incremental method
gold standard protocol. Of the steps outlined in the gold standard protocol, 83 were
action steps and 57 were decision steps. The total action and decision steps included
revisions and additions SMEs made to their incremental interview reports and to the
preliminary 1i+3r incremental method gold standard protocol during the CTA review
phases.
Critical knowledge omissions, when compared to the 1i+3r incremental method
gold standard protocol, were analyzed using the data presented in Table 5. Overall, the
percentage of expert knowledge omissions when describing the reading instruction
procedure for the final 1i+3r incremental method averaged: 73.00 steps, or 52.14% (SD ±
23.52) for both action and decision steps; 44.33 steps, or 53.41% (SD ± 16.80) for action
steps; and 28.67 steps, or 50.29% (SD ± 7.37) for decision steps. The percentage of
knowledge omissions varied significantly amongst SMEs when compared to the gold
standard protocol with ranges from: 35.00% to 68.57% for both action and decision steps;
31.33% to 71.08% for action steps; and 40.35% to 64.91% for decision steps.
Question 3. Does an expert special education teacher's self-reported biographical data
relate to the number of action and decision steps yielded when describing reading
instruction in informational text for students in grades 3-5 with mild to moderate
learning disabilities?
Relationship between self-reported teacher data and knowledge recall. To
answer the final study question, self-reported teacher data and knowledge recall were
examined to determine any potential relationships between predictor and outcome
variables. Predictor variables included total hours of professional development, total
years of teaching experience, years since last credential obtained, and percentage of
students proficient on standardized state assessments, while outcome variables included
total action steps, total decision steps, and total action and decision steps captured in
SMEs’ independent and incremental interview reports. Table 6 presents findings on
correlational data.
Table 6
Correlations between Self-Reported Expert Biographical Data and Knowledge Recall
Pearson Correlation Coefficients, N = 5
Prob > |r| under H0: Rho=0
Outcome        Predictor Variables: r (p)
Variables    Yrsexp             Tothrsprodev       Yrssincecredobt    Percstudprof
Totactdec    -0.20489 (0.7410)   0.16634 (0.7892)  -0.56199 (0.3241)  -0.66000 (0.2255)
Totact       -0.29859 (0.6255)  -0.02994 (0.9619)  -0.48614 (0.4064)  -0.58680 (0.2983)
Totdec       -0.02502 (0.9681)   0.46280 (0.4325)  -0.60946 (0.2752)  -0.68993 (0.1973)
Note. Variable key: Yrsexp = years of experience; Tothrsprodev = total hours of professional development;
Yrssincecredobt = years since last credential/masters obtained; Totactdec = total number of action and
decision steps; Totact = total number of action steps; Totdec = total number of decision steps; Percstudprof
= percentage of students advanced or proficient on state standardized testing.
A correlation coefficient summarizes the relationship between two variables with a
numerical value ranging between -1 and +1 (Salkind, 2013). Coefficients were
calculated for each of the four predictor variables paired with each of the three
outcome variables shown in Table 6. Correlations between the four predictor
variables and the outcome variable of total action and decision steps ranged, in
order of magnitude, from approximately 0.17 (a positive correlation with total hours
of professional development) to -0.66 (an inverse correlation with the percentage of
students advanced or proficient on state standardized testing). For the inverse
correlation, the greater the number of students advanced or proficient on state
standardized testing, the lower the total number of action and decision steps yielded
from SMEs. Correlations for the four predictor variables with
the outcome variable of total action steps ranged from approximately -0.03 to -0.59,
while correlations with the outcome variable of total decision steps ranged from
approximately 0.46 to -0.69; the 0.46 coefficient reflects a moderate positive
relationship with the total hours of professional development predictor variable.
It should be emphasized that the Pearson correlation coefficient measures the
strength of linear relationships between variables. Scatterplots were used to check
for linear relationships in this present study: three sets of scatterplots were
analyzed, with the four predictor variables plotted against each of the three
outcome variables. Given the relatively small sample size (N = 5), no strong linear
relationships between variables were observed, and none of the coefficients was
statistically significant.
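Pearson's r, as reported in Table 6, can be computed directly from paired observations; with N = 5 SMEs, its significance test uses a t statistic with N − 2 = 3 degrees of freedom. A self-contained sketch with hypothetical data (the vectors below are illustrative only, not the study's actual biographical variables):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical predictor/outcome vectors for five SMEs.
years_experience = [12, 15, 20, 11, 25]
total_steps      = [95, 80, 70, 88, 60]

r = pearson_r(years_experience, total_steps)
n = len(years_experience)
# Test statistic compared against a t distribution with n - 2 df.
t = r * math.sqrt((n - 2) / (1 - r ** 2))
print(round(r, 4))
```

With only five observations, even a coefficient of large magnitude can fail to reach significance, which is consistent with the p-values in Table 6.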
CHAPTER FIVE: DISCUSSION
Overview of Study
The purpose of this mixed-methods, descriptive study was to determine the effects
of automated knowledge on expert special education teachers’ ability to recall the action
and decision steps required to teach reading in informational text to students in grades 3-5
with mild to moderate learning disabilities. Specifically, the study first sought to analyze
and compare the effectiveness of the 3i+3r independent and 1i+3r incremental methods
of CTA in capturing critical action and decision steps of the reading task from expert
special education teachers. This study sought, as the second research question, to identify
critical knowledge omissions when expert special education teachers describe reading
instruction in informational text for students in grades 3-5 with mild to moderate learning
disabilities. Research has shown that experts may omit up to 70% of critical knowledge
when communicating skills to others, although experts can perform the task without
conscious recollection of these skills (Clark, 2014; Clark et al., 2008). Automation of
skills is critical to understanding expertise and may be a contributing factor in the so-
called research-to-practice gap in special education. According to McLeskey and
Billingsley (2008), the research-to-practice gap is influenced by pre-service and in-
service teacher preparation programs. Traditional forms of in-service training and
professional development provided to novice and practicing special education teachers
have generally limited impact on classroom practice, or on achievement outcomes of
students with learning disabilities (Greenwood, 2001). As the final research question, the
study sought to determine whether total hours of professional development and teaching
experience related to knowledge elicited from experts when describing the reading task.
The effects of CTA on capturing experts’ procedural knowledge have been researched
across multiple domains including medicine and the military (Clark, 2014); however, this
present study is the first known study to capture expert special education teachers’
procedural knowledge for a reading instruction task. The knowledge elicited from SMEs
may be used to inform and design instruction (Feldon & Yates, 2011) in pre-service and
in-service teacher training programs. Moreover, CTA-informed instruction has been shown to
be more effective than traditional training methods that are based upon behavioral task
analysis and expert self-reports (Clark et al., 2008).
This chapter first discusses the process of conducting CTA with expert special
education teachers in the context of prior CTA studies and then discusses the results of
the study organized by research question. Limitations of the study, implications, and
future research are also discussed.
Process of Conducting Cognitive Task Analysis
Selection of Experts
Literature on the selection of experts to engage in CTA suggests consideration of
the following criteria to identify a SME: at least 3-5 years of consistent success in
performing the task, with 10 or more years being ideal; recent field performance of the
task; broad experience performing the task across settings; and no work as an
instructor of the task for a year or more (Clark, 2014; Clark et al., 2008; Flynn, 2012).
The selection of SMEs for this study maintained fidelity to the criteria set forth in
prominent CTA research (Clark, 2014; Clark et al., 2008; Flynn, 2012). As such, experts
selected for the study demonstrated consistently high performance in reading instruction
for students in grades 3-5 with mild to moderate disabilities. Each SME had more than
10 years of experience in performing the reading instruction task with students in their
classrooms and had performed the task within six months of participating in CTA. While
the experts may have served as mentors to other teachers, they had not served primarily
as instructors to other teachers for the reading instruction task captured through CTA.
Lastly, the SMEs each had broad experience in performing the reading instruction task
with hundreds of students in third through fifth grade.
Bartholio (2010) and Chao and Salvendy (1994) argue that marginal cost-benefit
utility is achieved with three SMEs, while Crispen (2010) maintains that three to four
SMEs provide the optimal level of procedural steps in a CTA gold standard protocol.
Although eliciting knowledge from more than three SMEs will yield additional action
and decision steps, the gain is subject to diminishing marginal returns: relative to the
time and human effort invested in conducting CTA, the return is considered marginal
when the additional knowledge acquired from a SME amounts to less than 10%
(Bartholio, 2010). In both the 3i+3r independent and 1i+3r incremental methods,
knowledge elicited from three experts was compiled to develop gold standard
protocols.
Collection of Data
Data were collected using the 3i+3r independent and 1i+3r incremental methods
of knowledge elicitation discussed in Chapter Three: Methodology. The 3i+3r
independent method selected and adapted for the study has been documented as effective
in capturing expertise to inform instructional design of the task and to develop
corresponding instructional materials (Clark et al., 2008; Flynn, 2012). The 1i+3r
incremental method is an adaptation of the 3i+3r independent method and was first
introduced in Flynn’s (2012) dissertation as an alternate, yet effective, CTA methodology
to capture automated expert knowledge. The two knowledge elicitation methods differ in
that the 3i+3r independent method aggregates three final, revised independent interview
reports of SMEs into a gold standard protocol, while the 1i+3r incremental method uses
the final, revised independent interview report of a single SME to serve as the base for
semi-structured incremental interviews with two additional SMEs to develop a gold
standard protocol. Both knowledge elicitation methods incorporate the knowledge of
three SMEs into final gold standard protocols. By applying these focused knowledge
elicitation methods to multistage interview processes, experts’ unconscious and
automated procedural knowledge in reading instruction was effectively captured for this
study.
As outlined in Clark et al. (2008) and consistent with other research studies
(Canillas, 2010; Flynn, 2012; Tolano-Leveque, 2010), CTA was conducted in the
following five stages: (a) collection of preliminary knowledge; (b) identification of
knowledge representations; (c) application of focused knowledge elicitation methods; (d)
analysis and verification of data collected; and (e) formatting of results for the intended
purpose or application.
This researcher reviewed literature in the task domain of reading instruction in
informational text to collect preliminary knowledge and to compile a sequence of task
parts along with the knowledge types required to perform the task. These knowledge
representations became the focus of CTA for this study and were expanded to
accommodate any underlying skills that emerged during interviews with expert teachers.
It should be noted that this researcher is experienced in the task domain as a special
education teacher.
Focused knowledge elicitation methods were applied to CTA interviews with
SMEs. Similar to other 3i+3r independent method research studies on expert knowledge
recall and omissions (Canillas, 2010; Flynn, 2012; Tolano-Leveque, 2010), SMEs who
participated in the 3i+3r independent method were asked to provide a description of the
major tasks and subtasks of the reading procedure in the order in which they are
performed. SMEs were also asked to describe decision steps, indicators of student
performance, and materials needed to perform the task. Unlike other 3i+3r independent
method research studies, SMEs were not asked about sensory cues required to perform
the task. Edited transcriptions of digitally recorded interviews with SMEs were
formatted into independent interview reports and coded for the knowledge necessary to
perform the task including objectives/procedures, standards, and declarative and
procedural knowledge including both action and decision steps. Final, revised
independent interview reports were aggregated into a preliminary gold standard protocol
for expert review.
Similar to Flynn’s (2012) research study on the relative effectiveness of the 3i+3r
independent and 1i+3r incremental methods of knowledge elicitation, the knowledge of
three SMEs was used to develop gold standard protocols. Using the final, revised
independent interview report from a SME as the base of the 1i+3r incremental method
gold standard protocol, a second SME was given the report to review, revise, and make
additions to the declarative and procedural knowledge required to perform the reading
instruction task. The final, revised incremental interview report was then given to a third
SME to review.
Unlike Flynn’s (2012) research study, one SME’s independent interview report
was used for both the 3i+3r independent and 1i+3r incremental methods. Each SME
who participated in the 1i+3r incremental method was interviewed to elicit his or her
expertise in the reading instruction task. Edited transcriptions using digital recordings of
the interviews were used to draft incremental interview reports and were coded for the
same knowledge types as in the 3i+3r independent method. Final, revised incremental
interview reports were aggregated into a preliminary gold standard protocol for expert
review.
This researcher drafted edited transcriptions of digitally recorded independent and
incremental interviews with SMEs, rather than verbatim transcriptions, for the purpose of
producing independent and incremental interview reports. Edited transcriptions captured
each of the procedural steps described by SMEs and were used to construct interview
reports to code for knowledge types. This is a variation from the verbatim transcription
most commonly used in CTA interviews with SMEs. The rationale for using edited
transcription in this study is that verbatim transcription was not necessary to provide this
researcher with the interview content needed to draft independent and incremental
interview reports or to interpret knowledge types. It should be noted that digital
recordings of interviews were reviewed extensively for accuracy of procedural steps.
This researcher took brief notes during each of the expert interviews and compared notes
with edited transcriptions to ensure the accuracy and fidelity of declarative and
procedural knowledge captured from SMEs.
After careful review of the independent and incremental interview reports to
check for missing or incomplete steps, concept mapping was used to ensure the cohesion
and completeness of tasks. This researcher noted procedural steps requiring clarification
and noted questions to ask SMEs during follow-up interviews. During the review phase,
SMEs were asked to review interview reports for accuracy, make changes and corrections
to errors in procedures, and to remove unnecessary steps. Revised interview reports were
then submitted to SMEs for final approval. Final, revised interview reports were
aggregated into preliminary gold standard protocols and emailed to SMEs during the
protocol review phase. Results of this process are discussed later in this chapter.
Discussion of Findings
No formal hypotheses were developed for this study; rather, the study was guided
by three main research questions and a subquestion.
Question 1. Will the number of critical action and decision steps in the gold standard
protocol produced by the 3i+3r independent method differ from the number of action and
decision steps in the gold standard protocol produced by the 1i+3r incremental method?
3i+3r independent method versus 1i+3r incremental method. Across
knowledge elicitation methods, experts consistently recalled more action than decision
steps. The difference between expert recall of action steps over and above decision steps
is suggestive of the difficulty in recalling unobservable cognitions involved in critical
decision-making. Expert knowledge becomes automated as a result of conscious,
deliberate practice in performing a task (Clark, 2014; Clark & Estes, 1996; Ericsson,
2004; Ericsson et al., 1993). As the automaticity of skill develops, knowledge transfers
from working memory to long-term memory to free the mental resources required to
effectively problem solve in novel situations. While automated knowledge can improve
cognitive efficiency, it can also impede experts’ deliberate access and retrieval of
knowledge pertaining to the critical action and decision steps necessary to perform a task
(Clark, 2008b; Ericsson, 2004; Sweller, 1988), as demonstrated in this research study.
As discussed in Chapter Four: Results, the 3i+3r independent method captured
more action and decision steps than the 1i+3r incremental method, with steps for each
method totaling 179 and 140, respectively. The 3i+3r independent method yielded 39
more total action and decision steps with 17 more action steps and 22 more decision steps
than the 1i+3r incremental method.
A possible explanation for the 3i+3r independent method generating significantly
more action and decision steps than the 1i+3r incremental method may be that the
independent method encourages SMEs to think deeply and critically about the
declarative and procedural knowledge required to perform a task, while the
incremental method inhibits the knowledge, particularly decision-making knowledge,
that SMEs recall and contribute to the gold standard protocol. An unknown priming effect
(Clark, 2008b; Squire, 2004; Tulving & Schacter, 1990) of the 1i+3r incremental method
may have influenced the thought processes and procedural knowledge recall of SMEs
who participated in the study. These findings may also suggest that prompting SMEs
to analyze declarative and procedural knowledge elicited by other SMEs, and to
determine the need for additional action and decision steps, calls into question the
SMEs’ own knowledge of the reading procedure. As a result, SMEs may
unknowingly resort to taking mental shortcuts in the form of heuristic analysis (Aronson,
2012) to ease the cognitive load (Sweller, 1988) of recalling unobservable and
unconscious cognitive processes involved in performing a task. As “cognitive misers,”
people adopt strategies to simplify complex problems and conserve limited cognitive
capacity to process information (Aronson, 2012, p. 119). While cognitive miserliness can
be efficient in creating mental shortcuts to reduce cognitive load, the information that is
overlooked in this process can result in significant decision-making errors or omissions
(Aronson, 2012). According to Aronson (2012), judgmental heuristics, or mental
shortcuts, are most often used when an immediate decision is required, when a person
is experiencing cognitive overload, when issues are insignificant or unimportant, or
when there is a lack of information available to make thoughtful decisions. During the 1i+3r incremental
method knowledge elicitation process, experts may experience pressure to quickly
analyze another expert’s procedures and to either produce additional action and decision
steps, or simply agree with the steps already established by an expert to ease the cognitive
load. The application of simple, approximate strategies to execute decision-making may
have failed to incorporate holistic considerations (Aronson, 2012) of the steps involved in
the reading procedure. It may be that the 1i+3r incremental method is more of an
effective strategy to elicit knowledge of linear procedures, rather than more complex
procedures such as a reading instruction task.
In this present study, the 3i+3r independent method generated more critical action
and decision steps than the 1i+3r incremental method. These results are in contrast with
the findings in Flynn’s (2012) research study on the relative effectiveness of the two
knowledge elicitation methods. An explanation of this result may be found in the nature
of the task (Hoffman, 1987). It may be that the interview task investigated by Flynn
(2012) was more linear than the complex task of teaching informational text to students
with mild to moderate disabilities. As suggested by Hoffman (1987), the brevity and
complexity of Flynn’s task may differ from those of the task in this current study,
which is conducted over the course of a week or more. Therefore, future
research should identify possible interactions between the content and complexity of
expert knowledge and the specific structured knowledge elicitation methods most suitable
to the complexity of the task in order to maximize the educational benefits of
instructional content generated by way of CTA (Feldon, 2007).
Action steps versus decision steps. All SMEs recalled more action than decision
steps in the reading instruction in informational text task domain, regardless of the
knowledge elicitation method used to capture expertise. These findings demonstrate that
across knowledge elicitation methods, SMEs were consistently better able to recall
knowledge on how to execute a task in an action step, than knowledge of when to execute
the task in a decision step. In Clark’s (2014) view, “The high recall of action steps may
result from experts forming a mental image of their actions and describing the image.
Since our decisions are not directly observable, even when they are conscious they may
not lend themselves to images that represent thought processes” (p. 545).
SMEs who participated in the 3i+3r independent method recalled an average of
20.56% more action than decision steps, with differences ranging from 15.73% to
27.78% per SME. These results
are consistent with findings in research studies conducted in the medical field, which
applied the 3i+3r independent method of CTA to elicit expert knowledge pertaining to
complex medical procedures (Canillas, 2010; Tolano-Leveque, 2010). In a study
conducted using a central venous catheter procedure for CTA, SMEs collectively recalled
44 action steps and 14 decision steps (Canillas, 2010). In a similar study involving an
open cricothyrotomy procedure, SMEs recalled 37.95% more action than decision steps
(Tolano-Leveque, 2010).
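The per-SME percentages reported in this section are consistent with a percent difference taken relative to the mean of the two recall counts, where recalled steps equal the gold standard total minus the omissions in Table 4. Assuming that formula (an inference from the reported figures, not a computation the text states explicitly), a brief sketch:

```python
def percent_more_action(action_recalled, decision_recalled):
    """Percent difference between action and decision recall counts,
    relative to their mean. This formula is assumed to underlie the
    15.73%-27.78% per-SME figures reported above."""
    mean = (action_recalled + decision_recalled) / 2
    return 100.0 * (action_recalled - decision_recalled) / mean

# Recalled steps = gold standard steps minus omissions (Table 4):
# e.g., SME 1 recalled 100 - 52 = 48 action and 79 - 38 = 41 decision steps.
print(round(percent_more_action(48, 41), 2))   # SME 1 -> 15.73
print(round(percent_more_action(24, 20), 2))   # SME 2 -> 18.18
print(round(percent_more_action(41, 31), 2))   # SME 3 -> 27.78
```

Averaging the three values yields 20.56%, matching the mean reported for the 3i+3r independent method.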
SMEs who participated in the 1i+3r incremental method for this present study
recalled an average of 26.95% more action than decision steps, with the percentage of
action steps over and above decision steps presenting at 18.18% for SME A, 12.12% for
SME B, and 50.55% for SME C. Flynn’s (2012) study found that the 1i+3r
incremental method captured more decision steps than the 3i+3r independent method.
At this time, however, no other known research studies corroborate the finding of this
present study that the 1i+3r incremental method captures a greater number of action
than decision steps, although future studies may replicate it. The present 1i+3r
incremental method findings are
interesting in that there is a significant discrepancy between the difference in action and
decision steps captured from SME C as compared to the differences in action and
decision steps captured between SMEs A and B. This may be attributed to the fact that
SME C reviewed and made final contributions to an incremental interview report
constructed using SME A and SME B’s knowledge of the reading instruction task. The
discrepancy between SME C’s action and decision steps may suggest the SME’s reliance
on the use of his or her explicit memory to consciously and intentionally recall the what
to do declarative knowledge of the steps involved in the reading instruction task, rather
than his or her implicit memory for unconscious and unintentional recall of critical when
and how to procedural knowledge for the task (Bargh & Chartrand, 1999; Clark, 2008b;
Tulving & Schacter, 1990). This implication may serve as a limitation of the 1i+3r
incremental method in this study and in future research.
Question 1a. What additional critical action and decision steps do experts recall during
follow-up interviews to review preliminary interview reports and during review of the
preliminary gold standard protocols?
Expert review of preliminary interview reports and preliminary gold
standard protocols. As discussed in Chapter Four: Results, when compared to the
1i+3r incremental method, the 3i+3r independent method yielded a greater number of
additional action and decision steps during follow-up interviews with SMEs, and during
the electronic gold standard protocol review phase.
Two methods were used to validate interview reports. First, the two SMEs who
participated in the 1i+3r incremental method were each sent their interview reports
electronically and asked to review and return any comments and/or changes via email.
This method yielded no additions or changes to decision steps. As a result, this
researcher then requested in-person follow-up interviews with SMEs who participated
in both the 3i+3r independent and 1i+3r incremental methods to review interview
reports, which resulted in a number of knowledge step additions and changes.
Second, to validate the preliminary 3i+3r independent and 1i+3r incremental method
gold standard protocols, SMEs were emailed the respective protocol to review
electronically and record any changes using the track changes feature in Microsoft
Word. SMEs who contributed to the 1i+3r incremental method made no revisions to the
gold standard protocol, while SMEs who contributed to the 3i+3r independent method
made numerous revisions, including changes to both action and decision steps.
Based upon these findings, it can be inferred that an in-person, follow-up
interview to review the preliminary 3i+3r independent or 1i+3r incremental interview
report for the first time with a SME is the most effective method for capturing additional
declarative and procedural knowledge. A SME may respond more carefully and review
the interview report more critically when it is presented during a follow-up interview, and
this may be due, at least in part, to the personal interaction and communicative exchange
with the researcher. It may also be the case that when interview reports are emailed to
experts before a follow-up, in-person interview, the review process becomes
repetitive and inadvertently discourages experts from investing the mental effort required
for multiple, critical reviews of the interview report. As such, SMEs may apply a
superficial, rather than critical, analysis to the cohesion and completeness of the interview
report. This researcher began the 1i+3r incremental method review process by emailing
SMEs A and B their preliminary interview reports in advance of the follow-up,
in-person interview, which captured minimal additional procedural steps. In an attempt
to elicit further declarative and procedural knowledge from SMEs participating in the
3i+3r independent and 1i+3r incremental methods, all SMEs thenceforth were presented
with the interview report at the follow-up interview. This approach captured a
significantly greater number of additional action and decision steps from SMEs.
A final consideration of the differences between additional action and decision
steps captured from participants in the 3i+3r independent and 1i+3r incremental methods
during follow-up interviews and the electronic preliminary gold standard protocol review
phase is that SMEs in the former method had no previous interaction with an interview
report of any kind and approached the review phase without familiarity with the structure of
an interview report. This may have contributed to the consistent feedback provided by
SMEs in the 3i+3r independent method. One explanation for the differences in SMEs'
review and verification of the respective gold standard protocol may be that experts'
repeated exposure to the action and decision steps in the 1i+3r incremental method
primed them to accept the gold standard protocol as the most authentic
representation of the task. On the other hand, SMEs who contributed to the 3i+3r
independent method reviewed the aggregated gold standard protocol for the very first
time during the preliminary gold standard protocol review phase and therefore may have
scrutinized the results in greater depth. Canillas (2010) noted that the knowledge
elicitation process is demanding of experts’ time and mental effort, which may also
impact the amount of feedback experts provide.
Question 2. What percentage of action and decision steps do expert special education
teachers omit when describing how they provide reading instruction in informational text
to students with mild to moderate learning disabilities, in grades 3-5, using the 3i+3r
independent and 1i+3r incremental methods?
3i+3r independent method versus 1i+3r incremental method. Final gold
standard protocols were analyzed to determine the effects of automaticity on SMEs’
declarative and procedural knowledge omissions when describing reading instruction in
informational text. Declarative knowledge is the why and what, while procedural
knowledge is the how and when of task performance (Clark & Elen, 2006). Research
suggests that experts may unintentionally omit up to 70% of critical knowledge when
describing action and decision steps essential to task execution. Automaticity of
observable and unobservable processes reduces the cognitive demands placed on
working memory, freeing capacity for the problem solving of novel tasks.
When describing the reading instruction procedure for the 3i+3r independent
method, SMEs omitted an average of 61.82% of total critical action and decision steps, as
compared to the gold standard protocol. In declarative and procedural knowledge types,
SMEs omitted an average of 62.33% of action steps and 61.18% of decision steps.
Knowledge omissions varied amongst SMEs when compared to the gold standard
protocol: percentages ranged from 50.28% to 75.42% for total action and decision steps,
from 52.00% to 76.00% for action steps, and from 48.10% to 74.68% for decision
steps. Although SMEs collectively recalled a greater number of action than decision
steps in the 3i+3r independent method, when the percentage of omissions was compared
to the gold standard protocol, the analysis revealed that SMEs collectively omitted a
slightly higher percentage of action than decision steps. This is an unexpected result in
that previous research studies reported higher percentages of omissions for decision than
action steps when compared to the gold standard protocol (Canillas, 2010; Tolano-
Leveque, 2010). Canillas’s (2010) study found that SMEs omitted an average of 34.52%
of decision steps and 29.17% of action steps, while in Tolano-Leveque’s (2010) study,
SMEs omitted an average of 76.92% of decision steps and 41.92% of action steps.
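The omission percentages reported above follow a straightforward calculation against the gold standard protocol. The sketch below illustrates that calculation in Python; the step identifiers and counts are hypothetical examples, not data from this study.

```python
def omission_rate(gold_steps, recalled_steps):
    """Percentage of gold standard protocol steps an SME omitted.

    gold_steps: set of step identifiers in the gold standard protocol.
    recalled_steps: set of step identifiers the SME described.
    """
    omitted = gold_steps - recalled_steps
    return 100.0 * len(omitted) / len(gold_steps)

# Illustrative (hypothetical) step sets, not the study's data:
gold = {f"A{i}" for i in range(1, 21)}          # 20 action steps
recalled = {f"A{i}" for i in range(1, 9)}       # an SME recalled 8 of them
print(round(omission_rate(gold, recalled), 2))  # prints 60.0
```

The same calculation applies separately to action steps, decision steps, and their union, which is how the per-type percentages above are obtained.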
When describing the reading instruction procedure for the 1i+3r incremental
method, SME omissions averaged 52.14% for total action and decision steps, as
compared to the gold standard protocol. Action step omissions averaged 53.41% for
SMEs, while decision step omissions averaged 50.29%. When compared to the gold
standard protocol, the percentage of expert knowledge omissions ranged from 35.00% to
68.57% for total action and decision steps, from 31.33% to 71.08% for action steps, and
from 40.35% to 64.91% for decision steps. Similar to the 3i+3r independent method, the
average percentage of omissions when compared to the gold standard protocol was
slightly higher for action than decision steps. There is currently no other known research
study that compares the percentages of action and decision step omissions to a 1i+3r
incremental method gold standard protocol.
One explanation for the marginally higher percentage of action step omissions,
relative to decision step omissions, when compared to the gold standard protocol is that SMEs
in this present study routinely perform the reading instruction task with multiple groups
of students in grades 3-5 each day. Presumably, SMEs who participated in this study had
greater implicit knowledge of the action and decision steps necessary to differentiate
instruction for students with mild to moderate learning disabilities. Moreover, as expert
special education teachers, they may be accustomed to impromptu decision-making when
teaching to meet the needs of diverse learners. The slightly higher percentage of
omissions for action than decision steps when compared to the gold standard protocol
was, nevertheless, an unexpected result.
Question 3. Does an expert special education teacher's self-reported biographical data
relate to the number of action and decision steps yielded when describing reading
instruction in informational text for students in grades 3-5 with mild to moderate
learning disabilities?
Expert knowledge recall. The correlation between total hours of professional
development and total action and decision steps revealed a small positive relationship
(0.17). However, the correlation between total hours of professional development and
total decision steps revealed a moderate relationship (0.46). Due to the small sample size
of expert special education teachers who participated in this present study, these
correlations did not reach statistical significance. Qualitatively, this positive correlation in the more
implicit decision steps may reflect a greater awareness of the discrete steps involved in
teaching reading due to more professional development hours in literacy instruction.
Such findings may warrant further investigation into the relationship between
professional development and expert procedural knowledge.
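The correlations reported above are Pearson coefficients between SMEs' self-reported biographical measures and their recalled step counts. A minimal sketch of that computation follows; the hours and step counts shown are hypothetical values for illustration, not data from this study.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical values, not the study's data:
pd_hours = [10, 40, 25, 60, 15]        # total professional development hours
decision_steps = [12, 20, 15, 22, 14]  # decision steps recalled per SME
print(round(pearson_r(pd_hours, decision_steps), 2))
```

With only five experts, as in this study, such a coefficient is descriptive only; a significance test on a sample this small would have negligible power.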
Limitations
The present study yielded results that are both consistent and inconsistent with
existing research studies pertaining to expert knowledge omissions, expert recall of
critical action and decision steps, and the relative effectiveness of the 3i+3r independent
and 1i+3r incremental methods in capturing procedural knowledge. Limitations of the
study are discussed.
Researcher Experience in the Task Domain
The first limitation of this present study pertains to this researcher’s experience in
the task domain of analysis. The most effective knowledge analysts bootstrap
information to become familiar with the domain in which they are conducting CTA
(Crandall et al., 2006; Schraagen et al., 2000). The key to bootstrapping is to become
familiar with the domain as an accomplished novice, but not overly familiar or
experienced in the task. Consequently, knowledge analysts who are highly experienced
in the domain in which knowledge is being elicited from experts tend to “edit what they
are told…in CTA interviews so that the information they collect is consistent with their
own experience and expectations” (Clark, 2014, p. 544). This researcher has 11 years of
experience as a special education teacher of students with mild to moderate learning
disabilities. Consistent with the literature, this researcher had to be cognizant of her own
experience in reading instruction in informational text so as not to fill in or edit knowledge
elicited from SMEs.
Automated Knowledge and Expert Review
Another possible limitation of the study is that automated knowledge may have
impeded the experts’ capacity to recognize critical action and decision step omissions
during review of the incremental interview reports and preliminary 1i+3r incremental
method gold standard protocol for the reading procedure, more so than in the 3i+3r
independent method. Two of the three experts who participated in the 1i+3r
incremental knowledge elicitation method made additions or changes to their
incremental interview reports, and of these two SMEs, one contributed just one
additional action step. No SMEs in the 1i+3r incremental method made revisions or
additions to the preliminary gold standard protocol. In contrast, all SMEs who
participated in the 3i+3r independent method of CTA recalled additional knowledge steps
and/or made revisions to both the independent interview report and preliminary gold
standard protocol. These findings may suggest that due to automated and unconscious
knowledge, experts who participated in the 1i+3r incremental method may have
experienced greater challenges in recognizing omissions when reviewing self-reported
data pertaining to declarative and procedural knowledge of the reading task, than experts
who participated in the 3i+3r independent method.
Identification of Experts for the Study
The final limitation is the number of experts available for participation in this
study. Five expert special education teachers met the criteria for participation in this
study, yet the study design called for a total of six experts: three experts for the 3i+3r
independent method and three experts for the 1i+3r incremental method. As a result, one
SME from the 3i+3r independent method was randomly selected to serve as the base for
the 1i+3r incremental method of CTA. This is in contrast to previous research (Flynn,
2012) conducted to compare the overall effectiveness of the two knowledge elicitation
methods. In Flynn’s (2012) study, six SMEs were selected and available for participation
in the study.
Implications and Future Research
CTA is effective in capturing highly automated expert knowledge that is
otherwise unavailable for recall. Through knowledge elicitation, declarative and tacit
procedural knowledge required for complex decision-making is made explicit and can be
applied to instruction to improve novice performance and post-training learning gains. It
is unknown whether CTA-guided instruction has been applied to pre-service and
in-service teacher training programs to date; a search of the literature revealed no such
studies. Therefore, future research may consider a randomized experimental design
study to implement pre-service and in-service teacher training, particularly for novice
teachers, using CTA-guided instruction and traditional instructional methods to compare
learning gains on reading instruction tasks similar to the tasks outlined in this present
study. Longitudinal research would also benefit this body of work by determining short-
and long-term learning gains in the reading tasks chosen for instruction.
In consideration of procedural optimization of knowledge elicitation methods,
future research is needed to compare the relative effectiveness of the 3i+3r independent
and 1i+3r incremental methods in capturing critical declarative and procedural
knowledge for complex task performance in K-12 school settings. Further research
should be conducted within the domain of reading instruction to determine which of the
two methodologies is most appropriate for complex learning tasks. The 1i+3r
incremental method may be another methodology to add to the repertoire of CTA;
however, additional research is needed to determine its overall effectiveness in capturing
complete and accurate knowledge representations of complex, problem-solving tasks.
Conclusion
This study further expands the body of knowledge on the benefits of using CTA
to capture the knowledge and skills with which experts solve difficult problems and perform
complex tasks. As perhaps the first exploration of CTA in a K-12 educational setting,
this study demonstrates results similar to other studies in regard to expert omissions of
declarative and procedural knowledge when describing how to perform a complex task.
Similar to experts in various other domains, expert special education teachers omitted up
to 70% of the critical procedural steps required to perform the reading instruction task.
The results of this study and future research may show that capturing and incorporating
expert knowledge in K-12 pre-service and in-service teacher preparation programs for
novice special education teachers may increase student achievement in the classroom.
Overall, the knowledge elicitation methods of CTA implemented in this study were
effective in extracting the automated and unconscious cognitive processes expert teachers
employ during complex problem solving in a reading instruction task.
References
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010).
How learning works: Seven research-based principles for smart teaching. San
Francisco, CA: John Wiley & Sons.
Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89(4), 369-
406.
Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard
University Press.
Anderson, J. R. (1995). Cognitive psychology (4th ed.). New York, NY: W.H. Freeman &
Company.
Anderson, J. R. (1996). ACT: A simple theory of complex cognition. American
Psychologist, 51(4), 355-365.
Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004).
An integrated theory of the mind. Psychological Review, 111(4), 1036-1060.
Anderson, J. R., & Schunn, C. D. (2000). Implications of the ACT-R learning theory: No
magic bullets. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 5).
Mahwah, NJ: Erlbaum.
Anderson, R. C., & Faust, G. W. (1973). Educational psychology: The science of
instruction and learning. New York: Harper & Row.
Anderson, T. H., & Armbruster, B. B. (1984). Content area textbooks. In R. Anderson, J.
Osborn, & R. Tierney (Eds.), Learning to read in American schools (pp. 195-
226). Mahwah, NJ: Lawrence Erlbaum Associates.
Annett, J. (2000). Theoretical and pragmatic influences on task analysis methods. In J.
M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp.
25-37). Mahwah, NJ: Lawrence Erlbaum Associates.
Aronson, E. (2012). The social animal (11th ed.). New York, NY: Worth Publishers.
Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American
Psychologist, 54(7), 462-479.
Bartholio, C. W. (2010). The use of cognitive task analysis to investigate how many
experts must be interviewed to acquire the critical information needed to perform
a central venous catheter placement (Unpublished doctoral dissertation).
University of Southern California, Los Angeles.
Blank, R. K., de las Alas, N., & Smith, C. (2007). Analysis of the quality of professional
development programs for mathematics and science teachers: Findings from a
cross-state study. Washington, DC: CCSSO.
Blessing, S. B., & Anderson, J. R. (1996). How people learn to skip steps. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 22(3), 576-598.
Block, C. C., Gambrell, L. B., & Pressley, M. (2002). Improving comprehension
instruction: Rethinking research, theory, and classroom practice. San Francisco,
CA: Jossey-Bass.
Bondy, E., & Brownell, M. T. (2004). Getting beyond the research to practice gap:
Researching against the grain. Teacher Education and Special Education: The
Journal of the Teacher Education Division of the Council for Exceptional
Children, 27(1), 47-56.
Brozo, W. G. (2005). Avoiding the “fourth grade slump.” Thinking
Classroom/Peremena, 6, 48-49.
Brozo, W. G. (2010). The role of content literacy in an effective RTI program. The
Reading Teacher, 64(2), 147-150.
Burns, M. K., & Ysseldyke, J. E. (2009). Reported prevalence of evidence-based
instructional practices in special education. The Journal of Special Education,
43(1), 3-11.
Calfee, R.C., & Patrick, C.L. (1995). Teach our children well: Bringing K–12 education
into the 21st century. Stanford, CA: Stanford Alumni.
Canillas, E. N. (2010). The use of cognitive task analysis for identifying the critical
information omitted when experts describe surgical procedures (Unpublished
doctoral dissertation). University of Southern California, Los Angeles.
Caswell, L. J., & Duke, N. K. (1998). Non-narrative as a catalyst for literacy
development. Language Arts, 108-117.
Chall, J. S., Jacobs, V. A., & Baldwin, L. E. (2009). The reading crisis: Why poor
children fall behind. Cambridge, MA: Harvard University Press.
Chao, C. J., & Salvendy, G. (1994). Percentage of procedural knowledge acquired as a
function of the number of experts from whom knowledge is acquired for
diagnosis, debugging and interpretation tasks. International Journal of Human-
Computer Interaction, 6, 221-233.
Chase, W. G., & Ericsson, K. A. (1982). Skill and working memory. The Psychology of
Learning and Motivation, 16, 1-58.
Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4(1),
55-81.
Chi, M. T. H. (2006). Two approaches to the study of experts’ characteristics. In K.
Ericsson, N. Charness, P. Feltovich, & R. Hoffman (Eds.), The Cambridge
handbook of expertise and expert performance (pp. 21-30). New York:
Cambridge University Press.
Chipman, S. F., Schraagen, J. M., & Shalin, V. L. (2000). Cognitive task analysis.
Mahwah, NJ: Lawrence Erlbaum Associates.
Clark, R. E. (2003). Turning research and evaluation into results for ISPI. Performance
Improvement, 42(2), 21-22.
Clark, R. E. (2004). Design document for a Guided Experiential Learning course: Final
report on contract DAAD 19-99-D-0046-0004 from TRADOC to the Institute for
Creative Technologies to the Center for Cognitive Technology, University of
Southern California.
Clark, R. E. (2006). Training Aid for Cognitive Task Analysis. Technical report produced
under contract ICT 53-0821-0137-W911NF-04-D-0005 from the Institute for
Creative Technologies to the Center for Cognitive Technology, University of
Southern California.
Clark, R. E. (2008a). How much and what kind of guidance is optimal for learning from
instruction? In S. Tobias & T. Duffy (Eds.), Constructivist theory applied to
education: Success or failure? (pp. 158-183). Routledge, NY: Taylor and Francis.
Clark, R. E. (2008b). Resistance to change: Unconscious knowledge and the challenge of
unlearning. In D. Berliner & H. Kupermintz (Eds.), Changing institutions,
environments and people (pp. 75-94). Mahwah, NJ: Lawrence Erlbaum
Associates.
Clark, R.E. (2014). Cognitive task analysis for expert-based instruction in healthcare. In
J.M. Spector, M.D. Merrill, J. Elen, & M.J. Bishop (Eds.), Handbook of research
on educational communications and technology (4th ed., pp. 541–551). New
York, NY: Springer.
Clark, R. E., & Elen, J. (2006). When less is more: Research and theory insights about
instruction for complex learning. In J. Elen, R. Clark & J. Lowyck (Eds.),
Handling complexity in learning environments: Theory and research (pp. 283-
279). Boston, MA: Elsevier.
Clark, R. E., & Estes, F. (1996). Cognitive task analysis for training. International
Journal of Educational Research, 25(5), 403-417.
Clark, R. E., Feldon, D., van Merriënboer, J. G., Yates, K. A., & Early, S. (2008).
Cognitive task analysis. In J. Spector, M. Merrill, J. van Merriënboer, & M.
Driscoll (Eds.), Handbook of research on educational communications and
technology (3rd ed., pp. 578-591). Mahwah, NJ: Lawrence Erlbaum Associates.
Clark, R. E., Pugh, C. M., Yates, K. A., Inaba, K., Green, D. J., & Sullivan, M. E. (2011).
The use of cognitive task analysis to improve instructional descriptions of
procedures. Journal of Surgical Research.
http://www.ncbi.nlm.nih.gov/pubmed/22099596
Clark, R. E., Pugh, C. M., Yates, K. A., & Sullivan, M. (2008). The use of cognitive
task analysis and simulators for after action review of medical events in Iraq.
Technical Report 5-21-2008 developed for the Center for Cognitive Technology,
Rossier School of Education, University of Southern California. Retrieved from
http://www.cogtech.usc.edu.
Cohen, D. K., & Ball, D. L. (1999). Instruction, capacity, and improvement. CPRE
Research Report Series. Philadelphia: Consortium for Policy Research in
Education.
Cooke, N. J. (1994). Varieties of knowledge elicitation techniques. International Journal
of Human-Computer Studies, 41, 801-849.
Cooke, N. J. (1999). Knowledge elicitation. In F. T. Durso (Ed.), Handbook of applied
cognition (pp. 479-509). New York: Wiley.
Cooper, H. (1996). Speaking power to truth: Reflections of an educational researcher
after 4 years of school board service. Educational Researcher, 25(1), 29-34.
Crandall, B. W. (1989). A comparative study of think-aloud and critical decision
knowledge elicitation methods. ACM SIGART Bulletin, 108, 144-146.
Crandall, B. W, Klein, G., & Hoffman, R. R. (2006). Working minds: A practitioner's
guide to cognitive task analysis. Cambridge, MA: The MIT Press.
Crispen, P.D. (2010). Identifying the point of diminishing marginal utility for cognitive
task analysis surgical subject matter interviews (Unpublished doctoral
dissertation). University of Southern California, Los Angeles.
Darling-Hammond, L. (2000). How teacher education matters. Journal of Teacher
Education, 51(3), 166-173.
Darling-Hammond, L. (2004). Standards, accountability, and school reform. The
Teachers College Record, 106(6), 1047-1085.
Darling-Hammond, L. (2007). Third annual Brown lecture in education research—The
flat earth and education: How America’s commitment to equity will determine
our future. Educational Researcher, 36(6), 318-334.
Darling-Hammond, L., & Bransford, J. (2005). Preparing teachers for a changing world:
What teachers should learn and be able to do. San Francisco, CA: Jossey-Bass.
Darling-Hammond, L., & Richardson, N. (2009). Research review/teacher learning: What
matters? How Teachers Learn, 66(5), 46-53.
Darling-Hammond, L., Wei, R. C., Andree, A., Richardson, N., & Orphanos, S. (2009).
Professional learning in the learning profession. Washington, DC: National Staff
Development Council.
Denton, C. A., Vaughn, S., & Fletcher, J. M. (2003). Bringing research-based practice in
reading intervention to scale. Learning Disabilities Research & Practice, 18(3),
201-211.
Donovan, M., & Cross, C. (2002). Minority students in special and gifted education.
Washington, D.C.: National Research Council.
Duke, N. K. (2000). 3.6 minutes per day: The scarcity of informational texts in first
grade. Reading Research Quarterly, 35(2), 202-224.
Duke, N. K., Bennett-Armistead, S., & Roberts, E. M. (2002). Incorporating
informational text in the primary grades. In C. Roller (Ed.), Comprehensive
reading instruction across the grade levels: A collection of papers from the 2001
Reading Research Conference (pp. 41-54). Newark, DE: International Reading
Association.
Duke, N.K., & Pearson, P.D. (2002). Effective practices for developing reading
comprehension. In A. Farstrup & S. Samuels (Eds.), What research has to say
about reading instruction (3rd ed., pp. 205-242). Newark, DE: International
Reading Association.
Duke, N. K., Pearson, P. D., Strachan, S. L., & Billman, A. K. (2011). Essential elements
of fostering and teaching reading comprehension. In S. Samuels & A. Farstrup
(Eds.), What research has to say about reading instruction (4th ed., pp. 286-314).
Dymock, S. (2005). Teaching expository text structure awareness. The Reading Teacher,
59(2), 177-181.
Dymock, S., & Nicholson, T. (2010). “High 5!” Strategies to enhance comprehension of
expository text. The Reading Teacher, 64(3), 166-178.
Elmore, R. (2002). Bridging the gap between standards and achievement. Washington,
DC: Albert Shanker Institute.
Englert, C. S., & Thomas, C. C. (1987). Sensitivity to text structure in reading and
writing: A comparison between learning disabled and non-learning disabled
students. Learning Disability Quarterly, 10(2), 93-105.
Ericsson, K. A. (2004). Invited Address: Deliberate practice and the acquisition and
maintenance of expert performance in medicine and related domains. Academic
Medicine, 79(10), s70-s81.
Ericsson, K. A., & Charness, N. (1994). Expert performance: Its structure and
acquisition. American Psychologist, 49(8), 725-747.
Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological
Review, 102(2), 211.
Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate
practice in the acquisition of expert performance. Psychological Review, 100(3),
363-406.
Ericsson K. A., & Simon H. A. (1984). Protocol analysis: Verbal reports as data.
Cambridge, MA: Bradford Books/MIT Press.
Feldon, D. (2007). Implications of research on expertise for curriculum and pedagogy.
Educational Psychology Review, 19(2), 91-110.
Feldon, D. F., & Clark, R. E. (2006). Instructional implications of cognitive task analysis
as a method for improving the accuracy of experts’ self-report. In G. Clarebout &
J. Elen (Eds.), Avoiding simplicity, confronting complexity: Advances in studying
and designing (computer-based) powerful learning environments (pp. 109-116).
Rotterdam, The Netherlands: Sense Publishers.
Ferguson, R. F. (1991). Paying for public education: New evidence on how and why
money matters. Harvard Journal on Legislation, 28(2), 465–498.
Fishman, B. J., Marx, R. W., Best, S., & Tal, R. T. (2003). Linking teacher and student
learning to improve professional development in systemic reform. Teaching and
Teacher Education, 19(6), 643-658.
Fitts, P. M. (1964). Perceptual-Motor Skill Learning. Categories of human learning, 243-
285.
Flynn, C. L. (2012). The relative efficiency of two strategies for conducting cognitive
task analysis (Unpublished doctoral dissertation). University of Southern California, Los Angeles.
Gagné, R. M. (1985). The conditions of learning (4th ed.). New York: Holt, Rinehart and
Winston.
Gagné, R. M., Briggs, L. J., & Wager, W. W. (1992). Principles of instructional design.
Fort Worth, TX: Harcourt Brace Jovanovich.
Gagné, R. M., & Medsker, K. L. (1996). The conditions of learning: Training
applications. New York: Harcourt Brace.
Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. S. (2001). What
makes professional development effective? Results from a national sample of
teachers. American Educational Research Journal, 38(4), 915-945.
Gersten, R., Fuchs, L. S., Williams, J. P., & Baker, S. (2001). Teaching reading
comprehension strategies to students with learning disabilities: A review of
research. Review of Educational Research, 71(2), 279-320.
Glaser, R., & Bassok, M. (1989). Learning theory and the study of instruction. Annual
Review of Psychology, 40, 631-666.
Glaser, R., & Chi, M. T. H. (1988). Overview. In M. Chi, R. Glaser & M. Farr (Eds.),
The nature of expertise (pp. xv–xxviii). Mahwah, NJ: Lawrence Erlbaum
Associates.
Gott, S. P., Hall, E. P., Pokorny, R. A., Dibble, E., & Glaser, R. (1993). A naturalistic
study of transfer: Adaptive expertise in technical domains. In D. K. Detterman &
R. J. Sternberg (Eds.), Transfer on trial: Intelligence, cognition, and instruction
(pp. 258-288). Norwood, NJ: Ablex.
Greeno, J. G. (1976). Indefinite goals in well-structured problems. Psychological Review,
83(6), 479.
Greenwood, C.R. (2001). Bridging the gap between research and practice in special
education: Issues and implications for teacher preparation. Teacher Education
and Special Education, 24(4), 273-275.
Greenwood, C. R., & Abbott, M. (2001). The research to practice gap in special
education. Teacher Education and Special Education: The Journal of the Teacher
Education Division of the Council for Exceptional Children, 24(4), 276-289.
Guskey, T. R. & Sparks, D. (2002). Linking professional development to improvements
in student learning. Paper presented at Annual Conference for American
Educational Research Association, New Orleans, LA.
Guskey, T. R., & Yoon, K. S. (2009). What works in professional development? Phi
Delta Kappan, 90(7), 495-500.
Hall, E. M., Gott, S. P., & Pokorny, R. A. (1995). A procedural guide to cognitive task
analysis: The PARI methodology. Brooks Air Force Base, TX: Manpower and
Personnel Division, U.S. Air Force.
Hatano, G., & Inagaki, K. (2000). Practice makes a difference: Design principles for
adaptive expertise. Paper presented at the Annual Meeting of the American
Educational Research Association, New Orleans, LA.
Hinds, P. J. (1999). The curse of expertise: The effects of expertise and debiasing
methods on predictions of novice performance. Journal of Experimental
Psychology: Applied, 5(2), 205-221.
Hinds, P. J., Patterson, M., & Pfeffer, J. (2001). Bothered by abstraction: The effect of
expertise on knowledge transfer and subsequent novice performance. Journal of
Applied Psychology, 86(6), 1232-1243.
Hoffman, R. R. (1987). The problem of extracting the knowledge of experts from the
perspective of experimental psychology. AI Magazine, 8(2), 53-67.
Hoffman, R. R., Crandall, B., & Shadbolt, N. R. (1998). Use of the critical decision
method to elicit expert knowledge: A case study in the methodology of cognitive
task analysis. Human Factors, 40(2), 254-276.
Hoffman, R. R., & Militello, L. G. (2009). Perspectives on cognitive task analysis.
New York, NY: Psychology Press, Taylor & Francis Group.
Hoffman, R. R., Shadbolt, N. R., Burton, A. M., & Klein, G. (1995). Eliciting knowledge
from experts: A methodological analysis. Organizational Behavior and Human
Decision Process, 62(2), 129-158.
Jitendra, A. K., Burgess, C., & Gajria, M. (2011). Cognitive strategy instruction for
improving expository text comprehension of students with learning disabilities:
The quality of evidence. Exceptional Children, 77(2), 135-159.
Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for
instructional design. Mahwah, NJ: Lawrence Erlbaum Associates.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimally guided instruction
does not work: An analysis of the failure of constructivist, discovery, problem-
based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2),
75-86.
Klein, G. A., & Calderwood, R. (1987). Decision models: Some lessons from the field.
IEEE Transactions on Systems, Man and Cybernetics, 21(5), 1018–1026.
Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for
eliciting knowledge. IEEE Transactions on Systems, Man and Cybernetics, 19(3),
462-472.
Knapp, M. S. (2003). Professional development as policy pathway. Review of Research
in Education, 27(1), 109-157.
Landrum, T. J., Tankersley, M., & Kauffman, J. M. (2003). What is special about special
education for students with emotional or behavioral disorders? The Journal of
Special Education, 37(3), 148-156.
Lee, R. L. (2004). The impact of cognitive task analysis on performance: A meta-analysis
of comparative studies (Unpublished doctoral dissertation). University of
Southern California, Los Angeles, CA.
Lloyd, J. W., Weintraub, F. J., & Safer, N. D. (1997). A bridge between research and
practice: Building consensus. Exceptional Children, 63(4), 535-538.
McLeskey, J., & Billingsley, B. (2008). How does the quality and stability of the teaching
force influence the research-to-practice gap? A perspective on the teacher
shortage in special education. Remedial and Special Education, 29(5), 293-305.
Merrill, M. D. (2002). A pebble-in-the-pond model for instructional design. Performance
Improvement, 41(7), 39-44.
Meyer, B. J. F., Brandt, D. M., & Bluth, G. J. (1980). Use of top-level structure in text:
Key for reading comprehension of ninth-grade students. Reading Research
Quarterly, 16, 72–103.
National Institute of Child Health and Human Development (NICHD) (2000). Report of
the National Reading Panel. Teaching children to read: An evidence-based
assessment of the scientific research literature on reading and its implications for
reading instruction. NIH Publication No. 00-4769. Washington, D.C.: U.S.
Government Printing Office.
Neves, D. M., & Anderson, J. R. (1981). Knowledge compilation: Mechanisms for the
automatization of cognitive skills. In J. R. Anderson (Ed.), Cognitive skills and
their acquisition. Hillsdale, NJ: Erlbaum.
O’Connor, R. E., Fulmer, D., Harty, K. R., & Bell, K. M. (2005). Layers of reading
intervention in kindergarten through third grade: Changes in teaching and
student outcomes. Journal of Learning Disabilities, 38(5), 440-455.
Opfer, V. D., & Pedder, D. (2011). Conceptualizing teacher professional learning. Review
of Educational Research, 81(3), 376-407.
Ormrod, J. E., & Davis, K. M. (2004). Human learning (4th ed.). Upper Saddle River,
NJ: Pearson Prentice Hall.
Paris, S. G., Lipson, M. Y., & Wixson, K. K. (1983). Becoming a strategic reader.
Contemporary Educational Psychology, 8(3), 293-316.
Pearson, P. D., & Duke, N. K. (2002). Comprehension instruction in the primary grades.
In C. Collins-Block & M. Pressley (Eds.), Comprehension instruction:
Research-based best practices (pp. 247–258). New York, NY: Guilford Press.
Penuel, W. R., Fishman, B. J., Yamaguchi, R., & Gallagher, L. P. (2007). What makes
professional development effective? Strategies that foster curriculum
implementation. American Educational Research Journal, 44(4), 921-958.
Pressley, M. (2002). Comprehension strategies instruction: A turn-of-the-century status
report. In C. Block & M. Pressley (Eds.), Comprehension instruction: Research-
based best practices (pp. 11–27). New York, NY: Guilford Press.
Ryder, J. M., & Redding, R. E. (1993). Integrating cognitive task analysis into
instructional systems development. Educational Technology Research and
Development, 41(2), 75-96.
Sáenz, L. M., & Fuchs, L. S. (2002). Examining the reading difficulty of secondary
students with learning disabilities: Expository versus narrative text. Remedial and
Special Education, 23(1), 31-41.
Salkind, N. J. (2006). Tests & measurement for people who (think they) hate tests &
measurement. Thousand Oaks, CA: Sage Publications.
Schneider, W. (1985). Training high-performance skills: Fallacies and guidelines. Human
Factors: The Journal of the Human Factors and Ergonomics Society, 27(3), 285-
300.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information
processing: I. Detection, search, and attention. Psychological Review, 84(1), 1.
Schraagen, J. M., Chipman, S. F., & Shalin, V. L. (2000). Cognitive task analysis.
Mahwah, NJ: Lawrence Erlbaum Associates.
Sparks, D. (1995). A paradigm shift in staff development. The ERIC Review, 3(3), 5-11.
Sparks, D. (2002). Designing powerful professional development for teachers
and principals. Oxford, OH: National Staff Development Council.
Squire, L. R. (2004). Memory systems of the brain: A brief history and current
perspective. Neurobiology of Learning and Memory, 82(3), 171-177.
Sun, R., Slusarz, P., & Terry, C. (2005). The interaction of the explicit and the implicit in
skill learning: A dual-process approach. Psychological Review, 112(1), 159.
Supovitz, J. A., Mayer, D. P., & Kahle, J. B. (2000). Promoting inquiry-based
instructional practice: The longitudinal impact of professional development in the
context of systemic reform. Educational Policy, 14(3), 331-356.
Swanson, H. L. (1999). Interventions for students with learning disabilities: A meta-
analysis of treatment outcomes. New York, NY: Guilford Press.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive
Science, 12(2), 257-285.
Tofel-Grehl, C., & Feldon, D. F. (2013). Cognitive task analysis–based training: A meta-
analysis of studies. Journal of Cognitive Engineering and Decision Making, 7(3),
293-304.
Tolano-Leveque, M. (2010). Using cognitive task analysis to determine the percentage of
critical information that experts omit when describing a surgical procedure
(Unpublished doctoral dissertation). University of Southern California, Los
Angeles, CA.
Torgesen, J. K. (1996). Thoughts about intervention research in learning disabilities.
Learning Disabilities: A Multidisciplinary Journal, 7(2), 55-58.
Tulving, E., & Schacter, D. L. (1990). Priming and human memory systems. Science,
247(4940), 301-306.
van Merriënboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints for
complex learning: The 4C/ID-model. Educational Technology, Research and
Development, 50(2), 39-64.
Vaughn, S., Gersten, R., & Chard, D. J. (2000). The underlying message in LD intervention
research: Findings from research syntheses. Exceptional Children, 67(1), 99-
114.
Vaughn, S., Levy, S., Coleman, M., & Bos, C. S. (2002). Reading instruction for students
with LD and EBD: A synthesis of observation studies. The Journal of Special
Education, 36(1), 2-13.
Vaughn, S., & Linan-Thompson, S. (2003). What is special about special education for
students with learning disabilities? The Journal of Special Education, 37(3), 140-
147.
Wegner, D. M. (1994). Ironic processes of mental control. Psychological Review, 101,
34-52.
Wei, J., & Salvendy, G. (2004). The cognitive task analysis methods for job and task
design: Review and reappraisal. Behaviour & Information Technology, 23(4),
273-299.
Wheatley, T., & Wegner, D. M. (2001). Automaticity of action, Psychology of. In N. J.
Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social and
Behavioral Sciences (pp. 991-993). Oxford, UK: Elsevier Science Limited.
Williams, J. P. (2005). Instruction in reading comprehension for primary-grade students:
A focus on text structure. Journal of Special Education, 39, 6–18.
Williams, J. P., Hall, K. M., & Lauer, K. D. (2004). Teaching expository text structure to
young at-risk learners: Building the basics of comprehension instruction.
Exceptionality, 12(3), 129-144.
Williams, J. P., Nubla-Kung, A. M., Pollini, S., Stafford, K. B., Garcia, A., & Snyder, A.
E. (2007). Teaching cause-effect text structure through social studies content to
at-risk second graders. Journal of Learning Disabilities, 40(2), 111-120.
Williams, J. P., Stafford, K. B., Lauer, K. D., Hall, K. M., & Pollini, S. (2009).
Embedding reading comprehension training in content-area instruction. Journal of
Educational Psychology, 101(1), 1.
Wong, B. Y. (1980). Activating the inactive learner: Use of questions/prompts to enhance
comprehension and retention of implied information in learning disabled children.
Learning Disability Quarterly, 3(1), 29-37.
Yates, K. A. (2007). Towards a taxonomy of cognitive task analysis methods: A search
for cognition and task analysis interactions (Unpublished doctoral dissertation).
University of Southern California, Los Angeles, CA.
Yates, K. A., & Feldon, D. F. (2011). Advancing the practice of cognitive task analysis:
A call for taxonomic research. Theoretical Issues in Ergonomics Science, 12(6),
472-495.
Appendix A
3i+3r Independent Method Flow Chart
Expert 1 Interview, Expert 2 Interview, and Expert 3 Interview (each conducted
independently), with each interview proceeding through the same sequence:
→ Interview Transcription
→ Analyst Coding
→ Initial Interview Report
→ Expert Report Review/Revise
→ Final Interview Report
The three Final Interview Reports are then aggregated:
→ Preliminary 3i+3r Independent Method Gold Standard Protocol
→ Expert Verification of 3i+3r Independent Method Gold Standard Protocol
→ Final 3i+3r Independent Method Gold Standard Protocol
Appendix B
1i+3r Incremental Method Flow Chart
Expert 1 Interview
→ Interview Transcription
→ Analyst Coding
→ Initial Interview Report
→ Expert Report Review/Revision
→ Final Interview Report
Expert 2 Interview & Review of Expert 1 Interview Report
→ Interview Transcription
→ Analyst Coding
→ Initial Expert 1 & 2 Incremental Interview Report
→ Expert Report Review/Revision
→ Final Expert 1 & 2 Incremental Interview Report
Expert 3 Interview & Review of Expert 1 & 2 Incremental Interview Report
→ Interview Transcription
→ Analyst Coding
→ Initial Expert 1, 2, & 3 Incremental Interview Report
→ Expert Report Review/Revision
→ Preliminary 1i+3r Incremental Method Gold Standard Protocol
→ Expert 1, 2, & 3 Verification of Preliminary 1i+3r Incremental Method Gold
Standard Protocol
→ Final 1i+3r Incremental Method Gold Standard Protocol
Appendix C
Teacher Biographical Data Survey
Name: ________________________________________
School Site: ___________________________________
Date: _________________________________________
Education History:
Undergraduate major, university attended, and year degree attained: _________________
________________________________________________________________________
University attended for teacher preparation program and year preliminary teaching
credential attained: ________________________________________________________
________________________________________________________________________
If you began teaching prior to attaining a preliminary teaching credential, please specify
whether you were granted an emergency or intern credential: ______________________
________________________________________________________________________
Did you demonstrate subject matter competency via a single-subject exam, multiple-
subject exam, or subject matter waiver program? ________________________________
________________________________________________________________________
If subject matter competency was verified via a single-subject exam or subject matter
waiver program, please specify subject area: ___________________________________
________________________________________________________________________
Please specify whether you completed student teaching or an alternative supervised
teaching experience to satisfy teaching credential fieldwork requirements: ____________
________________________________________________________________________
Graduate major and type of degree, university attended, and year degree attained (if
applicable): ______________________________________________________________
________________________________________________________________________
Teaching/Professional Experience:
Number of years teaching in general education (if applicable): _____________________
Number of years teaching in special education: _________________________________
Number of years providing literacy instruction to students with mild to moderate learning
disabilities: _____________________________________________________________
Approximate total hours of professional development received in literacy instruction
throughout your teaching career: _____________________________________________
Approximate total hours of professional development received in literacy instruction
within the last year: _______________________________________________________
Please specify strategies, teaching tools, research-based program implementation, etc.
learned during professional development opportunities: ___________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Student Achievement:
Please specify average student reading growth in grade and month equivalents (e.g. 1.5
years) for this or the most recent school year: ___________________________________
Please specify the percentage of students who met or exceeded the expectations of their
IEP goals in the area of literacy for this or the most recent school year: ______________
For students in grades 3-5 on your caseload, please tally the total number of students who
scored within each of the following performance bands on the California Modified
Assessment or California Standards Test for English/Language Arts, for the previous
school year (2011/2012):
CMA: ____ Advanced; ____ Proficient; ____ Basic; ____ Below Basic; ____ Far Below
Basic
CST: ____ Advanced; ____ Proficient; ____ Basic; ____ Below Basic; ____ Far Below
Basic
Please feel free to share additional information regarding the achievement of your
students in the area of literacy: ______________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Appendix D
Teacher Biographical Data Survey: Data Analysis Spreadsheet
Teacher Biographical Data Survey (one column per teacher: 1/A, 2/B, 3/C)
Education History:
Undergraduate major, university
attended, and year degree attained
University attended for teacher
preparation program and year
preliminary teaching credential
attained
If you began teaching prior to
attaining a preliminary teaching
credential, please specify whether
you were granted an emergency or
intern credential
Did you demonstrate subject matter
competency via a single-subject
exam, multiple-subject exam, or
subject matter waiver program?
If subject matter competency was
verified via a single-subject exam or
subject matter waiver program,
please specify subject area
Please specify whether you
completed student teaching or an
alternative supervised teaching
experience to satisfy teaching
credential fieldwork requirements
Graduate major and type of degree,
university attended, and year degree
attained (if applicable)
Years since last credential/degree
obtained
Teaching/Professional Experience:
Number of years teaching in general
education (if applicable)
Number of years teaching in special
education
Total years of teaching experience
Number of years providing literacy
instruction to students with mild to
moderate learning disabilities
Approximate total hours of
professional development received
in literacy instruction throughout
teaching career
Approximate total hours of
professional development received
in literacy instruction within the last
year
Please specify strategies, teaching
tools, research-based program
implementation, etc. learned during
professional development
opportunities
Student Achievement:
Please specify average student
reading growth in grade and month
equivalents (e.g. 1.5 years) for this
or the most recent school year
Please specify the percentage of
students who met or exceeded the
expectations of their IEP goals in the
area of literacy for this or the most
recent school year
For students in grades 3-5 on
caseload, please tally the total
number of students who scored
within each of the following
performance bands on the California
Modified Assessment or California
Standards Test for English/Language
Arts, for the previous school year
(2011/2012)
Special education setting
Analysis:
Years of experience
Total hours of professional
development
Years since last credential/masters
obtained
Total number of action and decision
steps
Total number of action steps
Total number of decision steps
Percentage of students advanced or
proficient on CST/CMA
Range Analysis:
Years of experience (0-10, 11-20,
21-30, 31-40)
Total hours of professional
development (1-100, 101-200, 201-
300, 301-400)
Years since last credential/masters
obtained (1-10, 11-20, 21-30, 31-40)
Total number of action and decision
steps
Total number of action steps
Total number of decision steps
Percentage of students advanced or
proficient on CST/CMA, per self-
report
Appendix E
Cognitive Task Analysis Interview Protocol
Begin the interview: Meet the subject matter expert (SME) and explain the purpose of
the interview. Ask the SME for permission to record the interview. Explain to the SME
that the recording will be used only to ensure that none of the information the SME
provides is missed.
Name of task(s):
Performance objective:
Ask: “How would the action term be stated? What action verb should be used?”
Step 1:
Objective: Capture a complete list of student outcomes for teaching expository literacy
instruction.
A. Ask the SME to list student outcomes when these tasks are complete. Ask them to
make the list as complete as possible.
B. Ask SME how students are assessed on these outcomes.
Step 2:
Objective: Provide practice exercises that are authentic to the classroom environment in
which the tasks are performed.
A. Ask the SME to list all the contexts in which these tasks are performed.
B. Ask the SME how the tasks would change for each setting.
Step 3:
Objective: Identify main steps or stages to accomplish the task.
A. Ask SME the key steps or stages required to accomplish the task.
B. Ask SME to arrange the list of main steps in the order they are performed, or if
there is no order, from easiest to most difficult.
Step 4:
Objective: Capture a list of “step-by-step” actions and decisions for each task.
A. Ask the SME to list the sequence of actions and decisions necessary to complete
the task and/or solve the problem.
B. Tell SME: “Please describe how you accomplish this task, step-by-step, so a
novice teacher could perform it.”
For each step the SME gives you, ask yourself, “Is there a decision being made by
the SME here?” If there is a possible decision, ask the SME.
If SME indicates that a decision must be made…
Ask: “Please describe the most common alternatives (up to a maximum of three)
that must be considered to make the decision and the criteria teachers should use
to decide between the alternatives.”
Step 5:
Objective: Identify prior knowledge and information required to perform the task.
A. Ask SME about the prerequisite knowledge and other information required to
perform the task.
1. Ask the SME about Cues and Conditions.
Ask: “For this task, what must happen before someone starts the task? What
prior task, permission, order, or other initiating event must happen? Who
decides?”
2. Ask the SME about New Concepts and Processes.
Ask: “Are there any concepts or terms required of this task that may be new to a
novice teacher?”
Concepts – Terms mentioned by the SME that may be new to the
novice teacher.
Ask for a definition and at least one example.
Processes – How something works.
If equipment (technology) and/or materials are being used, ask the SME to
“Please describe how the technology and/or materials work - in words
that a novice teacher will understand. Processes usually consist of
different phases and within each phase, there are different activities –
think of it as a flow chart.”
Ask: “Must a novice teacher know this process to do the task?” “Will
they have to use it to change the task in unexpected ways?”
IF the answer is NO, do NOT collect information about the process.
3. Ask the SME about Equipment (Technology) and Materials.
Ask: “What equipment (technology) and materials are required to
succeed at this task in routine situations? Where are they located? How
are they accessed?”
4. Performance Standard.
Ask: “How do we know the objective has been met? What are the
criteria, such as time, efficiency, and quality indicators (if any)?”
5. Sensory experiences required for task (may not be appropriate or required
for every task).
Ask: “Must a novice teacher see, hear, smell, feel, or taste something in
order to learn any part of the task? For example, are there any parts of
this task they could not perform unless they could see something (like a
student reaction)?”
Step 6:
Objective: Identify problems that can be solved by using the procedure.
A. Ask the SME to describe at least one routine problem that a novice teacher
should be able to solve if they can perform each of the tasks on the list you just
made.
Ask: “Of the task we just discussed, describe at least one routine problem that a
novice teacher should be able to solve IF they learn to perform the task.”
Appendix F
Verification of Gold Standard Protocol Letter
Dear _______________:
I want to first thank you for your generous time and for sharing your expertise for this
project.
We are now at the stage where we have aggregated all the interview transcripts into a
draft “gold standard” protocol for the reading procedure that might be sufficiently clear
and accurate so that a novice teacher could understand and apply it to instruction. As one
of the final steps in this process, I would like to ask you to review the attached draft of
the protocol and make comments or revisions. Recognizing that there may be more than
one way experts perform this procedure, we seek to capture a procedure that is
comprehensive, integrates best practices from each of the expert teachers interviewed
for implementation in special education settings, and could be taught to novice
teachers.
With this in mind, please review the procedures and use the Microsoft Word Track
Changes feature under the Tools tab to add steps we may have omitted and revise any
inaccurate steps. Also, feel free to make comments in the margin using the Microsoft
Word New Comment feature under the Insert tab. If you are not familiar with these
Microsoft Word functions, please feel free to print out the protocol and write in your
changes and comments.
While reviewing the gold standard protocol, please keep in mind the following questions:
1. Are there any steps that are irrelevant?
a. IF you find an irrelevant step, THEN please add a comment indicating a
reason why the step is irrelevant.
2. Are any of these steps wrong?
a. Meaning the step will not work;
b. The step will not benefit the student; or
c. The step will not make instruction more successful.
i. IF you find a wrong step, THEN please add a comment indicating
one of the following reasons:
1. A: The step will not work;
2. B: The step will not benefit the student; or
3. C: The step will not make the instruction more successful.
3. Is there a more efficient way to complete the steps in this procedure?
a. IF a step could be completed more efficiently, THEN please add a
comment indicating how the step could be completed more efficiently.
If you have any questions, please do not hesitate to contact me at any time by email or
phone. I would appreciate your reviewing the protocol as soon as it is convenient
and emailing it to me with your changes and comments. If you wrote in your changes
and comments, we can arrange a time for me to come by your classroom or the front
office of your school to pick up the protocol.
Thank you again for your participation in this project and for sharing your expertise to
improve teacher preparation and professional development in teaching reading.
Sincerely,
Diana McZeal
Appendix G
Coding Spreadsheet: 3i+3r Independent Method Gold Standard Protocol Procedures
Subject Matter Expert Section
Steps
Step Type Final 3i+3r Independent Method
Gold Standard Protocol Procedures
1 2 3 A D
O Procedure 1. Select the text for
the lesson.
7 2
1 A 1.1. Match student instructional
reading levels to the text by
selecting text that meets all of the
following criteria:
0 0 0
2 A 1.1.1. Select instructional level text
that students can decode with at
least 90% accuracy.
0 0 0
3 A 1.1.2. Review the structural
language pattern of the text (i.e.,
sentence structure, word usage,
prose) to determine if students will
be able to read and understand the
text with your assistance.
0 0 0
4 A 1.1.3. Examine the text to identify
vocabulary that may be new to the
students and select text that
contains words they can learn with
your assistance.
0 0 0
5 A 1.1.4. Identify the number of ideas
in the text that may be new to the
students and select text that
contains ideas they can learn with
your assistance.
0 0 0
6 A 1.1.5. Preview text layout and
select text that contains features
such as headings, subheadings,
pictures with captions, side notes,
etc. that will be supportive to the
students in your group.
0 0 0
7 A 1.2. Determine the amount of time
and resources you are able to
dedicate to the lesson.
0 0 0
8 D 1.3. IF there is sufficient time to
teach new ideas and vocabulary
AND the level of the text is
challenging, but doable for the
0 0 0
students, THEN use the text.
9 D 1.4. IF any of the conditions above
are not met, THEN adjust the
length of the text to fit the time and
resources you have available, OR
identify another text that is
challenging, but doable by the
students.
0 0 0
O Procedure 2. Prepare the
lesson.
4 0
10 A 2.1. IF students are reading from a
reader without bolded vocabulary
or key terms, THEN preview text,
highlight important words for
students to know, and prepare
definitions before presenting the
reader to students.
1 0 1
11 A 2.2. Write vocabulary words on
index cards.
0 1 0
12 A 2.3. Generate main idea and detail
questions for each section or
paragraph that check on students’
understanding of that section or
paragraph and write them down.
0 0 0
13 A 2.4. Pair students according to
academic and social need using
Kagan student grouping strategies
(i.e., low, low-medium, medium-
high, and high student ability
levels).
0 0 1
O Procedure 3. Activate prior
knowledge and introduce the
topic.
7 5
14 D 3.1. IF students are in a small
group of 3-4 students AND there
are multiple topics for students to
read, THEN allow student groups
to decide which topic they would
like to read about.
1 0 0
15 D 3.2. IF students are in a large
group of 5 or more students, THEN
choose the topic of the reading for
students.
1 0 0
16 A 3.3. Tell students to read only the
title and ask them to tell you what
the title means to them.
0 0 1
17 A 3.4. Before opening the book or
introducing the passage, activate
students’ background knowledge
by asking students to share what
they know about the subject matter
to facilitate a general discussion of
the topic. (Ask students, “What do
you know about this subject?”).
1 1 0
18 A 3.5. Chart student responses on the
board using a circle map.
1 0 0
19 A 3.6. Gauge student interest level in
the topic by monitoring their facial
expressions, body language, and
engagement.
1 0 0
20 D 3.7. IF students are not interested
in the topic, THEN bring in hands-
on materials and activities, or show
students pictures and video clips
relevant to the topic to increase
student interest and excitement
(e.g., rocks).
1 0 1
21 A 3.8. Ask student groups to make a
prediction about what will happen
in the text based upon the title and
background knowledge discussion.
1 0 1
22 A 3.9. Ask a student from each group
to share a prediction with the class
(“We predict that this is going to
happen…”).
1 0 1
23 D 3.10. IF a student uses the title in
their prediction, or gives a vague
prediction, THEN ask student to
elaborate and/or fix the prediction.
0 0 1
24 D 3.11. IF a student makes an
incorrect prediction, THEN make a
mental note to discuss the
prediction after student reads the
text.
0 0 1
25 A 3.12. After all groups have shared
predictions, repeat or paraphrase
each group’s prediction aloud for
the class.
0 0 1
O Procedure 4. Preview the text. 2 3
26 A 4.1. Give students 1-2 minutes to
independently preview and think
about the text by looking at text
features including heading,
subheadings, captions, graphs,
maps, bolded words, and pictures,
then preview the text together as a
group to explain the general outline
of the text.
1 1 1
27 D 4.2. IF students are excited about
the text features they have
discovered or information they
already knew about the topic,
THEN encourage students to orally
share what they have found with
the group.
1 1 0
28 A 4.3. Give students 30 seconds to
think about the predictions made in
Step 3.8 and ask students if they
would like to change or keep their
predictions.
0 0 1
29 D 4.4. IF a student group decides that
their initial prediction was accurate,
THEN ask the student group to
orally confirm their initial
prediction.
0 0 1
30 D 4.5. IF a student group decides that
their initial prediction was
inaccurate, THEN ask the student
group to change the prediction and
share aloud any revised prediction
with the class.
0 0 1
O Procedure 5. Teach vocabulary
words.
10 13
31 A 5.1. Tell students that whenever
they read textbooks, highlighted
and boldfaced words are the words
they need to know before reading.
1 0 0
32 D 5.2. IF using a textbook, THEN
ask students to scan and find
bolded vocabulary words or key
terms in the text.
1 0 0
33 D 5.3. IF students are reading from a
reader without bolded vocabulary
or key terms, THEN present and
read aloud the vocabulary words
you identified as important in Step
2.1 AND ask students to scan, find,
and highlight each of the
vocabulary words in the text.
1 1 1
34 A 5.4. Give students a vocabulary
pre-assessment by asking students
to rate how well they know each
word (e.g., 1 = they do not know it
at all; 2 = they have heard of it;
3 = they know it and can use it in a
sentence; or thumbs up or down) and
demonstrate their understanding of
the word by providing a sentence or
definition of the word.
0 0 1
35 A 5.5. Prompt students to identify
words that are unfamiliar to them
or that they perceive as difficult in
the text AND highlight these words
to use them as vocabulary words.
(Ask students, “Do you see any
other words that we can use as
vocabulary words?”, “Do you see
any hard words?”).
1 0 0
36 A 5.6. Ask students to take turns
reading aloud each of the
vocabulary words.
0 1 0
37 D 5.7. IF the vocabulary word is a
compound word, THEN explain
that two words are combined to
make a new word.
0 1 0
38 D 5.8. IF a student does not
recognize a word, THEN model
tactile approach to word
memorization with simultaneous
finger tracing and reciting of the
word (Dr. Glass Method: hearing,
touching, and saying).
0 1 0
39 A 5.9. Tell the student to finger trace
and recite the word three times
AND THEN visualize the word or
make a picture in their mind about
the word to help remember it.
0 1 0
40 D 5.10. IF a student is still unable to
read the vocabulary word, THEN
write a note to provide one-on-one
instruction to reteach the student
the word using the tactile approach
at a later time.
0 1 0
41 A 5.11. Repeat Steps 5.5-5.9 for each
vocabulary word.
0 1 0
42 A 5.12. Tell students that these are
the vocabulary words they will
encounter while reading and that
they should understand the
meaning of these words. (Tell
students, “Let’s look ahead and
make sure we know what these
words mean before we start
reading.”).
1 0 1
43 A 5.13. Ask students to
independently read the sentences
that contain vocabulary words to
figure out the meanings of words.
1 0 0
44 D 5.14. IF students know a definition
without looking up the word
meaning in the dictionary, THEN
ask students to share aloud the
meaning of the word with the
group.
1 0 0
45 D 5.15. IF students cannot determine
the meaning of a word using
context clues, THEN move on to
Step 5.16.
1 0 0
46 D 5.16. IF using a textbook and
definitions are written on the sides
of pages, THEN ask students to
take turns reading aloud a
definition to the group.
1 0 0
47 D 5.17. IF using a textbook and
definitions are not written on the
sides of the pages, THEN ask
students to look in the glossary or
dictionary in the back of the book
for definitions of vocabulary or key
terms and take turns reading aloud
a definition to the group.
1 0 0
48 D 5.18. IF there is limited time for
students to find and read aloud
vocabulary definitions to the group,
THEN tell students the meaning of
some words to keep the lesson
moving along.
1 0 0
49 D 5.19. IF students are reading from
a reader without the bolded
vocabulary or key terms with
definitions, THEN read aloud the
definitions you prepared in Step 2.1
to students.
1 0 0
50 D 5.20. IF any of the vocabulary
words you and the students have
identified have complex meanings
(i.e., difficult for students to
understand; unfamiliar content),
THEN write the definition on the
whiteboard to serve as a visual
reminder of the word meaning for
students to reference as they read
the text.
1 0 0
51 A 5.21. Ask students to take turns
using the vocabulary words in
context.
0 1 0
52 D 5.22. IF time permits, THEN
prompt students to share synonyms,
word variations, or make
associations for vocabulary or key
terms (e.g., canine: dog; place,
misplace, displace; pioneers:
wagon) by thinking of words they
might encounter while reading that
are related to the topic. (Ask
students, “What other words do you
know that relate to that word?”).
1 0 0
53 A 5.23. Complete a CORE
vocabulary map on the whiteboard
as a group, using the words from
Step 5.22, and ask students to copy
the map in their journals.
1 0 0
O Procedure 6. Teach guiding
questions.
2 0
54 A 6.1. Read end of chapter or
passage questions, or teacher-
generated questions as an
anticipatory set for learning. (Say,
“What do you think we are going to
learn today?”, “Let’s see who is
right.”, “Let’s look at the questions
at the end of the section.”).
0 1 0
55 A 6.2. Remind students to keep the
anticipatory set questions in mind
as they read.
0 1 0
O Procedure 7. Facilitate shared
reading.
4 6
56 A 7.1. Tell students to follow along
as others are reading because they
will be called upon when it is their
turn to start reading aloud.
0 1 0
7.2. Use one or more of the
following techniques
interchangeably for shared reading
throughout the lesson to keep
students engaged and on task:
57 D 7.2.1. IF students are pre-assigned
sections or paragraphs to read (i.e.,
round robin), THEN students may
be given two minutes to practice
reading their sections ahead of
time.
1 1 0
58 D 7.2.2. IF students are not pre-
assigned sections or paragraphs to
read, THEN students are randomly
assigned the order in which they
will read to avoid having students
not pay attention, or look ahead to
practice reading their sections (i.e.,
popcorn reading).
1 1 0
59 D 7.2.3. IF the passage is grade level
text and students in the group are
comfortable reading aloud, THEN
these students may volunteer or be
asked to take turns reading a
paragraph aloud.
1 0 1
60 D 7.2.4. IF the passage is grade level
text and students in the group are
not comfortable reading aloud,
THEN read the entire text aloud to
students as they follow along
before asking students to read
aloud.
1 0 1
61 D 7.2.5. IF students are struggling
readers, THEN assign shorter
paragraphs or sections of the text to
students to read OR you and each
student take turns reading one
sentence aloud to the group.
1 0 0
62 D 7.2.6. IF the passage is at students’
instructional or independent
reading level AND students are in
Kagan cooperative learning groups
with a range of student ability
levels (i.e., low, low medium,
medium high, high), THEN prompt
students to take turns reading with
a shoulder or face partner, OR read
in the order that their desks are
numbered.
0 0 1
63 A 7.3. Read the first paragraph aloud
to students to begin shared reading.
0 1 0
64 A 7.4. Ask students to share aloud
what they visualized or pictured in
their minds after reading along or
listening to the paragraph.
1 0 0
65 A 7.5. Jot down words incorrectly
decoded by each student and teach
student words using the tactile
approach (Dr. Glass Method) to
word recognition in Steps 5.7-5.8 at
a later time.
0 1 0
O Procedure 8. Teach for reading
comprehension.
9 10
66 A 8.1. After every paragraph conduct
structured reading comprehension
by asking questions from the end of
the chapter or text to maintain
student attention and focus, and to
check for understanding of the
reading.
1 1 1
67 D 8.2. IF students are able to mark in
text, THEN prompt students to
highlight answers to anticipatory
set questions as they are read.
0 1 0
68 D 8.3. IF the paragraph is 3 to 4
sentences long, THEN stop at the
end of the paragraph to ask end of
chapter or text questions.
0 1 0
69 D 8.4. IF the paragraph is longer or
complex (i.e., introduces multiple
vocabulary words; presents
information using complex
sentences; information-rich text),
THEN stop midway through the
paragraph to ask end of chapter or
text questions.
0 1 0
70 A 8.5. Ask the same student to
continue reading until the end of
the paragraph.
0 1 0
71 D 8.6. IF the paragraph is not
complex, THEN stop after the
paragraph or section to ask end of
chapter or text questions.
0 1 0
72 D 8.7. IF a student provides an
incorrect response, THEN ask the
student for evidence to support
his/her response. (Ask student,
“Where in the text did you find
evidence that supports that?”).
1 0 0
73 D 8.8. IF the student determines that
his/her answer is incorrect, THEN
allow the student to change his/her
response.
1 0 0
74 D 8.9. IF student cannot find
evidence to support response,
THEN prompt other students to
look for the evidence. (Ask other
members of the group, “What do
you think?”, “Thumbs up or
thumbs down, do you agree or
disagree?”, “Can you find
evidence?”, “If it’s not there, where
is it?”).
1 0 0
75 D 8.10. IF the group determines that
the response is incorrect, THEN
another student may answer the
question.
1 0 0
76 A 8.11. After each paragraph, ask 1-2
students to orally paraphrase what
they have read and to build upon
other students’ restatements of the
text.
1 0 0
77 A 8.12. Point out vocabulary words
introduced in Procedure 5. (Ask
students, “How does the
vocabulary word relate to what we
are reading?”).
1 0 0
78 D 8.13. IF the group rereads text the
following day, THEN prompt
students to read questions from the
end of the chapter or passage and
ask students if they can locate
responses in the text.
1 0 0
79 A 8.14. Ask students to engage in
group discussions to confirm or
disconfirm their group predictions
and to discuss the rationale they
will share with the class for
keeping or changing their
prediction.
0 0 1
80 A 8.15. Ask a student from each
group to explain if their group
prediction was correct or required
revision. (Ask, “Why was your
prediction right?”, “What would
you change about your
prediction?”).
0 0 1
81 A 8.16. Provide extrinsic rewards
(i.e., praise, tokens) to groups for
keeping or making strong
predictions, even if initial
predictions were wrong.
0 0 1
82 D 8.17. IF a student group confirms
the accuracy of a prediction, but the
prediction is incorrect, THEN ask
the class to give the group thumbs
up or down to show agreement or
disagreement with the prediction.
0 0 1
83 A 8.18. Ask groups to make revisions
to their prediction until it is correct.
0 0 1
84 A 8.19. Prompt students to visualize
the steps or sequence of the passage
now that they have the correct
prediction.
1 0 0
O Procedure 9. Teach for
identifying the main idea and
details. (This procedure is taught
to novice teachers in two
sections: Steps 9.1-9.13.1 and
9.14-9.34.)
21 13
S Note: Main idea and supporting
detail skills are taught separately
through teacher modeling and
practice, and when students have
mastered these individual skills
they can be concurrently applied to
informational text.
85 A 9.1. Tell students that they are
going to identify the main idea and
supporting details of the paragraph
and that the main idea is talked
about in every paragraph.
1 0 1
86 A 9.2. Remind students that the main
idea is typically found in the first
sentences of the paragraph,
whereas supporting details come
after the main idea and may be
found in the middle and end of the
paragraph.
1 0 1
87 A 9.3. Informally assess student
understanding of main idea and
supporting details by asking
students to read a short paragraph
and orally respond to main idea and
supporting detail questions.
1 0 0
88 D 9.4. IF students demonstrate an
understanding of main idea and
supporting details, THEN go to
Step 9.6.
1 0 0
89 D 9.5. IF students do not demonstrate
an understanding of main idea and
supporting details or struggled with
these skills in Step 9.3, THEN go
to Step 9.24 to use short paragraphs
to teach and reinforce these skills.
1 0 0
90 A 9.6. Give students 2-3 minutes to
skim the text to look for and
underline the main idea of each
paragraph.
0 0 1
91 A 9.7. Explain/remind students that
skimming is to quickly look over
the text, without reading the entire
paragraph or passage to search for
the main idea.
0 0 1
92 D 9.8. IF skimming is a new skill for
students, THEN skim the passage
and think aloud to demonstrate this
process for students.
0 0 1
93 D 9.9. IF a paragraph is a “skinny”,
or 3-4 sentences in length, THEN
students underline the first
sentence.
0 0 1
94 D 9.10. IF a paragraph is a “fat”,
THEN students underline the first
and last sentences.
0 0 1
95 A 9.11. Prompt students to read the
last paragraph and underline the
entire paragraph.
0 0 1
96 A 9.12. Explain that the writer will
most likely restate the main idea of
the passage in the last paragraph.
0 0 1
97 A 9.13. Ask one student from each
Kagan group to read all underlined
text aloud to group members.
0 0 1
S 9.13.1. Note: Gradually fade group
reading so that each student is
required to read underlined text
individually.
98 A 9.14. As students read the passage,
stop at the end of each paragraph
and ask students teacher-generated
questions to elicit the main idea.
(Ask students, “What did you just
learn or discover?”, “What is the
purpose of the paragraph?”, “How
did the author explain…?”, “Can
we add to that?”).
0 1 1
99 D 9.15. IF the paragraph is critical to
understanding the main idea,
THEN ask a sufficient number of
questions for students to
demonstrate an understanding of
the ideas in the paragraph.
0 1 0
100 A 9.16. Ask the group teacher-
generated detail questions. (“What
did the passage say about …?”,
“How did the passage explain
…?”).
0 1 0
101 A 9.17. After each student response,
ask other students for thumbs up or
down to show agreement or
disagreement with the response.
1 0 1
102 D 9.18. IF students in the group do
not agree with an answer a student
shares aloud, THEN ask students to
“bounce” back to the text to reread
and skim for the correct answer OR
the teacher may reread text aloud.
0 1 1
103 A 9.19. Ask the same student to
make another attempt to answer the
main idea question.
0 1 0
104 D 9.20. IF the student still does not
provide the correct answer, THEN
ask another student to answer the
question and ask other students in
the group for thumbs up or down to
show agreement or disagreement
with the response.
0 1 1
105 D 9.21. IF the other student does not
provide the correct answer, THEN
ask remaining students to share
answers to questions until the class
reaches a consensus OR give
student the answer and discuss why
this is the correct answer.
0 1 1
106 A 9.22. Prompt one of the students
that experienced difficulty to repeat
the correct answer to the group.
0 1 0
107 D 9.23. IF structured writing is part
of the lesson, THEN proceed with a
writing lesson after this step.
0 0 1
108 D 9.24. IF students do not
demonstrate an understanding of
main idea and supporting details or
struggled with these skills in Step
9.3, THEN provide students with
an independent reading level
paragraph to read aloud as a group
and model how to distinguish
between main idea and supporting
details of the paragraph.
1 0 1
109 A 9.25. Ask students to describe what
they are visualizing in their minds
after reading the paragraph to
check that students are using
visualization strategies.
1 0 0
110 D 9.26. IF students are able to
identify the main idea, THEN
proceed with a lesson in supporting
details.
1 0 0
111 A 9.27. To teach students how to
identify supporting details, select a
text that is familiar to students and
ask students to first identify the
main idea.
1 0 0
112 A 9.28. Model how to identify
supporting details in a passage by
pointing out a detail sentence AND
color coding the main idea and
supporting details with a
highlighter to help visual learners
see the pattern of a paragraph.
1 0 1
113 A 9.29. Prompt students to take turns
identifying detail sentences from
the same paragraph. (Ask students,
“Are these the bits of information
that go with the big picture?”, “Do
they match the picture in your
head?”).
1 0 0
114 A 9.30. Use concepts of main idea
and supporting details from a
narrative passage or a biography to
remind students of the differences
between the “big idea” and
supporting details in text. (Ask
students, “How do you know what
the text is about?”, “Where do you
find main idea and details?”, “What
did the character in the story do
first?”).
1 0 0
115 D 9.31. IF students demonstrate an
understanding of main idea and
supporting details when the skills
were taught in isolation, THEN
introduce paragraphs that require
students to concurrently identify
the main idea and supporting
details AND ask students to take
turns identifying the main idea and
supporting details of a paragraph as
a group.
1 0 0
116 A 9.32. Give students an unfamiliar
paragraph and ask students to
independently highlight the main
idea and three supporting details in
the paragraph.
1 0 0
117 A 9.33. Ask students to share the
main idea and supporting details
with a partner or the group.
1 0 0
118 A 9.34. After several opportunities
for practice, ask students to identify
which details are most important in
supporting the topic.
1 0 0
O Procedure 10. Teach for making
inferences.
5 4
119 A 10.1. Remind students that a
response to an inferential question
is not directly stated or written in
the text and that inferences are
made using information from the
text to draw a conclusion.
1 1 1
120 A 10.2. Ask inferential questions as
students read each section and
again at the end of the passage.
(For example, “What effect does
this have on…?”, “What is the best
solution to this problem?”, “What
do you think caused this to
happen?”).
0 1 1
121 A 10.3. Tell students which questions
require them to make inferences.
0 0 1
122 D 10.4. IF students do not
demonstrate a solid understanding
of making inferences during
questioning, THEN teach students
the cause and effect text structure
in Step 11.1 and reintroduce
inferences after students are able to
consistently identify cause and
effect relationships in text.
1 0 0
123 A 10.5. Ask students to look for and
underline clues within the text to
support their inferences.
0 0 1
S 10.5.1. Note: May use example of
a detective looking for clues, or
cookie crumbs left behind to solve
a case.
124 D 10.6. IF student does not give the
correct answer to an inferential
question, THEN think aloud to
model the inferential thought
process on how to make inferences.
0 1 1
125 A 10.7. Ask students to explain how
you generated the inferential
response. (Ask students, “How did
I come up with that?”).
0 1 0
126 D 10.8. IF a student response is still
incorrect, THEN ask other students
for thumbs up or down to show
agreement or disagreement with the
response AND ask the student to
try again.
0 0 1
127 D 10.9. IF students from the group
do not generate the correct answer,
THEN give the correct answer.
0 0 1
O Procedure 11. Teach specific
text types to show how passages
are structured. (Text types in
Procedure 11 are initially taught
to students as sub-skills and then
incorporated into comprehension
questions in Procedure 8 as
student skill increases.)
22 16
128 A 11.1. Teach students text structure
for cause and effect as students
progress from literal to inferential
text.
1 0 0
S 11.1.1. Note: Cause and effect
provides a foundation for students
to make inferences and to draw
their own conclusions from the
text.
129 A 11.1.2. As an anticipatory set,
provide students with a concrete
example of cause and effect by
demonstrating a cause and effect
relationship. (Tell students,
“Here’s my cup. If I push it off the
table what will happen?”).
1 0 0
130 A 11.1.3. Prompt students to
demonstrate cause and effect by
having students push their pencils
off the desk, or a comparable cause
and effect activity. (Ask students,
“What was the cause? What was
the effect?”).
1 0 0
131 A 11.1.4. Explain a cause and an
effect. (Tell students, “A cause is
why something happens and the
effect is what happens.”)
1 0 0
132 A 11.1.5. List examples of cause and
effect scenarios on the board and
introduce a multi-flow map to
demonstrate cause and effect
relationships. (Ask students, “If
you get in a fight on the playground
what will happen, what is the
effect?”, “If this is the effect, what
were the causes?”).
1 0 1
133 A 11.1.6. Prompt students to make
connections to stories they have
read and to share aloud specific
examples of cause and effect
relationships from the stories.
1 0 0
134 A 11.1.7. Present students with text
and prompt students to skim for
cause and effect key or clue words
to identify a cause and effect
relationship in the text.
1 0 1
135 A 11.1.8. Remind students that key
words are posted on the wall for
their reference (i.e., Zoom In or
Thinking Maps).
0 0 1
136 A 11.1.9. Tell students to pay close
attention to where the key words
are located in the passage, as the
key words help figure out the cause
and effect of a topic (e.g., lightning
storms, earthquakes).
0 0 1
137 A 11.1.10. Prompt students to
underline key words as they are
found in the text.
0 0 1
138 D 11.1.11. IF you or the students
identify a key word in the text that
is not listed on the key words chart,
THEN add the word to the chart.
0 0 1
139 A 11.1.12. Ask students to share out
the key words they found while
skimming the text. (Ask students,
“Which key words did we find?”).
0 0 1
140 A 11.1.13. Prompt students to read
the text aloud using one or more of
the methods in Step 7.2.
1 0 1
141 D 11.1.14. IF students are reading in
their Kagan groups, THEN remind
students to raise their hands to ask
for assistance with decoding
difficult words or text.
0 0 1
142 A 11.1.15. Prompt students to listen
and look for cause and effect key
words in the text as it is read aloud.
0 0 1
143 D 11.1.16. IF a key word has been
read, THEN stop to ask students if
they recognized the key word in the
sentence and ask a student to tell
the group the key word.
0 0 1
144 D 11.1.17. IF the passage contains
multiple paragraphs, THEN stop
after each paragraph to ask students
if a cause and effect has been
identified in the passage (Ask, “Do
we have a cause yet?”, “Here’s a
key word, is it telling us
anything?”).
0 0 1
145 A 11.1.18. Ask students to identify
the cause and effect relationship
using key words and evidence from
the text.
1 0 0
146 A 11.1.19. Prompt students to
underline sentences that contain the
answers to teacher-generated
comprehension questions from Step
9.14. (Ask students, “Where did we
see the answer?”, “Where is the
key word that guided us to the
answer?”).
0 0 1
147 D 11.1.20. IF students are able to
identify cause and effect
relationships when presented with
familiar and unfamiliar text, THEN
provide students an opportunity for
independent practice with
unfamiliar text.
1 0 0
148 D 11.1.21. IF students demonstrate
an understanding of cause and
effect relationships, THEN embed
cause and effect questioning in
comprehension questions in
Procedure 8 to generalize these
skills in subsequent instruction.
1 0 0
149 A 11.1.22. Project a multi-flow
thinking map on the board to
provide students with a visual
representation of cause and effect
relationships in the text.
0 0 1
150 D 11.1.23. IF this is the first time
students have worked on multi-
flow maps, THEN complete the
thinking map as a group to model
how to identify cause and effect
relationships (i.e., think aloud; look
for key words; explain how
paragraphs are organized in text)
and allow students to copy your
entries into their journals.
0 0 1
151 D 11.1.24. IF students have had
several opportunities to work with
multi-flow maps, THEN model one
cause and effect relationship and
ask students to make additional
entries into the multi-flow map by
working collaboratively with their
table groups or on their own.
0 0 1
152 A 11.1.25. Remind students to focus
on the organization of cause and
effect relationships, not the spelling
of words.
0 0 1
153 D 11.1.26. IF the multi-flow maps
are completed in table groups,
THEN assign each student a role as
leader, recorder, presenter, and
timekeeper of the task and ask the
presenter to report one cause and
effect relationship recorded by the
group.
0 0 1
154 D 11.1.27. IF students completed the
multi-flow map independently,
THEN ask a few students to share
aloud one of their findings.
0 0 1
155 A 11.1.28. Ask students to orally
justify their responses using
evidence from the text. (Ask, “How
do you know that is correct?”).
0 0 1
156 D 11.1.29. IF students demonstrate a
solid understanding of cause and
effect relationships in text, THEN
ask questions to make inferences
using implicit information from
text.
1 0 0
O 11.2. Teach students text structure
for compare and contrast.
157 D 11.2.1. IF you are introducing
other text structures such as
compare and contrast, THEN
follow the same process as in cause
and effect Procedure 11, but
provide students with a list of key
words germane to the text structure
(i.e., for this reason, because, as a
result, was caused by, etc.) and use
a double bubble thinking map to
compare and contrast information
from the passage read.
1 0 1
158 A 11.2.2. After the reading, engage
students in a discussion regarding
the similarities and differences
between the physical characteristics
of students within the group to
provide students with an
opportunity to practice compare
and contrast with real life
situations.
1 0 0
159 A 11.2.3. Ask students how the
subject of the text compares to
someone in the group or someone
they know and use a double bubble
map to list similarities and
differences on the board. (Ask,
“What about this person is similar
or different than you?”).
1 0 0
160 D 11.2.4. IF time permits, THEN
ask students to compare and
contrast what was read in two
different informational passages.
1 0 0
161 D 11.2.5. IF students demonstrate an
understanding of compare and
contrast, THEN embed compare
and contrast questioning in
comprehension questions in
Procedure 8 to generalize these
skills in subsequent instruction.
1 0 0
O 11.3. Teach students text structure
for sequencing of events.
162 D 11.3.1. IF you are introducing
other text structures such as
sequencing of events, THEN
follow the same process as in cause
and effect Procedure 11, but
provide students with a list of key
words germane to the text structure
(i.e., first, then, next, etc.) and use a
flow map to sequence information
and events from the passage read.
1 0 1
163 A 11.3.2. Remind students that key
words or transition words explain
the order in which events occur in
the passage.
1 0 0
164 A 11.3.3. After the reading, guide
students in a general discussion of
the sequence of events in the text.
1 0 0
165 D 11.3.4. IF students demonstrate an
understanding of sequencing of
events, THEN embed sequencing of
event questioning in
comprehension questions in
Procedure 8 to generalize these
skills in subsequent instruction.
1 0 0
O Procedure 12. Assess student
learning.
7 7
Assess student learning by using
one or more of the following
assessments:
166 A 12.1. After students have finished
reading the text, prompt students to
tell a partner one thing they
learned, or something they already
knew that was written in the text.
1 0 0
167 A 12.1.1. Ask students to share their
partner’s idea with the group.
1 0 0
S 12.1.2. Note: Pair-share activities
support social skills, active
listening, and expressive language
in students.
168 D 12.2. IF there is an accompanying
activity (worksheet) from the
general education teacher, THEN
preview the activity as a group OR
provide students with a modified
version of the activity sheet (i.e.,
larger spaces, word bank) to
complete during small group
instruction.
1 0 0
169 A 12.2.1. Answer 1-2 questions as a
group and ask students to answer
the remaining questions
independently or with your
assistance as needed.
1 0 0
170 D 12.2.2. IF students require
assistance, THEN direct students
back to specific sections of the
reading to independently search for
information in the text.
1 0 0
171 A 12.2.3. Correct student work and
ask students to share aloud their
responses with the group.
1 0 0
172 D 12.2.4. IF several students
experienced difficulty with a
particular question, THEN reteach
that particular concept.
1 0 0
173 A 12.3. Assess student learning by
prompting students to answer end-
of-chapter, passage, and teacher-
generated questions.
0 1 0
174 D 12.3.1. IF using a remedial reading
comprehension program, (e.g.,
CARS Comprehensive Assessment
of Reading Strategies), THEN ask
students to read the questions
provided at the end of the passage.
0 1 0
175 D 12.3.2. IF using a textbook, THEN
reword or simplify questions
provided at the end of the section
or chapter, if needed to clarify what
the question is asking, and ask
students to read the questions.
0 1 0
176 A 12.4. Present students with
curriculum-based assessment using
the format of standardized state
assessments to determine if
students can generalize these skills
across different types of material.
1 1 0
177 D 12.4.1. IF the student is unable to
read the test independently, THEN
read the test aloud to student,
otherwise, student takes the
curriculum-based assessment
independently.
0 1 0
178 D 12.4.2. IF a student is able to read
the test without your assistance,
THEN ask the student to take the
curriculum-based assessment
independently.
0 1 0
179 A 12.4.3. Remind students to use
test-taking strategies such as
process of elimination and
rereading of passage when
answering questions.
0 1 0
                                           SME A    SME B    SME C
Total Action and Decision Steps (179)         89       44       72
Action Steps (100)                            48       24       41
Decision Steps (79)                           41       20       31
Total Action and Decision Steps Captured   49.72%   24.58%   40.22%
Action Steps Captured                      48.00%   24.00%   41.00%
Decision Steps Captured                    51.90%   25.32%   39.24%
Action and Decision Steps Omitted             90      135      107
Action Steps Omitted                          52       76       59
Decision Steps Omitted                        38       59       48
Action and Decision Steps Omitted          50.28%   75.42%   59.78%
Action Steps Omitted                       52.00%   76.00%   59.00%
Decision Steps Omitted                     48.10%   74.68%   60.76%

                                           Average Captured   Average Omitted
Total Action and Decision Steps                 38.18%             61.82%
Action Steps                                    37.67%             62.33%
Decision Steps                                  38.82%             61.18%
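The summary percentages above follow directly from the raw counts (each SME's captured steps divided by the 179 total action and decision steps). A minimal sketch, with hypothetical variable names, that reproduces the capture rates:

```python
# Captured (action, decision) step counts per subject matter expert,
# out of 100 action steps and 79 decision steps (179 total).
captured = {"A": (48, 41), "B": (24, 20), "C": (41, 31)}
TOTAL_ACTION, TOTAL_DECISION = 100, 79
TOTAL = TOTAL_ACTION + TOTAL_DECISION  # 179

for sme, (a, d) in captured.items():
    pct = (a + d) / TOTAL * 100
    # e.g., SME A: 89/179 = 49.72% captured, 50.28% omitted
    print(f"SME {sme}: captured {a + d}/{TOTAL} = {pct:.2f}%, omitted {100 - pct:.2f}%")

# Average capture rate across the three experts: 205/537 = 38.18%
avg = sum(a + d for a, d in captured.values()) / (3 * TOTAL) * 100
print(f"Average captured: {avg:.2f}%")
```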
Appendix H
Coding Spreadsheet: 1i+3r Incremental Method Gold Standard Protocol Procedures
Section Steps | Step Type | Final 1i+3r Incremental Method Gold Standard Protocol Procedures | Subject Matter Expert: A B C | A D
O Procedure 1. Select the text for
the lesson.
7 2
1 A 1.1. Match student instructional
reading levels to the text by
selecting text that meets all of the
following criteria:
0 0 0
2 A 1.1.1. Select instructional level text
that students can decode with at
least 90% accuracy.
0 0 0
3 A 1.1.2. Review the structural
language pattern of the text (i.e.,
sentence structure, word usage,
prose) to determine if students will
be able to read and understand the
text with your assistance.
0 0 0
4 A 1.1.3. Examine the text to identify
vocabulary that may be new to the
students and select text that contain
words they can learn with your
assistance.
0 0 0
5 A 1.1.4. Identify the number of ideas
in the text that may be new to the
students and select text that
contains ideas they can learn with
your assistance.
0 0 0
6 A 1.1.5. Preview text layout and
select text that contains features
such as headings, subheadings,
pictures with captions, side notes,
etc. that will be supportive to the
students in your group.
0 0 0
7 A 1.2. Determine the amount of time
and resources you are able to
dedicate to the lesson.
0 0 0
8 D 1.3. IF there is sufficient time to
teach new ideas and vocabulary
AND the level of the text is
challenging, but doable for the
students, THEN use the text.
0 0 0
9 D 1.4. IF any of the conditions above
are not met, THEN adjust the
length of the text to fit the time and
resources you have available, OR
identify another text that is
challenging, but doable by the
students.
0 0 0
O Procedure 2. Prepare the
lesson.
4 0
10 A 2.1. Consider three variables when
preparing for the lesson: context
(i.e., time; resources), complexity
of information presented in text
(i.e., sentence structure, new
vocabulary, information-rich text),
and student ability.
0 1 0
11 A 2.2. Preview text and determine
words to use as vocabulary words,
including bolded text and other
words that may be unfamiliar to
students, and write them on
individual index cards.
1 1 1
12 A 2.3. Generate comprehension
questions for each section or
paragraph that check on students’
understanding of that section or
paragraph and write them down.
0 0 1
13 A 2.4. Pair students according to
academic and social need using
Kagan student grouping strategies
(i.e., low, low medium, medium
high, and high student ability
levels).
0 1 1
O Procedure 3. Activate prior
knowledge and introduce the
topic.
7 8
14 D 3.1. IF there are 4 or fewer
students, THEN keep the students
in one group to work with the
teacher for Know-Want-Learn
(KWL) activities AND go to Step
3.5.
0 1 0
15 D 3.2. IF there are 5 or more
students, THEN break students into
groups for Know-Want-Learn
(KWL) activities.
0 1 0
16 D 3.3. IF the class includes both non-
independent and independent
students AND an instructional
assistant is available, THEN assign
the independent group of students
to the instructional assistant and
work with the non-independent
group.
0 1 0
17 D 3.4. IF an instructional assistant is
not available to work with
independent student groups, THEN
allow independent student groups
to work independently and work
directly with non-independent
students.
0 1 0
18 A 3.5. Present students with pictures
about the general topic to engage
students in learning without an
emphasis on vocabulary or
questioning.
0 0 1
19 D 3.6. IF you are conducting KWL,
THEN go to Step 3.8.
0 1 0
20 D 3.7. IF you are not conducting
KWL, THEN go to Step 3.14.
0 0 1
21 A 3.8. Ask students to write what
they know about the topic (Know
of KWL).
0 1 0
S 3.8.1. Standard: Students provide
the number of Know responses
according to their ability.
22 A 3.9. Ask one student from each
group to report out what the group
knows about the topic.
0 1 0
23 A 3.10. While students report out,
prompt students from other groups
to record new information on their
KWL charts.
0 1 0
24 D 3.11. IF students do not
demonstrate familiarity with the
content, as determined by student
responses during the Know activity,
THEN skip KWL process AND go
to Procedure 4 (preview of
textbook or passage to build
knowledge about the topic).
0 1 0
25 D 3.12. IF students are familiar with
content, as demonstrated by student
responses during the Know activity,
THEN continue with KWL by
asking students what they want to
learn about the topic (Want of
KWL) and tell them to write their
answers in the KWL chart.
0 1 0
26 A 3.13. Tell the students that they
will complete the Learn section of
KWL at the end of the lesson.
0 0 0
27 A 3.14. Prompt each student to
provide input by sharing their
knowledge of the topic and/or their
experiences related to the topic
with the group.
1 0 1
28 A 3.15. Write student-shared
experiences about the topic on the
board in a visually clear manner
(i.e., Thinking Map, list, etc.).
0 0 1
O Procedure 4. Preview the text.
1 1
29 A 4.1. Ask students to preview the
text by looking at text features
including title, subtitles, captions,
graphs, maps, bolded words, and
pictures in the textbook or passage
to build knowledge about the topic
and write down these text features
in their journals. (Tell students,
“Own the page by previewing what
you will read.”)
1 1 1
4.1.1. Alternative sequence: This
procedure may also come after
vocabulary building, Procedure 5.
30 D 4.2. IF students are excited about
text features they discovered while
previewing the reading or
information they already knew
about the topic, THEN ask students
to share aloud with the class.
1 1 1
O Procedure 5. Teach vocabulary
words.
15 10
31 A 5.1. Ask students to skim the text
by telling them not to read every
word and to find at least one word
they do not know.
0 0 1
32 A 5.2. Ask students to share
vocabulary words they do not know
from the text and write them on the
board.
0 0 1
33 D 5.3. IF students are reading from a
reader without bolded vocabulary
or key terms, THEN show and read
aloud the vocabulary words you
preselected and wrote on the index
cards in Step 2.2 (e.g., earthquake).
1 1 1
34 D 5.4. IF the word is a compound
word, THEN explain that two
words are combined to make a new
word.
1 1 1
35 A 5.5. Ask students to take turns
reading the words introduced aloud,
or to choral read the words
introduced.
1 1 1
36 D 5.6. IF a student is unable to reread
a particular multi-syllabic
vocabulary word or word part,
THEN chunk the word into
syllables to make the word more
easily decodable.
0 0 1
37 D 5.7. IF a student still cannot read
the word, THEN tell student the
word AND model a tactile
approach to word memorization BY
EITHER:
1 1 1
38 A 5.7.1. Finger tracing and reciting
of the word simultaneously (i.e.,
Dr. Glass Method: hear, touch, and
say) AND THEN tell the student to
finger trace and recite the word
three times, OR
1 0 0
39 A 5.7.2. Guide student to use symbol
imagery to visualize and sequence
the sounds and letters within the
word (i.e., Seeing Stars).
0 1 1
40 A 5.7.3. For symbol imagery, show
the word to student, cover the word,
student spells the word orally, and
student skywrites the word.
0 1 1
41 A 5.8. Repeat process for all
vocabulary words, not just difficult
words, so as not to invalidate
vocabulary words that students
have chosen or single out struggling
readers.
0 0 1
42 D 5.9. IF all students are able to
recite the vocabulary word, THEN
engage students in a discussion
regarding the meaning of the word
in Step 5.14.
0 1 1
43 D 5.10. IF a student is unable to read
the vocabulary word, THEN write a
note to provide one-on-one
instruction at a later time to reteach
the student the word using the
tactile approach AND go to Step
5.14.
1 0 0
44 A 5.11. Ask students to find an
unfamiliar vocabulary word listed
on the board or on your index card
in the text by telling them the
specific page where the vocabulary
word can be found.
0 1 0
45 A 5.12. Prompt students to read the
sentence with the vocabulary word
and use context clues to guess word
meaning. (Ask students, “What do
you think the word means based on
that sentence?”).
0 1 0
46 D 5.13. IF the word meaning is
difficult or complex, THEN point
out the word parts (prefix, suffix,
etc.) and the meaning of these word
parts. (Tell students, “That’s how
you can remember what this word
might mean because it has this
word part.” “Does this word sound
like another word you know?”).
0 1 1
47 D 5.14. IF the text contains bolded
words with definitions, THEN read
the definition of the vocabulary
word provided in the text to
students.
0 1 1
48 D 5.15. IF the definition is not
provided in the text, THEN engage
students to collaboratively look up
word meaning online or in a
dictionary.
0 0 1
49 A 5.16. Ask students to write the
word and its definition in their
journals.
0 1 1
50 D 5.17. IF the definition is long or
complex, THEN shorten definition
for students.
0 0 1
51 A 5.18. Ask students to orally
construct a sentence using the
vocabulary word with a shoulder or
table partner, or to state the
meaning of the word without using
the word in the definition either
with a partner or with the whole
group.
1 1 1
52 A 5.19. Pair up with a student or pair
of students requiring more support
to ensure that all students
understand the task.
0 0 1
53 A 5.20. Ask students to share the
sentence generated by his or her
partner with the group (i.e., Kagan
strategies).
0 0 1
5.20.1. Reason: This process is
time-efficient and keeps all students
engaged in learning.
54 A 5.21. Repeat step for each
vocabulary word.
1 1 1
55 A 5.22. Remind students to refer to
their journals for the definition of
vocabulary words as needed during
the reading, assignment
completion, and for testing.
0 1 1
O Procedure 6. Teach guiding
questions.
4 3
56 A 6.1. Tell the students to look at the
titles or subtitles to make
predictions about what they are
going to read and ask them to share
with the class.
0 1 0
57 D 6.2. IF the content of the reading is
complex (i.e., content unfamiliar to
students based on prior knowledge
discussion; new vocabulary
introduced; information-rich text),
THEN repeat this process for each
subtitle before students read that
particular section.
0 0 1
58 A 6.3. Read questions from the end
of chapter or passage to students as
an anticipatory set for learning.
(Tell students, “Let’s look at the
questions at the end of the
section.”, “What do you think we
are going to learn today?”, “Let’s
see who is right.”).
1 0 1
59 A 6.4. Explain to students that
questions guide what is going to be
learned in each section of the
reading.
0 1 1
60 D 6.5. IF there are questions in the
margin of the page, THEN read the
margin questions to students as
each section of the reading is
presented.
0 1 0
61 D 6.6. IF questions are not provided
in the margin of the page, THEN
provide students with a copy of
teacher-generated questions from
Step 2.3.
0 0 1
6.6.1. Note: These questions are in
addition to the end of chapter
questions introduced (Procedure 6)
as they target more specific details
from the reading and are
inferentially oriented.
62 A 6.7. Remind students to keep the
comprehension questions in mind
as they read.
1 1 1
O Procedure 7. Facilitate shared
reading.
9 3
63 A 7.1. Assign shorter paragraphs or
readings to struggling readers, OR,
if preferred,
0 1 0
64 A 7.2. Assign all students paragraphs
similar in length.
0 0 1
65 D 7.2.1. IF a student is struggling or
has reached frustration with the
reading, THEN stop to ask a
comprehension question and ask
the next student to start reading.
0 0 1
66 A 7.3. Use one or more of the
following techniques for shared
reading interchangeably throughout
the lesson to keep students engaged
and on task:
0 1 0
67 D 7.3.1. IF students are pre-assigned
sections or paragraphs to read for
round-robin, THEN allow students
to preview and practice their
sections before round-robin reading
begins.
1 1 1
68 D 7.3.2. IF students are not pre-
assigned sections or paragraphs to
read for popcorn reading, THEN
the student who is reading uses
numbered sticks to determine who
will read next.
1 1 1
69 A 7.3.2.1. Remind students to follow
along because they will be called
upon when it is their turn to start
reading aloud.
1 1 1
70 A 7.3.3. Choral reading (i.e., students
read aloud in unison with the whole
class or group of students); OR
0 1 0
71 A 7.3.4. Oral cloze where teacher
reads aloud and when he or she
pauses, students choral read the
word.
0 1 0
72 A 7.5. Read the first paragraph aloud
to students.
1 1 1
73 A 7.6. Call on each student to read a
paragraph aloud using the
techniques in Step 7.3.
0 1 1
74 A 7.7. Jot down words incorrectly
decoded by each student and teach
student words using the tactile
approach (Dr. Glass Method) to
word recognition in Steps 5.7-5.7.1
at a later time.
1 0 0
O Procedure 8. Teach for reading
comprehension.
5 8
75 A 8.1. After every paragraph conduct
structured reading comprehension
by asking questions from the end of
the chapter to maintain student
attention and focus, and to check
for understanding of the reading.
1 0 0
76 D 8.2. IF students are able to mark in
text, THEN prompt students to
highlight answers to questions as
they are read.
1 1 1
77 D 8.3. IF school policy prohibits
students from highlighting in the
text, THEN tell students to use
sticky notes to keep track of
answers to main idea, detail, and
inferential questions.
0 1 1
78 D 8.4. IF the answer to a margin
question or question generated by
teacher is read in a sentence,
paragraph, or section of the
reading, THEN prompt students to
look for the answer in the section
read. (Tell students, “We just
answered that question. What’s the
answer to the question?”, “I just
heard the answer to the question.
Who can find it?”, “I wonder if a
question was just answered.”).
0 1 1
79 D 8.5. IF the paragraph is 3 to 4
sentences long, THEN stop at the
end of the paragraph.
1 0 0
80 D 8.6. IF the section or paragraph a
student is reading is difficult for the
student to decode, THEN stop
immediately after the sentence is
read to discuss answer to margin
question or question generated by
teacher.
0 0 1
81 A 8.7. Ask another student in the
group to proceed with reading.
0 0 1
82 D 8.8. IF the paragraph is longer or
complex (i.e., introduces multiple
vocabulary words; presents
information using complex
sentences; information-rich text),
THEN stop midway through the
paragraph to check for
understanding.
1 0 0
83 A 8.9. Ask the same student to
continue to read until the end of the
paragraph.
1 0 0
84 D 8.10. IF the reading is not
complex, THEN stop after the
paragraph or section to discuss
answer to margin question(s).
1 1 1
85 A 8.11. Point out vocabulary terms
after every paragraph read. (Ask
students, “Is one of our vocabulary
words in this paragraph?”).
0 0 1
86 A 8.12. Give the reader an
opportunity to respond first.
0 0 1
87 D 8.13. IF the reader does not
recognize the vocabulary term used
in the paragraph, THEN allow other
students to respond.
0 0 1
O Procedure 9. Teach for
identifying the main idea and
details.
8 12
88 D 9.1. IF the students have received
explicit instruction on the concepts
of “main idea” and “supporting
details,” THEN remind students
that the main idea is almost always
the first sentence of a paragraph by
telling them, “When you are unsure
of the main idea, go back and read
the first sentence.”
0 1 1
89 D 9.2. IF the students have not
received explicit instruction on the
concepts of “main idea” and
“supporting details,” THEN teach
the two concepts by providing a
definition and at least one example
and non-example for each concept
AND ask students to provide at
least one example and non-example
for each concept.
0 0 0
90 D 9.3. IF the paragraph is critical to
understanding the main idea,
THEN ask a sufficient number of
questions for students to
demonstrate an understanding of
the ideas in the paragraph.
1 1 1
91 D 9.4. IF the paragraph is not critical
to understanding the main idea,
THEN use fewer questions to
facilitate a general discussion of the
paragraph AND move on to Step
9.6.
0 1 0
92 A 9.5. During student reading, stop at
the end of each paragraph and ask
students teacher-generated
questions to elicit the main idea.
(Ask students, “What did you just
learn or discover?”, “What is the
purpose of the paragraph?”, “How
did the author explain…?”, “Can
we add to that?”, “Where is the
main idea sentence?”, and higher-
level “Why?” questions to engage
students in their responses.)
1 1 1
93 D 9.6. IF the student does not get the
correct answer, THEN the same
student or teacher may reread, or
the group may choral read the
paragraph or sentence that
illustrates the answer, OR the
teacher may narrow the reading
selection for the student and prompt
student to reread the section in
search for the answer.
1 1 1
94 A 9.7. Ask the same student to make
another attempt to answer the main
idea question.
1 1 1
95 D 9.8. IF the student still does not
provide the correct answer, THEN
ask another student to answer the
question and ask other students in
the group for thumbs up or down to
show agreement or disagreement
with the response.
1 0 0
96 D 9.9. IF the student still does not
provide the correct answer, THEN
give the student the answer and
discuss why this is the correct
answer, OR another student or the
teacher may help this student
identify the answer to the question.
1 1 1
97 A 9.10. Prompt the student that
experienced difficulty to share the
correct answer with the group.
1 0 1
98 A 9.11. Ask the group teacher-
generated detail questions (“What
did the author say about …?”,
“How did the author explain …?”,
“Why is this important to the main
idea?”, “Why is this an important
detail?”) and wait for student
responses.
1 1 1
99 D 9.12. IF student responses to
supporting detail questions do not
support main idea, THEN prompt
students to look back at the text for
additional details.
0 1 1
100 D 9.13. IF there has been an in-depth
discussion regarding the content of
the reading and important
supporting details, and students are
able to identify the most important
details in the reading, THEN go to
Step 9.14.
0 1 1
101 D 9.14. IF students are still unable to
identify important supporting
details, THEN continue the
discussion to elicit more responses
to detail questions, and to reinforce
comprehension and understanding
of the reading.
0 1 1
102 A 9.15. After each paragraph read,
prompt students to write the main
idea of the paragraph in their
journal using their own words.
0 0 1
103 A 9.16. Ask for a volunteer to share
out their main idea statement.
0 0 1
104 D 9.17. IF students do not agree with
this main idea statement, THEN
students can debate the main idea
statement until a consensus is
reached to encourage active
engagement in reading.
0 0 1
105 D 9.18. IF a consensus is reached for
the main idea statement, THEN
write the sentence on the board for
students to copy.
0 0 1
106 A 9.19. Prompt students to copy or
change their main idea sentence if
the main idea statement was
incorrect.
0 0 1
107 A 9.20. Ask students to record the
page number where the information
for each main idea statement was
found.
0 0 1
O Procedure 10. Ask students
inferential questions.
4 1
108 A 10.1. Tell students that
inferences are implied by events in
the text, but not explicitly stated.
1 0 1
109 A 10.2. Ask inferential questions
as students read each section and
again at the end of the passage. (For
example, “What effect does this
have on…?”, “What is the best
solution to this problem?”, “What
do you think caused this to
happen?”).
1 1 1
110 D 10.3. IF student does not give
correct answers to inferential
questions, THEN think aloud to
model the inferential thought
process on how to make inferences.
1 1 0
111 A 10.4. Ask students to explain
how you generated the inferential
response. (Ask students, “How did
I come up with that?”).
1 1 0
112 A 10.5. Query students for a
response; students may be just a
question away from the correct
answer.
0 0 1
O Procedure 11. Teach specific
text types to show how passages
are structured.
5 0
113 A 11.1. Explain the meaning of
text types and use short paragraphs
to show examples and non-
examples of compare and contrast,
cause and effect, sequential, and
general description text types.
0 0 1
114 A 11.1.1. Show how Thinking Maps
are used to visually organize the
information presented in these
paragraphs by, for example, using a
Venn diagram or double bubble
map for compare and contrast
passages, or multi-flow map for
cause and effect passages.
0 0 1
115 A 11.2. Explain that these are ways
that the writer organizes text to
point out important information in
the text.
0 0 1
116 A 11.3. Prompt students to underline
signal words for compare and
contrast, cause and effect,
sequential, and general description
text.
0 0 1
117 A 11.4. Use oral questioning to check
for understanding of text types.
0 0 1
O Procedure 12. Assess student
learning.
14 9
Assess student learning by using
one or more of the following
assessments:
118 D 12.1. IF you are using KWL,
THEN assess student learning by
prompting students to
independently record what they
have learned from the reading in
the Learn column of the KWL
chart.
0 1 0
119 A 12.2. Assess student learning by
prompting students to answer end-
of-chapter, passage, and teacher-
generated questions.
1 1 1
120 D 12.2.1. IF using a remedial reading
comprehension program, (e.g.,
CARS Comprehensive Assessment
of Reading Strategies), THEN ask
students to read the questions
provided at the end of the passage.
1 0 0
121 D 12.2.2. IF using a textbook, THEN
reword or simplify questions
provided at the end of the section or
chapter, if needed to clarify what
the question is asking, and ask
students to read the questions.
1 0 0
122 D 12.2.3. IF you generated questions
for the reading, THEN ask students
to also read those questions.
0 0 0
123 A 12.2.4. Prompt students to look
back in the reading and refer to
their journals for main idea
statements, vocabulary words, and
page numbers where specific
information can be found in the text
(Step 9.20) to find the information
they need to answer the questions.
0 1 1
124 A 12.2.5. Ask students to either
independently write answers to
questions in their journals along
with the page numbers where the
answers were found and share
aloud their responses to questions
with the group, OR, if preferred,
ask students to answer questions
orally by sharing aloud their
responses with the group.
0 1 1
125 D 12.2.6. IF a student shares aloud an
answer that is incorrect, THEN
prompt the student to go back and
find the answer in the text.
0 0 1
126 D 12.2.7. IF the student is unable to
find the answer in the text, THEN
direct the student to the section of
the text where the information
needed to answer the question can
be found and tell the student to
correct their answer orally.
0 0 1
127 A 12.2.8. Ask students to share out
the answers they have recorded in
their journal, THEN write down
one student answer or combine
student answers to form a correct
answer to each question and use the
document camera, whiteboard, or
other means to project what you
have written.
0 1 1
128 A 12.2.9. Prompt students to compare
the answers they have written in
their journals to the answers you
have projected.
0 0 1
129 D 12.2.10. IF student answers are
incorrect, THEN ask students to
revise the answers they wrote down
in their journals.
0 0 1
130 A 12.3. Assess student learning by
prompting students to write a group
summary of the main idea and
supporting details of the text in
their journals.
0 0 1
131 A 12.3.1. Ask students to share aloud
the main idea sentences they wrote
in Step 9.14 with the group.
0 0 1
132 A 12.3.2. Ask students to determine
(based on consensus) which student
sentences to use for the main idea
and supporting details in the group
summary.
0 0 1
133 A 12.3.3. Remind students that one
sentence will be used as the main
idea sentence and the rest of the
sentences will become supporting
detail sentences.
0 0 1
134 A 12.3.4. Encourage students to
alternate the use of sentences from
each student and to combine
student sentences to formulate more
complete thoughts.
0 0 1
135 A 12.3.5. Write down the main idea
and detail sentences to form a
summary paragraph and use the
document camera, whiteboard, or
other means to project what you
have written.
0 0 1
136 A 12.3.6. Ask students to write the
group summary in their journals.
0 0 1
137 A 12.4. Assess student learning by
administering a curriculum-based
assessment.
1 0 0
138 D 12.4.1. IF a student is unable to
read the test independently, THEN
read the test aloud to the student
and ask the student to write down
his/her responses.
1 0 0
139 D 12.4.2. IF a student is able to read
the test without your assistance,
THEN ask the student to take the
curriculum-based assessment
independently.
1 0 0
140 A 12.4.3. Remind students to use
test-taking strategies such as
process of elimination and
rereading of the passage to answer
questions.
1 0 0
140 Total Action and Decision Steps 44 66 91
83 Action Steps 24 35 57
57 Decision Steps 20 31 34
Total Action and Decision Steps 31.43% 47.14% 65.00%
Action Steps 28.92% 42.17% 68.67%
Decision Steps 35.09% 54.39% 59.65%
Action and Decision Steps Omitted 96 74 49
Action Steps Omitted 59 48 26
Decision Steps Omitted 37 26 23
Action and Decision Steps Omitted 68.57% 52.86% 35.00%
Action Steps Omitted 71.08% 57.83% 31.33%
Decision Steps Omitted 64.91% 45.61% 40.35%
Average Captured Omitted
Total Action and Decision Steps 47.86% 52.14%
Action Steps 46.59% 53.41%
Decision Steps 49.71% 50.29%
APPENDIX I
3i+3r Independent Method Gold Standard Protocol
Teaching Reading in Informational Text to 3rd-5th Grade Students with Mild to
Moderate Learning Disabilities During Small Group Instruction
Reading instruction in informational text is outlined in this protocol. The instructional
tasks are taught over multiple days.
Main Procedures:
1. Select the text for the lesson.
2. Prepare the lesson.
3. Activate prior knowledge and introduce the topic.
4. Preview the text.
5. Teach vocabulary words.
6. Teach guiding questions.
7. Facilitate shared reading.
8. Teach for reading comprehension.
9. Teach for identifying the main idea and details.
10. Teach making inferences.
11. Teach specific text types to show how passages are structured.
12. Assess student learning.
Procedure 1. Select the text for the lesson.
1.1. Match student instructional reading levels to the text by selecting text that meets all
of the following criteria:
1.1.1. Select instructional level text that students can decode with at least 90%
accuracy.
1.1.2. Review the structural language pattern of the text (i.e., sentence structure,
word usage, prose) to determine if students will be able to read and understand
the text with your assistance.
1.1.3. Examine the text to identify vocabulary that may be new to the students
and select text that contains words they can learn with your assistance.
1.1.4. Identify the number of ideas in the text that may be new to the students
and select text that contains ideas they can learn with your assistance.
1.1.5. Preview text layout and select text that contains features such as headings,
subheadings, pictures with captions, side notes, etc. that will be supportive to the
students in your group.
1.2. Determine the amount of time and resources you are able to dedicate to the lesson.
1.3. IF there is sufficient time to teach new ideas and vocabulary AND the level of the
text is challenging, but doable for the students, THEN use the text.
1.4. IF any of the conditions above are not met, THEN adjust the length of the text to fit
the time and resources you have available, OR identify another text that is
challenging, but doable by the students.
Procedure 2. Prepare the lesson.
2.1. IF students are reading from a reader without bolded vocabulary or key terms, THEN
preview text, highlight important words for students to know, and prepare definitions
before presenting the reader to students.
2.2. Write vocabulary words on index cards.
2.3. Generate main idea and detail questions for each section or paragraph that check on
students’ understanding of that section or paragraph and write them down.
2.4. Pair students according to academic and social need using Kagan student grouping
strategies (i.e., low, low-medium, medium-high, and high student ability levels).
Procedure 3. Activate prior knowledge and introduce the text.
3.1. IF students are in a small group of 3-4 students AND there are multiple topics for
students to read, THEN allow student groups to decide which topic they would like
to read about.
3.2. IF students are in a large group of 5 or more students, THEN choose the topic of the
reading for students.
3.3. Tell students to read only the title and ask them to tell you what the title means to
them.
3.4. Before opening the book or introducing the passage, activate students’ background
knowledge by asking students to share what they know about the subject matter to
facilitate a general discussion of the topic. (Ask students, “What do you know about
this subject?”).
3.5. Chart student responses on the board using a circle map.
3.6. Gauge student interest level in the topic by monitoring their facial expressions, body
language, and engagement.
3.7. IF students are not interested in the topic, THEN bring in hands-on materials and
activities relevant to the topic, or show students pictures and video clips to increase
student interest and excitement (e.g., rocks).
3.8. Ask student groups to make a prediction about what will happen in the text based
upon the title and background knowledge discussion.
3.9. Ask a student from each group to share a prediction with the class (“We predict that
this is going to happen…”).
3.10. IF a student uses the title in their prediction, or gives a vague prediction, THEN
ask student to elaborate and/or fix the prediction.
3.11. IF a student makes an incorrect prediction, THEN make a mental note to discuss
the prediction after student reads the text.
3.12. After all groups have shared predictions, repeat or paraphrase each group’s
prediction aloud for the class.
Procedure 4. Preview the text.
4.1. Give students 1-2 minutes to independently preview and think about the text by
looking at text features including heading, subheadings, captions, graphs, maps,
bolded words, and pictures, then preview the text together as a group to explain the
general outline of the text.
4.2. IF students are excited about the text features they have discovered or information
they already knew about the topic, THEN encourage students to orally share what
they have found with the group.
4.3. Give students 30 seconds to think about the predictions made in Step 3.8 and ask
students if they would like to change or keep their predictions.
4.4. IF a student group decides that their initial prediction was accurate, THEN ask the
student group to orally confirm their initial prediction.
4.5. IF a student group decides that their initial prediction was inaccurate, THEN ask the
student group to change the prediction and share aloud any revised prediction with
the class.
Procedure 5. Teach vocabulary words.
5.1. Tell students that whenever they read textbooks, highlighted and boldfaced words
are the words they need to know before reading.
5.2. IF using a textbook, THEN ask students to scan and find bolded vocabulary words or
key terms in the text.
5.3. IF students are reading from a reader without bolded vocabulary or key terms, THEN
present and read aloud the vocabulary words you identified as important in Step 2.1
AND ask students to scan, find, and highlight each of the vocabulary words in the
text.
5.4. Give students a vocabulary pre-assessment by asking students to rate how well they
know each word (e.g., 1) they do not know it at all; 2) they have heard of it; or 3)
they know it and can use it in a sentence; thumbs up or down) and demonstrate their
understanding of the word by providing a sentence or definition of the word.
5.5. Prompt students to identify words that are unfamiliar to them or that they perceive as
difficult in the text AND highlight these words to use them as vocabulary words.
(Ask students, “Do you see any other words that we can use as vocabulary words?”,
“Do you see any hard words?”).
5.6. Ask students to take turns reading aloud each of the vocabulary words.
5.7. IF the vocabulary word is a compound word, THEN explain that two words are
combined to make a new word.
5.8. IF a student does not recognize a word, THEN model tactile approach to word
memorization with simultaneous finger tracing and reciting of the word (Dr. Glass
Method: hearing, touching, and saying).
5.9. Tell the student to finger trace and recite the word three times AND THEN visualize
the word or make a picture in their mind about the word to help remember it.
5.10. IF a student is still unable to read the vocabulary word, THEN write a note to
provide one-on-one instruction to reteach the student the word using the tactile
approach at a later time.
5.11. Repeat Steps 5.5-5.9 for each vocabulary word.
5.12. Tell students that these are the vocabulary words they will encounter while
reading and that they should understand the meaning of these words. (Tell students,
“Let’s look ahead and make sure we know what these words mean before we start
reading.”).
5.13. Ask students to independently read the sentences that contain vocabulary words to
figure out the meanings of words.
5.14. IF students know a definition without looking up the word meaning in the
dictionary, THEN ask students to share aloud the meaning of the word with the
group.
5.15. IF students cannot determine the meaning of a word using context clues, THEN
move on to Step 5.16.
5.16. IF using a textbook and definitions are written on the sides of pages, THEN ask
students to take turns reading aloud a definition to the group.
5.17. IF using a textbook and definitions are not written on the sides of the pages,
THEN ask students to look in the glossary or dictionary in the back of the book for
definitions of vocabulary or key terms and take turns reading aloud a definition to
the group.
5.18. IF there is limited time for students to find and read aloud vocabulary definitions
to the group, THEN tell students the meaning of some words to keep the lesson
moving along.
5.19. IF students are reading from a reader without the bolded vocabulary or key terms
with definitions, THEN read aloud the definitions you prepared in Step 2.1 to
students.
5.20. IF any of the vocabulary words you and the students have identified have
complex meanings (i.e., difficult for students to understand; unfamiliar content),
THEN write the definition on the whiteboard to serve as a visual reminder of the
word meaning for students to reference as they read the text.
5.21. Ask students to take turns using the vocabulary words in context.
5.22. IF time permits, THEN prompt students to share synonyms, word variations, or
make associations for vocabulary or key terms (e.g., canine: dog; place, misplace,
displace; pioneers: wagon) by thinking of words they might encounter while reading
that are related to the topic. (Ask students, “What other words do you know that
relate to that word?”).
5.23. Complete a CORE vocabulary map on the whiteboard as a group, using the words
from Step 5.22, and ask students to copy the map in their journals.
Procedure 6. Teach guiding questions.
6.1. Read end of chapter or passage questions, or teacher-generated questions as an
anticipatory set for learning. (Say, “What do you think we are going to learn
today?”, “Let’s see who is right.”, “Let’s look at the questions at the end of the
section.”).
6.2. Remind students to keep the anticipatory set questions in mind as they read.
Procedure 7. Facilitate shared reading.
7.1. Tell students to follow along as others are reading because they will be called upon
when it is their turn to start reading aloud.
7.2. Use one or more of the following techniques interchangeably for shared reading
throughout the lesson to keep students engaged and on task:
7.2.1. IF students are pre-assigned sections or paragraphs to read (i.e., round
robin), THEN students may be given two minutes to practice reading their
sections ahead of time.
7.2.2. IF students are not pre-assigned sections or paragraphs to read, THEN
randomly assign the order in which students will read so that students stay
attentive and do not look ahead to practice only their own sections (i.e.,
popcorn reading).
7.2.3. IF the passage is grade level text and students in the group are comfortable
reading aloud, THEN these students may volunteer or be asked to take turns
reading a paragraph aloud.
7.2.4. IF the passage is grade level text and students in the group are not
comfortable reading aloud, THEN read the entire text aloud to students as they
follow along before asking students to read aloud.
7.2.5. IF students are struggling readers, THEN assign shorter paragraphs or
sections of the text to students to read OR you and each student take turns
reading one sentence aloud to the group.
7.2.6. IF the passage is at students’ instructional or independent reading level
AND students are in Kagan cooperative learning groups with a range of student
ability levels (i.e., low, low medium, medium high, high), THEN prompt
students to take turns reading with a shoulder or face partner, OR read in the
order that their desks are numbered.
7.3. IF you have not read the entire text aloud for students, THEN read the first paragraph
aloud to students to begin shared reading.
7.4. Ask students to share aloud what they visualized or pictured in their minds after
reading along or listening to the paragraph.
7.5. Jot down words incorrectly decoded by each student and teach the student these
words using the tactile approach (Dr. Glass Method) to word recognition in Steps
5.8-5.9 at a later time.
Procedure 8. Teach for reading comprehension.
8.1. After every paragraph, conduct structured reading comprehension by asking
questions from the end of the chapter or text to maintain student attention and focus,
and to check for understanding of the reading.
8.2. IF students are able to mark in text, THEN prompt students to highlight answers to
anticipatory set questions as they are read.
8.3. IF the paragraph is 3 to 4 sentences long, THEN stop at the end of the paragraph to
ask end of chapter or text questions.
8.4. IF the paragraph is longer or complex (i.e., introduces multiple vocabulary words;
presents information using complex sentences; information-rich text), THEN stop
midway through the paragraph to ask end of chapter or text questions.
8.5. Ask the same student to continue reading until the end of the paragraph.
8.6. IF the paragraph is not complex, THEN stop after the paragraph or section to ask end
of chapter or text questions.
8.7. IF a student provides an incorrect response, THEN ask the student for evidence to
support his/her response. (Ask student, “Where in the text did you find evidence that
supports that?”).
8.8. IF the student determines that his/her answer is incorrect, THEN allow the student to
change his/her response.
8.9. IF the student cannot find evidence to support the response, THEN prompt other
students to look for the evidence. (Ask other members of the group, “What do you
think?”, “Thumbs up or thumbs down, do you agree or disagree?”, “Can you find
evidence?”, “If it’s not there, where is it?”).
8.10. IF the group determines that the response is incorrect, THEN another student may
answer the question.
8.11. After each paragraph, ask 1-2 students to orally paraphrase what they have read
and to build upon other students’ restatements of the text.
8.12. Point out vocabulary words introduced in Procedure 4. (Ask students, “How does
the vocabulary word relate to what we are reading?”).
8.13. IF the group rereads text the following day, THEN prompt students to read
questions from the end of the chapter or passage and ask students if they can locate
responses in the text.
8.14. Ask students to engage in group discussions to confirm or disconfirm their group
predictions and to discuss the rationale they will share with the class for keeping or
changing their prediction.
8.15. Ask a student from each group to explain if their group prediction was correct or
required revision. (Ask, “Why was your prediction right?”, “What would you change
about your prediction?”)
8.16. Provide extrinsic rewards (i.e., praise, tokens) to groups for keeping or making
strong predictions, even if initial predictions were wrong.
8.17. IF a student group confirms the accuracy of a prediction, but the prediction is
incorrect, THEN ask the class to give the group thumbs up or down to show
agreement or disagreement with the prediction.
8.18. Ask groups to make revisions to their prediction until it is correct.
8.19. Prompt students to visualize the steps or sequence of the passage now that they
have the correct prediction.
Procedure 9. Teach for identifying the main idea and details. (This procedure is
taught to novice teachers in two sections: Steps 9.1-9.13.1. and 9.14-9.34.)
Note: Main idea and supporting detail skills are taught separately through teacher
modeling and practice, and when students have mastered these individual skills they can
be concurrently applied to informational text.
9.1. Tell students that they are going to identify the main idea and supporting details of
the paragraph and that the main idea is talked about in every paragraph.
9.2. Remind students that the main idea is typically found in the first or last sentences of
the paragraph, whereas supporting details come after the main idea and may be
found in the middle and end of the paragraph.
9.3. Informally assess student understanding of main idea and supporting details by
asking students to read a short paragraph and orally respond to main idea and
supporting detail questions.
9.4. IF students demonstrate an understanding of main idea and supporting details, THEN
go to Step 9.6.
9.5. IF students do not demonstrate an understanding of main idea and supporting details
or struggled with these skills in Step 9.3, THEN go to Step 9.24 to use short
paragraphs to teach and reinforce these skills.
9.6. Give students 2-3 minutes to skim the text to look for and underline the main idea of
each paragraph.
9.7. Explain or remind students that skimming means looking quickly over the text,
without reading the entire paragraph or passage, to search for the main idea.
9.8. IF skimming is a new skill for students, THEN skim the passage and think aloud to
demonstrate this process for students.
9.9. IF a paragraph is a “skinny” (i.e., 3-4 sentences in length), THEN students
underline the first sentence.
9.10. IF a paragraph is a “fat” (i.e., longer than 3-4 sentences), THEN students
underline the first and last sentences.
9.11. Prompt students to read the last paragraph and underline the entire paragraph.
9.12. Explain that the writer will most likely restate the main idea of the passage in the
last paragraph.
9.13. Ask one student from each Kagan group to read all underlined text aloud to group
members.
9.13.1. Note: Gradually fade group reading so that each student is required to read
underlined text individually.
9.14. As students read the passage, stop at the end of each paragraph and ask students
teacher-generated questions to elicit the main idea. (Ask students, “What did you just
learn or discover?”, “What is the purpose of the paragraph?”, “How did the author
explain…?”, “Can we add to that?”).
9.15. Ask the group teacher-generated detail questions. (“What did the passage say
about …?”, “How did the passage explain …?”).
9.16. After each student response, ask other students for thumbs up or down to show
agreement or disagreement with the response.
9.17. IF students in the group do not agree with an answer, THEN ask students to
“bounce” back to the text to reread and skim for the correct answer.
9.18. Ask another student to answer the question and ask other students in the group for
thumbs up or down to show agreement or disagreement with the response.
9.19. IF more than one student provides an inaccurate response, THEN ask for a student
volunteer to reread the paragraph aloud.
9.20. IF there are no student volunteers, THEN reread paragraph aloud to the group.
9.21. Ask one of the students who responded incorrectly to make another attempt to
answer the main idea or supporting details question.
9.22. IF the student still does not provide the correct answer, THEN ask other students
to share answers to questions until the class reaches a consensus OR give the student
the answer and discuss why this is the correct answer.
9.23. IF structured writing is part of the lesson, THEN proceed with a writing lesson
after this step.
9.24. IF students do not demonstrate an understanding of main idea and supporting
details or struggled with these skills in Step 9.3, THEN provide students with an
independent reading level paragraph to read aloud as a group and model how to
distinguish between main idea and supporting details of the paragraph.
9.25. Ask students to describe what they are visualizing in their minds after reading the
paragraph to check that students are using visualization strategies.
9.26. IF students are able to identify the main idea, THEN proceed with a lesson in
supporting details.
9.27. To teach students how to identify supporting details, select a text that is familiar
to students and ask students to first identify the main idea.
9.28. Model how to identify supporting details in a passage by pointing out a detail
sentence AND color coding the main idea and supporting details with a highlighter
to help visual learners see the pattern of a paragraph.
9.29. Prompt students to take turns identifying detail sentences from the same
paragraph. (Ask students, “Are these the bits of information that go with the big
picture?”, “Do they match the picture in your head?”).
9.30. Use concepts of main idea and supporting details from a narrative passage or a
biography to remind students of the differences between the “big idea” and
supporting details in text. (Ask students, “How do you know what the text is
about?”, “Where do you find main idea and details?”, “What did the character in the
story do first?”).
9.31. IF students demonstrate an understanding of main idea and supporting details
when the skills were taught in isolation, THEN introduce paragraphs that require
students to concurrently identify the main idea and supporting details AND ask
students to take turns identifying the main idea and supporting details of a paragraph
as a group.
9.32. Give students an unfamiliar paragraph and ask students to independently highlight
the main idea and three supporting details in the paragraph.
9.33. Ask students to share the main idea and supporting details with a partner or the
group.
9.34. After several opportunities for practice, ask students to identify which details are
most important in supporting the topic.
Procedure 10. Teach for making inferences.
10.1. Remind students that a response to an inferential question is not directly stated or
written in the text and that inferences are made using information from the text to
draw a conclusion.
10.2. Ask inferential questions as students read each section and again at the end of the
passage. (For example, “What effect does this have on…?”, “What is the best
solution to this problem?”, “What do you think caused this to happen?”).
10.3. Tell students which questions require them to make inferences.
10.4. IF students do not demonstrate a solid understanding of making inferences during
questioning, THEN teach students the cause and effect text structure in Step 11.1 and
reintroduce inferences after students are able to consistently identify cause and effect
relationships in text.
10.5. Ask students to look for and underline clues within the text to support their
inferences.
10.5.1. Note: You may use the example of a detective looking for clues, or cookie
crumbs left behind, to solve a case.
10.6. IF a student does not give the correct answer to an inferential question, THEN
think aloud to model the inferential thought process.
10.7. Ask students to explain how you generated the inferential response. (Ask
students, “How did I come up with that?”).
10.8. IF a student response is still incorrect, THEN ask other students for thumbs up or
down to show agreement or disagreement with the response AND ask the student to
try again.
10.9. IF students from the group do not generate the correct answer, THEN give the
correct answer.
Procedure 11. Teach specific text types to show how passages are structured. (Text
types in Procedure 11 are initially taught to students as sub-skills and then
incorporated into comprehension questions in Procedure 8 as student skill increases.)
11.1. Teach students text structure for cause and effect as students progress from literal
to inferential text.
11.1.1. Note: Cause and effect provides a foundation for students to make
inferences and to draw their own conclusions from the text.
11.1.2. As an anticipatory set, provide students with a concrete example of cause
and effect by demonstrating a cause and effect relationship. (Tell students,
“Here’s my cup. If I push it off the table what will happen?”).
11.1.3. Prompt students to demonstrate cause and effect by having students push
their pencils off the desk, or a comparable cause and effect activity. (Ask
students, “What was the cause? What was the effect?”).
11.1.4. Explain a cause and an effect. (Tell students, “A cause is why something
happens and the effect is what happens.”).
11.1.5. List examples of cause and effect scenarios on the board and introduce a
multi-flow map to demonstrate cause and effect relationships. (Ask students, “If
you get in a fight on the playground what will happen, what is the effect?”, “If
this is the effect, what were the causes?”).
11.1.6. Prompt students to make connections to stories they have read and to share
aloud specific examples of cause and effect relationships from the stories.
11.1.7. Present students with text and prompt students to skim for cause and effect
key or clue words to identify a cause and effect relationship in the text.
11.1.8. Remind students that key words are posted on the wall for their reference
(i.e., Zoom In or Thinking Maps).
11.1.9. Tell students to pay close attention to where the key words are located in
the passage, as the key words help figure out the cause and effect of a topic (e.g.,
lightning storms, earthquakes).
11.1.10. Prompt students to underline key words as they are found in the text.
11.1.11. IF you or the students identify a key word in the text that is not listed on
the key words chart, THEN add the word to the chart.
11.1.12. Ask students to share out the key words they found while skimming the
text. (Ask students, “Which key words did we find?”).
11.1.13. Prompt students to read the text aloud using one or more of the methods in
Step 7.2.
11.1.14. IF students are reading in their Kagan groups, THEN remind students to
raise their hands to ask for assistance with decoding difficult words or text.
11.1.15. Prompt students to listen and look for cause and effect key words in the
text as it is read aloud.
11.1.16. IF a key word has been read, THEN stop to ask students if they recognized
the key word in the sentence and ask a student to tell the group the key word.
11.1.17. IF the passage contains multiple paragraphs, THEN stop after each
paragraph to ask students if a cause and effect has been identified in the passage.
(Ask, “Do we have a cause yet?”, “Here’s a key word, is it telling us
anything?”).
11.1.18. Ask students to identify the cause and effect relationship using key words
and evidence from the text.
11.1.19. Prompt students to underline sentences that contain the answers to
teacher-generated comprehension questions from Step 9.14. (Ask students, “Where did
we see the answer?”, “Where is the key word that guided us to the answer?”).
11.1.20. IF students are able to identify cause and effect relationships when
presented with familiar and unfamiliar text, THEN provide students an
opportunity for independent practice with unfamiliar text.
11.1.21. IF students demonstrate an understanding of cause and effect
relationships, THEN embed cause and effect questioning in the comprehension
questions in Procedure 8 to generalize these skills in subsequent instruction.
11.1.22. Project a multi-flow thinking map on the board to provide students with a
visual representation of cause and effect relationships in the text.
11.1.23. IF this is the first time students have worked on multi-flow maps, THEN
complete the thinking map as a group to model how to identify cause and effect
relationships (i.e., think aloud; look for key words; explain how paragraphs are
organized in text) and allow students to copy your entries into their journals.
11.1.24. IF students have had several opportunities to work with multi-flow maps,
THEN model one cause and effect relationship and ask students to make
additional entries into the multi-flow map by working collaboratively with their
table groups or on their own.
11.1.25. Remind students to focus on the organization of cause and effect
relationships, not the spelling of words.
11.1.26. IF the multi-flow maps are completed in table groups, THEN assign each
student a role as leader, recorder, presenter, and timekeeper of the task and ask
the presenter to report one cause and effect relationship recorded by the group.
11.1.27. IF students completed the multi-flow map independently, THEN ask a few
students to share aloud one of their findings.
11.1.28. Ask students to orally justify their responses using evidence from the text.
(Ask, “How do you know that is correct?”).
11.1.29. IF students demonstrate a solid understanding of cause and effect
relationships in text, THEN ask questions to make inferences using implicit
information from text.
11.2. Teach students text structure for compare and contrast.
11.2.1. IF you are introducing other text structures such as compare and contrast,
THEN follow the same process as in cause and effect Step 11.1, but provide
students with a list of key words germane to the text structure (i.e., both,
similarly, however, in contrast, etc.) and use a double bubble
thinking map to compare and contrast information from the passage read.
11.2.2. After the reading, engage students in a discussion regarding the
similarities and differences between the physical characteristics of students
within the group to provide students with an opportunity to practice compare
and contrast with real life situations.
11.2.3. Ask students how the subject of the text compares to someone in the group
or someone they know and use a double bubble map to list similarities and
differences on the board. (Ask, “What about this person is similar or different
than you?”).
11.2.4. IF time permits, THEN ask students to compare and contrast what was
read in two different informational passages.
11.2.5. IF students demonstrate an understanding of compare and contrast, THEN
embed compare and contrast questioning in the comprehension questions in
Procedure 8 to generalize these skills in subsequent instruction.
11.3. Teach students text structure for sequencing of events.
11.3.1. IF you are introducing other text structures such as sequencing of events,
THEN follow the same process as in cause and effect Step 11.1, but provide
students with a list of key words germane to the text structure (i.e., first, then,
next, etc.) and use a flow map to sequence information and events from the
passage read.
11.3.2. Remind students that key words or transition words explain the order in
which events occur in the passage.
11.3.3. After the reading, guide students in a general discussion of the sequence of
events in the text.
11.3.4. IF students demonstrate an understanding of sequencing of events, THEN
embed sequencing of events questioning in the comprehension questions in
Procedure 8 to generalize these skills in subsequent instruction.
Procedure 12. Assess student learning.
Assess student learning by using one or more of the following assessments:
12.1. After students have finished reading the text, prompt students to tell a partner one
thing they learned, or something they already knew that was written in the text.
12.1.1. Ask students to share their partner’s idea with the group.
12.1.2. Note: Pair-share activities support social skills, active listening, and
expressive language in students.
12.2. IF there is an accompanying activity (worksheet) from the general education
teacher, THEN preview the activity as a group OR provide students with a modified
version of the activity sheet (i.e., larger spaces, word bank) to complete during small
group instruction.
12.2.1. Answer 1-2 questions as a group and ask students to answer the remaining
questions independently or with your assistance as needed.
12.2.2. IF students require assistance, THEN direct students back to specific
sections of the reading to independently search for information in the text.
12.2.3. Correct student work and ask students to share aloud their responses with
the group.
12.2.4. IF several students experienced difficulty with a particular question,
THEN reteach that particular concept.
12.3. Present students with curriculum-based assessment using the format of
standardized state assessments to determine if students can generalize these skills
across different types of material.
12.3.1. IF the student is unable to read the test independently, THEN read the test
aloud to the student; otherwise, the student takes the curriculum-based assessment
independently.
12.3.2. Remind students to use test-taking strategies such as process of
elimination and rereading of passage when answering questions.
APPENDIX J
1i+3r Incremental Method Gold Standard Protocol
Teaching Reading in Informational Text to 3rd-5th Grade Students with Mild to
Moderate Learning Disabilities During Small Group Instruction
Reading instruction in informational text is outlined in this protocol. The instructional
tasks are taught over multiple days.
Main Procedures:
1. Select the text for the lesson.
2. Prepare the lesson.
3. Activate prior knowledge and introduce the topic.
4. Preview the text.
5. Teach vocabulary words.
6. Teach guiding questions.
7. Facilitate shared reading.
8. Teach for reading comprehension.
9. Teach for identifying the main idea and details.
10. Teach for making inferences.
11. Teach specific text types to show how passages are structured.
12. Assess student learning.
Procedure 1. Select the text for the lesson.
1.1. Match student instructional reading levels to the text by selecting text that meets all
of the following criteria:
1.1.1. Select instructional level text that students can decode with at least 90%
accuracy.
1.1.2. Review the structural language pattern of the text (i.e., sentence structure,
word usage, prose) to determine if students will be able to read and understand
the text with your assistance.
1.1.3. Examine the text to identify vocabulary that may be new to the students
and select text that contains words they can learn with your assistance.
1.1.4. Identify the number of ideas in the text that may be new to the students
and select text that contains ideas they can learn with your assistance.
1.1.5. Preview text layout and select text that contains features such as headings,
subheadings, pictures with captions, side notes, etc. that will be supportive to the
students in your group.
1.2. Determine the amount of time and resources you are able to dedicate to the lesson.
1.3. IF there is sufficient time to teach new ideas and vocabulary AND the level of the
text is challenging, but doable for the students, THEN use the text.
1.4. IF any of the conditions above are not met, THEN adjust the length of the text to fit
the time and resources you have available, OR identify another text that is
challenging, but doable by the students.
Procedure 2. Prepare the lesson.
2.1. Consider three variables when preparing for the lesson: context (i.e., time;
resources), complexity of information presented in text (i.e., sentence structure, new
vocabulary, information-rich text), and student ability.
2.2. Preview text and determine words to use as vocabulary words, including bolded text
and other words that may be unfamiliar to students, and write them on individual
index cards.
2.3. Generate comprehension questions for each section or paragraph that check on
students’ understanding of that section or paragraph and write them down.
2.4. Pair students according to academic and social need using Kagan student grouping
strategies (i.e., low, low medium, medium high, and high student ability levels).
Procedure 3. Activate prior knowledge and introduce the topic.
3.1. IF there are 4 or fewer students, THEN keep the students in one group to work with
the teacher for Know-Want-Learn (KWL) activities AND go to Step 3.5.
3.2. IF there are 5 or more students, THEN break students into groups for Know-Want-
Learn (KWL) activities.
3.3. IF the class includes both non-independent and independent students AND an
instructional assistant is available, THEN assign the independent group of students to
the instructional assistant and work with the non-independent group.
3.4. IF an instructional assistant is not available to work with independent student groups,
THEN allow independent student groups to work independently and work directly
with non-independent students.
3.5. Present students with pictures about the general topic to engage students in learning
without an emphasis on vocabulary or questioning.
3.6. IF you are conducting KWL, THEN go to Step 3.8.
3.7. IF you are not conducting KWL, THEN go to Step 3.14.
3.8. Ask students to write what they know about the topic (Know of KWL).
3.8.1. Standard: Students provide the number of Know responses according to
their ability.
3.9. Ask one student from each group to report out what the group knows about the topic.
3.10. While students report out, prompt students from other groups to record new
information on their KWL charts.
3.11. IF students do not demonstrate familiarity with the content, as determined by
student responses during the Know activity, THEN skip KWL process AND go to
Procedure 4 (preview of textbook or passage to build knowledge about the topic).
3.12. IF students are familiar with content, as demonstrated by student responses during
the Know activity, THEN continue with KWL by asking students what they want to
learn about the topic (Want of KWL) and tell them to write their answers in the
KWL chart.
3.13. Tell the students that they will complete the Learn section of KWL at the end of
the lesson.
3.14. Prompt each student to provide input by sharing their knowledge of the topic
and/or their experiences related to the topic with the group.
3.15. Write student-shared experiences about the topic on the board in a visually clear
manner (i.e., Thinking Map, list, etc.).
Procedure 4. Preview the text.
4.1. Ask students to preview the text by looking at text features including title, subtitles,
captions, graphs, maps, bolded words, and pictures in the textbook or passage to
build knowledge about the topic and write down these text features in their journals.
(Tell students, “Own the page by previewing what you will read.”)
4.1.1. Alternative sequence: This procedure may also come after vocabulary
building, Procedure 5.
4.2. IF students are excited about text features they discovered while previewing the
reading or information they already knew about the topic, THEN ask students to
share aloud with the class.
Procedure 5. Teach vocabulary words.
5.1. Ask students to skim the text by telling them not to read every word and to find at
least one word they do not know.
5.2. Ask students to share vocabulary words they do not know from the text and write
them on the board.
5.3. IF students are reading from a reader without bolded vocabulary or key terms, THEN
show and read aloud the vocabulary words you preselected and wrote on the index
cards in Step 2.2 (e.g., earthquake).
5.4. IF the word is a compound word, THEN explain that two words are combined to
make a new word.
5.5. Ask students to take turns reading the introduced words aloud, or to choral read
the words.
5.6. IF a student is unable to read a particular multi-syllabic vocabulary word or word
part, THEN chunk the word into syllables to make the word more easily decodable.
5.7. IF a student still cannot read the word, THEN tell student the word AND model a
tactile approach to word memorization BY EITHER:
5.7.1. Finger tracing and reciting of the word simultaneously (i.e., Dr. Glass
Method: hear, touch, and say) AND THEN tell the student to finger trace and
recite the word three times, OR
5.7.2. Guide student to use symbol imagery to visualize and sequence the sounds
and letters within the word (i.e., Seeing Stars).
5.7.3. For symbol imagery, show the word to the student, cover the word, have the
student spell the word orally, and have the student skywrite the word.
5.8. Repeat the process for all vocabulary words, not just difficult words, so as not to
invalidate vocabulary words that students have chosen, or to single out struggling
readers.
5.9. IF all students are able to recite the vocabulary word, THEN engage students in a
discussion regarding the meaning of the word in Step 5.14.
5.10. IF a student is unable to read the vocabulary word, THEN write a note to provide
one-on-one instruction at a later time to reteach the student the word using the tactile
approach AND go to Step 5.14.
5.11. Ask students to find an unfamiliar vocabulary word listed on the board or on your
index card in the text by telling them the specific page where the vocabulary word
can be found.
5.12. Prompt students to read the sentence with the vocabulary word and use context
clues to guess word meaning. (Ask students, “What do you think the word means
based on that sentence?”).
5.13. IF the word meaning is difficult or complex, THEN point out the word parts
(prefix, suffix, etc.) and the meaning of these word parts. (Tell students, “That’s how
you can remember what this word might mean because it has this word part.” “Does
this word sound like another word you know?”).
5.14. IF the text contains bolded words with definitions, THEN read the definition of
the vocabulary word provided in the text to students.
5.15. IF the definition is not provided in the text, THEN engage students to
collaboratively look up word meaning online or in a dictionary.
5.16. Ask students to write the word and its definition in their journals.
5.17. IF the definition is long or complex, THEN shorten definition for students.
5.18. Ask students to orally construct a sentence using the vocabulary word with a
shoulder or table partner, or to state the meaning of the word without using the word
in the definition either with a partner or with the whole group.
5.19. Pair up with a student or pair of students requiring more support to ensure that all
students understand the task.
5.20. Ask each student to share the sentence generated by his or her partner with the
group (i.e., Kagan strategies).
5.20.1. Reason: This process is time-efficient and keeps all students engaged in
learning.
5.21. Repeat step for each vocabulary word.
5.22. Remind students to refer to their journals for the definition of vocabulary words
as needed during the reading, assignment completion, and for testing.
Procedure 6. Teach guiding questions.
6.1. Tell the students to look at the titles or subtitles to make predictions about what they
are going to read and ask them to share with the class.
6.2. IF the content of the reading is complex (i.e., content unfamiliar to students based on
prior knowledge discussion; new vocabulary introduced; information-rich text),
THEN repeat this process for each subtitle before students read that particular
section.
6.3. Read questions from the end of chapter or passage to students as an anticipatory set
for learning. (Tell students, “Let’s look at the questions at the end of the section.”,
“What do you think we are going to learn today?”, “Let’s see who is right.”).
6.4. Explain to students that questions guide what is going to be learned in each section
of the reading.
6.5. IF there are questions in the margin of the page, THEN read the margin questions to
students as each section of the reading is presented.
6.6. IF questions are not provided in the margin of the page, THEN provide students with
a copy of teacher-generated questions from Step 2.3.
6.6.1. Note: These questions are in addition to the end of chapter questions
introduced (Procedure 6) as they target more specific details from the reading
and are inferentially oriented.
6.7. Remind students to keep the comprehension questions in mind as they read.
Procedure 7. Facilitate shared reading.
7.1. Assign shorter paragraphs or readings to struggling readers, OR, if preferred,
7.2. Assign all students paragraphs similar in length.
7.2.1. IF a student is struggling or has reached frustration with the reading,
THEN stop to ask a comprehension question and ask the next student to start
reading.
7.3. Use one or more of the following techniques for shared reading interchangeably
throughout the lesson to keep students engaged and on task:
7.3.1. Round-robin reading, by first assigning each student the order in which
they will read aloud according to how students are seated;
7.3.1.1. IF students are pre-assigned sections or paragraphs to read for
round-robin, THEN allow students to preview and practice their sections
before round-robin reading begins.
7.3.2. IF students are not pre-assigned sections or paragraphs to read for popcorn
reading, THEN have the student who is currently reading draw a numbered stick
to determine who will read next.
7.3.2.1. Remind students to follow along because they will be called upon
when it is their turn to start reading aloud.
7.3.3. Choral reading (i.e., students read aloud in unison with the whole class or
group of students); OR
7.3.4. Oral cloze where teacher reads aloud and when he or she pauses, students
choral read the word.
7.4. Remind students to keep the comprehension questions in mind as they read
(Procedure 6).
7.5. Read the first paragraph aloud to students.
7.6. Call on each student to read a paragraph aloud using the techniques in Step 7.3.
7.7. Jot down words incorrectly decoded by each student and, at a later time, teach the
student those words using the tactile approach to word recognition (Dr. Glass
Method) in Steps 5.7-5.7.1.
Procedure 8. Teach for reading comprehension.
8.1. After every paragraph conduct structured reading comprehension by asking
questions from the end of the chapter to maintain student attention and focus, and to
check for understanding of the reading.
8.2. IF students are able to mark in text, THEN prompt students to highlight answers to
questions as they are read.
8.3. IF school policy prohibits students from highlighting in the text, THEN tell students
to use sticky notes to keep track of answers to main idea, detail, and inferential
questions.
8.4. IF the answer to a margin question or question generated by teacher is read in a
sentence, paragraph, or section of the reading, THEN prompt students to look for the
answer in the section read. (Tell students, “We just answered that question. What’s
the answer to the question?”, “I just heard the answer to the question. Who can find
it?”, “I wonder if a question was just answered.”).
8.5. IF the paragraph is 3 to 4 sentences long, THEN stop at the end of the paragraph.
8.6. IF the section or paragraph a student is reading is difficult for the student to decode,
THEN stop immediately after the sentence is read to discuss the answer to the margin
question or teacher-generated question.
8.7. Ask another student in the group to proceed with reading.
8.8. IF the paragraph is longer or complex (i.e., introduces multiple vocabulary words;
presents information using complex sentences; information-rich text), THEN stop
midway through the paragraph to check for understanding.
8.9. Ask the same student to continue to read until the end of the paragraph.
8.10. IF the reading is not complex, THEN stop after the paragraph or section to discuss
answer to margin question(s).
8.11. Point out vocabulary terms after every paragraph read. (Ask students, “Is one of
our vocabulary words in this paragraph?”).
8.12. Give the reader an opportunity to respond first.
8.13. IF the reader does not recognize the vocabulary term used in the paragraph,
THEN allow other students to respond.
Procedure 9. Teach for identifying the main idea and details.
9.1. IF the students have received explicit instruction on the concepts of “main idea” and
“supporting details,” THEN remind students that the main idea is almost always the
first sentence of a paragraph by telling them, “When you are unsure of the main idea,
go back and read the first sentence.”
9.2. IF the students have not received explicit instruction on the concepts of “main idea”
and “supporting details,” THEN teach the two concepts by providing a definition and
at least one example and non-example for each concept AND ask students to provide
at least one example and non-example for each concept.
9.3. IF the paragraph is critical to understanding the main idea, THEN ask a sufficient
number of questions for students to demonstrate an understanding of the ideas in the
paragraph.
9.4. IF the paragraph is not critical to understanding the main idea, THEN use fewer
questions to facilitate a general discussion of the paragraph AND move on to Step
9.6.
9.5. During student reading, stop at the end of each paragraph and ask students teacher-
generated questions to elicit the main idea. (Ask students, “What did you just learn
or discover?”, “What is the purpose of the paragraph?”, “How did the author
explain…?”, “Can we add to that?”, “Where is the main idea sentence?”, and higher-
level “Why?” questions to engage students in their responses.)
9.6. IF the student does not get the correct answer, THEN the same student or the teacher
may reread, or the group may choral read, the paragraph or sentence that illustrates
the answer, OR the teacher may narrow the reading selection for the student and
prompt the student to reread the section in search of the answer.
9.7. Ask the same student to make another attempt to answer the main idea question.
9.8. IF the student still does not provide the correct answer, THEN give the student the
answer and discuss why this is the correct answer, OR another student or the teacher
may help this student identify the answer to the question.
9.9. Prompt the student who experienced difficulty to share the correct answer with the
group.
9.10. Ask the group teacher-generated detail questions (“What did the author say about
…?”, “How did the author explain …?”, “Why is this important to the main idea?”,
“Why is this an important detail?”) and wait for student responses.
9.11. IF student responses to supporting detail questions do not support the main idea,
THEN prompt students to look back at the text for additional details.
9.12. IF there has been an in-depth discussion regarding the content of the reading and
important supporting details, and students are able to identify the most important
details in the reading, THEN go to Step 9.14.
9.13. IF students are still unable to identify important supporting details, THEN
continue the discussion to elicit more responses to detail questions, and to reinforce
comprehension and understanding of the reading.
9.14. After each paragraph read, prompt students to write the main idea of the
paragraph in their journal using their own words.
9.15. Ask for a volunteer to share out their main idea statement.
9.16. IF students do not agree with this main idea statement, THEN, to encourage active
engagement in reading, have students debate the main idea statement until a
consensus is reached.
9.17. IF a consensus is reached for the main idea statement, THEN write the sentence
on the board for students to copy.
9.18. Prompt students to copy or change their main idea sentence if the main idea
statement was incorrect.
9.19. Ask students to record the page number where the information for each main idea
statement was found.
Procedure 10. Ask students inferential questions.
10.1. Tell students that inferences are ideas implied by events in the text but not
explicitly stated.
10.2. Ask inferential questions as students read each section and again at the end of the
passage. (For example, “What effect does this have on…?”, “What is the best
solution to this problem?”, “What do you think caused this to happen?”).
10.3. IF student does not give correct answers to inferential questions, THEN think
aloud to model the inferential thought process on how to make inferences.
10.4. Ask students to explain how you generated the inferential response. (Ask
students, “How did I come up with that?”).
10.5. Query students for a response; students may be just a question away from the
correct answer.
Procedure 11. Teach specific text types to show how passages are structured.
11.1. Explain the meaning of text types and use short paragraphs to show examples and
non-examples of compare and contrast, cause and effect, sequential, and general
description text types.
11.1.1. Show how Thinking Maps are used to visually organize the information
presented in these paragraphs by, for example, using a Venn diagram or double
bubble map for compare and contrast passages, or a multi-flow map for cause
and effect passages.
11.2. Explain that these are ways that the writer organizes text to point out important
information in the text.
11.3. Prompt students to underline signal words for compare and contrast, cause and
effect, sequential, and general description text.
11.4. Use oral questioning to check for understanding of text types.
Procedure 12. Assess student learning.
Assess student learning by using one or more of the following assessments:
12.1. IF you are using KWL, THEN assess student learning by prompting students to
independently record what they have learned from the reading in the Learn column
of the KWL chart.
12.2. Assess student learning by prompting students to answer end-of-chapter, passage,
and teacher-generated questions.
12.2.1. IF using a remedial reading comprehension program (e.g., CARS:
Comprehensive Assessment of Reading Strategies), THEN ask students to read
the questions provided at the end of the passage.
12.2.2. IF using a textbook, THEN reword or simplify the questions provided at the
end of the section or chapter, if needed, to clarify what each question is asking,
and ask students to read the questions.
12.2.3. IF you generated questions for the reading, THEN ask students to also
read those questions.
12.2.4. Prompt students to look back in the reading and refer to their journals for
main idea statements, vocabulary words, and page numbers where specific
information can be found in the text (Step 9.19) to find the information they
need to answer the questions.
12.2.5. Ask students to either independently write answers to questions in their
journals along with the page numbers where the answers were found and share
aloud their responses to questions with the group, OR, if preferred, ask students
to answer questions orally by sharing aloud their responses with the group.
12.2.6. IF a student shares aloud an answer that is incorrect, THEN prompt the
student to go back and find the answer in the text.
12.2.7. IF the student is unable to find the answer in the text, THEN direct the
student to the section of the text where the information needed to answer the
question can be found and tell the student to correct their answer orally.
12.2.8. Ask students to share out the answers they have recorded in their journal,
THEN write down one student answer or combine student answers to form a
correct answer to each question and use the document camera, whiteboard, or
other means to project what you have written.
12.2.9. Prompt students to compare the answers they have written in their journals
to the answers you have projected.
12.2.10. IF student answers are incorrect, THEN ask students to revise the answers
they wrote down in their journals.
12.3. Assess student learning by prompting students to write a group summary of the
main idea and supporting details of the text in their journals.
12.3.1. Ask students to share aloud the main idea sentences they wrote in Step
9.14 with the group.
12.3.2. Ask students to determine (based on consensus) which student sentences
to use for the main idea and supporting details in the group summary.
12.3.3. Remind students that one sentence will be used as the main idea sentence
and the rest of the sentences will become supporting detail sentences.
12.3.4. Encourage students to alternate the use of sentences from each student and
to combine student sentences to formulate more complete thoughts.
12.3.5. Write down the main idea and detail sentences to form a summary
paragraph and use the document camera, whiteboard, or other means to project
what you have written.
12.3.6. Ask students to write the group summary in their journals.
12.4. Assess student learning by administering a curriculum-based assessment.
12.4.1. IF a student is unable to read the test independently, THEN read the test
aloud to the student and ask the student to write down his/her responses.
12.4.2. IF a student is able to read the test without your assistance, THEN ask the
student to take the curriculum-based assessment independently.
12.4.3. Remind students to use test-taking strategies such as process of
elimination and rereading of the passage to answer questions.
Abstract
This study applies cognitive task analysis (CTA) to capture expert reading instruction in informational text for students with mild to moderate learning disabilities in third through fifth grade. CTA extracts the highly automated and unconscious knowledge and skills experts employ to problem solve and perform complex tasks. The purpose of this study was to compare the relative effectiveness of two CTA knowledge elicitation methods, the 3i+3r independent method and the 1i+3r incremental method, in extracting declarative and procedural knowledge from expert special education teachers as they provide step-by-step descriptions of a routine reading instruction task. This study also sought to examine knowledge omissions of expert special education teachers to determine whether the up-to-70% rule of expert omissions found in other fields of study would be upheld in education. Lastly, the study explored relationships between expert biographical data and the total number of critical knowledge steps yielded from expert special education teachers using CTA. This study employed a mixed-methods approach to capturing and analyzing data collected from five expert special education teachers during semi-structured CTA interviews. Findings from this study upheld the up-to-70% rule of expert knowledge omissions and indicate that the 3i+3r independent method yielded a greater number of critical knowledge steps than the 1i+3r incremental method. Although not statistically significant due to the small sample of expert special education teachers, findings suggest a moderate relationship between total hours of professional development and total decision steps captured through CTA. This study contributes to research on the effectiveness of CTA to elicit automated and unconscious expert knowledge across myriad domains.
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
Using individual cognitive task analysis to capture expert writing instruction in expository writing for secondary students
Using cognitive task analysis for capturing expert instruction of food safety training for novice employees
Using cognitive task analysis to capture expert instruction in division of fractions
Using incremental cognitive task analysis to capture expert instruction in expository writing for secondary students
The use of cognitive task analysis to capture expert instruction in teaching mathematics
Towards a taxonomy of cognitive task analysis methods: a search for cognition and task analysis interactions
The use of cognitive task analysis for identifying the critical information omitted when experts describe surgical procedures
The use of cognitive task analysis to investigate how many experts must be interviewed to acquire the critical information needed to perform a central venous catheter placement
Using cognitive task analysis to determine the percentage of critical information that experts omit when describing a surgical procedure
Using cognitive task analysis to capture how expert principals conduct informal classroom walk-throughs and provide feedback to teachers
Using cognitive task analysis to capture palliative care physicians' expertise in in-patient shared decision making
The use of cognitive task analysis to capture expert patient care handoff to the post anesthesia care unit
The effect of cognitive task analysis based instruction on surgical skills expertise and performance
Cognitive task analysis for instruction in single-injection ultrasound-guided regional anesthesia
The use of cognitive task analysis to capture expertise for tracheal extubation training in anesthesiology
Identifying the point of diminishing marginal utility for cognitive task analysis surgical subject matter expert interviews
Using cognitive task analysis to capture how expert anesthesia providers conduct an intraoperative patient care handoff
Employing cognitive task analysis supported instruction to increase medical student and surgical resident performance and self-efficacy
The use of cognitive task analysis for the postanesthesia patient care handoff in the intensive care unit
The use of cognitive task analysis to determine surgical expert's awareness of critical decisions required for a surgical procedure
Asset Metadata
Creator
Zepeda-McZeal, Diana Marie (author)
Core Title
Using cognitive task analysis to capture expert reading instruction in informational text for students with mild to moderate learning disabilities
Contributor
Electronically uploaded by the author (provenance)
School
Rossier School of Education
Degree
Doctor of Education
Degree Program
Education
Publication Date
08/29/2014
Defense Date
03/05/2014
Publisher
University of Southern California (original), University of Southern California. Libraries (digital)
Tag
automaticity,cognitive task analysis,expertise,knowledge elicitation,knowledge types,learning disabilities,OAI-PMH Harvest,reading instruction,Special Education,subject matter expert,Training
Format
application/pdf (imt)
Language
English
Advisor
Yates, Kenneth A. (committee chair), Gallagher, Raymond John (committee member), Rueda, Robert (committee member)
Creator Email
dianamczeal@gmail.com
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c3-466641
Unique identifier
UC11287864
Identifier
etd-ZepedaMcZe-2860.pdf (filename),usctheses-c3-466641 (legacy record id)
Legacy Identifier
etd-ZepedaMcZe-2860.pdf
Dmrecord
466641
Document Type
Dissertation
Format
application/pdf (imt)
Rights
Zepeda-McZeal, Diana Marie
Type
texts
Source
University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA