COGNITIVE EFFICIENCY OF ANIMATED PEDAGOGICAL AGENTS FOR LEARNING ENGLISH AS A SECOND LANGUAGE

by Sunhee Choi

A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (EDUCATION)

August 2005

Copyright 2005 Sunhee Choi

DEDICATION

To my family, for their unconditional love and support.

ACKNOWLEDGMENTS

I owe a sincere debt of gratitude to the many individuals who have given me their support and help during my seven-year journey. Since the beginning of my journey in 1998, I have met numerous people and experienced countless things, often pleasant but not always. Yet, because of those individuals with whom I have interacted, studied, and worked on a daily basis, I have been able to make it through all the upheavals.

Especially, I would like to thank Dr. Richard Clark, who has guided me through my doctoral study as my mentor and committee chair. He has provided me with invaluable insights into instructional technology and support from the moment I met him in 2001. Without his support and encouragement, I would not have been able to come this far.

I also want to express my genuine gratitude to Dr. Nam-kil Kim, who gave me the wonderful opportunity to study at USC. He was there when I began this journey, and he witnessed its end as well. He has given me so much support throughout my graduate study, and I will never forget that.

Dr. Ed Kazlauskas deserves a great deal of thanks too. He has shared his expertise in instructional technology, especially animated pedagogical agents, and has been so patient with me despite time conflicts. I would also like to acknowledge his generosity and caring as my former boss at the RSOE technology support department.

And Dr. Joel Colbert, whose support and encouragement have never run dry, has been a great help to me since I first met him in 2003. As my mentor, committee member, and colleague, he has encouraged me to reflect and challenge myself.
His enthusiasm and open-mindedness always make my work as his teaching assistant exciting and enjoyable.

I also want to thank my colleagues at the USC Undergraduate and Teacher Education department, Dr. Sandra Kaplan and Margo Pensavalle, for their trust and support. Their enthusiasm for education and students has reawakened my passion for teaching, and thanks to them, I can now picture myself as a teacher. And Beverly Franco, Roxanna Harvey, Katina Williams, and Hubert Wang: their assistance, encouragement, and humor kept me sane while I was writing the dissertation.

My true gratitude also goes to my friend, Hyogyoung Lee. I was extremely lucky to meet and work with her. She stayed up so many nights with me when we were developing the computer system for this research. Despite her own study and work, she never hesitated to offer me help. Without her, this dissertation would not be here with me right now.

Finally, Mi-chung Lee, my guardian, mentor, and life teacher: I will never be able to thank her enough. She was the one who advised me to study instructional technology, and she has stood by me ever since. Her unconventional love and support have inspired me to do so much more than I thought I could. Because of her, I am who I am now. Thank you!

TABLE OF CONTENTS

DEDICATION
ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER I: REVIEW OF LITERATURE
   Introduction
   Second Language Learning
      Overview of Form-Focused Instruction
      Attention and Awareness in Second Language Learning
         Role of Attention
         Types of Attention Measures
      Instructional Methods to Focus Learner Attention
      Review of the Effects of Explicit Rule Presentation
      Summary
   Cognitive Efficiency of Multimedia Learning
      Theoretical Constructs Relevant to Cognitive Efficiency
      Measurement of Mental Effort
         Self-Report Opinion Measures
         Secondary Task Measures
         Physiological Measures
      Building Cognitive Efficiency in Multimedia Learning
      Instructional Strategies to Reduce Cognitive Load
         Avoiding Split-Attention Effects and Utilizing Modality Effects
         Leaving Out Redundancy Effects
         Integrating Learner Prior Knowledge and Expertise
      Summary
   Animated Pedagogical Agents
      Benefits of Animated Pedagogical Agents
         Motivating Learners
         Focusing Learner Attention
      Summary
   Significance of the Study
   Research Questions of the Study
   Research Hypotheses of the Study
      Hypothesis 1
      Hypothesis 2
      Hypothesis 3
      Hypothesis 4
      Hypothesis 5
      Hypothesis 6

CHAPTER II: METHODOLOGY
   Overview of the Research Design
   Participants
   Target L2 Form for Instruction
   Multimedia-Based Learning Environment: Reading Wizard
      Pre-Task: Explicit Rule Presentation on Target Form
      Main Task: Reading Comprehension
   Apparatus
   Variables and Measuring Instruments
      Learner Background Survey
      Performance - Acquisition of the Target Form
         Sentence Combination Test
         Picture Interpretation Test
         Grammaticality Judgment Test
      Motivation
         Mental Effort
         Active Choice
         Subjective Ratings
         Self-Efficacy
      Time
      Cognitive Efficiency
   Procedures
   Pilot Test

CHAPTER III: RESULTS AND DISCUSSIONS
   Descriptive Statistics
      Performances
      Time and Mental Effort
      Self-Efficacy, Active Choice, and Subjective Ratings
      Computer Usages
   Results by Hypotheses
      Hypothesis 1
      Hypothesis 2
      Hypothesis 3
      Hypothesis 4
      Hypothesis 5
      Hypothesis 6

CHAPTER IV: CONCLUSIONS AND FUTURE RESEARCH
   General Discussions
      Research Question 1
      Research Question 2
      Research Question 3
      Research Question 4
   Conclusions
   Limitations and Future Research

REFERENCES

APPENDIX A Reading Comprehension Task
APPENDIX B Lexical Items Included in Electronic Dictionary
APPENDIX C English Relative Clauses Embedded in Reading Comprehension Task
APPENDIX D MASH Interface
APPENDIX E Learner Background Survey
APPENDIX F Performance Tests
APPENDIX G Mental Effort Measures
APPENDIX H Subjective Ratings
APPENDIX I Experiment Instructions for Agent Group and Arrow Group
APPENDIX J Consent Form Approved by USC IRB

LIST OF TABLES

Table 1 Selected Animated Pedagogical Agents and Their Functions
Table 2 Summary of User Profiles
Table 3 Sentence Types with Embedded Relative Clauses
Table 4 Relative Clauses Included in Sentence Combination Test
Table 5 Descriptive Statistics of Self-Report Scale Reliabilities
Table 6 Descriptive Statistics of Performance Measures
Table 7 Descriptive Statistics of Time Measures
Table 8 Descriptive Statistics of Mental Effort Measures
Table 9 Correlations between Mental Effort and Time
Table 10 Descriptive Statistics of Self-Efficacy and Active Choice
Table 11 Descriptive Statistics of Subjective Rating Variable
Table 12 Results of Paired Samples t-Test on Pretest and Posttest
Table 13 Results of Paired Samples t-Test for Agent Group
Table 14 Results of Paired Samples t-Test for Arrow Group
Table 15 Descriptive Statistics of Pre- and Posttest Scores
Table 16 Independent Samples t-Test of Pre-Test Scores
Table 17 Independent Samples t-Test of Post-Test Scores
Table 18 Summary of ANOVA on Gain Scores of Each Testing Measure
Table 19 Frequencies of Prior Knowledge Levels
Table 20 Descriptive Statistics of Gain Scores by Prior Knowledge and Group
Table 21 Correlational Statistics of Subjective Ratings 1 and Achievement
Table 22 Correlational Statistics of Subjective Ratings 2 and Achievement
Table 23 Descriptive Statistics of Mental Effort and Time of Each Group
Table 24 Means and SDs of Efficiency Variables of Each Group
Table 25 Correlations between Prior Knowledge and Mental Effort/Time
Table 26 Correlations between Prior Knowledge and Cognitive Efficiency
Table 27 Descriptive Statistics of Paas Mental Effort of Each Prior Knowledge Level
Table 28 Descriptive Statistics of Salomon AIME of Each Prior Knowledge Level
Table 29 Descriptive Statistics of Time Spent on Pre-Task of Each Prior Knowledge Level
Table 30 Salomon Mental Efficiencies by Groups and Prior Knowledge Levels
Table 31 Paas Mental Efficiencies by Groups and Prior Knowledge Levels
Table 32 Time Efficiencies by Groups and Prior Knowledge Levels

LIST OF FIGURES

Figure 1 Animated Pedagogical Agent's Sample Behaviors
Figure 2 Agent Version of Pre-Task Environment
Figure 3 Arrow with Voice Version of Pre-Task Environment
Figure 4 Electronic Dictionary Embedded in Reading Task
Figure 5 Experiment Schedule
Figure 6 Histogram of Pre-Treatment Sentence Combination Test
Figure 7 Histogram of Pre-Treatment Picture Interpretation Test
Figure 8 Histogram of Pre-Treatment Grammaticality Judgment Test
Figure 9 Histogram of Post-Treatment Sentence Combination Test
Figure 10 Histogram of Post-Treatment Picture Interpretation Test
Figure 11 Histogram of Post-Treatment Grammaticality Judgment Test
Figure 12 Mental Effort Investment in Salomon AIME Item 1
Figure 13 Mental Effort Investment in Salomon AIME Item 2
Figure 14 Mental Effort Investment in Salomon AIME Item 3
Figure 15 Mental Effort Investment in Paas Mental Effort
Figure 16 Percentages of Active Choice for Future Use of Reading Wizard
Figure 17 Percentages of Frequencies for Computer Use
Figure 18 Percentages of Levels for Computer Expertise
Figure 19 Gain Scores for Sentence Combination Test by Two Groups
Figure 20 Gain Scores for Picture Interpretation Test by Two Groups
Figure 21 Gain Scores for Grammaticality Judgment Test by Two Groups
Figure 22 Gain Scores by Three Prior Knowledge Level Groups
Figure 23 Interaction of Prior Knowledge and Delivery Media
Figure 24 Interaction of Prior Knowledge and Group on Salomon AIME Mental Efficiency
Figure 25 Interaction of Prior Knowledge and Group on Paas Mental Effort Efficiency
Figure 26 Interaction of Prior Knowledge and Group on Time Efficiency

ABSTRACT

The present study compared the use of an animated pedagogical agent (Agent Group) with the use of an electronic arrow and voice narration (Arrow Group) in a multimedia-based learning environment in which 74 college-level English as a Second Language (ESL) students learned English relative clauses with a specific instructional method called 'explicit rule presentation'. No significant difference in learning gain from pretest to posttest was found between the Agent and Arrow Groups. This result corroborates Clark's (2001) claim that what causes learning is an instructional method (in this study, pointing to and voicing key concepts during instruction), not a delivery medium (in this study, an animated pedagogical agent or an electronic arrow with voice).

It was also found that the animated pedagogical agent's visual appearance and social behaviors did not motivate, interest, or tutor learners better than a simple electronic arrow with voice. Thus, this result does not support the Persona Effect (Lester, Converse, Kahler, Barlow, Stone, & Bhogal, 1997), which is derived from the hypothesis that an animated pedagogical agent makes human-computer interaction more social and interesting and thereby leads learners to work harder.
In the present study, learners in the Agent and Arrow Groups did not differ in the amount of self-reported mental effort that they invested in processing the instruction.

The study also did not support the cognitive efficiency hypothesis (Cobb, 1997). Cognitive efficiency refers to one medium requiring less conscious effort from learners to achieve a specific learning criterion, or leading to faster learning, than another medium. No significant difference was found between the Agent and Arrow Groups in their levels of cognitive efficiency. Yet, the results suggested a potential benefit of an animated pedagogical agent for learners with little prior knowledge. The lowest prior knowledge learners who interacted with the animated pedagogical agent achieved higher learning scores at a given unit of mental effort than their counterparts, although the effect size was relatively small.

CHAPTER I: REVIEW OF LITERATURE

Introduction

The primary purpose of this doctoral study is to examine the claim that an animated pedagogical agent, when used in multimedia-based learning programs, increases learning scores over instructional treatments that do not employ an agent (Atkinson, 2002; Johnson, Rickel, & Lester, 2000; Lester, Converse, Stone, Kahler, & Barlow, 1997). An animated pedagogical agent is a lifelike animated character that inhabits a computer-based learning environment and provides learners with pedagogical assistance such as presenting explanations, directing attention, and giving advice on learning strategies. Recent studies that compare pedagogical agents with alternative treatments (Atkinson, 2002; Craig, Gholson, & Driscoll, 2002; Moreno, Mayer, Spires, & Lester, 2001) have yielded ambiguous findings.

Clark (1983, 1994a, 1994b, 2001, 2003) has claimed that animated pedagogical agents (and other media and media-attribute treatments) are not instrumental in learning unless they provide essential instructional methods, and that any method can be implemented in a variety of media systems with equal learning impact. He further argues that while pedagogical agents are able to present essential instructional methods such as instructional plans, explanations of concepts, and feedback, other less expensive and less distracting implementations of instructional methods are equally effective for learning (i.e., using simple arrows or color coding of key points rather than having a pedagogical agent "point" to text or parts of instructional graphics). Therefore, Clark concludes, it is the instructional method used, not the specific medium or audio-visual agent used to deliver the method, that leads to learning gains.

Nevertheless, this argument is not settled (Kozma, 1991, 1994; Ullmer, 1994). On one hand, with the wide availability of multimedia and information technology, an ever increasing number of educational software programs are promoting the inclusion of multimedia elements in instruction as a panacea for learning problems (Kimmel & Deek, 1996; Lowe, 2002). Animated pedagogical agents are simply the latest iteration of recent technological advances in user interfaces and autonomous software agents that are being developed to aid instruction.
On the other hand, a number of researchers and educational economists are concerned that unnecessarily expensive instructional tools are being proposed to solve critical learning and educational access problems when less expensive options would have either equal or greater impact on learning (Erickson, 1997; Levin & McEwan, 2001).

The present study explored the use of an animated pedagogical agent as well as other delivery systems in a multimedia learning environment in which college-level students learn English as a Second Language (ESL). Of particular interest in this study are the relative effects and efficiency of a pedagogical agent on the acquisition of English relative clauses compared to an alternative multimedia system (i.e., an electronic arrow with voice). The main research questions explored in the study include whether or not a pedagogical agent fosters the process and outcome of learning English as a second language when it is used to deliver explicit rule presentation, an instructional method widely used to teach a linguistic form.

Experiments were conducted to assess the effects of two different delivery methods (an animated pedagogical agent vs. an electronic arrow with voice) on the acquisition of English relative clauses, the target of instruction. To address this issue, the study adopted a true experimental design with two treatment groups (Agent Group vs. Arrow Group). Several dependent variables were measured to estimate the relative effectiveness and efficiency of the two different multimedia systems used in the study: mental effort, learner interest, time required for learning the target grammar, learner self-efficacy, active choice, and acquisition of the target form.

In addition, the study examined the cognitive efficiencies of the different delivery media, including an animated pedagogical agent and an electronic arrow with voice, used to deliver the instructional method - explicit rule presentation. According to Cobb (1997), who proposed including 'cognitive efficiency' as a variable in media studies, cognitive efficiency refers to "one medium being more or less effortful than another, more or less likely to succeed with a particular learner, or interacting more or less usefully with a particular prior knowledge set" (p. 25), leading to faster learning or requiring less conscious effort from learners for processing learning materials (Clark, 1998; Cobb, 1997).

The underlying idea of cognitive efficiency is that a specific medium through which instruction is presented to learners may not produce different cognitive outcomes compared to another, such as a superior mental representation, but it can still have a direct impact on cognitive processes through different levels of cognitive efficiency. In other words, different media in which instruction is delivered might produce the same cognitive product, but they can determine the ways in which learners with different prior knowledge process the information presented to them. Despite its potential for multimedia- and computer-assisted learning, cognitive efficiency is still in an early stage of development and needs sound theoretical frameworks and empirical evidence to support its hypothesis.
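Later chapters report cognitive efficiency as 'Salomon mental efficiency', 'Paas mental efficiency', and 'time efficiency' (Tables 30-32) without giving a formula at this point. As a hedged illustration only - assuming the computation follows the standard approach of Paas and van Merriënboer (1993), which the Methodology chapter may specify differently - instructional efficiency is typically derived from standardized performance and effort scores:

E = \frac{z_{\text{performance}} - z_{\text{effort}}}{\sqrt{2}}

where z_performance and z_effort are the z-scored learning outcome and self-reported mental effort for a condition. A positive E indicates relatively efficient instruction (higher performance obtained with lower reported effort), a negative E the reverse, and a time-based variant substitutes standardized time on task for the effort term.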
To lay the foundation for the present research, the following sections review relevant theories and research findings and investigate how to build cognitive efficiency in multimedia learning using an animated pedagogical agent and alternative media. First, the review looks at the factors involved in learning second and foreign language (L2) grammar, acquisition of which is one of the major dependent variables of the present study. In particular, it focuses on instructional methods that have been found to enhance the acquisition of L2 grammar. It should be noted, however, that the major focus of the present study is not second language acquisition per se; rather, it examines instructional technology questions using ESL as a subject matter. Second, the review looks at the factors relevant to cognitive efficiency and examines ways to improve the cognitive efficiency of a multimedia learning environment. Finally, the review summarizes what studies have been conducted in the field of animated pedagogical agents and how, closely examines pedagogical agents from a cognitive efficiency perspective, and then outlines the challenges facing the field, including the need for sound study designs that clarify the factors to which learning outcomes can be attributed.

Second Language Learning

Overview of Form-Focused Instruction

It is no exaggeration to say that over the last two decades research in the field of second and foreign language (L2) learning has been largely dominated by Communicative Language Teaching (CLT), whose main goal is to promote learners' L2 communicative proficiency, focusing mainly on listening and speaking capabilities (Richards & Rodgers, 1998). Communicative language teaching was a reaction to traditional language teaching methods that mainly focused on teaching grammatical structures (e.g., the grammar translation method, the direct method, repetitive drill and practice). Advocates of CLT insist that such grammar-centered instructional methods produce L2 learners who know about the language but who cannot use the language to communicate with others. Thus, the central idea of communicative language teaching is that learners of an L2 should focus on communicating meaning with one another instead of learning the structures and rules of the target L2 (Skehan, 2003).

In particular, Krashen (1985), one of the most influential figures of the CLT approach to L2 learning, insisted that the learner's communicative competence in the target language is determined by his or her implicit language knowledge, which is acquired through naturalistic exposure to large amounts of L2 input. He further questioned the usefulness of explicit instruction in acquiring an L2 and argued that comprehensible input and the naturalistic use of the L2 are necessary and sufficient for L2 development. The pedagogical implications of CLT were manifested in the wide use of communicative activities or tasks in which learners use the L2 to express meaning or convey information the way it is used in the target L2 environment. In other words, in CLT-adopted L2 classrooms, learning activities are organized around meaning-focused communication with possible resemblance to real life, and the teaching of form or grammar is generally dismissed as a corrupting factor in learners' L2 development.
Although no one denies the importance of input and meaning-oriented activities in L2 learning, an increasing number of L2 studies, especially Canadian immersion studies (e.g., Harley, 1992; Harley & Swain, 1984), have shown that when an L2 instruction is primarily meaning-focused and provides comprehensible input only, learners do not develop a high level of L2 proficiency (Doughty & Williams, 1998). Based on the Canadian immersion studies, researchers have made claims that it may be necessary for learners to focus on form (lay people’s term for form is ‘grammar’) as well as meaning to develop more target like L2 abilities. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 7 For instance, after comparing the learner achievement of explicit instruction (i.e., providing explanation about the target linguistic form and its use) with that of natural exposure or combination of two, Long (1983) concluded that form-focused L2 instruction (e.g., explicit rule presentation or explicit instruction) indeed makes a difference in learner performance. Since then, a number of quasi-experimental and experimental studies have been conducted to examine the effectiveness of various form-focused instructions. Additionally, the question of the role of attention to form in L2 acquisition, whether it facilitates L2 acquisition or not, has been a major topic of debate, and thereby has produced a considerable number of theories, debates and studies (Norris & Ortega, 2000). Yet, what seems clear now is that the focus of L2 instruction research has gradually moved from ‘whether form-focused L2 instruction is necessary’ to ‘what form-focused instructional methods are more effective in leading learners to focus on form and thus learn better’ (Norris & Ortega, 2000). In particular, researchers working in Task-Based Teaching, a widely accepted approach to second language instruction, generally admit the importance of the form-focused instruction and try to incorporate it in task-based instruction through the use of various methodological applications. A typical definition of a task is “an activity which requires learners to use language, with emphasis on meaning, to attain an objective” (Bygate, Skehan, & Swain, 2001, p. 11). In the task-based approach, it is essential to design a communicative task in which learners have to use the target L2 form to achieve the Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 8 goal, whether it is to buy an airline ticket or solve a language puzzle. The present study also adopted the task-based approach to develop the experimental tasks and learning material. According to Ellis (2003), there are three major types of form-focused tasks: structure-based production task, consciousness-raising task, and comprehension- based task. In the first type, the major concern for designing a task is how to encourage learners to produce a specific L2 form while engaging in a meaningful task. On the other hand, a consciousness-raising task is one in which the target structure itself is part of the task and learners are required to explicitly talk about the target structure. In other words, this type of task raises learners’ conscious awareness of the target form. Finally, in the comprehension-based task, learners’ attention is indirectly drawn to the target structure in the input through the use of visual enhancement of the form. 
Methodological options for the form-focused instruction and the task-based teaching are different from one another depending on the amount of emphasis they place on feedback, interaction, output production, or attentional sources. Despite the variety, the basic premise of all these methods is the same: an instructional treatment should attract learners’ focal attention to a target L2 form within a meaningful context so that the form is more likely to be noticed, processed, and acquired (Schmidt, 1995; Skehan, 2003; Spada, 1997). The issues regarding the role of attention in second language learning and its measurement are presented in the next Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 9 section followed by a discussion of several instructional methods to draw learner attention. Attention and Awareness in Second Language Learning Role of Attention The role of attention in learning has been extensively studied both in cognitive psychology and second language acquisition (Curran & Keele, 1993; Dulany, 1991; Reber, 1989, 1993; Schmidt, 1995; Tomlin & Villa, 1994). The general consensus in SLA is that attention to a certain linguistic feature is crucial for learning to take place, and it is also accountable for the ways that learners process linguistic stimuli to which they are exposed (Gass, Svetics, & Lemelin, 2003). Only attended stimuli will be encoded in long-term memory, while unattended ones will persist in working memory briefly and then be discarded. It is generally agreed upon as well that more complex learning requires more attention from learners than simpler learning, and the more learners pay attention to the target form, the better they learn it (Schmidt, 1995). Given the complexity of English relative clauses, the target linguistic form of the present study, it can be predicted that when learners’ limited attention is distracted to extraneous factors such as the interface of a computer-based learning environment or multimedia-embedded learning environment, their learning will be damaged (i.e., learner spending more time and mental effort to process a unit of instruction due to the distracting elements in learning environment). Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 10 It is still controversial, however, how much and what type of subjective awareness or attention to an L2 form is necessary for learning to occur (Izumi, 2002; Schmidt, 1995). There are three major positions regarding this issue. First, Richard Schmidt claims that “what learners notice in input becomes intake for learning” (1995, p. 20), Here, intake is referred to as the portion of input which has been perceived and processed by learners. From Schmidt’s point of view, therefore, learning cannot take place without learners’ subjective awareness at the level of noticing which refers to a low level of awareness or conscious attention, since noticing is the necessary and sufficient condition for input to be converted into intake (1990, 1995,2001). On the contrary, drawing from their theoretical model of attention Tomlin and Villa (1994) proposed that conscious awareness is not necessary for learning. In their model, attention is divided into three interrelated processes; alertness - “general readiness to deal with incoming stimuli or data” (p. 190), orientation - “the specific aligning of attention on a stimulus” (p. 191), and detection - “the cognitive registration of sensory stimuli” (p. 192). 
They further argued that learners’ awareness of the target form might facilitate learning, but what is crucial for learning is the detection which does not require the learners’ conscious awareness. The third position is posed by Peter Robinson (1995) who posits himself in the middle of the previous two propositions. He defines ‘noticing’ as detection plus further activation which uses attentional resources allocated by a central executive Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 11 processor. He also contends that learning might happen even when a learner detects the form without subjective awareness, but the amount of learning would be very limited. Rather, in order to store a stimulus into long term memory, a learner has to detect the target form, and then rehearse it in short-term memory. That is, ‘noticing’. Robinson also suggests including a higher level of subjective awareness, ‘understanding’, into the awareness research in SLA. To be more specific, understanding refers to recognition o f general rules or patterns underlying input, while noticing refers to surface level of phenomena or simple recognition of certain forms or events. Thus, understanding is correlated to a higher level of learning than noticing. Despite the controversy presented above, it is now widely agreed that focal attention to a target linguistic form is necessary for learning to take place and a higher level of subjective awareness or noticing is correlated with better learning (Rosa & O'Neill, 1999). Several recently published studies have also provided empirical evidence for the facilitative effects of subjective awareness on L2 learning (Alanen, 1995; Leow, 1997, 2000; Robinson, 1997a, b; Rosa & O ’Neill, 1999). Types of Attention Measures There is another issue that has been subject to debate in the field - the ways to operationalize and to measure learner awareness or attention. The debate has been prompted mainly due to (Leow, 1997): (a) different definitions of what constitutes awareness; (b) the speediness of a learner’s experience of cognitive processing; and Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 12 (c) a learner’s potential inability to report his or her own awareness. Nevertheless, many awareness measuring instruments are constructed mostly based on Schmidt’s operational definition of noticing (1995) - the availability of self-report or verbalizing what one has experienced during or immediately after one’s exposure to the target form. Carr and Curran (1994) also listed methodological criteria for assessing learner awareness that include changes in a learner’s behavior patterns and some form of meta-awareness, that is, verbalizing their conscious register of the targeted form. Many attention measuring instruments employ online or offline self- report or questionnaires that are believed to capture the cognitive operation that has taken place while learners were exposed to the input. Yet, it should be mentioned that some experiences are not easy to report or verbalize, and hence, the lack of self- report does not necessarily mean the lack of awareness (Schmidt, 1995). There are two major data collection procedures in capturing learner awareness: concurrent or introspective verbal reports and post-exposure or retrospective measures. 
The concurrent verbal data get collected while learners are performing tasks; that is, learners are asked to verbalize their thoughts as they are processing incoming stimuli. On the contrary, the latter is usually conducted immediately after learners have completed learning tasks (off-line), either through questionnaires (e.g., Robinson, 1997a, b) or interviews (e.g., Kormos, 2000). This type of measurement, however, has been criticized that it may obtain only indirect evidence of learners’ awareness (Leow & Morgan-Short, 2004). Moreover, Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 13 retrospective self-report measures are restricted to a certain degree because learners might have limited ability to retain in their memory what they have experienced and thus they may report what they have inferred instead of what they have actually experienced (Rosa & O ’Neill, 1999). Nevertheless, it is hard to dispute its practical value: it is easy to administer and we can also get a large amount of data at once when administered in a questionnaire format. Recently, researchers have started to use more direct methods in order to capture what is really happening when learners attend to forms and process them for further learning. For instance, the think-aloud protocol, through which learners are required to verbalize whatever comes to their minds while they are interacting with the L2 data, is one of the popular direct methods used in the field of SLA. Despite its growing popularity in the field, the validity of the think-aloud protocol has also been questioned as learners’ verbal reports may not be same as their behaviors (Nisbett & Wilson, 1977, cited by Leow & Morgan-Short, 2004). Another major concern for the concurrent verbal reports is the issue of reactivity: Thinking aloud could change the very nature of the process under investigation by prompting learners to process the input in a more systematic way than they would without thinking aloud (Rosa & O ’Neill, 1999). Furthermore, thinking aloud could act as an additional secondary task, imposing a cognitive load on learners who also have to simultaneously process the input (Izumi, 2002). This additional task could require learners to spend more time and effort to complete learning tasks (Ericsson & Simon, 1993). Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 14 Instructional Methods to Focus Learner Attention A learner’s focal attention to a target linguistic form is necessary for learning to take place and the more a learner is aware of the target form, the better s/he is likely to learn the form (Rosa & O'Neill, 1999). Thus, the fact that the amount of attention which a learner can pay to a certain target at a certain moment is limited has constantly been a major factor that teachers consider when making spontaneous as well as planned instructional decisions. In the field o f L2 education, particularly, directing learners to pay attention to a linguistic form is not an easy job, not only because they have a short attention span but also because paying attention to linguistic forms during communication is counter-intuitive. 
People use language to communicate meaning and they can do it successfully without paying too much attention to syntactic details of the language thanks to their schematic knowledge coupled with their communication strategies (e.g., reading other’s facial expressions and the context in which the conversation is being carried out). Therefore, it is quite natural for people to pay more attention to the meaning rather than to the linguistic form of their messages (Skehan & Foster, 2001). Nevertheless, prioritizing meaning over form does not help learners achieve complete success in developing L2 proficiency as evidenced in the Canadian immersion education studies. In fact, our nature of focusing more on meaning is to blame for the development of pidgin languages, frequently observed among adult learners who have developed a great deal of schema and communication strategies in Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. their LI and thus do not have to capture all the linguistic details of the message to fully function as members of a society (Givon, 1985). Several instructional methods have been developed to draw learners’ attention on form since Long (1983) proposed that instruction does make a difference in learners’ learning of L2, and the effectiveness of different methods have been examined. Examples of instructional methods for drawing learner attention include, but are not limited to, textual or visual enhancement, explicit explanation of the linguistic form or explicit rule presentation (i.e., syntax or morphology) before or after exposure to input, and frequent use of the target form in input (e.g., input flood). Instructional methods can be categorized according to the degree of their explicitness in requiring learners to pay attention to the target form. Explicit instruction aims to direct learners’ attention on target grammatical elements by explicitly presenting rules of the target form (e.g., presenting a lesson on syntactic or morphological aspects of the linguistic form or how the form is used in sentences or utterances), corrective feedback, and/or tasks that explicitly ask learners to focus on specific linguistic forms. On the other hand, implicit instruction utilizes unobtrusive techniques and/or tasks (e.g., visually manipulating forms by italicizing, underlining specific words or sentences) to lead learners to notice forms without interrupting their processing the meaning of the message. During the last two decades, explicit instruction, especially explicit rule presentation in isolation has been criticized and prohibited by noninterventionists who questioned the usefulness of any explicit rule permission of the copyright owner. Further reproduction prohibited without permission. learning and maintained that only implicit learning of the form, resulting from exposure to large amounts of input is the only possible way o f L2 learning (Krashen, 1985; 1992; 1993). However, as Doughty and Williams argued (1998), the criticism has been unfairly applied to any form of explicit language teaching. As a matter fact, empirical studies have suggested that explicit instruction is better than implicit instruction, in particular, when explicit rule presentation is mixed with carefully prepared, relevant examples (Ellis, 1994; Long & Robinson, 1998). Different instructional treatments may have a different selection and arrangement of the methods listed above. 
The selection of instructional methods, whether explicit or implicit, should be made based on several factors involved, such as levels of learner prior knowledge or overall proficiency, complexity of a target linguistic form, nature of learning tasks employed, and so forth (Doughty & Williams, 1998). Among several methods, however, explicit rule presentation and visual enhancement of a target form have been most discussed in the field. Their relative effectiveness have been investigated, particularly when they are matched up with other moderating factors including learners’ prior knowledge or proficiency levels, learner characteristics (i.e., language aptitude), and the type of target language forms (e.g., simple vs. complex linguistic features) (Norris & Ortega, 2000). Explicit rule presentation was also chosen as the instructional method for the present study because of empirically proven effects o f the method, which will be discussed in detail in the next section. The rules about the target form, English permission of the copyright owner. Further reproduction prohibited without permission. relative clauses, were blended with relevant examples, and then presented through different forms of multimedia (i.e., an animated pedagogical agent and an electronic arrow with voice). The study also provided participants with a reading task which included the target form in meaningful context. Explicit rule presentation, thus, was used as a prerequisite for the reading task in which the knowledge of the target form should be used for successful comprehension of the readings. The effectiveness of the explicit rule presentation was studied with relation to learners’ prior knowledge of the target form to get a more complete understanding of its impact on learning. The following section will review a body of literature regarding the effectiveness of explicit rule presentation compared to other instructional methods. Review of the Effects of Explicit Rule Presentation The rationale behind using explicit rule presentation for L2 instruction is that simply exposing learners to a linguistic form is not sufficient to enable learners to acquire most of L2 forms and that the shortfall of any meaning-focused instruction should be compensated for by explicitly explaining the target form (DeKeyser, 1998). The contents of explicit instruction prime learners for specific grammatical features so that they pay longer and deeper attention to the target forms and develop conscious, accurate knowledge of the form. On the other hand, visual input enhancement (i.e., targeted linguistic features are italicized, bold, or capitalized for perceptual salience), one of the popular implicit and unobtrusive attention drawing techniques, is hypothesized to help learners to focus on a specific grammatical permission of the copyright owner. Further reproduction prohibited without permission. structure contained in written text by making the form perceptually salient. The fundamental idea behind visual enhancement is that learners’ attention will be drawn to the highlighted grammatical structure in input, and then the attended form will be learned because attention transforms input into intake (Izumi, 2002). It is also based on the notion that explicitly drawing learners’ attention to linguistic forms interferes with meaning-making process. 
Recently, a number of laboratory and classroom studies have revealed that learners who are explicitly exposed to target forms and who are given explicit rule presentation outperformed those who are implicitly directed to forms (Alanen, 1995; DeKeyser, 1995; Ellis, 1994; Robinson, 1996, 1997a, b). On the contrary, the research on implicit instructional methods including visual enhancement produced mixed results (Jourdenais, Ota, Stauffer, Boyson & Doughty, 1995). The mixed results could be due to a number of other independent and mediating factors involved in studies, such as the length of treatment, additional instructional methods implemented besides textual input enhancement, and learners’ proficiency levels. For example, Alanen (1995) found positive effects for explicit rule presentation, but mixed results from visual enhancement on the acquisition of semi-artificial Finnish locative suffixes and consonant gradation. In the study, participants were asked to read a text with the target structure embedded in it after they were assigned to one control and three treatment groups: (a) Control Group only read two descriptive passages with a picture and glossary; (b) Enhance Group received the same text with Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. the target structures printed in italics; (c) Rule Group was given explicit explanation about the use of the target structures before reading the text; and (d) Rule & Enhance Group received explicit rule explanation and typographically enhanced text. The first two groups were referred to as meaning-based groups whereas the latter two groups were referred to as rule-based groups. Participants were required to think aloud while reading the text to find out what features of the input they paid attention to and whether they had noticed the form. As hypothesized by the author, the results of various assessments (i.e., think aloud protocol, grammaticality judgment, sentence completion, comprehension, word translation, and rule statement) revealed that the rule-based groups, Rule and Rule & Enhance Group, outperformed the meaning-based groups, Control and Enhance Group, in learning both locative suffixes and consonant gradation. Interestingly, however, the data did not support the initial hypotheses that visual input enhancement would have positive impact on learning of the target structure. The students in the visual enhancement group noticed the italics in the text, but it appeared that not many of them thought about why those elements were typed differently. As a consequence, no significant difference was found between Control and Enhance group as well as between Rule and Rule & Enhance Group. Based on these findings, Alanen concluded that it was the explicit rule presentation that made differences in learning. With regard to the failure of the visual input enhancement, Alanen attributed it to low degrees of perceptual saliency of italics. permission of the copyright owner. Further reproduction prohibited without permission. 20 The study by Jourdenais et al. (1995) is one of the few studies which found textually modified input facilitated noticing of and subsequent production of grammatical forms. Participants, 14 college students learning Spanish, were first assigned to either visual enhancement or comparison group, and then given a sample text written in Spanish as a stimulus for a writing task. 
However, only the visual enhancement group received a textually enhanced sample text in which the target forms, Spanish preterit and imperfect, were visually manipulated (e.g., shadowed, underlined, bolded). After reading the sample text, the participants were told to narrate a series of pictures in writing. They were also told to think aloud as they wrote. The think-aloud protocol was employed to investigate a learner's concurrent cognitive processing of the forms and to measure the degree and the nature of noticing. In addition, the students' written products were analyzed to evaluate the learners' use of the target forms. The think-aloud protocol data revealed that the students in the visual enhancement group made significantly more explicit mentions of the forms than those in the comparison group, which Jourdenais and colleagues interpreted as evidence that the enhancement group had been better primed for processing the target forms due to the textual enhancement. The written productions also showed that the enhancement group used the forms more often than the comparison group, although the groups did not differ significantly in accuracy. The results, however, should be interpreted with caution because the students were already familiar with the forms and had different proficiency levels in Spanish reading and writing skills. As the researchers mentioned, it was not clear what role these individual differences played in the cognitive processing as well as in the outcomes. Moreover, the fact that the study did not include explicit rule explanation as one of the instructional treatments makes it difficult to determine the relative effectiveness of visual enhancement.

L2 instruction may combine more than one method to effectively draw otherwise elusive learner attention to form. Ellis (1994) insists that a combination of implicit and explicit methods works better than either method alone, since learners need not only to learn the form but also to integrate the form and the meaning to successfully complete the language acquisition process. White (1998) also suggests that typographically enhanced input does not provide enough information about the use of forms, and therefore, in addition to visually enhancing the input, it might be more helpful to provide explicit rule explanation or different types of visual enhancement, such as arrows or color-coding, which could clarify the relationship among pertinent elements. She also maintains that individual differences and some external factors, such as regular classroom activities, should be considered when studying the effects of instructional methods.

A question then arises as to what it means for the visual manipulation of text not to be perceptually salient enough. Based on the two studies discussed above, it appears that visual enhancement is not qualitatively strong enough to trigger further cognitive processing beyond mere noticing of forms for acquisition to take place. Consequently, it is worth asking what other instructional methods could be used, with or without a visual input enhancement method, to promote learners' cognitive processing beyond mere noticing of forms.
Izumi (2002) partially addressed the issue by investigating the facilitative effects of output practice in addition to visual enhancement on drawing learners' attention and the consequent learning of grammatical forms in a controlled experimental study. A total of 61 adult ESL (English as a Second Language) learners were randomly divided into four treatment groups, each of which was exposed to a target form, English relative clauses, presented in two reading tasks through different combinations of output practice and visual input enhancement. The output practice was assumed to facilitate learners' noticing of forms by inducing them to realize problems in their production, which in turn gives learners heightened awareness of the target form provided in subsequent input. Note-taking of form-related words was used to measure learner awareness at the level of noticing while learners were reading the texts. Yet it was also acknowledged that the absence of note-taking did not necessarily mean the absence of noticing of the form. The instruction given for note-taking was different for the output groups and the input enhancement groups; while the output groups were asked to take notes on any words which they thought important for the subsequent text reconstruction tasks, the enhancement groups were asked to take notes for the subsequent reading comprehension tasks. Acquisition of the forms was measured through various production tests (e.g., sentence combination, picture-cued sentence completion) and reception tests (e.g., interpretation, grammaticality judgment).

Unlike Alanen (1995), who found a positive correlation between noticing and learning of target forms in some groups, Izumi did not find such a relationship in either the output practice or the visual enhancement group. Specifically, the visual input enhancement had a significant effect on the frequency of noticing, but failed to produce any measurable gains in learning of the target forms. On the contrary, the output practice produced great improvements in learning of the forms even though it did not have any evident impact on noticing of the forms. Izumi explained that the failure of visual enhancement to facilitate learning occurred because noticing did not automatically lead to the further cognitive processing that might be necessary for acquisition. It was also argued that what is really important for acquisition of a grammatical structure is not the quantity of attention, but the quality of attention, that is, the levels and types of attention. A shallow level of attention, such as maintaining continued attention to certain forms at one level without shifting to a deeper processing level, is not enough to prompt learners to learn the target forms. In Izumi's study, the visual input enhancement was claimed to cause only a shallow level of attention while the output practice led the learners to go beyond superficial levels of noticing and to acquire the form. Izumi also called attention to the notion of 'Integrative Processing' and its relevance to SLA. Integrative processing emphasizes not only learners' attention to individual elements, but also their understanding of the relationship among the elements. According to Izumi, it is the latter that was missing in the visual input enhancement condition but present in the output practice condition.
However, Izumi did not provide statistical evidence or explain how the output practice encouraged learners to connect all the related elements and to conceive of them as a coherent structure, English relative clauses. In addition, it is not clear why the output practice group did not notice the form but was still able to learn it. It could be argued that the output practice group's note-taking for sentence reconstruction caused qualitatively different cognitive processing than the note-taking for reading comprehension, as Izumi himself argued that production requirements could lead learners to process the language differently than simple comprehension requirements.

Summary

This section was devoted to the review of theories and research with regard to the learning and instruction of L2 linguistic structures. In particular, different instructional methods to draw learners' attention to a target linguistic form (e.g., explicit rule presentation, visual input enhancement, and output practice) were examined in terms of their effectiveness and limitations. Most of the instructional methods discussed above are based on the notion that the amount of attention to target linguistic elements is positively correlated with the degree of learning of a target form. Interestingly, however, the visual manipulation of target forms failed to show a significant impact on learner performance despite its proposed instructional benefits for drawing learners' focal attention to forms. On the other hand, other explicit methods, such as explicit rule presentation, produced measurable gains in terms of noticing and learning of a target form. Nevertheless, due to the pervasive practice of the noninterventionist approach to L2 instruction in the field of SLA, the effects of explicit rule presentation plus examples are not widely recognized by L2 scholars and educators, which necessitates further research.

There are some other issues to be considered. First, by focusing on only one instructional factor, that is, how to draw learners' attention, the existing studies did not give much consideration to other mediating variables involved in L2 learning, including learner prior knowledge. Second, only a few studies have examined the effects of the learning task itself and the directions for conducting the learning task, even though previous research has revealed that these might have an impact on learning (Rosa & O'Neill, 1999; Williams, 1999). Rosa and O'Neill (1999) found that their subjects benefited from a problem-solving jigsaw puzzle task and the directions accompanying the task in noticing and processing the form. In particular, they claimed that regardless of the treatment group to which they were assigned, all the participants improved significantly from pretests to posttests due to the properties of the puzzle task (i.e., task essentialness and immediate feedback).

Based on the findings and limitations presented in this section, the present study examined the effect of one specific instructional method, explicit rule presentation plus relevant examples provided in reading text, in order to give learners opportunities to understand how the target form is used in meaningful context.
In particular, it employed more elaborate multimedia techniques (i.e., an animated pedagogical agent and an electronic arrow with voice) to deliver the method in a computer-assisted learning environment. Given the recent trend in the field of L2 instruction that instruction is increasingly delivered through computers and multimedia, but mostly in a text mode, it was considered worth investigating what other multimedia delivery methods are available and which ones are useful for drawing learners' attention and facilitating L2 learning. Furthermore, the present study conformed to stricter methodological standards by taking learner prior knowledge into consideration and by controlling the instructional tasks as well as the directions for carrying out the tasks.

Cognitive Efficiency of Multimedia Learning

Media researchers now agree to a certain extent that questioning the effect of one medium over another on cognitive products is a circular, fruitless argument, and that it is high time to shift our focus onto the impact of media on learners' cognitive processes and onto how to utilize media to improve those processes (Clark, 1998; Mayer, 1997, 2001). One of the media researchers concerned with how new information is processed and how media can help this process is Tom Cobb, who proposed including 'Cognitive Efficiency' as a variable in media studies (Cobb, 1997). Cognitive efficiency refers to one medium requiring more or less mental effort or time than another medium while one is performing a task (Cobb, 1997; Clark, 1998). Under the proposition of cognitive efficiency, there are two independent but interactive variables: the mode of presenting instruction, and individual or group differences in prior knowledge and experience that influence whether learners process the instruction faster and/or more easily (Clark, 1998). Therefore, researchers may ask questions such as how much time is required for a particular learner to learn a unit of learning material through one medium versus another, or how much mental effort a learner has to invest in processing information when a certain medium is employed compared to another. Hence, cognitive efficiency has clear relevance to media selection. Yet cognitive efficiency is not about choosing a unique and best medium for presenting instruction. Rather, it is about choosing the cognitively most efficient medium among alternatives for a given task and given learners. The present study examined whether a pedagogical agent is more cognitively efficient than an electronic arrow with voice for presenting explicit rules about English relative clauses to college-level ESL learners. This section reviews theoretical frameworks and empirical findings associated with cognitive efficiency in multimedia learning. It further looks at the instructional and design principles which should be considered when building cognitive efficiency into a multimedia learning environment. These will be the basis for the discussion in the next section about animated pedagogical agents, because an animated pedagogical agent is one of the delivery media employed, along with an electronic arrow with voice, in the present study.
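Before turning to those theoretical constructs, it is worth noting how the cognitive load literature has sometimes quantified the kind of relative efficiency defined above. One widely used formulation, generally attributed to Paas and van Merrienboer, combines standardized performance and standardized mental effort; it is sketched here only as an illustration of the construct, not as a metric adopted verbatim in this dissertation:

E = \frac{z_P - z_E}{\sqrt{2}}

where z_P is the standardized performance score and z_E the standardized mental effort rating obtained under a given medium. A higher E indicates that a medium yields relatively high performance for relatively little invested effort, so alternative delivery media can be compared directly on E for the same task and learners.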
Theoretical Constructs Relevant to Cognitive Efficiency

As discussed above, cognitive efficiency has not yet earned much theoretical or empirical consideration from media researchers and educators, in spite of its potential to move the field of media learning forward. Consequently, it has not accumulated much theoretical support for conducting empirical studies to confirm its promise, which makes it necessary for the present study to examine relevant ideas and constructs from existing media studies. Among the various theoretical constructs found in media research, the concept of mental effort is closely related to cognitive efficiency. Mental effort is typically defined as "the number of non-automatic elaborations applied to processing a unit of material" (Salomon, 1983, p. 44), or "the amount of actual effort invested in the lesson," which includes "cognitive activities such as perceptual processing, searching memory for appropriate schemata, and elaborating on the content" (Cennamo, 1993, p. 15). There are a few concepts similar to mental effort, although there is a lack of agreement among researchers in defining these types of mental constructs (Flad, 2002). What is common among all these concepts, however, is that such effort is expended only in processing declarative knowledge (e.g., knowing concepts, processes, and principles), which requires learners' conscious attention and processing, rather than in processing highly automated procedural knowledge (e.g., knowing how to do something), which requires little or no conscious processing. For example, Posner and Snyder (1975) proposed the term 'conscious attention', in contrast to 'automated attention', to indicate a mental process put into operation when making overt responses, retrieving information from memory, and developing a hypothesis. Shiffrin and Schneider (1977), in comparison, used the term 'conscious attention' to describe the level of cognitive involvement beyond automated processing.

Another related construct, which has been extensively studied by Sweller and colleagues, is 'cognitive load' (Sweller, 1999; Sweller & Chandler, 1994; Sweller, Cooper, Tierney, & Cooper, 1990; Sweller, van Merrienboer, & Paas, 1998). Recent work on cognitive load frameworks has concentrated on multimedia learning with the advance of 'Cognitive Load Theory' (Brunken, Plass, & Leutner, 2003; Camp, Paas, Rikers, & van Merrienboer, 2001; Kalyuga, Chandler, & Sweller, 1999; Paas, Tuovinen, Tabbers, & van Gerven, 2003; Sweller et al., 1998; Tindall-Ford, Chandler, & Sweller, 1997), which deserves further inquiry. The construct of cognitive load is multi-dimensional, consisting of a causal and an assessment dimension: the causal dimension represents the interaction between task characteristics (e.g., difficulty, format, use of multimedia, time pressure) and learner characteristics (e.g., spatial abilities, prior knowledge, age), whereas the assessment dimension represents the measurable concepts of mental load, mental effort, and performance (Paas & van Merrienboer, 1994). In the cognitive load frameworks,
mental effort refers to the amount of cognitive resources actually allocated to cope with the demands imposed by a task, and its measures are believed to provide essential information about cognitive load when combined with measures of performance (Paas, van Merrienboer, & Adam, 1994; Paas et al., 2003; Sweller et al., 1998).

Measurement of Mental Effort

Given its importance for the study of cognitive efficiency, the construct of mental effort should be considered and incorporated into the design of multimedia learning. The first step in helping a learner to learn more efficiently with media should therefore be measuring the exact amount of mental effort invested by the learner. However, the construct of mental effort is hard to measure directly (Clark, 1999) because it is not directly observable. Several methods and techniques have been developed to assess the amount of mental effort indirectly; they can be grouped into three main types according to the kinds of processing they attempt to measure: self-report opinion measures, secondary task measures, and physiological measures. These methods correspond to the areas of introspection, information processing, and neural processing, respectively (Cennamo, 1992).

Self-Report Opinion Measures

Mostly administered in a self-report format, subjective opinion measures are based on the assumption that investing mental effort is an intentional and non-automatic process that is readily accessible to individuals (Beentjes, 1989). It is also presumed that individuals are relatively capable of reporting the amount of mental effort which they put into processing learning materials (Salomon, 1984). Opinion measures include the Inventory of Learning Processes scales, designed to measure individual differences in levels of processing; the Amount of Invested Mental Effort (AIME) questionnaire developed by Gabriel Salomon (1983); and several scales used to assess the mental workload imposed by operating a flight simulator, such as the Cooper-Harper aircraft handling rating scales and the Workload Compensation Interference/Technical Effectiveness scales (Casali, Wierwille, & Cordes, 1983). All of these measures have a reputation for being efficient to use and highly reliable (Dweck, 1989). Among them, Salomon's AIME questionnaire and Paas' Mental Effort Scale have been frequently used in studies examining the differential effects of media and instructional methods on learners' mental effort investment (Beentjes, 1989; Salomon, 1983; Salomon, 1984; Salomon & Leigh, 1984; Paas et al., 1994). Typically, subjective measures ask subjects to answer a set of questions on a 4-point Likert-type scale about how much effort they think they have invested or how much they concentrated while processing a particular unit of material (Salomon, 1983). Although some researchers have questioned the validity of self-report measures and have called for the use of more precise and multiple methods (Beentjes, 1989; Gimino, 2000), these measures are still highly reliable, sensitive to differences in mental load, and non-intrusive (Gimino, 2000; Paas et al., 1994; Paas et al., 2003). Moreover, they are of better practical use in educational settings than any other measures.
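To make the mechanics of such measures concrete, the sketch below shows, in Python, how per-condition ratings from a Likert-type effort scale might be averaged and then combined with performance scores using the standardized efficiency formulation sketched earlier. The item responses, group labels, and scores are all hypothetical; this illustrates the general scoring logic, not the instruments or data used in the present study.

# Hedged illustration: scoring hypothetical self-report effort ratings and
# combining them with performance into a relative efficiency index.
from statistics import mean, stdev

def z_scores(values):
    # Standardize a list of scores using the sample standard deviation.
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical per-learner data for two delivery media (4-point effort scale).
effort = {"agent": [3.2, 2.8, 3.5, 3.0], "arrow": [2.1, 2.4, 2.0, 2.6]}
performance = {"agent": [14, 12, 15, 13], "arrow": [13, 14, 15, 12]}

# Standardize effort and performance across all learners together.
z_e = z_scores(effort["agent"] + effort["arrow"])
z_p = z_scores(performance["agent"] + performance["arrow"])

# Relative efficiency per learner: E = (zP - zE) / sqrt(2).
efficiency = [(p - e) / 2 ** 0.5 for p, e in zip(z_p, z_e)]

n = len(effort["agent"])
print("Mean efficiency, agent condition:", round(mean(efficiency[:n]), 2))
print("Mean efficiency, arrow condition:", round(mean(efficiency[n:]), 2))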
Secondary Task Measures

Secondary task measures encompass a variety of methods that require a subject to work on a primary and a secondary task simultaneously. Among the various secondary tasks, finger tapping, digit shadowing, and memory-scanning tasks are used most often. The secondary task methodology is based on the notion that one's cognitive capacity is limited, and that when much of that capacity is used by a conscious, non-automated primary task, there will be less cognitive capacity available for a secondary task. The amount of invested mental effort is computed from the difference between the baseline performance of the secondary task and the actual performance in an experimental condition in which the primary and secondary tasks are performed at the same time (Cennamo, 1992). The baseline performance of the secondary task is the reaction time or number of errors produced when the secondary task is performed alone, measured before the primary task is presented. The increase in reaction time or errors in the experimental condition indicates the amount of conscious effort allocated to the primary task. Several studies have found these measures sensitive to differences in cognitive task demands, that is, the different mental effort requirements of different tasks (Gimino, 2000). However, researchers have warned that secondary task measures can impose too much extra mental load on learners and thus interfere with learner performance of the primary task (Paas et al., 2003). The impact of the secondary task would be especially detrimental when the primary task is complex.

When using a secondary task method, it is important for researchers to decide at which point in the experiment learners should be asked to respond to the secondary stimulus, because there are individual differences in the speed of processing the primary task (Gimino, 2000). For instance, some learners might face the secondary task when they are closer to the solution, which imposes high cognitive load and causes longer reaction times. In contrast, learners who are interrupted by the secondary task closer to the beginning do not need to expend as much mental effort. In addition, the structure and complexity of the learning material could influence mental effort expenditure. That is, although the overall content of a lesson may be easy, some parts of the content may not be. If learners have to respond to the secondary task while the material presented at that very moment is complex, the measured level of mental effort will appear high; however, that does not mean the overall material requires a great amount of mental effort (Cennamo, 1996).
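The arithmetic behind this dual-task logic is straightforward. The following sketch illustrates it with entirely hypothetical reaction times: invested mental effort is indexed by how much slower the secondary task becomes when performed alongside the primary task than at baseline.

# Hedged sketch of the dual-task computation described above.
# All reaction times (in milliseconds) are hypothetical.
from statistics import mean

baseline_rt = [310, 295, 320, 305]    # secondary task performed alone
dual_task_rt = [455, 470, 430, 460]   # secondary task during the primary task

# The slowdown relative to baseline serves as an index of the mental effort
# the primary task demanded at the moment the secondary probe appeared.
effort_index = mean(dual_task_rt) - mean(baseline_rt)
print("Mean reaction-time cost (ms):", effort_index)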
Physiological Measures

Researchers have also used physiological measures to assess psychomotor, perceptual, communication, and cognitive task demands. Physiological measures assume that subjects show physiological reactions to increases in mental effort expenditure while they are performing a task. The difference between a baseline measurement and a measurement taken in an experimental condition in which a learner is working on an assigned task indicates the amount of mental effort. Physiological measures include a range of techniques such as monitoring heart rate, blood pressure, or respiration variability, using the electroencephalogram (EEG), and recording the number of eye blinks per minute (Wierwille, Rahimi, & Casali, 1985). Although physiological measures are extensively used in fields like human factors engineering, they are not very practical for classroom use because of the high cost of the equipment (Gimino, 2000).

Each measurement technique discussed above captures different aspects of the construct of mental effort (Fisher & Ford, 1998), which requires researchers and educators to carefully consider the purpose of their study and to decide what aspects of mental effort they plan to measure. Furthermore, research has shown that the sensitivity of different measures depends on the types of task placed on learners (Casali et al., 1983; Wierwille et al., 1985). Therefore, it is necessary for educators and researchers to take into consideration the types of learning tasks they will use when designing a study in order to obtain more accurate results. The present study adopted two self-report measures, Salomon's AIME and Paas' Mental Effort Scale, for the following reasons: (a) the instructional task used in the study, the learning of English relative clauses, is quite complicated; (b) the two multimedia delivery systems by themselves require a certain amount of cognitive resources, and thus imposing a secondary task might interrupt the learning process; and (c) self-report measures are far more practical than the other measuring techniques.

Building Cognitive Efficiency in Multimedia Learning

Given the definition of cognitive load, that is, the total amount of mental activity imposed on working memory at one time (Paas & van Merrienboer, 1994), and its conceptual similarity to mental effort, it can be inferred that by reducing the amount of cognitive load placed on working memory, it is possible to impose less cognitive demand, to require less mental effort from learners, and eventually to improve the efficiency of cognitive processing. How, then, can we reduce the amount of cognitive load? A fundamental idea of cognitive load theory is that working memory is limited in the amount of information it can hold and process at a time, and thus it should not be overloaded with too much information at a given time (Chandler & Sweller, 1991; Kalyuga, Chandler, & Sweller, 1998; Sweller & Chandler, 1994; Sweller et al., 1990; van Gerven, Paas, & Schmidt, 2000). Another important idea of cognitive load theory is that working memory consists of partly independent auditory and visual buffers (Baddeley, 1992), and each buffer processes a different form of information. With separate working memory buffers utilized to process information presented in auditory and visual modes, it is hypothesized that the effective capacity of working memory will be increased. Consequently, the amount of information that can be processed in a fixed amount of time or with a fixed amount of conscious effort will increase, which will in turn result in improved efficiency of cognitive processing.

Instructional Strategies to Reduce Cognitive Load

There are three types of cognitive load: intrinsic load, extraneous or ineffective load, and germane load (Paas et al., 2003).
Intrinsic cognitive load is determined by the interaction between learner expertise and the nature of the learning material itself, such as task complexity or high element interactivity of the material, meaning that each element of the material cannot be learned without reference to other elements. An extraneous component of cognitive load, on the other hand, results from the instructional format or strategy used to deliver the material. The problem with extraneous cognitive load is that it is irrelevant to learning and thus does not lead to schema acquisition and automation, the two major products of the human learning process (Sweller & Chandler, 1994). Unlike extraneous load, germane cognitive load refers to load that is devoted to the construction and automation of schemata. Since it is not possible to change the intrinsic nature of instruction or to lower the level of intrinsic element interactivity, it is best to keep extraneous cognitive load to a minimum so that the total amount of cognitive load imposed by a task falls within the limited mental resources of human working memory (Chandler & Sweller, 1991).

There are several instructional strategies that can be used to reduce the level of extraneous cognitive load (Sweller, 1999) and consequently the amount of mental effort: (a) using goal-free problems that do not allow learners to employ means-ends strategies and backward reasoning. With a typical goal-free problem, unlike a goal-specific problem, a learner does not have to keep the main goals and subgoals of the problem in working memory, which in turn saves working memory resources; (b) using worked examples, in which problems are accompanied by their worked-out solutions. Worked examples do not force learners to use a means-ends strategy, which demands a considerable amount of working memory capacity; (c) avoiding the split-attention effect, which results from presenting mutually referring information separately. When mutually referring information is presented separately, a learner has to split his or her attention and then mentally integrate the sources, which imposes a high level of extraneous cognitive load; instead, the information should be presented in a physically integrated format; (d) distributing information over different modalities, so that two different modal processors, the visuo-spatial sketchpad and the phonological loop, are utilized simultaneously in working memory; and (e) avoiding the provision of redundant information through different modes. Among these five instructional strategies, the last three have specific relevance to multimedia presentation and cognitive efficiency, and thus to the present study.

Several studies have been conducted to test whether these strategies have a significant impact on cognitive processes and learning outcomes (Jeung, Chandler, & Sweller, 1997; Mayer, Moreno, Boire, & Vagge, 1999; Moreno & Mayer, 2000a, b; Mousavi, Low & Sweller, 1995; Sweller & Chandler, 1994). Yet these studies mainly focused on measuring learners' cognitive products or final performance, not their learning processes. So instead of asking questions like 'How long did it take to learn the
materials presented through a specific mode?' or 'Was it easier to learn with one specific medium over another?', these studies were more interested in how well learners performed on a number of performance tests (e.g., knowledge of simple facts, knowledge transfer) after receiving a certain instructional treatment. This lack of studies on the learning process underscores the importance of the present study, because it investigates the relative effect and efficiency of different presentation media on the learning process as well as on final performance when learning a second language.

Avoiding Split Attention Effects and Utilizing Modality Effects

A split attention effect occurs when a lesson consists of two or more different sources of information that are incomprehensible until they are mentally integrated by a learner (Mousavi et al., 1995), such as a separately presented geometric diagram and accompanying sentences that are spread across a page or computer screen. In this situation, the learner has to find referential connections between corresponding aspects of the diagram and the sentences, since they cannot be understood in isolation. This extra mental integration process requires working memory resources, and as a result, there might not be enough resources left to achieve more essential learning objectives, such as schema acquisition. Several studies have shown that by physically integrating multiple sources of information (e.g., placing accompanying explanations in appropriate places near a diagram), it is possible to avoid splitting learner attention and to prevent learners from investing mental effort in unnecessary processes (Chandler & Sweller, 1991; Mayer & Moreno, 1998; Mayer, Heiser, & Lonn, 2001, Experiments 1 & 2; Sweller et al., 1990; Sweller & Chandler, 1994; Ward & Sweller, 1990).

The split attention effect can also be reduced by creating a modality effect. A modality effect can be obtained when the effective size of working memory is increased by utilizing different types of working memory processors (Jeung et al., 1997), a notion derived from the popular assumption in memory research that there are multiple working memory processors, also called multiple memory channels. A typical hypothesis of dual or multiple processing modalities is that students who receive verbal information as narration along with animation or diagrams, simultaneously or sequentially, outperform those who receive the verbal information in an on-screen text format, other things being equal (Moreno & Mayer, 1999, Experiment 2; Mousavi et al., 1995). When presented with an explanation in an auditory format, learners can utilize the verbal processing channel in working memory as well as the visual channel used to process the diagram, which eventually increases the effective size of working memory capacity.

Jeung et al. (1997) also agreed that a mixed mode of presentation (e.g., visual diagrams and auditory text) would make more working memory available for learning compared to a single mode of presentation of equivalent content (e.g., visual diagrams and written text). Yet they took a further step that has very significant implications for designing instructional multimedia.
They hypothesized that a multimodal presentation would be useful only when learners do not have much difficulty in relating the audio and visual information. Furthermore, they suggested that when a learner has to invest a high level of cognitive resources to search for connections between two sources of material due to the complexity of the diagram, a visual indicator such as flashing or color would reduce the learner's search effort so that cognitive resources could be used for learning.

Leaving Out Redundancy Effects

A common assumption in multimedia learning is that different instances of the same information, or, simply stated, repeated information in different modalities, would enhance learning by allowing learners to choose the presentation mode that best fits their learning preferences. However, it has been repeatedly shown that if two different segments of information can be understood in isolation, one of them is redundant, and removing the redundant information has a positive effect on learning (Bobis, Sweller, & Cooper, 1993; Chandler & Sweller, 1991; Harp & Mayer, 1997, 1998; Mayer et al., 2001; Sweller & Chandler, 1994, Experiment 2). Chandler and Sweller (1991) revealed that redundant information is not only unnecessary for learning but in fact interferes with the learning of core information by leading learners to invest scarce cognitive resources in processing redundant material, hence imposing extraneous cognitive load on working memory.

Another major source of redundancy effects is adding interesting but conceptually irrelevant material to a lesson in an attempt to induce learner interest, commonly known as seductive details. Seductive details are redundant because they are in many cases added to entertain learners, not because they are essential to the acquisition of the core elements of a lesson. The underlying premise of including entertaining elements in a multimedia presentation is that increased learner interest will lead learners to pay more attention, to persist longer in learning, and to exert more effort to process the instructional material. Moreno and Mayer (2000a), however, found that adding entertaining but instructionally irrelevant pictures, music, and sounds interfered with learners' recall and understanding of the instruction. Any additional elements that are not crucial for understanding the core material reduce the working memory capacity available, and as a result, learners are left with fewer cognitive resources to use for processing core elements. The result is poorer performance. Research also points out that including seductive details that lack conceptual relevance to the main ideas of the lesson hinders the learning process, since they can take learner attention away from selecting and processing key elements of the lesson, causing a split-attention effect (Mayer et al., 2001). In addition, entertaining but irrelevant information may prime inappropriate knowledge about the topic and reduce the amount of cognitive resources that learners can use for processing essential information. Entertaining elements are most seductive when they are novel, active, concrete, and personally interesting (Garner, Brown, Sanders, & Menke, 1992), such as animated graphics and characters embedded in computer-based lessons.
Integrating Learner Prior Knowledge and Expertise

Although the theories and research findings discussed so far are comprehensive and convincing, there is still one missing component: learners. As many educators have argued, learners should be, and in fact are, active participants in their own learning (Dalgarno, 2001; Wertsch & Bivens, 1992). They play their part in learning by bringing varying degrees of prior domain knowledge and expertise, although this contribution is often ignored by instructional designers. Indeed, it has been reported that differences in the level of learner expertise make significant differences in cognitive processing and performance (Ericsson & Charness, 1994). According to Kalyuga et al. (1998), two concepts are important for understanding learner expertise and prior knowledge: 'Schema' and 'Automation'. They define a schema as "a cognitive construct that permits people to treat multiple sub-elements of information as a single element, categorized according to the manner in which it will be used" (p. 1). Thanks to schemata, which are stored in long-term memory, we can identify various kinds of dogs as dogs and, as a consequence, avoid overburdening working memory. One of the important characteristics of schemata is that they are transferable (van Gerven et al., 2000). That is, we can transfer existing schemata to new problems as long as they are related to earlier encountered problems.

Another important feature of learner expertise is automation. Automation decreases working memory load (Kotovsky & Simon, 1985; Shiffrin & Schneider, 1977) and requires few cognitive resources, since automated knowledge can be processed with little or no conscious effort. Schemata are saved in long-term memory with varying degrees of automaticity, and depending on the degree of automaticity, activating a schema requires different amounts of working memory resources. Therefore, if a learner faces a problem for which he or she has a fully automated schema, the learner does not have to devote cognitive resources to organizing and categorizing the problem. Instead, more conscious effort can be used for searching for a solution and performing the task, and thus a better and faster learning process can be achieved.

Keeping the effects of learner expertise on learning in mind, we may have to ask the following question when designing a multimedia learning environment: Do seemingly redundant learning materials have the same effect on every learner regardless of their level of expertise? Kalyuga, Chandler, and Sweller (2000) reported that while inexperienced learners benefit from a dual-mode presentation (i.e., a diagram with auditory narration), more experienced learners do not need an additional source of information, since they have already acquired sufficient knowledge to fully understand the instruction through only one source of information. Furthermore, it was found that experienced learners were actually able to skip the extra information when given the option to do so, and thereby no redundancy effect was found. Yet, under the condition in which
experienced learners were forced to attend to both auditory and visual information, their performance became worse because they had to process redundant information, which probably imposed extra cognitive load on working memory and required unnecessary mental effort. The authors concluded that multimedia instruction has beneficial effects on learning only under well-defined circumstances, and that learners should be given options to adapt themselves to a learning environment in order to take advantage of prior knowledge and expertise or even to compensate for a lack of them.

Summary

In this section, the construct of cognitive efficiency and its importance in multimedia learning were discussed. In addition, several instructional strategies were reviewed that can be used to prevent working memory from being cognitively overloaded and to require less mental effort from learners. Sweller and Chandler (1994) suggested that by not exceeding the limitations of working memory, it is possible to increase the efficiency of the learning process. In learning situations, therefore, the total amount of cognitive load imposed by instructional materials should not exceed the limited capacity of working memory if the learning process is to be sped up and successful performance achieved. Moreover, it has been shown that learner prior knowledge is an important factor to be considered when designing multimedia-based instruction, because different degrees of learner prior knowledge have different effects on learning processes and outcomes as well. Hence, it is necessary to examine the role of prior knowledge in the cognitive efficiency of multimedia learning, and the present study hypothesizes that there is a significant positive correlation between the level of learner prior knowledge of the target form and the level of cognitive efficiency. In other words, the more prior knowledge learners have of the target form, the less mental effort and time they will spend processing the learning material.

Another important point has been made in this section about adding interesting but conceptually irrelevant material to a lesson in an attempt to induce learner interest. A body of research has shown that any additional elements that are not crucial for understanding the core elements reduce the working memory capacity available, and as a result, learners are left with fewer cognitive resources to use for processing those core elements. Research also points out that any additional elements which are entertaining but unrelated to learning could take learner attention away from selecting and processing key elements of the instructional input, causing a split-attention effect. This point is particularly closely related to the next section on animated pedagogical agents, because one of the main reasons for using animated pedagogical agents is to entertain students and direct attention to key elements of instruction. Yet, given that entertaining elements could also split learner attention and that simple flashing arrows can assist learners to focus on and integrate related elements (Jeung et al., 1997), the effect of an animated pedagogical agent on learning processes and outcomes should be carefully examined and compared with other alternative media.
The present study hypothesizes that there is no significant difference between an animated pedagogical agent and an electronic arrow with voice in fostering learner performance when learning L2 structures, but that the arrow with voice requires less time and mental effort and therefore produces better cognitive efficiency.

Animated Pedagogical Agents

An animated pedagogical agent, by definition, is a lifelike character that resides in an interactive computer-based learning environment. It is a product of recent technological advances in user interfaces and autonomous software agents, and has been claimed to have great potential for facilitating human learning by offering customized instruction and context-specific feedback and advice to learners (Johnson et al., 2000; Moreno, Mayer, & Lester, 2000; Sampson, Karagiannidis, & Kinshuk, 2002). It is also a specific type of intelligent interface agent whose main purpose is to provide learners with pedagogical assistance. In other words, an animated pedagogical agent is a specific form of media used to present instruction, not an instructional method itself, and therefore what an animated pedagogical agent does can be delivered through other media with different degrees of efficiency and effectiveness.

The field of animated pedagogical agents originated from two research areas: 'Animated Interface Agent' and 'Intelligent Tutoring System' (Johnson et al., 2000). The research on animated interface agents offers a new approach to human-computer interaction by applying features of face-to-face human communication to human-computer interaction. This field has had a great impact on the technological development of animated pedagogical agents, especially through its emphasis on lifelike and believable agent behaviors, such as gesture, facial expression, and gaze. Intelligent tutoring system research, on the other hand, focuses on developing software that can adapt to individual learners and provide personalized feedback through the use of artificial intelligence. By incorporating these two areas of research, animated pedagogical agents are believed to enhance computer-based learning, especially the affective and motivational aspects of learning experiences (Atkinson, 2002).

As presented in Table 1, animated pedagogical agents have a range of functions depending on the environment which they inhabit: (a) they can provide a learner with opportunistic instruction, responding and adapting dynamically to the surrounding environment, including the learner (Moreno et al., 2001); (b) they can demonstrate how to perform a task (Sampson et al., 2002); (c) they can focus a learner's attention on certain elements or aspects of instructional systems using gestures, locomotion, or gaze (Atkinson, 2002); and (d) they can provide nonverbal as well as verbal feedback on learners' actions. In particular, it is argued that the capability of using nonverbal communicative behaviors (e.g., head-nodding for approval, head-shaking for disapproval, jumping up and down to congratulate students' success, a look of puzzlement for misunderstanding) allows a tutoring system to provide less obtrusive feedback (Johnson et al., 2000).
Table 1
Selected Animated Pedagogical Agents and Their Functions

Agent Name | Learning Environment | Roles & Functions | Delivery Modalities
Adele (Shaw et al., 1999) | Medicine & Dentistry | Observes students' actions, answers questions, & tests students | Speech, text, facial expressions, & gestures
AutoTutor (Graesser, Wiemer-Hastings, Wiemer-Hastings, & Kreuz, 1999) | Computer Literacy (Hardware, Operating Systems, Internet) | Asks questions, tests students, & provides feedback | Text, facial expression, & head nodding
Cosmo (Lester et al., 1997) | Internet Packet Routing | Provides explanations & advice on content and problem solving | Speech, locomotion, & gestures
Gandalf (Cassell & Torrison, 1999) | Solar System | Answers questions & explains content | Speech & gestures
Herman the Bug (Lester, Stone, & Stelling, 1999) | Botanical Anatomy & Physiology | Offers advice, feedback, & encouragement | Speech, facial expressions, & gestures
Jacob (Evers & Nijholt, 2000) | Manipulating objects in a virtual environment | Gives feedback & demonstrates tasks | Text, facial expressions, & gestures
PPP Persona (Andre et al., 1999) | Online Help Presentation | Presents information & draws learner attention | Synthesized speech & pointing gestures
Steve (Johnson et al., 2000; Kroetz, 1999) | Operating the engines aboard US Navy surface ships | Demonstrates tasks, monitors students, & provides feedback | Speech, locomotion, & gestures
WhizLow (Lester, Zettlemoyer, Gregoire, & Bares, 1999) | Architectures and Algorithms of the CPU | Provides instruction & corrects students' mistakes | Speech & gestures

Benefits of Animated Pedagogical Agents

A number of instructional benefits of animated pedagogical agents have been claimed and studied, although no single agent has provided all of the benefits to date (Johnson et al., 2000). A review of the literature shows that animated pedagogical agents in general have three possible effects on learner-computer interaction: (a) they may have a positive impact on learners' motivation and perceived experience of interaction with the system; (b) they may direct learners' attention to the system or tasks through the use of motion, gesture, and facial expression; and (c) they may improve learning outcomes (e.g., problem solving, understanding of knowledge) by providing learners with contextualized advice. In particular, advocates of animated pedagogical agents maintain that agents render the learning environment entertaining, which in turn motivates learners to interact more and to stay longer in the system. Due to this increased motivation and interaction, it is assumed that learner performance will improve. Among the three effects, however, the first two have close bearing on the present study, and thus, in the following sections, they will be discussed in relation to learning processes and outcomes based on empirical research findings.

Motivating Learners

One of the biggest benefits of using animated pedagogical agents put forward by advocates is that agents can entertain and motivate students better than other media and consequently lead students to exert more effort to make sense of instructional material. This claim is based on the interest theories of motivation (Harp & Mayer, 1998), which suggest that learners invest more effort when they are interested in the presented learning material.
A similar effect, called the "Persona Effect," has also been a focus of research in the field. More precisely, the persona effect refers to learners' positive perception of their learning experience caused by the presence of an animated agent (Moreno et al., 2001). The persona effect is derived from the hypothesis that people interpret their interaction with a computer or a computer-mediated figure as a social interaction (Reeves & Nass, 1996) and that they form a personal and emotional connection with a computerized character. This personal and positive feeling is believed to foster interest in learning tasks and to lead learners to work harder (Lester, Converse, Kahler, Barlow, Stone, & Bhogal, 1997). Although these two effects originate from different theoretical backgrounds, they will be discussed together in the following because they are concerned with similar phenomena, namely learners' subjective experiences of learning.

Numerous claims have been made about the positive influence on learning brought about by increased interest and positive perception. Yet it should be pointed out that increased motivation or positive perception does not necessarily have a cause-and-effect relationship with students' actual learning outcomes. In other words, although students may enjoy their interaction with an agent, it does not mean that they will learn better (Lester et al., 1997). In fact, many studies which employed animated pedagogical agents to deliver instruction found the persona effect but were not able to find any significant cognitive learning benefits of the agents. In the studies where better learning outcomes were found for the students who interacted with agents, the benefits were more likely caused by either different levels of advice offered or different instructional methods rather than by the higher interest or motivation triggered by the presence of the agents (Clark & Choi, in press; Dehn & van Mulken, 2000).

The lack of increased learning outcomes in spite of the high levels of interest elicited by animated agents could be explained by the theories of individual and situational interest. Individual interest refers to a relatively stable predisposition towards certain topics, objects, or events, which develops slowly over time (Ainley, Hidi, & Berndorff, 2002). Situational interest, on the other hand, is a psychological state triggered by immediate environmental stimuli, and as a result may not have a lasting influence on personal interest or learning (Hidi & Anderson, 1992). An example of situational interest is the way in which learners respond to seductive details (Krapp, Hidi, & Renninger, 1992). Given that an animated pedagogical agent is an environmental stimulus embedded in a computerized learning environment, not a learner-developed characteristic, it can be inferred that learner interest generated by an agent might not last long enough to have a significant impact on learning outcomes.

One of the studies that examined the motivational effect of an animated agent on learning is Moreno et al.'s (2001). They investigated whether learners make more effort to understand material and accordingly achieve deeper learning in a 'social agency environment' (Experiments 1 and 2). Participants in the experiment group interacted with a pedagogical agent, 'Herman', an alien bug residing in a
discovery- and design-based learning environment called 'Design-A-Plant'. The experiment group participants were asked to design a plant based on environmental conditions (e.g., the amount of sunlight and rainfall) and received verbal feedback from the agent on their choices. In contrast, participants in the control group were given onscreen text which explained how to make the right choices under specific environmental conditions using step-by-step worked examples instead of the agent. Unlike their counterparts in the experiment group, however, they were not allowed to design plants by themselves. In immediate posttests, significant learning differences were found in problem solving between the agent and non-agent conditions, while no difference was found in retention. Their interpretation of the results was that students interacting with the agent felt a personal connection with the agent and interest in the task, which in turn resulted in more effort and better performance.

However, it should be noted that the two groups received different instructional treatments in addition to the presence or absence of the agent. Only the agent group had an opportunity to design plants and received contingent feedback, and thus the better performance of the agent group cannot be exclusively attributed to the presence of the agent. Furthermore, although the agent group indicated higher interest in the material than the non-agent group, the groups did not show any significant difference in their subjective ratings of how difficult or understandable the material was, which can be related to the amount of mental effort they invested during learning (Salomon, 1984). It is also important to mention that the amount of mental effort invested by the participants during learning was not measured, and the claims were based solely on the results of a simple t-test on the levels of self-reported interest of the two groups.

In a study using five clones of Herman the alien bug, Lester et al. (1997) also examined the affective impact of animated pedagogical agents. The five clones were identical in their appearance, voice qualities, and nonverbal communicative behaviors, but differed in the levels of instructional advice and feedback they provided (task-specific vs. principle-based) and in the modalities they employed to deliver instruction (verbal-only or verbal and animated). Students who were taught by the agent with full functionalities and modalities produced the highest performance and gave the highest ratings on subjective assessment questions. The questions asked for students' opinions on the helpfulness, believability, and utility of the agent's advice. Based on simple descriptive analyses (i.e., means and standard deviations), the researchers interpreted the enhanced learning as likely resulting from students' positive perception of the learning experience and increased motivation, despite the obvious differences in the instructional treatments and pedagogical assistance provided.

Unlike the previous two studies, which failed to control for the effects of methods and feedback, Andre and colleagues (1999) conducted a well-controlled agent study. To find empirical support for the affective and cognitive effects of the PPP Persona on man-machine communication, they exposed participants to a technical description (the operation of pulley systems) and an informational presentation (names, pictures,
and office locations of fictitious employees) on the World Wide Web. Both the experiment and control versions provided the same treatments except that the control groups did not have the PPP Persona; instead, a voice was employed to convey the same explanations as the Persona, and its pointing gestures were replaced with an electronic arrow. Following the presentations, the Persona's affective effect was measured through a questionnaire, whereas its cognitive impact was measured by comprehension and recall questions. The results showed significant differences in the affective measures only. Participants interacting with the Persona for the technical descriptions found the presentation less difficult and more entertaining. These positive effects, however, were not found for the informational presentation about the fictitious employees; rather, subjects reported that the Persona was less appropriate for the domain and less helpful as an attention-directing aid. As for the cognitive effect, no significant difference was found between the Persona version and the non-Persona version in either the technical domain or the non-technical domain. The results of this study are consistent with Dehn and van Mulken's (2000) claim that the persona effect of an agent is domain-specific and can improve human-computer interaction if the agent displays functional behaviors matching the system's purposes. The findings also suggest that agent behaviors can easily be replaced by simpler means of communication and do not necessarily require an embodied character.

Erickson (1997) also argued that the adaptive functionality of an instructional system is often enough for learners to perform a task and achieve the same outcomes without the guidance of an agent. He also suggested that when including an agent, instructional designers should think about what benefits and costs the agent would bring, and that far more research should be conducted on how people experience agents. Nass and Steuer (1993) found that simply using audio was sufficient to induce learners to use social rules when interacting with a computer. Moreno and colleagues (2001) also noted that learners may form a social relationship with a computer itself without the help of an agent, and thus the image of an agent might not be necessary to invoke a social agency metaphor in a computer-based learning environment.

Given the studies discussed above, it is premature to conclude that the enchanting presence of an agent causes instructional benefit unless an agent condition is compared with a non-agent condition that provides the exact same learning conditions, including the types of instructional methods and the levels of feedback and advice. In fact, Moreno et al. (2001) found that students who interactively participated in designing plants and received contingent feedback on their choices still learned better than those who just passively listened to the verbal explanation, even after the image of the agent was deleted from the screen and the agent's voice was replaced by audio narration (Experiment 3). What is more interesting is that students in the two groups did not show any significant difference in their perception of the learning experience, which had been speculated to be the main cause of learning outcomes.
That is, even without the benefit of the presence of an agent, students were still able to learn the material better if the instructional method was right for the task. Moreno et al. also found that the presence of the agent's visual image did not have any impact on the affective or cognitive aspects of learning, while the modalities of the agent made significant differences in learning outcomes (Experiments 4 and 5). These results may imply that what makes a difference in student learning is not the agent itself, or the increased motivation or positive perception caused by the agent, but rather the level of interactivity and contingent feedback as well as modalities, which will be empirically tested in the present study.

Focusing Learner Attention

According to cognitive load theory, the presence of an animated pedagogical agent can be detrimental to learning by dividing a learner's limited cognitive resources across different visual segments. More specifically, cognitive load theory predicts that when the animation of an agent is presented simultaneously with other visual information such as graphics or text, learners need to split their attention between these two sources, and as a consequence, the presence of an agent becomes harmful to learning rather than beneficial. The split attention effect could be even worse when the agent's dialogue is presented as onscreen text, since both animation and text require learners' visual resources (Moreno et al., 2001). However, even replacing onscreen text with spoken text may not be enough to overcome the split attention effect if learners have to mentally connect the visual information which the agent is presenting and the aural information delivered through the spoken text (Jeung et al., 1997).

In order to reduce the split attention effect caused by the presence of an agent, it is suggested that animated pedagogical agents use non-verbal behaviors (i.e., pointing gestures, jumping to the target object) to draw learners' attention to relevant learning material (Atkinson, 2002). By focusing a learner's attention on pertinent segments of the lesson, the agent can connect auditory and visual information for learners, thus freeing learners' cognitive resources to be used for solving problems or understanding material. Yet, there is a possibility that agents' attention-focusing behaviors can act as seductive details in that they are not conceptually related to the primary learning objectives but still catch learners' attention. Additionally, even though agents' behaviors are supposed to help learners pay attention to the lesson, learners still have to spend some of their limited cognitive resources to understand the agents' behaviors and facial expressions. Nevertheless, studies have shown neutral effects of agent presence, with neither negative nor positive impacts on drawing learner attention (Atkinson, 2002; Moreno et al., 2000).

Using worked examples to teach proportion-word problem solving, Atkinson (2002) examined the effects of the presence of an animated pedagogical agent and of delivery modes (i.e., voice vs. text) on learning. In the study, the animated pedagogical agent was programmed to deliver instructional explanations which
contained elaborated information regarding solution steps for word problems, either aurally through a human voice or textually through a cartoon-like word balloon appearing above the agent's head. The contents of the explanations were identical for both conditions. The agent was also programmed to link the explanations with the respective visual information on the screen by utilizing nonverbal communicative behaviors (i.e., pointing gestures, glancing toward a solution step). The analyses of learning-process and learning-outcome measures revealed that participants in the voice plus agent group outperformed those in the text-only group in perception of difficulty of practice problems, near- and far-transfer problems, and affective measures of the learning environment, while their performance was superior to that of the voice-only group only on far-transfer problems. Atkinson interpreted this result positively, insisting that the presence of an agent did not distract learners' attention. However, the fact that the voice-only group performed as well as the voice plus agent group on most assessments suggests that the image of an agent may not be needed to achieve a certain level of learning. Moreover, given that the voice-only group did not have the benefit of visual indicators connecting aural information with the appropriate visual information, it is not very clear what proportion of the learning benefits can be attributed to the presence of the agent and its attention-directing behaviors. And again, what should we choose if other visual indicators, such as an electronic arrow, rather than the agent's nonverbal behavior could have the same effect on focusing learner attention but are far easier and more economical to create?

Craig, Gholson and Driscoll (2002) provide a possible answer to the question proposed above. They designed a 3 (agent properties: agent only, agent with gestures, no agent) x 3 (picture features: static picture, sudden onset, animation) study to investigate issues concerning the ways to capture learners' attention. For the agent property, they investigated: (a) whether the use of an agent causes a split attention effect; and (b) whether integration of the agent's gestures with a picture or animation helps to direct learners' attention. For the picture features, they tested whether using parts of a picture or animation captures learners' attention. In the experiments, participants learned the process of lightning formation presented through an agent and multimedia material (e.g., pictures, narration, or animation). The narrated information was synchronized, prior to or simultaneously, with the agent's pointing gestures toward relevant parts of a picture, or with a sudden onset (e.g., a color singleton or electronic flashing) or animation of those parts. The study did not find the persona effect; that is, participants did not find learning with an agent particularly more interesting than learning without an agent. Moreover, the agent made no difference in learners' performance in either the cognitive load assessment or the performance tests (retention, matching, and transfer). On the contrary, Craig and his colleagues found significant effects of both a sudden onset and animation of parts of the pictures for focusing learners' attention. This result can be
taken as evidence that a well-integrated pedagogical agent is not harmful to learning, but not beneficial either. It can also be inferred that there is a far simpler and easier way (coloring and flashing) to achieve the same effect than using a technically complicated agent. To clarify the confusion about the relative effect of a pedagogical agent for directing learner attention, the present study compares an agent to an electronic arrow with voice.

Summary

Animated pedagogical agents are autonomous interface agents that attempt to improve and extend Intelligent Tutoring Systems (ITS) by modeling the kinds of interaction that occur between a student and a human teacher (Shaw, Johnson, & Ganeshan, 1999). Thus, a typical agent-based learning environment incorporates not only cognitive tutoring elements but also affective ones (i.e., employing an animated agent that utilizes interactive language, facial expressions, and bodily gestures). For example, an effective pedagogical agent should be able not only to provide a learner with knowledge and skills but also to motivate the learner. Due to their affective, social behaviors and visual appearance, animated pedagogical agents are hypothesized to render a learning environment entertaining, which in turn improves learner interest and motivation.

The review reveals that while strong enthusiasm persists in the field regarding the motivational and instructional effects of animated pedagogical agents in multimedia-based learning environments, very few empirical studies have been conducted so far, and the results of these studies are rather neutral or sometimes negative. The literature does not support the assumption that the motivational effects or interestingness of animated pedagogical agents lead to better performance in learning, including problem solving and knowledge retention (Dehn & van Mulken, 2000). The optimistic perspective of the field is largely based on the hope that rapidly advancing technology will be able to fix the existing technical limitations that prevent an animated agent from fully imitating a human tutor (e.g., recognizing individual learners' learning styles as well as specific characteristics and hence providing more contextualized and individualized feedback). As a matter of fact, the majority of animated pedagogical agent studies published to date have focused on describing the technological capacity of pedagogical agents without paying much attention to theory- and research-based principles (Bradshaw, 1997). Yet, it should be pointed out that there are some aspects of these studies that cannot be fixed by advanced technology, including methodological shortcomings (e.g., weak theoretical foundations, study design, variable manipulation, and measurement instruments). For instance, only a few studies compared an agent-embedded environment with a non-agent environment on an equal basis. In many cases, there were other significant factors in addition to the presence of an agent which could have contributed to differences in learners' attitudes and learning.

The instruments employed to measure the effects of agents have posed another major problem.
Several studies failed to report the validity and reliability of the instruments they used to measure motivational effects, which consequently undermined the researchers' interpretations with regard to agent effects. Thus, at this point, we need a more theory-based approach to agent-embedded learning environments and more structured research methodologies to investigate the effects of these environments.

What is clear about the field of animated pedagogical agents is that, because the field is still in its infancy, only a small number of empirical studies have been conducted, and even worse, only a few of these have employed systematic and thorough research methods; common problems include small sample sizes, confounding variables, and unreliable measuring instruments. In addition, the fact that only a small number of knowledge domains have been the focus of agent studies does not help us understand what kind of domain could benefit most from animated pedagogical agents with what kind of functionalities. For example, most pedagogical agent studies have employed discovery-based learning environments and scientific materials for instruction even though there is evidence that the entertaining effect of an agent is domain-specific (van Mulken, Andre, & Muller, 1998). Consequently, it is not clear what effects would be gained in different environments with different domains. The present study is expected to contribute to the field in this respect because it examines the effect of an animated pedagogical agent for learning linguistic structures, a rarely explored research area.

Significance of the Study

The foregoing review and discussion of the studies on second or foreign language (L2) instruction, in particular how to focus learner attention on target linguistic elements, have revealed a great need for research on the effects of explicit instruction on the learning of L2 linguistic forms. Due to the strong influence of the noninterventionist approach to teaching L2, which basically suggests that there is no need for instruction in L2 classrooms, explicit rule presentation, one of the explicit instructional methods, has received negative reviews and reactions from language scholars and practitioners alike. Although some recent studies have shown that explicit rule presentation helps students acquire L2 forms better than other attention-directing instructional methods, more empirical evidence supporting the method is definitely required to lessen the existing distrust. The present study will provide empirical support for explicit rule presentation by demonstrating that students' learning of the target form, English relative clauses, is facilitated by the method. In addition, the present study will expand the fields of SLA (Second Language Acquisition) and CALL (Computer Assisted Language Learning) by employing advanced multimedia systems (i.e., a pedagogical agent and an electronic arrow with voice) to deliver the instruction. Furthermore, the present study will shed light on the issue of the cognitive efficiency of multimedia, which has received little attention in the field of instructional technology in spite of its potential to save the field from the circular arguments about whether or not multimedia can make any significant differences to cognitive products.
In order to do so, the study conducted a controlled experiment in which one instructional method (i.e., explicit rule presentation) was delivered via two different delivery media (i.e., an animated pedagogical agent vs. an electronic arrow with voice) and examined the relative effects of each medium on both cognitive processes and products. The study will also contribute to the field of animated pedagogical agents by addressing the important issues surrounding such agents: (a) Does an animated pedagogical agent help direct learner attention to certain instructional elements? (b) Do its physical presence and movement on a computer screen distract learners and split their attention? (c) Can the increased interest caused by the entertainingness of an animated agent make learners exert more mental effort in processing learning material? Specifically, given that only a few empirical studies have been conducted on the topic and that their results were mixed due to inappropriate experimental designs and measuring instruments, the present study, which adopts a more refined and rigorous methodology and measurements, will move the field of animated pedagogical agents forward.

Research Questions of the Study

The major research questions addressed in the present study include:
1. Do explicit rule presentation and reading comprehension tasks have positive effects on the learning of English relative clauses?
2. Does the type of medium - an animated pedagogical agent vs. an electronic arrow with voice - delivering the same instructional method (explicit rule presentation) have a differential effect on the learning of English relative clauses?
3. Does the type of medium - an animated pedagogical agent vs. an electronic arrow with voice - delivering the same instructional method (explicit rule presentation) have a differential effect on the amount of learning made at a given unit of time and mental effort?
4. What is the relationship between learner interest in the system and the subsequent learning of English relative clauses in an agent-based learning environment?

Research Hypotheses of the Study

The review of literature in three research areas - second language learning, multimedia learning, and animated pedagogical agents - is the foundation of the present study. The primary objective of the study is to resolve the confusion between instructional methods and animated pedagogical agents (a delivery medium), to investigate the effects of explicit rule explanation on the acquisition of an L2 linguistic form, English relative clauses, and to empirically examine the cognitive efficiencies of different media for learning second language (L2) grammar in a multimedia-based learning environment.

The design of the study is a true experiment with pre- and posttests involving two treatment groups. The pre- and posttests consist of one production test and two comprehension tests. The treatment groups differ with regard to the way in which they are given explicit rule presentation on the target linguistic form. Seventy-four participants are randomly assigned in equal proportions to the two treatment conditions.
There are a total of seven dependent variables in the present study: the amount of mental effort, time, learner interest, self-efficacy beliefs, active choice, performance, and cognitive efficiency measures. Descriptive statistics are used throughout the study to summarize all measures. Additionally, correlation coefficients, t-tests, one-way and factorial ANOVAs, and regression analyses are used to investigate the relationships among mediating measures (i.e., learner prior knowledge, interest, motivation), process measures (i.e., time required to acquire the form, the amount of mental effort invested to process the lesson), and outcome measures (i.e., scores from the comprehension and production tests).
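To make the analysis plan concrete, the sketch below illustrates, on entirely hypothetical data, the kinds of analyses just listed: descriptive statistics by group, a correlation, an independent-samples t-test, and a regression/ANOVA model relating a mediating measure, process measures, and an outcome measure. The variable names and values are invented for illustration and do not reflect the study's dataset, coding, or instruments.

import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-participant records: delivery medium, pre/post scores,
# self-reported mental effort, time on task (minutes), and interest rating.
df = pd.DataFrame({
    "group":    ["agent"] * 4 + ["arrow"] * 4,
    "pretest":  [10, 12, 9, 11, 10, 13, 8, 12],
    "posttest": [16, 18, 14, 17, 18, 20, 15, 19],
    "effort":   [6, 7, 5, 6, 4, 5, 4, 5],
    "time":     [14, 15, 13, 16, 11, 12, 10, 12],
    "interest": [5, 6, 4, 5, 4, 5, 3, 4],
})
df["gain"] = df["posttest"] - df["pretest"]

# Descriptive statistics by treatment group.
print(df.groupby("group")[["gain", "effort", "time"]].agg(["mean", "std"]))

# Correlation between a mediating measure (interest) and an outcome measure (gain).
r, p_r = stats.pearsonr(df["interest"], df["gain"])

# Independent-samples t-test on gain scores between the two delivery media.
t, p_t = stats.ttest_ind(df.loc[df.group == "agent", "gain"],
                         df.loc[df.group == "arrow", "gain"])

# Regression / ANOVA of gain on group, mental effort, and prior knowledge (pretest).
model = smf.ols("gain ~ C(group) + effort + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
print(f"r = {r:.2f} (p = {p_r:.3f}); t = {t:.2f} (p = {p_t:.3f})")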
The design of the present study, however, differs from other existing second language learning studies, which typically provide learners with a series of pre-made sentences containing a target form, divorced from a specific communicative context. Instead, the present study adopted a task-based approach to create a computer-based learning environment for second language learning. Specifically, learners in the study are presented with two language learning tasks which are developed in a multimedia environment. The first task includes the provision of explicit rule explanation to learners, while the second task, a reading comprehension task, consists of two reading texts which are thematically related to each other and a series of comprehension questions. Thus, learners are required to complete a language learning task in a meaningful context, rather than passively read sequences of thematically unrelated sentences void of context.

Learners' focal attention to and noticing of target grammatical elements are required for the learning of L2 grammar (Rosa & O'Neill, 1999) because only attended forms can be perceived and processed by learners. Several instructional methods have been developed to direct learner attention to target linguistic features. The present study used one particular instructional method to direct learners' attention to English relative clauses: explicit rule presentation. Explicit rule presentation is provided to learners before they get to the reading texts. The rules are delivered in an aural mode rather than in a text mode to avoid a split attention effect between the explanation and the example sentences displayed on the screen. The two multimedia delivery methods (i.e., an animated pedagogical agent vs. an electronic arrow with voice) used to deliver explicit rule presentation are compared on the extent to which they: (a) improve learners' learning of English relative clauses and (b) improve the cognitive efficiency of acquiring an L2 linguistic form. The six research hypotheses of the present study, in relation to the above research questions, are explained in detail below.

Hypothesis 1

There will be a significant difference in learner performance between the pretests and posttests. Participants' performances will significantly improve after receiving explicit rule explanation on the target form and the reading comprehension task.

The general consensus in the field of second language acquisition (SLA) is that attention to and noticing of the target grammatical element are necessary for learning to take place, although it has been controversial how much and what type of attention to the target form is necessary for learning (Izumi, 2002). Several instructional methods to direct learners' attention to target linguistic forms have been developed. Among them, the present study adopts explicit rule presentation, which provides explicit explanations about the rules of a target form, English relative clauses, and its use. Recent studies show that explicit attention-directing methods, such as explicit rule presentation, work better than implicit methods, and accordingly, learners who are explicitly directed to the target grammar perform better than those who are implicitly exposed to the target form. This is because an explicit method better primes learners for a specific grammatical form (Alanen, 1995).

Hypothesis 2

There will be no significant difference in learner performance between participants who interact with an animated pedagogical agent and those who interact with an electronic arrow with voice.

What causes learning is an instructional method, not a delivery medium (Clark, 1983, 1994a, b, 2001). Therefore, there will be no significant differences in learner performance on the target L2 grammar among participants who receive the same instructional method delivered through different media. The instructional method employed in this study to facilitate learners' learning of the target form is explicit rule presentation. When the instructional method is held constant, the final products of learning, comprehension and production of the target grammar, will be the same regardless of what medium is used to deliver the instruction. Therefore, the two multimedia techniques utilized to deliver explicit rule presentation, an animated pedagogical agent vs. an electronic arrow with voice, will not make any significant difference in final learning outcomes.

Hypothesis 3

The more prior knowledge of a target L2 grammar a learner has, the less effect an instructional method will have on learner performance. That is, participants with less prior knowledge of the target form will benefit from the instructional treatment more than those with more prior knowledge.

The instructional method employed in this study will significantly enhance learner performance when learners have little prior knowledge of the target L2 grammar. Research shows that explicit rule presentation has positive effects on L2 grammar acquisition by sensitizing learners to target linguistic features embedded in learning material. Given that explicit rule presentation provides information not only about the structure of the target grammar but also about its use, it will help learners build a schema for the target grammar. However, once learners have acquired a schema related to a target grammar, the benefit of an instructional method will decrease or disappear. That is because the acquired schema allows learners to identify and bring out existing knowledge with regard to the target grammar and to process the learning material without the information provided by the instructional method (Kalyuga et al., 1999).
Furthermore, once learners have understood the relationships among the relevant elements of the target grammar, the guidance provided by an animated agent or an electronic arrow with voice will not be needed.

Hypothesis 4

There will be no causal relationship between the levels of learner interest in the instructional system with which they interacted and the levels of learner achievement measured by gain scores from the pretests to the posttests.

The underlying premise of including entertaining elements such as an animated pedagogical agent in an instructional presentation is that the increased learner interest triggered by the entertainingness of the system will lead learners to pay more attention, to persist longer in the system, and to exert more effort to process incoming information. Although numerous claims have been made about the positive influence of animated pedagogical agents on interest or motivation, it should be noted that increased interest does not necessarily have a cause-and-effect relationship with students' actual learning outcomes. This is especially true when the positive emotion triggered by an animated pedagogical agent is situational interest. Research shows that situational interest may not be maintained long enough to have a significant impact on learner performance. In other words, although students may enjoy their interaction with an animated agent, it is not guaranteed that they will learn better than those who do not interact with an animated agent. For instance, a number of studies which claimed motivational and consequent learning effects of animated pedagogical agents in fact provided different instructional treatments to agent groups and non-agent groups. As a result, it is hard to attribute the learning benefits of the agent groups exclusively to animated pedagogical agents and their affective impact on learners (Clark & Choi, in press; Dehn & van Mulken, 2000).

Hypothesis 5

An electronic arrow with voice will require less time and mental effort from participants than an animated pedagogical agent in achieving the same level of learning performance when delivering the same instructional method, explicit rule presentation.

Different media employed to deliver the same instructional method will have significantly different effects on levels of cognitive efficiency, defined in the present study as 'the relative amount of learning scores made at a given unit of mental effort and/or time with a specific delivery medium'. The underlying premise of cognitive efficiency is that a specific medium used to present instruction may not produce different cognitive outcomes compared to another medium, but it can still have a direct impact on cognitive processes.
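To make the quoted definition concrete, the sketch below computes a learning gain per unit of mental effort and per unit of time. The ratio form, the function name, and the sample values are assumptions for illustration only; they are not the exact formula applied in the study.

# Illustrative sketch of a cognitive-efficiency score in the sense defined above:
# learning gained per unit of invested mental effort and per unit of time.
# The ratio form is an assumption for illustration, not the study's exact formula.
def cognitive_efficiency(gain_score, mental_effort, time_minutes):
    """Return (gain per unit of mental effort, gain per unit of time)."""
    per_effort = gain_score / mental_effort if mental_effort else float("nan")
    per_time = gain_score / time_minutes if time_minutes else float("nan")
    return per_effort, per_time

# Two hypothetical learners reaching the same gain with different investments:
# the one who needs less effort and less time is the more cognitively efficient.
print(cognitive_efficiency(gain_score=8, mental_effort=4, time_minutes=11))
print(cognitive_efficiency(gain_score=8, mental_effort=6, time_minutes=15))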
Participants receiving an electronic arrow with voice will need to invest less mental effort than those interacting with an animated pedagogical agent in reaching the same level of performance. Moreover, participants interacting with an electronic arrow with voice will achieve the same level of learning outcome significantly faster than those interacting with an animated pedagogical agent. An animated pedagogical agent is a seductive detail because it is active, concrete, and interesting, but conceptually irrelevant to the learning of English relative clauses. As a consequence, it takes learners' visual attention away from the instruction. Even though the animated pedagogical agent in this study is used to deliver explicit explanation of the target form, its presence and eye-catching behaviors (i.e., gestures, lip movements, locomotion) will demand learners' conscious attention and impose unnecessary extraneous cognitive load on working memory. On the contrary, the electronic arrow with voice will achieve the same effect in explaining the rules and usages of the target form without overloading learners' limited working memory because it is integrated into the input text.

Hypothesis 6

There will be a significant positive correlation between the levels of learner prior knowledge of a target grammar and the levels of cognitive efficiency.

According to Cobb (1997), learner prior knowledge, or schema, is one of the major contributing factors to cognitive efficiency. The more prior knowledge learners have of the target form, the less mental effort and time they will have to invest to process the learning material. Schema helps learners distinguish relevant target elements from irrelevant ones and then integrate new information into their existing knowledge of the target form (Kalyuga et al., 1998). Therefore, learners who have already developed an appropriate schema of the target form will not need extra help from attention-directing techniques for processing the instruction, and will be able to avoid spending limited cognitive resources on integrating information. Instead, more conscious effort can be used for comprehending the instructional material. As a consequence, they will be able to learn faster and/or more easily than their counterparts who have not yet developed an appropriate schema. However, the strong presence of an animated pedagogical agent could force even experienced learners to attend to unnecessary visual information provided by the agent. This will probably impose extraneous cognitive load on learners' working memory and require unnecessary mental effort investment. As a result, their performance will suffer and their level of cognitive efficiency will decrease.

CHAPTER II: METHODOLOGY

Overview of the Research Design

The main purpose of the present study is to compare the impact of two different multimedia-based learning systems used to deliver an instructional treatment for teaching English as a second language, in particular English relative clauses. The design of the study is a true experiment with pre- and posttests and involves two treatment groups: Agent Group and Arrow Group. Explicit rule presentation on the target linguistic form, English relative clauses (e.g., what functions English relative clauses have and how they are formed), and a reading comprehension task were adopted as the instructional treatment for the study. In particular, the explicit presentation of the target linguistic feature was used as a pre-task for the reading comprehension task.
Throughout the study, the group that received explicit rule presentation via an animated pedagogical agent is referred to as 'Agent Group', while the group interacting with an electronic arrow with voice is referred to as 'Arrow Group':

(a) Agent Group received a lesson about the target form from an animated pedagogical agent that explained the rules and functions of English relative clauses; the animated agent also displayed facial expressions and gestures to pinpoint important aspects of the instruction on the computer screen - the target form and related elements.

(b) Arrow Group received a lesson about the target form from a voice (the same human voice incorporated into the pedagogical agent) and a simple electronic arrow used to direct learner attention to important aspects of the instruction on the computer screen - the target grammar and related elements.

In other words, the two treatment groups differed from each other only in how the explicit rule presentation for the pre-task was delivered; all other conditions were equal. The text used for the reading comprehension task was the same for both treatment groups. The rule learning and reading comprehension tasks were developed based on the task-based approach to L2 instruction, which is explained in detail below. Both the pre-task and the reading comprehension task were computerized. The rule learning task was delivered on a CD, while the reading comprehension task was delivered on the World Wide Web (the Web). The rule learning task was designed to sensitize learners to the target form embedded in the reading task by providing essential information on what functions English relative clauses have and how they are formed. The reading comprehension task, on the other hand, was developed to expose learners to the usages of the target form in a meaningful context.

A total of seven major dependent variables were measured in the present study: mental effort, time spent to learn English relative clauses, self-efficacy for learning English, learner interest, active choice, cognitive efficiency, and performance. Among them, the scores from the mental effort, self-efficacy, learner interest, and active choice measures were combined to measure a more collective construct, 'Motivation'. The measures of these dependent variables were used to investigate the research hypotheses discussed in Chapter 1.

In summary, the present study employed a true experimental design with one between-group factor, the way in which explicit rule presentation was delivered to learners. The two treatment groups, denoted as Agent Group and Arrow Group, were compared with regard to the dependent variables and the relations among them. The study also employed time as a within-subject factor, with the pretest and posttest.

Participants

The minimum number of participants required to provide enough statistical power (i.e., to guard against a Type II error) was determined using Cohen's power analysis (Cohen, 1988). It was estimated that 30 participants in each group would be needed to detect an effect size of 0.5 (a medium effect size) at the significance level of 0.05.
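The same kind of a priori estimate can be reproduced with standard power-analysis routines; the short sketch below is only illustrative and is not the original calculation, which followed Cohen (1988).

# Illustrative a priori power calculation (Python); not the original analysis.
# Effect size d = 0.5 and alpha = .05 follow the values stated above.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Statistical power achieved with 30 participants per group at d = 0.5.
achieved_power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05, ratio=1.0)

# Conversely, the per-group sample size needed for a conventional power target of .80.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80, ratio=1.0)

print(f"power with n = 30 per group: {achieved_power:.2f}")
print(f"n per group for power = .80: {n_per_group:.1f}")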
Several ESL programs at local universities and community colleges were contacted to request participation in the study, and two of them, the Language Academy at the University of Southern California (USC) and the ESL department at Santa Monica College (SMC), agreed to participate. USC is a private, research-oriented university, whereas SMC is a public community college. Despite this difference, the ESL programs of the two institutions are similar in that most of their students are international students or new immigrants who want to improve their English skills, such as listening, reading, speaking, and writing.

To recruit participants, the researcher visited classrooms, explained the study, and then asked students to volunteer as subjects. It was also explained that after completion of the study, each participant would be given a $20 gift certificate from a bookstore or a coffee shop. Then, a sign-up sheet was passed around to collect the contact information of the students who wanted to participate in the study. Initially, a total of 94 students signed up, 29 from the USC Language Academy and 65 from the SMC ESL department. Each participant was randomly assigned by the researcher to a treatment group, resulting in 50 participants for Agent Group and 48 participants for Arrow Group. Of these 98 participants, 74 (Agent Group = 32, Arrow Group = 42) completed all the treatments and tests, while 24 missed Day 2 of the experiment or failed to complete the posttests or other measures. The rather large number of students who did not show up on Day 2 either did not come to school at all or had to attend a special workshop session that day. Consequently, only the data from the 74 participants (USC = 26, SMC = 48) were included in the analyses. Due to the small number of participants from the USC Language Academy, it was decided to combine the participants from the two institutions instead of running separate analyses for each institution. The result of an independent t-test supported this decision by showing that there was no significant difference between USC and SMC participants in their prior knowledge of the target linguistic form, which was measured on Day 1 of the experiment (t = .683, p = .497).

In summary, of the 74 participants, 24 were male and 50 were female. The mean age of participants was 24.21 years (minimum = 17, maximum = 40), with a standard deviation of 4.805. Participants' mean length of stay in the United States was 16.90 months (SD = 24.3). The rather large standard deviation observed in the length of stay might be due to some participants who had stayed over three years at the time of the experiment. Participants' first languages varied, including Korean (N = 29), Japanese (N = 17), Chinese (N = 14), Spanish (N = 7), and Arabic (N = 4). There was only one participant for each of the following native languages: Russian, Indonesian, Tamil, Urdu, and German. The demographic information for each group is summarized in Table 2. There was no significant difference between the groups in age (t = -1.584, p = .118) or length of stay (t = -.753, p = .454).

Table 2
Summary of User Profiles

Variable | Agent Group (N = 32) | Arrow Group (N = 42)
Institution | USC: 11 (34.4%), SMC: 21 (65.6%) | USC: 15 (35.7%), SMC: 27 (64.3%)
Gender | Male: 11 (34.4%), Female: 21 (65.6%) | Male: 13 (31.0%), Female: 29 (69.0%)
Age | Mean = 23, Median = 23, SD = 4 | Mean = 25, Median = 24, SD = 5
First languages | Korean (19), Japanese (7), Farsi (3), Chinese (3), Arabic (2), Spanish (1), Indonesian (1), Russian (1), German (1) | Chinese (11), Korean (10), Japanese (10), Spanish (6), Arabic (2), Tamil (1), Urdu (1)
Length of stay in the U.S. (months) | Mean = 14.4, Median = 6.50, SD = 18.72 | Mean = 18.75, Median = 6.0, SD = 27.91
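The group comparisons reported above (prior knowledge, age, and length of stay) are independent-samples t-tests of the kind sketched below; the values shown are hypothetical stand-ins, not the study's data.

# Illustrative independent-samples t-test (Python); the values are hypothetical
# stand-ins for one of the group comparisons reported above (e.g., age).
from scipy import stats

agent_group = [23, 22, 25, 21, 24, 23, 20, 26]
arrow_group = [25, 27, 24, 23, 26, 25, 22, 28]

t_stat, p_value = stats.ttest_ind(agent_group, arrow_group)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# As in the text, a p-value above .05 would be read as no significant group difference.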
Target L2 Form for Instruction

The English relative clause was selected as the target linguistic form of the present study's instructional treatment. Relativization in English is a sentence structure in which one sentence is embedded in another when the two sentences share a co-referential noun or noun phrase. An English relative clause is a dependent clause and acts as an adjective. In other words, it modifies a noun or noun phrase in the main clause by making it more specific or by giving additional information about a person, idea, or thing. A relative clause should always be located right after the noun which it modifies. An English relative clause can be categorized as restrictive or nonrestrictive, depending upon the necessity of the information it provides (Lock, 1996). A restrictive relative clause provides essential information to define or clarify the noun or noun phrase it modifies, whereas a nonrestrictive clause provides unnecessary, but possibly interesting, information. Examples of the two types of relative clauses are as follows:

(a) Restrictive clause: The lady who lives next door is a famous writer.
(b) Nonrestrictive clause: Ms. Hoff, who lives next door, is a famous writer.

In the first example, the relative clause 'who lives next door' is used to make the lady more specific and eventually changes the meaning of the sentence. Without this information, the meaning of the sentence, 'the lady is a famous writer', would not be the same or clear to the listener or reader. On the contrary, the second sentence does not need the information presented in the relative clause because 'Ms. Hoff' itself conveys enough information about the person, and the listener or reader does not need the information in the relative clause, 'who lives next door', to identify the subject of the sentence. The present study used only restrictive relative clauses because they are more common than the other type (Izumi, 2000).

A relative clause typically begins with a relative pronoun such as who, whom, which, that, and whose. Of them, who, which, and that are the most commonly used pronouns. The selection of a pronoun depends on the noun which the relative clause refers to and on the type of relative clause used. A relative pronoun can have different functions in a sentence (e.g., subject, direct or indirect object, object of a preposition). The following examples demonstrate the different functions of relative pronouns according to their grammatical functions in relative clauses:

(a) Subject (SU) - the student who is sitting at the corner
(b) Direct Object (DO) - the young man who I taught years ago
(c) Indirect Object (IO) - the girl who he gave his mom's necklace
(d) Object of Preposition (OPREP) - the person with whom I went to Paris

Research on the acquisition of English relative clauses has mainly focused on the relationship between the head noun phrase in the main clause and the relative pronoun in the relative clause. For instance, studies have examined which sentence types containing a relative clause are acquired better and faster than other types (e.g., Gass, 1980; Roth, 1984). Table 3 presents the possible sentence types with embedded relative clauses. Among them, subject relative clauses such as SS and OS are reported to be acquired first, and object of preposition relative clauses such as OOP and SOP last (Izumi, 2000). Studies have shown that instruction helps learners acquire relative clauses faster and better and, furthermore, that a lesson on the object of preposition relative clause (OPREP) helps learners acquire not only the target form but easier forms as well (Gass, 1982; Eckman, Bell, & Nelson, 1988).

Table 3
Sentence Types with Embedded Relative Clauses

Head Noun in Main Clause | Relative Pronoun in Relative Clause | Sentence Type | Example
Subject | Subject | SS | The rabbit which scratches the bear looks at the tiger.
Subject | Object | SO | The girl who I saw was jumping in the park.
Object | Subject | OS | Everybody loves the book which was about London.
Object | Object | OO | I checked out the book which my friend recommended.
Subject | Object of Preposition | SOP | The woman who my friend is interested in is a doctor.
Object | Object of Preposition | OOP | I know the man who John is talking to.

Regardless of grammatical function and difficulty, however, each sentence type with an embedded English relative clause basically involves the same embedding process (Doughty, 1991): substitute the co-referential noun phrase in the dependent clause with a relative pronoun; put the relative pronoun at the beginning of the relative clause; and move the relative clause right after the co-referential noun phrase. Therefore, in order to master the embedding process a learner should be able to "(1) place the relative clauses correctly, (2) select an appropriate relative pronoun, and (3) make any necessary rearrangements of the clause constituents" (Lock, 1996, p. 54).
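As a toy illustration of the three-step embedding process just described, the sketch below mechanically combines two simple sentences that share a noun phrase. It assumes the shared noun phrase begins the dependent clause, as in the worked example used in the pre-task; the function name and the fixed choice of relative pronoun are illustrative assumptions, and the sketch is not a general grammar tool.

# Toy illustration (Python) of the embedding process described above.
# Assumes the shared noun phrase begins the dependent clause, as in the example.
def embed_relative_clause(main_clause, dependent_clause, shared_np, pronoun="who"):
    # Step 1: substitute the co-referential noun phrase in the dependent clause
    #         with a relative pronoun.
    relative_clause = dependent_clause.rstrip(".").replace(shared_np, pronoun, 1)
    # Step 2: the relative pronoun is already at the beginning of the relative
    #         clause here, because the shared noun phrase began the sentence.
    # Step 3: move the relative clause right after the co-referential noun phrase
    #         in the main clause.
    return main_clause.replace(shared_np, f"{shared_np} {relative_clause}", 1)

print(embed_relative_clause("The man is a computer programmer.",
                            "The man lives next door.",
                            shared_np="The man"))
# -> The man who lives next door is a computer programmer.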
For the present study, the object of preposition relative clause (OPREP) was selected as the main target linguistic form because of the form's difficulty of acquisition and also because of its generalizability to easier types of relative clauses such as subject and object relative clauses. In other words, pedagogical intervention on OPREP relative clauses would facilitate the learning of other types of relative clauses as well, by allowing learners to apply the knowledge and skills acquired from the instruction on the OPREP type to the easier types (Zobl, 1983). This is possible because the same embedding procedure is involved in the composition of any type of English relative clause, as discussed earlier.

There are two types of OPREP relative clauses, which differ in terms of where the preposition is located in the relative clause: the pied-piping and the preposition stranding OPREP. The former places the preposition right before the relative pronoun (e.g., stage phobia from which many professional performers suffer) and is considered more formal. The latter, on the other hand, places the preposition after the verb (e.g., stage phobia which many professional performers suffer from). The present study used only the latter type of OPREP for developing the learning tasks and the performance tests because the preposition stranding OPREP is more common, and studies have shown that L2 learners acquire this type of OPREP earlier than the pied-piping type (Wolfe-Quintero, 1992).

Multimedia-Based Learning Environment: Reading Wizard

The present study adopted the task-based approach to develop a multimedia language learning environment called 'Reading Wizard'. The pedagogical and multimedia elements of 'Reading Wizard' were designed and developed entirely by the researcher using several multimedia applications and web technologies, which will be discussed in detail in the apparatus section. Reading Wizard, whose major pedagogical function was to teach a particular linguistic form, English relative clauses, was designed to draw learners' attention to the structures, functions, and meanings of the target form. The name of the learning environment, 'Reading Wizard', was purposely selected to prevent learners from paying attention exclusively to the grammatical structures of the target form.

For the development of the learning tasks included in 'Reading Wizard', the task-based approach, discussed in the previous chapter, was adopted. The task-based approach to L2 instruction, which was by and large influenced by communicative language teaching (CLT) as well as form-focused instruction, advocates the use of tasks to promote the learning of both language structures and communicative skills, and thus emphasizes the incorporation of meaning and form in language learning syllabi and activities.

There were two major tasks in Reading Wizard: a rule-learning task (pre-task) and a reading comprehension task. Participants were informed that their main task was to read stories about phobias and then to answer questions asking their opinions about the topics covered in the reading text. The reading text included numerous examples of English relative clauses to provide learners with the target form in meaningful contexts. In addition to the reading comprehension task, the present study also included a pre-task in which learners were presented with explicit explanation of when English relative clauses are needed, how they work, and how to process them. The pre-task was intended to prime the learners to the target form so that they would process the form better when they encountered it in the readings. The pre-task included a learning guide, 'Genie', to deliver explicit rule presentation on English relative clauses. Genie was represented as an animated male character in the agent version of the learning environment, whereas he was represented as a voice only in the arrow with voice version. In fact, the pre-task was where the differences between Agent Group and Arrow Group lay. In Agent Group, Genie (200 by 200 pixels) displays a number of facial expressions and motions (Figure 1).
A typical agent study hypothesizes that these human-like behaviors render a learning environment more entertaining and consequently induce a social relationship between a learner and a computer. The resulting social relationship encourages the learner to employ a range of cognitive and motivational strategies, which eventually improves learning outcomes (Moreno et al., 2001).

Figure 1
Animated Pedagogical Agent's Sample Behaviors (Locomotion, Pointing Gestures, Signaling Behaviors, Idle Time Behaviors)

The pointing gestures of the animated Genie were replaced by an electronic arrow in the arrow with voice version of the environment. The inclusion of the arrow was decided based on Atkinson's study, which compared an Agent group with a Voice-only group (Atkinson, 2002). He suggested that the better learning scores found in the Agent group might have resulted from the agent's pointing gestures, which could reduce the amount of cognitive resources used to connect the aural information with the corresponding text on the computer screen. In other words, the Voice-only group, which did not get the extra help of the pointing gestures, might have had fewer mental resources available for processing the learning materials because they had to use some of their resources to connect the aural information with the visual information. Nevertheless, the spoken explanations and the display of the text information lasted for the same amount of time in both versions.

Pre-Task: Explicit Rule Presentation on Target Form

The pre-task was delivered in a Microsoft PowerPoint file on a CD-ROM. It consisted of 9 slides, each of which included spoken explanations about English relative clauses and examples in which the target form was embedded. Both the Agent version and the Arrow with voice version contained the same spoken explanations and example sentences. The spoken explanations were recorded using a male voice speaking Standard American English instead of using a text-to-speech engine of the kind commonly used in agent-based programs. It has been demonstrated that people prefer a human voice to a computerized voice in agent-based learning environments (Atkinson, 2002). The spoken explanations were also delivered in a personalized, conversational style, which has been shown to be more effective than a formal style (Mayer, Fennell, Farmer, & Campbell, 2004). In the present study, participants were addressed in the first and second person (e.g., "Hi, there! Welcome back to Reading Wizard", "Today, we are going to learn about English relative clauses", "When you are ready, please click this button to move on to the next page").
[Figures 2 and 3: sample slides from the Agent version and the Arrow with voice version of the pre-task, showing the steps for combining two sentences using a relative clause (Step 1: find a shared noun in the sentences; Step 2: replace the noun with a relative pronoun; Step 3: move the relative pronoun and relative clause) and worked examples, including the relative pronoun used as the object of a preposition.]

In each slide, a learner has to click a microphone icon at the top left corner of the screen to bring out Genie and listen to his explanations in the Agent version. Here, all verbal explanations are delivered by the animated pedagogical agent (Figure 2). Genie's voice and his mouth movements are all synchronized. Genie not only speaks to the learner, but also moves around the screen and uses pointing gestures to direct the learner's attention to a certain word or sentence on the screen. Genie also displays other, non-pointing gestures such as waving, holding hands, and smiling. These non-pointing gestures are used to render Genie more human-like and believable. Genie's behaviors in the Agent environment are summarized in Figure 1. In contrast, a learner does not see a character in the Arrow with voice version. The learner only hears a male voice, the same voice used in the Agent version, which also introduces itself as 'Genie'. In this version, an electronic arrow is used to direct the learner's attention to a specific word or sentence (Figure 3). The electronic arrow appears from a corner of the screen and dissolves where it has stopped once Genie finishes the explanation about the specific sentence or word.

In the first slide of both learning environments, when a learner clicks on the microphone icon, Genie, realized either as an animated character or as an electronic arrow with voice, shows up. Then, he welcomes learners and tells them that they are going to learn about English relative clauses. Genie also explains that they should click the arrow button at the bottom of the screen to move on to the next page, and then disappears. In the next four slides, Genie explains that an English relative clause is used to give more information about a noun or a noun phrase, and that it usually starts with a relative pronoun such as 'which', 'that', or 'who'. He also explains how to form a relative clause by combining two sentences, and he uses the examples displayed on the screen to help the learner understand better. The first half of the pre-task slides was intended to activate learners' prior knowledge of the target form. It was also used to provide some basic information to those who had very little prior knowledge.
In the second half of the pre-task, Genie explains the OPREP function of English relative clauses, the main instructional target of the present study. The slides on the OPREP English relative clauses have basic interactivity features: (a) the learner can control the pace of learning; and (b) the learner can check the correct answer by clicking the question icon on the screen after solving an OPREP problem. Although learners do not have to provide their answers in writing, Genie always asks whether the learner's solution is correct or not. It should also be noted that even though both versions of the pre-task are of the same length, individual learners have total control over how long they stay on each slide. They can click the arrow button whenever they want to move on, so the amount of time each individual learner spends in the pre-task environment varies; this amount of time was used to calculate cognitive efficiency.

Main Task: Reading Comprehension

The reading comprehension task was designed to provide learners with opportunities to experience the target form in meaningful contexts. It was also intended that learners would be repeatedly exposed to the target form so that they would learn the form better by synthesizing and analyzing the features of English relative clauses. The reading comprehension task is composed of two stories about phobias, which are divided into three sub-sections (Appendix A). Each section includes a semantically coherent group of sentences. Each sub-section also comes with three to five questions. The questions do not directly ask about what learners have read on the page. Rather, the questions ask for learners' opinions related to the topics covered in the section in order to prevent learners from exclusively focusing on the grammatical form to answer the questions. In this way, it was expected that learners would not be overly concerned with the grammar, but would be more concerned with the contents and meaning of the text. Unlike the pre-task, the reading comprehension task was identical for both versions and was delivered through the Web.

The two reading texts about phobias were created based on reading passages found in an ESL textbook, 'Finishing Touches', volume A (Eckstut-Didier, 1994), and on information about phobias found on the Web. However, the reading passages were substantially altered to include a number of samples of English relative clauses. The first story included in the reading comprehension task provides basic information on phobias such as symptoms, causes, and cures. The second part of the task, delivered in two different pages, is about a female singer who suffered from a stage phobia. As the story unfolds, a learner finds out why she developed a stage phobia and how she eventually overcame the problem. The reading text also comes with an electronic dictionary. A learner can move the computer mouse over a vocabulary item whose meaning is unknown to the learner. Then, a small window pops up with the meaning of the word displayed in English (Figure 4). The list of lexical items included in the electronic dictionary is presented in Appendix B.

[Figure 4. Electronic Dictionary Embedded in Reading Task: a screenshot of the reading page with a pop-up vocabulary window ('Symptom: Evidence of disease or physical problem') displayed over the text.]
Different types of English relative clauses are embedded in the two stories about phobias. Of the 62 sentences in the three reading sections, 29 include English relative clauses. That is, approximately 47% of the sentences in the three reading sections are related to the target of the instruction. Out of the 29 sentences, there are 6 Subject relative clauses (SU), 4 Direct Object relative clauses (DO), and 19 Object of Preposition relative clauses (OPREP). About 72% of the target sentences containing some type of English relative clause had 'which' as the relative pronoun (n = 21), while the other 28% included 'who' (n = 8). Since the OPREP is the main target of the instruction, participants were exposed to more sentences of this type than of any other type of relative clause, and a range of prepositions was used in these sentences (i.e., of, through, from, for, on, with, about, in, into, and to). Appendix C provides the sentences containing English relative clauses used in the three reading sections.

Apparatus

The entire instructional treatment of the study was computerized. Therefore, all the experiments were conducted in computer labs at both schools. The computers in the USC computer lab were PCs running the Windows XP operating system, whereas those in the SMC computer lab were running the Windows 2000 operating system. Although Windows XP is a newer version of the PC operating system, no difference was expected in carrying out the experiments because the multimedia programs and other online materials of the present study run the same on both operating systems. Each computer in both labs was equipped with a 19-inch (USC) or a 17-inch (SMC) color monitor, a computer mouse, and a headset. Before the experiments, the researcher set the monitors to a pixel resolution of 1024 x 768 so that each participant would have the same screen setting regardless of the physical sizes of the monitors.

The multimedia elements of the pre-task were developed using several multimedia application programs. First, Genie's dialogues for both versions of the pre-task were recorded in the WAV file format using an audio editing program called 'Audacity'. Audacity is a free, open-source program which lets users not only record but also edit sound files. The script for the dialogues was created by the researcher and then read by a male professor from the USC Rossier School of Education. Some of the dialogue files were edited to cut out empty portions of the file and to remove background noise. Once the audio files were edited, they were imported into either the agent version or the arrow version of the agent behavior script files. The agent version of the pre-task was developed using the Microsoft Agent technology.
The Microsoft Agent technology enables users to incorporate animated characters into the interface of desktop computer applications or web pages. Animated characters can be programmed to move within the computer screen, speak via a text-to-speech engine or recorded audio files, and accept spoken voice commands. In the present study, the agent was programmed to move around, display various gestures along with emotions, and speak through recorded audio files. The Microsoft Agent software is pre-installed in the Windows 2000 operating system, and for other operating systems it can be downloaded royalty-free as long as it is used only in the user's own application and not distributed for commercial purposes.

An animated character can be programmed using a number of computer programming languages (e.g., C++, JavaScript, Visual Basic, etc.). However, the scripts of the animated agent's behaviors in the present study were generated using a script helper program for the Microsoft Agent technology called 'MASH' 6.5 from BellCraft Technologies (2004). MASH is an easy-to-use program that allows users to develop an agent-based presentation by simply dragging the agent character around the screen and directing what the agent says and does (Appendix D). With the help of MASH, users do not have to know any programming language and can use plain language to generate the agent's dialogues and behaviors. For example, to move the agent to the left, a user can simply type a command line such as Genie.Play "MoveLeft". After the agent's verbal and nonverbal behaviors were generated as executable files (.exe), they were incorporated into the PowerPoint presentation so that the agent version of the pre-task could be delivered in the same format as the arrow version of the pre-task.

Microsoft Office PowerPoint (Microsoft Corporation, 2003) was the only software used to develop the Arrow with voice version of the pre-task. The electronic arrow was selected from the ready-made graphic components of the program, called 'AutoShapes', and changing the size of the arrow required only dragging the corners of the arrow. To synchronize the movements of the arrow, such as entrance, emphasis, and exit, with the audio files of the agent dialogues, the researcher used the custom animation function of PowerPoint. Using this function, the researcher was able to control the appearance of the arrow and the presentation of the corresponding spoken explanations.

Not only the pre-task but also the main reading task was developed with the help of computer multimedia technology. Microsoft Office FrontPage (Microsoft Corporation, 2003) was utilized to deliver the reading comprehension tasks online via the Web. FrontPage is a popular web editing program which allows users to create and maintain web pages. It can incorporate text, sound, pictures, and movies to enhance the presentation of material online. By using FrontPage, the researcher was able to hyperlink the different sections of the reading comprehension tasks. JavaScript was also used to prevent learners from going back to the previous pages of the reading comprehension tasks.
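For readers curious what such scripted agent behavior looks like outside of MASH, the following rough sketch drives a Microsoft Agent character through comparable calls using Python and the pywin32 package. It is only an illustration under stated assumptions (the Microsoft Agent runtime, the Genie character file, and pywin32 installed); it is not the MASH-generated code used in the study, and the file path, coordinates, and spoken line are hypothetical.

import time
import win32com.client

# Attach to the Microsoft Agent ActiveX control and connect to the Agent server.
agent = win32com.client.Dispatch("Agent.Control.2")
agent.Connected = True

# Load the Genie character from its character file (path is an assumption).
agent.Characters.Load("Genie", "genie.acs")
genie = agent.Characters("Genie")

genie.Show()                # display the character on screen
genie.MoveTo(320, 240)      # move the character toward a screen position
genie.Play("GestureRight")  # play a pointing-style gesture
genie.Speak("Today, we are going to learn about English relative clauses.")

time.sleep(10)              # keep the script alive while the asynchronous requests play
genie.Hide()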
All the data collected from the user background survey, the various questionnaires, and the performance measures were stored electronically on the researcher's personal computer using database and web server technology, namely the Apache HTTP server and PHP. The Apache HTTP server is one of the most popular open-source web servers in the world. PHP, which stands for PHP Hypertext Preprocessor, is a widely used open-source server-side scripting language used to create dynamic web pages and interactive databases and to process the data passed from HTML forms. Both programs are freely available for download. A computer programmer who was a graduate student in the USC computer science department was hired to download and use these programs to develop the database and the web server for the present study, which stored and processed the user data dynamically.

The computerized data collection system improved the convenience and accuracy of the procedure. In particular, through the database and web server technology, participants' answers and entry times along with exit times were automatically recorded on the server, and therefore the present study was less prone to coding errors. Furthermore, because all the data were stored as soon as they were generated in the experiments, a significant amount of time that would normally be spent recording the data in data files was saved. All the data from the database were converted into several spreadsheet files for further statistical analyses by the programmer.

Variables and Measuring Instruments

Learner Background Survey

The learner background survey solicited information about a participant's name, email address, length of stay in the United States, and scores on English skill tests such as the TOEIC (Test of English for International Communication) or the TOEFL (Test of English as a Foreign Language), if she or he had any (Appendix E). The survey was delivered online, and the participant's answers were automatically recorded in the database. The participant's English skill test scores were included to measure the participant's overall prior knowledge of English. However, because only a small number of participants had taken such tests and submitted their scores, these data were not used in the analyses. The participant's email address was used as a user login ID for the experiment in order to track the participant's navigation and interaction with the system.

The learner background survey included two questions concerning the participant's use of computers, that is, the frequency of computer use and the level of knowledge and skills for using a computer. Since the present study examined the effects of computer multimedia technology on learning English, it was expected that different frequencies and competency levels of computer use might have a different impact on participants' interaction with the system. Finally, the survey included seven task-specific self-efficacy questions to measure the participant's initial self-efficacy beliefs regarding learning English. The same self-efficacy belief questions were also presented to the participant at the end of the study in order to measure any differences possibly caused by the participant's interaction with Reading Wizard.
A detailed discussion of the self-efficacy measure is provided in the following section.

Performance - Acquisition of the Target Form

Participants' knowledge of the target form, English relative clauses, was measured using three different types of performance tests: a sentence combination test, a picture interpretation test, and a grammaticality judgment test. The three tests were derived from Izumi (2000), who in turn had adopted them from Doughty (1988) and Gass (1982). The three performance tests were modified by shortening the length of each test and by using different sentence structures and vocabulary items. However, the picture interpretation test adopted from Izumi (2000) was used without any change.

The primary purpose of the performance tests was to assess participants' knowledge of relative clauses, and they included items related to the SU, DO, and OPREP types of relative clauses (Appendix F). To be more specific, the picture interpretation test and the grammaticality judgment test were utilized to assess participants' receptive knowledge of the target form. In the present study, receptive knowledge refers to one's capability to comprehend a certain linguistic form in a given context or to judge whether the form is used correctly or not. On the other hand, the sentence combination test was used to measure participants' productive knowledge of the target form. Productive knowledge refers to one's capability to produce a form in a specific context. In summary, the two comprehension tests measured to what extent the learner understood the meaning and usage of English relative clauses, and the production test measured how well the learner used the form in sentences. No time limits were set for any of these tests.

The performance tests were administered twice, before and after the instructional treatment, in order to measure the effects of the instructional treatment and the multimedia delivery media on participants' acquisition of English relative clauses. The pre- and posttests used the same picture interpretation test, but the items were randomly reordered in the posttest. For the sentence combination test and the grammaticality judgment test, basically the same items were used in both tests. However, in the posttest the majority of content words were replaced by other words of similar difficulty, and the items were presented in a different order. The sentence combination test, which is the production test, was administered before the picture interpretation test and the grammaticality judgment test in order to remove any possible effect of the positive input (i.e., sentences with embedded relative clauses) presented in the comprehension tests (Izumi, 2000).

Participants' answers on each test measure were scored separately first and then combined to obtain overall performance scores for participants' acquisition of the target form. The answers on the sentence combination test were manually graded by the researcher, while the answers on the picture interpretation test and the grammaticality judgment test were graded automatically by the system based on the pre-programmed answers. One point was assigned to a correct answer which showed a target-like use of an English relative clause in the given context, while zero points were assigned to an incorrect answer.
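Because the picture interpretation and grammaticality judgment items were graded automatically against pre-programmed answers, the scoring amounts to a simple key comparison. The following minimal sketch illustrates this dichotomous scheme with hypothetical item identifiers and keys; it is not the study's actual grading code. The manual scoring of the sentence combination test applied the same one-or-zero criterion, as the example that follows illustrates.

# Hypothetical pre-programmed keys for two grammaticality judgment items
# and one picture interpretation item.
ANSWER_KEY = {"gj_01": "correct", "gj_02": "incorrect", "pi_01": "B"}

def score_responses(responses: dict, key: dict) -> int:
    """Award one point per item whose response matches the key, zero otherwise."""
    return sum(1 for item, answer in key.items() if responses.get(item) == answer)

learner_responses = {"gj_01": "correct", "gj_02": "correct", "pi_01": "B"}
print(score_responses(learner_responses, ANSWER_KEY))  # two of the three items match the key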
For example, in the sentence combination test, when a learner used the DO type of relative clause where the SU type was needed, the combined sentence was given no point. However, errors in spelling, articles, and tense were ignored as long as the sentence contained the correct type of relative clause.

Sentence Combination Test

There were 12 items in the sentence combination test, which aimed at assessing the degree to which a learner could construct a correct sentence using a relative clause. In the test, participants were asked to combine the two sentences in a given item, always starting with the first sentence. The head noun in the first sentence was underlined, and participants were told to combine the two sentences in such a way that the head noun would be identified or specified. In order to lead learners to use English relative clauses, they were not allowed to use connecting words such as 'And', 'However', 'Because', 'So', 'While', 'When', and so forth.

Table 4
Relative Clauses Included in Sentence Combination Test

Function of Relative Pronoun: Subject of the Sentence
  Relative clause in subject position: The teacher who is now in the hospital was injured in the accident.
  Relative clause in object position: Everybody likes the book which is about London.
Function of Relative Pronoun: Object of the Sentence
  Relative clause in subject position: The girl who I saw was jumping in the park.
  Relative clause in object position: I have bought the car which Cynthia wanted to buy.
Function of Relative Pronoun: Object of Preposition
  Relative clause in subject position: The woman who Jim is interested in is married.
  Relative clause in object position: I know the boy who Sarah is talking to.

Six out of the 12 items included relative clauses in the object position, while the other six had relative clauses embedded in the subject position. Furthermore, three items had the relative pronoun functioning as the subject of the relative clause (SU), three items had it as an object of the relative clause (DO), and six had it as an object of a preposition (OPREP). A variety of prepositions were embedded in the six OPREP relative clauses, including 'In', 'To', 'With', 'For', 'By', and 'On'.

Picture Interpretation Test

The picture interpretation test was used to measure how well a learner could comprehend a sentence containing a relative clause. In each item, participants were asked to read a sentence and then select the one picture out of three that best describes the sentence. The other two pictures in a given item are partially correct, but only the correct answer depicts the sentence fully appropriately. There were nine items in total: five items have a relative clause modifying the subject of the main sentence, and the other four were constructed to have a relative clause embedded in the object position of the main sentence. The items were also built in such a way that three had the relative pronoun (i.e., who, which, that) placed in a subject position in the relative clause, three in an object position, and three in an object of preposition position.

Grammaticality Judgment Test

Like the picture interpretation test, the grammaticality judgment test was adopted to measure a learner's receptive knowledge of English relative clauses. In the test, participants were presented with 12 sentences containing different types of relative clauses. They were asked to decide whether or not the sentences were correct.
Participants were informed that all words were spelled correctly so that they would pay attention only to the structure and meaning of a sentence. Participants were also told that 'WHOM' was intentionally not used and that they should not mark as 'incorrect' an item in which they thought 'WHOM' had to be used. Of the 12 items, six represented the OPREP type of relative clause, while the other six were divided between the SU type and the DO type. There were six correct and six incorrect items, and the incorrect items included four different types of error, as in Gass (1982) and Izumi (2000):
(a) Pronoun retention: The woman who Ted was attracted to her was working at a hospital.
(b) Non-adjacency: The man is intelligent who I met yesterday.
(c) Incorrect relative pronoun: My son has the toy car who Tom talked about last week.
(d) Inappropriate relative pronoun omission: Michelle was interested in the guy fixed my computer.

Motivation

Although motivation itself was not included in the six hypotheses discussed in the previous chapter, several motivational variables were measured in the present study in addition to learner interest. The motivational variables were included for a more in-depth study of the relationship between learner characteristics and performance. Following Pintrich and Schunk's (2002) suggestions, participants' motivation to learn English relative clauses in the multimedia-based learning environment, 'Reading Wizard', was measured using four different measures: mental effort, active choice, subjective rating of the learning system, and self-efficacy beliefs. Pintrich and Schunk also included in their motivation indexes learner persistence, which involves focusing on a learning task and working for a longer time, especially in the face of distractions. Persistence can be measured by tracking the consistency of interactions over time in media-based programs (Clark & Choi, in press), but the present study did not use persistence or time spent on a task as a motivation index, because implementing such a tracking system was technically very difficult and because more time spent on a task could result from a learner being distracted by the visual aspects of the learning environment. Rather, the present study measured participants' subjective ratings of the learning environment in order to assess their emotional responses to the system, including their interest in the learning guide, Genie, the lessons, and the overall system.

Additionally, participants' self-efficacy beliefs about learning English as a second language were measured as an indicator of their motivation. Self-efficacy refers to people's beliefs about their capability to organize and execute actions to succeed at designated levels (Bandura, 1986). Self-efficacy has a positive influence on various aspects of motivation, including choice of task, effort, persistence, and performance (Pintrich & Schunk, 2002). Self-efficacy is also task-specific in that it indicates one's perceived competence in a given domain (Bong & Clark, 1999). Joo and colleagues argued that a learner's perceived confidence in performing the given
academic task, as well as confidence in using computers, should be measured in a computer-based instructional setting to determine the effects of different motivation factors (Joo, Bong, & Choi, 2001).

Mental Effort

A motivated learner is more likely to exert more mental effort during instruction (Pintrich & Schunk, 2002). The present study defines the construct of mental effort as 'the amount of cognitive resources or conscious effort which a learner has spent to acquire a unit of new information or new declarative knowledge'. Learning instructional material involves perceptual processing, retrieval of relevant knowledge, elaboration of the content, construction of new knowledge, and tuning of action. It is also assumed that a learner expends mental effort only in processing new declarative knowledge because, as the learner's knowledge and skill increase, the learner can succeed with less or even no mental effort. Two self-report measures, a modified version of Salomon's Amount of Invested Mental Effort (AIME) scale and Paas' Mental Effort Scale, were employed to determine the amount of mental effort that a learner invested in processing the instruction (Appendix G). Salomon's original AIME scale is a four-point Likert-type scale with four items. The present study, however, eliminated the last item and added three more response points to improve the reliability of the scale, based on Gimino's (2000) findings. Consequently, the AIME scale used in the study had three items with seven response points ranging from 1, or 'not at all', to 7, or 'extremely'. Paas' Mental Effort Scale, which was developed from Borg and Dornic's 'Perceived Task Difficulty Scale' (1972), was also modified to be a seven-point scale. Both scales were delivered online, and as soon as a participant indicated the amount of mental effort s/he thought s/he had invested during instruction, it was electronically stored in the database. Each participant's mental effort was measured right after interacting with either the animated pedagogical agent or the electronic arrow with voice in the pre-task. Each participant's mean mental effort score on each scale was computed based on his or her answers.

Active Choice

A learner's choice of a task or learning software, when the learner is given multiple options, can indicate which task or software the learner is interested in. To measure the degree to which participants were willing to learn English using 'Reading Wizard' in the future, one item using a seven-point scale was included in the post-task questionnaire: "If you had a chance to use 'Reading Wizard' again, how much would you like to do so?" Participants' scores in the two treatment groups were collected to compute each group's mean score.

Subjective Ratings

Learner interest in a pedagogical agent or an agent-based learning system is one of the most popular motivation variables in agent studies. Typically, learners are presented with a number of questions after interacting with an animated pedagogical agent regarding their perceptions of the agent's usefulness, interestingness, and helpfulness.
In the present study, participants' perceptions of the system were assessed twice on a 7-point self-report scale: the first scale was administered after learning about English relative clauses in the pre-task and the second after working on the reading task (Appendix H). The first scale included questions regarding the helpfulness, interestingness, and usefulness of the lesson and of the learning guide 'Genie', realized as an animated pedagogical agent or as an electronic arrow with voice. The second scale, on the other hand, contained questions asking participants to rate the helpfulness, interestingness, and usefulness of the overall system, Reading Wizard, including the dictionary function contained in the reading task.

Self-Efficacy

Two different types of self-efficacy were measured in the present study following Joo, Bong, & Choi (2000): computer self-efficacy and ESL self-efficacy were assessed separately (Appendix E). Participants' computer self-efficacy beliefs were measured before they interacted with the instructional system by asking participants to rate themselves in terms of their overall level of technical skill and knowledge of computers (i.e., 1 - Not very competent, 2 - Low level of competency, 3 - Moderate level of competency, 4 - High level of competency, 5 - Expert). As for participants' academic self-efficacy, that is, ESL self-efficacy, participants were asked to rate their confidence in performing various learning activities which they would do when learning English with Reading Wizard, both before and after they interacted with the system. The reason for measuring the academic self-efficacy twice was to investigate whether the instructional system had any impact on learner self-efficacy at all. A 7-item ESL self-efficacy scale was developed based on Bandura's (1986) guide to constructing self-efficacy scales. Participants were asked to rate how confident they were in doing essential language learning activities (i.e., listening, reading, and learning grammar) using an 11-point scale ranging from '0 - Cannot do at all' to '10 - Certain can do'. In particular, the questions regarding reading English text and learning English grammar were more specific in that they asked about learners' confidence in performing these activities in a computer-based environment. In addition, the post-task academic self-efficacy items addressed participants' projected confidence to do the activities in the future.

Time

Although 'Persistence' was not included in the indexes of motivation in the present study due to the technical and interpretational issues mentioned above, learners' interaction with the system was logged online, including their entry and exit times for each task and web page. In particular, the time spent on the pre-task and the main task, automatically recorded by the computer system, was used to assess how much time was required for an individual learner to process the task and to learn the target linguistic form. The time spent on a task was measured in seconds. This variable was also used to compute the learner's cognitive efficiency score, which is described in detail below.
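As a concrete illustration of how time-on-task can be derived from such logs, the sketch below computes the number of seconds between a logged entry time and exit time. The timestamp format and function name are assumptions for the example; the study's actual PHP and database code is not reproduced here.

from datetime import datetime

TIMESTAMP_FORMAT = "%Y-%m-%d %H:%M:%S"   # assumed log format

def seconds_on_task(entry_time: str, exit_time: str) -> float:
    """Return the time spent on a task, in seconds, from its entry and exit timestamps."""
    entered = datetime.strptime(entry_time, TIMESTAMP_FORMAT)
    exited = datetime.strptime(exit_time, TIMESTAMP_FORMAT)
    return (exited - entered).total_seconds()

# A learner who entered the pre-task at 10:05:12 and left at 10:21:47
print(seconds_on_task("2005-03-01 10:05:12", "2005-03-01 10:21:47"))  # 995.0 seconds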
Cognitive Efficiency

Cognitive efficiency of media refers to 'the relative amount of speed and mental effort required by a delivery medium (delivering the same instructional method) to reach a certain learning criterion'. Here, speed is defined as 'the amount of time invested by a learner to achieve a learning criterion'. In the present study, however, cognitive efficiency is defined as 'the relative amount of learning gain made at a given unit of mental effort and/or time with a specific delivery medium'. Therefore, the relative cognitive efficiency of a delivery medium was determined using the means of the gain scores from the pretests to the posttests and the time/mental effort scores: the gain score was divided by the amount of time or mental effort invested. It is assumed that a medium A is cognitively more efficient than a medium B if a student using medium A achieved a higher level of performance than a student who spent the same amount of time and mental effort but achieved a lower level of performance using medium B.

Procedures

The study was introduced as a computer-based language learning software development project using multimedia technologies. Participants were informed that the major purpose of this computer-based ESL program was to enhance their reading comprehension skills, and that their tasks were: (a) to read and understand the reading text; and (b) to answer the questions after reading the passages. As these tasks show, reading comprehension was emphasized as the main purpose of the study so that learners would not focus extensively on the target language structures.

During the experiment, participants were required to meet with the researcher two times at the designated computer lab. The average gap between the first and the second meeting was about 2 weeks, although it varied from participant to participant. Throughout the treatment, the researcher was present at the computer lab to supervise the experiment and to help participants with technical and procedural problems, if necessary.

On the first day of the study (Day 1), participants read and signed the consent form which had been approved by the USC Institutional Review Board (Appendix J). Participants were then given an instruction sheet according to their group which explained the procedures and the things that they should and should not do. The Agent and Arrow Groups received slightly different instructions (Appendix I). The researcher also read the instruction sheet together with participants. Afterwards, participants were provided with an introduction CD in which Genie, realized either as an animated character or as an electronic arrow with voice, explained the purpose of the study and the things they would do during the experiment, including navigation tips. By interacting with the introduction CD, participants had an opportunity to experience the pre-task environment (either the agent-based or the arrow with voice based environment) and to train themselves in navigating the learning environment. The introduction CD aimed at familiarizing participants with the learning environment and, at the same time, at removing any effect caused by their unfamiliarity with the instructional system.
After learning about the study from the introduction CD, each participant typed a web address in a web browser (the address was provided on the instruction sheet) to log on to the system and fill out the learner background survey questionnaire. Participants also took the pretests, which measured their prior domain knowledge. On the second day of the study, participants received the explicit rule presentation on the target form from the pre-task CD and then worked on the reading comprehension task delivered via the Web. During the learning tasks, the amount of time that a learner spent was recorded. The amount of mental effort which a learner invested in processing the explicit rule presentation was likewise measured through the use of both Salomon's AIME and Paas' Mental Effort Scale. In addition, participants' perceptions of the learning system and their self-efficacy beliefs were assessed using the subjective ratings scale and the self-efficacy scale, respectively. Finally, learners' post-treatment performances were measured through the use of one production test and two receptive tests.

In order to control participants' exposure to the target form outside of the experiments, participants were asked not to discuss what they did during the treatments with others, including their professors and fellow students, which was also specified in the consent form. The professors of the participating classes were asked not to teach the form during the treatment period and not to answer students' questions related to the target form.

[Figure 5. Experiment Schedule. Both groups: Day 1 (Introduction CD, General Info Collection, Pretests), followed two weeks later by Day 2 (Explicit Rule Presentation - by the animated pedagogical agent for the Agent Group and by the electronic arrow with voice for the Arrow Group - then Mental Effort & Subjective Rating, Reading Task, Subjective Rating, Post-task Self-Efficacy, and Posttests).]

Pilot Test

In January of 2005, a pilot test for the present study was conducted with 19 ESL students (8 Korean, 6 Chinese, 3 Thai, & 2 Japanese speakers), recruited from two local universities: the University of Southern California and California State University, Dominguez Hills. Among them, 9 students were enrolled in an intensive
When participants came to the lab the second time (the overall interval between the first and the second session was about 7 days), participants were instructed to use an ESL learning program, ‘Reading Wizard’, to improve their reading comprehension skill. After the target learning tasks (explicit rule presentation and reading comprehension task), mental effort and interest measures were administered, and then the posttests followed by the post-treatment questionnaire were presented to participants. The posttests were essentially the same as the pretests except that some lexical items were replaced with other equally difficult items and all items were presented in a different order. No time limit was set for any of tasks, tests, or questionnaires. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 114 Through the pilot study, the researcher was able to learn several problems that might be encountered by participants while using ‘Reading Wizard’. Based on the results of the pilot test, the following adjustments were made to the present study: (a) One week interval between Day 1 and Day 2 seemed to be too short. Due to the possible practice effect of the pre-test, it was decided to extend the interval to two weeks. (b) Even after the researcher explained the procedure in detail, participants needed a lot more assistance to work on the pre-task and the reading task. In particular, participants seemed to need help with the navigation of the system. As a result, more detailed instruction sheets were made for each group and the introduction CD was developed to familiarize participants to the system. (c) A few questions in the subjective rating questionnaire were modified or removed because they solicited redundant information. It was also decided that participants’ mental effort measured after the reading task was not necessary because the study attempted to find out possible differences in the amount of mental effort made by delivery media used in the pre-task. (d) It was discovered that the PHP did not recognize the single quotation mark and when it encountered the mark it terminated the program. To avoid the problem, changes were made to the PHP part of the program. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 115 CHPATER III: RESULTS AND DISCUSSIONS Descriptive Statistics Tables 5 through 11 summarize descriptive analyses of the results for 74 participants of the present study. As shown in Table 5, all the self-report measures have good reliabilities (Cronbach’s alpha ranging from the lowest ‘.7624’ to the highest ‘.8966’), providing reliable ways of measuring variables. The alpha for the Salomon’s mental effort scale (AIME) was ‘.7624’ which is the lowest among the scales utilized in the study. The internal consistency reliability of the Paas’ Mental Effort Scale could not be calculated because it had only one item. However, it is known that the scale has a relatively good reliability (Cronbach’s alpha > 0.85) regardless of whether it is used in its original 9-point format (Paas, Van Merrienboer, & Adam, 1994) or an adapted 6-point format (Marcus, Cooper, & Sweller, 1996). 
Table 5
Descriptive Statistics of Self-Report Scale Reliabilities

Scales | N of Items | Mean | Variance | SD | Alpha
Salomon AIME | 3 | 12.3387 | 18.7850 | 18.7850 | .7624
Pre-Treatment Self-Efficacy | 7 | 41.1757 | 85.8728 | 9.2668 | .8577
Post-Treatment Self-Efficacy | 7 | 43.3623 | 95.1756 | 9.7558 | .8952
Subjective Rating 1 | 4 | 20.4054 | 23.8882 | 4.8876 | .8966
Subjective Rating 2 | 5 | 25.2394 | 29.0704 | 5.3917 | .8255

The two subjective ratings scales also had good reliabilities. Subjective Rating 1 (alpha = .8966) was administered right after the participants learned about English relative clauses from Genie, the learning guide. Therefore, the scale focused on measuring participants' perceptions of rather specific features of the pre-task environment, such as how interesting or helpful the lesson and the learning guide were. On the other hand, Subjective Rating 2 (alpha = .8255) asked for participants' opinions on more general features of the system and was administered after participants finished the reading task. The pre- and post-treatment self-efficacy scales (alpha = .8577 and alpha = .8952, respectively) also demonstrated good reliabilities.

Performances

The skewness and kurtosis indexes in Table 6 show that all the pre-treatment performance tests have approximately normal distributions (indexes between 1 and -1) except for the kurtosis of the pre-treatment sentence combination test (-1.346). It should also be noted that the majority of the distributions are negatively skewed, which may be interpreted either as meaning that the performance tests were easy for many participants or that the distributions were affected by a few extremely high scores. Histograms of the performance variables (Figures 6-11) indicate that the former is more likely the case for the picture interpretation test and the grammaticality judgment test, whereas the latter is more likely for the sentence combination test. In other words, participants began with more receptive knowledge of the target form than productive knowledge. The distributions of the post-treatment performance variables are also negatively skewed, but to a greater degree, which demonstrates that more participants earned high scores after the instructional treatment.

Table 6
Descriptive Statistics of Performance Measures

Variables | N | Mean | SD | Skewness | Kurtosis
Sum of Pretests* | 74 | 21.8919 | 7.03722 | -.277 | -.999
Pretest - SC* | 74 | 6.5676 | 4.22349 | -.241 | -1.346
Pretest - PI* | 74 | 7.0135 | 1.57457 | -.521 | -.551
Pretest - GJ* | 74 | 8.3108 | 2.38675 | -.293 | -.903
Sum of Posttests | 74 | 24.8784 | 6.24380 | -.827 | -.253
Posttest - SC | 74 | 8.7703 | 3.23740 | -.980 | .024
Posttest - PI | 74 | 7.0811 | 1.84136 | -.718 | -.667
Posttest - GJ | 74 | 9.0270 | 2.19572 | -.522 | -.642
* SC - Sentence Combination Test (Max. Score = 12); PI - Picture Interpretation Test (Max. Score = 9); GJ - Grammaticality Judgment Test (Max. Score = 12); thus, the maximum total score for the pretest and the posttest is 33.

The means of the pretest and posttest scores displayed in Table 6 show that participants' performance scores increased in every category of the performance tests after receiving the instructional treatment: Sentence Combination (from 6.5676 to 8.7703), Picture Interpretation (from 7.0135 to 7.0811), and Grammaticality Judgment (from 8.3108 to 9.0270).
The results of paired samples t-tests demonstrated that the increases are statistically significant in the sentence combination test (t = -5.952, p = .000) and the grammaticality judgment test (t = -2.975, p = .004), but not in the picture interpretation test (t = -.360, p = .720). The data imply that the picture interpretation test was too easy for many participants, which might result from the fact that only one-third of the items in the test involved the OPREP type of relative clauses, which are considered harder to acquire than other types of relative clauses such as the SU and DO types. Unlike the picture interpretation test, the other two tests included the OPREP type of relative clauses in half of their items.

[Figure 6. Histogram of Pre-Treatment Sentence Combination Test (Std. Dev = 4.22, Mean = 6.6, N = 74).]
[Figure 7. Histogram of Pre-Treatment Picture Interpretation Test (Std. Dev = 1.57, Mean = 7.0, N = 74).]
[Figure 8. Histogram of Pre-Treatment Grammaticality Judgment Test (Std. Dev = 2.39, Mean = 8.3, N = 74).]
[Figure 9. Histogram of Post-Treatment Sentence Combination Test (Std. Dev = 3.24, Mean = 8.8, N = 74).]
[Figure 10. Histogram of Post-Treatment Picture Interpretation Test (Std. Dev = 1.84, Mean = 7.1, N = 74).]
[Figure 11. Histogram of Post-Treatment Grammaticality Judgment Test (Std. Dev = 2.20, Mean = 9.0, N = 74).]

Time and Mental Effort

Participants' entry and exit times for each task were recorded in seconds by the computer, and the mean amounts of time that participants spent on the pre-task and the reading task are shown in Table 7. The time data, along with the mental effort data (Table 8), were collected to compute the cognitive efficiency of each delivery medium, that is, how many score points a learner gained from the pretests to the posttests per unit of time or mental effort in a certain delivery condition compared to an alternative condition.
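To make the cognitive-efficiency computation just described concrete, the following minimal sketch divides a learner's pretest-to-posttest gain by the time or mental effort invested. The values are illustrative (they approximate the sample means in Tables 6 through 8) rather than any individual participant's data.

def cognitive_efficiency(pretest: float, posttest: float, investment: float) -> float:
    """Gain score earned per unit of time (seconds) or mental effort invested."""
    return (posttest - pretest) / investment

pretest_total, posttest_total = 21.9, 24.9   # total test scores (maximum 33)
time_on_pretask_seconds = 992.0              # time spent on the pre-task
mental_effort_rating = 4.4                   # 7-point mental effort rating

gain_per_second = cognitive_efficiency(pretest_total, posttest_total, time_on_pretask_seconds)
gain_per_effort_unit = cognitive_efficiency(pretest_total, posttest_total, mental_effort_rating)
print(gain_per_second, gain_per_effort_unit)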
Table 7
Descriptive Statistics of Time Measures (Unit: Seconds)

Variables | N | Mean | SD | Skewness | Kurtosis
Time on Pre-task | 74 | 992.35 | 369.850 | 1.009 | 2.491
Time on Main Task | 74 | 840.49 | 407.842 | 1.851 | 5.158
Reading 1 | 74 | 450.03 | 318.416 | 2.981 | 9.876
Reading 2 | 74 | 233.53 | 120.680 | .319 | -.132
Reading 3 | 74 | 156.93 | 94.935 | .450 | -.173

Table 8
Descriptive Statistics of Mental Effort Measures (7-point scale)

Variables | N | Mean | SD | Skewness | Kurtosis
Salomon's AIME | 74 | 3.9054 | 1.43613 | .227 | -.359
Item 1 | 74 | 3.8784 | 1.82789 | .033 | -.662
Item 2 | 63 | 3.7778 | 1.65046 | .190 | -.853
Item 3 | 73 | 4.6849 | 1.64043 | -.251 | -.654
Paas' Mental Effort | 74 | 4.4324 | 1.51776 | -.097 | -.278

The means of the time data show that participants took about 16 minutes and 32 seconds, on average, to finish the pre-task, receiving the explicit explanation of the target form from Genie, the learning guide. On the other hand, participants spent about 14 minutes on the reading task containing the three reading texts. What is noticeable about the time spent on the reading task is that its standard deviation is relatively large compared to that of the pre-task. This may result from the fact that participants had greater control in the reading task than in the pre-task, in which they had to listen to the pre-recorded, fixed-length explanation from Genie, although they were instructed on how to move on to the next slide. In contrast, participants could work on the reading task at their own pace by clicking on the forward button located at the bottom of the page at any time.

Table 9
Correlations between Mental Effort and Time (Pearson r, with 2-tailed significance in parentheses)

 | Time on Pre-task | AIME | Paas
Time on Pre-task | 1 | .067 (.569) | .030 (.800)
AIME | .067 (.569) | 1 | .421** (.000)
Paas | .030 (.800) | .421** (.000) | 1

The bivariate zero-order correlations in Table 9 show that there is a statistically significant correlation between Salomon's AIME and Paas' Mental Effort Scale, and the magnitude of the correlation is fairly strong (r = .421, p = .000), which indicates that both scales probably measure the same construct. However, no statistically significant correlation was found between mental effort and the time spent on the pre-task. In other words, the participants who invested a great amount of mental effort did not necessarily stay long in the learning environment. There were very weak positive correlations between the time and mental effort variables, but they did not reach significance (p > .05).

Figures 12 through 15 provide frequency data on the amount of mental effort that participants invested in processing the explicit rule presentation in the pre-task. A visual inspection of these graphed data illustrates that, for each question, about a third of the participants perceived that they exerted a medium level of mental effort to process the instruction. This might reflect a common phenomenon observed in self-report measures: people tend to select a medium-level answer in a survey regardless of their actual beliefs or situations in an attempt to avoid extremes. It also challenges the validity and accuracy of the self-report mental effort measures.
The only exception is Salomon's AIME item 2, on which 51% of participants believed that their friends who were also participating in the study invested less than a medium level of mental effort, which might indicate that they perceived themselves as less competent than their classmates.

[Figure 12. Mental Effort Investment in Salomon AIME Item 1 ("How hard did you try in order to understand the lesson?"), rated from 'Not at all' to 'Very Much'.]
[Figure 13. Mental Effort Investment in Salomon AIME Item 2 ("How hard do you think your friends (in the room) tried?"), rated from 'Not at all' to 'Very Much'.]
[Figure 14. Mental Effort Investment in Salomon AIME Item 3 ("How much did you concentrate in order to understand?"), rated from 'Not at all' to 'Very Much'.]
[Figure 15. Mental Effort Investment in Paas Mental Effort ("How much mental effort did you invest?"), rated from 'Not at all' to 'Very Much'.]

Self-Efficacy, Active Choice, and Subjective Ratings

Participants' self-efficacy beliefs for learning English were measured before and after the instructional treatment in order to examine the impact of Reading Wizard on learner motivation. The self-efficacy beliefs were divided into three different sub-categories of language learning activities: Listening, Reading, and Grammar Learning, as shown in Table 10. The mean scores of the pre-treatment self-efficacy beliefs ranged from 5.7252 to 6.1959 on the 11-point scale (0 - Cannot do at all, 5 - Moderately certain can do, 10 - Absolutely certain can do). Among the three self-efficacy beliefs, the mean of the grammar learning self-efficacy scores was the lowest, meaning that participants were least certain about their capability to master English grammar in the beginning.

Table 10
Descriptive Statistics of Self-Efficacy and Active Choice

Variables | N | Mean | SD | Skewness | Kurtosis
Pre-Treatment Self-Efficacy (11 Point)
  Listening | 74 | 6.1959 | 1.70579 | .031 | -.674
  Reading | 74 | 5.8041 | 1.64861 | -.046 | -.410
  Grammar | 74 | 5.7252 | 1.68922 | -.070 | -.328
Post-Treatment Self-Efficacy (11 Point)
  Listening | 69 | 6.4203 | 1.93578 | .056 | -1.180
  Reading | 69 | 6.1957 | 1.51030 | -.140 | -.814
  Grammar | 69 | 6.0435 | 1.50045 | -.320 | -.665
Active Choice (7 Point) | 74 | 4.9189 | 1.45015 | -.465 | -.399

The mean scores of the post-treatment self-efficacy beliefs were higher than those of the pre-treatment, meaning that participants felt more competent about their abilities to learn English after interacting with Reading Wizard. However, the results of a series of paired samples t-tests indicate that only the increase found in reading self-efficacy was statistically significant (t = -2.047, p = .044), while the increases found in the other two categories did not reach statistical significance (Listening: t = -1.268, p = .209; Grammar: t = -1.658, p = .102). Given that the
present study was introduced as a reading comprehension task, participants could feel that they had benefited from Reading Wizard in improving their reading skills.

Active Choice is a measure of participants' willingness to use Reading Wizard when given a chance in the future. The mean score of the participants' responses on the 7-point scale (M = 4.9189) indicates that the participants had relatively high interest in the future use of the system to learn English. A simple visual inspection of Figure 16 also confirms this result: more than 65% of participants said that they would use the system again with an above-average level of intensity. Participants in the Arrow Group (M = 5.1905, SD = 1.3834) were more willing to use the system in the future than those in the Agent Group (M = 4.5625, SD = 1.4797). However, the difference between the groups was not statistically significant (t = -1.877, p = .065). In other words, the delivery medium did not have a considerable impact on participants' choice of the instructional system, one of the motivational indexes adopted in the present study.

[Figure 16. Percentages of Active Choice for Future Use of Reading Wizard, rated from 'Not at all' through 'Average' to 'Very Much'.]

In addition to the self-efficacy beliefs and the active choice, participants' subjective ratings of the system, Reading Wizard, were also included in the measures of learner motivation. As shown in Table 11, participants generally rated the system higher than average on various aspects, such as the lesson about the target form, the learning guide (Genie, either an animated character or an electronic arrow with voice), the reading task, and the dictionary function embedded in the readings. The highest mean rating (5.3378) was given to both the usefulness of the lesson and the interestingness of the reading topic, that is, 'Phobias'. In other words, the participants perceived the explicit rule presentation and the main reading comprehension task most positively.

Table 11
Descriptive Statistics of Subjective Rating Variables (7-point scale)

Variables | N | Mean | SD | Skewness | Kurtosis
Subjective Rating 1
  Lesson - Interesting | 74 | 4.8378 | 1.35512 | -.240 | .009
  Genie - Interesting | 74 | 4.9865 | 1.47577 | -.712 | .579
  Genie - Helpful | 74 | 5.2432 | 1.37304 | -.813 | 1.034
  Lesson - Useful | 74 | 5.3378 | 1.38759 | -.602 | -.134
Subjective Rating 2
  Program - Interesting | 74 | 5.0000 | 1.42387 | -.176 | -.334
  Reading Text - Easy | 74 | 4.8919 | 1.40027 | -.510 | .149
  Reading Topic - Interesting | 74 | 5.3378 | 1.42653 | -.565 | -.213
  Dictionary - Helpful | 73 | 5.1096 | 1.50519 | -.593 | -.028
  Program - Helpful | 72 | 4.9167 | 1.21898 | .019 | -.671

Computer Usage

Two computer usage variables were included to assess participants' computer use. The first variable, the frequency of computer use, garnered information on how often participants use computers (Figure 17). Among the 74 participants, 50 participants, or 68% of them, reported that they use computers every day, indicating their familiarity with computers. Yet, it should be noted that this variable does not
The second variable is the measure of the participants' perception of how good they are with computers (Figure 18). The majority of the participants (56 participants, or 74% of them) reported that they were moderately or less than moderately competent in using computers. The computer usage variables were found to have statistically significant associations with the pre-treatment listening and reading self-efficacy beliefs and with the amount of time spent on the pre-task. To be more specific, the computer use frequency was negatively correlated with the time spent on the pre-task (r = -.440, p = .000), which suggests that a learner who uses the computer more frequently spent less time processing the explicit rule presentation. However, the correlation did not transfer to performance (r = -.125, p = .291). On the other hand, the computer expertise level was positively correlated with the pre-treatment self-efficacy for listening to and reading English (r = .337, p = .001, and r = .354, p = .002, respectively), although these correlations disappear after the treatment.

Figure 17. Percentages of Frequencies for Computer Use (response categories: Not at all, Less than once a month, Couple of times a month, Couple of times a week, Daily).

Figure 18. Percentages of Levels for Computer Expertise (response categories: Not very competent, Low level of competence, Moderately competent, High level of competence, Expert).

Results by Hypotheses

Hypothesis 1

There will be a significant difference in learner performance between the pretests and posttests. Participants' performances will significantly improve after receiving explicit rule explanation on the target form and the reading comprehension task.

As discussed in the previous section, participants' performance increased in every category of the performance tests after receiving the instructional treatment, explicit rule presentation and reading comprehension task. The mean gain scores ranged from .0676 (Picture Interpretation - Test 2) to 2.2027 (Sentence Combination - Test 1). The biggest increase was obtained in the production test (Sentence Combination Test), which is very encouraging given that it is more difficult to develop productive knowledge of a linguistic form than receptive knowledge.

Table 12
Results of Paired Samples t-Test on Pretest and Posttest

Test Compared                               Mean Diff.   SD        t        df   Sig.
Pair 1  Pretest 1 - Posttest 1              -2.2027      3.18378   -5.952   73   .000**
Pair 2  Pretest 2 - Posttest 2               -.0676      1.61611    -.360   73   .720
Pair 3  Pretest 3 - Posttest 3               -.7162      2.07080   -2.975   73   .004**
Pair 4  Sum of Pretests - Sum of Posttests  -2.9865      4.21508   -6.095   73   .000**

* Test 1 - Sentence Combination Test, Test 2 - Picture Interpretation Test, Test 3 - Grammaticality Judgment Test.

The results of the series of paired samples t-tests in Table 12 indicate that the increases from the pretests to posttests are statistically significant in the sentence combination test and the grammaticality judgment test at the set alpha level of .01, but not in the picture interpretation test (t = -.360, p = .720).
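The paired comparisons reported above can be reproduced with standard statistical software. The following is a minimal sketch, assuming hypothetical score arrays rather than the study's data set, of how a paired samples t-test of pretest against posttest totals might be run in Python; all variable names and values are illustrative only.

```python
# Paired samples t-test: do posttest totals differ from pretest totals for the same learners?
# The scores below are placeholders, not the study's data.
from scipy import stats

pretest_total = [22, 25, 18, 30, 21, 27, 19, 24]   # sum of the three pretests per learner
posttest_total = [26, 27, 24, 31, 25, 30, 22, 28]  # sum of the three posttests, same order

t_stat, p_value = stats.ttest_rel(pretest_total, posttest_total)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")       # a negative t indicates posttest > pretest
```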
Again, it could be that the picture interpretation test was relatively easier than the other two tests because it did not include as many items on the OPREP type of relative clauses as the other tests did. Consequently, participants might have experienced a ceiling effect. This can also be seen in the high pretest scores: a mean of 7.0135 out of 9 (an accuracy rate above 70%).

Similar statistical results were obtained when separate analyses were conducted on the two treatment groups (Table 13 and Table 14). Participants in Agent Group made statistically significant increases in the sentence combination test (t = -3.099, p = .004) and the grammaticality judgment test (t = -3.205, p = .003). Interestingly, the participants in this group performed worse in the post-treatment picture interpretation test than in the pre-treatment test, although the difference was not statistically meaningful (t = .329, p = .745). On the other hand, participants in Arrow Group made a significant improvement only in the sentence combination test (t = -5.348, p = .000), while making no meaningful improvement in the other two tests.

Although the participants did not make improvements on every performance measure, overall they performed significantly better in the posttests regardless of group (t = -6.095, p = .000), which indicates that the instructional method adopted in the present study, explicit rule presentation and reading comprehension task, has positive effects on the acquisition of English relative clauses and supports Hypothesis 1. In particular, the instructional treatment made its biggest impact on the participants' productive knowledge of the target form, English relative clauses.

Table 13
Results of Paired Samples t-Test for Agent Group

Test Compared                               Mean Diff.   SD        t        df   Sig.
Pair 1  Pretest 1 - Posttest 1              -1.9375      3.53724   -3.099   31   .004**
Pair 2  Pretest 2 - Posttest 2                .0938      1.61364     .329   31   .745
Pair 3  Pretest 3 - Posttest 3              -1.2188      2.15128   -3.205   31   .003**
Pair 4  Sum of Pretests - Sum of Posttests  -3.0625      5.00282   -3.463   31   .002**

Table 14
Results of Paired Samples t-Test for Arrow Group

Test Compared                               Mean Diff.   SD        t        df   Sig.
Pair 1  Pretest 1 - Posttest 1              -2.4048      2.91388   -5.348   41   .000**
Pair 2  Pretest 2 - Posttest 2               -.1905      1.62658    -.759   41   .452
Pair 3  Pretest 3 - Posttest 3               -.3333      1.94644   -1.110   41   .274
Pair 4  Sum of Pretests - Sum of Posttests  -2.9286      3.56400   -5.325   41   .000**

Hypothesis 2

There will be no significant difference in learner performance between participants who interact with an animated pedagogical agent and those who interact with an electronic arrow with voice.

Table 15 displays the descriptive statistics for the means and standard deviations of the different test scores on the pretest and posttest obtained by Agent Group and Arrow Group. Before comparing the two groups on their learning of English relative clauses, it is necessary to first check whether the two groups differed at all in terms of their prior knowledge of the target form. To this end, an Independent Samples t-test was performed on the two groups' total pre-treatment test scores.
As shown in Table 16, no significant difference was found between Agent Group and Arrow Group in their cumulative prior knowledge of English relative clauses (t = .313, p = .755).

Table 15
Descriptive Statistics of Pre- and Posttest Scores

                     Agent Group        Arrow Group
                     Mean     SD        Mean     SD
Pretest 1            7.03     4.52      6.21     4.00
Pretest 2            7.06     1.54      6.98     1.62
Pretest 3            8.09     2.76      8.48     2.07
Sum of Pretests     22.19     7.83     21.67     6.45
Posttest 1           8.97     3.11      8.62     3.36
Posttest 2           6.97     1.96      7.17     1.77
Posttest 3           9.31     2.29      8.81     2.12
Sum of Posttests    25.25     6.47     24.60     6.13

* Test 1 - Sentence Combination Test (Max. Score = 12), Test 2 - Picture Interpretation Test (Max. Score = 9), Test 3 - Grammaticality Judgment Test (Max. Score = 12); thus, the maximum total score for the pre- and the posttest is 33.

Additionally, in order to examine whether there was any difference in individual testing areas, the pretest scores obtained from the three testing measures (Sentence Combination Test, Picture Interpretation Test, and Grammaticality Judgment Test) were also submitted to a series of Independent Samples t-tests. Again, as shown in Table 16, the two groups did not differ at all in their prior knowledge of English relative clauses (both productive and receptive). Therefore, any differences observed in the posttest scores or in the gain scores from the pretests to posttests could be attributed to the different instructional treatments that the two groups received, given the careful design of the present study. The only difference between the groups was the way in which the explicit rule presentation was delivered: either through an animated pedagogical agent or through an electronic arrow with voice.

Table 16
Independent Sample t-Test of Pre-Test Scores

Pretest Variables          Mean Diff.   t       df   Sig.
Sum of Pretests             .5208        .313   72   .755
Sentence Combination        .8170        .823   72   .413
Picture Interpretation      .0863        .232   72   .817
Grammaticality Judgment    -.3824       -.680   72   .498

To compare the two treatment groups' posttest scores, several independent samples t-tests were also performed on the individual testing measures and the total posttest scores. The test results illustrated in Table 17 reveal that there is no difference between Agent Group and Arrow Group on any of the post-treatment testing measures adopted for the present study, nor on the combined test scores.

Table 17
Independent Sample t-Test of Post-Test Scores

Posttest Variables         Mean Diff.   t       df   Sig.
Sum of Posttests            .6548        .444   72   .658
Sentence Combination        .3497        .458   72   .648
Picture Interpretation     -.1979       -.456   72   .650
Grammaticality Judgment     .5030        .976   72   .332

Finally, a visual examination of the graphed data in Figures 19 through 21 reveals that the two treatment groups made different amounts of gains from the pretests to posttests on the three performance measures. In order to further analyze these differences in the gain scores between the two treatment groups, each group's gain scores from the three testing measures were submitted to a series of one-way analyses of variance (ANOVAs) with treatment group (Agent vs. Arrow) as a between-subjects factor, that is, as the independent variable. The results of these ANOVAs are summarized in Table 18.
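As an illustration of this analysis step, the sketch below shows how a one-way ANOVA on gain scores with treatment group as the between-subjects factor could be computed; the two arrays are hypothetical gain scores, not the study's data.

```python
# One-way ANOVA: do mean gain scores differ by treatment group (Agent vs. Arrow)?
# Hypothetical gain scores for illustration only.
from scipy import stats

agent_gains = [4, 2, 5, 1, 3, 0, 6, 2]
arrow_gains = [3, 4, 2, 5, 1, 3, 2, 4]

f_stat, p_value = stats.f_oneway(agent_gains, arrow_gains)
df_within = len(agent_gains) + len(arrow_gains) - 2
print(f"F(1, {df_within}) = {f_stat:.3f}, p = {p_value:.3f}")
```

With only two groups, this F test is equivalent to an independent samples t-test (F = t squared), so its conclusions match the group comparisons reported above.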
For the sentence combination test, Arrow Group made higher score gains (M = 2.405) than Agent Group (M = 1.937), but the difference was not significant, F(1, 72) = .388, p = .535. For the picture interpretation test, Agent Group's performance actually decreased (M = -.0938), while that of Arrow Group increased (M = .190). Nevertheless, the difference in gain scores between the two groups was marginal, and it did not reach statistical significance, F(1, 72) = .558, p = .457. In contrast, Agent Group made fairly bigger gains in the grammaticality judgment test (M = 1.219) than Arrow Group (M = .333), but the difference again did not reach statistical significance, F(1, 72) = 3.431, p = .068. Taken together, these statistical data support Hypothesis 2 and Clark's position that what makes a difference in learning is the instructional method, not the delivery medium (1983, 1994a, 1994b, 2001, 2003). In other words, when the instructional method is held constant, the delivery medium, no matter how sophisticated it is, does not induce better performance or cognitive products.

Table 18
Summary of ANOVA on Gain Scores of Each Testing Measure

Measures                          Sum of Squares   df      F       Sig.
Sentence Combination Test              3.965       1, 72    .388   .535
Picture Interpretation Test            1.467       1, 72    .558   .457
Grammaticality Judgment Test          14.238       1, 72   3.431   .068
Sum of All Gain Scores                  .326       1, 72    .018   .893

Figure 19. Gain Scores for Sentence Combination Test by Two Groups (pretest and posttest means plotted for Agent Group and Arrow Group).

Figure 20. Gain Scores for Picture Interpretation Test by Two Groups (pretest and posttest means plotted for Agent Group and Arrow Group).

Figure 21. Gain Scores for Grammaticality Judgment Test by Two Groups (pretest and posttest means plotted for Agent Group and Arrow Group).

Hypothesis 3

The more prior knowledge of a target L2 grammar a learner has, the smaller the effect of an instructional method will be on learner performance. That is, participants with less prior knowledge of the target form will benefit more from the instructional treatment than those with more prior knowledge.

To test Hypothesis 3, participants were arbitrarily divided into three groups in a post-hoc fashion, based on their combined scores on the three pretests: the low prior knowledge group consisted of the participants who answered correctly less than 30% of the questions; the intermediate prior knowledge group included those who answered correctly from 30 to 80% of the questions; and the high prior knowledge group was composed of the participants who answered correctly more than 80% of the questions in the pretests (a minimal sketch of this banding rule follows below).

Table 19
Frequencies of Prior Knowledge Levels

                Low          Intermediate    High
Agent Group     3 (9.4%)     20 (62.5%)       9 (28.1%)
Arrow Group     4 (9.5%)     28 (66.7%)      10 (23.8%)
Total           7 (9.5%)     48 (64.9%)      19 (25.7%)

The frequency data in Table 19 show that around 9.5% of participants were low prior knowledge learners, around 65% were intermediate level learners, and around 25% were high prior knowledge learners.
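The post-hoc grouping described above is a simple banding of pretest accuracy. The sketch below illustrates that banding rule; the 30% and 80% cutoffs come from the text, while the function name and the example scores are hypothetical.

```python
# Assign a prior knowledge level from the proportion of pretest items answered correctly.
# Cutoffs (30% and 80%) follow the grouping rule described in the text.
MAX_PRETEST_TOTAL = 33  # 12 + 9 + 12 items across the three pretests

def prior_knowledge_level(pretest_total: int) -> str:
    proportion = pretest_total / MAX_PRETEST_TOTAL
    if proportion < 0.30:
        return "Low"
    elif proportion <= 0.80:
        return "Intermediate"
    else:
        return "High"

# Hypothetical pretest totals for illustration.
for score in (8, 22, 30):
    print(score, prior_knowledge_level(score))
```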
In other words, the majority of participants were categorized as having an intermediate level of prior knowledge. The percentages were similar in each treatment group.

The graphed data in Figure 22 reveal that participants who were categorized as having a low level of prior knowledge gained the most in the posttests (M = 7.2857, SD = 5.93617), compared to those with an intermediate level (M = 3.8958, SD = 3.26320) and those with a high level (M = -.8947, SD = 2.68524). What is noticeable about these data is that the participants who started with low prior knowledge made a far bigger improvement in their posttest performances than the whole group, whose mean gain score from the pretests to posttests is 2.9865. Moreover, the participants who had much prior knowledge of the target form before the instructional treatment actually performed worse in the posttests, although it remained to be determined whether these differences were statistically significant.

Figure 22. Gain Scores by Three Prior Knowledge Level Groups (mean gain scores plotted for the Low, Intermediate, and High prior knowledge groups).

A one-way ANOVA was performed on the gain scores using the prior knowledge level group (Low vs. Intermediate vs. High) as a between-subjects factor. The ANOVA provided evidence for a significant prior knowledge effect on the gain scores, F(1, 71) = 19.203, p = .000, meaning that the prior knowledge groups' gain scores differed significantly from one another. The effect size was 0.389. The post-hoc Fisher's least significant difference (LSD) test revealed the following: the low prior knowledge group's gain score is significantly higher than the intermediate prior knowledge group's (p = .017) as well as the high prior knowledge group's (p = .000), and the intermediate group's gain score is also significantly higher than the high knowledge group's (p = .000). As a whole, the data presented in this section support Hypothesis 3 that the effect of an instructional method disappears or decreases when learners already have high prior knowledge of the domain due to their schemas, while learners with little prior knowledge benefit most from the instruction.

Table 20
Descriptive Statistics of Gain Scores by Prior Knowledge and Group

Prior Knowledge Levels   Group         Mean      SD         N
Low                      Agent Group   11.0000   6.24500     3
                         Arrow Group    4.5000   4.50925     4
Intermediate             Agent Group    3.9500   3.66312    20
                         Arrow Group    3.8571   3.01495    28
High                     Agent Group   -1.5556   2.40370     9
                         Arrow Group    -.3000   2.90784    10

Then, in order to check for any interaction between prior knowledge level and group (delivery medium), each group's mean gain scores and standard deviations were obtained for each prior knowledge level, as shown in Table 20. The data show that low prior knowledge learners in Agent Group made bigger gains than their counterparts in Arrow Group, while the opposite was the case for high prior knowledge learners. Yet, no outstanding difference is observed for the intermediate prior knowledge learners. A further analysis was conducted by running a 2 (Agent vs. Arrow) x 3 (Low vs. Intermediate vs. High) two-factor ANOVA on the gain scores.
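The following sketch shows one way such a 2 x 3 factorial ANOVA could be run with statsmodels; the small data frame is a made-up illustration, and the column names (gain, group, level) are placeholders rather than the study's actual variable names.

```python
# Two-factor ANOVA: gain scores by delivery medium (2 levels) and prior knowledge (3 levels),
# including their interaction. The data frame below is illustrative only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "gain":  [11, 9, 4, 5, 4, 3, -2, -1, 4, 3, 4, 4, -1, 0],
    "group": ["Agent"] * 7 + ["Arrow"] * 7,
    "level": ["Low", "Low", "Int", "Int", "Int", "High", "High"] * 2,
})

model = ols("gain ~ C(group) * C(level)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
print(anova_table)  # rows for C(group), C(level), their interaction, and the residual
```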
The results of the ANOVA revealed a significant interaction effect between levels of prior knowledge and delivery media, F(2, 68) = 3.474, p = .037. The effect size was 0.093. A visual inspection of the interaction plot in Figure 23 suggests that an animated pedagogical agent may have a greater pedagogical effect for low prior knowledge learners than an electronic arrow with voice, in spite of the extraneous cognitive load that the visual features of an agent might impose on learners' limited working memory. This result is particularly interesting because cognitive load theorists have claimed that extraneous cognitive load is more damaging for learners with low prior knowledge, because they need more cognitive resources to compensate for their lack of schemas, and that including seductive details like animated characters in a learning environment would hinder their learning process (Jeung et al., 1997). However, it should also be noted that there were very few low prior knowledge learners in each group (Agent Group = 3, Arrow Group = 4), and the standard deviations are relatively large (Agent Group = 6.24, Arrow Group = 4.51), suggesting great individual variation, especially in Agent Group. Moreover, the effect size is extremely small (partial eta squared = .093), meaning that only a small proportion of the variance in the dependent variable, the gain scores, can be attributed to the interaction. Thus, it is not yet safe to draw a definite conclusion that an animated pedagogical agent is more beneficial for low prior knowledge learners than an electronic arrow with voice.

Figure 23. Interaction of Prior Knowledge and Delivery Media (mean gain scores for Agent Group and Arrow Group plotted across the Low, Intermediate, and High prior knowledge levels).

Hypothesis 4

There will be no causal relationship between the levels of learner interest in the instructional system with which they interacted and the levels of learner achievement measured by gain scores from the pretests to posttests.

To examine the relationship between participants' interest level and their achievement, preliminary analyses using the Pearson product-moment correlation were performed first. Participants' interest levels were measured through two subjective rating scales assessing participants' perceptions of the tasks, the learning guide (Genie, realized as either the animated agent or the arrow with voice), and various features of the system. Recall that the first measure was administered immediately after the participants received the explicit rule presentation from Genie (the pre-task) and the second measure was presented after they finished the reading comprehension task. Tables 21 and 22 present the correlations between participants' total gain scores from the pretests to posttests and their perceptions of the learning environment. Only one statistically significant positive correlation was found (r = .319, p < .05): between participants' achievement and their perception of the lesson they received (interestingness of the lesson). There are some negative correlations, but none of them was meaningful. No significant correlation was found between participants' interest in Genie and their achievement either, which suggests that a delivery medium may not have a substantial impact on learner performance.
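A minimal sketch of how one such product-moment correlation might be computed is shown below; the rating and gain values are hypothetical stand-ins for the study's variables.

```python
# Pearson product-moment correlation between a subjective rating and total gain scores.
# Both lists are hypothetical values used only to illustrate the computation.
from scipy import stats

lesson_interesting = [5, 4, 6, 3, 5, 7, 4, 6, 5, 2]   # 7-point rating per learner
gain_scores        = [4, 2, 6, 1, 3, 7, 2, 5, 4, 0]   # pretest-to-posttest gains, same order

r, p_value = stats.pearsonr(lesson_interesting, gain_scores)
print(f"r = {r:.3f}, p = {p_value:.3f}")
```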
Table 21
Correlational Statistics of Subjective Ratings 1 and Achievement

               Lesson -      Lesson -   Genie -       Genie -
               Interesting   Useful     Interesting   Helpful
Gain Scores    .319*         .069       .053          -.066

* Pearson r is significant at the set alpha level of .05.

Table 22
Correlational Statistics of Subjective Ratings 2 and Achievement

               System -      System -   Dictionary -   Lesson -       Reading -
               Interesting   Helpful    Helpful        Help Reading   Interesting
Gain Scores    .067          -.168      -.207          .087

* Pearson r is significant at the alpha level of .05.

However, since the product-moment correlation coefficients (r) do not indicate what kind of relationship exists between interest and performance, further analyses were conducted by submitting the subjective ratings data to a multiple regression test. To simplify the analysis, the seven subjective rating variables were collapsed according to the main features to which they pertained, yielding three newly computed variables: Perception of Genie (interestingness and helpfulness of Genie), Perception of Lesson (interestingness, usefulness, and helpfulness of the lesson), and Perception of System (interestingness, helpfulness, and dictionary function of the system).

The results of a simultaneous regression analysis showed that none of the interest variables, Perception of Genie, Lesson, and System, was a significant predictor of performance (t = 1.045, p = .300; t = -.900, p = .371; and t = -.144, p = .866, respectively). Furthermore, only 1.6% of the variance in achievement was explained by the interest variables, which was not statistically significant (F = .390, p = .790). Taken together, the data presented here suggest that there is no significant relationship, whether correlational or causal, between participants' interest in the learning environment with which they interacted (situational interest) and their achievement.

In addition to the subjective ratings data representing learners' situational interest in the learning environment, other motivation data (more stable and trait-like), collected through the mental effort measures and the self-efficacy beliefs measured before and after the instructional treatment, were also analyzed. None of the motivational variables had a significant correlation with the gain scores except the amount of mental effort measured using Salomon's AIME scale (r = .254, p = .029). In other words, the more mental effort a learner invested in processing the instruction, the higher the achievement s/he made. Yet, caution is required in interpreting this result because the magnitude of the correlation is relatively small. A simple regression analysis was also performed using the AIME mental effort scores as the independent variable and the gain scores as the dependent variable. The results showed that only 6.5% of the variance in the gain scores was accounted for by mental effort (F = 4.965, p = .029).

Hypothesis 5

An electronic arrow with voice will require less time and mental effort from participants than an animated pedagogical agent in achieving the same level of learning performance, when both media deliver the same instructional method, explicit rule presentation.

Hypothesis 5 is about the cognitive efficiency of delivery media.
Before examining the cognitive efficiency of each delivery medium used in the study, the two media were first compared on the absolute amounts of mental effort and time they required. The descriptive statistics in Table 23 reveal some differences between the two groups on these variables. In order to assess the statistical significance of these differences, a series of independent samples t-tests was performed.

Table 23
Descriptive Statistics of Mental Effort and Time of Each Group

Measures                        Group         N    Mean     SD
Salomon AIME                    Agent Group   32   3.5625   1.45034
                                Arrow Group   42   4.1667   1.38566
Paas Mental Effort              Agent Group   32   4.8750   1.56060
                                Arrow Group   42   4.0952   1.41092
Time on Pre-task (Unit = Sec.)  Agent Group   32   983.75   236.920
                                Arrow Group   42   998.90   448.340

The results of the t-tests showed that participants in Arrow Group invested significantly less mental effort than those in Agent Group when measured with Paas' Mental Effort Scale (t = 2.250, p = .028). However, the significant difference in the amount of mental effort disappeared when measured with Salomon's AIME scale (t = -1.821, p = .073). No significant difference was found between the two groups in the amount of time spent finishing the pre-task, the explicit rule presentation (t = -.173, p = .863), or the main task, the reading comprehension task (t = .339, p = .736).

Table 24
Means and SDs of Efficiency Variables of Each Group

                     Agent Group        Arrow Group
Variables            Mean     SD        Mean     SD
Efficiency - AIME    .85      2.27      .72      .98
Efficiency - Paas    .86      1.56      .69      .98
Efficiency - Time    .17      .28       .30      .89

The cognitive efficiency of each medium used to deliver the explicit rule presentation was calculated by dividing the gain scores by the amount of mental effort or time spent. In other words, each individual learner's gain score was divided by the amount of time or effort that s/he exerted in processing the explicit rule presentation. Through this process, it was possible to obtain the amount of achievement that each participant made per unit of time or per unit of mental effort. Each delivery medium's mean efficiency scores, calculated from the Salomon and Paas mental effort measures and the time spent processing the explicit rule presentation, are displayed in Table 24. Agent Group achieved higher levels of mental effort efficiency than Arrow Group on both Salomon's AIME (Agent = .85, Arrow = .72) and Paas' Mental Effort Scale (Agent Group = .86, Arrow Group = .69), meaning that the participants in Agent Group achieved the same level of learning with less mental effort. In contrast, Arrow Group (M = 0.30) spent less time to achieve the same amount of learning than Agent Group (M = 0.17), earning a higher level of time efficiency (before calculating the time efficiency, the time data were converted from seconds into minutes in order to avoid extremely small time efficiency values). Nevertheless, the differences in the mental effort and time efficiencies found between the groups were not statistically significant: AIME (t = .332, p = .741), Paas (t = .55, p = .581), and Time (t = -.769, p = .445).
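The ratio described above is a simple per-learner division. The sketch below illustrates that arithmetic under the same conventions (time converted from seconds to minutes before dividing); the individual values are hypothetical.

```python
# Cognitive efficiency as used here: gain score divided by mental effort, and by time (in minutes).
# The learner record below is hypothetical and only illustrates the arithmetic.
gain_score = 4              # posttest total minus pretest total
mental_effort = 5           # self-reported effort rating (e.g., on the Paas scale)
time_on_pretask_sec = 984   # seconds spent on the explicit rule presentation

effort_efficiency = gain_score / mental_effort
time_efficiency = gain_score / (time_on_pretask_sec / 60)  # convert seconds to minutes first

print(f"effort efficiency = {effort_efficiency:.2f} points per unit of effort")
print(f"time efficiency   = {time_efficiency:.2f} points per minute")
```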
In summary, the data do not support Hypothesis 5 that the electronic arrow with voice will require less time and mental effort from participants than the animated pedagogical agent in achieving the same level of learning performance, when both media are used to deliver the explicit rule presentation treatment.

Hypothesis 6

There will be a significant positive correlation between the levels of learner prior knowledge of a target grammar and the levels of cognitive efficiency.

In order to probe Hypothesis 6, the relationship between learner prior knowledge (the sum of pretest scores) and the absolute amount of mental effort and time learners spent processing the explicit rule presentation was examined first. As shown in Table 25, overall there are negative associations between the amount of prior knowledge and the amounts of time and mental effort invested. Yet, only the correlation between prior knowledge and the time spent on the pre-task was statistically significant. That is, the more prior knowledge a learner has of the target of instruction, the less time s/he invests in processing the instruction.

Table 25
Correlations between Prior Knowledge and Mental Effort/Time

                                      Time spent on   Salomon   Paas Mental
                                      Pre-task        AIME      Effort
Sum of Pretests  Pearson Correlation  -.294*          -.106     -.105
                 Sig. (2-tailed)       .011            .367      .375

Table 26
Correlations between Prior Knowledge and Cognitive Efficiency

                                      Efficiency -   Efficiency -   Efficiency -
                                      Paas           AIME           Time
Sum of Pretests  Pearson Correlation                 -.359**        -.249*
                 Sig.                  .000            .002          .032

The correlation matrix in Table 26 illustrates the associations between participants' prior knowledge of the target form and the mental effort and time efficiencies. Again, the amount of prior knowledge that participants had before the instructional treatment is shown to have negative associations with the mental effort and time efficiencies. However, given that the mental effort and time efficiencies indicate the amount of performance that a learner achieved at a given unit of time or mental effort, this result runs counter to the prediction made by Hypothesis 6. In other words, the more prior knowledge a learner has of the target form, the smaller the gains s/he makes from the pretests to posttests at a given amount of time or mental effort. Nevertheless, the results were somewhat expected because all participants were required to listen through the explicit rule presentation regardless of whether they were already familiar with the target form. Furthermore, given that learners with high pretest scores made only small gains from the pretests to posttests, the result is not surprising at all.

For further analysis of the relationship between learner prior knowledge and the cognitive efficiency of each delivery medium, the amount of mental effort and time spent processing the explicit rule presentation was again compared based on participants' prior knowledge levels. Recall that participants were arbitrarily categorized into three prior knowledge groups based on their pretest scores. In general, participants with an intermediate level of prior knowledge invested more mental effort in processing the instruction than those with low and high prior knowledge (Tables 27 and 28). Yet, low prior knowledge learners spent more time processing the instruction than the other two groups (Table 29).
Yet, low prior knowledge learners spent more time to process the instruction than the other two groups (Table 29). Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 161 Table 27 Descriptive Statistics of Paas Mental Effort of Prior Knowledge Levels Prior Knowledge Levels Mean Std. Deviation N Low 4.4286 .78680 7 Intermediate 4.5417 1.59732 48 High 4.1579 1.53707 19 Total 4.4324 1.51776 74 Table 28 Descriptive Statistics of Salomon AIME of Each Prior Knowledge Levels Prior Knowledge Levels Mean Std. Deviation N Low 4.0000 1.20185 7 Intermediate 4.0278 1.55107 48 High 3.5614 1.19697 19 Total 3.9054 1.43613 74 Table 29 Descriptive Statistics of Time Spent on Pre-task of Each Prior Knowledge Levels Prior Knowledge Levels Mean Std. Deviation N Low 1125.29 334.542 7 Intermediate 1043.81 397.856 48 High 813.37 237.462 19 Total 992.35 369.850 74 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 162 In order to determine the statistical significance of the observed differences, a series of one-way ANOVAs were conducted. Multiple ANOVAs were conducted instead of a single Multivariate Analysis of Variance (MANOVA) test because: (a) the dependent variables analyzed here were not correlated with one another; and (b) MANOVA is not robust for repeated measures designs. Recall that the present study adopted a repeated measures design involving a set o f pre- and posttests. For the Salomon’s AIME scale, the differences among the three prior knowledge levels were not significant (F = .729, p - .486), resulting in the effect size 0.020 (Partial Eta Squared). The similar result was obtained for the Paas’ Mental Effort Scale (F = .428, t = .653, Effect Size = .012). In contrast, the differences for the amount of time spent on the pre-task among the different prior knowledge levels were found statistically significant (F = 3.343,1 = .041), although it was rather marginal given the effect size of .086. The post-hoc LSD test showed that only the intermediate level group spent significantly more time than the high level group (p = .020). All together these data suggest that there is little difference among different prior knowledge groups in terms of the absolute amount of mental effort and time they invest in processing the instruction. In the next step, mental effort and time efficiencies of each prior knowledge level group were compared. In particular, in order to examine the interaction effects of the levels of prior knowledge and the delivery media, 3 (Low vs. Intermediate vs. High) x 2 (Agent vs. Arrow) factorial ANOVAs were run on the mental effort Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 163 efficiency and time efficiency measures using prior knowledge levels and delivery media as two independent variables. Table 30 provides descriptive statistics for the mental efficiency calculated using the Salomon’s AIME and gain scores. Although a casual inspection of these data may suggests that Agent Group and Arrow Group differ, no group or delivery medium effect was found: F {2, 68) = 3.2176, p = .077, and Effect Size = .045. However, the main effect of prior knowledge was statistically significant: F (2, 68) = 9.42 \,p = .000, Effect Size = .217. The results of a post-hoc LSD test demonstrated that the low prior knowledge group achieved higher mental effort efficiency than the intermediate ip = .038) and high level group ip = .000). 
The intermediate level group also attained better mental effort efficiency than the high knowledge group (p = .003).

Table 30
Salomon Mental Efficiencies by Groups and Prior Knowledge Levels

Group         Prior Knowledge   Mean     SD        N
Agent Group   Low               3.8974   2.50630    3
              Intermediate      1.0057   2.28589   20
              High              -.5047    .64891    9
              Total              .8520   2.27238   32
Arrow Group   Low                .9787   1.04089    4
              Intermediate       .9450    .88983   28
              High              -.0035    .92518   10
              Total              .7224    .97846   42

Moreover, there was a significant interaction effect between prior knowledge levels and delivery media: F(2, 68) = 3.508, p = .035, effect size = .094. A visual investigation of Figure 24 reveals that the effects of learner prior knowledge on the mental effort efficiency (Salomon's AIME) differ between the groups. Participants with a low level of prior knowledge gained more scores at a given unit of mental effort when they interacted with an animated pedagogical agent than when they interacted with a simple electronic arrow with voice.

Figure 24. Interaction of Prior Knowledge and Group on Salomon AIME Mental Efficiency (mean AIME-based efficiency for Agent Group and Arrow Group plotted across the Low, Intermediate, and High prior knowledge levels).

However, the interaction effect disappeared for the intermediate group, and quite the opposite pattern was observed for high knowledge learners. What is interesting about this result is that Agent Group's mental effort efficiency drops sharply as the participants' prior knowledge increases. This corroborates the prediction made in Chapter 2 that participants with higher prior knowledge would be disadvantaged by the animated pedagogical agent. Yet, it should be noted that the effect size of the interaction between prior knowledge and group is very small, which requires caution when interpreting the results.

Similar outcomes were obtained for the mental effort efficiency computed using Paas' Mental Effort Scale (Table 31). The results of a 2 x 3 factorial ANOVA show that there is a statistically significant, but marginal, main effect of delivery medium or group: F(2, 68) = 4.140, p = .046, effect size = .057. Participants in Agent Group accomplished higher levels of mental effort efficiency than those in Arrow Group. As with the results discussed for the mental efficiency measured using Salomon's AIME, a strong main effect of prior knowledge level was again observed: F(2, 68) = 15.649, p = .000, effect size = .315. However, significant differences were found only between the low and the high knowledge groups (p = .000) and between the intermediate and the high knowledge groups (p = .000). The difference between the low and the intermediate groups did not reach statistical significance (p = .054). Again, contrary to the prediction of Hypothesis 6, the high prior knowledge group did not achieve higher levels of mental efficiency.
Table 31
Paas Mental Efficiencies by Groups and Prior Knowledge Levels

Group         Prior Knowledge   Mean     SD        N
Agent Group   Low               3.0444   1.95088    3
              Intermediate      1.0875   1.38858   20
              High              -.3796    .49101    9
              Total              .8583   1.55912   32
Arrow Group   Low                .9625    .91958    4
              Intermediate       .9718    .73000   28
              High              -.1917   1.16736   10
              Total              .6939    .98132   42

The interaction effect of prior knowledge and delivery media on the mental effort efficiency approached, but did not reach, statistical significance: F(2, 68) = 3.094, p = .052, effect size = .083. As shown in Figure 25, learners with low prior knowledge were cognitively more efficient when they received the lesson from the animated pedagogical agent than when they received it from the simple arrow with voice. Once again, Agent Group's high level of mental effort efficiency was not repeated with learners with higher prior knowledge; as a matter of fact, the group's efficiency levels dropped dramatically as learners had more prior knowledge of the target form.

Figure 25. Interaction of Prior Knowledge and Group on Paas Mental Effort Efficiency (mean Paas-based efficiency for Agent Group and Arrow Group plotted across the Low, Intermediate, and High prior knowledge levels).

Finally, it was examined whether the delivery media had any impact on the speed at which a learner achieved a certain level of learning. Overall, Arrow Group displayed better time efficiencies than Agent Group (Table 32), even though the difference was not statistically significant at all: F(2, 68) = .004, p = .948, effect size = .000. The main effect of the prior knowledge levels was not obtained either: F(2, 68) = 2.611, p = .076, effect size = .073, which differs from the other two efficiency measures. Likewise, the interaction effect was not significant: F(2, 68) = .372, p = .691, effect size = .011. Nevertheless, Figure 26 illustrates that the interaction pattern for the time efficiency is a bit different from that of the two mental effort efficiencies: here the arrow with voice took less time than the animated pedagogical agent for intermediate prior knowledge learners to achieve a given amount of learning, though the effect was not statistically significant and the effect size was very small.

Table 32
Time Efficiencies by Groups and Prior Knowledge Levels

Group         Prior Knowledge   Mean     SD        N
Agent Group   Low                .5766    .32260    3
              Intermediate       .2349    .20525   20
              High              -.0979    .15359    9
              Total              .1733    .27957   32
Arrow Group   Low                .2996    .36409    4
              Intermediate       .4250   1.04748   28
              High              -.0532    .32073   10
              Total              .2992    .89216   42

Figure 26. Interaction of Prior Knowledge and Group on Time Efficiency (mean time efficiency for Agent Group and Arrow Group plotted across the Low, Intermediate, and High prior knowledge levels).

CHAPTER IV: CONCLUSIONS AND FUTURE RESEARCH

General Discussion

Research Question 1

Did explicit rule presentation and reading comprehension task have positive effects on learning of English relative clauses?

Explicit rule presentation is used to direct learners' attention to a target linguistic form and consequently to facilitate learning of the target form.
Explicit rule presentation has been criticized by noninterventionists, who argue that natural exposure to large amounts of input is sufficient for L2 learning (Krashen, 1985, 1992, 1993). However, a few empirical studies have shown that explicit rule presentation has a positive impact on L2 learning, especially when it is delivered with relevant examples (Ellis, 1994; Long & Robinson, 1998). The present study adds new evidence that explicit rule presentation has positive effects on the acquisition of English relative clauses when it is given with examples in a meaningful context.

Explicit rule presentation was incorporated in this study as an instructional method to provide learners with explanations of the rules and usages of English relative clauses. In particular, the present study delivered the explicit rule presentation using two multimedia systems (an animated pedagogical agent for Agent Group, and an electronic arrow with voice for Arrow Group) in order to investigate whether the same instructional method could have a different effect on learning when delivered through a different medium.

Learner performances with both multimedia systems were significantly better in the posttests than in the pretests, indicating that explicit rule presentation and the reading comprehension task had positive effects on L2 learning. In particular, the instructional treatment made its biggest impact on the production test, in which learners had to combine two sentences using the target form. A production test is comparable to a far-transfer test in that both require more in-depth knowledge from learners. Yet, the positive effect of the instructional treatment was not statistically significant in the picture interpretation test, where the learners might have experienced a ceiling effect.

Research Question 2

Did the type of medium - an animated pedagogical agent vs. an electronic arrow with voice - delivering the same instructional method (explicit rule presentation) have a differential effect on learning of English relative clauses?

An animated pedagogical agent is an autonomous interface agent that attempts to model the kinds of interaction occurring between a student and a human tutor. A typical agent-based learning environment incorporates not only the knowledge of a human tutor but also behavioral and emotional characteristics of the tutor, such as interactive language, facial expressions, and bodily gestures. These affective, social behaviors and the visual appearance of an animated pedagogical agent have been hypothesized by advocates to render a learning environment more entertaining, to motivate learners, and subsequently to improve performance.

The present study did not find any significant differences between Agent Group and Arrow Group in terms of the gain scores that learners made from the pretests to posttests. In other words, the type of delivery medium did not affect learner performance, which empirically supports Clark's claim that what makes a difference in learning is the instructional method, not the delivery medium (1983, 1994a, 1994b, 2001, 2003). The results further corroborate his argument that an instructional method can be delivered through a variety of media, not necessarily fancy and expensive ones, without sacrificing learner performance.
The findings of the present study did not support the interest theories of motivation, which suggest that learners invest more effort when they are interested in the learning environment and that an animated pedagogical agent would make the learning more interesting. In the present study, learners who interacted with an animated pedagogical agent did not exert more effort, nor did they spend more time processing the instruction, than those who interacted with a simple electronic arrow with voice. And since no significant difference was found between the groups in their gain scores, the interest theory of motivation is not supported, which also implies that a delivery medium might not have any significant differential impact on learner cognition or learner emotion.

The results are not consistent with the Persona Effect either. The Persona Effect is derived from the hypothesis that an animated pedagogical agent makes human-computer interaction social, fosters the learner's interest in learning tasks, and leads learners to work harder. However, learners in the agent and non-agent environments did not differ in the effort that they invested in processing the instruction, and their final performances were not statistically different either. Perhaps the learners in the present study, especially those in the non-agent environment, found their interaction with a computer itself interesting enough to form a social relationship with the computer and thus apply social rules (Erickson, 1997; Moreno et al., 2001; Nass & Steuer, 1993).

Yet, this research also suggests a potential benefit of an animated pedagogical agent. The data revealed that for learners with little prior knowledge, an animated pedagogical agent might be more efficient than a less humanized delivery medium (e.g., an electronic arrow with voice), although the effect size was relatively small. This result somewhat contradicts the propositions of cognitive load theory that extraneous cognitive load, such as the image of an animated agent, could damage learners with low prior knowledge. The rationale behind this claim is that learners with low prior knowledge need more cognitive resources to compensate for their lack of schemas, and the inclusion of animated characters, which could act as seductive details, would hinder their learning process by demanding a share of limited cognitive resources (Jeung et al., 1997).

Research Question 3

Did the type of medium - an animated pedagogical agent or an electronic arrow with voice - delivering the same instructional method (explicit rule presentation) have a differential effect on the amount of learning made at a given unit of time and mental effort?

Cognitive efficiency is defined in the present study as 'the relative amount of learning scores that was made at a given unit of mental effort and/or time with a specific delivery medium'. The cognitive efficiency of each delivery medium adopted in the study was obtained by dividing the gain score of each medium group by the amount of time or mental effort invested by the learners in that group.
It is supposed that a medium A is cognitively more efficient than a medium B if a student using medium A achieved a higher level of learning than a student using medium B who spent the same amount of time and/or mental effort but achieved a lower level of learning. In the present study, it was hypothesized that the electronic arrow with voice would require less time and/or mental effort from participants, or foster higher levels of learning at a given unit of time and/or mental effort, than the animated pedagogical agent, when both media are used to deliver the explicit rule presentation. The results, however, did not support the hypothesis. Rather, no significant difference was found between the animated pedagogical agent and the electronic arrow with voice in terms of the amount of learning made at a given unit of time and/or mental effort. In other words, the animated pedagogical agent and the electronic arrow with voice did not differ on the cognitive efficiency measures.

One interesting finding related to the cognitive efficiency of media is that learners with little prior knowledge achieved a higher level of mental effort efficiency than those with more prior knowledge, meaning that learners with low levels of prior knowledge made the most out of the instruction for a given amount of mental effort. More interestingly, learners who had low levels of prior knowledge and interacted with an animated pedagogical agent gained more scores at a given unit of mental effort than their counterparts who interacted with a simple electronic arrow with voice. That is, the animated pedagogical agent was cognitively more efficient than the electronic arrow with voice for learners with less prior knowledge. However, the medium effect on mental efficiency quickly disappeared as learners had more prior knowledge. Furthermore, none of these effects was found in the time efficiency measures.

It is not clear why the animated pedagogical agent was more efficient with learners with less prior knowledge. It might be that the interestingness of an animated pedagogical agent stimulated learners' cognitive processing at a deeper level, which led to more learning, as some animated agent researchers have insisted (Lester et al., 1997; Mayer et al., 2004). Or, considering Gimino's (2000) finding that Salomon's AIME and Paas' Mental Effort Scale might measure subjects' perceptions of task difficulty rather than the amount of mental effort, participants in Agent Group might have felt the instruction was less difficult than their counterparts in Arrow Group did. If this is the case, it has nothing to do with cognitive efficiency because the measure does not reflect the amount of mental effort invested by learners. Or it might be the case that the efficiency calculation formula, which used the absolute amounts of time, mental effort, and achievement, was not adequate to capture the differences between groups. Remember that no significant difference was found between the groups in the performance measures, time measures, or mental effort measures. It might be necessary to use standardized scores of these measures to compute efficiency scores, as Camp and colleagues did in their study (2001).
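A standardized alternative of the kind alluded to here typically combines z-scores of performance and effort rather than raw values; one commonly used formulation computes efficiency as (zP - zE) divided by the square root of 2. The sketch below illustrates that computation with hypothetical data; whether this exact formula matches the one used by Camp and colleagues is an assumption, not something established in the present text.

```python
# Standardized efficiency: z-score performance and effort across all learners,
# then combine them as E = (zP - zE) / sqrt(2). All data below are hypothetical.
import math
from statistics import mean, stdev

performance = [4, 2, 6, 1, 3, 7, 2, 5]   # e.g., gain scores
effort      = [5, 4, 6, 3, 5, 7, 2, 6]   # e.g., Paas mental effort ratings

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

zP, zE = zscores(performance), zscores(effort)
efficiency = [(p - e) / math.sqrt(2) for p, e in zip(zP, zE)]
print([round(e, 2) for e in efficiency])  # positive = relatively high performance for low effort
```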
It is also not clear why the effect of the animated agent on cognitive efficiency disappeared for learners with higher prior knowledge, despite the fact that there was a negative correlation between the amount of prior knowledge and the amount of mental effort and/or time invested. It could be that high prior knowledge learners experienced a ceiling effect, which prevented them from gaining more from the pretests to posttests.

Research Question 4

What was the relationship between learner interest in the system and the subsequent learning of English relative clauses in an agent-based learning environment?

The rationale behind the claim that animated pedagogical agents may improve learning outcomes is as follows: animated pedagogical agents' positive impact on learners' motivation and perceived experience of interaction (through human-like characteristics) may motivate learners to interact more and to stay longer in the instructional system, and because of the increased motivation and interaction the learners' performance will improve. Here, a causal relationship is assumed between an animated pedagogical agent and learner performance. In particular, advocates of animated pedagogical agents insist that the effects of an animated pedagogical agent would be greater than those of non-humanized tutoring systems.

The present study, however, did not find a cause-and-effect relationship, let alone simple correlations, between an animated pedagogical agent or delivery medium and learner performance, including both cognitive products and processing. In detail, the results showed that learners who interacted with an animated pedagogical agent did not find their learning environment more interesting or motivating compared to their counterparts in the non-agent-based environment. And more importantly, their cognitive products and processing did not exceed their counterparts' products and processing. The results of the study also show that the interestingness of the explicit rule presentation and the content of the lesson, regardless of delivery medium, seem more related to learner performance. Again, the results confirm that the delivery medium does not really cause learning, but the instructional method does. Furthermore, learners' interest in the instructional system, including the learning guide, lesson, and some miscellaneous features, was not a significant predictor of learner performance. Taken together, this research suggests that there is no correlational or causal relationship between participants' interest in the learning environment and their academic achievement.

Conclusions

With the advancement of multimedia and information technology, a growing number of instructional programs now include multimedia elements in their instructional presentations in an attempt to improve their pedagogical effects. Educators at every level are also encouraged to utilize multimedia in their instructional practices. One of the 'hot' multimedia elements currently discussed to a great extent in the field of instructional technology is animated pedagogical agents. It is clear that there exists great enthusiasm about the potential of animated pedagogical agents, in particular their entertaining and engaging features.
On the other hand, several educational researchers insist that the adaptive functionality of an instructional program is sufficient to increase learning scores without advanced multimedia technology. In other words, as long as the system is equipped with the functions required to achieve specific learning objectives, impressive but expensive technological components are not required to achieve those objectives (Clark & Choi, in press; Erickson, 1997). Educational economists, along with educational researchers, also argue that instructional designers should think about the costs and benefits (both cognitive and economic) of including expensive multimedia elements (Levin & McEwan, 2001), that is, the efficiency of instructional media.

The present study set out to find the relative effects of an animated pedagogical agent, when used in multimedia-based learning programs, compared to an alternative multimedia system (i.e., a simple electronic arrow with voice). The results from this study indicate that an animated pedagogical agent is not more effective than a simpler multimedia system in motivating learners and improving learner performance when they deliver the same instruction. The results are consistent with those of other agent studies (e.g., van Mulken et al., 1998; Craig et al., 2002; Craig, Driscoll, & Gholson, 2004). At the same time, this research provides counterevidence for the claims made by advocates of animated pedagogical agents that such agents can induce learners to work harder and perform better than simpler multimedia systems. Rather, the present study demonstrates that what causes learning is the instructional method, not the delivery medium.

Of particular interest to the present study was the cognitive efficiency of multimedia systems. The cognitive efficiency of multimedia is based on the hypothesis that a specific medium through which instruction is presented to learners may not produce different cognitive outcomes compared to another, but it may affect the cognitive processes by which different learners, with different prior knowledge, process the information with less or more mental effort and time investment. In other words, when two multimedia systems have the same effect on cognitive products, cognitive efficiency should be brought into the equation when designing and selecting the most optimized system. Even though the present study did not provide empirical evidence that animated pedagogical agents produce different levels of cognitive efficiency from simpler multimedia systems, it showed that learners with little prior knowledge were able to produce higher learning scores at a given amount of mental effort compared to their counterparts using the alternative multimedia system. This result is important because it verifies the role of learners and their knowledge, and provides support for those who insist that multimedia instruction should take the learner factor into consideration in its design. It also offers some guidance to educators for selecting instructional technology appropriate for their target student population. Nevertheless, due to the questionable validity of the self-report mental effort measures used in the study, this result should be interpreted with great caution.
Limitations and Future Research

One major limitation of the present study is that many of the participating students had intermediate to high levels of prior knowledge, which might have undermined the effects of the instructional methods employed, explicit rule presentation and a reading comprehension task. In particular, it is possible that the learners with more prior knowledge experienced a ceiling effect and that their low gain scores from the pretests to the posttests, and correspondingly low cognitive efficiency, affected the overall results of the study. A related limitation is that many of the participants were from East Asian countries known for an almost exclusive focus on grammar when teaching English and for extensive use of advanced technology in everyday life. This might have decreased the effect of the instruction. Additionally, the multimedia systems used in the present study might not have interested these participants much because they were already familiar with advanced multimedia technology, which is, at the same time, consistent with a novelty effect of instructional technology. Therefore, future research should be designed to include more diverse subject pools whose members do not have too much prior knowledge and who possibly come from several different cultures.

Another major limitation lies in the fact that the instructional intervention was relatively short. The study was conducted over a two-week period, and the participants interacted with the system only once. As discussed above, almost every agent study has been short term, and it is not clear what effects animated pedagogical agents will have on learner interest and performance, given that previous instructional technology studies found a novelty effect when they were conducted over short periods of time. Even though the animated pedagogical agent did not yield higher learner scores or motivation in this study, it is still crucial to investigate the long-term effect. Future research can also explore the effects of multimedia systems when they are used as part of a regular curriculum. Most media studies, including the present one, have examined multimedia systems in isolation. However, considering that an increasing number of educators and schools are trying to incorporate multimedia into existing curricula and instruction, it will be beneficial to study the effects of integrated media systems.

The mental effort measures also posed a great challenge. Despite the high correlation between the two measures, Salomon's AIME and Paas' Mental Effort Scale, and the good reliability of both, neither predicted learner performance. The mental effort measures also had no significant association with the amount of time that learners invested in processing the instruction or with the levels of learner motivation. Taken together, these results might indicate that the two mental effort scales did not measure what the study intended; in other words, they may not have good construct validity. Thus, future research should consider adopting a more valid as well as reliable measure for assessing the amount of mental effort exerted by learners during instruction.
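A more direct alternative is a dual-task measure, in which learners respond to a simple secondary probe while studying and slower probe responses are taken to indicate a heavier load on working memory. The following is a minimal sketch of such an index; the reaction times are hypothetical and are not data from the present study.

```python
# Minimal sketch of a secondary-task reaction-time index of mental effort:
# the slowdown of probe responses during learning, relative to a baseline
# block with no learning task, is read as an indicator of cognitive load.
# All reaction times below are hypothetical.
from statistics import mean

baseline_rt_ms = [310, 295, 320, 305]   # probe responses with no primary task
on_task_rt_ms  = [455, 430, 480, 440]   # probe responses while studying the lesson

load_index_ms = mean(on_task_rt_ms) - mean(baseline_rt_ms)
print(f"Secondary-task reaction-time cost: {load_index_ms:.0f} ms")
```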
For instance, a dual-task methodology could give a more accurate assessment of mental effort (Brunken, Steinbacher, Plass, & Leutner, 2002), although it should be ensured that the secondary task does not itself impose unnecessary cognitive load on learners' limited working memory.

A final limitation of the study concerns the nature of the subject matter and the instructional materials. Dehn and van Mulken (2000) maintained that the persona effect of an animated pedagogical agent is domain-specific and that such an agent can improve human-computer interaction if it displays functional behaviors matching the system's purposes. It is clear that the delivery systems used in this study had the functions necessary for delivering explicit rule presentation (e.g., providing verbal explanations and pointing to important aspects or example sentences on the computer screen). However, given that animated pedagogical agents can have additional functions that might be more appropriate for other subject matters (Table 1), it is still not clear whether the use of an animated pedagogical agent was a good choice for teaching a linguistic structure, let alone for delivering the chosen instructional method. Nevertheless, it should be noted that whatever functions an animated pedagogical agent uses to deliver an instructional method, the same method can be delivered through a variety of other multimedia systems that may be less expensive and less time-consuming to develop.

In conclusion, the present study provided empirical evidence that the behaviors of an animated pedagogical agent can be replaced by simpler means of communication that do not require an embodied character, without forfeiting learner performance. It also shed light on the issue of the cognitive efficiency of multimedia instructional systems by comparing two multimedia systems not only on the cognitive products but also on the cognitive processing they generated. The results confirm the premise of cognitive efficiency that one medium might be more or less likely to succeed with a particular learner: in this study, learners with little prior knowledge tended to achieve higher levels of cognitive efficiency when they interacted with an animated pedagogical agent. Finally, the study highlighted the need for better measures of mental effort. Without reliable and valid measures of mental effort, it will not be possible to calculate accurate cognitive efficiency scores for multimedia, which in turn affects the selection of the multimedia system appropriate for a particular group of learners and a given learning objective.

REFERENCES

Ainley, M., Hidi, S., & Berndorff, D. (2002). Interest, learning, and the psychological processes that mediate their relationship. Journal of Educational Psychology, 94(3), 545-561.

Alanen, R. (1995). Input enhancement and rule presentation in second language acquisition. In R. Schmidt (Ed.), Attention and awareness in foreign language learning (Tech. Rep. No. 9, pp. 259-302). Honolulu, HI: University of Hawaii Press.

Andre, E., Rist, T., & Muller, J. (1999). Employing AI methods to control the behavior of animated interface agents. Applied Artificial Intelligence, 13, 415-448.

Atkinson, R. K. (2002). Optimizing learning from examples using animated pedagogical agents. Journal of Educational Psychology, 94(2), 416-427.

Baddeley, A. D.
(1992). Working memory. Science, 255, 556-559. Bandura, A. (1986). Social foundations o f thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall. Beentjes, J. W. J. (1989). Learning from television and books: A Dutch replication study based on Salomon's Model. ETR & D, 37(2), 47-58. BellCraft Technologies (2004). Microsoft Agent Scripting Helper Version 6.5. Bobis, J., Sweller, J., & Cooper, M. (1993). Cognitive load effects in a primary- school geometry task. Learning and Instruction, 3, 1-21. Bong, M., & Clark, R. E. (1999). Comparison between self-concept and self-efficacy in academic motivation research. Educational Psychologist, 34, 139-153. Bradshaw, J. M. (1997). Software agents. Cambridge, MA, MIT Press. Brunken, R., Plass, J., & Leutner D. (2003). Direct measurement of cognitive load in multimedia learning. Educational Psychologist, 38(1), 53-61. Brunken, R., Steinbacher, S., Plass, J., & Leutner. D. (2002). Assessment of cognitive load in multimedia learning using dual-task methodology. Experimental Psychology, 49(2), 109-19. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 185 Bygate, M., Skehan, P. & Swain, M. (2001). Researching pedagogic tasks: Second language learning, teaching and testing. London, UK: Longman. Camp, G.., Paas, F., Rikers, R., & van Merrienboer, J. J. G.. (2001). Dynamic problem selection in air traffic control training: A comparison between performance, mental effort, and mental efficiency. Computers in Human Behavior, 17, 575-595. Carr, T. H., & Curran, T. (1994). Cognitive factors in learning about structured sequences: Applications to syntax. Studies in Second Language Acquisition, 16, 205- 230. Casali, J. G. W., Wierwille, W. W., & Cordes, R. E. (1983). A comparison of rating scale, secondary task, physiological and primary-task workload estimation techniques in a simulated flight task emphasizing communication load. Human Factors, 25(6), 623-641. Cassell, J., & Thorisson, K. R. (1999). The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents. Applied Artificial Intelligence, 13, 519-538. Cennamo, K. S. (1992). Students' perceptions of the ease of learning from computers and interactive video: An exploratory study. Journal o f Educational System, 21, 251 - 263. Cennamo, K. S. (1993). Learning from video, Factors influencing learners' perceptions and invested mental effort. Educational Technology Research and Development, 41(3), 33-45. Cennamo, K. S. (1996). The effect o f relevance on mental effort. ERIC document reproduction service No. ED397 783. Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of introduction. Cognition and Instruction, 8, 293-332. Clark, R. E. (1983). Reconsidering research on learning from media. Review o f Educational Research, 53(4), 445-459. Clark, R. E. (1994a). Media and method. Educational Technology Research & Development, 42(3), 7-10. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 186 Clark, R. E. (1994b). Media will never influence learning. Educational Technology Research & Development, 42(2), 21-29. Clark, R. E. (1998). Cognitive efficiency research on media: A rejoinder to Cobb. An unpublished manuscript. Clark, R. E. (1999). Yin and Yang Cognitive Motivational Process Operating in Multimedia Learning Environment. In J. Van Merrienboer (Ed.), Cognition and Multimedia Design. Herleen, Netherlands: Open University Press. 
Clark, R. E. (2001). Learning from media: Arguments, analysis and evidence. Greenwich, CT: Information Age Publishers. Clark, R. E. (2003), Research on web-based learning: A half-full glass. In Bruning, R., Horn, C. and PytlikZillig, L. (Eds.), Web-Based Learning: Where do we know? Where do we go? Greenwich, CT: Information Age Publishers. Clark, R. E., & Choi, S. (In press). Five principles for the design o f experiments on the effects of animated pedagogical agents. Journal o f Research on Educational Computing. Cobb, T. (1997). Cognitive Efficiency, Toward a revised theory of media. Educational Technology Research & Development, 45(4), 21-35. Cohen, J. (1988). Statistical power analysis fo r the behavioral sciences (2n d Ed.). Hillsdale, NJ: Lawrence Erlbaum Associates. Craig, S., Driscoll, D. M., & Gholson, B. (2004). Constructing knowledge from dialog in an intelligent tutoring system: Interactive learning, vicarious learning, and pedagogical agents. Journal o f Educational Multimedia and Hypermedia, 13(2), 163-183. Craig, S. D., Gholson, B., & Driscoll, D. M. (2002). Animated pedagogical agents in multimedia educational environments: Effects of agent properties, picture features and redundancy. Journal o f Educational Psychology, 94(2), 428-434. Curran, T., & Keele, S. W. (1993). Attentional and nonattentional forms o f sequence learning. Journal o f Experimental Psychology, Learning, Memory and Cognition, 19, 189-202. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 187 Dalgamo, B. (2001). Interpretations of constructivism and consequences for computer assisted learning. British Journal o f Educational Technology, 32(2), 183- 194. DeKeyser, R. (1995). Learning second language grammar rules: An experiment with a miniature linguistic system. Studies in Second Language Acquisition, 77(3), 379- 410. DeKeyser, R. (1998). Beyond focus on form: Cognitive perspectives on learning and practicing second language grammar. In C. Doughty and J. Williams (Eds.), Focus on Form in Classroom Second Language Acquisition (pp. 42-63), Cambridge, MA: Cambridge University Press. Dehn, D. M., & van Mulken, S. (2000). The impact of animated interface agents: a review of empirical research. International Journal o f Human-Computer Studies, 52, 1- 22. Doughty, C. (1988). The effect o f instruction on the acquisition o f relativization in English as a Second Language. Unpublished doctoral dissertation, University of Pennsylvania. Philadelphia, PA. Doughty, C. (1991). Second language does make a difference, Evidence from an empirical study of SL relativization. Studies in Second Language Acquisition, 13, 431-469. Doughty, C., & Williams, J. (1998). Pedagogical choices in focus on form. In C. Doughty and J. White (Eds.), Focus on form in classroom second language acquisition (pp. 197-261). New York, NY: Cambridge University Press. Dulany, D. E. (1991). Conscious representations and thought systems. Advances in social cognition (Vol. 4). R. Wuer, J., & T. Srull. Hillsdale, NJ, Erlbaum. Dweck, C. S. (1989). Motivation. In A. R. G. Lesgold (Ed.), Foundations fo r a Psychology o f Education. Hillsdale, New Jersey: Lawrence Erlbaum Associates. Eckman, F., Bell, L., & Nelson, D. (1988). On the generalization o f relative clause instruction in the acquisition of English as a second language. Applied Linguistics, 9, 1- 2 0 . Eckstut-Didier, S. (1994). Finishing touches: A complete high intermediate course in English. London: Prentice Hall International (UK) Limited. 
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 188 Ellis, N., Ed. (1994). Implicit and explicit learning o f languages. London, Academic Press. Ellis, R. (2003). Task-based language learning and teaching. Oxford: Oxford University Press. Erickson, T. (1997). Designing agents as if people mattered. In J. M. Bradshaw (Ed.), Software Agents (pp. 79-96). Menlo Park, CA: MIT Press. Ericsson, K. A., & Chamess, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49, 725-747. Ericsson, K. A., Simon, H. A. (1993). Protocol analysis: Verbal reports as data. Cambridge: MIT Press. Evers, M., & Nijholt, A. (2000, October). Jacob on animated instruction agent in virtual reality. Paper presented at the 3rd International Conference on Multimodal Interaction, Beijing, China. Fisher, S. L., & Ford, J. K. (1998). Differential effects of learner effort and goal orientation on two learning outcomes. Personnel Psychology, 51(2), 397-420. Flad, J. A. (2002). The effects o f increasing cognitive load on self-report and dual task measures o f mental effort during problem solving. Unpublished doctoral dissertation, University of Southern California. Los Angeles, California. Gamer, R., Brown, R., Sanders, S., & Menke, D. (1992). Seductive details and learning from text. In K. A. Renninger, S. Hidi, & A. Krapp (Eds.), The role o f interest in learning and development (pp. 239-254). Hillsdale, NJ: Erlbaum. Gass, S. (1980). An investigation of syntactic transfer in adult second language learners. In R. Scarcella & S. Krashen (Eds.), Research in second language acquisition: Selected papers from the Los Angles Second Language Acquisition Research Forum. Rowley, Mass.: Newbury House. Gass, S. (1982). From theory to practice. In M. Hynes & W. Rutherford (Eds.), On TESOL ’ 81: Selected papers from the fifteenth annual conference o f Teachers o f English to Speakers o f Other Languages (pp. 129-139). Washington, DC: TESOL. Gass, S., Svetics, I., & Lemelin, S. (2003). Differential effects of attention. Language Learning, 53(3), 497- 545. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 189 Gimino, A. E. (2000). Factors that influence students' investment o f mental effort in academic tasks: A validation and exploratory study. Unpublished doctoral dissertation, University of Southern California. Los Angeles, CA. Givon, T. (1985). Function, structures, and language acquisition. In D. Slobin (Ed.), The crosslinguistic study o f language acquisition: Vol. 1 (pp. 1008-1025). Hillsdale, NJ: Erlbaum. Graesser, A. C., Wiemer-Hastings, K., Wiemer-Hastings, P., & Kreuz, R. (1999). AutoTutor: A simulation of a human tutor. Journal o f Cognitive Systems Research, 1, 35-51. Harp, S. F., & Mayer, R. E. (1997). The role of interest in learning from scientific text and illustrations: On the distinction between emotional interest and cognitive interest. Journal o f Educational Psychology, 89{ 1), 92-102. Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage, A theory o f cognitive interest in science learning. Journal o f Educational Psychology, 90, 414- 434. Harley, B. (1992). Patterns of second language development in French immersion. Journal o f French Language Studies, 2, 159-183. Harley, B., & Swain, M. (1984). The interlanguage o f immersion and its implications for second language teaching. In A. Davies, C. Criper, & A.P.R. Howatt (Eds.), Interlanguage (pp. 291-311). 
Edinburgh: Edinburgh University Press. Hidi, S., & Anderson, V. (1992). Situational interest and its impact on reading and expository writing. In K., A. Renninger, S. Hidi, and A. Krapp (Eds.), The Role o f Interest in Learning and Development (pp. 215 - 238), Hillsdale, NJ: Lawrence Erlbaum Associates. Izumi, S. (2000). Promoting noticing and SLA: An empirical study o f the effects o f output and input enhancement on ESL relativization. Unpublished doctoral dissertation, Georgetown University. Washington, DC. Izumi, S. (2002). Output, input enhancement, and the noticing hypothesis: An experimental study on ESL relativization. Studies in Second Language Acquisition, 24, 541-577. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 190 Jeung, H., Chandler, P., & Sweller, J. (1997). The role of visual indicators in dual sensory mode instruction. Educational Psychology, 77(3), 329-343. Johnson, W. L., Rickel, J. W., & Lester, J. C. (2000). Animated pedagogical agents, Face-to-face interaction in interactive learning environments. International Journal o f Artificial Intelligence in Education, 11, 47-78. Joo, Y., Bong, M., & Choi, H. (2000). Self-efficacy for self-regulated learning, academic self-efficacy and internet self-efficacy in web-based instruction. Educational Technology Research & Development, 48(2), 5-17. Jourdenais, R., Ota, M., Stauffer, S., Boyson, B., & Doughty, C. (1995). Does textual enhancement promote noticing? A think-aloud protocol analysis. In R. Schmidt (Ed.), Attention and awareness in foreign language learning. Honolulu, Hawaii, University of Hawaii Press. Kalyuga, S., Chandler, P., & Sweller, J. (1998). Levels of expertise and instructional design. Human Factors, 40( 1), 1-17. Kalyuga, S., Chandler, P., & Sweller, J (1999). Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13, 351-371. Kalyuga, S., Chandler, P., & Sweller, J (2000). Incorporating learner experience into the design of multimedia instruction. Journal o f Educational Psychology, 92(1), 126- 136. Kimmer, H., & Deek, F. (1996). Instructional technology: A tool or a panacea? Journal o f Science Education and Technology, 5(1), 87-92. Kormos, J. (2000). The timing of self-repairs in second language speech production. Studies in Second Language Acquisition, 22, 145-167. Kotovosky, K. H., J. R., & Simon, H. A. (1985). Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 17, 248-294. Kozma, R.B. (1991). Learning with media. Review o f Educational Research, 61 (2), 179-211. Kozma, R.B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42 (2), 7-19. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 191 Krapp, A., Hidi, S., & Renninger, A. (1992). Interest, learning, and development. In K. A. Renninger, S. Hidi, & A. Krapp (Eds.), The role o f interest in learning and development (pp. 3-25). Hillsdale, NJ: Erlbaum. Krashen, S. (1985). The input hypothesis: Issues and Implications. New York: Longman. Krashen, S. (1992). Formal grammar instruction: Another educator comments... TESOL Quarterly, 26, 409-411. Krashen, S. (1993). The effect of formal grammar teaching: Still peripheral. TESOL Quarterly, 27, 722-725. Kroetz, A. W. (1999). The role o f intelligent agency in synthetic instructor and human student dialogue. Unpublished doctoral dissertation. 
University of Southern California, Los Angeles, CA. Leow, R. (1997). Attention, awareness, and foreign language behavior. Language Learning, 47(3), 467-505. Leow, R. (2000). A study o f the role of awareness in foreign language behavior: Awareness versus unaware learners. Studies in Second Language Acquisition, 22, 557-584. Leow, R., & Morgan-Short, K. (2004). To think aloud or not to think aloud. Studies in Second Language Acquisition, 26, 35-57. Lester, J. C., Converse, S. A., Kahler, S. E., Barlow, S. T., Stone, B. A., & Bhogal, R. S. (1997, March). The persona effect, Affective impact o f animated pedagogical agents. Proceedings of Computer-Human Interaction '97 (pp. 359-366), Atlanta. Lester, J. C., Converse, S. A., Stone, B., Khaler, S., & Barlow, T. (1997). Animated Pedagogical Agents and Problem-Solving Effectiveness: A Large-Scale Empirical Evaluation. Proceedings of the 8th World Conference on Artificial Intelligence in Education (pp. 23-30), Kobe, Japan. Lester, J. C., Stone, B., & Stelling, G. (1999). Lifelike pedagogical agents for mixed- initiative problem solving in constructivist learning environments. User Modeling and User-Adapted Interaction, 9(1-2), 1-44. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 192 Lester, J. C., Voerman, J., Towns, S., & Callaway, C. (1997). Cosmo, A life-like animated pedagogical agent with deictic believability. IJCAI '97 Workshop on Animated Interface Agents, Making them intelligent, Nagoya, Japan. Lester, J. C., Zettlemoyer, L., Gregoire, J., & Bares, W. (1999). Explanatory lifelike avatars: performing user-centered tasks in 3D learning environments. Proceedings of the Third International Conference on Autonomous Agents (pp. 24-31), Seattle, Washington. Levin, H. M., & McEwan, P. M. (2001). Cost-effectiveness analysis: Methods and applications (2n d ed.). Thousand oaks, CA: Sage. Lock, G. (1996). Functional English Grammar: An introduction fo r second language teachers. Cambridge: Cambridge University Press. Long, M. H. (1983). Does second language instruction make a difference? TESOL Quarterly, 17(3), 359-382. Long, M. H., & Robinson, P. (1998). Focus on form: Theory, research, and practice. In C. Doughty and J. White (Eds.), Focus on form in classroom second language acquisition (pp. 197-261). New York, NY: Cambridge University Press. Lowe, J. (2002). Computer-based education: Is it a panacea? Journal o f Research on Technology in Education, 34(2), 163-171. Marcus, N., Cooper, M. & Sweller, J. (1996) Understanding instructions, Journal o f Educational Psychology 88: 49— 63. Mayer, R. E. (1997). Multimedia learning: Are we asking the right questions. Educational Psychology Review, 8, 357-371. Mayer, R. E. (2001). Multimedia Learning. New York: Cambridge University Press. Mayer, R. E., Fennel, S., Farmer, S., & Campbell, J. (2004). A personalization effect in multimedia learning: Students learn better when words are in conversational style rather than formal style. Journal o f Educational Psychology, 96(2), 389-395. Mayer, R. E., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia learning: When presenting more materials results in less understanding. Journal o f Educational Psychology, 93(1), 187-198. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 193 Mayer, R. E., & Moreno, R. (1998). A Split-attention effect in multimedia learning: Evidence for dual processing systems in working memory. 
Journal o f Educational Psychology, 90, 312-320. Mayer, R. E., Moreno, R., Boire, M., & Vagge, S. (1999). Maximizing constructivist learning from multimedia communications by minimizing cognitive load. Journal o f Educational Psychology, 91(4), 638-643. Microsoft Corporation (2003). Microsoft Office FrontPage 2003. Microsoft Corporation (2003). Microsoft Office PowerPoint 2003. Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal o f Educational Psychology, 91, 358-368. Moreno, R., & Mayer, R. E. (2000a). A coherence effect in multimedia learning: The case for minimizing irrelevant sounds in the design of multimedia instructional messages. Journal o f Educational Psychology, 97, 117-125. Moreno, R., & Mayer, R. E. (2000b). A learner-centered approach to multimedia explanations: Deriving instructional design principles from cognitive theory. Interactive Multimedia Electronic Journal o f Computer-Enhanced Learning, 2(2). Retrieved April 17, 2002, from http://imej.wfu.edU/articles/2000/2/05/index.asp. Moreno, R., Mayer, R. E., & Lester, J. C. (2000). Life-like pedagogical agents in constructivist multimedia environments, Cognitive consequences o f their interaction. The World Conference on Educational Multimedia, Hypermedia, and Telecommunications (ED-MEDIA), Montreal, Canada. Moreno, R., Mayer, R. E., Spires, H. A., & Lester, J. C. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19(2), 177- 213. Mousavi, S. Y., Low R., & Sweller, J. (1995). Reducing cognitive load by mixing auditory and visual presentation modes. Journal o f Educational Psychology, 87(2), 319-334. Nass, C., & Steuer, J. (1993). Anthropomorphism, agency, and thopoeia: Computers as social actors. Human Communication Research, 19(4), 504-527. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 194 Nisbett, R., & Wilson, T. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259. Norris, J. M., & Ortega, L. (2000). Effectiveness of L2 instruction: A research synthesis and quantitative meta-analysis. Language Learning, 50(3), 417-528. Paas, F. (1992). Training strategies for attaining transfer of problem-solving skills in statistics: A cognitive-load approach. Journal o f Educational Psychology, 84, 429- 343. Paas, F., Tuovinen, J. E., Tabbers, FL, & van Gerven, P. W. M. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 35(1), 63-71. Paas, F., & van Merrienboer, J. J. G. (1994). Instructional control of cognitive load in the training of complex cognitive tasks. Educational Psychology Review, 6, 51-71. Paas, F., van Merrienboer, J.J.G. & Adam, J.J. (1994) Measurement of cognitive load in instructional research, Perceptual and Motor Skills 79: 419-430. Pintrich, P. R., & Schunk, D. H. (2002). Motivation in Education: Theory, Research, and Applications (2n d Ed.). Columbus, OH: Merrill Prentice Hall. Posner, M. I., & Snyder, R. R. (1975). Facilitation and inhibition in the processing of signals. In P. M. A. Rabbitt & S. Domic (Eds.), Attention and performance V (pp. 669-682). New York: Academic Press. Reber, A. S. (1976). Implicit learning of synthetic languages: The role of instructional set. Journal o f Experimental Psychology, Human Learning and Memory, 2, 88-94. Reber, A. S. 
(1989). Implicit learning and tacit knowledge. Journal o f Experimental Psychology, General, 118, 219-235. Reber, A. S. (1993). Implicit Learning and Tacit Knowledge: An essay in the cognitive unconscious. Oxford: Oxford University Press. Richards, J. C., & Rodgers, T. S. (1998). Approaches and methods in language teaching: A description and analysis. New York, Cambridge University Press. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 195 Reeves, B., & Nass, C. (1996). The Media Equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press. Robinson, P. (1995). Attention, memory, and the ‘Noticing Hypothesis’. Language Learning, 45, 283-331. Robinson, P. (1996). Learning simple and complex second language rules under implicit, incidental, rule search, and instructed conditions. Studies in Second Language Acquisition, 18, 27-68. Robinson, P. (1997a). Generalizability and automaticity of second language learning under implicit, incidental, enhanced, and instructed conditions. Studies in Second Language Acquisition, 19, 223-247. Robinson, P. (1997b). Individual differences and the fundamental similarity of implicit and explicit adult second language learning. Language Learning, 47, 45-99. Rosa, E., & O'Neill, M. (1999). Explicitness, intake, and the issue of awareness. Studies in Second Language Acquisition, 21, 511-556. Roth, F. (1984). Accelerating language learning in young children. Journal o f Child Language, II, 89-107. Salomon, G. (1983). The differential investment of mental effort in learning from different sources. Educational Psychologist, 18( 1), 42-50. Salomon, G. (1984). Television is easy and print is tough: The differential investment of mental effort in learning as a function of perceptions and attributions. Journal o f Educational Psychology, 76(4), 647-658. Salomon, G., & Leigh, T. (1984). Predispositions about learning from print and television. Journal o f Communication, 34, 119-125. Sampson, D., Karagiannidis, C., & Kinshuk (2002). "Personalised learning: Educationla, technological and standardisation perspective." Interactive Educational Multimedia, 4, 24-39. Schmidt, R. (1990). The role of consciousness in second language learning. Applied Linguistics, 11, 206-226. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 196 Schmidt, R. (1995). Consciousness and foreign language learning: A tutorial on the role of attention and awareness in learning. In R. Schmidt (Ed.), Attention and awareness in foreign language learning. Honolulu, Hawaii, University of Hawaii Press. Schmidt, R. (2001). Attention. In P. Robinson (Ed.), Cognition and second language instruction (pp. 3-32). New York: Cambridge University Press. Shaw, E., Johnson, W. L., & Ganeshan, R. (1999, May). Pedagogical agents on the web. Paper presented at the 3rd international conference on autonomous agents, Seattle, WA. Shiffrin, R., & Schneider, W. (1977). Controlled and automatic human information processing, II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127-190. Skehan, P. (2003). Task-based instruction. Language Teaching, 36, 1-14. Skehan, P., & Foster, P. (2001). Cognition and tasks. In P. Robinson (Ed.), Cognition and Second Language Instruction (pp. 183-205). Cambridge, UK: Cambridge University Press. Spada, N. (1997). 
Form-focused instruction and second language acquisition: A review of classroom and laboratory research. Language Teaching, 29, 1-15. Sweller, J. (1999). Instructional design in technical areas. Camberwell, Australia: ACER Press. Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12, 185-233. Sweller, J., Cooper, G. A., Tierney, P., & Cooper, M. (1990). Cognitive load and selective attention as factors in the structuring of technical material. Journal o f Experimental Psychology, General, 119, 176-192. Sweller, J., van Marrienboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, 251-296. Tindall-Ford, S., Chandler, P., & Sweller, J. (1997). When two sensory modes are better than one. Journal o f Experimental Psychology: Applied, 3, 257-287. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 197 Tomlin, R., & Villa, V. (1994). Attention in cognitive science and second language acquisition. Studies in Second Language Acquisition, 16, 183-203. Ullmer, E. J. (1994). Media and learning: Are there two kinds of truth? Educational Technology Research and Development, 42 (1), 21-32. van Gerven, P. W. M., Paas, F. G. W. C., & Schmidt, H. G. (2000). Cognitive load theory and the acquisition of complex cognitive skills in the elderly: Towards an integrative framework. Educational Gerontology, 26, 503-521. van Mulken, S., Andre, E., & Muller, J. (1998). The person effect: How substantial is it? In H. Johnson, L. Nigay & C. Roast (Eds.), People and Computers XIII: Proceedings o f HCE98, pp. 53-66. Berlin: Springer. Ward, M., & Sweller, J. (1990). Structuring effective worked examples. Cognition and Instruction, 7, 1-39. Wertsch, J., & Bivens, J. A. (1992). The social origins of individual mental functioning, Alternatives and perspectives. The Quarterly Newsletter of the Laboratory of Comparative Human Cognition, 14, 35-44. Wierwille, W. W., Rahimi, M., & Casali, J. G. (1985). Evaluation of 16 measures of mental workload using a simulated flight task emphasizing mediational activity. Human Factors, 27, 489-502. White, J. (1998). Getting the learners' attention: A typographical input enhancement study. In C. Doughty and J. White (Eds.), Focus on form in classroom second language acquisition (pp.85-113). New York, NY: Cambridge University Press. Williams, J. (1999). Memory, attention, and inductive learning. Studies in Second Language Acquisition, 21, 1-48. Wolfe-Quintero, K. (1992). Leamability and the acquisition o f extraction in relative clauses and Wh-questions. Studies in Second Language Acquisition, 14, 39-70. Zobl, H. (1983). Markedness and the projection problem. Language Learning, 33, 292-313. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 198 APPENDIX A Reading Comprehension Task Reading Wizard Computer-Based Reading Comprehension Program Story I: Phobias - Strange but Simple, Terrible but Curable Section 1 What is phobia? For the phobic person however, the intensity of terror can be depressing and horrific. When you have a phobia, it shows in your mind and body. Symptoms of a phobia include powerful heartbeats, tensions and pains in muscles, inability to relax, and dizziness. These symptoms show only in particular situation. For example, the height phobic, a person who is afraid of height, is fine on the ground floor, and the dog phobic is okay away from dogs. 
A phobia may have its root in a previous experience. It may be caused by a movie which you saw before. Or it could even be a scary story which your parents told you when you were young. Some phobias are associated with repetitive obsessive thoughts which intrude into minds. For example, some people cannot J0 ». * * A phobia is an intense fear of an object, situation, or even a thought which other people might not be frightened of. If you are terrified even by a little sleeping dog, then you definitely have a phobia. Because phobias can be triggered by anything, such as loud noise, belly buttons and goldfish, they sometimes seem ridiculous, absurd or even funny to other people. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 199 leave the house because they are afraid that the stove might burst into flames. So each time they leave home, they might have a urge to check the stove, not once, but several times. The most common strategy which phobic people take to deal with phobias is avoidance. That is, they try to avoid frightening things. However, it is surprisingly easy and painless to cure phobias. In fact, among all the mental illnesses, the phobia is the easiest thing to cure. Phobic people often cope with a phobia well if they recognize what causes it and that it will not last. If the symptoms persist, though, professional help is often sought such as psychotherapy. Please answer the following questions. Question 1. What do you think is the most common phobia among people? ^ 1) Snake Phobia ^ 2) Height Phobia ^ 3) Spider Phobia ^ 4) Water Phobia Question 2. What is the general attitude toward phobia in your country? a serious mental illness and it should be treated by a doctor, not a serious illness but a phobic person still needs to see a doctor, an illusion and a phobic person has to overcome it. an illusion and a phobic person does not have to do anything about it. c 1) It is 2) It is 3) It is 4) It is Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 200 Question 3. What are you afraid of most? ^ 1) Scary animals ^ 2) Standing on a high building or bridge ^ 3) Watching a scary movie ^ 4) Being in a small place Question 4. Do you think phobic people should seek professional help? ^ 1) Definitely ^ 2) Maybe ^ 3) Absolutely not ^ 4) Doesn't matter Submit Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 201 Reading Wizard Computer-Based Reading Comprehension Program Story II: I can't sing! - Stage Phobia Section 1 Stage Phobia Have you ever had to speak or sing in front of a large group, experiencing a feeling which you just could not go through - pains in your chest, and sweating all over your body? Do you think such experiences only happen to ordinary people like you? And it never happens to a professional performer, maybe your favorite singer or actor who is always in front of a huge audience? Well, stage phobia, also called performance anxiety, is a common symptom which, in fact, many professional performers suffer from. Here is a story about a popular singer who had stage phobia. * Debbie was a popular singer and many people enjoyed listening to her music. Although to her audiences she looked relaxed and confident, she felt awful inside. She was one of those performers who were overwhelmed by stage phobia. Stage phobia happened to her when a concert was to begin and even in the middle of her performance. 
In fact, as she became more famous, her stage phobia got worse. Panic attacks on stage were something which she was experiencing even when she was a child. Debbie was a very shy kid and had a stuttering problem which her sister made fun of her for. So she always got a non-speaking role in family plays. Debbie also thought that she was ugly and stupid. On the other hand, she thought that her sister was beautiful and smart. She had no self-confidence at all. All these feelings just became stronger as the years went by until she finally cracked up at a Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 202 concert sever years ago. Debbie was also having several problems with her life at that time. One problem which she was going through was her marriage. Debbie and her husband, Ben, had a lot of issues which they did not agreed on. Even worse, Ben was having an affair with another woman who Debbie was also a friend with. Another issue which she was worried about was her daughter's health. Her daughter who then was 7 years old had just undergone a serious surgery. In addition to all these family problems, there was her singing career which she had to take care of. Please answer the following question. Question 5. How do you feel when you sing in front of a large group? ^ 1) Terrified ^ 2) Nervous but not terrified ^ 3) Comfortable ^ 4) Don't feel anything Question 6. What kind of music do you like to listen? C 1) Hip Hop ^ 2) Classical Music ^ 3) Country Music ^ 4) Pop Music Question 7. What best describes you as a child? Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 203 E 1) Smart but Shy E 2) Confident about yourself E 3) Active E 4) None of them above Question 8. Has your personality changed since you were a child? E 1) Yes E 2) No Submit Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 204 Reading Wizard Computer-Based Reading Comprehension Program I can't sing! - Stage Phobia Section 2 A t a Concert It was one night in New York when a strong panic hit her on stage. She was having a concert which she had been involved in not only as a singer but also as a producer. Thus, she had spent a great of amount time and effort on preparing the show which she was quite happy about. However, when she went onto the stage to perform she became so anxious that she started having heart palpitations. It was so bad that after the first song, she felt she had to tell the audience about the trouble which she was in. Unfortunately, before she started the second song, she collapsed at the stage and the show which many people had been waiting for had to be canceled. After her collapse in New York, she returned to Los Angeles and checked into a hospital. She spent a week under treatment for tiredness. When she got out, she did an extensive research about stage phobia and its cure. Among the several methods of treatment which she looked into, she found that psychotherapy was the best treatment for her. Besides psychotherapy, her friends and family who she relied on also helped her very much. She could not have done it without all those people around her. Nevertheless she would not say she is completely over her performance anxiety yet. It took her several years to get the nerve to sing again in public. So, when she did her recent concert for cable TV, the audience was made up of friends and people who she was close to. 
Even so, she was more apprehensive than she looked. She went through about 25 different emotions which she was very well aware of. The Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 205 strategy which she took on to survive was to focus on the live, immediate audience, not the TV cameras which she was always frightened by. Please answer the following question. Question 9. Have you ever been to any music concert? C 1) Yes C 2) No Question 10. If your answer for Question 1 is Yes, how was the concert? ^ 1) It was fantastic, and I enjoyed so much. ^ 2) It was okay. IP* ® ' 3) It was so boring that I don't want to go to a music concert any more. IF * 5 1 ^ 4) Never been to a music concert Question 11. What is your strategy to speak or sing in front of big audience? ip-*® * ^ 1) Take a deep breath before speak or sing ^ 2) Drink lost of water before speak or sing ^ 3) Practice a lot before speak or sing ^ 4) Take a nap before speak or sing Submit Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 206 APPENDIX B Absurd: Lexical Items Included in Electronic Dictionary Unreasonable, unsound, or meaningless Anxiety: Painful or apprehensive uneasiness of mind Anxious: having extremely uneasy feeling or having fear about something Apprehensive: fearful or afraid of Associated: Being related or connected Avoidance: An act of keeping away from something or someone Awful: Afraid or Terrified Burst into: To emerge or spring suddenly Check into: To check in at Collapse: To fall down or to break down completely Completely: Entirely, fully, perfectly, quite, thoroughly, utterly, wholly Confident: Certain, self-reliant or trustful Cope with: To deal with and attempt to overcome problems and difficulties Cope with: To deal with and attempt to overcome problems and difficulties Crack up: To crash, to break down, or to break violently Cure: To restore to health, soundness, or normality Deal with: To take action with regard to someone or something Depressing: Causing emotional depression Dizziness: A whirling sensation in the head with a tendency to fall Extensive: Widely ranging in areas, scope or application Flame: Glowing gaseous part of a fire Frighten: To make afraid or terrify Go through: To continue firmly or obstinately to the end Horrific: Causing to feel horror Immediate: Close or near Inability: Lack of sufficient power, resources, or capacity Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 
207 Intense: Intrude: Involve: Last: Look into: Nerve: Obsessive: Ordinary: Overwhelm: Palpitation: Panic: Perform: Persist: Phobia: Prepare: Previous: Producer: Psychotherapy Rely on: Repetitive: Sought: Stage Phobia: Strategy: Stutter: Suffer from: Existing in an extreme degree Enter without invitation, permission, or welcome To engage as a participant To continue in time To examine or to search for Power of endurance or control, or strength Tending to cause obsession Plain, unremarkable, usual, or not special To overcome by superior force or numbers, or to overpower in thought or feeling One's heart beating rapidly and strongly A sudden unreasoning terror To carry out an action, to do pattern of behavior or to act To remain unchanged or fixed in a specified character, condition, or position An exaggerated and illogical fear o f a particular object, or situation To make ready beforehand for some use or activity Going before in time or order A person who supervises or finances the production of a radio or television program : Treatment of mental or emotional illness To be dependent on Saying or doing the same thing again and again Being asked for, searched, or discovered Fear of speaking or performing in public A plan, blueprint, or design To speak with involuntary stops or to repeat words To experience or to undergo Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 2 0 8 Symptom: Evidence of disease or physical problem Take on: To hire, to engage, or to undertake Tension: Condition or degree of being stretched to stiffness Tiredness: A state of being tired Treatment: The act or manner o f treating someone or something Trigger: To start, initiate, actuate, or set off Undergo: To experience or to go through Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 209 APPENDIX C English Relative Clauses Embedded in Reading Comprehension Task Readings Relative Clauses Included Types Section 1 1) A phobia is an intense fear of an object, situation, or SU (2) even a thought which other people might not be DO (3) frightened of. OPREP (1) 2) The height phobic, a person who is afraid of height, is fine on the ground floor. 3) It may be caused by a movie which you saw before. 4) Or it could even be a scary story which your parents told you when you were young. 5) Some phobias are associated with repetitive obsessive thoughts which intrude into minds. 6) The most common strategy which phobic people take to deal with phobias is avoidance. Section 2 1) Have you ever had to speak or sing in front of a large group, experiencing a feeling which you just could not go through? 2) And it never happens to a professional performer, maybe your favorite singer or actor who is always in front of a huge audience? 3) Well, stage phobia, also called performance anxiety, is a common symptom which, in fact, many professional performers suffer from. 4) Here is a story about a popular singer who had stage phobia. 5) She was one o f those performers who were SU (4) DO (1) OPREP (8) Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 210 overwhelmed by stage phobia. 6) Panic attacks on stage were something which she was experiencing even when she was a child. 7) Debbie was a very shy kid and had a stuttering problem which her sister made fun of her for. 8) One problem which she was going through was her marriage. 
9) Debbie and her husband, Ben, had a lot of issues which they did not agree on. 10) Even worse, Ben was having an affair with another woman who Debbie was also a friend with. 11) Another issue which she was worried about was her daughter's health. 12) Her daughter who then was 7 years old had just undergone a serious surgery. 13) In addition to all these family problems, there was her singing career which she had to take care of. Section 3 1) She was having a concert which she had been SU (0) involved in not only as a singer but also as a producer. DO (0) 2) Thus, she had spent a great o f amount time and effort OPREP on preparing the show which she was quite happy about. (10) 3) She felt she had to tell the audience about the trouble which she was in. 4) The show which many people had been waiting for had to be canceled. 5) Among the several methods of treatment which she looked into, she found that psychotherapy was the best Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 211 treatment for her. 6) Besides psychotherapy, her friends and family who she relied on also helped her very much. 7) The audience was made up of friends and people who she was close to. 8) She went through about 25 different emotions which she was very well aware of. 9) The strategy which she took on to survive was to focus on the live, immediate audience. 10) Not the TV cameras which she was always frightened by. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 212 APPENDIX D MASH Interface New Open M<lll> A ihr S |H ;;ik M<l|l T nxl S p e a k W in n - I u ric riM id d e i Sirt- S im j T T S V aiL K U a llu u n h u m h ld lle u n S ty le U i t o k n u r k s C n in i i i n i il s [) H s k tn |i I'D w e il'u iiil S i.n p t O u tp u t S r.l l|it y ► ■ i « r Save Play Stop Find Tour m Menu Character j Genie d Show j Hide Move to X I Y! A Move To Gesture at x j _▼ ] Yi A Gesture At Animation j Acknowledge A Play Animation Speak IWelcome to the Microsoft Agent Scripting Helper! Speak | Whisper Think I Show & Tell Add Last Show . . '22 ? ; ■ F Auto-Add Actions to S cript jG enie.T T SM odelD = "{CA141FD0-AC7F-11D1-97A3-006008273001}1 1 : jG enie.M oveT o 430, 200 :G enie.P lay "W ave" G e n ie .S p e a k "u , "F:\R01_sen01 .wav" G enie.M oveT o 750, 450 iG e n ie .S p e a k "", "F:\R01_sen03.w av" 'G enie.PI a y "G estureRight" i G e n i e . S p e a k "F:\R01_sen04.w av" G enie.Play "Explain" i G e n i e . S p e a k "F.\R01__senO5.wav" G en ie.P lay "P lea se d " G e n ie .S p e a k "", "F:\R01_sen06.w av" ; Line: Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 213 APPENDIX E Learner Background Survey Reading Wizard Computer-Based Reading Comprehension Program Learner Background Survey l.N am e: 1 2. Gender: ^ Female C Male 3. Age: 4. Email: 5. Phone: 1 6. Major: I r 7. Do you like learning English? ^ Yes * 5 ^ ^ No 8. Rank the following English learning activities according to the order you like each activity. Give T to the one you like most and '6' to one least. Vocabulary 1 Grammar 1 Listening 1 Reading I Writing f Speaking 9. How long have you been studying English? I 10. Where did you learn English (for example, high school in China)? 11. Are you taking any English classes now? C Yes C No Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 214 If yes, What classes are you taking? I 12. 
How long have you been in the United States? 13. Have you ever lived in or visited the United States efore? ^ Yes No 14. If your answer to No. 13 is YES, when was your previous visit to the United And how long did you stay? 1 15. Have you ever lived in or visited any other English-speaking countries? And how long did you stay there? 17. Have you ever taken any English language tests (for example, TOEFL, TOEIC)? C Yes C No What tests have you taken? 1 When did you take? I What are your most recent scores? 1 18. What is your native language? And what other languages do you also speak other than your first language and English? 19.1 normally use a computer: ^ Don't use one ^ Less than once/month States? ip*n |p“si ^ Yes ^ No 16. If your answer to No. 15 is YES, when did you visit the country? ^ Yes Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 215 ^ Couple times/month ^ Couple times/week ^ Daily 20. Which phrase below describes your OVERALL level of technical skill and knowledge of computers. ^ Not very competent ^ Low level of competency ^ Moderately competency ^ High level of competence ^ Expert * The following are the activities that you may do to learn English from Reading Wizard. Please read the sentences below and rate your degree of confidence that you can do each activity well. Put your confidence score (0-10) for each sentence in the boxes. 7 8 9 10 Absolut ely Certain Can Do 21. When listening to English, I can understand the main point of what I hear. 0 1 2 3 4 5 6 Moder Cannot Do At All ately Certain can do 22. When listening to English, I can understand details. * 23. When reading English text on a computer, I can figure out the main topic of the text. I Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 216 24. When reading English text on a computer, I can figure out the meanings of words or phrases that I don't know. I 25. When learning English grammar, I can understand how English grammar is used in a sentence. I 26. When learning English grammar, I can tell if the grammar used in a sentence is correct or not. 1 27. When learning English grammar, I can use English grammar correctly in a sentence. Submit Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 217 APPENDIX F Performance Tests Reading Wizard Computer-Based Reading Comprehension Program English Language Skill Test Part I Sentence Combining Test • Combine two sentences into one correct English sentence that makes sense. • As you combine the sentences, try to specify or identify the underlined words by using the information contained in the second sentence. • Always begin with the first sentence. • DO NOT leave out any information. . DO NOT use the words, BECAUSE, SO, WHILE, WHEN, AFTER, SINCE, BEFORE, AS, or AND. • DO NOT hit 'RETURN' Key when you write your answers. 1 .1 have bought the car. Cynthia wanted to have the car. 2. The woman is married. Jim is interested in the woman. 3 .1 know the bov. Sarah is talking to the boy. 4 .1 have not seen the woman. I fell in love with the woman last week. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 218 5. The company was established in 1898. The company makes cars. 6. The teacher was injured in the accident. The teacher is now in hospital. 7 .1 bought the book. My friend recommended the book strongly. 8. The bov was playing in the park. I saw the boy. 9. 
The waitress was very impolite. We were served by the waitress. 1 0 .1 found the camera. The teacher was looking for the camera. 11. We loved the book. The book was about Paris. 12. The floor was not very clean. The cat walked on the floor. Submit Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 219 Reading Wizard Computer-Based Reading Comprehension Program English Language Skill Test Part II Picture Interpretation Test Directions: • Read carefully each sentence as you look at the pictures. • After you read each sentence, decide which picture best describes the sentence you just read. • Check the box of the appropriate picture. 1. The girl touches the policeman who the thief kicks. c c c 2. The doctor who kisses the nurse treats the patient. □ E Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 220 3. The rabbit which scratches the bear looks at the tiger. 4. The horse looks at the lion which the monkey jumps over. c c c 5. The rabbit touches the tiger which the bear smells. C A j U ) Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 221 6. The cat which bites the mouse chases the bird. / i V 7. The lion watches the tiger which the sheep pushes. 8. The customer who the manager shouts at pushes the waiter. C Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 9. The boy stands besides the mother who the girl thinks about. S ubrrit Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 223 Reading Wizard Computer-Based Reading Comprehension Program English Language Skill Test Part III Grammaticality Judgment Test Directions: • Read the following sentences very carefully. • After you read each sentence, decide whether it is correct or incorrect. • Select only ONE answer for each sentence. • All words are spelled correctly. WHOM was not used in these sentences. DO NOT mark a sentence wrong because you think it should contain WHOM. 1. He looked at the dog which was wagging its tail. E CORRECT C INCORRECT 2. The man is intelligent who I met yesterday. C CORRECT C INCORRECT 3. Michelle was interested in the guy fixed my computer. C CORRECT C INCORRECT 4 .1 didn't eat the pie which you were saving for tonight. C CORRECT C INCORRECT 5. My son has the toy car who Tom talked about last week. C CORRECT C INCORRECT 6. The man who I sent an application to is Mr. Johnson. C CORRECT C INCORRECT 7. The book which Nancy was looking for at the library was missing. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 224 C CORRECT C INCORRECT 8. The woman who Ted was attracted to her was working at the hospital. w it*1 ^ CORRECT ^ INCORRECT 9. She is the woman which John was worried about. C CORRECT C INCORRECT 10.1 liked the restaurant which John took me to the other day. C CORRECT C INCORRECT 11. The man threw away the book which Michael was reading. C CORRECT C INCORRECT 12. The teacher talked to the student which the man attacked with a knife. E CORRECT ^ INCORRECT S u b m it * Basically the same performance tests were used for the pre- and posttest except that the order in which the items were presented was different and some lexical items were replaced with other similarly difficulty words. Therefore, the posttests were omitted in the appendix. Reproduced with permission of the copyright owner. 
APPENDIX G
Mental Effort Measures

Reading Wizard
Computer-Based Reading Comprehension Program

Please answer the following questions.

Effort Questions (rated on a 7-point scale: 1 = Not at All, 4 = Average, 7 = Very Much)
1. How hard did you try in order to understand the lesson?
2. How hard do you think your friends (in the room) tried in order to understand the lesson?
3. How much did you concentrate in order to understand the lesson?
4. How much mental effort did you invest in order to understand the lesson?

Submit

* The first three items are from Salomon's AIME (Amount of Invested Mental Effort) measure and the last item is from Paas' Mental Effort Scale.

APPENDIX H
Subjective Ratings

Reading Wizard
Computer-Based Reading Comprehension Program

Pre-task Interest Measure

Please answer the following questions.

Interest Questions (rated on a 7-point scale: 1 = Not at All, 4 = Average, 7 = Very Much)
1. How interesting was the lesson in general?
2. How interesting was Genie, your learning guide?
3. How helpful was Genie, your learning guide?
4. How useful was the lesson for understanding English relative clauses?

Submit

Reading Wizard
Computer-Based Reading Comprehension Program

Reading Task Interest Measure

Please answer the following questions.

Questions (rated on a 7-point scale: 1 = Not at All, 4 = Average, 7 = Very Much)
1. How interesting was "Reading Wizard"?
2. After learning with Genie, how easy was it to understand the story about phobias?
3. How interesting was the topic of the story (phobia)?
4. How helpful was the Dictionary function for understanding the stories?
5. In general, how helpful was the instruction and assistance provided by the program?
6. If you had a chance to use "Reading Wizard" again, how much would you like to do so?

Submit

* The last item is the active choice measure.

APPENDIX I
Experiment Instructions for Agent Group and Arrow Group

Day 1 - Instructions and Procedures for Agent Group
1. Please turn off your cell phone.
2. During the experiment, please DO NOT talk to your friends or the people sitting next to you.
3. If you have any questions, please ask Sunhee Choi.
4. Use the headset.
5. Put the CD in the computer.
6. Double-click on "Agent_Intro.exe".
7. After interacting with Genie, right-click on Genie and then click 'Close'.
8. Open 'Internet Explorer' to connect to the Web.
9. Type this address: http://128.125.64.126/Agent Group/Day 1/00_Reading_Intro.htm
10. When you see the browser message "The Web page you are viewing is trying to close the window. Do you want to close this window?", choose "Yes".
11. Read the instructions very carefully.
12. There are no time limits, so please take as much time as you want.
13. Answer as many questions as you can.
14. DO NOT use the "Enter" key in your answers.
15. Please DO NOT talk about the study with other people.
Day 1 - Instructions and Procedures for Arrow Group
1. Please turn off your cell phone.
2. During the experiment, please DO NOT talk to your friends or the people sitting next to you.
3. If you have any questions, please ask Sunhee Choi.
4. Use the headset.
5. Put the CD in the computer.
6. Double-click on "Intro_RW_02.ppt" and hit the 'F5' key to run the PowerPoint.
7. After interacting with Genie, close the PowerPoint.
8. Open 'Internet Explorer' to connect to the Web.
9. Type this address: http://128.125.64.126/Arrow Group/Day 1/00_Reading_Intro.htm
10. When you see the browser message "The Web page you are viewing is trying to close the window. Do you want to close this window?", choose "Yes".
11. Read the instructions very carefully.
12. There are no time limits, so please take as much time as you want.
13. Answer as many questions as you can.
14. DO NOT use the "Enter" key in your answers.
15. Please DO NOT talk about the study with other people.

Day 2 - Instructions and Procedures for Agent Group
1. Put the CD in your computer. Open the CD-ROM and click on the file "Relative_Clauses.ppt". Don't run the presentation yet.
2. In PowerPoint, click on the "Tools" menu, then Macro, then Security; select 'Low' and click 'OK'.
3. Open Internet Explorer and type the address below.
   http://128.125.64.126/Agent Group/Day2/Reading Intro.htm
4. Click 'Yes' when the browser message "The Web page you are viewing is trying to close the window. Do you want to close this window?" shows up on your screen.
5. Read the directions very carefully.
6. Click on the 'Activate' button to activate the system.
7. Go back to the PowerPoint.
8. Press the F5 key to start the presentation. In the presentation, don't click anything unless you are told to do so.
9. In every presentation slide, click on the [icon] to start the presentation.
10. In the presentation, you can see answers by clicking on the [icon].
11. Take as much time as you want to study each slide.
12. Now, go back to Internet Explorer. Follow the instructions.
13. In the reading section, if there is any word you don't know in the reading, move the mouse over the word. If the mouse changes to a hand shape, click on it to see the meaning.

Day 2 - Instructions and Procedures for Arrow Group
1. Put the CD in your computer. Open the CD-ROM and click on the file "Relative Clauses Arrow.ppt". Don't run the presentation yet.
2. In PowerPoint, click on the "Tools" menu, then Macro, then Security; select 'Low' and click 'OK'.
3. Open Internet Explorer and type the address below.
   http://128.125.64.126/Arrow Group/Day2/Reading Intro.htm
4. Read the directions very carefully.
5. Click 'Yes' when the browser message "The Web page you are viewing is trying to close the window. Do you want to close this window?" shows up on your screen.
6. Click on the 'Activate' button to activate the system.
7. Go back to the PowerPoint.
8. Press the F5 key to start the presentation.
9. In every presentation slide, click on the [icon] to start the presentation. In the presentation, don't click anything unless you are told to do so.
10. In the presentation, you can see answers by clicking on the [icon].
11. Take as much time as you want to study each slide.
12. Now, go back to Internet Explorer. Follow the instructions.
13. In the reading section, if there is any word you don't know in the reading, move the mouse over the word. If the mouse changes to a hand shape, click on it to see the meaning.

APPENDIX J
Consent Form Approved by USC IRB

University of Southern California
Rossier School of Education

INFORMED CONSENT FOR NON-MEDICAL RESEARCH
CONSENT TO PARTICIPATE IN RESEARCH

Cognitive Efficiency of Animated Pedagogical Agent for Learning English as a Second Language

You are asked to participate in a research study conducted by Dr. Richard E. Clark and Ms. Sunhee Choi, from the Rossier School of Education at the University of Southern California. You are asked to volunteer as a possible participant in this study because your first language is not English and you are learning English as a second language (ESL) in the U.S. A total of 60 participants will be recruited from local universities and colleges that offer ESL courses. Your participation is voluntary.

PURPOSE OF THE STUDY

The primary purpose of this study is to investigate how a multimedia-based ESL reading program affects students' reading comprehension. In particular, the study will examine the effects of different multimedia elements (i.e., an animated pedagogical agent - a computerized animated character residing in a computer-based learning environment to provide learners with instructional advice and feedback - and an electronic arrow with audio) on enhancing college-level ESL students' English reading comprehension. The results of the study will be utilized to further develop multimedia-based ESL reading software.

PROCEDURES

If you volunteer to participate in this study, we would ask you to do the following things during Spring 2005:

Day 1 - (30 minutes)
1) Complete one questionnaire, which should take approximately 10 minutes to finish. In the questionnaire, you will be asked to provide basic information regarding your experience of learning English as a second language. The questionnaire is expected to inform us about you as a learner of English. You may decline to answer any questions at any time. Your questionnaire will be identified by the email address you provide in the questionnaire.
2) Complete the 'English Language Skill Pre-Test', which will be given to you right after the basic information questionnaire described above. The test has three subsections, each of which examines different aspects of your English skills (comprehension and production skills). It will take about 20 minutes to finish all three sections. The results of this pretest will be used to find out the relationship between the level of learner prior knowledge and multimedia-based ESL learning.

Day 2 - (90 minutes)
1) Work on a computer-based reading program, "Reading Wizard", which has two different, but thematically related, reading tasks.
Each reading task will take about 30 minutes to complete. All the necessary equipment, including computers, software, paper, and pens, will be provided by us. The topic of the two reading tasks is "Phobia". During this task, you will be asked to answer some questions about what you have read.
2) Complete a questionnaire and a post-test. The questionnaire will collect data about your experience and opinions of "Reading Wizard", whereas the post-test will investigate how much you have improved your ESL skills after using the program.

POTENTIAL RISKS AND DISCOMFORTS

One discomfort may be that you will spend a total of 2 hours on this study instead of on your own study.

POTENTIAL BENEFITS TO SUBJECTS AND/OR TO SOCIETY

There will be no direct benefit to you for participating in this study. However, the results may contribute to our understanding of the effects of a multimedia-based reading program on ESL learners' language skills. The results of the study will be used to further develop an effective computer-based ESL learning program for learners of ESL.

PAYMENT/COMPENSATION FOR PARTICIPATION

You will be given a $20 gift card from Starbucks or Barnes & Noble for your participation in the study.

CONFIDENTIALITY

Any information that is obtained in connection with this study and that can be identified with you will remain confidential. The data will be stored in a locked cabinet located in the researcher's office, and will be disclosed only with your permission or as required by law. The data will be destroyed after the researcher has performed the analyses. When the results of the research are published or discussed in conferences or journals, no information will be included that would reveal your identity. If your responses are used for educational purposes, your identity will be protected or disguised.

PARTICIPATION AND WITHDRAWAL

You can choose whether to be in this study or not. If you volunteer to be in this study, you may withdraw at any time without consequences of any kind. You may also refuse to answer any questions you don't want to answer and still remain in the study. The investigator may withdraw you from this research if circumstances arise which warrant doing so.

IDENTIFICATION OF INVESTIGATORS

If you have any questions or concerns about the research, please feel free to contact Richard E. Clark, Faculty Sponsor, or Sunhee Choi, at the Rossier School of Education at the University of Southern California, 3470 Trousdale Pkwy., WPH 1004A, Los Angeles, CA 90089, (213) 407-3378 or at sunheech@usc.edu.

RIGHTS OF RESEARCH SUBJECTS

You may withdraw your consent at any time and discontinue participation without penalty. You are not waiving any legal claims, rights or remedies because of your participation in this research study. If you have questions regarding your rights as a research subject, contact the University Park IRB, Grace Ford Salvatori Hall, Room 306, Los Angeles, CA 90089-1695, (213) 821-5272 or upirb@usc.edu.

SIGNATURE OF RESEARCH SUBJECT, PARENT OR LEGAL REPRESENTATIVE

I understand the procedures described above and I understand fully the rights of a potential subject in a research study involving people as subjects. My questions have been answered to my satisfaction, and I agree to participate in this study. I have been given a copy of this form.
Name of Subject

Name of Parent or Legal Representative (if applicable)

Signature of Subject, Parent or Legal Representative                Date

SIGNATURE OF INVESTIGATOR

I have explained the research to the subject or his/her legal representative, and answered all of his/her questions. I believe that he/she understands the information described in this document and freely consents to participate.

Name of Investigator

Signature of Investigator                Date (must be the same as subject's)