“CLICKERS” AND METACOGNITION:
HOW DO ELECTRONIC RESPONSE DEVICES (“CLICKERS”) INFLUENCE
STUDENT METACOGNITION?
by
Melanie Manke-Brady
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
May 2012
Copyright 2012 Melanie Manke-Brady
Dedication
To my children, Katelyn and Christian: always follow hard after that which is good and right. Don’t be afraid of the hard things. God can do more in a moment than we can do in a lifetime.
Acknowledgements
I want to extend my gratitude to my mentors in this pursuit. I received incredible support from my committee members that led to accomplishing more than I could have imagined when I first embarked on this journey. The members of my committee provided opportunities that challenged me personally and professionally. I thank Helena Seli, my chair and supervisor in my first TA position, who has become a friend. I thank Patricia Burch, without whom my dissertation would not be the same; she graciously allowed for my dissertation committee to form in a way that best fit my path. I thank Robert Keim, who has been a mentor extraordinaire in developing my understanding of research statistics in my second TA position.
I would like to thank the following people without whom I would not have gotten this far: First, I thank God, because He gave me life and strength and breath, without which I would not be here in the first place. Second, I thank my kids who, in spite of many challenges with health and educational issues, were a source of strength through this process. Third, I would like to thank my parents, who were kind and supportive as my life took many twists and turns over the years, and who provided a safe place for my kids so that I could pursue this education free of concern for my children’s wellbeing.
Table of Contents
Dedication
Acknowledgements
List of Tables
Abstract
Chapter One: Overview of the Study
  Background of the Problem
  Statement of the Problem
  Purpose of the Study
  Importance of the Study
  Research Questions and Hypotheses
  Limitations and Delimitations
  Organization of the Study
Chapter Two: Literature Review
  Technology in Higher Education
    Multimedia Technology in Higher Education
    Technology and Gender Differences
    Faculty Uses and Perceptions of Technology
    Summary
  Metacognition and Self-Regulation
    Metacognition
    Self-Regulation
    Summary
  Audience Response Systems
    History of “Clicker” Use in Higher Education
    Research on “Clickers”
  Literature Summary
Chapter Three: Research Methodology
  Research Questions
  Research Design
  Instrumentation
  Procedure and Data Collection
  Data Analysis
Chapter Four: Results
  Measures and Descriptive Statistics
  Quantitative Results
    Research Question 1
    Research Question 2
    Research Question 3
  Qualitative Results
    Research Question 4
    Research Question 5
  Summary
Chapter Five: Discussion
  “Clickers” versus Paddles
  “Clickers” and the Social Context
  “Clickers’” Relationship to Metacognitive Processes
  “Clickers” and Student Outcomes
  The Effect of Feedback Type and Engagement in Learning
  Implications
  Limitations
  Future Research
  Summary
Bibliography
Appendix A: Biographical and Demographic Data Survey
Appendix B: Motivated Strategies for Learning Questionnaire
Appendix C: Electronic Feedback Devices and Metacognitive Self-Regulation
Appendix D: Metacognition and Electronic Feedback Devices
Appendix E: Summer Group Results of Pencil and Paper Survey
Appendix F: Summer Group Qualitative Interview Question 1 Responses
Appendix G: Summer Group Qualitative Interview Question 2 Responses
Appendix H: Summer Group Qualitative Interview Question 3 Responses
Appendix I: Summer Group Qualitative Interview Question 4 Responses
Appendix J: Fall ‘Group A’ Results of Pencil and Paper Survey
Appendix K: Fall ‘Group B’ Results of Pencil and Paper Survey
Appendix L: Fall ‘Group A’ Question 1 Responses
Appendix M: Fall ‘Group B’ Question 1 Responses
Appendix N: Fall ‘Group A’ Question 2 Responses
Appendix O: Fall ‘Group B’ Question 2 Responses
Appendix P: Fall ‘Group A’ Question 3 Responses
Appendix Q: Fall ‘Group B’ Question 3 Responses
Appendix R: Fall ‘Group A’ Question 4 Responses
Appendix S: Fall ‘Group B’ Question 4 Responses
List of Tables
Table 1: Research Questions
Table 2: Overview of Measures
Table 3: Cronbach’s Alpha for Metacognition and Feedback Device Scales
Table 4: Demographic Characteristics
Table 5: Summer Group: Means, SD, and Pearson Product Correlations for Measured Variables
Table 6: Fall ‘Group A’ (Experimental): Means, SD, and Pearson Product Correlations for Measured Variables
Table 7: Fall ‘Group B’ (Comparison): Means, SD, and Pearson Product Correlations for Measured Variables
Table 8: Correlations: Pre-MSLQ and Post-MSLQ
Table 9: Performance Outcome Descriptive Statistics
Table 10: Demographic Characteristics (e.g., Gender, Major Status, Athletic Status, Ethnicity, and English Language Status)
Table 11: Fall Experimental (“Clickers”) and Comparison (Paddles) Performance Outcomes
Table 12: Fall Experimental Group Extent of Use (“Clickers”)
Table 13: Fall Comparison Group Extent of Use (Paddles)
Abstract
The purpose of this study was to examine whether electronic response systems
influence student metacognitions in large lecture settings, and how metacognitive
processes are influenced. Moreover, this study compared electronic response systems
with a low technology system and sought to establish whether differences exist in
how the two response systems influence metacognition. The design of the study was
quasi-experimental, and employed both quantitative and qualitative measures with
multiple groups and multiple points of data collection. A context was selected that
utilized electronic response systems as a part of the instructional design of the course
and in conjunction with instructional strategies (e.g., questioning and Peer
Instruction). Three sections of the same undergraduate educational psychology course
with the same instructor and instructional design were utilized in the study. There
were a total of 198 participants, 33 in the Summer Group, 87 in the Fall experimental
(“clickers”) group and 78 in the Fall (paddles) comparison group. The study found
that metacognitions are influenced more by the low technology response systems than
by “clickers,” but performance outcomes were significantly higher with “clicker” use
(p < .01). Results from the study indicate that metacognitive processes are influenced by response systems and that there are similarities and differences in the influence of the two response systems. This study found that the low technology response system resulted in negative feelings because answers were visible to peers before the correct response was indicated. This resulted in students changing responses based on perceived pressure from the group. While this resulted in more metacognition than “clickers,” the visible nature of the low technology device generated negative feelings, which may indicate that this method of response was an impediment to learning goals and to creating a learner-centered environment. The use of “clickers” seems to influence honesty and reduce the conformity effects to which students are prone. Results indicate that it may be useful to view metacognitions as productive or unproductive, and, in the case of response systems, as having a self-reflective or group-reflective quality. In addition, the respondents who experienced
enhanced learning outcomes with “clickers” were the participants who had low to
average performance outcomes as compared to participants who tended to have
higher performance outcomes. Participants who had higher outcomes experienced
the least benefits and may have had consistent performance outcomes regardless of
the response device in use.
Introduction
Overview of the Study
Higher education is continuing a process of change, transforming from the “sage
on the stage to the guide on the side” (King, 1993). The change in presentation of
information in the higher education context is described as a “recent turn to inquiry
guided learning” (Mollborn & Hoekstra, 2010, p. 19); inquiry guided learning is an
instructional strategy to actively engage students in lectures rather than passively taking
notes as in a traditional lecture context. This shift in instructional design increases the responsibility of the student, requiring active participation in the learning process (Chen, 2002). Researchers are interested in meaningful learning and the conditions and strategies that lend to instruction resulting in meaningful learning (Chen, Whittinghill, & Kadlowec, 2010; Mayer, 2008). The gold standard of effective instruction is to deliver instruction, appropriate for the learner and the learning environment, that primes appropriate cognitive processing in the learner and leads to the construction of knowledge, which is indicated by a change in the learner’s behavior (Mayer, 2008). Construction of knowledge includes development of self-regulation and metacognition; metacognition has an integral part in self-regulation (Schraw, Crippen, & Hartley, 2006).
In the effort to construct learner centered educational environments and to increase student self-regulation and metacognition, the role of technology cannot be ignored. In higher education, technology is utilized to engage learners, improve academic performance, and increase retention rates (Nora & Snyder, 2009). There is a generally positive connection between utilizing technology in learning, student
engagement, and learning outcomes (Chen, Lambert & Guidry, 2009). Technology usage
is supported by administrators, many of whom believe technology use in higher
education provides timely feedback on progress and promotes student success (Arnold,
Tanes & King, 2010). No one is disputing that computers will have an impact upon
education, but the supposed impact upon schools is widely debated (Tamim, Bernard, Borokhovski, Abrami, & Schmid, 2011). Some researchers purport that increases in
student outcomes are more likely connected to the instructional design and methods
rather than the technology that is utilized (Clark & Feldon, 2005). The importance in
technology’s effectiveness lies in the degree to which students and educators are enabled
to achieve the desired learning outcomes and goals of instruction (Ross, Morrison, &
Lowther, 2010). In order for traditional educational settings to contend with the future of
education, there must be an understanding that other venues in which education occurs are very technologically oriented (Tamim et al., 2011; Collins & Halverson, 2009). In the workplace, learning has experienced rapid growth and is highly specific and technologically based; an increasing number of firms provide programs to businesses that directly address specific needs, and Motorola, Xerox, Accenture, and the military all have training programs (Collins & Halverson, 2009). To compete in an educational market that is increasingly independent, technology must be integrated into programs and courses utilizing instructional methods and designs that provide increased learner outcomes (Collins & Halverson, 2009; Tamim et al., 2011). The focus of the present proposal considers
technology’s role within the context of learner centered education, through examination
of whether technology, social context, and peer comparisons influence metacognition and self-regulation and how these constructs are influenced. Moreover, the central purpose is to examine whether, and how, electronic feedback devices influence student metacognitive self-regulation in the context of a large lecture.
Technology employed in higher education necessitates research demonstrating effective usage, because there is keen interest in peer-reviewed, empirically validated literature to support use and practice. When using technology in education, it is important to
determine whether the use of the technology is effective and whether student outcomes
are influenced. If use of technology does not have a positive impact on the effectiveness of instructional design or methods, and upon student outcomes, then the technology may be viewed as an unnecessary expense and use of time (Clark & Feldon, 2005). Evaluating the use of technology in the classroom surfaced as a concern as early as the 1960s and has persisted as an important consideration for educators and researchers to the present (Tamim et al., 2011). In a meta-analysis of more than 40 years’ worth of studies examining the effectiveness of technology in the classroom, encompassing 1,055 studies, results indicate use of technology improves student outcomes when compared to classrooms in which a traditional style of instruction occurs and technology is not used
(Tamim et al., 2011). Although there is a significant amount of literature attributing
gains to educational technology, caution is warranted, because many such tools do not
produce the anticipated student gains (Mayer et al., 2009). One form of electronic feedback, “clickers,” is reported to increase student outcomes because of increased student engagement in the lecture (Mayer et al., 2009). Personal response systems, commonly
referred to as “clickers” because of the sound made when pressed, are known by a variety
of terms including classroom assessment systems, electronic voting systems, interactive voting systems (Moss & Crowley, 2011), classroom response systems, classroom communication systems, audience paced response systems, classroom networks, keypads, handsets, and zappers (Caldwell, 2007). Clickers are small, television-remote-style numeric keypads with 10 keys; in colleges and universities, students utilize clickers to transmit answers to prompts while faculty record the answers, typically in lectures and other educational formats (Caldwell, 2007).
Many instructors utilize “clickers” in lecture to increase student participation,
because of a desire to create student centered learning environments, and because there is a generally accepted view that technology is a useful tool to engage students and has a positive influence on student outcomes. “Clicker” use is reported to increase students’ comprehension of core course concepts and to develop critical thinking (Mollborn & Hoekstra, 2010). Furthermore, technology is reportedly useful for developing student metacognition (Schraw, Crippen & Hartley, 2006). As previously mentioned, multimedia is commonly utilized in the effort to increase student engagement, interest, and outcomes, but with mixed results and opinions (Clark & Feldon, 2005). “Clickers” have become increasingly popular among faculty in higher education; “clickers” are technological devices that provide the opportunity to gain student responses during instruction (Moss & Crowley, 2011; Mollborn & Hoekstra, 2010). Utilizing “clickers” as an instructional aid to gauge student learning aligns with the emphasis placed on student metacognition, which promotes a learner centered educational environment (Prather, Slater, Brissenden, & Dokter, 2006). Because the use of “clickers” in higher education is increasing among
faculty, empirically validated research that examines the effectiveness of “clickers” in
light of the current emphasis upon learner centered classrooms and the development of
student metacognition is needed. If research demonstrates that “clickers” are effective
educational tools that connect with improved student metacognitive self-regulation and to
improved student outcomes, faculty will have robust results to substantiate use of “clickers.”
Background of the Problem
Technology’s effect on life is profound and is, indeed, a “mode of revolution” in
communication (Warschauer & Matuchniak, 2010, p. 179). Technology use is growing
rapidly and use of technology in education occurs across many environments including
home-schooling, the work place, distance learning, adult education, learning centers,
educational television and videos, computer based learning software, technical
certificates, internet cafes, and life-long learners (Collins & Halverson, 2009). Demand
for technology is up and discomfort with technology is no longer a problem (Popovich, Gullekson, Morris, & Morse, 2008).
Technology takes different forms and is used in both passive and interactive ways; technology types utilized include media, PowerPoint presentations, smart boards, the internet, cellular phones used with a variety of audience response systems (aka “clickers”), notebooks, and personal digital assistants. There are researchers who question the role technology plays in student outcomes (Clark & Feldon, 2005); some think of educational technology as tools, modalities, or as a component of instructional strategy or design (Ross, Morrison &
Lowther, 2010). However, research continues to surface that associates certain outcomes
with the use of technology in class such as increased interest, sustained interest, academic performance (Nora & Snyder, 2009), collaboration, and bridging social gaps and gaps in education (Warschauer & Matuchniak, 2010). Technology usage is extensive nationally and internationally, throughout education contexts including higher education (Mollborn & Hoekstra, 2010).
Many concerns arise with the growing use of technology; one is whether faculty
employs the devices effectively. Educators and educational researchers, as discussed
earlier, are interested in effective instruction that leads to meeting instructional goals and
student learning outcomes; instructional devices or aids, including technology, are the
topics of research in order to identify uses that are connected with student achievement.
As the effectiveness of technology is researched, best practices in educational settings
tend to surface in literature. Another concern is whether the university is supportive of
faculty’s use of technology through making technology available and providing training.
A third concern is whether the technology is utilized effectively; does its use engage students in deeper cognitive processing of material? Fourth, there is the concern over whether the technologies are used so that learning outcomes are improved. A fifth concern is whether students feel the use of the technology is effective and engaging and whether increased positive outcomes can be connected to technology utilization; do students feel that they have learned more, were more engaged in the course, and that their outcomes were enhanced by the manner in which the technology was employed?
Research about “clickers” demonstrates increased engagement, cognitions, and increased
outcomes (Mollborn & Hoekstra, 2010). Research does not indicate if or how
metacognition is influenced or if or how the increase in social comparisons and increased
self-evaluations are elements that influence students and contribute to increased
outcomes. The aim of the proposed research is to examine the usage of “clickers” by
faculty in a large lecture undergraduate classroom.
Statement of the Problem
In higher education, the educational environment is impacted by and influenced
through technology use. Faculty seek to engage learners in meaningful learning, and as students participate in these learning environments, they respond to interacting elements (e.g., instructional strategies). In the case of the current work, “clickers” and faculty use of “clickers” in a large lecture are interacting elements, and whether and how students are influenced is the focus. Many faculty members struggle to ensure students are engaged in the learning process (Mollborn & Hoekstra, 2010).
technology as a means to engage students is commonplace among faculty (Brown, 2010).
As “clickers” are employed in this pursuit more often, questions about best practices are
surfacing. Now that researchers seem to agree that “clickers” engage learners and some
research even demonstrated better learning outcomes, examination of those elements in
the learning environment, the social context, and the influence upon the students are
important questions to address. In the current study, the preceding elements are
examined through the framework of metacognitive self-regulation.
Purpose of the Study
Research indicates that student cognition is increased by utilizing “clickers” (Mayer et al., 2009). This study explored whether “clickers” influence the metacognitive aspects of self-regulation. Inquiry was made into the possible relationships between
technology, metacognitive self-regulation, faculty use of technology, and student
response. Because “clickers” have been shown to increase student cognition (Mayer et
al., 2009), the metacognitions of students may contribute to the increase in student cognitions, and this deeper level of engagement may be a factor in increased learning outcomes. Because the social context of the lecture facilitates peer comparisons through
displaying results to “clicker” questions and surveys, a pertinent aspect of the current
study is student self-evaluations, which falls within the conceptual framework of
metacognitive self-regulation.
The intent of the present study is to examine the influence of “clickers” upon student metacognitive self-regulation and the resulting peer comparisons in a large lecture context with undergraduate students in a lower division educational psychology course. “Clickers” may increase student self-evaluation, increasing the salience of self-evaluations in a large lecture setting more than in a traditional lecture context. In addition, the social context may be altered as a result of an increase in opportunities for peer comparisons. The present study examined whether there is a relationship between social/peer comparisons and “clickers” and whether metacognitions influenced the self-regulatory aspects of metacognition. The secondary purpose examined how the context in which “clickers” are employed increases student self-evaluation. Student evaluative self-reflections may be increased, and the types of evaluative self-reflections students have are of interest to the present work. The independent variables for this study included “clickers” and gender, and the dependent variables included student metacognitive self-regulation and social/peer comparisons.
Importance of the Study
This study will extend the research on the effectiveness of “clickers.” Specifically, this study will examine whether student metacognition and self-regulation are influenced, and how they are influenced, by the social context and peer comparisons. Examining the research questions below will add to the body of research on “clickers.” The questions asked will provide further research about the use of “clickers” in a large lecture and whether and how the “clicker” questions influence student social comparisons and self-judgment. As a result, the current effort may elicit insights beneficial to faculty by supporting “clicker” use in instructional design and strategies that lend to improving student outcomes. Additionally, faculty stand to benefit from this study: if there are positive connections to the use of “clickers,” the body of peer-reviewed, empirically validated research that supports the use of this technology will increase, further validating a tool that is popular among faculty nationally and internationally. Instructors may stand to benefit from this study, because results may inform the practice of using electronic feedback devices.
Research Questions and Hypotheses
In the current study the primary research question is: 1) are there differences in student metacognitive self-regulation, motivated learning strategies, and metacognitions in lecture based on whether the students utilize “clickers” or paddles, and the extent of use? The second research question is: does use of “clickers” versus paddles predict performance outcomes? The third question is: does extent of use of “clickers” predict performance outcomes (e.g., quizzes, course grade)? The secondary research questions are qualitative. The first is: if “clickers” affect metacognition, how do they? The second is: is, and how is, the experience different for students using “clickers” versus students using paddles? In the current effort there are three hypotheses. The first hypothesis is that “clicker” use influences student metacognition and self-regulation. The second hypothesis is that students engage in social/peer comparisons as a result of “clicker” usage. The third hypothesis is that, by engaging in self-evaluation following “clicker” questions, student metacognitive awareness and self-regulation, or metacognitive self-regulation, increase.
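To make the second quantitative question concrete, a minimal analysis sketch in Python follows. It is illustrative only: the variable names, the simulated scores, and the choice of an ordinary least squares regression are assumptions of this sketch, not the study’s reported procedure (the actual analyses appear in Chapters Three and Four); only the group sizes mirror the Fall groups described in the Abstract.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Hypothetical data: one row per student. Group sizes mirror the Fall
    # groups (87 "clicker" students, 78 "paddle" students); the outcome
    # scores are simulated for illustration, not taken from the study.
    df = pd.DataFrame({
        "group": ["clicker"] * 87 + ["paddle"] * 78,
        "outcome": np.concatenate([rng.normal(82, 8, 87),
                                   rng.normal(78, 8, 78)]),
    })

    # RQ2: does use of "clickers" versus paddles predict performance
    # outcomes? Regress the outcome on a categorical group indicator.
    model = smf.ols("outcome ~ C(group)", data=df).fit()
    print(model.params)
    print(model.pvalues)

The coefficient on the group indicator estimates the difference in mean performance between the two devices, and its p-value indicates whether that difference is statistically reliable.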
Limitations and Delimitations
This study investigated “clicker” use and student metacognition through the lens of self-regulation. While the study holds general interest for any higher education
setting employing the use of “clickers,” the study has clear limitations. The limitations
are due to the sample characteristics, the procedures employed in this study, and to the
limited ability for randomization. The study investigates student metacognitions, self-
regulation, and use of “clickers” in a large lecture setting for an undergraduate
educational psychology course. The first limitation is that the study cannot control for
gender, ethnic background, socio-economic level, previous experience with technology,
and college entrance score. Because the characteristics of this sample may not reflect the
typical undergraduate student, results of the proposed research may have limited generalizability. The second limitation is due to the instrumentation: the survey style of collecting data precludes inferences about causation. The third limitation to consider is that of self-selection and social desirability, because students are voluntarily participating in this study and there is a tendency for participants to answer questions in ways that they believe are socially acceptable. These biases may prevent the results of this study from generalizing to other students at this institution.
The extent to which this research will be randomized is a toss of a coin. There
will be two groups of students during the fall semester in separate lectures, approximately
100 students in each lecture. A flip of the coin will determine which group begins the
semester utilizing “clickers” and which group begins with raising hands in response to
questions or student surveys.
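For illustration only, the coin-flip assignment just described can be sketched in a few lines of Python; the section labels are hypothetical placeholders, and only the coin flip itself comes from the text.

    import random

    def assign_fall_groups(sections=("Lecture 1", "Lecture 2"), seed=None):
        """Coin-flip assignment: one fall lecture section begins the
        semester using "clickers"; the other begins with raised hands."""
        rng = random.Random(seed)
        first, second = sections
        if rng.random() < 0.5:  # the coin flip
            return {first: "clickers", second: "hand-raising"}
        return {first: "hand-raising", second: "clickers"}

    print(assign_fall_groups())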
Organization of the Study
Chapter 1 includes an overview of the proposed study including an introduction, background of the problem, statement of the problem, proposed research questions, and the importance of the study. Chapter 2 includes a literature review that will synthesize the literature on technology, faculty use of technology, student metacognition, and self-regulation (or self-efficacy). Chapter 3 describes the
intended method and instrumentation of the study. Chapter 4 presents the results of the
study. Chapter 5 presents the summary of findings.
Chapter 2
Literature Review
“Clickers” change a passive classroom into an interactive learning environment
(Mollborn & Hoekstra, 2010; Moss & Crowley, 2011). Because of this, “clicker” usage is
closely connected with the teaching process (Burnstein & Lederman, 2006; Duncan,
2006). The importance of studying the contribution of “clickers” to the learning process
is paramount, because “clickers” are popular and widely employed (Mollborn & Hoekstra
2010; Moss & Crowley, 2011). Research about the learning process indicates that
students who are metacognitively aware tend to have higher educational outcomes than
learners who are unaware of their cognitions (Mayer, 2008). In addition, the development
of metacognition is an area identified as important for 21st century learners (Anderman, 2011). As higher education reaches toward creation of optimal learner
centered environments, the role of technology, in this case “clickers,” and student
metacognitive self-regulation, are factors to consider in development of instruction goals
and instructional design of programs and courses.
The search for relevant studies and conceptual frameworks included a search of
online library databases, review of foundation and organization resources, and review of
government documents and databases. Google Scholar, ERIC, and the University of
Southern California library were searched using the following key terms: “clickers,”
metacognition, self-regulation, technology and academic outcomes, technology and
higher education, technology and effectiveness, motivated strategies for learning
questionnaire (MSLQ), metacognitive awareness inventory, metacognitive self-
regulation, and combinations of the aforementioned. ‘Metacognition’ yielded upwards of 25,000 results, and ‘technology’ with ‘higher education’ and ‘academic outcomes’ yielded 472,000. In searches, key words were not limited to title or abstract. In order to maintain relevance to the current study, only research pertaining to academic learning in higher education was included, except in the case of groundbreaking articles or measurement instruments (e.g., Flavell, 1979; Pintrich et al., 1993), and the years 2008 to 2011 were preferred, as were relevant “clicker” articles in the areas of social sciences and
psychology and with undergraduates. Academic learning in higher education included
studies in the context of class, lectures, and labs. While some studies from subject areas
of science, engineering, technology, and mathematics were included, articles from social
sciences and psychology were of primary interest. Articles from the following journals
were used in this study: Journal of Higher Education, The Journal of Academic
Administration in Higher Education, Review of Educational Research, Psychological
Reports, Contemporary Educational Psychology, Educational Researcher, American
Psychologist, American Psychological Association, Educational Technology Research
and Development, Educational Psychologist, the Chronicle of Higher Education,
Computers in Education, Journal of Interactive Education, American Journal of Physics,
College Teaching, Personality and Individual Differences, The Journal of Mind and
Behavior, Computers in Human Behavior, Theory and Practice, and Education and
Psychological Measures.
In addition to articles retrieved through the searches described above, examination
of the sources referenced in the retrieved articles led to additional sources of information
including books and reports. In total, 660 studies, which included quantitative and qualitative research designs, were reviewed. The sources included in the present effort include studies in psychology, social science, and educational contexts that discuss
information most relevant to the present investigation. From the 660 studies viewed, the
sources selected to include in the current effort were from higher education contexts,
recent research, landmark research, and, when possible, “clicker” studies from social
sciences, psychology, and general education courses. Additionally, 23 sources that
presented theoretical frameworks were reviewed. These sources span from 1993 to 2011.
In the following literature review, studies that highlighted current research
involving technology’s role in learning and metacognition and self-regulation in higher
education are discussed. Topics of discussion include technology in higher education,
faculty uses and perceptions of technology, metacognition and self-regulation, and
audience response systems in higher education. Each section introduces the topic and ends with a discussion of the relevance of the information to the current study. Finally, the literature review is summarized, drawing attention to the factors that are most important to this study, and leads into Chapter 3, which describes the study design and methodology.
Technology in Higher Education
Technology’s presence is seen in a wide variety of educational settings including
traditional educational environments, home schooling, workplace learning, learning
centers, distance learning, computer based learning software, adult education, educational
television and videos, technical certifications, internet cafes, and in lifelong learning
environments (Collins & Halverson, 2009). There is a revolution in education, because of
the flood of educational technology (Collins & Halverson, 2009). The thrust of
technology’s influence on education occurs outside of traditional educational settings.
While traditional education seeks uniformity, technology supplies individualized
education. In order for traditional educational contexts to survive, technology must be
incorporated into the educational system with intention and from a universally accepted
framework that lends vision and structure to the end that technology and education are
working in tandem. For such a framework to develop and be embraced, empirical
research examining technology’s impact on education, and the depth of that impact, is a
foundational necessity, because educational technology is developing rapidly.
Multimedia technology in higher education. In the quest to create a learner
centered environment in higher education, technology is frequently employed, at times
effectively, and, at times, with limited benefits. Clark and Feldon (2005) argued that
while multimedia is exciting, creates more instructional opportunities, and can reduce
costs, multimedia presents some problems, because the benefits are often misunderstood
and misinterpreted, and in such cases multimedia is an unjustifiable expense. When articles discuss multimedia, Clark and Feldon stated, this term is usually employed in reference to “any vehicle for presenting or delivering instruction” (p. 3). Clark and
Feldon discussed multimedia learning and five common notions about effective uses of
technology, that, while widely accepted, are not validated by research. First, technology
has often been assumed to be a tool that improves student learning, when in reality
instruction by a person can yield the same results as technology (Clark & Feldon, 2005).
Clark and Feldon declare that studies purporting to provide evidence of benefits of multimedia on student outcomes do not separate instructional methods from “sensory mode,” which confounds results. Often animation and other forms of entertainment are
used, because these are thought to increase learning and attention; research indicates that
while the students enjoy such experiences, their minds are overloaded by the distractions
and learning is reduced (Clark & Feldon, 2005). Furthermore, when visual and text
explanations are given so that students are receiving the same information two ways,
again, memories are overloaded and learning is diminished. A second faulty belief is that
when student interest is increased, academic achievement will increase. Research
demonstrated that when students indicate a higher level of interest, outcomes can
decrease; in fact, higher interest can be correlated with lower course outcomes (Ainley,
Hidi, & Berndorff, 2002; Clark & Feldon, 2005). A third faulty belief is that mental effort
is thought to improve when using multimedia; however, current research indicates
increased complexity of visual displays, and complexity of instructional design, reduces
mental effort and impedes attainment of learning goals. A fourth faulty belief is that
multimedia is thought to increase student motivation. Research on multimedia and student motivation is reported to have a tendency to measure factors of student enjoyment and interest rather than learning goals for courses (Clark & Feldon, 2005).
Students frequently think that in a course with multimedia designs more individual
attention is received; there is little research to draw from to verify this. This belief is not
clearly backed by research, but it is important to note there are indications that more
instructor contact seems related to increased student persistence, and more instructor
contact may lead to higher retention rates (Clark & Feldon, 2005). A fifth faulty
assumption is that multimedia can effectively teach to different learning styles. Each
student’s learning styles and preferences would need to be identified, and research has
failed to consistently support connections between learning styles and preferred
instructional style. Although many studies purported to demonstrate that a variety of
learning styles exist, such as “auditory learners” and “visual learners,” and that teaching to different learning styles improves learning and performance, those studies have not employed a random research design that would lend credibility to the results (Pashler, McDaniel, Rohrer, & Bjork, 2008). Pashler et al. (2008), a team of eminent researchers in the psychology of learning assigned the task of reviewing the existing literature on learning styles, concluded that studies of teaching to learning styles that lack an experimental design, including robust corroboration between clearly delineated learning styles linked to instructional strategies, produce results which lack credibility and validity. Research indicates students’
preference for a particular type of instruction did not seem to be related to benefits in
learning.
Suggestions are made by Clark and Feldon (2005) to guide instructional design
and further research. First, they suggest avoiding overly complex tasks and visual displays when designing instruction. Clark and Feldon point out that the two most common predictors of positive student outcomes are prior knowledge and a mastery orientation, one in which a learner focuses on subject mastery as opposed to performing well on a single task (Schunk, Pintrich, & Meece, 2008). A reliance on
multimedia may result in less instructional guidance, and Clark and Feldon identify insufficient instructional guidance as a disadvantage to the novice learner, while strict guidance instruction is a disadvantage to more advanced students. Ultimately, instruction
that is beneficial takes into account prior knowledge of the learner. This has little to do
with the use of multimedia, and more to do with the vehicle of instruction.
Technology and gender differences. In past research about level of comfort
with technology, gender differences were indicated as a factor (Popovich, Gullekson,
Morris & Morse, 2008). The comfort level of males, as well as general use of technology,
was higher than that of females; Popovich and colleagues (2008) compared comfort level
and use of computers to examine for gender differences among undergraduate students in 1986 compared to undergraduate students in 2005. Popovich and colleagues discovered that although gender differences existed in 1986, the same differences were not reflected in the 2005 participants. Results from Popovich and colleagues are consistent with past studies of comfort level with technology: the amount of time spent using computers was related to positive computer attitudes in both 1986 and 2005. However, unlike the 1986 study, there was no difference in 2005 between genders for comfort level using computers, time spent on the computer, number of computer courses, level of anxiety, or attitude toward computers.
Faculty uses and perceptions of technology. Reasons behind faculty choice of technology use have been the topic of recent research (Pirato, 2011), as have commonly held beliefs about technology (Clark & Feldon, 2005). Most often, faculty, as compared
to administrators, are the ones who use educational technology in higher educational settings with students; therefore, how faculty use technology, and faculty perceptions of technology, are important considerations in research. Uses and perceptions of technology
change over time (Burnstein & Lederman, 2006). According to recent research, faculty
acceptance of technology is indicative of whether technology will be utilized in a course,
to what degree, and what type (Lantz, 2010; Pirato, 2011).
A meta-analysis performed on research literature pertaining to acceptance and use of technology by faculty in higher education included 79 articles (Turner, Kitchenham, Brereton, Charters, & Budgen, 2010). Turner and colleagues’ (2010) results indicated that ease of use and perception of how useful the technology was are not necessarily indicators of usage. In research investigating faculty use of technology, Pirato (2010) considered whether usage of technology was influenced by faculty attitude toward technology. Pirato (2011) suggests perception of technology, position of the faculty member within an organization (e.g., faculty or administrator), and type of position (e.g., sciences or education) are factors that impact whether resources are employed. Also, the cost associated with the educational technology, and the possibility that faculty may be reluctant to change from a traditional lecture style, indicate that technology usage is a function of comfort level and position.
Technology Use in Higher Education Summary
Clark and Feldon (2005) provided significant points to consider for the current
work. Technology is incorporated in courses in order to engage students in deeper
cognitions with the goal of increased student outcomes. Utilization of educational
technology in courses must consider empirically based research, not commonly held
beliefs; implementation considerations should be addressed in instructional design.
Moreover, instructional design should take into account prior knowledge of the learners, and should view technology as a vehicle: a means to an end, and not the end in itself. Of further importance to the current work is that faculty familiarity with
technology influences usage in the educational setting, because cultural and social factors
are involved in the acceptance and use of technology in higher education settings (Pirato,
2010). The educational technology with which the current effort is concerned is regularly
employed in the lecture context, use of the technology is written into the syllabus, and
use of the technology is expected of students enrolled in the course. Faculty, in general,
can display a reluctance to incorporate newer technologies (Kelly, 2007; Pirato, 2010).
Many instructors continue to rely on a lecture format and the use of dry-erase boards, or even
blackboard and chalk. Moreover, concerning the elements that comprised the course
selected for this study, the faculty member’s position in the organization, acceptance of
the technology, and comfort level with the technology of concern, this setting provided
optimal conditions for conducting research so that confounding variables were reduced
(Lantz, 2010; Pirato, 2010).
Research indicates the “Net Generation” or “the Millennials” (Carlson, 2005, p. 1), born between 1980 and 1994, are “smart yet impatient” (Carlson, 2005; MacGeorge et al., 2008; Mollborn & Hoekstra, 2010). These students seem to be able to absorb a
barrage of information from various technological devices while doing homework,
though the depth of understanding is debatable. Because of ease of access to information,
this generation is characterized as impatient, expecting immediate results, and wanting
more control over learning, including when they learn, what they learn, and how they
learn (Carlson, 2005). Students in the 21st century, both male and female, are comfortable
with technology use, and anxiety is no longer related to gender, only to actual time spent
utilizing technology (Anderman, 2011; Popovich et al., 2008). The educational technology for the current work, audience response systems, indicates a number of benefits: a) increased student engagement, b) student anonymity (Mollborn & Hoekstra, 2010; MacGeorge et al., 2008), c) improved listening ability because of “resetting the clock,” d) linking concepts with personal experience, e) timely feedback (Whittinghill & Kadlowec, 2010), f) enjoyment, g) clarified instructor expectations, h) opportunities for peer instruction, i) efficient data collection (Lasry, 2008), and j) positive influence on attendance (MacGeorge et al., 2008). Because recent research has emerged indicating audience response systems are useful to engage students’ cognitions (Mayer et al., 2009; Mollborn & Hoekstra, 2010), examining the impact on student metacognition, a chief concern of the proposed work, is a relevant pursuit.
Metacognition and Self-Regulation
Once an instructor is utilizing technology and is comfortable, how to best utilize
technology to benefit students is the next item on the agenda. Frequently the intention of
investigations of educational technology is to examine effectiveness in order to improve
student outcomes. Research indicates that better outcomes tend to be produced by
students who are more metacognitively aware (Dinsmore, Alexander, & Loughlin, 2008;
Mayer, 2008). In addition students who are self-regulated learners form learning goals in
order to achieve desired outcomes (Zimmerman, 2000); because they have the ability to
make use of cognitive learning strategies, these self-regulated students impact their
learning efforts expertly (Wolters, 2010). In the following section, the constructs of
metacognition and self-regulation are discussed in the context of current research, and in
relation to the current work.
Metacognition. In research literature there are several definitions of
metacognition. Metacognition has been defined as referring to “the ability to reflect upon, understand, and control one’s learning” (Schraw & Dennison, 1994). A second definition of metacognition is the “knowledge and awareness of one’s own cognitive processes” (Mayer, 2008, p. 108), and a third is simply how one thinks about his or her own thinking
(Anderman, 2011). Moreover, metacognition has been defined as a general awareness an
individual has about his or her own thinking and as an understanding of his or her own
thoughts. The present study views metacognitive knowledge as understanding of one’s
thoughts.
According to Anderson and Krathwohl (2001), metacognitive knowledge has
three knowledge subtypes: a) strategic, b) contextual and conditional knowledge of
cognitive tasks, and c) self-knowledge. Strategic knowledge includes understanding of
organizational and elaboration strategies as techniques to utilize in study so that information is moved from working memory (WM) to long-term memory (LTM), encoding the information for later retrieval as needed (Dembo & Seli, 2008).
Metacognitive knowledge about cognitive tasks includes understanding when and why to
approach tasks differently and understanding that some tasks inherently vary in difficulty.
The third subtype, self-knowledge, originally discussed by Flavell (1979), is the
variability that exists in each person, including understanding personal strengths and
weaknesses, what one does and does not know, and how to inform oneself of what is
unknown. The present effort is primarily concerned with the third aspect of
metacognitive knowledge, individual variability, as well as strategic knowledge regarding
what students do before and after class. Individual variability relates to the present effort
in that, while all students utilize the same educational technology and are subjected the
same instructional strategies, each student has individual strengths and weaknesses,
varying levels of college preparation and knowledge and skills regarding academics, and
motivational beliefs (Anderson & Krathwohl, 2001). Because of the scope of
individuality in metacognition, individual perception is an essential element in this
current effort.
Self-Regulation. Aspects of self-regulation include formation and pursuit of
goals, time management, managing the physical and social environments, and self-
monitoring (Chen, 2002; Pintrich et al., 1993). Self-regulation is a process that is in
control of the learner, includes self-monitoring, and is initiated by the learner (Wolters,
2010). Self-monitoring bears a distinct resemblance to the third aspect of metacognition
discussed above. Self-monitoring is the aspect of self-regulation with which the present
study is concerned because self-monitoring is a distinct component of metacognition and
it is the aspect that links to self-regulation (Chen, 2002). Metacognition, self-regulation,
and self-regulated learning are used interchangeably in the literature (Dinsmore,
Alexander, & Loughlin, 2008). The manifestations of the self-regulatory aspects of
metacognition are of interest in the current study.
In research literature, metacognition and self-regulation are frequently employed
in similar manners; the constructs overlap and are used interchangeably, albeit
unintentionally (Dinsmore et al., 2008; Pintrich et al., 1993; Schraw et al., 2006).
Metacognition is an essential component of self-regulation (Schraw, Crippen & Hartley,
2006). A self-regulated learner makes use of strategies including planning, minimizing
distractions, seeking help from peers and instructors, organizing, scheduling, note taking
and making charts, goal setting, careful time management and juggling multiple personal
and academic demands (Whipp, 2004). Moreover, students who are self-regulated
learners adapt strategies to the demands of different situations which demonstrates
metacognition (Wolters, 1998).
Metacognition and self-regulation, planning, monitoring, and evaluating.
Schraw et al. (2006) reviewed recent use of self-regulation in science literature to discuss how this literature impacts future education in the sciences. The authors focused on three aspects of self-regulation to discuss the implications for science education and use of learning strategies: a) cognitive strategies, b) control of metacognitions, and c) student beliefs about motivation. Schraw and colleagues note the distinct role played by
metacognition in self-regulation. The metacognitive component of self-regulation
involves a process of planning, monitoring, and evaluating; due to this process,
metacognition is described by the authors as expressly important, because the process
enables students to assess and evaluate conceptual understanding and skills and their current level of learning, and to direct learning efficiently. Schraw and colleagues (2006) state that research indicates cognition and motivation alone are inadequate to develop self-regulated learning; metacognition is also required. Furthermore, instructors should guide students in the development of metacognition through instruction.
Digital and media programs encourage students to build representations of core
concepts and review the knowledge they construct (Schraw et al., 2006). Technology can
guide students’ self-regulation through the use of cognitive scaffolding, feedback, and
collaborative efforts; in so doing, students become more aware of metacognitive
processes (Schraw et al., 2006). Schraw and colleagues (2006) suggest that metacognitive
knowledge is constructed by software programs which allows for students to save
information, and as student conceptual understanding evolves and the students update the
information with new knowledge, the students are able to reflect on the development of
their thoughts. This concept of metacognitive construction of knowledge may be relevant
to the current work, because of the increase in self-reflection that seems to occur when
the results of polling are depicted on the slide by a histogram or bar graph, and, in
addition to clarifying course concepts, may influence the construction of metacognitive
knowledge. In the same way as the software programs above, “clickers,” also a tool of
educational technology and a key element in the current study, guide students in
reflecting on “clicker” items through the questions, Peer Instruction, and through viewing
the resulting histograms. This process seems to follow the same pattern as the software
programs, introducing new knowledge, students reflect on the knowledge, update their
26
existing schema with the new information, and are able to reflect on the development of
their thoughts.
Metacognitive self-regulation as a framework. In order to develop student
metacognition, and to gauge metacognitive processes, gaining student perspective is
important, because of the individual variability that exists (Anderson & Krathwohl,
2001). In a study by Ross, Green, Glennon, and Tollefson (2006), students’
metacognitive self-regulatory processes involved in test preparation were examined to see
whether studying habits and performance on exams changed based on the expected
complexity of the exam. Strategy choice for exams is viewed as a function of
metacognitive self-regulation. There were three hypotheses: first, students expecting increased complexity will use deeper level strategies in studying and fewer strategies that engage at a more superficial level. Second, students who are anticipating greater
complexity will have higher outcomes than the students who are not anticipating test
items that require deeper cognitions. Third, the strategies employed by the students guide
them in the level of processing necessary for the exam and outcome. In the presentation
for the first group, students were advised that the test contained information requiring
deeper levels of processing and were given complex examples. In the presentation for the
second group, students were informed that the exam would require lower levels of
processing and were given an example that required memorization. After attending a 20
minute lecture, students were provided 20 minutes of study time in a room; students
could choose to study individually or in groups.
Previous research was confounded with the type of item the students expected to
have on the test (Ross et al., 2006). Ross and colleagues focused the students on the
complexity of the test so that the anticipation of greater complexity would cause students
to utilize deeper levels of processing during studying. A central focus was whether these
students would produce higher exam scores than students who did not anticipate higher
levels of complexity. Results indicated that students anticipating deeper levels of
processing for tests performed better on items requiring deeper cognitions, but did not
receive higher scores on the items requiring memorization. As expected, students did
change study strategies in accordance with expectations for the type of exam, and this
enabled students to perform better. The generalizability is limited due to the homogeneity
of participants who were primarily undergraduates and predominantly from a middle
class background. However, according to results, students chose studying strategies and
levels of cognitive engagement for tests based on an expectation of the level of
complexity of exams. There may be implications for instructors as far as level of
communication about instructor expectations of students for tasks. This is important to
the current study, because “clicker” questions are designed anticipating certain levels of
student knowledge and student engagement. There is the expectation that students will
have read and studied the weekly readings to participate in lecture and for a quiz
administered at the start of each lecture.
Metacognition, self-regulation, and self-regulated learning in literature. Since
there are many studies that use the terminology “metacognition, self-regulation, and self-
regulated learning” (Dinsmore, Alexander, & Loughlin, 2008, p. 392), the meaning of the
constructs and how researchers apply and test the constructs is important to determine.
Dinsmore et al. (2008) sought to clarify the meaning of metacognition, self-regulation,
and self-regulated learning. These terms are frequently used interchangeably in the
literature, which results in a lack of clarity in research and limits the ability to convey
meaning about these constructs. Databases were searched, and several educational
psychology oriented journals were searched by hand. Because of the tremendous number
of articles, inclusion was limited to articles published from 2003-2007. Three problem
types emerged. First, for some articles, the keywords entered in the database did not
match the construct discussed in the article. Second, terms were used interchangeably or only casually
mentioned in the discussion or summary. Third, some studies measured and defined more
than one of the constructs. Dinsmore and colleagues determined that the construct used in
each study was predominantly based on the definition given and the instrument employed
in measuring the construct. In cases where the construct definition was metacognition and
the Motivated Strategies for Learning Questionnaire (MSLQ) was the instrument,
Dinsmore et al. categorized the research as self-regulation, because the MSLQ measures
aspects of self-regulation. The authors sought to clarify these constructs in order to
identify explicit and implicit definitions.
Dinsmore and colleagues (2008) concluded with some clarification of self-
regulation, self-regulated learning, and metacognition, and pointed out areas where there
was some continuing haziness. One of the distinctions identified is that self-regulation is
rooted in actions and metacognition is cognitive in nature. As more information about
metacognition emerged in the research literature, an awareness of behavior unfolded, and
metacognition became increasingly associated with behavior. Dinsmore and colleagues
cite the groundbreaking study by Flavell (1979) as having utilized the terms “control”
and “monitor” for metacognition; however, these terms occur with similar regularity in
the literature on all three constructs. According to Dinsmore and colleagues (2008), this
is an indication that “control” and “monitor” are significant in each of the constructs.
What is monitored or controlled may be the factor determining the difference
between metacognition, self-regulation, and self-regulated learning (Dinsmore et al.,
2008). Metacognition and self-regulated learning studies both tend to employ the
Motivated Strategies for Learning Questionnaire (MSLQ) to measure the constructs. Self-
regulation and self-regulated learning studies relied heavily on measures that were self-
report in nature while studies looking at metacognition tended to rely on self-report,
observations, and performance ratings. Lack of clarity in what is being measured will
cause lack of clarity in results; however, the authors warn that the concepts are not fixed
and can evolve. In fact, metacognition and self-regulation were separate concepts in the
beginning, and as more research was conducted about the concepts, they became
intertwined (Dinsmore et al., 2008). The authors describe the distinction between
metacognition and self-regulation in terms of what stimulates awareness of the need for
adaptation. Metacognition attributes awareness of the environment and the corresponding
regulation actions to the individual’s mind, and self-regulation attributes the awareness of
the individual and the corresponding regulatory actions to the stimulation of the
environment. Educational researchers must endeavor to define constructs clearly so
that research questions are answered with clarity and intention.
When employing these concepts, Dinsmore and colleagues appeal to researchers to
“monitor” choice of terms, to “control” the conceptualization and operationalization of
constructs, and to “regulate” the dissemination of knowledge and intentions of studies.
Summary. Schraw and colleagues’ (2006) emphasis of the significant role played
by metacognition in self-regulatory processes is of interest to the current effort (Schraw
et al., 2006). While Motivated Strategies for Learning Questionnaire (MSLQ) by nature
looks at self-regulation as a more general framework, this study examines metacognition
specifically. Hence, in order to define and measure metacognition in an explicit fashion,
careful attention is paid in the current work to selecting appropriate terms to
operationalize metacognitive self-regulation. Metacognitive self-regulation is the
theoretic lens for Ross et al. (2006) and is of particular interest to the current effort.
Clarity of the construct and measures is a concern of the present study, because
ambiguity in either would emerge as a weakness of the study. This concept is pertinent
to the present study, because “clickers” draw students’ attention to course concepts and
may therefore lead students into deeper cognitive levels of thought about the concepts
presented. Based on the results of Ross and colleagues (2006), students
choose the level of cognitive engagement needed for a task based on expectation of task
difficulty, and this choice of strategy is a metacognitive function (Mayer, 2008).
In the current study the educational technology employed is a useful tool for
engaging students in the lecture process. The environment is transformed by the
participatory nature of the device, “clickers” (Hoekstra, 2008; Kelly, 2009; Mayer et al.,
2009). The participatory nature of the “clickers” and the idea that students choose the
level at which to engage in tasks, and the strategies needed for a task, according to the
perceived level of difficulty (Ross et al., 2006) may be relevant to the current effort.
Moreover, the relevance of specificity in language choices is a concern of the present
study (Dinsmore et al., 2008), because constructs that are not phrased correctly and
measured accurately can introduce confounding factors that lead to weak and uncertain
results. Terms employed in this investigation are chosen to draw attention to
metacognitive self-regulation processes, and the influence that “clickers” may have upon
these processes. In the present work the relationship between metacognition and self-
regulation is emphasized, and the two are viewed as working in tandem rather than as problematic.
Based on the overlapping relationship of metacognition and self-regulation (Dinsmore et
al., 2008), and for the purposes of the present work, metacognitive self-regulation is
operationalized as the cognitive self-knowledge that enables self-regulatory action which
adapts according to the perceived needs of the environment (Artino, 2005; Pintrich et al.,
1993).
Audience Response Systems
Among the technological devices employed in the endeavor to create student
centered learning contexts, and to improve student metacognition, are audience response
systems. These devices, commonly referred to as “clickers,” are continuing to increase in
popularity among faculty in higher education (Moss & Crowley, 2011; Mollborn &
Hoekstra, 2010). “Clickers” are utilized as an instructional aide to gage student learning,
32
(Duncan, 2006). This usage aligns with the emphasis that is placed on student
metacognition. In addition, this type of educational technology promotes a learner
centered educational environment, because “clickers” necessitate participation, gauge
understanding, direct lecturers to material needing clarification, and students tend to
believe “clickers” contribute to understanding course concepts, increase their final grade,
and create interest (Prather et al., 2006).
Higher education’s current interest in “clickers” is relatively new and began
about 16 years ago (Burnstein & Lederman, 2006; Duncan, 2006; Mayer et al., 2009;
Moss & Crowley, 2011). Many credit Eric Mazur at Harvard with the first use of
“clickers” in a higher education context in the 1990’s (Duncan, 2006). According to
Moss and Crowley (2011) and Burnstein and Lederman (2006) “clickers” are widely used
in colleges and universities, especially in the United States, and optimism exists for
continuing use and potential. Our educational system, according to Collins and Halverson
(2009), is in flux. As a result of the shift toward learner centered experiences, and
because technology is widely utilized, how technology fits into education is a crucial
question. In order to provide a complete picture of “clickers,” the following topics are
discussed: a review of the history of “clickers,” including what these devices are and
how they entered the educational arena, and what we know about “clickers” from
research.
History of “clicker” use in higher education. One of the earliest usages of
audience response systems (e.g., “clickers”) was in 1972 by Littauer at Cornell
(Burnstein & Lederman, 2006); Littauer developed a home-made device that was
hardwired and had buttons on a keypad. However, as aforementioned, many attribute the
beginning of “clicker” usage in higher education to Mazur (Duncan, 2006). In 1988,
Horowitz proposed the use of a type of audience response system at IBM which was
utilized for an advanced technology educational setting (Burnstein & Lederman, 2006;
Duncan, 2006). Next, a company called The Better Education Company made a system
commercially available (Burnstein & Lederman, 2006). Radio Frequency (RF) systems as
audience response systems were developed and utilized by the business sector in the
latter part of the 1980’s; this system was first used for educational purposes by the
Illinois Institute of Technology (ITT). “Clicker” usage has increased since the mid 1990’s
when the technology became available commercially. Before this time audience response
systems were not widely used, because the systems were expensive, there were fewer
available, and faculty may have been reluctant to change from a traditional lecture format
in which the students were passive participants.
Modern “clickers” use either infrared (IR) or radio frequency (RF) for
transmitting information. IR systems are usually one-way, have a short range, and require
the user to look at the screen upon which results are displayed to ensure that the answer
registers; Burnstein and Lederman (2006) refer to this as a confirmation problem. RF
systems are usually two-way, meaning that a confirmation is sent back to students’
“clickers” to indicate that their responses registered; RF systems have a greater range,
but were less available in the past due to high cost. RF technology is not as expensive as
when it was new (Burnstein & Lederman, 2006). Other devices can be utilized similarly
to “clickers.” Cell phones can function as “clickers” by showing responses on a web page.
Notebooks and personal digital assistants can be utilized in this capacity when the
lecturer’s computer is connected to the internet by Wi-Fi to transmit and collect data, and
the lecturer’s computer has software that can present results immediately as histograms,
pie charts, or bar charts. “Clicker” designs can accommodate short answer, polling
multiple sites, and user log-ins to allow for individual tracking of students.
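Conceptually, the software side of any of these systems performs the same core task: collect at most one response per device and display the aggregated distribution. The following minimal Python sketch is a hypothetical illustration only (the device IDs, function names, and data are invented for this example and do not represent any commercial system’s software); it shows how responses keyed by device ID could be tallied and rendered as a text histogram:

    from collections import Counter

    def tally_responses(responses):
        # responses maps a device ID to that device's answer choice;
        # keying by device ID enforces one counted response per student.
        return Counter(responses.values())

    def display_histogram(counts, choices="ABCD"):
        # Print the percentage of respondents selecting each choice,
        # with a simple text bar standing in for the on-screen histogram.
        total = sum(counts.values()) or 1
        for choice in choices:
            pct = 100 * counts.get(choice, 0) / total
            print(f"{choice}: {'#' * int(pct // 2):<40} {pct:5.1f}%")

    # Example: five hypothetical students answer one multiple-choice item.
    votes = {"dev01": "B", "dev02": "B", "dev03": "A", "dev04": "C", "dev05": "B"}
    display_histogram(tally_responses(votes))

Keying the tally by device ID mirrors the individual-tracking capability noted above: each device contributes at most one response per item, because a later response from the same device simply overwrites the earlier one.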
In sum, the history of the development and use of “clickers” and how they have
been utilized in educational settings is important to the current work, because history and
previous methods are keys to developing relevant research questions about “clickers.”
Whenever technology is employed in educational settings, research is needed to explore
the most effective uses and practices to benefit the learning environment and to justify
expenditures. Technology’s role in education is still being examined and is not fully
resolved. Researchers disagree about whether technology is directly connected with
student outcomes or if the connection lies in the instructional design and pedagogy
(Tamim et al., 2011). Clark and Feldon (2005) strongly recommend viewing the
effectiveness of educational technology from a research-based perspective. Use of
technology in the classroom necessitates evaluation, because determining whether the
technology is effective, and which usages are most effective, may improve student
outcomes, instructional design, and strategies.
Research on “clickers”. As previously mentioned, “clickers” are popular devices
and increasingly widely used in educational settings. The present study is concerned with
the use of “clickers” in large lecture settings with undergraduate students and the
influence of “clickers” on metacognitive self-regulation. In the following section,
research that employs “clickers” in large lecture settings with undergraduates is
discussed.
While technology is often utilized to increase and sustain student interest, interest
has not been clearly connected to improved outcomes (Ainley et al., 2002; Chen, 2002;
Clark & Feldon, 2005). Although technology has inconsistently demonstrated increased
outcomes; the presence of technology will continue to be a part of education in the 21
st
century (Anderman, 2011; Collins & Halverson, 2009). The use of technology in the
classroom can go beyond mere entertainment and attempts to maintain students’ interest.
While student interest can be important to gain and sustain, and is in itself a topic of
research, interest does not consistently correlate with better outcomes and, similar to
educational seduction (Garner et al., 1989), has yielded poorer outcomes (Ainley et al.,
2002; Clark & Feldon, 2005). Understanding how to effectively utilize technology in
educational settings is a necessary component of education, and will continue to be as
new technology and devices are developed. Use of technology as an effective tool in an
educational context to support meaningful learning is important to the current work.
Educational technology needs to be utilized in ways that research indicates are
effective and designed in consideration of course learning goals and instructional design
(Clark & Feldon, 2005). According to research, “clickers” can:
a) measure what students know before you start to teach them (pre-assessment),
b) measure student attitudes, c) find out if students have done their assigned
reading, d) get students to confront common misconceptions, e) transform the
way you do any demonstrations, f) increase students’ retention of what you teach,
g) test student understanding (formative assessment), h) make some kinds of
grading and assessment easier, i) facilitate testing of conceptual understanding, j)
facilitate discussion and peer instruction, [and] k) increase class attendance.
(Duncan, 2006, p.2)
“Clickers,” typical students, and frequency of measurements. In a study of 854
students, MacGeorge et al. (2008) examined audience response systems, because of the
limited attention that has been given to the benefits of systems such as “clickers.”
MacGeorge et al. found limitations of previously performed studies on “clickers” for
three reasons. First, the students did not fit the typical profile of an undergraduate; most
studies involving “clickers” have been performed in sciences, engineering, mathematics,
and technology. Second, a limited number of elements were measured in the studies.
Lastly, instruments of measure tended to be administered only once. As a result of these
limitations, MacGeorge and colleagues’ foremost goal for this study was to gather data
from undergraduates in areas other than science, mathematics, technology, and
engineering. A secondary goal of the study examined ethnicity and use of “clickers,” and
the third goal of the study included gathering data at three separate times with more than
one instrument. There were 3,000 participants in 15 sections of an undergraduate general
education course at a mid-western university. Several faculty members participated in
the study, which may be a limitation: continuity of presentation may be influenced by
differences in mannerisms, and results may be confounded. The question may remain as
to whether
results are measuring “clickers” or instructional methodologies. Results from MacGeorge
and colleagues (2008) confirmed “clickers” were easy to use, and impacted attendance
positively. In this study “clickers” were not shown to influence student preparation for
class. Gender, ethnicity, and year of the students did not have a significant effect on
results of this study. The strengths of this study are found in the large number of student
participants, multiple administrations of the instrument, and participant characteristics;
these factors increase generalizability. A weakness is the lack of randomization.
MacGeorge and colleagues (2008) suggest recommendations for future research
relevant to the current effort. First, MacGeorge and colleagues recommend examining
students’ behaviors and perceptions of how the instructor uses the device. In the current
effort, student perceptions are key aspects considered in order to measure the construct,
metacognitive self-regulation. The second recommendation is to examine student
learning outcomes and learning processes with “clickers.” The setting for the current
work is an undergraduate educational psychology course, and undergraduate outcomes
will be among the elements taken into account to examine usage of “clickers.” Moreover,
whether and how “clickers” influence student metacognitive processes is a foundational
piece of the current work.
“Clickers” and the social context. Undergraduate students were surveyed to
examine how interactive technologies, “clickers,” impact student learning (Hoekstra,
2008). Hoekstra’s findings suggest that “clickers” significantly change the social
environment in which students experience learning. “Clickers” increased interactions as
well as having the capability to increase cooperative learning and the application of
concepts. Hoekstra suggests that the successful use of “clickers” depends more upon the
students than the instructor, because the communication between the instructor and
students is changed, and the students are required to actively participate. Findings also
support that “clickers” are a useful tool for problem-based learning, and are less anxiety
provoking for students due to the anonymity afforded. Hoekstra warns the widely popular
device may be a tool of “edutainment,” or educational seduction (Garner et al., 1989), if
used as merely a tool to gain students’ attention, instead of using the device to encourage
critical thinking, aid students in working with concepts and terms, and provide
cooperative learning opportunities (Hoekstra, 2008). “Clickers” should be utilized in
conjunction with instructional strategies, because cognitive processes are primed by the
instructional strategies (Mayer, 2008; Mayer et al., 2009).
Prather and Brissenden (2009) examined the extent to which students felt
“clickers” enhanced their learning. This element is of importance to the current effort
because student reflection about learning is a form of metacognitive understanding. To
gain insight from students about metacognition, questions are asked that rely on students’
reflections about thinking and learning (Anderson & Krathwohl, 2001). Participants in
Prather and Brissenden (2009) felt that “clickers” contributed positively to course
outcomes, and some felt “clickers” aided in understanding course concepts. The question
that remains unanswered is, “How did ‘clickers’ contribute?,” and this is one of the foci
of the current effort.
“Clickers” and student outcomes. Student metacognition may be involved in
increasing student outcomes when “clickers” are utilized (Duncan, 2006; Mayer et al.,
2009). Mayer and colleagues (2009) examined “clickers” in large lectures and the
strategy of employing “clicker” questions, which is common to lectures using “clickers.”
The study employed a pre-post-test design, and compared the mid-term and final grades
of students, separately and combined. “Clicker” groups were found to have an element of
interaction between the instructor and student different from the groups without the
“clickers.” Positive connections have been found in research between “clickers” and
student learning gains. Large learning gains were reported by Mazur (1991), and several
replications confirmed the connection between “clickers” and student benefits (Beatty et
al., 2006; Caldwell, 2007; Duncan, 2006; Lasry, Watkins, & Mazur, 2008; Mayer et al.,
2009; Meltzer & Manivannan, 2002; Van Dijk et al., 2001). Some results were moderate
(Chen et al., 2010; MacGeorge et al., 2008). James and Willoughby (2011) state that many
studies find increases in attendance and interest without increases in learning outcomes.
There are several studies that have established no such effect (Caldwell, 2007; Lasry,
2008).
Hoekstra (2008) expresses the concern that “clickers” may merely be instruments
to increase attention and engagement, because this has become an expectation for
educators. Two additional recent studies compare “clickers” to another means of
feedback in lecture, other than raising hands. Lasry (2008) compared “clickers” to flash
cards in physics, and Chen et al. (2010) utilized flashcards compared to personal digital
assistants to evaluate rapid feedback in engineering. Neither of these studies found
significant differences between methods. There are articles emerging that suggest subject
specific best practices and strategies to support learning (Beatty et al., 2009; Marino,
Bremner, & Emerson, 2010; Moss & Crowley, 2011). The distinct difference seems to lie
in utilizing “clickers” as a pedagogical tool in conjunction with instructional strategies,
Peer Instruction in particular (Lasry, Watkins, & Mazur, 2008; Mazur, 1991; Meltzer &
Manivannan, 2002; Van Dijk et al., 2001). This is important to the current study, because
“clickers” are utilized as a formative assessment tool to guide the lecture and as
pedagogical tool along with questioning and Peer Instruction.
Mayer and colleagues’ (2009) research is relevant to the current effort, because
the study is one of the few in a psychology setting that has some level of experimental
design. Mayer et al. compared groups of students who utilized “clickers” to groups of
students who did not utilize “clickers” but used “group questioning,” and to a control
group that did not use technology or “group questioning.” In the current work, “clickers”
are expected to be used to engage students in critical thinking about course concepts, and
to provide learning opportunities (Hoekstra, 2008). Mayer and colleagues (2009) suggest
that while “clickers” promote cognitive processes when used with instructional strategies,
the quality of questions and introducing questions that are exam-like in nature may be all
students require to engage more deeply with concepts, and that the interest in “clickers”
may be due, at least in part, to the Novelty Effect. When educational technology devices
or strategies are first introduced into a context, students frequently experience
improved performances that can diminish over time. In studies of “clickers,” it is
important to determine whether the Novelty Effect is the reason for improvements or
whether the improvements continue over time. Mayer et al. (2009) present the first study
with a quasi-experimental design that indicates “clickers” may be related to improving
learning outcomes, because the “clicker” group demonstrated greater gains than the other
two groups. Moreover, there is a clear connection to “clickers” use with instructional
strategies, and Mayer and colleagues state that “clickers” are a tool to consider to
advance student and instructor interaction if the goal of the lecture is to increase student
learning. The resulting question for the current effort to explore is what role
metacognitive self-regulation plays with this type of “clicker” usage.
Research demonstrates that self-regulation and metacognition bring about better
learning and performance outcomes (Mayer, 2008), but none of the studies that recorded
positive outcomes assessed whether students’ self-regulation and metacognition benefited
and indeed produced those enhanced outcomes. Conversely, the possibility exists in the
studies that did not find positive outcomes that the manner in which the “clickers” were
utilized did not target self-regulation and metacognition; however, this was not
established. This study seeks to establish, in a quasi-experimental context, a direct,
validly measured relationship between metacognitive and self-regulatory processes and
the usage of “clickers” in order to benefit instructional design.
“Clickers” and metacognition or metacognitive self-regulation. Research
utilizing “clickers” to develop student metacognition seems void from research literature
although use of “clickers” has been identified as a means of engaging students
metacognitively (Mollborn & Hoekstra, 2010). Mollborn and Hoekstra (2010), in a study
with 800 undergraduate students in three social science courses, examined how “clickers”
work in order to introduce a new social science oriented pedagogical model for utilizing
“clickers.” Observations conducted by Mollborn and Hoekstra (2010) confirm that
“clickers” are effective tools to engage students, because students who are shy are
afforded anonymity, and displaying histograms after student responses inspires students
to comment on and explain their responses. Mollborn and
Hoekstra (2010) utilized student interviews to confirm that students like the anonymity,
because anonymity caused students to participate more freely, without fear of how
answers compared, or without feeling nervous (Stowell & Nelson, 2007). Mollborn and
Hoekstra (2010) point out that the effectiveness of “clickers” in developing critical
thinking depends more on the quality of educational goals embedded in instruction and
the time the instructor invests in developing questions than on the device itself. Mollborn
and Hoekstra state that “clickers” work best when students understand the intended
purpose of the “clicker” and how they will be evaluated.
The effect of feedback type on engagement and learning. Higher education
instructors have incorporated electronic voting systems, also called “clickers,” at a rapid
pace, in settings ranging from small classrooms to large lecture halls. “Clickers” have
been used to achieve a variety of instructional goals and objectives. For example, the
anonymity allows for participation of students who otherwise may not be willing for a
variety of reasons. The anonymous and simultaneous manner of gathering responses via
“clickers” in comparison to traditional hand-raising also eliminates the conformity effect
where students may wait for cues from academically higher-status students in order to
decide their answer choice (Kennedy & Cutts, 2005; Stowell & Nelson, 2007). Another
benefit of “clickers” is that the immediacy of aggregated student responses allows
instructors to ascertain which concepts to re-examine during lecture (Lasry, 2008).
Instructors can archive “clicker” sessions for later perusal and design of follow-up
lessons based on content that has yet to be mastered, as well as utilize “clickers” as data
gathering tools (Prather & Brissenden, 2009).
What remains to be established, though, is the relationship, if any, between the
use of “clickers” and learning. Although research about learning gains remains
conflicted, there are promising results about the use of “clickers” in relationship to levels
of engagement (Bode, Drane, Kolikant, & Schuller, 2009; Lasry, 2008; Stowell &
Nelson, 2007; Trees & Jackson, 2007). Students’ level of engagement has been shown to
be a significant predictor of learning outcomes (Mayer, 2008; Schunk, Pintrich, & Meece,
2008) as have learner characteristics (Mayer, 2008).
There is agreement about the usefulness of “clickers” as far as attributing gains in
learning and engagement to instructional quality (Mayer et al., 2009), and “clickers”
allow for more interaction with the students than a traditional lecture, which then leads to
engagement and learning outcomes (Beatty et al., 2006; Caldwell, 2007; Hoekstra, 2008;
Mayer, 2008; Mazur, 1991; Stowell & Nelson, 2007). Because of these agreed-upon
characteristics, conducting research to establish the instructional effectiveness of
“clickers” in comparison to other methods of active engagement during lecture (e.g.,
flashcards, or raising hands to answer questions) is crucial.
Stowell and Nelson (2007) conducted a study that assessed the effectiveness of
“clickers” in comparison to hand-raising and paddles on learning and engagement. The
study established that the “clicker” group had the highest participation, was more likely
to answer questions honestly, and had a more positive emotional reaction to the in-class
participation. However, Stowell and Nelson pointed out several limitations: 1) the lecture
was shorter than most lectures, 2) the instructor for the mini lessons was not the usual
instructor for the lecture, and 3) the data were collected at one point in time. The current
study sought to address these exact limitations by
purposefully providing a longer unit of instruction, assessing engagement and
performance in different groups of students enrolled in the same type of course, and
taught by the same instructor.
A study similar to Stowell and Nelson (2007) that investigated the benefits of
“clickers” was conducted by Chen et al. (2010). This study examined personal digital
assistants utilized in the same fashion as “clickers” in comparison to flashcards for
providing rapid feedback; the authors argue that feedback provided through homework,
quizzes, and tests is tedious, slow, and late. Chen and colleagues hypothesized that
student learning is enhanced by timely and specific feedback on central concepts and
skills (e.g., the rapid feedback method), and that such feedback provides students with insight into their learning.
According to Chen and colleagues, feedback that does not address a specific task is not
beneficial. In the current effort, “clicker” items are well developed and strategically
placed so that results will not be confounded with poorly devised instructional strategies
or implementation.
In Chen et al. (2010), rapid feedback was found to be statistically significant, and
results indicated students improved in their ability to solve conceptual and application
problems. Although Chen and colleagues found that quiz scores increased moderately,
the authors state that a level of uncertainty remains as to whether the audience response
system was solely responsible for the increase. The majority of students believed they
would not have done as well in a class without feedback (Chen et al., 2010). Chen and
colleagues (2010) suggest that positive results were due to the function of the information
processing system, which works by selecting information from short-term memory,
rehearsing information in working memory, and organizing information for storage and
retrieval in long-term memory (Dembo & Seli, 2007). This process may be
relevant to the current effort, because “clicker” items lead students in a similar fashion,
focusing on information, working on information, and organizing information.
Furthermore, in Chen and colleagues (2010), rapid feedback had an impact on the
lecturer, because much organization and preparation was required in order to build into
the course well-timed concept and skill questions. “Clickers” alter the
social context, engage students in an active learning process (Hoekstra, 2008; Mollborn,
& Hoekstra, 2010), and feedback is provided immediately (Mollborn & Hoekstra, 2010).
There are other tools, less costly (Burnstein & Lederman, 2006), that can be utilized to
gather data (e.g., pencil and paper surveys, quizzes) and provide feedback in
lecture (e.g., raising hands or paddles). Whether “clickers” are simply a more interesting
tool, or an unnecessary expenditure, is an important determination. If “clickers” provide
additional benefits, such as increases in student cognitions and metacognitions, and if
“clickers” change the social dynamics more so than other strategies, then “clickers”
would have increased validity to support use in higher education contexts.
The most important aspect in Chen et al. (2010) for the current study is the
apparent link between rapid, specific feedback and student metacognition. When
utilizing “clickers,” clear learning goals and instructional design for each lecture are
essential in order to accomplish intended purposes (Chen et al., 2010; Duncan, 2006).
Addressing these foundational course concerns would also result in situation specific
feedback to students. A significant portion of the participants in Chen et al. believed
outcomes would not have been as good without the rapid feedback. This seems to
indicate student metacognition may have played a positive role in the rapid feedback
treatment provided by the instructors, that is, when the feedback was specific to the
problem, concept, or task.
In a discussion of the use of “clickers” in higher education classrooms, Lantz
(2010) recommends ways to utilize “clickers” to increase student understanding of course
concepts and aid students in construction of subject specific knowledge. Lantz asserts
that research has not yet been conducted to examine whether “clickers” impact
conceptual understanding, which is an element of the current study’s focus. Lantz (2010)
provides guidance for effective “clicker” use for educators and guidance for research.
Lantz suggests that “clickers” can be incorporated into lecture much the same way that
textbooks have become user-friendly in appearance, so that the manner of presentation
aids students in construction of knowledge, and lectures can utilize subtitles. This concept
of the presentation of the visual to enhance and not detract from the lesson agrees with
Clark and Feldon’s (2005) recommendations for use of multimedia in instruction.
Lantz’ (2010) suggestion to form the visual aspects of lecture with “clickers” so
that the visual presentation guides the student in lecture in the same user-friendly way
that textbooks are designed has more to do with visual presentation and instructional
design than with “clicker” usage. This is because the suggestion is to have power point
schema that changes with the introduction of new material to alert students to the shift in
topics. Another suggestion is to close sections with a few “clicker” questions which
serves two purposes, functioning as a transition to the new topic, and a formative
assessment to measure the current level of understanding. Because students naturally
organize material into existing schema, lecturers can guide students in the construction
and organization of the material. This allows instructors to ensure that students
understand relationships in concepts and prevents confusion (e.g. hierarchies).
Commenting on research regarding “clicker” effectiveness, Lantz (2010) cautions about
utilizing groups to measure effectiveness of “clickers” without randomization. There are
implications for the current effort, in that, while “clickers” have demonstrated increased
grades in courses, measuring groups without some level of randomization has an inherent
weakness. Results demonstrating the effectiveness and grade improvements may be due
to the group characteristics, and not to the use of “clickers.” This is an important piece of
information for this study, because there is an element of randomization which may lend
strength to the study results obtained.
Peer engagement and critical thinking. Results from James and Willoughby
(2011) indicate that, in order to achieve the level of engagement needed for Peer
Instruction, a basic ability to think critically and metacognitively is necessary. In an effort to
examine the impact “clickers” have on students in lecture, James and Willoughby (2011)
observed students’ discussions in an undergraduate introductory astronomy course during
Peer Instruction activities. James and Willoughby analyzed the student conversations.
Findings demonstrated several interesting points pertaining to “clicker” questions. First,
students responded to items in a manner inconsistent with discussions in which they
engaged. Second, student discussions were composed of incorrect ideas that did not fit
the multiple choice responses offered. Third, results indicated that although students
often understood the question asked, answers deduced by the students showed inaccurate
conceptual understandings which were not anticipated by the instructor in the multiple
choice answers. Not surprisingly, one third of observed conversations qualified as “social
loafing.” Instructors were often unaware of the ideas students had that diverged from the
intended direction of the question, and “clicker” questions as formed assume a certain
level of knowledge. Although such questions are well intentioned to garner relevant
feedback for students, James and Willoughby (2011) suggest that the feedback provided
may be incomplete and even
misleading. Results also indicated that students can give correct responses to difficult
questions without possessing the conceptual understanding the instructor intended.
In addition, James and Willoughby (2011) found that students may have thought
about “clicker” questions in ways that researchers or instructors did not anticipate. In
order for the social context to operate effectively, student metacognitive knowledge is
necessary. Self-regulation improves with self-reflection, and self-reflection is a
metacognitive process (Pintrich, 1995). Examining student responses provided insight for
future research: because there is an indication that students can have unexpected
responses that instructors and researchers may not be able to anticipate, further
investigation into student metacognition is relevant (James & Willoughby, 2011). The
current effort is
interested in how metacognitive self-reflections initiated by “clickers” impact learning.
Literature Summary
Results from Mollborn and Hoekstra (2010) support that “clickers” significantly
change the social environment of the lecture, and both students and faculty reported that
students are more effectively engaged in lecture. Additionally, the current effort is
concerned with student perceptions, which were also employed in the study design of
Mollborn and Hoekstra (2010), because interviews can provide insight into student
perception.
results are important to the current effort, because of the indications that students’
metacognitive processes may be influenced by the “clickers.” “Clickers” provided
opportunities for more frequent self-evaluations when comparing answers given to the
correct answer and for peer comparisons that may result from viewing one’s own answers
as compared to those of the class.
“Clickers” are a pedagogical tool (Mollborn & Hoekstra, 2010) which can be
utilized in forming learner centered educational environments; learner centered classes
are linked to improving student metacognitions and, eventually, learning outcomes. This
is important to the current effort, because “clickers” may create opportunities to develop
student metacognition (Prather et al., 2006). Beatty and colleagues (2006) recommend
that pedagogy include a metacognitive goal when utilizing “clicker” questions. The use of
“clickers” alters the social context by increasing student interest and participation
(Mollborn & Hoekstra, 2010) which seems to occur in particular with solid instructional
design and well developed and placed “clicker” items. In the case where results suggest
differently, the problem may lie not in the technology device, “clickers,” but in
instructional design. The current study occurs in a context in which instructional design
of “clickers” items have been well thought out, refined, and well placed in each lecture.
The instructional design was developed in consideration of the impact that multimedia
may have on cognitive load (Clark & Feldon, 2005; Mayer & Moreno, 2003). Moreover,
“clicker” usage can provide the opportunity for students to engage in peer discussions
which have been shown to improve conceptual problem solving (Chen et al., 2010). Lantz
(2010) argues that “clickers” may be utilized best by aligning lectures and “clicker” use
with cognitive principles that foster memory and understanding of material. “Clickers”
are utilized according to these intentions in the educational setting for the current work.
These conditions prevent confounding of variables and allow for the construct of
metacognition to be examined without concern of whether the “clickers” are utilized
effectively.
“Clickers” engage students, provide anonymity, increase participation, frequently
are reported to improve attendance, and it has been suggested that “clickers” may
increase outcomes as well as the notion of the popularity of “clickers” with faculty and
students, but do these devices effect students cognitions? Specifically, for the present
proposed study, do “clickers” influence metacognitive knowledge and if they do, how do
they? These questions remain unanswered. In addition, the studies pertaining to the
effectiveness of “clickers” lack randomization, and comparison groups; hence,
correlations linking “clickers” as effective tools to increase outcomes are weak as best;
the present study compares “clicker” use to an alternate tool to gather student responses.
Technology in education, or for that matter any tool utilized in an educational
environment, by virtue of the context for use, should positively impact student cognitions
and outcomes to be of any real value, in addition to providing some instructional virtue
that is not provided by some less expensive means. While the use of technology is not
going to disappear, guidelines for the effective use of technological devices need
to be formed. In the current state of education, guidelines cannot be formed without
empirically validated research on which to build a case for best practices and
recommendations that are specific to subject areas.
Chapter Three: Research Methodology
The present study examined whether “clickers” influenced student metacognition.
As mentioned in Chapter 1 and Chapter 2, research has yet to explore the metacognitive
aspects of the electronic feedback devices commonly referred to as “clickers.” When
technological devices are employed in the structural design of higher education classes, it
is prudent to explore whether their use benefits student outcomes. Metacognitively aware
students and self-regulated learners tend to have increased academic outcomes (Bransford
et al., 2000, 2004; Mayer, 2008). The primary purpose of this study was to examine the
influence of “clickers” on metacognitive self-regulation. In doing so, the present work
undertook a necessary research-based effort to provide evidence to support and guide the
practice and design in utilizing “clickers” in large undergraduate lectures. In the first part
of this chapter, the research questions are restated and the intended design for this current
study is described. The remainder of this chapter describes how the sample was
determined, the population, the instrumentation, and the process of data collection.
Research Questions
As discussed in Chapter 2, studies of “clickers” suggest examining whether
student cognitions are influenced by this technology (Hoekstra, 2008), and they
recommend examining metacognition (James & Willoughby, 2011; Lantz, 2010). The
self-evaluative nature of “clicker” use and the natural tendency for peer comparisons may
increase student metacognitive self-regulation. As a result the following research
questions were proposed.
The primary research questions were quantitative:
1. Are there differences in student metacognitive self-regulation, motivated
learning strategies, and metacognitions in lecture based on whether the
students utilize “clickers” or paddles, and the extent of use?
2. Does use of “clickers” verses paddles predict performance outcomes?
3. Does extent of use of “clickers” predict performance outcomes (e.g., quizzes,
course grade)?
The secondary research questions were qualitative:
1. If “clickers” affect metacognition, how do they affect metacognition?
2. Is the experience of using “clickers” different for students from the
experience using paddles? If it is different, how is it different?
Research Design
The purpose of the current research is to examine whether “clickers” affect
metacognition, more specifically, student metacognitive self-regulation. Because there is
a gap in this area in research, the research questions were first focused on whether student
metacognitive self-regulation is influenced, then how student metacognitive self-
regulation is influenced. The independent variables in the current study were as follows:
1. The group using “clickers”
2. The group using paddles,
3. The third summer group using paddles first and then “clickers”,
4. The extent of use (e.g., 0%, 25%, 50%, 75%, 100% of the time), and
5. Participant demographics (e.g., gender, ethnicity, and primary language).
The dependent variables were general metacognition, metacognitive self-regulation
resulting from “clicker” use, metacognitive self-regulation resulting from paddle use, and
performance outcomes (e.g., quizzes and course grade). In the current study student
perception was a key determinant of metacognitive self-regulation, because
metacognition involves individual thoughts and characteristics, and only the individual
can effectively convey personal thoughts and thoughts about behaviors.
Population and Sample
Undergraduate students constituted the sample for this study. Both
quantitative and qualitative data were collected through self-report measures (e.g., age,
gender, ethnicity, class standing, major, athlete/non-athlete) and the Motivated Strategies
for Learning Questionnaire (MSLQ). The sample for the current study was drawn from
an existing entry-level general education course in educational psychology. Three
sections of the
course were involved in the present work, one summer section, and two fall sections. All
three classes were held in a large lecture hall which seats 100 students. In this course,
students utilized “clickers” or paddles to participate in lecture, and after the fifth lecture
all students continued with “clickers” for the remainder of the semester. Students were in
lecture for a total of 27.5 hours for the semester, once a week for a 15-week period (twice
per week for the Summer session), and lectures were 100 minutes in length.
Data collection took place in Summer and Fall 2011. “Clickers” and paddles were
utilized in this study. In the current effort quotation marks were utilized with the term
“clicker,” because this is a well-known nickname and there are many names for
electronic feedback devices. Quotation marks were not utilized with the term paddle,
because this term is specific to this study and does not have other names or prototypes;
most likely it is comparable to flashcards, another low-technology response system. The
Summer Group could not be separated into two groups, so the course began with utilizing
paddles for the first four lectures and switched to the use of “clickers” for the remaining
lectures. The sample size for the Summer Group was 33 participants. The audience
response system, or “clickers,” utilized for the current study was ResponseCard IR, a
product of Turning Technologies. The device is about the size of a credit card; it is
lightweight and compact, with 12 keys for the student to input information. The “clickers”
were provided for the Summer Group at the beginning of the Summer session. Each
student signed a contract permitting use of the “clicker” for the Summer session and
guaranteeing the return of the “clicker”. Responses to “clicker” items were displayed in
the form of histograms and pie charts; correct answers were indicated on PowerPoint
slides. When paddles were in use, students were given two paddles that were labeled and
color coded. Each paddle had two responses, one on the front and one on the back. The
possible responses were color coded. The first paddle had green paper on the front with a
large block letter “A,” and the other side had pink paper with a large block letter “B.”
The second paddle had blue paper on the front side with a large block letter “C,” and the
other side had yellow paper with a large block letter “D.” The questions/items in the
lectures used “A, B, C, D” as in a multiple choice test format. When participants chose an
answer to a question, each one decided whether to use the paddle marked “A,” “B,” “C,”
or “D,” and then held up the paddle selected. Correct answers were verbally confirmed by
the professor as well as visually displayed by the correct answer indicator appearing on
the PowerPoint slide.
In Fall 2011, there were two sections offered of the educational psychology
course. The students who elected to enroll in the educational psychology course were
required to purchase “clickers” to participate in lecture; this information was provided in
the course syllabus, and incorporated into the pedagogy, instructional design, and
instructional and course goals. The two sections allowed for a quasi-experimental design
with one treatment group and one comparison group. To prevent human bias, the
treatment group was determined randomly by flipping a coin. The two groups were
referred to as ‘Group A’ and ‘Group B’, where ‘Group A’ is the treatment group
(“clickers”) and ‘Group B’ is the comparison (paddles) group. The flip of the coin
assigned the course section that met for lectures on Monday and labs on Wednesday to
receive the designation as the “clicker” group, ‘Group A.’ The course section that met for
lectures on Wednesday and labs on Monday was designated as the paddle group, ‘Group
B.’ ‘Group A’ began and continued the semester using “clickers” while ‘Group B’ began
with paddles during lectures 1-4 and switched to “clickers” beginning lecture 5 and for
the remainder of the semester.
This sample and setting were appropriate for the current study. The course was a
lower division general education educational psychology course in motivation and
learning. As a regular practice in the instructional design of the class, the instructor
utilized “clickers” to elicit student responses. “Clicker” questions were designed
for the intended lessons and course goals. The instructor was able to gain insight into
student understanding of key course concepts, and “clicker” responses were utilized to
provide rapid feedback and to allow for peer discussions.
Instrumentation
In the current study, several variables were examined: a) the influence on
metacognitive processes of the use of “clickers” and paddles, b) student perception of
“clicker” usage compared to paddle usage and their influence on metacognitive self-
regulation, and c) the influence perceived by participants. Dinsmore and colleagues (2008)
argue against the persistent use of Likert scales, which were often utilized in the past
without other measures to corroborate results. Instrumentation was a key concern for the
present work, because metacognition is about one’s own thoughts, and as such, is very
difficult to measure. Qualitative questions were included to address this.
The first instrument used was the Motivated Strategies for Learning
Questionnaire (MSLQ), from which metacognitive self-regulation items were selected
(Pintrich et al., 1993). The second instrument consisted of questions addressing
metacognition, adapted from Mayer’s (2008) metacognitive items. The third instrument
consisted of questions used to determine the level of metacognitive self-regulation in
lectures resulting from response device use. This survey was developed to provide
questions specific to a large lecture context in which “clickers” were used, and its items
parallel the qualitative questions. In this manner, qualitative and quantitative questions
could be compared.
The study had a pre-post-test design. Students were offered the opportunity to
participate in the study and to receive extra credit points for participation. An alternate
extra credit assignment was provided for students who declined to participate in the study
and wanted the opportunity to gain extra credit. Students who chose to participate in the
study completed a pencil and paper demographic survey; we administered the pretest at
this time.
Quantitative data were collected at two points: during the first week and after the
fifth lecture. The Summer Group had a third point of data collection, after “clickers” were
in use, that occurred after the 10th lecture, because this group completed surveys for use
of both paddles and “clickers.” In order to gather quantitative data, three different surveys
with a
Likert scale were utilized.
Qualitative data were collected at two points. The first collection consisted of
using a questionnaire with open-ended questions administered during week five. Brief
individual interviews were conducted during the last weeks of the semester. Qualitative
data gathered via the interviews were utilized to corroborate and supplement quantitative
data.
We gathered information to analyze patterns that emerged, giving insight into how
the participants experienced “clickers” or paddles as a part of the lecture and how the
devices influenced metacognitive processes. This included comparisons between the
devices that students experienced. The comparisons that participants made included
comparing answers to histograms that display the polling results and comparing results to
peers. When histograms were displayed, students had the opportunity to view their
answers as compared to the rest of the class, then the correct answer indicator was
displayed and students were able to compare their answers to the correct answer in addition
to peer responses. “Clickers” allowed opportunities for social interaction that may not
have been present in the same way when paddles were utilized because of the visual
nature of the display. When students had increased opportunities to view their answers in
comparison to their peers and to the correct answer, this may have caused increased
self-reflection and self-evaluation about individual characteristics in comparison to
peers. When students raise hands in response to instructor questions, there is an issue of
conformity, because students look about and may change answers based on how many
of their peers raise hands in response to a question. This may result in
instructors thinking that students understand conceptual knowledge better than the
students actually do. The anonymity provided by “clickers” may decrease
conformity issues and so improve the ability of instructors to gauge student
understanding. Without being able to see peers’ answers students must rely on their
individual level of preparation for the lecture. Students now enter a situation in which
their efforts are measured against others, quickly and with regularity. This is important
to the current study because self-reflection and comparison are functions of
metacognition, and these were secondary foci that were primarily examined through
qualitative questions and interviews.
Biographical and demographic data. All of the students who elected to
participate in the current study completed a questionnaire asking for biographic
information. These included name, age, gender, major (e.g., declared or undeclared),
ethnicity, and athletic status (e.g., athlete or non-athlete). The participants were also
asked to indicate whether English was their first language, their second language, or
whether they were an English language learner.
Summer biographical and demographic data. The Summer group had a total of
33 participants, of whom 32 were 18 years of age and one was 19. Fifteen participants
were female; 24 students had declared their major, eight were undeclared, and data from
one participant were unavailable; eight participants were student athletes. The ethnicity
of the participants included 18 Latin American participants, nine African American, two
who self-identified as Bi-racial, one White, one Asian, one who self-identified as
Multi-racial, and one who selected the “Other” category. The participants included 20 who were
native English speakers, 12 who identified English as a second language, and one English
language learner.
Fall ‘Group A’ biographical and demographic data. Fall ‘Group A,’ the
experimental group (“clickers”), had a total of 87 participants of which 65 were 18 years
of age, 19 were 19, two were 20, and one was 87. Forty-three participants were female,
61 participants had a declared major, and 48 participants were student athletes. The
ethnicity of the participants included 50 white participants, 11 African American, nine
who self-identified as Bi-racial, six Asian, four Latin American participants, four who
self-identified as Multi-racial, and two selected the “Other” category. The participants
included 78 who are native English speaker, eight who identified as English as a second
language, and one English language learner.
Fall ‘Group B’ biographical and demographic data. Fall ‘Group B,’ the
comparison group (paddles), had a total of 78 participants, of which 62 were 18 years of
age, 11 were 19, three were 20, one was 23, and one was 26. Thirty-four participants
were female, 58 participants had a declared major, and 28 participants were student
athletes. The ethnicity of the participants included 26 White participants, 20 African
American, 11 Asian, seven who self-identified as Bi-racial, six Latin American, five
who self-identified as Multi-racial, and three who selected the “Other” category. The
participants included 68 who were native English speakers and 10 who identified English
as a second language.
Motivated Strategies for Learning Questionnaire (MSLQ). The MSLQ was
developed by Pintrich, Smith, Garcia, and McKeachie (1993); Artino (2005) states that
the MSLQ was developed to examine the motivation and self-regulation of college students
in a specific course, because students’ motivation can change from course to course.
Included in the items were sections that evaluate student metacognition. An underlying
assumption of this measure is that motivation and self-regulation are not static traits,
and in the construct of self-regulation, metacognition is one of the three specifically
defined aspects of self-regulated learning (Artino, 2005). The measure was subject to 10
years of development and has been field tested in many correlational studies (Artino,
2005). The items addressing metacognitive self-regulation in this instrument are
included in the learning strategies subsection.
The Motivated Strategies for Learning Questionnaire (MSLQ), seen in Appendix
B, was the measure used to gather quantitative data for the pre- and post-test. The
MSLQ is a measure that gives results specific to a course (Artino, 2005; Murphy, 1998).
The MSLQ uses a seven-point Likert scale; after each item, respondents indicate how true
the statement is of them, with responses ranging from very much like the respondent to
not at all like the respondent. The full MSLQ includes 82 items, but there are 15 items
in the section related to cognition and learning strategies that are most relevant to
the current effort. The MSLQ form utilized for the present work appears in Appendix B.
The Cronbach’s alphas for the items most pertinent to the present study, Metacognitive
Self-Regulation, are strong: .79 for metacognitive and cognitive strategies in the area
of metacognitive self-regulation, and .80 for cognitive and metacognitive strategies in
the area of critical thinking (Pintrich, Smith, Garcia, & McKeachie, 1993, pp. 22-24).
In the current study, the alpha for the MSLQ for the Summer Group was .86; a pre-test
was given to this group, but the data were misplaced. The pre-test MSLQ alpha for
‘Group A’ in Fall was .73, and for ‘Group B’ in Fall, .74. The alpha for the post-test
MSLQ for the Summer Group was .82; for ‘Group A,’ the experimental group (“clickers”),
in Fall, .83; and for ‘Group B,’ the comparison group (paddles), in Fall, .89.
“Clickers” and student metacognition. The second quantitative measure, seen
in Appendix C, Electronic Feedback Devices and Metacognitive Self-Regulation, uses a
five-point Likert scale where one is “strongly disagree,” two is “disagree,” three is
“neutral,” four is “agree,” and five is “strongly agree.” The author of the current
study adapted this metacognitive instrument from Mayer’s (2008) examples of questions
designed to measure metacognitive reading strategy awareness. In Mayer (2008),
metacognitive items are presented in three categories of strategies referred to as
global, problem solving, and support reading strategies. The questions were adapted to
reflect “clickers” and metacognition in a lecture context. The Summer Group took this
measure for both the “clickers” and the paddles; the alpha for the Summer “clicker”
measure was .37, and the alpha for the paddle questionnaire was .49. The alpha for
‘Group A,’ the experimental group (“clickers”), was .96, and for ‘Group B,’ the
comparison group (paddles), was .92.
Metacognitive self-regulation in lectures. The third quantitative measure, seen
in Appendix D, Metacognition and Electronic Feedback Devices in lectures in educational
settings, begins by asking whether the participant utilized the “clicker” and to what
extent. The survey items use a five-point Likert scale where one is “strongly
disagree,” two is “disagree,” three is “neutral,” four is “agree,” and five is “strongly
agree.” The questions for this survey were developed in order to have items specific to
a large lecture context with “clickers,” and to have qualitative items that correspond
to quantitative items. This instrument was designed because there is a lack of
instrumentation specific to the use of the technology and “clickers” with which the
present study is concerned. An essential component to explore in the present work is
the extent to which participants used the device and their responses to its use in the
lecture setting. This measure was given to the Summer Group for both “clickers” and
paddles. The alpha for the Summer “clicker” questionnaire was .75, and the alpha for
the Summer paddle questionnaire was .74. The alpha for ‘Group A,’ the experimental
group (“clickers”), was .75, and for ‘Group B,’ the comparison group (paddles), was .78.
Qualitative instrumentation. Qualitative questions were formed in consideration
of the metacognitive processes and “clicker” use in a lecture context discussed
immediately preceding this section, and after examination of the questions from the
first survey for the current work, based on the Motivated Strategies for Learning
Questionnaire (MSLQ), and the second survey for the current effort, based on Mayer
(2008). These were utilized to form qualitative items, to the end that quantitative
information might be connected more readily to corresponding qualitative data.
Qualitative data were collected using a questionnaire with open-ended questions and
individual interviews; the protocol is included in Appendix D, following the
quantitative piece “Metacognitive Self-Regulation in Lectures.” Information was gathered
to analyze for emergent patterns that give insight into how the participants experienced
“clickers” or paddles as a part of the lecture and how the devices influenced
metacognitive processes, including how comparisons occurred. The comparisons that
participants made included comparing answers to the histograms of polling results.
“Clickers” allowed for social interaction that may not have been present in the same way
when paddles were utilized, because paddle results were generally indicated by the
instructor and students did not have the same opportunity to view their answers in
comparison to their peers. This information, together with the quantitative results,
gave insight into the metacognitive processes that occurred. Moreover, the concern was
to discern whether any influence was unique to “clickers” or whether paddles produced
similar results.
Procedure and data collection. The researcher obtained IRB approval to conduct
the present study from the required institution. The students heard a presentation about
the study, received information on the study, and were invited to participate; during
Summer 2011, 33 of 39 students elected to participate, and during Fall 2011, 165 of 182
students in two different sections elected to participate. The researcher explained the
purpose of the study (i.e., to gain insight into whether there is a relationship between
“clicker” use and students’ metacognitive self-regulation). The researcher informed the
students that participation was both voluntary and confidential and would not impact
course grading. After the students were informed of the study’s purpose and of
confidentiality, consent was obtained, and data collection began for students who chose
to participate.
Data Analysis. The data were collected and entered into the Statistical Package
for the Social Sciences (SPSS) 18.0. The researcher visited each lecture and observed
the groups that began the semester using “clickers” (the Summer section and ‘Group A’)
and the group that began the semester raising paddles in response to questions and
surveys. The measure of metacognition and self-regulation used as a pre- and post-test
was the Motivated Strategies for Learning Questionnaire (MSLQ). The first research
question, “Are there differences in student metacognitive self-regulation, motivated
learning strategies, and metacognitions related to ‘clickers’ in lecture based on
whether the students utilize ‘clickers’ or paddles, and the extent of use?,” was
analyzed using three different t-tests. The second research question, “Does use of
‘clickers’ versus paddles predict performance outcomes?,” was analyzed using an ANOVA.
The third question, “Does extent of use predict performance outcomes (e.g., quizzes,
course grade)?,” was analyzed using simple regression. The secondary research questions
are qualitative; the qualitative data were collected by asking students to answer
open-ended questions as a part of the post-test measure, and the data were examined for
themes that support and expound upon the quantitative data. Quantitative data were
examined for equality of variances, skewness, and kurtosis, because the statistical
analyses employed in the study assume distribution along a normal curve. Reverse-coded
items were included in the surveys to detect careless or invariant responding.
Invariant responses were detected in seven surveys, and these surveys were eliminated
from the analyzed data set.
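For illustration, the following minimal sketch shows how this screening step could be
reproduced outside SPSS in Python; the file name, item names, and reverse-coded item
numbers are hypothetical, not the study’s actual coding scheme.

import pandas as pd
from scipy import stats

# Hypothetical survey file with 15 Likert items per respondent.
df = pd.read_csv("survey_responses.csv")
items = [f"item_{i}" for i in range(1, 16)]

# Flag invariant responders (the same answer on every item, which the
# reverse-coded items are designed to expose) and drop those surveys.
invariant = df[items].nunique(axis=1) == 1
print("invariant surveys removed:", int(invariant.sum()))
df = df.loc[~invariant]

# Reverse-code negatively worded items on the 7-point scale
# (the item numbers here are illustrative only).
for col in ["item_3", "item_9"]:
    df[col] = 8 - df[col]

# Skewness and kurtosis of the scale score; values near 0 are
# consistent with the normal-curve assumption of t-tests and ANOVA.
scale = df[items].mean(axis=1)
print("skewness:", stats.skew(scale))
print("kurtosis:", stats.kurtosis(scale))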
The current study’s data are reported, discussed, and analyzed in chapter 4. In
chapter 5 the results of the present study are discussed, and suggestions for future
research are offered. Based on the findings of the current study, chapter 5 also
includes theoretical and practical implications for the use of feedback devices in large
lecture college settings.
Table 1
“Clickers’” influence on social context, peer comparisons, and metacognition
(metacognitive self-regulation)

Independent variables: Groups (nominal): A. “clickers”; B. paddles. Extent of use
(interval). Demographics: gender, ethnicity, ELL/ESL/English as first language
(nominal; ANOVA).

Dependent variables: General metacognition; “clicker” metacognitive self-regulation
(MCSR); performance outcomes (quizzes, course grade).

Quantitative research questions: 1. Are there differences in student metacognitive
self-regulation, motivated learning strategies, and metacognitions in lecture based on
whether the students utilize “clickers” or paddles, and the extent of use? 2. Does use
of “clickers” versus paddles predict performance outcomes? 3. Does extent of use
predict performance outcomes (e.g., quizzes, course grade)?

Qualitative research questions: 1. If “clickers” affect metacognition, how do they?
2. Is, and how is, the experience different for students using “clickers” versus
students using paddles?

Statistics/tests: (Quantitative) 1. ANOVAs; 2. ANOVA or regression; 3. Simple
regression. (Qualitative) Open-ended question survey and interviews.
Chapter 4
Results
The current study examined whether and how metacognition was influenced by
electronic feedback devices. Quantitative and qualitative data were gathered to answer
the research questions. This chapter presents the statistical outcomes and the qualitative
results for the following research questions: 1) Are there differences in student
metacognitive self-regulation, motivated learning strategies, and metacognitions in
lectures based on whether the students utilize “clickers” or paddles, and the extent of use?
2) Does use of “clickers” versus paddles predict performance outcomes? 3) Does extent
of use of “clickers” predict performance outcomes (e.g., quizzes 1-4 for each group)? 4)
If “clickers” affect metacognition, how do they? and 5) Is, and how is, the experience
different for students using “clickers” versus students using paddles?
The present study expands on current research regarding the contributions made
to the educational environment by utilizing “clickers.” This chapter presents the
descriptive statistics, followed by the correlation tables, and then the quantitative and
qualitative results. In order to measure metacognitions, self-report surveys were
administered. The quantitative results section begins with each research question, which
is followed by the results for the Summer Group and then the results for the Fall
Groups, after which a summary of results is presented. Next the qualitative results are
presented and examined according to the qualitative research questions; this section is
followed by a summary of the results.
Measures and Descriptive Statistics
The measures and descriptive statistics for the current study are reported in the
following sections. Scale psychometrics are discussed first, followed by Table 2, which
presents a summary of the measures. Next, the internal consistency of the measures is
discussed, followed by Table 3, which presents the alphas for the measures and the
groups. Third, demographic characteristics are discussed, followed by Table 4, which
presents a summary of the characteristics.
Scale Psychometrics
The measures administered in the current study are presented in Table 2 and in the
following summary. The study included multiple measures and multiple points of data
collection. Students from a general education psychology course offered during Summer
2011 and Fall 2011 were presented with the opportunity to participate in the study. At
this time the pre-test was administered to students who elected to participate in the study.
The pre-test and post-test included two sections from the MSLQ, a total of 15 items,
that measured metacognitive processes using a Likert scale. The (Electronic) Feedback
Devices and Metacognitive Self-Regulation survey included 15 items using a Likert scale.
The Metacognition and (Electronic) Feedback Devices survey included seven items using a
Likert scale; following the quantitative items for this survey was the pencil and paper
qualitative survey, which included four questions, and these same questions were
utilized as a guide for the interviews.
Internal consistency. Cronbach’s alpha was employed to determine the reliability
(internal consistency) of the measures utilized in this study. The statistical analysis
demonstrated that the majority of the Cronbach’s alphas were strong to moderate. The
internal consistency results are reflected in Table 3.
Table 2
Overview of Measures

Measure                                   Measure Used                       # of Items
Self-Report Surveys
  Pre-Post-Test – MSLQ                    Motivated Strategies for           15
                                          Learning Questionnaire
                                          (Pintrich et al., 1993)
  (Electronic) Feedback Devices and       Adapted from Mayer, 2008           15
  Metacognitive Self-Regulation
  Metacognition and (Electronic)          Developed to compare with the       7
  Feedback Devices (includes              qualitative survey and interview
  qualitative survey)
Qualitative Interview
  Metacognitive Self-Regulation and       Questions based on                  4
  Metacognition in Lecture                preceding measures
One concern surfaced regarding the scales’ internal consistency. For the Summer Group,
the alphas for the (Electronic) Feedback Devices and Metacognitive Self-Regulation
measure were low to moderate for both the paddles (α = .49) and the “clickers”
(α = .37), but for the Fall Groups the same measure had a very strong alpha for both
groups: the experimental group (“clickers”) was α = .96 and the comparison group
(paddles) was α = .92.
Table 3
Cronbach’s Alphas for Metacognition and Feedback Device Scales

Construct                                                    # of items in scale    α
MSLQ (Summer Group)                                          15                    .82
Pre-MSLQ Group ‘A’                                           15                    .73
Pre-MSLQ Group ‘B’                                           15                    .74
Post-MSLQ Group ‘A’                                          15                    .83
Post-MSLQ Group ‘B’                                          15                    .89
(Electronic) Feedback Devices and Metacognitive
Self-Regulation (Summer – paddles)                           15                    .49
(Electronic) Feedback Devices and Metacognitive
Self-Regulation (Summer – “clickers”)                        15                    .37
(Electronic) Feedback Devices and Metacognitive
Self-Regulation (Fall PQ – paddles)                          15                    .92
(Electronic) Feedback Devices and Metacognitive
Self-Regulation (Fall CQ – “clickers”)                       15                    .96
Metacognition and (Electronic) Feedback Devices
(Summer – paddles)                                            7                    .74
Metacognition and (Electronic) Feedback Devices
(Summer – “clickers”)                                         7                    .75
Metacognition and (Electronic) Feedback Devices
(Fall – paddles)                                              7                    .75
Metacognition and (Electronic) Feedback Devices
(Fall – “clickers”)                                           7                    .78
Normally such strong alphas suggest redundancy of items. While the cause of this
discrepancy is not certain, there may be confounding factors due to group differences.
Maturation may have influenced results, and differing group experiences or events
between the Summer and Fall terms may have influenced participant perspectives.
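Although all alphas in the current study were computed in SPSS, the statistic itself is
straightforward; the following is a minimal sketch of the computation, assuming a
hypothetical pandas DataFrame with one column per scale item.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

Very high alphas, such as the .96 noted above, fall out of this formula when items
covary almost perfectly, which is why they can signal item redundancy.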
Demographic characteristics. Participants in the study completed a demographic
survey immediately prior to the administration of the pre-test. The survey asked for
participants’ age, gender, and ethnicity. The survey also asked for major status (e.g.,
declared or undeclared), athletic status (e.g., athlete or non-athlete), and whether the
participant speaks English as the primary or second language, or is a language learner.
Demographic information is presented in Table 4.
The Summer Group included 33 participants, Fall ‘Group A,’ the experimental
group (“clickers”), included 87, and Fall ‘Group B,’ the comparison group (paddles)
included 78. In each group, less than half of the participants were female. In the Summer
Group, 45.5% of participants were female, in Fall ‘Group A’ 48.3% were female, and in
Fall ‘Group B’ 43.6% were female. In the Summer Group the participants were
predominantly Latino (54.5%), followed by 27.3% African American, 6.0% Biracial,
3.0% Multiracial, 3.0% Asian, 3.0% White, and 3.0% Other. In the Fall Group ‘A’
participants were 57.5% White, 12.6% African American, 10.3% Biracial, 6.9% Asian,
4.6% Latino, 4.6% Multiracial, and 2.3% Other. In Fall Group ‘B’ participants were
33.3% White, 25.6% African American, 14.1% Asian, 9.0% Biracial, 7.7% Latino, 6.4%
Table 4
Demographic Characteristics

                           Summer Group    Fall Group ‘A’      Fall Group ‘B’
                                           “Clickers”          Paddles
Participants               33              87                  78
Age mean                   18.03           18.31               18.37
  18                       32              65                  62
  19                       1               19                  11
  20                       —               2                   3
  22+                      —               1                   2
Gender M/F                 18/15           44/42               42/34
Ethnicity
  African American         9               11                  20
  Asian                    1               6                   11
  Latino                   18              4                   6
  White                    1               50                  26
  Biracial                 2               9                   7
  Multiracial              1               4                   5
  Other                    1               2                   3
Major status
  Declared                 24              61                  58
  Undeclared               8               26                  20
Athletic status
  Athlete                  9               39                  28
  Non-athlete              24              48                  50
English language
  English                  20              78                  68
  English 2nd language     12              8                   10
  English lang. learner    1               1                   —
Multiracial, and 3.8% Other. The percentage of participants with declared majors in
Summer was 72.7%, in Fall ‘Group A’ 70.1%, and in Fall ‘Group B’ 74.4%. In each group,
the majority of participants were non-athletes: 72.7% in the Summer Group, 55.2% in
Fall ‘Group A,’ and 64.1% in Fall ‘Group B.’ In the Summer Group, 60.6% of participants
identified themselves as native English speakers, 36.4% as speakers of English as a
Second Language (ESL), and 3.0% as English Language Learners (ELL). In Fall ‘Group A’
(experimental, “clickers”), 89.7% of participants indicated that their primary language
was English, 9.2% identified as ESL, and 1.1% as ELL. In Fall ‘Group B’ (comparison,
paddles), 87.2% of participants indicated that their primary language was English, and
12.8% identified as ESL.
Study correlations. There are separate correlation tables for the Summer Group
and for the Fall experimental and comparison groups. The reason for this is that the
Summer Group was not split into two separate groups and experienced both response
systems; because of this, the Summer Group took additional surveys. The Summer Group
correlations are discussed and presented in Table 5, followed by the Fall experimental
(“clickers”) and comparison (paddles) groups’ correlation discussion and presentation in
Tables 6 and 7. The correlations are described according to the statistical strength of
relationship, meaning that a correlation of 0 to .2 is weak, .2 to .4 moderate, .4 to .6
moderate to strong, .6 to .8 strong, and .8 to 1.0 very strong. A correlation of .8
indicates a strong to very strong relationship between measures, meaning that the
measures shared a very significant amount of variance and that multicollinearity may
have been present (Salkind, 2008). Multicollinearity signifies an observable statistical
occurrence in which two or more predictor variables are highly correlated, such that
small changes in the data may result in erratic changes in the estimated coefficients.
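For reference, a Pearson correlation of the kind reported below can be sketched as
follows in Python; the DataFrame and column names (mslq, pq, cq, mcp, mcc) are
hypothetical stand-ins for the study’s scale scores.

from scipy import stats

# r and its two-tailed p-value for two hypothetical scale-score columns.
r, p = stats.pearsonr(df["mslq"], df["cq"])
print(f"r = {r:.3f}, p = {p:.3f}")

# Absolute correlations above about .8 flag variable pairs for which
# multicollinearity would be a concern in regression models.
corr = df[["mslq", "pq", "cq", "mcp", "mcc"]].corr()
print(corr[corr.abs() > 0.8])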
Summer group correlations. The MSLQ used a 7-point Likert scale while the
remaining quantitative measures used a 5-point Likert scale. The mean for the MSLQ was
4.51 (SD = .82). The Electronic Feedback Devices and Metacognitive Self-Regulation mean
for paddles was 3.50 (SD = .77), and the mean for “clickers” was 3.87 (SD = .73). The
Metacognition and Electronic Feedback Devices mean for paddles (MCP, appendix D) was
3.59 (SD = .59), and the mean for “clickers” (MCC, appendix D) was 4.10 (SD = .55). The
performance outcome mean for paddles was 8.3 (SD = 1.13), and the performance outcome
for “clickers” had a mean of 9.3 (SD = .80). Summer Group correlation information is
presented in Table 5.
The Summer Group correlations indicate that there was some common ground between
measures. The MSLQ was moderately, positively correlated with the Electronic Feedback
Devices and Metacognitive Self-Regulation (CQ, appendix C) for “clickers,” which is
strong enough to indicate that the variables share some level of commonality in regard
to metacognitive processes. Interestingly, the same measure, Electronic Feedback Devices
and Metacognitive Self-Regulation (PQ, appendix C), when administered for paddles, was
not significantly correlated with the MSLQ (r = .333, p = .058). The correlation between
the MSLQ and the Metacognition and Electronic Feedback Devices (MCP, appendix D) for
paddles demonstrated a moderate relationship, meaning the measures had some level of
commonality with metacognitive processes; to a lesser degree, the same is true of the
performance outcome for “clickers.” A moderate positive relationship was found between
the Electronic Feedback Devices and Metacognitive Self-Regulation (PQ, appendix C) for
paddles and the Metacognition and Electronic Feedback Devices for paddles (MCP, appendix
D). This survey (PQ) was also positively correlated with the Metacognition and
Electronic Feedback Devices for “clickers” (r = .517, p < .01); essentially, the common
variable measured by these surveys, metacognitive processes, had a strong relationship.
The Electronic Feedback Devices and Metacognitive Self-Regulation for “clickers” was
positively correlated with the Metacognition and Electronic Feedback Devices for paddles
(r = .388, p < .05), meaning the instruments had a weak to moderate relationship in the
common variable measured. Both measured metacognitive processes but with different
response devices, so the common variable was more than likely the metacognitive
processes, and the variable that contributed to lesser strength was the difference in
response device. The Electronic Feedback Devices and Metacognitive Self-Regulation (CQ,
appendix C) for “clickers” was positively correlated with the Metacognition and
Electronic Feedback Devices (MCC, appendix D) for “clickers” (r = .456, p < .01),
meaning that the surveys showed more strength when the same response device was
measured. There was a positive relationship between these measures, and the way
participants responded to questions on one survey was indicative of how they were likely
to respond on the other.
Table 5
Summer Group: Means, SD, and Pearson Product Correlations for Measured Variables

Variables                                M     SD     1    2      3       4       5       6       7
1. MSLQ                                 4.51   .82    --   .333   .587**  .537**  .482*   .425*   .260
2. Electronic Feedback Devices and
   Metacognitive Self-Regulation
   (paddles, appendix C)                3.50   .77         --     .333    .418*   .517**  .260    .226
3. Electronic Feedback Devices and
   Metacognitive Self-Regulation
   (“clickers”, appendix C)             3.87   .73                --      .388*   .456**  .363*   .289
4. Metacognition and Electronic
   Feedback Devices (MCP – paddles,
   appendix D)                          3.59   .59                        --      .449**  .490**  .401*
5. Metacognition and Electronic
   Feedback Devices (MCC – “clickers”,
   appendix D)                          4.10   .55                                --      .325    .118
6. Performance Outcomes:
   “Clicker” Quizzes 6-10               9.3    .80                                        --      .511**
7. Performance Outcomes:
   Paddle Quizzes 1-5                   8.3   1.13                                                --

* p < .05; ** p < .01; *** p < .001
The Electronic Feedback Devices and Metacognitive Self-Regulation (CQ, appendix
C) for “clickers” was positively correlated with the performance outcome for “clickers”
(r = .363, p < .05), meaning this measure of “clickers” and metacognition had a
significant relationship with the performance outcomes and that the association is
unlikely to be due to random factors. The Metacognition and Electronic Feedback Devices
(MCP, appendix D) for paddles and the Metacognition and Electronic Feedback Devices
(MCC, appendix D) for “clickers” were positively correlated (r = .449, p < .01), so a
common variable was measured with moderate strength of relationship; though the response
devices were different, the common variable, metacognitions in lecture, was related. The
same level of relationship existed between the Metacognition and Electronic Feedback
Devices (MCC/MCP, appendix D) and both performance outcomes, “clickers” (r = .490,
p < .01) and paddles (r = .401, p < .05). Performance outcomes for “clickers” and
performance outcomes for paddles were positively correlated (r = .511, p < .01), a
moderate relationship approaching strong; participants who performed well on one set of
quizzes tended to perform well on the other.
Fall correlations. The Fall Groups’ correlations are presented in Tables 6 and 7
and in the following summary. The pre-test (MSLQ) had a mean of 4.8 (SD = .682). The
post-test (MSLQ) had a mean of 4.58 (SD = .82). The pre- and post-test used a 7-point
Likert scale while the remaining quantitative measures used a 5-point Likert scale. The
Electronic Feedback Devices and Metacognitive Self-Regulation had a mean of 3.26
(SD = .69). The Metacognition and Electronic Feedback Devices (MCP/MCC, appendix D) had
a mean of 3.26 (SD = .60). The performance outcome had a mean of 7.89 (SD = 1.25). The
pre-test and the Electronic Feedback Devices and Metacognitive Self-Regulation were
positively correlated (r = .191, p < .05); that is, metacognitive motivated learning
strategies and metacognitions in lecture were positively related, meaning that
participants reported being influenced by feedback devices in lecture. The pre-MSLQ and
the Metacognition and Electronic Feedback Devices (MCP/MCC, appendix D) showed the same
level of significance (r = .176, p < .05). The Electronic Feedback Devices and
Metacognitive Self-Regulation (CQ, appendix C) and the Metacognition and Electronic
Feedback Devices (MCC, appendix D) were positively correlated (r = .610, p < .01). This
means that a positive relationship existed between the variables measured, so that
participants who reported higher metacognitive self-regulation when using “clickers”
also reported higher metacognitions in lecture when using “clickers.”

For the Fall experimental group and comparison group, individual correlation tables
were created to examine whether there were differences between the groups’ measurements.
The means and standard deviations are reported in Tables 6 and 7. For the experimental
group (“clickers”), the scale that measured metacognitions occurring in lecture as a
result of the response device (CQ, appendix C) had a positive correlation with the
survey that measured how much influence participants attributed to the response device
(MCC, appendix D) (r = .867, p < .001). This means that these measures had a very strong
positive relationship with each other and measured the metacognitive processes with a
high degree of specificity. The same measures for the comparison (paddles) group had a
strong, but more moderate, relationship for the common variable, metacognition
(r = .611, p < .01). The pre-MSLQ had an almost moderate relationship with the post-MSLQ
(r = .440, p < .05) for the experimental group, whereas for the comparison group the
relationship between the pre-MSLQ and the post-MSLQ was stronger (r = .775, p < .001).

Table 6
Fall ‘Group A,’ Experimental Group (“clickers”): Means, SD, and Pearson Product
Correlations for Measured Variables
Variables                               M      SD     1     2       3         4        5
1. Pre-test (MSLQ)                     4.63   .09     --   .440*   -.087     .176     .057
2. Post-test (MSLQ)                    4.41   .11           --      .256     .303     .204
3. Electronic Feedback Devices and
   Metacognitive Self-Regulation
   (CQ, appendix C)                    2.94   .09                   --       .867***  -.206
4. Metacognition and Electronic
   Feedback Devices (MCC, appendix D)  3.39   .11                            --       -.189
5. Performance Outcome:
   Quizzes 1-4 mean                    7.81   .156                                    --

* p < .05; ** p < .01; *** p < .001
These measures demonstrated strength of relationship in measuring the metacognitions
involved in motivated learning strategies in the course lecture context. However,
metacognitions that occurred in lecture while using “clickers” (CQ) and the influence of
“clickers” on metacognitive processes in lecture (MCC) were not significantly correlated
with performance outcomes (r = -.181, p = .124; r = -.153, p = .197); no relationship
was found between these measures and performance outcomes.

Table 7
Fall ‘Group B,’ Comparison Group (paddles): Means, SD, and Pearson Product Correlations
for Measured Variables
Variables                               M      SD     1     2        3        4       5
1. Pre-test (MSLQ)                     4.80   .08     --   .775***  .191     -.010    .205
2. Post-test (MSLQ)                    4.58   .10           --      -.109    .055     .227
3. Electronic Feedback Devices and
   Metacognitive Self-Regulation
   (PQ, appendix C)                    3.26   .69                   --       .611**   .036
4. Metacognition and Electronic
   Feedback Devices (MCP, appendix D)  3.62   .07                            --       .115
5. Performance Outcome:
   Quizzes 1-4 mean                    7.53   .17                                     --

* p < .05; ** p < .01; *** p < .001
The relationship between these measures and performance outcomes was likewise
nonsignificant for the comparison group (r = .036, p = .795; r = .115, p = .344), so
that no significant commonality was found between the variables (metacognitive
processes) measured on these surveys and performance outcomes.
Correlation Summary
During the Summer portion of the study, the MSLQ had a moderately strong
relationship to all of the measures, with two exceptions. The MSLQ and performance
outcomes were correlated when “clickers” were utilized but were not correlated with the
paddle performance outcomes. During the Fall, the pre-MSLQ demonstrated a slight
correlation with the Electronic Feedback Devices and Metacognitive Self-Regulation
(PQ/CQ, appendix C) and with the Metacognition and Electronic Feedback Devices
(MCP/MCC, appendix D). The post-MSLQ did not demonstrate significance with these same
measures when the correlations were performed for the Fall sections together. When the
comparison and experimental group correlations were performed separately for each group,
the pre-MSLQ demonstrated a very strong, significant relationship with the post-MSLQ for
the comparison group (paddles) and a strong, significant relationship for the
experimental group (“clickers”). None of the measures demonstrated strength of
correlation with the performance outcomes in the Fall. The correlations between the
Electronic Feedback Devices and Metacognitive Self-Regulation (PQ/CQ, appendix C) and
the Metacognition and Electronic Feedback Devices (MCP/MCC, appendix D) varied: these
measures had a consistently moderate to strong significant relationship when the Fall
data were examined collectively, and much stronger relationships when the experimental
and comparison group results were examined separately.
Quantitative Results
Results for the quantitative research questions. Quantitative results for the
research questions are reported in the following sections. Each section begins with the
respective research question, followed by the statistical analysis utilized and the
results of the tests performed. The results of the Summer Group are reported first,
followed by the results of the Fall experimental (“clickers”) and comparison (paddles)
groups, and a summary follows the results for each research question. Last, results of
statistical tests of differences performed for demographics are reported in summary form
after research questions 1-3.
Results for Research Question 1
Research Question 1 asked: Are there differences in student metacognitive
self-regulation, motivated learning strategies, and metacognitions in lecture based on
whether the students utilize “clickers” or paddles, and the extent of use? In the
following sections the results for the Summer Group, Fall ‘Group A’ (experimental
group), and Fall ‘Group B’ (comparison group) are presented. The Summer results are
presented separately from the Fall Groups’ results because additional measures were
administered to the Summer Group. A pre-test for the Summer Group was not available due
to human error, and this group did not have a control or comparison group; the Summer
Group experienced both “clickers” and paddles and the accompanying measures for each
response system, whereas the Fall experimental (“clickers”) and comparison (paddles)
groups had a pre- and post-test and a quasi-experimental design, with ‘Group A’
completing only measures regarding “clicker” usage and ‘Group B’ completing only
measures related to paddle usage. There are three points of focus for the groups in
each section: the first is Metacognitive Self-Regulation, the second is Motivated
Strategies for Learning, and the third is Metacognitions in Lecture.
Summer Group. To answer this question for the Summer Group, paired sample
ANOVAs were utilized. One part of research question 1 was unanswerable and is discussed
in the section below titled “Motivated strategies for learning.” Results for research
question 1 regarding metacognitive self-regulation and metacognitions in lecture are
discussed in the following sections.
Metacognitive self-regulation. For this question, an ANOVA was performed with
the Metacognitive Self-Regulation in Lecture Survey (PQ/CQ, appendix C) as the dependent
variable and response device and extent of use as the independent variables.
Significance was not found for “clickers” (F = .662, df = 2, p = .394) or for paddles
(F = .198, df = 3, p = .897). The results indicate that there were no differences in
participant metacognitions (e.g., knowing one’s level of preparation for lecture,
measuring one’s level of understanding, seeing how the lecture fits with the text, and
refocusing) or metacognitive self-regulatory behaviors (e.g., guiding note-taking,
knowing which key concepts to highlight, and knowing what questions to ask) during
lecture based on whether the participants used “clickers” or paddles.
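A minimal sketch of such a one-way comparison is shown below, assuming a hypothetical
long-format DataFrame with a device column and a scale-score column (the study itself
ran these analyses in SPSS):

from scipy import stats

clicker_scores = df.loc[df["device"] == "clicker", "mcsr_score"]
paddle_scores = df.loc[df["device"] == "paddle", "mcsr_score"]

# One-way ANOVA on scale scores by response device.
F, p = stats.f_oneway(clicker_scores, paddle_scores)
print(f"F = {F:.3f}, p = {p:.3f}")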
Motivated strategies for learning. This question remains unanswerable for the
Summer group. The pre-Motivated Strategies for Learning Questionnaires were missing
due to human error.
Metacognitions in lecture. The Metacognition and Electronic Feedback Devices
Survey (MCC/MCP, appendix D) was designed to determine whether, and to what degree,
participants attributed influence on learning to the response devices, as well as to
gather data that aligns more closely with the qualitative questions of the current
study. An ANOVA was performed with student metacognitions in lecture (as measured by the
Metacognition and Electronic Feedback Devices Survey, MCP/MCC, appendix D) as the
dependent variable and response device and extent of use as the independent variables.
Significance was found for “clickers” (F = 2.458, df = 14, 18, p = .039), but not for
paddles (F = .703, df = 14, 18, p = .745). In other words, when “clickers” were utilized
in lecture, participants connected use of the “clickers” to improved understanding of
course concepts, self-monitoring, and peer comparisons, and ascribed more influence to
“clickers” than to simply responding by raising hands. Participants did not make this
attribution to use of paddles in lecture.
Fall Groups. ANOVAs and a regression were performed to answer research
question 1 for the Fall experimental (“clickers”) and comparison (paddles) groups: “Are
there differences in student metacognitive self-regulation, motivated learning
strategies, and metacognitions in lecture based on whether the students utilize
‘clickers’ or paddles, and the extent of use?” The first ANOVA was performed to
determine whether use of “clickers” versus use of paddles influenced scores on the
Metacognitive Self-Regulation in Lecture Survey (PQ/CQ, appendix C). The second aspect
of research question 1 required a regression to determine whether use of “clickers” and
extent of use influenced motivated strategies for learning based on the post-Motivated
Strategies for Learning Questionnaire (MSLQ). A further ANOVA was utilized to determine
whether use of “clickers” versus use of paddles influenced scores on the Metacognition
and Electronic Feedback Devices Survey (MCP/MCC, appendix D). Results for the two
ANOVAs and the regression are reported in the following sections for the Fall comparison
and experimental groups.
Metacognitive self-regulation. For the first ANOVA, the dependent variable was
student metacognitive self-regulation as measured by the Metacognitive Self-Regulation
in Lecture Survey (PQ/CQ, appendix C), and the independent variables were response
device (e.g., “clickers” or paddles) and extent of use. No significance was found
between use of “clickers” and metacognitions in lecture for the group that utilized
“clickers”; the group utilizing paddles had a higher mean (F = 4.845, df = 1, 147,
p = .000). The results indicated that there were differences based on group assignment
in participant metacognitions during lecture (e.g., knowing one’s level of preparation
for lecture, measuring one’s level of understanding, seeing how the lecture fits with
the text, and refocusing) and metacognitive self-regulatory behaviors (e.g., guiding
note-taking, knowing which key concepts to highlight, and knowing what questions to
ask). The comparison group had a higher mean, 3.26 (SD = .69), than the experimental
group, 2.94 (SD = .09). This indicates that these types of metacognitions may be more
influenced by paddles than by “clickers.”
Motivated strategies for learning. For the second consideration in research
question 1, a regression was employed. The dependent variable was the post-Motivated
Strategies for Learning Questionnaire, and the two independent variables were group
membership (paddles vs. “clickers”) and the pre-Motivated Strategies for Learning
Questionnaire. Significance was found in favor of paddles on the post-Motivated
Strategies for Learning Questionnaire, controlling for the pre-MSLQ (β = .747,
p = .037). In other words, paddles influenced metacognitive processes related to
motivated strategies for learning (MSLQ) more than “clickers” did. The mean for the
experimental group was 4.40 (SD = .95) and the mean for the comparison group was 4.58
(SD = .10). There was significance according to group assignment, with the comparison
group (paddles) more predictive of the post-MSLQ. Metacognitive processes measured by
the MSLQ are known predictors of performance outcomes. A regression was therefore
performed using the pre- and post-Motivated Strategies for Learning Questionnaire as the
independent variables and quiz mean as the dependent variable. These results indicate
that the post-MSLQ was predictive of quiz outcomes (β = .365, p = .045), but the
pre-MSLQ was not related to outcomes (β = -.070, p = .725). Significance was found
between the pre- and post-Motivated Strategies for Learning Questionnaire at the .05
level. The pre-Motivated Strategies for Learning Questionnaire explained 48% of the
variance in predicting the post-MSLQ (F = 68.295, df = 2, 145, p = .000). This means
that the participants’ data were spread out, or varied, and the spread of the data can
be explained in part by the pre-MSLQ. In addition, the pre-MSLQ did not predict quiz
performance, but the post-MSLQ did emerge as a weak to moderate predictor of performance
outcomes (F(0.181) = 0.237, p = .045). These results indicate that when participants
had a higher mean on the post-MSLQ, their performance outcomes were higher.
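A hedged sketch of this kind of regression, controlling for the pre-test, is shown
below using statsmodels; the column names (post_mslq, group, pre_mslq) are hypothetical,
and group is assumed to be coded 0 for paddles and 1 for “clickers.”

import statsmodels.formula.api as smf

# Post-test scores regressed on group membership while holding the
# pre-test constant; the group coefficient estimates the device effect.
model = smf.ols("post_mslq ~ group + pre_mslq", data=df).fit()
print(model.summary())  # coefficients, p-values, and R-squared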
To validate the Motivated Strategies for Learning Questionnaire for this study,
the following question was asked: Does the pre-Motivated Strategies for Learning
Questionnaire predict the post-Motivated Strategies for Learning Questionnaire?
Theoretically, the two should have a strong relationship. The results indicate that the
pre-MSLQ predicted the post-MSLQ for the Fall experimental and comparison groups
together (F(0.590) = 0.590, p = .015) and when the regressions were performed for the
experimental (F(0.771) = 0.857, p = .046) and comparison (F(0.521) = 0.587, p = .046)
groups individually. This largely alleviates concerns about the study and demonstrates
that there are no significant issues with the scale, although there may be minor
concerns with the validity of the scale, as it does not predict quiz performance as it
theoretically should. Moreover, when examining correlations, the pre- and post-Motivated
Strategies for Learning Questionnaire share 56% of variance, as shown in Table 8
(r = 0.75, p < .001). Validating the scales was important because the MSLQ has been
associated with performance outcomes. The MSLQ is a validated scale, and any results
suggesting that the MSLQ is not a strong predictor require that the scales be validated
so that the results of the current study are solidly founded. Table 8 demonstrates that
the pre-MSLQ was highly correlated with the post-MSLQ.
Furthermore, a t-test was performed comparing the pre- and post-Motivated
Strategies for Learning Questionnaire for the Fall Groups. Significance was found at the
.05 level: the test revealed that the post-MSLQ was significantly lower than the
pre-MSLQ; for the pre-MSLQ, t(142) = 66.541, p = .000, and for the post-MSLQ,
t(140) = 59.118, p < .001. This means that the previously presented statistical tests
may give the illusion that the pre-MSLQ was not a predictor of quiz outcome.

Table 8
Correlations: Pre-MSLQ and Post-MSLQ

                 Pre-MSLQ    Post-MSLQ    N
Pre-MSLQ         1           .750**       153
Post-MSLQ        .750**      1            154

** p < .01 (two-tailed)
Metacognitions in lecture. For the third ANOVA, the dependent variable was
student metacognitions in lecture as measured by the Metacognition and Electronic
Feedback Devices Survey (MCP/MCC, appendix D), and the independent variables were use of
paddles versus “clickers” and extent of use. Significance was not found between the
experimental (“clickers”) and comparison (paddles) groups (F = .418, df = 8, 147,
p = .909). These results indicated that neither group’s response system, nor the extent
of use of the response system, was a factor influencing metacognitions in lecture.
Research Question 1 Summary
Research question 1 examined whether the type of response system and the extent
of use influenced Metacognitive Self-Regulation, Motivated Strategies for Learning, and
Metacognitions in Lecture. For the Summer Group, significance was not found for
“clickers,” paddles, or extent of use on Metacognitive Self-Regulation (PQ/CQ, appendix
C). This means that “clickers” and paddles did not influence participant metacognitions
during lecture (e.g., knowing one’s level of preparation for lecture, measuring one’s
level of understanding, seeing how the lecture fits with the text, and refocusing) or
metacognitive self-regulatory behaviors (e.g., guiding note-taking, knowing which key
concepts to highlight, and knowing what questions to ask). The motivated strategies for
learning portion of the question was unanswerable for the Summer Group because the
pre-MSLQ data were lost to human error. Significance was found for “clickers,” but not
for paddles, on the Metacognition and Electronic Feedback Devices Survey (MCP/MCC,
appendix D). Essentially, participants from the Summer Group indicated that “clicker”
use in lecture influenced understanding of course concepts more than raising hands did,
but paddles did not have this same influence.

The results for the Fall experimental (“clickers”) and comparison (paddles) groups
varied from those of the Summer Group, except that significance was likewise not found
for “clickers” or extent of “clicker” use on Metacognitive Self-Regulation (PQ/CQ,
appendix C). Significance was found for paddles and Metacognitive Self-Regulation
(p < .001). This means that for the comparison group, paddles influenced metacognitions
(e.g., knowing one’s level of preparation for lecture, measuring one’s level of
understanding, seeing how the lecture fits with the text, and refocusing) and
metacognitive self-regulatory behaviors (e.g., guiding note-taking, knowing which key
concepts to highlight, and knowing what questions to ask). Significance was also found
for the comparison (paddles) group on the post-MSLQ survey, which means that the
metacognitive processes involved in motivated strategies for learning were more
influenced by the paddles than by the “clickers.” The MSLQ is known to measure
metacognitive processes specific to a course; statistical analyses were run to validate
the scale for the current effort and to confirm that there were no significant issues
with the MSLQ. For neither the experimental (“clickers”) group nor the comparison
(paddles) group did the response device influence Metacognitions in Lecture (MCC/MCP,
appendix D).
Research Question 2
Research Question 2 asked: Does use of “clickers” versus paddles predict
performance outcomes? The tests of statistical significance performed to answer this
research question were t-tests. “Clicker” versus paddle group membership served as the
independent variable, and performance outcome (e.g., quiz means) was the dependent
variable. The following sections report the results of the Summer Group and the Fall
experimental (“clickers”) and comparison (paddles) groups. Table 9 displays the
descriptive statistics for the performance outcomes for the Summer Group and the Fall
comparison and experimental groups.
Summer Group. A paired sample t-test was performed to determine whether using
“clickers” versus paddles influenced quiz outcomes. Paddles were utilized for the first
five lectures, and “clickers” were then employed for the remaining lectures of the
summer course. The mean for quizzes 1-5, the time frame reflecting use of paddles, and
the mean for quizzes 6-10, reflecting use of “clickers,” were used to determine whether
there was any significant difference between response systems. Significant differences
were established between “clicker” and paddle quizzes (t(31) = 5.400, p = .000). The
test was set at the .05 level, but significance was found at the .001 level. The
increase in performance outcomes can in part be explained by experience with the course
format, in other words, getting used to the course and the professor. However, the
difference in performance outcomes was highly significant and suggests that “clickers”
were influential contributors to performance outcomes as well.
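Because each Summer participant contributed a quiz mean under both response systems,
the comparison is a paired one; a minimal sketch, with hypothetical column names:

from scipy import stats

# Paired t-test: each row is one participant measured under both systems.
t, p = stats.ttest_rel(df["clicker_quiz_mean"], df["paddle_quiz_mean"])
print(f"t = {t:.3f}, p = {p:.3f}")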
Table 9
Performance Outcome Descriptive Statistics

Source                                     N     M      SD     t
Summer Group
  Paddles                                  32    8.304  1.134  5.400***
  “Clickers”                               32    9.259   .801
Fall Groups
  Experimental Group ‘A’ – “clickers”      83    8.028  1.031  2.416*
  Comparison Group ‘B’ – paddles           74    7.545  1.42

* p < .05; ** p < .01; *** p < .001
Fall Groups. An independent samples t-test was performed to determine whether
“clickers” or paddles had more bearing on performance outcomes (e.g., the mean of
quizzes 1-4). ‘Group A,’ the group that utilized “clickers,” performed better than
‘Group B,’ the group that utilized paddles (t(155) = 2.416, p = .015). The “clicker”
group outperformed the paddle group by an average of almost .5 points. Arguably, group
characteristics cannot be ruled out as a factor, but the number of participants in the
study and the similar group characteristics (i.e., mean age, degree status) suggest that
“clickers” in this case may be a contributing factor. Moreover, the content and the
professor were the same, including the same polling information, instructional design,
and instructional strategies.
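In contrast to the Summer analysis, the Fall groups are independent samples, so the
comparison uses an independent samples t-test; a minimal sketch, assuming hypothetical
per-group DataFrames:

from scipy import stats

# Independent samples t-test on the two Fall groups' quiz means.
t, p = stats.ttest_ind(group_a["quiz_mean"], group_b["quiz_mean"])
print(f"t = {t:.3f}, p = {p:.3f}")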
Research Question 2 Summary
Research question 2 examined whether “clickers” or paddles predicted
performance outcomes. During the Summer, performance outcomes were significantly higher
when “clickers” were utilized than when paddles were utilized (p < .001). During the
Fall, “clicker” use was found to have significantly more influence on performance
outcomes compared to paddles (p < .05).
Research Question 3
Research Question Three asked: Does extent of use of “clickers” predict
performance outcomes (e.g., quizzes 1-4 for each group)? For the summer group mean
quiz outcomes and final grades were utilized. An ANOVA was used to examine the
extent use delineated as a percent of time used for clicker or paddle items in lecture (e.g.,
0%, 25%, 50%, 75%, or 100%) and quiz mean score. The independent variable was
extent of use and the dependent variable was the performance outcomes.
Summer Group. The Summer Group was supplied with each response system: during
lectures 1-5, paddles were supplied, and during lectures 6-10, “clickers” were supplied.
Significance was found for extent of paddle use and quiz mean (F = 3.158, df = 19,
p = .023), but not for extent of “clicker” use and quiz mean (F = 1.332, df = 11,
p = .278). This result indicates that the extent of paddle use seems to have influenced
the performance outcome.
Fall Groups. In order to determine the extent of participants’ use of the
audience feedback systems, data were gathered through a self-report survey in which
participants indicated whether their level of use of “clickers” or paddles during
lecture was 0%, 25%, 50%, 75%, or 100% of the time. When the teaching assistants
collected the data, the experimental (“clickers”) group reported 100% use of the
electronic feedback device. In lecture, however, it was regularly observed that about
60% of participants tended to participate in “clicker” response items; this level of
response was inconsistent with participant self-reports. Data were collected a second
time to attempt to gain a more accurate report. The participants were asked, “Did you
use a clicker?” and “To what extent did you use the clicker?” These two questions were
re-administered in the large lecture setting by the primary researcher, who was not a
teaching assistant for any of the lab sections for this lecture group. When
re-administered, although the numbers were still somewhat inconsistent with the
percentage of respondents to polls in lecture, the results provided enough variation to
run an analysis of variance.
An ANOVA was used to determine whether extent of use of response systems
contributed to performance outcomes. Significance was not found in the experimental
(“clickers”) group (F = .486, df = 4, 152, p = .746) or in the comparison (paddles)
group (F = .038, df = 3, 64, p = .990). While the mean quiz scores did not increase as
extent of use increased, as measured by the ANOVA, there is an interesting difference in
the means for the participants who reported utilizing “clickers” 25% of the time: this
group had the highest mean quiz score. This will be discussed further in chapter 5.
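A minimal sketch of this extent-of-use comparison, assuming a hypothetical extent
column coded 0, 25, 50, 75, or 100:

from scipy import stats

# One-way ANOVA of quiz means across the five self-reported use levels.
by_extent = [g["quiz_mean"].values for _, g in df.groupby("extent")]
F, p = stats.f_oneway(*by_extent)
print(f"F = {F:.3f}, p = {p:.3f}")

# Group means make patterns such as the 25% group's high mean visible.
print(df.groupby("extent")["quiz_mean"].mean())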
Research Question 3 Summary
The third research question examined whether the extent of use of “clickers” or
paddles predicted performance outcomes. The Summer Group results indicated that extent
of paddle use influenced performance outcomes, while extent of “clicker” use was not
found to influence performance outcomes. This means that for the Summer Group, paddle
use had some degree of prediction for performance outcomes, while none was found for
“clicker” use. For the Fall experimental (“clickers”) and comparison (paddles) groups,
performance outcomes were not influenced by extent of use for either response system.
When “clickers” were in use, the Summer Group’s performance outcomes were significantly
higher compared to the performance outcomes based on paddle use. It is unclear why
significance would occur at the .01 level for the Summer Group when “clickers” were
utilized, while no significance was established when paddles were employed for this same
group, and no significance was found for either the experimental (“clickers”) or
comparison (paddles) groups in the Fall. In the next part of the results section,
qualitative results are reported.
Summary of Demographic Differences
Demographic characteristics were examined for differences by response device.
Table 10 displays only those results that demonstrated significance. In the Summer
Group, female participants attributed metacognitive self-regulation to paddles more so
than males (t(33) = 2.535, p = .017); there were no other gender differences.
Significance was found for the Fall experimental group (“clickers”) and major status:
interestingly, participants who had declared a major indicated a higher use of
self-regulatory motivated strategies according to the pre-MSLQ (t(87) = 1.994,
p = .049), while participants with undeclared major status experienced a greater degree
of influence on metacognitions in lecture resulting from “clickers” (t(87) = 2.500,
p = .014). Athletic status seemed to have influence on metacognitions for all groups.
Significance was found for athletic status on the MSLQ (t(33) = 2.293, p = .029) and the
MCC (appendix D; t(33) = 1.994, p = .049), with student-athletes scoring differently on
these measures than non-student-athletes. A high degree of significance was found
between metacognitive processes and athletic status for the Fall experimental
(“clickers”) group on the CQ (appendix C; t(87) = 3.766, p = .000) and the MCC
(appendix D; t(87) = 2.943, p = .004). Athletes in the Fall comparison group indicated
that more metacognitions resulted from use of the response device in lecture (PQ,
appendix C; t(78) = 1.994, p = .049), but did not attribute metacognitive
self-regulation to the device (MCP, appendix D; t(78) = 1.832, p = .071). Different
group and individual characteristics and experiences may contribute to whether response
devices influence students.
Table 10
Demographic Characteristics (e.g., gender, major status, athletic status, ethnicity,
and English language status)

                                        Summer        Fall Experimental     Fall Comparison
                                        Group         Group (“clickers”)    Group (paddles)
N                                       33            87                    78
Gender, male/female, n(%)a              18(54.5%)/    44(50.6%)/            42(53.8%)/
                                        9(27.3%)      42(48.3%)             33(42.3%)
  MCP b                                 .017**        na                    .640
Major status, declared/undeclared,
n(%) c                                  24(72.7%)/    26(29.9%)/            20(25.6%)/
                                        8(24.2%)      59(67.8%)             58(74.4%)
  Pre-MSLQ                              .873          .049*                 .635
  CQ                                    .202          .014*                 na
Athletic status, athlete/non-athlete,
n(%)                                    9(27.3%)/     36(41.4%)/            28(35.9%)/
                                        24(72.7%)     48(55.2%)             50(64.1%)
  Pre-MSLQ                              .029*         .194                  .606
  PQ                                    .782          na                    .017*
  CQ                                    .120          .000***               na
  MCC                                   .020*         .004*                 na
English language status, English
primary/second language, n(%)           19(57.6%)/    77(88.5%)/            62(79.5%)/
                                        12(36.4%)     8(9.2%)               9(11.5%)
  CQ                                    .041*         .191                  na

a reported as M(SD) or n(valid %).
b p values reported for independent samples t-tests.
c percentage varied across measures (74.4% to 67.9%).
* p ≤ .05. ** p ≤ .01. *** p ≤ .001
Qualitative Results
The qualitative research questions asked how the response devices (e.g.,
“clickers” and paddles) influenced metacognition. All participants completed a pencil and
paper survey regarding how metacognition was influenced. Participants were selected for
interviews through purposeful sampling. Questions were designed to elicit information
pertaining to thought processes and self-regulatory behaviors resulting from response
devices, and whether there were differences in the way “clickers” and paddles influenced
metacognition. There are separate discussions of the Summer Group and Fall Groups’
data, and then the results are discussed collectively. This is done because in the
Summer there was one section of the educational psychology course offered, which was
significantly smaller and could not be divided into groups for comparison, whereas the
Fall Groups were two large lecture sections of the educational psychology course. This
allowed for a quasi-experimental design with an experimental and a comparison group. In
the following sections, research questions 4 and 5 are restated, the results of the
qualitative research are reported and examined according to research questions 4 and 5,
and the sections are followed by a summary.
Research Questions 4 and 5: Survey and Interview Results
Research Question Four asked: If "clickers" affect metacognition, how do they?
Research Question Five asked: Is the experience different for students using "clickers"
versus students using paddles, and how is the experience different? The following
sections report the results of the qualitative pencil and paper surveys and the interviews
for the Summer Group and the Fall comparison and experimental groups. Data were
reexamined several times as the passage of time allowed the researcher to review the
interview data with less regard for the identities of participants and to reduce the
tendency to form opinions when collecting the data. This allowed overarching themes to
emerge.
The themes that emerged were as follows: a) social comparisons, b) academic
benefits, c) effectiveness, d) engagement, and e) preference. Examples of the comments
pertaining to the themes are as follows. First, social comparison responses included use
of the term "anonymous," responses that indicated "seeing others' answers," and "looked
around at peers.” Second, academic benefits included “evaluation of thoughts,” “had to
think on my own,” and “helped refocus.” Third, effectiveness included comments such
as “it has helped” or “it’s affected the course.” Fourth, engagement included “helped me
to write things down,” “more competitive,” and “increased participation.” Fifth, and
finally, examples of preference included "preferred" and "liked better."
In the following sections the results of the qualitative analyses are reported. Data
about paddles are reported along with that of “clickers.” Tables and charts summarizing
qualitative data collection results are located in appendices E through S. First, the
Summer Group results for the pencil and paper survey and interviews are reported
followed by the Fall experimental (“clickers”) and comparison (paddles) groups. These
results are followed by summaries.
Summer group pencil and paper survey results. As stated above the following
categories were identified to represent the pencil and paper survey responses: a) social
comparisons, b) academic benefits, c) effectiveness, d) engagement, and e) preference.
These categories were utilized to frame and understand the Summer qualitative
interviews. The results for the qualitative pencil and paper survey data collection are
located in appendix E. All participants completed this qualitative survey (n=33). Both
response systems contributed to social comparisons; 39.4% of participants indicated
comparing self to others when using paddles and 54.5% reported similar comparisons
when using “clickers.” “Clickers” were described by 21.2% of participants as “easy” or
“convenient.” In regard to paddle use, 60% of participants indicated in some manner that
the paddle responses were visible (e.g., "you can see others' answers"). Twenty-two
point four percent of participants reported peer influence resulted from paddles, and 9.1%
reported changing answers to match peer responses. These types of comments tended to
be framed negatively, often accompanied by statements of preference for “clickers.”
When this survey was administered for “clickers” 57.6% of participants commented on
the “anonymity” afforded by “clickers.” Academic benefits were reported by participants;
39.4% of students indicated that “clickers” allowed for monitoring of performance, (e.g.,
“can see how I am doing”). 12.1% of participants reported that paddles help test
knowledge. Participants reported that both response devices clarify what is important, but
distinctly more comments were made about “clickers” (78.8%) in this regard than about
paddles (45.5%). An equal number of participants reported that engagement resulted
from use of paddles and "clickers," almost 25%. The immediate feedback provided by
"clickers" was referred to by 54.5% of participants. The comments that emerged about
paddles and "clickers" tended to be distinctly more favorable toward "clickers"; in fact,
the overwhelming majority (84.8%) preferred "clickers."
Summer group interviews. In the following sections, the results for the
qualitative interview for the Summer Group are reported. The interview questions
devised to answer qualitative research question 4 were, question 1, “How did
clicker/paddle results cause you to evaluate your thoughts?” and question 2, “How have
clickers/paddles caused you to change the way you take notes?” The interview utilized
purposeful sampling; three categories of interviewees were decided upon according to the
mean scores on the PQ/CQ (appendix C), which measure metacognition in lecture, and
the MCP/MCC (appendix D). Because participants in the Summer Group took surveys for
both "clickers" and paddles, sub-group 1 was comprised of the respondents whose mean
on the "clicker" surveys was significantly higher than on the paddle surveys. Sub-group 2
was comprised of participants whose means on the paddle and "clicker" surveys were the
same or very close to the same (within less than .5 points difference). Finally, sub-group
3 was comprised of the participants with the highest mean scores on paddles surveys and
lower mean scores on the “clicker” surveys. Comments from respondents were clearly
favorable toward “clickers.” The assumption was made that participants with higher
mean scores for paddles would have a preference for paddles over “clickers.” There were
three participants (9.1%) from the Summer Group who fell into this group, and two of the
three preferred “clickers”. This will be discussed further in chapter 5. The following
sections report the findings from the qualitative interview questions from the Summer
Group.
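The sub-group assignment described above reduces to a simple rule applied to each
participant's two survey means. The fragment below is a hypothetical sketch of that rule,
not code from the study: the participant identifiers and scores are invented, and only the
.5-point threshold comes from the text.

    # Hypothetical sketch of the Summer sub-group rule described above.
    # Participants whose "clicker" and paddle survey means are very close
    # go to sub-group 2; otherwise the larger mean decides. Data invented.
    def assign_subgroup(clicker_mean, paddle_mean, threshold=0.5):
        diff = clicker_mean - paddle_mean
        if abs(diff) < threshold:
            return 2                    # means the same or very close
        return 1 if diff > 0 else 3    # 1: higher "clicker" mean; 3: higher paddle mean

    participants = {"P01": (4.6, 3.1), "P02": (3.9, 3.8), "P03": (2.7, 4.2)}  # invented
    for pid, (clicker_mean, paddle_mean) in participants.items():
        print(pid, "-> sub-group", assign_subgroup(clicker_mean, paddle_mean))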
The first interview question asked was, “How did “clickers”/paddles cause you to
evaluate your thoughts?” Summer interview data collection results for interview question
1 are located in appendix F. Paddles caused students to “second guess answers.”
Respondents indicated that with use of paddles there are increased self-evaluative
thoughts, because of the visual component. Examples of these comments included “I
evaluated a lot because I never wanted to be wrong," "I felt pressure to go with the
majority," and "Paddles made me second guess my answers." With "clickers,"
participants stated that they "relied on [their] own knowledge" and learned from the
process of relying on their own answers (e.g., "when I'm wrong, I learn"). One
respondent from the high mean “clicker” sub-group suggested that when using “clickers”
“you can click anything,” while other respondents described “clickers” as “more
independent” and “easy to answer and focus on the question.” Other comments about
“clickers” included reference to the “anonymity,” “allowed own thoughts,” “cause
"comparing of answers," and "helps to refocus." When respondents engaged in
self-reflection, seeing other students' results triggered uncertainty about one's own
answers or resulted in simply changing to the majority because of uncomfortable feelings,
without considering the correctness of answers.
The second interview question asked was, “How have “clickers”/paddles caused
you to change the way you take notes?” Summer interview data collection results for this
interview question are located in appendix G. Some respondents indicated that both
paddles and "clickers" helped to clarify what is important. Comments included, "If I
missed the question, it helped me to know what to write," and "They help to outline
important information and know what I'm doing wrong." Some respondents indicated
that neither response device changed note-taking. Comments included, "It hasn't," "Not
necessarily," and "No change." Respondents were divided almost equally between
indicating that note-taking had changed and indicating that note-taking had not changed
or expressing uncertainty as to whether changes occurred.
The third interview question asked, “How have “clickers”/paddles caused you to
compare your answers to other students?” Summer interview data collection results for
this interview question are located in appendix H. The anonymous quality of “clickers”
emerged, as did the visible nature of paddles. Paddles were described by respondents as
“uncomfortable,” (e.g., “They’re kinda uncomfortable and awkward if I knew I was
wrong"). One participant stated, "People aren't sure and they see others' answers and
change their answers.” Another respondent described the ability to compare answers to
others when using paddles or “clickers” as the same, but when paddles were used stated,
“I had to turn around and look back.” Essentially, both response devices caused social
comparisons, but paddles’ influence resulted in respondents changing answers to align
with those of the majority.
The fourth interview question asked, "How have "clickers"/paddles helped you
understand course concepts?” Summer interview data collection results for this interview
question are located in appendix I. Respondents indicated that both response devices
helped to understand concepts by “reinforcing what to learn,” and “main topics and extra
scenarios helped elaborate.” Several comments were made about the “confidential”
nature of “clicker” answers. One respondent suggested that with “clickers,” “I focus on
the question rather than on paddles and what others’ answers were.” One respondent
indicated that both helped, “because the instructor taught the same.” Preference for the
anonymity of the "clickers" emerged on this question, which suggested that the ability of
some students to answer confidentially enabled learning.
Summer group summary. The Summer Group experienced use of both response
devices, which meant that these participants were able to compare the devices. There was
not a treatment group or a control group. The results from the interviews
indicate that metacognition was influenced by “clickers” and by paddles, although the
influence of paddles in self-reflective thoughts was a negative experience for most
participants. Both devices influenced self-reflective thoughts and allowed for self-
monitoring. However, participant reflections were different with paddles than with
"clickers." Most paddle reflections resulted from discomfort with the possibility of being
wrong and from feeling pressure to change to the majority. “Clicker” reflections were
oriented toward determining content knowledge and comparing self to others in level of
preparation and level of understanding course concepts. Respondents were divided on
whether “clickers” or paddles influenced note-taking, with approximately half stating
there was influence and the other half either “not sure” or flatly stating, “It didn’t.” For
those respondents who indicated that their note-taking was influenced, comments noted
that note-taking was clarified and guided and that questions about subject matter were
answered. For most respondents both response systems caused comparing self to others.
Several participants indicated on the pencil and paper survey and during interviews that
they changed answers to the majority when using paddles. The pressure to change
answers, according to respondents, seemed to result from metacognitions of a negative
nature; comments included feeling that everyone else was right, not wanting to be
different, or feeling embarrassed when paddle response choice proved incorrect. Both
response systems also helped understanding of course concepts. According to
respondents, understanding was helped with "clickers" and paddles by elaborating on
concepts and providing additional information. The difference occurred in the
anonymous nature of "clickers," which provided a safe way to achieve the same effects.
"Clickers" were consistently preferred by the large majority (32:1).
Fall groups. The Fall Groups, 'Group A’ (experimental) and 'Group B’ (comparison)
were given the same qualitative pencil and paper survey. The same questions were
posed as with the Summer Group; the following questions were asked: Question 1,
“How have clickers/paddles caused you to compare your answers to other students?” and
Question 2, “How have clickers/paddles helped you to understand course concepts?” The
next section reports the results of the survey followed by the results of the interviews for
the Fall experimental (“clickers”) and comparison (paddles) groups.
All participants completed the pencil and paper qualitative survey (n= 165); the
results are located in appendices J and K. According to the results both response systems
contributed to social comparisons; 62.8% of participants indicated comparing self to
others when using paddles and 56.3% reported similar comparisons when using
"clickers." In regard to paddle use, 37.2% of participants indicated that "you can see
others' answers," and 23.1% reported that they changed answers to match peer responses.
In contrast, 2.6% of participants in the comparison group reported that “paddles train me
to stick with my answers.” The majority of comments about paddle visibility were
negative. Participants from the experimental group (6.9%) reported that the “anonymity”
afforded by “clickers” provided the opportunity to answer honestly. Both the
experimental (“clickers”) and the comparison (paddles) groups reported that response
systems allowed for monitoring of performance; statements in this regard included, “I can
check how I’m doing,” and “I find out what I know;” these types of comments were
reported by 17.2% of “clicker” respondents and 14.1% from the paddle group. Both
groups indicated that use of response systems clarified what is important. Respondents
from the comparison (paddle) group (50%) reported that paddles clarify what is
important and indicated that paddles help them think critically about the material (20.5%).
In the experimental ("clicker") group, 25.3% of participants indicated that "clickers"
were not effective as a response device, but 16.1% indicated that "clickers" help them
think critically about the material. Furthermore, 39.0% of participants reported that
"clickers" clarify important
concepts. Participants in both groups reported engagement or increased engagement due
to paddles (25.0%) and to “clickers” (16.1%), and 28.7% indicated that the immediate
feedback provided by “clickers” was helpful.
Fall group interviews. In the following sections, the results for the qualitative
interview for the Fall experimental (“clickers”) and comparison (paddles) groups are
reported. As with the Summer Group, purposeful sampling was utilized for the Fall
experimental (“clickers”) and comparison (paddles) groups in order to determine the
three sub-groups. Sub-group 1 included the participants whose means on the response
device surveys were highest within their group. Sub-group 2
included the participants whose mean scores were closest to the median score. Sub-group
3 consisted of the participants with the lowest mean scores.
The qualitative interview questions for the experimental (“clickers”) and the
comparison (paddles) groups were the same questions that were asked of the Summer
Group: a) “How did clicker/paddle results cause you to evaluate your thoughts?,” b)
“How have clickers/paddles caused you to change the way you take notes?,” c) “How
have clickers/paddles caused you to compare your answers to other students?,” and d)
"How have clickers/paddles helped you to understand course concepts?" In the following
sections, the Fall experimental ("clickers") and comparison (paddle) groups' qualitative
interview data analysis is reported, examining the comments of the low, median, and high
sub-groups derived from purposeful sampling. First the data for the experimental
("clicker") group are reported, and then the data for the comparison (paddle) group.
The first interview question asked was, “How did “clickers”/paddles cause you to
evaluate your thoughts?” The Fall groups’ interview data results for this interview
question are located in appendices L and M. "Clickers" and paddles resulted in
self-evaluative thoughts, according to respondents. Respondents from the "clicker"
sub-groups with the highest means and the mid-means on the surveys unanimously
reported that "clickers" influenced self-evaluative thoughts. Comments made by
respondents that illustrate this included, "I felt confident when I was right," "I realized
my level of preparation," "By comparing to peers," and "When answers matched the majority, I knew
I was right and when they didn’t match, I knew I was wrong and I would rethink the
answer.” There were several ways in which “clickers” caused self-evaluative thoughts.
The first way was seeing results displayed and comparing to group members, which
resulted in the ability to compare self to peers. These comparisons resulted in increased
confidence for some participants. The second way was that "clickers" helped gauge level
of preparation for lecture, which in turn helped participants to evaluate study habits. The
third way was the anonymity; anonymity provided the ability to answer "safely," and
when wrong answers were given, less discouragement resulted, for two reasons. One was
because no one saw individual answers and the second was because others were also
wrong. The fourth way was that "clickers" increased interest in the lecture. The fifth way
was that "clickers" caused understanding of responses given by other students. There
were respondents from the low mean sub-group (participants whose means for the
"clicker" surveys were the lowest in the group) who stated that "clickers" did not result in
self-evaluative thoughts. These respondents felt “clickers” did not contribute and stated,
“I don’t think they helped,” and “they’re a waste of money.”
The comparison (paddle) sub-groups varied slightly according to how self-
evaluative thoughts occurred. The overall picture of how the paddles influenced
metacognition through self-reflective thought was shaped by the visibility of the response
system. Paddles allowed respondents to self-monitor level of preparation: comments in
this regard included, “I knew how I was doing,” and “You knew who was prepared.”
Paddles increased engagement, (e.g., “I feel involved,” and “I paid more attention”).
Respondents consistently connected responding with paddles with negative feelings when
a response was wrong and confidence when correct. Comments of this nature included,
“Others knew when you were wrong,” “I was embarrassed when I was wrong,” and “I
felt confident when I was right.” For some the pressure of the visible responses resulted
in increased pressure to prepare for lectures, (e.g., “I studied to answer more correctly”).
Paddles caused self-evaluative thoughts through “comparing own responses to others.”
Respondents' statements about the negative pressure from paddles included, "I was
caught between deciding to stay with my own answer or be right," and "I changed to the
majority." The temptation in evaluating responses may be to think that statements such as
these probably come from under-performing students. However, comments such as these
came from participants with lower performance outcomes and from participants with
higher performance outcomes.
The second interview question asked was, "How have "clickers"/paddles caused
you to change the way you take notes?” The Fall experimental (“clickers”) and
comparison (paddles) groups’ interview data collection results for this interview question
are located in appendices N and O. According to 42.1% of respondents, changes did not
occur in note-taking as a result of "clickers" (e.g., "No change; I write down the right
answer"). For the majority, changes did occur in note-taking: key concepts were
identified (e.g., "They show what's important"), and explanations were given that focused
the lecture on the main points (e.g., "I focus my notes on those things"). Respondents
reported increased engagement (e.g., "It increases my attention and helps me to focus in,"
"results are interesting," and "I focus on what the
professor is saying”). Respondents in the low-mean group (i.e., respondents who had
lower scores on the surveys about “clickers”) were more than twice as likely to indicate
that “clickers” did not change the way note-taking occurred. Respondents in the other two
sub-groups, the high mean and mid-mean groups, experienced more changes in note-
taking. The low mean group had higher performance outcomes than the other sub-groups,
and for this group note-taking did not depend on the response device. These respondents
stated, “No change,” or “It didn’t (affect note-taking).” This suggests that the type or
quality of notes taken by higher performing students may not be influenced by response
systems, or the strategies utilized, while the middle and lower performing students
experienced changes.
As with the experimental ("clicker") group, the majority of respondents from the
comparison (paddle) group reported that changes in note-taking occurred, while 30.8%
indicated no change occurred. When changes in note-taking occurred for this group, the
reasons were similar to the “clicker” respondents. Responses included, “they helped me
take down key points,” “they helped maintain my focus,” and “I wrote down the main
points.” Paddles clarified what was important, (e.g., “Ideas were clear because of the
questions,” and “I write down the paddle information”), and “helped to focus.”
The third interview question asked, “How have “clickers”/paddles caused you to
compare your answers to other students?” The Fall comparison and experimental groups’
interview data collection results for this interview question are located in appendices P
and Q. Social comparisons resulted from "clicker" use according to 84.2% of
respondents. Social comparisons were made in several ways. First, comparisons were
made as a result of visual representations (e.g., “the display shows you where you are
compared to others”). Second, comparisons resulted from competition, (e.g., “clickers
add a competitive spirit”); this comment resulted because of the team competitions that
are possible with “clickers” but not with paddles. Third, comparisons resulted from the
ability to respond anonymously and evaluate self and others (e.g., "Seeing the results is
interesting, and I checked myself to see if I should change"). Fourth, comparisons resulted
in emotions; respondents reported, “It’s exciting to see when your answer is right and
others were wrong,” and “If you’re with the small percentage that are wrong, you don’t
feel great.” Finally, the fifth way comparisons resulted was increased feelings of
identification with the group; one respondent stated, “If I am with the majority I think I
am more like the others in class.”
As with the experimental ("clicker") sub-groups, respondents from the comparison
(paddle) sub-groups reported that social comparisons occurred. There were more
differences than similarities in the social comparisons that resulted from using “clickers”
and paddles. In the case of paddles, respondent comments surrounded the visible nature
of the response system, and seeing the answers before knowing the correct response. This
visible nature resulted in respondent statements about experiencing pressure to conform
to the majority. Respondents commented about this pressure stating, “Paddles influenced
my answer to change with the majority,” “I was embarrassed when my answer was
wrong,” and “I made sure my answer was the same.” Responses commenting on the
pressure for social conformity occurred across the sub-groups.
The fourth interview question asked, “How have “clickers”/paddles helped you
understand course concepts?” The Fall experimental (“clickers”) and comparison
(paddles) groups’ interview data for this interview question are located in appendices R
and S. All respondents indicated that “clickers” helped to understand course concepts.
Respondents reported several ways in which “clickers” helped. Respondents
commented, “If you got it right, you know you know what you’re doing; if not, you
eventually catch on," and "Clickers helps to focus me on the material visually and it's
easier to understand." "Clickers" showed specific details and key concepts (e.g., "It gives
main facts and what the professor wants you to know"). "Clickers" provided an indication
of level of preparation and checked for understanding (e.g., “Shows if you read or
understand the lecture”). They were engaging, provided different perspectives, helped to
think critically, clarified key concepts, and elaborated on concepts. “Clickers” chunked
information (e.g., "It helps break up material to learn better") and resulted in discussions.
"Clickers" also helped through organization of material, questioning, application of
concepts, and anonymity.
Anonymity was identified as a feature that helped with understanding of concepts; the
benefits of anonymity included, “No pressure whether you’re right or wrong,” and “they
help when you're too shy to raise your hand." Almost 16% of interview respondents felt
that “clickers” were a waste of money and should be “subsidized;” these comments were
from the low-mean sub-group of respondents, meaning their mean scores on the surveys
about “clickers” were on the low end as compared to the rest of the participants who were
interviewed. However, none of the comments from these respondents indicated that use
of "clickers" failed to help or improve understanding of course concepts; in fact, there
were some comments from this sub-group that indicated there were benefits experienced
through “clicker” use. Comments in this regard included, “They help you know where
you’re at,” “You know by the visual representations,” “adds a competitive spirit,” and
“You can rank the class.” The majority of respondents indicated that understanding was
enhanced.
In the case of the paddle respondents, the majority (84.6%) indicated that paddles
helped to understand course concepts; as with the “clicker” interview respondents on this
question, a few respondents did not think paddles helped. Respondents who indicated
that paddles helped reported that paddles increased critical thinking and increased
self-monitoring: "They give the answer," and "help to apply the reading." As one
respondent stated, "When you're wrong, you want clarification." Paddles helped
respondents learn material more effectively by providing "a mini quiz." A response
surfaced that matched that of the Summer Group respondent who felt that paddles were
better than "clickers"; the respondent stated, "Paddles made me prepare, because you
show your answers."
Fall group qualitative summary. Unlike the Summer Group, the Fall Groups
were divided into two groups, utilizing different response devices for the first four weeks
of an educational psychology course and then continuing for the remainder of the course
with "clickers." The similarities were that the devices showed the level of preparedness
and had a visual component. "Clickers" presented information on histograms after
answers were given, and paddles revealed individual responses prior to knowing the
correct response. As one participant said, "With paddles, your answers are out there, but
with clickers answers are hidden." One "clicker" interview respondent summed up the
anonymity of clickers by stating that the answers were identified “without saying, Sally
was wrong.” The social comparisons resulting from paddles seemed to lead to changing
answers to match those of the majority. This was true for the majority of students;
however, almost 4% of students in the paddle group expressed a preference for paddles
because the visibility of paddles resulted in stronger motivation to prepare for lecture.
“Clickers” allowed for individual and group comparisons by providing “interesting”
visual representations. “Clickers” provided a safe way to answer honestly without the
negative emotions, while paddles created social conformity issues. Paddles seemed to
elicit negative feelings in participants, primarily embarrassment when answers were
wrong, as one respondent stated, “You shouldn’t do that stuff.”
Summary of Qualitative Results: Research Questions 4 and 5
The intention of the qualitative section of this study was to examine how
metacognitive processes were influenced by “clickers” in particular and whether there
were differences between how "clickers" and paddles were experienced by participants.
All study participants (n = 198) from the Summer and Fall Groups completed a pencil and
paper survey, and purposeful sampling was utilized to select interview candidates for the
Summer Group (n1 = 13), the Fall experimental group (n2 = 19), and the Fall comparison
group (n3 = 13). The reason the comparison group (paddles) had fewer interviewees than
the experimental ("clickers") group was that three of the participants who had been
selected to be interviewed according to their mean scores on the paddle surveys were in
the primary researcher's lab, and these interviewees were eliminated from the selection
process to avoid the potential of the Hawthorne effect influencing respondents. A
summary of the interview results for research question 4 and research question 5 is
reported below.
Research question 4 summary. Research question 4 examined how the
metacognitive processes may be influenced by utilizing “clickers.” Respondents in the
Summer Group and the Fall experimental (“clickers”) and comparison (paddles) groups
were asked four questions about how metacognitive processes were influenced in regard
to self-evaluative thoughts, note-taking, social comparisons, and understanding of course
concepts.
The respondents from the Summer Group stated that “clickers” influenced self-
evaluative thoughts because anonymity required relying on one’s own thoughts and level
of preparation. This resulted in contemplating the answers individually, and learning
occurred because of the individual’s ability to reflect on the answer and self-monitor
privately before results were displayed. The Fall experimental group reported that
"clickers" influenced self-evaluative thoughts through realizing level of preparation and
through seeing how one's own answers compared with the responses given by peers.
Note-taking was reported by half of Summer respondents to change as a result of
"clicker" use, and the other half indicated that note-taking did not change or that there was
uncertainty as to whether note-taking changed. Those in the Summer Group who
indicated a change suggested the change occurred because concepts were clarified and
because the "clicker" process provided guidance, including providing answers to
questions about the subject matter. Of respondents from the Fall experimental group,
42.1% indicated that note-taking was not changed, although some who answered from
this position added comments indicating that they wrote down the main points. For the
majority in
this group for whom note-taking was changed, changes occurred because important
concepts were identified, explanation of key concepts was provided, and because
“clickers” helped focus on points the professor discussed.
Social comparisons were reported by all respondents from the Summer Group and
Fall experimental (“clickers”) and comparison (paddles) groups. The Summer Group
indicated social comparisons occurred because of the anonymous nature of “clickers”
allowing comparisons to be made individually through the visual representations. The
Fall experimental group similarly indicated that the visual representations resulted in the
ability to anonymously compare self and others. Furthermore, these respondents
suggested that confidence increased when answers were correct and lessened when
incorrect. In addition, when the responses of the peer group matched the individual's
response, this resulted in feelings of identification with the group.
All of the Summer respondents and 85% of the Fall experimental (“clickers”)
respondents indicated that “clickers” helped the understanding of course concepts. The
Summer respondents suggested that “clickers” helped by reinforcing the material learned,
elaborating on main topics, and providing extra examples and scenarios. The Fall
experimental group suggested that course understanding was helped in many ways.
Understanding was helped, according to this group, because confidence was experienced
when answers were correct, a visual component was provided that improved
conceptualization, key concepts were identified, level of preparation was shown, level of
understanding was checked, content was broken up into manageable pieces, and all of the
aforementioned was accomplished without the pressure of whether the participant was
right or wrong. The interviews provided clarification about the process by which
"clickers"
influence the metacognitive process in lecture.
Research question 5 summary. Research question 5 explored whether “clickers”
and paddles were experienced differently by students in a large lecture context. There
were similarities in how “clickers” and paddles were experienced by students. Both
devices clarified concepts, influenced self and peer comparisons, and both “clicker” and
paddle interviews indicated that participants were split on whether the response devices
helped with note-taking. The visual quality of "clickers" and paddles was the basis for
the differences participants experienced. “Clickers” provided anonymity allowing what
one respondent called “safe” participation where participants were able to think privately
about answers and come to a conclusion before the answers of the collective group were
indicated. With paddles, answers of peers were identifiable prior to knowing the correct
response and this frequently resulted in participants experiencing social conformity
pressure. The results indicate that the response systems affected participants differently.
Summary
In summary, the current study showed that both “clickers” and paddles influence
metacognition with some similarities and some differences. The majority expressed a
preference for “clickers” and this was consistent between the Summer Group (90.9%)
and the Fall experimental ("clickers") and comparison (paddles) groups. Because the
Summer Group experienced use of both response systems, the participants stated their
preference for one response device over the other. This was not the case for the Fall
Groups: asking the participants' preferred response device was not a function of this
study, and this information was not offered as regularly by the participants as with the
Summer Group. This means that a precise percentage of participants who preferred
"clickers" over paddles was not available, but from the pencil and paper surveys it
seems that only 3.4% preferred paddles and the remaining 96.5% preferred "clickers."
Both “clickers” and paddles appeared to influence note-taking in a similar fashion.
Respondents tended to agree that the items that guided the note-taking process were the
questions, resulting answers, and clarified concepts. Respondents reported that note-
taking was more focused because of the informative quality of the response systems.
Participants seemed to experience increased clarity about what to write in their notes,
because of the clarification of key points and concepts that accompanied the response
system items. Both “clickers” and paddles resulted in social comparisons while, for the
majority of students, influencing metacognitions and metacognitive self-regulatory
behavior differently. There were results contrary to expectations for the Fall experimental
(“clickers”) and comparison (paddles) groups. The first was that although the “clicker”
group emerged as having higher performance outcomes, the surveys did not indicate that
there were higher metacognitive processes associated with “clicker” use than paddle use.
No significance was found for the Summer or Fall Groups for response devices
and metacognitions in lecture (CQ/PQ, appendix C). The Summer Group did attribute
metacognitive processes to "clickers" at the .05 level, but not to paddles (MCC, appendix
D). The Fall experimental (“clickers”) group did not make the same attribution, but the
Fall comparison (paddles) group did attribute metacognitive processes to paddles at the
.01 level (MCP, appendix D). Metacognition is a known predictor of successful outcomes
(Mayer, 2008), so these surprising results suggest the possibility that there are additional
aspects to the quality or types of metacognition that can be influenced by the type of
feedback. There were some indications of differences based on demographic information,
but these differences were inconsistent. The Summer Group, in comparison to the Fall
Groups, was more likely to express favoring one response system over the other. All but
one respondent favored “clickers” (n=33). In the following section, Chapter 5,
implications for education will be discussed and recommendations will be suggested for
practice and future research.
Chapter Five: Discussion
The current dissertation sought to explore whether metacognition was influenced
by college students’ use of “clickers” in a large lecture setting. The driving questions in
this study were whether and how metacognition was influenced by the use of “clickers.”
It is important to note that response devices were employed in addition to the use of
instructional strategies already embedded in the instructional design of the undergraduate
educational psychology course (e.g., questions, and peer instruction), because this is the
recommended manner for the use of “clickers” (Mayer et al., 2009). The study was quasi-
experimental, and employed a pre-post-test design.
Data was collected from three sections of a general education educational
psychology course during the Summer 2011 and Fall 2011 sessions. The study utilized
two response systems in order to have a basis for comparison (e.g., “clickers” and
paddles). The Summer Group was smaller in size (n=33), and this group experienced use
of both response systems. During the Fall there were two sections of this course with the
same instructor and syllabus as were utilized during the Summer session. One group,
'Group A,' utilized "clickers," and the other group, 'Group B,' utilized paddles; 'Group
A' constituted the experimental condition and 'Group B' the comparison. The response
system each group utilized was
determined by random assignment (e.g., the toss of a coin). ‘Group A’ was the
experimental group (n=87) that utilized “clickers,” and ‘Group B’ was the comparison
group (n=78) that utilized paddles.
In the current effort there were three hypotheses. The first hypothesis was that
“clicker” use influenced student metacognition and self-regulation. While quantitative
results were inconsistent, it is apparent from the qualitative results that “clicker” use
influenced metacognition and self-regulation. The second hypothesis was that students
engage in social and peer comparisons as a result of “clicker” usage. Paddles and
“clickers” not only resulted in social comparisons, but response devices influenced
participants differently. The third hypothesis was that by engaging in self-evaluation
following “clicker” questions, student metacognitive awareness and self-regulation,
MCSR, would increase, resulting in improved note-taking in lecture. Approximately half
of Fall participants and all of the Summer participants offered ways in which note-taking
was changed. The majority of participants reported that self-reflective thoughts and social
comparisons resulted from “clicker” use, and that, therefore, a better understanding of
course concepts was gained.
Both quantitative and qualitative data were collected from all groups: the Summer
Group, and the Fall experimental and comparison groups. The quantitative data gave an
overall picture that was enhanced and clarified by the qualitative data. A pointed example
of the level of clarification that occurred with the qualitative data can be seen in the
Summer Group. As mentioned in chapter 4, for Summer Group interviews, the low-mean
sub-group interviews candidates were selected to interview because their mean scores on
surveys administered for paddles were higher than their mean scores on surveys
administered for “clickers.” This would seem to indicate that the paddles increased
metacognitions more than “clickers” for these participants and that these participants
preferred paddles over “clickers.” However, two of these three respondents preferred
“clickers;” only one indicated a preference for paddles during the interviews. Without the
qualitative piece this information would go unnoticed; it would be presumed that a higher
mean value for paddles indicated a preference for paddles over “clickers.”
The following discussions are based on the combination of the quantitative and
qualitative data collected. In this chapter the insights gained into the influence of
response devices on student metacognition in large lecture settings are discussed. In
addition the educational implications are discussed as well as recommendations for
practice and future research. The first discussion regards the benefits and drawbacks of
each response device. The second discussion considers "clickers" and the social context;
the third discussion moves on to the relationship between “clickers,” metacognition and
metacognitive self-regulation; the fourth discussion addresses “clickers” and student
outcomes; finally, the fifth discussion concerns the effect of feedback type on
engagement in learning.
“Clickers” versus Paddles: The Benefits and Drawbacks of Each
The use of “clickers” and paddles had similar effects upon groups; however,
differences were seen as well. Both response devices were similar in how information
was conveyed through instructional design and the strategies utilized; groups had the
same course instructor and shared experiences in instructional design. Both devices were
thought to cause self-evaluative thoughts. Fall experimental (“clicker”) and comparison
(paddle) group respondents were split as to whether note-taking changed, but the Summer
Group reported that each device contributed to changes in note-taking. Both devices
contributed to social comparisons, and both helped understanding of course concepts in
similar fashions.
By and large the difference occurred in the visible nature of each device.
“Clickers” were viewed as “safe” while paddles resulted in a certain vulnerability. With
paddle use, participants expressed uncertainty in their own answers and compared
themselves to peers. Because of this, it seems from the interview data that the major
factor in determining how to respond to a given question was the response of the peer
group. The contrast between the devices was most clear in this area of determining how
to respond. "Clickers" afforded learning opportunities and self-reflective thoughts. The
resulting social comparisons occurred privately instead of in the sight of one’s peers, and
before knowing the correct response. The paddles, for most respondents, resulted in
social conformity related thoughts and actions. A small margin, less than 10% of
respondents, reported debating with themselves as to whether to conform, that is, to
change opinions based on perceived pressure from the group (Aronson, 2008), or to
respond according to their own determination of the correct response.
According to the PQ/CQ (appendix C) quantitative results, paddles had a
significantly greater metacognitive influence than “clickers,” meaning there were
differences in participant metacognitions (e.g., knowing level of preparation for lecture,
measuring level of understanding, seeing how lecture fits with the text, and helped to
refocus), and metacognitive self-regulatory behaviors (e.g., guiding note-taking, helped to
know key concepts to highlight, and helped to know what questions to ask) during lecture
based on group assignment. Essentially, it is clear from the qualitative sections that both
devices influenced metacognitive processes, but paddles seemed to create more
metacognitive reflections. When considering the quantitative significance and the
qualitative responses, the types of metacognitions that resulted from "clickers" had a
distinctly more positive nature than those from paddles. Paddles were reported to elicit
metacognitions of a more negative nature for the majority of study participants and
interview respondents. This is true for the large majority, but an interesting caveat is that
in the Summer Group there was one participant out of 33 who felt that paddles were
superior to “clickers,” and the performance outcome was consistent in this case. For this
participant, the performance outcome was higher during paddle use for quizzes 1-5 than
performance outcomes during the remainder of the course when “clickers” were
implemented. This participant indicated that “with paddles you prepare, because your
answers are seen.” There were three participants in the Fall comparison group that shared
this sentiment. However, this was a small percentage of participants, 3.8% in Fall and
3.0% in Summer. The large majority of participants stated that they preferred learning
with “clickers.” Participants reported that use of “clickers” caused reliance on one’s own
level of preparation. This means, according to respondents, one’s own thoughts could be
used to determine answers and respond to polls; furthermore, responses could be made in
a private way, with learning taking place free from the threat of being wrong. This
contributed to more authentic answers from participants. When “clickers” were in use,
instead of feeling pressure to change to the majority response as with paddles,
participants and respondents reported that they could give “honest” answers. When their
answers were incorrect, their confusion could be addressed without the feelings of shame.
One respondent stated, “It’s embarrassing; you shouldn’t do that stuff.” Essentially, what
emerged from these results was that the metacognitions from paddles generated bad
feelings because of their visible nature.
Interestingly, the Fall comparison (paddles) and experimental (“clickers”)
respondent groups showed an inverse relationship as far as performance outcomes. From
the experimental group (“clickers”), the respondents from the high mean sub-group had
the lowest mean performance outcome. The mid-mean sub-group had the middle most
performance outcome, while the low-mean sub-group had the highest performance
outcomes. The reverse was true for the comparison group (paddles): the high mean
sub-group had the highest performance outcome, the mid-mean sub-group had the
middle-most mean, and the low mean sub-group had the lowest performance outcome mean.
Table 11 shows the trends in performance outcomes for Fall respondents.
While Table 11 displays results that have potentially interesting implications,
interpretation should be done with caution because the number of respondents in the
comparison group (n=13) and the experimental group (n=19) was relatively low. This
being said, the comparison (paddles) group’s high mean sub-group had higher
performance outcomes and seemed to like to “show what they know.” The mid-mean and
low mean paddle sub-groups expressed preference for “clickers” and the ability to
respond in confidence. It may be that the increased metacognitions with paddle use
interfered with productive learning; this statement is based on the reported cognitions and
concerns about how one compares to another and the persistent social conformity effects
of paddles.

Table 11
Fall Experimental ("clickers") and Comparison (paddles) Performance Outcomes

This may indicate that paddles provided undesirable distractions or even an
impediment to learning goals. This may also be related to self-instruction, one of the six
components of self-regulation (Bandura, 1997); self-instruction stems from the
interaction between the environment and individual’s perceptions. Perhaps the self-
instructions in which participants engaged were inhibited by social comparison
distractions when paddles were utilized and when “clickers” were in use the self-
instruction aspect was uninterrupted by uncomfortable peer comparisons because of
anonymity.
“Clickers” and the Social Context
Anonymity and social conformity were issues that emerged in the current effort.
In today’s educational climate universities and educational institutions have academic
integrity policies which are usually listed on the syllabi of courses. Cheating has
increased remarkably in recent decades (McCabe, Trevino, & Butterfield, 2001). There is
a tug-of-war in the current academic cultural climate between academic integrity and
excelling, with success so paramount that students may view cheating as normative
behavior and, therefore, as acceptable (Eastman, Iyer & Reisenwitz, 2008).
Unfortunately, in the current cultural climate, dishonesty is sometimes overlooked as
success is elevated. Society places a high value on success. The possibility exists that
social conformity resulting from the negative feelings about the visible nature of paddles
may discourage students from asking questions, because if students do not want an
incorrect paddle response to be seen, clarification questions may not be asked. Having
issues with learning that are perceived as atypical may lead to reduced help seeking
behaviors and poorer outcomes for students. Anonymity has been consistently cited in
studies of audience response systems as a positive factor that students appreciate, as
discussed in chapter 2. "Clickers" provide the opportunity to reveal answers and thoughts
without the jeopardy or stigma of being seen as wrong. Participants in the experimental
group (‘Group A’) indicated that self-reflective responses occurred from seeing peer
responses and comparing “how (he or she) measured up.” Most students want to be part
of the majority and will conform even if conformity is not in their own best self-interest.
Conformity effects were experienced overtly when paddles were utilized, but
when “clickers” were used, these same effects were experienced covertly. When
participants had more opportunities to view their answers in comparison to their peers’
answers through histograms and other similar displays, they seemed to experience
increased ability for self-reflection and self-evaluation. In other words, participants
engaged in comparisons on the individual and peer group levels. When participants held up
paddles in response to instructor questions, conformity effects occurred because students
looked around and changed answers to match those of their peers, reportedly with more
concern for fitting in than for learning. Using paddles to respond to polls, as in the case of
the current effort, much like raising hands, can result in instructors falsely believing that
students have a better grasp on conceptual knowledge when in fact conformity effects
may be clouding the issue. The anonymity provided by “clickers” decreased conformity
effects, thereby, improving the ability of instructors to gauge student understanding. This
process required participants to respond relying on their individual level of preparation
and conceptual understanding for polling items. Students entered a situation in which
their efforts were measured against others, quickly and with regularity, but privately first.
"Clickers" are a technological tool that facilitates academic integrity and honesty in a safe
way and removes the conformity effects to which students are so prone.
“Clickers’” Relationship to Metacognitive Processes
The quantitative data collected suggested that participant metacognitions in
lecture were influenced by both types of response devices (e.g., “clickers,” and paddles),
but were influenced more by the use of paddles. The obvious difference between the use
of “clickers” and paddles on metacognition occurred as a result of the visible nature of
paddles. Most participants did not like being seen with the wrong response by their peers.
There appeared to be different qualities of metacognitions that the response systems
elicited. In the comparison group (paddles), the interview respondents with higher
outcomes viewed response systems as interchangeable, while respondents with lower and
moderate outcomes expressed a stronger preference for the “clickers.” A different view
was expressed by the experimental group (“clickers”); in the experimental group the
interview respondents that rated “clickers” the highest had lower performance outcomes
as compared to those who rated “clickers” the lowest. Respondents in this group that had
the lowest rating for “clickers” had the higher performance outcomes. This suggests that
those who were prepared, savvy participants enjoyed holding up paddle responses for all
to see, while the mid to low performing students appreciated the safety of learning and
responding to questions in a confidential manner. This is an important implication
because the participants who rated paddles high and “clickers” low had the higher
performance outcomes of the respondents. Note-taking was largely unchanged for these
students, and social comparisons were less likely to occur or to be a concern that reflected
some level of vulnerability. In fact, if these students answered incorrectly, the situation
was viewed as a learning opportunity more than a threat. In other words, these were
participants who would navigate academia successfully regardless of response device.
This may reflect findings in current research on social comparisons, which indicate that
when students have feelings of power, the use of social comparison information may be
reduced, while the "less powerful" are subject to conformity pressures (Johnson &
Lammers, 2012, p. 5). The majority of participants were the mid and lower performing
respondents, who were not favorable toward paddles, but were appreciative of the privacy
“clickers” afforded. This privacy included the differences that occurred in the learning
process, that is, freedom from worry about what peers think and “not wanting to be
wrong.” Research suggested that there is a relationship between metacognition and
cognitions and help seeking behaviors, and that there is an interaction between
technology utilized and the learner (Aleven, Stahl, Schworm, Fischer, & Wallace, 2003).
The low to middle performing participant may be deterred by the overt nature of paddles,
or even hand raising. This participant seems unlikely to seek clarification at the risk of
revealing a lack of understanding of concepts to the peer group.
The MSLQ has proved to be a reliable and valid measure of motivated strategies
for learning that is course or subject specific (Artino, 2005). This created the expectation
that the post-MSLQ would demonstrate improvements in metacognitive processes
associated with motivating strategies for learning. This would imply that the post-MSLQ
would have a higher mean than the pre-MSLQ. The expectation that motivated strategies
for learning related to metacognitive processes would increase as the course progressed
was not realized in the current study. Among the Fall Groups, the pre-MSLQ was
significantly higher than the post-MSLQ. As seen in chapter 4, the pre- and post-
Motivated Strategies for Learning Questionnaire scores were highly correlated. The
pre-Motivated Strategies for Learning Questionnaire did have predictive reliability for the
post-Motivated Strategies for Learning Questionnaire.
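The predictive relationship just described can be pictured as a simple regression of
post-MSLQ scores on pre-MSLQ scores, alongside a paired test of the pre-to-post drop.
The sketch below is illustrative only: the scores are invented, and Python's scipy library
is assumed as the computing tool; the study's actual computation is not reproduced.

    # Illustrative sketch: does the pre-MSLQ predict the post-MSLQ, and is
    # the pre-to-post decline significant? All scores are invented.
    from scipy import stats

    pre_mslq = [5.1, 4.8, 5.6, 4.2, 5.9, 5.0, 4.5, 5.3]   # invented 7-point scale means
    post_mslq = [4.6, 4.4, 5.1, 4.0, 5.2, 4.7, 4.1, 4.9]  # invented; note the downward shift

    slope, intercept, r, p, se = stats.linregress(pre_mslq, post_mslq)
    print(f"pre predicts post: r = {r:.3f}, p = {p:.3f}")

    t_stat, p_val = stats.ttest_rel(pre_mslq, post_mslq)  # paired t-test on the drop
    print(f"paired t({len(pre_mslq) - 1}) = {t_stat:.3f}, p = {p_val:.3f}")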
The information about the pre-post-MSLQ brings up considerations about newly
admitted college students' perception of their own ability. The pre-Motivated Strategies
for Learning Questionnaire displayed an intuitive understanding of expected academic
performance. Intuition is the perception of the unconscious mind (Bradley, 2011). After
four weeks of the first semester in a collegiate environment, the post-MSLQ revealed an
adjusted perception of collegiate academic ability; essentially, perception of ability was
lowered to a more realistic assessment. These results about undergraduate confidence
dropping to a realistic level are consistent with current research (Jones, Anotonekot &
Green, 2012). There seems to be an unrealistic sense of ability that influenced the
pre-MSLQ results, so that the pre-MSLQ did not have predictive ability for academic
outcomes, while the post-Motivated Strategies for Learning Questionnaire did, at the .05
level, which was lower than anticipated given the known validity of the MSLQ.
Metacognition is purported to be essential to the 21st century learner, so aiding
student metacognitive development is recommended for quality higher education courses
(Anderman, 2011). It is possible, however, that not all metacognitions have the
productive quality that leads to higher outcomes. Given the mixed results of the current
effort, a broader outlook would be helpful: some metacognitions may contribute along a
more productive and positive path to learning outcomes than others. Moreover, student
beliefs about the technology tools utilized in the classroom may contribute to the
usefulness of a specific tool for the individual student. Furthermore, while newly
admitted college students’ perception of ability was not a focus of the current effort,
the results of the pre- and post-MSLQ required further attention. The results of these
measures were consistent with the concept of an unrealistic sense of ability: once
experience is gained, abilities can be more appropriately assessed. Perhaps
metacognitions should be viewed in terms of types, for example, productive and
unproductive, or as having a self-reflective quality as compared to a group-reflective
quality.
“Clickers” and Student Outcomes
While paddles seemed to elicit more metacognitions in each of the groups (e.g.,
Summer and Fall), the performance outcomes were significantly higher when “clickers”
were utilized. This illustrates that there were implicit issues underlying the
interaction of metacognitions and response devices that were not examined in the current
effort. The quantitative results indicated that metacognitions occurred significantly
more often for the paddle group, but the qualitative data suggested that the conformity
effects experienced overshadowed the metacognitive process; hence the metacognitive
process for the majority of participants was unproductive. Adding to these mixed
results, in the Fall experimental group, though 25.3% of respondents indicated that the
“clickers” were not effective, respondents also indicated that “clickers” helped them to
think critically about the material, and about half of this group commented on ways that
“clickers” clarified important concepts. In this “clicker” group, the participants
interviewed who were less impressed by the device also had the highest quiz mean. As
suggested previously, the discrepancy may be that, with or without “clickers,” these
students would perform well. It was specifically the respondents with low and middle
performance outcomes who seemed to experience enhanced learning with “clicker” use.
The Summer Group provided results that more directly indicated “clickers” were
preferred: “clickers” produced better outcomes and had more academic benefits than
paddles. More specifically, the academic outcomes for this group were significantly
improved by the use of “clickers.” One reason, in part, is that over the weeks
participant understanding of the nature of the quizzes, and of how to do well on them,
improved; although experience is a confounding factor, the outcomes are highly
significant (p = .001) with “clicker” use. Therefore, some level of attribution to
“clickers” seems appropriate. Notwithstanding, as mentioned previously, instructional
strategies known to enhance learning were embedded within the design of the course, so
the combination of the learning strategies, “clicker” use, and growing course experience
are each in part responsible for propelling students to improved learning outcomes.
“Clickers” alone have produced inconsistent results in improving academic outcomes, but
with each group in the current effort student outcomes were significantly higher with
“clickers.”
The Effect of Feedback Type and Engagement in Learning
A key finding in the current study was that feedback type and engagement in
learning were tied to the level of visibility of feedback devices. A recurring theme in
research is the effectiveness of “clickers” in engaging and refocusing students (James &
Willoughby, 2011). In the current effort, histograms were displayed based on “clicker”
polling, and many participants indicated that engagement in academic-related social
comparisons resulted from these histograms. These social comparisons were made within
the safety of anonymity; there seemed to be a more authentic engagement, free of
negative social pressure and comparison. This may reflect findings in current research
on social comparisons, which indicate that when students feel they have power, the use
of social comparison information may be reduced (Johnson & Lammers, 2012). These
self-reflective qualities are part of the metacognitive processes. With “clickers,” it
appears that participants were engaged in such a fashion that the learner could
self-reflect and, as a natural consequence, improve metacognitive processes. The
anonymity of electronic feedback lent honesty and authenticity to responses, so the
instructor could have a greater level of confidence in the quality of learning taking
place in a large lecture setting. Very few qualitative approaches have been utilized in
“clicker” research; only one qualitative piece was found during the literature review
for the current effort, and it pertained to the types of conversations resulting from
“clicker” questions (James & Willoughby, 2011). As discussed in the literature review,
qualitative work on how “clickers” influence learning appears to have been missing until
the current study.
Implications
As a result of the findings from the current study, there are practices that can be
recommended for electronic feedback devices. Seeing that “clickers” utilized in
conjunction with instructional strategies are the combination that produces higher
performance outcomes (e.g., the current effort; Mayer et al., 2009), and that “clickers”
are recommended to be utilized with consideration of instructional strategies (Mayer,
2008; Mayer et al., 2009), the following recommendations are presented in combination
with several research-based strategies for instructors from Ambrose, Bridges, DiPietro,
Lovett, and Norman (2010). Many metacognitive strategies are easily formatted into
“clicker” questions or surveys, including: a) polling students about beliefs regarding
learning; b) guiding students in a metacognitive reflection following “clicker”
questions/surveys, which allows modeling of metacognitive processes by the instructor;
c) scaffolding metacognitive processes (e.g., a series of tasks that progressively guide
students to deeper levels of metacognition by developing a series of linked
question/survey items); d) introducing various levels of knowledge using “clicker”
items; e) integrating discussions surrounding the use of strategies that are subject or
context specific into “clicker” items; and f) guiding students in monitoring
performance. First, create opportunities during instruction to guide students in
developing metacognitive awareness; such awareness is one of the traits of
self-regulated and lifelong learners. While research affirms the importance and benefits
of metacognitive self-regulation in academic and professional pursuits, few courses
address metacognitive processes, although incorporating metacognition into learning
goals is recommended (Salatas Waters & Schneider, 2010). Second, provide opportunities
for self-assessment (Anderman, 2011). This is an activity that most students would not
engage in without guided instruction; “clickers” can be utilized to provide such
opportunities, for example by employing “clicker” questions to assess level of
preparedness (e.g., whether students read the text) and level of conceptual
understanding of lecture and text material. When “clicker” item results are displayed,
students reflect on the level and effectiveness of their preparation for lecture. Third,
students can monitor their performance with “clickers” when instructors ask questions
about reading and lecture materials. These activities function as guided
self-assessments as well as a more precise measure for the instructor to provide a
learner-centered environment that accurately addresses student needs on the most
individualized level possible.
The current study suggests that metacognitions are influenced by response devices
and that this influence can produce different types of metacognitive results. Some of
the resulting metacognitions lead to self-regulatory behaviors, hence the term used to
distinguish such behaviors, metacognitive self-regulation. Some of the influence upon
metacognitions resulted in positive self-monitoring behaviors and/or emotions (e.g.,
changes in note taking, help-seeking behaviors, pride resulting from correct answers,
fitting in with peers, and level of preparedness). Other aspects of the influence on
metacognition were negative in terms of thoughts, behaviors, and/or emotions (e.g.,
clicking any answer, changing paddle answers, embarrassment resulting from incorrect
answers). The metacognitively aware student discerns and selects productive thoughts
while disregarding unproductive thoughts; this type of thinking is strategic in nature
(Anderson & Krathwohl, 2001). The disregarded thoughts are essentially pruned, and the
thoughts that are chosen become pathways in the brain; thoughts that are engaged in more
frequently become well-worn paths, leading to automaticity (Bargh & Chartrand, 1999).
Metacognitions can be elicited through experiences in academic settings. The current
study indicates that not only are metacognitions influenced through feedback devices,
but different types of metacognitions may be stimulated by the different feedback
devices.
Metacognition is a recommended part of the 21st century college education (Anderman,
2011). If different types of metacognitions are products of instructional technologies,
clarifying the types of metacognitions that are produced (e.g., productive or
unproductive) and which lead to improved outcomes is essential to planning effective
student learning. It is therefore important to clarify which types of metacognition lead
to the most desirable learning outcomes. If student thoughts, specifically students’
metacognitive self-regulation, can be systematically influenced through the
instructional design and tools of a course, perhaps more successful college outcomes
could be garnered and retention rates could be increased.
Limitations
The study had several limitations. The first is that the primary researcher was a
teaching assistant for one of the four labs that participated in the Fall comparison
group. To reduce the possible influence, participant interviews were not conducted with
any participants in the primary researcher’s lab section. This study was able to
establish with only a moderate to low level of confidence the influence that response
systems have upon students. The current effort was quasi-experimental, with an
experimental and a comparison group but no control group. Also, groups, not individuals,
were randomly assigned. Without random assignment of individuals and a control group, a
causal inference cannot be made; the results of this study are therefore correlational
in nature rather than causal. Truly causal results require a more rigorous experimental
design, which is seldom possible in the educational environment. Another limitation of
the current effort was due to human error: the study was not able to hold the Summer
Group to the intended pre-post-test design, which might otherwise have lent strength to
the results. A third limitation of the study regards the (Electronic) Feedback Devices
and Metacognitive Self-Regulation measure. This measure had a low alpha when
administered to the Summer Group: when administered for “clicker” use the alpha was .39,
and when administered for paddles the alpha was .47. This is problematic and may
decrease the strength of the results. However, there may be confounding factors, such as
the possibility that group differences contributed to the change in the measure, because
the alpha was very strong when administered to the Fall comparison (α = .92) and
experimental (α = .96) groups. Furthermore, this was the first time the primary
researcher developed measures to examine constructs, which are particularly difficult to
measure because of the need to rely on self-reporting. If the primary researcher
replicated this study, the measures created would be changed so that each has a 7-point
Likert scale instead of a 5-point scale. This change would have two benefits: a) the
data could be compared with the MSLQ more easily, and b) two more categories of
responses might provide more insight into the metacognitive processes that are
influenced by the response devices. A fourth limitation resulted from the reliance on
self-report surveys for the greater part of the data collection. This may have
introduced social desirability effects; desirability bias can occur when participants
report only answers they believe are socially acceptable. Although results in the
current effort were strengthened by attempting to triangulate data, a replication of the
current study would help to confirm the results.
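To make the internal-consistency figures above concrete, the following is a minimal sketch of how Cronbach’s alpha can be computed from a respondents-by-items matrix of Likert scores. The score matrix is randomly generated filler, not the study’s data, and the function name is an assumption for illustration.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                          # number of scale items
        item_variances = items.var(axis=0, ddof=1)  # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Randomly generated 5-point Likert responses: 30 respondents, 15 items.
    rng = np.random.default_rng(seed=0)
    scores = rng.integers(low=1, high=6, size=(30, 15))
    print(round(cronbach_alpha(scores), 2))

Random, uncorrelated responses like these would yield an alpha near zero; a coherent scale such as the Fall administration (α = .92) requires items that covary.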
Future Research
There was an interesting occurrence in the variables regarding extent of use. The
relationships between extent of use and outcomes for the Summer Group and for the Fall
experimental and comparison groups were not significant. However, for the Fall groups
there are two contrasts approaching significance that could be researched further: the
25% extent-of-use bin in the experimental group spiked and then dropped for the 50% bin,
and a similar pattern occurred in the comparison group. A point of research for future
studies would be to examine the mean plot between the 0-25% and 0-100% ranges of use.
These contrasts are seen in Tables 12 and 13. Table 12 shows that the participants who
reported using “clickers” 0% of the time performed more poorly on the quizzes than those
participants who reported 25%, 50%, 75%, or 100% use. The participants who reported use
25% of the time have the highest mean of the “clicker” group. Beyond this point,
outcomes seem to improve with more extensive “clicker” use, as was hypothesized in the
current study. A similar pattern exists for the comparison group (paddles), although no
participants reported 0% use. This may be due to two factors. First, the paddles were
provided when lecture began and collected at the end of the lecture, so all the
participants had access to the paddles. Second, the visible nature of the paddles adds a
measure of accountability; it is probably safe to assume students would not want to
appear to instructors or TAs as not participating. At 50% use of paddles, performance on
quizzes drops significantly, and the highest point occurs for participants who reported
paddle use 75% of the time. It is interesting that the participants who reported 25% use
in the experimental group (“clickers”) and the participants who reported 75% use in the
comparison group (paddles) have the highest performance outcomes for their respective
groups. This may be characteristic of those participants, referred to in the above
section on “Clickers” and Metacognition, who would perform well regardless of response
device.
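As a sketch of the extent-of-use contrasts just described, the group means behind a plot like Tables 12 and 13 can be computed with a simple grouped aggregation; the data frame, column names, and values below are hypothetical stand-ins for the study’s records, not its actual data.

    import pandas as pd

    # Hypothetical participant records: reported extent of "clicker" use
    # (as a percentage bin) and quiz score; every value here is invented.
    df = pd.DataFrame({
        "extent_of_use": [0, 0, 25, 25, 50, 50, 75, 75, 100, 100],
        "quiz_score":    [5.5, 6.0, 9.5, 9.0, 7.0, 7.5, 8.0, 8.5, 8.5, 9.0],
    })

    # Mean quiz score per extent-of-use bin, i.e., the quantity whose spike
    # at 25% and dip at 50% the text describes for the experimental group.
    means = df.groupby("extent_of_use")["quiz_score"].agg(["mean", "count"])
    print(means)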
Another area recommended for future research is the types of metacognitions
elicited by response devices or other technology tools utilized in the educational
setting. The findings of this study indicate that metacognitions can be productive or
unproductive, and that educational tools may encourage a self-reflective and/or a
group-reflective quality. An interesting caveat that may have relevance for this study
and future research is found in recent work on neuroscience and education discussed by
Immordino-Yang, Christodoulou, and Singh (2012). There is a state of rest, default mode
(DM) rest, that seems connected with cognitive performance; according to several
researchers, people who are able to engage in higher connectivity during DM rest, a form
of rest that the brain requires, perform better on cognitive measures (Immordino-Yang,
Christodoulou, & Singh, 2012). Perhaps in this concept lies the answer to the question
about metacognitive processes and “clickers” versus paddles. Paddles may have elicited
increased metacognitions related to social context, producing more negative and
distracting metacognitions, while “clickers” allowed safe social comparisons that did
not interfere with internal self-reflection in the learning process (e.g., the
individual’s level of preparation and ability to self-monitor privately). If the paddles
provided distractions and prevented productive internal reflections because of increased
social comparisons, this indicates that the social processing required was a
distraction. In addition, if “clickers” allowed for higher internal reflections because
of anonymity, then this type of feedback device may indeed contribute to a
learner-centered environment through more beneficial or productive metacognitions.
Moreover, if instructional technologies can produce or influence different types of
metacognitions, clarification about the types of metacognitions and whether and how they
may lead to improved outcomes is an area of research that would benefit educational
technology understanding.

Table 12

Fall Experimental Group Extent of Use (“Clickers”)

Table 13

Fall Comparison Group Extent of Use (Paddles)
Summary
This study sought to determine whether and how metacognitions are influenced
by “clickers.” To accomplish this, the study employed multiple groups, multiple
measures to triangulate data, and multiple points of data collection. The study had a
pre-post-test, quasi-experimental design with an experimental group (“clickers”) and a
comparison group (paddles).

Conformity was found to be a strong underlying influence during the Summer and
Fall. In the Fall comparison group (paddles), a small margin (less than 10%) reported
that they debated with themselves as to whether to stay with their answer or change to
match the majority. These results also indicate that the quality of feedback to the
lecturer may be compromised. In addition, electronic feedback devices create a more
learner-centered environment in three ways. The first is by providing anonymity so that
students can self-monitor their own level of preparedness and understanding of concepts.
The second way learning is student-centered is that, by receiving more authentic student
answers, the lecturer is able to focus on areas of concern and move more quickly over
those areas that are well understood. This maximizes the level of specificity that can
be accomplished in this educational context, to as close to an individual level as
possible, certainly more so than paddles, and in a more authentic manner. The third way
learning is student-centered is that “clickers” may improve outcomes for mid- and
low-performing students. As previously mentioned, the higher-performing participants
seemed to do well regardless of response device; in addition, these respondents were
split between enjoying the visible nature of paddles and simply experiencing paddles
without negative emotions. The experience of those participants who had low and middle
performance outcomes contrasts with the experience of the higher performers: the low and
middle performance outcome groups expressed negative feelings associated with paddles
and a strong preference for “clickers.” The private nature of “clickers” seemed to allow
an enhanced comfort level as well as enhanced learning outcomes. Also, demographics were
examined for significance; results were somewhat inconsistent, and significance may be
due to a variety of individual and group characteristics and experiences.
The results of this study suggest that considering student learning goals and
outcomes when deciding how to garner real-time feedback from students may be important.
The visible nature of paddles, similar to raising hands to respond to questions, tends
to elicit negative feelings from students, and such negative feelings may not be
productive for performance outcomes. However, there are times in educational
environments when the effects of conformity may serve an educational purpose or
contribute to student learning outcomes. Furthermore, this study found that there may be
different types of metacognitions elicited by different feedback devices. Although
self-report surveys about metacognition in lecture (e.g., the PQ/CQ in Appendix C)
indicated that metacognitions seem to be higher with paddle use, the learning outcomes
in the current effort were significantly better for the “clicker” groups in Summer
(p < .001) and Fall (p < .001). This seems to indicate that different types, or
different levels, of metacognitions were experienced. The types of metacognitions seem
to include productive and unproductive features and may have a self-reflective quality
as compared to a group-reflective quality.
Future research is still necessary to replicate and add to the findings of the
current study. Extent of use seems to account for some group differences, and
metacognitions may have a broader influence than previously perceived. Nevertheless, the
current effort added to the body of knowledge about electronic feedback devices, their
influence upon metacognition, and how students experience feedback devices in large
lecture contexts. In addition, the measures of metacognition and feedback devices that
were developed provide instruments that specifically address garnering feedback from
students and the level of influence students attribute to the devices. Furthermore, the
current effort illustrates the importance of combining qualitative with quantitative
data to develop a more complete picture when measuring constructs: the picture given by
the quantitative data was at times misleading, and the qualitative information provided
clarification and insight for the quantitative data.
Bibliography
Ainley, M., Hidi, S., & Berndorff, D. (2002). Interest, learning, and the psychological
processes that mediate their relationship. Journal of Educational Psychology,
94(3), 545-561.

Aleven, V., Stahl, E., Schworm, S., Fischer, F., & Wallace, R. (2003). Help seeking and
help design in interactive learning environments. Review of Educational Research,
73(3), 277-320.
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010).
How learning works: Seven research-based principles for smart teaching. California:
Jossey-Bass.

Anderman, E. M. (2011). Educational psychology in the twenty-first century: Challenges
for our community. Educational Psychologist, 46(3).

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and
assessing: A revision of Bloom’s taxonomy of educational objectives. New York:
Addison Wesley Longman, Inc.

Arnold, K. E., Tanes, Z., & King, A. S. (2010). Administrative perceptions of data
mining software signals: Promoting student success and retention. The Journal of
Academic Administration in Higher Education, 6(2), 31-42.

Aronson, E. (2008). The social animal. New York, NY: Worth Publishers.
Artino, A. R. (2005). Review of the Motivated Strategies for Learning Questionnaire.
Online submission. (ERIC Document Reproduction Service No. ED 499 083)

Bandura, A. (1977). Self-efficacy: The exercise of control. New York: W. H. Freeman.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive
theory. Englewood Cliffs, NJ: Prentice Hall.

Beatty, I. D., Grace, W. J., Leonard, W. J., & Dufresne, R. J. (2006). Designing
effective questions for classroom response systems teaching. American Journal of
Physics, 74(1), 31-39.

Bradley, R. T. (2011). Resolving the enigma of nonlocal intuition: A quantum-holographic
approach. In Handbook of intuition research (pp. 197-246). UK: Edward Elgar
Publishing Limited.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How experts differ from
novices. In How people learn (pp. 31-50). Washington, D.C.: National Academy
Press.

Brown, K. (2010). Dang you tricked me into learning: Chaos, current events competition
in the legal research classroom. Retrieved May 7, 2012, from
http://ssrn.com.libproxy.usc.edu/abstract=1602102

Brunstein, R. A., & Lederman, L. M. (2006). The use and evolution of an audience
response system. In D. A. Banks (Ed.), Audience response systems in higher
education: Applications and cases (pp. 40-52). Retrieved from
http://books.google.com.libproxy.usc.edu/books?hl=en&lr=&id=xFqhg6xd4VEC&oi=fnd&pg=PA
Caldwell, J. E. (2007). Clickers in the large classroom: Current research and
best-practice tips. CBE-Life Sciences Education, 6, 9-20.

Carlson, S. (2005). The net generation goes to college. The Chronicle of Higher
Education, 52(7), A34.

Chen, C. S. (2002). Self-regulated learning strategies and achievement in an
introduction to information systems course. Information Technology, Learning, and
Performance Journal, 20(1), 11-25.

Chen, J. C., Whittinghill, D. C., & Kadlowec, J. A. (2010). Classes that click: Fast,
rich feedback to enhance students’ learning and satisfaction. Journal of
Engineering Education, 99(2), 158-169.

Chen, P. D., Lambert, A. D., & Guidry, K. R. (2009). Engaging on-line learners: The
impact of web-based learning technology on college students. Computers &
Education, 54(4), 1222-1232.

Clark, R. E., & Feldon, D. F. (2005). Five common but questionable principles of
multimedia learning. In The Cambridge handbook of multimedia learning. Cambridge:
Cambridge University Press.

Collins, A., & Halverson, R. (2009). Rethinking education in the age of technology:
The digital revolution and schooling in America. New York: Teachers College Press.

Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five
traditions. California: Sage Publications, Inc.
Dembo, M., & Seli, H. (2007). Motivation and learning strategies for college success:
A self-management approach (3rd ed.). New York: Taylor & Francis.

Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual
lens on metacognition, self-regulation, and self-regulated learning. Educational
Psychology Review, 20, 391-409.
Duncan, D. (2006). Clickers: A new teaching aid with exceptional promise.
Astronomy Education Review, 5(1).
Eastman, J. K., Iyer, R., & Reisenwitz, T. H. (2008). The impact of unethical reasoning
on different types of academic dishonesty: An exploratory study. Journal of
College Teaching & Learning, 5(12).
Flavell, J. (1979). Metacognition and cognitive monitoring: A new era of cognitive
development inquiry. American Psychologist, 34, 906-911.
Halawi, L., Pires, S., & McCarthy, R. (2009). An evaluation of e-learning on the basis
of Bloom’s taxonomy: An exploratory study. Journal of Education for Business,
84(6), 374-380.

Halpern, D. F. (Ed.). (2010). Undergraduate education in psychology: A blueprint for
the future of the discipline. Washington, DC: American Psychological Association.
doi:10.1037/12063-000
Immordino-Yang, M. H., Christodoulou, J. A. & Singh, V. (2012). Rest is not
idleness: Implications of the brain’s default mode for human development and
education. Manuscript submitted for publication.
James, M. C., & Willoughby, S. (2011). Listening to student conversations during
clicker questions: What you have not heard might surprise you! American Journal of
Physics, 79(1), 123.

Johnson, C. S., & Lammers, J. (2012). The powerful disregard social comparison
information. Journal of Experimental Social Psychology, 48(1), 329-334.

Jones, M. E., Antonenko, P. D., & Greenwood, C. M. (2012). The impact of collaborative
and individualized student response system strategies on learner motivation,
metacognition, and knowledge transfer. Journal of Computer Assisted Learning.
doi:10.1111/j.1365-2729.2011.00470.x

Kay, R., Lesage, A., & Knaack, L. (2010). Examining the use of audience response
systems in secondary school classrooms: A formative analysis. Journal of
Interactive Learning Research, 21(3), 343-365.
Kelly, K. G. (2009). Student response systems (“clickers”) in the psychology classroom:
A beginner’s guide. Office of Teaching Resources in Psychology. Retrieved from
http://www.teachpsych.com/otrp/resources/kelly09.pdf

King, A. (1993). From sage on the stage to guide on the side. College Teaching, 41,
30-36.

Koestner, R., Taylor, G., Losier, G., & Fichman, L. (2010). Self-regulation and
adaptation during and after college: A one-year prospective study. Personality and
Individual Differences, 49, 869-873.
Lantz, M. E. (2010). The use of ‘clickers’ in the classroom: Teaching innovation or
merely an amusing novelty? Computers in Human Behavior, 26, 556-561.
Lasry, N. (2008). Clickers or flashcards: Is there really a difference? The Physics
Teacher, 46.
MacGeorge, E. L., Homan, S. R., Dunning, J. B., Elmore, D., Bodie, G. D., Evans, E.,
Khichadia, S., Lichti, S. M., Feng, B., & Geddes, B. (2008). Student evaluation of
audience response technology. Educational Technology Research and Development, 56,
125-145.

Mayer, R. E. (2008). Learning and instruction. New Jersey: Pearson Education, Inc.

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia
learning. Educational Psychologist, 38(1), 43-52.

Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M.,
Campbell, J., Knight, A., & Zhang, H. (2009). Clickers in college classrooms:
Fostering learning with questioning methods in large lecture classes. Contemporary
Educational Psychology, 34, 51-57.

Mazur, E. (1991). Peer instruction: A user’s manual. New Jersey: Prentice-Hall.

McCabe, D. L., Trevino, L. K., & Butterfield, K. D. (2001). Cheating in academic
institutions: A decade of research. Ethics & Behavior, 11, 219-232.

Meltzer, D. E., & Manivannan, K. (2002). Transforming the lecture-hall environment: The
fully interactive physics lecture. American Journal of Physics, 70(6), 639-654.
Mollborn, S., & Hoekstra, A. (2010). “A meeting of minds”: Using clickers for critical
thinking and discussion in large sociology classes. Teaching Sociology, 38(1),
18-27.
Morin, A. (1993). Self-talk and self-awareness: On the nature of the relation. The
Journal of Mind and Behavior, 14(3), 223-234.

Nora, A., & Blanca, B. P. (2009). Technology and higher education: The impact of
e-learning approaches on student academic achievement, perceptions and
persistence. Journal of College Student Retention: Research, Theory and Practice,
10(1), 3-19.

Paas, F., van Gog, T., & Sweller, J. (2010). Cognitive load theory: New
conceptualizations, specifications, and integrated research perspectives.
Educational Psychology Review, 22, 115-121. doi:10.1007/s10648-010-9133-8

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts
and evidence. Psychological Science in the Public Interest, 9(3), 105-119.

Patton, M. Q. (2002). Qualitative research and evaluation methods. California: Sage
Publications, Inc.

Phan, H. P. (2009). Relations between goals, self-efficacy, critical thinking, and deep
processing strategies: A path analysis. Educational Psychology, 29(7), 777-799.

Phan, H. P. (2010). Students’ academic performance and various cognitive processes of
learning: An integrative framework and empirical analysis. Educational Psychology,
30(3), 297-322.
Pintrich, P. R., & Schunk, D. H. (2007). Motivation in education: Theory, research, and
application (3rd ed.). New Jersey: Merrill Prentice Hall.
Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability
and predictive validity of the Motivated Strategies for Learning Questionnaire.
Educational and Psychological Measurement, 53, 801-813.

Popovich, P. M., Gullekson, N., Morris, S., & Morse, B. (2007). Comparing attitudes
toward computer usage by undergraduates from 1986 to 2005. Computers in Human
Behavior, 24, 986-992.

Prather, E., & Brissenden, G. (2009). Clickers as data gathering tools and students’
attitudes, motivations, and beliefs on their use in this application. Astronomy
Education Review, 8, 010103. doi:10.3847/AER2009004

Prather, E., Slater, T. F., Brissenden, G., & Dokter, E. F. (2006). To click or not to
click is not the question: How research with clickers develops a better
understanding of when learning happens in your classroom. Bulletin of the American
Astronomical Society, 38, 948. Retrieved from http://adsabs.harvard.edu

Rodriguez, M. (2010). A Q-methodology study of adult English language learners’
perceptions of audience response systems (clickers) as communication aide
(Doctoral dissertation). Retrieved from
http://gradworks.umi.com.libproxy.usc.edu/3419769.pdf
Ross, M. E., Green, S. B., Glennon, A. D. S., & Tollefson, N. (2006). College
students' study strategies as a function of testing: An investigation into
metacognitive self-regulation. Innovative Higher Education, 30(5), 361-376.
Ross, S. M., Morrison, G. R., & Lowther, D. L. (2010). Educational technology research
past and present: Balancing rigor and relevance to impact school learning.
Contemporary Educational Technology, 1, 17-35.
Salatas Waters, H., & Schneider, W. (2010). Metacognition, strategy use, and
instruction. New York: Guilford Press.

Salkind, N. J. (2008). Statistics for people who think they hate statistics.
California: Sage Publications, Inc.

Schraw, G., Crippen, K., & Hartley, K. (2006). Promoting self-regulation in science
education: Metacognition as part of a broader perspective on learning. Research in
Science Education, 36(1), 111-139. doi:10.1007/s11165-005-3917-8
Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary
Educational Psychology, 19, 460-475.

Schunk, D., Pintrich, P. R., & Meece, J. L. (2008). Motivation in education: Theory,
research, and application. New Jersey: Pearson Education, Inc.

Schurdak, J. (1967). An approach to the use of computers in the instructional process
and an evaluation. American Educational Research Journal, 4, 59-73.
doi:10.3102/00028312004001059

Schwartz, B. L., & Metcalfe, J. (1994). Methodological problems and pitfalls in the
study of human metacognition. In J. Metcalfe & A. Shimamura (Eds.), Metacognition:
Knowing about knowing (pp. 93-113). Cambridge, MA: The MIT Press.

Shashaani, L. (1994). Gender-differences in computer experience and its influence on
computer attitudes. Journal of Educational Computing Research, 11(4), 347-367.
Stowell, J. R., Oldham, T., & Bennett, D. (2010). Using student response systems
(“clickers”) to combat conformity and shyness. Teaching of Psychology,
37(2), 135-140.
Sweller, J. (2010). Element interactivity and intrinsic, extraneous, and germane
cognitive load. Educational Psychology Review, 22, 123-138.
doi:10.1007/s10648-010-9128-5

Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011).
What forty years of research says about the impact of technology on learning: A
second-order meta-analysis and validation study. Review of Educational Research.
doi:10.3102/0034654310393361

Turner, M., Kitchenham, B., Brereton, P., Charters, S., & Budgen, D. (2010). Does the
technology acceptance model predict actual use? A systematic literature review.
Information and Software Technology, 52(5), 463-479.

US Census Bureau (2005). Computer and Internet use in the United States: 2003. US
Department of Commerce, Economics and Statistics Administration.

Van Dijk, L., Van Der Berg, G., & Van Keulen, H. (2001). Interactive lectures in
engineering education. European Journal of Engineering Education, 26, 15-28.
Van Merrienboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking the load off a
learner’s mind: Instructional design for complex learning. Educational
Psychologist, 38(1), 5-13.

Van Merrienboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and complex
learning: Recent developments and future directions. Educational Psychology
Review, 17(2), 147-177.

Warschauer, M., & Matuchniak, T. (2009). New technology and digital worlds: Analyzing
evidence of equity in access, use, and outcomes. Review of Research in Education,
34, 179-225. doi:10.3102/0091732X09349791
Watkins, L. J., & Mazur, E. (2008). Peer instruction: From Harvard to the two-year
college. American Journal of Physics, 76(11), 1066-1069.
Whipp, J. L. (2004). Self-regulation in a web-based course: A case study. Educational
Technology Research and Development, 52(4), 5-22.

Wolters, C. A. (1998). Self-regulated learning and college students’ regulation of
motivation. Journal of Educational Psychology, 90(2), 224-235.

Wolters, C. A. (2003). Understanding procrastination from a self-regulated learning
perspective. Journal of Educational Psychology, 95(1), 179-187.

Wolters, C. A. (2010). Regulation of motivation: Evaluating an underemphasized aspect
of self-regulated learning. Educational Psychologist, 38(4), 189-205.

Zimmerman, B. J. (2000a). Self-efficacy: An essential motive to learn. Contemporary
Educational Psychology, 25, 82-91. doi:10.1006/ceps.1999.1016

Zimmerman, B. J. (2000b). Attaining self-regulation: A social cognitive perspective. In
M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation
(pp. 13-39). San Diego: Academic Press.
Appendix A
Biographical and Demographic Data
Name____________________________________ Age_________
Major_______________________ Circle one
Athlete Non-Athlete
Ethnicity – Check boxes that apply
Latino/Hispanic African-American Asian
White Other _ _______________
English language (check the one that best describes you)
English is my first/primary language
English is my second language
I am an English language learner
Appendix B
Motivated Strategies for Learning Questionnaire (MSLQ)
*Please read the following questions and think about whether “clickers” influenced
your learning. Circle the answer that is most correct for you.
1 = Not at all true of me 7 = Very True of me
1. I often find myself questioning
things I hear or read in this course to
decide if I find them convincing
1 2 3 4 5 6 7
2. When a theory, interpretation, or
conclusion is presented in class or
readings, I try to decide if there is
good supporting evidence
1 2 3 4 5 6 7
3. I treat the course material as a
starting point and try to develop my
own ideas about it
1 2 3 4 5 6 7
4. I try to play around with ideas of my
own related to what I am learning in
this course
1 2 3 4 5 6 7
5. Whenever I read or hear an assertion
or conclusion in this class, I think
about possible alternatives
1 2 3 4 5 6 7
6. During class time I often miss
important points because I am
thinking of other things (REVERSE)
1 2 3 4 5 6 7
7. When reading for this course, I make
up questions to help focus my
reading
1 2 3 4 5 6 7
8. When I am confused about
something I’m reading for this class,
I go back and try to figure it out
1 2 3 4 5 6 7
9. Before I study new course material
thoroughly, I often skim it to see
how it is organized
1 2 3 4 5 6 7
10. I ask myself questions to make sure I
understand the material I have been
studying in this class
1 2 3 4 5 6 7
11. I try to change the way I study to fit
the course requirements and
instructor’s teaching style
1 2 3 4 5 6 7
12. I often find that I have been reading
for class but don’t know what it was
all about (REVERSE)
1 2 3 4 5 6 7
13. When studying for this course I try
to determine which concepts I don’t
understand well
1 2 3 4 5 6 7
14. When I study for this class, I set
goals to direct how I use each
study period
1 2 3 4 5 6 7
15. If I get confused taking notes in
class, I make sure I sort it out
afterwards
1 2 3 4 5 6 7
Appendix C
Electronic Feedback Devices and Metacognitive Self-Regulation
*Please read the following questions and think about whether “clickers” influenced
your learning.
*Did you use a “clicker”? Yes No
*What percentage of time did you use the “clicker”?
0%    25%    50%    75%    100%
O      O      O      O      O
Strongly Disagree 1    Disagree 2    Neutral 3    Agree 4    Strongly Agree 5
1. Clicker/paddle helps to clarify
the purpose for me when taking
notes in lecture
O O O O O
2. I gained understanding about the
subject    O O O O O
3. Clicker/paddle helps me know if
the reading I did to prepare for
lecture was on track
O O O O O
4. Clickers/paddle help me decide
what to write in my notes
O O O O O
5. Clicker/paddle helps me decide
what to ignore
O O O O O
6. Clickers/paddle help me see how
the lecture material fits with the
text
O O O O O
7. Clickers/paddle help me to
understand what I’ve written in
my notes
O O O O O
8. Clickers/paddle help me to know
if I’m writing what’s important
O O O O O
9. Clickers/paddle help me to know
what questions to ask when the
topic is difficult
O O O O O
10. Clickers/paddle help me to get
my focus back on track
O O O O O
11. Clickers/paddle help me to know
what questions to ask
O O O O O
12. Clickers/paddle help me focus
on questions to write down when
a topic is difficult, so I can look
for an opportunity to ask
questions
O O O O O
13. Clicker questions help me decide
on key concepts and key words
to write in my notes
O O O O O
14. Clickers/paddle help me to know
when an idea is important to
underline, highlight, circle, or
mark with some other indicator    O O O O O
15. Clickers/paddle help me rethink
how I paraphrase (write ideas in
my own words) in my notes O O O O O
Appendix D
Metacognition and Electronic Feedback Devices
*Please read the following questions and think about whether using “clickers”
influenced your learning.
Strongly disagree 1    Disagree 2    Neutral 3    Agree 4    Strongly agree 5
1. When I responded to a clicker
question and then the answers
displayed on the histogram
indicated I was wrong…
a. I gained more understanding
about the subject    O O O O O
b. I found that clickers/paddle
helped answer questions I had
about the reading
O O O O O
c. Clickers/paddle helped me to
understand the lecture material
O O O O O
2. I think using clickers/paddle
impacted me more than just raising
hands
O O O O O
3. I compared myself to others
O O O O O
4. I compared my answers to the
histogram
O O O O O
5. My understanding of course
concepts improved
O O O O O
6. I thought more deeply about
course concepts
O O O O O
Please tell us how “clickers” may have influenced you:
1. How did clicker/paddle results cause you to evaluate your thoughts?
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
2. How have clickers/paddles caused you to change the way you take notes?
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
3. How have clickers/paddles caused you to compare your answers to other
students?
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
4. How have clickers/paddles helped you understand course concepts?
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Appendix E
Summer Group Initial Descriptions of Pencil and Paper Survey
Appendix F
Summer Group Qualitative Interview
Question 1 – How did clicker/paddle results cause you to evaluate your thoughts?
Tally / Group 1 (5 Respondents)    Tally / Group 2 (5 Respondents)    Tally / Group 3 (3 Respondents)
III C- Anonymous I C- Allowed own
thoughts
I
C- Preferred
I C- helped refocus I C- caused more
comparing of answers
I C- when I’m wrong, I
learn, because I can’t
see everyone’s
answers
I C- Preferred I C- made me think
more
I C- Can’t change to the
majority
I C- 1st answer was usually right, and if I second-guessed I was wrong.
I C- didn’t see peer
responses till the end
I C-liked better
I C- Can press
anything
I C- required thinking
because you couldn’t
look around to see
others answers to
decide.
I C/P- evaluation of
thoughts was the
same for both
I P- Looked around
at peers
I C- easy to answer and
focus on the question
I C/P examples helped
I P- If my answer
deviated, I thought I
was wrong.
I C- Can’t just copy the
answer
I C/P positive, helps
with quiz
I P- Evaluate a lot
and never wanted
to be wrong.
I C- more independent
I P-Group influenced;
inclined to change
answers
I C- helped me because
if I had the wrong
answer, I evaluated
my thoughts and
compared myself to
others
I C- rely on own
knowledge
I P- felt pressure to go
with the majority
I C- helped
determine what you
know and don’t
know
I P- made me second
guess my answers
I P-preferred because
seeing answers
affected how I was
thinking
Appendix G
Summer Group Qualitative Interview
Question 2 – How have clickers caused you to change the way you take notes?
Tally / Group 1 (5 Respondents)    Tally / Group 2 (5 Respondents)    Tally / Group 3 (3 Respondents)
I C/P- If I missed the
question, it helped to
know what to write,
to put an explanation
in my notes
I C/P- Let me know
what’s important
I C/P- more notes
taken to answer
the questions
I C/P- Helps outline
important
information and
know what I’m doing
wrong
I C/P- not really I C/P- same, If my
answer was
wrong, I clarified
in my notes
I C/P- Didn’t change 3
gives questions to
write down in case
there are similar
questions on the
exam 2
I C- with clickers I felt
it was more
important to write
down
II C/P- not
necessarily, hasn’t
I C/P- Helps to see the
relevance to real life
situations
II C/P- same, no change
Appendix H
Summer Group Qualitative Interview
Question 3: How have clicker/paddle results caused you to compare yourself to others?
Tally / Group 1 (5 Respondents)    Tally / Group 2 (5 Respondents)    Tally / Group 3 (3 Respondents)
I C- Liked clickers I C- knew right away if
your answer was right
or wrong
I C- more competitive
I C- anonymous I P- people aren’t sure
and they see others
answers and change
their answers
I P- paddles can’t see
the results the same
I P- kind of
uncomfortable and
awkward if I knew
they were wrong
I C/P- helped to compare
progress
I C/P- You see what the
majority answered
with both, but with
paddles you might
change your answer
I C/P- Visual
representations of
where I am in class
with both, but had to
turn around and look
back with paddles
I C/P- increased self-
efficacy when I was
correct
I P- I changed my
answer a few times
I P preferred paddles; if
the majority of the
class has a different
answer you know to
change, and don’t
have to think with
clickers, just click
I P- saw answers and
knew possible right
answer already
I C/P- Basically
examples gave a
different view, but I
like clickers better
I P- You could see
answers and then
influenced you; you
change your answer
I C- had to think about
the answer truthfully to
put it in
I C- With clickers you
have security
I C Anonymous, but you
could see the majority
on the histogram
I P- easier to look at your
neighbor
I C- button to press; no
comparisons because
didn’t know who
answered what
I C/P- If I was wrong it
would help me to see
why and learn
Appendix I
Summer Group Qualitative Interview
Question 4 – How have clickers/paddles helped you to understand course concepts?
Tally / Group 1 (5 Respondents)    Tally / Group 2 (5 Respondents)    Tally / Group 3 (3 Respondents)
I C/P- both helped me
to understand 2 and
refocus attention 3
I C/P- not really but did
guide me 3
I C- actually have to
do it 3
I C/P- same, questions
break it down and
give basic knowledge
so I can expand later 2
I C- don’t know what
other’s choose 1
I C/P- not really 2
I C/P- made me think a
little more 2
I C- prefer clickers
because there is more
personal
responsibility 4
I C- prefer clickers because you can’t see others’ answers being submitted 4
I C/P- after I could
evaluate and they
helped me reflect 2
I C/P- reinforced
concepts and helped
me to know what to
learn 2
I C- choose clickers 4
because you stick to
your own experience
rather than other’s
answers 1
I C- prefer clickers, 4
because I focus on the
question rather than 2
on the paddles and
what peers answers
were 1
I C- provides a visual
response screen 2
I C- were confidential 1
I C- gives me more
self-confidence and
if I was wrong, no
one know so I would
be more inclined to
answer 1
I C/P-same for both;
main topics and extra
scenarios helped
elaborate 2
I C/P- competition
(increased
participation, caused
them to vote 3
I C/P- both helped
because instructor
taught the same 2
I P- choose paddles
because it’s good to
see everyone’s opinion
as you’re thinking – it
affected my thinking 4
Appendix J
Fall ‘Group A’ Initial Descriptions of Pencil and Paper Survey
Appendix K
Fall ‘Group B’ Initial Descriptions of Pencil and Paper Survey
Appendix L
Fall ‘Group A’
Question 1 – How did clicker results cause you to evaluate your thoughts?
Tally / Group 1 (8 Respondents)    Tally / Group 2 (6 Respondents)    Tally / Group 3 (5 Respondents)
IIIII By comparing to peers II Answers matched the
majority, when they
didn’t match, knew I
was wrong and
would rethink
answer
I
It’s interesting to
see peers views,
otherwise you
only see yours
IIII Understanding other’s
views
II Enjoyed using
clickers/good
I Kind of helped, not
that useful, but it’s
useful to think
about the right
answer
I Realized my level of
preparation
I Felt confident when
right
I The question
helps; it’s not
exactly higher
level thinking
II Like clickers II Increased
engagement
I I guess they
helped me see
how I fit in with
the class
I Clarifies what I need
to work on
I When wrong I gained
understanding
I I don’t think they
helped
III Gave
confidence/clarified
where I am compared
to the class
I When wrong I paid
more attention
I Waste of money
I Reflected on my study
habits
I Like the display of
results
I Made me think
critically about the
right answer
I Opens up discussion
I +If you were wrong
and others were
wrong too it was less
discouraging
I When I see answers, I think about the right answer and look at the material
I Anonymous
Appendix M
Fall ‘Group B’
Question 1 – How have paddle results caused you to evaluate your thoughts?
Tally / Group 1 - high mean (4 respondents)    Tally / Group 2 - mid-mean (3 respondents)    Tally / Group 3 - low mean (6 respondents)
II Compared responses
to peers
I Feel
involved/engaged
II
Self-monitored
performance
II Changed answers to
majority
I I don’t know I Peer comparisons
to feel as
knowledgeable
I Caught between
deciding to stay with
answer or be right
I Prefer clickers I Paid more
attention
I Decreased
confidence when
wrong
I Visible answers. I Thought critically
about own
thoughts
I Confident when
right
I Others know when
you’re wrong
I Didn’t influence.
II Visible answers
II Embarrassed when
wrong
I Studied more to
answer correctly
I Requires an
opinion, not only
learning
information
I Prefer clickers
Appendix N
Fall 'Group A’
Question 2 – How have clickers caused you to change the way you take notes?
Tally / Group 1 - high mean (8 Respondents)    Tally / Group 2 - mid-mean (6 Respondents)    Tally / Group 3 - low mean (5 Respondents)
II Useful for making
predictions on tests
or assignments
II I write down the
answer and main
points
I I take notes on clicker
questions and write
down new material
III Show key
points/shows what’s
important
I I take mental notes
on the right answer
IIII No change
II I focus my notes on
things I get
wrong/clarifies
I I focus on what the
professor is saying
I Increase
attention/helps to
stay focused
I I don’t know; I write
down the right
answer
I Results are
interesting
I No change
II No change
Appendix O
Fall ‘Group B’
Question 2 – How have paddles caused you to change the way you take notes?
Tally / Group 1 - high mean (4 respondents)    Tally / Group 2 - mid-mean (3 respondents)    Tally / Group 3 - low mean (6 respondents)
I Clarified what’s
important
II Ideas are clear
because of the
questions
II Write down paddle
information
I Wrote main points I Take down key
points
I See what the
majority thinks and
if you’re on the same
level as peers
II No change/questions
provide details
II Hasn’t changed I If I was wrong
question/clarify
I I don’t think notes
changed
I Helped maintain
focus
I III Hasn’t changed
Appendix P
Fall ‘Group A’
Question 3 – How have clickers caused you to compare your answers to other students?
Tally / Group 1 - high mean (8 Respondents)    Tally / Group 2 - mid-mean (6 Respondents)    Tally / Group 3 - low mean (5 Respondents)
IIII The display shows
where you are
compared to others
I It’s interesting to
compare thoughts
I The visual
representation
makes it easy to
compare
II +Shows how well I
know the subject
I I compare answers
with a team mate,
and if I disagree I
want to think and
find out what
someone else thinks
I Allows you to see,
not really right or
wrong, you see a
different
perspective
I Seeing the results is
interesting and I
compare myself to
see if I should change
I When answers are
different we share
why were correct,
and the professor
clarifies what’s right
and wrong
I Adds a competitive
spirit
I If I am with the
majority I think I am
more like the others
in class
I If I am right I know
that me and the
majority know what
we’re doing, if not
we’re not catching
on
II You see if how you
compare/rank with
the class
I It’s exciting to see
when you answer
right and others were
wrong
I Engages the whole
class
I If you’re with the
small percentage
who are wrong, you
don’t feel great
I Not sure; I can see
area of weakness for
myself
I Usually I don’t, but if
I’m wrong I think
about why I’m in the
lower percent
I Can’t say they have
I I really don’t
compare
Comparisons are
hard when questions
are opinion based
Appendix Q
Fall ‘Group B’
Question 3 – How have paddles caused you to compare your answers to other students?
Tally / Group 1 - high mean (4 respondents)    Tally / Group 2 - mid-mean (3 respondents)    Tally / Group 3 - low mean (6 respondents)
II Paddles influence my
answer/change to
majority
I Answers are visible IIIII Answers are visible
I See the response of
student immediately
I Embarrassed when
wrong or different
answer
I Made sure my
answer was the
same/changed
I If answers were
different and I didn’t
read, I knew I was
wrong
I Fit in with peers
when answers are
the same.
I Wanted to fit in
I Not sure they have I If answers are
different, I consider
why
I When wrong seek
to understand
I Embarrassed when
wrong “You
shouldn’t do that
stuff”
III Authentic answers
with clickers
I Clickers are
Anonymous
III Prefer clickers
Appendix R
Fall 'Group A’
Question 4 – How have clickers helped you to understand course concepts?
Tally / Grp 1 - high mean (8 Respondents)    Tally / Grp 2 - mid-mean (6 Respondents)    Tally / Grp 3 - low mean (5 Respondents)
I Reviews information
to check for
understanding
II Helps with key
points
II Gives more specific
details
I Questions and
answers help attention
I Helps to think
critically
I Shows if you read or
understand in
lecture
III It’s fun/ like clickers II Break up material to
learn to better
I Highlights important
information
III Understand different
views
I Gives main facts and
what the professor
wants you to know
I Engages students
II Engaging II Clarifies key
concept; if you’re
right you got it; if
not you eventually
catch on
I Helps to know how
to prepare of exams
II Helps to prepare for
exams/labs
I It’s learning through
question, repetition,
and scenarios.
I Gives a different
perspective
I Elaborates on topics I It leads to
discussion
I They help a little,
but applying the
concept helps
understanding.
I Helps me to think
more critically about
note taking
I No pressure
whether you’re
right or wrong.
They’re cool
I They are auxiliary.
I Helps to focus on the
material visually and
easier to understand
I I They are overpriced;
other software
programs that are
similar
Challenges me to
retrieve information
more quickly to
answer the questions
I I Waste of money
I Shows the professor
whether more
explanation is
required
I Helps when you’re too
shy to raise your hand
I Anonymous
174
Appendix S
Fall ‘Group B’
Question 4 – How have paddles helped you to understand course concepts?
Group 1 – high mean (4 respondents); Group 2 – mid-mean (3 respondents); Group 3 – low mean (6 respondents)

Tally   Response
II      It gives the answer
I       It gives the answer
I       Increased self-monitoring
I       It’s like a mini quiz, and you learn more effectively
I       If you’re wrong you take more notes, study, and reread
I       Increased learning
I       Increased critical thinking about answers
I       I don’t know
I       When wrong, you want clarification
I       Paddles made me prepare more because you show your answers
II      The questions help
I       Didn’t help significantly
I       Helps to apply the reading to my own academic situation
I       Doesn’t help; I don’t remember anything from it
II      Professor clarifies and helps you relate
II      The question gives the general idea
Abstract
The purpose of this study was to examine whether electronic response systems influence student metacognition in large lecture settings, and how metacognitive processes are influenced. Moreover, this study compared electronic response systems with a low technology system and sought to establish whether differences exist in how the two response systems influence metacognition. The design of the study was quasi-experimental and employed both quantitative and qualitative measures with multiple groups and multiple points of data collection. A context was selected that utilized electronic response systems as a part of the instructional design of the course and in conjunction with instructional strategies (e.g., questioning and Peer Instruction). Three sections of the same undergraduate educational psychology course with the same instructor and instructional design were utilized in the study. There were a total of 198 participants: 33 in the Summer group, 87 in the Fall experimental (“clickers”) group, and 78 in the Fall comparison (paddles) group. The study found that metacognitions were influenced more by the low technology response system than by “clickers,” but performance outcomes were significantly higher with “clicker” use (p < .01). Results from the study indicate that metacognitive processes are influenced by response systems and that there are similarities and differences in the influence of the two response systems. This study found that the low technology response system resulted in negative feelings because answers were visible to peers before the correct response was indicated; as a result, students changed responses based on perceived pressure from the group. While this produced more metacognition than “clickers” did, the visible nature of the low technology device generated negative feelings, which may indicate that this method of response was an impediment to learning goals and to creating a learner-centered environment. The use of “clickers” seems to encourage honesty and reduce the conformity effects to which students are prone. Results indicate that it may be useful to view metacognitions as productive or unproductive and, in the case of response systems, as having a self-reflective or group-reflective quality. In addition, the respondents who experienced enhanced learning outcomes with “clickers” were the participants who had low to average performance outcomes, as compared to participants who tended to have higher performance outcomes. Participants who had higher outcomes experienced the fewest benefits and may have had consistent performance outcomes regardless of the response device in use.