ASYNCHRONOUS ONLINE EDUCATION AND INTRODUCTORY STATISTICS:
THE ROLE OF LEARNER CHARACTERISTICS
by
Nicholas Peter Gorman
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
December 2011
Copyright 2011 Nicholas Peter Gorman
Table of Contents

List of Tables
Abstract
Chapter One
    Background
    Statement of Problem
    Limitations, Assumptions, and Design Controls
    Definition of Key Terms
    Summary
Chapter Two
    Introduction
    Statistics Education
    Justifications for Online & Distance Education
    Online Education and Statistics
    Summary
Chapter Three
    Introduction
    Research Hypotheses
    Population and Sample
    Data Collection and Instrumentation
        Figure 1: Detecting Covariates for use in Final Stepwise Regression Modeling
    Data Analysis
    Summary
Chapter Four
    Introduction
    Course Design and Implementation
    Sampling and Data Cleaning
    Analyses
    Summary
Chapter Five
    Introduction
    Summary of the Study
    Conclusions
    Implications
    Future Research
References
Appendices
    Appendix A: Student Survey Instrument
    Appendix B: Student Satisfaction Survey
List of Tables

Table 1: Comparison of Outcomes in Online to Face-to-Face Statistics Courses
Table 2: Methodologies used to Compare Online to Face-to-Face Statistics Courses
Table 3: Fictitious Example Demographic Characteristics of Traditional versus Online Students
Table 4: Overview of Covariates/Predictors of Student Outcomes
Table 5: Cronbach’s Alpha for Computed Scales
Table 6: Demographic Characteristics, Psychological Factors, Time Constraints, and Previous Knowledge of HESC 349 Students
Table 7: Correlation Matrix of Proposed Predictor Variables
Table 8: Summary of Final Regression Models for Predicting Student Grades
Table 9: Summary of Final Regression Model for Predicting Student Pass Rate
Table 10: Comparison of Student Grades by Condition with and without Covariates
Table 11: Comparison of Student Pass Rates by Condition with and without Covariates
Table 12: Summary of Statistically Significant Predictors of Student Outcomes
Abstract
Increasingly, online classes have been used to address the rising demand for statistics
training and calls for improved curricula. However, little empirical research has
examined the efficacy of online statistics courses. The present study sought to address this gap by comparing the grades, attrition, pass rates, and satisfaction of 181 students
enrolled in 4 face-to-face and 2 online introductory health science statistics courses. As
methodological controls, the online and face-to-face curriculums were made as similar as
possible, a single instructor taught all 6 classes, and blinded teaching assistants conducted
all grading. In addition, differences in students’ demographic characteristics,
psychological factors, time constraints, and previous knowledge were assessed and
controlled for prior to examining their learning outcomes. Contrary to past findings, face-
to-face students outperformed online students in terms of their exam scores and pass
rates. In addition, online students were closer to graduation and less likely to be pursuing a Health Science degree; they had taken more mathematics courses, placed lower value on their statistics training, displayed more negative attitudes toward statistics, and reported less confidence in their ability to learn statistics software. The implications of
these findings for educators and researchers are examined in depth.
Chapter One
Background
Despite an ongoing and, at times, contentious dialogue in the research community
regarding the merits and pitfalls of online education, the recent explosion of online
programs worldwide suggests that wide-scale adoption of online learning may be
inexorable. Currently, data reveal that online education is the most rapidly growing
segment of education (Hurlburt, 2001). A little under half of all U.S. universities appear
to offer online learning opportunities (Grandzol, 2004), a figure that jumps to over 95%
when only large institutions are considered (Gordon, He, & Abdous, 2009). Moreover,
even in traditional face-to-face classes many curriculums include online components
(Green, 1997, as cited in Hurlburt, 2001).
The reasons for this recent growth are manifold, with proponents of online
education citing justifications based on everything from greater accessibility to improved
student outcomes (Summers, Waigandt, & Whittaker, 2005; U.S. Department of
Education, 2009). For instance, many researchers seem to agree that online classes,
particularly asynchronous ones in which content may be accessed at the student’s
convenience, provide greater flexibility and autonomy than face-to-face courses and are
well suited to the needs of non-traditional students like parents, full-time workers, and
older students (Ocker & Yaverbaum, 1999; Shanker & Hu, 2006; Summers, Waigandt,
& Whittaker, 2005; U.S. Department of Education, 2009). Because these students face
significant time constraints in their personal lives, they might not otherwise be able to
seek out higher education (Summers, Waigandt, & Whittaker, 2005). This is an
important consideration for educators, as the rate of concurrent employment during
college rose almost 10% from 1984 to 2001, with a simultaneous doubling of full-time
employment rates (Orszag, Orszag, & Whitmore, 2001). This suggests that there may be
a growing segment of students for whom online curriculum may be especially enticing.
Simply stated, both the supply and demand surrounding online courses are on the rise.
Despite the pace at which online curriculums are spreading through higher
education, the progress has varied within different academic fields. Within the field of
statistics, educators have in some ways been uniquely positioned to take advantage of
online curriculums. A constellation of factors including a growing demand for statistics
training (Everson, Zieffler, & Garfield, 2008), widespread faculty support for
implementing new teaching technologies (Meletiou-Mavrotheris, 2004), and a need for
improvements to existing curricula (Garfield, Hogg, Schau, & Whittinghill, 2002;
Meletiou-Mavrotheris, 2004; Onwuegbuzie, 2003) have positioned statistics instructors to
capitalize on the online learning movement.
To illustrate the potential represented by online curriculum, it may be edifying to
examine the growing demand for statistics training worldwide. This increased interest in
statistics literacy is reflected in recent movements to establish statistics as a general
education requirement across college campuses. At the same time a growing number of
majors have begun to include statistics as one of their core requirements. Both of these
trends reflect a mounting realization that some level of statistics fluency is necessary not
only for academic pursuits, but also for success in business, government, and even
navigating commercial advertising (Glencross & Binyavanga, 1996). With the demand
for statistics training increasing, online curriculums are increasingly being eyed as a way
of reaching out to non-traditional students and expanding enrollments (Gordon, He, &
Abdous, 2009; Summers, Waigandt, & Whittaker, 2005).
Statistics instructors and departments may be especially receptive to distance
learning technologies, as the field of statistics instruction itself has recently been radically
reshaped by other learning technologies. Even as recently as 20 years ago, statistics
courses were frequently taught without the benefit of classroom technologies like
computer labs and statistical software packages (Chance, Ben-Zvi, Garfield, & Medina,
2001). Today the advent and proliferation of statistics software, multimedia tutorials,
computer simulations, and advanced calculators have all forced educators to evaluate not
only how but also what to teach in statistics classes (Mills & Xu, 2006). The result is that
statistics professors, in aggregate, constitute a workforce that has already experienced
firsthand the benefits of integrating new classroom-learning technologies.
The potential of online statistics curriculum should also be considered in light of
existing face-to-face curriculum. While significant efforts have been made to redesign
statistics curriculum to take into account student characteristics, integrate learning
technologies, and incorporate principles from educational psychology, the fact
remains that on average students’ exam scores in statistics remain noticeably lower than
those in other courses (Onwuegbuzie & Wilson, 2003). In addition, research suggests that
up to four-fifths of students in introductory statistics courses will at some point
experience unmanageable levels of anxiety (Onwuegbuzie & Wilson, 2003; Schau,
Stevens, Dauphinee, & Del Vecchio, 1995), a sobering thought as levels of anxiety have
been linked to academic performance in statistics classes (Onwuegbuzie, 2003). Clearly
there remains room for improvement in statistics curriculums. While unlikely to serve as
a panacea for all the difficulties encountered in statistics education, online programs may
represent a new and powerful tool in educators’ arsenals.
Despite the potential posed by online curriculums, there are currently three major
obstacles standing in the way of expanding the implementation of online statistics
courses. Specifically, the lack of empirical research evaluating online technologies in
statistics classes, the presence of conflicting results from the few existing studies, and a
series of pervasive methodological flaws among these studies all mean that practitioners
have little evidence documenting the efficacy of implementing online statistics training.
Given the alignment between statisticians’ training as evaluators and the potential
posed by online curriculum, it may be surprising to learn that little formal research has
been conducted to evaluate the efficacy of online statistics curriculum compared to
traditional face-to-face classes (Meletiou-Mavrotheris, 2004). A search of the literature
uncovered only five articles directly comparing online statistics curriculum to traditional
face-to-face courses. To date, a disproportionate amount of the literature on online
statistics education has instead been dedicated to non-empirical studies. This is troubling
in that these studies, while informative case studies, are often not replicable or
generalizable across settings. As a result they may be of limited utility for educators.
In addition, even among the few published direct comparisons of online to face-
to-face statistics courses there has been a notable degree of disagreement regarding online
programs’ efficacy in terms of outcomes like learning or satisfaction. This, in turn,
reflects a disagreement in the broader educational research community as to the efficacy
of online programs, with some large-scale meta-analyses favoring online courses (U.S.
Department of Education, 2009) and others suggesting there is no difference between
online and traditional face-to-face curriculum (Russel, 2001; Yatrakis & Simon, 2002).
In addition to the overall lack of empirical research and conflicting findings, a
third barrier stands in the way of expanding the role of online statistics courses. Specifically,
the existing literature has been plagued by methodological shortcomings that may, in
part, have contributed to the conflicting findings reported to date. Across the five direct
comparisons published, methodological shortcomings including failure to blind graders,
to consider student demographic characteristics, to track experiment mortality, or to
employ statistical controls have been widespread. As a result of these shortcomings, it is
difficult to establish whether the existing evidence offered in support or opposition of
online statistics courses can be viewed as valid.
The present study seeks to address these three issues by providing new, empirical
research to evaluate the impact of online statistics curriculum compared to traditional
face-to-face courses. As a series of rigorous methodological controls will be employed,
the present study also stands to shed light on the conflicting outcomes reported among the
few other existing studies.
Conceptual Underpinnings for Study
Before discussing the conceptual framework adopted in the present study, it is first necessary to address two background considerations that strongly influenced the
selection of a data-driven, theory-building approach. First, given the lack of past research
examining the efficacy of online statistics curriculum compared to traditional face-to-face
classes (Meletiou-Mavrotheris, 2004), no previously validated theoretical models for the
evaluation of online statistics courses were available for use. As a result, the literature
base concerning student outcomes in statistics courses, irrespective of whether they were
conducted online, was consulted. This review revealed the importance of a vast pool of
possible predictors of student outcomes including demographic characteristics,
psychological factors, time constraints, and previous experience. Specifically, the
literature has underscored the importance of gender (Rodarte-Luna & Sherry, 2008),
anxiety (Hsu, Wang, & Chiu, 2009; Onwuegbuzie, 2003; Rodarte-Luna & Sherry, 2008),
achievement expectations (Onwuegbuzie, 2003), value (Garfield, Hogg, Schau, &
Whittinghill, 2002), attitudes towards computers (Hsu, Wang, & Chiu, 2009), self-
efficacy towards mathematics, statistics, and software (Hsu, Wang, & Chiu, 2009),
English language proficiency (Lesser & Winsor, 2009), prior experience with math,
computers (Sitzmann, Kraiger, Stewart, & Wisher, 2006), or statistics courses
(Onwuegbuzie, 2003), and time commitments resulting from employment, serving as a
caregiver, and overall course load (Onwuegbuzie, 2003).
This breadth of predictors of student outcomes in statistics courses poses additional
problems for the present study. Most importantly, logistical constraints in the present
study will result in a maximum sample size of 144 participants, a
sample size that is insufficient for modeling the number of possible predictors of student
outcomes identified. Further, as few researchers have considered the above predictors
concurrently in a single study, there is little theoretical basis upon which to select or
exclude predictors.
As a result of these two limitations, the decision was made to propose a primarily
data-driven approach for detecting predictors of student outcomes for use as control
variables in the present study. While data-driven modeling techniques are generally
regarded as less desirable than regression modeling choices based on theory (Field,
2009), in the absence of guiding theory, such techniques may provide the first of several
lines of evidence needed for the creation of new theories.
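To make this data-driven approach concrete, the sketch below illustrates forward stepwise selection of covariates, the general class of procedure described above. It is a minimal illustration only, written in Python rather than the SPSS/PASW software used in this study, and its variable names are hypothetical rather than the study's actual measures.

    # Minimal sketch of forward stepwise covariate selection (illustrative only;
    # the study itself used SPSS/PASW, and these variable names are hypothetical).
    import statsmodels.api as sm

    def forward_select(df, outcome, candidates, alpha=0.05):
        """Greedily add the candidate predictor with the smallest p-value
        until no remaining candidate is significant at alpha."""
        selected, remaining = [], list(candidates)
        while remaining:
            pvals = {}
            for var in remaining:
                X = sm.add_constant(df[selected + [var]])
                pvals[var] = sm.OLS(df[outcome], X).fit().pvalues[var]
            best = min(pvals, key=pvals.get)
            if pvals[best] >= alpha:
                break
            selected.append(best)
            remaining.remove(best)
        return selected

    # e.g., with `students` as a pandas DataFrame of survey and grade data:
    # forward_select(students, "final_grade",
    #                ["anxiety", "math_courses", "units_enrolled"])

Because variables retained this way can capitalize on chance, such a procedure generates rather than confirms theory, which is precisely the caution raised above.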
As a result the present study represents a necessary first step towards developing a
new conceptual framework regarding the evaluation of online statistics curricula. As the
study will employ a primarily data-driven regression modeling technique, the conclusions
drawn will be interpreted with due caution and emphasis will be placed on the limitations
and appropriate applications of the findings.
Statement of Problem
Across both academic and applied sectors there is a rapidly accelerating demand
for training in statistics (Glencross & Binyavanga, 1996). At the same time, educational
researchers and practitioners have noted the need to make substantive improvements to
existing statistics curricula, as students’ grades are often lowest in statistics courses
(Onwuegbuzie & Seaman, 1995), these courses sometimes function as weeder courses
preventing students from pursuing certain degree tracks (Onwuegbuzie, 1998), and up to
four-fifths of students in introductory statistics courses at some point experience
unmanageable levels of anxiety (Onwuegbuzie & Wilson, 2003; Schau, Stevens,
Dauphinee, & Del Vecchio, 1995). Simply stated, there is a high demand across sectors
not only for more statistics courses, but also for higher quality statistics training.
In other academic fields, online courses have been proposed as a promising means
of both meeting the increased demand for courses (Ocker & Yaverbaum, 1999; Summers,
Waigandt, & Whittaker, 2005) as well as of enhancing student learning outcomes (U.S.
Department of Education, 2009), student satisfaction (Shanker & Hu, 2006; Suanpang &
Petocz, 2006), course pacing/time on task (Suanpang & Petocz, 2006; U.S. Department of Education, 2009; Zhang, 2002), course retention (Chua & Lam, 2007), and student collaboration (Wang, Sierra, & Folger, 2003; Woods & Ebersole, 2003). This evidence suggests that online courses may help meet two of the greatest challenges facing
statistics educators today: the rising demand for statistics courses and calls for higher
quality instruction.
However, despite the potential posed by online statistics courses, further
evaluation of these courses is still needed. To date, there has been remarkably little
research evaluating the integration of learning technologies into statistics courses
(Meletiou-Mavrotheris, 2004). In the few studies that have directly compared online to
face-to-face statistics courses, a number of conflicting findings have been reported.
Further, these few studies have all reported substantial methodological shortcomings,
which may undermine the validity of their claims. As a result it is difficult to determine
the efficacy of online statistics courses in terms of student outcomes such as learning,
satisfaction, course retention, and collaboration.
Taken together, these findings underscore the pressing need for a
methodologically rigorous direct comparison of online and face-to-face statistics
instruction. Such research could then be used to determine what role, if any, online
learning should play in addressing the rising demand for statistics courses and for higher quality statistics training.
Purpose of Study
This study will assess whether an entirely online statistics curriculum results in
different student outcomes in terms of grades, pass rates, attrition, and satisfaction when
compared to an otherwise identical face-to-face statistics course. This study extends upon
previous research into the efficacy of online statistics curriculum by being the first to
employ rigorous methodological controls and to statistically control for differing student
characteristics in the online and face-to-face conditions. The findings are intended to be a
first step toward clarifying the debate over the efficacy of online statistics curriculum and
building a theoretical framework for future studies.
Research Questions
In order to assess the efficacy of online statistics curriculum, this study poses the
following series of research questions:
1. Do students who enroll in online introductory statistics courses vary from
those who enroll in traditional classes in terms of demographic
characteristics, psychological factors, time constraints, or previous
knowledge?
2. Which of the demographic characteristics, psychological factors, time
constraints, or previous knowledge domains found to differ in Research
Question One predict student outcomes (attrition, grades, pass rates, and
satisfaction)?
3. After statistically controlling for the variables identified by both Research
Questions One and Two above, do student outcomes (attrition, grades,
pass rates, and satisfaction) vary between online and face-to-face statistics
courses?
Research Hypotheses
Research Question One. Due to the scarcity of methodologically rigorous studies
examining the possibility that students enrolling in online statistics courses differ from
face-to-face students in terms of demographic characteristics, psychological factors, time
constraints, or previous knowledge, a priori hypotheses regarding the outcome of
Research Question One are not tenable. As a result, a primarily a posteriori, data-driven
approach will be adopted in hopes of generating theory to gird future studies.
Research Question Two. Similarly, as the outcome of Research Question One will
directly impact the variables entered into analysis for Research Question Two, a
primarily a posteriori, data-driven approach will be adopted in order to generate new
theory.
Research Question Three. While the lack of previous research and theory needed to
address Research Questions One and Two limits the generation of hypotheses as to which
variables should be entered as covariates in the final model, the broad literature base
documenting either no difference between online and face-to-face curriculum (Russel,
2001; Yatrakis & Simon, 2002) or the superiority of online curriculum (U.S. Department
of Education, 2009) suggests that after controlling for relevant covariates, student
outcomes in the online class section should be equivalent or superior to those of students
in the traditional face-to-face condition.
Limitations, Assumptions, and Design Controls
One noteworthy limitation in the present study involves the size of the proposed
sample. Specifically, the proposed study would make use of a sample of at most 144
students. While several rules of thumb exist for assessing the required sample size for
regression techniques (see Field, 2009), accurate estimates are difficult, as they vary on
the basis of observed effect sizes and desired statistical power. Recent research on sample
size determination for regression modeling would suggest that a sample of 144 subjects
would provide adequate data to detect up to medium-sized effects in models with 10 or
fewer predictors (Miles & Shevlin, 2001, as cited in Field, 2009). As the current study
proposes an investigation of up to 19 separate effects, the proposed analysis may be
insufficient to identify all truly significant predictors. In effect, the study may be biased
towards Type II errors in which false negatives are reported. In order to address this
issue, Research Questions One and Two have been designed, in part, to reduce this initial
broad pool of variables down to a manageable number before running the final regression
model.
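To make this concern concrete, the approximate power of the omnibus F test for a regression model can be computed directly. The short Python sketch below assumes a medium effect size (Cohen's f-squared = .15) and an alpha level of .05; both values are illustrative assumptions rather than figures taken from Miles and Shevlin, and the power to detect any single predictor is typically lower than the omnibus power computed here.

    # Approximate omnibus F-test power for multiple regression
    # (illustrative assumptions: medium effect f2 = 0.15, alpha = .05).
    from scipy.stats import f as f_dist, ncf

    def regression_power(n_obs, k_predictors, f2=0.15, alpha=0.05):
        df1 = k_predictors
        df2 = n_obs - k_predictors - 1
        f_crit = f_dist.ppf(1 - alpha, df1, df2)  # critical F under the null
        nc = f2 * n_obs                           # noncentrality under the alternative
        return 1 - ncf.cdf(f_crit, df1, df2, nc)

    for k in (10, 19):                            # 10 vs. all 19 proposed predictors
        print(k, round(regression_power(144, k), 2))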
Definition of Key Terms
For the purposes of this study, the following definitions will be employed:
A priori. When discussing hypotheses, this term will be used to describe
hypotheses derived from deductive reasoning or previous knowledge. In essence, these
are hypotheses that are generated prior to hypothesis testing.
A posteriori. When discussing hypotheses, this term will be used to describe
hypotheses derived from observation. These are hypotheses that are generated as a result
of the outcomes of hypothesis tests.
Achievement Expectations. This documented predictor of student outcomes will
reflect students’ expected performance in HESC 349 (Eccles, 1983; Onwuegbuzie, 2003).
Attitudes. Stemming from Ajzen and Fishbein’s (1980) theory of reasoned action,
attitudinal measures are an indicator of whether people like a given object or subject. In
the current study it will be determined whether participants like statistics and computers.
Asynchronous Technologies. This term refers to online technologies that may be
accessed or used at the student’s convenience irrespective of the time of day. Examples
include hosted video files, e-mail communications, and discussion board posts, as each of
these may be accessed any time of day (U.S. Department of Education, 2009).
Blended Curriculum. This term refers to courses that use both in-person teaching
tools such as small group activities or lecture as well as online tools such as discussion
boards or posted videos. These are sometimes referred to as hybrid courses (Mills & Xu,
2006).
Face-to-face Curriculum. This term refers to courses that include a lecture
component in which students are expected to be physically present in the same room as
their instructor. For the purpose of this discussion, face-to-face courses may include
online components, as blended curricula are commonplace in higher education today. The
defining characteristic is that the lecture component of the course is delivered in person
and not through a mediating technology.
HESC 349. Titled “Measurement and Statistics in Health Science,” this
introductory statistics course is a requirement for Health Science majors and minors and
also meets a general education requirement for students from other departments at
California State University, Fullerton (Department of Health Science, 2005). The course
emphasizes the use and interpretation of applied statistics in the context of the health
sciences.
Online Curriculum. This term will be used to refer to courses that include a
lecture component that is conducted entirely online using synchronous or asynchronous
technologies. This term is used in contrast to face-to-face curriculum.
Psychological Factors. This term is adopted as a broad category encompassing
student characteristics that are primarily mental in nature. In this study, psychological
factors are constrained to characteristics that have been identified in past research to
predict statistics learning outcomes. These characteristics include: achievement
expectations, attitudes, self-efficacy, and value (Betz, 1978; Hsu, Wang, & Chiu, 2009;
Onwuegbuzie, 2003).
Self-Efficacy. In the present study self-efficacy will refer to students’ perceptions of
their ability to successfully engage in a series of academic tasks (Bandura, 1982). As a
domain-specific phenomenon, self-efficacy will be assessed separately to determine
students’ perceived ability to succeed in tasks related to mathematics, statistics, and the
use of SPSS.
SPSS/PASW. SPSS (Statistical Package for the Social Sciences) and PASW
(Predictive Analytics SoftWare) are software packages used to conduct statistical
analyses. Students will be using PASW or its predecessor, SPSS, to complete class lab
assignments and homework.
Synchronous Technologies. Synchronous technologies are online tools that are
presented in real time in order to mimic in-person interaction. Examples include
webcasting and virtual office hours in which students have only a set window of time in
which to access teaching content (U.S. Department of Education, 2009).
Value. In the present study value refers to the internal, cognitive reasons
underlying individuals’ decisions to engage in tasks (Eccles, 1983).
Summary
In summary, the need for new ways to engage in statistics education has been
underscored by the recent sharp rise in demand for statistics training as well as by
research into the historical shortcomings of traditional statistics courses. At the same
time, online education, while not a singular panacea for all the challenges faced by
statistics educators, has been documented to address issues related to rising demand for
classes and to course quality. Together these observations are suggestive of the
ameliorative role online statistics courses could play in addressing these issues; however,
at present there simply is not sufficient research to justify wide-scale implementation of
online statistics courses.
Ultimately, this theme of insufficient evidence was the most consistently
encountered issue throughout the preparation of this study. At present it remains unclear
how students enrolled in online statistics courses differ from students enrolled in face-to-
face courses. Similarly, it is unknown how these differences, in turn, impact statistics
students’ outcomes such as grades, pass-rates, satisfaction, or attrition. Most importantly,
however, to date there has been little clear evidence as to whether online statistics courses
are superior, equivalent, or inferior to face-to-face courses in terms of students’ grades,
pass-rates, satisfaction, or attrition.
Ultimately, the proposed study represents a vital first step towards filling these
gaps in the literature and justifying investments of money and time into the development
of online statistics courses. In the absence of methodologically rigorous research into the
impact of online statistics curriculum, developing new online statistics curriculum
may be an exercise in putting the cart before the horse.
Chapter Two
Introduction
While interest in online education has snowballed in recent years, there remain
concerns that advocates may be placing the cart before the horse by promoting untested and rapidly changing technologies. Given the breadth of commercial technologies now available to educators, the continued debate over their efficacy, and the ongoing dialogue regarding their application, it may be edifying to first
explore the impetuses that have spurred interest in online education.
In this chapter the recent, growing interest in online education, particularly in the
context of statistics courses, is explored. First, the field of statistics education will be
examined in order to document the recent, growing interest in expanding statistics
instruction. This is an important consideration, as demand for additional statistics training
across academic and applied sectors has, in part, played a role in fueling the movement
for online training.
Next, Chapter Two explores the literature on online education. Specifically it
addresses six often-invoked arguments used by online learning advocates to justify
investing time and money into developing online curriculum. In this section variables and
constructs of interest to online education research are identified.
Finally, the chapter will examine the overlap between the growing interests in online
education and statistics training. Though the existing literature base is small at present,
several notable trends have begun to emerge, the most important of which include several
challenges still waiting to be addressed by current research. In surveying the limitations
of the current literature, the stage is set for future directions in online learning and
statistics research.
Statistics Education
Once a topic of study relegated to students of mathematics and probability, in recent
decades statistics has become a subject of interest to groups as diverse as researchers,
politicians, and businesspeople. Indeed, in many ways modern American culture has
fulfilled H. G. Wells's 1903 prediction that “statistical thinking will one day be as necessary for [efficient] citizenship as the ability to read and write” (cited in Pagano &
Gauvreau, 2000, p. 1). In a society bombarded with advertisements citing scientific
studies, the ability to navigate statistics in mass media and evaluate advertisers’ claims
has become a vital skill for consumers. Within academia, the prevalence of statistics has
also spread. Although journal titles such as “A multivariate variance components model
for analysis of covariance in designed experiments” (Booth, Federer, Wells, &
Wolfinger, 2009) still abound, today modern researchers are just as likely to have to
navigate statistics in articles like “Some notes on Roman mold material and the technique
of molding for glassblowing” (van den Dries, 2007) or “Religious affiliation in
contemporary Japan: Untangling the enigma” (Roemer, 2009).
To illustrate the prominence of statistics across varied research domains today a
simple replication of Becker’s (1996) methodology can be conducted. A search of the
Educational Resources Information Center (ERIC) database using the keyword
“statistics” yields a total of 1,924 results from 2008-2009, a figure reflecting the
publication of approximately 160 statistics articles per month. Likewise, a similar search
of Current Contents, a multidisciplinary database accessing 8,000 journals, yields 9,551
articles about statistics published in fields as disparate as robotics, art, and public
administration. This figure represents the publication of almost 26 research articles
directly related to statistics every day. Clearly, the use of statistics has become topical to
many, if not all, major fields of research.
In the same vein, some have argued that the importance of statistics has come to
transcend academia, as nearly every activity in the realms of business and government in
some way now relies on statistics (Glencross & Binyavanga, 1996). This observation may
be troubling given that an increasing proportion of data collection and analysis is
conducted by laypeople whose lack of training can result in the use of erroneous research
methodologies, analyses, and interpretations (Glencross & Binyavanga, 1996).
Given the growing emphasis placed upon statistics by these diverse sectors, it is worth
noting that serious evaluation of statistics instruction pedagogies has been a relatively
recent phenomenon. Some accounts date the first serious dialogues about teaching
statistics as beginning 60 years ago with the creation of the International Statistical
Institute’s Educational Committee (Glencross & Binyavanga, 1996). A more
conservative estimate might place this dialogue at closer to only 40 years old, however,
as in a review of 530 dissertations and journal articles related to statistics instruction,
Becker (1996) noted that only 3% of the retrieved articles predated 1970.
As rigorous evaluation of pedagogies for statistics instruction did not begin in earnest
until the 1970s (Becker, 1996), it may not be surprising that the investigation and
integration of distance learning technologies into statistics classes is a relatively new
phenomenon (Hurlburt, 2001). Even as recently as 20 years ago, statistics courses were
frequently taught without the benefit of even now-ubiquitous classroom technologies like
computer labs and statistical software packages (Chance, Ben-Zvi, Garfield, & Medina,
2001). While a considerable effort has been made to survey, document, and to some
extent evaluate the efficacy of a rapidly growing pool of teaching technologies and
software (for examples see Chance, Ben-Zvi, Garfield, & Medina, 2001; Lock, 2000; and
Velleman & Moore, 1996), within the context of statistics courses, teaching technologies,
particularly those related to online learning, have been particularly slow to be evaluated
(Meletiou-Mavrotheris, 2004).
Justifications for Online & Distance Education
In examining the adoption and evaluation of online education in statistics courses, it
may be edifying to first examine the topic of distance education at large. As an emergent
instruction pedagogy, distance education is seeing widespread implementation in K-12
settings (U.S. Department of Education, 2009), corporate training (Sugrue & Rivera,
2005), and higher education (U.S. Department of Education, 2009). The impetuses for
this growth have been as varied as the settings in which they have arisen. In the following
section, six regularly cited justifications for distance education are explored to provide a
rationale for investigating the feasibility and impact of online courses in statistics.
Having grown out of technologies predating personal computers and the Internet,
distance education was initially concerned with utilizing correspondence approaches and,
later, videotaped materials to conduct courses (U.S. Department of Education, 2009).
While previous research has demonstrated that even these relatively low-tech
instructional approaches can prove to be as effective as traditional face-to-face classes,
the advent of personal computers, the Internet, and course management systems like
Blackboard have provided educators with a diverse, new portfolio of teaching aides and
forced researchers to revisit the topic of distance education (U.S. Department of
Education, 2009).
Advocates of online distance education courses have begun to cite a wide array of
studies espousing the belief that distance education programs may not only be equivalent
to traditional face-to-face classrooms, but under the right circumstances may actually
prove to be superior to face-to-face courses (Chua & Lam, 2007; Hurlburt, 2001; Ocker
& Yaverbaum, 1999; Shanker & Hu, 2006; Suanpang & Petocz, 2006; U.S. Department
of Education, 2009; Zhang, 2002). Current research into the opportunities presented by
online learning has highlighted a variety of metrics to examine besides traditional student
learning outcomes. Specifically, six measures have been regularly invoked by researchers
to justify investments in the development of online courses. While not uncontested,
research has begun to suggest that online statistics courses may be superior to traditional
classes in terms of: (1) student learning outcomes, (2) study time and pacing, (3) course
retention, (4) student collaboration, (5) student satisfaction, and (6)
convenience/accessibility.
Justifications for Online Courses #1: Learning Outcomes
Of all the metrics cited, student learning outcomes remain one of the most
actively debated topics of research due to the inconsistency of research findings
published to date. On one hand, individual studies and larger meta-analyses have
documented fairly robust effects suggesting that students in online courses outperform
their peers in traditional face-to-face conditions (Suanpang & Petocz, 2006). In their research, the U.S. Department of Education (2009) noted that across 46 diverse studies
conducting direct comparisons of online versus traditional face-to-face courses, 22% of
the examined effects were statistically significantly superior in the online courses. In
contrast, only 4% of studies suggested that traditional courses were superior. The latter
finding, the authors noted, may be cautiously dismissed as resulting from Type I errors, or false positives, emerging from repeated testing across 51 effect size contrasts; at an alpha level of .05, roughly 5% of contrasts (two to three of the 51) would be expected to reach significance by chance alone.
Despite the robustness of the U.S. Department of Education’s and others’
findings, the assertion that online education results in higher student learning outcomes
has faced noteworthy challenges, with a number of studies documenting no significant
differences in students’ learning outcomes between online and traditional classes
(Yatrakis & Simon, 2002). For example, in a review of research on the equivalency of
online courses to traditional formats, Russel (2001) compiled a total of 355 publications
in which no differences in student learning outcomes were found between distance and
face-to-face classes.
One possible explanation for these conflicting findings regarding student learning
outcomes may be that student outcomes result from a complex interaction of mediating,
moderating, and confounding factors not easily coded into meta-analytic techniques or
accounted for across reviews. For instance, in a meta-analysis of 39 studies with a total of 71,731 observations, Allen et al. (2004) noted that despite finding small effects favoring
online education, such findings must be interpreted with caution due to the likely
presence of confounding variables and the detection of heterogeneity among the studies’
effects.
Despite the conflicting nature of these findings, there seems to be some emerging
degree of consensus that online courses, when conducted with adequate attention to
pedagogy, are at worst only equivalent to traditional classes in terms of student learning
outcomes. One of the equally important findings to emerge from the existing literature is
the lack of evidence that online classes result in inferior student learning outcomes. This
is an important consideration, as even in those instances where online student learning is
merely equivalent to that in traditional classes, there remains the possibility that online
classes will prove superior to face-to-face courses in terms of other important dimensions
like accessibility or student satisfaction, as discussed below.
Justifications for Online Courses #2: Study Time / Pacing of Material
Some have noted that posting materials such as video- or text-based resources
allows students to revisit content as needed, tailor the pacing of their education to their
individual needs, and in many cases spend more time overall with their course contents
(Suanpang & Petocz, 2006; U.S. Department of Education, 2009; Zhang, 2002). While
encouraging greater student engagement with study materials is generally a laudable goal,
it should be noted that differences in the time-on-task intrinsic to online versus traditional
face-to-face curriculum have obfuscated direct comparisons of learning outcomes between
the two course types. For instance, in cases where online courses require additional study
time or provide more study aides than traditional courses, differences in student learning
outcomes may not be attributable to course format, per se, but rather to students’
exposure to additional instruction time and educational resources in non-equivalent
online courses. While the U.S. Department of Education (2009) was careful to discuss
this limitation in their recent meta-analysis of student learning outcomes, this issue has
long been suggested by prior research (Clark, 1983).
To date, the current body of literature addressing time-on-task has largely been
devoted to direct comparisons of non-equivalent courses. Given that time-on-task should
vary as a function of the quantity of learning materials made available to students, a more
relevant research issue may the comparison of time-on-task in conditions where online
and traditional classes have been matched in terms of the quantity and types of learning
materials made available.
Justifications for Online Courses #3: Course Retention
While some authors have cautioned that online courses may be inappropriate for
specific subsets of students such as those with low levels of self-regulation (McMahon &
Oliver, 2001) and that retention rates vary as a function of student satisfaction (Artino,
2008), increasingly retention rates are being cited as a line of support for online learning
(Chua & Lam, 2007). The argument is that as an alternative course format developed, in
part, to serve students who may already have withdrawn from traditional school settings
(U.S. Department of Education, 2009), online learning may be ideally suited to certain
populations. For instance, as evidence of the efficacy of Universitas 21 Global, a recently established
online university, Chua and Lam (2007) noted that the program had maintained a 95%
retention rate. In addition, in other studies with lower overall retention rates, researchers
have found online courses to be at least equivalent to face-to-face courses in terms of
student retention (Hurlburt, 2001).
Justifications for Online Courses #4: Meaningful Student Interaction
Online collaboration tools including e-mail, chat features, and discussion boards
have been touted as a means of promoting meaningful student interaction and fostering
the formation of learning communities outside of traditional classroom settings (Wang,
Sierra, & Folger, 2003; Woods & Ebersole, 2003). The positive influence of online
collaborative tools has even been observed to transcend the virtual or distance setting by
enhancing in-class discussions (King, 2001).
Despite these claims, concerns remain that the lack of face-to-face interaction
provided by online class formats may result in feelings of detachment by both students
and faculty. In such situations, not only may students disengage from materials, but
faculty may also find it difficult to develop rapport with individual students and assess
their unique strengths and weaknesses (Zhang, 2002). It may not be surprising, then, that
the frequency of asynchronous communications and timeliness of teacher responses to
communication tools like e-mails have been found to predict both student learning
outcomes as well as class satisfaction (Newlin & Wang, 2002).
Justifications for Online Courses #5: Student Satisfaction
While the literature on student satisfaction has been generally supportive of online
class formats in terms of overall course ratings (Suanpang & Petocz, 2006), levels of
satisfaction have varied substantially from study to study. For instance, in one of the few
studies to date employing both psychometrically sound instruments and control measures,
Summers, Waigandt, and Whittaker (2005) documented pronounced student
dissatisfaction in an online statistics course. In other cases direct comparisons to face-to-
face instruction have suggested that there may be no difference between the online and
face-to-face formats (Spooner, Jordan, Algozzine, & Spooner, 1999).
In making sense of these seemingly conflicting findings, it should be noted that
even in studies favoring online instruction, student satisfaction has been found to be
dependent on other factors including volume and promptness of teacher communications
(Newlin & Wang, 2002), task value, self-efficacy and perceived instructional quality
(Artino, 2008), and the use of asynchronous versus synchronous communication (Ocker
& Yaverbaum, 1999). To the extent that courses vary in any of the above, levels of
student satisfaction should vary as well. Further, students’ satisfaction with online
education may depend not only upon their academic experiences and personal attributes,
but also upon outside perceptions of the quality and comparability of their training
relative to that of their peers receiving face-to-face instruction (Chua & Lam, 2007).
Justifications for Online Courses #6: Convenience & Accessibility
Proponents of online education frequently cite evidence that both teachers and
students find online courses to be more convenient than traditional courses (Ocker &
Yaverbaum, 1999; Shanker & Hu, 2006; Zhang, 2002). This is particularly true of
asynchronous technologies, which, by their very nature, empower students to create their
own individualized instruction schedules (U.S. Department of Education, 2009). As a
result, online learning may prove especially beneficial for students with outside
obligations or logistic constraints involving family, work, or transportation, an
observation supported by student feedback regarding course satisfaction (Ocker &
Yaverbaum, 1999; Summers, Waigandt, & Whittaker, 2005).
Researchers have also noted that the removal of logistical constraints such as
those imposed by scheduled class times and campus locations means that online courses
may be accessible to larger audiences, some of whom may otherwise not have been able
to access higher education (Summers, Waigandt, & Whittaker, 2005; Zhang, 2002).
Justifications for Online Courses: Summary
While evidence is emerging that online courses may be able to contend with face-
to-face courses in terms of student learning outcomes, study time and pacing, course
retention, student collaboration, student satisfaction, and convenience/accessibility, it
remains important to stress that many of these claims remain disputed. Given online
education’s potential to serve non-traditional students including those with outside
obligations, those who have withdrawn from traditional face-to-face courses, and those
who may be predisposed to thrive in online communities due to any of the
aforementioned covariates, the debate over online education’s efficacy should not be seen
as disheartening to online proponents. Rather, the conflicting findings above, repeated
identification of mediating/moderating factors, and potential demonstrated in the positive
studies above all serve to underscore the need for additional, methodologically rigorous
studies into the impact and potential of online education.
Taken together, the previous two sections of Chapter Two document a parallel,
growing interest in both statistics education and in online learning. It may be unsurprising
then that current research has taken both trends into account and has begun to explore the
opportunities presented by online statistics courses. In the following section, the available
literature on online statistics courses is critically examined in an effort to illustrate both
gaps in the existing knowledge base and the methodological shortcomings of studies
conducted to date. These considerations, in turn, provide a rationale for the research
questions proposed in this study.
Online Education and Statistics
Statistics programs have certainly shared in the mounting interest surrounding online
learning courses, as, perhaps more so than many other academic fields, computer
technology has radically altered the way in which statistics courses are taught (Chance,
Ben-Zvi, Garfield, & Medina, 2001; Moore, Cobb, Garfield, & Meeker, 1995). In the
case of statistics, developing computer technologies have not only allowed for new
teaching pedagogies, but they have also fundamentally altered the type of information
that can and should be taught (Meletiou-Mavrotheris, 2004; Everson, Zieffler, &
Garfield, 2008). For instance, calculations such as correlation matrices that were once
prohibitively time-consuming to conduct in larger datasets can now be produced with
simple commands in statistical software packages. More broadly, this reflects a shift from
emphasizing the computational aspects of statistics as a field of mathematics to
emphasizing statistical thinking, a concept that stresses reasoning and application over
computation or procedure (Everson, Zieffler, & Garfield, 2008; Garfield, Hogg, Schau, &
Whittinghill, 2002).
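To give one concrete sense of this shift, a full correlation matrix such as the one reported later in Table 7 can now be produced with a single command in most environments. The sketch below uses Python and pandas with invented data and hypothetical variable names; in the SPSS/PASW software used in courses like the one studied here, the CORRELATIONS procedure serves the same purpose.

    # A correlation matrix in one command (invented data, hypothetical names).
    import pandas as pd

    scores = pd.DataFrame({
        "anxiety":      [3.1, 2.4, 4.0, 1.8, 2.9],
        "math_courses": [2,   4,   1,   3,   2],
        "final_grade":  [81,  90,  72,  88,  79],
    })
    print(scores.corr())  # pairwise Pearson correlations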
However, as a result of the relatively recent adoption of classroom technologies in
statistics courses, there remain several notable gaps in the existing literature base
regarding best practices for technology use. Specifically, researchers and proponents of
online statistics courses face five major challenges for justifying wider scale
implementation: (1) the overall lack of empirical research related to online statistics
courses, (2) a growing body of contradictory findings that have emerged from existing
empirical works, (3) methodological shortcomings in the current body of research, (4)
integration of findings from past research conducted in traditional face-to-face statistics
courses, and (5) logistic challenges stemming from the cost and feasibility of creating
online classes like those used in existing studies. As a result of these five gaps in the
existing literature base, it remains difficult to draw substantive conclusions regarding the
merits of online learning compared to traditional courses in statistics.
Challenges to Online Learning in Statistics: #1 Lack of empirical research
Of the five gaps in the existing literature base on online statistics courses, the most
pressing is the need for more empirical research on the efficacy of technologies like those
used in online statistics classes (Becker, 1996; Meletiou-Mavrotheris, 2004). The
remaining four gaps in many ways simply reflect empirical questions that have yet to be
adequately addressed by researchers. For instance, the on-going debate over contradictory
findings in the existing literature will likely be resolved, in part, through meta-analytic
techniques. A larger body of rigorously designed, empirical studies is simply required for
analysis.
Although empirical research on concepts from statistics is common across a wide
range of academic disciplines, within the field of education the literature remains limited.
To date, a disproportionate number of education publications have been dedicated to
statistics instructors sharing craft knowledge, describing teaching technologies, or
espousing teaching pedagogies without first subjecting them to rigorous evaluation
(Becker, 1996; Meletiou-Mavrotheris, 2004). In fact, research has suggested as few as
30% of existing articles concerning the instruction of statistics have used empirical
approaches (Becker, 1996).
The absence of a broader pool of rigorous experimental and quasi-experimental
research on online statistics education has, in turn, stymied efforts to document student
learning outcomes in online statistics classes, delayed the integration of findings and
methodologies from other fields of research, and limited evaluations to curriculum that
may not be feasible to recreate in most school settings.
Challenges to Online Learning in Statistics: #2 Contradictory Findings
Just as the broader research conducted on online education as a whole has yielded
differing opinions regarding the efficacy of online curriculum, so too has that research
which has been focused specifically on statistics. To date, the limited number of studies
and differences in their methodologies make it difficult to make direct comparisons
between online and face-to-face statistics courses. However, a few metrics such as
student grades and satisfaction have been consistently assessed and thus may be
compared. As can be seen from Table 1, both grades and student satisfaction varied from
study to study. Some studies have reported equivalent values, while other studies have
favored either face-to-face courses or online courses. For example, in a study by Shanker
and Hu (2006), student satisfaction was found to be statistically significantly higher in the online statistics class, a finding that conflicts with research by Hurlburt (2001) and
Summers, Waigandt, and Whittaker (2005). So far, no clear pattern has emerged from the
existing research.
Table 1: Comparison of Outcomes in Online to Face-to-Face Statistics Courses

                    Grades                        Student Satisfaction
Study               Face-to-Face    Online        Face-to-Face    Online
Grandzol, 2004      Inferential statistics not reported
Hurlburt, 2001      Equivalent                    X
Shanker, 2006       Not Reported                                  X
Suanpang, 2006                      X                             X
Summers, 2005       Equivalent                    X

‘X’ indicates higher reported values
In the same vein, even within individual studies conflicting findings have
emerged. For instance, in a study by Grandzol (2004), statistically significant differences
were found on some exams (the final) but not on others (the midterm), a finding that the
authors were unable to explain. This finding was further complicated
by the fact that mean scores were not reported, so it remains unclear which condition
outperformed the other.
Ultimately, these contradicting findings not only speak to the need for additional
research, but also, equally importantly, underscore the importance of considering the
research methodologies in the existing studies. As will be shown in the following section,
the methodological shortcomings of the existing research may well explain these
conflicting results.
Challenges to Online Learning in Statistics: #3 Methodological Shortcomings
Given the limited number of empirical studies directly comparing online to face-
to-face statistics classes and the conflicting reports of students’ learning outcomes in
these studies (see Table 1, page 30), close examination of their respective research
methodologies is warranted. Such examination is critical, as methodological errors such as
failing to consider confounding variables, employing study designs that are susceptible to
bias, or failing to standardize research elements can lead to type I and II errors or
distorted effect sizes. In the presence of contradictory student learning outcomes across
studies, then, it seems reasonable to establish whether these conflicting findings can
simply be attributed to differing methodological shortcomings.
Examining three possible methodological problems (non-equivalent group
comparisons, failure to measure or control for bias, and failure to include statistical
controls) in the research directly comparing online to face-to-face statistics classes
reveals that all three problems are prevalent in the existing literature (see Table 2).
Table 2: Methodologies Used to Compare Online to Face-to-Face Statistics Courses

                  Course Equivalency        Bias                                                          Statistical
Study             Content  Time on Task     Participation Rate*   Demographic Comparison  Grader Blinding  Controls
Grandzol, 2004    -        -                -                     -                       -                -
Hurlburt, 2001    -        -                15%-53% (173/52)      X                       -                -
Shanker, 2006     X        -                48-84% (113/1027)     -                       X                -
Suanpang, 2006    -        -                26.9% (121/108)       -                       -                -
Summers, 2005     X        -                (21/17)               -                       X                -

* (# traditional/# online)
Methodological Shortcoming #1: Non-equivalent Comparison Groups
Currently one of the primary difficulties in interpreting existing comparisons
between online and face-to-face statistics classes is that the comparison groups differ not
only in terms of the way the course content is delivered (online versus face-to-face), but
also in terms of many other unintended, and sometimes undocumented factors. As shown
in Table 2, only two of the five studies making direct comparisons of online to face-to-
face statistics courses used equivalent content in the two course formats. Further, none of
the studies made any effort to measure or control for differences in the amount of time
students spend with the course content.
To illustrate the impact of non-equivalent course content it may be edifying to
consider one of the studies in more depth. For example, in a comparison of students
enrolled in online or face-to-face sections of an introductory MBA statistics course,
Grandzol (2004) noted that students in the online condition were assessed more
frequently and with greater intensity than face-to-face students. Research has shown that
differential exposure to assessments like this can result in a testing threat to internal
validity (Valente, 2002). In effect, students’ scores may increase simply by virtue of their
getting more practice or experience with the assessments. As a result, it remains unclear
whether to attribute differences in student performance in the two conditions to
differences in the curriculum or rather to differences in the assessment protocols.
A similar concern arises when traditional face-to-face classes taught by one
professor are compared to online classes taught by a different faculty member. For
example, in a study of an online business statistics course conducted in Thailand,
Suanpang and Petocz (2006) noted that a significant limitation of their study was that
different faculty taught the face-to-face and online courses that were compared. Further
complicating comparisons, the resources provided to the online and face-to-face students
as study materials appeared to differ. As a result, it remains unclear whether the observed
program effects were attributable to the curriculum or to differences in teaching style or
the study aids provided.
One way to avoid these concerns in future studies will be, wherever possible, to
standardize the delivery of lecture content through the generation of scripts/transcripts,
the use of a single instructor across all classes assessed, and the provision of identical
assessments and study aids in both online and face-to-face classes. In addition, the
amount of time students spend on task may be measured and employed as a statistical
control in analyses to ensure that differences reported are not simply an artifact resulting
from students spending more time with material in one course format or in the other.
Methodological Shortcoming #2: Experimenter Expectancy Effects
Another concern with many of the existing studies on online statistics courses is
the potential for experimenter expectancy effects. Research on experimenter expectancy
effects reveals that researchers’ expectancies can distort their perceptions of their data
(Rosnow & Rosenthal, 2005), a finding that is troubling when the experimenters are also
responsible for generating data such as student grades. In order to minimize bias
introduced in this manner, blinding of researchers to subjects’ study condition is often
employed (Rosnow & Rosenthal, 2005). However, Table 2 (page 31) shows that less than
34
half the studies used grader blinding. As a result it remains unclear what impact, if any,
researcher expectations may have made on student outcomes.
Methodological Shortcoming #3: Failure to Conduct Demographic Comparisons
Because random assignment of students to online or traditional class conditions is
not feasible in real school settings, there is an unavoidable risk posed by self-selection or
volunteer biases. Volunteer and self-selection biases arise when participants opt into a
condition or study on the basis of one or more characteristics they possess. These
characteristics, in turn, differentiate these volunteers from the general population and can
result in their responding in a manner unlike the general population (Rosnow &
Rosenthal, 2005).
This is an especially troubling concern in the assessment of online statistics
courses, because it seems reasonable that some of the plausible characteristics of students
opting to take a statistics course online could impact outcomes such as grades or
satisfaction. For instance, in the event that students enrolling in online classes have more
experience with computers, it may be reasonable to expect that these students could have
an advantage on computer-based assignments such as SPSS labs. In the event that self-
selection or volunteer biases are occurring, it remains unclear whether differences in
student outcomes are attributable to the course format, or rather, to the different
underlying composition of students in online versus face-to-face classes.
This concern may be addressed in two ways. First, by formally comparing the
demographic compositions of students in online versus face-to-face classes, researchers
can begin to detect whether these biases may be operating in their study.
As shown in Table 2 (page 31), only one study to date has conducted such a comparison
(Hurlburt, 2001). It should be noted as well that even in this one instance an extremely
limited number of characteristics were examined.
In the presence of significant differences in the characteristics of students in the
two conditions, the next step would be to control for these differences using statistical
techniques prior to making direct comparisons of online and face-to-face classes. While
statistical controls are generally regarded as inferior to methodological controls, because
random assignment to class condition is not feasible, this may be the best alternative. To
date, no studies comparing online to face-to-face statistics courses have included control
variables in their analyses (see Table 2, page 31).
Methodological Shortcoming #4: Failure to Consider Participation Rates
Another potential source of volunteer bias emerges from students’ voluntary
participation in course assessments. Looking at Table 2 (page 31), the participation rates
of some of the classes examined were as low as 15%. In the presence of substantial non-
participation, it remains unclear if reported program effects are a result of actual
differences between the online or traditional face-to-face formats, or rather to differences
in the characteristics of the students who volunteered to participate. Further, in the event
that only 15% of students provide data, it remains unclear whether any effects, if
detected, would hold true for the other 85% of students who did not participate. In this
instance, the generalizability of the study would be severely compromised.
One methodological measure that can be put in place to address this is simply to
collect and compare the demographic characteristics of volunteers and non-respondents.
Significant differences between the two groups would serve as indicators of possible self-
selection biases and highlight the need for statistical controls during analysis.
Challenges to Online Learning in Statistics: #4 Failure to Integrate Known and Potential
Predictors of Statistics Learning
While the evaluation of online learning curriculum in statistics continues to face
significant hurdles, in the past 30 years a rich body of research has been developed in an
effort to predict and foster academic success in traditional face-to-face statistics courses.
This research in part grew in response to observations that students’ grades are often
lowest in statistics courses (Onwuegbuzie & Seaman, 1995) and that such courses can
serve as gatekeepers or weeder courses preventing students from pursuing certain degree
tracks (Onwuegbuzie, 1998). From this literature, several important biopsychosocial
covariates have emerged. For ease of discussion these covariates can be roughly
categorized as demographic characteristics, psychological factors, time constraints, or
previous skills/competencies.
In the case of demographic characteristics, notable gender differences in statistics
achievement have been observed across a number of studies and examined in detail with
meta-analytic techniques (Schram, 1996). Research has also begun to parse the nature of
gender’s impact on performance with gender predicting mediating factors such as
students’ anxiety and learning strategies they employ in statistics courses (Rodarte-Luna
& Sherry, 2008). Similarly, research has highlighted the importance of logistic constrains
such as such as English language proficiency on statistics performance (Lesser & Winsor,
2009). Considering that statistics, a field rife with unique terminology, has been
compared to learning a foreign language (Lalonde & Gardner, 1993 as cited in
Onwuegbuzie, 2003), it may be unsurprising that students’ language proficiency may
impact subsequent performance.
As to the impact of psychological traits on statistics achievement, self-efficacy,
achievement expectancies, attitudes, and value have all been examined closely in past
research. For instance, the importance of domain-specific self-efficacy has been
documented across a number of fields including statistics (Finney & Schraw, 2003).
Similarly, achievement expectations have been found to not only coincide with levels of
self-efficacy, but also to serve as powerful predictors of student achievement in statistics
and related courses in their own right (Bandalos, Finney, & Geske, 1999; Onwuegbuzie,
Slate, Paterson, Watson, & Schwartz, 2000; Onwuegbuzie, 2003). In the case of value,
students’ perceptions of the value of their training have proven predictive of both student
motivation and performance in related fields such as math as well as statistics (Acee &
Weinstein, 2010; Garfield, Hogg, Schau, & Whittinghill, 2002; Schau, Stevens,
Dauphinee, & Del Vecchio, 1995). And finally, the importance of attitudes, both towards
computers (Hsu, Wang, & Chiu, 2009) as well as statistics is reflected in the number
scales (for examples see Roberts, & Bilderback, 1980; Schau, Stevens, Dauphinee, &
Del Vecchio, 1995; Wise, 1985) dedicated to measuring this construct as well as research
documenting its relationship with student performance (Tempelaar, Schim, & Gijselaers,
2007).
Regarding time constraints, employment, serving as a caregiver, and overall course
load have all been discussed in the broader context of achievement in statistics courses
(Onwuegbuzie, 2003). As online courses are often touted for their ability to allow
students to create their own schedules (U.S. Department of Education, 2009), the time
constraints faced by students seeking out online training seem a reasonable avenue for
examination.
And finally, research has documented the role of prior experience with math,
computers (Sitzmann, Kraiger, Stewart, & Wisher, 2006), or statistics courses
(Onwuegbuzie, 2003) in students' statistics performance, so measures addressing each
were included in the present study.
As well-documented predictors of academic outcomes for statistics students, the
potential impact of each of the above covariates bears consideration in efforts to assess
the efficacy of online learning courses in statistics. Because students have self-selected to
enroll in online or traditional format classes in the preponderance of studies conducted to
date, and the above research has clearly documented the impact of a variety of covariates,
the potential for spurious findings resulting from self-selection biases is quite high.
Challenges to Online Learning in Statistics: #5 Prohibitive Cost and Difficulty of Creating
Online Statistics Classes
The cost effectiveness of online education programs remains an open field of
discussion for practitioners and researchers (Fini, 2009; Gordon, He, & Abdous, 2009;
Hurlburt, 2001; Lips, 2010). In part, this remains an active topic as the actual cost of
online education remains a moving target. The costs associated with online courses
depend on changing factors like the actual technologies employed, the shift from
initial program development costs to long-term maintenance, and the impact of instructors
learning to create curriculum more efficiently or with the aid of off-the-shelf technologies
(Gordon, He, & Abdous, 2009; Sitzmann, Kraiger, Stewart, & Wisher, 2006).
Despite the difficulties of pinning down a point estimate of the cost associated
with online classes, anecdotal evidence suggests that the cost of implementation can be
prohibitively high for some educational settings. For instance, in one study the total time
spent developing a semester's worth of statistics content came to 3,365 hours, or 1.7 years
of dedicated, uninterrupted 40-hour workweeks (Hurlburt, 2001). Likewise,
the raw dollar cost of even a single hour of multimedia instructional materials can climb
as high as $50,000 (Eamon, 1999).
The available research on online statistics courses also points to a second barrier
to wide-scale implementation. In some past studies the authors have built their course
materials using custom-designed materials that an average professor likely could not
replicate (for an example see Hurlburt, 2001). Most of the existing research has not
addressed the efficacy of off-the-shelf technologies.
Summary
With the demand for quantitative analytic skills on the rise across the academic,
public, and private sectors, educators have been facing the challenge of significantly
expanding the scope and target audience of statistics classes. Despite these external
pressures, however, significant gaps in the literature on statistics instruction pedagogies
remain.
In particular, research on online statistics instruction has been slow to gain
traction. Proponents and researchers of online learning have noted that in other academic
disciplines online classes have demonstrated superiority to traditional face-to-face
courses across several domains (Chua & Lam, 2007; Hurlburt, 2001; Ocker &
Yaverbaum, 1999; Shanker & Hu, 2006; Suanpang & Petocz, 2006; U.S. Department of
Education, 2009; Zhang, 2002). Not only have some studies suggested that student
learning outcomes are superior in online classes (Suanpang & Petocz, 2006; U.S.
Department of Education, 2009), but they have also pointed to such programs’ greater
accessibility and convenience (Ocker & Yaverbaum, 1999; Shanker & Hu, 2006;
Summers, Waigandt, & Whittaker, 2005; U.S. Department of Education, 2009; Zhang,
2002). On the basis of these claims, it seems that online statistics courses may be one
means of addressing the growing demand. However, additional research is still required
to justify the use of online curriculum in statistics courses and to document their efficacy.
To date the existing comparisons of traditional face-to-face to online statistics
courses have suffered from substantial, pervasive methodological shortcomings that
severely limit the substantive conclusions that can be drawn. Specifically, the existing
research has fallen prey to one or more of five concerns: (1) the overall lack of
empirical research related to online statistics courses, (2) a growing body of contradictory
findings that have emerged from existing empirical works, (3) methodological
shortcomings in the current body of research, (4) a failure to integrate findings from past
research conducted in traditional face-to-face statistics courses, and (5) logistical
challenges stemming from the cost and feasibility of creating online classes like those
used in existing studies.
The present study represents an attempt to address these methodological concerns.
While the study will in some ways serve as a replication of the previous literature
comparing traditional face-to-face statistics courses to online courses, it will introduce a
number of methodological and statistical controls meant to address the aforementioned
concerns raised by the previous literature.
In Chapter Three an emphasis is placed upon describing the process of creating
the proposed curriculum, the methodological controls in place to address expected
sources of bias, and the statistical analyses to be undertaken to identify relevant control
variables and measure those biases that cannot be methodologically or statistically
controlled.
Chapter Three
Introduction
Given the growing demand for statistics training and mounting interest in online
education, the marriage of these two may seem like an obvious decision for educators.
Online courses could represent a means of meeting increased demand for statistics
training. However, the existing literature on both online education and online statistics
training remains contentious; few studies seem to agree on whether online education is
warranted by the existing data on factors such as students’ grades, course retention,
meaningful collaboration, or student satisfaction. Moreover, the inconsistencies observed
in past research may be attributable, to a large degree, to methodological shortcomings in
the literature base. Specifically, the existing research designs have frequently employed
comparisons between non-equivalent courses, failed to consider self-selection or
volunteer biases, neglected to address experimenter expectancy effects, overlooked
substantial attrition and non-response rates, and forgone the inclusion of control measures
for known predictors of academic outcomes in statistics (see Table 2, page 31).
The present study sought to address these issues by introducing a number of statistical
and methodological controls. After taking these methodological and statistical concerns
into account, the study sought to make two important contributions to the existing
literature on online statistics education.
First, at present no direct comparisons of online and face-to-face statistics courses
have included covariates. At the same time, the broader literature on statistics education
has documented the effects of several important covariates, which predict student
outcomes in statistics classes. The result is that a wide array of potential covariates is
available to researchers, but there is little theoretical basis upon which to select
covariates from this pool. By considering a wide array of demographic characteristics,
psychological factors, time constraints, and previous knowledge simultaneously, the
current study took the first steps towards developing a theoretical foundation for selecting
covariates for future research. While data-driven approaches must be approached
cautiously, this study represented a needed first step towards theory-building.
Second, the existing direct comparisons of online and face-to-face statistics courses
have yielded an unsatisfactorily heterogeneous and conflicting body of findings. Some of
the conflicting findings regarding the efficacy of online education may be explained, in
part, by these studies’ methodological shortcomings. The present study thus sought to
determine whether online statistics training is warranted by students’ attrition, grades,
pass rates, and satisfaction after carefully monitoring previously unaddressed sources of
bias and implementing a number of methodological and statistical controls. As a result
this study represented a first step towards clarifying the conflicting results found in the
literature to date.
Problem and Purposes Overview
Research Questions
This research sought to address the following research questions:
1. Do students who enroll in online introductory statistics courses vary from
those who enroll in traditional classes in terms of demographic
characteristics, psychological factors, time constraints, or previous
knowledge?
2. Which of the demographic characteristics, psychological factors, time
constraints, or previous knowledge domains found to differ in Research
Question One predict student outcomes (attrition, grades, pass rates,
satisfaction)?
3. After statistically controlling for the statistically significant variables
identified by Research Questions One and Two above, do student
outcomes (attrition, grades, pass rates, satisfaction) vary between online
and face-to-face statistics courses?
Research Hypotheses
Research Question One. Due to the scarcity of methodologically rigorous studies
examining the possibility that students enrolling in online statistics courses differ from
face-to-face students in terms of demographic characteristics, psychological factors, time
constraints, or previous knowledge, a priori hypotheses regarding the outcome of
Research Question One were not tenable. As a result, a primarily a posteriori, data-driven
approach was adopted in hopes of generating theory to gird future studies.
Research Question Two. As the outcome of Research Question One directly impacted
the variables entered into analysis for Research Question Two, a primarily a posteriori,
data-driven approach was adopted in order to generate new theory.
Research Question Three. While the lack of previous research and theory needed to
address Research Questions One and Two limited hypotheses as to which variables
would be entered as covariates in the final model, the broad literature base documenting
no differences in online and face-to-face curriculum (Russell, 2001; Yatrakis & Simon,
2002) or the superiority of online curriculum (Suanpang & Petocz, 2006; U.S.
Department of Education, 2009) suggested that after controlling for relevant covariates,
student outcomes in the online class section would be equivalent or superior to those of
students in the traditional face-to-face condition.
Population and Sample
In order to assess the impact of online versus traditional classroom formats on
undergraduate students in introductory statistics courses, a convenience sample was
drawn from California State University, Fullerton (CSUF) students enrolled in HESC
349, Measurement and Statistics in Health Science. It was anticipated that students would
be recruited from two online sections taught during the Fall 2010 and Spring 2011
semesters as well as from two traditional face-to-face class sections taught over the same
timeframe.
A total of 144 students (36 per class section) were expected to enroll in the four class
sections surveyed. Students were excluded from the study on the basis of refusal to
participate, having taken a previous statistics course, or on the basis of response
inconsistencies or substantial missing data detected during data cleaning. Analyses
tracked the percentage of students who provided data, and the percentage of students
excluded from analysis for any reason was recorded.
Demographic characteristics of the online and traditional course students were
collected for baseline comparisons to test for possible sources of self-selection biases.
See Table 3 (page 46) for an example of the demographic variables that were to be
assessed, illustrative fictitious data, and proposed analyses.
Table 3: Fictitious Example Demographic Characteristics of Traditional versus
Online Students

                                      Course Format
                                      Traditional(a)     Online(a)      p-value(b)
n                                     72                 72
Age                                   19 (1.24)          27 (2.45)      0.03*
Sex                                                                     0.28
  Male                                32 (44%)           40 (56%)
  Female                              40 (56%)           32 (44%)
Race
  White (Caucasian)
  Black or African American
  Chinese or Chinese American
  Latino or Hispanic
  Pacific Islander
  Vietnamese or Vietnamese American
Year
  Sophomore
  Junior
  Senior
Major
  Health Science
  Nursing
  Other
Employment Status
  Not working
  Part-time (≤ 20 hours/week)
  Full-time (> 20 hours/week)
Computer Literacy
Past Math Performance
Past Computer Science Exposure
Attitudes Toward Computers
Attitudes Toward Statistics
Statistical Software Self-Efficacy
Statistics Anxiety

(a) Reported as M(SD) or n(%) unless otherwise noted.
(b) Reported for independent samples t-tests or chi-square tests of independence, as appropriate.
*p < .05. **p < .01. ***p < .001
Online and Face-to-Face Curriculum
HESC 349 serves as a required course for Health Science majors and minors
(Department of Health Science, 2005) and also meets a general education requirement for
students from other departments. Courses consist of a 15-week curriculum comprising
13 lectures, a midterm examination, and a final examination. In order to minimize
differences in student learning outcomes attributable to the quantity or quality of
instructional resources provided or time spent on task, efforts were made to make the two
class conditions comparable to one another in every aspect except that the online course
lectures were conducted asynchronously through online media. In order to clarify
similarities and differences between the two curriculums, lecture, computer labs, office
hours, homework, and exams are discussed separately below.
Lecture. Students in both conditions were provided with one to two hours of
instructional lecture per week. Instruction was supported by a PowerPoint presentation
displayed concurrently with the lectures. For the two class sections the PowerPoint slides
used were substantively identical.
Whereas face-to-face students experienced lectures in person during class, online
students accessed lectures asynchronously as web-based videos. Articulate Presenter ’09
was used to create Flash videos presenting the PowerPoint slides with the instructor’s
narration. In order to minimize the impact of biases attributable to different professors’
teaching styles, all four classes surveyed in this dissertation were to be taught/narrated by
the author. Similarly, audio files recorded by the author while teaching a previous face-
to-face section of HESC 349 were to be used as the basis for the online class video
narrations. In effect, the narrative presented in both online and face-to-face conditions
was as close to identical as feasibly and logistically possible.
Besides the video format, lecture in the online condition varied from the face-to-face
component in two important ways. First, due to the asynchronous nature of video
postings, students in the online condition were not able to ask questions in real time.
Questions and clarification in the online class condition had to be requested through e-
mail, office hours, or message boards. In order to emulate face-to-face instruction in
which all students may benefit from hearing answers to their peers’ questions, responses
to all electronic questions were posted for the entire class.
In addition, the online and face-to-face lectures varied in the way students were
engaged with questions mid-lecture. In the face-to-face lectures, questions had already
been built into the slides. These questions were typically presented mid-lecture and used
as opportunities to clarify concepts, engage students in small group discussions, and
provide students with feedback on their understanding.
In the online class this sort of dynamic, real-time discussion was not logistically
feasible in the current study. As a result two separate types of questions were embedded
into the online curriculum. First, in order to provide students with feedback on their
understanding, students encountered short multiple-choice questions that had to be
completed in order to proceed through the PowerPoint. Second, in order to emulate small
group discussions and opportunities to clarify concepts, students were directed to short
answer questions posted on the class discussion boards. In both cases these questions
were designed to closely mirror the question content covered in the face-to-face courses.
Computer Lab. A total of 8 SPSS computer lab assignments were assigned to students
in both conditions. Assignments were designed to take 45 minutes to an hour and
consisted of a guided walkthrough by the professor followed by a replication of the
analyses by the students.
Students in the face-to-face courses had access to a PowerPoint presentation
displayed concurrently with the lab instruction. At points during the lab, the professor
switched to SPSS to illustrate techniques using actual data.
In the online condition students had access to the same visuals projected in the face-
to-face condition. This was accomplished by recording a video screen-capture of the
professor's computer using Camtasia for Mac. In the video, students experienced a
narrated walkthrough of the same portions of the PowerPoint presentation used in the
face-to-face condition and saw the professor's demonstrations of SPSS as well.
Office hours. A total of four hours of office hours will be offered each week in
support of the two class sections. Office hours will be split so that half of the office hours
will be held on campus in a department office, and half of the hours will be conducted
online through Blackboard’s office hours tool, a feature that resembles an Internet chat
room. Students in both the face-to-face and online condition will be invited to use either
office hours as needed.
Homework. While the 5 homework assignments assigned will be identical in terms of
content and deadlines, their grading warrants discussion. In order to minimize bias
introduced by instructor knowledge of students’ class condition, a single teaching
assistant who will be blinded to students’ class condition will conduct all grading of
homework assignments. Assignments will be graded on completeness alone with points
deducted on the basis of missing or incomplete answers or failure to follow instructions.
Exams. A total of 4 quizzes, a midterm exam, and a final exam will be administered
to students in both class sections. All examinations will be identical between class
conditions and graded by a teaching assistant who will be blinded to the students’ class
condition.
Due to concerns about exam security, the potential for cheating, and a desire to
control for testing location, both face-to-face and online students will be expected to take
the examinations in person on campus. In order to accommodate online students’ diverse
schedules, efforts will be made to provide a wide variety of proctored test times.
Data Collection and Instrumentation
Baseline demographic data and control variables will be collected from traditional
format students with a paper and pencil survey administered by the professor at the
beginning of the first class of the semester (see Appendix A). For online format students,
demographic data will be collected with an identical survey administered through
Qualtrics, an online survey tool. In both conditions informed consent will first be
collected. Participants will be provided with extra credit for their participation, which will
allow them to drop their lowest homework score, an extra credit mechanism used
successfully in the past as part of the normal statistics curriculum. In order to encourage
candid responses, paper and pencil surveys will be collected by a Teaching Assistant and
held confidentially in the Health Science Department until after semester grades are
submitted. Similarly, online responses will be delivered to a secure account, and the
author will not be provided with a password until after semester grades are submitted. At
the end of the semester students in both course conditions will be administered a short
student satisfaction questionnaire using the same methodology (see Appendix B).
In addition to the surveys, routinely recorded student data including grades and
attrition rates will be collected via records maintained through Blackboard and analyzed
in both course formats.
Student Outcomes
A total of 4 student outcomes will be tracked as measures of curricular success for the
online and traditional face-to-face formats: student grades, attrition, pass rates, and
satisfaction.
Student Outcome #1: Grades
In both course formats, students can earn a total of 1000 points for their performance
across 5 categories. Points are earned through the completion of 5 homework
assignments (each worth 50 points), 4 quizzes (each worth 50 points), a midterm exam
(250 points), a final exam (250 points) and either attendance or participation in an online
discussion board (50 points). With the exception of attendance/participation all
assignments and grading will be identical between the two course formats.
In both conditions examinations will be curved by setting each assignment’s total
possible points equal to the highest score in the class. In the case of significant outliers,
the next highest grade will be used. In addition, a single extra-credit opportunity will be
provided to allow students to have their lowest homework score dropped from their
ultimate grade calculation.
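To make the curving rule concrete, the following sketch shows one way the calculation
could be carried out (in Python, purely for illustration; it is not part of the course
materials). The three-standard-deviation cutoff used to define a "significant outlier" is an
assumption for the example, as the rule above does not specify one.

    import numpy as np

    def curve_scores(raw_scores, max_points, outlier_z=3.0):
        # Curve an assignment by treating the highest score as 100%.
        # If the top score is an extreme outlier (assumed: more than
        # 3 SD above the mean), the next-highest score is used as the
        # ceiling instead, per the rule described above.
        scores = np.asarray(raw_scores, dtype=float)
        ceiling = scores.max()
        mean, sd = scores.mean(), scores.std(ddof=1)
        if sd > 0 and (ceiling - mean) / sd > outlier_z:
            ceiling = np.sort(scores)[-2]
        return np.minimum(scores / ceiling, 1.0) * max_points

    # Example: a 250-point midterm where the best student earned 230/250.
    print(curve_scores([230, 210, 185, 172, 158], max_points=250))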
In an effort to minimize experimenter bias, grading for all assignments in both class
sections each semester will be conducted by a single Teaching Assistant who will be
blinded to the participants’ course format.
Student Outcome #2: Attrition
In both course formats, student attrition will be monitored through records maintained
through CSUF's Blackboard and Titan Online systems. Due to high rates of student
enrollment and withdrawal in the first two weeks of the semester, only student
withdrawals occurring after the second week of the semester will be counted as course
attrition.
Student Outcome #3: Pass Rates
At the end of the semester student grades will be used to generate pass rates. A grade
of C or higher is necessary to meet department requirements, so the proportion of
students obtaining C’s or higher will be calculated for both class conditions.
Student Outcome #4: Satisfaction
Student satisfaction will be assessed by means of a twenty-two-item survey
administered at the end of the semester. Fifteen questions were adapted from previously
published scales (Paechter, Maier, & Macher, 2010; So & Brush, 2008; Sun, Tsai,
Finger, Chen, & Yeh, 2008) to address students' satisfaction with the instructor,
learning experience, and course logistics. In addition, seven questions were developed by
the author to assess students' satisfaction with specific elements of the online learning
technologies used (see Appendix B).
Additional Measures
A total of 22 additional measures will be assessed as possible covariates and
predictors of student outcomes. These measures can be broadly classified as
psychological factors (7 measures), time constraints (4 measures), previous
skills/competencies (6 measures), and control variables (5 measures). For an overview of
the variables to be assessed see Figure 1 and Table 4 (page 54).
Figure 1: Detecting Covariates for use in Final Stepwise Regression Modeling
Table 4: Overview of Covariates/Predictors of Student Outcomes

Construct/Variable                      Measure to be Used                                # of Items
Psychological
  Achievement Expectations              Survey of Attitudes Toward Statistics (SATS-36)   1
  Attitudes towards Computers           Adapted from Harrison and Rainer, 1992            5
  Attitudes towards Statistics          Affect subscale, SATS-36                          6
  Math Anxiety                          Mathematics Anxiety Scale                         10
  Self-Efficacy (SPSS & Math)           Adapted from SATS-36                              2
  Self-Efficacy (Statistics)            Cognitive Competence subscale, SATS-36            6
  Statistics Value                      Value subscale, SATS-36                           9
Time Constraints
  Marital Status                        *                                                 1
  Caregiver Status                      *                                                 2
  Employment Status                     *                                                 1
  Units Taken                           *                                                 1
Skills/Competencies
  English Fluency                       Self-Reported Fluency of English Scale (SRFES)    3
  Computer Literacy                     *                                                 6
  Prior Exposure: Math                  Adapted from SATS-36                              2
  Prior Exposure: Computer Science      Adapted from SATS-36                              2
  Prior Performance: Math               SATS-36                                           1
  Prior Performance: Computer Science   Adapted from SATS-36                              1
Control Variables
  Gender                                *                                                 1
  Ethnicity                             *                                                 1
  Major                                 -                                                 -
  Grade Level                           -                                                 -
  Age                                   *                                                 1

- Data already available through Blackboard, Titan Online, or another source
* Written or developed for this study
Survey of Attitudes Toward Statistics (SATS-36): The SATS-36 is a 36-item self-
report measure that is used to assess six dimensions of students' attitudes towards
statistics: affect, cognitive competence, difficulty, effort, interest, and value (Schau, 2005).
Research on the 36-item scale and an earlier 28-item scale assessing only four domains
has documented the scales' criterion and construct validity, shown adequate reliability
across the subscales (Cronbach's alpha 0.64-0.88), and found the factors to be stable
across gender (Dauphinee, Schau, & Stevens, 1997; Finney & Schraw, 2003; Schau,
Stevens, Dauphinee, & Del Vecchio, 1995).
For all questions, responses will be collected with a seven-point Likert scale ranging
from 1 (strongly disagree) to 7 (strongly agree). As some questions are reverse coded,
recoding will be conducted prior to aggregating so that higher scores will reflect more
positive attitudes in each of the six dimensions.
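As an illustration of this recoding step, the sketch below reverse-codes negatively worded
items on the seven-point scale before averaging; the item names are hypothetical, as the
SATS-36 documentation identifies the actual reversed items.

    import pandas as pd

    # Hypothetical column names for negatively worded items.
    REVERSED_ITEMS = ["affect_2", "affect_5"]

    def score_subscale(responses: pd.DataFrame, items: list) -> pd.Series:
        # Reverse-code flagged items on a 1-7 scale (1<->7, 2<->6, ...),
        # then average so higher scores reflect more positive attitudes.
        scored = responses[items].copy()
        for col in items:
            if col in REVERSED_ITEMS:
                scored[col] = 8 - scored[col]
        return scored.mean(axis=1)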
In addition to the six dimensions listed above, demographic and background
questions assessed in the SATS-36 will be adapted to measure students' achievement
expectations, SPSS self-efficacy, prior exposure to math, prior exposure to computer
science, and performance in math and computer science courses.
Self-Reported Fluency of English Scale (SRFES): The SRFES is a 3-item, self-report
measure of perceptions of English fluency. Each question is scored on a 5-point Likert-
type scale anchored with “Not at all Fluent/Fluent,” “Not at all Comfortable/Very
Comfortable,” and “Never/Always.” Previous research on the scale has documented
adequate internal consistency (Cronbach’s alpha 0.78-0.81) and validity (Yeh & Inose,
2003).
Attitudes Towards Computers: In order to assess students’ attitudes towards
computers a 6-item battery previously adapted from Harrison and Rainer (1992) will be
adopted (Hsu, Wang, & Chiu, 2009). Past research has documented adequate
psychometric properties of this measure in terms of unidimensionality, convergent
validity, reliability, discriminant validity, and metric equivalence (Hsu, Wang, & Chiu,
2009). On the basis of past research, participants' responses will be collected on a 7-point
Likert-type scale anchored with "Strongly Agree/Strongly Disagree."
Mathematics Anxiety Scale (MAS): To measure students' anxiety regarding
mathematics, a 10-item scale adapted by Betz (1978) from the earlier Fennema-Sherman
Mathematics Attitudes Scales (Fennema & Sherman, 1976) will be implemented. Scores
will be collected on a 7-point Likert-type scale anchored with “Strongly Agree/Strongly
Disagree.”
Time Constraints: A total of 5 questions were developed in an effort to assess
responsibilities that could conflict with students’ class or study time. The questions
developed ask students to indicate their marital status, the number of children they have,
whether they serve as a caregiver (excluding work or volunteer commitments for school),
and employment status.
In addition, course load, as assessed by the total number of credits in which students
are currently enrolled, was examined as a potential confounding variable for the study
outcomes on the basis of previous research, which has shown course load to be a predictor of
achievement in both traditional statistics courses (Onwuegbuzie, 2003) and other
academic domains (Onwuegbuzie, Slate, et al., 2000).
Computer Literacy: In order to assess students' confidence in utilizing the online
materials developed in support of the face-to-face and online class sections, a battery of 6
questions was developed. Two questions assess student confidence in basic computing
skills (accessing posted video files and using a web browser) while the remaining
questions relate to specific functions of CSUF's Blackboard course management
software. Participants' responses will be collected on a 7-point Likert-type scale anchored
with "Not at all Confident/Very Confident."
Additional Control Variables: A battery of standard demographic questions will be
used to assess traditional control variables including gender, race, and age. In
addition, student characteristics such as grade level and major will be collected from data
already available through Blackboard.
Data Analysis
Data Cleaning
Prior to analysis, individual question and scale frequencies will be examined for
outlying observations, data-entry errors, and careless responding. Outlying observations
will be deleted, data entry errors will be corrected by returning to the original surveys if
possible, and evidence of careless responding will be used as grounds to remove
individuals from the data set. In order to detect careless responses a single question from
the first half of the survey will be chosen at random and repeated in the second half of the
survey. The presence of differing answers to the same question will be used as evidence
of careless or fictitious responses.
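The repeated-item check described above amounts to comparing two columns of the
survey data; a minimal sketch follows, with column names assumed for illustration.

    import pandas as pd

    def flag_careless(survey: pd.DataFrame) -> pd.Series:
        # 'q12' is the randomly chosen item and 'q12_repeat' its duplicate
        # from the second half of the survey (hypothetical column names).
        # A differing answer is treated as evidence of careless responding.
        return survey["q12"] != survey["q12_repeat"]

    # Respondents flagged True would be removed from the dataset.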
Bias Monitoring: Differential Mortality
Attrition analyses will be conducted to compare the characteristics of students
withdrawing from the face-to-face and online course formats in order to determine
whether differential mortality may impact the demographic compositions of students in
the two class conditions. This analysis will be conducted, as having a specific type of
student systematically withdraw from one condition could result in the detection of
spurious/erroneous program effects resulting from experiment mortality (Valente, 2002).
For example, if students high in statistics anxiety were found to be systematically more
likely to drop out of the online condition, it would then be difficult to establish whether
differences between the online and face-to-face conditions resulted from differences in
the curriculum or differences in the students enrolled in the two conditions.
A series of independent samples t-tests and chi-square tests will be used to detect
differences between the characteristics of students withdrawing from the two class
conditions (see Table 3, page 46, for the characteristics to be compared). As attrition rates
vary from semester to semester, attrition analyses will be dependent upon adequate
sample size for analysis.
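These comparisons map directly onto standard two-sample tests. The following sketch,
using SciPy with illustrative column names, shows the general form the attrition analyses
would take.

    import pandas as pd
    from scipy import stats

    def compare_withdrawers(withdrawn: pd.DataFrame):
        # 'fmt', 'age', and 'employment' are illustrative column names for
        # a dataframe containing only the students who withdrew.
        online = withdrawn[withdrawn["fmt"] == "online"]
        f2f = withdrawn[withdrawn["fmt"] == "face_to_face"]

        # Continuous characteristics: independent samples t-test.
        t_stat, p_age = stats.ttest_ind(online["age"], f2f["age"])

        # Categorical characteristics: chi-square test of independence.
        table = pd.crosstab(withdrawn["fmt"], withdrawn["employment"])
        chi2, p_emp, dof, expected = stats.chi2_contingency(table)
        return p_age, p_emp

The same two tests serve for the baseline self-selection comparisons described below.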
Bias Monitoring: Self Selection
The baseline demographic characteristics of participants in the face-to-face versus
online course formats will be compared through a series of independent samples t-tests
and chi-square tests to detect the presence of self-selection biases (see Table 3, page 46).
Identifying Predictors of Student Outcomes
Due to the relatively thin body of literature examining control variables during the
assessment of online statistics courses, an a posteriori approach was chosen to explore
the role of an intentionally broad array of potential control variables. Student grades,
satisfaction, attrition, and pass rates will each be independently modeled using stepwise
linear and logistic regression techniques, a computer-driven modeling approach in which
forward selection and backward elimination are explored together to arrive at a single
model that balances the risks of type I and type II errors.
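Stepwise selection of this kind is normally run inside a statistics package; purely as a
sketch of the logic (not the software actually used in the study), a p-value-based version
with illustrative entry and removal thresholds might look as follows. For the dichotomous
outcomes (attrition and pass rates), sm.Logit would replace sm.OLS.

    import pandas as pd
    import statsmodels.api as sm

    def stepwise_select(y, X, p_enter=0.05, p_remove=0.10):
        # Forward entry and backward elimination by p-value; the
        # thresholds are illustrative, not those used in the study.
        included = []
        while True:
            changed = False
            # Forward step: add the best excluded predictor if it
            # clears the entry threshold.
            excluded = [c for c in X.columns if c not in included]
            entry_p = pd.Series(dtype=float)
            for c in excluded:
                fit = sm.OLS(y, sm.add_constant(X[included + [c]])).fit()
                entry_p[c] = fit.pvalues[c]
            if not entry_p.empty and entry_p.min() < p_enter:
                included.append(entry_p.idxmin())
                changed = True
            # Backward step: drop the worst included predictor if it
            # exceeds the removal threshold.
            if included:
                fit = sm.OLS(y, sm.add_constant(X[included])).fit()
                worst = fit.pvalues.drop("const")
                if worst.max() > p_remove:
                    included.remove(worst.idxmax())
                    changed = True
            if not changed:
                return included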
Direct Comparison of Face-to-Face and Online Conditions
In order to test for differences in the four student outcomes (student grades,
satisfaction, attrition, and pass-rates), stepwise regression and logistic regression
modeling will be conducted. In each of the four models, statistically significant predictors
of student outcomes detected by the previous analysis will first be force-entered into the
equation to serve as control variables.
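Continuing the sketch above, the force-entry step fits the retained covariates first and
then adds class format, so that the format coefficient reflects the covariate-adjusted
difference between conditions; the variable names below are hypothetical.

    import statsmodels.api as sm

    # y and X as in the previous sketch; 'covariates' holds the predictors
    # retained by stepwise_select, and 'online' is a 0/1 format indicator.
    covariates = ["statistics_value", "units_taken"]  # hypothetical names
    base = sm.OLS(y, sm.add_constant(X[covariates])).fit()
    full = sm.OLS(y, sm.add_constant(X[covariates + ["online"]])).fit()
    print(full.params["online"], full.pvalues["online"])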
Overview of Analyses
In order to clarify the relationship between the proposed research questions and
variables suggested, the following section provides a list of the variables to be used to
address each research question.
RQ1: Do students who enroll in online introductory statistics courses vary from those
who enroll in traditional classes in terms of demographic characteristics, psychological
factors, time constraints, or previous knowledge?
Analysis: Independent samples t-tests and chi-square tests.
Variables: All demographic characteristics, psychological factors, time
constraints, previous knowledge, and class format (see Table 4, page 54).
RQ2: Which of the demographic characteristics, psychological factors, time
constraints, or previous knowledge domains found to differ in Research Question One
predict student outcomes?
Analysis: Stepwise linear and logistic regression modeling.
Outcome Variables: attrition, grades, pass rates, and satisfaction (4 separate
models)
Predictor Variables: all demographic characteristics, psychological factors, time
constraints, and previous knowledge variables found to be statistically significant
in Research Question One.
RQ3: After statistically controlling for the statistically significant variables identified
by Research Questions One and Two above, do student outcomes vary between online
and face-to-face statistics courses?
Analysis: Hierarchical, stepwise linear and logistic regression modeling.
Outcome Variables: attrition, grades, pass rates, and satisfaction (4 separate
models)
Predictor Variables: Class format and all demographic characteristics,
psychological factors, time constraints, and previous knowledge variables found
to be statistically significant in Research Question Two.
Summary
Ultimately the present study seeks to detect differences in four student outcomes
(student grades, satisfaction, attrition, and pass-rates) between CSUF Health Sciences
students enrolled in either an online or face-to-face section of HESC 349: Measurement
and Statistics in Health Science. In order to control for potential threats to the study’s
validity, three methodological measures will be employed. Specifically, this study will
use teaching assistants blinded to students' class section for all grading, standardized course
content across both class formats, and the same instructor in all class sections.
In addition to the methodological controls, several statistical controls will be
employed. Specifically, a three-step process will be used to determine whether students in
the online course experience different grades, satisfaction, attrition levels, and pass-rates
than students in the face-to-face course. This analysis will control for any variables that
both differ between face-to-face and online students and predict the four student
outcomes.
By adhering to a strict, rigorous methodology, this study will address two notable
gaps in the existing literature on online statistics instruction. First, it provides the first
steps towards developing a much-needed theory of student performance in online
statistics courses. Specifically, by identifying learner characteristics that vary between
online and face-to-face students and predict performance in statistics classes, this study
will serve as a first step toward identifying pertinent control variables for future research.
In addition, this study will begin to resolve the debate surrounding student outcomes
by addressing many of the methodological concerns in the existing literature. As
discussed previously, there remains debate in the research community regarding the
impact of online statistics curriculums on outcomes like student grades or course
satisfaction (see Table 1, page 30). Given the ubiquitous presence of methodological
shortcomings in the existing literature, it has been difficult to ascertain the true impact of
online curriculum on student outcomes. By addressing many of the methodological
concerns raised by past research, this study may help to clarify the conflicting findings to
date and serve to quantify the impact of online curriculum on statistics students’ attrition
levels, grades, pass rates, and satisfaction.
Chapter Four
Introduction
The current study seeks to determine whether any differences in student outcomes
(attrition, grades, pass rates, and satisfaction) exist between online and face-to-face
statistics courses. In order to assess this, data were collected from students enrolled in six
sections (four face-to-face and two online) of an introductory statistics course designed
for health science majors. The present study expands on past research by examining the
role of student characteristics to determine whether any detected differences in student
outcomes are attributable to the online versus face-to-face teaching medium itself, to
differences in the characteristics of students self-selecting to enroll in either format, or
rather due to a combination of these factors. Table 4 (page 54) provides a concise review
of the array of demographic characteristics, psychological factors, time constraints,
previous skills/competencies, and control variables examined.
The chapter begins with a discussion of how the online and face-to-face curriculums
were implemented. Because the online sections used in this study represent the first
online courses taught by the author, some modifications were made to the curriculum. A
brief discussion follows to address the issues of sampling, data cleaning, and exclusion of
subjects. The psychometric properties of the scales employed are then examined. Finally,
the remainder of the chapter is devoted to addressing the three research questions, with a
description of the characteristics of the respondents included as part of the data used to
address research question 1.
Course Design and Implementation
Prior to addressing the three research questions posed in this study, it is worth
reexamining the online and face-to-face curriculums. In much of the past research into
the differences between online and face-to-face statistics courses, methodological issues
pertaining to the design and implementation of online classes have obscured the impact
of the online teaching medium. In order to address these concerns, the present study
sought to make the online course as similar to the face-to-face condition as possible,
employed the same instructor for all of the classes examined, and had all grading
accomplished by a teaching assistant blinded to each student’s class section. However,
three modifications were made to the curriculum that represent a departure from the
structure proposed in Chapter Three.
First, while the initial proposal called for a single teaching assistant to handle all
the grading of homework, quizzes, and exams in the class sections studied, logistically
this proved to be infeasible. In the initial proposal, only four class sections were expected
to participate in the study; however, ultimately six classes were available for inclusion.
Because of the volume of students and the time constraints imposed by the additional
grading, three TAs were employed for grading each semester. In order to minimize
differences in grading style between the TAs, each was given a detailed grading rubric
and answer key for each assignment. In addition, prior to grading each quiz or exam, the
TAs attended a meeting with the professor to review the grading rubric in detail. At the
end of the meeting all three TAs were given a copy of a randomly chosen student’s exam
or quiz and were asked to grade it. The TAs’ responses were then reviewed as a group,
and any questions that received different scores amongst the TAs were discussed until
consensus was reached on how to handle the question in the future.
The curriculum also deviated from the initial proposal in that the online lectures
were not created from verbatim transcriptions of face-to-face lectures. Although video
recordings were taken of face-to-face lectures presented by the author in the semester
preceding the study, the process of transcription resulted in scripts that required
significant revision prior to implementation. For instance, in the face-to-face classes
recorded, the lectures included significant portions dedicated to addressing student
questions. The removal of these asides represented a considerable amount of time and
required substantial reworking of the script, so it was decided to forgo a direct recording
of transcripts. Instead, the author watched each video prior to recording the online
lectures to ensure that the content of the online lectures was as close to the face-to-face
lectures as possible, though not a verbatim copy as initially planned.
The final departure from the initial proposal involved the use of virtual office
hours through the course’s online blackboard. Although the original plan was to offer
both face-to-face and virtual office hours, extremely low participation and frequent
technical difficulties resulted in the decision to discontinue the digital office hours
component. During the first semester of the study, on several consecutive weeks the
online office hours feature of Blackboard was offline, a problem outside the control of
the author. In addition, few students accessed the virtual office hours, and those who did
access them did not do so during the specified hours when the chat room was monitored
by the professor. Instead of using the feature as a synchronous technology, questions
were posted asynchronously, effectively treating the technology more like an e-mail than
like a real-time, interactive chat room. As a result, the online office hours were
discontinued, and students in both the online and face-to-face courses were instead
encouraged to participate in the 3-4 hours of conventional office hours offered each week
or to e-mail their questions directly to the professor.
Sampling and Data Cleaning
A total of six class sections (four face-to-face and two online) were recruited to
participate in the present study by filling out a survey at the beginning and end of the
semester. Across these sections a total of six students' responses were removed from the
dataset. Three were returning students who had failed to pass the course in a previous
semester. Two other students withdrew from
the course prior to completion and were thus removed. Finally, one student’s responses to
the first survey were removed at his request when he admitted to not filling out the survey
honestly. In the latter case, the student submitted a second set of responses that were
retained for analysis.
Ultimately a total of 121 out of 137 eligible face-to-face students filled out the first
survey, yielding a response rate of approximately 88.3%. In the online condition a total of
60 out of 66 eligible students completed the first survey for a response rate of 90.9%. For
the second round of surveys, a total of 100 (73.0%) face-to-face students and 46 (69.7%)
online students provided data.
All data included in the dataset were scrutinized for invariant or nonsensical response
patterns. Specifically, the presence of reverse coded items allowed for examination of
careless responses, while questions with unbounded fill-in responses such as students’
ages allowed for the detection of nonsensical responses. As the majority of the survey
items were closed-ended, multiple-choice questions, the presence of outlying or extreme
scores was minimized. In the end all responses were retained in the dataset as no
evidence of careless or invariant responses was detected amongst those who agreed to
participate in the study.
Scale Psychometrics
Prior to addressing the three primary research questions of the study, the
psychometric properties of the scales employed were investigated. A single principal
components factor analysis (PCA) was conducted to explore the underlying factors in one
scale, as it was comprised of a mix of items adopted from published scales as well as
questions specifically written for this study. Given the mix of items employed, it seemed
prudent to formally test the assumption that there would be only a single underlying
factor assessed by the scale items. In addition, the internal consistency of each scale was
investigated by calculating Cronbach’s alpha in order to determine if any of the scale
items should be deleted.
As several scales contained a mix of positively and negatively valenced items,
reverse coding was conducted where necessary to create scales in which higher scores
reflected a greater quantity of the construct measured.
Principal Components Factor Analysis
Because the student satisfaction questions employed in the present study were borrowed and adapted from multiple sources (So & Brush, 2008; Paechter, Maier, & Macher, 2010; Sun, Tsai, Finger, Chen, & Yeh, 2008), a principal components factor analysis (PCA) using orthogonal rotation (varimax) was conducted to examine their underlying structure. The Kaiser-Meyer-Olkin measure revealed that the sample size in the study was suitable for PCA (KMO = .94). In addition, Bartlett's test of sphericity (χ²(28) = 2389.85, p < 0.001) indicated that the correlations between the individual course satisfaction items were high enough to warrant examination using a PCA approach.
Prior to rotation, eigenvalues of the unrotated matrix were calculated, producing only a single factor with an eigenvalue greater than one. Examination of a scree plot further supported a single-factor solution, with the inflexion point suggesting only a one- or two-factor solution. The resulting single-factor solution explained 87.59% of the observed variance in subjects' responses. Since factor solutions with eigenvalues below 1 typically function no better than individual items (Valente, 2002), it was decided to use a single scale comprising all 8 items to assess course satisfaction.
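The extraction step described above can be sketched in a few lines of code. The example below is a minimal illustration, not the procedure actually run for this study; it assumes the eight satisfaction items sit in a hypothetical pandas DataFrame, one column per item, and applies the Kaiser criterion to the unrotated correlation matrix.

```python
import numpy as np
import pandas as pd

# Hypothetical file: one column per course satisfaction item.
items = pd.read_csv("satisfaction_items.csv")

corr = items.corr().to_numpy()                 # item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # sorted largest first
n_retained = int((eigenvalues > 1).sum())      # Kaiser criterion
pct_explained = 100 * eigenvalues[0] / eigenvalues.sum()

print(f"Components with eigenvalue > 1: {n_retained}")
print(f"Variance explained by the first component: {pct_explained:.2f}%")
```

Plotting `eigenvalues` against component number would reproduce the scree-plot check described above.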
Internal Consistency
The internal consistency of each scale is reflected in Table 5. While the majority of the scales demonstrated high internal consistency, in the case of three scales (Math Anxiety, Computer Efficacy, and Course Satisfaction) alpha was quite high, suggesting some redundancy among the items included. Single-item deletions made negligible improvements to the alpha values of these three scales, and it was decided that such redundancy would be unlikely to impact the scales' predictive utility, if any. As a result, the full scales were included without modification.
Table 5: Cronbach’s Alpha for Computed Scales
Construct # of items in scale α
Self-Reported Fluency of English Scale 3 .89
Attitudes Toward Statistics 6 .89
Statistics Self Efficacy 6 .82
Statistics Value 9 .85
Statistics Difficulty 7 .79
Statistics Effort 4 .86
Math Anxiety 10 .95
Attitudes Toward Computers 5 .84
Computer Efficacy 6 .91
Course Satisfaction 8 .98
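For reference, the alpha values in Table 5 and the single-item-deletion check described above reduce to a short computation over the item-score matrix. A minimal sketch with hypothetical inputs, not the SPSS routine presumably used for the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n respondents x k items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1 - item_var / total_var)

def alpha_if_item_deleted(items: np.ndarray) -> list:
    """Alpha recomputed with each item dropped in turn."""
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]
```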
Analyses for RQ 1:
Do Students Enrolling in Online Statistics Courses Differ from those Enrolling in Face-
to-Face Classes?
In order to determine whether there are systematic differences in the types of
students who elect to enroll in online versus face-to-face introductory statistics courses,
data were collected on students’ demographic characteristics, psychological factors, time
constraints, and previous exposure to math and computer science courses. A series of
independent samples t-tests and chi-square tests of independence were conducted to
examine differences between online and face-to-face students.
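These baseline comparisons can be sketched as follows; the file and column names are hypothetical stand-ins for the survey data, not the study's actual analysis scripts:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("baseline_survey.csv")   # hypothetical survey export
online = df[df["format"] == "online"]
f2f = df[df["format"] == "face_to_face"]

# Continuous characteristics: independent samples t-test.
t, p_t = stats.ttest_ind(f2f["math_classes"], online["math_classes"],
                         nan_policy="omit")

# Categorical characteristics: chi-square test of independence.
table = pd.crosstab(df["format"], df["year_in_school"])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
```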
As shown in Table 6, a total of six factors were found to differ between online and face-to-face students. Specifically, compared to face-to-face students, the online students in this sample tended to be closer to graduation (Junior or Senior standing), were less likely to be pursuing a Health Science degree, had taken more mathematics courses, placed lower value on their statistics training, displayed more negative attitudes toward statistics, and reported less confidence in their ability to learn statistics software.
Table 6: Demographic Characteristics, Psychological Factors, Time Constraints, and
Previous Knowledge of HESC 349 Students (n = 181)
Course Format
                                      Traditional^a        Online^a        p-value^b
n 121 (66.9%) 60 (33.1%)
Age 21.42 (4.02) 22.23 (3.07) .17
Sex
Male 20 (16.5%) 14 (23.3%) .27
Female 101 (83.5%) 46 (76.7%)
Race
White, non-Hispanic 27 (23.3%) 16 (26.7%) .33
Hispanic or Latino 36 (29.8%) 15 (25.0%)
Black 2 (1.7%) 5 (8.3%)
Hawaiian or Pacific Islander 3 (2.5%) 1 (1.7%)
Asian 43 (35.5%) 21 (35.0%)
Middle Eastern 4 (3.3%) 1 (1.7%)
Biracial 6 (5.0%) 1 (1.7%)
Year
Sophomore 49 (46.2%) 6 (10.0%) < .001***
Junior 49 (46.2%) 30 (50.0%)
Senior 8 (7.6%) 18 (30.0%)
Health Science Major 99 (90.8%) 42 (77.8%) .02*
Employment Status
Not working 35 (28.9%) 16 (26.7%) .93
Part-time (≤ 20 hours/week) 55 (43.8%) 28 (46.6%)
Full-time (> 20 hours/week) 33 (27.3%) 16 (26.7%)
Caregiver 15 (12.4%) 11 (18.6%) .26
Language 14.20 (1.61) 14.43 (1.17) .32
Course Load (Credits Taken) 13.91 (2.42) 13.92 (3.20) .99
Math Classes Taken 5.51 (1.30) 6.03 (1.48) .02*
Math Performance 5.32 (1.23) 5.15 (1.34) .40
Computer Classes Taken 1.21 (1.43) 1.62 (2.03) .16
Computer Class Performance 5.60 (1.31) 5.84 (1.27) .34
Attitudes Toward Statistics 26.69 (7.79) 24.42 (6.55) .04*
Statistics Self Efficacy 31.03 (5.73) 29.80 (6.48) .20
Statistics Value 49.12 (8.13) 45.00 (8.18) .009**
Statistics Difficulty 25.38 (6.21) 24.88 (6.56) .62
Statistics Effort 26.74 (2.33) 26.65 (2.36) .80
Statistical Software Self Efficacy 5.19 (1.39) 4.73 (1.53) .05*
Math Anxiety 40.61 (14.74) 39.58 (16.51) .67
Math Efficacy 5.18 (1.34) 4.88 (1.26) .15
Computer Attitude 29.27 (4.53) 29.33 (4.82) .93
Computer Efficacy 36.83 (6.04) 38.42 (5.25) .09
^a Reported as M(SD) or n(valid %).
^b Reported for independent samples t-tests or chi-square tests of independence, as appropriate.
*p ≤ .05. **p ≤ .01. ***p ≤ .001.
Analyses for RQ 2:
Do the Differences Between Online and Face-to-Face Students Predict Academic
Outcomes (Attrition, Grades, Pass Rates, and Satisfaction)?
In order to determine whether any of the six factors found to differ between online
and face-to-face students predict student outcomes, a series of hierarchical regression
models were conducted. In the first block of each model, all variables except Year in School were entered using a stepwise procedure in order to minimize the risk of Type I and Type II errors. Because Year in School was dummy coded, the dummy-coded variables
were force entered into the model in a second block.
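A minimal sketch of this two-block procedure is shown below, assuming hypothetical `y` (an outcome vector), `candidates` (a DataFrame of the non-dummy predictors), and `year_dummies` (the dummy-coded Year in School terms). Note that SPSS-style stepwise entry also re-tests and removes previously entered variables, which this forward-only sketch omits.

```python
import statsmodels.api as sm

def forward_select(y, X, alpha=0.05):
    """Naive forward stepwise entry: repeatedly add the candidate with
    the smallest p-value until none falls below alpha."""
    included = []
    while True:
        remaining = [c for c in X.columns if c not in included]
        if not remaining:
            break
        pvals = {c: sm.OLS(y, sm.add_constant(X[included + [c]]))
                        .fit().pvalues[c]
                 for c in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        included.append(best)
    return included

# Block 1: stepwise entry over the non-dummy predictors.
block1 = forward_select(y, candidates)
# Block 2: force-enter the dummy-coded Year in School terms.
final = sm.OLS(y, sm.add_constant(candidates[block1].join(year_dummies))).fit()
```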
Two deviations were made from the initial analysis plan. First, as a total of only
two students withdrew from the six courses observed in the study, the dataset precluded
any substantive analysis of attrition, so this student outcome was dropped from the
analyses. In addition, because theoretically different skills were required of students in testing situations (graded on correctness) than in homework (graded on completeness), it was decided to examine students' grades by parsing their performance into four categories: quiz grades, exam grades, homework grades, and attendance/participation.
It is worth noting that in chapter one some discussion was spent on the topic of
sample size, as the analyses initially proposed to address this research question could
have lacked the necessary statistical power to identify all the significant predictors,
biasing the results towards false negatives. Fortunately, the unexpected inclusion of two additional class sections, the subsequent increase in the sample size of the study, and the identification of only six student characteristics varying between online and face-to-face students in research question one helped to allay concerns over the sample size. With a smaller sample than the one actually obtained, it should have been possible to detect medium-sized effects in models with 10 or fewer predictors (Miles & Shevlin, 2001, as cited in Field, 2009). Given that the analyses for research question two included a larger sample than initially planned and only six predictors, it seems likely that the models run had sufficient statistical power to detect any statistically or practically significant predictors of students' academic outcomes.
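As a rough illustration of that power claim, the power of the overall regression F test can be computed from the noncentral F distribution. The sketch below assumes Cohen's medium effect size (f² = .15) with six predictors and the n = 181 obtained here:

```python
from scipy import stats

f2, k, n, alpha = 0.15, 6, 181, 0.05       # medium effect, 6 predictors
df1, df2 = k, n - k - 1
ncp = f2 * (df1 + df2 + 1)                 # noncentrality parameter
crit = stats.f.ppf(1 - alpha, df1, df2)    # critical F under the null
power = 1 - stats.ncf.cdf(crit, df1, df2, ncp)
print(f"Power to detect f^2 = {f2}: {power:.3f}")
```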
Prior to conducting any regression modeling, a correlation matrix of the proposed
variables was examined in order to assess the possibility of collinearity (see Table 7).
Examination of the resulting matrix revealed only modest correlations between the predictors (significant rs ranged from .15 to .37), suggesting minimal risk of collinearity.
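In code, this screening step reduces to flagging off-diagonal correlations at or above a conventional collinearity threshold (|r| ≥ .80 is a common rule of thumb); the sketch continues the hypothetical `df` from the earlier example:

```python
predictors = df[["stats_attitude", "stats_value", "spss_efficacy",
                 "health_major", "year_in_school", "math_classes"]]
corr = predictors.corr()
mask = (corr.abs() >= 0.80) & (corr.abs() < 1.0)   # worrisome off-diagonals
print(corr.round(2))
print("Predictor pairs flagged for collinearity:",
      int(mask.to_numpy().sum() // 2))
```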
Table 7: Correlation Matrix of Proposed Predictor Variables
2 3 4 5 6
1. Statistics Attitude .36*** .28*** -.15 .03 -.02
2. Statistics Value .22** .06 .05 .02
3. SPSS Self Efficacy -.02 .09 .13
4. Health Science Major -.15* -.15
5. Year in School .37***
6. Number of College Math Courses Taken -
* p < .05, ** p < .01, *** p < .001
RQ 2 A: Do the Differences Between Online and Face-to-Face Students Predict Grades?
As shown in Table 8, hierarchical linear regression modeling revealed statistically significant predictors for three of the four grade categories. While the proportion of variance explained by each model was modest (R² ranged from .04 to .06), two of the predictors functioned in a theoretically plausible manner, with positive attitudes toward statistics predicting better quiz performance and enrollment in the health sciences predicting better homework performance. In addition, it was discovered that sophomore status predicted higher participation scores than junior or senior status. In the case of exam grades, no predictors reached the .05 level of significance, and thus no model could be constructed.
Table 8: Summary of Final Regression Models for Predicting Student Grades
Regression Models and Predictors^a          B       SE B     β       R²      sig
Model 1: Quiz Grades .04 .01
Statistics Attitude .87 .36 0.20
Model 2: Homework Grades .04 .01
Health Science Major 16.30 6.71 0.18
Model 3: Exam Grades
No predictors reached the .05 level of significance
Model 4: Attendance/Participation .06 .009
Junior* -2.95 1.30 -0.20 .02
Senior* -5.07 1.75 -0.25 .004
^a Reflects final models with only statistically significant predictors entered
* Reference category: Sophomore (no Freshmen were in the sample)
RQ 2 B: Do the Differences Between Online and Face-to-Face Students Predict Pass
Rates?
In order to determine whether any of the six factors found to differ between online
and face-to-face students predict students’ likelihood to pass the course, logistic
regression modeling was conducted. The decision was made a priori to approach this model with a stepwise method, as in the other hierarchical regression models used in this study. The likelihood ratio method of variable selection was chosen due to the unreliability of the Wald method (Field, 2009). The decision was also made to run the regression model twice, first using the forward and then the backward method, retaining only variables that served as statistically significant predictors in both models.
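The likelihood ratio method rests on comparing nested models via the change in log-likelihood rather than on per-coefficient Wald statistics. A minimal sketch of one such comparison, with a hypothetical 0/1 `passed` Series and an `attitude` Series (the study itself presumably used SPSS's LR stepwise options):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

full = sm.Logit(passed, sm.add_constant(attitude)).fit(disp=0)
null = sm.Logit(passed, np.ones((len(passed), 1))).fit(disp=0)

lr = 2 * (full.llf - null.llf)                  # likelihood ratio statistic
p = stats.chi2.sf(lr, df=int(full.df_model - null.df_model))
odds_ratio = np.exp(full.params["attitude"])    # Exp(B)
```

Forward entry adds, at each step, the candidate whose LR test is most significant; backward entry starts from the full model and removes the variable whose deletion costs the least likelihood.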
As shown in Table 9, logistic regression modeling revealed students' attitudes towards statistics to be the only predictor of whether or not they attained a passing score. While the final regression model was derived from both forward and backward entry, in both cases statistics attitude was found to be the sole statistically significant predictor. As logistic regression modeling techniques fail to provide a precise analog to the R² derived in linear regression models, it was decided to report Nagelkerke's R², as Cox and Snell's R² equation is limited to values below the theoretical maximum of 1 (Field, 2009). The resulting single-factor model was found to predict approximately 7% of the observed variance in whether or not students passed, with each 1-unit increase in statistics attitude predicting 1.09 times higher odds of passing the course.
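Both pseudo-R² statistics follow directly from the null and full log-likelihoods; a minimal sketch (the arguments would come from fitted models such as those above):

```python
import numpy as np

def pseudo_r2(ll_full: float, ll_null: float, n: int):
    """Return Cox & Snell R^2 and its Nagelkerke rescaling to a 0-1 range."""
    cox_snell = 1 - np.exp(2 * (ll_null - ll_full) / n)
    ceiling = 1 - np.exp(2 * ll_null / n)   # Cox & Snell's maximum value
    return cox_snell, cox_snell / ceiling
```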
Table 9: Summary of Final Regression Model for Predicting Student Pass Rate
Regression Model and Predictors^a       B      SE B     Exp(B)    Nagelkerke's R²    sig
Overall Model                                                     .07                .01
Statistics Attitude                     .09    .04      1.09
^a Reflects final models with only statistically significant predictors entered
RQ 2 C: Do the Differences Between Online and Face-to-Face Students Predict Student
Satisfaction?
Regarding student satisfaction with the course, hierarchical linear regression failed to reveal any predictors significant at the .05 level, and thus no model could be constructed.
Analyses for RQ 3:
Does Course Format (Online Versus Face-to-Face) Predict Academic Outcomes
(Grades, Pass Rates, and Satisfaction) after Controlling for the Differences Between
Online and Face-to-Face Students?
In order to determine whether any statistically significant differences between
online and face-to-face students’ outcomes exist after controlling for variables identified
in RQ’s 1 and 2, simple linear regression modeling was conducted. For each of the six
student outcomes (four grades, pass rate, and satisfaction), the impact of class format was
investigated as the sole predictor. The model was then re-run with the addition of the
other statistically significant predictors identified in RQ 2. By running the models in this
fashion it is possible to begin to tentatively attribute the detected differences to either
student demographics or to intrinsic differences in the teaching mediums.
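Procedurally, this amounts to fitting a format-only model and then re-fitting with the RQ 2 covariates added to see whether the format coefficient survives. A minimal sketch with hypothetical column names (`online` is a 0/1 format indicator):

```python
import statsmodels.api as sm

m0 = sm.OLS(df["quiz_grade"], sm.add_constant(df[["online"]])).fit()
m1 = sm.OLS(df["quiz_grade"],
            sm.add_constant(df[["online", "stats_attitude"]])).fit()
print(m0.params["online"], m1.params["online"])  # does the effect persist?
```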
RQ 3 A: Does Course Format Predict Students’ Grades after Controlling for the
Differences Between Online and Face-to-Face Students?
Just as in RQ 2, it was decided to examine students' grades by parsing their performance into four categories: quiz grades, exam grades, homework grades, and attendance/participation.
Upon initially examining students' quiz scores, no difference was detected between the online and face-to-face classes (see Table 10). After controlling for class format, statistics attitude, which was previously identified as a predictor in RQ 2, remained a statistically significant predictor of students' quiz scores, with more positive attitudes predicting higher quiz performance.
In the case of exam scores, no statistically significant predictors of students' exam grades were detected by the analyses for RQ 2. As a result, only course format was entered into the model. As shown in Table 10, a statistically significant difference was detected between face-to-face and online students. While face-to-face students outperformed online students by approximately 23 points on average, it is worth noting that these 23 points were out of 500 points possible, and condition explained only 2% of the variability observed in students' exam performance.
Regarding students’ homework performance, no difference was detected between
the online and face-to-face classes. After controlling for class format, major, a previously
identified predictor from RQ 2, was no longer found to predict students’ homework
performance.
Finally, examination of students’ attendance or participation scores revealed a
statistically significant difference between online and face-to-face students, with online
students scoring, on average, approximately five points lower than face-to-face students.
After controlling for course format, junior or senior status, previously detected predictors of students' participation, failed to make a statistically significant contribution to the model.
Table 10: Comparison of Student Grades by Condition with and without Covariates
Regression Models and Predictors          B        SE B      β       R²      sig
Model 1: Quiz Grades
  Model without covariates                                            .01     .24
    Online/Face-to-Face                   -5.84    4.94      -.09
  Model with covariates                                               .05     .02
    Statistics Attitude                   .88      0.35      .20              .01
    Online/Face-to-Face                   -3.55    5.08      -.05             .49
Model 2: Exam Grades
  Model without covariates
    Online/Face-to-Face                   -23.21   11.60     -.15     .02     .05
Model 3: Homework Grades
  Model without covariates                                            .01     .33
    Online/Face-to-Face                   -4.07    4.18      -.08
  Model with covariates                                               .02     .30
    Health Science Major                  7.26     5.91      .10              .22
    Online/Face-to-Face                   -3.23    4.23      -.06             .45
Model 4: Attendance/Participation
  Model without covariates
    Online/Face-to-Face                   -5.46    1.16      -.35     .12     < .001
  Model with covariates                                               .14     < .001
    Junior*                               -1.86    1.25      -.13             .14
    Senior*                               -2.57    1.80      -.13             .16
    Online/Face-to-Face                   -4.60    1.28      -.29             < .001
* Reference category: Sophomore (no Freshmen were in the sample)
RQ 3 B: Does Course Format Predict Students’ Pass Rates after Controlling for the
Differences Between Online and Face-to-Face Students?
In order to examine the impact of course format and student characteristics on pass rates, a dichotomous measure, the same methodology was used as in RQ 3A, save that logistic regression was substituted in place of the previous linear regression modeling.
The resulting models revealed that both statistics attitude and course format made
statistically significant contributions to the model, with the final model predicting
approximately 14% of the variance observed in students’ pass rates (see Table 11). In the
case of course condition, after controlling for students’ attitudes towards statistics the
odds of face-to-face students obtaining a passing score in the course were nearly three
times higher than the odds of online students passing. Similarly, after controlling for course condition, each 1-unit increase in statistics attitude predicted 1.09 times higher odds of passing the course.
Table 11: Comparison of Student Pass Rates by Condition with and without Covariates
Regression Models and Predictors            B       SE B     Exp(B)    Nagelkerke's R²    sig
Model: Pass Rate (format only)
  Online/Face-to-Face                       1.26    .44      3.52      .09                .004
Model: Pass Rate (with covariate)                                      .14                .001
  Statistics Attitude                       .08     .04      1.09                         .02
  Online/Face-to-Face                       1.09    .46      2.98                         .02
RQ 3 C: Does Course Format Predict Students’ Satisfaction after Controlling for the
Differences Between Online and Face-to-Face Students?
As the differences between online and face-to-face students were not found to
predict student satisfaction in RQ 2c, only the direct effect of course format on
satisfaction was examined. Simple linear regression modeling failed to reveal any statistically significant relationship between course format and student satisfaction (F(1, 143) = 1.5, p = .22).
Summary
Ultimately the sample size obtained and scale psychometrics proved adequate to
conduct a series of analyses to examine three central research questions. For a tabular summary of all findings, refer to Table 12.
In addressing the first research question, it was discovered that online statistics
students differed from face-to-face statistics students in six ways. Specifically, compared
to face-to-face students, the online students in this sample tended to be closer to
graduation (Junior or Senior standing), were less likely to be pursuing a Health Science
degree, had taken more mathematics courses, placed lower value on their statistics
training, displayed more negative attitudes toward statistics, and reported less confidence
in their ability to learn statistics software.
In research question 2, it was discovered that three of the six student
characteristics identified by research question 1 also predicted students’ academic
outcomes. These analyses pointed to the role of attitudes toward statistics, major, and
class level in achieving academic success in introductory statistics.
In research question 3, class condition (online versus face-to-face) was found to
predict students’ attendance/participation scores, exam grades, and pass rates after
controlling for the differing characteristics of online and face-to-face students identified
by research questions 1 and 2.
In the end, at least one statistically significant predictor was found for each
academic outcome except course satisfaction. Specifically, students’ quiz scores and pass
rates were predicted by their attitudes toward statistics. While major and year in school
predicted homework and participation scores, respectively, these relationships vanished
once course format was added to the model. Finally, students’ exam scores, participation
grades, and pass rates were found to be superior in the face-to-face setting.
In the following chapter the implications of these findings, limitations of the
current study, and directions for future research will be explored in depth.
Table 12: Summary of Statistically Significant^a Predictors of Student Outcomes

Student Outcome:                  Quiz   Homework   Exam   Attendance/     Pass Rate   Satisfaction
                                                           Participation
Differences between conditions
  Year in school                                           X*
  Health major                           X*
  # of math classes
  Statistics attitude             X                                        X
  Statistics value
  Software efficacy
Class format
  Online vs face-to-face                            X      X               X

^a Significant at the 0.05 level of significance
* Relationship no longer statistically significant after controlling for class format
Chapter Five
Introduction
In the following chapter the results of the study are examined in depth. The
chapter begins with a review of the rationale and literature leading to the research
questions investigated and an overview of the research methodologies employed.
Building upon the observed findings, connections are then drawn back to the literature in
an attempt to reconcile the results of the current research with past findings. From there,
the implications of the study are discussed, with the emphasis split between implications
for researchers as well as for practitioners. The chapter concludes with suggestions for
future research and a final summary of the study in its entirety.
Summary of the Study
While a growing body of studies has suggested that online educational programs
are either equivalent (Russel, 2001; Yatrakis & Simon, 2002) or superior (U.S.
Department of Education, 2009) to traditional face-to-face curriculum in terms of student
outcomes like learning or satisfaction, little formal research has been devoted to
examining this within the context of statistics curriculum (Meletiou-Mavrotheris, 2004).
This has been troubling, as online courses may be one way of meeting what other
researchers have noted is a growing demand for statistics training across several sectors
(Everson, Zieffler, & Garfield, 2008; Glencross & Binyavanga, 1996). In the few studies
comparing online to face-to-face statistics courses, little attention has been paid to the
differences in the learner characteristics of students electing to enroll in online versus
face-to-face courses, and significant methodological shortcomings have been observed.
Because of this, conflicting findings have been reported, and it has been difficult to
determine the efficacy of online statistics courses in terms of student outcomes like
learning, satisfaction, or course retention.
This study sought to determine whether students in online statistics curriculum
experience different academic outcomes in terms of grades, pass rates, attrition, and
satisfaction when compared to an otherwise identical face-to-face statistics course. The
study extended upon previous research into the efficacy of online statistics curriculum by
being the first to employ rigorous methodological controls and to statistically control for
differing student characteristics in the online and face-to-face conditions. The findings
were designed to serve as a first step toward clarifying the debate over the efficacy of
online statistics curriculum and building a theoretical framework for future studies.
In order to assess the efficacy of online statistics curriculum, this study posed
three research questions:
1. Do students who enroll in online introductory statistics courses vary from
those who enroll in traditional classes in terms of demographic
characteristics, psychological factors, time constraints, or previous
knowledge?
2. Which of the demographic characteristics, psychological factors, time
constraints, or previous knowledge domains found to differ in Research
Question One predict student outcomes (attrition, grades, pass rates, and
satisfaction)?
3. After statistically controlling for the variables identified by both Research
Questions One and Two above, do student outcomes (attrition, grades,
pass rates, and satisfaction) vary between online and face-to-face statistics
courses?
Conclusions
As significant results were found in response to each set of analyses, the
implications of each analysis will be discussed separately for each research question in
turn.
Conclusions from RQ 1
While a primarily a posteriori, data-driven approach was taken to identify factors that differ between online and face-to-face students, it was nonetheless expected that these two groups would vary from one another on at least some of the measures.
Of the six differences detected, the majority functioned as might have been
anticipated. For instance, while the role of the perceived value of statistics training in face-to-face and online courses has not been previously studied in depth, it seems plausible
that students who value or have positive attitudes towards statistics may be more inclined
to take a face-to-face class in which they can participate in lecture and make connections
with faculty. On the other hand, students who fail to see the value of statistics or bear
negative attitudes toward statistics could prefer the online format out of beliefs that the
content may be less rigorous or the notion that it would spare them from being engaged
by faculty during lecture. Similarly, the observation that the face-to-face class sections
contained a higher proportion of health science majors may simply be attributable to
student advising practices. The finding that online students had taken more math classes
on average than face-to-face students was also unsurprising given that the online
condition had a higher proportion of juniors and seniors. As juniors and seniors can be
expected to have taken more courses in total than sophomores, it stands to reason that
they may have completed more required or elective math courses, as well.
Somewhat more complicated, however, was the reason why the online condition
had a higher proportion of juniors and seniors. Several explanations may account for this
observation, but the simplest is that seniors and juniors were given preference during
class registration and had an opportunity to enroll in classes before the sophomores or
freshmen did. Given the popularity of online curriculum, the online classes may simply
have been filled by juniors and seniors before the sophomores had an opportunity to
register for classes.
At least two alternative or additional explanations could explain the high
proportion of juniors and seniors observed in the online condition. First, given that the
statistics class examined in this study is both a required course for health science majors
and a prerequisite for additional courses, it is possible that the increased proportion of
seniors actually represents students who are taking the class for a second or third time.
While students who had taken the class previously with the author were removed from
the dataset, no data were available to determine if the remaining students had taken and
failed the class previously with another health science faculty member, in a different
department, or at another university. If this is the case, it may also explain why students
in the online condition bore more negative attitudes towards statistics and valued the
subject less.
An alternative explanation may simply be that these data reflect a Type I error, or
false positive. Due to the number of analyses run to answer research question one, the
chance of making a Type I error or finding spurious results was considerably higher than
the typical five percent used in many analyses. Because the study was largely exploratory
in nature and any spurious results would likely fail to load into the models used to
address research questions two and three, it was decided to leave the alpha level at an
unadjusted five percent per analysis.
The remaining statistically significant difference between online and face-to-face
students came as a greater surprise. Given that online students were effectively electing to
engage in their education solely through their computers, it seemed reasonable to
anticipate that they may have more familiarity with computers, report more positive
attitudes towards computers, or possess greater self-efficacy in learning novel
technologies and software. Not only was it surprising to observe no statistically significant differences between online and face-to-face students' experience, attitudes, or self-efficacy when it came to computers, but it was entirely unanticipated to then detect statistically significantly lower levels of self-efficacy in regard to learning statistics software.
As with year in school, multiple explanations may account for the observed
difference in students’ reported levels of SPSS self-efficacy. On one hand, while both
classes engaged in regular SPSS lab activities, peer support and interaction was built into
the group activities conducted in the face-to-face classes. In the absence of formally
offered peer support, students in the online section may have approached the SPSS labs
with less confidence. On the other hand, it is also possible that this finding reflects a
simple Type I error, or false positive.
Ultimately, the present study was not designed to determine why the students
enrolling in online and face-to-face classes differ, but rather to simply detect such
differences and reduce what initially was an overwhelming array of possible covariates
into a manageable number of factors. In this sense, research question one was
successfully addressed. While one or two significant differences could be attributed to the
inflated risk of encountering Type I errors across multiple analyses, the six differences
detected seem likely to include true, substantive differences between these students.
In short, research question one confirmed that students who choose to enroll in
online statistics courses differ from students who enroll in face-to-face classes in several ways, including demographic composition (specifically, in their year in school and major),
psychological constructs (in their valuing of statistics, attitudes toward statistics, and
SPSS self-efficacy), and prior experience (in the number of college-level math classes
previously taken).
Conclusions from RQ 2
Because the analyses conducted to address research question two depended on
which variables would prove to be significant in research question one’s analyses, few a
priori hypotheses could be made prior to analysis.
Regarding the four significant regression models produced for research question
two, two lend themselves to easy interpretation. As the importance of students’ attitudes
towards statistics has been well documented in past research (Schau, Stevens, Dauphinee,
& Del Vecchio, 1995), discovering that positive attitudes predicted students’ quiz scores
and pass rates in this sample was not entirely unanticipated.
In the case of students’ majors predicting homework scores and year in school
predicting attendance/participation rates, the results should be taken with some caution.
While these variables may warrant consideration in future research, in this instance the
effects of both variables were entirely attenuated when class condition was added to the
models.
In the case of year in school, this variable likely functioned as a proxy for class
condition, as the online classes had a considerably higher proportion of seniors and lower
proportion of sophomores than the face-to-face classes. Class condition, in turn, could be
expected to predict attendance/participation scores, as these scores were one of the few
class elements that were not equivalent between conditions. In the face-to-face condition,
students could miss up to two classes before they would begin to lose attendance points,
while in the online condition, points were immediately deducted for any missing
Blackboard discussion posts. In this respect, the two courses were not equivalent, and
online students could be expected to be at a disadvantage. In effect, it was not junior or
senior standing that put students at risk of receiving lower participation scores, but rather
the fact that more juniors and seniors were in the disadvantaged online condition.
In the case of students’ majors (health science or not) predicting homework
scores, the relationship is even less clear. While the homework assignments involved
examples drawn from the health sciences, knowledge of the health sciences was not
necessary to complete the assignments. Moreover, it seems unlikely that intrinsic interest
would be the driving factor leading students to do their homework. More likely, students
do their homework in order to receive points for having completed their homework. As
the homework assignments were graded solely on completion rather than on correctness,
the only way to lose points on the homework assignments was to skip questions or not
turn in an assignment. The finding that health science majors were more likely to
complete or turn in their homework is interesting, but it should be viewed with caution
and perhaps a degree of skepticism as the effect was later attenuated by the inclusion of
class condition.
Perhaps just as important as the four statistically significant models that were
created are the two that were not significant. While the characteristics of online and face-
to-face students were found to differ, none of these differences were found to predict
students’ exam scores or satisfaction. While it was interesting to see quiz performance
vary as a function of attitude, it was initially surprising to find that the same did not hold true for exam performance. One possible explanation is that although negative or positive attitudes may influence study habits or other behaviors for seemingly less important quizzes, when high-stakes exams approach, students put forth their maximal effort
regardless of their attitudes. This certainly fits with anecdotal observations made by the
author. In both semesters of the study students rarely made use of office hours to prepare
for quizzes. However, prior to the final exam, the author often ran out of seating in his
office due to the demand.
Regarding student satisfaction, the impact of no significant findings was less
clear. The student characteristics examined failed to predict student satisfaction, but so
too did course condition. In some ways this is a relief, as it suggests that student satisfaction should not be impacted by the baseline characteristics of students electing to enroll in online or face-to-face classes. Neither class begins at a relative disadvantage in terms of predisposing characteristics that could influence students' final satisfaction.
In the end, the important consideration for research question two is not simply
that attitudes predict student outcomes, but also that these attitude levels were found to be
markedly different between online and face-to-face students at the beginning of the
semester. According to these data, online students enter the course at a disadvantage and
could be expected to attain lower scores than face-to-face students, a finding that
conflicts with past research suggesting that online students typically perform as well
(Hurlburt, 2001; Russel, 2001; Summers, Waigandt, & Whittaker, 2005; Yatrakis, &
Simon, 2002) or better (Suanpang, & Petocz, 2006; U.S. Department of Education, 2009)
than face-to-face students. As few studies have striven to create online classes that are
truly equivalent to their face-to-face counterparts, the previously reported benefits of
online classes may in part be a function of online students simply spending more time
with course contents (Suanpang, & Petocz, 2006; U.S. Department of Education, 2009;
Zhang, 2002), receiving more frequent assessments (Grandzol, 2004), benefiting from
different instructors than those encountered by their face-to-face peers (Suanpang &
Petocz, 2006), or even experiencing experimenter expectancy effects from researchers
participating in the grading of their subjects’ assignments (Rosnow & Rosenthal, 2005).
In the current study, the opposite was found to be true: when care is taken to create truly
equivalent courses, online students appear to perform less well than their face-to-face
counterparts on some metrics.
In short, research question two revealed that attitudes towards statistics and
possibly class major should be considered prior to making direct comparisons of online
and face-to-face statistics courses. In this study, online students entered the course with
poorer attitudes than their face-to-face peers, and they received lower quiz scores and
pass rates. This stands in contradiction to past research, which reported that online students typically perform as well as (Hurlburt, 2001; Russel, 2001; Summers, Waigandt, & Whittaker, 2005; Yatrakis & Simon, 2002) or better than (Suanpang & Petocz, 2006; U.S. Department of Education, 2009) face-to-face students. Given the methodological rigor employed in the present study and the limitations of past research, the assumption that online courses are inherently equivalent or superior to face-to-face courses should be drawn into question. Certainly, online courses can be developed so as to be superior to
face-to-face courses, but it should be acknowledged that this superiority stems from
factors like the number of assessments provided rather than from the teaching medium
itself.
As a final cautionary note, it is important to remember that an entirely data-driven
model building approach was taken in this study. The goal of research question two was
simply to determine whether any of the baseline differences between online and face-to-
face students could be expected to influence their performance or satisfaction in class.
While several variables were found to predict student outcomes, due to the nature of the model-building technique employed, the models' associated beta and R² values should be viewed with caution. As a result, in the current study no conclusions have been drawn from these metrics; only the models' statistical significance and the direction of the relationship (positive or negative) between each predictor and outcome were considered.
Conclusions from RQ 3
As with research question two, the analyses conducted to address research question three depended on which variables would prove to be significant in the analyses for research questions one and two. As a result, few a priori hypotheses could be made prior to analysis. However, the three significant regression models produced for research question three were theoretically plausible and led to the conclusion that there are substantive differences in online and face-to-face students' learning outcomes after controlling for student characteristics.
For example, it was previously noted that the attendance/participation scores were
structured in a way that favored online students. As expected, class format revealed this
advantage and entirely attenuated the relationship between attendance grades and year in
school.
In addition, the odds of passing the course were discovered to be influenced by
both students’ attitudes towards statistics and class condition. Once again this made sense
in the context of the other findings in this study. Attitudes towards statistics were found
to predict students quiz scores, and class format was found to predict students’ exam
scores. Together students’ quiz and exams scored played a large role in whether or not
students would pass the course, so the findings here are quite consistent.
In the case of exam scores, no control variables needed to be included, so the
model simply reveals a direct difference in the performance of online versus face-to-face
students. The finding that online students performed less well on examinations once again contradicts past research stating that online students' performance is generally equivalent to (Hurlburt, 2001; Russel, 2001; Summers, Waigandt, & Whittaker, 2005; Yatrakis & Simon, 2002) or better than (Suanpang & Petocz, 2006; U.S. Department of Education, 2009) face-to-face students' performance.
As before, the historical use of non-equivalent courses in past research may be
partially to blame. Because a large part of the appeal of online courses is that they allow
students to create their own schedules (U.S. Department of Education, 2009), many
online classes eschew paper-and-pencil tests and the hassle of scheduling proctored
exams in favor of examinations administered online or in alternative formats. Directly
comparing exam performance between students taking a paper-and-pencil exam in a
proctored classroom environment to students taking an exam online from a home
computer seems fraught with difficulties. In these instances the online students have
several advantages including, but not limited to, the ability to work with peers, consult
their textbooks, take and share screenshots of the test to help other classmates prepare, or
search for answers via search engines. Indeed, over the course of the study the author
even found a posting on the website craigslist.org from a California State University
student looking for a statistics expert to take their online statistics exams for them. Under
these conditions, it is unsurprising that online students have often been found to
outperform their face-to-face peers.
In the present study, when online students were given nearly identical
instructional resources and took their exams under identical conditions to their face-to-
face peers, they performed less well on their exams. This again calls into question the
assumption that online courses are inherently equivalent or superior to face-to-face
courses. In the present study the teaching medium itself seemed to be a slight hindrance
to student learning outcomes. That the preponderance of studies has shown online students to have equivalent or superior outcomes to face-to-face students may suggest that, with additional time, effort, and resources, online classes can be structured to produce outcomes equivalent or superior to those of face-to-face courses. However, with the increase in
blended courses, which integrate supplemental online components into traditional face-
to-face courses, any resource developed for an online course could and often should be
shared in face-to-face courses, too. In effect, while it is possible to invest additional time,
effort, and resources to enhance online students’ outcomes, these same investments and
resources could be shared in a face-to-face setting as well. Given the results of this study,
it would be expected that under these conditions of a level playing field in regards to
academic resources the face-to-face students would then perform as well or better than
their online peers.
In short, research question three revealed that significant differences exist
between the academic outcomes of online and face-to-face students, with face-to-face
students consistently outperforming online students in these cases. In addition, these
analyses showed the importance of considering the differences in the characteristics of
students electing to enroll in online versus face-to-face classes. The odds of passing the
course were not only predicted by class format, but also by students’ attitudes toward
statistics.
As a final cautionary note, just as in research question two, an entirely data-driven model building approach was taken here. As a result, no conclusions have been drawn from the models' beta or R² values; only the models' statistical significance and the direction of the relationship (positive or negative) between each predictor and outcome were considered.
Implications
In this study three major findings emerged. Online students were found to differ
from face-to-face students in several respects, these differences predicted academic
outcomes, and face-to-face students were found to outperform online students after
controlling for the differences between the two groups. These findings bear implications
both for researchers and for educators.
Implications for Researchers
In terms of implications for researchers, this study underscores the importance of
considering student characteristics when comparing the efficacy of online versus face-to-
face curricula. While this study identified statistics attitudes, major, and year in school as
potentially important covariates to control for, in other studies, particularly those examining domains other than statistics, entirely different important covariates may emerge. In terms of practical applications, these findings suggest that researchers take at
least three steps.
First, care should be taken to test for the presence of underlying differences in the
characteristics of online versus face-to-face students. The factors driving students to
enroll in online courses may well vary in different school settings and majors, so
examining student characteristics on a study-by-study basis seems warranted until a
larger body of data and possible meta-analyses are available. In this study, testing for differences in student characteristics was accomplished by collecting data on a substantial number of potential covariates and running repeated tests. Due to the inflated risk of encountering Type I errors with this approach, future studies should include a smaller set of characteristics based on theory and adjust their significance levels to account for the risks posed by multiple testing, as sketched below.
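One such adjustment is Holm's step-down procedure; the sketch below applies the statsmodels implementation to a hypothetical list of the RQ 1 p-values:

```python
from statsmodels.stats.multitest import multipletests

# `pvals` is a hypothetical list of the p-values from the baseline tests.
reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for raw, adj, sig in zip(pvals, p_adjusted, reject):
    print(f"raw p = {raw:.3f}   adjusted p = {adj:.3f}   significant: {sig}")
```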
The second implication for researchers is that studies comparing online to face-to-
face courses should attempt to control for any student characteristics that may confound
their findings. As researchers rarely are afforded an opportunity to randomly assign
students to classes, controlling for student characteristics through study design is likely
infeasible. As a result, potential covariates should be controlled for statistically wherever
possible.
Finally, as the results of this study contradicted findings reported in previous
works, the methodological controls included here should be replicated in future research.
While the impact of blinding graders to the students’ class type and restricting the
comparisons to courses with equivalent content cannot be quantified in this study, the fact
that the results presented here differed from those of studies not employing such controls
suggests that these techniques may influence study outcomes.
Implications for Statistics Educators
Perhaps the most alarming concern for statistics educators teaching introductory
courses is the finding that online students entered into the course with less positive
attitudes towards statistics, scored lower on exams, and were less likely to pass the
course.
While creating equivalent content for both the online and face-to-face classes was
a necessary component of this study’s methodology, the finding that face-to-face classes
outperform online ones suggests that additional learning aids may be required for online
classes. It may be tempting for educators to substitute non-equivalent assignments, such
as essays, in place of exams or to allow online classes to take exams online (with all the
aforementioned advantages that come with that approach). However, care should be
taken to assure that students’ grades reflect a true acquisition of knowledge and applied
skills. While it is assuredly possible to structure courses in a way that boosts student
scores, the success or failure of a statistics course lies in its ability to prepare students
with the skills necessary for success in graduate school, research settings, and their
careers. The issue of selecting pedagogies to enhance online students’ learning is beyond
the scope of the present study, but this research does underscore the need to invest extra
time, energy, and effort into online introductory statistics courses, as students may enter
into these classes at a disadvantage.
Future Research
As this study was intended as a first step towards resolving conflicting findings
over the efficacy of online statistics courses, much research still lies ahead.
First, although great effort was put into selecting a wide array of theoretically
plausible learner characteristics that could both vary between online and face-to-face
students and predict student outcomes, many more variables remain to be examined.
Certainly additional research can be directed towards identifying other learner
characteristics that warrant inclusion as covariates in future models of student outcomes.
By methodically weeding out irrelevant learner characteristics and developing a pool of
characteristics that are expected to vary between online and face-to-face students, not
only will research be able to progress from exploratory to confirmatory analyses, but
teachers will be empowered with the information needed to generate online curriculum
better targeted to their students’ unique needs.
Second, although student characteristics like attitudes towards statistics were
assessed in the current study, they were only examined at baseline. Future studies may
wish to utilize pretest-posttest designs in which changes in these characteristics over time
may be monitored. While online students were found to have less positive attitudes
towards statistics at the beginning of this study, it is unclear whether these attitudes changed over the course of the semester and, if so, whether the magnitude of such changes was similar to any changes experienced in face-to-face classes. As most students
pursuing degrees in the health sciences will encounter more than one course requiring
concepts from statistics, it is important to consider how introductory statistics courses
will predispose those students to future classes.
A third line of research follows from the observation that the success or failure of
a statistics course lies in its ability to prepare students with the skills necessary for
success in graduate school, research settings, and their careers. With this in mind, future
research may also wish to compare the efficacy of online versus face-to-face introductory
statistics courses not only in terms of academic outcomes in the introductory course, but
also in terms of academic outcomes in subsequent courses involving statistics. In this way
success of online and face-to-face curricula can be measured not only in terms of
immediate academic outcomes, but also in terms of preparedness for future challenges.
Summary
This study sought to resolve conflicting findings that have been reported in past
research on the impact of online curriculum in introductory statistics courses. To
accomplish this, the study addressed several methodological shortcomings in the previous
research by employing a mix of methodological and statistical controls including the use
of TAs blinded to students’ class sections for all grading, the creation of substantively
identical curriculum and assessments for the online and face-to-face students, and the
identification and inclusion of key covariates in the final models investigating student
outcomes.
While four student outcomes (grades, pass rates, attrition, and satisfaction) were initially to be investigated, attrition could not be examined due to the low number of students withdrawing from the classes studied. In addition, students' grades were parsed
into four domains (attendance/participation, exams, homework, and quizzes), as
theoretically different skills were required for different types of assessments.
Initial analyses revealed that, compared to face-to-face students, online students
tended to be closer to graduation (Junior or Senior standing), were less likely to be
pursuing a Health Science degree, had taken more mathematics courses, placed lower
value on their statistics training, displayed more negative attitudes toward statistics, and
reported less confidence in their ability to learn statistics software.
Further, it was revealed that these differing attitudes towards statistics predicted
students’ subsequent quiz scores and pass rates. While students’ major and year in school
also initially appeared to be related to their homework and attendance/participation
scores, respectively, subsequent analyses cast these relationships into doubt.
In the end, after controlling for students’ attitudes, majors, and year in school
where appropriate, it was found that face-to-face students outperformed online students
on exam scores, attendance/participation, and pass rates. While differences in
attendance/participation scores were expected due to differences in the grading of this
outcome, the differences in students’ exam scores and pass rates were considered
meaningful.
The results of this study point out the importance of considering and controlling
for the differences in the types of students who elect to enroll in online and face-to-face
classes, as studies attempting to make direct comparisons of online and face-to-face
curriculum may be confounded by these differences. The fact that this study found that
face-to-face students outperformed online students, a finding contrary to that reported in
previous research, may in part be attributable to this study’s statistical and
methodological controls.
Future research is still required to replicate the conditions and findings of the current study. As the analyses used in this research were data-driven, future work remains to develop and test new theories of student performance in online and face-to-face classes.
Nevertheless, this study made contributions to the available body of literature by
illustrating the impact of statistical and methodological controls, identifying potential
covariates to include in future studies, and drawing into question the often-reported finding that online curricula are either equivalent or superior to face-to-face curricula.
References
Acee, T. W., & Weinstein, C. E. (2010). Effects of a value-reappraisal intervention on
statistics students' motivation and performance. The Journal of Experimental
Education, 78(4), 487-512.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior.
Englewood Cliffs, NJ: Prentice Hall.
Allen, M., Mabry, E., Mattrey, M., Bourhis, J., Titsworth, S., & Burrell, N. (2004).
Evaluating the effectiveness of distance learning: A comparison using meta-analysis.
Journal of Communication, 54(3), 402-420.
Artino, A. R. (2008). Motivational beliefs and perceptions of instructional quality:
Predicting satisfaction with online training. Journal of Computer Assisted Learning,
24(3), 260-270.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist,
37, 122-147.
Becker, B. J. (1996). A look at the literature (and other resources) on teaching statistics.
Journal of Educational and Behavioral Statistics, 21(1), 71-90.
Betz, N. E. (1978). Prevalence, distribution, and correlates of math anxiety in college
students. Journal of Counseling Psychology, 25(5), 441-448.
Booth, J. G., Federer, W. T., Wells, M. T., & Wolfinger, R. D. (2009). A multivariate
variance components model for analysis of covariance in designed experiments.
Statistical Science, 24(2), 223-237.
Chance, B., Ben-Zvi, D., Garfield, J., & Medina, E. (2007). The role of technology in
improving student learning of statistics. Technology Innovations in Statistics
Education, 1(1), Retrieved from: http://www.escholarship.org/uc/item/8sd2t4rr.
Clark, R. E. (1983). Reconsidering research on learning from media. Review of
Educational Research, 53, 445-460.
Dao, T. K., Lee, D., & Chang, H. L. (2007). Acculturation level, perceived English
fluency, perceived social support level, and depression among Taiwanese
international students. College Student Journal, 287-295.
Dauphinee, T. L., Schau, C., & Stevens, J. J. (1997). Survey of Attitudes Toward
Statistics: Factor structure and factorial invariance for females and males. Structural
Equation Modeling, 4, 129-141.
Department of Health Science, California State University, Fullerton (2005). Bachelor of
science in health science. Retrieved from
http://hhd.fullerton.edu/hesc/BSD/major.htm
Eamon, D. B. (1999). Distance education: Has technology become a threat to the
academy? Behavior Research Methods, Instruments, & Computers, 31, 197–207.
Eccles, J. S. (1987). Expectancies, values and academic behaviors. In J. T. Spence (Ed.),
Achievement and achievement motives (pp. 75-146). San Francisco: Freeman.
Fennema, E., & Sherman, J. A. (1976). Fennema-Sherman Mathematics Attitudes Scales:
Instruments designed to measure attitudes toward the learning of mathematics by
males and females. JSAS Catalog of Selected Documents in Psychology, 6, 31. (Ms.
No. 1225).
Field, A. (2009). Discovering statistics using SPSS: And sex and drugs and rock ’n’ roll
(3rd ed.). London: Sage.
Fini, A. (2009). The technological dimension of a massive open online course: The case
of the CCK08 course. International Review of Research in Open and Distance
Learning, 10(5), 1-26.
Finney, S. J., & Schraw, G. (2003). Self-efficacy beliefs in college statistics courses.
Contemporary Educational Psychology, 28, 161–186.
Garfield, J., Hogg, B., Schau, C. & Whittinghill, D. (2002). First courses in statistical
science: The status of educational reform efforts. Journal of Statistics Education,
10(2). Retrieved from http://www.amstat.org/publications/jse/v10n2/01-009R1_Garfield.doc
Glencross, J. M., & Binyavanga, W. K. (2002). The role of technology in statistics
education: A view from a developing region. Retrieved from
http://www.stat.auckland.ac.nz/~iase/publications/8/25.Glencross.pdf
Gordon, S., He, W., & Abdous, M. (2009). Using a web-based system to estimate the
cost of online course production. Online Journal of Distance Learning
Administration, 12(3). Retrieved from ERIC database.
Harrison, A. W., & Rainer, R. K. Jr., (1992). The influence of individual differences on
skill in end-user computing. Journal of Management Information Systems, 9(1),
93–111.
Hsu, M. K., Wang, S. W., & Chiu, K. K. (2009). Computer attitude, statistics anxiety and
self-efficacy on statistical software adoption behavior: An empirical study of online
MBA learners. Computers in Human Behavior, 25, 412-420.
Hurlburt, R. T. (2001). “Lectlets” deliver content at a distance: Introductory statistics as a
case study. Teaching of Psychology, 28(1), 15-20.
King, K. P. (2001). Educators revitalize the classroom “bulletin board”: A case study of
the influence of online dialogue on face-to-face classes from an adult learning
perspective. Journal of Research on Computing in Education, 33(4), 337–353.
Lesser, L. M., & Winsor, M. S. (2009). English language learners in introductory
statistics: Lessons learned from an exploratory case study of two pre-service
teachers. Statistics Education Research Journal, 8(2), 5-32.
Lips, D. (2010). How online learning is revolutionizing K-12 education and benefiting
students. Backgrounder, 2356, 1-9.
Lock, R. (2000). A sampler of WWW resources for teaching statistics. In T. Moore (Ed.),
Teaching statistics: Resources for undergraduate instructors (pp. 191-197).
Washington, DC: Mathematical Association of America.
Meletiou-Mavrotheris, M. (2004). Technological tools in the introductory statistics
classroom: Effects on student understanding of inferential statistics. International
Journal of Computers for Mathematical Learning, 8, 265-297.
Miles, J. N. V., & Shevlin, M. (2001). Applying regression and correlation: A guide for
students and researchers. London: Sage.
Newlin, M. H., & Wang, A. Y. (2002). Integrating technology and pedagogy: Web
instruction and seven principles of undergraduate education. Teaching of
Psychology, 29(4), 325–330.
Ocker, R. J., & Yaverbaum, G. J. (1999). Asynchronous computer-mediated
communication versus face-to-face collaboration: Results on student learning,
quality and satisfaction. Group Decision and Negotiation, 8(5), 427-440.
Onwuegbuzie, A. J. (1998). Teachers’ attitudes toward statistics. Psychological Reports,
83, 1008-1010.
Onwuegbuzie, A. J. (2003). Modeling statistics achievement among graduate students.
Educational and Psychological Measurement, 63(6), 1020-1038.
Onwuegbuzie, A. J., Slate, J., Paterson, F., Watson, M., & Schwartz, R. (2000). Factors
associated with underachievement in educational research courses. Research in the
Schools, 7, 53-65.
Onwuegbuzie, A. J., & Seaman, M. (1995). The effect of time and anxiety on statistics
achievement. Journal of Experimental Psychology, 63, 115-124.
Paechter, M., Maier, B., & Macher, D. (2010). Students’ expectations of, and
experiences in e-learning: Their relation to learning achievements and course
satisfaction. Computers & Education, 54(1), 222-229.
Pagano, M., & Gauvreau, K. (Eds.). (2000). Principles of Biostatistics (2nd ed.). Pacific
Grove, CA: Duxbury Press.
Roberts, D. M., & Bilderback, E. W. (1980). Reliability and validity of a statistics attitude
survey. Educational and Psychological Measurement, 40, 235-238.
Rodarte-Luna, B., & Sherry, A. (2008). Sex differences in the relation between statistics
anxiety and cognitive/learning strategies. Contemporary Educational Psychology,
33, 327–344.
Roemer, M. (2009). Religious affiliation in contemporary Japan: Untangling the enigma.
Review of Religious Research, 50(3), 298-320.
Rosnow, R. L., & Rosenthal, R. (2005). Beginning behavioral research: A conceptual
primer (5th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Russell, T. L. (Ed.). (2001). The no significant difference phenomenon: A comparative
research annotated bibliography on technology for distance education (5th ed.).
Raleigh, NC: North Carolina State University.
Rynearson, K., & Kerr, M. S. (2005). Teaching statistics online in a blended learning
environment. Journal of Computing in Higher Education, 17(1), 71-94.
Schau, C. (2005). The survey of attitudes towards statistics. Retrieved from
http://www.evaluationandstatistics.com/view.html
Schau, C., Stevens, J., Dauphinee, T. L., & Del Vecchio, A. (1995). The development
and validation of the survey of attitudes toward statistics. Educational and
Psychological Measurement, 55(5), 868-875.
Schram, C. M. (1996). A meta-analysis of gender differences in applied statistics
achievement. Journal of Educational and Behavioral Statistics, 21, 55-70.
Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative
effectiveness of web-based and classroom instruction: A meta-analysis. Personnel
Psychology, 59(3), 623-664.
So, H.-J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social
presence and satisfaction in a blended learning environment: Relationships and
critical factors. Computers & Education, 51, 318–336.
Spooner, F., Jordan, L., Algozzine, B., & Spooner, M. (1999). Student ratings of
instruction in distance learning and on-campus classes. The Journal of Educational
Research, 92(3), 132-140.
Suanpang, P., & Petocz, P. (2006). E-learning in Thailand: An analysis and case study.
International Journal on E-Learning, 5(3), 415-438.
Sugrue, B., & Rivera, R. J. (2005). State of the industry: ASTD’s annual review of trends
in workplace learning and performance. Alexandria, VA: ASTD.
Summers, J. J., Waigandt, A., & Whittaker, T. A. (2005). A comparison of student
achievement and satisfaction in an online versus a traditional face-to-face statistics
class. Innovative Higher Education, 29(3), 223-250.
Sun, P-C., Tsai, R. J., Finger, G., Chen, Y-Y., & Yeh, D. (2008). What drives a
successful e-Learning? An empirical investigation of the critical factors influencing
learner satisfaction. Computers & Education, 50(4), 1183-1202.
Tempelaar, D. T., Schim van der Loeff, S., & Gijselaers, W. H. (2007). A structural
equation model analyzing the relationship of students' attitudes toward statistics,
prior reasoning abilities and course performance. Statistics Education Research
Journal, 6(2), 78-102.
U.S. Department of Education, Office of Planning, Evaluation, and Policy Development,
(2009). Evaluation of evidence-based practices in online learning: A meta-
analysis and review of online learning studies. Washington, DC.
Valente, T. W. (2002). Evaluating health promotion programs. New York, NY: Oxford
University Press.
van den Dries, F. M. A. (2007). Some notes on Roman mold material and the technique
of molding for glassblowing. Journal of Glass Studies, 49, 23-38.
Velleman, P. F., & Moore, D. S. (1996). Multimedia for teaching statistics: Promises and
pitfalls. The American Statistician, 50, 217-225.
Velleman, P. F. (2000). Design principles for technology-based statistics education.
Metrika, 51, 91-104.
Wang, M., Sierra, C., & Folger, T. (2003). Building a dynamic online learning
community among adult learners. Educational Media International, 40(1/2), 49–62.
Wise, S. L. (1985). The development and validation of a scale measuring attitudes toward
statistics. Educational and Psychological Measurement, 45, 101-104.
Woods, R., & Ebersole, S. (2003). Using non-subject-matter-specific discussion boards to
build connectedness in online learning. The American Journal of Distance Education,
17(2), 99–118.
Yatrakis, P. G., & Simon, H. K. (2002). The effect of self-selection on student
satisfaction and performance in online classes. The International Review of Research
in Open and Distance Learning, 3(2).
Yeh, C. J., & Inose, M. (2003). International students’ reported English fluency, social
support satisfaction, and social connectedness as predictors of acculturative stress.
Counseling Psychology Quarterly, 16, 15-28.
Zhang, J. (2002). Teaching statistics on-line: Our experience and thoughts. In B.
Phillips (Ed.), Proceedings of the Sixth International Conference on Teaching
Statistics. Voorburg, The Netherlands: International Statistical Institute.
Appendix A: Student Survey Instrument
Achievement Expectations:
A1. What grade do you expect to get in this class (select only one):
A+ A A- B+ B B- C+ C C- D+ D D- F
Attitudes Towards Computers:
(1 = Strongly Disagree, 4 = Neither Agree nor Disagree, 7 = Strongly Agree)
B1. Computers are bringing us into a bright new era.
B2. The use of computers is enhancing our standard of living.
B3. There are unlimited possibilities of computer applications that haven’t even been thought of yet.
B4. Computers are responsible for many of the good things we enjoy.
B5. Working with computers is an enjoyable experience.
Attitudes Towards Statistics:
(1 = Strongly Disagree, 4 = Neither Agree nor Disagree, 7 = Strongly Agree)
C1. I will like statistics.
C2. I will feel insecure when I have to do statistics problems.
C3. I will get frustrated going over statistics tests in class.
C4. I will be under stress during statistics class.
C5. I will enjoy taking statistics courses.
C6. I am scared by statistics.
Math Anxiety:
(1 = Strongly Disagree, 4 = Neither Agree nor Disagree, 7 = Strongly Agree)
D1. It wouldn't bother me at all to take more math courses.
D2. I have usually been at ease during math tests.
D3. I have usually been at ease in math courses.
D4. I usually don't worry about my ability to solve math problems.
D5. I almost never get uptight while taking math tests.
D6. I get really uptight during math tests.
D7. I get a sinking feeling when I think of trying hard math problems.
D8. My mind goes blank and I am unable to think clearly when working mathematics.
D9. Mathematics makes me feel uncomfortable and nervous.
D10. Mathematics makes me feel uneasy and confused.
Self Efficacy: Math
(1 = Not at all Confident, 7 = Very Confident)
E1. How confident are you that your background in mathematics has adequately prepared you for this introductory statistics course?
Self Efficacy: SPSS
(1 = Not at all Confident, 7 = Very Confident)
F1. How confident are you that you can master new computer software (SPSS) as part of your introductory statistics course?
Self Efficacy: Statistics
(1 = Strongly Disagree, 4 = Neither Agree nor Disagree, 7 = Strongly Agree)
G1. I will have trouble understanding statistics because of how I think.
G2. I will have no idea of what's going on in this statistics course.
G3. I will make a lot of math errors in statistics.
G4. I can learn statistics.
G5. I will understand statistics equations.
G6. I will find it difficult to understand statistical concepts.
Statistics Value
(1 = Strongly Disagree, 4 = Neither Agree nor Disagree, 7 = Strongly Agree)
H1. Statistics is worthless.
H2. Statistics should be a required part of my professional training.
H3. Statistical skills will make me more employable.
H4. Statistics is not useful to the typical professional.
H5. Statistical thinking is not applicable in my life outside my job.
H6. I use statistics in my everyday life.
H7. Statistics conclusions are rarely presented in everyday life.
H8. I will have no application for statistics in my profession.
H9. Statistics is irrelevant in my life.
Time Constraints:
I1. What is your marital status?
O Single, Never Married
O Married
O Separated
O Divorced
O Widowed
I1. How many children do you have?
O 0
O 1
O 2
O 3
O 4 or more
I1. Do you currently spend any time as a caregiver (do not count work, volunteer, or
internships)?
O No
O Yes
I2. Which of the following best describes your employment status (count paid or
volunteer time) this semester (select only one):
O Unemployed
O Work Part-time (average: 1-10 hours/week)
O Work Part-time (average: 11-20 hours/week)
O Work Full-time (average: 21-30 hours/week)
O Work Full-time (average 31 hours or more per week)
I3. Counting this class, how many credits are you taking this semester: _____________
Skills/Competencies:
J1. What is your present level of English fluency? (1 = Not at all Fluent, 5 = Entirely Fluent)
J2. How comfortable are you communicating in English? (1 = Not at all Comfortable, 5 = Very Comfortable)
J3. How often do you communicate in English? (1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Very Often, 5 = Always)
Items K1-K6 are rated from 1 = Not at all Confident to 7 = Very Confident.
K1. How confident are you using a web browser?
K2. How confident are you navigating Blackboard to find course materials?
K3. How confident are you in your ability to access and view posted online video tutorials?
K4. How confident are you in using Blackboard’s communication tools to communicate with classmates?
K5. How confident are you in using Blackboard’s communication tools to communicate with professors?
K6. How confident are you in using Blackboard’s digital dropbox to submit assignments?
L1. How many high school mathematics courses have you completed? _____
L2. How many college mathematics courses have you completed? _____
M1. How many high school computer science courses have you completed? _____
M2. How many college computer science courses have you completed? _____
N1. How well did you do in mathematics courses you have taken in the past? (1 = Very Poorly, 4 = Neither Poorly nor Well, 7 = Very Well)
O1. How well did you do in computer science courses you have taken in the past? (1 = Very Poorly, 4 = Neither Poorly nor Well, 7 = Very Well, 8 = N/A)
Control Variables
P1. Please indicate your gender:  O Male  O Female
P2. Please indicate your race/ethnicity:
O White, non-Hispanic
O Hispanic or Latino
O Black
O Hawaiian or Pacific Islander
O Asian
O Other (Please Clarify: ________________________________)
P3. What is your age: _______________
Statistics Difficulty
(1 = Strongly Disagree, 4 = Neither Agree nor Disagree, 7 = Strongly Agree)
Q1. Statistics formulas are easy to understand.
Q2. Statistics is a complicated subject.
Q3. Statistics is a subject quickly learned by most people.
Q4. Learning statistics requires a great deal of discipline.
R5. Statistics involves massive computations.
R6. Statistics is highly technical.
R7. Most people have to learn a new way of thinking to do statistics.
Statistics Interest
(1 = Strongly Disagree, 4 = Neither Agree nor Disagree, 7 = Strongly Agree)
S1. I am interested in being able to communicate statistical information to others.
S2. I am interested in using statistics.
S3. I am interested in understanding statistical information.
S4. I am interested in learning statistics.
Statistics Effort
(1 = Strongly Disagree, 4 = Neither Agree nor Disagree, 7 = Strongly Agree)
T1. I plan to complete all of my statistics assignments.
T2. I plan to work hard in my statistics course.
T3. I plan to study hard for every statistics test.
T4. I plan to attend every statistics class session.
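Before analysis, Likert batteries such as those above are typically combined into
composite scale scores. The fragment below is a minimal Python illustration, not the
study's actual scoring procedure: the data file and column names are hypothetical, and
the reverse-scored items are chosen from the negatively worded attitude items shown
above (C2-C4 and C6).

# Illustrative scoring sketch. File and column names are hypothetical;
# the reverse-scored items follow the wording of items C1-C6 above.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file

def reverse(series, scale_max=7):
    # On a 7-point scale, reverse-scoring maps 1 -> 7, 2 -> 6, ..., 7 -> 1.
    return (scale_max + 1) - series

# Reverse the negatively worded items so that higher values consistently
# indicate more positive attitudes toward statistics.
for item in ["C2", "C3", "C4", "C6"]:
    df[item] = reverse(df[item])

# Composite attitude score: the mean of the six attitude items.
attitude_items = ["C1", "C2", "C3", "C4", "C5", "C6"]
df["attitudes_toward_statistics"] = df[attitude_items].mean(axis=1)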
Appendix B: Student Satisfaction Survey*
Instructor:
1. My instructor gives fast feedback via e-mail, chat, and/or other communication facilities.
2. When I needed advice from my instructor, I could easily get in contact with him via e-mail, chat, forum, etc.
3. The instructor had a high level of expertise in the implementation of e-learning courses.
4. Overall, the instructor for this course met my learning expectations.
Learning:
5. I was able to learn from the narrated lecture slides.
6. I was able to learn from the narrated SPSS lab walkthroughs.
7. I was able to learn from the virtual office hours.
8. I was able to learn from face-to-face office hours.
9. I was able to learn from the homework problems.
10. I was able to learn from the practice problems/answer keys.
11. Overall, this course was a useful learning experience.
Logistics:
12. The course itself and the learning materials were clear and well structured.
13. I could decide on my own about the pace of my learning.
14. Taking this class via the Internet allowed me to spend more time on other activities.
15. As a result of my experience with this course, I would like to take another distance course in the future.
Technology:
16. I was able to access the lecture slides without difficulty.
17. The volume & picture quality of the lecture slides was satisfactory.
18. I was able to access the narrated SPSS lab walkthroughs without difficulty.
19. The volume & picture quality of the SPSS lab walkthroughs was satisfactory.
20. I was able to access the office hours without difficulty.
21. I was able to access the discussion board features without difficulty.
22. Overall, I was able to utilize the online course components (excluding SPSS labs) successfully with my home computer system.
* Items 4-11 & 15 were adapted from measures described in So & Brush, 2008. Items 1-3 & 12-13 were
adapted from Paechter, Maier, & Macher (2010). Item 14 was adapted from Sun, Tsai, Finger, Chen, &
Yeh (2008). Items 16-22 were written for this study.
Abstract
Increasingly, online classes have been used to address the rising demand for statistics training and calls for improved curricula. However, little empirical research has examined the efficacy of online statistics courses. The present study sought to fill this niche by comparing the grades, attrition, pass rates, and satisfaction of 181 students enrolled in 4 face-to-face and 2 online introductory health science statistics courses. As methodological controls, the online and face-to-face curriculums were made as similar as possible, a single instructor taught all 6 classes, and blinded teaching assistants conducted all grading. In addition, differences in students’ demographic characteristics, psychological factors, time constraints, and previous knowledge were assessed and controlled for prior to examining their learning outcomes. Contrary to past findings, face-to-face students outperformed online students in terms of their exam scores and pass rates. In addition, online students were closer to graduation and less likely to be pursuing a Health Science degree; they had taken more mathematics courses, placed lower value on their statistics training, displayed more negative attitudes toward statistics, and reported less confidence in their ability to learn statistics software. The implications of these findings for educators and researchers are examined in depth.