USING DATA FOR CONTINUOUS IMPROVEMENT: A PROMISING PRACTICE
STUDY
by
Adam H. Ortiz
___________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
August 2016
Copyright 2016 Adam Ortiz
ACKNOWLEDGEMENTS
None of this would have been possible without my dissertation committee. Dr.
Sundt, thank you for all of your guidance and support. From the beginning until the end,
you continuously challenged me. As a result, I evolved as much as this paper did. Dr.
Filback, thank you for the countless rounds of revisions and edits. Your vision for this
paper provided the framework and foundation for all of the research. Dr. Ephraim, thank
you for all of your feedback. Synthesizing the feedback allowed me to take this paper to
the next level. Many thanks to all of you; I am deeply appreciative of all of your help.
Just as important, I wish to acknowledge my dear wife and family. Eileen, thank
you. Your selflessness allowed me to flourish on this path to complete the doctorate.
Mom and Dad, thank you for raising me into a man hungry to learn and teach. I would
not be who I am today without your love and support. Amy and Sissy, thank you being
amazing women with relentless support for your baby brother. I could not have asked for
a better set of sisters. To the rest of my family members, all of your support has helped
me reach this point. This paper is for all of you.
To mi gente, never give up on your education. The path is not easy, the odds are
not in your favor, but you can rise to the occasion. The answer lies inside your inner
desire to pick yourself up and keep moving forward. Grit and passion will take you
wherever you want to go. Never give up. If I can do it, so can you.
TABLE OF CONTENTS

Acknowledgements
Table of Contents
List of Tables
List of Figures
Abstract
Chapter One: Introduction
    Introduction of the Problem
    Organizational Context and Mission
    Organizational Performance Status
    Background of the Issue: Related Literature
    Importance of this Study
    Organizational Performance Goal and Current Performance
    Organizational Stakeholders
    Stakeholder of the Study
    Definition of Terms
    Organization of the Study
Chapter Two: Literature Review
    Use of Data for Continuous Improvement
        Higher Education Accrediting Bodies
        Professional Accrediting Agencies Definition
    Learning and Motivation Theory
        Knowledge and Skills
        Motivation
        Organization
    Factors that Contribute/Inhibit Continuous Improvement
        Knowledge and Skills
            Contributing
            Inhibiting
            Summary
        Motivation
            Contributing
            Inhibiting
            Summary
        Organization
            Contributing
            Inhibiting
            Summary
    Conclusion
Chapter Three: Methodology
    Purpose of the Project and Questions
    Framework for the Study
    Presumed Performance Needs/Issues/Assets
        Personal Professional Knowledge
            Knowledge and Skills
            Motivation
            Organization
        Learning and Motivation Theory
            Knowledge and Skills
            Motivation
            Organization
            Summary
    Validation of the Performance Assets
        Validation of the Performance Assets: Knowledge
            Validation of the factual knowledge assets
            Validation of the conceptual knowledge assets
            Validation of the procedural knowledge assets
            Validation of the metacognitive knowledge assets
        Validation of the Performance Assets: Motivation
        Validation of the Performance Assets: Organization/Culture/Context
    Participating Stakeholders
    Data Collection
        Surveys
        Interview
    Trustworthiness of Data
    Role of Investigator
    Data Analysis
Chapter Four: Results and Findings
    Data Collection at Programs
    Results and Findings for Knowledge Causes
        Data Evaluation Can Improve Programs
        Functions of Different Types of Data
        Creating Themes from the Data
        Strategic Periods to Assess Data
        Synthesis of Results and Findings for Knowledge Causes
    Results and Findings for Motivation Causes
        Administrators Value Using Data to Improve Programs
        Administrators Value Student Feedback
        Administrators Value Using Data for Accreditation
        Administrators' Attainment Value of Data for Accreditation
        Administrators Attribute Improved Student Feedback to Program Changes
        Synthesis of Results and Findings for Motivation Causes
    Results and Findings for Organization Causes
        Faculty Enjoy Data-Driven Programming
        Culture of Highly Motivated Program Administrators
        Synthesis of Results and Findings for Organization Causes
    Summary
Chapter Five: Recommendations, Implementation, and Evaluation
    Validated Causes Selection and Rationale
    Solutions for Motivation Causes
        Value the Use of Data
        Extrinsic Interest in Student Feedback
        Data for Accreditation Standards
        Attribution of Improved Student Feedback
    Solutions for Organization Causes
        Cultural Model
        Cultural Setting
    Solutions for Knowledge Causes
        Factual
        Conceptual
        Procedural
        Metacognitive
    Implementation Plan
    Evaluation Plan
        Level 1: Reactions
        Level 2: Learning
        Level 3: Transfer
        Level 4: Impact
    Limitations and Delimitations
        Limitations
        Delimitations
    Future Research
    Conclusion
References
Appendix A
Appendix B

LIST OF TABLES

Table 1: Summary of Assumed Assets for Knowledge, Motivational, and Organizational Issues
Table 2: Summary of Assumed Knowledge Assets and Their Validation
Table 3: Summary of Assumed Motivational Assets and Their Validation
Table 4: Summary of Assumed Organizational Assets and Their Validation
Table 5: Data Collected at Programs
Table 6: Summary of the Assumed Knowledge Causes and Validation Findings
Table 7: Summary of the Assumed Motivation Causes and Validation Findings
Table 8: Summary of the Assumed Organization Causes and Validation Findings
Table 9: Validated Causes Summary Table
Table 10: Summary of Validated Assets, Solutions, and Solution Implementation

LIST OF FIGURES

Figure 1: Gap Analysis Process
Figure 2: Responses to Survey Question 1
Figure 3: Responses to Interview Question 2
Figure 4: Responses to Interview Question 1
Figure 5: Responses to Survey Question 2
Figure 6: Responses to Interview Question 3
Figure 7: Responses to Survey Question 3
Figure 8: Responses to Interview Question 4
Figure 9: Responses to Survey Question 4
Figure 10: Responses to Interview Question 5
Figure 11: Responses to Interview Question 7
Figure 12: Responses to Survey Question 5
Figure 13: Responses to Interview Question 8
Abstract
The purpose of this study was to explore administrators’ perspectives of the
barriers to and assets needed to effectively use data for continuous improvement in online
graduate programs. To explore potential barriers and assets needed, this study utilized the
Gap Analysis Framework to validate assumed knowledge, motivation, and organizational
factors that contribute to the use of data for continuous improvement (Clark & Estes,
2008). This study found that administrators strategically assess data, value different types
of data, empower a culture of using data, and work to improve programs using data on a
continual basis. The study concludes with implications for other administrators to
replicate for effective use of data for continuous improvement within their programs.
CHAPTER ONE: INTRODUCTION
Introduction of the Problem
Central to the success of any graduate program is the process of using student
feedback, learning analytics, and faculty evaluation data for continuous improvement in
higher education programs (Rowley, 2003). The efficiency of this feedback loop depends
on the stakeholders’ commitment to the process of using data and generating program
improvement (Barker & Pinard, 2014). Many universities have implemented processes
designed to collect information from students for the purpose of improving their
academic programs (Harvey, 2003). As distance education continues to evolve, new
technological tools provide unique opportunities to collect data for assessment and
growth (Nsiah, 2013). However, we do not know how this process works for online
education programs at colleges throughout the country. This study examines how a set of
high-performing online graduate programs use student feedback and other analytic data
for continuous improvement.
Organizational Context and Mission
This paper explores a number of higher education institutions that offer graduate-level programs online. The universities offer a range of graduate degrees in education, business administration, social work, public policy, and data science. A number of universities within the United States have successfully implemented online graduate programs while utilizing the same service provider. The seven schools identified are highly selective institutions whose shared provider specializes in an interactive educational approach. These colleges rely on this technological partner to deliver the software that allows students to take courses online. Although the missions of each university vary, the software provider's mission is to partner with the best graduate programs so that students can reach their full potential through an online system.
Organizational Performance Status
The majority of the graduate programs that are the subject of this study are
classified as highly selective institutions in their respective fields. One program, for example, is ranked as the number one online MBA program in the country ("Best Colleges U.S. News Education", 2014). The service provider allows the program to offer a unique blended learning approach, grounded in research, to help students reach their full learning potential. Thus, the programs that are the focus of this
study may be viewed as model organizations that exhibit promising practices that warrant
attention.
Background of the Issue: Related Literature
Higher education has transformed from traditional brick-and-mortar classrooms into various forms of distance and online education (Blass & Hayward, 2014). The majority of universities in the United States now offer degree programs or courses online (McClintock & Benoit, 2014). Through new technological capabilities, universities can offer courses to students around the clock in synchronous and asynchronous formats (Karp, 2014). As the online world of higher education
continues to evolve, administrators and faculty members are raising questions as to how
programs will ensure quality for students (McClintock & Benoit, 2014).
One mechanism universities use to help ensure quality in academic programs is to collect student feedback and, through analytic tools, analyze the information for potential thematic findings (Rowley, 2003). Higher education institutions have paid increasing
amounts of attention to student feedback to measure the quality of teaching and learning
in their programs (Alderman, Towers, & Bannah, 2012). Despite the growth of online
and distance education, there is little research on how online higher education programs
utilize student feedback and other analytic data for continuous improvement (Hirner &
Kochtanek, 2012). The need to evaluate quality in online higher education programs is
increasing, however, as more and more programs move to teaching students in various
online education formats (Jacek, 2013). Therefore, it is becoming critical for universities
to understand how to effectively use data for program improvement, including through
the analysis of practices at well-designed online programs at high performing institutions
(Dittmar & McCracken, 2012).
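As a purely illustrative aside, the following minimal Python sketch shows one way the thematic analysis described above might begin: grouping open-ended student comments under candidate themes by keyword matching. The theme names, keywords, and comments are hypothetical, and real programs would rely on systematic qualitative coding rather than this simple heuristic.

```python
from collections import defaultdict

# Hypothetical codebook; an actual codebook would be developed
# inductively from the collected feedback.
THEMES = {
    "course_pacing": ["pace", "fast", "slow", "workload"],
    "instructor_feedback": ["feedback", "grading", "response"],
    "technology": ["platform", "video", "connection"],
}

def tag_comments(comments):
    """Group open-ended survey comments under candidate themes by keyword."""
    tagged = defaultdict(list)
    for comment in comments:
        lowered = comment.lower()
        for theme, keywords in THEMES.items():
            if any(word in lowered for word in keywords):
                tagged[theme].append(comment)
    return tagged

# Made-up feedback for demonstration only.
sample = [
    "The pace of the course was too fast for working students.",
    "Grading feedback arrived weeks after each assignment.",
    "The video platform dropped my connection during live sessions.",
]
for theme, items in tag_comments(sample).items():
    print(f"{theme}: {len(items)} comment(s)")
```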
Importance of this Study
It is important to examine the use of student feedback and other analytic data for
continuous improvement in online programs as a promising practice for a variety of
reasons. First, there is little research on how online programs access and collect data for continuous improvement (Dittmar & McCracken, 2012). Second, we do not know the extent to which, or how, student feedback and analytic data are utilized in these high-performing institutions. Studying the process of using data to improve the delivery of online instruction will help other universities improve their programs as well (Hirner & Kochtanek, 2011). This study will be important to universities across the country
that are developing online education programs (McClintock & Benoit, 2014). It holds the
potential to generate findings that will help schools plan online education including the
use of data to guide improvements for accreditation purposes and to meet the demands of
other stakeholders as well (Harvey, 2003). Furthermore, the utilization of data for
continuous improvement will ensure transformative change across institutions for the
rapidly increasing world of online education (Burnette, 2015).
Organizational Performance Goal and Current Performance
All of the programs in this study shared a goal to prepare 100% of their graduates
in their respective fields. To meet this goal, it is vital to create and maintain a feedback
system in order to collect data to continuously improve the graduate programs to prepare
students for their professional careers. The goal to prepare students aligns with the
mission of each university as well.
Organizational Stakeholders
The stakeholders involved in the process of using data for continuous
improvement include program administrators, faculty members, and students. The
program administrators facilitate the delivery of the online programs while coordinating
the collection and use of data with faculty members and employees of the cloud-based
software provider. The faculty members work with the students and program
administrators of the institution in order to ensure that student evaluation forms and other
types of assessment are completed. The students interact with faculty members and other
staff and provide input and feedback through different types of mechanisms provided.
Stakeholder of the Study
The stakeholders of focus for this study are the program administrators or
other individuals who are responsible for collecting the student feedback and other
analytic data, gathering themes, and generating potential changes to improve the program
altogether. The administrators are central in the creation and support of a data use
strategy given their role as leaders of the online programs and as the ones responsible for
the continuous improvement of their programs. These administrators also manage faculty
members and the delivery of the online programs. Therefore, focusing on the
administrators will reveal insight into the promising practice of continuous improvement
at these high performing graduate level programs online.
The purpose of this study is to explore administrators’ perspectives of the barriers
to and assets needed to effectively use data for continuous improvement in online
graduate programs. The study will explore the following questions:
a) What knowledge, skills, motivational, and organizational factors do administrators in these online programs perceive as facilitating or inhibiting the use of student feedback and analytic data for continuous improvement in these
institutions?
b) For those factors found to facilitate the use of student feedback for continuous improvement, what promising practices could be adapted and utilized by other units in the same institution? For those factors perceived as inhibitors, what solutions may be helpful for improving the use of student feedback and analytic data for continuous improvement within the institution?
c) How might those interventions, whether promising practices or solutions, be
evaluated for effectiveness?
Definition of Terms
Implementation. The institutional deployment of a new technology (Karp, 2014).
Innovation. The application of a new idea for a new outcome (Blass & Hayward,
2014).
Periodic evaluations. Used for program improvement for all stakeholders
involved (Hirner & Kochtanek, 2012).
Surveying students. Informal discussions or conversations, formal qualitative
sessions, representatives, or questionnaires (Harvey, 2003).
System leader. The individual responsible for the system and responding to
questions throughout the evaluation process (Jacek, 2013).
Organization of the Study
Five chapters are used for this study. This chapter provided an introduction with key concepts and terms used throughout the study about using data for continuous improvement. This chapter also introduced the stakeholders, methodology, and purpose for the remainder of the study. Chapter Two provides a review of the literature relative to
using data for continuous improvement. Chapter Three explores the assumed causes for
this study as well as the methodology for participant selection, data collection, and analysis. In Chapter Four, data and results are analyzed and presented in various tables.
Chapter Five provides the solutions and promising practices for using data for continuous
improvement, based on data and literature, for an effective implementation plan with
room for future evaluation.
CHAPTER TWO: LITERATURE REVIEW
The purpose of this project is to identify factors that influence administrators in
developing systems for effective data use for continuous improvement. This chapter
provides a review of the literature that is related to this study. The chapter begins with a
discussion about the use of data for continuous improvement in order to define this
process and describe its importance for accreditation purposes. Next, the chapter will
review learning and motivation theory and factors that influence human performance.
Finally, the chapter will review the literature on data use in order to identify the
knowledge/skills, motivation, and organizational factors that administrators in graduate
online programs would need in order to successfully implement and carry out a process
of data use for continuous improvement.
Use of Data for Continuous Improvement
The use of data, such as learning analytics and student feedback, serves
significant roles in continuous program improvement and meeting different accreditation
standards for schools and professional affiliations (McClintock & Benoit, 2014; Siemens,
2013). The use of data to make key decisions on campus has steadily increased over the
past few years and will continue to grow and play a pivotal role in making major
decisions for programs across higher education institutions (Dringus, 2012). The use of
learning analytics in online education programs has greatly increased, for example,
enabling greater understanding of pedagogical practices that enhance student learning and
progress in coursework (Lockyer, Heathcoate, & Dawson, 2013). In addition, student
feedback is becoming a means for program administrators of online programs to evaluate
and consider future approaches for the management of learning and teaching within
courses and degrees (Alderman, Towers, & Bannah, 2012). Further, the use of data benefits institutional effectiveness through feedback for students and faculty alike (Barker & Pinard, 2014). Altogether, learning analytics, student feedback, and faculty evaluation are examples of data that can contribute to program improvement, external accreditation requirements, internal assessment, and strategic planning (Banta, Pike, & Hansen, 2009).
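To make this concrete, here is a small, hypothetical Python sketch of the kind of trend check an administrator might run on end-of-course ratings; the terms and scores are invented for illustration and do not come from the study.

```python
from statistics import mean

# Hypothetical end-of-course ratings (1-5 scale) keyed by term.
ratings_by_term = {
    "Fall 2014": [3.8, 4.0, 3.5, 3.9],
    "Spring 2015": [4.1, 4.2, 3.9, 4.0],
    "Fall 2015": [4.3, 4.4, 4.1, 4.5],
}

# Compare each term's mean rating with the previous term's mean.
previous = None
for term, scores in ratings_by_term.items():
    avg = mean(scores)
    trend = "" if previous is None else (" (up)" if avg > previous else " (down)")
    print(f"{term}: {avg:.2f}{trend}")
    previous = avg
```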
Higher Education Accrediting Bodies
Higher education accrediting bodies mandate the use of data for continuous
improvement. Access to information through technology is becoming increasingly
important for schools across the country (Ma et al., 2015). Graduate schools across the
country report to various accreditation agencies. Each school is responsible for collecting
data to improve program quality and institutional effectiveness (WASC, 2014; MSCHE,
2014; NEASC, 2014). For Western Association of Schools and Colleges Senior College
and University Commission accreditation, schools must undergo program review,
assessment of student learning, and data collection analysis as means to guide decision
making processes throughout the school year (WASC, 2014). Middle States Commission
on Higher Education requires that universities and colleges demonstrate systematic and
sustained use of measures that maximize the use of data and relative information to
evaluate and assess programs throughout the institution (MSCHE, 2014). The
Commission on Institutions of Higher Education for the New England Association of
Schools and Colleges requires colleges to accomplish and improve achievements through
planning and evaluation of systematic collections of data to enhance institutional
effectiveness (NEASC, 2014). These higher education accreditation standards ensure that
campus leaders improve educational experiences through data-driven plans for students
and faculty at all levels (Banta et al., 2009).
Professional Accrediting Agencies Definition
In addition to the main higher educational accrediting bodies, professional fields
may also have accrediting agencies to assess the extent to which programs achieve their
goals. There are a number of different professional accrediting agencies. Some of these
professional accrediting bodies assess the collection and use of data to improve program effectiveness and prepare students for that profession (NHSE, 2014; CAEP, 2015;
CSWE, 2014). Different schools within the universities must respond to a specific
accrediting process for the professional affiliation beyond the school accrediting agency
(Banta et al., 2009). The Commission on Collegiate Nursing Education’s standards for
accreditation of baccalaureate and graduate nursing programs demand that faculty
members utilize data from student and staff evaluation forms to inform decisions and
facilitate the achievement of student learning and overall program effectiveness throughout the institution (NHSE, 2014). The Council for the Accreditation of Educator
Preparation demands that higher education institutions provide quality assurance and strive
for continuous improvement through maintaining validated data to enhance program
effectiveness and improve upon student learning outcomes and developmental progress
(CAEP, 2015). The Council on Social Work Education utilizes standards to accredit
baccalaureate and graduate social work programs through the requirement that schools
with social work programs continuously use data to inform decisions for potential change
in curriculum and student organizations (CSWE, 2014). The various standards for
professional accrediting agencies ensure that campus leaders carefully monitor data for
continuous improvement to support and evaluate strategic directions across the higher
education institution (Banta et al., 2009). Although the accreditation agencies ask
schools to use data for continuous improvement, the accreditation mandates do not
specify how to use data for continuous improvement. Most organizations will focus on
the collection of data, but fail to analyze the data for coded themes to create meaningful
change (Ma et al., 2015).
Learning and Motivation Theory
To better understand what schools and online graduate programs need to
successfully perform in utilizing data for continuous improvement, this section will
explore principles of learning and motivational theories, and the role of cultural models
and settings within organizations. Specifically, this section will review the knowledge
and skills, motivational, and organizational factors that are required for successful human
performance.
Knowledge and Skills
Knowledge and skills consist of four different categories: factual, conceptual,
procedural, and metacognitive (Krathwohl, 2002). Factual knowledge describes the facts
and bits of information required in knowing the discrete details about a particular element
(Anderson & Krathwohl, 2001). This is the basic level for understanding a topic or task
(Rueda, 2011). Conceptual knowledge comprises the more complex, organized knowledge of concepts, processes, and principles that relate different categories of knowledge (Rueda,
2011). Anderson and Krathwohl (2001) refer to this as the second cognitive level past factual knowledge. Procedural knowledge speaks to the knowledge dimension of
knowing how to do something, or knowing the steps in what to do, in context- or domain-specific areas (Krathwohl, 2002). Procedural knowledge also refers to the third
knowledge dimension (Rueda, 2011). Metacognitive knowledge then focuses on how a
learner thinks about thinking; this is the strategic planning aspect of awareness in a
particular area or domain (Rueda, 2011). This ability to think about one's plan is the
highest of the four dimensions of knowledge (Anderson & Krathwohl, 2001). Altogether,
Anderson and Krathwohl (2001) utilized these four dimensions of knowledge to create a cognitive process dimension, the revised taxonomy table, for learners to demonstrate learning objectives and the processes used to learn them. In the section
below, the administrators’ use of data for continuous improvement will be explained
through each knowledge domain.
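As an illustrative aid only, the sketch below encodes Krathwohl's (2002) four knowledge dimensions as a Python enumeration and classifies some hypothetical data-use tasks; the tasks are examples invented here, not items drawn from the study.

```python
from enum import Enum

class KnowledgeDimension(Enum):
    FACTUAL = "facts and terminology"
    CONCEPTUAL = "relationships among categories and principles"
    PROCEDURAL = "how to carry out a task"
    METACOGNITIVE = "planning and reflecting on one's own thinking"

# Hypothetical data-use tasks mapped onto the four dimensions.
EXAMPLE_TASKS = {
    "Name the data types a course survey collects": KnowledgeDimension.FACTUAL,
    "Group survey responses into underlying themes": KnowledgeDimension.CONCEPTUAL,
    "Carry out the steps to export and tailor an analytics report": KnowledgeDimension.PROCEDURAL,
    "Plan and reflect on periodic reviews of program data": KnowledgeDimension.METACOGNITIVE,
}

for task, dimension in EXAMPLE_TASKS.items():
    print(f"{dimension.name:>13}: {task}")
```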
Motivation
Active choice, persistence, and mental effort are the indicators of motivation.
Underlying motivation are the factors of task value, expectancy outcome, self-efficacy,
attributions, and goal orientation (Clark & Estes, 2008; Pintrich, 2003; Clark, 1999). Task
value refers to the idea that motivation and learning are enhanced if individuals value something for its intrinsic value, extrinsic value, attainment value, or cost value (Clark &
Estes, 2008). The motivational belief of task value addresses the question of why
someone should do a particular task (Rueda, 2011). Expectancy outcome refers to a
belief that a specific behavior will or will not lead to a specific outcome (Pintrich, 2003).
Self-efficacy details the belief that motivation and learning are enhanced when individuals hold positive expectations about their ability to succeed (Clark & Estes, 2008). The notion of self-
efficacy refers to an individual’s perception of an ability to perform a task (Rueda, 2011).
The motivational theory of attribution refers to the notion that motivation and learning are enhanced when individuals attribute their own successes or failures to their effort in lieu of
the natural ability bestowed upon them (Pintrich, 2003). In attribution theory, individuals
are attempting to make sense of the world by using the attributions to guide the process
(Rueda, 2011). For example, one might attribute a mistake to an extenuating
circumstance and feel less guilty about the matter. Goal orientation refers to the motivational theory that individuals who adopt mastery-oriented goals enhance learning and performance (Clark & Estes, 2008). More specifically, goal orientation is the pattern of beliefs that an individual holds in achievement situations (Rueda, 2011).
It becomes important to understand that the various motivational theories can impact
motivation, learning, and performance throughout different parts of any organization
(Clark, 1999). In the section below, this paper will examine the administrators’ use of
data for continuous improvement through various motivation theories.
Organization
The structures, policies, and ways people interact impact an individual’s
performance within an organization (Rueda, 2011). Organization is viewed through a lens
of cultural models and settings (Gallimore & Goldenberg, 2001; Schein, 2004; Clark &
Estes, 2008). Culture and cultural processes can describe individuals and an organization
as well (Rueda, 2011). Cultural models refer to the shared, often automated mental schemas and understandings of what is normal in the world and of how everything works (Rueda, 2011). The notion of cultural models impacts individual performance within an
organization because of people’s ability to become influenced by others (Schein, 2004).
Cultural models help shape the way an organization is structured through practices and
policies (Rueda, 2011). Cultural settings refer to the specific domains in which the behavior occurs: who is involved, what is happening, when it is happening, where it is occurring, and why it happens (Clark & Estes, 2008). Rueda (2011) points out that two people may have similar mental schemas but act differently from one another in specific cultural settings. For organizations to manifest characteristics that enhance performance, it becomes important to monitor the cultural settings for maladaptive motivational beliefs and to shape the cultural models accordingly (Rueda, 2011). Organizational culture that is built for efficient processes will enhance
performance (Clark & Estes, 2008). In the section below, this paper will examine the
administrators’ use of data for continuous improvement through cultural models and
organizational settings.
Factors that Contribute/Inhibit Continuous Improvement
Through the application of learning principles, motivational theories, and
organizational cultural models, this section will focus on a review of the literature to
examine the factors that contribute to and inhibit faculty and administrators in the use of
data for continuous improvement throughout online higher education programs across the
country.
Knowledge and Skills
The knowledge and skills needed to successfully use data for continuous improvement span factual, conceptual, procedural, and metacognitive knowledge
(Clark & Estes, 2002; Krathwohl, 2002). This section will describe the contributing and
inhibiting knowledge factors that impact continuous improvement through the use of
data.
Contributing. Knowing the different types of data, the underlying themes from
collecting certain types of information, how to implement change from the data, and
strategically planning the feedback process all contribute to continuous improvement
(Alderman, Towers, & Bannah, 2012; Barker & Pinard, 2014; Bonnel, 2008). Simply
knowing the principles for student learning engagement for online users contributes to the
administrators’ ability to utilize data for continuous improvement (Ma et al., 2015). There
are a number of factual knowledge components for faculty and administrators to
contribute to data for continuous improvement. Research suggests that stakeholders that
can identify the objectives for improving higher education programs can successfully
guide programs to use data for continuous improvement (Banta et al., 2009).
Administrators who ensure that the faculty know the functions of the technology and programs used for online courses enable faculty to monitor student progress and use
this information as a key awareness tool for growth in learning and student progression
(Sanders, 2008). Program administrators that know the specific types of data to collect to
generate thematic findings demonstrate the factual knowledge needed for continuous
improvement (Alderman et al., 2012). In addition, the administrators demonstrate factual
knowledge for continuous improvement when they recognize data that suggest whether
students will succeed or fail when taking courses online (Dringus, 2012). Administrators
of online programs that are aware of methodologies to support data collection help
increase the likelihood that faculty will understand these methodologies and use the data
for continuous improvement (Dringus, 2012). The ability to know what type of data to
collect entails the factual knowledge of basic facts, information, and terminology related
to the specific type of data needed to generate continuous improvement (Krathwohl,
2002).
Another type of knowledge that is required is conceptual. For the administrators
responsible for generating thematic findings from the collected data, their ability to
interpret the findings relative to categories is a factor that contributes to the use of data
for continuous improvement (Barker & Pinard, 2014). Program administrators familiar
with the conceptual knowledge of the Community of Inquiry (CoI) theoretical framework increase deeper learning through the use of student information in online
courses to improve learning outcomes (Ma et al., 2015). Faculty that attended
conferences on learning analytics and using data to guide pedagogical decisions for
online classrooms improved their conceptual knowledge on how to successfully use data
for continual improvement (Sanders, 2008). The ability to categorize the findings speaks
to the conceptual knowledge dimension as this ability refers to knowledge of underlying
principles, categories, structure, or theory of utilizing data for continuous improvement
(Krathwohl, 2002).
Procedural knowledge is important for contributing to the effective use of data
for continuous improvement. Administrators who know how to utilize learning analytics
to make sense of the student’s experience from the end-user perspective are more likely to use the learning analytics effectively (Siemens, 2013). Instructors who can provide guidance to students in online courses, using the data to guide their direction, successfully
use this information to improve pedagogical practices and learning outcomes (Ma et al.,
2015). The administrators who collect various types of data also know how to tailor data,
such as student feedback, for school faculty in order to ensure that the data is used for
continuous improvement (Alderman et al., 2012). The administrators’ knowledge of tailoring the student feedback speaks to the procedural dimension of knowledge, as it requires the administrators to know how to tailor the student data as well as to take the necessary steps
to actually tailor the data for the faculty themselves (Krathwohl, 2002).
Metacognitive knowledge is important for faculty and administrators to
successfully use data for continuous improvement. Administrators who plan to reflect on
the learning analytics exhibit the metacognitive knowledge that is required to ensure
students of online graduate programs have more activities that help them learn throughout
courses (Scheffel, Drachsler, Stoyanov, & Specht, 2014). In addition, administrators
responsible for data collection strategically plan for periods of time to allow for
examination of various types of data to continually improve their programs (Bonnel, 2011). This strategic planning reflects the metacognitive dimension of knowledge as it
relates to the ability to reflect and adjust to monitoring progress throughout the duration
of the program (Krathwohl, 2002). The knowledge and skills demonstrated by the program administrators responsible for data collection contribute
the utilization of data for continuous improvement.
Inhibiting. Without strategic assessment of programs, factual knowledge of the
feedback process, and procedural knowledge of how to tailor data, many higher education
programs inhibit the use of data for continuous improvement (Barker & Pinard, 2014;
Bonnel, 2008; Centner, 2014; Sanders, 2008). The administrators who do not know the
impact of data on student and faculty interactions online lack the factual knowledge
dimension of knowing the impact this type of data will have on continuous improvement
for their programs (Krathwohl, 2002). Not knowing the basic course design of the learning platforms for online classrooms at higher education institutions inhibited the faculty members’ use of data for continuous improvement (Sanders, 2008). Administrators
who collect data that represent static and fragmented pieces from the online courses fail
to understand the factual knowledge of these specific data types and the lack of impact
they have on student learning (Dringus, 2012).
The instructors of the courses must have conceptual knowledge of what the
learner needs in order to progress through an online course; failure to understand such knowledge will not improve the program (Ma, Han, Yang, & Cheng, 2015). The failure
to demonstrate the knowledge of underlying categories and principles of categorizing the
needs of failing students on distance education courses may negatively impact learning
(Krathwohl, 2002).
Program administrators who lack the procedural knowledge of how to read, use,
and interpret learning analytics data underutilize this information as it is essential in
understanding student engagement and learning (Lockyer, Heathcoate, & Dawson, 2013).
Not knowing how to read the trends in the learning analytics impacts the ability to use the
data for continuous improvement (Nsiah, 2013). Interpreting data that reflect student motivation requires knowing how to do so in order to enhance student motivation and improve learning outcomes; when program administrators do not know how to interpret these data, the gap lies in the procedural knowledge domain (Krathwohl, 2002). Administrators who do not strengthen faculty members’ procedural knowledge
of how to utilize data to help students in online courses inhibit the use of data for
continuous improvement (Sanders, 2008).
In addition, administrators who do not recognize or evaluate the data that reflect teacher and student interactions online inhibit the use of this type of data for continuous improvement (Centner, 2014). Those who take the time to plan reflection and assessment periods but fail to modify strategies for continuous improvement demonstrate a gap in the metacognitive knowledge domain (Krathwohl, 2002). This lack of knowledge and skills among the program administrators responsible for collecting data inhibits the utilization of data for continuous improvement.
Summary. The demonstration, or lack thereof, of knowledge and skills
contributes to and inhibits the use of data for continuous improvement. To contribute to
continuous improvement, program administrators must demonstrate the following:
factual knowledge of the specific types of data that they must collect, conceptual
knowledge of how to categorize different themes from the data, the procedural
knowledge of how to tailor the data for stakeholders, and the metacognitive knowledge of
strategically planning for set periods of evaluating the data. Program administrators who fail to categorize themes from the collected data, who do not know how to interpret data, or who do not strategically assess programs through data collection inhibit the use of data for continuous improvement.
Motivation
Motivational factors can either contribute to or inhibit the use of data for continuous improvement in online programs (Clark & Estes, 2008; Clark, 1999; Pintrich,
2003). This section will describe the factors that contribute and inhibit the use of data for
continuous improvement through various motivational theories.
Contributing. There are many different motivational factors that contribute to the
administrators’ and faculty’s use of data for continuous improvement. The intrinsic value in
learning analytics, belief that professors expect data to implement change, and positive
expectations for students to input data contribute to the use of data for continuous
improvement (Nsiah, 2013; Scheffel, Drachsler, Stoyanov, & Specht, 2014; Siemens,
2013). Program administrators who value learning analytics in data collection contribute
to the use of data for continuous improvement of their program (Siemens, 2013).
Campus leaders who share evidence with faculty members that utilizing data meets goals and objectives at similar institutions demonstrate expectancy outcome theory through the use of
data as a benefit for the programs at their schools (Banta et al., 2009). Access to financial
resources has increased the cost value of ensuring that online education is effective
through close examination of the learning analytics of the graduate programs online
(Lockyer, Heathcoate, & Dawson, 2013). Administrative leaders responsible for data
collection see the utility value of utilizing big data, such as learning analytics, as it can
transform the productivity and quality within the online program (Siemens, 2013). This
belief relates to the notion that intrinsic value enhances the process of data collection to
improve program quality altogether, as the research indicates that high intrinsic task value will enhance motivation, learning, and performance (Pintrich, 2003). Faculty
members who work under program administrators who are responsible for collecting data
contribute to the use of data for continuous improvement when the faculty members
expect the data to improve the learning outcomes for their students (Scheffel et al., 2014).
The faculty member’s belief is grounded in the motivational theory of expectancy
outcome, as these staff members believe that the thematic findings from the data will
generate improved learning outcomes for their students (Pintrich, 2003). In addition,
when program administrators responsible for data collection have positive expectations in
themselves to get students to respond to course surveys and input feedback, these
administrators contribute to the use of data for continuous improvement (Scheffel et al.,
2014). This belief is grounded in the motivational concept of self-efficacy in the sense
that motivation and performance are enhanced when the administrators have positive
expectations about their own role in achieving success (Clark, 1999). Also, faculty members’ observations of student motivation in the online program can contribute to the
success of using this observational data to change pace within the course or guide the
student in a new direction (Ma et al., 2015). The observed motivational behaviors of the
program administrators contribute to the use of data for continuous improvement.
Inhibiting. The lack of certain motivational assets may inhibit administrators in the successful use of data for continuous improvement. If administrators do not value data for its utility in promoting change, tension will result when they face expectations to collect and use data (McCuddy, Pinar, & Gingerich, 2008; Richardson, 2005; Siemens, 2013).
Many educators inhibit the use of data for continuous improvement when they believe
that the algorithms used to collect data from students in online graduate programs are
invalid and ineffective due to lack of utility value (Scheffel et al., 2014). Campus leaders
that have yet to establish a value for data as a viable tool to guide decision-making
processes inhibit the use of data for program improvements (Banta et al., 2009). The
notion that analyzing data for potential themes is time consuming and not worth the cost is grounded in the motivational belief of task value: performance is not enhanced when an individual does not see the benefit in the task (Clark, 1999). In addition, if program administrators face a staff with tension in the
application of academic versus innovative practices, then this will inhibit the use of data
for continuous improvement (Siemens, 2013). The tension behind the application of
innovative practices is grounded in the motivational belief that the faculty members may
have more negative emotions that will not enhance learning or job performance (Pintrich,
2003). Program administrators who demonstrate the discussed observed behaviors would
inhibit the utilization of data for continuous improvement across their higher education institutions.
Summary. Motivational beliefs can either support or undermine the use of data for continuous improvement. Program administrators contribute to the use of data
for continuous improvement through observed behaviors in the following: sharing
intrinsic value in learning analytics, expectancy outcome of generating themes to improve
learning outcomes, and having positive expectations in their own role as program
administrators responsible for data collection. Administrators inhibit the use of data for
continuous improvement through observed behaviors in the following: not valuing data
collection to improve programs, focusing on other issues in place of analyzing data for
potential themes, and failing to resolve the tension of applying innovative practices over
traditional academic practices for staff members.
Organization
An organization’s cultural values, beliefs, and attitudes, visible or invisible, contribute to or inhibit the use of data for continuous improvement (Clark & Estes, 2008;
Gallimore & Goldenberg, 2001; Schein, 2004). This section will describe the contributing
and inhibiting factors that impact continuous improvement through organizational
cultural settings and models.
Contributing. There are a number of different cultural models and settings that
contribute to administrators’ and faculty’s use of data for continuous improvement.
Resources to collect data, organizational value in generating themes from data, and
promoting a culture that carries high-quality feedback through to implementation contribute to the use of
data for continuous improvement (McCuddy et al., 2008; Nsiah, 2013; Sanders, 2008).
Program administrators have begun to shift towards accepting learning analytics in
education as it continues to benefit the educational setting within classrooms, schools,
higher education institutes, and international programs abroad (Siemens, 2013). Research
also points out that leaders who build cultures around data-driven decision-making
processes will successfully utilize data for continuous improvement (Banta et al., 2009).
Dringus (2012) specifies that when campus leaders provide comprehensive details about online programs, the faculty members will respond with the feedback necessary to evolve the structure of the program and online system altogether. Program administrators who provide staff with the resources to collect, analyze, and process data generate the
utilization of data for continuous improvement (Nsiah, 2013). The allocation of resources
symbolizes the cultural settings that appear within the higher education program as
concrete manifestations that will enhance the performance of faculty and students alike
(Gallimore & Goldenberg, 2001). Program administrators that place emphasis on learning
analytics contribute to the use of data for continuous improvement (Siemens, 2013). To
value learning analytics within the organization models a belief that is invisible and
automated for employees throughout the organization (Schein, 2004). The organizational
leaders that focus on program use of learning analytics increase the use of data for
continuous improvement as it can help instructors shape lessons and improve upon
pedagogical decisions online (Lockyer et al., 2013). In addition, the administrators
responsible for data at higher education institutions contribute to the use of data for
continuous improvement through promoting the cyclical feedback loop of student
feedback to teacher evaluation (Sanders, 2008). The promotion of the feedback loop
contributes to a cultural model that is amenable to change thus improving the notion of
using data to improve (Gallimore & Goldenberg, 2001). Campus leaders who build a culture that supports data as evidence of learning contribute to the utilization of data to guide continuous improvement throughout the program (Banta et al., 2009). The
observed cultural models and settings contribute to the use of data for continuous
improvement.
Inhibiting. Many factors inhibit faculty members’ and administrators’ ability to successfully use data for continuous improvement. From an organizational perspective,
administrators that must focus on more pressing issues, instead of analyzing data to
gather themes, inhibit the utilization of data for continuous improvement (Richardson,
2005). An organization’s resistance to analyzing learning analytics, bureaucratic limitations on accessing data, and negative reactions to data collection inhibit the use of
data for continuous improvement (Richardson, 2005; Scheffel et al., 2014; Siemens,
Many program administrators have yet to turn learning analytics data into meaningful information through the analysis process, thus limiting the potential impact on the
cultural models for faculty members within the school (Ma et al., 2015). Organizational
leaders that do not believe in using data to improve programs inhibit the use of data for continuous improvement (Richardson, 2005). A campus that consists of community
members who do not view data as a proper assessment tool to guide decision-making across the institution will inhibit the use of data for continuous improvement (Banta et al.,
2009). The lack of effective role models impacts the cultural settings that appear within
the organization such as the program administrators not seeing value in using the learning
analytics for program improvement (Schein, 2004). External stakeholders’ issues with access to data inhibit the use of data for continuous improvement as well (Siemens, 2013).
Outside parties that share pessimistic attitudes and negative beliefs towards use of data
will hinder an administrator’s ability to embed the cultural belief that using data will
improve the program altogether (Schein, 2004). In addition, faculty members that are
resistant to implementing changes from the learning analytics of a learning management
system will disable the use of data for continuous improvement (Scheffel et al., 2014).
Faculty and staff resistance to change, like using the learning analytic data, will inhibit
the use of data for program improvement (Clark & Estes, 2008). The observed cultural
models and settings will inhibit program administrators from using data to grow and
improve programs across higher education institutions.
Summary. Cultural models and settings may contribute or inhibit the use of data
for continuous improvement. The instructors of online courses can shape the structure of student learning engagement through active guidance (Ma et al., 2015). The following
organizational cultural models and settings are contributing factors: the allocation of
resources for planning data collection, an automated belief in utilizing learning analytics
to generate themes for potential change, and promoting a cyclical loop of student
feedback to teacher evaluation. The following organizational cultural models and settings
are inhibiting factors: an automated belief that the organization does not value learning
analytics, external stakeholders’ impact on access to data, and faculty members’ views on applying the themes from data to everyday work at the higher education institution.
Conclusion
The studies used throughout this literature review support the concept that
knowledge and skills, motivational theories, and organizational models contribute or
inhibit the use of data for continuous improvement (Clark & Estes, 2008; Krathwohl,
2002). Faculty and administrators require specific factual, conceptual, procedural, and
metacognitive aspects of knowledge to successfully use data for continuous improvement
(Sanders, 2008; Scheffel et al., 2014). These knowledge dimensions play a vital role at
different levels in impacting human performance (Anderson & Krathwohl, 2001).
Various motivational theories impact performance for administrators and faculty
members’ ability to successfully use data for continuous improvement (Clark & Estes,
2008). Rueda (2011) points out the importance of understanding the active choice, persistence, and mental effort that also impact the initiation of human performance. In
addition, the campus leaders that shape the cultural models within the higher education
institute will enhance or diminish the successful use of data to improve the program
(Dringus, 2012). All three areas, independent of one another, impact the performance of
the administrators and faculty members’ ability to use data for continuous improvement
in the online graduate program (Clark & Estes, 2008).
CHAPTER THREE: METHODOLOGY
Purpose of the Project and Questions
The purpose of this study was to explore administrators’ perspectives of the
barriers to and assets needed to effectively use data for continuous improvement in online
graduate programs. The study explored the following questions:
a) What knowledge, skills, motivational, and organizational factors do online
program administrators perceive as facilitating or inhibiting the use of student feedback and
analytic data for continuous improvement in these institutions?
b) For those factors found to facilitate the use of student feedback for continuous improvement, what promising practices could be adapted and utilized by other units in the same institution? For those factors perceived as inhibitors, what solutions may be helpful for improving the use of student feedback and analytic data for continuous improvement within the institution?
c) How might those interventions, whether promising practices or solutions, be
evaluated for effectiveness?
Framework for the Study
This study used the Gap Analysis Process (Clark & Estes, 2008) to examine the
use of data for continuous improvement. The gap analysis approach examines the
underlying knowledge, motivation, and organizational assets and barriers that factor into
the successful performance of stakeholders in using data for continuous improvement.
Each potential cause is assumed until validated or invalidated. Through this gap analysis process, it was possible to understand the root causes that directly impact the promising practice.
Figure 1. Gap analysis process
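To make the validate-or-invalidate step concrete, the following is a minimal sketch of how assumed causes could be tracked through the gap analysis process. It is an illustration only: the cause statements, the majority-agreement threshold, and the respondent counts are assumptions for demonstration, not the instrument or data used in this study.

```python
from dataclasses import dataclass

@dataclass
class AssumedCause:
    dimension: str                  # "knowledge", "motivation", or "organization"
    statement: str
    validated: bool | None = None   # unknown until evidence is examined

def validate(cause: AssumedCause, supporting: int, total: int) -> None:
    """Mark a cause validated when a majority of respondents support it
    (an assumed threshold, chosen here only for illustration)."""
    cause.validated = supporting / total > 0.5

# Hypothetical assumed causes drawn from the three gap analysis dimensions.
causes = [
    AssumedCause("knowledge", "Administrators know how to code data for themes"),
    AssumedCause("motivation", "Administrators value data for accreditation"),
]

validate(causes[0], supporting=8, total=8)
validate(causes[1], supporting=3, total=8)
for c in causes:
    print(f"[{c.dimension}] {c.statement}: "
          f"{'validated' if c.validated else 'not validated'}")
```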
Presumed Performance Needs/Issues/Assets
In traditional problem solving, causes of problems are often assumed, and then
solutions are generated based on assumptions. The causes are rarely validated prior to
selecting a solution. The causes that actually impact the performance may never be
addressed unless these causes are first validated. The first step in validating causes is a
thorough investigation of all possible causes based on professional knowledge, learning
and motivational theory, organizational causes, and a review of the background literature.
This section describes the presumed causes of performance for the stakeholders in this
study, based on my own professional knowledge and on a review of factors from learning
and motivation theory. In the table that follows this section, factors from the literature
reviewed in Chapter 2 are also included.
Personal Professional Knowledge
This section details the observations based on informal conversations with
members of the partnership of online graduate programs. The observations are categorized into knowledge and skills, motivation, and organization.
Knowledge and skills. Based on informal conversations, administrators use a
number of different skills in using data for continuous improvement. Administrators with
experience in data mining know the data that are essential in improving their programs.
The skill of sorting out the different data sets is key to their success. Many administrators
also know how to use learning analytics and generate potential themes to improve the
quality of their programs.
Motivation. Many of the program administrators see great value in collecting the
data. Colleagues of mine in higher education have indicated the importance of collecting
data before even making decisions on campus initiatives or projects. The motivation
comes from the drive to expand the online program and improve the program for
consumers and staff alike. The utility of using data for continuous improvement motivates many campus leaders, as it also satisfies accreditation standards.
Organization. Many of the administrators have shaped a culture around the
utilization of data for continuous improvement (Ma et al., 2015). As online education
continues to grow, many administrators constantly reflect on the data to ensure that the
program is running at the highest level. This culture embeds a cyclical process of
continually collecting data to improve.
Learning and Motivation Theory
This section will describe the assumed assets and barriers based on major learning
and motivational theories. The factors below are considered due to their critical role in
using data for continuous improvement. These assets are explained through the different
types of knowledge, motivational theory, and organizational/cultural beliefs.
Knowledge and skills. The following assets are related to the different
knowledge types. Using Krathwohl's (2002) taxonomy, the assumed assets are broken down as follows. Administrators who collect the data and generate themes must have
the basic facts and information on what the data looks like. The administrators must also
know how to categorize sets of data for different departments and for different processes.
In addition, the administrators must have the procedural knowledge of how to collect the
data and code it to generate potential thematic findings. The executive administration
must also have a strategic plan for using the data for program growth and improvement.
Motivation. The administrators of the online graduate programs use data for
continuous improvement for various motivational reasons. One reason is the intrinsic value of using learning analytics to better their programs (Pintrich, 2003).
Faculty of online graduate programs report high attainment value for implementing
changes from student feedback (Clark & Estes, 2008). In addition, many of the
administrators work with faculty who attribute their improved teaching methods to the data feedback loop (Pintrich, 2003).
Organization. Administrators responsible for the data feedback loop thrive in the
cultural beliefs and models within their organizations. The high-performing institutions culturally value innovation, thus helping data-based ideas flourish (Clark & Estes, 2008).
The organizations also emphasize continually examining the data in order to improve program quality (Schein, 2004). The administrators also hold an automated belief that the continuous improvement data feedback loop is crucial to sustainability and growth (Schein, 2004).
Summary
A summary of assumed assets categorized as Knowledge, Motivation, and Organization
is found in Table 1.
Table 1
Summary of Assumed Assets for Knowledge, Motivational, and Organizational Issues

Source: Scanning interviews, personal knowledge
Knowledge: Administrators know that searching for specific types of data is important (F); administrators can chunk the data (C); administrators know how to use learning analytics (P); administrators plan for periods of time to conduct assessment of data (M).
Motivation: Administrators see great value in collecting data (UV); administrators attribute student feedback to data (A); administrators value learning analytics from online programs (UV); administrators are motivated by accreditation mandates (UV).
Organizational processes: An organization of innovation always looking to improve; a culture that believes in data-driven decision making; a fostering nature that strengthens the cyclical process of student feedback to faculty members.

Source: Learning and motivation theory
Knowledge: Administrators know the basic information and facts required for successful data use (Krathwohl, 2002); administrators understand the underlying principles and categories necessary for the use of data for continuous improvement (Krathwohl, 2002); administrators exhibit the procedural knowledge of how to collect data and use it for continuous improvement (Krathwohl, 2002); metacognitive knowledge refers to thinking about thinking and the ability to reflect (Krathwohl, 2002).
Motivation: Administrators increase their use of data for continuous improvement when it carries intrinsic, extrinsic, attainment, and cost value (Clark & Estes, 2008); successful data use is enhanced when administrators have positive expectations about their own ability (Pintrich, 2003); successful data use is enhanced when administrators attribute their successes and failures to effort over ability (Clark & Estes, 2008); successful data use is increased when administrators exhibit positive emotions and reduce negative ones (Pintrich, 2003).
Organization: A culture that is open to change (Gallimore & Goldenberg, 2001); visible cultural models that have resources (Schein, 2004); a belief that there is a culture of constant competition (Clark & Estes, 2008); high employee turnover within the organization (Schein, 2004).

Source: Related literature
Knowledge: Administrators know that data collection can help improve programs (factual; Alderman, 2008); administrators know that specific types of data will help improve programs (conceptual; Barker et al., 2011); administrators know that analysis of learning analytics will improve their programs (procedural; Alderman, 2008); administrators know that planning for periods of assessment and evaluation will improve programs (metacognitive; Sonnell, 2011).
Motivation: Administrators are motivated to accept learning analytics because of its utility value (Siemens, 2011); administrators see the attainment value in generating themes from data collection to improve their programs (Scheffel et al., 2010); administrators with high self-efficacy will have positive expectations about their ability to use data for continuous improvement (Nsiah, 2013).
Organization: The organization values the collection of data (Richardson, 2005); promoting a culture of high quality (Sanders, 2008); increased motivation from uniform visions (McCuddy et al., 2008).
Validation of the Performance Assets
The remainder of Chapter 3 details how this study validated the assumed assets of knowledge, motivation, and organization/cultural beliefs.
Validation of the Performance Assets: Knowledge
The following section describes the validation process for the assumed knowledge
assets in utilizing data for continuous improvement.
Validation of factual knowledge assets. In this study, I attempted to validate the assumed factual asset through the program administrators. The administrators responsible for collecting various types of data are assumed to understand that these data contribute to improving their programs. Validating this assumption required a demonstration that administrators know data collection could improve their programs. The validation took place through an interview with the administrators. The interview questions revealed whether the administrators know the value of data, the purpose of data, the function of data collection, and the factors that contribute to using data to improve programs altogether.
Validation of conceptual knowledge assets. In this study, I attempted to validate the conceptual knowledge assets through an interview with the administrators responsible for data at their institutions. The administrators are assumed to be able to demonstrate the ability to classify different types of data. The interview posed questions to distinguish whether the administrators know the different types of data, the relationships between different data sets, and potential theories behind the data sets. The validation attempted to confirm that the administrators have the conceptual knowledge of the underlying categories, principles, structure, or theory in the use of data for continuous improvement for their programs (Krathwohl, 2002).
Validation of procedural knowledge assets. Validating the procedural knowledge assets required interviews with program administrators. The administrators are assumed to be able to demonstrate how to generate themes from the data as well as how to implement the thematic findings. Validating the procedural knowledge of faculty required interview questions asking faculty members to articulate how they have utilized thematic findings from the data to improve instruction in their courses. Validating the procedural knowledge of administrators required interview questions inquiring whether the administrators know how to code data and look for specific findings. Both demonstrations attempted to validate the assumed assets of procedural knowledge.
Validation of metacognitive knowledge assets. Validating the metacognitive knowledge assets required document analysis of the strategic plan for each online graduate program. The assumed metacognitive asset is that executive administrators have adjusted their skills and knowledge to strategize for the use of data for continuous program improvement. Schein (2004) points out that such artifacts serve as evidence and examples of metacognitive knowledge. The analysis validated the assumption that executive administrators reflected and adjusted their plans to include general strategies for using data for continuous improvement (Krathwohl, 2002).
Table 2
Summary of Assumed Knowledge Assets and Their Validation

Asset (Factual): Administrators know that data evaluation can improve programs.
Validation: Interview questions: What are your thoughts on data? Do you use data to reach goals? Why? Survey question: How does our organization improve online programs?

Asset (Conceptual): Administrators know the different, or specific, types of data that can contribute to student learning.
Validation: Ask the administrators to summarize the different types of data. Interview questions: What type of data do you collect? Why?

Asset (Procedural): Administrators know how to create themes from data analysis.
Validation: Ask administrators to articulate how they have used data to improve instruction. Interview question: How do you code the data?

Asset (Metacognitive): Administrators have strategic plans in place for growth and evaluation with continual use of data.
Validation: Observe the strategic plan of each distance education program. Interview questions: When do you assess and evaluate the data? Is the staff following the changes from your data? Survey item: There is a regularly planned period to assess and evaluate data.
Validation of the Performance Assets: Motivation
This section details the validation techniques for the assumed motivational assets. The assumed motivational assets compose indicators for three facets of motivational performance: active choice, persistence, and mental effort (Clark & Estes, 2008). Validating the assumed motivational asset that program leaders see high task value in using data to improve programs required a written Likert-scale item survey with questions focused on expectations in the program. This survey validated the possible causes of high task value and expectancy outcomes among the program leaders responsible for data collection. Validating the assumed motivational asset that staff view feedback as important to improving practice required a Likert-scale item survey focused on faculty perceptions of implementing feedback to improve their practices. This survey attempted to validate the extrinsic, or utility, value and to determine whether faculty view feedback as useful and valuable. I attempted to validate the assumed motivational asset that faculty attribute improved student feedback to thematic findings through a Likert-scale item survey focused on whether student feedback reflected the changes faculty members made in their courses. This survey aimed to validate the idea that faculty attribute the improvement of instruction to their effort in implementing changes based on student feedback. Finally, validating the assumed motivational asset that administrators use data for continuous improvement to satisfy accreditation standards required a Likert-scale item survey asking executive administrators whether using data is important for accreditation.
Table 3
Summary of Assumed Motivational Assets and Their Validation

Asset: Program leaders see the utility value in using data to improve programs.
Indicator: Persistence, effort. Possible cause(s): High task value.
Validation: Likert-scale survey item: "As an administrator, I expect to do well in our programs."

Asset: Administrators have an extrinsic interest in student feedback as it helps improve program decisions.
Indicator: Active choice. Possible cause(s): High task value.
Validation: Interview question: What types of ultimate outcomes do you expect from the use of data in your online program?

Asset: Administrators attribute improved student feedback to thematic findings from the coded data.
Indicator: Persistence, effort. Possible cause(s): Attributions.
Validation: Interview question: Has your approach to gathering and using student feedback changed and if so how?

Asset: Executive administrators value learning analytics as an improvement to satisfy accreditation standards.
Indicator: Active choice, effort. Possible cause(s): High task value (attainment).
Validation: Likert-scale survey item: "Using data is important for accreditation."
Validation of the Performance Assets: Organization/Culture/Context
The validation techniques for the assumed organizational and cultural assets took place through a survey. The two assumed organizational assets for this study stem from cultural models and cultural settings. Cultural models represent values and beliefs that remain generally invisible (Gallimore & Goldenberg, 2001). Cultural settings refer to visible, concrete manifestations that appear within organizational settings (Schein, 2004). This study assumes the organizational asset that the graduate schools share a cultural model that values innovative practices. Validating this assumed organizational asset required a Likert-scale item survey asking whether the schools value innovative practices within the organization. This survey aimed to validate that employees within the partnership schools accept new ideas and are open to change through a cultural model of valuing innovative practices. Validating the assumed organizational asset that the partnership schools share a cultural setting of staff highly motivated to improve programs with the use of data required an open-ended survey item asking how the school improves programs altogether. This validation indicated the possible organizational causes of constant communication and feedback seen often within the actual settings of the graduate programs.
Table 4
Summary of Assumed Organizational/Culture/Context Assets and Their Validation

Asset: Positive attitude of a data-driven culture across campus.
Possible organizational causes: Acceptance of new ideas; openness to change.
Validation: Interview question: What is the role of faculty in data collection and use, and how has faculty responded to the use of data? Survey item: My faculty enjoys using data to shape practices.

Asset: Culture of highly motivated administrators advocating for faculty to meet specific goals.
Possible organizational causes: Constant communication; praise and feedback seen often.
Validation: Interview question: Do you use data to establish and track goals? Describe that process, such as how you use data to set goals, how you communicate goals, and how you measure progress.
Participating Stakeholders
The stakeholder population for this study was the program administrators responsible for data collection throughout the partnership of online graduate programs. The criteria for choosing these stakeholders were as follows: they must have full control over access to data, have the ability to code the data, and serve as the chief individual responsible for disseminating the thematic findings from the data throughout the institution. These stakeholders were chosen because of their responsibilities for collecting, analyzing, and disseminating information throughout their respective online graduate programs.
Data Collection
For data collection, permission was obtained from the University of Southern California's Institutional Review Board. This next section describes how the validation of the knowledge, motivation, and organizational assets in this study was conducted.
Surveys
Surveys were created to validate the assumed assets of knowledge, motivation, and organization. The surveys consisted of Likert-scale questions for the program administrators. Each question was tailored to validate the assumed assets in the tables above. The survey consisted of X questions intended to measure potential validation of the assumed assets. These Likert-scale questions were validated and shown to be reliable in previous studies. The surveys were administered through a secure online site to ensure the results stayed in a safe virtual place. Survey respondents remained anonymous, and the data were kept on a password-protected computer.
Interviews
Interviews occurred via online telecommunication software and followed a semi-structured script of questions (Merriam, 2009). The video software also allowed for recording. All of the interviews were conducted in English and aimed to validate the assumed assets of knowledge, motivation, and organization/culture. The interviews lasted between thirty minutes and an hour. Questions were also probed for richer responses. The interview guide is included in the appendix.
Trustworthiness of Data
Ensuring the trustworthiness of the data for this study required numerous procedures, best described in Merriam's (2009) checklist. For triangulation purposes, the interviews, surveys, and document analysis helped validate the assumed assets. Adequate time was spent collecting the data to ensure saturation occurred. Discussions with colleagues regarding the process of the study took place. Member-checking also took place to increase the reliability of this study (Merriam, 2009). In addition, this study provided assurance of anonymity in the surveys and confidentiality throughout all of the interviews.
Role of Investigator
As the investigator of this study, I am a doctoral student at one of the universities within the partnership. However, the department being studied at the university I attend is separate from my own. I ensured complete confidentiality of information, identity, and data for every school within the partnership. I ensured that participation in this study was completely voluntary and that the right to participate was clearly understood. Participants could withdraw from the study at any time. Data were not collected without obtaining permission from the university, department, and administrator.
Data Analysis
Data analysis was based on the surveys, interviews, and document analysis. Interview data were analyzed using the a priori categories of knowledge, motivation, and organization factors that contribute to using data for continuous improvement, in addition to open coding to find unexpected, emergent themes that either nullify or validate the assumed assets. The survey data reflected the frequency rate at which program administrators chose each option on the Likert-scale items (strongly agree, agree, disagree, strongly disagree). In addition, the document analysis used the a priori categories to determine whether there was plausible cause to validate the assumed assets of knowledge, motivation, and organization.
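As a minimal illustration of this analysis procedure, the sketch below tabulates Likert-item frequency rates and applies a priori knowledge, motivation, and organization codes to a transcript excerpt. It is a hypothetical sketch only: the responses, the category keywords, and the excerpt are invented for illustration and are not the study's instrument or data.

```python
from collections import Counter

# Hypothetical Likert responses for one survey item (illustrative values).
responses = ["strongly agree", "agree", "agree", "strongly agree",
             "neutral", "strongly agree", "agree", "strongly agree"]

# Frequency rate of each response option, as reported in Chapter 4.
freq = Counter(responses)
for option in ["strongly agree", "agree", "neutral", "disagree", "strongly disagree"]:
    count = freq.get(option, 0)
    print(f"{option:<18} {count} ({count / len(responses):.0%})")

# A priori categories with assumed keywords; open codes would be added
# as unexpected, emergent themes appear in the transcripts.
a_priori = {
    "knowledge": ["code", "theme", "types of data"],
    "motivation": ["value", "accreditation", "feedback"],
    "organization": ["culture", "goal", "communication"],
}

def code_excerpt(excerpt: str) -> list[str]:
    """Return every a priori category whose keywords appear in the excerpt;
    excerpts that match no category are flagged for open coding."""
    text = excerpt.lower()
    matches = [cat for cat, kws in a_priori.items()
               if any(kw in text for kw in kws)]
    return matches or ["open coding"]

print(code_excerpt("We value the accreditation feedback loop."))  # ['motivation']
```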
CHAPTER FOUR: RESULTS AND FINDINGS
The purpose of this study was to explore administrators’ perspectives of the
barriers to and assets needed for effectively using data for continuous improvement in
online graduate programs. The question that guided this study is: what knowledge, skills,
motivational, and organizational factors do program administrators in these online
programs perceive as facilitating or inhibiting the use of student feedback and analytic
data for continuous improvement in these institutions? The Gap Analysis model (Clark &
Estes, 2008) was used as the framework to guide this study. The assumed causes that
facilitate or inhibit the use of data for continuous improvement were delineated through a
review of the literature in knowledge and skills, motivational theories, and organizational
beliefs. Each assumed cause was assessed to determine the validation of these factors.
Both qualitative and quantitative data were collected to determine the validation
of each assumed asset. Surveys were distributed to each program administrator via an
online platform. Each survey had open-ended and Likert scale questions. Interviews were
conducted via cell phone in order to understand the knowledge, motivation, and
organization causes that contribute to or inhibit the use of data for continuous
improvement. Data were collected from the program administrators at each partnership
school over a three-month period.
This chapter first displays the data collection methods at each partnership
university. This is followed by the validation results for each assumed asset through an
examination of the survey results, findings from interviews, and a synthesis for each
cause. The goal was to determine the validity of each assumed knowledge, motivation,
and organizational factor that facilitates or inhibits the use of data for continuous
improvement.
Data Collection at Programs
This section displays the various types of data collected by each of the participating programs. All seven programs within the partnership universities collect different types of data for continuous improvement. Areas shaded in black signify that the program collects that particular type of data. Details of the data collected across the seven programs are visible in Table 5.
Table 5
Data Collected at Programs

Types of data (columns): Student Background Information; Federal Data; Admissions/Retention Data; Field Placement Data; Faculty/Student Feedback Data; Online Activity Data.
Programs (rows): Program A; Program B; Program C; Program D; Program E; Program F; Program G.
Results and Findings for Knowledge Causes
The assumed assets were delineated from four knowledge types: factual, conceptual, procedural, and metacognitive (Anderson & Krathwohl, 2001). The goal was to understand which knowledge or skills facilitated or inhibited the use of data for continuous improvement. All of the findings were extracted from interviews and survey responses.
Data Evaluation Can Improve Programs
According to the survey results, one respondent strongly disagreed that the collection and use of data can help online programs, 25% of respondents agreed, and 63% strongly agreed. Therefore, this assumed cause was validated. Results are displayed in Figure 2.
Figure 2. Responses to survey question 1: The collection and use of data help our online degree programs perform well. The number of respondents disagreeing/agreeing with this statement is represented on the vertical axis.
Functions of Different Types of Data
It was assumed that program administrators know the different types of data, or
specific types, that contribute to student learning in their online programs. To validate
this assumed conceptual knowledge cause, participants were asked the following
question: “What type of data do you collect? Why?”
According to the transcribed interview data, all of the respondents described a number of different types of data used to contribute to student learning. One respondent noted: "We collect to support all of our decisions and conclusions." A few respondents described retention data. Two respondents described admissions data. Additionally, respondents discussed using student background information, student feedback, and faculty performance ratings. The collection of data to contribute to student learning was described as "similar to the same data we collect for students on the ground." Hence, this conceptual knowledge assumption has been validated. The results are displayed in Figure 3.
Figure 3. Responses to interview question 2: What types of data do you collect? Why? The number of types of data that participants described in the interview is listed on the vertical axis.
Creating Themes from the Data
It was assumed that the program administrators know how to create themes from data. Learning analytics is used to collect data and generate themes to improve programs. To validate this assumed procedural knowledge cause, the participants were asked: "How do you code data?"
According to the transcribed interview data, all of the participants coded the data
to find specific themes through learning analytics. The administrators discussed a few
major themes. An administrator stated: “we look at a number of quality measures to
determine the appropriate codes.” One administrator discussed the rates of students
passing a course. A few discussed the grade point average of students in the program. In
addition, three administrators spoke to student retention, student feedback, and data from
course learning outcomes. One administrator described the creation of themes through
“tallying up key phrases and producing a frequency chart to find relevant themes.”
Therefore, this procedural knowledge assumption was validated. The results are
displayed in Figure 4.
Figure 4. Responses to interview question 1: How do you code data? The number of themes that participants described in the interview is listed on the vertical axis.
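The "tallying up key phrases and producing a frequency chart" approach one administrator described can be sketched in a few lines. This is an illustration only; the transcripts and key phrases below are hypothetical stand-ins, not data from this study.

```python
from collections import Counter
import re

# Hypothetical transcript excerpts standing in for the interview or
# student-feedback texts being coded.
transcripts = [
    "Students asked for faster feedback on assignments.",
    "Feedback on discussion posts was slow this term.",
    "Retention dipped when course pacing felt rushed.",
]

# Key phrases of interest (assumed for illustration).
key_phrases = ["feedback", "retention", "pacing"]

counts = Counter()
for text in transcripts:
    for phrase in key_phrases:
        # Count each case-insensitive occurrence of the phrase.
        counts[phrase] += len(re.findall(phrase, text, flags=re.IGNORECASE))

# A simple text frequency chart; the most frequent phrases suggest themes.
for phrase, n in counts.most_common():
    print(f"{phrase:<10} {'#' * n} ({n})")
```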
Strategic Periods to Assess Data
Metacognitive knowledge focuses on how a learner thinks about thinking; this
type of knowledge is the most complex of the four dimensions of knowledge (Anderson
& Krathwohl, 2001; Rueda, 2011). There was one survey question that pertained to
metacognitive knowledge. It was assumed that program administrators have strategic
plans in place for growth and evaluation with continual use of data. To validate this
assumption, administrators were given the following prompt on a Likert scale: there are regularly planned periods to assess and evaluate data.
According to the survey results, 88% of the respondents agreed that there is a regularly planned period to assess and evaluate data, and 13% strongly agreed. The results validate the metacognitive assumption that program administrators regularly make time to plan growth and evaluation with continual use of data. The responses are described in Figure 5.
Figure 5. Responses to survey question 2: There are regularly planned periods to assess and evaluate data. The number of respondents disagreeing/agreeing with this statement is represented on the vertical axis.
It was assumed that program administrators have strategic plans in place for
growth and evaluation with continual use of data. To validate this assumed metacognitive
knowledge cause, participants were asked the following: “When do you assess and
evaluate the data?”
According to the transcribed interview data, all of the respondents described the different frequencies with which they ensure that data are assessed and evaluated throughout the school year. A respondent stated: "We assess and evaluate data to inform school leaders for them to act on." Three respondents described weekly meetings with staff to evaluate and assess data. One respondent described the weekly meeting as "a time to collaborate and talk about what data we can use and share." Almost all of the respondents met quarterly. Five respondents described monthly meetings. In addition, all of the respondents discussed their annual meetings with program leaders to evaluate and assess their data. Thus, this assumed metacognitive knowledge asset has been validated. The results are displayed in Figure 6.
Figure 6. Responses to interview question 3: When do you assess and evaluate the data? The frequency of the evaluation periods that participants described in the interview is listed on the vertical axis.
Synthesis of Results and Findings for Knowledge Causes
Through the triangulation of the surveys and interviews given to program administrators of online programs, the results validated the assumed factual, conceptual, procedural, and metacognitive assets from the research literature regarding the use of data for continuous improvement of their online programs.
Factually, the program administrators validated the assumed cause that administrators know data evaluation can improve their programs: 88% of respondents agreed or strongly agreed that the collection and use of data help their online programs perform well.
Procedurally, all of the respondents knew how to create themes through data
analysis of their online programs. The number of different themes varied between each
administrator, but all of them demonstrated that they knew how to code the data to
generate specific themes within each of their programs.
Conceptually, all of the respondents knew the different types of data to collect in order to improve student learning outcomes within their programs. When asked what type of data they collect, the program administrators described between seven and eleven different types, depending on the needs of their program.
As for metacognitive knowledge, all of the respondents agreed or strongly agreed
that there are regular periods planned to evaluate and assess data. In addition, the
program administrators described a number of different processes used to ensure that
planning took place to regularly evaluate and assess their program data throughout the
school year.
The assumed knowledge causes as well as triangulated findings are listed in Table
6.
Table 6
Summary of the Assumed Knowledge Causes and Validation Findings

Factual: Administrators know that data evaluation can improve programs.
Result: Validated. 88% of respondents agreed or strongly agreed that data helps improve their programs.

Conceptual: Administrators know the different, or specific, types of data that can contribute to student learning.
Result: Validated. 100% of respondents described different types of data they collect to improve student learning within their programs.

Procedural: Administrators know how to create themes from data analysis.
Result: Validated. 100% of respondents described different themes they created through analysis of data.

Metacognitive: Administrators have strategic plans in place for growth and evaluation with continual use of data.
Result: Validated. 100% of respondents described different periods in place to ensure data are evaluated periodically; 100% of respondents agreed or strongly agreed that there is a regularly planned period to collect and analyze data.
Results and Findings for Motivation Causes
The assumed assets were delineated from various motivational theories (Rueda,
2009) to understand which motivational principles facilitated or inhibited the use of data
for continuous improvement.
Administrators Value Using Data to Improve Programs
It was assumed that administrators value using data in order to improve their programs. To assess the utility value administrators place on using data to improve programs, the respondents rated the following statement on a Likert scale: Our collection and use of data have resulted in practical changes in operations. The results indicate that the majority of the respondents agreed or strongly agreed with this statement: 50% of respondents agreed that the collection and use of data improved their programs, and 38% strongly agreed. Only one respondent felt neutral about this statement. As a result, this assumed cause was validated. The results are displayed in Figure 7.
Figure 7. Responses to survey question 3: Our collection and use of data have resulted in practical changes in operations. The number of respondents disagreeing/agreeing with this statement is represented on the vertical axis.
Administrators Value Student Feedback
It was assumed that program administrators have an extrinsic interest in student feedback as it helps improve program decisions. To validate this assumed motivational cause, participants were asked the following: "What types of ultimate outcomes do you expect from the use of data in your online program for student learning?"
According to the transcribed interview data, all of the respondents described a number of improvements expected through student feedback. Four respondents described program performance. One respondent noted: "For program performance, data tells us the true story; we can see what is working and change what isn't working." Three respondents described student learning and student motivation. In addition, two respondents described academic issues. Finally, nearly all of the respondents described faculty development. One respondent noted, "we want faculty to be as comfortable as possible." Thus, this assumed motivational asset has been validated. The results are displayed in Figure 8.
Figure 8. Responses to interview question 4: What types of ultimate outcomes do you expect from the use of data in your online program for student learning? The number of student-feedback-generated outcomes described in the interview is listed on the vertical axis.
Administrators Value Using Data for Accreditation
It was assumed that administrators value learning analytics as a tool for satisfying accreditation standards. To assess the attainment value of using data for accreditation purposes, the respondents rated the following statement on a Likert scale: Using our data is important in the accreditation process. The results indicate that all of the respondents agreed or strongly agreed with this statement: 25% of respondents agreed that using their data is important to the accreditation process, and 75% strongly agreed. Therefore, this assumed cause was validated. The results are displayed in Figure 9.
Figure 9. Responses to survey question 4: Using data is important in the accreditation process. The number of respondents disagreeing/agreeing with this statement is represented on the vertical axis.
Administrators’ Attainment Value of Data for Accreditation
It was assumed that program administrators value learning analytics, or using
data, as a way to satisfy accreditation standards. To validate this assumed motivational
cause, participants were asked the following: “How much does the accreditation process
impact your data practices?”
According to the transcribed interview data, all of the respondents described a number of practices that satisfy accreditation. Three respondents described using data for internal reviews to satisfy accreditation mandates. One respondent stated: "Data supports our presentation to the educational community and betters our thinking." Four respondents described data collected to generally improve their programs. Three respondents described data used to improve the student experience. Finally, two respondents described data from learning outcomes used to meet accreditation standards. One respondent described the learning outcome data "as a way to see what the student has learned to effectively use data for accreditation." As a result, the assumed motivational asset has been validated. The results are displayed in Figure 10.
Figure 10. Responses to interview question 5: How much does the accreditation process impact your data practices? The accreditation-related data practices participants described in the interview are listed on the vertical axis.
Administrators Attribute Improved Student Feedback to Program Changes
It was assumed that program administrators attribute improved student feedback
to thematic findings from their data. To validate this assumed motivational cause,
participants were asked the following: “Has your approach to gathering and using student
feedback changed and if so how?”
According to the transcribed interview data, all of the respondents described a number of program modifications generated from student feedback. One respondent described marketing tactics: "we even surveyed students who did not choose our program." A few respondents discussed faculty development. One respondent stated: "As we expanded, we created guidebooks to help support faculty." Another respondent also talked about technical development. In addition, at least three respondents discussed revised data collection methods and general continuous improvement. One respondent noted, "our data collection is for continuous improvement and to make changes in our program." Thus, this assumed motivational asset has been validated. The results are displayed in Figure 11.
Figure 11. Responses to interview question 7: Has your approach to gathering and using student feedback changed and if so how? The number of program modifications attributed to student feedback that participants described in the interview is listed on the vertical axis.
Synthesis of Results and Findings for Motivation Causes
Through the triangulation of the surveys and interviews given to program administrators of online programs, the results validated the assumed motivational assets from the research literature regarding the use of data for continuous improvement of their online programs.
Program administrators value the use of data to improve programs. When surveyed, the majority of administrators agreed that the use of data has improved their programs, demonstrating the value of using data for improvements within the program.
Administrators have an extrinsic interest in student feedback as it impacts program decisions. Throughout the interview process, respondents described a number of ways that student feedback has helped improve program decisions.
Administrators value learning analytics, or the use of data, as a method of satisfying accreditation. When surveyed, all respondents agreed or strongly agreed that using data is important for the accreditation process. In addition, all respondents described a number of practices used in their programs for accreditation purposes.
Finally, program administrators attribute improved student feedback to thematic findings from their data. Throughout all of the interviews, respondents described a number of program changes that increased positive student feedback.
The assumed motivation causes as well as triangulated findings are listed in Table
7.
Table 7
Summary of the Assumed Motivation Causes and Validation Findings

Utility: Administrators value the use of data to improve programs.
Result: Validated. 88% of respondents agreed or strongly agreed that the use of data has resulted in program improvements.

Utility: Administrators have an extrinsic interest in student feedback as it helps improve program decisions.
Result: Validated. 100% of respondents described different improvements generated from student feedback within their programs.

Attainment: Administrators value learning analytics, or the use of data, to meet accreditation standards.
Result: Validated. 100% of respondents agreed or strongly agreed that data use is important for accreditation; all of the respondents described practices around data used for accreditation.

Attribution: Administrators attribute improved student feedback to thematic findings from their own data.
Result: Validated. 100% of respondents described program modifications that improved student feedback.
Results and Findings for Organization Causes
The assumed assets were delineated from organizational theories and cultural value beliefs (Rueda, 2009) to understand which organizational factors facilitated or inhibited the use of data for continuous improvement.
Faculty Enjoy Data-driven Programming
It was assumed that there is a positive and open attitude toward data use throughout the programs. To assess the organizational value faculty place on data-driven programming, the respondents rated the following statement on a Likert scale: Our faculty members, generally, are open to the idea of using data for continuous improvement.
The results indicate that all of the respondents agreed or strongly agreed with this statement: 63% of respondents agreed that faculty members are open to the use of data for continuous improvement, and 37% strongly agreed. As a result, this assumed cause was validated. The results are displayed in Figure 12.
Figure 12. Responses to survey question 5: Our faculty members, generally, are open to the idea of using data for continuous improvement. The number of respondents disagreeing/agreeing with this statement is represented on the vertical axis.
Culture of Highly Motivated Program Administrators
It was assumed that there is a culture of highly motivated administrators advocating for faculty to meet specific goals. To validate this assumed organizational cause, participants were asked whether they use data to establish and track goals and to describe that process.
According to the transcribed interview data, all of the respondents described a number of goals advocated for faculty. One respondent noted: "the data tells us the true story, not the story we hope we are telling." Four respondents described goals for specific areas of growth, improving pedagogy, and designated administrator goals. In addition, three respondents discussed goals around student competency and learning. One administrator described their goal as "students achieving competencies in different practice areas throughout the program." Thus, this assumed organizational asset has been validated. The results are displayed in Figure 13.
Figure 13. Responses to interview question 8: Do you use data to establish and track goals? Describe that process, such as how you use data to set goals, how you communicate goals, and how you measure progress. The faculty-advocated goals participants described in the interview are listed on the vertical axis.
Synthesis of Results and Findings for Organization Causes
Through the triangulation of the surveys and interviews given to program administrators of online programs, the results validated the organizational assets from the research literature regarding the use of data for continuous improvement of their online programs.
There is a positive attitude toward the use of data throughout the programs. All of the respondents agreed or strongly agreed that faculty members are open to using data for continuous improvement.
There is a culture of highly motivated administrators advocating for faculty members to meet specific goals throughout the programs. Throughout the interviews, respondents described a number of different goals that were designed for faculty.
The assumed organization causes and triangulated findings are listed in Table 8.
Table 8
Summary of the Assumed Organization Causes and Validation Findings

Cultural Model: There is a positive attitude of a data-driven culture throughout the program.
Result: Validated. 100% of respondents agreed or strongly agreed that faculty members are happy with and open to data for continuous improvement.

Cultural Setting: There is a culture of highly motivated administrators advocating for faculty to meet specific goals.
Result: Validated. 100% of respondents described different goals being tracked for faculty within their programs.
Summary
Triangulation of the survey results and interview findings of the program
administrators validated the assumed knowledge, motivation, and organizational causes
that contribute to the use of data for continuous improvement. Through closer
examination, four knowledge causes were validated, four motivational causes were
validated, and two organizational causes were validated as well.
The assumed knowledge assets validated factual, conceptual, procedural, and metacognitive causes. Program administrators know that data evaluation can improve programs. Administrators know the different types of data. Administrators also know how to analyze and evaluate data sets. Finally, administrators have strategic plans in place to ensure there is ample time to monitor and evaluate data throughout the school year.
The assumed motivational assets validated two utility causes, one attainment cause, and one attribution cause. Program administrators value the use of data to improve their programs. Administrators have an extrinsic interest in student feedback as it helps them improve decisions within their programs. Program administrators also value the use of learning analytics, or the use of data, to meet accreditation requirements. Finally, administrators attribute improved student feedback to various program improvements made through data.
The organizational assets validated one cultural model cause and one cultural setting cause. There is generally a positive attitude toward data-driven programming throughout the programs. In addition, program administrators are highly motivated and advocate for faculty to meet specific goals.
CHAPTER FIVE: SOLUTIONS, IMPLEMENTATION, AND EVALUATION
The purpose of this study was to explore the factors that contribute to the use of data for continuous improvement in online graduate school programs. This paper utilized the Gap Analysis model to propose and validate assumed factors within the knowledge, motivation, and organizational dimensions (Clark & Estes, 2008), according to the college administrators responsible for the use of data. After triangulating surveys and interviews, a total of ten assets were validated. This chapter offers recommendations, based on the validated assets and the research literature, for administrators of programs looking to strengthen their use of data. The next section discusses the order and rationale of the assets. Following that are the recommended solutions, an implementation plan, and an evaluation plan based on the Kirkpatrick (1998) model. The chapter concludes with a discussion of the limitations of the study and of future research in the field.
Validated Causes Selection and Rationale
In Chapter 4, a total of ten assumed causes were validated. The validated causes stem from knowledge and skills, motivational factors, and organizational factors. To answer the question of which factors online program administrators perceive as facilitating or inhibiting the use of student feedback and analytic data for continuous improvement in these institutions, the rest of the paper presents the validated causes and related recommended practices that could be adapted by other program administrators across online programs to effectively use data for continuous improvement. A list of validated causes is displayed in Table 9.
Table 9
Validated Causes Summary Table

Motivation
Utility: Administrators value the use of data to improve programs.
Utility: Administrators have an extrinsic interest in student feedback.
Attainment: Administrators value learning analytics to meet accreditation mandates.
Attribution: Administrators attribute improved student findings to program improvements due to data analysis.

Organization
Cultural Model: Faculty members and administrators have a positive attitude around the use of data.
Cultural Setting: Highly motivated administrators advocate for faculty to reach goals.

Knowledge
Factual: Administrators know that data evaluation can improve programs.
Conceptual: Administrators know the different types of data that contribute to continuous improvement.
Procedural: Administrators know how to code data for thematic findings.
Metacognitive: Administrators have strategies in place to evaluate and assess data for program improvement.
Solutions for Motivation Causes
Clark and Estes (1998) believe that the prioritization of validated factors is
dependent on the needs of the organization. By first addressing motivational factors,
program administrators can ensure that the use of data for continuous improvement is
valued and personnel are willing to collect and analyze data to improve their programs
(Rueda, 2009). The triangulation of the surveys and interviews validated the assumed
motivational factors that contribute to the use of data for continuous improvement.
Program administrators value the use of data to improve programs. Administrators value
the collection and analysis of student feedback within their programs. In addition, the
administrators value data because it satisfies the mandates of accreditation agencies.
Finally, program administrators attribute program improvements to the use of student
feedback. Through a review of the literature, these motivational factors are discussed
below.
Value the Use of Data
For online education programs to use data for continuous improvement, it is important to ensure that the program administrators value the use of data in their programs. This motivational factor stems from intrinsic value: the belief that higher levels of interest in a particular task will motivate people and enhance performance (Pintrich, 2003). The value of using data to improve programs for learning is common throughout higher education; thus, it is important to foster and drive the interest in using data in order to deepen and broaden the learning experience within the program (Alderman, Towers, & Bannah, 2012; Siemens, 2013).
Generating intrinsic value in using data is possible through creating the
connection between program goals and program administrators’ interests (Clark & Estes,
2008). It is recommended that program leaders connect with the director of institutional
effectiveness, or another individual responsible for data collection at the university, to
gauge the interests of program administrators and discuss their interests with respect to
using data for continuous improvement within their programs. Bridging these
connections increases intrinsic value as it highlights the idea that collecting and using
data is vital for systematic change and growth within programs (McCuddy, Pinar, &
Gingerich, 2008). Discourse with the appropriate stakeholder will ensure that data is valued and lead to increased evaluation (Cheawjindakarn, Suwannatthachote, & Theeraroungchaisri, 2012). As a result, program administrators' association between personal interests and the value of data will enhance the likelihood of contributing to the use of data for continuous improvement throughout their online programs.
Extrinsic Interest in Student Feedback
This solution focuses on the development of utility value for student feedback.
Program administrators demonstrated a shared value in the collection and analysis of
student feedback across the partnership schools. Utility value is the belief that people are more likely to increase commitment to a task because of the benefits associated with completing it (Clark & Estes, 2008). Thus, if administrators learn about the detailed benefits of effectively collecting and analyzing student feedback, then they will successfully contribute to continuous improvement within their programs.
It is recommended that administrators sit down with the director of institutional
effectiveness and faculty members to find common ground to discuss the mission of the
school and how it aligns to using student feedback as a type of data to drive continuous
improvement (Burnette, 2015). For this solution to translate into other programs, it is
important for program administrators to also understand the realistic benefits of
collecting and analyzing student feedback (Clark & Estes, 2008). Simply describing the
benefits to administrators has been shown to increase the likelihood that they collect and
analyze student feedback (Barker & Pinard, 2014). Understanding the importance of
student feedback has contributed to the development of teaching and learning
frameworks within other higher education programs for measurement and evaluation
(Alderman, Towers, & Bannah, 2012). In addition, in understanding the benefits of
student feedback, other program administrators have adopted new skills through methods
of incorporating student feedback (Bonnel, 2008). Furthermore, describing the benefits of
utilizing student feedback will increase the likelihood that administrators complete the
task altogether.
Data for Accreditation Standards
Attainment value refers to the importance one attaches to performing a task
(Rueda, 2011). The attachment to values of importance can lead people to adopt the
action and persist regardless of the distractors (Clark & Estes, 2008). Although program
administrators valued the use of data for its importance in making sure they satisfied the
mandates of professional and regional accrediting agencies, some programs went beyond
the minimum requirements to find other uses for data.
Administrators at other universities can develop attainment value in data for
accreditation through articulating a vision with academic deans, provosts, and faculty
members to discuss the importance of using data for accreditation (Burnette, 2015).
Administrators can also go beyond the minimum requirements once they have identified the data they specifically need to improve their programs. After articulating the vision, it is
then important for administrators to continue to share the vision at meetings and staff
check-ins throughout the school year. It is possible to transfer this solution to other online
higher education institutions through the adoption of this focused vision discourse
between the provost and program administrators on using data for continuous
improvement throughout programs (Rueda, 2011). The focus of such discourse
throughout programs has helped administrators become aware of what accreditation
concerns should be supported within the institutions (McClintock & Benoit, 2014).
Hence, a continued dedication to sharing the vision will increase the attainment value
among program administrators. In addition, placing an importance on using data for
accreditation has moved administrators beyond the notion of simply using student
evaluations as a component to satisfy accreditors (Jacek, 2013). Effective
implementation of this solution will meet the requirements of accreditation and sustain
the use of data for continuous improvement.
Attributions of Improved Student Feedback
The administrators in this study attributed improved student feedback to program
modifications. This factor speaks to attributions and control beliefs. Attribution theory refers to the beliefs one has about success or failure in a task and one's ability to control the outcome of that particular task (Rueda, 2011). In this sense, the program
administrators believed the improved student feedback ratings were reflective of the
changes that they had requested and modified within their program through their
collection and analysis of data.
To develop attributions towards effort, the director of institutional effectiveness
should connect with lead instructional designers and program administrators to
understand the use of learning analytics, and other course data, toward continuous
improvement (Burnette, 2015). The director of institutional effectiveness could then
provide feedback for administrators on their use of learning analytics for program
modifications and encourage them to continually use this type of data throughout the
school year. For others to increase the likelihood that this occurs, it is important to
attribute positive student feedback to the efforts and strategies practiced in the collection
and analysis of data (Clark & Estes, 2008). Focusing on the efforts that administrators put forward will increase the chance of producing the desired outcomes (Rueda, 2011).
Attributions of purposeful approaches to feedback have resulted in new learning
opportunities in programs (Bonnel, 2008). Promoting the strategies to use student
feedback has also been shown to serve as a catalyst to facilitate continuous improvement
(Alderman et al., 2012). As a result, developing a mindset to attribute improved student
feedback to strategies of data collection and analysis will contribute to continuous
improvement.
Solutions for Organization Causes
The triangulation of the surveys and interviews validated the assumed
organizational causes that contribute to the use of data for continuous improvement.
Program administrators have established a cultural model around positive attitudes of a
data driven culture. Administrators are also highly motivated and have created a cultural
setting for faculty members to reach targeted goals within their program. Through a
review of the literature, these validated assets are discussed below.
Cultural Model
The administrators across the partnership schools validated that a positive attitude
around using data throughout the culture of the program increases the likelihood for
effective data use. Cultural models refer to beliefs and values that are generally invisible,
often involving values that are relative (Rueda, 2011). In this study, the administrators
believed that faculty members were open to using data to improve their work and felt
positive about the process of continuous improvement. Despite having different missions across the partnership schools, all of the programs had a clear vision, goals, and an effective way to measure progress, thus leading to a culture with positive beliefs around using data (Clark & Estes, 2008).
For other programs, developing a positive attitude around the use of data requires building trust with various stakeholders. Administrators could build trust by meeting with
faculty members and the director of institutional effectiveness to have an open dialogue
to address concerns and answer questions about using data for continuous improvement
(Burnette, 2015). Successful transfer of this solution to other programs could also utilize a mission statement aligned to the progress of goals through the program vision
(Clark & Estes, 2008). It is important for program administrators to first define their
goals in order to begin creating a culture that has positive values around use of data
(Sanders, 2008). An established tracking system to monitor clear goals will contribute to
the success of online programs (Hirner & Kochtanek, 2012). Hence, motivation is
increased as the goals continue to progress aligned to the vision and mission (McCuddy
et al., 2008). As a result, following and maintaining these standards will ensure an
effective and successful implementation of an online program (Nsiah, 2013).
Cultural Setting
The program administrators in this study validated the organizational asset that
there is a culture of highly motivated administrators advocating for their faculty members
to reach targeted goals. This solution is grounded in the organizational theory of cultural
settings. Unlike cultural models, cultural settings are visible and speak to the physical
settings where the values and beliefs play out (Rueda, 2011). Furthermore, the
administrators in this study discussed a number of different ways they reached out to
faculty to communicate expectations and update progress towards goals. These efforts build trust and help faculty members meet their goals (Clark & Estes, 2008).
For this solution to transfer to other programs, it is important for administrators to develop a system to communicate constantly and candidly with faculty members throughout their program (Clark & Estes, 2008). Administrators could communicate through weekly emails and monthly check-ins. Constant communication would help faculty feel more comfortable and empower them to reach targeted goals (Burnette, 2015). Programs must
ensure that they prioritize standard communication systems to ensure growth and
maintain quality (Sanders, 2008). Programs must also promote this communication to
increase organizational performance and effectiveness (Leithwood & Jantzi, 2008). As a result, open communication will contribute to and build a cultural climate that is productive and healthy (Lindahl & Beach, 2013).
Solutions for Knowledge Causes
The triangulation of the surveys and interviews validated the assumed knowledge
causes that contribute to the use of data for continuous improvement. Program
administrators know that data evaluation can improve programs. Administrators know the different types of data that contribute to student learning. Procedurally,
administrators know how to code data to find relevant themes. Finally, program
administrators have the metacognitive knowledge to plan for periods of evaluating data.
Through a review of the literature, these validated assets are discussed below.
Factual
Administrators in this study demonstrated the factual knowledge of knowing that
data evaluation can improve their programs. Factual knowledge refers to facts basic to
specific disciplines and includes things that one must be familiar with in order to
understand and function (Rueda, 2011). In this case, the administrators across the different programs knew that collecting and analyzing data would contribute to continuous improvement. This solution speaks to their possession of informational knowledge and the skill enhancement that this knowledge allows within their programs (Clark & Estes, 2008).
For other program administrators to replicate this solution, it is vital that they communicate the factual information about data use and continuous improvement (Clark &
Estes, 2008). This is possible in a number of different ways. Administrators can schedule
sessions in the beginning of the semester to ensure program personnel have the factual
information of using data to drive the program towards continuous improvement
(McClintock & Benoit, 2014). During onboarding for new hires, the relevant factual
knowledge needed for program success can be embedded throughout the process (Hirner
& Kochtanek, 2012). Altogether, simply telling people the information they need to know
will help them in their jobs to succeed on their own and increase the likelihood they
contribute to the use of data for continuous improvement (Clark & Estes, 2008).
Conceptual
Administrators validated the asset of conceptual knowledge: different types of
data contribute to improvement. Conceptual knowledge refers to knowledge of
categories, classifications, or theories pertaining to a given area (Rueda, 2011). In this
study, the program administrators demonstrated their conceptual knowledge of the
different types of data they collect to improve their programs. Knowing the different
types of data enhanced the administrators' ability to utilize data for continuous
improvement (Clark & Estes, 2008).
Other administrators can replicate this solution through the creation and
dissemination of job aids or resources (Clark & Estes, 2008). A job aid, or resource, that
summarizes the different types of data to generate intended outcomes would ensure
administrators have the resources to deal with relevant issues (Southard & Mooney,
2015). This resource would also support administrators to make sure they are adequately
supported for success in managing the program (Nworie, Haughton, & Oprandi, 2012).
Failure to provide these resources will impact administrators' ability to reach program goals (Cheawjindakarn et al., 2012). A well-designed job aid or resource will ensure
administrators contribute to the use of data for continuous improvement (Clark & Estes,
2008).
Procedural
Through triangulation of the interviews and surveys, administrators demonstrated that they know how to create themes from the data they collect. This skill reflects procedural knowledge, which refers to the ability to do something (Rueda, 2011). In this study, the administrators knew how to collect data and
code it for findings within their programs. This solution is important because it is a
product of high impact training that is directly transferred into the workplace (Clark & Estes, 2008). Knowing how to analyze data requires the knowledge of a specific skill (Rueda, 2011).
Other administrators can employ this solution through participating in a training
to acquire the skills of collecting and analyzing data to find themes (Clark & Estes,
2008). Knowledge of how to code data will ensure that the online education program
continues to grow and develop (Nworie et al., 2012). This type of specified training will
also contribute to quality maintenance of programs and progress toward institutional
benchmarks (Southard & Mooney, 2015). The training will enable administrators to
increase their efficiency to support the learning and evaluation of their programs, too
(Cheawjindakarn et al., 2012). Furthermore, the training will allow program
administrators to know how to code data to drive continuous improvement.
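As a concrete illustration of this skill, a minimal sketch of a first-pass, keyword-based coding of open-ended feedback into themes is shown below. The codebook keywords, the sample comments, and the code_comments helper are hypothetical assumptions for illustration; in practice, administrators may use qualitative analysis software, and coding is iterative and interpretive rather than purely keyword-driven.

```python
# Hypothetical sketch: a first-pass, keyword-based coding of open-ended
# student feedback into themes. Real qualitative coding is iterative and
# interpretive; this only illustrates the mechanics of tagging and tallying.
from collections import Counter

# Assumed theme-to-keyword mapping; a real codebook would be developed
# inductively from the data itself.
CODEBOOK = {
    "instruction": ["lecture", "instructor", "teaching"],
    "technology": ["platform", "video", "login"],
    "workload": ["assignments", "deadline", "pace"],
}

def code_comments(comments):
    """Tag each comment with every theme whose keywords it mentions."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in CODEBOOK.items():
            if any(word in text for word in keywords):
                counts[theme] += 1
    return counts

# Example: assumed survey comments.
feedback = [
    "The instructor's lectures were clear.",
    "The video platform kept crashing at login.",
    "Too many assignments with the same deadline.",
]
print(code_comments(feedback).most_common())
```

A tally like this can surface candidate themes that administrators then refine through closer reading of the underlying comments.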
Metacognitive
Administrators demonstrated that they have strategic plans in place to monitor
and evaluate their use of data throughout the school year. This solution is grounded in
metacognitive knowledge. Metacognition refers to the ability to think about thinking; that is, becoming aware of one's own thinking processes (Rueda, 2011). In this study, the
administrators made sure to calendar specific dates in order to meet and discuss the issues
around collecting and analyzing their data. In this sense, the program administrators
across the partnership schools had the metacognitive knowledge to anticipate and
potentially solve future challenges in using data for continuous improvement (Clark &
Estes, 2008).
This solution is transferable to other program administrators through an
educational program that teaches the conceptual, theoretical, and strategic knowledge of
how to evaluate the use of data for continuous improvement (Clark & Estes, 2008). An
educational program that teaches the ability to adopt strategic planning will extend administrators' abilities to lead and manage growth more effectively (Nworie et al., 2012). The acquisition of this metacognitive knowledge will also enhance administrators' ability to facilitate strategic planning and achieve success within the program (Cheawjindakarn et al., 2012). Altogether, the completion of an educational program will build administrators' capacity to strategize and solve problems around the use of data toward continuous improvement (Clark & Estes, 2008).
Implementation Plan
Research indicates that training by itself, or information delivered in isolation, will simply not work (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). For administrators
to contribute to the use of the data for continuous improvement, it is essential to follow
an integrated plan that combines all of the solutions within the knowledge, motivation,
and organization dimensions.
Program administrators could begin by attending an educational program to learn
about best practices and develop metacognitive abilities to strategize for effective
evaluation periods throughout the school year. The educational program could
demonstrate the benefits of being more proactive and making data use needs known early
in the program. Upon completion of the program, administrators should then gather with the provost and director of institutional effectiveness to create data-driven objectives aligned to program goals and the school's mission, build a system to monitor these goals, develop a communication system, and set up the next professional development on how to code data from within their programs. Administrators should also establish the expectation that regular planning will take place, with accountability steps to ensure these meetings occur. Data-driven objectives aligned to program goals, a cadence tracker, a communication system, and expectations of planning periods will prepare administrators to move forward with the rest of the components of the integrated solution.
Prior to the orientation sessions, it is important for administrators to meet with the
director of institutional effectiveness to connect their interests to program goals. Program
administrators should then host an orientation session for new hires and returning
employees alike. The orientation will focus on the new data driven objectives for the
program, a cadence tracker to monitor goals, and a communication system. In addition, all
of the orientation information will live on an internal staff website for employees to
utilize and perform effectively in the program throughout the school year. Overall, this
orientation will align stakeholders to drive program improvement.
After the orientation, administrators will begin to email weekly newsletters to all of the employees. The weekly newsletter will highlight and praise the efforts of employees using data for continuous improvement. The newsletter will also detail the latest research on the benefits of using data throughout programs. Weekly dissemination of information will also inform employees of goal progress and upcoming training opportunities.
The details of the integrated solution are displayed in Table 10.
Table 10

Summary of Validated Assets, Solutions, and Solution Implementation

Assets
Knowledge and skills: knowing data improves outcomes; knowing different types of data; knowing how to create themes from data; knowing when to evaluate and plan for data.
Motivation: value the use of data; extrinsic interest in student feedback; attainment of data for accreditation; attribution of improvements within program.
Organization: positive attitude around data; highly motivated administrators advocating for faculty.

Solutions
Knowledge and skills: information; job aids; training; educational program.
Motivation: director of institutional effectiveness to bridge the connection between administrators' interests and program goals; administrators to describe the benefits of collecting student feedback to faculty members and staff; college leaders and administrators to engage in discourse on attainment; administrators to promote strong beliefs and values around effort.
Organization: align mission statement to goals with clear vision; develop a standardized communication system.

Implementation
Knowledge and skills: onboarding information and weekly newsletters to disseminate the latest information; provide administrators with resources on different data sets; offer professional development workshops to train administrators on how to collect data; send administrators to educational programs to learn and apply strategic planning.
Motivation: discuss interests with director of institutional effectiveness; provide administrators with research on benefits of student feedback; reinforce accreditation standards on using data for continuous improvement; praise the efforts of analysis to encourage personnel.
Organization: modify the mission of the program to reflect clear goals with trackers for the year; communicate weekly with staff and check in individually once a month.
Evaluation Plan
After implementation of the integrated solution, an evaluation is needed to determine the effectiveness and impact towards using data for continuous improvement (Clark & Estes, 2008). Using Kirkpatrick's (1996) evaluation model, the rest of this section describes the four levels of evaluation needed to fully determine the effectiveness of the integrated solution.
Level 1: Reactions
This first level is to check the reactions of participants towards the performance
program (Clark & Estes, 2008). For our integrated solution, we have two sets of Level 1
reaction questions. The first set is for the administrators upon completion of their training.
The second set is for the faculty members upon completion of their workshops. Both sets,
however, will gauge the reaction of the training and workshop to better understand the
motivation and value interest for administrators and faculty. These questions will be
administered through a survey.
Both surveys would have open-ended and Likert-scale questions. An open-ended question would ask, “What would you change about the workshop/training, and how would you change it, to make it more effective for you and for others?” (Clark & Estes, 2008). A sample Likert-scale question would ask participants to rate the following item on a scale from far below average to far above average: “Overall, how much did you enjoy this training/workshop?” Altogether, the results of both surveys will indicate the motivational impact of the training and workshop (Clark & Estes, 2008).
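As a concrete illustration, the sketch below shows one way the Likert-scale reactions might be tallied once collected. The five-point scale mapping, the sample ratings, and the summarize_reactions helper are hypothetical assumptions; the study does not prescribe any particular analysis tool.

```python
# Hypothetical sketch: summarizing Level 1 reaction ratings on a
# five-point scale from "far below average" to "far above average".
from statistics import mean

SCALE = {
    "far below average": 1,
    "below average": 2,
    "average": 3,
    "above average": 4,
    "far above average": 5,
}

def summarize_reactions(responses):
    """Return the mean rating and the share of favorable responses (4 or 5)."""
    scores = [SCALE[r.lower()] for r in responses]
    favorable = sum(1 for s in scores if s >= 4) / len(scores)
    return mean(scores), favorable

# Example: assumed ratings collected after the administrators' training.
ratings = ["above average", "far above average", "average", "above average"]
avg, share = summarize_reactions(ratings)
print(f"Mean rating: {avg:.2f}; favorable: {share:.0%}")
```

A mean rating paired with the share of favorable responses gives a quick read on whether the training or workshop was well received before deeper analysis of the open-ended comments.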
Level 2: Learning
The second level of evaluation will check the learning, motivation, and organizational change impact of the training and workshop (Clark & Estes, 2008). This level is
important to truly understand if the solution resulted in any changes (Rueda, 2011). In
order to measure learning, it is best to utilize a direct assessment to determine what
learning has occurred (Clark & Estes, 2008).
For administrators and faculty members, this would require the creation and utilization of an assessment to gauge the learning acquired through the training and workshop. The assessment for the administrators would assess whether they coded the data according to the training guidelines. This assessment would be administered before and
after the data analysis to measure if they learned the appropriate skills in the training. The
faculty members would be assessed on the new information relative to the program and
targeted goals learned in their workshop. This assessment would be administered pre and
post workshop to measure their learning (Kirkpatrick, 1996).
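To make the pre/post comparison concrete, the sketch below computes raw and normalized learning gains from paired assessment scores. The participant labels, the score values, and the choice of a normalized-gain summary are illustrative assumptions rather than part of the study's design.

```python
# Hypothetical sketch: measuring Level 2 learning with pre/post assessment
# scores. Gain is reported both as a raw difference and as a normalized
# gain, (post - pre) / (max_score - pre), a common pre/post summary.
def learning_gains(pre, post, max_score=100):
    """Return raw and normalized gains for paired pre/post scores."""
    gains = {}
    for participant in pre:
        raw = post[participant] - pre[participant]
        headroom = max_score - pre[participant]
        normalized = raw / headroom if headroom > 0 else 0.0
        gains[participant] = (raw, normalized)
    return gains

# Example: assumed scores for three workshop participants.
pre_scores = {"A": 55, "B": 70, "C": 62}
post_scores = {"A": 80, "B": 85, "C": 78}
for who, (raw, norm) in learning_gains(pre_scores, post_scores).items():
    print(f"{who}: raw gain {raw}, normalized gain {norm:.2f}")
```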
Level 3: Transfer
The third level of evaluation focuses on transfer. This level measures the effectiveness and implementation of the training and workshop in the actual work setting (Rueda, 2011). In other words, did the learning from the training and workshop transfer into the program (Clark & Estes, 2008)? Measuring the transfer of skills from the training and workshop will require ongoing evaluation and observation (Kirkpatrick, 1996). This would look different for administrators and the faculty members.
To measure the transfer of skills for the administrators, it is important to survey
and observe them a few months after the training. The triangulation of surveys and
observations will determine if the administrators effectively transferred the skills learned
from the training into their program. As for the faculty members, it is important to survey their acquired learning from the workshop and observe them in the program a few months after the initial opening session. Their survey, after a few months' time, will indicate the success of the transferred learning in their workplace.
Level 4: Impact
The fourth level of evaluation focuses on impact. This level measures the impact of
the training and workshop on the program as a whole (Rueda, 2011). This level would measure whether the training and workshop contributed to the use of data for continuous improvement. In other words, it provides the bottom-line impact of the training and workshop (Clark & Estes, 2008). This measurement will take place at the end of the year to determine how data was used to contribute to the program's improvement.
To measure the overall impact, administrators should look at course and program
evaluations to determine their impact in coding data to improve the program throughout
the school year. Administrators should also examine their scorecard on goals to
determine overall effectiveness of their training. To measure the workshop,
administrators can survey faculty and analyze student feedback to determine the impact
of growth throughout the school year. These measurements will indicate the program
success and areas for improvement (Kirkpatrick, 1996).
Limitations and Delimitations
This next section will detail the limitations and delimitations in this study.
Limitations
As with all studies, there are a few limitations. First, these findings are based on a set of high performing graduate schools. These institutions have well-established infrastructure and cultures and have only recently adopted online programs. Thus, the extent to which these solutions may transfer to other programs could vary with the infrastructure and capital of a given institution. Second, access to learning analytics in each of the
programs was limited. The details of the learning analytics could have revealed additional
information beyond the perceptions of the administrators running each partnership
school. This data could have led to validated assets and generated new assets in the
knowledge, motivation, and organization dimensions. Finally, as this is a promising
practice study, there was no discussion on the problems or gaps within each partnership
school. It is important to consider these points in the broad application of solutions to all
institutions with online programs. The sample chosen for this promising practice study reflects the administrators of high performing institutions. Other schools may not have the ability to replicate the practices at these institutions for many reasons. In addition, the survey items may not reflect honest answers from the program administrators. The interview responses may also not have been candid and true reflections of the administrators' views on the use of data within their programs (Clark & Estes, 2008).
Delimitations
This study is delimited to its specific context: a partnership of schools utilizing a common vendor to provide courses online. The study is also content specific to the high performing institutions' missions and strategic plans. Finally, the study is delimited to the experiences of these program administrators, which may not reflect those of program administrators at other universities.
Future Research
This study prompts a need for future research into the use of data for continuous
improvement. As technology advances, data collection will continue to evolve and
manifest in different ways. However, not all data is the same. There is a need to
understand the methods of data collection and analysis with the ongoing development of
programs. As programs evolve, the protocols for data collection continue to grow as well.
Additional research could explore the development of data protocols to measure and
monitor the impact towards program improvement. Identifying such protocol methods in
early phases of data collection would benefit all stakeholders within the program. Finally,
there is a need to explore the integrated solution proposed in this paper. Learning from
this integrated solution could certainly help other program administrators in their ability
to use data for continually improving their online education programs.
As technology continues to evolve, additional studies will need to examine the
impact of using data for continuous improvement. The learning principles, motivational
theories, and organizational structures within each of these institutions contributed to the
continual performance and growth of programs within each school. However, further
research is needed to identify which types of data contribute to decision-making
processes that drive actionable change within programs. All of the program
administrators shared various types of data that are collected within their programs. As
these programs continue to grow, research into which types of data were prioritized and
valued the most could assist other online program administrators in the utilization of
data for continuous improvement.
The program administrators in this study shared a number of similarities across
the various programs. However, the approach to collecting and using data differed across
the sites. As some programs began to figure out what type of data to collect, others
sought specific types of data for targeted areas of program improvement. Future
research should examine the impact of developing data collection protocol in the early
stages of online programs. The research on data collection protocol could help strengthen
infrastructure towards sustainability for program administrators of other schools across
the country.
There is also a need to understand the effectiveness of the implementation of data
collection methods. As the online education world continues to discover new types of
data, the question remains as to how to effectively maximize the use of this data for
program improvement. Future research could examine the impact of aligning data collection methods with school initiatives to determine whether a particular data collection method is vital for program growth. Identifying the data collection method that best suits the needs of the program will take time; however, it will certainly provide the foundation for scalable and sustainable growth.
Conclusion
This promising practice study focused on the factors that contribute to the use of
data for continuous improvement. Through the Gap Analysis Framework, this study
explored knowledge, motivation, and organizational factors that impact data for program
improvement (Clark & Estes, 2008). The triangulation of surveys and interviews
validated all ten assumed factors within the three dimensions. Once validated, a solution was proposed for each asset for other administrators to replicate within their programs, leading to an integrated solution that maximizes and ensures effective implementation of all of the validated assets.
Furthermore, this study enables administrators to improve their use of data for continuous improvement. As administrators across varying institutions begin to realize
that not all data is the same, the hope is that school leaders can use this study to ensure
that data collection is designed for purposeful application towards effective utilization.
As some program administrators duly noted, data collection does not translate into utilization without proper systems to support translating the data into actionable pieces for implementation. Ultimately, this study provides program administrators with the
validated knowledge, motivation, and organizational factors to effectively utilize data for
continuous improvement.
References
Alderman, L., Towers, S., & Bannah, S. (2012). Student feedback systems in higher
education: a focused literature review and environmental scan. Quality in Higher
Education, 18(2), 261–280. doi:10.1080/13538322.2012.730714
Barker, M., & Pinard, M. (2014). Closing the feedback loop: Iterative feedback between tutor and student in coursework assessments. Assessment & Evaluation in Higher Education, 2, 1–17. doi:10.1080/02602938.2013.875985
Beaudoin, M. (2015). Distance education leadership in the context of digital change. The
Quarterly Review of Distance Education, 16(1), 11–24.
Blass, E., & Hayward, P. (2014). Innovation in higher education: Will there be a role for “the academe/university” in 2025? European Journal of Futures Research, 2(3), 41.
doi:10.1007/s40309-014-0041-x
Bonnel, W. (2008). Improving feedback to students in online courses. Nursing Education
Perspectives, 29, 290–294.
Centner, T. J. (2014). Structuring a distance education program to attain student
engagement. Journal of National Association of Colleges and Teachers of
Agriculture, 2(9), 230–235.
Cheawjindakarn, B., Suwannatthachote, P., & Theeraroungchaisri, A. (2012). Critical success factors for online distance learning in higher education: A review of the literature. Creative Education, 3(12), 61–66. doi:10.4236/ce.2012.38b14
Clark, R. E. & Estes, F. (2008). Turning research into results: A guide to selecting the
right performance solutions. Charlotte, NC: Information Age Publishing, Inc.
Desimone, L. M., Smith, T. M., & Phillips, K. J. R. (2013). Linking student achievement
growth to professional development participation and changes in instruction: A
longitudinal study of elementary students and teachers in Title I Schools. Teachers
College Record, 115(5), 46.
Desrochers, D. M., Lenihan, C. M., & Wellman, J. V. (2008). Trends in college spending.
Supported by Lumina Foundation for Education.
Donaldson, M. L. (2013). Principals’ approaches to cultivating teacher effectiveness:
Constraints and opportunities in hiring, assigning, evaluating, and developing
teachers. Educational Administration Quarterly, 49(5), 838–882.
doi:10.1177/0013161X13485961
Dringus, L. (2012). Learning analytics considered harmful. Journal of Asynchronous
Learning Networks, 16(3), 87–100.
Fei, M. M., & Yan, M. (2013). The online examination system of distance education. Applied Mechanics and Materials, 411–414, 2901–2905. doi:10.4028/www.scientific.net/AMM.411-414.2901
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Components, 31(12), 1–119.
Gallimore, R., & Goldenberg, C. (2001). Analyzing cultural models and settings to
connect minority achievement and school improvement research. Educational
Psychologist. doi:10.1207/S15326985EP3601_5
Garcia, S. G., & Jones, D. (2012). An evaluation of instructional coaching at selected middle schools in south Texas and its effect on student achievement. Retrieved from ProQuest Dissertations & Theses Full Text.
Goldfarb, S., & Morrison, G. (2014). Continuous curricular feedback. Academic
Medicine, 89(2), 264–269. doi:10.1097/ACM.0000000000000103
Greller, W., & Drachsler, H. (2012). Translating learning into numbers: A generic
framework for learning analytics. Educational Technology Society, 15(3), 42–57.
doi:http://hdl.handle.net/1820/4506
Harn, B. A., Chard, D. J., & Kame’enui, E. J. (2011). Meeting societies’ increased expectations through responsive instruction: The power and potential of systemwide
approaches. Preventing School Failure, 55(4), 232–239.
doi:10.1080/1045988X.2010.548416
Harvey, L. (2003). Student feedback. Quality in Higher Education, 9(2), 3–20.
doi:10.1080/13538320308164
Hirner, L., & Kochtanek, T. (2012). Quality indicators of online programs. Community
College Journal of Research and Practice, 36(2), 122–130.
doi:10.1080/10668920802628645
Jacek, L. (2013). Why not go online. College and University, 89(2), 12–21.
Jaschik, S., & Lederman, D. (2014). Faculty attitudes on technology.
Karp, M. M., & Fletcher, J. (2014). Adopting New Technologies for Student Success
(Vol. 1).
Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing pedagogical action:
Aligning learning analytics with learning design. American Behavioral Scientist, 57,
1439–1459. doi:10.1177/0002764213479367
Ma, J., Han, X., Yang, J., & Cheng, J. (2015). Examining the necessary condition for
engagement in an online learning environment based on learning analytics approach:
The role of the instructor. The Internet and Higher Education, 24, 26–34.
doi:10.1016/j.iheduc.2014.09.005
McClintock, C., & Benoit, M. (2014). Online Graduate Education.
McCuddy, M. K., Pinar, M., & Gingerich, E. F. R. (2008). Using student feedback in
designing student-focused curricula. International Journal of Educational
Management, 22, 611–637. doi:10.1108/09513540810908548
Nsiah, G. K. B. (2013). Best practices in distance education: A review. Creative
Education, 04(12), 762–766. doi:10.4236/ce.2013.412108
Nworie, J., Haughton, N., & Oprandi, S. (2012). Leadership in distance education:
Qualities and qualifications sought by higher education institutions. American
Journal of Distance Education, 26(3), 180–199.
doi:http://dx.doi.org/10.1080/08923647.2012.696396
Paarlberg, L. E., & Perry, J. L. (2007). Organization Goals, 19, 387–408.
Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in learning and teaching contexts. Journal of Educational Psychology, 95, 667–686. doi:10.1037/0022-0663.95.4.667
Richardson, J. T. E. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387–415. doi:10.1080/02602930500099193
Rowley, J. (2010). Designing student feedback questionnaires. Quality Assurance in
Education, 11(3), 143–149. doi:10.1108/09684880310488454
Rueda, R. (2011). The 3 dimensions of improving student performance. New York:
Teachers College Press.
Sanders, L. S. (2008). Quality through the implementation of a center for excellence.
Distance Learning, 8(1), 37–44.
Scheffel, M., Drachsler, H., Stoyanov, S., & Specht, M. (2014). Quality indicators for
learning analytics. Educational Technology Society, 17, 117–132.
Schein, E. H. (2004). Organizational culture and leadership (3rd ed.). San Francisco, CA: Jossey-Bass.
Siemens, G. (2013). Learning analytics: The emergence of a discipline. American
Behavioral Scientist, 57, 1380–1400. doi:10.1177/0002764213498851
Southard, S., & Mooney, M. (2015). Comparative analysis of distance education quality
assurance standards. Quarterly Review of Distance Education, 16(6), 55–68.
Swan, K. (2001). Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22, 306–331. Retrieved from
http://www.rcet.org/research/publications/interactivity.pdf
Tempelaar, D. T., Rienties, B., & Giesbers, B. (2014). In search for the most informative
data for feedback generation: Learning analytics in a data-rich context. Computers in
Human Behavior, 47, 157–167. doi:10.1016/j.chb.2014.05.038
Western Association of Schools and Colleges’ Accrediting Commission for Community
and Junior Colleges (WASC-ACCJC) (2010). Accreditation reference handbook.
Retrieved from http://www.accjc.org/wpcontent/uploads/2010/09/Accreditation-
Reference-Handbook-August-20101.pdf
Wu, D., & Hiltz, S. R. (2004). Predicting learning from asynchronous online discussions.
Journal of Asynchronous Learning Network, 8, 139–152.
Appendix A
Interview Protocol
Date:
Introduction:
The idea of this study is to find out how program administrators utilize data for online
programs. The questions I am going to ask revolve around the use of data. I am interested
in learning from your experience. The interview will take roughly 30 minutes. You can
stop the interview at any time. If you have any questions or concerns, please do not
hesitate to ask. Your answers will be confidential. The records of this study will be kept
private. In any sort of report we make public we will not include any information that
will make it possible to identify you. If we tape-record the interview, we will destroy the
tape after it has been transcribed, which we anticipate will be within two months of its
taping.
Interview:
To begin, I would like to ask about your experience working with data from the online
programs.
1. What types of data do you collect?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
2. How do you use the different types of data that you collect? What are the different
purposes behind collecting data?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
3. What types of ultimate outcomes do you expect from the use of data in your online
program? Program performance? Student learning? Meeting external expectations? Etc.?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
4. How do you use data to improve instruction specifically? (Possible other follow-up
questions about specific uses if not elicited in #3: Improve program quality?
Accreditation? Etc.)
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
5. Accreditation agencies state that schools must use data for continuous improvement.
How much does the accreditation process impact your data practices?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
6. How do you analyze and code the different types of data to find relevant themes?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
7. Describe the practical processes or routines you have put in place, such as when you
assess and evaluate the data and when and how you share results.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
8. Do you use data to establish and track goals? Describe that process such as how you
use data to set goals and how you communicate goals and measure progress etc.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
9. What is the role of faculty in data collection and use, and how have faculty responded to the use of data?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
10. Has your approach to gathering and using student feedback changed, and if so, how?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
11. What changes have occurred in the program as a result of the use of data? Can you
tell me about an example or two?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
12. Is there anything I have not asked you that will help me understand your experience
with using data for continuous improvement?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Thank you!
Appendix B
Survey
Welcome! My name is Adam Ortiz and I am conducting a doctoral dissertation study in the Rossier School of Education at the University of Southern California. Your time is greatly appreciated and your input is extremely important to this study. The information you provide will only be used for the purposes of this study and all participants will remain anonymous. This survey is composed of Likert-scale and open-ended questions and should take approximately 15-20 minutes to complete.
Answer the following questions by selecting the most appropriate answer
1. The collection and use of data helps our online program perform well.
Strongly Disagree Disagree Neutral Agree Strongly Agree
2. As an administrator, I find that data use is valuable for
a. helping faculty improve instruction
b. meeting accreditation standards
c. contributing to student learning
d. measuring student engagement
e. other:_____________________________________
3. Our collection and use of data has resulted in practical changes
Strongly Disagree Disagree Neutral Agree Strongly Agree
If you answered "strongly disagree," "disagree," or "neutral" to question 3, please
say a few words explaining why you believe your use of data has not resulted in a
practical change, and how could that use could be improved?
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
If you answered "agree" or "strongly agree" to question 3, could you provide an
example or two of any changes that have resulted?
_____________________________________________________________________
_____________________________________________________________________
Using data is important beyond accreditation purposes
Strongly Disagree Disagree Neutral Agree Strongly Agree
4. Our faculty members, generally, are open to the idea of using data for continuous
improvement
Strongly Disagree Disagree Neutral Agree Strongly Agree
5. Our faculty members, generally, look forward to targeted feedback from data
Strongly Disagree Disagree Neutral Agree Strongly Agree
6. There are established mechanisms in place in our program to collect the data we
need
Strongly Disagree Disagree Neutral Agree Strongly Agree
7. There are regularly planned periods to assess the data we collect
Strongly Disagree Disagree Neutral Agree Strongly Agree
8. We have strategies in place to communicate findings from our analysis of the
data
Strongly Disagree Disagree Neutral Agree Strongly Agree
9. Our school values the process of using data for continuous improvement
Strongly Disagree Disagree Neutral Agree Strongly Agree
10. If you have great examples of your use of data for continuous improvement in
your online degree program supported by 2U, please feel free to add those here.
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
11. What is the most challenging aspect of using data for continuous program
improvement in your online degree program supported by 2U? If you can think of
more than one challenge, please feel free to share any and all of those challenges
here.
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Abstract
The purpose of this study was to explore administrators’ perspectives of the barriers to and assets needed to effectively use data for continuous improvement in online graduate programs. To explore potential barriers and assets needed, this study utilized the Gap Analysis Framework to validate assumed knowledge, motivation, and organizational factors that contribute to the use of data for continuous improvement (Clark & Estes, 2008). This study found that administrators strategically assess data, value different types of data, empower a culture of using data, and work to improve programs using data on a continual basis. The study concludes with implications for other administrators to replicate for effective use of data for continuous improvement within their programs.