Intelligent Learning Quotient
by
Yelena Mammadova
Rossier School of Education
University of Southern California
A dissertation submitted to the faculty
in partial fulfillment of the requirements for the degree of
Doctor of Education
May 2022
© Copyright by Yelena Mammadova 2022
All Rights Reserved
The Committee for Yelena Mammadova certifies the approval of this Dissertation
Monique Datta
Eric Canny
Helena Seli, Committee Chair
Rossier School of Education
University of Southern California
2022
Abstract
Artificial intelligence (AI) and its recent advancements in machine learning, natural language processing, and robotics are changing workforce skill requirements. These changes require upskilling and reskilling of the global workforce at an unprecedented pace and scale. However,
traditional training programs to develop skills produce low rates of learning transfer to the
workplace. This study sought to understand how advanced learning technologies could address
the problem of learning transfer. Specifically, this paper explored person, training, and
environment factors that influence transfer of learning to the workplace after completion of the
AI-enabled adaptive learning program. Holton et al.'s (2000) Learning Transfer System
Inventory was used as the conceptual and methodological framework for the study along with
Bandura’s (1986) social cognitive theory as a theoretical model. A mixed methods approach was
used to collect and analyze data. Participants of the AI-enabled adaptive learning program and the traditional learning program took a survey to report their perceptions of individual-, training-, and environment-related factors that facilitated or hindered transfer of learning after completion of
the program. Eight participants of the AI-enabled adaptive learning program participated in a
follow-up interview to explore factors facilitating or hindering transfer of learning. The data
suggested that learners perceived the AI-enabled adaptive learning program as a more effective solution for achieving desired learning outcomes. The participants also reported on how organizational culture and environment supported or hindered learning transfer. Specifically, the
data suggested that there is a gap in organizational practices to support learning transfer to the
workplace. Recommendations for developing a holistic learning ecosystem for introduction and
adoption of the AI-enabled adaptive learning programs were proposed.
Keywords: adaptive learning, AI learning, learning transfer
Dedication
To my wonderful children, Iskandar and Suleyman, may you always be curious, courageous enough to think big, and passionate in reaching your dreams. You were my beacon throughout this journey, as you are in my life.
To my sister, Jamila, thank you for always being my best friend, for your patience, for listening
to me, for our philosophical and simple debates, and unconditional love and support. You’ve
always been my role model.
To JJ, for believing in me like no one else; your love has transformed me as a person and started a wonderful journey of exploring and finding the courage to aspire to my dreams.
Acknowledgements
According to the African philosophy of “ubuntu,” human living begins with the premise
that “I am” only because “we are.” My journey throughout my time at USC has been one of the
great examples where a sense of myself was shaped by relationships with other people. Thank
you to my chair, Dr. Helena Seli, for her selfless dedication and support, for pushing me to bring out the best in me, and for continuous inspiration that turned this journey into one of the most rewarding life experiences. Thank you to my committee member, Dr. Monique Datta, for her encouragement and appreciation of the importance of my research work. Thank you to my committee member, Dr.
Eric Canny, whose words about the impact of the EdD program have been ingrained in my
memory and have proved to be so true.
Thank you to my children Iskandar and Suleyman who were quiet observers of my work;
I hope this time will serve you as an example of true dedication to one’s belief that people have
unlimited potential to learn and grow.
Thank you to my sister Jamila who has been my greatest partner in sharing the life of a
student and celebrating every milestone. Thank you to my family: Zinyat, Mehdi, Vagif, Mom
and Dad, for cherishing my successes from afar.
Thank you to my friend JJ for seeing in me more than I see in myself, for pushing me, challenging me, supporting me, and co-living this and many other experiences.
Thank you to Pete Andrich for being the best leader in so many phases of my life and a great support in restarting my career in the US. Thank you to Judith Luberski, Brian Slezak, Pat Verduin, and Sally Massy for trusting me to contribute to one of the most amazing fields of practice: education.
Table of Contents
Abstract ........................................................................................................................................... iv
Dedication ........................................................................................................................................ v
Acknowledgements ........................................................................................................................ vi
List of Tables ................................................................................................................................... x
List of Figures ................................................................................................................................. xi
List of Abbreviations ..................................................................................................................... xii
Chapter One: Introduction ............................................................................................................... 1
Background of the Problem ................................................................................................. 1
Organizational Context and Mission ................................................................................... 3
Purpose of the Study and Research Questions .................................................................... 4
Importance of the Study ...................................................................................................... 5
Overview of Theoretical Framework and Methodology ..................................................... 6
Definitions ........................................................................................................................... 8
Organization of the Dissertation .......................................................................................... 9
Chapter Two: Literature Review ................................................................................................... 11
The Importance, Implications, and Unknowns of the 4IR ................................................ 11
Historical Context of the 4IR ................................................................................ 12
Challenges and Implications of the 4IR on the Future of Work ............................ 12
Existing Strategies to Respond to the 4IR-Driven Changes .................................. 14
Artificial Intelligence-Enabled Adaptive Learning ........................................................... 16
Overview of Adaptive Learning ............................................................................ 17
New Frontier of AI-Enabled Adaptive Learning ................................................... 20
Evaluation of AI-Enabled Adaptive Learning ....................................................... 22
Transfer of Learning .......................................................................................................... 26
Learning Measurement .......................................................................................... 27
Intent to Transfer ................................................................................................... 34
Study’s Conceptual Framework ........................................................................................ 35
Summary ............................................................................................................................ 37
Chapter Three: Methodology ........................................................................................................ 39
Research Questions ........................................................................................................... 39
Overview of Methodology ................................................................................................ 39
The Researcher .................................................................................................................. 41
Data Sources ...................................................................................................................... 42
Surveys .................................................................................................................. 42
Interviews .............................................................................................................. 51
Credibility and Trustworthiness ........................................................................................ 54
Ethics ................................................................................................................................. 55
Chapter Four: Results and Findings .............................................................................................. 56
Participants ........................................................................................................................ 56
Survey Participants ................................................................................................ 57
Interview Participants ............................................................................................ 58
Research Question 1: Are There Differences in the Person- and Training-Related
Attributes Based on Type of Learning Completed? .............................................. 60
Person and Training-Related Results from the LTSI Survey ................................ 60
Research Question 2: Are There Differences in the Intent to Transfer Based on
the Type of Learning Program Completed? .......................................................... 65
Intent to Transfer Results ...................................................................................... 65
Research Question 3: What Is the Relationship of Person, Training, and
Environment Attributes to the Intent to Transfer Learning, Controlling
for the Type of Learning Program Completed? .................................................... 67
Results of the Multiple Regression Analysis of the Relations of Person,
Training, and Environment Attributes to the Intent to Transfer in
the AI-Enabled Adaptive Learning Program ............................................. 69
Results of the Multiple Regression Analysis for Person, Training, and
Environment Attributes and Intent to Transfer in the Traditional
Learning Program ...................................................................................... 71
Research Question 4: What Are the Experiences of Learners in the AI-Enabled
Program and How Do They Relate to the Intent to Transfer? ............................... 73
Knowledge Assessments as a Mechanism to Enable Personalization
of the Experience, Leading to Perceived Increase in Engagement,
Efficiency, and Effectiveness of the Program ........................................... 74
A Perceived Lack of Social Interaction in the AI-Enabled Learning
Program, Yet Not Viewed as a Barrier to the Learning Transfer .............. 77
A Reported Culture of Support for Learning, Yet A Gap Between
AI-Enabled Adaptive Learning Program and Organizational
Practices for Learning as a Barrier to the Learning Transfer .................... 78
Conclusion ......................................................................................................................... 81
Chapter Five: Discussion ............................................................................................................... 82
Discussion of Findings and Results ................................................................................... 82
Differences in Person-, Training-Related Attributes, and Intent to
Transfer Between Two Programs .............................................................. 82
Relationship Between Person-, Training-, and Environment-Related
Attributes and Intent to Transfer ............................................................... 84
Learner Experiences and Intent to Transfer After Completion of the
AI-Enabled Adaptive Learning Program ................................................... 86
Recommendations for Practice .......................................................................................... 87
Learner-Related Recommendations ...................................................................... 87
Training-Related Recommendations ..................................................................... 89
Organizational Environment-Related Recommendations ..................................... 91
Integrated Recommendations ............................................................................................ 93
AI Technologies-Enabled Learning Organization ................................................. 94
AI Technologies-Enabled Personalized Learning Offerings ................................. 94
AI-Enabled Measurement of Learning .................................................................. 95
Evaluation of the AI-Enabled Learning Ecosystem Implementation .................... 97
Summary ................................................................................................................ 99
Limitations and Delimitations ........................................................................................... 99
Recommendations for Future Research ........................................................................... 101
Implications for Equity and Connection to the Rossier Mission ..................................... 102
Conclusion ....................................................................................................................... 103
References ................................................................................................................................... 105
Appendix A: Permission for Figure 1 Baldwin and Ford’s (1988) Training Transfer
Model ............................................................................................................................... 128
Appendix B: Learning Transfer System Inventory Scale Definitions, Descriptions,
Loadings, and Reliability Coefficients of 89 items, Version 3 ....................................... 129
Appendix C: LTSI Survey ........................................................................................................... 135
Appendix D: Invitation Email to Participate in a Survey ............................................................ 139
Appendix E: Information Sheet ................................................................................................... 140
Appendix F: Interview Protocol .................................................................................................. 142
Appendix G: Invitation Email to Participate in an Interview ...................................................... 145
List of Tables
Table 1: Learning Scales and Definitions ..................................................................................... 32
Table 2: Data Sources .................................................................................................................... 41
Table 3: Data Analysis Methods ................................................................................................... 47
Table 4: Reliability Coefficients for Third LTSI .......................................................................... 50
Table 5: Demographic Characteristics of Survey Respondents .................................................... 58
Table 6: Demographic Characteristics of Interview Participants .................................................. 59
Table 7: Alpha Values of the Person- and Training-Related LTSI Scales .................................... 61
Table 8: t-test Results for Differences in Person- and Training-Related Attributes
Between AI-Enabled and Traditional Training (Nai = 267, Ntrad = 170) ........................ 64
Table 9: Alpha Values of the Intent to Transfer Scales ................................................................. 66
Table 10: t-test Results for Differences in Intent to Transfer Between AI-Enabled and
	Traditional Training (Nai = 267, Ntrad = 170) ................................................................... 67
Table 11: Reliability Coefficients of the LTSI and Intent to Transfer Scales from
Previous Studies and for Current Study ............................................................................ 68
Table 12: Unstandardized Regression Coefficients of Person-, Training-, and
Environment-Related Attributes on Intent to Transfer After Completion
of the AI-Enabled Learning Program ................................................................................ 70
Table 13: Unstandardized Regression Coefficients of Person-, Training-, and
Environment-Related Attributes on Intent to Transfer After Completion
of the Traditional Learning Program ................................................................................. 72
Table 14: A Priori and Emergent Codes in the Interviews with Participants of the
AI-Enabled Learning Program .......................................................................................... 73
Table 15: Participant Comments Related to the Experience and Effectiveness of
the AI-Enabled Learning Program .................................................................................... 76
Table 16: Participant Comments Related to the Impact of the Organizational Culture,
Environment, and Learning Processes on the Learning Transfer ...................................... 80
Table 17: Study’s Significant Predictors of the LTSI Factors to Intent to Transfer ..................... 85
Table 18: Key Themes from the Survey and Interview Data ........................................................ 87
Table 19: Evaluation Rubric for AI-Enabled Learning Organization ........................................... 98
List of Figures
Figure 1: Training Transfer Model ................................................................................................ 28
Figure 2: Revised HRD Evaluation Framework ........................................................................... 30
Figure 3: Conceptual Framework for AI-Enabled Learning Program Evaluation ........................ 37
Figure 4: AI-Enabled Learning Ecosystem ................................................................................... 97
List of Abbreviations
AI Artificial Intelligence
LTSI Learning Transfer System Inventory
Chapter One: Introduction to the Problem of Practice
The Fourth Industrial Revolution (4IR), the age of accelerated emergence of new
technologies that integrate the physical, digital, and biological dimensions of life, is creating skill
gaps in the global workforce (Schwab, 2016). New forms of workplace training are emerging,
enabled by advanced learning technologies. This training brings together the science of learning
and advanced computational techniques such as Artificial Intelligence (AI), machine learning,
and supervised learning (Sawyer, 2014). However, while training is one solution to address skill
gaps (Renjen, 2020), its impact on generating meaningful learning outcomes is questionable,
particularly when considering that only 10-20% of training materials get transferred to on-the-job
behaviors (Ford et al., 2011; Grossman & Salas, 2011; Velada et al., 2007). This study sought to
understand how advanced learning technologies using AI could address the problem of learning
transfer in the workplace.
Background of the Problem
The rise of automation, AI, and other technological advancements is changing job and
skill requirements (Davis, 2016). While automation may reduce the need for human employees in routine tasks, the creation of new jobs and changes to current jobs could require upskilling and reskilling of the global workforce. In the United States, it is predicted that for every
six jobs being automated by AI, one additional job will be created in order to develop and
manage the new technologies, resulting in “net projected growth from 153.5 million jobs in 2020
to 165.4 million jobs in 2030; this is an increase of 11.9 million jobs, or 7.7 percent” (Bureau of
Labor Statistics, U.S. Department of Labor, 2021; Strack et al., 2021, p. 4). According to
estimates, these newly created roles are expected to encompass 63 occupations, mainly in data
science and software development (Strack et al., 2021). However, these are not the only
occupations expected to be impacted by the 4IR. In aggregate, over 50% of the global workforce
could require upskilling and reskilling by 2025 in response to changing job requirements (WEF,
2020). If this problem is not addressed, a lack of adequate skills could lead to displacement of up
to 375 million workers by 2030 as a result of elimination of jobs and changes in job requirements
(Manyika et al., 2017a). However, while the employment outlook for 2025 predicted a loss of 85 million jobs, primarily in routine and administrative roles, it also forecast the creation of up to 97 million jobs, a net gain of 12 million jobs in technical fields such as computing, mathematics, architecture, and engineering (WEF, 2020). Skill gaps could also raise
questions around equity. The Organization for Economic Co-Operation and Development
(OECD) reported that changing skills requirements could create an equity gap by polarizing the
global workforce into high-skilled and low-skilled jobs (OECD, 2018b).
The pace and magnitude of changing skill requirements and the overall need to
successfully adapt to disruptions from technological advancements have prompted organizations
to focus on skill-building strategies (Gartner, 2020). Strack et al. (2021) suggested that
organizations should develop learning strategies to upskill and reskill the existing workforce to
mitigate skill shortages and create a culture of lifelong learning where people continuously
upgrade their skills as part of a daily routine. While many organizations have started to examine
the use of learning technologies to respond to the problem (Frezzon, 2017), most training formats
aiming to build a person’s knowledge, skills, and abilities continue to be a “passive delivery of
materials that assumes all students receive, handle, absorb, and process information similarly”
(Rifai et al., 2018, p. 1423). With a total investment in United States workforce development
training estimated at $164.2 billion in 2012 and $400 billion in 2015, companies continue to
report a lack of learning transfer into improvements in individual and organization performance
outcomes (Beer et al., 2016; Carnevale et al., 2015). Such ineffective training could lead to
excessive organizational costs (Johnson, 1995; Noe et al., 2006). Additionally, ineffective
training could adversely impact workers’ motivation, self-efficacy, and beliefs in their own
abilities (Hulleman et al., 2016). Christensen (2017) posited that adaptive learning, a form of
personalized, technology-enabled instruction, creates opportunities for democratization and
scaling of learning focused on a person's needs. However, while education technologies could
offer innovative approaches to learning, their efficacy for learning transfer and subsequently
their relevance to upskilling and reskilling of the global workforce have not been thoroughly
studied (Renjen, 2020; Rifai et al., 2018).
Organizational Context and Mission
The organization of study, ABZ (pseudonym), is a large multi-national company with its
mission defined as serving its customers through offering online products at a competitive price.
With the organization’s vision to expand and take a leading position in the market, ABZ
differentiated its growth strategy via exceptional customer service. ABZ strived to meet
customer needs by continually raising the bar of customer experience in finding, discovering,
and delivering any product. As an organization operating in the age of continuously enhancing
internet capabilities and advanced data sciences, ABZ actively used AI to predict and anticipate
customer desires to maximize customer success in buying experiences. Operating within the
high-tech industry with headquarters based in the United States, ABZ employed 10,000+ people,
had more than $1 billion in revenue, and had several divisions, each of which operated as an
autonomous entity in the global organizational structure.
ABZ’s people development strategy had a goal to upskill and reskill a large number of
employees, both new hires and seasoned employees. New hires in 2021 accounted for 250% of
the prior year’s workforce, which effectively doubled headcount in response to exponential
business growth and the impact of the global pandemic caused by Coronavirus Disease 2019 (COVID-19). In addition to new-hire onboarding and
training, employees needed unique knowledge and skills to deploy new products. ABZ’s
workforce learning and development strategy hinged on AI-enabled learning technologies with
the primary objective of delivering learning quickly, effectively, and at scale. There was,
however, a transition period where AI-enabled learning programs coexisted with traditional
programs for evaluation and comparison between the programs and their impact on the transfer
of learning to the workplace. One of the common AI applications in learning technologies is
creating adaptive learning training where content is adapted to an individual's needs (Loucks et
al., 2018; Wang et al., 2020). Adaptive learning benefits include both learning effectiveness and
efficiency, where the former is addressed by adaptation of the content to the learner, and the
latter is achieved by reducing the instruction time, focusing on the learner’s specific knowledge
and skill gaps, and finding optimal paths for individual learners (Christensen, 2017). In total,
about 1,000 employees at ABZ were going to participate in an adaptive learning program
delivered via AI-enabled learning technology. ABZ expected to measure the impact of the adaptive learning program on individual and organizational performance outcomes via learning transfer to the workplace.
Purpose of the Study and Research Questions
The purpose of this study was to examine the impact of the AI-enabled adaptive learning
program on facilitating learning transfer at ABZ. The study was guided by Holton’s (1996)
Human Resource Development (HRD) evaluation and research model. The study examined self-reports related to three factors from Holton's Learning Transfer System Inventory (LTSI): the
training, the person, and the organizational environment. These factors contribute to or hinder the
transfer of learning and the study analyzed the relationship of these factors to intent to transfer, a
person’s willingness to exert effort to attain a desired behavioral goal (Ajzen, 1991; Al-Eisa et
al., 2009). Additionally, the study also explored the experiences of learners in the AI-enabled
learning program and how they related to the intent to transfer. The study results and findings
helped facilitate discussions among ABZ’s workforce development professionals around the
implications of AI-enabled adaptive learning programs for learning transfer in the global workforce.
The research questions examined the impact of AI-enabled adaptive learning programs on learning transfer via self-reported LTSI factors, compared it to the outcomes of a non-AI-enabled learning program, determined the relationships between the LTSI factors and the intent to transfer training on the job, and explored the experiences of learners in the AI-enabled learning program in relation to the intent to transfer training:
1. Are there differences in the person- and training-related attributes based on the type
of learning completed?
2. Are there differences in the intent to transfer based on the type of learning program
completed?
3. What is the relationship of person, training, and environment attributes to the intent to
transfer learning, controlling for the type of learning program completed?
4. What are the experiences of learners in the AI-enabled program and how do they
relate to the intent to transfer?
Importance of the Study
The problem of skill gaps is important to address because the impact of the 4IR predicts
greater use of automation, necessitating new skills development for the global workforce to work
with intelligent machines (Bughin et al., 2018). A survey of top business executives and higher
education institutions suggested that only about one-quarter of their workforce (27%) and one-fifth of their students (20%) had the skills to use emerging technologies (Bahl et al., 2018).
While training programs aim to build requisite skills, these efforts often fail to improve
performance unless there is a transfer of learned skills and their application at the workplace
(Rouiller & Goldstein, 1993; Velada et al., 2007). Beer et al. (2015) discussed paradoxical training outcomes in a study where companies that invested significantly in training programs to drive organizational change did not achieve the intended performance outcomes and even fell behind organizations that did not invest in training interventions at all. With this
degree of uncertainty in learning program efficacy, the economic impact could be significant: in the United States, an estimated $1.1 trillion is spent on academic and corporate education, $400 billion of which goes to workforce development training (Carnevale et al.,
2015). Therefore, an evidence-based approach to demonstrate the effectiveness of training
programs for individual and organizational performance outcomes via learning transfer is an
important area of study (Bates et al., 2012; Wick et al., 2015).
The organization of study, ABZ, aimed to upskill and reskill its workforce at scale to
meet job competency expectations and be prepared for rapid business growth. It was launching
the AI-enabled adaptive learning program to train new hires and tenured employees on
knowledge and skills for new products, processes, and tools. While the launch of the new
programs provided opportunities to experiment with AI-enabled learning technologies, the
traditional programs continued to exist for comparison of the programs’ effectiveness and
transfer of learning to the workplace.
Overview of Theoretical Framework and Methodology
Bandura’s (1986) social cognitive theory (SCT) was used as an overarching theoretical
framework for the study, guided by Holton’s (1996) Human Resource Development (HRD)
evaluation and research model. The quantitative aspect of the study was operationalized via the
Learning Transfer System Inventory (LTSI), a psychometrically validated instrument that
gathers data about an individual’s perceptions of factors that impact transfer of learning to the
workplace (Bates et al., 2012; Holton et al., 1997; Holton et al., 2000). The central idea of
Holton’s HRD evaluation framework was that the transfer of learning is influenced by a
combination of attributes related to the person, the training, and the organizational environment
defined as the LTSI factors (Holton, 2005). Intent to transfer was included to enhance the
evaluation of LTSI factor predictability in the transfer of learning (Hutchins et al., 2013;
Yamkovenko, 2009). The qualitative aspect of the study was operationalized via interviews with
participants of the AI-enabled adaptive learning program and helped to explain or expand the
quantitative results from the LTSI.
This study examined the impact of the AI-enabled learning program on transfer of
learning, identified key influencing factors in the transfer of learning, examined the relationship
of these factors on intent to transfer (Hutchins et al., 2013; Yamkovenko et al., 2007;
Yamkovenko, 2009), and explored the experiences of learners in the AI-enabled learning
program. The study implemented an explanatory sequential mixed methods research design
using quantitative and qualitative inquiry (Creswell & Creswell, 2018). For the quantitative
aspect of the study, data were collected using the fourth and latest version of the LTSI, revised in 2008 (Bates et al., 2012). While a control group existed via the traditional program, random
assignment of participants was not possible. The quantitative inquiry enabled quantifiable
measures to identify characteristics and patterns of the factors that influence transfer of learning,
their correlations and relationship to the intent to transfer, controlling for the type of program
completed. The qualitative data were collected via interviews with participants of the AI-enabled
learning program. The qualitative inquiry helped to expand or explain quantitative results.
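As a hypothetical illustration of the correlational analysis such a design implies, the sketch below computes a Pearson correlation between one LTSI factor score and intent to transfer. The factor name, the 1–5 Likert responses, and all values are invented for illustration and are not the study's data or instrument.

```python
# Hypothetical sketch: correlating one LTSI factor with intent to transfer.
# The factor, scale (1-5 Likert), and responses are illustrative assumptions.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented 1-5 Likert responses for a "motivation to transfer" factor and
# the intent-to-transfer measure from the same eight respondents.
motivation = [4, 5, 3, 4, 2, 5, 4, 3]
intent = [4, 5, 3, 5, 2, 4, 4, 3]

r = pearson_r(motivation, intent)
print(f"r = {r:.2f}")  # a strong positive association in this toy sample
```

In the actual design, such coefficients would be computed per LTSI factor while controlling for program type, for example via a regression with a program-type dummy variable.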
Definitions
A few key concepts surrounded the problem of practice and were central to
understanding the research study.
Adaptive Learning Experiences
Adaptive learning experiences address an individual’s unique needs through just-in-time
feedback and adaptation of the content and instruction (Brusilovsky & Peylo, 2003).
Adaptive Learning Technologies
Adaptive learning technologies, often referred to as adaptive systems, are designed to
create adaptive learning experiences using the power of digital technology. AI-enabled adaptive
learning is a digital technology capable of adapting to the learner’s needs through AI (Herder et
al., 2017).
Fourth Industrial Revolution
Fourth Industrial Revolution (4IR) is an era of blurring boundaries between humans and
machines and is characterized by the advancement of new technologies such as Artificial
Intelligence, robotics, and automation (Schwab, 2016).
Intent to Transfer
Intent to transfer is defined as “the trainees’ intention to engage in specific behavior that
would facilitate transfer of their skills” (Machin & Fogarty, 2004, p. 228).
Learning Transfer Systems Inventory
Learning Transfer Systems Inventory (LTSI) is a validated instrument that measures 16
factors affecting transfer of training (Holton et al., 2000).
Transfer of Learning
“Transfer of learning occurs when knowledge or skill learnt in a learning context is used
in a new context different from the context of learning” (Baldwin & Ford, 1988; Perkins &
Salomon, 1988, p. 46).
Workforce Upskilling and Reskilling
Workforce upskilling and reskilling, while often used interchangeably, differ in focus. Upskilling is concerned with learning skills to improve current
capabilities; reskilling is learning a new set of skills to perform new roles or tasks (Deloitte,
2019).
Organization of the Dissertation
This dissertation is organized into five chapters. Chapter One provided an overview of
the problem of practice, the supporting data that highlighted the significance of the problem, and
new education technologies, such as AI-enabled adaptive learning programs and their potential
role in transfer of learning to the workplace, which prompted this study. This chapter also
reviewed the organization’s mission, goal, and stakeholder groups, and the theoretical framework
of SCT guided by Holton’s (1996) Human Resource Development (HRD) evaluation model.
Chapter Two provides an in-depth review of the literature on the 4IR impact on global workforce
upskilling and reskilling. It also presents a discussion on understanding adaptive learning, its
purpose, various models, key education problems it aims to solve, and research findings related
to the evaluation of AI-enabled adaptive learning programs. Also examined is an overview of the
learning measurement concepts and models, including transfer of learning and the influencing
factors in learning transfer (Holton et al., 2000). Lastly, intent to transfer as a proximal variable
to training outcome is discussed (Ajzen, 1991). Chapter Three provides details of the
methodology for assessing impact of the AI-enabled learning program on transfer of learning.
Chapter Three also describes participant selection, data collection, and analysis methods. Chapter
Four presents the results and findings of the study. Chapter Five provides a discussion of the study findings and outlines recommendations.
Chapter Two: Literature Review
The literature review examines the Fourth Industrial Revolution (4IR), its uncertainties,
effects on job and skill requirements, and overall implications on the future of work. Next, a
discussion is presented of the concept of adaptive learning and its models and applications via
Artificial Intelligence (AI)-enabled technologies; this AI adaptive learning model is one of the
emerging training formats to upskill and reskill at speed and scale. The chapter also includes a review of empirical research related to the evaluation of AI-enabled adaptive learning programs
and debates around the role of these technologies in the future of education. Lastly, discussion is
presented on the concept of transfer of learning as a determinant of training efficacy, the analysis
of existing models to evaluate training outcomes, and the review of the empirically validated
instrument in measuring training transfer to the workplace. The last section of Chapter Two
contains a detailed conceptual framework derived from the literature review which established a
set of principles on which the study was founded.
The Importance, Implications, and Unknowns of the 4IR
The 4IR is predicted to change the global economy and poses risks of so-called “technological unemployment” (Denny, 2019, p. 118). The optimistic predictions around the 4IR
suggest more jobs will be created. In contrast, the pessimistic views suggest the automation of
jobs will be at a higher rate than the creation of new jobs, exacerbated by the limited time to
upskill and reskill the global workforce to make it ready for the future of work (Butler-Adam,
2018). Despite these uncertainties, organizations, governments, and educational institutions need
to develop workforce upskilling and reskilling strategies to respond to the scale and pace of
changes in job skills requirements driven by the 4IR (Anshari, 2020). The following sections will
review the historical context of the 4IR, the key implications and challenges driven by the 4IR in
the context of the future of work and changing skill requirements, and the existing strategies to
address the emerging gaps in education and global workforce development.
Historical Context of the 4IR
The 4IR, also referred to as Industry 4.0, connects and removes boundaries between the
biological and artificial worlds via the emergence of AI, robotics, and automation (McGinnis,
2020). It builds on the first three industrial revolutions: 1) the advent of the steam engine, leading
to production mechanization, 2) the invention of electricity, giving birth to mass production, and
3) the emergence of computers, electronics, and digital technologies automating production
(Roblek et al., 2016). The term “Fourth Industrial Revolution” (4IR) was first mentioned in 2016
by Klaus Schwab, executive chairman of the World Economic Forum (WEF) (Schwab, 2016).
The 4IR symbolized the emergence of technologies that create entirely new capabilities for
people and machines (Davis, 2016). 4IR is about the automation of knowledge, whereas the
previous three industrial revolutions were about the automation of physical strength
(Brynjolfsson & McAfee, 2014). Schwab posited that 4IR could significantly improve the quality
of life. Yet, according to Schwab, it could present some serious risks, such as increasing skill
gaps in the global workforce, increasing inequality due to disruption in labor markets, and
exacerbating misalignment between formal education and dynamic changes in the skills
requirements (Harris & Kimson, 2018; Schwab, 2016). Views around implications of the 4IR are
shared both by the research community and corporate executives expecting transformative
changes in the way humans live and organizations function (Manyika et al., 2017b).
Challenges and Implications of the 4IR on the Future of Work
Organizations are adopting AI, robotics, new computational technologies, innovative
materials, biotechnology, virtual reality, and other technologies driving the 4IR (WEF, 2018). On
this path, existing work tasks may continue to be redistributed between humans and machines
with greater reliance on machines to perform tasks of varying degrees of complexity (Melnyk et
al., 2019). For example, in 2018, 71% of tasks were performed by humans and 29% by
machines. This ratio is expected to shift by 2022 to 58% of tasks performed by humans and 42%
by machines (Manyika et al., 2017b). Still, organizations struggle to develop long-term
workforce development strategies due to challenges in identifying jobs that can be replaced by
machines (Anshari, 2020). Furthermore, the expected changes demand an understanding of skill requirements for changing jobs: by 2025, 42% of core skills are predicted to be either entirely new or not yet considered necessary today (Anshari, 2020; WEF, 2018).
Although the majority of business executives are uncertain whether they have the proper
workforce and skill sets needed for the future, they agree about the importance of continuous
learning and its role in preparing the global workforce to fulfill new job requirements (McGinnis,
2020). While education systems are the cornerstone of preparing youth for jobs globally, they
struggle to produce the skills required by the current labor market. On average, across 36
countries representing the OECD, 14.3% of 18- to 24-year-olds are neither employed nor
actively pursuing education or training whereas, in some countries, this number increases to over
25% (OECD, 2019). Further, the rate of unemployment lasting more than one year among 18- to
25-year-olds averages 1.5% across OECD countries, reaches 3% among developing countries,
and can be as high as 7.9% in some countries (OECD, 2019). According to The Education Commission (2018), the situation is not improving: more than half of the nearly 2 billion youth worldwide are estimated to have significant skill gaps by 2030.
Education practitioners and researchers suggest that for a better-prepared future of work, in
addition to numeracy, literacy, and other sciences provided in the standard K-12 curricula,
students need to understand the 4IR and both learn and apply skills for new technologies (Butler-
Adam, 2018; Gleason, 2018).
The problem of skill gaps is relevant not only to the youth and unemployed population
but also to the current global workforce. Recent studies estimate that extensive training time is
required to address existing skill gaps. According to WEF, about 35% of the global workforce is
expected to build new skills, which could require between six and twelve months of training time
(WEF, 2018). Furthermore, these estimates do not reflect future skill requirements, which could
accelerate and exacerbate the skill gaps, leading to an increase in time, cost, and resource
requirements needed to address the problem.
Existing Strategies to Respond to the 4IR-Driven Changes
The implications that arise from the 4IR require the global workforce and youth to have
the skills to work with new technology (Butler-Adam, 2018). Among predicted requisite skills
are cognitive flexibility, problem solving, critical thinking, creativity, and the ability to employ
the mindset of a lifelong learner (Lewis, 2018). Sawyer (2014) posited that knowledge workers are expected to go beyond a general awareness of new concepts and engage in reflection and metacognition for a deeper understanding of learned skills. Similarly, Gleason
(2018) argued that current learning strategies in education and corporate learning are often
focused on “what to learn” instead of “how to learn” which could be a central topic in the context
of the 4IR and its unprecedented pace of change necessitating continuous learning of new skills
(p. 146).
Exploration and discussion within the education community are needed to respond to 4IR-driven challenges. For example, a liberal arts education could become one of the strategies to respond to
automation and implications of the 4IR (Lewis, 2018). Aoun (2017) recommended that
universities act as the innovation centers for new forms of education, building skills of the future
such as systems thinking, entrepreneurship, agility, and lifelong learning. There are ongoing
experiments in the United States, South Africa, and Asia with new curricula that merge technical
skills and liberal arts, foster lifelong learning skills, and create opportunities to apply newly
acquired skills in practice (Gleason, 2018). Some of these examples include the introduction of
the design thinking model to the science, technology, engineering, and mathematics (STEM)
curriculum at some universities in China. A second example is the efforts of Northeastern
University, Yale University, and the National University of Singapore to develop a partnership to
build project-based STEM curriculum with a focus on new technologies. A third example is the
development of a new life science curriculum at Stanford University enabling active
experimentation to solve practical problems (Gleason, 2018).
Despite ongoing initiatives, education and workforce development in general struggle
with the technology-driven innovation process due to the speed of change and new skills arising
from digital technologies (Seufert & Meier, 2016). In response, the education research community argues that the line between higher education and corporations is becoming
blurry in the 4IR. For example, pathways for students to continue skill development through their
institutions after graduation could be created to respond to rapid changes in job requirements
(Penprase, 2018). New models such as the “open loop” university, piloted by Stanford University, could be new forms of partnership between education and corporations. They blend learning
and work experiences and provide students with opportunities to pursue higher education during
their professional experiences (Penprase, 2018). In the context of changes driven by the 4IR,
learning methods focusing on learner centricity, motivation, and the enhancement of learning
effectiveness and efficiency are gaining attention (Gleason, 2018). The scale and urgency of
required upskilling and reskilling raise the question for learning scientists of what mechanisms are available to meet these goals. Here, digital technologies and computer scientists’ experimentation could play an important role in transforming learning through these technologies’ inherent capabilities to automate, scale, and provide data-driven insights that could
inform future education and workforce development strategies (Sawyer, 2014). Among emerging
forms of learning, AI-enabled learning technologies with new techniques to develop more
personalized learning experiences are gaining the attention of the education and workplace learning community.
Artificial Intelligence-Enabled Adaptive Learning
Educational research has demonstrated the effectiveness of adaptive learning in one-on-one tutoring that provides personalized learning (Bloom, 1984). The merger of digital technologies
and the concept of adaptive learning (i.e., an individual learner-centric approach with adaptation
of the content) gave birth to adaptive learning technologies. These technologies automate some
of the components of the adaptive learning process and create opportunities to use this approach
at scale (Xie et al., 2019).
Even though adaptive learning is a well-known concept in education, in recent years its definition has been shifting toward forms in which instruction is augmented with digital technologies or even entirely automated by AI. It has the potential to solve essential problems in the field of
learning, such as accomplishment of the learning goals and proficiencies among learners with a
diverse set of skills and prior knowledge (Sharma, 2019). Adaptive learning can also increase the
effectiveness and efficiency of the learning process via automated and just-in-time feedback,
improve learner engagement and motivation through personalization of instruction, and reduce
resource limitations required for instruction (Oxman & Wong, 2014). Furthermore, access to
large amounts of data from adaptive learning technologies enables further research on how
people learn, allowing educators to continuously improve the content, learning materials, and
overall performance of the technology (Oxman & Wong, 2014). However, despite its benefits
ranging from scalability to data-driven learning, adaptive learning is still facing challenges in
measuring the outcomes of learning efficiency and effectiveness, and researchers seek to
understand the optimal models for these systems (VanLehn, 2011). Several influential
organizations such as the Bill and Melinda Gates Foundation and prominent educational
institutions in the United States, Australia, Asia, and Europe have expressed interest in research to study and accelerate the adoption of adaptive learning in global education and
workplace learning (Gleason, 2018; Oxman & Wong, 2014). The following sections review the
historical and contemporary definitions of adaptive learning, the theoretical frameworks and
models applicable to adaptive learning, and focus on the adaptive learning enabled via AI,
including advantages, disadvantages, and debates around various applications.
Overview of Adaptive Learning
A simple definition of adaptive learning frames it as a learning program in which the instruction method and content adapt to learner needs (Kerr, 2016). The concept of adaptive
learning is not new. Until the early 1970s, which marked the arrival of personal computers and the emergence of computer-based education applications, adaptive learning was manifested only in the one-to-one approach, one of the most effective education methods for increasing learning outcomes (Bloom, 1984). The use of digital technologies in learning led to the emergence of the
term technology-enabled learning, with one of its branches in adaptive learning (Kirkwood &
Price, 2014). The New Media Consortium (Johnson et al., 2013) defined adaptive learning
technologies as technologies with access to data on student performance within the tool and
deployment of that data to adapt instruction to meet individual needs. The U.S. Department of
Education views the benefits of adaptive learning in enhancing learning analytics, which allows
educators to better respond to learner needs using just-in-time data during the learning process
(U.S. Department of Education Office of Educational Technology, 2013). Although there is
agreement that the future of learning should focus on personalized content for learners, adaptive
learning lacks a single definition and therefore may have various interpretations (UNESCO
Institute for Information Technologies in Education, 2013).
Historical Context and Science of Adaptive Learning
The history of adaptive learning is rooted in the work of Pressey, the first developer of
the teaching machine, which was based on the behaviorist learning theory of stimulus, response,
and reinforcement in the learning process (Stolurow & Davis, 1965). However, this machine’s
adoption was not successful and even led to Pressey’s (1933) expressions of frustration in his
notes:
There must be an ‘industrial revolution’ in education, in which educational science and
the ingenuity of educational technology combine to modernize the grossly inefficient and
clumsy procedures of conventional education. Work in the schools of the future will be
marvelously though simply organized, so as to adjust almost automatically to individual
differences and the characteristics of the learning process. (pp. 582–583)
Skinner (1958) improved Pressey’s machine by adding steps to acquire learners’ desired
behavior based on the principle that they needed to follow a sequentially designed instructional
content to achieve intended behavioral outcomes (Kara & Sevim, 2020). Over time, with the
evolution of the fields of psychology and learning science, as well as the advancement of
computer technologies, newer paradigms such as cognitivism and constructivism emphasizing
the role of the learner in a learning process started to gain attention in the discussions around
adaptive learning systems (Kara & Sevim, 2020).
Contemporary Definition and Models of Adaptive Learning
The U.S. Department of Education Office of Educational Technology (2013) defined
digital learning systems as adaptive
when they can dynamically change to better suit the learning in response to information
collected during the course of learning rather than on the basis of preexisting information
such as a learner’s gender, age, or achievement test score. Adaptive learning systems use
information gained as the learner works with them to vary such features as the way a
concept is represented, its difficulty, the sequencing of problems or tasks, and the nature
of hints and feedback provided. (p. 12)
Although adaptive learning systems are not clearly defined, they share common tenets
delineated in various design models and related technology components for the adaptation
mechanism (Pugliese, 2016). According to Mavroudi and Hadzilacos (2016), models of adaptive
learning vary widely: the macro-adaptive approach is the most generic and simplest form of
adaptation based on learning goals and pre-learning assessment outcomes; the aptitude-treatment
interaction approach is an adaptation based on learners’ prior knowledge and characteristics such
as learning preferences; and the micro-adaptive approach is an adaptation of content by
diagnosing the student’s specific learning needs during the learning process and providing
precise feedback and solutions to remedy knowledge or skill gaps. With the adaptive learning
models’ variability come differences in the programming approach, mainly using three
intersecting factors: content, learner, and instruction method. The latest technological
advancements in AI enabled a shift from the oldest and most basic form of decision-tree or rule-based
programming to advanced algorithms and machine learning (Aroyo et al., 2006). Despite the
variation in adaptive learning technologies, the key tenets of adaptive learning include
automation to reduce manual instruction processes, sequenced progression of content to gain
skills, and assessment of the acquisition of knowledge and skills (Pugliese, 2016).
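To make these tenets concrete, the following minimal sketch shows how a rule-based (decision-tree style) adaptation mechanism of the kind described above might automate instruction, sequence content, and assess acquisition. The topic names, mastery threshold, and scores are invented for illustration and do not describe any particular product.

```python
# Minimal sketch of rule-based adaptation, illustrating the three tenets:
# automated instruction, sequenced progression, and assessment of mastery.
# Topics, threshold, and scores are invented for illustration.
MASTERY_THRESHOLD = 0.8  # assumed proportion correct required to advance

SEQUENCE = ["fractions", "decimals", "percentages"]  # sequenced progression

def next_step(topic_scores, current_topic):
    """Rule-based adaptation: advance on mastery, otherwise remediate."""
    score = topic_scores.get(current_topic, 0.0)
    if score >= MASTERY_THRESHOLD:
        i = SEQUENCE.index(current_topic)
        if i + 1 < len(SEQUENCE):
            return ("advance", SEQUENCE[i + 1])
        return ("complete", None)
    # Micro-adaptive branch: serve remedial content on the same topic.
    return ("remediate", current_topic)

print(next_step({"fractions": 0.9}, "fractions"))  # ('advance', 'decimals')
print(next_step({"fractions": 0.5}, "fractions"))  # ('remediate', 'fractions')
```

In an AI-enabled system, the hand-written rules above would be replaced or supplemented by learned models, but the loop of assess, adapt, and progress is the same.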
New Frontier of AI-Enabled Adaptive Learning
The term “artificial intelligence” was first mentioned by John McCarthy during the
launch of the Dartmouth Summer Research Project in 1955 (McCarthy, 1955). One year later, he
referred to it as “the science and engineering of making intelligent machines” (McCarthy, 2007,
p. 2). AI-based applications are proliferating rapidly across all major sectors, such as health, aerospace, education, law, manufacturing, agriculture, and transportation. AI technologies vary widely, ranging from programmed intelligence and rule-based machines, to machine learning that finds patterns and makes decisions without predefined tasks, to the ultimate vision for AI of sensing external stimuli and making decisions (Dhawan & Batra, 2021;
Strack et al., 2021). The field of education is not an exception to this variability, and AI-enabled
learning technologies present immense positive potential to alter the way higher education and
workforce development function.
Artificial Intelligent Tutors in Education
The most common use of AI in education is in intelligent tutoring systems, which can track learner performance with precision to provide focused feedback and more optimal paths for personalized learning (du Boulay, 2016). Education and computer scientists generally
agree that the most advanced AI-enabled adaptive learning technologies are built on the power of
machine learning that has complex algorithmic structures behind these technologies. For
example, McGraw-Hill’s ALEKS uses machine learning and algorithms based on knowledge space theory, a probabilistic mathematical approach to assessing a learner’s state of knowledge (Cosyn et al., 2021). Another competing model, developed by Carnegie Mellon University, is ACT-R, a cognitive architecture that models cognition, striving to understand how people receive, process, and organize knowledge to produce intelligent behavior. However,
modalities and use of AI-enabled adaptive learning vary, and educational institutions are at the
early stages of exploring these technologies, often in research-based experiments (Carnegie
Mellon University, 2021).
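As a toy illustration of the probabilistic assessment idea underlying knowledge space theory (not ALEKS's actual algorithm), the sketch below maintains a likelihood over feasible knowledge states, the subsets of items a learner may have mastered, and updates it after each answered item. The items, states, and slip/guess rates are all assumptions made for the example.

```python
# Toy sketch of probabilistic assessment in the spirit of knowledge space
# theory: keep a likelihood over feasible knowledge states and apply a
# Bayesian update after each response. Items, states, and slip/guess
# rates are illustrative assumptions, not ALEKS's actual parameters.
SLIP, GUESS = 0.1, 0.2  # assumed error rates

# Feasible knowledge states: you cannot know "multiply" without "add".
STATES = [frozenset(), frozenset({"add"}), frozenset({"add", "multiply"})]

def update(likelihoods, item, correct):
    """Bayesian update of state likelihoods after one answered item."""
    posterior = {}
    for state, p in likelihoods.items():
        p_correct = (1 - SLIP) if item in state else GUESS
        posterior[state] = p * (p_correct if correct else 1 - p_correct)
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

prior = {s: 1 / len(STATES) for s in STATES}
after = update(prior, "multiply", correct=True)
best = max(after, key=after.get)
print(sorted(best))  # ['add', 'multiply']: the most likely state
```

Repeating the update over many items concentrates the likelihood on one state, which is how such a system decides what the learner knows and what to teach next.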
Debates Around AI-Enabled Adaptive Learning
According to a 2018 education sector report, use of AI in U.S. education technologies is projected to grow 47.77% between 2018 and 2022 (Technavio, 2018). Furthermore, in 2018 alone, AI startups in the United States received approximately $9.3 billion
in venture funding (Davenport, 2019). The abrupt changes necessitated in the education industry by the spread of Coronavirus Disease 2019 (COVID-19) have also expedited the shift to technology systems in education and corporations globally (Dhawan & Batra, 2021).
Case for AI-Enabled Adaptive Learning. A compelling argument can be made for
creating personalized learning for every employee and student at scale using adaptive learning
platforms (Christensen, 2017). AI-enabled learning technologies achieve personalization of
learning by helping to overcome traditional education problems such as varied learning abilities,
different levels of prior knowledge, and resource and cost limitations (Pugliese, 2016). The
power of these technologies is in the inherent capabilities to precisely determine what a learner
knows and how to help the learner with the content by addressing individual needs and providing
just-in-time remedy and support (Veletsianos, 2016). Real-time learner assessment and predictive analytics available through these systems not only increase learning efficiency and effectiveness but also solve the problem of traditional assessments, in which the learner receives feedback only after the learning event (Dziuban et al., 2018). Also, the learner-centric approach
and personalized learning drive higher levels of engagement and motivation throughout the
learning experience (Pugliese, 2016).
Case Against AI-Enabled Adaptive Learning. For all their advancements, AI-enabled adaptive learning systems are still in experimental stages. According to researchers of adaptive systems, the field will need to develop common standards and definitions to align terminology and application among education and
research communities (Pugliese, 2016). Another challenge relates to the application of data from
adaptive learning systems. The metadata are prone to error if not continuously tested for
accuracy and correctness. When AI-enabled technologies were developed, the input data were
provided and coded by humans, and therefore, the humans’ “biases tainted the data, which in
turn infected the algorithms, which in turn produced biased outcomes” (Kulkarni, 2019, p. 38).
Next, with all the movements and progress in AI-enabled adaptive learning, two competing
approaches are emerging and are debated among researchers and educators: AI-led versus AI-
assisted learning. AI-led education, a system undergoing active experimentation in China,
assumes no interaction with the human teacher, resulting in complete reliance on the interaction
between machine and learner. AI-assisted learning, mainly employed in the United States,
experiments with augmenting the application of adaptive learning with human teachers (Hao,
2019).
Evaluation of AI-Enabled Adaptive Learning
AI-enabled adaptive learning shifts the focus to the individual and aims to provide an
individualized learning path to every learner, tailoring the instruction to varying learning
abilities, preferences, prior knowledge, and motivation levels (Pugliese, 2016). In addition to
potential learning effectiveness and efficiency gains, it could also save learning time and cost by freeing human researchers’ time to address specific learning needs that machines cannot resolve (Pugliese, 2016; Szijarto & Cousins, 2019).
experimentation with these programs, there is a lack of empirical evidence on the impact of AI-
enabled adaptive learning on individual and organizational performance outcomes.
Learner Engagement in AI-Enabled Adaptive Learning
Learner engagement is defined as “the amount of physical and psychological energies
that a learner devotes to the experience” (Astin, 1999, p. 519). A learner’s engagement increases
when the learning is relevant (Hamari et al., 2014). The research community has been
experimenting with evaluating learner reaction within the constructs of the learner’s attributes,
such as motivation and self-regulation. For example, Tsatsou et al. (2018) evaluated how a
learner’s motivation changes in a dynamic adaptation model of the AI-enabled learning
environment based on the learner’s behavior and affective state. Liu et al. (2017) agreed that
apart from learners’ cognitive ability in AI-enabled adaptive learning, psychological factors,
such as motivation and self-regulation, play a role as predictors of engagement and performance.
Cook et al. (2007) posited that learning preferences, often used interchangeably with cognitive styles, may not yield a statistically significant impact on learning outcomes, and that instructional methods play a more significant role in learning gains. Aleven et
al. (2013) demonstrated the impact of self-regulation on learning outcomes in adaptive learning
environments and how well these models can support an individual's goal orientation, self-
monitoring, and exertion of control over the learning process. Lastly, the findings from Liu et al.
(2017) suggested that while real-time assessments play an important role in achieving desired
learning outcomes, learners with higher levels of prior knowledge spent nearly the same amount
of time in self-assessments and staying on content as those with lower levels of prior knowledge.
This could be explained by the learner’s motivation to achieve learning mastery. Evidence suggests that learner attributes such as motivation and self-regulation have implications for outcomes in the AI-enabled adaptive learning environment and represent an area of focus in ongoing research (Aleven et al., 2017).
Training Design in AI-Enabled Adaptive Learning
Educational research on adaptive learning technologies has documented many instances
where effective instructional treatment to increase desired learning gains depends on the
learner’s prior knowledge (Kalyuga, 2006). AI-enabled learning systems are expected to achieve
learning goals via an optimal path tailored to individual needs, thus increasing learning
efficiency (Kalyuga, 2006). Kalyuga (2006) experimented with measuring the effects of AI-
enabled learning programs on knowledge gains via a rapid diagnostic assessment model which
tracked how remedial and tailored content was provided to optimize cognitive load and
ultimately reduced training time while accomplishing increased learning gains.
However, instructional adaptivity is a broad topic. Aleven et al. (2017) defined
instructional adaptivity as two-dimensional, concerning how to adapt content and what to adapt
to. The Adaptability Grid developed by Aleven and colleagues presents a model of adaptivity
that distinguishes designed adaptivity, an iterative instructional design with rule-based
adaptation of content, from algorithmic adaptivity, which uses algorithms and machine learning.
Research findings on the effects of these forms of adaptivity demonstrated that both models of
adaptation positively affected student knowledge gain and produced effective strategies to
address knowledge gaps (Aleven et al., 2017). However, the plethora and variability of adaptive
learning systems present challenges in developing a shared taxonomy and the models needed
to adapt instructional design. For example, some AI-enabled learning technologies provide
more control to the learner while others completely remove this attribute. Liu et al. (2017)
summarized key models of instruction methods in AI-enabled adaptive learning technologies.
These vary widely from, on the one hand, personalizing instruction by changing the sequencing
of concepts with no control by the learner to, on the other hand, approaches that provide learners
with both sequenced and deliberately selected contents. In this second approach, AI analyzes
data from the learner’s diagnostic assessment and performance and adapts instruction using
multiple input factors (Liu et al., 2017). These variables matter for understanding and exploring
the interdependent nature of learner and instruction attributes in AI-enabled learning
technologies and their implications for learning outcomes. Despite ongoing research in
AI-enabled adaptive learning technologies, there is a lack of empirical research to inform
educators and learning professionals on the most effective instructional design methods in this
new form of education technology to yield desired learning outcomes.
Organizational Environment for AI-Enabled Adaptive Learning
There is empirical evidence that validates the workplace environment and organizational
culture’s impact on organizational learning (Amabile, 1988; Sternberg, 2003). Organizational
cultures, which consist of shared beliefs and values among organizational members, play an
important role in organizational learning and learning initiatives. The culture may support or
hinder the transfer of learning and achievement of organizational outcomes through its effect on
individuals’ motivation, attitudes, and behavior (Bates & Khasawneh, 2005). The concept of
organizational learning culture is defined as “a continuous testing of experience and its
transformation into knowledge available to the whole organization and relevant to their mission”
(Senge, 1990, p. 6). Such culture fosters the organizational practices of knowledge acquisition,
knowledge distribution, and transfer of learning (Banerjee et al., 2017). The term “learning
organization” was coined by Peter Senge (1990) as a relevant organizational format to respond to
the accelerated pace of change in the global environment. According to Senge, “Learning
organizations are organizations where people continually expand their capacity to create the
desired results. New and expansive thinking patterns are nurtured, where collective aspiration is
set free, and where people are continually learning to see the whole together” (p. 3). According
to Watkins and Marsick (1993), learning organizations foster curiosity, inquiry, and dialogue,
support “creative tension” as a way to create better ideas and outcomes, and view learning as
integral to the individual performance that drives organizational performance. In this context, AI-
enabled learning technologies are the tools to facilitate learning and “any learning process
(adaptive or otherwise) that is taking place in a social intervention is embedded within a social
learning system,” which requires careful evaluation of its impact on defined goals (Szijarto &
Cousins, 2019, p. 161).
The power of AI in adaptive learning is in its potential to strengthen and enhance the
learning process by keeping the learner motivated, supported with just-in-time feedback, and
continuously adapting to her needs (Becker et al., 2016). However, according to Hao (2019),
before AI-enabled learning models become a form of intelligent learning, it would be a mistake
to introduce these technologies widely into education and workplace learning without extensive
research on their impact on learning transfer to the workplace.
Transfer of Learning
The ultimate goal of the learning program is to produce behavioral change to improve
performance outcomes at the individual and organizational levels (Holton, 1996; Stolovitch &
Keeps, 2004). However, the distinction between learning and the transfer of learning is nuanced,
with transfer playing an important role in validating training efficacy and relating to both
learning process and outcome (Haskell, 2004). Steiner (2001) delineated learning and transfer of
learning:
Learning, on the one hand, is thought (a) to occur within an immense body of prior
knowledge, and (b) to be a process of continual integration of knowledge by
constructions or reconstructions [….] Transfer, on the other hand, is thought of as a
ubiquitous, continuous, systematic use of selected parts of the immense body of prior
knowledge. (p. 15846)
This section discusses the concept of learning transfer, models to evaluate learning
outcomes, and factors that influence learning transfer.
Learning Measurement
If training is perceived as a mechanism to deliver essential knowledge and skills required
to perform a job or task effectively, the process of evaluating the degree of acquired knowledge
and the application of that knowledge is critical to understanding and determining the efficacy of
training (Phillips, 1997). However, conceptualization of learning transfer varies among the
research community and includes a broad range of meanings, including application of new skills
on the job, the quality of new skills application, and the impact of new behaviors on workplace
performance (Schoeb et al., 2019). These variations in interpretations also lead to different
approaches in measuring learning outcomes. Pineda-Herrero et al. (2015) identified two main
streams in measuring learning outcomes in organizational practice: direct evaluation, or the
assessment of knowledge, skills, and attitudes that learners have acquired throughout their
learning experience, and indirect evaluation, or the assessment of the variables that may facilitate
or hinder transfer of learning. The former model of evaluation is often associated with
measurement through tests, questionnaires, and observations, while the latter model focuses on
variables to diagnose and predict learning transfer after completion of the learning program.
Pineda-Herrero et al. (2015) proposed that measuring gained knowledge could provide useful
insights relevant to the selection of a specific training program, while using a diagnostic
approach to understand factors facilitating or hindering learning transfer could provide predictive
models for learning transfer conditions.
Baldwin and Ford (1988) provided the foundation for the conceptualization of examining
training transfer through the framework which included training inputs, training outcomes, and
conditions for training transfer. Still, one of the key recommendations of Baldwin and Ford’s
work to the research community was to develop learning transfer criterion measures. Figure 1
depicts the conceptualization of Baldwin and Ford’s training transfer model.
Figure 1
Training Transfer Model
Note. Training transfer model. Reprinted from Transfer of training: A review and directions for
future research (p. 65), by T. Baldwin and J. Ford, 1988, Personnel Psychology. Copyright 2006
by John Wiley & Sons. Reprinted with permission (Appendix A).
Kirkpatrick’s (1996) four-level evaluation model is another frequently cited learning
outcome-focused framework to measure impact of learning at four levels: 1) learner reaction to a
training event, 2) acquisition of knowledge and skills via training program, 3) behavior change as
a result of training, and 4) results the organization gained through training intervention.
However, many researchers criticized the model and viewed it more as a taxonomy of outcomes,
citing its lack of evidence for a causal chain of outcomes (Bates, 2004) and its perceived neglect
of confounding factors (Burrow, 1996; Falletta, 1998; Holton, 1996; Moreau, 2017; Parker, 2013).
Building on this criticism of Kirkpatrick’s (1996) model, Holton (1996) emphasized the
importance of diagnosing and understanding the causal influences on learning outcomes at the
individual, training, and organizational levels, which led to the emergence of the human resource
development (HRD) evaluation and research framework. Figure 2 depicts the conceptualization of
Holton’s HRD evaluation model revised in 2005 with added “complete construct definitions,
where possible” (Holton, 2005, p. 50).
Figure 2
Revised HRD Evaluation Framework
Note. Revised Human Resource Development evaluation framework. Reprinted from Holton’s
evaluation model: New evidence and construct elaborations (p. 51), by E.F. Holton (2005),
Advances in Developing Human Resources, 7(1). Copyright 2005 by Holton. Reprinted with
permission.
In an attempt to extend the research on causal influences of learning transfer into a
validated scale, the Learning Transfer System Inventory (LTSI) was developed. The LTSI is a
diagnostic tool consisting of a two-part self-report assessment of 16 factors that impact learning
transfer with variables related to workplace environment, individual characteristics, and training
design (Bates et al., 2012; Holton et al., 2000).
Overview of the LTSI
Holton et al. (2000) noted that an "established set of transfer system scales with validated
constructs and known psychometric qualities would [….] add significantly to understanding of
the transfer process" (p. 337). The first version, a 66-item instrument built on the constructs
originally developed by Rouiller and Goldstein (1993), gave rise to
the empirical work and development of the transfer climate inventory (Holton et al., 1997). With
extensive research of the influencing factors in learning transfer, later versions of the instrument
were reframed as a transfer system (Holton et al., 2000). In the second version, Holton et al.
(2000) developed a 112-item instrument, later reduced to 68 items, intended to measure factors
influencing learning transfer to individual and organizational performance outcomes
(Yamkovenko et al., 2007). The second and subsequent versions of the instrument resulted in
measuring 16 factors in two distinct domains: 11 of the factors refer to a particular training
program, and five of them refer to general factors related to perception of training in the
organization (Holton et al., 2000).
In 2007, the exploratory and confirmatory factor analysis of the second version led to the
creation of the third version of the LTSI with a total of 89 items (Bates et al., 2012). The fourth
version of the LTSI consisting of 48 items and five demographic questions was created in 2008
“to increase organizational and respondent acceptance, minimize completion time, diminish
respondent fatigue, and provide a more practical, easier-to-use, more accessible instrument for
organizations, training practitioners, and researchers” (Bates et al., 2012, pp. 18-19). Table 1
provides a description of Holton et al.’s (2000) LTSI scales and definitions. Appendix B
provides an extended version of the LTSI scales, definitions, and descriptions.
Table 1
Learning Scales and Definitions
Scale Scale definition
Trainee characteristics scales
Learner readiness The extent to which individuals are prepared to enter and
participate in a training program.
Performance self-efficacy An individual’s general belief that they are able to change
their performance when they want to.
Motivation scales
Motivation to transfer learning The direction, intensity and persistence of effort toward
utilizing in a work setting skills and knowledge learned in
training.
Transfer effort-performance expectations The expectation that effort devoted to
transferring learning will lead to changes in job performance.
Performance-outcomes expectations The expectation that changes in job performance
will lead to outcomes valued by the individual.
Work environment scales
Feedback/performance coaching Formal and informal indicators from an organization
about an individual’s job performance.
Supervisor/manager support The extent to which managers support and reinforce the use
of learning on-the-job.
Supervisor/manager sanctions The extent to which individuals perceive negative responses
from managers when applying skills learned in training.
Peer support The extent to which peers reinforce and support the use of
learning on-the-job.
Resistance/openness to change The extent to which prevailing group norms are perceived by
individuals to resist or discourage the use of skills and
knowledge acquired in training.
Personal outcomes positive The degree to which applying training on-the-job leads to
outcomes that are positive for the individual.
Personal outcomes negative The extent to which individuals believe that failing to
apply new skills and knowledge learned in training will lead to
negative outcomes.
Ability scales
Opportunity to use learning The extent to which trainees are provided with or obtain
resources and tasks on-the-job enabling them to use the skills
taught in training.
Perceived content validity The extent to which the trainees judge the training content to
accurately reflect job requirements.
Personal capacity for transfer The extent to which individuals have the time, energy and
mental space in their work lives to make changes required to
transfer learning to the job.
Transfer design The extent to which training has been designed to give
trainees the ability to transfer learning to job application and
the training instructions match the job requirements.
LTSI’s Construct and Criterion Validity
The LTSI construct validity has been extensively researched (Bates et al., 2012).
Construct validity “refers to how well a measure actually measures the construct it is intended to
measure” (Netemeyer et al., 2003, p. 11). LTSI has shown content, convergent, and discriminant
validity in numerous studies (Bates et al., 2012). Furthermore, the instrument demonstrated
validity in various linguistic contexts. According to Bates et al. (2012), “The LTSI has been
translated into 17 different languages, and several of these studies have provided evidence that
supports the 16-factor structure” (p. 551).
Criterion-related validity refers to “the extent to which one measure estimates or predicts
the values of another measure or quality” (Salkind, 2007, p. 200). In a review of existing
transfer measurement instruments, Schoeb et al. (2019) found that only 10% of the instruments
relate training outcomes to predictive performance. While the LTSI as an instrument has
demonstrated evidence of construct validity, its predictive validity has been evaluated only for
select LTSI scales, such as individual performance (Bates et al., 2000) and transfer design
(Velada et al., 2007). Also, a few studies demonstrated that LTSI factors such as motivation to
transfer, learner readiness, transfer design, transfer-performance expectations, opportunity to
use learning, performance-outcomes expectations, and self-efficacy are stronger predictors of
job performance change after training than the remaining LTSI factors (Bates et al., 2007;
Devos et al., 2007).
A literature review suggested that LTSI does not have extensive empirical research on
predictive validity across all 16 factors (Katsioloudes, 2015). Scaduto et al. (2008) recommended
examining criterion-related validity of the complete instrument for training outcomes. A study by
Hutchins et al. (2013) took steps to measure predictive validity of the complete LTSI scales and
intent to transfer as a proximal outcome variable.
Intent to Transfer
Intent to change a behavior derives from the theory of planned behavior (TPB) and refers
to a person’s willingness to perform a desired behavior (Ajzen, 1991). The model views the
intention to engage in a behavior as positively correlated to performance and as dependent on the
person's attitude, perceived behavioral control, and social norms (Ajzen, 1991). TPB has been
analyzed in multiple contexts and has high predictive validity, and the construct of attitude
has been reported as having the highest level of positive impact on the intent to perform or
exhibit a certain behavior (Qu et al., 2019). Holton et al.’s (2000) LTSI may appear to have some
similar constructs, but it is important to note that the LTSI construct motivation to transfer is not
equivalent to intent to transfer. Al-Eisa et al. (2009) described the relationship between these
constructs such that the former starts the process, while the latter represents the conversion of
intent into action and a commitment to transfer the learning. Hutchins et al. (2013) recommended
exploring the relationship of the LTSI constructs and intent to transfer to merge the diagnostic
nature of the LTSI with the transfer outcome, where intent to transfer serves as a proximal
measure. Wang et al. (2017) described transfer intent as “arguably the most important precursor
to actual transfer” (p. 596). With the intent to transfer viewed as an antecedent to an individual's
behavior, it could be the closest measure of learning and transfer outcomes (Hutchins et al.,
2013).
Study’s Conceptual Framework
The purpose of this study is to examine the impact of the AI-enabled adaptive learning
program on transfer of learning guided by Holton’s (1996) human resource development (HRD)
evaluation and research model. Bandura’s (1986) social cognitive theory (SCT) represents an
overarching theoretical framework for the study with its central idea on triadic reciprocal
causation and influencing relationships between person, behavior, and environment. In these
relationships, Bandura (1999) noted, people “function as contributors to their own motivation,
behavior, and development within a network of reciprocally interacting influences” (p. 169). In
this study, SCT as a theoretical framework is operationalized via Holton et al.’s (2000) Learning
Transfer System Inventory (LTSI), which examines influencing factors supporting or hindering
transfer of learning to the workplace. A significant body of research has demonstrated that
learning evaluation and transfer is a complex process and depends on various factors, such as
individual characteristics, training design, and organization-related factors (Baldwin & Ford,
1988; Bates & Khasawneh, 2005; Edmondson & Lei, 2014; Holton, 2004; Schneider, 2014). In
an attempt to identify and measure influencing factors which impact learning outcomes, LTSI
was developed and validated as an assessment instrument with 16 influencing factors in four key
constructs: motivation, work environment, ability, and secondary influences (Holton et al., 1997;
Holton et al., 2000). Furthermore, the instrument measures 16 factors related to a specific
training event and general perception of training in the organization (Holton et al., 2000).
Pineda-Herrero et al. (2015) noted that such an approach fits into the category of indirect
evaluation, allowing generalization about the conditions for learning transfer. Intent to transfer,
derived from the theory of planned behavior (Ajzen, 1991), was used in a few studies as a proximal learning
outcome to increase predictive validity of transfer of learning factors on performance outcomes
(Hutchins et al., 2013).
The literature review provided key insights on the definition of adaptive learning, the
enablement of adaptive learning via Artificial Intelligence (AI), its various forms and attributes
applied to adaptive systems, and an overview of the initial findings from the program evaluation.
Literature on AI-enabled learning programs suggests that learner attributes such as motivation,
goal-setting, and self-regulation have an impact on learner engagement in the program. Self-
efficacy and learner readiness constructs align with the AI program’s learner attributes of
motivation, goal-setting, and self-regulation (Holton et al., 2000). Similarly, AI-enabled
adaptive learning, an instructional design that uses advanced algorithmic models to help
learners achieve subject and skill proficiency and meet individual learning needs, aligns with
the LTSI’s training design factors (Holton et al., 2000). Lastly, the literature review included a
discussion of the workplace environment essential for the transfer of learning to behavior, which
corresponded to LTSI environmental factors (Holton et al., 2000). Figure 3 depicts
conceptualization of the study approach aiming to examine the impact of the AI-enabled
adaptive learning program on transfer of learning at the workplace.
Figure 3
Conceptual Framework for AI-Enabled Learning Program Evaluation
Summary
Challenges presented by the accelerated pace of change in the Fourth Industrial
Revolution (4IR) require education entities, governments, and businesses to consolidate their
efforts to respond to the impact of advanced technologies on the future of work. Key
implications of the 4IR include changing skill requirements and shortages of skills in labor
markets. According to the projections of some government organizations and research firms at
the forefront of the topic of the future of work, more than 50% of the global workforce will need
to upskill or reskill by 2025 (WEF, 2020). However, the pace and scale of continuous changes in
skill requirements pose a challenge for many organizations to develop workforce development
strategies in response to the advancements of automation and Artificial Intelligence (AI).
Education communities are exploring solutions to address the problem. In this field, AI learning
technologies are gaining attention due to numerous benefits such as enablement of personalized
learning adapted to individual needs, which could also drive efficiency in achieving learning
outcomes. Furthermore, these technologies provide access to multiple levels of data allowing
further analysis, iteration, and improvement of these solutions. However, despite extensive
experimental work with AI-enabled adaptive learning, limited empirical research is available to
examine the impact of AI-enabled adaptive learning, specifically on the transfer of learning to
workplace behaviors. This is particularly important given that training transfer is estimated at
only 10-20% (Velada et al., 2007), while investment in workplace training continues to
grow, reaching $400 billion in 2015 (Carnevale et al., 2015). The market for AI-enabled adaptive
learning technologies is expected to grow at a rate of 47.77% between 2018 and 2022 (Technavio, 2018).
The purpose of this study is to examine the impact of an AI-enabled adaptive learning program
on learning transfer of job behaviors to determine the program’s efficacy in achieving individual
and organizational performance outcomes. The study results will help to inform future strategies
on addressing the upskilling and reskilling needs of the global workforce using emerging
advanced learning technologies such as AI-enabled adaptive learning.
Chapter Three: Methodology
Chapter Three provides an overview of the research methodology of the study, which
aimed to examine the impact of an Artificial Intelligence (AI)-enabled adaptive learning program
on facilitating learning transfer to the workplace. The study examined self-reported learning
transfer factors and their relationship to intent to transfer following completion of the AI-enabled
learning program. The chapter is organized into sections outlining research questions and
methodological design, including participant selection, instrumentation, data collection and
analysis procedures, strategies to maximize study validity and reliability, researcher
positionality, ethics, and potential limitations and delimitations of the study.
Research Questions
1. Are there differences in the person- and training-related attributes based on type of
learning completed?
2. Are there differences in the intent to transfer based on the type of learning program
completed?
3. What is the relationship of person, training, and environment attributes to the intent to
transfer learning, controlling for the type of learning program completed?
4. What are the experiences of learners in the AI-enabled program and how do they relate to
the intent to transfer?
Overview of Methodology
The study used an explanatory sequential mixed method of inquiry (Creswell & Creswell,
2018). This study method answered research questions by making inferences about existing or
predictive correlational relationships between variables of the research questions using
quantitative and qualitative data (Creswell & Creswell, 2018). The first phase of the study was
quantitative and employed the posttest only nonequivalent control group design to statistically
compare self-reported LTSI factors (Holton et al., 2000) between two groups and evaluate their
relationship to intent to transfer (Hutchins et al., 2013; Yamkovenko, 2009). The analysis
compared data from a control group consisting of participants who completed a traditional
program and a treatment group consisting of participants who completed an AI-enabled adaptive
learning program. Both learning programs focused on content aiming to develop proficiency in
the same skill set. The second phase of the study was qualitative and collected data from
self-selected participants in the treatment group, all of whom had completed the AI-enabled
adaptive learning program, to answer questions about their experiences with the program and
how those experiences relate to the intent to transfer.
A mixed method design aligned with my pragmatic worldview where the “approach
should be adopted rather than a commitment to a paradigm and the philosophical doctrine on
which it is supposedly based” (Bryman, 2006, p. 118). The explanatory sequential mixed method
inquiry helped to explain the quantitative results with qualitative data (Creswell & Creswell,
2018). The quantitative part of the mixed-methods inquiry was ideal for the study of this problem
as it produced quantifiable measures and inferences about relationships among variables related
to research questions (Creswell & Creswell, 2018). The qualitative part of the inquiry collected
rich insights on the complex realities of learners in the AI-enabled adaptive learning environment
(Regnault et al., 2018). Table 2 provides a matrix view of the research questions and the data
sources used.
Table 2
Data Sources
Research questions Survey Interview
RQ1: Are there differences in the person- and training-related
attributes based on type of learning completed?
X
RQ2: Are there differences in the intent to transfer based on the
type of learning program completed?
X
RQ3: What is the relationship of person, training, and
environment attributes to the intent to transfer learning,
controlling for the type of learning program completed?
X
RQ4: What are the experiences of learners in the AI-enabled
program and how do they relate to the intent to transfer?
X
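The quantitative phase outlined above, group comparisons of LTSI factors (RQ1, RQ2) and a regression of intent to transfer on person, training, and environment attributes while controlling for program type (RQ3), can be sketched as follows. The simulated data, variable names, and effect sizes are illustrative assumptions only, not the study's actual dataset or results.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative only: simulated posttest-only data for a control group
# (traditional program) and a treatment group (AI-enabled program).
n = 64  # participants per group
group = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = traditional, 1 = AI-enabled

# Hypothetical LTSI-style factor scores on a 1-5 Likert scale.
motivation = np.clip(rng.normal(3.5 + 0.3 * group, 0.6), 1, 5)
transfer_design = np.clip(rng.normal(3.4 + 0.4 * group, 0.6), 1, 5)
# Proximal outcome: intent to transfer, influenced by both factors.
intent = np.clip(0.5 + 0.4 * motivation + 0.4 * transfer_design
                 + 0.2 * group + rng.normal(0, 0.4, 2 * n), 1, 5)

# RQ1/RQ2: independent-samples t statistic (pooled variance) per variable.
def t_stat(x, g):
    a, b = x[g == 0], x[g == 1]
    sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (b.mean() - a.mean()) / np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))

print("t (intent to transfer):", round(t_stat(intent, group), 2))

# RQ3: least-squares regression of intent on the factors, controlling for program type.
X = np.column_stack([np.ones(2 * n), motivation, transfer_design, group])
coef, *_ = np.linalg.lstsq(X, intent, rcond=None)
print("coefficients [intercept, motivation, design, program type]:", np.round(coef, 2))
```

In the actual study, the observed t statistics would be compared against critical values for significance, and the program-type coefficient would show whether intent to transfer differs between programs after accounting for the LTSI attributes.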
The Researcher
Finding effective training solutions to address the problem of growing skill gaps was a
topic of personal and professional interest for me. As a pragmatist, I sought to examine the
outcomes of the training programs intending to solve the problem (Kaushik & Walsh, 2019).
According to Maaroof (2019), pragmatists should be biased only to the degree necessary to
enhance the research and help answer the research questions. For the given study, inquiry
comprised a combination of deductive examination of the quantifiable impact of the AI-enabled
adaptive learning program on the transfer of learning, followed by a qualitative analysis to
explain or elaborate on the quantitative results (Creswell & Creswell, 2018).
Potential position of power in the study related to my role in ABZ as a leader of the
learning and development (L&D) function with direct access to the organization’s strategy and
people data, skills gaps, and workforce development strategies. Therefore, my role in the study
was stated explicitly and delineated from my official job in the organization. In addition, access
to organizational data was governed by the organization's data access protocol for external
parties. To mitigate any personal bias, data collection and interpretation in the study followed a
rigorous process and accounted for the reliability and validity of instruments and methods
(Creswell & Creswell, 2018).
Data Sources
The explanatory sequential mixed methods research used surveys and interviews as data
sources for integration of the findings from the quantitative and qualitative analysis (Tashakkori
& Creswell, 2007). In the interpretation of the study results, the qualitative results helped to
expand or explain quantitative results (Creswell & Creswell, 2018). This two-phase data
collection research method helped to examine the problem from both the deductive and inductive
perspectives and, consequently, made it possible to combine hypothesis testing and theory generation
within a single study (Jogulu & Pansiri, 2011).
Surveys
The surveys aimed to obtain information about the trends, opinions, and
attitudes of the study participants (Merriam & Tisdell, 2016). The survey method is used in
various quantitative research designs to answer descriptive or experimental research questions
(Creswell & Creswell, 2018). Among the benefits of surveys as a data collection mechanism is
the economy of time and the relative simplicity of administration using online tools and large
sample data collection, which allows generalization of results to a broader population (Fowler,
2014; Sue & Ritter, 2012). The data for this study was collected via an online survey using the
fourth version of the LTSI (Holton et al., 2000) to measure 16 self-reported factors influencing
transfer of learning. The survey also included questions to measure intent to transfer as a
proximal variable to transfer outcome (Bates & Holton, 2007; Hutchins et al., 2013).
Participants
The population in this study included 1,000 employees in the ABZ organization who were
expected to participate in a skill development program. The participants had the choice to
select which type of a program to complete using the organization's learning platform. As a
result, some of the participants experienced the AI-enabled learning program, while others
experienced a traditional learning program consisting of video-based modules and on-the-job
project assignments. Both programs resided in the organization’s learning platform, and the only
indication of the difference between them available to a learner selecting a program was the
course description.
The AI-enabled learning program consisted of a series of questions asked at the beginning of the
program to collect initial data on the learner’s preference in the interface configuration, prior
knowledge, and level of confidence in applying the skills on the job. Throughout the
program, the learner was prompted to answer questions, received feedback on their level
of proficiency in the content, and had flexibility and control over which sections of
the program to study further. The traditional program began with online, video-based
content and included knowledge checks throughout the program. Upon successful completion of
the embedded quiz, the learning program culminated in a project-based assignment. The learner
needed to follow the guidelines of the provided rubric to complete all assigned tasks within a
project before completing the learning program.
The sampling frame for the study included all people who completed either program
aiming to build a certain level of proficiency around the same skill set. The study employed
a posttest-only nonequivalent-groups design using a convenience sample of participants who
completed either the AI-enabled learning program or the traditional program. These participants
were identified via the organization's learning management system. While the control group
existed in the study, random assignment was not possible. However, two groups were selected as
equivalent populations, sharing similar characteristics such as employment within the same
organization, job function, and being trained on the same skill set.
An a priori power analysis using the G*Power calculator (Faul et al., 2007)
was used to determine the study’s sample size, with an equivalent number of participants needed
in each condition (AI-enabled learning program and traditional program) to detect significant
group differences (Creswell & Creswell, 2018). The recruitment
method was based on voluntary response and involved sending participants from both groups an
email with an invitation to participate in the study. The invitation email included the purpose of
the study, USC’s Institutional Review Board (IRB)-template based information sheet, and a
statement guaranteeing response anonymity.
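An a priori power analysis of this kind can be sketched in Python with statsmodels. This is a sketch only: the study does not report the effect size, alpha, or power entered into G*Power, so the values below (a medium effect size of d = 0.5, α = .05, power = .80, equal group sizes) are illustrative assumptions.

```python
import math

from statsmodels.stats.power import TTestIndPower

# Assumed inputs (not reported in the study): d = 0.5, alpha = .05,
# power = .80, equal group sizes (ratio = 1.0), two-sided test.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided"
)
print(math.ceil(n_per_group))  # 64 participants per condition under these assumptions
```

A larger assumed effect size would shrink the required sample, and a smaller one would grow it, which is why the unreported G*Power inputs matter for interpreting the recruitment target.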
Instrumentation
The study used the Learning Transfer System Inventory (LTSI) to collect self-reported
data on influencing factors of learning transfer (Holton et al., 2000). Collected data captured
learner perceptions of learning transfer catalysts and barriers following AI-enabled adaptive
learning program or traditional program completion. The LTSI (Holton et al., 2000) is a
validated instrument controlled by the LTSI authors. The version of the instrument used (the
fourth version) contains 48 items across 16 factors within four categories: motivation, work
environment, ability, and secondary influences (Holton et al., 2000). The instrument has an
extensive empirical research history, has been validated through numerous studies, and has been
translated into 17 languages (Hutchins et al., 2013). In this study, 16 LTSI factors represented
independent variables; these factors were learner readiness, performance self-efficacy,
motivation to transfer learning, performance expectations, outcome expectations, feedback,
supervisor support, supervisor sanctions, peer support, resistance to change, positive personal
outcomes, negative personal outcomes, opportunity to use, personal capability for transfer,
transfer design, and perceived content validity (Holton et al., 2000). In addition, intent to transfer as a proximal
transfer outcome was a dependent variable in the study, which was measured using four items of
the LTSI survey, representing the entire existing scale for the intent construct in the instrument
(Holton & Bates, 2007; Yamkovenko, 2009).
The introductory section of the survey provided a study summary, assurance around
survey anonymity, data use, data storage, and logistics. The full survey consisted of 48 closed-ended
questions from the LTSI (Holton et al., 2000) and four closed-ended questions on intent to
transfer, with exhaustive and mutually exclusive response options on a five-point ordinal
Likert-type scale. It also included three demographic questions related to the type of completed
program, participant age, and gender. The full survey is provided in Appendix C.
Data Collection Procedures
The survey was administered using a web-based online portal LTSInventory owned by
the LTSI authors. In addition to 48 transfer factor related items, the fourth version of the LTSI
instrument included four items related to intent to transfer from the Intent to Transfer instrument
developed by Holton and Bates (2007) and used in the study conducted by Yamkovenko (2009).
Also, three demographic questions were added to the survey by the LTSI authors per my request.
Lastly, at the end of the survey, a note was included with an invitation to participate in a
voluntary follow-up interview if the participant met the condition of completing an AI-enabled
adaptive learning program.
An email with the survey purpose and the direct hyperlink to access the online survey
provided by LTSInventory was sent to the participants by the Human Resources specialist on my
behalf. Appendix D includes an introductory note for the email that was sent to participants with
an invitation to complete the survey. Also, the email included the Information Sheet shown in
Appendix E as a way of gaining informed consent. The estimated time to complete the survey
was between 15 and 20 minutes. All collected data were stored in a password-protected
computer and will be destroyed three years after the study’s conclusion. All data collection
procedures were approved by USC’s IRB and the organization of study before the
commencement of data collection.
Data Analysis
The overall objective of data analysis was to determine what factors were the most
influential in learning transfer, compare them between two types of programs, and examine their
relationship to the intent to transfer. Because the LTSI’s scale-scoring information is proprietary,
the instrument authors conducted the scale score analysis. Upon receipt of the statistical analysis
from the LTSI authors, I conducted descriptive analysis of all variables, including means, standard
deviations, and ranges of scores, using the Statistical Package for the Social Sciences (SPSS)
(Creswell & Creswell, 2018). A t-test was used to answer the study’s first two
research questions (RQ1 and RQ2) by comparing the means of LTSI factors based on the type of
learning program completed. Multiple linear regression (MLR) was used to answer the third
research question (RQ3) to identify whether a relationship existed between LTSI factors
and intent to transfer, and to assess the strength of that relationship (Creswell & Creswell,
2018), controlling for the type of learning program completed. Table 3 provides details on the
data analysis methods mapped to research questions.
Table 3
Data Analysis Methods

Research question 1: Are there differences in the person- and training-related attributes based on type of learning completed?
Independent variable: type of learning program; nominal (AI-enabled learning program vs. traditional learning program)
Dependent variables: (1) motivation to transfer, (2) transfer effort expectations, (3) performance expectation, (4) learner readiness, (5) self-efficacy, (6) perceived content validity, (7) transfer design, (8) personal capability, (9) opportunity to use; all interval (Likert-type)
Statistical test: t-test

Research question 2: Are there differences in the intent to transfer based on the type of learning program completed?
Independent variable: type of learning program; nominal (AI-enabled learning program vs. traditional learning program)
Dependent variable: intent to transfer; interval (Likert-type)
Statistical test: t-test

Research question 3: What is the relationship of person, training, and environment attributes to the intent to transfer learning, controlling for the type of learning program completed?
Independent variables: (1) motivation to transfer, (2) transfer effort expectations, (3) performance expectation, (4) learner readiness, (5) self-efficacy, (6) perceived content validity, (7) transfer design, (8) personal capability, (9) opportunity to use, (10) feedback, (11) peer support, (12) personal outcomes-positive, (13) supervisor support, (14) personal outcomes-negative, (15) supervisor sanctions, (16) openness to change, all interval (Likert-type); (17) type of learning program, nominal (AI-enabled learning program vs. traditional learning program)
Dependent variable: intent to transfer; interval (Likert-type)
Statistical test: multiple linear regression
Validity and Reliability
The question of validity in quantitative research can relate to many forms of validity
including content, concurrent, and construct validity (Creswell & Creswell, 2018). The results
achieved via a quantitative research approach can be challenged based on the validity of the data,
design questions, and method of data collection (Creswell & Creswell, 2018). The LTSI has been
empirically studied by several researchers and has demonstrated strong construct and criterion
validity (Yamkovenko et al., 2007). Factor analysis with over 6,000 respondents established
convergent and divergent validity through correlations with related constructs (Fagan, 2017).
The development and study of the third, 89-item version of the LTSI added 21 items to the
68 items of the second version and strengthened the reliability of the instrument’s scales.
The new scales included personal outcome-positive, personal capacity for transfer,
supervisor/manager sanctions, opportunity to use learning, and feedback/performance coaching.
Table 4 provides the reliability coefficients for the third version of the instrument and is adapted
from Holton et al. (2000). Appendix B provides an expanded version of the LTSI with
definitions, descriptions, loadings, and reliability coefficients of the scales.
Table 4
Reliability Coefficients for Third LTSI

Scale Number of items α
Supervisor/manager support 6 .91
Transfer design 4 .85
Resistance/openness to change 6 .85
Perceived content validity 5 .84
Motivation to transfer learning 4 .83
Performance-outcomes expectations 5 .83
Peer support 4 .83
Transfer effort-performance expectations 4 .81
Personal outcomes negative 4 .76
Performance self-efficacy 4 .76
Learner readiness 4 .73
Opportunity to use learning 4 .70
Feedback/performance coaching 4 .70
Personal outcomes positive 3 .69
Personal capacity for transfer 4 .68
Supervisor/manager sanctions 3 .63
In this study, the fourth version of the LTSI instrument (Holton et al., 2000), with
established validity, was used. This version uses the same 16 scales and retains the same
psychometric properties (Bates et al., 2012). In addition, four questions were added to the fourth
version of the LTSI to measure intent to transfer from the Intent to Transfer instrument
developed by Holton and Bates (2007). Potential threats to internal validity include maturation
and history (Creswell & Creswell, 2018), which were mitigated by sending surveys to program
participants no later than two weeks after program completion to reduce follow-up time between
program completion and survey response (Blume et al., 2009). As one of the most important
forms of reliability, the study’s internal consistency was measured via Cronbach’s alpha (α) and
stayed in the optimal value range between .7 and .9 (Creswell & Creswell, 2018).
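As an illustration of the internal-consistency statistic used here, Cronbach’s alpha can be computed from a respondents-by-items matrix as follows. This is a sketch only; the study obtained its alpha values through the LTSI authors’ proprietary scoring, and the example matrix below is fabricated to show the formula’s behavior.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of scale scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Perfectly consistent items: each respondent answers all four items identically,
# which drives alpha to its maximum of 1.0
base = np.arange(1, 11, dtype=float)
perfect = np.column_stack([base, base, base, base])
print(round(cronbach_alpha(perfect), 2))  # 1.0
```

Values between .7 and .9, as reported for this study, indicate that the items within each scale vary together without being fully redundant.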
Interviews
The interviews aimed to obtain information that could not be observed or
would have been difficult to interpret otherwise (Patton, 2015). In this study, interviews were used as part
of the second phase of the study. The qualitative interview represented a semi-structured
interview with open-ended questions to elicit experiences, views, and opinions of participants in
the AI-enabled adaptive learning program. Also, the interview process employed a neo-positivist
conception of interviewing in which a “skillful interviewer asks good questions, minimizes bias
through his/her neutral stance, generates quality data and produces valid findings” (Roulston,
2010, p. 52).
Participants
The study employed purposeful sampling with the primary goal to gain insight into the
subject of study from participants who could provide rich data (Merriam & Tisdell, 2016). The
purposeful selection followed a rigorous quantitative sampling of the first phase of the
explanatory sequential mixed method study (Creswell & Creswell, 2018). According to Creswell
and Creswell (2018), the purposeful sampling participants should be knowledgeable about the subject or
phenomenon being studied, willing to participate in the study, and represent the range of points
of view.
Participants were selected for interviews based on the criterion of completing an AI-
enabled adaptive learning program. Survey participants who completed the AI-enabled adaptive
learning program and took the survey were invited to participate in an interview by submitting
their contact information through a Qualtrics link not connected to their survey responses. In
purposeful sampling, “The size of the sample is determined by informational considerations”
(Lincoln & Guba, 1985, p. 202). The goal of the interviews was to reach the saturation point
when no further information provided additional insights for the study (Charmaz, 2006). The
goal was to interview eight to ten participants. It was planned that if the number of volunteers
were to exceed a defined target, random recruitment from volunteers would be implemented. If
there were not enough volunteers for the interview, a $25 gift card incentive would be offered to
all who volunteered to be interviewed.
Instrumentation
The interview protocol followed a semi-structured approach, where the “interview guide
included a mix of more and less structured questions” (Merriam & Tisdell, 2016, p. 110). The
semi-structured interview protocol was chosen because it allowed for a balance of the main
questions with follow-up probing questions, yielding
more in-depth information based on individual responses. The questions' content was guided by
the concepts that emerged from the literature review and framed around learner experiences in
the AI-enabled adaptive learning program and how they related to the intent to transfer.
The interview protocol included basic information about the interview, an introduction
section, an opening question, content questions, probing questions, and closing instructions
(Creswell & Creswell, 2018). The interview protocol included a total of ten questions using
opinion, knowledge, and behavior type questions (Patton, 2015). The opinion questions explored
what the learners “think about something,” knowledge questions elicited “actual factual
knowledge about a situation,” and experience and behavior questions “got at the things a person
does or did” (Merriam & Tisdell, 2016, p. 118). The interview protocol is provided in Appendix
F.
Data Collection Procedures
Data collection in the form of interviews as the second phase of the explanatory
sequential mixed method study was built on the quantitative results (Creswell & Creswell, 2018).
The results of the quantitative data from surveys were used to “connect to the qualitative data
collection” (Creswell & Creswell, 2018, p. 222). I sent participants an email describing the
interview’s purpose. Appendix G includes an introductory note for the email that was sent
to participants with an invitation to participate in an interview. Also, the email with the
Information Sheet shown in Appendix H described the purpose of the study and outlined a
summary of the key expectations around interview process confidentiality, use of data, and
logistics. Each interview was conducted during a workday and lasted for about 45 minutes. The
interview was conducted via the organization's internal video conferencing tool, which had
capabilities to record video calls and transcribe the notes to ensure everything shared by the
interviewee was preserved for analysis (Merriam & Tisdell, 2016). In addition to the reliance on
the computer software to record and transcribe the interview, notes were taken throughout the
interview to identify potential themes and leave an opportunity for self-reflection (Creswell &
Creswell, 2018). Also, at the beginning of the interview, participants were reminded of
confidentiality of the interview process and the logistics around use, storage, and management of
interview data. They were also asked for permission to record the interview. The opening and
closing notes for the interview process are provided in the interview protocol in Appendix F.
Data Analysis
Merriam (1988) posited that in qualitative research, data collection and analysis
occur simultaneously. Analysis of the transcribed interviews aims to make sense of
the collected data (Creswell & Creswell, 2018). In this study, data analysis was managed using a
qualitative computer software program, Atlas.ti, to organize data, identify and connect codes,
and generate emerging themes. In the interpretation phase, reporting included quantitative results
for the first phase, qualitative results for the second phase, and a discussion on how the
qualitative results explained or expanded the quantitative results (Creswell & Creswell, 2018).
Credibility and Trustworthiness
In qualitative research, validity and reliability are positioned differently than in
quantitative inquiry (Creswell & Creswell, 2018), and are often referred to as trustworthiness and
credibility of the qualitative study (Creswell & Miller, 2000). In this study, credibility and
trustworthiness were accomplished via data triangulation from the quantitative inquiry during the
first phase of data collection and interviewee transcript review to confirm interpretations of the
meanings with the participants (Creswell & Creswell, 2018). Also, to ensure qualitative
reliability, recordings were used to help reinforce objectivity. Seale and Silverman (1997) listed
that among the strategies required to ensure trustworthiness in qualitative research are “recording
data objectively and comprehensively, including the use of audiotapes, videotapes and different
levels of details in the transcription of data” (p. 380), including the steps of data collection and
analysis procedures. Lastly, data were collected until the saturation point was reached, and
reflexivity was continuously employed during both the data collection and analysis processes
(Creswell & Creswell, 2018).
Ethics
I complied with the university’s code of ethics and received guidance and final approval
from the dissertation chair to submit a research design proposal to the Institutional Review Board
(IRB) for further review and approval. No data were collected prior to IRB approval. I also
sought the approval of the organization’s research review committee before initiating the study.
A number of ethical considerations recommended by Creswell and Creswell (2018)
defined key practices in this study, such as providing the USC IRB template-based
information sheet before participants completed the survey. The information sheet explicitly guaranteed
response anonymity and confidentiality and discussed storage of all associated data in a
password-protected computer and its destruction after three years. Also, participants completing
the AI-enabled adaptive learning or traditional program were informed via the information sheet
that there was no compensation or incentive for survey completion. To address power and
positionality considerations, throughout the entire study I proactively informed participants about
my role as a researcher and separated it from my current role within the organization. I reminded
participants of this delineation in all distributed emails, surveys to participants, and interviews.
Chapter Four: Results and Findings
Chapter Four provides an overview of the participants and presents results and findings
of the study to understand the impact of an Artificial Intelligence (AI)-enabled adaptive learning
program on facilitating learning transfer to the workplace in comparison with the traditional
learning program. The study used an explanatory sequential mixed method of inquiry to explain
the quantitative results with qualitative data (Creswell & Creswell, 2018). Specifically, a survey
was conducted at the first phase of the inquiry for quantitative analysis and interviews were
conducted at the second phase of the inquiry for qualitative analysis. The data that was collected
helped answer four research questions:
1. Are there differences in the person- and training-related attributes based on type of
learning completed?
2. Are there differences in the intent to transfer based on the type of learning program
completed?
3. What is the relationship of person, training, and environment attributes to the intent to
transfer learning, controlling for the type of learning program completed?
4. What are the experiences of learners in the AI-enabled program and how do they relate to
the intent to transfer?
Participants
This section provides an overview of the participants of a two-phased study. Participation
in the research was based on the willingness of each participant to contribute to research on the
impact of the AI-enabled adaptive learning program on the intent to transfer in comparison with
the traditional learning program. All participants were employees of the organization of study
with access to both AI-enabled adaptive learning and traditional learning programs.
Survey Participants
In the first phase of the study, a link to the web-based survey portal
LTSInventory was distributed by email from the Human Resources specialist to employees in the
organization of study who had completed the AI-enabled learning program or a traditional learning
program. The goal of the Learning Transfer System Inventory (LTSI) survey was to collect self-reported
data about the factors influencing learning transfer (Holton et al., 2000) and capture
learner perceptions of learning transfer catalysts and barriers following AI-enabled adaptive
learning program or traditional program completion. With a confidence level of 90%, a
population size of 1,000 employees, and a margin of error of 5%, the target number of survey
participants was 220, with 110 in each group. At the time of the study, 437 employees
representing 43.7% of the population size in the organization of study completed one of the two
learning programs. In total, 57.5% of the invited learners participated in the survey. Specifically,
39% of respondents completed the traditional learning program and 61% of the respondents
completed the AI-enabled learning program. The number of responses met the identified threshold
for a statistically representative sample of the population. Of the participants,
69% identified as male, 29% identified as female, and 2% selected none of the
provided options. Sixteen percent of the respondents were less than 25 years old, 34% between
25 and 35 years old, 27% between 36 and 45 years old, 19% between 46 and 55 years old, and
4% between 56 and 65 years old. Table 5 provides demographic characteristics of survey
respondents.
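The target of roughly 220 respondents can be checked against Cochran’s sample-size formula with a finite-population correction. This is a sketch under assumptions: the study does not state which formula was used, and the stated target of 220 presumably includes some rounding or a small recruitment buffer above the formula’s result.

```python
import math

def survey_sample_size(population: int, moe: float, z: float = 1.645, p: float = 0.5) -> int:
    """Cochran's sample size with finite-population correction.
    z = 1.645 corresponds to a 90% confidence level; p = 0.5 is the
    most conservative assumed proportion."""
    n0 = (z ** 2) * p * (1 - p) / (moe ** 2)   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)       # finite-population correction
    return math.ceil(n)

print(survey_sample_size(population=1000, moe=0.05))  # 214, close to the stated target of 220
```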
Table 5
Demographic Characteristics of Survey Respondents
Characteristic AI-enabled learning program participants (n, %) Traditional learning program participants (n, %) Full sample (n, %)
Gender
Male 167 (55) 136 (45) 303 (69)
Female 94 (74) 33 (26) 127 (29)
Transgender - - -
None of the above 7 (2) - 7 (2)
Age (years)
Below 25 60 (83) 12 (17) 72 (16)
25-35 145 (98) 3 (2) 148 (34)
36-45 32 (28) 84 (72) 116 (27)
46-55 25 (30) 57 (70) 82 (19)
56-65 6 (32) 13 (68) 19 (4)
Above 65 - - -
Note. N = 437 (n = 268 for AI-enabled learning program, n = 169 for traditional learning
program).
Interview Participants
At the end of the survey, a note was included with an invitation to participate in a
voluntary follow-up interview if the participant met the condition of having completed an AI-
enabled adaptive learning program. The goal of the study was to interview eight to ten learners.
Twenty-five of the 268 participants who completed an AI-enabled learning program
expressed interest in participating in the interviews. From those who volunteered, eight
people were selected randomly, though two participants requested to remove themselves from
the study due to work related priorities. Two randomly chosen alternate participants were added
to the participant list. All participant names were replaced with pseudonyms to protect
confidentiality. Table 6 provides additional demographic details with protection of the
confidentiality of participants in the small sample.
Table 6
Demographic Characteristics of Interview Participants
Demographic variable Characteristic
Gender Female = 37.5%
Male = 62.5%
Occupation Product Management = 50%
Software Development Engineering = 25%
System Development Engineering = 25%
Research Question 1: Are There Differences in the Person- and Training-Related
Attributes Based on Type of Learning Completed?
Differences in the person- and training-related attributes based on type of learning
completed were analyzed using quantitative survey data. A t-test as a statistical analysis method
enabled comparison of the average values of the data sets between participants of the AI-enabled
learning program and traditional learning program. According to the study's conceptual
framework, person- and training-related attributes were captured in nine scales of the LTSI
(Holton et al., 2000). This section will review the results
related to the differences in the person- and training-related attributes based on type of learning
completed.
Person and Training-Related Results from the LTSI Survey
For purposes of this study, the LTSI factors motivation to transfer, transfer effort
expectations, performance expectations, learner readiness, and self-efficacy were categorized as
person-related attributes, and perceived content validity, transfer design, and personal capacity
for transfer were categorized as training-related attributes. These scales, drawn from the 16
scales of the 52-item survey, span the LTSI categories of motivation, ability, and secondary
influences. The two groups, 169 participants who completed the traditional training program and
268 participants who completed the AI-enabled training program, were compared on the mean
values of these continuous, normally distributed scales, each measured on a five-point Likert-type
scale.
An independent t-test was conducted to examine differences in person- and training-related
LTSI scales between participants of the AI-enabled learning program and the traditional
training program. An alpha level of .05 was utilized. The values for skewness and kurtosis in
the data from both groups demonstrated normal univariate distribution. For each scale, the
assumption of variance homogeneity was tested and satisfied via Levene’s F test for equality of
variances. LTSI scales were compared between the two groups, and the equality of means for
each scale was reported in the t-test. Table 7 provides alpha values of the person- and
training-related LTSI scales for the current study.
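The per-scale procedure just described (Levene’s test for variance homogeneity followed by an independent-samples t-test) can be sketched with scipy. The scores below are simulated stand-ins for the actual survey data, which was analyzed in SPSS; the means and spread are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated motivation-to-transfer scores for the two groups (illustrative only)
ai = rng.normal(4.25, 1.0, 268)
trad = rng.normal(3.00, 1.0, 169)

# Levene's test: p > .05 means equal variances cannot be rejected
levene_stat, levene_p = stats.levene(ai, trad)

# Independent-samples t-test at alpha = .05; fall back to Welch's
# correction if the homogeneity assumption is violated
t_stat, p_value = stats.ttest_ind(ai, trad, equal_var=levene_p > 0.05)
print(t_stat, p_value)
```

Running Levene’s test first, as the study did, determines whether the standard pooled-variance t-test or the unequal-variance (Welch) form is appropriate for each scale.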
Table 7
Alpha Values of the Person- and Training-Related LTSI Scales
LTSI construct LTSI items Current study α
Training-related attributes
Transfer design .89
Personal capacity for transfer .74
Perceived content validity .85
Person-related attributes
Performance-outcomes expectations .78
Transfer effort-performance expectations .75
Learner readiness .81
Performance self-efficacy .82
Motivation to transfer .80
Person-Related Results
Results of the survey indicated that there were a number of statistically significant
differences between the participants of the AI-enabled learning program and the traditional
learning program. Specifically, there was a statistically significant difference in motivation to
transfer between the participants of the AI-enabled learning program (M = 4.25, SD = 1.52) and
the traditional learning program (M = 3.00, SD = 1.32), t(442) = 9.09, p < .001. There was a
statistically significant difference in learner readiness between the participants of the AI-enabled
learning program (M = 4.52, SD = .98) and the traditional learning program (M = 3.34, SD =
.79), t(778) = 13.83, p < .001. There was a statistically significant difference in self-efficacy
between the participants of the AI-enabled learning program (M = 4.61, SD = 1.25) and the
traditional learning program (M = 3.25, SD = .99), t(307) = 12.61, p < .001. There was a
statistically significant difference in transfer effort expectations, where traditional learning
program participants (M = 4.35, SD = 1.11) had higher scores than AI-enabled learning program
participants (M = 3.59, SD = .59), t(315) = -8.25, p < .001. However, there was not a significant
difference in performance expectations between AI-enabled learning program participants (M =
3.37, SD = .25) and traditional learning program participants (M = 2.95, SD = .69), t(0) = 7.66, p
= .13.
Training-Related Results
Results of the survey indicated that there was a statistically significant difference in
perceived content validity between the participants of the AI-enabled learning program (M =
4.89, SD = .92) and the traditional learning program (M = 3.21, SD = 1.22), t(203) = 15.38, p <
.001. There was a statistically significant difference in perception of training design quality
between the participants of the AI-enabled learning program (M = 4.68, SD = 1.26) and the
traditional learning program (M = 2.98, SD = .89), t(382) = 16.50, p < .001. However, there was
not a significant difference in the perception of personal capacity for transfer between AI-enabled
learning program participants (M = 3.98, SD = 1.22) and traditional learning program
participants (M = 3.67, SD = .21), t(0) = 4.06, p = .37.
Effect sizes for all t-tested scales were measured using Cohen’s d to determine the
standardized difference between the means of the traditional and AI-enabled adaptive learning
program participant groups. Specifically, Cohen’s d values for the motivation to transfer, transfer
effort expectations, performance expectations, learner readiness, self-efficacy, perceived content
validity, and transfer design scales were 0.8 and above, indicative of a strong degree of practical
significance; a small effect size (between 0.2 and 0.5) existed for the personal capacity to transfer
scale. Table 8 provides results of the independent t-test analysis examining differences in the person-
and training-related attributes based on the type of learning program completed. Results of the
study will be further discussed in Chapter Five.
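The reported effect sizes can be reproduced from the summary statistics in Table 8. The values appear consistent with the form of Cohen’s d that standardizes by the root mean square of the two group standard deviations; this is an assumption on my part, since the exact formula used is not stated.

```python
import math

def cohens_d(m1: float, s1: float, m2: float, s2: float) -> float:
    """Cohen's d using the root-mean-square of the two group SDs
    as the standardizer: d = (m1 - m2) / sqrt((s1^2 + s2^2) / 2)."""
    return (m1 - m2) / math.sqrt((s1 ** 2 + s2 ** 2) / 2)

# Motivation to transfer (Table 8): AI (M = 4.25, SD = 1.52) vs. trad (M = 3.00, SD = 1.32)
print(round(cohens_d(4.25, 1.52, 3.00, 1.32), 3))  # 0.878, matching Table 8
```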
Table 8
t-test Results for Differences in Person- and Training-Related Attributes Between AI-Enabled
and Traditional Training (Nai = 267, Ntrad = 170)

Scale Group N M SD t Sig. (2-tailed) Cohen's d
Motivation to transfer AI 267 4.25 1.52 9.091 < .001 .878
Motivation to transfer Trad 170 3.00 1.32
Transfer effort expectations AI 267 3.59 .59 -8.216 < .001 .861
Transfer effort expectations Trad 170 4.35 1.11
Performance expectations AI 267 3.37 .25 7.662 .46 .809
Performance expectations Trad 170 2.95 .69
Learner readiness AI 267 4.52 .98 13.833 < .001 1.325
Learner readiness Trad 170 3.34 .79
Self-efficacy AI 267 4.61 1.25 12.616 < .001 1.206
Self-efficacy Trad 170 3.25 .99
Perceived content validity AI 267 4.89 .92 15.384 < .001 1.554
Perceived content validity Trad 170 3.21 1.22
Transfer design AI 267 4.68 1.26 16.504 < .001 1.558
Transfer design Trad 170 2.98 .89
Personal capacity to transfer AI 267 3.98 1.22 4.057 .37 .360
Personal capacity to transfer Trad 170 3.67 .21
Research Question 2: Are There Differences in the Intent to Transfer Based on the Type of
Learning Program Completed?
Differences in intent to transfer based on the type of learning program completed were analyzed using quantitative survey data. An independent-samples t-test was used to compare mean scores between participants of the AI-enabled learning program and the traditional learning program. This section reviews the results related to those differences.
Intent to Transfer Results
For purposes of this study, in a 52-item LTSI survey, four items measured intent to
transfer, with sample items including “I anticipate making every effort in the coming weeks to
put into practice what I learned in this training” and “As soon as it is feasible, I intend to use at
work all that I learned in this training” (Hutchins et al., 2013). Two groups consisting of 169
participants who completed a traditional training program and 268 participants who completed
an AI-enabled training program were compared on the difference of the mean value of a
continuous, normally distributed scale of intent to transfer using a five-point Likert scale. An
alpha level of .05 was utilized. The skewness and kurtosis values for both the AI-enabled and traditional learning program groups demonstrated normal univariate distribution. The assumption of homogeneity of variance was tested via Levene's F test for equality of variances and was satisfied. The intent to transfer scale was then compared between the two groups, and the equality of means was reported in the t-test. Table 9 provides alpha values of the intent to transfer scale for the current study.
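As a hypothetical sketch (synthetic scores, not the study's data), the assumption check and comparison described above could look as follows, where the Levene result decides whether the equal-variance t-test is appropriate:

```python
# Illustrative sketch only, with simulated intent-to-transfer scores:
# Levene's F test checks the homogeneity-of-variance assumption
# before running the independent-samples t-test, as described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
ai_group = rng.normal(loc=4.85, scale=1.30, size=268)    # hypothetical scores
trad_group = rng.normal(loc=3.68, scale=1.25, size=169)

# Levene's test: p > .05 means no evidence of unequal variances,
# so the standard (equal-variance) t-test may be used.
levene_stat, levene_p = stats.levene(ai_group, trad_group)
t_stat, p_value = stats.ttest_ind(ai_group, trad_group,
                                  equal_var=levene_p > 0.05)
print(f"Levene p = {levene_p:.3f}; t = {t_stat:.3f}, p = {p_value:.4g}")
```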
Table 9
Alpha Values of the Intent to Transfer Scales
LTSI scale          LTSI item                                                           Current study α
Intent to transfer  Planning to use new knowledge and skills at work                    .90
                    Making every effort to put new knowledge and skills into practice   .90
                    Objective to apply at work as much as was learnt during training    .90
                    Intend to use at work what was learnt                               .90
Results indicated that there was a statistically significant difference in intent to transfer
between the participants of the AI-enabled learning program (M = 4.85, SD = 1.32) and the
traditional learning program (M = 3.68, SD = 1.25), t(303) = 9.33, p < .001. Test results
identified a large effect size of 0.8 and above, indicative of a strong degree of practical
significance. Table 10 provides results of an independent t-test analysis examining differences in
intent to transfer based on the type of learning program completed. Study results will be further
discussed in Chapter Five.
Table 10
t-test Results for Differences in Intent to Transfer Between AI-Enabled and Traditional
Training (N_AI = 267, N_Trad = 170)
Scale               Group  N    M     SD    t      Sig. (2-tailed)  Cohen's d
Intent to transfer  AI     267  4.85  1.32  9.330  < .001           .910
                    Trad   170  3.68  1.25
Research Question 3: What Is the Relationship of Person, Training, and Environment
Attributes to the Intent to Transfer Learning, Controlling for the Type of Learning
Program Completed?
The relationship of person, training, and environment attributes to the intent to transfer, based on the type of learning completed, was analyzed using quantitative survey data. Multiple linear regression (MLR) was used to predict intent to transfer as the dependent variable from 16 LTSI scales as independent variables. Key statistical assumptions of linearity, normality, absence of high-influence points, absence of multicollinearity, and homoscedasticity were checked using kurtosis statistics and diagnostic plots, such as the Q-Q plot, residuals versus fitted plot, scale-location plot, and residuals versus leverage plot. Scale reliability was measured using Cronbach's alpha. Table 11 provides reliability coefficients of the LTSI scales and the intent to transfer scale from previous reliability studies (Holton et al., 2000; Hutchins et al., 2013) and the current study.
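For illustration, Cronbach's alpha can be computed from an item-response matrix as sketched below; the data here are simulated, and the study's actual reliability coefficients appear in Table 11.

```python
# Illustrative sketch only: Cronbach's alpha from a synthetic
# respondents-by-items matrix (not the study's survey data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a (respondents x items) array of scale scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulate 200 respondents answering a 4-item scale driven by a shared
# latent score, so the items are internally consistent by construction.
rng = np.random.default_rng(1)
latent = rng.normal(loc=4.0, scale=1.0, size=(200, 1))
responses = latent + rng.normal(scale=0.5, size=(200, 4))

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")
```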
Table 11
Reliability Coefficients of the LTSI and Intent to Transfer Scales from Previous Studies and for
Current Study
LTSI constructs                 LTSI items                                  Previous studies α (a)  Current study α
Training-related attributes     Transfer design                             .85                     .89
                                Personal capacity for transfer              .68                     .74
                                Perceived content validity                  .84                     .85
Person-related attributes       Performance-outcomes expectations           .83                     .78
                                Transfer effort-performance expectations    .81                     .75
                                Learner readiness                           .73                     .81
                                Performance self-efficacy                   .76                     .82
                                Motivation to transfer                      .83                     .80
Environment-related attributes  Feedback performance coaching               .70                     .70
                                Personal outcomes positive                  .69                     .70
                                Personal outcomes negative                  .76                     .72
                                Supervisor/manager support                  .91                     .81
                                Supervisor/manager sanctions                .63                     .71
                                Peer support                                .83                     .69
                                Resistance/openness to change               .85                     .81
                                Opportunity to use learning                 .70                     .72
Intent to transfer              Intent to transfer                          .92 (b)                 .91

(a) Previous study alpha values from Holton et al., 2000.
(b) Previous study alpha values from Hutchins et al., 2013.
The multiple correlation coefficient, the R value, was used as one measure of the quality of the prediction of the intent to transfer outcome. The adjusted R² value, a modified version of R², was applied to account for the number of predictors used to measure the proportion of variance in the intent to transfer that could be explained by the LTSI scales. The F-ratio tested whether the overall regression model was a good fit for the data. Unstandardized coefficients indicated how much the intent to transfer outcome varied with each LTSI scale when all other scales were held constant.
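These regression quantities can be illustrated on synthetic data. The sketch below (not the study's model, which used 16 LTSI predictors) fits an ordinary-least-squares regression and derives R², adjusted R², and the F-ratio from the residuals:

```python
# Illustrative sketch only, on synthetic data: a multiple linear regression
# fitted by ordinary least squares, recovering R^2, adjusted R^2, the
# F-ratio, and unstandardized coefficients as described above.
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 3                       # respondents and (hypothetical) predictors
X = rng.normal(size=(n, k))
# Synthetic "intent to transfer" driven by the first two predictors plus noise
y = 2.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

design = np.column_stack([np.ones(n), X])           # add an intercept column
beta, *_ = np.linalg.lstsq(design, y, rcond=None)   # unstandardized coefficients

residuals = y - design @ beta
ss_res = residuals @ residuals
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)       # penalizes predictor count
f_ratio = (r2 / k) / ((1 - r2) / (n - k - 1))       # overall model-fit test

print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}, F = {f_ratio:.1f}")
```

Adjusted R² is always at most R², and the F-ratio grows as the model explains more variance per predictor relative to the residual variance.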
Results of the Multiple Regression Analysis of the Relationship of Person-, Training-, and
Environment-Related Attributes to the Intent to Transfer in the AI-Enabled Adaptive Learning
Program
The MLR results for the participants of the AI-enabled learning program showed a significant regression equation, F(2, 96) = 32.35, p < .001, with an R² of .738, indicating that the predictors explained 73.8% of the variance in the intent to transfer outcome. The findings revealed that supervisor/manager support (β = 9.133, p < .005), transfer design (β = 32.101, p < .005), openness to change (β = 4.208, p < .005), perceived content validity (β = 7.901, p < .005), motivation to transfer learning (β = 10.677, p < .005), performance outcomes expectations (β = 1.987, p < .005), learner readiness (β = 7.224, p < .005), peer support (β = 2.897, p < .005), transfer effort-performance expectations (β = 3.986, p < .005), opportunity to use (β = 14.872, p < .005), and self-efficacy (β = 11.822, p < .005) positively predicted intent to transfer, whereas positive personal outcomes (β = -1.889, p < .005), negative personal outcomes (β = -8.249, p < .005), and supervisor/manager sanctions (β = -9.889, p < .005) negatively predicted intent to transfer, and feedback performance coaching (β = 10.251, p > .005) and personal capacity for transfer (β = 4.113, p > .005) had a non-significant effect on intent to transfer. Table 12 provides results of the MLR analysis examining the relationship of the person-, training-, and environment-related attributes to intent to transfer reported by participants of the AI-enabled learning program.
Table 12
Unstandardized Regression Coefficients of Person-, Training-, and Environment-Related
Attributes on Intent to Transfer After Completion of the AI-Enabled Learning Program
Variables                                  B        SE     t       Sig.  95% CI LL  95% CI UL
Constant                                   91.138   23.22  12.333  .000   88.111    106.198
Supervisor/manager support                  9.133    1.55   3.12   .000    6.231     12.937
Transfer design                            32.101   11.82   5.11   .000   28.651     36.829
Resistance/openness to change               4.208    1.85    .89   .000    2.874      6.982
Perceived content validity                  7.901    1.74   1.77   .000    4.678     10.208
Motivation to transfer learning            10.677    2.25   4.77   .000    8.467     14.628
Performance-outcomes expectations           1.987     .82   1.8    .000     .935      3.478
Peer support                                2.897     .83   1.92   .000    1.031      4.729
Transfer effort-performance expectations    3.986     .84   1.66   .000    1.984      5.782
Personal outcomes negative                 -8.249     .71  -2.66   .000  -10.672     -6.678
Performance self-efficacy                  11.822     .66   3.51   .000    7.836     15.378
Learner readiness                           7.224    1.23   1.42   .000    5.234      9.341
Opportunity to use learning                14.872    1.67   1.55   .000   12.092     16.249
Feedback performance coaching              10.251     .88   3.91   .422    9.134     11.424
Personal outcomes positive                 -1.889     .65  -5.11   .000   -3.621      1.672
Personal capacity for transfer              4.113    1.92   2.98   .349    2.983      5.881
Supervisor/manager sanctions               -9.889    1.96  -5.18   .000  -11.209     -8.111
Results of the Multiple Regression Analysis for Person-, Training-, and Environment-Related
Attributes and Intent to Transfer in the Traditional Learning Program
The MLR results for the participants of the traditional learning program showed a significant regression equation, F(6, 125) = 79.35, p < .001, with an R² of .872, indicating that the predictors explained 87.2% of the variance in the intent to transfer outcome. The findings revealed that supervisor/manager support (β = 14.278, p < .005), transfer design (β = 2.778, p < .005), openness to change (β = 13.762, p < .005), perceived content validity (β = 1.776, p < .005), motivation to transfer learning (β = 13.146, p < .005), performance outcomes expectations (β = 5.256, p < .005), learner readiness (β = 12.443, p < .005), peer support (β = 1.692, p < .005), transfer effort-performance expectations (β = 1.823, p < .005), opportunity to use (β = 26.874, p < .005), and self-efficacy (β = 15.111, p < .005) positively predicted intent to transfer, whereas positive personal outcomes (β = -5.913, p < .005) and negative personal outcomes (β = -6.145, p < .005) negatively predicted intent to transfer. Feedback performance coaching (β = 3.145, p > .005) and supervisor/manager sanctions (β = 15.329, p > .005) had a non-significant effect on intent to transfer. Table 13 provides results of the MLR analysis examining the relationship of the person-, training-, and environment-related attributes to intent to transfer reported by participants of the traditional learning program. Study results will be further discussed in Chapter Five.
Table 13
Unstandardized Regression Coefficients of Person-, Training-, and Environment-Related
Attributes on Intent to Transfer After Completion of the Traditional Learning Program
Variables                                  B        SE     t       Sig.  95% CI LL  95% CI UL
Constant                                   71.987   15.54   5.123  .000   67.157     90.341
Supervisor/manager support                  2.778     .39   1.26   .000    1.341      5.235
Transfer design                            32.101    8.31   1.65   .000   27.091     44.145
Resistance/openness to change              13.762     .98    .81   .000   10.531     18.981
Perceived content validity                  1.776     .33    .65   .000     .689      4.164
Motivation to transfer learning            13.146    5.75   1.23   .000    7.145     17.982
Performance-outcomes expectations           5.256    2.87    .45   .000    1.981      6.153
Peer support                                1.692     .71    .81   .000     .981      3.147
Transfer effort-performance expectations    1.823     .91    .21   .000     .912      4.981
Personal outcomes negative                 -6.145     .11  -1.12   .000   -7.417     -8.982
Performance self-efficacy                  15.111    4.45   2.16   .000    5.536     16.567
Learner readiness                          12.443    5.11    .47   .000    8.142     15.981
Opportunity to use learning                26.874   13.68    .59   .000   17.672     30.673
Feedback performance coaching               3.145    1.81    .92   .422    2.352      4.554
Personal outcomes positive                 -5.913     .55  -4.12   .000   -7.334     -4.449
Personal capacity for transfer              4.113     .95    .99   .349    3.674      6.259
Supervisor/manager sanctions               15.329    5.91  -1.24   .000   13.764     18.224
Research Question 4: What Are the Experiences of Learners in the AI-enabled Program
and How Do They Relate to the Intent to Transfer?
An analysis of qualitative data helped to expand the quantitative results and explore the
experiences of the AI-enabled adaptive learning program participants. Interview data analysis
was not limited to a priori codes. After reviewing the transcribed information and conducting
several rounds of coding, other themes emerged and were utilized to identify the final themes.
Table 14 shows the a priori codes aligned with the Learning Transfer System Instrument (LTSI)
factors, and emergent codes from the interviews organized into overarching categories.
Table 14
A Priori and Emergent Codes in the Interviews with Participants of the AI-Enabled Learning
Program
Category            A priori codes from LTSI scales and other variables                 Emergent codes
Learner experience  Motivation; Self-efficacy                                           Engagement; Continuous learning; Learner autonomy; Social interaction
Learning design     Content validity; Transfer design                                   Self-paced learning; Test; Knowledge assessment
Learning culture    Perception of opportunity to use; Perception of leadership support  Perception of learning processes and practices; Perception of learning environment; Learning culture
Knowledge Assessments as a Mechanism to Enable Personalization of the Experience,
Leading to Perceived Increase in Engagement, Efficiency, and Effectiveness of the Program
All eight participants were asked why they chose the AI-enabled learning program
instead of the traditional program. In response, five of eight participants stated that they were
interested in exploring a new program as an alternative solution to the traditional programs,
which often “look and feel the same, boring, and not a value-add activity.” Through the
experience of the AI-enabled learning program, Participant 1 found the program engaging,
personalized, and well-balanced to reach the learning goals:
This program is so different. Truth is that our organization launches so many
training programs each year […] for various reasons: compliance training, job training,
new product training […] and I rarely find them engaging. Let’s be candid: most people
watch the training in the background while doing some work […] Well, things get
complicated if there is a quiz involved, but even in that case, most of the quiz questions
can be answered without even a need to complete training. With this program, I felt it
was designed for me […] even the quiz was just-in-time [and it] helped me to validate my
understanding.
All participants commented on the value of the real-time feedback via the knowledge assessments, including predicted proficiency as a motivational factor during the learning experience. Equally, these data made learners feel empowered to make decisions about their learning, with an added level of autonomy to pivot the learning journey toward the most desirable path to attain the learning goals. Participant 2 explained it as follows:
When I started a new course, I noticed the message saying, “not enough information” to
predict my proficiency in any of the learning outcomes. Apparently, this is expected
because you haven’t answered any assessment questions yet, so there is not enough data
to make a prediction. But [...] after I started answering questions, my proficiency had
started to become predicted […] One interesting thing for me is that it doesn’t feel like a
test because you control your own learning pace. It can sometimes be a more helpful
learning process to get questions wrong before you try again [...] and eventually get them
right.
One participant mentioned the artificial nature of the various “campaigns running in the
organizations promoting importance of continuous learning, but lack of individual ownership to
shift the needle.” Five of eight participants stated that the program design improved engagement,
fostered curiosity, and could be a new way to provide learning experiences centered around the
learner. All eight participants viewed the AI-enabled adaptive learning program as a key enabler
of building a continuous learning culture where people are at the center of the learning journeys
and own their personal development. Participant 3 stated that the AI-enabled adaptive learning
program “prioritizes learner and learner needs, and thus supports the culture of continuous
learning.” According to Participant 6, AI-enabled adaptive learning programs are “the future of
learning empowering individuals to be in control of their career and professional destiny versus
employer programs defining people’s career choices.”
In addition to the feedback regarding engaging experience with the AI-enabled learning
program, participants reported on the benefits of the adaptive learning program. Specifically,
they mentioned the quality of the program and reduced time to accomplish defined goals. Table
15 provides some of the comments organized into themes of learning efficiency and learning
effectiveness.
Table 15
Participant Comments Related to the Experience and Effectiveness of the AI-Enabled Learning
Program
Participant Central topic Comment
P1 Learning efficiency
through
personalization
“This was the best time investment I could have
imagined for e-learning. Everything was well
organized, to the point, and felt fast enough to get
to the ultimate goal of learning new skills.”
P2 Learning effectiveness
due to personalization
“The program kept me focused throughout the
entire experience. I could feel how questions were
helping me to dive deep into a specific topic and
learn it fully before I move to the next lesson.”
P3 Learning effectiveness
through
personalization
“The questions throughout the program were
perfect. I could see where I need to go back and
retake the content. Not the entire lesson, but a
specific part to gain the expected level of
proficiency. What was particularly interesting for
me is that going back to the material I was
struggling with…did not make me feel like going
back as I was presented with the new type of
information helping to master that specific area of
concern.”
P4 Learning efficiency
due to personalization
“I was so pleased with the fact that I could answer
some questions before taking the program… which
as I understand, saved me a lot of time in the end! I
completed a few lessons in less than 15 minutes
and heard some other folks would spend more than
an hour on the same lesson.”
A Perceived Lack of Social Interaction in the AI-enabled Learning Program, Yet Not
Viewed as a Barrier to the Learning Transfer
Despite perceived high levels of engagement and motivation, learners reported a lack of social interaction. Specifically, participants communicated a desire to engage in discussions with other learners and to participate in project-based work. Participant 5 shared:
The directions and assignments were well structured. There were many useful and, when
needed, very helpful resources throughout the course [….] However, I felt a lack of
human interaction, like […] with the other learners […] or someone like a course
instructor, just to get some guidance, or validate my understanding.
Two of eight participants mentioned that learning occurs when people become aware of
opposing ideas. In the AI-enabled learning environment, learners were primarily focused on self-
improvement following the prompts of the technology. Participant 8 said,
I had the moments of [….] thinking how great it could have been to ask someone this
specific question […] or just engage in a healthy debate with someone who may have a
different opinion. This experience was completely lacking during the program. Will it
have an impact on learning a new skill? probably not […] but you know […] I may not
know what I don’t know by virtue of lacking such interaction. It is probably less goal
oriented, and more about humans being social creatures and a need for interaction is
pretty much in our DNA.
Participant 5 mentioned that often learning experiences involving others create
opportunities to learn something new including “things which are not directly related to the
learning objectives.” Another participant said that effective course facilitators “often adapt the
content based on the flow of the conversation and may not necessarily go with the script.”
Six of eight participants mentioned moments when they needed to interact with a facilitator to resolve either a knowledge-related gap or a system-related issue. While technical issues were relatively “easy to resolve via technical support chatbot” (Participant 2), a mechanism for validating understanding of a knowledge concept was missing, and the only existing recourse was framed around “hoping to complete a questionnaire successfully” (Participant 2). However, despite the lack of social interaction, seven of eight participants did not view it as a
barrier to transfer of learning, and considered it “as essential to human beings, but in truth, a
nice-to-have option in this program” (Participant 1). Participant 7 said that “it will be possible for
the program to be the best scalable solution for the organization with adequate quality of training
in a digital environment skipping a need for the classroom or web-based instructor-led training.”
Furthermore, Participant 6 stated that “this learning provides all an individual might need, even
though it is a new norm for training […] with no interaction with others, but the flip side of it is a
complete focus on me, the learner.”
A Reported Culture of Support for Learning, Yet a Gap Between the AI-enabled Adaptive
Learning Program and Organizational Practices for Learning as a Barrier to the Learning
Transfer
All eight interviewees mentioned effective training design, including knowledge assessments, simulated environments, and real-time feedback, “enabling to practice newly gained skills in a safe environment as long as needed.” Seven of eight interviewees emphasized their confidence in applying newly learned skills on the job. However, when asked about opportunities to apply new skills on the job and about learner belief that learning transfer would lead to change in job performance, a consistent theme identified a gap between training programs and organizational practice in creating opportunities to practice new skills. One participant said that without transfer of skills and knowledge back to the workplace, the value of training is subjective:
You have to give people the opportunity to apply and use it in a real situation. Otherwise,
these skills will be remembered for some time, but refreshed only through another
training or potential change in jobs. Thus, I would question the value of time and funds
invested in this program if it only addresses half of the problem.
Five of eight participants identified supportive learning environments with continued learning opportunities as a critical need for their learning. All eight participants stated that the organization's culture is supportive of learning, with one participant stating that it is “ingrained in the organization’s values and much expected from people that continuous learning and curiosity are fundamental requirements for individuals to be successful in their jobs.” However, despite supportive leadership and a pro-learning organizational culture, there is a perceived gap in learning processes and practices, leading to barriers to learning
transfer. According to Participant 4, “Solving a problem of training quality through innovative
learning programs is a promising path, yet it requires integration of these innovations into a
wider ecosystem.” Participant 8 said that “investment in learning innovation without consideration of organizational practices could be perceived as an incomplete and expensive solution.” All eight participants reported that after completing the AI-enabled learning program, even with supervisor support, there was no clear path for applying new skills on the job unless the participant intentionally requested to participate in a project providing such an opportunity. Table 16 provides a summary of key themes and associated comments from the participants.
Table 16
Participant Comments Related to the Impact of the Organizational Culture, Environment, and
Learning Processes on the Learning Transfer
Participant Central topic Comment
P1 Lack of continuity of
learning experience
“I liked the idea of this new training but there is
something that is missing. It is a lack of
continuity to what starts with this program and
ends after completion of the program. It is
unclear what happens next, and it could feel like
going back to reality after a great experience of
learning something new.”
P2 Disconnected learning
practices
“We talk and spend a lot on innovation and new
things….but we lack a holistic approach to what
we do. It is typical for ABZ (pseudonym). For
example, why some people even take this
training? Will they apply it on the job? Is it part
of their next career move? For me, it was for fun
and yes, I’ve learnt quite a bit, but I don’t know
if I will ever use these skills and most likely,
forget what I have learnt soon.”
P3 Lack of continuity of
learning experience
“The program is really a nice model of what
modern learning should be: independent, self-
driven, effective use of time. I also immediately
talked to my manager after the program on how
I would like to use my new skills. He quickly
helped me to pick a project, which will help me
to continue to practice these skills. But let me
tell you that it was all initiated by me… no-one
pushed me to do it.”
P4 Disconnected learning
practices
“Our culture is very supportive of learning.
Leadership and managers will never stop you
from investing your time in learning. I am afraid
we have more learning programs than we can
consume, and we may not even know where to
begin. This quickly frustrates people and they
do not pursue looking for any training. Instead,
everything becomes ‘learn by doing’, but this is
not the best way to learn, especially when you
just start your experience in a certain project.”
Conclusion
Data from a survey and interviews were analyzed to answer four research questions about differences and relationships between person-, training-, and environment-related factors that facilitate or hinder intent to transfer learning after completion of a training program. Survey results helped to examine differences and relationships between person-, training-, and environment-related factors and the intent to transfer. Interview findings helped to explore the
learner experiences in the AI-enabled adaptive learning program. Chapter Five contains
recommendations for practice based on the study results and findings.
Chapter Five: Recommendations and Discussion
The study examined factors that facilitated or hindered transfer of learning to the workplace after completion of an AI-enabled adaptive learning program in comparison with a traditional learning program. The research questions were framed to examine the differences and relationships between person-, training-, and environment-related factors and intent to transfer after completion of the programs, and to explore the experiences of the learners completing the AI-enabled adaptive learning program. Based on the results and findings from the study, this chapter presents recommendations for an integrated learning-ecosystem framework for AI-enabled adaptive learning technologies, with a primary focus on facilitating transfer of learning to the workplace.
Discussion of Findings and Results
The results and findings from Chapter Four indicated differences in participant
perceptions of the person- and training-related factors facilitating or hindering transfer of
learning after completion of the AI-enabled adaptive and traditional learning programs. The
study also demonstrated an existing relationship between person-, training-, and environment-related factors and intent to transfer after completion of the learning programs. Lastly, the study
identified key themes from participant experiences in the AI-enabled adaptive learning program,
which may facilitate or hinder transfer of learning to the workplace.
Differences in Person- and Training-Related Attributes and Intent to Transfer Between the Two
Programs
The results of the survey using Holton et al.’s (2000) Learning Transfer System
Instrument (LTSI) and a t-test as a statistical measure indicated significant differences between
the means of two groups on a number of person- and training-related factors and on intent to transfer. In sum, the statistical analysis indicated that, overall, participants of the AI-enabled adaptive learning program reported higher levels of motivation to transfer, defined as “the direction, intensity, and persistence of effort toward utilizing in a work setting skills and knowledge learned” (Holton, 1996); higher levels of learner readiness, the extent to which respondents felt prepared to enter and participate in training; higher levels of self-efficacy, the belief that they are able to change their performance; higher degrees of perceived content validity, the extent to which participants felt their training content matched their job requirements accurately; higher perceptions of training design, the extent to which participants felt the training was designed and delivered to foster learning transfer on the job; and higher levels of intent to transfer, a proxy for expected behavior change after completion of training. Although participants of the traditional learning program reported lower levels of motivation to transfer, learner readiness, self-efficacy, perceived content validity, training design, and intent to transfer, they reported higher levels of transfer effort expectations, the perceived expectation that learning will result in on-the-job changes.
The statistically significant results discussed above indicate that the observed difference in self-reported scores between participants of the two programs is likely to be present in the population that the sample purports to represent (Fisher, 1925). However,
while statistical significance shows that there is a difference between the means of two groups,
“the primary product of a research inquiry is one or more measures of effect size, not p values”
(Cohen, 1990, p. 15). According to McLeod (2019), the effect size promotes a more scientific
approach, which is independent of sample size. According to Cohen (1988), effect sizes are
small when d is equal to 0.2, medium when d is equal to 0.5, and large when d is equal to 0.8. In
the study, results of the self-reported scores for motivation to transfer, transfer effort
expectations, learner readiness, self-efficacy, perceived content validity, and transfer design
demonstrated a strong degree of practical significance, above 0.8 (Cohen, 1988). This result suggests that the observed statistical difference between the groups is large in magnitude and therefore presents practical significance. Specifically, the differences in the self-reported person- and training-related factors are large enough to be meaningful in the real world.
Among the factors that did not present significant differences, it is worth noting personal capacity for transfer, a person-related attribute describing the extent to which participants felt they had the time and energy in their work environments to transfer learning to the job after completing training. While the non-significant result suggests that the difference observed in the sample groups cannot be a basis for inferring differences in self-reported scores in the population, further research may be needed to understand the relationship between the work environment and personal capacity, which could impact participant responses (Yamkovenko et al., 2007). Furthermore, while results related to the personal capacity for transfer factor did not reach statistical significance, the effect size was close to medium (Cohen, 1988). This result may be attributable to insufficient statistical power in the sample; more participants might be needed to observe a statistically significant result.
Relationship Between Person-, Training-, and Environment-Related Attributes and Intent
to Transfer
Using multiple regression analysis, several significant predictors of intent to transfer
emerged for both programs. Table 17 summarizes the factors that significantly predicted intent
to transfer and the key differences between the programs.
Table 17
Study’s Significant Predictors of the LTSI Factors to Intent to Transfer
LTSI factor — AI-enabled adaptive learning program — Traditional learning program
Supervisor/manager support X
Transfer design X X
Openness to change X
Perceived content validity X X
Motivation to transfer learning X X
Performance outcomes expectations X X
Learner readiness X X
Peer support X X
Opportunity to use X X
Self-efficacy X X
Transfer effort expectations X
In both programs, self-efficacy, motivation to transfer, and transfer design were
significantly related to the intent to transfer, which aligns with other empirical LTSI-related
studies (Hutchins et al., 2013; Yamkovenko, 2009). According to Yamkovenko (2009), “Self-
efficacy may be one of the critical variables in overcoming any obstacles in achieving learning
goals” (p. 31). Hutchins et al.’s (2013) study concluded that motivation to transfer and transfer
design have the strongest relationship with intent to transfer. However, an interesting result was
the lack of significance of transfer effort expectations to intent to transfer in the AI-enabled
adaptive learning program. This factor measures the respondent’s expectation that learning will
result in on-the-job changes. This conflicts with the outcomes of Hutchins et al.’s (2013) study,
which identified transfer effort expectations among the factors that accounted for the largest
amount of unique variance in intent to transfer. It is also contrary to some recent studies that
identified transfer effort expectations as a predictor of the transfer of training (Tziner et al.,
2007). One possible explanation for the observed result is
discussed in Yamkovenko’s (2009) work analyzing the relationship between learning goals and
intent to transfer. The lack of significance was explained by the fact that the actual transfer
of learning was not the dependent variable; instead, intent to transfer served as a proxy
dependent variable in the study (Yamkovenko, 2009). According to Yamkovenko (2009), “It is
possible that the psychological nature of behavioral intent is such that dispositional differences
play a much smaller role in the system of influences than constructs like general perceptions of
the learning environment” (p. 39). The difference in other factors, such as supervisor and
manager support and openness to change, is consistent with prior studies, where these factors
“share less than 10% of variance with the regression effect” (Hutchins et al., 2013, p. 256).
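The multiple regression approach described above can be illustrated with a minimal sketch. The data below are simulated (the factor names mirror the LTSI constructs, but the values are not the study’s responses); the example regresses intent to transfer on standardized factor scores via ordinary least squares and reports the fitted coefficients and R-squared:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 80  # hypothetical number of respondents

# Hypothetical standardized LTSI factor scores (illustrative only).
self_efficacy = rng.normal(size=n)
motivation = rng.normal(size=n)
transfer_design = rng.normal(size=n)

# Simulate intent to transfer as a linear function of the factors plus noise.
intent = (0.4 * self_efficacy + 0.5 * motivation + 0.3 * transfer_design
          + rng.normal(scale=0.5, size=n))

# Design matrix with an intercept column, as in ordinary least squares.
X = np.column_stack([np.ones(n), self_efficacy, motivation, transfer_design])
coef, _, _, _ = np.linalg.lstsq(X, intent, rcond=None)

# R-squared: proportion of variance in intent explained by the predictors.
predicted = X @ coef
ss_res = np.sum((intent - predicted) ** 2)
ss_tot = np.sum((intent - intent.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print("coefficients:", np.round(coef, 2))
print("R^2:", round(r_squared, 2))
```

In an actual LTSI analysis, the significance of each coefficient (its p value) determines whether the factor appears as a predictor in a table such as Table 17.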
Learner Experiences and Intent to Transfer After Completion of the AI-Enabled Adaptive
Learning Program
The interviews with the participants of the AI-enabled adaptive learning program
identified three main themes. The first theme centered on the role of the AI-enabled learning
program in learner engagement, leading to increased motivation to transfer and self-efficacy.
The second theme concerned learning effectiveness and readiness to transfer learning, with a
general sentiment that the experience lacked social interaction. The third theme concerned a
perceived gap in learning transfer due to disconnected organizational practices, despite
learning innovation within a supportive organizational culture.
Table 18 provides a summary of key findings from the interviews.
Table 18
Key Themes from the Survey and Interview Data
Category Key findings
Learner-related factors Participants of the AI-enabled adaptive learning program
reported higher levels of engagement, increased
motivation, self-efficacy, and intent to transfer.
Training-related factors AI-driven knowledge assessment served as a mechanism
to enable personalization, which was perceived as a
strong contributor to learning effectiveness and intent to
transfer.
Organization-related factors Despite a strong organizational culture for continuous
learning and curiosity, a disconnect between innovative
design in formal training and organizational practices to
support learning transfer was perceived as a barrier for
learning transfer.
Recommendations for Practice
There are three recommendations to address the study’s results and findings. All
recommendations are grouped into the three key dimensions of the study’s conceptual framework,
aligned to Holton’s (1996) human resource development model: learner, training, and
organization. Recommendations in all three groups are based on the results and findings of
the study and cross-referenced with empirical studies on Holton et al.’s (2000)
Learning Transfer System Inventory (LTSI).
Learner-Related Recommendations
Participants of the AI-enabled adaptive learning program rated a number of person-
related factors more favorably than the participants of the traditional learning program. During
interviews, 50% of the participants emphasized a strong level of engagement in the AI-enabled
adaptive learning program due to the perceived relevance of the content, 87.5% stated that the
program increased their motivation to transfer learning, and 100% commented on their intent to
transfer after completing the program, largely attributing that intent to the personalization of
the experience.
Developing Learning Programs Using AI Technology with Learner-Centric Design
Personalized solutions are developed through a learner-centered approach, in accordance
with learners’ specific needs (Deloitte China, 2019). According to McCombs and Whisler
(1997), learner centricity emerges from a combination of two dimensions: the learner and the
process of learning. It is achieved by focusing on individual needs and “the best available
knowledge about learning […] that is most effective in promoting the highest levels of
motivation, learning, and achievement for all learners” (p. 9). AI techniques allow learning
systems to cater to the distinct backgrounds and characteristics of each learner with the primary
goal of enabling higher levels of learner engagement, producing better learning results,
supporting a higher commitment to transfer learning, and helping to facilitate transfer of learning
to the workplace. The study demonstrated the importance of learner-centric solutions, such as
personalization of the learning experience, and their impact on intent to transfer as a proxy
for learning transfer to the workplace.
Clarifying the Role of a Learner During and After Completion of the Learning Program
The overwhelmingly positive response to a number of person-related attributes in the
case of completion of the AI-adaptive learning program contrasts with the findings related to
transfer effort expectations and personal capacity for transfer. The transfer effort
expectations factor measures the respondent’s expectation that learning will result in
on-the-job changes, and personal capacity for transfer refers to the extent to which
participants feel they have the time, energy, and mental space in their work lives to make the
changes required to transfer learning on the job. While many of the person-related attributes in
the LTSI (Holton et al., 2000) are geared toward introspection of the learners’ state during the
learning experience, both transfer effort expectations and personal capacity for transfer shift
the focus to the learners’ role after the completion of learning, with a degree of
accountability set on learners (Kabudi et al., 2021). To
address the lower levels of self-reported scores in transfer effort expectations and personal
capacity for transfer after completion of the AI-enabled adaptive learning program, a clear
definition and communication of learner’s role in the learning process and learning transfer is
needed. This recommendation should not be confused with the articulation of learning goals
at the beginning of the program, the subject of another study by Yamkovenko (2009), which
validated the relationship of learning goals to intent to transfer. One recommendation from
this study is related to setting explicit accountabilities in the learning process to help bridge the
perceived gap in the transition from actual learning experience to the learning transfer.
Training-Related Recommendations
Participants of the AI-enabled adaptive learning program rated perceived content validity
and training design more favorably than the participants of the traditional program. Additionally,
statistical analysis of participant responses confirmed that both factors had a significant
positive correlation to intent to transfer; that is, higher self-reported scores on the
perceived content validity and training design factors corresponded to higher reported intent to
transfer. During the
interviews, 90% of participants stated that the design of the AI-enabled adaptive learning
program provided personalization, which was perceived as a strong contributor to learning
effectiveness and intent to transfer.
Creating AI-Enabled Personalized Learning with Formative Assessments
In the learning field, there is an array of approaches to provide personalized learning
(Cuban, 2017). However, through analyzing learning models and individual differences among
the learners, training design using AI technology provides a new way to apply the same content
and models in teaching different learners. Accurately diagnosing learners’ problem-solving goals
through the learning experience is essential for the AI technology to assess the concepts and
skills a learner understands (Gudivada, 2016). With the embedded knowledge assessments, AI-
enabled adaptive learning technologies can provide real-time and customized learning solutions
based on learners’ learning status, including knowledge diagnosis, competence assessment,
proficiency achievement, and content recommendation. The study demonstrated that with AI-
enabled adaptive learning technology, participants perceived a higher quality of training design
and perceived content validity compared to the traditional learning program, which also
translated into training instruction matching individual knowledge and job requirements and,
subsequently, a stronger commitment to transfer (Hutchins et al., 2013).
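One simple way to illustrate how embedded formative assessments could drive personalization is a mastery tracker that updates a per-concept proficiency estimate after each assessment item and recommends the weakest concept next. The concept names, smoothing factor, and mastery threshold below are illustrative assumptions, not details of the program studied:

```python
from collections import defaultdict

MASTERY_THRESHOLD = 0.8  # assumed cutoff; real systems tune this per concept

class FormativeAssessmentTracker:
    """Minimal mastery tracker: an exponential moving average per concept."""

    def __init__(self, smoothing=0.3):
        self.smoothing = smoothing
        self.mastery = defaultdict(float)  # concept -> estimated proficiency in [0, 1]

    def record_response(self, concept, correct):
        """Update the proficiency estimate after each embedded assessment item."""
        observed = 1.0 if correct else 0.0
        current = self.mastery[concept]
        self.mastery[concept] = current + self.smoothing * (observed - current)

    def recommend_next(self):
        """Recommend the weakest non-mastered concept, or None if all mastered."""
        gaps = {c: m for c, m in self.mastery.items() if m < MASTERY_THRESHOLD}
        if not gaps:
            return None
        return min(gaps, key=gaps.get)

# Hypothetical assessment responses for three concepts.
tracker = FormativeAssessmentTracker()
for concept, correct in [("supervised_ml", True), ("supervised_ml", True),
                         ("nlp_basics", False), ("nlp_basics", True),
                         ("model_ethics", False)]:
    tracker.record_response(concept, correct)

print(tracker.recommend_next())  # the weakest concept gets the next learning module
```

Production adaptive systems use far richer learner models, but the loop is the same: diagnose from each response, then adapt the next content recommendation.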
Complementing the AI-Enabled Adaptive Learning Program with Social Learning
Seventy-five percent of participants of the AI-enabled adaptive learning program
commented on the lack of human interaction during the learning process. In their seminal works,
Piaget (1959), Bandura (1977), Vygotsky (1978), and other famous psychologists discussed how
conversation and community are integral parts of learning. According to Bandura (1977), “Most
human behavior is learned observationally through modeling: from observing others, one forms
an idea of how new behaviors are performed, and on later occasions, this coded information
serves as a guide for action” (p. 135). Given that in an AI-enabled adaptive learning program a
learner’s interaction is largely limited to the intelligent machine tutor, a blended learning
strategy combining online learning with activities involving other participants is recommended.
According to Tucker (2019), “Blended learning models can serve as a bridge toward
personalization” (p. 25). The original 70-20-10 model for learning held that 70 percent of
learning occurs from job-related experiences, 20 percent from interactions with others, and 10
percent from formal training (Lombardo & Eichinger, 1996). The latest research by the Training
Industry (2021), surveying more than 1,500 global workers on the impact of COVID-19 on learning,
provides an updated on-the-job, social, formal (OSF) learning blend, which reflects an increased
role of social and formal learning, shifting the distribution of learning sources to a 55-25-20
model. In this model, if the AI-enabled adaptive learning program
can be defined as formal learning, it should be supplemented with the relevant social learning
components to blend the learning experience and convert it into a personalized and social
learning journey. This recommendation is particularly relevant when the process of learning is
viewed through the lens of social constructivism where individuals are active participants in the
creation of their own knowledge (Schreiber & Valle, 2013; Vygotsky, 1978). Vygotsky (1978)
argued, “Learning is a necessary and universal aspect of the process of developing culturally
organized, specifically human psychological function” (p. 90), which occurs through social
interaction. While AI technologies focus on personalizing learning experiences, they are not yet
capable of producing socially constructed knowledge unless supplemented with traditional social
learning activities.
Organizational Environment-Related Recommendations
Seventy-five percent of the participants of the AI-enabled adaptive learning program
mentioned a disconnect between formal learning and the organizational processes that support
continuity of learning, despite a supportive learning culture. Peter Senge’s (1990)
seminal work on learning organizations is an important lens through which application and
adoption of the AI-enabled adaptive learning program should be discussed. Garvin (1993)
defined a learning organization as “an organization skilled at creating, acquiring, and transferring
knowledge, and at modifying its behavior to reflect new knowledge and insights” (p. 4), and
identified three building blocks of a learning organization, which include supportive learning
environment, concrete learning processes, and leadership that reinforces learning (Garvin et al.,
2008).
Creating Mechanisms for Continuous Learning After Completion of the AI-Enabled Learning
Program to Support Transfer of Learning
With the perceived gaps in the organizational processes and practices in learning, it is
important to create supporting mechanisms after completion of the AI-enabled adaptive learning
program. Such need remains relevant despite the fact that the role of the AI-enabled adaptive
learning program is largely focused on achieving learning effectiveness and efficiency. While
learners may have reached the intended learning outcomes, there is a necessary enabling of the
learning transfer to the workplace via supporting processes and practices in the organization
(Thille, 2013). The study revealed the opportunities to create continuous learning paths beyond
formal learning programs in order to support learning transfer to the workplace. Furthermore,
integrating these learning paths into organizations’ human development frameworks and systems
is one of the possible ways to create a holistic system of learning acquisition and application with
the ultimate goal of becoming a learning organization, which “can only happen as a result of
learning at the whole organization level [...] facilitating the learning of all members” (Pedler et
al., 1991, p. 1).
Developing a New Integrated Learning Analytic Methodology Leveraging Power of AI
One of the strengths of AI technologies is the precision of measurement enabled by
computational power, big data, and machine learning (Gartner, 2020). Recent reports from Gartner
(2020) predicted that AI analytics could add around $13 trillion, or 16%, to annual global GDP
by 2030. Applying this view to learning organizations (Senge, 1990), a system-level view of the
learning environment, processes, and practices in organizations will require investment in the
development of new learning analytic
methodologies. AI-enabled techniques in learning analytics could connect data generated from
various phases of the learning process, i.e., before, during, and after learning experiences, and
dissect it at multiple levels, including measurement of the learners’ state and attitude during the
learning experience, gained proficiencies and competencies after learning completion, and
change in the performance at the individual and organizational levels in a connected system.
Integrated Recommendations
The role of AI in supporting human development is becoming increasingly important,
bringing fundamental changes to the fields of education, learning, and workforce development
(WEF, 2020). According to Vint Cerf, “AI and machine learning will be augmenting human
cognition […] There will be abuses and bugs, some harmful, so we need to be thoughtful about
how these technologies are implemented and used, but on the whole, I see these as constructive”
(Anderson et al., 2018, p. 69).
When applying this lens to the education, learning, and workforce development industry,
the path of integrating AI into organizational learning practices should be constructive,
adapting and refining as knowledge in this field emerges. This is particularly relevant for
learning practices using AI technologies, which accounted for $2.3 billion of investment from
2016 to 2018 in the US alone (Deloitte, 2019). With such significant and growing investments, it is
important to develop a robust human-centric learning ecosystem that leverages AI technologies
with the primary goal to make learning effective, efficient, and ultimately, enable transfer of
learning to the workplace (Pugliese, 2016). The following recommendations include an
integrated framework of a learning ecosystem for the development and integration of the AI-
enabled learning technologies in the workplace.
AI Technologies-Enabled Learning Organization
As AI technologies advance, organizational leaders need to understand their impact on
people and adapt their enterprises into technology-enabled learning organizations (Cooper, 2019).
Integration of the AI-enabled technologies into learning and workforce development presents a
unique opportunity to refine the mission and purpose of technology-enabled learning in the
organization. This revision should include a clear definition of the purpose of learning and role
of AI technologies in the learning strategy (Senge, 1990). Development of the refined learning
strategies integrating AI technologies could be done using Garvin et al.’s (2008) building blocks
of a learning organization. Specifically, use of AI-enabled learning technologies providing
learner-centered, personalized learning experiences will encourage individual ownership and
self-regulation of the learning and development process, leading to a strong learning culture with
increased motivation, engagement, and learner self-efficacy (Dembo & Seli, 2016). Next,
refining the organization’s learning processes will be necessary in order to create continuous
learning experiences, which integrate AI-enabled learning programs with coaching, peer-to-peer
learning, and other forms of social learning grounded in Vygotsky's (1978) sociocultural learning
theory. In addition, concrete mechanisms that provide opportunities to transfer learning to the
workplace, such as project-based work and job assignments, will need to be created. Lastly, the
role of leadership supporting adoption of the innovative learning technologies, such as AI-
enabled learning, will be critical for the AI-technology enabled learning organization.
AI Technologies-Enabled Personalized Learning Offerings
Understanding an organization's learning needs should be an important step to target the
key areas of focus for the use of the AI-enabled learning solutions. The developers of the AI-
enabled learning offerings will need to employ human-centered design in the creation of such
programs and possess the skills to construct learning programs using AI technologies. According
to the study’s results and findings, formative assessment will play a key role in the AI-enabled
learning solutions, enabling personalization of the learning experiences. However, given the
continuous development and proliferation of the AI-enabled education technologies, learning
developers should be provided with opportunities to build skills necessary for the deployment of
these technologies (Mavroudi & Hadzilacos, 2016). Lastly, it will be important to ensure that
development of the AI-enabled learning programs is evaluated in the context of sociocultural
learning theory, which posits that individual development occurs first on the social and then on
the individual level, “between people (interpsychological) and then inside the child
(intrapsychological). This applies equally to voluntary attention, to logical memory, and to
the formation of concepts” (Vygotsky, 1978, p. 57). Therefore, it will be necessary to use a wide
variety of channels and formats, all connected and ranging from AI-enabled adaptive learning
programs to synchronous group-based learning activities and on-the-job learning.
AI-Enabled Measurement of Learning
Capturing and fostering organizational learning with analytics is one of the ways to
increase understanding of the impact of learning on the organization (Manyika et al., 2017b). AI
techniques will be able to forecast an individual learner’s optimal sequence of learning activities
and continuously adapt the sequence until the learner acquires the intended learning outcome.
Furthermore, AI-enabled technologies should provide reliable mechanisms to analyze learning
programs and measure learner performance, producing evidence of knowledge, capability
development, and transfer of learning to the workplace.
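One established technique for forecasting and adapting a learner’s sequence of activities is Bayesian Knowledge Tracing (BKT), in which the system maintains a probability that the learner has mastered a skill and updates it after each observed response. The sketch below uses illustrative parameter values rather than values fitted from any study data:

```python
# Simplified Bayesian Knowledge Tracing (BKT) update for a single skill.
# Parameter values are illustrative assumptions, not fitted estimates.
P_LEARN = 0.2   # probability the skill is learned at each practice opportunity
P_GUESS = 0.2   # probability of answering correctly without the skill
P_SLIP = 0.1    # probability of answering incorrectly despite having the skill

def bkt_update(p_known, correct):
    """Posterior probability of skill mastery after observing one response."""
    if correct:
        evidence = p_known * (1 - P_SLIP) + (1 - p_known) * P_GUESS
        posterior = p_known * (1 - P_SLIP) / evidence
    else:
        evidence = p_known * P_SLIP + (1 - p_known) * (1 - P_GUESS)
        posterior = p_known * P_SLIP / evidence
    # Account for the chance the learner acquires the skill on this step.
    return posterior + (1 - posterior) * P_LEARN

p = 0.1  # prior probability the learner already knows the skill
for correct in [True, True, False, True]:
    p = bkt_update(p, correct)
print(round(p, 3))  # updated mastery estimate drives the next activity choice
```

An adaptive system keeps practicing a skill until this estimate crosses a mastery threshold, then advances the learner to the next activity in the forecasted sequence.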
Figure 4 depicts an integrated, learner-centered AI-enabled learning ecosystem enabling
transfer of learning to the workplace. The ecosystem includes four components, which are
person, training, environment, and analytics. For the person component, focus areas include the
role of an individual in owning personal learning, leading to higher levels of motivation “as the
internal processes that give behavior its energy and direction” (Dembo & Seli, 2016, p. 10) and
personalization of training via intelligent tutors, enabling higher levels of engagement. For the
training component, personalization of a learning experience is achieved via AI-enabled
formative assessments but also requires its integration into other forms of social learning. For the
environment, focus areas include a well-defined purpose of learning, robust organizational
learning practices relevant for the AI-enabled learning organization, and supportive leaders
enabling a culture of continuous learning. Lastly, the analytics component, using AI
capabilities integrated into the other components of the learning ecosystem, focuses on the
tools and mechanisms to measure learner performance and transfer of learning to the workplace.
Figure 4
AI-Enabled Learning Ecosystem
Evaluation of the AI-Enabled Learning Ecosystem Implementation
In the evaluation of the organization’s use and adoption of the AI-enabled learning
ecosystem, the first step includes an assessment using the provided framework. After assessment
completion, benchmarking the results will provide a detailed understanding of the current state
of the organization’s learning ecosystem and its gaps. From this foundation, organizations
can create and follow a clear action plan and roadmap to become an AI-enabled learning
organization with demonstrated transfer of learning to the workplace.
To evaluate an organization’s current state as an AI-enabled learning organization, data
will need to be collected on the four components of the AI-enabled learning ecosystem
framework. The general criteria provided in Table 19 will guide the conversation with the
organizational stakeholders involved in the evaluation process.
Table 19
Evaluation Rubric for AI-Enabled Learning Organization
Category Criteria
Person Learners are offered personalized learning experiences enabled by AI
technologies.
Ownership of learning is defined as a key responsibility of a learner.
Role of the learner is clearly defined at the beginning of each learning program.
Training AI technologies drive formative assessments during the learning process
informing learner proficiency.
AI technologies forecast optimal learning sequences to gain learning
proficiency.
AI technology-enabled learning programs are blended with other forms of
learning programs, such as group-based projects, project-based assignments and
other social learning.
Organizational environment Organization has a clear purpose of learning using AI technologies.
Organization has a learning culture where leaders support use of AI-enabled
learning programs and foster continuous learning.
Organization has the operating model, processes, and practices for continuous
learning following completion of the AI-enabled learning programs.
Analytics Organization uses AI technologies to measure learner performance and predict
learning proficiency.
Organization uses AI technologies to measure transfer of learning to the
workplace.
Organization uses AI tools to measure, support, and continuously improve the
learning ecosystem.
Summary
The AI-enabled learning ecosystem framework comprises four components: person,
training, organizational environment, and analytics. Each component of the framework has a set
of focus areas intended to guide the development and adoption of AI technologies in an
organization in order to facilitate transfer of learning. The framework is a guiding tool that
sets expectations for the creation of an AI technology-enabled learning organization and the
adoption of AI tools in learning processes. The author sees this framework as a starting point
for evaluating an organization’s current state and developing a roadmap to address identified
gaps. The framework can also be used for continuous evaluation of the organization’s progress
toward becoming an AI technology-enabled learning organization.
Limitations and Delimitations
Study limitations represent weaknesses within a research design outside of the control of
the researcher that may influence study outcomes, whereas delimitations refer to the decisions
made by the researcher with regards to the research study (Ross & Bibler, 2019). There were a
few study limitations: non-equivalent group sampling, self-selection bias, non-response threat,
self-report and social desirability, and generalizability of the findings as a threat to external
validity of the study results. One internal validity threat was the lack of random participant
assignment to the treatment (AI-enabled learning program) and control (traditional learning
program) groups. Consequently, the posttest-only nonequivalent groups design used in this study
carried certain sampling biases. To mitigate potential sampling error, the two groups in the
study were created with the closest possible characteristics, such as employment within the
same organization, department, and job role, with learning content developed by the same subject
matter experts and instructional designers for both programs.
Another limitation of the study related to self-selection sampling bias by virtue of making
both programs available to participants in the organization. One way to mitigate this bias was to
use post-stratification to ensure study samples match the organization’s population (Kolenikov,
2016). The same post-stratification strategy using the R survey package was applied to the non-
response situation. A key difference was that post-stratification for self-selection bias used
calibrated weights to align the sample with the population, while non-response-adjusted weights
aligned the responding sample with the original sample (Kolenikov, 2016).
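The study applied post-stratification with the R survey package; the underlying weighting idea can nevertheless be sketched in a few lines of Python. The strata and proportions below are hypothetical, and the example simply computes weights that bring each stratum’s weighted share in line with its population share:

```python
from collections import Counter

# Hypothetical strata (e.g., job role); not the study's actual categories.
population_share = {"engineer": 0.5, "analyst": 0.3, "manager": 0.2}
sample = ["engineer"] * 30 + ["analyst"] * 25 + ["manager"] * 5  # 60 respondents

counts = Counter(sample)
n = len(sample)

# Post-stratification weight: population share divided by sample share.
weights = {stratum: population_share[stratum] / (counts[stratum] / n)
           for stratum in population_share}

# After weighting, each stratum's weighted share matches the population.
weighted_share = {s: weights[s] * counts[s] / n for s in population_share}
print(weights)
print(weighted_share)
```

Underrepresented strata (here, managers) receive weights above 1, and overrepresented strata receive weights below 1, which is how post-stratification counteracts self-selection and non-response bias.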
Hutchins et al. (2013) noted that related studies demonstrated mixed results for socially
desirable responding across different behaviors, and there is no evidence that it plays a role
in specific learning transfer situations, particularly in the context of self-reports
(Holtgraves, 2004). Furthermore, Hutchins et al. (2013) viewed the use of self-report data in
the LTSI as consistent with its design, which aims to measure individual perceptions of learning
transfer constructs as meanings enabling an individual to “interpret events, anticipate
outcomes, and respond with appropriate behavior” (p. 259).
The threat to external validity related to the generalizability of the study due to the
interaction of the treatment with selection, setting, and history (Creswell & Creswell, 2018).
The narrow characteristics of the participants, their organizational setting, and the time-bound
experiment limited the applicability of the study results to the organization under study.
According to Creswell and Creswell (2018), in such situations, generalization of the results
requires further research and additional experiments. In this explanatory sequential mixed
methods study, the control and treatment groups enabled a quasi-experimental approach to
examining the relationship of LTSI factors and intent to transfer, but random assignment was
excluded from the study.
Recommendations for Future Research
While the results of the study demonstrated a higher level of learner engagement,
motivation, and intent to transfer learning after completion of the AI-enabled learning program
than the traditional learning program, one of the main considerations for practice remains how to
effectively address the variability of learners, including their prior knowledge, level of
motivation, and learning-related emotions. Specifically, there is no clarity on how AI
technologies identify when learners become frustrated or confused and, at that point, offer
them the help they need (Padron-Rivera et al., 2018). The variety of AI technologies used to
make decisions around adaptivity and limited empirical studies (Kabudi et al., 2021) present
opportunities for future research for AI and emotionally intelligent systems.
Another area for future research relates to participant feedback about lack of social
interaction during the AI-enabled adaptive learning experience. While positive results have been
observed from experiments on the use of AI-enabled adaptive learning programs in public
education and private organizations, these programs have a serious omission in “putting
learners into a closed environment” (Hao, 2019, p. 26). As the research and product development
in this space continue, an in-depth assessment of the limitations of the current AI-enabled
learning technologies and their impact on the learning process will need further examination in
longitudinal studies to identify and mitigate any potential risks from such forms of learning.
Another area for future research relates to the potential impact of participants’
demographics on their perceptions of the factors that facilitate or hinder transfer of learning
following completion of the traditional or AI-enabled adaptive learning program. While the
study did not focus on these factors, the age distribution of the participants differed between
the two groups. Future research is needed to explore the impact of age and other demographic
characteristics on participants’ choices between the AI-enabled and traditional learning
programs and on their perceptions of learning transfer factors.
Lastly, the use of AI technologies in education, often in the form of intelligent tutoring
systems (ITS), is not new. From their earliest days, these systems aimed to mimic expert human
teaching (Burton & Brown, 1979) and to maximize the quality of learning while optimizing
learning efficiency (du Boulay, 2017). Critics of AI-enabled intelligent tutoring systems,
however, argue that these systems bring a “new behaviourism,” a contemporary version of
Skinner’s (1968) boxes (Watters, 2015). Nevertheless, the application of AI techniques takes
many forms and continues to grow, including recent developments in which such systems are
integrated into other advanced technologies becoming available to educators (du Boulay, 2017).
This study focused on the use of AI to provide adaptive learning experiences. AI is also applied
in other ways, such as virtual reality, augmented reality, and mixed reality enabled training,
among other techniques. Research developing a taxonomy of these new models of learning and
exploring the convergence of these technologies may lead to solutions that address their
current deficiencies.
Implications for Equity and Connection to the Rossier Mission
The mission of the USC Rossier School of Education is to prepare leaders to achieve
educational equity through practice, research, and policy. AI-enabled adaptive learning programs
present a potential model for using advanced technology to address the dual challenge of
learning efficiency and learning outcomes attainment. This study presented results and findings
which contribute to the emerging and fast-developing field of education technology powered by
AI and its promising role in the future of learning. Through powerful computers, mobile devices,
and other emerging technologies, the field of education powered by AI-enabled technologies has
an opportunity to provide access to learning and to meet the needs of every individual
irrespective of educational, cultural, ethnic, or socio-economic background. Furthermore,
high-tech organizations at the forefront of research in AI-enabled education technologies
should take the lead in providing other public and private sector organizations with access to
these technologies and in supporting their use and deployment in the workplace.
Conclusion
The Fourth Industrial Revolution (4IR) is a controversial topic among researchers, policy
makers, business leaders, and educators. Despite differing viewpoints on the macro-level impact
of the 4IR, a common denominator in all debates is change, the hallmark of an era of advanced
technologies reshaping every aspect of life. The 4IR is a revolution that converges
automation and information technologies, with a significant impact on the way people live
and work. The global workforce is directly affected, given that technologies like
Artificial Intelligence (AI) automate and augment professional lives and change job and skill
requirements. Education is becoming increasingly important as a powerful agent for good in an
unknown future, one likely dominated by AI, automation, and robotics, in which humans will
need to acquire new knowledge and skills 50 times faster than in the last 20 years
(McKinsey, 2020).
AI-enabled technologies are promising learning products that help accelerate the
learning process and achieve evidence-based learning outcomes. Furthermore, AI technologies
capable of providing personalized learning can meet individual learners where
they are. While these technologies largely shift the focus from learning content alone to the
needs of the individual learner, collective knowledge is what defines a learning organization
capable of continuous learning, innovation, and growth (Senge, 1990). Machine learning, as one
form of AI, could also help organizations not only measure the impact of learning but predict
the areas of learning with the most organizational value. The expansion of
intelligent tutors and their transformation into intelligent learning organizations could lead
to the emergence of a new AI-enabled learning quotient, marking a turning point in the learning
industry.
In 2016, Professor Hawking said at a Cambridge University conference, “In short, the
rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do
not yet know which.” Educators could think about this dilemma through the lens of Mandela’s
(2003) words: “Education is the most powerful weapon we can use to change the world.”
References
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision
Processes, 50, 179–211.
Al-Eisa, A., Furayyan, M., & Alhemoud, A. M. (2009). An empirical examination of the effects
of self-efficacy, supervisor support and motivation to learn on transfer intention.
Management Decision, 47, 1221–1244.
Aleven, V., Mclaughlin, E. A., Glenn, R. A., & Koedinger, K. R. (2013, September 17–21).
Active learners: Redesigning intelligent tutoring system to support self-regulated
learning [Paper presentation]. The 8th European Conference on Technology Enhanced
Learning, Tallinn, Estonia.
Aleven, V., McLaughlin, E. A., Glenn, R. A., & Koedinger, K. R. (2017). Instruction based on
adaptive learning technologies. In R. E. Mayer & P. Alexander (Eds.), Handbook of
research on learning and instruction (2nd ed., pp. 522-560). Routledge.
Amabile, T. M. (1988). A model of creativity and innovation in organizations. Research in
Organizational Behavior, 10, 123–167.
Anderson, R.C. (1977). The notion of schemata and the educational enterprise. In R.C.
Anderson, R.J. Spiro & W.E. Montague, (Eds.) Schooling and the acquisition of
knowledge (pp. 415-432). Lawrence Erlbaum Associates.
Anshari, M. (2020). Workforce mapping of fourth industrial revolution: Optimization to identity.
Journal of Physics: Conference Series, 1477, 072023. https://doi.org/10.1088/1742-
6596/1477/7/072023
Aoun, J. (2017). Robot-proof: Higher education in the age of artificial intelligence. The MIT
Press. https://doi.org/10.7551/mitpress/11456.001.0001
Appelbaum, S. H., & Goransson, L. (1997). Transformational and adaptive learning within the
learning organization: A framework for research and application. The Learning
Organization, 4(3), 115–128. https://doi.org/10.1108/09696479710182803
Aroyo, L., Dolog, P., Houben, G. J., Kravcik, M., Naeve, A., Nilsson, M., & Wild, F. (2006).
Interoperability in personalized adaptive learning. Journal of Educational Technology &
Society, 9(2), 4–18.
Astin, A. W. (1999). Student involvement: A developmental theory for higher education. Journal
of College Student Development, 40(5), 518–529.
Bahl, M., Cook, M., & Nerurkar, K. (2018). Relearning how we learn, from the campus to the
Workplace. Center For the Future of Work.
https://www.cognizant.com/whitepapers/relearning-how-we-learn-from-the-campus-to-
the-workplace-codex3921.pdf
Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future
research. Personnel Psychology, 41(2), 63–105.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory.
Prentice-Hall.
Bandura, A. (1988). Organizational application of social cognitive theory. Australian Journal of
Management, 13(2), 275–302. https://doi.org/10.1177/031289628801300210
Bandura, A. (1989). Social cognitive theory. In R. Vasta (Ed.), Annals of child development. Vol.
6: Six theories of child development (pp. 1-60). JAI Press.
Bandura, A. (1999). Social cognitive theory: An agentic perspective. Asian Journal of Social
Psychology, 2, 21–41. doi:10.1111/1467-839X.00024
Banerjee, P., Gupta, R., & Bates, R. (2017). Influence of organizational learning culture on
knowledge worker’s motivation to transfer training: Testing moderating effects of
learning transfer climate. Current Psychology, 36(3), 606–617.
https://doi.org/10.1007/s12144-016-9449-8
Bates, R. (2004). A critical analysis of evaluation practice: The Kirkpatrick model and the
principle of beneficence. Evaluation and Program Planning, 27(3), 341–347.
https://doi.org/10.1016/j.evalprogplan.2004.04.011
Bates, R., & Khasawneh, S. (2005). Organizational learning culture, learning transfer climate
and perceived innovation in Jordanian organizations. International Journal of Training
and Development, 9, 96–109. https://doi.org/10.1111/j.1468-2419.2005.00224.x
Bates, R., Holton, E. F., & Hatala, J. P. (2012). A revised learning transfer system inventory:
factorial replication and validation. Human Resource Development International, 15(5),
549–569. https://doi.org/10.1080/13678868.2012.726872
Becker, S. A., Freeman, A., Hall, C. G., Cummins, M., & Yuhnke, B. (2016). NMC Horizon
Report: 2016 K-12 Edition. Educause. https://www.learntechlib.org/p/173568/
Beer, M., Boselie, P., & Brewster, C. (2015). Back to the future: Implications for the field of
HRM of the multistakeholder perspective proposed 30 years ago. Human Resource
Management, 54(3), 427–438. https://doi.org/10.1002/hrm.21726
Beer, M., Finnstrom, M., & Schrader, D. (2016). The Great Training Robbery. Harvard Business
Review.
Bloom, B. (1984). The 2 sigma problem: The search for methods of group instruction as
effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16.
https://doi.org/10.3102/0013189X013006004
Blume, B. D., Ford, J. K., Baldwin, T. T., & Huang, J. L. (2009). Transfer of training: A meta-
analytic review. Journal of Management, 36(4), 1065–1105.
https://doi.org/10.1177/0149206309352880
Bron, R. (2012). Transfer of and for learning. A study on a new transfer component and its
influencing factors. PwC. https://essay.utwente.nl/62306/1/MSc_Bron_R._-
_S0173177.pdf
Brusilovsky, P., & Peylo, C. (2003). Adaptive and intelligent web-based educational systems.
International Journal of Artificial Intelligence in Education, 13, 156–169.
Bryant, G., Newman, A., & Stokes, P. (2013). Learning to adapt: A case for accelerating
adaptive learning in higher education. Tyton Partners.
https://tytonpartners.com/library/accelerating-adaptive-learning-in-higher-education/
Bryman, A. (2006). Paradigm peace and the implications of quality. International Journal of
Social Research Methodology, 9, 111–26.
Brynjolfsson, E., & McAfee, A. (2014). The profession of IT learning for the new digital age
profession. Communications of the ACM, 57(9), 29–31. https://doi.org/10.1145/2644230
Bughin, J., Hazan, E., Lund, S., Dahlström, P., Wiesinger, A., & Subramaniam, A. (2018). Skill
shift: Automation and the future of the workforce. McKinsey & Company.
https://www.mckinsey.com/featured-insights/future-of-work/skill-shift-automation-and-
the-future-of-the-workforce
Bureau of Labor Statistics, U.S. Department of Labor. (2021, September 23). Employment to
grow 7.7 percent from 2020 to 2030; 1.7 percent excluding COVID-19 recovery. The
Economics Daily. https://www.bls.gov/opub/ted/2021/employment-to-grow-7-7-percent-
from-2020-to-2030-1-7-percent-excluding-covid-19-recovery.htm
Burrow, J. (1996, November). Evaluation: Perception to transfer. Paper presented at the meeting
of the International Society for Performance Improvement, Research Triangle Park, NC.
Burton, R. R., & Brown, J. S. (1979). An investigation of computer coaching for informal
learning activities. International Journal of Man-Machine Studies, 11(1), 5–24.
Butler-Adam, J. (2018). The Fourth Industrial Revolution and education. South African Journal
of Science, 114(5/6 SE-Leader). https://doi.org/10.17159/sajs.2018/a0271
Calvert, G., Mobley, S., & Marshall, L. (1994). Grasping the learning organization. Training &
Development, 48(6), 38–44.
Carnegie Mellon University (2021). ACT-R [Workshop]. 28th Annual ACT-R Workshop.
Virtual. http://act-r.psy.cmu.edu/
Carnevale, A. P., Strohl, J., & Gulish, A. (2015). College is just the beginning: Employers’ role
in the $1.1 trillion postsecondary education and training system. Center on Education
and the Workforce, Georgetown University McCourt School of Public Policy.
Caspersen, J., Smeby, J. C., & Aamodt, P. O. (2017). Measuring learning outcomes. European
Journal of Education, 52(1), 20–30. https://doi.org/10.1111/ejed.12205
Charmaz, K. (2006). Constructing grounded theory. Sage.
Christensen, U. J. (2017). How to teach employees skills they don’t know they lack. Harvard
Business Review. https://hbr.org/2017/09/how-to-teach-employees-skills-they-dont-
know-they-lack
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Routledge Academic.
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304-1312.
Cook, D. A., Gelula, M. H., Dupras, D. M., & Schwartz, A. (2007). Instructional methods and
cognitive and learning styles in web-based learning: Report of two randomised trials.
Medical Education, 41(9), 897–905. https://doi.org/10.1111/j.1365-2923.2007.02822.x
Cooper, C. E. (2019). The impact of advanced technologies on the workplace and the workforce:
An evaluation study (Publication No. 13808928) [Doctoral dissertation, University of
Southern California]. ProQuest Dissertations Publishing.
Cosyn, E., Uzun, H., Doble, C., & Matayoshi, J. (2021). A practical perspective on knowledge
space theory: ALEKS and its data. Journal of Mathematical Psychology, 101, 102512.
https://doi.org/10.1016/j.jmp.2021.102512
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed
methods approaches. Sage.
Cuban, L. (2017). A Continuum on Personalized Learning: First Draft.
https://larrycuban.wordpress.com/2017/03/22/a-continuum-on-personalized-learning-
first-draft/
Davenport, T. H. (2019). China is catching up to the US on artificial intelligence research. MIT
Press. https://theconversation.com/china-is-catching-up-to-the-us-on-artificial-
intelligence-research-112119
Davis, N. (2016). What is the fourth industrial revolution?
https://www.weforum.org/agenda/2016/01/what-is-the-fourth-industrial-revolution/
Deloitte. (2019). Tech trends 2019: Beyond the digital frontier. Deloitte.
https://www2.deloitte.com/content/dam/Deloitte/fi/Documents/technology/DI_TechTren
ds2019.pdf
Deloitte China. (2019). Global development of AI-based education. Deloitte. deloitte-cn-tmt-
global-development-of-ai-based-education-en-191108.pdf
Denny, L. (2019). Heigh-ho, heigh-ho, it’s off to work we go: The Fourth Industrial Revolution
and thoughts on the future of work in Australia. Australian Journal of Labour
Economics, 22(2), 117–143.
Devos, C., Dumay, X., Bonami, M., Bates, R., & Holton, E. F., III. (2007). The Learning
Transfer System Inventory (LTSI) translated into French: Internal structure and predictive
validity. International Journal of Training and Development, 11(3).
Dhawan, S., & Batra, G. (2021). Artificial intelligence in higher education: Promises, perils, and
perspective. OJAS, July-December, 11–22.
du Boulay, B. (2016). Artificial intelligence as an effective classroom assistant. IEEE Intelligent
Systems, 31(6), 76–81. https://doi.org/10.1109/MIS.2016.93
Dziuban, C., Howlin, C., Moskal, P., Johnson, C., Parker, L., & Campbell, M. (2018). Adaptive
learning: A stabilizing influence across disciplines and universities. Online Learning
Journal, 22(3), 7–39. https://doi.org/10.24059/olj.v22i3.1465
Edmondson, A., & Lei, Z. (2014). Psychological safety: The history, renaissance, and future of
an interpersonal construct. Annual Review of Organizational Psychology and
Organizational Behavior, 1(1), 23–43. https://doi.org/10.1146/annurev-orgpsych-
031413-091305
Edmondson, A., & Moingeon, B. (1998). From organizational learning to the learning
organization. Management Learning, 29(1), 5–20.
https://doi.org/10.1177/1350507698291001
The Education Commission. (2018). Learning Generation Report.
https://report.educationcommission.org/report/
Faggella, D. (2017). Examples of artificial intelligence in education. Tech Emergence.
https://www.techemergence.com/examples-of-artificialintelligence-in-education/
Falletta, S. (1998). Evaluating training programs: The four levels Donald L. Kirkpatrick,
Berrett-Koehler Publishers, San Francisco, CA, 1996, 229 pp. [Review of the book
Evaluating Training Programs: The Four Levels, by D. L. Kirkpatrick & J. D.
Kirkpatrick]. The American Journal of Evaluation, 19(2), 259–261.
https://doi.org/10.1016/s1098-2140(99)80206-9
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical
power analysis program for the social, behavioral, and biomedical sciences. Behavior
Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
Finger, M., & Brand, S. B. (1999). The concept of the “learning organization” applied to the
transformation of the public sector: Conceptual contributions for theory development. In
M. Easterby-Smith, L. Araujo, & J. Burgoyne (Eds.), Organizational learning and the
learning organization: Developments in theory and practice (pp. 130–156). Sage.
https://doi.org/10.4135/9781446218297.n8
Fisher, R. A. (1925). Statistical methods for research workers. Oliver & Boyd.
Ford, J., Yelon, S. L., & Billington, A. Q. (2011). How much is transferred from training to the
job?: The 10% delusion as a catalyst for thinking about transfer. Performance
Improvement Quarterly, 24(2), 7-24. doi:10.1002/piq.20108
Fowler, F. J. (2014). Survey research methods (5th ed.). Sage.
Frezzon, D. (2017). The role of technology in the education of the future.
https://www.weforum.org/agenda/2017/05/science-of-learning/
Gartner. (2020). 2021 HR priorities survey. https://www.gartner.com/en/newsroom/press-
releases/2020-11-16-gartner-survey-reveals-hr-leaders-number-one-priority-in-2021
Garvin, D. (1993). Building a learning organization. Harvard Business Review.
https://hbr.org/1993/07/building-a-learning-organization
Garvin, D., Edmondson, A., & Gino, F. (2008). Is yours a learning organization? Harvard
Business Review. https://hbr.org/2008/03/is-yours-a-learning-organization
Gleason, N. (Ed.). (2018). Higher education in the era of the Fourth Industrial Revolution.
Palgrave Macmillan. https://doi.org/10.1007/978-981-13-0194-0
Grossman, R., & Salas, E. (2011). The transfer of training: What really matters. International
Journal of Training & Development, 15(2), 103-120. doi:10.1111/j.1468-
2419.2011.00373.x
Gudivada, V. N. (2016). Cognitive computing: Theory and applications. Handbook of Statistics,
36, 3-38. https://doi.org/10.1016/bs.host.2016.07.004
Hamari, J., Koivisto, J., & Sarsa, H. (2014, January 6–9). Does gamification work? A literature
review of empirical studies on gamification [Paper presentation]. 47th Hawaii
International Conference on System Sciences, Waikoloa, Hawaii, United States.
https://doi.org/10.1109/HICSS.2014.377
Hao, K. (2019). China has started a grand experiment in AI education. It could reshape how the
world learns. MIT Technology Review. https://www.technologyreview.com/2019/08/02/
131198/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-
how-the/
Harris, K., & Kimson, A. (2018). Labor 2030: The collision of demographics, automation and
inequality. Bain & Company. https://www.bain.com/insights/labor-2030-the-collision-of-
demographics-automation-and-inequality/
Haskell, R. E. (2004). Transfer of learning. In C. D. Spielberger (Ed.), Encyclopedia of applied
psychology (pp. 575–586). Elsevier Science. https://doi.org/10.1016/B0-12-657410-
3/00834-5
Herder, E., Sosnovsky, S., & Dimitrova, V. (2017). Adaptive intelligent learning environments.
In E. Duval, M. Sharples, & R. Sutherland (Eds.), Technology enhanced learning:
Research themes (pp. 109-114). Springer. https://doi.org/10.1007/978-3-319-02600-8_10
Herrero-Pineda, P., Quesada, C., & Ciraso, A. (2015). Strategies and instruments to evaluate
transfer of learning: Reflections from practice. Conference paper.
https://www.researchgate.net/publication/282321789_Strategies_and_instruments_to_eva
luate_transfer_of_learning_Reflections_from_practice
Holtgraves, T. (2004). Social desirability and self-reports: Testing models of socially desirable
responding. Personality and Social Psychology Bulletin, 30, 161–172.
Holton, E. F. (1996). The flawed four‐level evaluation model. Human Resource Development
Quarterly, 7(1), 5–21. https://doi.org/10.1002/hrdq.3920070103
Holton, E. F., Bates, R., Seyler, D., & Carvalho, M. A. (1997) Construct validation of a transfer
climate instrument. Human Resource Development Quarterly, 8, 95-113.
Holton, E. F. (2000). What's really wrong: diagnosis for learning transfer system change. In E. F.
Holton, T. T. Baldwin, & S. S. Naquin (Eds.), Managing and changing learning transfer
systems, Advances in Developing Human Resources, 2(4), 7-22.
Holton, E. F., Bates, R. A., & Ruona, W. E. A. (2000). Development of a Generalized Learning
Transfer System Inventory. Human Resource Development Quarterly, 11, 333-360.
Holton, E. F. (2005). Holton’s evaluation model: New evidence and construct elaborations.
Advances in Developing Human Resources, 7(1), 37-54. doi:10.1177/1523422304272080
Holton, E. F., & Bates, R. A. (2005). LTSI Learning Transfer System Inventory: Administrators
guide. Louisiana State University.
Holton, E. F., Bates, R. A., & Bookter, A. (2007). Convergent and divergent validity of the
Learning Transfer System Inventory. Human Resource Development Quarterly, 18, 385-
419.
Huber, G. P. (1991). Organizational learning: The contributing processes and the literatures.
Organization Science, 2, 554-559.
Hulleman, C. S., Barron, K. E., Kosovich, J. J., & Lazowski, R. A. (2016). Student motivation:
current theories, constructs, and interventions within an expectancy-value framework. In
The Springer Series on Human Exceptionality (pp. 241–278). Springer International
Publishing Switzerland. https://doi.org/10.1007/978-3-319-28606-8_10
Hutchins, H. M., Nimon, K., Bates, R., & Holton, E. (2013). Can the LTSI predict transfer
performance? Testing intent to transfer as a proximal transfer of training outcome.
International Journal of Selection and Assessment, 21(3), 251-264.
Johnson, L., Adams Becker, S., Estrada, V., & Martín, S. (2013). Technology outlook for
STEM+ education 2013–2018: An NMC horizon project sector analysis. The New Media
Consortium.
Johnson, S. D. (1995). Transfer of learning. The Technology Teacher, 54(7), 33–35.
Jogulu, U. D., & Pansiri, J. (2011). Mixed methods: A research design for management doctoral
dissertations. Management Research Review, 34(6), 687–701.
https://doi.org/10.1108/01409171111136211
Kabudi, T., Pappas, I., & Olsen, D. (2021). AI-enabled adaptive learning systems: A systematic
mapping of the literature. Computers and Education: Artificial Intelligence, 2, 1–12.
Kalyuga, S. (2006). Assessment of learners’ organised knowledge structures in adaptive learning
environments. Applied Cognitive Psychology, 20(3), 333–342.
https://doi.org/10.1002/acp.1249
Kara, N., & Sevim, N. (2013). Adaptive learning systems: Beyond teaching machines.
Contemporary Educational Technology, 4(2), 108–120. Advance online publication.
https://doi.org/10.30935/cedtech/6095
Kaushik, V., & Walsh, C. A. (2019). Pragmatism as a research paradigm and its implications for
social work research. Social Sciences, 8(9). https://doi.org/10.3390/socsci8090255
Kerka, S. (1995). The learning organization. Myths and realities. ERIC Publications.
Kerr, P. (2016). Adaptive Learning. ELT Journal, 70(1), 88–93.
https://doi.org/10.1093/elt/ccv055
Kirkpatrick, D. L. (1996). Evaluating training programs: The four levels (1st ed.). Berrett-
Koehler.
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick’s four levels of training evaluation.
Association for Talent Development.
Kirkwood, A., & Price, L. (2014). Technology-enhanced learning and teaching in higher
education: What is ‘enhanced’ and how do we know? A critical literature review.
Learning, Media and Technology, 39(1), 6–36.
https://doi.org/10.1080/17439884.2013.770404
Kolenikov, S. (2016). Post-stratification or non-response adjustment? Survey Practice, 9(3).
https://doi.org/10.29115/SP-2016-0014
Kulkarni, A. (2019). AI in education: Where is it now and what is the future? Lexalytics.
https://www.lexalytics.com/lexablog/ai-in-education-present-future-ethics
Lewis, P. (2018). Globalizing the liberal arts: Twenty-first-century education. In N. W. Gleason
(Ed.), Higher education in the era of the Fourth Industrial Revolution (pp. 15–38).
Springer Singapore. https://doi.org/10.1007/978-981-13-0194-0_2
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage Publications.
Liu, M., McKelroy, E., Corliss, S. B., & Carrigan, J. (2017). Investigating the effect of an
adaptive learning intervention on students’ learning. Educational Technology Research
and Development, 65(6), 1605–1625. https://doi.org/10.1007/s11423-017-9542-1
Locke, E. A., & Latham, G. P. (1990). A theory of goal-setting and task performance. Prentice
Hall.
Lombardo, M. M., & Eichinger, R. W. (1996). The Career Architect Development Planner.
Lominger.
Loucks, J., Davenport, T., & Schatsky, D. (2018). State of AI in the enterprise (2nd ed.). Deloitte
Insights. https://www2.deloitte.com/content/dam/Deloitte/co/Documents/about-
deloitte/DI_State-of-AI-in-the-enterprise-2nd-ed.pdf
Maarouf, H. (2019). Pragmatism as a supportive paradigm for the mixed research approach:
Conceptualizing the ontological, epistemological, and axiological stances of pragmatism.
International Business Research, 12(9), 1. https://doi.org/10.5539/ibr.v12n9p1
Machin, M. A. & Fogarty, G. J. (2004). Assessing the antecedents of transfer intentions in a
training context. International Journal of Training and Development, 8(3), 222-236.
Manyika, J., Chui, M., Miremadi, M., Bughin, J., George, K., Willmott, P., & Dewhurst, P.
(2017a, January). Harnessing automation for a future that works. McKinsey Global.
http://www.mckinsey.com/global-themes/digital-disruption/harnessing-automation-for-a-
future-that-works
Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Ko, R., & Sanghvi, S.
(2017b). Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and
wages. https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-
gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
Mavroudi, A., & Hadzilacos, T. (2016). Historical overview of adaptive e-learning approaches
focusing on the underlying pedagogy. In Y. Li, M. Chang, M. Kravcik, E. Popescu, R.
Huang, N. C. Kinshuk (Eds.), State-of-the-art and future directions of smart learning (pp.
115–122). https://doi.org/10.1007/978-981-287-868-7_13
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the
Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI
Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
McCarthy, J. (2007, November 12). What is Artificial Intelligence? Computer Science
Department, Stanford University. http://www-formal.stanford.edu/jmc/whatisai.pdf
McCombs, B. L., & Whisler, J. S. (1997). The learner-centered classroom and school:
Strategies for increasing student motivation and achievement. Jossey-Bass.
McGinnis, D. (2020). What is the Fourth Industrial Revolution? The 360 Blog from Salesforce.
https://www.salesforce.com/blog/what-is-the-fourth-industrial-revolution-4ir/
McLeod, S. A. (2019, July 10). What does effect size tell you? Simply Psychology.
https://www.simplypsychology.org/effect-size.html
Melnyk, L., Kubatko, O., Dehtyarova, I., Matsenko, O., & Rozhko, O. (2019). The effect of
industrial revolutions on the transformation of social and economic systems. Business
Perspectives, 17(4), 381–391. https://doi.org/10.21511/ppm.17(4).2019.31
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and
implementation (4th ed.). Jossey-Bass.
Moreau, K. A. (2017). Has the new Kirkpatrick generation built a better hammer for our
evaluation toolbox? Medical Teacher, 39(9), 999–1001.
https://doi.org/10.1080/0142159X.2017.1337874
Netemeyer, R. G., Bearden, W. O., & Sharma, S. (2003). Scaling procedures: Issues and
applications. Sage.
Noe, R. A., Hollenbeck, J. R., Gerhart, B., & Wright, P. M. (2006). Human resource
management: Gaining a competitive advantage (6th ed.). McGraw-Hill Irwin.
Oliveira, M., Barreiras, A., Marcos, G., Ferreira, H., Azevedo, A., De Carvalho, C.V. (2017).
Collecting and analysing learners’ data to support the adaptive engine of OPERA, a
learning system for mathematics. In P. Escudeiro, S. Zvacek, B. M. McLaren, J.
Uhomoibhi, & G. Costagliola (Eds.), Proceedings of the 9th International Conference on
Computer Supported Education (pp. 631-638). CSEDU 2017.
Organization for Economic Co-operation and Development. (2018a). Skills for jobs report.
http://www.oecd.org/els/emp/Skills-for-jobs-brochure-2018.pdf
Organization for Economic Co-operation and Development. (2018b). Using educational
research and innovation to address inequality and achievement gaps in education.
https://www.oecd.org/education/ceri/EDUCERICD_2018_11.pdf
Organization for Economic Co-operation and Development. (2019). Education at a Glance.
OECD Indicators. https://www.oecd-ilibrary.org/docserver/f8d7880d-
en.pdf?expires=1573964067&id=id&accname=guest&checksum=F212AA60710539334
3D19961E8447DBF
Örtenblad, A. (2001). On differences between organizational learning and learning organization.
The Learning Organization, 8(3), 125–133. https://doi.org/10.1108/09696470110391211
Oxman, S., & Wong, W. (2014). White paper: Adaptive learning systems.
http://kenanaonline.com/files/0100/100321/DVx_Adaptive_Learning_White_Paper.pdf
Padron-Rivera, G., Joaquin-Salas, C., Patoni-Nieves, J. L., & Bravo-Perez, J. C. (2018). Patterns
in poor learning engagement in students while they are solving mathematics exercises in
an affective tutoring system related to frustration. In J. Martínez-Trinidad, J. Carrasco-
Ochoa, J. Olvera-López, & S. Sarkar (Eds.) Pattern Recognition. MCPR 2018. Lecture
Notes in Computer Science, 10880. Springer. https://doi.org/10.1007/978-3-319-92198-
3_17
Parker, K. (2013). A better hammer in a better toolbox: considerations for the future of
programme evaluation. Medical Education, 47(5), 440–442.
https://doi.org/10.1111/medu.12185
Pawson, R., & Tilley, N. (1997). Realistic evaluation. Sage Publications.
Penprase, B. (2018). The Fourth Industrial Revolution and higher education. In N. W. Gleason
(Ed.), Higher education in the era of the Fourth Industrial Revolution (pp. 207–229).
Springer Singapore. https://doi.org/10.1007/978-981-13-0194-0_9
Perkins, D. N. & Salomon, G. (1988). Teaching for transfer. Educational Leadership, 46(1), 22–
32.
Phillips, J. (1997). Handbook of training evaluation and measurement methods (3rd ed.). Gulf.
Piaget, J. (1959). The language and thought of the child (Vol. 5). Psychology Press.
Pressey, S. L. (1933). Psychology and the new education. Harper.
Pugliese, L. (2016). Adaptive learning systems: Surviving the storm. Educause.
https://er.educause.edu/articles/2016/10/adaptive-learning-systems-surviving-the-storm
Qu, M., Lin, Y., & Halder, P. (2019). Analysis of Chinese pupils’ intents in using bioenergy
through the application of structural equation modeling approach. Journal of Cleaner
Production, 231, 386–394. https://doi.org/10.1016/j.jclepro.2019.05.242
Rifai, N., Rose, T., McMahon, G. T., Saxberg, B., & Christensen, U. J. (2018). Learning in the
21st century: Concepts and tools. Clinical Chemistry, 64(10), 1423–1429.
https://doi.org/10.1373/clinchem.2018.292383
Regnault, A., Willgoss, T., & Barbic, S. (2018). Towards the use of mixed methods inquiry as
best practice in health outcomes research. Journal of Patient-Reported Outcomes, 2(1),
1–4. https://doi.org/10.1186/s41687-018-0043-8
Renjen, P. (2020). Industry 4.0: At the intersection of readiness and responsibility. Deloitte
Global’s annual survey on business’s preparedness for a connected era.
https://www2.deloitte.com/ch/en/pages/risk/articles/industry-4-0-intersection-of-
readiness-and-responsibility.html
Robinson, S. B., & Firth Leonard, K. (2019). Designing quality survey questions. Sage.
Roblek, V., Meško, M., & Krapež, A. (2016). A complex view of industry 4.0. SAGE Open,
6(2). https://doi.org/10.1177/2158244016653987
Ross, P. T., & Bibler Zaidi, N. L. (2019). Limited by our limitations. Perspectives on Medical
Education, 8(4), 261–264. https://doi.org/10.1007/s40037-019-00530-x
Rouiller, J. Z., & Goldstein, I. L. (1993). The relationship between organizational transfer climate and positive transfer of training. Human Resource Development Quarterly, 4, 377–390. http://dx.doi.org/10.1002/hrdq.3920040408
Roulston, K. (2014). Reflective interviewing: A guide to theory and practice. SAGE Publications. https://doi.org/10.4135/9781446288009
Salkind, N. J. (2014). Chapter 6: Just the truth. In Statistics for people who (think they) hate
statistics. (pp. 105-128). Sage.
Sawyer, R. K. (Ed.) (2014). The Cambridge handbook of the learning sciences. Cambridge
University Press. https://doi.org/10.1017/CBO9781139519526
Scaduto, A., Lindsay, D., & Chiaburu, D. S. (2008). Leader influences on training effectiveness:
Motivation and outcome expectation processes. International Journal of Training &
Development, 12(3), 158-170.
Schechter, C. (2008). Organizational learning mechanisms: Its meaning, measure, and
implications for school improvement. Educational Administration Quarterly, 44(2), 155–
186. https://doi.org/10.1177/0013161X07312189
Schneider, K. (2014). Transfer of learning in organizations. Springer International Publishing. https://doi.org/10.1007/978-3-319-02093-8
Schoeb, G., Lafrenière-Carrier, B., Lauzier, M., & Courcy, F. (2021). Measuring transfer of training: Review and implications for future research. Canadian Journal of Administrative Sciences, 38(1), 17–28. https://doi.org/10.1002/cjas.1577
Schreiber, L. M., & Valle, B. E. (2013). Social constructivist teaching strategies in the small group classroom. Small Group Research, 44(4), 395–411. https://doi.org/10.1177/1046496413488422
Schwab, K. (2016). The Fourth Industrial Revolution. Portfolio Penguin.
Seale, C., & Silverman, D. (1997). Ensuring rigour in qualitative research. European Journal of Public Health, 7(4), 379–384. https://doi.org/10.1093/eurpub/7.4.379
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. Random House.
Seufert, S., & Meier, C. (2016). From eLearning to digital transformation: A framework and implications for L&D. International Journal of Advanced Corporate Learning, 9(2), 27–33. https://doi.org/10.3991/ijac.v9i2.6003
Sharma, G. (2019). Discussing the benefits and challenges of adaptive learning and education apps. https://elearningindustry.com/benefits-and-challenges-of-adaptive-learning-education-apps-discussing
Sharratt, L., & Fullan, M. (2009). Realization: The change imperative for deepening district-wide reform. Corwin.
Skinner, B. F. (1958). Teaching machines: From the experimental study of learning come
devices which arrange optimal conditions for self-instruction. Science, 128(3330), 969–
977. https://doi.org/10.1126/science.128.3330.969
Skinner, B. F. (1968). The Technology of Teaching. Appleton-Century-Crofts.
Steiner, G. (2001). Transfer of Learning, Cognitive Psychology of. In International
Encyclopedia of the Social & Behavioral Sciences (pp. 15845–15851). Elsevier.
https://doi.org/10.1016/b0-08-043076-7/01481-9
Sternberg, R. J. (2003). A broad view of intelligence. Consulting Psychology Journal, 55(3),
139–154. https://doi.org/10.1037/1061-4087.55.3.139
Stolovitch, H., & Keeps, E. (2004). Training ain’t performance. American Society for Training
and Development Press.
Stolurow, L. M., & Davis, D. (1965). Teaching machines and computer-based systems. In R.
Glaser (Ed.), Teaching machines and programmed learning II: Data and directions (pp.
162–212). National Education Association of the United States.
Strack, R., Carrasco, M., Kolo, P., Nouri, N., Priddis, M., & George, R. (2021). The future of
jobs in the era of AI. https://www.bcg.com/publications/2021/impact-of-new-
technologies-on-jobs
Sue, V. M., & Ritter, L. A. (2012). Conducting online surveys (2nd ed.). Sage Publications.
Szijarto, B., & Cousins, J. B. (2019). Making space for adaptive learning. The American Journal
of Evaluation, 40(2), 160–176. https://doi.org/10.1177/1098214018781506
Tashakkori, A., & Creswell, J. W. (2007). The new era of mixed methods. Journal of Mixed
Methods Research, 1, 1–6.
Tashakkori, A., & Teddlie, C. (2003). Handbook of mixed methods in social & behavioral research. Sage.
Technavio. (2018). Artificial intelligence market in the US education sector 2018-2022.
https://www.researchandmarkets.com/reports/4613290/artificial-intelligence-market-in-
the-us
Thille, C. (2013). Surfing the tsunami: Faculty engagement with the open learning initiative
(Publication No. AA13592387) [Doctoral dissertation, University of Pennsylvania].
ProQuest Dissertations Publishing.
Training Industry. (2021). Research report: Deconstructing 70:20:10.
https://trainingindustry.com/research-report-deconstructing-70-20-10/
Tsang, E. W. K. (1997). Organizational learning and the learning organization: A dichotomy
between descriptive and prescriptive research. Human Relations, 50(1), 73–89.
https://doi.org/10.1177/001872679705000104
Tsatsou, D., Pomazanskyi, A., Hortal, E., Spyrou, E., Leligou, H. C., Asteriadis, S., Vretos, N.,
& Daras, P. (2018, June 27–30). Adaptive learning based on affect sensing [Paper
presentation]. International Conference on Artificial Intelligence in Education, London,
United Kingdom. https://doi.org/10.1007/978-3-319-93846-2_89
Tucker, C. (2019, February 1). Blended learning: A bridge to personalization.
https://catlintucker.com/2019/02/blended-learning-a-bridge-to-personalization/
Tziner, A., Fisher, M., Senior, T., & Weisberg, J. (2007). Effects of trainee characteristics on
training effectiveness. International Journal of Selection and Assessment, 15(2), 167-174.
UNESCO Institute for Information Technologies in Education. (2013). Activity Report 2012-
2013. http://verstkapro.ru/
U.S. Department of Education Office of Educational Technology. (2013). Expanding evidence
approaches for learning in a digital world. U.S. Department of Education.
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems,
and other tutoring systems. Educational Psychologist, 46(4), 197–221.
https://doi.org/10.1080/00461520.2011.611369
Velada, R., Caetano, A., Michel, J. W., Lyons, B. D., & Kavanagh, M. J. (2007). The effects of
training design, individual characteristics and work environment on transfer of training.
International Journal of Training and Development, 11(4), 282–294.
https://doi.org/10.1111/j.1468-2419.2007.00286.x
Veletsianos, G. (2016). Emergence and innovation in digital learning. Athabasca University
Press.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes.
Harvard University Press.
Vygotsky, L. S. (1987). Thinking and speech. In R. W. Rieber & A. S. Carton (Eds.), The
collected works of L.S. Vygotsky, Volume 1: Problems of general psychology (pp. 39–
285). Plenum Press.
Wang, S., Christensen, C., Cui, W., Tong, R., Yarnall, L., Shear, L., & Feng, M. (2020). When adaptive learning is effective learning: Comparison of an adaptive learning system to teacher-led instruction. Interactive Learning Environments, 1–11. https://doi.org/10.1080/10494820.2020.1808794
Wang, X., Sun, N., Lee, Y., & Wagner, B. (2017). Does active learning contribute to transfer intent among 2-year college students beginning in STEM? The Journal of Higher Education, 88(4), 593–618. https://doi.org/10.1080/00221546.2016.1272090
Watkins, K. E., & Marsick, V. J. (1993). Sculpting the learning organization. Jossey-Bass.
Watters, A. (2015). Education technology and Skinner's box. http://hackeducation.com/2015/02/10/skinners-box
Wick, C. W., Jefferson, A., & Pollock, R. V. (2015). The six disciplines of breakthrough
learning: How to turn training and development into business results (3rd ed.). John
Wiley & Sons.
World Economic Forum. (2018). The future of jobs report.
https://www.weforum.org/reports/the-future-of-jobs-report-2018
World Economic Forum. (2019). Who pays for the reskilling revolution? Investment to safeguard
America’s at-risk workers likely to cost government $29 billion.
https://www.weforum.org/press/2019/01/who-pays-for-the-reskilling-revolution-
investment-to-safeguard-america-s-at-risk-workers-likely-to-cost-government-29-billion/
World Economic Forum. (2020). The future of jobs report.
https://www.weforum.org/reports/the-future-of-jobs-report-2020
Xie, H., Chu, H. C., Hwang, G. J., & Wang, C. C. (2019). Trends and development in
technology-enhanced adaptive/personalized learning: A systematic review of journal
publications from 2007 to 2017. Computers & Education, 140, Article 103599.
https://doi.org/10.1016/j.compedu.2019.103599
Yamkovenko, B., Holton, E. F. III, & Bates, R. A. (2007). The Learning Transfer System
Inventory (LTSI) in Ukraine: The cross-cultural validation of the instrument. Journal of
European Industrial Training, 31(5), 377-401.
http://dx.doi.org/10.1108/03090590710756819
Yamkovenko, B. (2009). Dispositional influences on the intent to transfer learning: A test of a structural equation model (Publication No. 159) [Doctoral dissertation, Louisiana State University]. LSU Doctoral Dissertations.
Yardley, S., & Dornan, T. (2012). Kirkpatrick’s levels and education ‘evidence.’ Medical
Education, 46(1), 97–106. https://doi.org/10.1111/j.1365-2923.2011.04076.x
Appendix A: Permission for Figure 1 Baldwin and Ford’s (1988) Training Transfer Model
Appendix B: Learning Transfer System Inventory Scale Definitions, Descriptions,
Loadings, and Reliability Coefficients of 89 items, version 3
Adapted from pp. 344-346 in Holton, E. F., Bates, R. A., & Ruona, W. E. (2000). Development of
a generalized learning transfer system inventory. Human Resource Development Quarterly,
11(4), 333-360.
LTSI Scale Descriptions
Each scale is either specific to a particular training program or general; the number of items and the reliability coefficient (Cronbach’s α) are given in parentheses.

Trainee characteristics scales

Learner readiness (specific; 4 items; α = .73)
Definition: The extent to which individuals are prepared to enter and participate in a training program.
Description: This factor addresses the degree to which the individual had the opportunity to provide input prior to the training, knew what to expect during the training, and understood how training was related to job-related development and work performance.

Performance self-efficacy (general; 4 items; α = .76)
Definition: An individual’s general belief that they are able to change their performance when they want to.
Description: The extent to which individuals feel confident and self-assured about applying new abilities in their jobs and can overcome obstacles that hinder the use of new knowledge and skills.

Motivation scales

Motivation to transfer learning (specific; 4 items; α = .83)
Definition: The direction, intensity, and persistence of effort toward utilizing in a work setting the skills and knowledge learned in training.
Description: The extent to which individuals are motivated to utilize learning in their work. This includes the degree to which individuals feel better able to perform, plan to use new skills and knowledge, and believe new skills will help them perform more effectively on the job.

Transfer effort-performance expectations (general; 4 items; α = .81)
Definition: The expectation that effort devoted to transferring learning will lead to changes in job performance.
Description: The extent to which individuals believe that applying skills and knowledge learned in training will improve their performance. This includes whether an individual believes that investing effort to utilize new skills has made a difference in the past or will affect future productivity and effectiveness.

Performance-outcomes expectations (general; 5 items; α = .83)
Definition: The expectation that changes in job performance will lead to outcomes valued by the individual.
Description: The extent to which individuals believe the application of skills and knowledge learned in training will lead to recognition they value. This includes the extent to which organizations demonstrate the link between development, performance, and recognition; clearly articulate performance expectations; recognize individuals when they do well; reward individuals for effective and improved performance; and create an environment in which individuals feel good about performing well.

Work environment scales

Feedback/performance coaching (general; 4 items; α = .70)
Definition: Formal and informal indicators from an organization about an individual’s job performance.
Description: The extent to which individuals receive constructive input, assistance, and feedback from people in their work environment (peers, employees, colleagues, managers, etc.) when applying new abilities or attempting to improve work performance. Feedback may be formal or informal cues from the workplace.

Supervisor/manager support (specific; 6 items; α = .91)
Definition: The extent to which managers support and reinforce the use of learning on the job.
Description: This includes managers’ involvement in clarifying performance expectations after training, identifying opportunities to apply new skills and knowledge, setting realistic goals based on training, working with individuals on problems encountered while applying new skills, and providing feedback when individuals successfully apply new abilities.

Supervisor/manager sanctions (specific; 3 items; α = .63)
Definition: The extent to which individuals perceive negative responses from managers when applying skills learned in training.
Description: This includes when managers oppose the use of new skills and knowledge, use techniques different from those taught in training, do not assist individuals in identifying opportunities to apply new skills and knowledge, or provide inadequate or negative feedback when individuals successfully apply learning on the job.

Peer support (specific; 4 items; α = .83)
Definition: The extent to which peers reinforce and support the use of learning on the job.
Description: This includes the degree to which peers mutually identify and implement opportunities to apply skills and knowledge learned in training, encourage the use of or expect the application of new skills, display patience with difficulties associated with applying new skills, or demonstrate appreciation for the use of new skills.

Resistance/openness to change (general; 6 items; α = .85)
Definition: The extent to which prevailing group norms are perceived by individuals to resist or discourage the use of skills and knowledge acquired in training.
Description: This includes the work group’s resistance to change, willingness to invest energy to change, and degree of support provided to individuals who use techniques learned in training.

Personal outcomes (positive) (specific; 3 items; α = .69)
Definition: The degree to which applying training on the job leads to outcomes that are positive for the individual.
Description: Positive outcomes include increased productivity and work effectiveness, increased personal satisfaction, additional respect, a salary increase or reward, the opportunity to further career development plans, or the opportunity to advance in the organization.

Personal outcomes (negative) (specific; 4 items; α = .76)
Definition: The extent to which individuals believe that not applying new skills and knowledge learned in training will lead to negative outcomes.
Description: Negative outcomes include reprimands, penalties, peer resentment, too much new work, or the likelihood of not getting a raise if newly acquired skills are not utilized.

Ability scales

Opportunity to use learning (specific; 4 items; α = .70)
Definition: The extent to which trainees are provided with or obtain resources and tasks on the job enabling them to use the skills taught in training.
Description: This includes an organization providing individuals with opportunities to apply new skills, resources needed to use new skills (equipment, information, materials, supplies), and adequate financial and human resources.

Perceived content validity (specific; 5 items; α = .84)
Definition: The extent to which the trainees judge the training content to accurately reflect job requirements.
Description: This factor addresses the degree to which skills and knowledge taught are similar to performance expectations as well as to what the individual needs to perform more effectively. It also addresses the extent to which instructional methods, aids, and equipment used in training are similar to those used in an individual’s work environment.

Personal capacity for transfer (specific; 4 items; α = .68)
Definition: The extent to which individuals have the time, energy, and mental space in their work lives to make the changes required to transfer learning to the job.
Description: This factor addresses the extent to which individuals’ workload, schedule, personal energy, and stress level facilitate or inhibit the application of new learning on the job.

Transfer design (specific; 4 items; α = .85)
Definition: The extent to which training has been designed to give trainees the ability to transfer learning to job application and the training instructions match the job requirements.
Description: The extent to which the training program is designed to clearly link learning with on-the-job performance through the use of clear examples, methods similar to the work environment, and activities and exercises that clearly demonstrate how to apply new knowledge and skills.
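The reliability coefficient α reported for each scale above is Cronbach’s alpha, which compares the variances of the individual items with the variance of the summed scale score. As an illustration only (not part of the LTSI instrument), the following Python sketch computes alpha for a hypothetical four-item Likert scale; the response matrix is invented for demonstration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses (5 respondents, 4 Likert items scored 1-5),
# e.g. a scale such as learner readiness
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])
print(round(cronbach_alpha(scores), 2))  # → 0.91
```

Values around .70 or higher, like those reported for most LTSI scales, are conventionally taken to indicate acceptable internal consistency.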
Appendix C: LTSI Survey
Part I. Demographic questions
a. What kind of learning program did you complete?
[Adaptive learning program personalized to my needs / E-learning followed by a project-based assignment]
b. How do you currently describe yourself?
[Female / Male / Transgender / None of these]
c. What is your age?
[Less than 26 / 26-35 / 36-45 / 46-55 / 56-65 / 66 years or older]
© Copyright 2000, E. F. Holton III & R. Bates, all rights reserved, version 4.
Part II. LTSI factor questions
Part III. Intent to Transfer Questions
Thank you for participating in this survey!
If you’ve completed an AI-enabled adaptive learning program and are interested in participating in a follow-up interview for this study, please submit your contact details via this Qualtrics link. All
interviews will be strictly confidential.
139
Appendix D: Invitation Email to Participate in a Survey
You are invited to complete a survey as part of Yelena Mammadova’s doctoral study on
the efficacy of learning programs using different instructional methods. The purpose of the study
is to understand the perceived barriers and catalysts for the transfer of learning after the
completion of the learning program. Results will be aggregated and used to identify
recommendations on how to improve the efficacy of the learning programs.
Please note that Yelena Mammadova’s role in this study is that of an academic researcher and should not be confused with her role within the organization. You can find more information
about this study in the Information Sheet attached to this email.
The survey should take about 15-20 minutes to complete. There are no right or wrong
answers. Your honest responses will be the most helpful to the study. You may skip any question
you do not wish to answer, and you may stop the survey at any time. All responses are
anonymous. If you agree to participate in the survey, please click “Continue” below.
140
Appendix E: Information Sheet
Information Sheet for Exempt Studies
University of Southern California
Rossier School of Education
3470 Trousdale Pkwy, Los Angeles, CA 90089
INFORMATION SHEET FOR EXEMPT RESEARCH
STUDY TITLE: Future of Learning: Intelligent Learning Quotient in the Future of Work
PRINCIPAL INVESTIGATOR: Yelena Mammadova
FACULTY ADVISOR: Helena Seli, PhD
______________________________________________________________________________
You are invited to participate in a research study. Your participation is voluntary. This document
explains information about this study. You should ask questions about anything that is unclear to
you.
PURPOSE
The purpose of this study is to examine the efficacy of various learning methods. The study is framed as a program evaluation and aims to collect your experiences and perceptions of the catalysts and barriers to transfer of learning following completion of the learning program. You are invited as a possible participant because you participated either in the AI-enabled adaptive learning program tailored to your individual needs, launched in May 2021 in your organization, or in a blended learning program consisting of an e-learning module and a project-based assignment.
PARTICIPANT INVOLVEMENT
If you decide to take part, you will be asked to answer 3 demographic questions and 52 questions on your perceptions of the catalysts and barriers to transfer of learning after your participation in the learning program. It will take you approximately 15-20 minutes to complete. If you completed
the AI-enabled learning program, you will also be invited to participate in interviews via an
independent link at the end of the survey.
PAYMENT/COMPENSATION FOR PARTICIPATION
You will not be compensated for your participation.
CONFIDENTIALITY
The members of the research team and the University of Southern California Institutional
Review Board (IRB) may access the data. The IRB reviews and monitors research studies to
protect the rights and welfare of research subjects.
The data collected for this study via the survey will be anonymous. The data will be stored in a
password protected computer and will be destroyed after 3 years.
INVESTIGATOR CONTACT INFORMATION
If you have any questions about this study, please contact Yelena Mammadova at
mammadov@usc.edu or 908.821.6903 and Dr. Helena Seli at helena.seli@rossier.usc.edu or
213-740-6742.
IRB CONTACT INFORMATION
If you have any questions about your rights as a research participant, please contact the
University of Southern California Institutional Review Board at (323) 442-0114 or email
irb@usc.edu.
142
Appendix F: Interview Protocol
Respondent Type: Employees in the organization who completed the AI-enabled adaptive learning program
Introduction to the Interview:
Thank you for agreeing to participate in my study. I appreciate the time that you have set aside to
meet with me and answer my questions.
Before we get started, I want to remind you that I am a student at the University of Southern California conducting a study that examines the effect of AI-enabled adaptive learning programs on upskilling and reskilling of the global workforce. Allow me to share the definition of an AI-enabled adaptive learning program: a learning experience that addresses an individual’s unique needs through just-in-time feedback and adaptation of content powered by Artificial Intelligence. I am particularly interested in the individual experiences and effects of these programs. I am speaking to you today because
you have been identified as someone who has participated in these programs. My study does not
aim to evaluate your learning outcomes. Rather, I am trying to learn more about your experience,
opinion, and recommendations for this learning program and its potential application in the
organizational context. So, I am strictly wearing the hat of a researcher today.
This interview is confidential. What that means is that your real name will not be shared with
anyone. The data for this study will be compiled into a report and while I do plan on using some
of what you say as direct quotes, none of this data will be directly attributed to you. I will use a
pseudonym to protect your confidentiality and will try my best to de-identify any of the data I
gather from you. Also, I will keep the data in a password protected computer and all data will be
destroyed after 3 years. Your participation in this interview is completely voluntary. If at any time you need to stop or take a break, please let me know. You may also withdraw your
participation at any time without consequence. Also, I am happy to provide you with a copy of
my final paper if you are interested.
I have planned this interview to last no longer than 45 minutes. To facilitate my note-taking, I would like to ask your permission to use a recorder during our interview. The recording is solely to help me capture your perspectives accurately and will not be shared with anyone else.
May I have your permission to record our conversation? (wait for yes/no answer). Please let me
know if you have any questions or concerns about this study before we start (answer questions, if
any). Let me turn the recorder on and we can get started (turn the recorder on).
1. Tell me about your recent experience of going through the AI-enabled learning program.
(opening question)
2. What do you think prompted you to start the program? (RQ4: learner
experience/motivation)
3. What did you think of the design of this learning program? (RQ4: learner
experience/training design)
How did the program design support or hinder your learning experience? (probing question)
4. How confident were you in the acquired skills from this program? (RQ4: learner
experience/self-efficacy)
5. Tell me whether you planned/plan to use the new skills on the job after completion of the program. (RQ4: learner experience/intent to transfer)
How did you want to use your new skills? (probing question)
6. Tell me whether you have used your new skills gained from this learning program. (RQ4:
learner experience/intent to transfer)
How did you use your new skills acquired through this program in your job? (probing question)
7. If you have used your new skills in your job, in what ways do you think the
organizational environment may have impacted use of skills on the job following completion
of this specific program? (RQ4: learner experience/environment)
8. Tell me about your experience with leadership support or lack of it to use new skills
gained through this learning program. (RQ4: learner experience/environment)
9. Tell me about organizational processes or practices that may have supported or hindered your use of new skills following completion of this specific program. (RQ4: learner experience/environment)
10. How would you describe the differences between this type of learning program and other
types of learning programs you’ve experienced? (RQ4: learner experience)
Conclusion to the interview:
Thank you for your participation in my study. I greatly appreciate your willingness to meet with
me for this interview and to share your thoughts about your experiences, which were extremely
informative and useful. If I find myself with a follow-up question, can I contact you via email?
Thank you so very much for your time and effort that made this research study possible.
Appendix G: Invitation Email to Participate in an Interview
You are invited to participate in an interview as part of Yelena Mammadova’s doctoral
study on the efficacy of learning programs using different instructional methods. The purpose of
the study is to understand the learners’ experiences, perceived barriers and catalysts for the
transfer of learning after the completion of the learning program. Results will be aggregated and
used to identify recommendations on how to improve the efficacy of the learning programs.
Please note that Yelena Mammadova’s role in this study is that of an academic researcher and should not be confused with her role within the organization. You can find more information about this
study in the Information Sheet attached to this email.
The interview should take about 45 minutes to complete. There are no right or wrong
answers. Your honest responses will be the most helpful to the study. You may skip any question
you do not wish to answer, and you may stop the interview at any time. All responses are
confidential. If you agree to participate in the interview, please click “Continue” below.
Abstract
AI, or Artificial Intelligence, and its recent advancements in machine learning, natural language processing, and robotics, are changing workforce skill requirements. These changes require upskilling and reskilling of the global workforce at unprecedented pace and scale. However, traditional training programs to develop skills produce low rates of learning transfer to the workplace. This study sought to understand how advanced learning technologies could address the problem of learning transfer. Specifically, this paper explored person, training, and environment factors that influence transfer of learning to the workplace after completion of the AI-enabled adaptive learning program. Holton et al.’s (2000) Learning Transfer System Inventory was used as the conceptual and methodological framework for the study, along with Bandura’s (1986) social cognitive theory as a theoretical model. A mixed methods approach was used to collect and analyze data. Participants of the AI-enabled adaptive learning program and traditional learning program took a survey to report their perceptions of individual, training, and environment-related factors which facilitated or hindered transfer of learning after completion of the program. Eight participants of the AI-enabled adaptive learning program participated in a follow-up interview to explore factors facilitating or hindering transfer of learning. The data suggested that an AI-enabled adaptive learning program was perceived by learners as a more effective solution to gain desired learning outcomes. The participants also reported on how organizational culture and environment supported or hindered learner transfer. Specifically, the data suggested that there is a gap in organizational practices to support learning transfer to the workplace. Recommendations for developing a holistic learning ecosystem for introduction and adoption of the AI-enabled adaptive learning programs were proposed.
Asset Metadata
Creator: Mammadova, Yelena (author)
Core Title: Intelligent Learning Quotient
School: Rossier School of Education
Degree: Doctor of Education
Degree Program: Organizational Change and Leadership (On Line)
Degree Conferral Date: 2022-05
Publication Date: 03/05/2022
Defense Date: 02/07/2022
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tags: adaptive learning; AI learning; learning transfer; OAI-PMH Harvest
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Seli, Helena (committee chair); Canny, Eric (committee member); Datta, Monique (committee member)
Creator Email: mammadov@usc.edu; ym@yelenamammadova.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-oUC110768192
Unique Identifier: UC110768192
Legacy Identifier: etd-MammadovaY-10414
Document Type: Dissertation
Rights: Mammadova, Yelena
Type: texts
Source: 20220308-usctheses-batch-915 (batch); University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis, or other graduate work according to U.S. copyright law. Electronic access is provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright. The original signature page accompanying the original submission of the work to the USC Libraries is retained by the USC Libraries, and a copy of it may be obtained by authorized requesters contacting the repository e-mail address given.
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email: cisadmin@lib.usc.edu