INFORMATION TO USERS
This manuscript has been reproduced from the microfilm master. UMI
films the text directly from the original or copy submitted. Thus, some
thesis and dissertation copies are in typewriter face, while others may be
from any type of computer printer.
The quality of this reproduction is dependent upon the quality of the
copy submitted. Broken or indistinct print, colored or poor quality
illustrations and photographs, print bleedthrough, substandard margins,
and improper alignment can adversely affect reproduction.
In the unlikely event that the author did not send UMI a complete
manuscript and there are missing pages, these will be noted. Also, if
unauthorized copyright material had to be removed, a note will indicate
the deletion.
Oversize materials (e.g., maps, drawings, charts) are reproduced by
sectioning the original, beginning at the upper left-hand corner and
continuing from left to right in equal sections with small overlaps. Each
original is also photographed in one exposure and is included in reduced
form at the back of the book.
Photographs included in the original manuscript have been reproduced
xerographically in this copy. Higher quality 6” x 9” black and white
photographic prints are available for any photographs or illustrations
appearing in this copy for an additional charge. Contact UMI directly to
order.
UMI
A Bell & Howell Information Company
300 North Zeeb Road, Ann Arbor MI 48106-1346 USA
313/761-4700 800/521-0600
THE USE OF EVALUATION METHODS IN EMPLOYEE TRAINING
COURSES PROVIDED TO BUSINESS AND INDUSTRY BY A
COMMUNITY COLLEGE: A CASE STUDY
by
Teresita Castro-McGee
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Education)
May 1995
Copyright 1995 Teresita Castro-McGee
UMI Number: 9617120
UMI Microform 9617120
Copyright 1996, by UMI Company. All rights reserved.
This microform edition is protected against unauthorized
copying under Title 17, United States Code.
UMI
300 North Zeeb Road
Ann Arbor, MI 48103
UNIVERSITY OF SOUTHERN CALIFORNIA
THE GRADUATE SCHOOL
UNIVERSITY PARK
LOS ANGELES, CALIFORNIA 90007
This dissertation, written by
Teresita Castro-McGee
under the direction of her Dissertation
Committee, and approved by all its members,
has been presented to and accepted by The
Graduate School, in partial fulfillment of
requirements for the degree of
DOCTOR OF PHILOSOPHY
Date
DISSERTATION COMMITTEE
Chairperson
DEDICATION
This study is dedicated to Lauren and Stephen, the
loves of my life. To Rosie and Paul, to Sharon, and Chuck,
for their undying love and unconditional support. To Jose
Miguel for the last-minute jolt of inspiration. Lastly, to
the memory of Dr. Penny Richardson whose love for the adult
learner has forever been imprinted in my heart and in my
life.
TABLE OF CONTENTS

Page

DEDICATION ....................................... ii

Chapter

I. INTRODUCTION ................................. 1

Background
Purpose of the Study
Research Questions
Significance of the Study
Definitions of Terms
Context
Program Philosophy
Instructors
TQM Curriculum
Program Clients
Students
Organization of the Study

II. REVIEW OF THE LITERATURE ..................... 21

Contract Education
Evaluation Theory
Historical Review
Evaluation of Training Programs
Kirkpatrick Model
Reaction Level
Learning Level
Behavior Level
Results Level
CIPP Model
Context Evaluation
Input Evaluation
Process Evaluation
Product Evaluation
TVS Approach
Step 1 - Situation
Step 2 - Intervention
Step 3 - Impact
Step 4 - Value
IPO Model
Input Evaluation
Process Evaluation
Output Evaluation
Outcomes Evaluation

III. METHODOLOGY .................................. 38

Case Study Approach
Research Questions
Pilot Study
Validity and Reliability
Construct Validity
Internal Validity
External Validity
Reliability
Data Collection
Interviews
Observation
Participant Observation
Review of Documents
Analysis of Data

IV. FINDINGS ...................................... 49

Evaluation Methods
Class Evaluation
Paper/Pencil Testing
Pre/Post Testing
Subjective Observation
Application Assignments
Team Projects
90-Day Post-Training Assessment
Evaluation Levels
Reaction Level
Learning Level
Behavior Level
Results Level
Factors Determining Selection of Evaluation
Student Evaluation
Team Projects
90-day Post-training Assessment

V. CONCLUSIONS ................................... 75

Interpretation of Results
Question #1
Question #2
Program Recommendations
Implications
Suggestions for Additional Research

BIBLIOGRAPHY ...................................... 92

APPENDIXES ........................................ 99

A. Conversation Guide ........................... 100
B. Introduction Letter .......................... 102
C-F. Class Evaluation Form ........................ 104
G. Summary of Results ........................... 109
H. Review Test .................................. 111
I-J. Curriculum ................................... 113
K-M. Post-Training Assessment ..................... 116
Teresita Castro-McGee William Maxwell
ABSTRACT
THE USE OF EVALUATION METHODS IN EMPLOYEE TRAINING
COURSES PROVIDED TO BUSINESS AND INDUSTRY BY A
COMMUNITY COLLEGE: A CASE STUDY
This study was designed to investigate the methods
and levels of evaluation implemented by a community
college contractual education program to evaluate its own
practices. Two questions guided the study. These were:
(1) How were employee training programs (contractual
education programs) provided by the community college
evaluated? What methods were employed? What levels of
evaluation were implemented? (2) Why were specific
methods and levels of evaluation implemented? What were
the factors contributing to their selection?
Qualitative research methodology was pursued in
the form of a descriptive case study, which was conducted
over a period of 23 months. Data were collected through
interviews, observation, participant observation, and
analysis of documents.
Findings indicated that seven evaluation methods
were selected by the program staff to evaluate program
effectiveness and student learning. Analysis of data
also suggested that evaluation practices were conducted
at four levels of evaluation. The levels were: Reaction
Level, Learning Level, Behavior Level, and Results Level.
Although many methods were pursued, there appeared to be
a lack of direction and uniformity mostly due to the lack
or absence of agreement on what the education programs
were supposed to accomplish. This was complicated by the
program's attempt for "customer satisfaction" and the
identification of three "customers": the participants,
the companies, and the funding source.
A factor that contributed to the selection of
evaluation methods used for assessment was the program's
"business" philosophy and application of TQM (Total
Quality Management) principles focused on establishing a
feedback loop to ensure client satisfaction. Another
factor was the need to utilize problem-specific
assessment tools to resemble job conditions in order to
foster the transference of training concepts to the
workplace. Lastly, compliance with Standards of
Accountability by the funding source also affected the
selection of levels of methods of evaluation.
CHAPTER I
INTRODUCTION
Background
Community colleges emerged in the early 1900s in
response to the needs brought on by an ever-changing
American society. Historically, and closely related to the
forces driving this case-study, one of the social forces
that motivated the birth and, later, the expansion of the
American community college system was the need to provide
a well trained work force for local communities (Cohen &
Brawer, 1987).
A century later, Americans were again faced with the
rapid shift in economic conditions. As they approached the
21st century, Americans were experiencing the end of an era
of industrialization and had become a post-industrial
society.
The changing demands from careers and jobs pressured
the American work force into the acquisition of new skills
or modification of the old ones to insure survival in a
society that has been labeled by many as a "knowledge,"
"service," and "information" society (Keller, 1983).
As Americans have experienced the explosion of
technological and demographic changes, educational systems
have been forced to search for ways to keep abreast with
the resulting challenges. Because of the close ties to
their communities, community colleges have been in the
forefront in meeting the demands of America's changing
society (Cohen & Brawer, 1987).
Faced with such economic transformation, community
colleges have been looking at cooperative education
programs to explore linkages with systems, such as the
business community. Deegan and Drisko (1985) reported that
contractual education was an emerging force in cooperative
education efforts. Geber (1987) further asserted "competition,
demographic changes and technological shifts," made
contractual education a "natural merge" between colleges
and the external community of business and industry.
Authors agree that programs with industry meet the
overall community college mission and goals to provide
community services (Feldman, 1985; Parnell, 1990; Phillips,
1991). Nevertheless, there are critics who question
whether efforts by community colleges to serve the business
community will further dilute the dwindling services
offered to the more traditional students and to the
community in general (Geber, 1987; O'Banion, 1991).
Together with the emerging opportunities to train
the American work force, community colleges have been
forced to meet the challenges that have arisen.
Among other things, and as one moves to the central issues
of this study, questions were raised regarding the
evaluation of contractual programs (O'Banion, 1991).
This study was designed to answer questions
investigating procedures that contractual education programs
implement to evaluate themselves. In the section that
follows, the purpose of the study is stated.
Purpose of the Study
This investigation was undertaken with the purpose
of discovering and understanding the evaluation practices
used by the contractual education program of a community
college to assess its programs offered to business and
industry. This investigator's professional experience of
the past 14 years as a business and industry instructor for
the public sector indicated that few employee training
programs availed themselves of systematic evaluation
procedures.
Interested in discovering whether the same practice
was replicated when the training provider was "a more
traditional educational system," such as a community
college program, this descriptive case study was designed
to answer questions regarding the program's evaluation
practices. The next section states the research questions
that guided this study.
Research Questions
This case study was guided by the following research
questions:
1. How were employee training programs (contractual
educational programs) provided by the community college
evaluated? What methods were employed? What levels of
evaluation were implemented?
2. Why were specific methods and levels of evalua
tion implemented? What were the factors contributing to
their selection?
Significance of the Study
A review of the literature indicated that evaluation
of training programs by training providers was inconsistent
at best and totally absent in most cases (Carnevale &
Schulz, 1990; Holcomb, 1993; McMahon & Carter, 1990; Rossi
et al., 1979). Quite the contrary, instruction at the
community colleges has been systematically evaluated,
primarily through the use of quizzes and student
examinations (Cohen & Brawer, 1987).
This study proposed to investigate how a traditional
educational institution, such as a community college, would
conduct evaluation of its programs offered to
nontraditional students in nontraditional settings such as business and
industry. It was hoped that the findings of this case
study would help establish a model of the evaluation of
programs resulting from the merger of two systems, business
and education, with totally different traditions and
practices.
Definitions of Terms
For the purpose of this study, the terms contractual
education, contract education, and employee training
programs are used interchangeably. Contract education
refers to an agreement in which a business, a government
agency, or a community organization contracts directly with
the college for the provision of instruction to its
employees, clients, or members (Suchorski, 1987).
The Study of Contractual Education Programs in the
California Community Colleges (Report to the California
Legislature, 1986), hereafter referred to as the Study,
defines contractual classes as determined by Title V of the
California Administrative Code (Sections 55170, 58250,
58251, 58253) as "classes offered by a Community College
District in fulfillment of a contract with a public or
private agency, a corporation, an association or any other
body or person" (p. 6).
The terms clients, company, accounts, and contracts
are used interchangeably. They refer to business
organizations contracting with the community college to provide
instruction.
Additionally, the term clients is used
interchangeably by different groups within the study. Instructors
used the term clients to refer to (a) students or
participants receiving instruction, (b) the company contracting
instruction, and (c) the college staff contracting individual
services.
Administrators, additionally, use the term clients
to refer to the State of California. In the following
section, program clients will be further explored.
To insure confidentiality of the program and program
participants, all names and data sources jeopardizing
anonymity have been altered. Moreover, any information
indicating geographical parameters which could lead to
identification of the program has been modified.
Context
A community college in Southern California reported
as having one of the largest contractual educational
programs for business and industry (Study, 1986) was
selected as the site for this descriptive case study. The
college's contractual education program formed part of the
Vocational Education Department; however, since 1985 it has
informally developed an almost autonomous identity as the
college's Center for Professional Education (CPE). (To
protect the Center's confidentiality, all identifying
information, including the names, have been altered.)
A recent newspaper article reported the CPE as "the
largest provider of contract education . . . among the
nation's community colleges." The same article further
defined the CPE as follows:
Funded through grants from the State's Employment
Training Panel (ETP), the Center has helped more
than 6,000 companies train 16,000 employees to use
new technologies. . . .
Companies are not required to pay a fee to receive
help from the Center, but must guarantee that
employees enrolled in retraining courses will
complete them.
The Center works mainly with aerospace firms to
retrain workers.
Although the Center is a division of (name)
College, it does not concentrate on training
students. Instead it markets its services to
companies in and outside of the city where it is located
(p. 3).
The CPE offers a variety of courses to adult
learners in and out of the immediate community. This
particular investigation focused on the Center's "TQM" (Total
Quality Management) project. Accordingly, only data
pertaining to the CPE's TQM project were selected for analysis.
Program Philosophy
Respondents viewed the CPE as a business partnership
among the college, the business community, and the State of
California. Administrators and instructors consistently
commented, "We are not like those people at the college," "We
are in the real world, here," "This is real business." They
proclaimed adherence to a "Business Philosophy" as opposed to
an "Academia Philosophy."
Evidence of a "Business Philosophy" could be observed
in the CPE's choice of physical location (downtown, away from
the campus; the program has since relocated to a new facility), the
language in their documents, their perception of students as
"clients" or "customers" and their reference to contracting
companies as "accounts," "clients," or "contracts."
The "business" philosophy is explicitly stated on page
1 of the Instructor's Manual. The Manual reported the CPE's
"Mission Statement" as follows:
Our Mission is to continually improve the
efficiency, productivity and quality of the California
workforce. We strive to improve our services, and
our resources, continuously, in order to bring
pride to our customers, our employees and the
members of our community.
Furthermore, the Instructor's Manual stated the CPE's
"Quality Policy" as follows:
The [CPE] firmly believes that Total Quality is the
essential ingredient of our business principles.
It demands we focus on the vital part of our
operation: the customer. By maintaining this focus, we
will always be able to accomplish the aspects we
strive for in our Mission. (p. 1)
The CPE's business orientation was observed through
their focus on "client/customer satisfaction."
Administrators reported:
"Our goal is to provide good service to the business
community."
"We are very proud of our contribution to the
business development of our community."
"One of our goals is to make clients happy. We need
to know what they are thinking and feeling about us so that
we can make the necessary changes to meet their needs."
Instructors
Reflecting the program's philosophy, the CPE teaching
staff are referred to as "instructors" or "trainers." An
administrator explained:
All our instructors are people from business and
industry. Most of all, we want them to be familiar
with how things are and what is expected of them in
business. This is not like teaching students at
the college. They need to know how to deal with
people in business because we have to be able to
serve the needs of our customers. Our clients are
not like the college's students, they don't have to
be there.
All instructors reported autonomy in the performance
of their instructional tasks. No one reported knowing
exactly how other instructors were teaching a particular
topic. An instructor stated, "I think I have an idea of
what and how others are teaching, but I really could not say
for sure." Another instructor said, "I have no idea" (what
others are doing). Yet another, asserted, "It is entirely
possible that we are all doing different things. I don't
think so, but it is entirely possible." Lastly, one more
instructor stated, "Every now and then we have meetings,
then you can pick up hints or tips, but that doesn't happen
very often."
Instructors reported that autonomy in the performance
of their job allowed them to draw "from what has worked
before." Additionally, they reported, it gave them the
freedom to "customize" according to the conditions faced.
They all reported understanding that the curriculum "had to
be covered."
Administrators' comments echoed those of the
instructors. They reported, "It doesn't matter how it [curriculum]
gets covered, but it must be covered," "We don't want to cut
down people's creativity by setting specific rules for
teaching; we want to draw from people's strengths."
TQM Curriculum
The TQM curriculum was developed by the college's
staff in accordance with parameters established by "9
primes" (Principal Aerospace Industries). The selection of
curriculum material was guided by the interest to provide
small companies the technology necessary to meet
requirements for contracts with large aerospace corporations. The
State of California, Employment Training Panel (ETP), as the
funding source, was responsible for providing final approval
for implementation of the curriculum as designed by the
college.
Staff involved in curriculum development reported
adherence to the general guidelines from the "9 primes" but
primarily "drew" from their own experience as professionals
in the field to select and develop materials for the
curriculum. Developers reported using "whatever we could
get our hands on," "our common sense," and "judgment" in
their selection of curriculum materials.
Moreover, developers felt pressured for time as
deadlines imposed by the State approached. One instructor
reported, "Some stuff was selected knowing full well that it
would need to be changed later on. Other areas were
incomplete, but we had to beat the clock." They also
reported pressure, needing to "come up with a blanket
curriculum to fit just about everyone's needs."
The curriculum was designed to be taught in 16-20
three-hour weekly classes (depending on class member
classification), held at different locations. Sites for
instruction included the CPE's classroom space, as well as
"on-site" at company-provided locations.
Hours of instruction varied. This researcher
observed one three-hour session held after hours (5:30-
8:30 p.m.). A session observed at another location was held
between the hours of 8:30 a.m. and 11:30 a.m.
Instructors reported, "We teach whenever the company
is willing to fit us in." Some instructors held other
"full-time" jobs, and scheduling required consideration of
"other commitments." Regarding scheduling, administrators
declared, "We would like to have all instruction conducted
during employee regular working hours. We encourage it, but
the company has to agree."
The Instructor's Manual specified, "The instructor
will present the training material in the pre-arranged order
designed by the CPE." An administrator declared, "We don't
care how they teach their class, but we want them to follow
the order, because it becomes a headache with make-up
classes." However, another administrator reported, "All
instructors have the responsibility to relate the curriculum
materials to the company they are working with. They have
to customize." Consequently, all instructors reported
"jumping around" in the curriculum sequence. One justifica
tion was stated as, "You never know where the class will
take you."
Additionally, all instructors reported needing to
"supplement" the curriculum provided. A review of On-Site
Materials (December 1992) revealed that for some class
sessions, instructors were provided with little more than
the topic to be covered. Sources for supplementing the
curriculum were quoted as "books," "articles," "magazines,"
and "stuff from other classes."
Interviews with administrators revealed the existence
of curriculum "modules for instruction." These "modules"
provided description of class sessions with respect to
subject content, time frame, objectives, and competencies.
However, at the time of this study, these modules for
instruction were not part of the curriculum materials
provided to instructors. These modules appeared to be
applied more for administrative purposes in compliance with
State requirements than for instructional purposes.
Some instructors reported "never having seen" the
modules mentioned. Others reported "having seen or heard"
about them, but considered them "irrelevant to what goes on
in the classroom."
A review of the modules for instruction revealed that
most written objectives and competencies lacked the
specificity and precision to assist in curriculum design and
subsequently in the evaluation process. None of the
objectives and competencies reviewed met what Mager (1984)
reported as the three essential characteristics of
instructional objectives: (1) Performance, (2) Conditions, and (3)
Criteria.
Rather, the objectives and competencies reviewed
appeared to be statements of generalities. The following
citations have been selected as representative of objectives
and competencies provided by the modules of instruction:
Objective:
Provide the student with a general overview of TQM
as a way of life within the company. Give students
the basic concepts & principles upon which TQM is
based. Describe and develop the five phases of
TQM. Introduce students to the problem-solving
process.
Competency:
The student will be able to identify individual,
departmental, and company customers. Students will
be able to identify their own company and
departments as to how they fit into the five phases of
TQM and offer ideas to improve or modify their
actual status. The students will be proficient in
defining and analyzing problems.
Objective:
Provide the students with the concept of
personality styles and how they relate in the workplace.
Give students a firm understanding of the
definition of leadership and the qualities of effective
leadership. To prepare students for the change
required to evolve into an environment of
continuous improvement.
Competency:
The students will be able to identify their own
personality style and how it relates to
communication and leadership in the work environment. The
students will learn to employ leadership techniques
in the work environment to empower and enable
employees. The students will be prepared to deal
with the emotional and psychological changes
required for continuous improvement.
Objective:
Provide the tools needed to prepare the student to
work with others as part of teams rather than
groups. Give the students tips and practical
examples to communicate effectively with other
members of the team. Make students aware of the
different types of meetings, their purpose and
effectiveness.
Competency:
The student will be able to work comfortably in a
team integrated by different people with different
styles. Participants will be aware of the
synergistic results obtained by the Team.
Communications will be enhanced by making participants focus
on individual's contribution to the Team goals
rather than Department goals. Students will be
able to conduct effective meetings, where
objectives, time, and action items are constantly
monitored.
Although most modules presented objectives and
competencies in a general manner as indicated above, some
modules specified performance such as the following: "The
student will be able to collect, analyze, and use variable
data. . . . They will be able to determine the required
statistical tool on each case by systematic use of the
problem-solving process." "The student will be able to set
up the appropriate charts to a particular operation
depending on sample size, run size, and variability of the sample
size." "The student will be able to interpret patterns in
all control charts introduced." "The student will be able
to present senior staff specific problems and their
solutions."
The inconsistency in the composition of the
objectives and competencies was prevalent throughout the modules of
instruction. Although some attempts at specificity could
be identified, most objectives and competencies lacked
precision for measurability.
Most significantly, there was no evidence in the
documents reviewed or verbal reports that the selection of
curriculum objectives and competencies was "linked" to a
specific needs analysis. Rather, curriculum design appeared
to be directed by the general needs of the State of
California as presented through a request for proposal (RFP). The
absence of a specific needs analysis endangered the
precision of objectives and competencies; this circumstance
impacted curriculum design and would eventually interfere
with the evaluation process.
Program Clients
Instructors' Clients
As indicated earlier (Definitions of Terms), the term
"client," which assumed different meanings, was used inter
changeably by different groups within the study to define
different populations. As stated by an instructor and an
administrator, "this creates confusion," and "sometimes a
conflict, because we don't know whom we are supposed to be
serving first."
The Instructor's Manual (Section 1.0, Internal and
External Customers for Instructors/Trainers), states that:
1.1 EXTERNAL
a. Direct. The students/participants of our
training, coaching & motivating efforts.
b. Indirect. The management or employers of
the students whom are going to evaluate the
results of our efforts as trainers.
1.2 INTERNAL
a. Permanent. The CEU (Certification
Enrollment Unit) is the instructor's main
internal customer. Instructors send CEU
a tangible product: The completed
attendance sheets filled out correctly
and in a timely fashion. (Roster and
SOST forms)
b. Occasional. Document Control. The
function that does corrections,
improvements, or additions to the training
curriculum and/or manual based on
instructors' input. (Instructor's
Manual, p. 6)
In summary, instructors' clients included the
students/participants, the organizations employing
students/participants, and the college's staff through
representatives of the CEU and Document Control. It is this
variety of clients that prompts instructors to report,
"Sometimes it is hard to figure out which one is more
important. Do I pay attention to what the students need?
Do I pay attention to what the company needs, even though it
is not what the student needs? Or do I just do what the
college wants me to do, because that's what the State wants
them to do?"
In response to the question, "Aren't all these needs
congruent with each other?" an instructor answered, "In
theory, they should be. In reality, sometimes there's a
conflict of interest and we (instructors) are caught in the
middle." He continued, "Sometimes the company wants
something, but the college (through the curriculum) wants to
do something different, and we know the students can't
handle it. What do you do? It depends on the situation.
But it is hard, because you see the students all the time,
and you know what can be done, then you know that the
company wants you to do certain things, but your boss
(college) is also watching what you do. You just have to
use your judgment and sometimes just do what doesn't get you
in trouble. This doesn't happen all the time, though."
Administration's Clients
Interviews with administrators indicated their
perception of clients in the following sequence: First, the
funding source, the State of California; second, the
organizations contracting the college's services; third, the
students attending training. One administrator justified
the order as he reported, "If we don't satisfy the State as
the primary client, we can't get to the companies that need
the program, and if we can't get to the companies, we
wouldn't be able to reach the students."
Administrators repeatedly emphasized the importance
of meeting the State's accountability requirements.
Pressure was exercised upon instructors to comply with "red
tape" in a timely fashion. One instructor reported, "The
only training I got was how to fill out all the State forms;
it seemed that was the most important thing."
Students
Students (also referred to as participants and/or
clients) in training were selected by the employing
companies. Students were selected primarily by their superiors
and assigned to meet on designated dates as arranged with
the college.
Interviews with instructors described students as a
"diverse" population. One instructor declared, "Mostly
males, different ages, but I would say, the majority around
middle age; some managers, supervisors, others, lower
level. Some are highly educated, college-degreed, others
barely made high school, if at all." Additionally, an
instructor reported, "Most speak English, others' knowledge
of English is minimal. And they're asking me to teach these
statistical concepts to people who can't speak the
language!"
Interviews revealed that instruction was complicated
as instructors faced the diversity of variables within the
student population, sometimes in the same class. As one
instructor reported, "Imagine trying to teach SPC to someone
with an MBA who is sitting next to someone who didn't even
complete primary education in their own country. Talk about
a challenge!"
Organization of the Study
This study has been organized in five chapters. All
efforts have been made to have the organization of this
study reflect a traditional format. Inasmuch as this study
represents a "naturalistic inquiry," some liberties have
been taken to insure that its qualitative nature not be
compromised.
Chapter I offers the reader a glimpse of the "world"
of contractual education and sets the context for the case
study.
Chapter II reviews the literature as it relates to
contract education and theory of evaluation in business and
industry.
Chapter III presents methodology in a thorough manner
with the aim to provide the reader with an "audit trail" to
increase reliability.
Chapter IV offers findings for the reader's review.
It comprises rich, anecdotal passages to present the reader
with a respondent's view.
Chapter V provides a summary, conclusions, and
recommendations, while seeking not to dissuade the reviewer from drawing
his or her own conclusions from the journey just shared.
The study concludes with a Bibliography and
Appendixes.
CHAPTER II
REVIEW OF THE LITERATURE
Contract Education
By whatever name— contract education, employee
training programs, corporate services, human resources
development— "a new program to serve business and industry
has emerged during the last decade in community colleges
that is a significant innovation" (O'Banion, 1991, p. 15).
Furthermore, O'Banion (1991) added that "these programs are
dynamic, innovative, entrepreneurial, and serve as the
vanguard in a national effort to train the work force for
the year 2000" (p. 15).
Rapid shifts in technological advances have created
gaps between societal economic needs and the qualifications
of the work force. Edwards (1990) stated, "We are feeling
the collision of demographics and technology as fewer and
fewer people become available for increasingly complex jobs
in today's work environment" (p. 8).
Not only has the traditional educational system been
failing to provide business and industry with well-trained
graduates, but employers have had to face the need to
retrain their already employed force as advances cause
major shifts in the work environments (Keller, 1983;
Waddell, 1990). Authors agree that the community college
should emerge as a leader in attempting to bridge the gap
between societal needs and technological advancements
(Feldman, 1985; O'Banion, 1991; Parnell, 1986).
Furthermore, in clarifying the mission of the
California Community Colleges, the Board of Governors
(1993) recently asserted, "The Community College should be
designated as the State's primary delivery system for work
force training and retraining" (p. 3). The same report
declared, "Through both state and federal programs,
community colleges have accepted the leading role in
employment training, skills upgrading, and encouraging
technology transfer" (Board of Governors, 1993, p. 3).
Employee training programs present community
colleges with the opportunity to fulfill the overall
mission of taking an active role in the economic
development of the community. Moreover, contractual educational
programs offer other benefits including the following:
1. The opportunity for faculty to keep abreast of
technology by accessing up-to-date equipment and
facilities.
2. The opportunity to provide faculty with "real-
world" experience.
3. The establishment of a network for student
employment opportunities.
4. Increased visibility for the college.
5. Increased revenue for the college (Suchorski,
1987).
It would be biased to speak of an innovation by
stating only its benefits. Although the discussion of
disadvantages of contractual education programs is beyond
the scope of this study, this investigator must report that
grievous threats to the community college include (a) the
risk of neglecting student needs by focusing on business
needs of the community, and (b) the danger of having
organizations other than educational institutions define
the goal of education (Feldman, 1985; Geber, 1987).
For instance, it is feasible to imagine that the
business community could establish the goal of education to
be that of increasing the profitability of industry. It
could follow, then, that educational programs would be
developed with the sole purpose of educating individuals to
fit into business organizations and to serve company needs.
If contractual education programs are to be
integrated into the institution as an important part of the
mission for community colleges, a number of important
questions need to be addressed. Among others, as
mentioned earlier as the central focus of this study, is "How
should these programs be evaluated?" In the following
sections, this review of the literature cites how
evaluation is conducted with employee training programs in
business and industry.
Evaluation Theory
Historical Review
Although evidence of the practice of evaluating
performance can be detected as early as 2000 B.C. and
history indicates that Greek teachers, such as Socrates,
used verbal evaluations as part of their teaching patterns,
formal evaluation of educational programs was almost
nonexistent until the mid-nineteenth century (Rossi, Freeman,
& Wright, 1979; Worthen & Sanders, 1987). A historical
glimpse from the 1830s to the early 1900s links educational
evaluation to names such as Horace Mann, Joseph Rice, and
Edward Lee Thorndike.
It can be said that educational evaluation
flourished in the United States during the 1920s and 1930s.
Systematic evaluation emerged when the progressive
education movement developed different curricula that had to be
implemented and tried.
Presently, the demand for evaluation has increased
as shrinking resources force educational programs to make
wise decisions in view of ever decreasing support. The
late 1980s and 1990s have been characterized by changing
conditions that have led to retrenchment and reduction in
many aspects of society. Educators and policymakers have
resorted to evaluation to make the best use of diminishing
wealth.
In response to the demand, six general approaches to
educational evaluation have emerged. These are:
1. Goal-based evaluation or Objectives-oriented
evaluation.
2. Goal-free evaluation or Consumer-oriented
evaluation.
3. Responsive evaluation or Management-oriented
evaluation.
4. Systems evaluation (also Management-oriented
evaluation).
5. Professional Review or Expertise-oriented
evaluation.
6. Quasi-legal or Adversary-oriented evaluation
(Bramley, 1991; Worthen & Sanders, 1987).
The review of the literature indicated that evaluation in
training (the little there is of it) has primarily adopted
two approaches. The most prominently used is Goal-based or
Objective-oriented evaluation such as the one proposed by
the Kirkpatrick Model (Kirkpatrick, 1960). The other is
Responsive/Systems evaluation or Management-oriented
evaluation like the ones proposed by the CIPP Model
(Worthen & Sanders, 1987), TVS Approach (Fitz-Enz, 1994),
and IPO Model (Bushnell, 1990), all of which are discussed
in the segment that follows.
Evaluation of Training Programs
Contrary to commonly held knowledge, evaluation of
training programs is not a post-training event (Rossi,
Freeman, & Wright, 1979). Evaluation is a careful,
systematic practice that must be initiated at the time of
the planning stages of training (Worthen & Sanders, 1987).
It must begin at the design phase of training when
objectives are being matched to the company needs (Mager, 1984;
McMahon & Carter, 1990).
Evaluation of training programs in business and
industry is conducted for two purposes, formative and
summative (May, Moore, & Zammit, 1987). Formative
evaluation is defined as evaluation conducted during the course
of training implementation for the purpose of identifying
and incorporating changes to the program prior to its
conclusion. Summative evaluation is conducted at the
conclusion of the program to decide whether the program was
effective or not, whether the program should continue or
not.
Although the literature holds that evaluation is
not optional but rather an obligation, reports have
indicated that not all training programs are consistently
evaluated (Carnevale & Schulz, 1990). The same authors
further reported that the absence of evaluation of training
programs has been allowed because programs have benefited
from a "blind" trust by managers who believe that training
is inherently good. Holcomb (1993) stated:
I was amazed at that time, as I am today, at how
much money companies were willing to invest in
employee training without having a clue as to
whether or not it was doing the employee or the
company any good. They were willing to hire
expensive consultants, like the ones I worked with, and
had no system for tracking or evaluating the
results of what they were doing. (p. ix)
McEvoy and Buller (1990) established legitimate
reasons why the evaluation of training programs in business
and industry may be misguided and omission may be
considered. These included the following: (1) When training is
being offered as a work "perk"; (2) When training is being
offered as a symbol (rite of passage) rather than to improve
performance; and (3) When the cost of evaluation exceeds the
cost of the training program.
However, in the 1990s companies are looking at
methods of increasing quality and productivity and
decreasing expenses. Carnevale and Schulz (1990) warned, "This is
not the time to decide whether to evaluate or not. Rather
to decide how one is to evaluate."
Various frameworks for evaluation of training
programs have been proposed (Phillips, 1991). However, the
most popular and widely used in the field is one proposed by
Donald L. Kirkpatrick (Carnevale & Schulz, 1990; Gordon,
1991; Phillips, 1991). Carnevale and Schulz (1990)
reported, "Almost universally, organizations evaluate their
training programs by emphasizing one or more of the model's
four levels."
For this reason the "Kirkpatrick Model" was selected
as the principal model to provide the conceptual framework
during this investigation. However, other models of equal
importance have been included in this review in search of a
broader interpretation of the findings. In addition to the
Kirkpatrick Model of evaluation (Kirkpatrick, 1960), this
review includes a discussion of the "CIPP" (Context, Input,
Process, Product) Model (Worthen & Sanders, 1987), the TVS
(Training Valuation System) approach (Fitz-Enz, 1994), and
the IPO (Input, Process, Outcome) (Bushnell, 1990) models.
Each will be interpreted in the sections that follow.
Kirkpatrick Model
The Kirkpatrick framework proposes that evaluation of
training programs in business and industry can be conducted
at four levels, each level containing multiple methods of
implementation (Kirkpatrick, 1960). The levels form a
hierarchy from low to high in relation to the sophistication
of data-gathering methods, data analysis, and interpretation
of data results. The four levels are widely known as
Reaction, Learning, Behavior, and Results.
Reaction Level
The most basic level of evaluation in the Kirkpatrick
Model is "Reaction." At the Reaction level, evaluation is
based exclusively on direct feedback from the trainees.
Data gathered reflect trainees' "thoughts and/or feelings"
regarding the training experience (Gordon, 1991).
Reaction level evaluation seeks to answer questions
such as the following: Did the participants find the
training applicable to their job? Did they find it useful?
Did they think/feel the trainer was well prepared? Were the
objectives met? Would they recommend the program?
The most commonly reported method of implementation
of "Reaction" evaluation is a feedback form (Broadwell,
1989; Cornwell, 1989). It is commonly referred to as the
"happiness sheet" or the "smile sheet." The practice is
criticized as subjective, unreliable, and invalid (Jones,
1990). However, Centra (1980) declared that student
subjective evaluations strengthen in validity as their
numbers increase and responses remain consistent over time.
Learning Level
The second level of evaluation in the Kirkpatrick
Model is Learning. Evaluation at this level is centered
around the trainees' acquisition of new skills, knowledge,
information, or attitudes.
Evaluation practices at this level seek to find out
"What knowledge (principles, facts, and techniques) did
participants gain from the program" (Carnevale & Schulz,
1990, p. 15). Methods such as examinations, pre-post tests,
assignments, projects, case studies, and simulations are
frequently used.
Behavior Level
The third level of evaluation, or "Behavior"
evaluation, focuses on changes on the job. It seeks to evaluate
what positive changes in participants' job behaviors stemmed
from the training program (Gordon, 1991). Commonly used
methods of implementation of Behavior evaluation include
observation from the instructor, and observation and reports
from company representatives.
Results Level
The fourth and most sophisticated level of evaluation
in the Kirkpatrick Model is "Results." Evaluation at this
level is considered complex because it focuses on accounting
practices, weighing the costs vs. the benefits of training
programs (Shipp, 1989). Carnevale and Schulz (1990)
reported that Results evaluation answered questions such as,
"What were the training program's organizational effects in
terms of reduced costs, improved quality of work, increased
quantity of work, and so forth?" (p. 15).
Authors reported the majority of employee training
programs are exclusively evaluated at the Reaction level
(Carnevale & Schulz, 1990; Gordon, 1991). Gordon (1991)
stated, "If you do a good needs analysis up front, evalua
tion takes care of itself." He further added, "The reason
many people think evaluation at levels 3 (Behavior) and 4
31
(Results) are so difficult and expensive and time-consuming
is because 'they're coming in after the fact to fish for
results,' of course you'll run into trouble” (p. 23).
CIPP Model
The CIPP (Context, Input, Process, Product) Model is
defined by Worthen and Sanders (1987) as a Management-
oriented approach to evaluation in education. "Developers
of this method have relied on a systems approach to
education in which decisions are made about inputs, processes,
and outputs" (Worthen & Sanders, 1987, p. 77).
The same authors reported, "The most important
contributions to the approach have been made by Stufflebeam
and Alkin. In the mid-1960s both recognized the
shortcomings of (objective-oriented) available evaluation
approaches" (p. 78). The model proposes four different kinds of
educational evaluation. They are: (1) Context Evaluation,
(2) Input Evaluation, (3) Process Evaluation, and (4)
Product Evaluation. Although they appear linear in fashion,
their application may be independent and exclusive of each
other depending on the decisions that need to be made.
Context Evaluation
Context evaluation is conducted for planning
purposes. During this process, educational needs are established
and matched to program objectives. Data gathered at this
level through means of interviews, observations,
questionnaires, or surveys provide the evaluator a glimpse of the
present state of the system to be intervened.
Input Evaluation
The goal of Input evaluation is to structure
decisions regarding the selection of the educational program to
be implemented. During this phase, availability of
resources and alternative educational strategies are explored. The
aim of the evaluator is to match the best possible
educational approach to the existing needs of the system as
established by Context evaluation.
Process Evaluation
Process evaluation concerns itself with assessing the
implementation of the educational program. It answers
questions such as, "How well is the program being implement
ed?" "What barriers threaten its success?" "What revisions
are needed?" (Worthen & Sanders, 1987, p. 78). Methods of
data gathering include subjective reports from participants
in the program and observations from evaluators.
Product Evaluation
This kind of evaluation gathers information regarding
the results stemming from the educational intervention. It
gathers description and judgment of outcomes to interpret
the worth and merit of the program. Outcomes are judged in
their relation to meeting the needs as established by
Context evaluation.
TVS Approach
Fitz-Enz (1994) reported that the TVS (Training
Valuation System) "was built around a relatively simple set
of analytic tools and has been tested across a range of
training interventions" (p. 54). The system emerged as 26
companies joined forces in the fall of 1992 to search for a
universal method of training evaluation (Fitz-Enz, 1994).
The TVS closely parallels aspects of the CIPP model.
As does the CIPP model, the TVS approach breaks down into
four basic steps. These are: Step 1 - Situation, Step 2 -
Intervention, Step 3 - Impact, Step 4 - Value. The steps
are defined in the following section.
Step 1 - Situation
At Step 1, the goals of the evaluation are two-fold:
First, to collect pre-training data to ascertain current
levels of performance within the organization; second, to
define a desirable or acceptable level of future
performance. Step 1 requires detail and precision because it
establishes the measurable value of training as the gap
between present performance and desirable performance is
operationalized.
Primary sources of data for Step 1 are organization
managers, reports, and other records. Data are collected
through a variety of methods including interviews,
questionnaires, surveys, and review of documents. Step 1 of the TVS
Approach closely resembles Context evaluation of the CIPP
Model.
Step 2 - Intervention
Intervention is defined as having two components, (1)
problem diagnosis, and (2) training description. Step 2 is
aimed at identifying the reason for the existence of the gap
between present performance and desirable performance.
Intervention evaluation examines whether the performance gap
may be narrowed or eliminated with training. In other
words, it asks, "Where is the discrepancy coming from, and
is training the solution to the problem?” Step 2 is similar
to Input evaluation of the CIPP Model.
Step 3 - Impact
Step 3 is used to evaluate the difference between
pre-training data and post-training data. Additionally, it
is aimed at identifying the variable affecting trainees'
behavior and performance after the completion of training.
Step 3 parallels Product Evaluation of the CIPP
Model. It is also similar to Learning Level and Behavior
Level evaluation of the Kirkpatrick Model.
Step 4 - Value
Step 4 evaluation is defined as, "A measure of
differences in quality, productivity, service, or sales, all
of which can be expressed in terms of dollars" (Fitz-Enz,
1994, pp. 57-58). Value evaluation is also similar to
Product evaluation of the CIPP Model and Results Level
evaluation of the Kirkpatrick Model.
IPO Model
The IPO (Input, Process, Output, Outcome) Evaluation
Model for training is defined by Bushnell (1990) as, "A
model that can readily determine whether training programs
are achieving the right purposes" (p. 41). He further
declared that the model enabled evaluators to detect
necessary changes to improve training and, most important,
to establish whether students were acquiring the intended
knowledge and skills. Similarly to the TVS Approach, the
CIPP Model, and the Kirkpatrick Model, the IPO Evaluation
Model involves four phases. They are Input, Process,
Output, and Outcomes. They are subsequently defined.
Input Evaluation
At the Input phase, emphasis is placed upon the
evaluation of System Performance Indicators (SPI).
Variables such as trainee qualifications, availability of
materials, appropriateness of training facilities, and
accommodations and instructor capabilities are assessed.
The evaluation conducted at this phase has close resemblance
to Situation evaluation of the TVS Approach and Context and
Input evaluation of the CIPP Model.
Process Evaluation
The Process stage is characterized by four
activities: planning, design, development, and delivery. At this
stage, evaluation goals and objectives are established,
training strategies are selected, training materials are
assembled, and the training program is implemented. The
Process stage of the IPO model is similar to Step 2 -
Intervention, of the TVS Approach. It also closely paral
lels Process Evaluation of the CIPP Model.
Output Evaluation
Output evaluation focuses on gathering data resulting
from the training intervention. Measurements of student
reactions, knowledge, and skills are sought. Indicators of
improved performance on the job are pursued. "Output deals
with the short-term benefits or effects of training"
(Bushnell, 1990, p. 42).
Output evaluation encompasses the first three levels,
Reaction, Learning, and Behavior, of the Kirkpatrick Model.
In comparison with the TVS Approach, the similarities are
observed at Step 3 - Impact. In the CIPP Model, parallels
can be drawn to Product Evaluation.
Outcomes Evaluation
"Outcomes refer to longer term results associated
with improvement in the corporation's bottom line— its
profitability, competitiveness, and even survival"
(Bushnell, 1990, p. 42). The Kirkpatrick Model refers to this
stage as Results Level evaluation. The CIPP Model describes
it as Product Evaluation, and the TVS Approach defines it as
Value evaluation.
The four models just discussed, the Kirkpatrick
Model, the CIPP Model, the TVS Approach, and the IPO Model,
are to be applied in conjunction with the conclusions of
this study. Conclusions are addressed in Chapter V of this
document.
CHAPTER III
METHODOLOGY
Case Study Approach
The single-case study design was selected for this
investigation because the CPE represented a "unique case"
for investigation (Yin, 1991). As mentioned earlier, the
Los Angeles Times recently reported that the CPE was the
largest provider of contract education among the nation's
community colleges.
This case study is holistic in nature as only one
unit of analysis, the TQM program, was selected among all
the other CPE programs. Although this study could have
been "imbedded" by placing the TQM in the context of all
the other CPE programs, the populations served (aerospace
industry) and the duration of the course (16-20 weeks) set
it apart from the rest of the programs. Most significantly,
the TQM program was funded through cost-reimbursement
monies from the California Employment Training Panel (ETP),
and at the onset of the study, had been awarded, "a little
over $2,000,000, to provide performance-based contracts to
businesses in the area," reportedly, one of the largest
contracts within the CPE.
As mentioned before, this study is also defined as
a descriptive case study. A motivating factor in the
design of this investigation was to "explore a contemporary
phenomenon within real life context" (Yin, 1991).
The case study approach was elected, not only
because naturalistic inquiry represented a "natural fit"
with this researcher's professional and personal
inclinations, but because the study was guided by "how" and "why"
questions (Merriam, 1988; Yin, 1991).
Research Questions
As indicated in Chapter I, this case study was
guided by two research questions. They were the following:
1. How were employee training programs (contractual
educational programs) provided by the community college
evaluated? What methods were being employed? What levels
of evaluation were being implemented?
2. Why were specific methods and levels of evalua
tion implemented? What were the factors contributing to
their selection?
Pilot Study
A pilot study was conducted by this researcher
during the spring of 1992. The pilot study emerged out of
the investigator's concern for (1) accessibility to the
setting, and (2) the confirmation that some evaluation
practices were in place. As Yin (1991) suggested, the
pilot study was of assistance in refining the lines of
questioning later pursued.
The pilot study concluded in April 1992. All data
pertaining to the TQM project have been included in the
analysis in the completion of this study. Some of the
respondents, expressly the key informants who emerged
during the pilot study, remained constant through the
completion of this case study.
Validity and Reliability
Construct Validity
Construct validity is especially problematic in case
study research (Yin, 1991). Problems arise since data
collection is always at risk of "subjective judgments."
Threats to construct validity have been reduced through the
adoption of the following practices:
1. Triangulation of data sources (Merriam, 1988),
also known as multiple sources of evidence (Yin, 1991).
This task was accomplished not only by gathering the same
data from different respondents, but also by seeking it
from different types of data collection systems.
2. Member check (Merriam, 1988) or key informant
review (Yin, 1991). Copies of the first draft of this
manuscript were offered for review to key informants in the
setting. All three copies were returned. When applicable,
recommendations for changes were implemented.
3. Clarification of researcher’s biases (Merriam,
1988). This clarification was accomplished through
frequent discussion with peers. This was difficult because
of the interference of the researcher's extensive profes
sional experience in the arena of contractual education.
Internal Validity
Internal validity, as discussed by Yin (1991), "is
only a concern for causal or explanatory studies, when an
investigator is trying to determine whether event x led to
event y" (p. 43). Because this case study was defined as a
descriptive case study, motivated by the desire to present
"a contemporary phenomenon in a real-life context," no
causality was pursued as such. The issue of internal
validity therefore did not apply.
External Validity
Again, Yin (1991) proposed that external validity
was a major obstacle in doing case studies. It is to be
expected that the study of a single case presents limited
data to be generalized to an entire population. However,
in case studies, it has been recommended that the researcher
strive for analytical generalization rather than statistical
generalization. "In analytical generalization, the
investigator is striving to generalize a particular set of
results to some broader theory" (Yin, 1991). Generalization
to training evaluation has been attempted through the course
of this investigation.
Reliability
To strengthen the reliability of the study, three
measures were implemented. They were:
1. The in-depth replication of the pilot study.
2. Detailed documentation of research procedures.
3. Provision of "rich, thick" descriptions as base
information.
Data Collection
The collection of data began in February 1992 and
concluded in December 1993. Data were collected through
interviews, observations, participant observation, and
analysis of documents. Each component is discussed
separately in the following section.
Interviews
All interviews were "open-ended," "focused" inter
views. The interviewer assumed a casual stance, but used
a "conversational guide" (Appendix A) with the intent to
derive answers to specific questions.
Held in the field, all interviews ranged from 30
minutes to 90 minutes, averaging approximately 45 minutes,
depending on the availability of the respondent. Inter
views were held in instructors' private offices, at
restaurants, in classrooms (2 on-site, 2 at the CPE), and
at the CPE offices. One was conducted at a respondent's
home.
Respondents' names were provided to the researcher
by an administrator. The list of respondents "grew" as the
study evolved.
At the onset of this study, the only "planned"
interview was with the college president. He then referred
the researcher to the program director, who further
referred to program coordinators, who then referred to
project specialists and instructors. The total number of
respondents was 15. This number represented 100% of the
names provided to the investigator. It must be added that
the given number of potential respondents was likely to
change as "accounts closed or opened up." An administrator
explained, "We can have 2 or 3 instructors, or we can have
16; it depends on accounts coming in."
Some respondents were interviewed once; however, the
researcher developed on-going relationships with key
informants with whom interaction was constant— at times as
often as two to three times weekly.
The investigator was "introduced" to CPE instructors
and staff through casual and through formal meetings. On
one occasion, there was a formal group introduction during
a holiday celebration at a local restaurant. "Introduc
tions" were also conducted through the telephone and,
lastly, by a formal letter of introduction from the program
director (Appendix B).
As mentioned previously, a list of potential respon
dents was presented to the researcher by the TQM program
coordinator. Every individual whose telephone number was
provided was reached by telephone and eventually inter
viewed. It is important to note that, at the conclusion of
this study, not all who were interviewed remained in
service with the CPE.
All respondents were willing to be interviewed. It
was difficult at times to "connect" with respondents. As
they explained it, "as an independent contractor, one is
constantly running between sites."
Some interviews were taped and later transcribed.
Whenever interviews were not taped, the investigator
resorted to note taking. Some respondents were willing to
be taped, others were not; some locations were suitable for
taping, others were not.
At times, taping was "intrusive." It was agreed
that all "off the record" comments would not be taped.
Taping was optional to all respondents when the location
was suitable.
Respondents were assured anonymity. To insure
confidentiality, certain measures were adopted. They are
described in the following:
1. All names have been omitted or altered.
2. Because of the severe under-representation of
female respondents (3 out of 15), and in accordance with
American Psychological Association (APA) format, all
efforts have been made to write in a gender-free voice.
When impossible, the male gender was used.
3. Although different respondents represented
different professional classifications and titles, all have
been grouped into just two general classifications of
respondents, those of administrators or instructors.
A total of 71 contacts with CPE personnel was
documented by the conclusion of this case study. All
encounters assumed "some" interviewing, but not all were
scheduled interviews. Some interviews emerged during
meetings, observations, or casual "run-ins." Telephone
exchanges were not logged on the "contact journal."
Observation
Both formal and casual observations were conducted
during the scope of this study. Three three-hour class
sessions were attended by the researcher. The three-class
observations were conducted as three field visits with
three different instructors.
Casual observations included attendance at meetings
and overall Center activities. On multiple occasions, the
researcher was provided access to physical space at the
CPE. Hours were spent reviewing documents at the site;
this endeavor allowed for meeting people and observing the
day-in, day-out functioning of the Center.
Participant Observation
During the fall of 1992 and late spring 1993, this
researcher was contracted by the CPE to perform curriculum
development tasks. As a member of the curriculum develop
ment team, this researcher was presented with prized
opportunities to collect data. Frequency of visits was
increased, and access to documents and staff was facilitat
ed; it was during this time that key informants emerged.
However, the observation process was affected by the
duality of roles. Although the physical access was more
"open," data collection was somewhat interrupted, as the
researcher's professionalism as "curriculum developer"
interfered with "good observer" procedures.
Review of Documents
The analysis of documents was used to corroborate
(or contradict) data collected from other sources (Yin,
1991). As Merriam (1987) suggested, documents were used as
"ready-made" sources of data available to the investigator.
The following documents were considered for analysis:
1. Personal communications
2. On-site materials (curriculum)
3. Sections of the TQM State Agreement (only
those sections provided to the investigator
by administrators)
4. Instructor's Manual
5. Newspaper clippings
6. Agendas, announcements of meetings
7. Administrative documents, such as completed
forms.
Some documents were considered "sensitive" in
nature. A few times, when requesting documents, the
researcher was asked to pursue the review with utmost
confidentiality. Occasionally a bit of reluctance was
detected, as the researcher was reminded "you are potential
competition," or "remember, you are employed by our
competitors." Documents considered "sensitive" were not
pursued when reluctance was detected.
To insure confidentiality, all names of persons or
institutions have been altered when documents were used as
references. It is important to note that some documents,
e.g., on-site materials and personal communications, were
modified by the CPE during the evolution of this study.
Some documents were available to the researcher in
the form of copies. Others were presented for analysis at
the Center.
Analysis of Data
"Data analysis consists of examining, categorizing,
tabulating or otherwise recombining the evidence to address
the initial proposition of the study" (Yin, 1991).
Analysis of the data was an ongoing activity, almost
simultaneous with the collection of data. As data were
being collected, this researcher "highlighted" elements
that were particularly connected with the issues pursued.
The analysis of the data was an intuitive process of
deduction realized only through reading, re-reading and
more re-reading of the data collected. Symbols such as
stars, exclamation points, underlining, circling and
highlighting were resorted to as patterns began to emerge.
Patterns developed into categories in relation to the
guiding questions.
Categories were established with the help of a
loose-leaf binder and dividers. A matrix of categories was
visually displayed on the researcher's home office walls.
Frequency of evaluation methods used was tabulated in
relation to their use by instructors. Emergent themes were
identified upon repetition from various sources.
Much of the analysis was conducted during hours and
hours of jogging and pacing floors. Ever present by the
researcher were "paper and pen," to "jot" down notes that
"jumped" into consciousness. Many valuable insights were
written on restaurant napkins and "Kleenex."
CHAPTER IV
FINDINGS
This chapter sets forth the case study findings as
collected through the different sources of data described
in Chapter III. The chapter begins by presenting the
methods of evaluation as reported by respondents. The
methods are further classified into levels, as suggested by
the Kirkpatrick Model of Evaluation (Kirkpatrick, 1960).
Lastly, factors affecting the selection of methods and
levels are discussed.
Evaluation Methods
Reported methods of evaluation included the follow
ing:
1. Mid-way class evaluation
2. Paper/pencil examination
3. Pre/post testing
4. Subjective observation
5. Application assignments
6. Team projects
7. Ninety-day post-training assessment.
Each is discussed separately in the following
section.
Class Evaluation
An administrator described the class evaluation as
A mid-way evaluation to provide students with the
avenue for input. The evaluation is given to all
students in a confidential manner. One of our
staff goes out to the site and asks the instructor
to leave the room for a few minutes. The students
are then asked to complete the form (Appendixes C &
F). These forms are collected by our staff member,
they are reviewed, here at the office, and a summa
ry of the results (Appendix G) is shared with the
instructor.
If we feel that some changes need to be done
and can be done, we do them immediately. I tell
you, it is very important to keep our clients
happy, because word of mouth spreads fast, that’s
how we stay in business, that's how we will sur
vive. We do not advertise much, so what our cli
ents say about us, can make us or break us.
During the course of this study, class evaluations
were conducted with "most" classes ("maybe some fall through
the cracks, but not customarily"), sometime between the 8th
and 10th class session. An administrator reported, "This
gives us enough time to correct things before it is too
late. If you wait until the end of class, then you can't do
anything with the class that gave you the feedback. Of
course it helps with the future classes, but we want to make
this class feel heard."
Furthermore, an administrator added, "There is great
emphasis on knowing our clients and finding out how they
feel about what we are doing. Believe me, they let us know,
sometimes this phone doesn't stop ringing. The mid-way
evaluation helps us in that."
Instructors' feedback regarding the class evaluation
was diverse. Not one instructor objected to the practice,
but a few questioned its utility.
Regarding the class evaluation, one instructor
stated, "So I don't know how effective it is, but anything
that can be used like that is definitely a help." Another
reported, "the ten week evaluation . . . it's part of the
program but I think it is more just like feedback to the
instructor as opposed to how well the students objectively
learned." Yet another instructor voiced his opinion
regarding the class evaluation in the following manner,
"Well, hum . . . (silence) information is only good if you
use it, I guess."
Some instructors reported ignorance regarding results
of their class evaluations. One of them stated, "I have
never received a summary, but I do get comments from (name).
Things like, 'I heard that your class went really well', or
he'll say something specific about my class, so I know that
he got the evaluations, but I haven't seen a summary yet."
Paper/Pencil Testing
Only one of the interviewed instructors reported
utilizing a paper/pencil test as a method of evaluating
student learning (Appendix H). He added, "I mostly use it
as a review, especially if I feel that the class was
particularly hard." He reported, "I make up my own tests,
depending on what was covered in class, or sometimes, I use
old ones, that I have saved from other classes."
Furthermore:
I don't grade the tests, so there's no record of
grades or anything like that. I just check to see
if people are able to answer questions. If not,
then I know we have to review before we go on. I
use the tests just to review, that way I know we
are all moving along together. I do use a test,
without it being a test per se. Grades don't
matter, you know, this is not a class like that,
everyone has to pass.
Responding to inquiries about the use of paper/pencil
examinations, most instructors' responses can be summarized
in the following passages:
"No, I don't do any paper/pencil testing. The
testing I do is 'real world' application. 'Now propose to
me a change (in the company) you want to make', that's my
way of testing to see if they have a 'real-world' grasp of
the data. I don't give an impractical or academic type of
test."
No, no, I don't test. I really don't go beyond
that which the CPE expects of their instructors,
and what they're most concerned about is, is the
company applying it? I'm less concerned with each
individual learning the difference between an X Bar
chart and a Proportional Chart, because I think it
will more or less be wasted, anyway, if the company
is not applying it.
"I don't need to do any testing, the projects will
tell you if people learned or not. You don't need any other
testing."
As explained earlier, administrators "allow" and even
"encourage" instructors to supplement curriculum and select
their own methods of instruction. They acknowledge that
different instructors resort to different techniques in the
fulfillment of their teaching tasks and, "it's O.K., as long
as the results among all are the same."
However, administrators' opinion regarding paper/
pencil examinations may be summarized in the following
quote:
I guess it's all right to test students if you want
to, but it's too much like school. We are working
with people in business, and in business you don't
go around testing people. Besides, tests scare
people, we don't want our people to be scared. It
reminds them of their days back in school, when
stomachs churned at the thoughts of tests and you
got all nervous and had a miserable day. Adults
don't like to be tested.
Pre/Post Testing
At the onset of this investigation, discussion
regarding pre/post testing was prevalent among administra
tors; however, this investigator discovered no concrete
evidence of the practice in the field, or in the review of
documents. It is admitted, however, that at the conclusion
of this study there was doubt in this researcher's mind
regarding the implementation of the practice. The reader is
made aware that studying a "contemporary phenomenon in its
real-life context" gives rise to rapid changes. So that
conditions present at the onset of the study may be drasti
cally different at its conclusion.
54
Personal communications (June, 1992) between the ETP
and the CPE, related the following:
[the CPE] will use the following Standards of
Accountability Plan:
Based on pre-test data, both trainees and
the business itself will be required to demonstrate
in a post-training analysis that both a measurable
skills increase in the trainee has been achieved,
as well as a measurable productivity increase in
the company has been achieved.
In December of 1992, an administrator re
ported:
The new accountability is going to be a pre/post
test format. For the pretest, we go in and we say
to the company, 'We are going to give this test out
to everybody.' I think the way we are going to
sell it to companies is to tell, 'We need to do
this in order to know how much your people know
about specific areas so that we know what to empha
size.' Really, what we are trying to do is to show
that they know very little or absolutely nothing.
That's the easiest thing for us to do, it minimizes
our risk. Is it really effective [evaluation],
probably not. Will we do more, probably so.
Interviews with instructors conducted during the
earlier months of 1993 indicated most were familiar with the
CPE's concept of pre/post testing; however, none reported
pre/post testing with any of their classes. As one of them
reported, "Oh, yeah I heard we are going to start doing
that." By December, 1993, one instructor reported the
following:
Now, there's the ETP requirement of the pre-test/
post-test with a bunch of questions . . . and that
is, I think is a very poor evaluation instrument.
I think that was designed with slanted kind of
questions to demonstrate a certain percentage of
improvement, a certain percentage of learning. I
think it is sort of ludicrous, the way that's
administered, you know, [jokingly] "here's the
answers, O.K.? Now, here is the pre-test, we want
you to fail this miserably. If you are not 100%
sure, leave it blank," you know. And, that's not
really used as an evaluation, that's mostly to
placate the ETP.
Subjective Observation
All instructors reported instructor observation as
the most preferred method of assessing student learning.
One instructor, insistently, stated, "If you don't know that
your students are learning, if you can't tell just by being
with them week in and week out, then you have no business
being an instructor."
When asked, "What methods do you use to evaluate
student learning?" another instructor replied, "[laughter]
highly subjective methods!" He further explained, "To an
instructor, it should be OBVIOUS if a student is learning or
not." Yet another instructor responded, "You can see if
students are learning or not. If you have people giving you
a blank stare, you just know it is not 'clicking'; other
times, you can almost see 'the light bulb going on.'"
Instructors reported spending "a lot of time and
effort in establishing a rapport with their students." They
reported it was necessary in order to establish a "feedback
loop" to encourage students to "feel confident to ask
questions and not be intimidated." One instructor stated,
"My first goal is to open communication. I encourage a lot
of participation, I know everyone by first name shortly
after the first class. That's my first rule; that way, I
can tell what is going on during the class."
Another instructor reported developing a "sense" of
student learning. He added, "Comments before, during or
after class, a glazed look, a smile, a nod, all are good
indicators for evaluation." Yet another added, "All you
have to do is listen to their questions, then you'll know
whether they are learning or not."
The practice was in accordance with goals stated by
administrators. When asked about their criteria for selecting
instructors, administrators replied, "We look for good
communication skills. Our instructors must be able to make
contact, even 'feel' our clients' needs, and try to address
them."
Application Assignments
Homework assignments were also considered by many as
methods of assessing student learning. Assignments consist
ed of activities participants were expected to complete at
home after class. Some instructors referred to them as
homework, others termed them class assignments.
The curriculum specified assignments at the conclu
sion of most classes (Appendix I-J). However, some class
sessions did not specify assignments and instructors
reported the "freedom to design" their own assignment.
At the time of this study, administrators had formed
a curriculum development team (this investigator was a
contracted member) with the purpose of unifying curriculum
practices among all instructors. In particular, the team
was charged with establishing a uniform sequence to all
classes and designing application assignments for each class
session.
Of instructors interviewed, most reported class
assignments as being important methods of evaluating student
learning. In the words of one instructor, "If they cannot
complete the assignment, or have too many mistakes, you know
right away that something is wrong!"
Most instructors reported beginning their class
session with a "review" time, during which class assignments
were discussed. Upon discussion, most class assignments
were submitted to the instructor for feedback to be returned
the following class meeting.
As stated, most instructors fulfilled class assign
ment requirements because they viewed them as assisting in
the assessment process. However, some instructors dis
agreed. One instructor reported, "They [assignments] are a
waste of my time. I have better things to do with my
class." The same instructor reported, "I only pay attention
to them if time permits. Sometimes, we have time for them,
but I won't sacrifice time, if I think that other more
important things need to be covered."
Another instructor declared, "Class assignments are
not good indicators of learning. You don't know if the
person understands, if all they are doing is completing an
assignment. How do you even know if they did the work?"
Team Projects
In correspondence to the ETP, the CPE defined the use
of projects in their Standards of Accountability Plan as
follows:
Each student will be tasked with implementing a
company project and tracking this project through
out the training period and the 90-day retention
period. Through the tracking of this project,
proficiency in applying the skills taught will be
demonstrated. The project will also exemplify the
need for baseline measurement and performance
tracking in relation to productivity increase. All
projects and related data and summary sheets will
be kept on file for monitoring purposes. (J. M.
Rey, personal communication, June 22, 1994)
When asked to define projects, one instructor
summarized them in the following manner:
We require that they complete a project that uti
lizes all the tools they learned in the course.
And, they don't use all of them, obviously, but as
many as they can apply to solving a particular
problem within their organization. And it really
works well, to take what they learned in the class
room, take it back to the job and bringing it back
and ask questions, "what did I do wrong," and those
types of things.
Another instructor's response was representative of
instructors' feelings regarding projects:
I use projects as much as I can . . . It's the only
way to get to see if people have learned, by doing
it, if they experience it a couple of times ... I
try to give a real-world application . . . to make
it into a full proposal . . . the whole nine yards
. . . it wasn't just, "Oh yeah, show me your de
sign, oh, yeah, that looks good," they had to give
a real-world application and some of the thrills
that go along with the application in the real
world. In other words, you just don’t propose a
design [to solve a company problem], you have to
tell me how much it's going to cost, you have to
tell me the scheduling, those types of things.
Because business is going to say, "I don't want
this high falluting stuff that you are bringing
from academia. I want something that I can put
into words, and put into play in my business to
day.” So that's what I try to gear these people
towards. Here's how you propose something if you
are working at the level of SPC [Statistical Pro
cess Control] you collect the data, you see the
information, now propose a change. What's going to
be the impact? Are you going to measure? What
kinds of things? When did changes start occurring?
What happens if it doesn't give you results? And
they felt it was real good.
Projects were assigned to teams, and approved by a
Steering Committee, which was formed by management represen
tatives from the company. The Steering Committee's primary
function was to oversee the appropriateness of projects and
to support the application of the new skills to the
solutions.
Instructors reported that projects could only be
successful if companies supported and reinforced training.
Experiences reported by instructors were varied. At times
they reported excellent company support; other times, the
support was described as "lip service." An example can be
observed in the following passage:
Well the companies, I mean they're supportive to
some extent because they are sending people to
training; however, a lot of this TQM stuff is being
forced down the throat of these little suppliers
with the threat of, "We are not going to do busi
ness with you unless your people comply with our
request of doing SPC or you begin some kind of
TQM," and most businesses or their managers may not
be necessarily wholeheartedly behind changing
anything in the company, so they subtly or overtly
send non-verbal messages to their people saying,
"This [project] is a lower priority; you finish
your work here and if you have extra time you can
pay attention to that SPC stuff." A lot of times,
you get people in training and it's very clear to
them that their management is not supportive of
this and they know why they are there and that
doesn't help them learn.
Another instructor, added, "Lots of companies do not
support the training. Since it is being paid for by the
State, they see it as a 'freebee' and don't take it too
seriously."
To the question, "Do you find that companies some
times are not supportive of the projects?" one instructor
responded, "unfortunately, that's a very common experience.
. . . Probably more common than the experience when people
say, 'yes, you've done a great job and we want to incorpo
rate this!' Unfortunately, I have had the opportunity to
work with a number of companies, and only every once in a
while you see that."
Instructors reported the main purpose of the projects
was "to transfer the knowledge gained from training to the
workplace." However, neither the CPE nor the instructors
reported having had the authority to provide the fertile
grounds for changing behaviors once the participants left
the classroom. "All we can do is suggest things," reported
one instructor. Another one, stated, "If the projects are
not getting supported, I start writing letters saying,
'something is wrong here', but that's all I can do."
Ninety-Day Post-Training Assessment
The 90-Day Post Training Assessment (Appendix K-M)
was described as "the subjective evaluation by the boss; it
shouldn't be, but I'm pretty sure it is." Personal communi
cations with the ETP identified the 90-Day Post Training
Assessment as "Each student will be tasked with implementing
a company project throughout the training period and the 90-
day retention period." Upon the completion of 90 days
subsequent to training, "the areas targeted for improvement"
were to be measured.
Targeted areas for improvement were identified as
follows:
1. Building the company's revenue at least 2%
2. Achieving cost reduction in manufacturing
and assembly of at least 5%
3. Improving production or workload of staff,
department, or company at least 5%
4. Improving profits at least 2%
5. Reduce scrap and rework by at least 5%
6. Reduce lead time by at least 2%
7. Reduce warranty claims by at least 5%.
(J.M. Rey, personal communication,
April 1992)
Interviews with administrators revealed that the
format for implementing the 90-day post training assessment
was "shaky" at best. Ideally, the 90-day evaluation was to
be conducted by a CPE staff member or possibly the instruc
tor himself, through a follow-up visit to the company. In
practice, a two-page post training assessment form was
mailed to company representatives. Those completed were
mailed back to the CPE, with no questions asked about
validity; they subsequently were filed away with the
appropriate report forwarded to the State.
As this study was being conducted the CPE had no
mechanism in place systematically to implement the 90-day
post training assessment. It was done on a "We need to
send them all out, and try to keep track of them, but no one
is really responsible for it" basis. Two reasons were cited: "no
funding for tasks concerning evaluation," and, "we are just
growing so fast that some things are falling through the
cracks."
Some instructors were knowledgeable of the 90-day
post training assessment. One of them reported involvement
in the analysis of the completed surveys. However, most
instructors' responses can be summarized in the following
statements:
"I have heard of it [90-day post training assessment], but
I have never seen or done one."
I think it is still being done, but basically,
again, it's something the state requires in order to
complete the funding cycle, because they say they
want 100% of the students who were involved in
training to still be employed 90 days after the
training ends, and so they survey that. It is not
so much that they are checking the areas of im
provement as it is more of a physical count. A
mandatory check list and, yeah, all the people are
here. . . . It does not really evaluate whether
they are using the techniques, as I said, it is
mostly a physical inventory. "Are these people
still employed there?" "Great, you get paid; if
they aren’t, oh, well, you don't get paid."
Evaluation Levels
Once evaluation methods were analyzed, they were
grouped together so that the Kirkpatrick Model of evaluation
"came to life." In this section, the methods reported
earlier have been categorized into the following levels of
evaluation: Reaction Level, Learning Level, Behavior Level
and Results Level. Each is discussed separately.
Reaction Level
The class evaluation administered by the CPE staff on
or about the 10th week of classes indicated that the Center
conducted evaluation at the Reaction Level. Reports from
interviews and documents indicated that such practice was
systematically implemented through the distribution of an
"anonymous feedback sheet" (Appendixes C-F), completed by
all training participants. Data were collected and reviewed
by program administrators who reportedly "made changes
whenever possible" and "were interested in finding out if
clients’ needs were being served."
Learning Level
Analysis of data suggested four evaluation methods
used to assess learning level. The methods reported were
paper/pencil testing, pre/post testing, homework assign
ments, and subjective observation. Each is discussed
separately.
Paper/pencil testing. This type of examination was
reported to be used by only one instructor. He stated its
use as "a review.” Furthermore, administrators reported
"encouraging instructors' autonomy," however, comments
regarding paper/pencil testing were, "Too much like school,
and this is not school," and "Adults don't like to be
tested."
Pre/post testing. At the onset of this investiga
tion, pre/post testing was "a concept that was being
investigated 'as we speak'." Administrators discussed its
implementation to evaluate participant learning in compli
ance with the State Agreement.
At the conclusion of this study, only one instructor
reported the use of pre/post testing. He added, "That's not
really used for evaluation, that's mostly to placate the
ETP."
Application assignments. Most instructors favored
application assignments as valid indicators of student
learning. Those using application assignments designated
time at the beginning of class to "discuss" and "answer
questions" regarding homework. Instructors reported
"collecting" assignments and returning them with comments to
the participants.
Assignments were critiqued but no records were filed.
One instructor reported, "Once you give it back to them,
that's it, it's theirs."
Subjective observation. All instructors reported
subjective evaluation of students as one of the best methods
for the evaluation of student learning. They reported
"taking time to know students," "establishing a feedback
loop," "paying attention to blank stares," or "light bulbs
going on" as reliable indicators of student learning. In
essence, the thoughts expressed could be summarized in the
following statement," A good instructor JUST KNOWS if
students are learning or not."
Behavior Level
Team projects indicated evaluation at the Behavior
Level. As reported, and defined in accordance with the State
Agreement, team projects were designed to provide "real-
world solutions" to problems in participants' companies.
Interviews indicated that some companies incorporated
team projects into their normal functioning, to promote the
transfer of knowledge or skills from the classroom to the
job site. However, behavior changes promoted by the TQM
instruction were not reinforced or even allowed in other
companies.
Behavior changes, implementation of new skills on the
job, were to be "tracked" by the CPE staff 90 days after the
conclusion of training. Information gathered from different
data sources did not indicate a systematic procedure for
"tracking" Behavior level evaluation.
Results Level
In theory, the 90-day post training assessment served
a dual purpose: (1) to "track" employee retention and
implementation of learned skills on the job site (Behavior
Evaluation, discussed earlier) and (2) to gather data
regarding target areas for improvement within the company
(Results Evaluation). Different sources of data indicated
the existence of a plan to conduct Results Level evaluation
through the 90-day post training assessment; however,
inconsistencies with the implementation of the plan were
discovered.
It was suggested that some Results Level evaluation
was conducted. However, inconsistencies with the planned
implementation made it difficult to establish validity.
Factors Determining Selection of Evaluation
The following section addresses the factors that
determined the selection of the different evaluation
methods. Since the different methods have been previously
organized in Levels, each Level and the methods comprising
it are discussed separately.
Mid-way Evaluation
As reported, evaluation at the Reaction Level was
conducted through the mid-way class evaluations. The
evaluations were conducted by the CPE's administrative staff
for formative purposes. They have been conducted for the
purpose of identifying and incorporating changes to the
class prior to its conclusion (May, Moore, & Zammit, 1987).
The practice was in accordance with the program's
philosophy to "provide good service to the business communi
ty" and "to make clients happy." As an administrator
reported, "We need to know what [clients] are thinking and
feeling about us so that we can make the necessary changes
to meet their needs."
The practice was not only guided by the Center's
"Business Philosophy" of customer satisfaction, but also
motivated by the Center's pursuit of survival. The Center
relies on "word of mouth" as its main mechanism for market
ing its services. An administrator summarized it in the
following manner: "Word of mouth spreads fast; that's how
we stay in business . . . that's how we will survive. What
our clients say about us can make us or break us."
Another factor determining the use of the mid-way
evaluation was the administrative need to develop a mecha
nism to assess instructors' performance. At the time of the
study, instructors' performance evaluation procedures were
not in place. An administrator reported, "This may help in
the re-hiring or in the assignment of certain instructors to
certain clients. This can give us good feedback from the
students."
In accordance with the literature, the mid-way class
evaluations were used as evaluation at the Reaction Level to
answer the question: "How well did the participants like
the class?" It was a subjective evaluation by participants,
one that the CPE utilized for class re-direction and
assessment of instructors' performances.
Student Evaluation
As reported, at the Learning Level a variety of
methods was utilized by different instructors and adminis
trative staff. The findings revealed four methods were used
to evaluate student learning. These included: paper/
pencil examinations, pre/post testing, observation, and
application assignments.
The reader is reminded that only one instructor opted
for paper/pencil examinations. He reported, "They are
simple quizzes, but I always like using them because it
gives me an indication if we're on the right track." He
added, "I make up my own [quizzes] or use old ones that I
have saved from other classes."
No other instructor reported the use of paper/pencil
examination. Collectively, they reported, "It's too much
like school; this is the real world."
Administrative policy of "non-interference" with
instructors' selection of teaching methodology allowed
options to be exercised. The Center provided instructor
autonomy, so much so that situations sometimes appeared
"chaotic, because no one knows who is doing what!" At the
conclusion of this study, the program was involved in
discussion of options to bring more "uniformity" to teaching
practices, without infringing on "instructor strengths."
The selection of the paper/pencil method was influenced
by the instructor's past experience with the method. It
seemed that the one instructor using it applied it
"because I have always done it." Those who had not did not
care for it, as they did not see it as necessary.
The use of the pre/post testing method resulted from
the need to fulfill State contract requirements. As
reported, correspondence dated June 1992 stated:
The college will use the following for its Stan
dards of Accountability: Based on pre-test data,
both trainees and the business itself will be
required to demonstrate in a post-training analysis
that both a measurable skills increase in the
trainee has been achieved, as well as measurable
productivity . . . has been achieved. (J. M. Rey,
personal correspondence, June, 1992)
However, administrators, responsible for communica
tions between the State and the CPE, reported the State's
interest was with the written plan and not necessarily with
its implementation. One administrator stated, "We tell the
State how we are going to evaluate. They just want to know
that we have a plan." The pre/post training evaluation was
part of the plan submitted for State approval.
The evaluation method was selected because "it seemed
easy," and "it's kind of what is in practice and is expect
ed." As reported earlier, this investigator discovered no
concrete evidence of the practice. It seemed as though the
plan was "committed to paper" in documents for the State
contract, but even at the conclusion of this study, it was
not yet implemented. Observation of the student appeared to
be the "preferred" method of evaluating student learning.
As reported, all instructors declared utilizing subjective
observation of trainees' performance before, during, and
after class as assessment of student learning. Some
instructors took the following position in regards to
observation: "That's the best way of evaluating." Instruc
tors relied on their "intuition" and "professional experi
ence" in assessing student learning.
The method was selected because it was considered the
only valid, and expected, method of evaluation. This was
confirmed by comments such as, "How do you know if students
are learning? Well . . . you see it! You just know!"
Student learning was reported as a phenomenon that
"good" instructors could "sense" when it occurred. Repeat
edly, the comment, "you just know," was echoed by instruc
tors in answer to the question regarding student learning.
The fourth method used to assess student learning was
through the application assignments designed to be completed
at home after class. Most assignments were designed by the
curriculum development staff in accordance with the material
covered during a specific class. Administrators indicated
that completion of application assignments was mandatory.
In practice, most instructors complied; some, as noted
earlier, opted not to comply.
Assignments were not graded nor was there any record
of completion kept by any individual. Instructors replied,
"It's up to the students. They're adults, you know."
Some assignments were reviewed collectively during
the "review time" set at the beginning of class. When
assignments were submitted to instructors, all were returned
with comments during the following class meeting.
Team Projects
As indicated, team projects were used to evaluate
program effectiveness at the Behavior Level. An instructor
reported, "The team projects are the meat and potatoes," of
the program. He added, "Through the team projects, you can
tell if someone is learning or not." Collectively, all
instructors reported, "The best way to assess if a student
has learned is to provide them with 'hands-on' experience,
and watch them resolve 'real world' applications."
The Standards of Accountability submitted by the CPE
to the State of California directed the design and selection
of team projects to be used as a measure of evaluating
student learning, as behaviors were transferred from the
classroom to the work setting. Most team projects were
assigned by the third or fourth week of class, and students
were "coached" towards the final project presentation held
during the last class meeting. Only one instructor required
team projects to be completed weeks prior to the end of the
class. He reported, "It gives me a chance to evaluate if
everything is O.K." He added, "Waiting until the last
meeting is too late. What if it is wrong, what if they
needed my help and then they won't see me after that class.
I have them present around the middle of the program, and
for the rest of the time, until the end, they work on
refining it [project]."
All instructors agreed team projects were "excellent"
indicators of student learning. One of them added, "You
don't need any other evaluation, all you need to do is see
if the project is done right, that'll tell you all."
In summary, the team projects were selected as a
method to evaluate student learning because administrators
and instructors had no doubts that "hands-on" experience and
"real world" applications were among the best indicators of
learning. Additionally, team projects were designed to meet
State requirements of evaluation to indicate transferability
of learned behaviors from the classroom to the job site.
Ninety-day Post-training Assessment
The 90-day post-training evaluation was intended to
be used as a measure to evaluate impact at the Results
Level, and to comply with State conditions of accountabili
ty. The conditions of accountability were in relation to
employee retention 90 days post training. The State of
California required that the CPE confirm employee reten
tion in order to issue "fee reimbursement." All companies
were followed up on employee retention requirements. In
essence, the practice was designed and adopted to collect
evaluation data because contract requirements mandated
employee retention in order to issue payment for services.
The 90-day post-training evaluation was also designed
to follow up on productivity improvements to be correlated
with the training activity. Administrators resorted to the
utilization of the 90-day post-training assessment because
they felt it provided them with the opportunity to indicate
the "outcome" or "results" of training. As reported, this
practice was found to be deficient for reasons such as lack
of personnel to conduct the evaluation, and lack of pre
training data to establish comparisons. At the conclusion
of this study, this tool was forwarded to company represen
tatives, completed, and returned to the CPE. Reported
results were mostly "subjective" interpretations by the
"bosses."
In summary, a variety of factors determined the
selection of evaluation methods. The mid-way evaluation was
determined by the need to conduct formative evaluation for
the purpose of "client satisfaction" "class re-direction"
and "survival." Additionally, the practice was being
considered for assessment of "instructor performance."
Paper/pencil examinations were determined by the
instructor's past experience and the Center's policy of
instructors' freedom to select teaching methods. Instructor
autonomy appeared to be a determining factor.
The pre/post testing, application assignment, team
projects, and 90-day post-training assessment were all
determined by the CPE plan of evaluation submitted to the
State of California. Parts of the plan were developed
before curriculum development; others came during and at the
conclusion of this study; changes were being discussed even
after program implementation. As one interviewee reported,
"We are also learning as we go along."
The subjective evaluation was determined by the
instructors' firm belief that "one just knows" if students
are learning. Collectively, instructors relied upon their
"intuition" and "expertise" to assess learning behavior.
CHAPTER V
CONCLUSIONS
There has been many an occasion during the course of
this and previous studies, when this investigator has
questioned her "love" for naturalistic inquiry. Never is
the question so palpable as the time when results must be
interpreted.
For this investigator, conducting qualitative
studies presents the risk of erroneous conclusions. It is
at this time when the "cleanliness" of results from
quantitative research is desired.
Nevertheless, the following section affords
interpretation of the results as reported in Chapter IV.
Additionally, this chapter brings closure to this study by
offering recommendations, discussing study implications, and
furnishing suggestions for future research. Specifically, an
interpretation of the findings within the framework of the
research questions posed in Chapter I is presented.
Interpretation of Results
Analysis of data collected over 23 months indicated
that the CPE assessed programs, student learning, and
company impact through the utilization of multiple methods
of evaluation. Evaluation was conducted at all
levels as proposed by the Kirkpatrick Model of Training
Evaluation. Data suggested a variety of factors contribut
ed to the selection of evaluation methods employed and
levels implemented. In the following section, findings are
interpreted in conjunction with the research questions
guiding this study. Before proceeding into a discussion of
findings, time is taken to discuss "that" which was not
found.
Notably absent from the evaluation design was
the "link" between the evaluation practices described above
and "systems assessment" as proposed by the review of the
CIPP, TVS, and the IPO Models of Evaluation. Authors such
as Worthen and Sanders (1987), Asgar (1990), McMahon and
Carter (1990), and Holcomb (1993) warned that evaluation
without a needs assessment could not be effectively
conducted. It is reported that the needs assessment
provides the standards which serve as the "building blocks"
for the evaluation design. Only when standards are clearly
established can evaluation serve the purpose of "establish
ing the merit or worth of the program." As Gordon (1991)
reported, with evaluation one cannot say, "We didn't know
what we were doing, so now, let's find out if anything
happened" (p. 23).
McMahon and Carter (1990) reported that training
needs of the companies must be established prior to the
design or implementation of training. It is this analysis
of needs that serves as guidelines for the selection of
quality training. Holcomb (1993) stated the "needs
assessment" provides the "why" of training. If there is no
clear understanding of "why," how can "what to train" and
"how to train" be selected? Gordon (1991) declared that
evaluation could not be conducted unless it followed a
"good needs analysis."
The findings indicated that, although many methods
of evaluation were pursued by the CPE, there appeared to be
a lack of direction and uniformity that was mostly due to
the lack of agreement on what training was supposed to
accomplish (Asgar, 1990). This lack of direction was
further muddled by the absence of an integrated team of
administrators, curriculum designers, and instructors
working together in the formulation of the evaluation
process at the design stages of the training program
(Connolly, 1988).
This study ascertained that the TQM training program
offered by the college was not specifically linked to
a needs analysis. It was not linked either to a "skill
deficiency" or to the identification of a particular
"business problem or condition." Rather, it was formulated
in response to a general goal proposed by the State of
California, through the ETP, to provide an "overall," "all
encompassing" training, generalized enough to serve as many
companies in the aerospace industry as possible. As a
result, CPE staff was placed in a position to design "a
blanket" training program that could be applicable to as
many companies as possible.
The availability of public funding through RFPs
creates a conflict among competing agencies between
proposing services that "ought" to be offered versus those
that "most likely" will be funded. In order to increase
the probability of funding, the CPE designed a generalized
training program. By doing so, it obstructed the road to
evaluation. The evaluation design lacked direction as it
was not "grounded" on standards resulting from a needs
analysis.
Likewise, it has been this investigator's experience
that agencies "write" proposals that will increase their
probability of funding, but lack the resources or knowledge
to implement them later on. As established, the CPE
evaluation procedures were selected independently from a
needs analysis and were proposed and designed by staff that
lacked evaluation knowledge. As such, the evaluation
procedures have not become a formalized plan, but rather
scattered practices lacking direction.
Whereas a "needs analysis" was absent, answers to
the research questions were discovered. The following
segment presents the interpretation of results.
Question 1
How were employee training programs (contractual
education programs) provided by the community college
evaluated? What methods were employed? What levels were
implemented?
This investigation indicated that seven different
methods of evaluation were utilized by the CPE staff to
evaluate different aspects of its program. The methods
reported were:
1. Mid-way class evaluation
2. Paper/pencil examinations
3. Pre/Post testing
4. Observation
5. Application assignments
6. Team projects
7. Ninety-day post-training assessment
However, the discussion of these results is not as concrete
as this researcher would like it to be.
Although all methods just listed above were report
edly utilized, and supported by documents, the utilization
of some methods varied across companies and across instruc
tors. Furthermore, discrepancies between methods reported
in documents, accounts from administrators, and evidence of
practice in the field by instructors made it difficult to
ascertain utilization. Complicating matters further, the
longevity (and possibly intervention) of this study allowed
time for program conditions to change from onset to
conclusion. Thus, situations present at the beginning of
1992 were not necessarily reflected by the end of 1993, and
vice versa.
Notwithstanding, the methods reported previously are
further discussed in conjunction with evaluation levels as
proposed by the Kirkpatrick Model. The study also indicat
ed that multi-level evaluation was conducted by the
program. Data indicated that evaluation at the "Reaction,"
"Learning," "Behavior," and "Results" levels was
implemented.
At the "Reaction Level" the CPE administrative staff
methodically employed an anonymous feedback sheet at about
the mid-way point of the course to assess clients' thoughts
or feelings about the program. Results collected were
tabulated; a summary of results was shared with some
instructors and "whenever possible, changes were made."
At the "Learning Level," four evaluation methods
were implemented to assess student learning. They were:
(1) Paper/Pencil examination, (2) Pre-Post test, (3)
Subjective Observation, and (4) Application Assignments.
The four methods were designed with the purpose of measur
ing the acquisition of new skills, knowledge, and informa
tion by participants.
The use of paper/pencil examinations was reported by
only one instructor. This sole event in fact may be
considered an "instructor practice" and not necessarily a
"program practice."
Documented plans for Pre/Post testing, and accounts
by administrators dated back to the latter months of 1992
(three instructor interviews had already been held). By
December 1993, at the conclusion of the data collection
phase of this study, only one instructor reported its use.
It is almost as if the CPE was just "getting around" to
putting the concept into practice as this study was concluding.
Evaluation of student learning through instructor
subjective observation was addressed by all instructors.
It can be reported as a "collective feeling" that the
"best" evaluation of participants was through observations,
"week in and week out." Repeatedly, this researcher was
informed, "good instructors 'just know' if students learn
or not."
Application assignments were also used by the
majority of the instructors to assess student learning.
Most instructors indicated that it was a valid measure of
learning. One instructor termed it "a waste of my time."
Evaluation at the Behavior Level was conducted
through Team Projects and followed up via the 90-Day Post-
Training Assessment. The projects allowed participants to
design a "real world" application of solutions to a
specific company problem. The 90-day Post-Training
Assessment was partially designed to follow up the
implementation of the solutions at the company, 90 days after
the completion of training.
Thus both methods were used as means of evaluating
the transfer of new skills, knowledge, or information to
the job setting. Unfortunately, and beyond the control of
the CPE representative, not all settings fostered changes
in behavior. Reportedly, some companies embraced the new
practices; others were reluctant, and new skills were "not
allowed." Some projects were implemented at the work site,
allowing for the transfer of new knowledge, others were
"left behind" in the classroom.
In theory, "behavior changes" were to be observed
and measured by the CPE staff 90 days after the completion
of training. In practice, the 90-day follow up was
reported as (a) a "physical inventory" to check whether
employees were still employed by the company (in accordance
with State Agreement), and (b) a "subjective evaluation by
the 'Boss'," oftentimes simply conducted through the mail.
It can be said that "Results Level" evaluation was
conducted through the implementation of the 90-Day Post
Training Assessment. The practice was designed with
multiple purposes: (1) to follow-up the transfer of new
skills from the classroom to the job site (Behavior
Evaluation, discussed previously); (2) to comply with State
"employee retention" specifications; and (3) to evaluate
impact on company (Results Level Evaluation).
The 90-Day Post-Training Assessment was designed to
collect data on the impact of training on "targeted areas for
improvement." In theory, baseline information was to be
gathered prior to the implementation of training, and
compared with data gathered, 90 days upon the completion of
training. Thereby a comparison of conditions before and
after training could be established.
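A minimal Python sketch of such a before/after comparison follows, offered only to illustrate the intended logic of the assessment. The metric names and figures are hypothetical and are not data from the CPE or any client company.

def compare(baseline, post_training):
    """Return absolute and percentage change for each targeted area for improvement."""
    report = {}
    for area, before in baseline.items():
        after = post_training[area]
        change = after - before
        pct = (change / before * 100) if before else float("nan")
        report[area] = {"before": before, "after": after,
                        "change": change, "pct_change": round(pct, 1)}
    return report

if __name__ == "__main__":
    # Hypothetical baseline gathered before training and data gathered 90 days after.
    baseline = {"scrap_rate_pct": 6.0, "rework_hours_per_week": 120}
    post_training = {"scrap_rate_pct": 4.5, "rework_hours_per_week": 95}
    for area, row in compare(baseline, post_training).items():
        print(f"{area}: {row['before']} -> {row['after']} ({row['pct_change']:+.1f}%)")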
In practice, nearly all 90-Day Post-Training Assessments were conducted through the mail. The evaluation consisted of checking employee retention and a subjective evaluation of the training results as reported by a company representative. The reasons given for this lax practice were that "no funding" was available and that "the program is growing so fast that some things fall through the cracks."
Question 2
Why were specific methods and levels implemented?
What were the factors contributing to their selection?
The evaluation practices selected by the CPE appeared to be guided primarily by its "business" philosophy. As stated previously, page one of the Instructor's Manual declared the CPE's "Mission Statement" as follows:
Our Mission is to continually improve the efficiency, productivity and quality of the California workforce. We strive to improve our services and our resources continuously, in order to bring pride to our customers, our employees and the members of our community.
Furthermore, the Instructor's Manual stated the CPE's
"Quality Policy" as follows:
The [CPE] firmly believes that Total Quality is the essential ingredient of our business principles. It demands we focus on the vital part of our operation: the customer. By maintaining this focus, we will always be able to accomplish the aspects we strive for in our Mission. (p. 1)
Both the Center's Mission Statement and Quality
Policy are predicated upon business principles. As reported
recently, "In order to be successful, a business must
consistently offer products or services that
* Fulfill an actual need
* Satisfy customers' expectations
* Comply with applicable standards and
specifications" (Ebel, 1991, p. v).
Ebel (1991) further added: "The ultimate focus and objective of everyone in the organization must be on satisfying the customer" (p. 12). Additionally, he declared: "Customer satisfaction should be the founding principle of the organization and the utmost objective of every person associated with the organization" (p. 124).
Moreover, TQM (Total Quality Management) philosophy
promotes "getting close to the customer" and "making the
customer the 'boss'" (Williams, 1994). Along the same
lines, Townsend and Gebhardt (1993) warned: "When designing
a program to increase customer satisfaction . . . a good
design includes plans to actively solicit customers'
opinions" (p. 177).
CPE administrators and instructors reported that program evaluation practices were developed in accordance with TQM theory and the strategy of continuous improvement. One
administrator reported, "We try to practice what we preach.
We teach TQM principles and we want our clients to see we
practice them too." He further added, "One of our goals is
to make clients happy. We need to know what they are
thinking and feeling about us so that we can make the
necessary changes to meet their needs."
The systematic use of Reaction evaluation conducted through class evaluations by the CPE appeared to be guided by the desire to establish a process of customer feedback to promote changes and to meet client expectations in accordance with "good business" practices. Furthermore, class participants were identified as "external customers" who had the freedom to use or not use services. As such, these "external customers have power over our reputation, our profitability, our market share, and ultimately our organization's existence" (Kinlaw, 1992, p. 97).
It was important for program administrators to gather
information through Reaction evaluation about "how our
clients feel" because "that's how we will survive." The
business focus of the program mandated the practice of
customer involvement through feedback in order to "meet or
exceed" client expectations for the purpose of surviving and
excelling in a world of diminishing educational resources.
Learning and Behavior evaluation, conducted through multiple methods, was guided by instructors' autonomy and by curriculum guidelines based on assessment practices relevant to education in the workplace. As reported by the United States Department of Education, Office of Vocational and Adult Education (1992), workplace education must take on roles different from those of traditional education. Specifically relating to evaluation and measurement, the report declared that testing measures must be integrated into instruction for better integration of education into on-the-job practices. The report stated:
The future appears to point to alternative assessment procedures with emphasis on multiple approaches such as portfolio assessment, peer assessment, simulation, documentation of incidental learning—including the ability to participate in other programs or solve problems—and increased measurement of work-based outcomes.
It is anticipated that student assessment will be customized using different measures to tailor the assessment approach to particular individuals and their industries . . . evaluators recognize that no single data-gathering instrument can capture the accomplishment of workplace programs. (p. 61)
Asgar (1990) stated that effective evaluation methods
for training in business and industry should simulate
conditions similar to workplace situations in order to
facilitate the transfer of skills to the work setting.
Smith and Merchant (1990) also suggested the use of competency examinations, practical examinations on work samples,
or simulations as a means of recreating aspects of the job
so that the use of training concepts would be easily shifted
to the job setting.
Moreover, English and Hill (1994) have extrapolated principles of TQM (Total Quality Management) into education and have replaced the term with TQE (Total Quality Education). In their discussion of evaluation modes and evaluation forms, they reported that whereas traditional educational programs avail themselves of assessment methods such as proficiency tests or criterion-referenced tests and resort to the use of letter symbols and comparative graphs to evaluate student learning, the TQE learning place resorts to nontraditional assessment tools. These nontraditional methods include tools such as "hands-on work," "exhibitions," "presentations," "creative expressions," and "qualitative feedback" (p. 101).
Learning and Behavior evaluation as conducted by the
CPE was selected because of the problem-specific methods
developed by instructors to simulate the job setting. The
elaboration of assignments and projects focused on transferability of skills to the workplace.
Results evaluation was conducted in compliance with
the Standards of Accountability Plan in accord with the
State Agreement. As Ebel (1991) reported, "In order to be
successful, a business must . . . comply with applicable
standards and specifications" (p. v).
Results level evaluation was designed by the CPE and
accepted by the ETP (Employment Training Panel) to measure
work-based outcomes in accordance with assessment and
evaluation practices of education in the workplace (U. S.
Department of Education, 1992). Additionally, authors such
as Carnevale and Schulz (1990) reported that accounting for
the positive economic influence of training should be the
most critical issue of training in business and industry.
The motivating force in the design of Results evaluation by
the CPE was the need to ascertain ROI (Return on Investment)
factors resulting from the TQM training intervention.
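As a point of reference only, a commonly cited formulation of training ROI divides net monetary benefits by program costs; the brief Python sketch below illustrates that generic calculation. It is not presented as the CPE's or the ETP's method, and the dollar figures, variable names, and function name are hypothetical.

def training_roi(program_cost, monetary_benefit):
    """Return ROI as a percentage: ((benefit - cost) / cost) * 100."""
    return (monetary_benefit - program_cost) / program_cost * 100

if __name__ == "__main__":
    cost = 50_000      # hypothetical total cost of the training intervention
    benefit = 80_000   # hypothetical first-year savings attributed to training
    print(f"ROI = {training_roi(cost, benefit):.0f}%")   # prints: ROI = 60%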
Program Recommendations
Guided by the conclusions just discussed, this
researcher offers the following recommendations:
1. The formation of an integrated team of administrators, curriculum designers, evaluation experts, and instructors to formulate training proposals.
2. Solicitation from the funding sources of resources to conduct company-specific needs analysis.
3. The systematic dissemination of the training plan
and evaluation results to all instructors.
4. The stabilization of evaluation practices across
accounts.
5. The formulation of a system of record keeping to
maintain objective accounts of student learning.
6. The development of measurable objectives and
competencies for every class meeting that are explicitly
cited in the curriculum materials.
7. The collection of pre-training data as specified
by objectives and competencies.
8. The collection of post-training data for comparison during the evaluation process.
9. The negotiation with the State of California for
the inclusion of a "Program Evaluator" as a budgeted item.
10. A formal and binding agreement with company executives to ensure that conditions will be provided for program participants to transfer learned skills to the job setting.
Implications
The recommendations just listed are offered with the
intent that employee training programs, specifically the TQM
program as offered by the CPE, will grow stronger in their
operation as they commit themselves to fulfilling their
mission of strengthening the nation's workforce. Attention
to the recommendations would ensure the inclusion of all
critical elements of effective training programs; namely,
needs analysis, evaluation design, curriculum development,
and program implementation (Brown, 1981; Connolly, 1988; McMahon & Carter, 1990; Bramley, 1991; Holcomb, 1993). The
development and dissemination of a comprehensive training
plan evolving from the information collected from each of
the critical components would provide direction for the
program, clarification of responsibilities, delineation of
tasks, and group cohesion.
The data suggest that training tasks, such as curriculum development, would be expedited and facilitated if guided by objectives emerging from a needs analysis. Open communication and partnerships within the program, as integrated teams work together, would promote progress in the right direction and avoid duplication of effort.
The selection and stabilization of evaluation
practices and measurability of data would safeguard the
principles of Total Quality as expressed in the program's
Quality Policy statement (Instructor's Manual, p. 1).
Ultimately, the program's implied goal to "meet or exceed"
clients' expectations would be met by a team of experts
focusing on the same aim.
Suggestions for Additional Research
Much has been learned through this case study. Ideas for further research emerged as this investigation drew to a close. Some of these ideas are presented below.
Further research should be conducted with programs not funded through public monies. A comparison of evaluation practices between community college programs funded by the companies themselves and programs funded by government agencies could be pursued. It would be of interest to discover whether evaluation practices would be more rigid or more lax if instruction were not considered a "freebie" by participating companies.
Additionally, studies could be designed to explore
the importance given to the evaluation process by the
State's Employment Training Panel. An investigation of the
RFP review and award process would indicate whether the
evaluation plan is considered a substantial element of a
proposal.
Lastly, further investigation is recommended to
establish the effectiveness of the CPE's evaluation methods.
This study was concerned with the "how" and the "why." A new study could concern itself with the question, "Are the practices effective?"
BIBLIOGRAPHY
Asgar, J. (1990, July). Give me relevance or give me
nothing. Training, pp. 49-51.
Bard, R., Bell, C. R., Leslie, S., & Webster, L. (1987).
The trainer's professional development handbook.
San Francisco: Jossey-Bass.
Bass, B. M., & Vaughan, J. A. (1966). Training in
industry: The management of learning. Belmont, CA:
Wadsworth.
Birnbrauer, H. (1987, July). Evaluation techniques that
work. Training and Development Journal, pp. 53-55.
Board of Governors, California Community Colleges.
(1993). State and Federal Legislature Programs.
Sacramento: Governmental Relations Division,
Chancellor Office, California Community Colleges.
Bramley, P. (1991). Evaluating training effectiveness:
Translating theory into practice. London: McGraw
Hill.
Broadwell, M. M. (1989). When trainees should not evaluate trainers. In Geber, B. (Ed.), Evaluating Training, p. 79.
Brown, S. M. (1981). A primer for colleges who intend
to provide training in industry. Haverhill, MA:
Northern Essex Community College. (ERIC Document
Reproduction Service No. ED 210-069)
Bushnell, D. S. (1990, March). Input, process, output: A model for evaluating training. Training and Development Journal, pp. 41-43.
Carnevale, A. P., & Schulz, E. R. (1990, July). Return
on investment: Accounting for training [Suppl].
Training and Development Journal, pp. S1-S32
Centra, J. A. (1980). Determining faculty effectiveness. San Francisco: Jossey-Bass.
Cohen, A. M., & Brawer, F. B. (1987). The American
community college. San Francisco: Jossey-Bass.
Connolly, S. M. (1988, February). Integrating evaluation, design and implementation. Training and Development Journal, pp. 20-23.
Connor, W. A. (1984). Providing customized job training through the traditional administrative organizational model. New Directions for Community Colleges, 48, 29-39.
Cornwell, J. B. (1989). In your next class, try this alternative to happiness sheets. In Geber, B. (Ed.), Evaluating Training, p. 77.
Craig, R. L. (Ed.). (1979a). Assessment supplement to
training and development handbook. Alexandria, VA:
ASTD, McGraw Hill.
Craig, R. L. (Ed.). (1979b). Training and development
handbook: A guide to human resources. Alexandria,
VA: ASTD, McGraw Hill.
Deegan, W. L., & Drisko, R. (1985, March). Contract training progress and policy issues. Community, Technical and Junior College Journal, pp. 14-17.
Deegan, W. L., & Tillery, D. J. (Eds.). (1985). Renewing the American community colleges. San Francisco: Jossey-Bass.
Derricott, R., & Parsell, G. (1991). Entering a new field: Evaluation in the workplace. Studies in Educational Evaluation, 17, 341-353.
Ebel, E. (1991). Achieving excellence in business.
New York: Marcel Dekker, Inc.
Edwards, F. M. (1990a). Building a world class work force. Community, Technical and Junior College Journal, 60(4), 8.
Edwards, F. M. (1990b, February/March). Editorial. Community, Technical and Junior College Journal, p. 9.
English, F. W., & Hill, J. C. (1994). Total quality education. Thousand Oaks, CA: Corwin Press.
Erickson, P. R. (1990, January). Evaluating training
results. Training and Development Journal, pp. 57-
59.
Eurich, N. P. (1990). The learning industry. Princeton, NJ: The Carnegie Foundation for the Advancement of Teaching.
Evaluating soft skills training. (1991, September).
Training, pp. 14-15; 73.
Feldman, M. J. (1985). Establishing linkages with other
educational providers. In Deegan, W. L., & Tillery,
D. J. (Eds.), Renewing the American Community
Colleges. San Francisco: Jossey-Bass.
Fitz-Enz, J. (1994, July). Yes . . . You can weigh
training's value. Training, pp. 54-58.
Geber, B. (1987, April). Supply side schooling.
Training, pp. 24-30.
Geber, B. (1989). Evaluating training. Minneapolis:
Lakewood. (Collection of Articles from Training
Magazine.)
Goldstein, I. L. (1986). Training in organizations.
Belmont, CA: Wadsworth.
Goldstein, I. L., et al. (1989). Training and development in organizations. San Francisco: Jossey-Bass.
Gordon, J. (1991, August). Measuring the "goodness" of training. Training, pp. 19-25.
Holcomb, J. (1993). Make training worth every penny.
Del Mar, CA: Wharton.
Jones, J. E. (1990, December). Don't smile about smile
sheets. Training and Development Journal, pp. 19-
21.
Juechter, W. M. (1993, October). Learning by doing.
Training and Development Journal, pp. 28-30.
Keller, G. (1983). Academic strategy. Baltimore:
The Johns Hopkins University Press.
Kinlaw, D. C. (1992). Continuous improvement and measurement for total quality. New York: Pfeiffer.
Kirkpatrick, D. L. (1960). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13, 3-9; 21-26.
Kirkpatrick, D. L. (1978, June). Developing an in-house
program. Training and Development Journal, pp. 40-
43.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic
inquiry. Newbury Park, CA: Sage.
Luther, D. B. (1984, December). Partnerships for employee training: Implications for education, business and industry. New Directions for Community Colleges, pp. 75-81.
Mager, R. F. (1984). Preparing instructional objectives (Rev. 2nd ed.). Belmont, CA: Lake.
Mager, R. F. (1984). Measuring instructional results. Belmont, CA: Lake.
Maiuri, G. M. (1989). College and business industry collaborative efforts—Training partnerships that make "cents." Community Services Catalyst, 19(1), 11-15.
May, L. S., Moore, C. A., & Zammit, S. J. (Eds.).
(1987). Evaluating business and industry training.
Boston: Kluwer.
McEvoy, G. M., & Buller, P. F. (1990, August). Five
uneasy pieces in the training evaluation puzzle.
Training and Development Journal, pp. 39-42.
McMahon, F. A., & Carter, E. M. A. (1990). The great
training robbery. New York: The Falmer Press.
Meloy, J. M. (1994). Writing the qualitative dissertation: Understanding by doing. Hillsdale, NJ: Lawrence Erlbaum.
Merriam, S. B. (1988). Case study research in education. San Francisco: Jossey-Bass.
O'Banion, T. (1991). The community college's latest
mission. Trustee Quarterly, p. 15.
Onsite materials. (1992). Unpublished Curriculum.
Parnell, D. (1986). The neglected majority (2nd printing). Washington, DC: The Community College Press.
Parnell, D. (1990). A new economic development paradigm. Community, Technical, and Junior College Journal, 60(4), 9.
Parry, S. (1993, September). The missing 'M' in TQM.
Training, pp. 29-31.
Peshkin, A. (1988, October). In search of subjectivity—One's own. Educational Researcher, pp. 17-22.
Phillips, J. J. (1991). Handbook of training evaluation and measurement methods (2nd ed.). Houston: Gulf.
Pincus, F. L. (1985, August). Customized contract training in community colleges: Who really benefits? Paper presented at the Annual Convention of the American Sociological Association, Washington, DC. (ERIC Document Reproduction Service No. ED 261 721)
Reynolds, A. (1991). The basics: Evaluating training.
In ASTD Trainer's Toolkit: Evaluation Instruments.
Alexandria, VA: ASTD.
Rossi, P. H., Freeman, H. E., & Wright, S. R. (1979).
Evaluation: A systematic approach. Beverly Hills:
Sage.
Schecter, E. S. (1992). Managing for world-class quality. New York: Marcel Dekker, Inc.
Shipp, T. (1989). Calculating the cost effectiveness of training. In Geber, B. (Ed.), Evaluating Training, p. 97.
Smith, J. E., & Merchant, S. (1990, August). Using competency exams for evaluating training. Training and Development Journal, pp. 65-71.
Sorohan, E. G. (1993, October). We do; therefore, we
learn. Training and Development Journal, pp. 47-55.
Spradley, J. P. (1980). Participant observation. Fort
Worth: Holt, Rinehart and Winston.
Study of contractual education programs in the California community colleges. (1986). Report to the California Legislature and the Chancellor, California Community Colleges. Los Angeles, CA: Arthur Young & Co.
Suchorski, J. (1987). Contract training in community colleges. Paper submitted to Dr. James L. Wattenbarger, University of Florida. (ERIC Document Reproduction Service No. ED 291 425)
Thompson, J. T. Jr. (1978, July). How to develop a
more systematic evaluation strategy. Training and
Development Journal, pp. 88-93.
Townsend, P. L., & Gebhardt, J. E. (1993). Quality in action. New York: Wiley.
United States Department of Education, Office of Vocational and Adult Education, Division of Adult Education and Literacy. (1992). Work-place literacy: Reshaping the American workforce. Washington, DC: United States Printing.
Waddell, G. (1990). TIPS for training a world-class work force. Community, Technical, and Junior College Journal, 60(4), 20-27.
Whyte, W. F. (1984). Learning from the field. Beverly
Hills: Sage.
Williams, R. L. (1994). Essentials of total quality management. New York: American Management Association.
Worthen, B. R., & Sanders, J. R. (1987). Educational
evaluation. New York: Longman.
Yin, R. K. (1991). Case study research. Newbury Park,
CA: Sage.
APPENDIXES
APPENDIX A
CONVERSATION GUIDE
1. Personal & professional background?
2. Time with the CPE?
3. What attracted you to CPE? How did you connect?
4. Any formal instruction in teaching methodology?
5. How many classes taught?
6. Likes about teaching at the CPE?
7. Dislikes?
8. How do you know if students are learning?
9. Any specific methods?
10. Why the selection? How rated? How used? Frequency?
11. Think program is good? What are the goals? How do
you know that goals are met? Whose goals are they?
12. Describe participant population.
13. Reaction to mid-way evaluation?
14. Reaction to 90-day post-training?
15. Relationship with administration?
16. Relationship with other instructors?
17. Relation with the college?
18. Relation with the company?
APPENDIX B
INTRODUCTION LETTER
Dear
This letter will serve to introduce or "re-introduce"
Teresita Castro-McGee, as a researcher who has been collect
ing data from our program for the past two years. Teresita
is a Ph.D. Candidate at the University of Southern California
who is now in the process of writing her dissertation on the
use of evaluation in educational programs in business and
industry.
Teresita is conducting a case study and aims to complete her
dissertation by the summer of 1994. She has been interviewing and observing our instructors and administrators since
January 1991. She has also become quite familiar with our
TQM On-Site curriculum.
She has been given our approval to continue gathering data
through her interviews and observations on site. For that
purpose she will be provided with a list of instructors'
names and numbers. It is her plan to contact some, or all,
of you in the next few months to interview you or observe
your class(es). I ask that you cooperate with her in your
interactions.
Teresita has assured me that all data gathered will be
treated with the highest degree of professional ethics. Upon
writing her dissertation all sources of data will be granted
anonymity to protect confidentiality.
Please be assured that her data gathering is for research
purposes only as they apply to her dissertation. Should
there be any, this office will be open to her recommendations
as we are open to suggestions for improvement from all
sources. However, the primary purpose of her relationship
with us is data gathering for her case study.
I thank you for your attention, and please do not hesitate to
call should there be need for further discussion.
Sincerely,
[College Official]
APPENDIX C-F
CLASS EVALUATION FORM
TRAIN THE IMPLEMENTOR
CLASS EVALUATION FORM
CLASS DAY
DATE: INSTRUCTOR:
PURPOSE:
To evaluate and improve GCC programs, we would appreciate your individual
impressions and suggestions regarding the class program you have just
completed.
Your responses are confidential. Constructive criticism is encouraged.
INSTRUCTIONS:
Read each item carefully and place a check mark in the column which comes the
closest to matching your opinion of each course element. Mark one answer
only for each item. Include any additional comments or suggestions in the
space provided on the reverse side of this form.
OVERALL EVALUATION (Write your answer in the spaces provided or attach
sheet of paper if you need more room)
1. WHAT DID YOU LIKE BEST ABOUT THE CLASS?
2. WHAT DID YOU LIKE LEAST ABOUT THE CLASS?
3. WHAT WOULD YOU LIKE TO SEE CHANGED?
COURSE ELEMENTS:
A. INSTRUCTOR'S PERFORMANCE EXCELLENT GOOD ADEQUATE POOR
1. Knowledge of subject matter
2. Organization of subject matter
3. Clearly defined objectives
4. Management of class
5. Answers to student questions
6. Relationship with class
B. WRITTEN MATERIALS EXCELLENT GOOD ADEQUATE POOR
1. Accuracy of Content
2. Depth of content
3. Understandable and clear
4. Relevance to the class
C. TRAINING AIDS
(Videos, transparencies, etc.)
EXCELLENT GOOD ADEQUATE POOR
1. Knowledge of subject matter
2. Answers to student questions
3. Relationship with class
4. Relevance to the class
D. ON-THE-JOB-USEFULNESS EXCELLENT GOOD ADEQUATE POOR
1. Relevance of instructor's
presentation to your job
2. Relevance of written materials
and training aids to your job
E. EFFECTIVENESS OF TRAINING
PROGRAM
EXCELLENT GOOD ADEQUATE POOR
1. Greater understanding of
subject
2. Learned new job skills
3. Know where to find more help
or information on the subject
4. Better able to perform your
job
APPENDIX G
SUMMARY OF RESULTS
WORKSHOP: TQMS
WORKSHOP DATE:
INSTRUCTORS:
# OF STUDENTS:
DAY:
PERCENTAGE RESULTS
QUESTION                                      EXCELLENT  GOOD  ADEQUATE  POOR
INSTRUCTOR PERFORMANCE
A1  KNOWLEDGE OF SUBJECT MATTER                   0%      0%      0%      0%
A2  ORGANIZATION OF SUBJECT MATTER                0%      0%      0%      0%
A3  OBJECTIVES CLEARLY DEFINED                    0%      0%      0%      0%
A4  MANAGEMENT OF CLASS                           0%      0%      0%      0%
A5  ANSWERS TO STUDENT QUESTIONS                  0%      0%      0%      0%
A6  RELATIONSHIP WITH CLASS                       0%      0%      0%      0%
WRITTEN MATERIALS
B1  ACCURACY OF CONTENT                           0%      0%      0%      0%
B2  DEPTH OF CONTENT                              0%      0%      0%      0%
B3  UNDERSTANDABLE AND CLEAR                      0%      0%      0%      0%
B4  RELEVANCE TO THE CLASS                        0%      0%      0%      0%
TRAINING AIDS
C1  QUALITY OF AIDS                               0%      0%      0%      0%
C2  ACCURACY OF CONTENT                           0%      0%      0%      0%
C3  UNDERSTANDABLE AND CLEAR                      0%      0%      0%      0%
C4  RELEVANCE TO CLASS                            0%      0%      0%      0%
ON THE JOB USEFULNESS
D1  RELEVANCE OF PRESENTATION TO JOB              0%      0%      0%      0%
D2  RELEVANCE OF TRAINING MATERIALS TO JOB        0%      0%      0%      0%
EFFECTIVENESS OF SEMINAR
E1  GREATER UNDERSTANDING OF SUBJECT              0%      0%      0%      0%
E2  LEARNED NEW JOB SKILLS                        0%      0%      0%      0%
E3  KNOW WHERE TO FIND MORE INFORMATION           0%      0%      0%      0%
E4  BETTER ABLE TO PERFORM JOB                    0%      0%      0%      0%
APPENDIX H
REVIEW TEST
A method of using mathematics to monitor and control
a process is:
a. SPC
b. standard of quality
Products produced by the same machine or process
always exhibit some degree of:
a. tolerance
b. variation
A process that contains only common causes of
variation is considered to be:
a. stable and predictable
b. unstable and unpredictable
The individual aspects of the product which are
covered by specifications are called?
a. quality characteristics
b. tolerances
APPENDIX I-J
CURRICULUM
NEW CURRICULUM
Session 1
1. General Orientation
2. Student Introductions
3. TQM Overview
Assignment:
1. Identify 3 internal customers and 3 internal
suppliers
Session 2
1. Cost of Quality
2. Types of Data
3. Flow Charting
Assignment:
1. Create Macro and Micro flow chart
2. Identify 3 problems in your area that you
believe are solvable.
Session 3
1. Key process indicators
2. Pareto Analysis
3. Styles for success
4. Communication skills— styles for success
Assignment:
1. Prepare data collection plan for Pareto Analysis
(Due by next session)
2. Pareto Analysis due by session 6
Session 4
1. Calculator exercise
2. Teamwork (1) - 6" Rule
3. Communication skills - teamwork
4. Resistance to change
(Business of Paradigms)
5. Meetings
Assignment:
1. Calculator assignment
Session 5
1. Calculator exercise
2. Teamwork - Decision making
(Lost at Sea exercise)
3. TQM Implementation Strategy
4. Problem solving
Assignment:
1. Identify the current data collection tools now
being used in your department and provide your
input as to the value of the data being
collected.
Session 6
1. Review Pareto Analysis assignment given in
session 3
2. Brainstorming
3. Cause and Effect
4. Problem solving (case study)
5. Leadership enabling and empowerment
(Leadership video)
Assignment:
1. Prepare a Cause and Effect diagram using one of
the problem categories identified in your Pareto
Analysis as the problem. Follow the standard
conventions for the Cause and Effect diagram of
Manpower, Machines, Materials, Measurement,
Methods, and Environment
Session 7
1. Variation
a. The Loss Function
2. Frequency distributions
a. Tally charts
b. Histograms
1. Construction
2. Differences from Pareto
3. Characteristics
a. Central Tendency
b. Spread
c. Shape
APPENDIX K-M
POST-TRAINING ASSESSMENT
Community College
Employment Training Panel
Individual Company
Post Training Productivity Assessment
Company Name: ____________________  Fax No.: ____________
Type of Training: ____________________  Co. Representative: ____________
Number of Participants: ________  Phone Number: ____________
Productivity Measurement        No Change    Improvement    Comments and Reference Data
Company Revenue
Mfg./Assembly Costs
Production Improvements
Profits
Scrap/Rework
Lead Times
Warranty Claims
I verify that the above data is true and correct.
Signature of Company Representative          Date
EMPLOYEE POST TRAINING
PRODUCTIVITY ASSESSMENT
NAME OF COMPANY: ___________________________________________________ STATE TAX I.D. #
NAME OF EMPLOYEE TRAINED:
CO. REPRESENTATIVE: _________________________________________ TITLE:___________________
PHONE NUMBER: ________________________________ SIGNATURE:_________________________
1. Describe in detail the skill level of your employee before training:
Limited capabilities in workplace application of Total Quality Management, Statistical Process Control, Teamwork, and Just In Time tools and methodologies.
2. List below the work related training objectives your employees will achieve as a result of this training:
PRE TRAINING                    POST TRAINING
Date    Work Related Training Objectives        No Change    Some Increase    Greatly Increased    Outstanding
        Increased ability to apply TQM methods on the job.
        Increased ability to apply SPC tools on the job.
        Increased ability to apply TLC tools on the job.
        Increased ability to apply JIT methods on the job.
DATE TRAINING BEGAN:_____________________________
DATE TRAINING ENDED:_____________________________
DATE STARTED POST TRAINING:_____________________
DATE OF POST TRAINING PRODUCTIVITY ASSESSMENT:
Post Training Co. Representative/Title          Date