EXPERIMENTAL ANALYSIS OF THE INSTRUCTIONAL MATERIALS
IN BEHAVIORAL COLLEGE INSTRUCTION USING A
MARKETING MANAGEMENT COURSE
by
Jeffrey Charles Olsen
A Dissertation Presented to the
FACULTY OF THE SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
September 1978
Copyright Jeffrey Charles Olsen 1978
This dissertation, written under the direction of the Chairman of the candidate's Guidance Committee and approved by all members of the Committee, has been presented to and accepted by the Faculty of the School of Education in partial fulfillment of the requirements for the degree of Doctor of Education.
Date: September 1978
Dean
Guidance Committee
Chairman
ACKNOWLEDGMENTS
I wish to thank my committee members, Dr. William Allen, Dr. Robert Filep, and Dr. Earl Pullias. I also wish to express my appreciation to Dr. Robert Casey for his help.
Thanks are also given to Waraporn Eoaskoon for helping with the typing of the final rough draft. Frank Chew, Director of the FlexEd Program, deserves my thanks for his cooperation. I also wish to thank the supervising professor, Dr. Burton Marcus, and the learning coaches and students of the marketing management course.
I must also thank Dr. Frederick A. Indorf for his encouragement. Among others who have encouraged me are Dr. Walter Wittich and Mrs. Carolyn Banner.
TABLE OF CONTENTS
LIST OF ILLUSTRATIONS
LIST OF TABLES
Chapter
I. THE PURPOSE AND PROBLEM
Background of the Problem
Research into Behavioral Instruction in the College Classroom
The Problem
Purpose of the Study
Statement of Hypotheses
Definitions of Terms
Organization of the Remainder of the Study
II. REVIEW OF THE LITERATURE
The Growth of the Field of Evaluation
Models for the Evaluation of Educational Programs
A Model for Course Evaluation
Answering the Criticism of the Evaluation Models
Learning Systems Using Self-Managed Instruction
Toward an Evaluation Model for Course Improvement
Related Research
Research into the Instructional Materials Component of Behavioral Instruction in the College Classroom
Summary
III. RESEARCH METHODOLOGY
Formative Evaluation
Summative Evaluation
Summary
IV. RESULTS
Formative Evaluation
Summative Evaluation
V. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
Summary
Conclusions
Recommendations
REFERENCES
APPENDIXES
A. Prescriptions for Instructional Media Design
B. Examples of Test Items
C. Questionnaires I, II, and III
D. Examples of Design Elements
LIST OF ILLUSTRATIONS
Figure
1. The Reported Use of Cassette Tapes in Formative Evaluation
2. The Reported Use of Cassette Tapes in Summative Evaluation
LIST OF TABLES
Table
1. Birthdates of FlexEd Students in Summative Evaluation
2. Means and Standard Deviations of FlexEd and Lecture Groups in Formative Evaluation
3. Learner Responses to Module II Questionnaire
4. Learner Responses to Module V Questionnaire
5. Learner Responses to Module VIII Questionnaire
6. Means and Standard Deviations of Pretest Results in Summative Evaluation, Fall 1976
7. Analysis of Covariance and Tukey's HSD Test for Pretest and Test I in Summative Evaluation
8. Analysis of Covariance and Tukey's HSD Test for Pretest and Test II in Summative Evaluation
9. Analysis of Covariance and Tukey's HSD Test for Pretest and Final in Summative Evaluation
10. FlexEd Learner Responses to Module II Questionnaire in Summative Evaluation
11. FlexEd Learner Responses to Module V Questionnaire in Summative Evaluation
12. FlexEd Learner Responses to Module VIII Questionnaire in Summative Evaluation
CHAPTER I
THE PURPOSE AND PROBLEM
Little systematic application of the growing theory and
strategies of evaluation to educational programs has been made in higher
education. Universities offering courses that make extensive use of
instructional materials could benefit from evaluation information. This
is especially true where the materials are expensive to produce.
Courses offered through nontraditional learning institutions such as the University of Mid-America or the Open University are classified under the generic term "behavioral instruction in the college classroom." Traditional universities also offer such courses as alternatives to lectures.
There is sometimes provision for formative evaluation, but not
for any ongoing procedure that would provide a continuing flow of
information for course improvement through improved course design. In
some educational programs in the United States that rely on federal
grants, evaluation models have been useful in directing evaluation
studies and providing information on program reform for decision makers
at the federal and local levels.
There has been little evidence that any of these procedures, models, or paradigms that provide information for course revision have
been applied to courses at the higher education level. In the case of
self-managed instruction, the course evaluation becomes materials evaluation. With regular instruction in the college classroom, evaluation concentrates on the instructor. With self-managed instruction or behavioral instruction in the college classroom, where students learn
through the use of instructional materials, the focus shifts from the
professor to the materials. The evaluation of the course becomes the
evaluation of the instructional materials.
Examples of self-managed instruction in higher education are
courses offered at the Open University (OU) in England, at the University of Mid-America (UMA) in Lincoln, Nebraska, and at the University
of Southern California (USC) in Los Angeles, California.
An evaluation model for behavioral instruction in the college
classroom was selected for this study on the basis of a comparison of
evaluation models and the evaluation practices at the OU, UMA, and USC.
By applying this model to a course in marketing, the importance of the
instructional design of the materials component was determined.
Background of the Problem
In this section some of the problems encountered with evaluation
at the Open University, the University of Mid-America, and the FlexEd
Program at the University of Southern California are presented.
The Open University
The Open University provides courses using a multimedia approach.
Units of work include correspondence texts, home experimental work,
broadcast material, and audiovisual materials available at learning
centers (McCormick, 1975). Broadcast material is transmitted over
television (BBC 2) and radio (VHF 3 or VHF 4).
The evaluation practices now in existence in open learning
institutions and universities show that evaluation is carried out at
different levels. Practices range on a continuum from simply collecting
questionnaire data to the measurement of learning by objective tests to
comparisons between several groups in formalized research designs. The
course evaluation carried out by McCormick (1975) on an electronics
course at the Open University was based on questionnaires containing
such questions as:
1. How many hours did you spend on Unit 10?
2. How useful were the experimental notes for Unit 10?
The data gathered from these questions tended to be subjective.
Also, there were no guidelines for the instructional technologist to
follow in course revision. McCormick (1975) concluded:
The lack of a model to direct the revision of a course based on
the evaluation is the most serious defect of what passes for
evaluation "theory," and when the dust has settled on the various
models, etc., perhaps someone will look at the problems of this issue. Also more serious thought must be given in helping nonprofessionals tackle evaluation and perhaps case studies of actual
evaluation would be more helpful than models and "theory." (p. 28)
The FlexEd Program in the School of Business Administration at
the University of Southern California and the University of Mid-America
are faced with similar problems.
The University of Mid-America
The design and evaluation of courses at the University of Mid-
America are carried out by a large team of professionals. The team consists of an instructional designer, a senior producer, a content specialist, a writer, an evaluator, and several media specialists.
Two course evaluation reports have been made by the university,
one on Introductory Psychology, and one on Accounting I. A report on
the evaluation of the Introductory Psychology course was made by Sell
(1975). Among the seven concerns of the report was the question:
"What improvements are needed for the further evaluation of Introductory
Psychology as well as other UMA courses?" (p. 3).
It seems that evaluation carried out by the UMA was not primarily for the purpose of course revision; there were other purposes, including defining the audience and finding out how satisfied it was with the course. Sell concluded that, "from the evidence collected by the instruments and procedures used in the learner evaluations of Introductory Psychology during its first offering, it is difficult to make specific recommendations for course improvement" (p. 11).
In the Accounting I course, the media components were listed as: (a) textbook, (b) a study guide, (c) fifteen 30-minute television programs, (d) sixteen 30-minute (or less) audio tapes, and (e) sixteen newspaper articles. There was no statement concerning how these materials were to be evaluated, no list of decisions as to how they were to be revised, and no suggestions for subsequent revision.
The FlexEd Program
The FlexEd program in use in the School of Business Administration at the University of Southern California relies on instructional materials that include a workbook, tape, textbook, and readings photocopied from articles in the library. Tutors are available for students when they are having trouble with the modules or topics in the course.
To aid the students in their progress through the course,
instructional design elements are applied to the coursebook materials.
These consist of statements of objectives, directions concerning how
the student should proceed, information mapping by labeling each page,
practice questions with space for replies, answers to practice questions
for feedback, and an outline of the contents of the textbook. There is
an introductory section in the coursebook that gives the student information about the way the course is to be handled, how the tutor can be
contacted, and other mechanics of the course.
At the date of writing this dissertation, there had been no documented evaluation of the FlexEd courses or program. Informal evaluations had been carried out by asking the students about the course they had just completed or by having them fill out a questionnaire.
Research into Behavioral Instruction in the College Classroom
This study is related to a trend in the research into college-level behavioral instruction. This trend has developed away from comparison studies between behavioral instruction and other forms of instruction toward attempts to determine the relative contributions of the various ingredients or components of behavioral instruction to its overall effectiveness (Robin, 1976).
The components of behavioral instruction listed by Robin as being studied by researchers include: (a) self-pacing, (b) unit perfection requirement, (c) stress on the written work, (d) proctors, (e) study objectives, (f) assignment length and frequency of testing, (g) grading, and (h) lectures. In addition to the attempts to measure the relative effectiveness of these components, several studies have been made to investigate interactions between student characteristics and the components listed above.
Some of the student characteristics studied included test anxiety, student ability, and academic skills. The present study relates to those that investigated the stress on the written word component. Many of these studies were concerned with testing. However, according to Robin (1976):
In order to analyze the variables responsible for each student's success or failure with behavioral instruction, researchers will have to scrutinize an aspect of the instructional system often glossed over--the materials. (p. 344)
The only other studies found in the literature that investigated the instructional materials were by Miller (1974) and Miller and Weaver (1975).
According to Robin (1976):
Miller has demonstrated that (a) his materials produce improvements in concept identification with novel examples, (b) each element of his materials contributes significantly to their overall effectiveness, (c) the materials produce superior concept formation to two traditional textbooks covering the same contents. (p. 345)
The present study investigates the role of improved instructional materials design in the offering of a course in marketing management in the behavioral instruction mode using both print and audio materials.
The Problem
The general problem was related to the whole field of educational evaluation. According to Stufflebeam (1971):
A third lack is that of appropriate instruments and designs. Every evaluation situation requires a particular, appropriate evaluation design, fleshed out with appropriate instruments and techniques for operational utility. There is no systematic treatment of evaluation design, as there is of experimental design or even quasi-experimental design, in the Campbell-Stanley sense. (p. 8)
A more specific problem was the absence of documented evaluation of courses in business administration and marketing management in particular to show that a self-managed or behavioral instruction mode is as effective as the regular class. There also has been no evaluation of the instructional materials component in this area of business administration.
The FlexEd courses in business administration rely almost completely on the use of instructional materials and tests. Where some types of behavioral instruction make use of lectures and group discussion, the FlexEd courses use instructional materials supported by individual consultation with the learning coach and supervising professor.
Purpose of the Study
The FlexEd program offers seven undergraduate courses in business administration. These are four-unit core courses that are required of all undergraduates studying for the bachelor's degree in business administration.
The present study focuses on the marketing management course. The first part of the study is theoretical. Its purpose was to select an evaluation model that could be used to describe course development and evaluation. The second part is experimental and involved meeting the following objectives:
1. To evaluate the courses by comparing student achievement with instruction in the regular college classroom to the FlexEd instruction in marketing management at the University of Southern California
2. To evaluate the instructional materials components of the course to determine whether they are all contributing to the learning
3. To ascertain other areas of research that might be productive of course improvement
With the development of more open-learning systems that rely on self-managed instructional materials that can be studied at a distance from the educational institution, the process of course evaluation is becoming increasingly important. With more studies being published concerning the roles of different components of behavioral instruction in college classrooms, it is becoming easier to apply this research to the design of new courses of instruction.
Statement of Hypotheses
The hypotheses of this study were as follows:
Hypothesis 1
There will be no difference in learning between the FlexEd class and the regular class as measured by three achievement tests.
Hypothesis 2
The revised instructional materials in the FlexEd course in marketing management will not lead to increased achievement on the three achievement tests.
Hypothesis 3
The revised instructional design of the instructional materials for the marketing management course in FlexEd will not lead to increased utilization of the revised materials.
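The List of Tables indicates that the summative analyses pair analysis of covariance (with the pretest as covariate) with Tukey's HSD test for each achievement test (Tables 7 through 9). As a rough, modern illustration of that kind of comparison, and not a reproduction of the author's original computations, the sketch below shows how Hypothesis 1 might be examined for one test; the file name, column names, and grouping variable are assumptions introduced only for the example.

```python
# Illustrative sketch only: ANCOVA with the pretest as covariate, followed
# by Tukey's HSD, in the spirit of Tables 7-9. The file "achievement_scores.csv"
# and the columns "group", "pretest", and "test1" are assumed, not taken
# from the dissertation.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = pd.read_csv("achievement_scores.csv")  # one row per student

# ANCOVA: does group membership predict Test I scores once pretest
# differences are controlled? Hypothesis 1 predicts no group effect.
ancova = smf.ols("test1 ~ pretest + C(group)", data=scores).fit()
print(sm.stats.anova_lm(ancova, typ=2))

# Pairwise comparison of group means with Tukey's HSD (applied here to
# the unadjusted Test I scores as a simplification).
print(pairwise_tukeyhsd(endog=scores["test1"], groups=scores["group"], alpha=0.05))
```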
Definitions of Terms
Evaluation.--A definition of evaluation as stated by Stufflebeam (1971) is used in this study: "Evaluation is the process of delineating, obtaining, and providing useful information for judging decision alternatives" (p. 25).
In other words, evaluation is a process that goes on over a period of time. It is not something that happens at any one time, such as the end of the semester. So that time is not wasted, it is important to delineate what information is going to be useful. There are different ways of obtaining the same information. If the evaluator wants to know how many times a student visited the learning coach, the evaluator could ask the student, the learning coach, or a receptionist who makes appointments and sees who comes into the office.
To be sure that the information is useful, the evaluator should list the decision alternatives that are to be judged. If the decision alternatives are the design elements, these should be listed first and then information should be collected to make judgments among these design elements.
Self-Managed Instruction.--The Dictionary of Education defines self-managed instruction as: "A term used with reference to instructional materials or learning kits which are designed so that students can be instructed and can learn without teacher intervention or with a minimum of teacher guidance" (Good, 1973, p. 352).
This kind of instruction is used in a process termed "distance education" in British countries. The process applied to higher education is carried out at the Open University at Milton Keynes near London and in departments of external studies attached to Australian universities.
In the United States, the University of Mid-America is relying
on this system in working with state universities in the Midwest. This
present study was carried out within a program called the FlexEd Program
in the School of Business Administration at the University of Southern
California.
Behavioral Instruction.--Behavioral instruction in the college classroom is another name for the personalized system of instruction (PSI), which is defined by Johnston (1975) as follows:
The Personalized System of Instruction is characterized by five basic features: (1) The students progress through the course with relatively more choice about their pace than in more traditional courses. (2) The students are permitted to move on to new material only after mastering all prior material. (3) Lectures are used for demonstration and motivation rather than for dissemination of information. (4) There is considerable stress placed upon the written verbal performance of the student in teacher-student communications. (5) Undergraduate students who have already completed the course are used as proctors for repeated testing, immediate feedback, and tutoring on an individual basis. (p. 5)
FlexEd Mode.--The FlexEd mode is that mode of instruction developed at the University of Southern California for students in Business Administration who choose to study individually without attending lectures. It is generally accepted today that instruction at the higher education level should be adapted to different learning styles. The student should be able to choose the mode that best suits his style of learning for any given subject area.
The mode was devised by Drs. Milton Holmen and Stanley Weingart during the 1972-1973 academic year and utilizes a textbook, a coursebook, audiotapes, and photocopied readings, along with tutorial aid. The original purpose was to provide instruction for those who could not attend the regular lectures or who had lectures that conflicted in their time schedule.
Evaluation Model or Paradigm.--A model is defined by the American Heritage Dictionary of the English Language (1973) as a tentative ideational structure used as a testing device. An evaluation model in education is therefore an ideational structure to be followed when carrying out procedures to be applied to a given educational situation. Most dictionaries give paradigm a synonymous meaning. The Dictionary of Education (Good, 1973) gives the following definition of paradigm: "a representation, a model of a theory, an idea or a principle" (p. 407).
The First Test.--This test covers Modules I and II.
The Second Test.--This test covers Modules III, IV, and V.
The Final Test.--This test covers Modules VI, VII, and VIII.
Organization of the Remainder of the Study
Chapter II contains the review of literature. Three models of educational evaluation are reviewed and compared to determine which of the systems models best describes the process of course evaluation. Reports are made concerning evaluation procedures at three institutions of open learning in higher education, including the University of Mid-America, the Open University, and the FlexEd program at the University of Southern California. Finally, the media research and the behavioral instruction research that relate to this study are discussed.
In Chapter III, the evaluation methodology used as a part of the evaluation model is explained, first to collect the formative evaluation data and then to test for the effectiveness of the revisions. The testing procedures and techniques for test design are discussed in this chapter.
Chapter IV presents data collected during the first administration of the course. A description of how the data were interpreted so that decisions could be made for course revisions is included. The last section of the chapter describes the data collected from the second administration of the course.
Chapter V contains a summary of this study, the conclusions
that can be drawn, and some recommendations for further study.
A list of references and Appendixes A through D complete the
dissertation.
CHAPTER II
REVIEW OF THE LITERATURE
The first objective of this chapter was to compare three evaluation models to decide which model most accurately describes the activities of course evaluation. To support the arguments in this comparison, evaluations of courses taught through behavioral instruction in the classrooms at the University of Mid-America, the Open University, and the FlexEd program at the University of Southern California are made.
A review of the related research completes the chapter.
The Growth of the Field of Evaluation
The field of educational evaluation is a part of a rapidly
developing field of the evaluation of social action programs. With
legislative organizations studying the appropriation of funds, there is
a growing demand for evaluation research. This research may be on the
effectiveness of mental health programs, public health programs, and
early intervention programs such as Head Start and Follow Through.
In the United States, the "Great Society" created a wide variety
of programs designed to overcome the disadvantages of poverty. When the
Elementary and Secondary Education Act was first passed by Congress, sponsors of the bills were concerned that the recipient institutions make concentrated efforts to show some improvement to justify further funding. Thus the different titles of the Elementary and Secondary
Education Act required those who applied for funds to state how their
programs were to be evaluated. When these same people reapplied, their
evaluations were considered as a basis for further funding.
In Australia, the programs established by the Australian government between 1972 and 1975 brought about increases of several hundred
percent in government spending in the areas of health and education.
Never before had there been such increases in government spending. The
economic consequences of such spending have not been measured. However,
measures of inflation and cost of living are continually studied and
monitored.
A recent worldwide spiral of inflation and recession has caused
many countries to look closely at government spending. The resulting
need for the refinement of the processes of evaluation has been shown by
the increasing number of publications concerned with this topic.
In his recently published Handbook of Evaluation Research, Campbell (1975) made the following statement:
The United States and other modern nations should be ready for an experimental approach to social reform, an approach in which we try out new programs designed to cure specific social problems, in which we learn whether or not these programs are effective, and in which we retain, imitate, modify, or discard them on the basis of apparent effectiveness of the multiple imperfect criteria available. Our readiness for this stage is indicated by the inclusion of specific provisions for program evaluation in the first wave of the "Great Society" legislation, and by the current congressional proposals for establishing "social indicators" and socially relevant "data banks." So long have we had good intentions in this regard that many may feel we are already at this stage, that we already are continuing or discontinuing programs on the basis of assessed effectiveness. It is a theme of this article that this is not at all so, that most ameliorative programs end up with no interpretable evaluation. We must look hard at the source of this condition, and design ways of overcoming the difficulties. (p. 71)
One of the biggest budget items of government spending is the
money spent on various educational programs. One of the major goals of
governments at different levels has been the attempt to equalize educational opportunities at all age levels, for all cultural differences,
and for those individual differences that can be measured and observed.
At the preschool level, programs such as Head Start and Follow Through
have been developed. At the primary and secondary levels, this attempt
at equalization has resulted in the Elementary and Secondary Education
Act in the United States and the development of a National Schools Council in Australia.
At the higher education level, government support has been given
to open education so that people of all ages may study to an advanced
level. Where there is a problem of distance, this has led to an emphasis on self-managed instruction. Where there is a problem of large
enrollment, there has been a movement to introduce personalized systems
of education or behavioral instruction in the college classroom. All
such programs could benefit from evaluation information.
Models for the Evaluation of Educational Programs
To enable institutions that rely on self-managed instruction to
make more valid decisions about course development and revision based on
collected information, a theory of evaluation is developing. A part of
this theory is a group of procedural models that can be used to collect
the appropriate information.
Out of the diffuse theory of educational evaluation have emerged
three models that give specific statements relative to the process of
evaluation. These are the Center for the Study of Evaluation Model,
the Context, Input, Process, and Product Model, and the Metfessel and
Michael paradigm.
The Center for the Study of Evaluation (CSE) model was developed
by Alkin (1973) and originated from the University of California at Los
Angeles Center for the Study of Evaluation. This model consists of five
parts. The five different types of evaluation in the CSE model are
systems assessment, program planning, program implementation, program
improvement, and program certification. Alkin (1973) defined the first of these types by stating:
Systems assessment is a means of determining the range and
specificity of educational objectives appropriate for a particular
situation. The needs may be represented as a gap between the goal
and the present state of affairs. The evaluation problem, then,
becomes one of assessing the needs of students, of the community,
and of society in relation to the existing situation. Assessment,
therefore, is a statement of the status of the system as it presently exists in comparison to desired outputs or stated needs of the system. (p. 151)
A symposium at the American Educational Research Association
convention in 1975 reported on a needs assessment covering a 3-year
period at the University of Mid-America. There were 9,000 respondents
included in four market surveys. The report covered needs assessment
in open learning, as well as for a specific course on the history of the
Great Plains, and is 183 pages long (Gooler, 1975).
It is useful to think of systems assessment of the CSE model and needs assessment from systems theory as similar concepts. Mager's (1972) book, Goal Analysis, is useful in understanding the processes of this type of evaluation. In discussing goal analysis, Mager made the following comment:
Like medication, instruction can be given when none is needed. It is also possible, as in prescribing medication, to instruct where some other remedy would be more to the point. Therefore, it is as appropriate for those who would solve problems of human performance to perform an analysis before selecting a remedy as it is for a physician to make a diagnosis before prescribing a cure. (p. 5)
The second type of evaluation of the CSE model is called program planning:
Program planning, the second need area, is concerned with providing information which will enable the decision-maker to make planning decisions to select among alternative processes in order to make judgments as to which of them should be introduced into the system to fill most efficiently the critical needs previously determined. The task of the evaluator is to anticipate the attainment of goals and to assess the potential relative effectiveness of different courses of action. (Alkin, 1973, p. 152)
Here the instructional designer begins to play a more active
part in course evaluation. It would seem that this area of evaluation
needs a clear statement of objectives. If the evaluation is concerned
with course evaluation, and materials and tests are going to be developed, it would seem that behavioral objectives should be an important
part of this type of evaluation.
The fact that Alkin did not specifically mention behavioral objectives can be interpreted as a weakness of his statement as one suitable for describing the process of course evaluation. However, if there is to be a statement of need, as Alkin said, then an instructional designer would probably play a larger role in writing this statement than he would in deciding what the statement of need should be. Defining the needs statement is primarily the role of the program director. In the case of course evaluation, it would be the role of the subject-matter expert.
The third type of evaluation defined by Alkin is program implementation. Here the instructional designer in course evaluation becomes actively involved. The instructional designer has helped collect information for systems assessment and has actively assisted in the writing or defining of the needs statement and program planning. To be more specific to course evaluation, the instructional designer has been through sessions involved with the objectives to be written. He has actively assisted in the writing of the objectives. Now he assists the subject-matter specialist in the design of instructional materials.
After the decision-maker has selected the program to be implemented, an evaluation of program implementation determines the extent to which the implemented program meets the description formulated in the program planning decision. . . .
There have been numerous examples in the educational literature of conflicting results relative to the impact of a specific instructional treatment. We would maintain that in large part this is attributable to the lack of specificity of the precise nature of the instructional treatment that was employed. (Alkin, 1973, p. 153)
It would seem that one solution to this would be a clear statement of objectives stated in behavioral terms, yet Alkin neglected to call for these objectives in his statement of need. In course evaluation, it is assumed that the objectives have been written specifically so that tests may be designed and materials developed. In course evaluation, the question is not so much how closely the program matches with what was planned, but how effective the materials are in helping students achieve the course objectives.
In many cases, a change is needed in the course materials or in their presentation, because it may not be possible to implement planned activities. Thus the implemented course may not need to resemble the planned course.
One reason the statement of Alkin's model does not adequately describe course evaluation is that he was concerned with the role of the evaluator in school programs, not the role of the team members in course design.
The fourth type of evaluation described by the Alkin statement is program improvement. Here Alkin asked the evaluator to measure how
well the student may have achieved on tests that measure success at
achieving objectives:
The key point in the understanding of the role of the evaluator in performing evaluation in this need area is that he is first and foremost an interventionist attempting to provide data which will lead to the immediate modification and, hopefully, improvement of the program. (Alkin, 1973, p. 153)
However, in this statement, Alkin did not give any directions to the evaluator or the instructional designer as to how tests are to be developed, what the problems of designing suitable tests are, and how the students should be given knowledge of the results of their tests.
In the fifth evaluation need area, program certification, the role of the evaluator is to provide the decision-maker with information that will enable him to make decisions about the program as a whole and its potential generalizability. (Alkin, 1973, p. 154)
Courses are sometimes transferred from one institution to
another and from one geographical area to another. Evaluation data
concerning how the course did or did not work in one area may be useful
to course developers who may need to make some changes to suit local
conditions.
In summary, however, the major problems with the Alkin model as
a suitable model for course evaluation are stated in his section on
program certification.
In considering the situations in which evaluation might take place in various need areas, we have found it helpful to differentiate between the evaluation of educational systems and the evaluation of instructional programs [italics mine]. In terms of the conceptual framework that has been presented, one can view the evaluation of educational systems as involving the first two need areas and the evaluation of instructional programs as largely involving the last three. (Alkin, 1973, p. 154)
The first two types of evaluation, which Alkin called systems assessment and program planning, are primarily for educational systems. The next three types of evaluation described by Alkin are program implementation, program improvement, and program certification, which he suggested are for educational programs. There was no reference to course evaluation, which is an even more specific area than educational programs and educational systems.
To summarize what is lacking in the Alkin model for a description of course evaluation, it could be said that:
1. The statements are directed toward programs and systems, not toward educational products and materials for courses.
2. The statements are for the program evaluator, not for the instructional designer or the subject matter expert.
3. The statements are broad and do not give direction to course development activities.
4. There is no adequate statement concerning what is needed for statements of objectives and the way data are to be collected through such instruments as tests and questionnaires.
The Stufflebeam CIPP Model
The second model to be considered as a pattern of course development and evaluation was the Stufflebeam model, which is made up of context, input, process, and product (CIPP) evaluation. The difference between these four types of evaluation is in the information that is collected for making decisions. Stufflebeam's four types of evaluation parallel Alkin's first four types of evaluation. Like Alkin, Stufflebeam (1971) neglected to say what activities are to be carried out under each of his definitions. Stufflebeam provided a summary statement of what these types of evaluation are:
Context evaluation serves planning decisions to determine objectives; input evaluation serves structuring decisions to determine project designs; process evaluation serves implementing decisions to control project operations; and product evaluation serves recycling decisions to judge and react to project attainments. (p. 218)
In his statement for context evaluation, Stufflebeam made a clear call for a statement of needs and objectives:
Context evaluation provides a basis for stating change objectives through diagnosing and ranking problems in meeting needs or using opportunities, and it analyzes change objectives to determine the amount of change to be effected and the amount of information grasp available for support. Thereby, it provides an initial basis for defining objectives operationally, identifying potential methodology strategies, and developing proposals for outside funding. (p. 219)
Thus the context evaluation of the CIPP model does make a clearer demand for statements of objectives than does the description of systems assessment of the CSE model. Otherwise these two descriptions have much in common. Stufflebeam's statement for input evaluation has much in common with program planning:
Essentially, input evaluation provides information to decide if outside assistance is required to meet objectives, how the objectives should be stated operationally, what general strategy should be employed (for example the adoption of available solutions or the development of new ones), and what design or procedural plan should be employed to implement the selected strategy. (p. 223)
Both of these statements could relate to course evaluation where
a course team is searching for all possible areas to teach and selecting
the most important. Having done this, the course team then cooperates
in designing materials, presentations, and activities.
The statement made for context evaluation and input evaluation could refer to these steps in course evaluation. However, there are still no specific steps to be followed for a course team that does not include an evaluator to direct the activities of an instructional designer and subject-matter expert. To put it another way: how does the course team that does not include a professional evaluator translate context and input evaluation statements into specific activities?
The statement of process evaluation could cover that part of
course evaluation that is generally called development. This is where
the materials and measurement and evaluation instruments are produced or
used. The statement that seems to most closely describe this step in
course evaluation is Stufflebeam's (1971) summary statement:
To summarize, under process evaluation, information is delineated, obtained, and reported as often as project personnel require such information, daily if necessary--especially during the early stages of a project. This provides project decision makers not only with information needed for anticipating and overcoming procedural difficulties but also with a record of process information for interpreting project attainments. (p. 232)
The method for product evaluation described in the Stufflebeam statement seems to be out of sequence with the procedures of course development, where objectives are stated and test items developed as the materials are under development:
The general method of product evaluation includes devising operational definitions of objectives, measuring criteria associated with the objectives of the activity, comparing these measurements with predetermined absolute or relative standards, and making rational interpretations of the outcomes using recorded context, input, and process information. (Stufflebeam, 1971, p. 232)
The same four criticisms could be made of the CIPP model as were made of the CSE model. These criticisms are directed toward the suitability of the model for describing course evaluation, as follows:
1. The statements are directed toward programs and systems, not toward educational products and materials for courses in higher education.
2. The statements are for program evaluators, not for the instructional designer or subject-matter expert.
3. The statements are too broad to give specific direction to course development activities.
4. There is no adequate statement of what is needed for statements of objectives and the way data are to be collected through evaluation instruments such as tests and questionnaires.
A Paradigm Involving Multiple Criterion Measures for the Evaluation of the Effectiveness of School Programs
The third model found in the evaluation literature was an eight-step paradigm (Metfessel & Michael, 1973). Each step is an activity designed to involve the school community in the evaluation processes. The paradigm is given here as it appears in the text, with the exception that Step 2 has been shortened:
1. Involve both directly and indirectly members of the total school community as participants, or facilitators, in the evaluation of programs.
2. Construct a cohesive paradigm of broad goals and specific objectives (desired behavioral changes) arranged in a hierarchical order. . . .
3. Translate the specific behavioral objectives into a form that is both communicable and applicable to facilitating learning in the school's environment.
4. Develop the instrumentation necessary for furnishing the criterion measures.
5. Carry out periodic observations through use of the tests, scales, and other indices of behavioral change that are considered valid with respect to the objectives samples.
6. Analyze data furnished by the status and change measures through use of appropriate statistical models.
7. Interpret the data in terms of certain judgmental standards and values concerning what are considered desirable levels of performance.
8. Formulate recommendations that furnish a basis for further implementation, for modification, and for revisions in the broad goals and specific objectives so that improvement can be realized. (Metfessel & Michael, 1973, pp. 270-271)
The advantage of the Metfessel and Michael paradigm over the two other models is that the eight steps direct activities. This can be seen from the action verbs contained in each statement. The first step directs participants to involve other members of a community. The second step is to construct goals and objectives. The third step is to translate, the fourth is to develop, the fifth is to observe, the sixth is to analyze, the seventh is to interpret, and the eighth requires the evaluator to formulate recommendations. These are specific directions, not the vague generalizations or definitions that are given in the other two models.
A Model for Course Evaluation
Gooler (1975) expressed some of the problems that must be faced when interpreting all three of the above models in terms of course evaluation:
Many professionals argue that the content for a given course is exclusively a function of the structure of the discipline from which the content comes. There is a long-standing tension between content selected primarily for its fit with the discipline and content selected primarily for appeal to the interests, abilities, and needs of the learner. It is probably true that in most cases, a combination of attention to content as defined by discipline and attention to the relevancy of content as defined by the learner is in order. (p. 8)
In the sections concerned with describing the CIPP model and the CSE model, criticisms have been made concerning the suitability of these models to describe course evaluation. These are not general criticisms of the models because it is beyond the reach of this author to make decisions concerning the general superiority or inferiority of one model over another.
The decision must be made regarding steps to be followed in
course evaluation. If there is no model for course evaluation, then
one should be developed, as noted by Stufflebeam (1971). There are some
other problems associated with the application of the CIPP and CSE models to higher education courses.
The decision as to what to put into a university or college
course is made by a faculty committee. Thus the objectives of a course
are handed down to the remainder of the faculty by the committee. The
only evaluation process that can bring about change in objectives is the
accreditation process that could direct the committee to change its
course objectives. There is no role for the instructional technologist
in this area. There is very little that an individual faculty member
can do.
Thus Stufflebeam's context and Alkin's systems assessment evaluation are not carried out by a course team involved in developing a course for a college or university setting. If these two models were to be used for course evaluation, the four previously stated criticisms would still apply.
Answering the Criticism of the Evaluation Models
With the Metfessel and Michael paradigm, the steps to be taken
are still concerned with the school program or system; therefore the first of the four criticisms applies to the Metfessel and Michael paradigm. However, the eight steps do answer the other three criticisms. The paradigm directs activities rather than defines types of evaluation.
The following can be said of the eight steps of the paradigm.
1. The statements of each of the eight steps can be interpreted by a subject-matter expert even though they are stated for the program evaluator at the primary or secondary school level.
2. The statements contain action verbs that specify activities and are exclusive enough to direct all the activities involved in course development and revision.
3. The statements combined with the appendix to the paradigm entitled "Multiple Criterion Measures for Evaluation of School Programs" are very specific with reference to objectives and criterion measures.
Thus most of the problems of applying an evaluation model to
course development in behavioral, college-level instruction can be
solved through the application of the Metfessel and Michael paradigm.
It remains only to direct these statements to a course team instead of
school personnel.
A restatement of the Metfessel and Michael paradigm for course
development teams would answer the four criticisms, and the paradigm
would:
1. Be directed toward course development in college-level
behavioral instruction
2. Contain statements for the instructional technologist and
the subject-matter expert at the college level
3. Contain statements that would direct the activities of the
instructional technologist and subject-matter expert
4. Specifically describe the process of stating objectives
and designing course materials to enable the course teams to
divide their responsibilities and thus reduce confusion and
conflict
This analysis of the three evaluation models leads to the first step in answering the descriptive part of this study. This problem is that there is no paradigm for course evaluation that can be applied to course development in higher education. From this analysis it can be seen that of the evaluation models studied, the Metfessel and Michael paradigm most accurately describes the process of course development and evaluation. It now remains to restate the paradigm in terms that are communicable to subject-matter experts and instructional developers working in higher education.
Before doing this, however, it will be useful to look at some of the course development efforts at institutions of higher education. The universities studied include the University of Mid-America in Nebraska, the Open University in the United Kingdom, and the FlexEd Program in the School of Business at the University of Southern California. Following this discussion of course development at these universities, reference is made to the research into behavioral instruction in the university classroom. The Metfessel and Michael eight-step paradigm is then stated in course evaluation terms that communicate specific activities to course teams at the higher education level. It is assumed that these teams are made up of an instructional technologist and a subject-matter expert without the assistance of an evaluation expert.
Learning Systems Using Self-Managed Instruction
The University of Mid-America (UMA)
Two course evaluation reports from the University of Mid-America were considered to be related to this study. These were on Accounting I and Introductory Psychology. Although there were five courses in operation at the time of the present study, only the first two had two offerings. In his report on Accounting I, Brown (1975) drew the following conclusions:
conclusions:
E n ro llees in Accounting Ia n d p a r tic u la r ly th ose who com plete the
course^ are g e n e ra lly w e ll s a t i s f i e d w ith the course o v e r a ll. . . .
A f a i r l y high percen tage o f Accounting I "for c r e d it" e n r o lle e s
can be e x p ected to com plete a l l requ irem ents f o r academic c r e d it
p ro vid in g th ey cœe given reason ably s u f f i c i e n t time to do so and
c o n ta c t i s m aintained between e n r o lle e s and d e liv e r y system p e r
sonnel. . . .
E o n tra d itio n a l a d u lt lea rn ers e n r o llin g in c o l l e g e - l e v e l courses
the r e s u l t s from the f i r s t o ff e r in g o f Accounting T, a tta in s a t i s
fa c to r y l e v e ls o f achievem ent. . . .
E o n tra d itio n a l a d u lt e n r o lle e s in Accounting I respond p o s i t i v e l y
to cozMpoMgMts t M t a dgMS-Lti/ o f
co n ten t coverage and p re se n t the co n ten t in a stra ig h tfo rw a rd manner
31
w ith a minimim o f elem ents th a t oan be in te v p v e te d as en terta in m en t
o r ie n te d , . . .
E n ro llees in Aooounting I ten d to experien ce academic or o th e r
c o u r s e -r e la te d d i f f i c u l t i e s re q u irin g the s e r v ic e s o f course f a c
u lt y an d/or d e liv e r y system p erso n n el, . . . (pp. 19-22)
The above conclusions were drawn from the first-run evaluation of the course. If evaluation is defined as the process of delineating, obtaining, and providing useful information for judging decision alternatives, these conclusions offer little information that could help the designer, the media specialist, or the subject-matter expert make decisions about the improvement of the course. What the conclusions seemed to indicate was that everything was going well. Only the last conclusion might help the course team improve the course.
To support the conclusion that everything was working, the summary of the report stated:
Enrollees generally rated the course very highly overall. They considered it as having met their personal objectives and were positively oriented toward taking future SUN courses. They experienced both personal and course-related difficulties, but with respect to the latter were generally able to resolve difficulties. They were positively disposed toward the media components of the course, although a minority were critical of the television programs. (Brown, 1975, p. 23)
The material in the report was not particularly relevant to the
process of course improvement.
Sell (1975) made a course evaluation report on the first offering of Introductory Psychology. One of the problems encountered in the evaluation was the small percentage of form returns. For Introductory Psychology, "over all 15 units, 454 (28%) of the evaluation forms were returned. The rate of unit evaluation form return decreased over the 15 units from 59% for Unit 1 to 25% for Unit 8 to 10% for Unit 15" (Sell, 1975, p. 5). This evaluation did present data concerning the scores on five tests, but no data were provided as to how much each instructional unit contributed to achievement. Sell (1975) admitted that the data gathered did not serve the purpose of providing for course revisions:
From the evidence collected by the instruments and procedures used in the learner evaluations of Introductory Psychology during its first offering, it is difficult to make specific recommendations for course improvement. However, for many learners, the course appears to be rigorous and fast-paced; more frequent contact among learners, faculty, and learning center staff would be desirable. (p. 11)
This illustrates what seem to be the two major problems associated with course evaluation being done at the University of Mid-America.
First, evaluation seems to be based on questionnaires from which there
is a low rate of return. Second, the data from tests and questionnaires
apparently are not used to measure changes in student behavior or
achievement, which could be used to direct course revisions.
The Open University

Another location where course evaluation has been carried out over a long enough period of time to lead to changes is at the Open University in the United Kingdom. McCormick (1975) stated:

Within the University the problem of 'remaking courses' has led to the following classification of changes along a dimension of 'magnitude':

Continuation --that is, no changes are made.
Revision     --minor alterations to the printed page, e.g., rewriting a paragraph; remaking one or two T.V. programs.
Improvement  --major alterations are made, e.g., rewriting several units completely, but aimed at meeting the same course objectives.
Replacement  --a complete rewrite of the course with new subject matter and perhaps new objectives.
Withdrawal   --the course is no longer included in the curriculum and no replacement for it is offered.

It would also be possible to consider types of changes along a dimension of presentation v. content. (p. 3)
In his study McCormick (1975) took an electronics course that lasted for a 1-year period. During the first year, he collected data from computer-marked assignments and questionnaires.

The evaluation of the first year found that students were having great difficulty with Unit 10 on building electronics circuits. The questions asked if the experimental notes were useful, if the television program was designed specifically to help the student do the experiment, how many hours the student spent on the unit, and what his major problems were (McCormick, 1975, p. 30). As a result of the questions, changes were made in the study guide, the correspondence text, the tutor-marked assignment, and the computer-marked assignment.

Course revision focused on Unit 10. Students were advised that because of the difficulty of the unit they should plan to apportion more study time. Because the concept depended on knowledge of Unit 8, Unit 10 was revised to contain more references to the earlier material.
Additional explanations of other concepts necessary to understand Unit 10 were added to the correspondence text.

During the second administration of the course, students were asked how much time they spent studying Unit 10. The amount of time decreased during the second administration of the course. When the students were asked how difficult Unit 10 was, the mean response showed a decrease during the second administration (McCormick, 1975, pp. 43-44).

In summary, the main weakness of the Open University evaluations was the lack of a research design. The evaluator was forced to compare evaluation data from one year to the next with two completely different populations.

The McCormick (1975) report did not include achievement scores. The use of questionnaire data could be supported by more objective data, such as student achievement on tests.
The FlexEd Program--University of Southern California

The FlexEd Program in the School of Business Administration offers seven courses as a part of the undergraduate curriculum. They are:

1. BA 302 Management Communications
2. BA 304 Organizational Behavior
3. BA 307 Marketing Management
4. BA 310 Operations Management I
5. BA 311 Operations Management II
6. BA 350 Business Growth and Stabilization
7. BA 403 Business Law

Each course relies heavily on the instructional materials component. This course package accompanies a textbook and set of readings. A learning coach is available to the students to assist them through the course.

The program is designed to provide an alternative for those students who are unable to attend classes because of a work commitment. The content of the courses in the FlexEd mode is the same as the content in the regular classes. Each course is divided into eight modules, and each module has a section of the coursebook and textbook, some selected readings, and an audio tape presentation.
In classroom instruction, evaluation of the lecturer is a common type of evaluation carried out at the university. In behavioral instruction in the university classroom, there are additional components to be evaluated. These include the proctor or learning coach and the instructional materials. Thus evaluation of a FlexEd course cannot simply be an extension of the evaluation of regular classroom instruction.

FlexEd needs additional questionnaires to cover the additional components. Comparison of test scores ensures that student evaluation is being carried out in an effective manner.

In the CIPP model, this area of evaluation would be input and process evaluation. In the CSE model, the evaluation of a course in the university classroom by an instructional technologist would be program implementation and program improvement. However, to use these terms to communicate to a course team what the process of evaluation involves would not give direction for group activities.

Specific instructions, steps, or procedures should be given. The team should be told what to do and in what sequence. Not only do the general terms used thus far fail to give specific directions, they are stated in terms of the evaluation of school programs. The Metfessel and Michael paradigm does give eight steps that could guide a course team through the evaluation process.
Toward an Evaluation Model for Course Improvement

The eight steps of the Metfessel and Michael paradigm are presented here to give direction for course development and revision. As with the other models, the paradigm is stated in terms of school programs. Each step of the paradigm is therefore given first, followed by a more specific statement for course development, revision, or evaluation (Metfessel & Michael, 1973, pp. 270-271).
1. Involve both directly and indirectly members of the total school community as participants, or facilitators, in the evaluation of programs.
   Step 1 in course evaluation requires the course team to agree on what decisions are to be made and what information is to be collected. The evaluation design could be submitted to the team for its approval. It is important to state the objectives of the evaluation.

2. Construct a cohesive paradigm of broad goals and specific objectives (desired behavioral changes) arranged in hierarchical order.
   In Step 2 of course evaluation the content specialist and instructional technologist agree on the course objectives as the materials are produced or revised.

3. Translate the specific behavioral objectives into a form that is both communicable and applicable to facilitating learning.
   In Step 3 the materials are produced or revised. Where media are to be used, the instructional designer could make decisions as to the content of the different components and the instructional design of scripts. Some field testing should be carried out.

4. Develop the instrumentation necessary for furnishing criterion measures.
   Step 4 would involve the writing of test items to match objectives. This could be done at the same time as Step 3. In this way the tests are valid and criterion referenced.

5. Carry out periodic observations through use of the tests, scales, and other indices of behavioral change that are considered valid with respect to the objectives sampled.
   In Step 5 the research into the components of behavioral instruction would help the instructional technologist to make decisions (e.g., about how many tests should be given). Questions on questionnaires must be relevant to yield useful information. A student profile containing test results helps to individualize instruction.

6. Analyze data furnished by the status and change measures through use of appropriate statistical models.
   The sixth step could compare achievement in two groups. Where changes are made in the course, the students could be divided into two groups. A reversal or baseline design could be used. Criterion-referenced tests enable the course team to determine where instruction needs to be strengthened.

7. Interpret the data in terms of certain judgmental standards and values concerning desirable levels of performance.
   Criterion-referenced tests will measure areas that are not understood by the students.

8. Formulate recommendations that furnish a basis for further implementation, for modification, and for revisions in the broad goals and specific objectives so that improvement can be realized.
   Alternate techniques for teaching material and learning activities should be considered. An evaluation report should contain alternate techniques and recommended changes in goals and objectives.
Three models of evaluation have been analyzed as possible models for course evaluation. The limitations of these models were stated earlier. The advantages and disadvantages of the Metfessel and Michael paradigm as a description of course evaluation have been noted. The eight statements given above make the Metfessel and Michael paradigm a specific statement of the steps to be taken in course evaluation.

The five parts of the appendix to the paradigm (Multiple Criterion Measures for Evaluation of School Programs) give suggestions as to how measurement of change might be made. The whole purpose of carrying out the eight steps of the paradigm is to make decisions concerning alternatives. Content decisions are made by the instructor. Design decisions are made by the instructional technologist.
Related Research

Allen (1975) stated that "the production of effective communications may still be largely an art, but an art that utilizes established principles of how individuals learn and what means may be used to enhance that learning" (p. 165).

In his article, Allen (1975) gave a prescription for designing instructional materials for students with varying aptitudes. One of the values of this prescription is that it provides a list of design elements that a designer may select from in designing or revising course materials. Concerning the extensive list of design elements, the following statement was made by Allen:

It should be apparent that all of these prescriptions would not be employed in the preparation of any single instructional communication; and, given the present state of the art, it becomes a matter of designer-producer judgment as to just what procedures would be appropriate given the nature of the content being taught, the instructional objectives being met, and other instructional conditions prevailing in the utilization situation. (p. 16)

Chapters III and IV and Appendixes A and C of the present study show how these design elements were selected for course materials revision. Before concluding the study of the evaluation of self-managed instruction, it is essential to consider the research that developed out of the Keller plan literature and has been renamed Behavioral Instruction in the College Classroom (Robin, 1976).
Five features characterized behavioral instruction in its original form:

1. The go-at-your-own-pace feature, which permits a student to move through the course at a speed commensurate with his ability and other demands upon his time;
2. The unit-perfection requirement for advance, which lets a student go ahead to new material only after demonstrating mastery of that which preceded;
3. The use of lectures and demonstrations as vehicles of motivation, rather than sources of critical information;
4. The related stress upon the written word in teacher-student communication; and
5. The use of proctors, which permits repeated testing, immediate scoring, almost unavoidable tutoring, and a marked enhancement of the personal-social aspects of the educational process. (pp. 314-315)
Research into the Instructional Materials Component of Behavioral Instruction in the College Classroom

The research into self-managed instruction in distance education (i.e., the case studies from the OU and UMA) has been considered first because of its reliance on instructional materials. Another area of research that relates to self-managed instruction is the research into the different components of behavioral instruction in the college classroom.

Dubin et al. reviewed 40 years of research into the effectiveness of different modes of instruction in higher education (Dubin & Medley, 1969; Dubin & Taveggia, 1968). They found no significant differences, and concluded that if television, face-to-face instruction, independent study, directed study, or the lecture method were not significantly affecting student performance, it would make no difference which one was used. Each method could be applied when necessary to suit local conditions and content.
Taveggia (1976) later found an approach that was producing higher student performance on tests. At the time, this new approach was called the Keller plan. It was begun in the early 1960s, when F. Keller and J. G. Sherman originated a psychology department at the University of Brazil and taught undergraduate psychology courses with the use of proctors and instructional materials.

Literature on the subject began with the article "Goodbye, Teacher..." (Keller, 1968). By the time of the Taveggia review, the system was called the Personalized System of Instruction (PSI). A more recent review (Robin, 1976) created the generic term Behavioral Instruction in the College Classroom. Robin reported research studies which showed that this new method not only increased students' test performance, but increased student retention and study times. There was less reliance on instructional materials in behavioral instruction; therefore other components such as grading and proctoring were studied first.

In his review of the research, Robin (1976) referred to a study by Miller and Weaver (1975) which evaluated the printed materials handed out by the instructor. The materials were designed to teach concepts in behavioral psychology.
For the purpose of the study, the course was divided into five sections that followed the baseline period at the beginning of the course. The section following the baseline was concerned with methods of research. This section was followed by a section on stimulus control, followed by a section on methods and reinforcement, and finally a section on aversive control. Miller and Weaver (1975) stated:

During baseline, the mean percent of correct answers ranged from 8 to 35 percent correct for the four subtests. During treatment for the Methods subtests, this percent jumped to about 70 percent while very little change occurred in the other three subtests. During treatment for the Reinforcement subtest, the mean percent increased to 80 percent with little change in the other subtests still in baseline. During the treatment for the Stimulus subtests, the mean percent increased to about 60 percent with no change in the Aversive Control subtests. When the treatment package was applied to the Aversive Control subtests, the mean percent of correct responding increased to about 70 percent. During the posttreatment condition, the mean responding for the four subtests ranged from 65 percent to 85 percent correct. Thus, not only was an increase noted in each of the subtests, but that increase was clearly associated with the introduction of the teaching package for that specific area. (pp. 50-51)
The lack of a control group in all the studies mentioned (Brown, 1975; McCormick, 1975; Miller & Weaver, 1975; Sell, 1975) is their most obvious fault. However, these are the best examples found of behavioral instruction research involving an evaluation of the instructional materials component. Perhaps the integration of behavioral instruction research and media research would provide a natural setting that would be an advantage to both areas.

In Robin's (1976) review of 39 studies, the largest group (18 studies) represented different courses in psychology. The remaining 21 studies were from 11 disciplines. There were 9 in economics, 3 in physics, and 1 each in anthropology, biology, business administration, education, engineering, library sciences, mathematics, medical school biochemistry, and sociology (p. 320). There was no mention of any course in marketing management. The percentage of correct answers on the examinations favored behavioral instruction in the field of psychology by 10 percent. In the other disciplines listed above, the gain favored behavioral instruction by 8 percent.

There is still some debate concerning which components of behavioral instruction have the greatest impact on student learning. Few of the researchers or reviewers were willing to make any suggestions as to which components have the most significant impact on student achievement. Semb and Phillips (1977) were the only authors to suggest that some components are more important than others: "The components which appear to produce the largest positive effects are: study questions (18), frequent quizzing (16), and a high mastery criterion (8, 16)" (p. 2).
During the same period that the research into behavioral instruction in the college classroom was developing, research in medical education was also being carried out. Few studies were found in which increased performance was brought about, perhaps because in medical education a criterion must be reached no matter what method is used. However, several studies were found in which the instructional materials in a medical course were evaluated.
Medical Education

Manning, Abrahamson, and Dennis (1968) compared a programmed text, a standard textbook, a lecture demonstration, and a lecture followed by a problem-solving workshop session. The programmed text was written first. A modified branching method with 2 or 3, and occasionally 4, options per teaching frame was utilized for the 149 pages. Analysis of variance, analysis of covariance, and Scheffé's comparison showed no difference in performance among the three groups. There was a difference in time spent that proved to be significant. Both of the self-instructional techniques took less time than the presentation techniques to achieve the same learning gain.

Abrahamson, Denson, and Wolf (1969) compared the training of medical students in endotracheal intubation using a computer-controlled simulated patient popularly called "Sim One." Anesthesiologists were divided into two groups. One group learned endotracheal intubation on Sim One while the other group learned by the usual demonstration method. The hypothesis that Sim One students would achieve predesignated criterion levels of performance in less time and with fewer operating-room trials than would residents not using Sim One was supported at the .01 level for the highest performance criterion of 9 out of 10 consecutive positive ratings. The experiment was based on the assumptions that (a) Sim One allows for a planned and gradual increase in the difficulty of the problems of learning endotracheal intubation, (b) Sim One permits almost unlimited repetition, (c) the resident student can obtain immediate feedback, and (d) each learner proceeds at his own rate.

One objective of the present study was to determine whether the subject of marketing management can be added to the list of subjects taught effectively by behavioral instruction. Another objective was to determine the importance of the instructional materials component as a component of behavioral instruction.
Summary

The field of educational evaluation and the growth of the general field of evaluation in recent years have been discussed in this chapter. The next step was an investigation of three evaluation models for school programs to select one for this study. After looking at the process of course evaluation at three institutions using self-managed instruction (the Open University, the University of Mid-America, and the FlexEd Program at the University of Southern California), the Metfessel and Michael paradigm was selected as appropriate for course evaluation. A statement was made to translate each step of the paradigm into a step in course evaluation. The final section of the chapter reviewed related research.
CHAPTER III

RESEARCH METHODOLOGY

The two separate methodologies employed in the study are described in this chapter. The first part treats the formative evaluation process in the following sequence: experimental design, experimental population, experimental materials, decision alternatives, testing instruments, and analysis of data. The second part of the chapter treats the summative evaluation process. This process is described in the following sequence: research design, experimental population, testing instruments, decision alternatives, conduct of the study, and analysis of the data. The formative and summative evaluations represent two complete cycles of the Metfessel and Michael eight-step paradigm.

Formative Evaluation

The objective of formative evaluation was to determine the weaknesses in the instructional materials component of a course in marketing management.

Experimental Design

To carry out Steps 4 and 5 of the Metfessel and Michael paradigm, a modification of the Campbell and Stanley Design #4 (1963, pp. 13-25) was employed.
Experimental Population

During the formative evaluation stage, there was no random selection of the students. The achievement of the 34 FlexEd students was compared to that of the 40 students in a lecture class that was taught parallel to the FlexEd class.

Independent Variable

The independent variable was the FlexEd course package in marketing management. This package consisted of a study guide and eight tapes. Periodic tests were administered during the semester, and both groups were retested at the end of the course. The following section describes the materials in more detail.
Experimental Materials

In the FlexEd mode of instruction, the students were brought together as a class at the beginning of the semester. At this meeting, they were given a course package consisting of a coursebook and a set of audio tapes. After this meeting, all studying was done individually.

The coursebook. The coursebook sections for each module were similar. Each module was divided into topics parallel to the textbook. Module I covered Chapters 1 and 2, and was divided into four topics:

Marketing Management
The Marketing Concept
The Economic System and Marketing: Historical
Marketing: Meaning and Perspective

Each topic in the coursebook was designed in the same manner, and was always in the following sequence:

Topic objectives
Readings associated with the topic
Study directions
Overview of the content of the textbook
Practice questions
Answers to practice questions

The information in the coursebook paralleled the information in the textbook. The information on the tapes was the same as the information in the coursebook and the textbook. The tapes were divided into the same coursebook topics based on the textbook content. Key paragraphs were selected and used to form a topic overview in the coursebook.
The textbook. The textbook, Modern Marketing (Marcus et al., 1975), consisted of 29 chapters divided into 9 book sections. Three cases were attached to each part.

The textbook made use of 28 pictures to illustrate important points in the text. For example, the history of the development of marketing contained a picture of an early American country store. A short paragraph formed the caption beneath the photograph to integrate it into the text. This caption was printed with green ink for emphasis.

Paragraph headings were also printed with green ink, which made them stand out. The use of different print styles and colors made the headings more distinctive. Plain block headings were subordinate to the colored headings.

The figures used in the text were the same shade of green as that used for the section headings. In the tables, the textbook printing was in black, but headings were the same shade of green as that used for textbook headings and the background of the figures.

At the beginning of each chapter, an outline in which the headings were of a different type style gave an overview of the text. Indentation was used to give relative importance to different parts of the chapter. The beginning of each new part was indicated by the use of a plain green page.
Decision Alternatives

The objective of the evaluation was to determine any weaknesses in the instructional materials component of the course in marketing management. The decisions concerning content were not the responsibility of the instructional technologist. Instructional design decisions were the technologist's area of responsibility. It was therefore a necessary part of formative evaluation to list the design elements that could be used.

When setting out to design an instructional message, where could one go for a list of design elements? How could a designer be certain that he had considered all the options available to him?

Allen (1975) studied interactions between aptitude and design elements of instruction. In doing so, he made a list of different design elements under seven general statements. These general statements are listed in Appendix A of this study, along with specific references to the materials in the course. Where there is an asterisk, the use of a design element is recommended and given a specific application to the message on the tape. The general statements are given as quotations. The specific applications that follow the quotations are given the same numeration system as that used in the original article (Allen, 1975, pp. 139-170).

An analysis of the instructional design of the textbook, coursebook, and tapes led to the following statements. First, the textbook could not be changed as a part of course revision. There seemed to be extensive use of instructional design elements in the textbook. Second, the tape scripts could benefit from additional design if the formative evaluation data indicated that this part of the materials package needed change. Third, the coursebook seemed to be adequately designed. Confirmation that the coursebook is a functioning component awaited the formative evaluation data.

The readings stood apart from the rest of the course and lacked the instructional design elements that could improve comprehension and more closely relate the content of the readings to the content of the course.
Testing Instruments

The pretest for the course consisted of 100 items with 4 options. The reliability measure used for all tests was the Kuder-Richardson Formula 20 (Magnusson, 1967). The reliability of the pretest was .78. This estimate of internal consistency was considered high enough for acceptance of individual scores as a measure of the students' knowledge of the course. The pretest was made up of items selected from the following three tests.
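For the reader who wishes to see the reliability estimate in operation, the sketch below is a minimal Python illustration of the Kuder-Richardson Formula 20; it is not part of the original analysis, and the small response matrix is hypothetical rather than drawn from the course tests. The formula multiplies k/(k - 1) by one minus the ratio of the summed item variances (p times q for dichotomous items) to the variance of the total scores.

```python
# Minimal sketch of the Kuder-Richardson Formula 20 (KR-20) for dichotomously
# scored items (0 = wrong, 1 = right). The data below are hypothetical.
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """responses: examinees x items matrix of 0/1 scores."""
    k = responses.shape[1]                          # number of items
    p = responses.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p                                     # proportion incorrect per item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical data: 6 examinees answering 5 items.
scores = np.array([[1, 1, 0, 1, 1],
                   [1, 0, 0, 1, 0],
                   [1, 1, 1, 1, 1],
                   [0, 0, 0, 1, 0],
                   [1, 1, 1, 0, 1],
                   [1, 1, 0, 1, 1]])
print(round(kr20(scores), 2))
```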
The first test, which covered Modules I and II, was made up of 60 multiple-choice items. This part of the test had a reliability measure of .74, which was considered high enough for acceptance of the test as internally reliable. The students were directed to answer one of the three essay questions on the textbook. One of these essay questions is shown as Example 5 in Appendix B. The students were also directed to answer one of the three questions on the readings. An example of these questions is shown as Example 6 in Appendix B.

The second test, which covered Modules III, IV, and V, was the same as the first, with 60 multiple-choice items and a reliability of .74. Again, the reliability was considered high enough for acceptance of the test as internally reliable. Example 9 in Appendix B is an example of an essay question on the textbook, and Example 10 in Appendix B is an example of an essay question on the readings.

The final test, which covered Modules VI, VII, and VIII, was in the same format as the first two, with 60 multiple-choice test items and a reliability measure of .75, which was considered high internal-consistency reliability.

Objective test items from the pretest are Examples 1 and 2 in Appendix B. The Examples 3 and 4 objective items are from the first test, and the Examples 7 and 8 objective items are from the second test.
Along with each of these posttests, the students were required to respond to a questionnaire (see Appendix C). Opinions were elicited as to their preferences for the different media in the instructional materials component.
Organization of New Tapes

Upon presentation of a report concerning instructional design to the subject-matter expert, the following outline for the tape script was adopted:

1. The tapes began with exposition to repeat important parts of the text, especially those contained in figures and graphs. Attention was directed toward those figures and graphs. This section included examples. Numbered headings were used to facilitate note taking.
2. Examples were used to illustrate major concepts.
3. Advance organizers or introductory statements were used for readings associated with the module.
4. Questions were asked concerning the journal articles.
5. Case studies were discussed and questions were asked.

Tapes were then recorded on open-reel tapes following this outline.
Analysis of Data

A t test was used to test for any significant difference between the experimental and control groups on the pretest as well as the objective and essay parts of the three posttests. This corresponds to Step 6 of the Metfessel and Michael paradigm, which requires an analysis of data furnished by the status and change measures through use of appropriate statistical models.
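As an illustration of this kind of comparison for a present-day reader, the following minimal Python sketch runs an independent-samples t test of the type described above. The score lists are hypothetical and the library call is a modern stand-in; the original computations were not performed this way.

```python
# Sketch of an independent-samples t test comparing two groups on a posttest.
# The scores below are hypothetical, not data from the study.
from scipy import stats

flexed_scores  = [45, 48, 41, 50, 44, 47, 43, 46]
lecture_scores = [42, 44, 40, 45, 43, 41, 39, 44]

t_stat, p_value = stats.ttest_ind(flexed_scores, lecture_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # compare p with the .05 criterion
```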
Summary

This section of the chapter corresponds to Steps 3 through 7 of the Metfessel and Michael paradigm. The steps in the procedures used relate to the paradigm as follows:

Experimental materials           Step 3
Experimental design              Steps 3 and 4
Instructional design decisions   Step 5
Testing instruments              Step 6
Analysis of the data             Step 7

The analysis of the data is Step 7 of the paradigm. To fulfill Step 8 of the paradigm requires the evaluator to make recommendations that furnish a basis for further implementation, for modification, and for revision in the broad goals and specific objectives so that improvement can be realized. In course evaluation, it is important to make decisions concerning how the same materials or methods can be improved to achieve the same unchanged objectives. The outline for the new tapes accomplishes Step 8 of the paradigm.
Summative Evaluation

The objective of the summative evaluation was to determine the effectiveness of the revisions of the instructional materials component of the course in marketing management.

To comply with Step 1 of the Metfessel and Michael paradigm, the learning coach and the supervising professor were made aware of the following research design. To comply with Step 2 of the Metfessel and Michael paradigm, behavioral objectives were rewritten for later coursebook revisions. Step 3 of the summative evaluation was the revision and presentation of the audiotapes.

To carry out Steps 4 and 5, a research design was devised to test for the improved design of the course package, and tests and questionnaires were administered.
Research Design

In the summative evaluation of the materials, the semester-long performances of three intact groups were compared by means of one-way analysis of covariance and multiple comparison tests. The two control groups were the regular session in marketing management and the FlexEd group using the old, unrevised materials. The experimental group was randomly selected from the students who chose to take the course through the FlexEd mode and used the new, revised FlexEd materials.
Experimental Population

There were 16 students in the group with the old materials, 18 students received the new materials, and there were 33 students in the lecture class. Of the students who received the new materials, 2 were sophomores, 8 were juniors, and 11 were seniors. The birthdates of the students in the group ranged from 1948 to 1956 (see Table 1). Of these students, 9 said that they had experience with self-study materials, while 11 said they had no experience with self-study materials for credit. In this group were 14 part-time students and 4 full-time students. The group was planning to register for an average of 11.5 units of credit. Their mean score on the verbal part of the Scholastic Aptitude Test was 42.17, and it was 44.33 on the quantitative part.

Of the students who received the old materials, 9 were juniors and 3 were seniors. Their birthdates ranged from 1953 to 1957 (see Table 1). Four students claimed to have had experience with self-study materials, while eight claimed to have had no experience with self-study materials. There were ten part-time students and one full-time student. They were planning to register for an average of 12.7 units of credit.
Table 1
Birthdates of FlexEd Students in Summative Evaluation

           Materials Received
Year        New       Old
1957          0         1
1956          7         5
1955          4         4
1954          1         1
1953          1         2
1952          1         0
1951          1         0
1950          2         0
1949          0         0
1948          2         0
Total        19        13
Their mean scores on the Scholastic Aptitude Test were 52.17 for verbal and 56.33 for quantitative.

For the lecture class, the mean scores on the Scholastic Aptitude Test were 46.77 for verbal and 55.05 for quantitative.
Testing Instruments

The testing instruments and questionnaires used for the summative evaluation were identical to those used for the formative evaluation.

Decision Alternatives

In the formative evaluation phase, the decision alternatives were the design elements that could be used. These were listed, and a rationale for the use of each of the design elements was developed. These statements are listed in Appendix A. Examples of their use are located in Appendix D.

In the summative evaluation phase, the decision was whether to use the revised tapes. To make this decision, Step 6 of the Metfessel and Michael paradigm had to be carried out and there had to be an analysis of the data.
Analysis of the Data

All tests were coded and data cards were punched for statistical analysis by computer at the USC Computer Center. The statistical analysis was performed through the use of the Statistical Package for the Social Sciences (SPSS) (Nie et al., 1975).

The data were tabulated by analysis of covariance to control for the different pretest scores in determining if there were any significant differences in achievement of the three groups on the posttest scores. The Tukey test for honestly significant difference (HSD) was used to determine if the differences between the adjusted means were significant. Analysis of the data conformed to Step 6 of the Metfessel and Michael paradigm.
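To make the analysis concrete for a present-day reader, the sketch below shows an analogous one-way analysis of covariance (posttest by group, with the pretest as covariate) followed by a Tukey HSD comparison, written in Python with the pandas and statsmodels libraries. The data frame is hypothetical, the original computations were carried out in SPSS, and pairwise_tukeyhsd operates on the raw posttest scores, so it only approximates the comparison of covariate-adjusted means described here.

```python
# Sketch of a one-way ANCOVA (posttest by group, pretest as covariate) plus a
# Tukey HSD comparison. The data are hypothetical; the original analysis used SPSS.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = pd.DataFrame({
    "group":    ["old"] * 4 + ["new"] * 4 + ["lecture"] * 4,
    "pretest":  [27, 25, 30, 24, 31, 29, 33, 28, 38, 36, 40, 35],
    "posttest": [44, 41, 47, 40, 39, 37, 43, 36, 45, 42, 48, 41],
})

# ANCOVA: test the group effect on the posttest while controlling for the pretest.
model = smf.ols("posttest ~ pretest + C(group)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD on the raw posttest scores (it does not adjust for the covariate,
# so it only approximates a comparison of adjusted means).
print(pairwise_tukeyhsd(data["posttest"], data["group"]))
```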
The Conduct of the Study

The pretest was administered on the first day of the course. For the FlexEd class this was the day the students were given the materials for the semester. During the semester, the supervising professor, learning coach, and instructional technologist coordinated instruction to ensure that all three groups were tested at the same point in their progress through the course.

The new tapes were recorded on an open-reel tape recorder to produce a master recording. From this open-reel master a cassette master was produced, and from this master the old cassettes were used to make the copies. The copies of the new tapes were thus identical to the old tapes in appearance. The new tapes were duplicated on a Wollensak 2700 AV two-slave cassette duplicator.

The cassettes, test sheets, and questionnaires were identified with an "N" or an "O" for new or old tapes. No evidence was observed that the students were aware of the experiment.

The questionnaires were completed anonymously and were given to the students after the first, second, and final tests.
Summary

The first part of this chapter described the formative evaluation. The description covered experimental design, experimental population, independent variables, experimental materials, decision alternatives, testing instruments, organization of new tapes, and analysis of the data. The second part of the chapter described the summative evaluation. This description contained the research design, experimental population, analysis of the data, testing instruments, decision alternatives, and the conduct of the study.
CHAPTER IV

RESULTS

In this chapter the results from the formative evaluation made during the Spring 1976 semester and the results from the summative evaluation made during the Fall 1976 semester are reported and analyzed.

Formative Evaluation

The formative evaluation results assumed two forms: those resulting from the performance tests and those resulting from the questionnaires.

Formative Test Results

Formative data were gathered from a pretest and three posttests. The three posttests followed Modules II, V, and VIII. Questionnaire data were collected with each of these three tests during the spring of 1976. The test data are presented in Table 2, and the questionnaire data are presented in Tables 3, 4, and 5.

As indicated in Table 2, the mean for the FlexEd group on the pretest (x̄ = 26.53) was lower than the mean for the lecture group (x̄ = 33.00), but the difference was not significant (t = .93, p > .05).
Table 2
Means and Standard Deviations of FlexEd and Lecture Groups in Formative Evaluation

Group              N      x̄       SD      t     Significant Difference

Pretest Results
  FlexEd Group    34    26.53    7.10    .93    NS
  Lecture Group   40    33.00    5.94

Module II Test, Multiple Choice
  FlexEd Group    32    45.66    5.63    .27    NS
  Lecture Group   39    42.79    5.78

Module II Test, Essay Questions
 First Essay
  FlexEd Group    35    15.23     --     .02    NS
  Lecture Group   40    15.65     --
 Second Essay
  FlexEd Group    35    14.54     --     .02    NS
  Lecture Group   40    14.63     --

Module V Test, Multiple Choice
  FlexEd Group    35    45.66    5.63    .93    NS
  Lecture Group   40    42.79    5.78

Module V Test, Essay Questions
 First Essay
  FlexEd Group    32    17.28    2.40   1.32    NS
  Lecture Group   33    16.88    3.53
 Second Essay
  FlexEd Group    --    17.81    2.50    .48    NS
  Lecture Group   --    16.88    3.26

Module VIII Test, Multiple Choice
  FlexEd Group    --    42.10    6.76    .15    NS
  Lecture Group   --    40.54    6.03

Module VIII Test, Essay Questions
 First Essay
  FlexEd Group    --    16.22    4.31    .31    NS
  Lecture Group   --    15.00    3.32
 Second Essay
  FlexEd Group    --    16.74    4.31    .19    NS
  Lecture Group   --    15.13     --
The mean for the FlexEd group on the objective part of the first test (x̄ = 45.66) was higher than the mean for the lecture group (x̄ = 42.79), but the difference was not significant (t = .27, p > .05).

The means for the FlexEd group on the two essays (x̄1 = 15.23, x̄2 = 14.54) were lower than the means for the lecture group (x̄1 = 16.65, x̄2 = 14.63), but the differences were not significant (t1 = .02, t2 = .02, p > .05).

The mean for the FlexEd group on the objective part of the second test (x̄ = 45.66), which followed Module V, was higher than that of the lecture group (x̄ = 42.79), but the difference was not significant (t = .93, p > .05).

The means for the FlexEd group on the two essays (x̄1 = 17.28, x̄2 = 17.81) were higher than those of the lecture group (x̄1 = 16.88, x̄2 = 16.88), but the differences were not significant (t1 = 1.32, t2 = .48, p > .05).

The mean for the FlexEd group on the objective part of the final test (x̄ = 42.10), which followed Module VIII, was higher than that of the lecture group (x̄ = 40.54), but the difference was not significant (t = .15, p > .05).

The means for the FlexEd group on the two essay questions on the final test (x̄1 = 16.22, x̄2 = 16.74) were higher than those of the lecture group (x̄1 = 15.00, x̄2 = 15.13), but the differences were not significant (t1 = .31, t2 = .19, p > .05).
Formative Evaluation Questionnaire Data

Following the first test, a questionnaire was completed by the students (see Questionnaire I, Appendix C). Statements 1, 2, and 3 required answers ranging from "certainly wrong" to "certainly right" on a 5-point scale. The results of the analysis are shown in Table 3. The statements from the three questionnaires (Appendix C) have been rearranged to improve the presentation in the tables.

The mean response to Statement 1, "The Module I tape was useful to you in understanding the material covered by this test," was 3.51. The mean response to Statement 2, "The Module I coursebook material was helpful to you in understanding the textbook," was 4.33. The mean response to Statement 3, "The session you had with the learning coach was for instructional purposes and was very helpful to you in preparing for this examination," was 4.4.

Responses to Statements 4 and 5 indicated that a large number of those responding did answer the practice questions located in the coursebook. This seemed to indicate that the coursebook was used by the students, but that the tape was not. Responses to Statements 6 and 7 indicated that students did obtain and read the articles from the library. Responses to Statement 8 indicated that students did visit the learning coach (x̄ = 2.00).
Table 3
Learner Responses to Module II Questionnaire

Statements                                                          x̄    Certainly  Probably  Neutral  Probably  Certainly
                                                                          Right      Right              Wrong     Wrong
1. The Modules I and II tapes were useful to you in
   understanding the material covered by this test.               3.50       7         11        11        2        --
2. The Modules I and II coursebook material was helpful to
   you in understanding the textbook.                             4.33      15         15         1       --        --
3. The session you had with the learning coach was for
   instructional purposes and was very helpful to you in
   preparing for this examination.                                4.00      13         10         3       --        --

Statements                                                          Yes             No
                                                                   No.    %       No.    %
4. Did you answer the practice questions on Module I?              27    82        6    18
5. Did you answer the practice questions on Module II?             23    70       10    30
6. Did you read the Levitt article?                                29    90        3    10
7. Did you read the Kotler article?                                28    85        5    15

Statement                                                           x̄    0   1   2   3   4   5   6   7   8
8. Circle the number of times you contacted the learning coach
   between the initial session and this test.                     2.00   3  12   9   6   2   1   -   -   -

Statement                                                          Car   Home   Work   Other
9. Write in the number of times you listened to the tapes in
   the following locations (e.g., car--3).                          5     23      4      5

Statement                                                          Previous Experience   Discussion Group
10. Did you use resources other than the learning coach and
    instructional materials to help you with these modules?
    Please list.                                                           1                    3
In reply to Statement 9, "Write in the number of times you listened to the tapes in the following locations--car, home, work, and other," most responses indicated that the students listened to the tapes at home (28). Responding to Statement 10, one student indicated that previous experience helped him in the course, and three students said that they participated in discussion groups. Resources other than the course components were not listed. The students concentrated on the instructional materials.

Following the second test, a similar questionnaire was completed by the students (see Questionnaire 2, Appendix C). The results of this questionnaire are shown in Table 4.

On a scale from 1 to 5 (from "certainly right" to "certainly wrong"), a mean response of 2.8 was given to Statement 1, "The Modules III, IV, and V tapes were useful to you in understanding the material covered by this test," which was lower than the response to Statement 1 in Questionnaire I. The mean response to Statement 2, "The Modules III, IV, and V coursebook material was helpful to you in understanding the textbook," was 4.3. The mean response to Statement 3, "The session you had with the learning coach was for instructional purposes and was very helpful to you in preparing for this examination," was 3.5.

As with Questionnaire I, responses to Statements 4, 5, and 6 showed that a large number of those responding did answer the practice questions in the coursebook. This and the above data seemed to indicate that the coursebook was used by the students throughout Modules III, IV, and V, but utilization of the tape decreased from the already low utilization in Modules I and II.
Table 4
Learner Responses to Module V Questionnaire

Statements                                                          x̄    Certainly  Probably  Neutral  Probably  Certainly
                                                                          Right      Right              Wrong     Wrong
1. The Modules III, IV, and V tapes were useful to you in
   understanding the material covered by this test.                2.8      --         --        --       --        --
2. The Modules III, IV, and V coursebook materials were
   helpful to you in understanding the textbook.                   4.3      --         --        --       --        --
3. The session you had with the learning coach was for
   instructional purposes and was very helpful to you in
   preparing for this examination.                                 3.5      --         --        --       --        --

Statements                                                          Yes             No
                                                                   No.    %       No.    %
4. Did you answer the practice questions on Module III?            14   66.7       --    --
5. Did you answer the practice questions on Module IV?             14   65.7       --    --
6. Did you answer the practice questions on Module V?              14   66.7       --    --

Statements                                                          x̄
7. How many articles did you read associated with Modules III,
   IV, and V?                                                      4.2
8. Circle the number of times you contacted the learning coach
   between the Module II test and the Module V test.               2.9

Statement                                                          Car   Home   Work   Other
9. Write in the number of times you listened to the tapes in
   the following locations (e.g., car--3).                          --     33     --     --

Statement                                                          Professor   Discussion Group
10. Did you use resources other than the learning coach and
    instructional materials to help you with these modules?
    Please list.                                                       1              --
Responses to Statement 7 indicated that students did obtain and read the articles from the library. Student response to Statement 8 confirmed data from the first questionnaire, which indicated that the students visited the learning coach an average of two times between tests.

Responses to Question 9, "Write in the number of times you listened to the tapes in the following locations--car, home, work, and other," showed that the tapes were listened to mostly in the home. This indicated that there was little use of the tapes in situations where the coursebook materials were not available, such as in a parked car.

During this section of the course, one student made contact with the supervising professor, and one student claimed that a previous class had helped him in this course. Responses to Statement 10 indicated that there was no use of materials other than those contained in the course package.

At the end of the course, a larger questionnaire was completed by the students (see Questionnaire III, Appendix C). The results of Part 1 of Questionnaire III are shown in Table 5. Part 1 was basically the same as the first two questionnaires, but Part 2 asked for specific responses concerning the use of the tapes and coursebook on all ten modules.

Data from Part 1 of Questionnaire III were entered on a scale from 1 to 5 (from "certainly wrong" to "certainly right").
Table 5
Learner Responses to Module VIII Questionnaire

Statements                                                          x̄    Certainly  Probably  Neutral  Probably  Certainly
                                                                          Right      Right              Wrong     Wrong
1. The Modules VI, VII, and VIII tapes were useful to you in
   understanding the material covered by this test.                2.1      --         --        --       --        --
2. The Modules VI, VII, and VIII coursebook materials were
   helpful to you in understanding the textbook.                   3.8      --         --        --       --        --
3. The session you had with the learning coach was for
   instructional purposes and was very helpful to you in
   preparing for this examination.                                 4.2      --         --        --       --        --

Statements                                                          Yes             No
                                                                   No.    %       No.    %
4. Did you answer the practice questions on Module VI?             15    65        --   35
5. Did you answer the practice questions on Module VII?            16    70        --   30
6. Did you answer the practice questions on Module VIII?           16    70        --   30

Statements                                                          x̄
7. How many articles did you read associated with Modules VI,
   VII, and VIII?                                                  4.5
8. Circle the number of times you contacted the learning coach
   between the Module V test and the Module VIII test.             3.3

Statement                                                          Car   Home   Work   Other
9. Write in the number of times you listened to the tapes in
   the following locations (e.g., car--3).                          --     31     --     --

Statement                                                          Professor   Discussion Group
10. Did you use resources other than the learning coach and
    instructional materials to help you with these modules?
    Please list.                                                       --             --
There was a mean response of 2.1 (slightly more than "probably wrong") to Statement 1, "The Modules VI, VII, and VIII tapes were useful to you in understanding the material covered by this test." This mean response of 2.1 was lower than that on Questionnaire I (x̄ = 3.5) and on Questionnaire 2 (x̄ = 2.8). It was also lower than the 3.8 mean response to Statement 2 of Questionnaire III, "The Modules VI, VII, and VIII coursebook was helpful to you in understanding the textbook." A mean response of 4.2 was given in response to Statement 3, "The session you had with the learning coach was for instructional purposes and was very helpful to you in preparing for this examination."

As with the first two questionnaires, responses to Statements 4, 5, and 6 indicated that a large number of those responding did answer the practice questions in the coursebook. This and the data from Statements 1 and 2 seemed to indicate that the coursebook continued to serve a useful purpose, but that utilization of the tape decreased as the course progressed.

The students responded to Statement 7 to the effect that they were obtaining and reading the articles from the library. Their test results confirmed this. The learning coach was contacted more often before the final test than before the previous two tests (x̄ = 3.30), according to responses to Statement 8.
In response to Statement 9, "Write in the number of times you listened to the tapes in the following locations--car, home, work, and other," most students indicated that they listened to the tapes at home. There was no use of the tapes in alternative locations.

Responses to Statement 10 confirmed the earlier data that, other than the course components made available to them, the students did not use additional materials.

Because of the observed decline in utilization of the tapes, a second page was added to Questionnaire III (see Appendix C). Responses to Part 2 of Questionnaire III are illustrated graphically in Figure 1. In this part, students were asked to indicate the use of tapes and outlines for each of the ten modules and to make general comments regarding the instructional materials. The data showed that use of the coursebook maintained a high rating throughout the course, but use of the tapes declined steeply from 16 users in Module I to 3 users in Module VIII.

Discussion of Data from Tests and Questionnaires

According to the test data, there was no significant difference between the achievement of the FlexEd group and that of the lecture group. Thus achievement scores did not indicate any inadequacies in the FlexEd course or mode. The questionnaire data, on the other hand, did indicate that there was a serious problem with the design of the course materials. One large part of the materials package (i.e., the audiotapes) was not being used as intended.
[Figure: percentage of students reporting use of the cassette tapes, plotted by module (Modules I through VIII).]

Figure 1. The reported use of cassette tapes in formative evaluation.
Revision of the Tapes

The prescription for instructional materials design was used as a checklist to improve the instructional design of the tapes. Two reports were made and distributed to the course team. In the first report, each of the design elements was discussed to determine whether it was suitable for the FlexEd materials.

Design elements were divided into three groups. The design elements in the first group were eliminated because they were more suitable for film or other presentations and could not be used with the FlexEd materials. Design elements in the second group were eliminated because they were already in use in the materials and there seemed to be no further possible use for them. An explanation of the use of the design elements in this group and in the third group formed the second report. The design elements in the third group were those that could be applied to the materials, and recommendations for their use were contained in the second report.

Examples of the application of design elements to the tape scripts are contained in Appendix D.

Summative Evaluation

The summative evaluation results assumed two forms: the results from the achievement tests and the responses to the questionnaires.
Summative Evaluation Test Results

Test results were gathered from the same four tests used in formative evaluation. The same pretest of 100 items was used, followed by three posttests. The first posttest followed Module II, the second posttest followed Module V, and a final test came at the end of Module VIII at the completion of the course.

The analysis of covariance results are reported in Tables 7, 8, and 9. The covariance analysis controlled for pretest results in testing for significant differences in mean scores on the three posttests by the three groups of students.

The scores of the three groups of students on the pretest resulted in a higher mean score for the regular class (x̄ = 37.47), a lower mean score for the group with the new tapes (x̄ = 30.40), and a still lower mean score (x̄ = 26.64) for those with the old tapes. Pretest data are presented in Table 6.

The analysis of covariance showed that for the adjusted means on the first test there was a significant difference at the .05 level. The Tukey test showed that the Group 1 adjusted mean was significantly higher than those of the other two groups. The adjusted mean scores, their differences, and the value of the HSD for the first test following Module II are listed in Table 7.

The analysis of covariance showed that for the adjusted means on the second test there were no significant differences.
Table 6
Means and Standard Deviations of Pretest Results in Summative Evaluation, Fall 1976

Group                          N      x̄       SD
FlexEd Group (Old Tapes)      10    26.64    5.14
FlexEd Group (New Tapes)      15    30.40    8.78
Lecture Group                 34    37.47    6.93

Comparison                                                  t     Significant Difference
FlexEd Group (Old Tapes) vs. FlexEd Group (New Tapes)      .53    NS
FlexEd Group (Old Tapes) vs. Lecture Group                 .96    NS
FlexEd Group (New Tapes) vs. Lecture Group                 .83    NS
Table 7
Analysis of Covariance and Tukey's HSD(a) Test for Pretest and Test 1 in Summative Evaluation

Part 1 (Multiple Choice)                        N    Adjusted x̄
1. FlexEd Group (Old Materials)                10        45.36
2. FlexEd Group (New Materials)                15        40.91
3. Lecture Group                               34        43.78
   F = 7.25, Sig. = .01; HSD = 5.49
   Differences among adjusted means: x̄1 - x̄2 = 4.45, x̄1 - x̄3 = 1.58, x̄2 - x̄3 = -2.87

Part 2 (Essay on the Readings)                  N    Adjusted x̄
1. FlexEd Group (Old Materials)                10        17.17
2. FlexEd Group (New Materials)                15        16.30
3. Lecture Group                               34        16.08
   F = .13, Sig. = NS; HSD = 1.73
   Differences among adjusted means: x̄1 - x̄2 = .87, x̄1 - x̄3 = 1.09, x̄2 - x̄3 = .22

Part 3 (Essay on the Text)                      N    Adjusted x̄
1. FlexEd Group (Old Materials)                10        15.90
2. FlexEd Group (New Materials)                15        15.47
3. Lecture Group                               34        15.90
   F = .68, Sig. = NS; HSD = 2.42
   Differences among adjusted means: x̄1 - x̄2 = .43, x̄1 - x̄3 = .00, x̄2 - x̄3 = -0.43

Total                                           N    Adjusted x̄
1. FlexEd Group (Old Materials)                10        80.68
2. FlexEd Group (New Materials)                15        72.06
3. Lecture Group                               34        75.90
   F = 4.29, Sig. = .05; HSD = 7.30
   Differences among adjusted means: x̄1 - x̄2 = 8.62*, x̄1 - x̄3 = 4.78, x̄2 - x̄3 = -3.84

Note. The covariate was the pretest score.
(a) HSD = Honestly significant difference.
*p < .05.
The analysis of covariance showed that for the adjusted means on the final test there were significant differences between means on the first essay question and the total score. The group that received the old materials had a higher adjusted mean for the first essay question and the total score. The adjusted mean scores, their differences, and the value of the HSD for the final test given at the end of the course are shown in Table 9.
Summative Evaluation Questionnaire Results
The responses to the questionnaires are presented and analyzed in this section. Table 10 contains a tabulation of the replies to Questionnaire I.
The mean response to Statement 1, "The Modules I and II tapes were useful to you in understanding the material covered by this test," was 3.78 for the old tapes and 3.48 for the new tapes, as compared to 3.51 for the previous semester. The mean response to Statement 2, "The Modules I and II coursebook material was helpful to you in understanding the textbook," was 4.52 for the old tapes and 4.38 for the new tapes, as compared to 4.33 for the previous semester.
The mean response to Statement 3, "The session you had with the learning coach was for instructional purposes and was very helpful to you in preparing for this examination," was 4.05 for the old tapes and 3.05 for the new tapes. It was 4.4 for the previous semester.
Table 8
Analysis of Covariance and Tukey's HSD(a) Test for Pretest and Test II in Summative Evaluation
(Covariate: pretest score)

Part 1 (Multiple Choice): F = 6.39, Sig. = .05, HSD = 5.53
  1. FlexEd Group (Old Materials)   N = 10   Adjusted mean = 39.05   (diff. vs. Group 2: 1.82; vs. Group 3: -0.50)
  2. FlexEd Group (New Materials)   N = 15   Adjusted mean = 37.23   (diff. vs. Group 3: -2.32)
  3. Lecture Group                  N = 34   Adjusted mean = 39.55

Part 2 (Essay on the Readings): F = .47, NS, HSD = 1.7
  1. FlexEd Group (Old Materials)   N = 10   Adjusted mean = 17.13   (diff. vs. Group 2: .82; vs. Group 3: .51)
  2. FlexEd Group (New Materials)   N = 15   Adjusted mean = 16.31   (diff. vs. Group 3: -0.31)
  3. Lecture Group                  N = 33   Adjusted mean = 16.62

Part 3 (Essay on the Text): F = 1.96, NS, HSD = 1.87
  1. FlexEd Group (Old Materials)   N = 10   Adjusted mean = 17.08   (diff. vs. Group 2: 1.41; vs. Group 3: 0.47)
  2. FlexEd Group (New Materials)   N = 15   Adjusted mean = 15.67   (diff. vs. Group 3: -0.94)
  3. Lecture Group                  N = 34   Adjusted mean = 16.61

Total: F = 5.97, Sig. = .05
  1. FlexEd Group (Old Materials)   N = 10   Adjusted mean = 73.25   (diff. vs. Group 2: 4.04; vs. Group 3: 0.47)
  2. FlexEd Group (New Materials)   N = 15   Adjusted mean = 69.21   (diff. vs. Group 3: -3.57)
  3. Lecture Group                  N = 34   Adjusted mean = 72.78

(a) HSD = honestly significant difference.
Table 9
Analysis of Covariance and Tukey's HSD(a) Test for Pretest and Final in Summative Evaluation
(Covariate: pretest score)

Part 1 (Multiple Choice): F = 7.53, Sig. = .01, HSD = 5.31
  1. FlexEd Group (Old Materials)   N = 10   Adjusted mean = 41.50   (diff. vs. Group 2: 2.29; vs. Group 3: 3.47)
  2. FlexEd Group (New Materials)   N = 15   Adjusted mean = 39.21   (diff. vs. Group 3: 1.18)
  3. Lecture Group                  N = 34   Adjusted mean = 38.03

Part 2 (Essay on the Readings): F = 5.84, Sig. = .05, HSD = 1.62
  1. FlexEd Group (Old Materials)   N = 10   Adjusted mean = 18.14   (diff. vs. Group 2: 1.94*; vs. Group 3: 1.04)
  2. FlexEd Group (New Materials)   N = 15   Adjusted mean = 15.2    (diff. vs. Group 3: -0.9)
  3. Lecture Group                  N = 34   Adjusted mean = 17.1

Part 3 (Essay on the Text): F = 6.63, Sig. = .05, HSD = 1.49
  1. FlexEd Group (Old Materials)   N = 10   Adjusted mean = 17.8    (diff. vs. Group 2: 1.14; vs. Group 3: 1.15)
  2. FlexEd Group (New Materials)   N = 15   Adjusted mean = 16.56   (diff. vs. Group 3: 0.01)
  3. Lecture Group                  N = 34   Adjusted mean = 15.65

Total: F = 11.47, Sig. = .001, HSD = 6.49
  1. FlexEd Group (Old Materials)   N = 10   Adjusted mean = 78.62   (diff. vs. Group 2: 6.66*; vs. Group 3: 6.81*)
  2. FlexEd Group (New Materials)   N = 15   Adjusted mean = 71.95   (diff. vs. Group 3: .70)
  3. Lecture Group                  N = 34   Adjusted mean = 71.81

(a) HSD = honestly significant difference.
* p < .05.
Table 10
FlexEd Learner Responses to Module II Questionnaire in Summative Evaluation

Statements rated on a five-point scale (Certainly Right, Probably Right, Neutral, Probably Wrong, Certainly Wrong); mean ratings by group:

1. The Modules I and II tapes were useful to you in understanding the material covered by this test.                 Old 3.78    New 3.48
2. The Modules I and II coursebook materials were helpful to you in understanding the textbook.                       Old 4.52    New 4.38
3. The session you had with the learning coach was for instructional purposes and was very helpful to you
   in preparing for this examination.                                                                                 Old 4.05    New 3.05

Yes/no items (number and percentage responding):
                                                              Yes (No., %)      No (No., %)
4. Did you answer the practice questions on Module I?         Old 19 (83)       4 (17)
                                                              New 19 (90)       2 (10)
5. Did you answer the practice questions on Module II?        Old 18 (78)       5 (22)
                                                              New 15 (71)       6 (29)
6. Did you read the Levitt article?                           Old 17 (74)       6 (26)
                                                              New 17 (81)       4 (19)
7. Did you read the Kotler article?                           Old 20 (87)       3 (13)
                                                              New 18 (86)       3 (14)

8. Circle the number of times you contacted the learning coach between the initial session and this test.            Old mean 1.43    New mean 1.10

9. Write in the number of times you listened to the tapes in the following locations (e.g., car--3).
   Car: Old 15, New 4    Home: Old 35, New 41    Library: --    Other: --

10. Did you use resources other than the learning coach and instructional materials to help you with
    these modules? Please list.                                                                                       No resources were listed by either group.
Responses to Statements 4 and 5 indicated that the majority of the students were answering the practice questions. Responses to Statements 6 and 7 indicated that the majority of the students were obtaining and reading the articles associated with Modules I and II.
The student responses to Statement 8 showed that students did consult the learning coach, although the average number of contacts was low (1.43 and 1.10). Responses to Statement 9 showed that students used the tapes more often in the home than in any other location. In response to Statement 10, the students did not list any alternative resources used to help them with the modules.
Following the second test, a similar questionnaire was distributed to the students (see Appendix C). Table 11 contains a tabulation of the replies to Questionnaire II. For the first statement, concerning the tapes, the mean response for the old tapes was 3.45, while the mean response for the new tapes was 3.25. This compared favorably to a response of 2.8 for the spring semester.
The mean response to Statement 2, concerning the coursebook material, was 4.55 for the group with the old tapes and 4.44 for the group with the new tapes. The mean response for the previous semester was 4.3. The mean response to Statement 3, concerning the learning coach, was 3.56 for the group with the old tapes and 3.50 for the group with the new tapes. It was 3.5 for the previous semester.
Table 11
FlexEd Learner Responses to Module V Questionnaire in Summative Evaluation

Statements rated on a five-point scale (Certainly Right, Probably Right, Neutral, Probably Wrong, Certainly Wrong); mean ratings by group:

1. The Modules III, IV, and V tapes were useful to you in understanding the material covered by this test.            Old 3.45    New 3.25
2. The Modules III, IV, and V coursebook materials were helpful to you in understanding the textbook.                 Old 4.55    New 4.44
3. The session you had with the learning coach was for instructional purposes and was very helpful to you
   in preparing for this examination.                                                                                 Old 3.50    New 3.56

Yes/no items (number and percentage responding):
                                                              Yes (No., %)      No (No., %)
4. Did you answer the practice questions on Module III?       Old 16 (80)       4 (20)
                                                              New 11 (68)       5 (32)
5. Did you answer the practice questions on Module IV?        Old 15 (75)       5 (25)
                                                              New 10 (62)       6 (38)
6. Did you answer the practice questions on Module V?         Old 12 (60)       8 (40)
                                                              New  9 (56)       7 (44)

7. How many articles did you read associated with Modules III, IV, and V?                                             Old mean 4.75    New mean 2.50 is not implied here; Old mean 4.75    New mean 4.69
8. Circle the number of times you contacted the learning coach between the Module II test and this Module V test.     Old mean 2.50    New mean 2.75

9. Write in the number of times you listened to the tapes in the following locations (e.g., car--3).
   Old 28    New 20    [breakdown by Car/Home/Work/Other not legible]

10. Did you use resources other than the learning coach and instructional materials to help you with
    these modules? Please list.                                                                                       No resources were listed by either group.
Student responses to Statements 4, 5, and 6 indicated that they were using the coursebook by answering the practice questions. The responses to Statements 7 and 8 indicated that the students were obtaining and reading the articles and contacting the learning coach. Responses to Statements 9 and 10 indicated that students were listening to the tapes in the home; they were making use of the instructional materials supplied, but were not making use of other resources.
At the end of the course, the same larger questionnaire was completed by the students (see Questionnaire III, Appendix C). The results obtained from this questionnaire are tabulated in Table 12. The mean response to Statement 1 was 2.6 for the old tapes and 2.7 for the new tapes. This compares favorably with a mean response of 2.1 for the previous semester.
The mean response to Statement 2, concerning the coursebook, was 3.7 for the group receiving the old tapes and 3.8 for the group receiving the new tapes. This compares to 3.8 for the previous semester and 4.55 and 4.44 for the second questionnaire.
The mean response to Statement 3, concerning the learning coach, was 4.0 for the old tapes and 3.2 for the new tapes. This compares to 3.50 and 3.56 for the second questionnaire.
Student responses to Statements 4, 5, and 6 indicated that they were still using the coursebook by answering the practice questions. The responses to Statements 7 and 8 indicated that the students were obtaining and reading the articles and contacting the learning coach.
Table 12
FlexEd Learner Responses to Module VIII Questionnaire in Summative Evaluation

Statements rated on a five-point scale (Certainly Right, Probably Right, Neutral, Probably Wrong, Certainly Wrong); mean ratings by group:

1. The Modules VI, VII, and VIII tapes were useful to you in understanding the material covered by this test.         Old 2.6    New 2.7
2. The Modules VI, VII, and VIII coursebook materials were helpful to you in understanding the textbook.              Old 3.7    New 3.8
3. The session you had with the learning coach was for instructional purposes and was very helpful to you
   in preparing for this examination.                                                                                 Old 4.0    New 3.2

Yes/no items (number and percentage responding):
                                                              Yes (No., %)      No (No., %)
4. Did you answer the practice questions on Module VI?        Old 14 (70)       6 (30)
                                                              New 12 (67)       6 (33)
5. Did you answer the practice questions on Module VII?       Old 13 (65)       7 (35)
                                                              New 13 (72)       5 (28)
6. Did you answer the practice questions on Module VIII?      Old 12 (60)       8 (40)
                                                              New 11 (61)       7 (39)

7. How many articles did you read associated with Modules VI, VII, and VIII?                                          Old mean 5.0    New mean 6.1
8. Circle the number of times you contacted the learning coach between the Module V test and this test
   on Module VIII.                                                                                                    Old mean 2.4    New mean 2.9

9. Write in the number of times you listened to the tapes in the following locations (e.g., car--3).
   Old 21    New 16, 11    [breakdown by Car/Home/Work/Other not legible]

10. Did you use resources other than the learning coach and instructional materials (workbook, tape, text)
    to help you with these modules? Please list.                                                                      No sources were listed for either group.
Responses to Statement 9 indicated that students were listening to the tapes in the home, but were not using them as much as they had earlier in the course. Responses to Statement 10 showed that they were using the instructional materials supplied but were not making use of other resources.
The purpose of the second part of the third questionnaire was to gather further data on the use of the tapes. The percentages of students using the tapes for each module are presented in Figure 2.
The two curves, one for each group of students, show that there was a similar decline in the use of the tapes toward the end of the course. The decline was not as great as it had been in the previous semester.
Discussion
The data collected were contradictory. The test data did not show any change in learner achievement favoring the students in the experimental group. The questionnaire data did not show any increase in the use of the materials by those students in the experimental group.
The experimental tapes were duplicated in the cassette and were not of the same high technical standard as the recordings distributed to the control group. The control group tapes were duplicated onto tape before being inserted into the cassettes. The lower signal-to-noise ratio for the experimental tapes could have reduced tape utilization during the summative evaluation.

Figure 2. The reported use of cassette tapes in summative evaluation. [Line graph: percentage of students using the tapes for each module, Modules I through VIII, with one curve for the old-tape group and one for the new-tape group.]
The experimental tapes were less repetitious. The control group tapes were complete repetitions of the contents of the textbook and the FlexEd coursebook. Redesign of the tapes focused attention on important contents of the textbook and articles. Purposeless repetition was removed and replaced by new content. Where there was repetition, it served a useful purpose.
Finally, the tapes were not a major part of the experimental materials. Most of the materials used were printed materials that already made use of design elements.
CHAPTER V
SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
This chapter contains a summary statement of the complete study, conclusions that may be drawn, and recommendations for students of media research.
Summary
This study was concerned with the design of a package of materials consisting of tapes and print materials for a marketing management class taught by behavioral instruction in the college classroom. Data were collected from achievement tests and questionnaires. The study covered two semesters.
The first complete semester consisted of a formative evaluation to determine weaknesses in the design of a course package. The second complete semester consisted of a summative evaluation to determine the effectiveness of revisions. The evaluation model that best described these evaluation procedures was the Metfessel and Michael paradigm.
Formative Evaluation
During the formative evaluation phase, no significant differences in achievement were found between the students in the regular class and the students in an experimental self-study FlexEd class on the pretest or the three posttests. The questionnaire data showed that the FlexEd coursebook was used for the entire semester by the students. The FlexEd cassette tapes, however, were used for only the first few modules; then utilization dropped off until, at the end of the course, utilization was minimal. As a part of the formative evaluation, the instructional design decisions were identified and used to improve the functioning of the tapes.
Earlier trials at evaluating the materials led to attempts to isolate items unique to specific components, such as the tapes, coursebook, or text. Because of the repetitive nature of the three components, no test items could be found that were unique to the tapes. Whatever information there was on tape had already been presented in the coursebook and text.
This repetition served no purpose. There seemed to be no rationale for the content of the tape recordings. By selecting the appropriate design elements and using information based on this selection, both the structure and the content of the tapes were changed. This eliminated purposeless repetition and replaced it with content that directed attention to important parts of the course.
Summative Evaluation
When the data were analyzed by an analysis of covariance procedure, with the pretest scores as the covariate and the class lecture, use of old FlexEd materials, and use of new FlexEd materials as treatment groups, the differences among group achievement were significant. The group with the old FlexEd materials performed better than the other two groups on the final test. The group with the old FlexEd materials performed better than did the group with the new materials on the first test following Module II. Tape utilization was high for this group, and both the old-materials and new-materials groups showed a higher tape utilization than they had in the formative evaluation phase.
Conclusions
The following conclusions may be drawn from this study.
1. That the improved design of the instructional materials in this study did not make a major difference in student achievement could confirm that the instructional material is not the component of behavioral instruction that produces the largest positive effect.
2. The tapes that were redesigned were only a part of the complete package. The coursebook was already well designed and functioning for the whole of the semester. Only the tapes did not function effectively. Thus the materials could still be an important component, but this study did not cover enough of the component materials to make a difference. Since the FlexEd mode relies almost completely on the materials component for communicating the course content, the materials must logically have some impact on student achievement.
3. The group with the old FlexEd materials had a lower pretest score than did the group with the new FlexEd materials. The group with the old FlexEd materials had the higher adjusted mean at the end of the course. These data support the conclusion that the entry knowledge of the student is the trait that interacts most strongly with college-level behavioral instruction to increase performance.
4. The analysis of covariance data further supports the conclusion that those students with the least knowledge of the field performed better than their more knowledgeable peers in behavioral instruction.
5. Finally, the analysis of covariance data further supports the conclusion that the less knowledgeable students in behavioral instruction outperform their more knowledgeable peers in the regular classroom. Behavioral instruction in the college classroom permits all participants to reach a criterion.
This is not to say that behavioral instruction is merely for the less knowledgeable students. Those students who already know much of the field, but not enough to be waived completely, could save much time in the lecture room through self-study courses such as behavioral instruction in the college classroom and its variations such as FlexEd.
Recommendations
Further experimentation should be carried out on the FlexEd materials. If further studies into the behavioral instruction research at the college level were to include the instructional materials component, there would be a bridge between the behavioral instruction research and instructional media research.
Both areas of research would benefit from the integration. If
studies were carried out on FlexEd materials, the FlexEd program would
benefit from such research. Therefore, the following recommendations
are made.
1. It is recommended that similar studies be conducted in mathematics, economics, communications, and other business administration courses. Such studies would add to the existing research, which has created a list of subjects that are being taught more effectively through behavioral instruction in the college classroom. These studies should determine which subjects rely more heavily on the instructional materials component and the instructional design of these materials.
2. The present study should be replicated by using the same
course with students randomly selected from the undergraduate population
in the School of Business Administration.
3. If data were collected on the students' Scholastic Aptitude Test results, the interaction between aptitude and instructional treatments in different subject areas could be determined. Some measures of anxiety could be administered to students beginning their study of statistics. Some measures of study habits could be administered to beginning students in any subject area taught by behavioral instruction in the college classroom.
REFERENCES
Abrahamson, S., Denson, J. S., & Wolf, R. M. Effectiveness of a simulator in training anesthesiology residents. Journal of Medical Education, 1969, 44, 515-519.

Alkin, M. C. Evaluation theory development. In B. R. Worthen & J. R. Sanders (Eds.), Educational evaluation: Theory and practice. Worthington, Ohio: Charles A. Jones Publishing Co., 1973.

Allen, W. H. Learner traits and instructional media design: Intellectual abilities. AV Communication Review, 1975, 23, 139-170.

The American Heritage Dictionary of the English Language. New York: Houghton-Mifflin, 1973.

Brown, L. A. Accounting I (first offering) course evaluation report (Report pursuant to Grant No. NIE-G-75-001 from the National Institute of Education). Lincoln, Nebraska: University of Mid-America, 1975.

Campbell, D. T. Reforms as experiments. In E. L. Struening & M. Guttentag (Eds.), Handbook of evaluation research (Vol. 1). Beverly Hills, California: Sage Publications, 1975.

Campbell, D. T., & Stanley, J. C. Experimental and quasi-experimental designs for research. Chicago: Rand McNally College Publishing Co., 1963.

Dubin, R., & Hedley, R. A. The medium may be related to the message: College instruction by TV. Eugene: University of Oregon Press, 1969.

Dubin, R., & Taveggia, T. C. The teaching-learning paradox. Eugene: University of Oregon Press, 1968.

Fraley, L. E., & Vargas, R. A. Academic tradition and instructional technology. In J. M. Johnston (Ed.), Behavior research and technology in higher education. Springfield, Illinois: Charles C. Thomas, 1975.

Good, C. V. Dictionary of education. New York: McGraw-Hill, 1973.

Gooler, D. D. Is there a need for needs assessment? In Rhetoric and practice: Needs assessment in a major development effort (Working Paper No. 5, Grant No. NIE-G-75-001 from the National Institute of Education). Lincoln, Nebraska: University of Mid-America, 1975.

Johnson, K. R., & Ruskin, R. S. Behavioral instruction: An evaluative review. Washington, D.C.: American Psychological Association, 1977.

Johnston, J. M. (Ed.). Behavior research and technology in higher education. Springfield, Illinois: Charles C. Thomas, 1975.

Keller, F. S. "Good-bye, teacher...." Journal of Applied Behavior Analysis, 1968, 1, 79-89.

Kirk, R. E. Experimental design: Procedures for the behavioral sciences. Belmont, California: Brooks/Cole Publishing Co., 1968.

Kotler, P. The major tasks of marketing management. Journal of Marketing, October 1973, 37, 42-49.

Levitt, T. Marketing myopia. Harvard Business Review, 1960, 35, 450-456.

Mager, R. Goal analysis. Belmont, California: Fearon Press, 1972.

Magnussen, D. Test theory. Menlo Park, California: Addison-Wesley Publishing Co., 1967.

Manning, P. R., Abrahamson, S., & Dennis, D. A. Comparison of four teaching techniques: Programmed text, textbook, lecture-demonstration, and lecture-workshop. Journal of Medical Education, 1968, 43, 356-359.

Marcus, B., et al. Marketing management. New York: Random House, 1975.

McCormick, R. The evaluation of Open University course materials. Milton Keynes, England: Open University, Institute of Educational Technology, 1975.

Metfessel, N. S., & Michael, W. B. A paradigm involving multiple criterion measures for the evaluation of the effectiveness of school programs. In B. R. Worthen & J. R. Sanders (Eds.), Educational evaluation: Theory and practice. Worthington, Ohio: Charles A. Jones Publishing Co., 1973.

Miller, L. K. The effects of a behaviorally engineered textbook and two designed textbooks on concept formation in university students. In J. M. Johnston (Ed.), Proceedings of the Second National Conference on Research and Technology in College and University Teaching. Atlanta: Georgia State University, 1974.

Miller, L. K., & Weaver, F. H. The use of "concept programming" to teach behavioral concepts to university students. In J. M. Johnston (Ed.), Behavior research and technology in higher education. Springfield, Illinois: Charles C. Thomas, 1975.

Nie, N. H., Hull, C. H., Jenkins, J. G., Steinbrenner, K., & Bent, D. H. Statistical package for the social sciences (SPSS). New York: McGraw-Hill Book Co., 1975.

Robin, A. Behavioral instruction in the college classroom. Review of Educational Research, 1976, 46(3), 313-354.

Salomon, G., & Clark, R. E. Reexamining the methodology of research on media and technology in education. Review of Educational Research, 1977, 47(1), 99-120.

Sell, G. R. Introductory psychology (first offering) course evaluation report (Report pursuant to Grant No. NIE-G-75-001 from the National Institute of Education). Lincoln, Nebraska: University of Mid-America, 1975.

Semb, G., & Phillips, T. W. Personalizing a contingency managed lecture course through the use of peer-graders. Paper presented at the Fourth National Conference on Personalized Instruction in Higher Education, San Francisco, April 1977.

Stufflebeam, D. L. (Chair.). Educational evaluation and decision making. Itasca, Illinois: F. E. Peacock, 1971.

Taveggia, T. C. Personalized instruction: A summary of comparative research, 1967-1974. American Journal of Physics, 1976, 44, 1028-1033.

Wodarski, J. S., & Buckholdt, D. Behavioral instruction in college classrooms: A review of methodological procedures. In J. M. Johnston (Ed.), Behavior research and technology in higher education. Springfield, Illinois: Charles C. Thomas, 1975.

Worthen, B. R., & Sanders, J. R. (Eds.). Educational evaluation: Theory and practice. Worthington, Ohio: Charles A. Jones Publishing Co., 1973.
APPENDIX A
PRESCRIPTIONS FOR INSTRUCTIONAL MEDIA DESIGN
The following prescriptions for instructional media design were based on Allen's (1975) different design elements listed under seven general statements. The general statements are given as quotations, and the specific applications that follow the quotations are given the same numeration system as that used in the original article. When there is an asterisk, the use of a design element is recommended and given a specific application to the message on the tape.
1. "Preparatory or motivational procedures that establish a set
to learn the material to follow (at the beginning and at
points within the communication)" (Allen, 1975, p. 161).
1.1. The first chapters of the textbook could be termed an advance organizer. Called "Meaning and Perspective," this section was on the theoretical level and related the marketing field to history and political science, which many students would learn at an earlier level of education.
*1.2. Verbal or pictorial overviews, outlines, or summaries of material to follow were contained in the coursebook and text. The coursebook took a section of the textbook and provided an outline or summary. There were no outlines or summaries provided for the journal articles. These were not contained in the coursebook. It was recommended that each one of the tapes contain an overview of the journal articles associated with that topic. When the tapes were produced, each tape concluded with an introduction to the journal articles. Examples of these introductions to the articles in Module I can be found in Appendix D, Example 2.
*1.3. There were no verbal directions to direct attention to specific parts of the content. It was recommended that the tape transcripts direct attention to the tables, figures, and graphs. Examples of these directions from Module IV can be found in Appendix D, Examples 2 and 4.
*1.4. Each topic had questions and answers that emphasized points and provided opportunities for practice in the coursebook. Questions that required a response should have been entered into the script for the tapes to increase student response. Example 1 in Appendix D is an example of a question providing an opportunity for practice.
*1.5. Other than the practice questions in the coursebook and the textbook, the tapes did not create any activities for the student to do while listening to them. It was recommended that the tapes should contain such activities. As can be seen in Examples 1 and 3 in Appendix D, the students were asked to draw diagrams to aid their memory of marketing concepts.
2. "Organizational outlines or internal structuring of the con
tent" (Allen, 1975, p. 152).
*2.1. Extensive headings and subheadings were located in the textbook and coursebook. There seemed little need to add headings to the print material. The use of headings in the tapes may have facilitated the taking of notes. An example from Module III is shown as Example 3 in Appendix D.
2.2. Where necessary in the text, important points were enumerated.
Enumeration of points and objectives seemed to have reached
saturation point.
2.3. The content of the course seemed to be logically ordered and
sequenced.
3. "Attention-directing devices that point out, emphasize, or
direct attention to relevant cues in the communication"
(Allen, 1975, p. 162).
*3.1. There were no verbal directions to "look at," "find," "see," etc. contiguous with visuals or any other content of the printed materials. These should be inserted into the audio tape to draw attention to the content of tables and figures. An example of how this was accomplished in Module III is Example 4 in Appendix D.
3.2. There were no visual pointers used in the printed material, and it was difficult to see where they might have been used.
3.3. As for underlining, full capitalization, or italicizing of printed material, much was already accomplished with different print faces and the use of a green coloring for headings and figures.
3.4. Color was used for emphasis in the text, and the printing process for the coursebook did not allow for color printing.
3.5. There was no progressive build-up or disclosure, disassembly, or reassembly ("implosion") of pictorial elements; most of the messages contained in the figures, tables, and graphs did not require such a technique.
3.6. There seemed little need for animated or cartoon-type visuals that reduce detail to relevant clues inherent in the material to be learned.
3.7. The textbook made use of close-ups, large pictures, and large,
or heavier, type.
3.8. There seemed little capability for movement or change in the media that was used in the FlexEd mode.
*3.9. Repetition was not used for emphasis. Repetition was used merely for repeating the content of the text on the audio tape and in the coursebook. Paragraphs in the student's coursebook were not selected because of their significance to the content of the textbook. An example of the use of repetition for emphasis is contained in an example concerned with the three P's and a D of marketing (Example 1, Appendix D).
4. "Procedures that e l i c i t active participation and response
from the learner to the content of the communication"
(Allen, 1975, p. 162).
*4.1. The textbook listed questions after each chapter and case study. The coursebook contained practice questions after each topic and answers for feedback. An example of a question with feedback from Module V is contained in Appendix D, Example 6. The tape contained no requests for overt "speaking out" or written responses. This should be done by asking questions, as mentioned above.
*4.2. The end-of-chapter questions did not ask for a written response; therefore, "thinking" responses could be said to be already given. Questions such as those that appear in Appendix D, Example 3, were recommended and inserted into the script.
*4.3. The questions in the text and coursebook were the only requests for student participation. The previously recommended questions were used on the new tapes, as shown in Example 6, Appendix D.
4.4. Students were advised to underline and take notes, and it was believed that they did this in general practice.
4.5. Since the materials were self-managed, they were self-paced.
5. "Provisions for correcting or confirming feedback to responses
elic ite d from learners" (Allen, 1975, p. 162).
5.1. The questions on the tapes should be followed by information that answers these questions.
*5.2. Some of the questions on the tapes should require an answer in the form of a graph or a figure.
*5.3. Where the student was asked a question, it was recommended that those questions require information, not a "yes" or "no" response. This was done, as can be seen in Example 3 in Appendix D.
6. "A slow rate of development or pace of presentation of the
content to be learned" (Allen, 1975, p. 163).
6.1. As the material was self-managed, the learning was self-paced.
7. "Formats that provide for supplantation or replacement of the
mental processing operations, normally done by the 1 earners,
by means of imitation or modeling" (Allen, 1975, p. 163).
7.1. The design elements were concerned with the design of message
for film rather than the materials used in the FlexEd model.
APPENDIX B
EXAMPLES OF TEST ITEMS
Items from the Pretest
1. Business organizations performing marketing activities can be grouped into three types, depending on their function:
a. Manufacturer, wholesaler, retailer
b. Profit, not-for-profit, nonprofit
c. Durable goods, nondurable goods, services
d. Business intermediary, consumer intermediary, production intermediary
2. Socialism is a term used to describe:
a. The private ownership of the means of production
b. The public ownership of the means of production
c. The public ownership of all wealth
d. The parliamentary form of government
e. The public regulation of supply through price control
Items from the First Test Following Modules I and II
3. Extractive and agricultural firms are usually classified as:
a. Retailers
b. Wholesalers
c. Manufacturers and processors
d. None of the above
4. With reference to the need hierarchy, an advertisement stressing
"move up to a Buick" is an example of which level of the hierarchy?
a. Safety and security
b. Belongingness
c. Self-esteem
d. Self-actualization
Essay Question on the Text from the First Test
5. Describe the central features of a marketing-oriented firm. How
does this compare with a product-oriented firm? Give examples of
firms and products that demonstrate these characteristics.
Essay Question on the Readings from the First Test
6. According to Levitt, what is "marketing myopia"? Levitt mentions industries such as oil, automobiles, and dry cleaning. From your knowledge of current events, what present-day industries are falling into the traps mentioned? Give specific examples to show how the industry's view of marketing during the 1960s and 1970s is myopic.
Items from the Second Test Following Modules III, IV, and V
7. The general rule to use for deciding when and how much information to secure is that:
a. Information should be secured until its marginal utility is zero.
b. Information should be secured when its costs and benefits are equal.
c. Information should be secured whenever the manager's decision would be enhanced by its use.
d. Information should be secured when the expected benefits exceed the expected cost and there are no better uses for the funds.
e. Information should be secured whenever the competitive position of the firm would be improved and there are no better uses for the funds.
8. According to the textbook, one of the most meaningful changes that is occurring in marketing today is:
a. A transition from an intuitive to an information-based management
b. A greatly increased use of data-processing equipment
c. A decrease in the importance of experience in marketing management
d. A decrease in the cost of information as its volume increases
e. Both b and d
Essay Question on the Text from the Second Test
9. Discuss to what extent the Ralph Williams-type salesman mirrors the
description of a salesperson in the text. Be specific.
Essay Question on the Readings from the Second Test
10. What is the administered price theory discussed in Gilbert Burck's article, "The Myths and Realities of Corporate Pricing"? What evidence does he present to disprove it? Do you agree or disagree with Mr. Burck? Why?
APPENDIX C
QUESTIONNAIRES I, II, AND III
QUESTIONNAIRE I
The following information will help us to measure the effectiveness of the instructional materials (workbook, tape, text, readings). Circle the appropriate response.
1. The Modules I and II tapes were useful to you in understanding the material covered by this test.
   Certainly right   Probably right   Neutral   Probably wrong   Certainly wrong
2. The Modules I and II coursebook materials were helpful to you in understanding the textbook.
   Certainly right   Probably right   Neutral   Probably wrong   Certainly wrong
3. Did you answer the practice questions on Module I?   YES   NO
4. Did you answer the practice questions on Module II?   YES   NO
5. Did you read the Levitt article?   YES   NO
6. Did you read the Kotler article?   YES   NO
7. Circle the number of times you contacted the learning coach between the initial session and this test.
   0   1   2   3   4   5
8. The session you had with the learning coach was for instructional purposes and was very helpful to you in preparing for this examination.
   Certainly right   Probably right   Neutral   Probably wrong   Certainly wrong
9. Write in the number of times you listened to the tapes in the following locations (e.g., car--3).
   Car ____   Home ____   Work ____   Other ____
10. Did you use sources other than the learning coach and instructional materials (workbook, tape, text, readings) to help you with these modules? Please list (professor, community library, other tutor, discussion group, etc.).
   a.
   b.
   c.
   d.
QUESTIONNAIRE II
The following information will help us to measure the effectiveness of the instructional materials (workbook, tape, text, readings). Circle the appropriate response.
1. The Modules III, IV, and V tapes were useful to you in understanding the material covered by this test.
   Certainly right   Probably right   Neutral   Probably wrong   Certainly wrong
2. The Modules III, IV, and V coursebook material was helpful to you in understanding the textbook.
   Certainly right   Probably right   Neutral   Probably wrong   Certainly wrong
3. Did you answer the practice questions on Module III?   YES   NO
4. Did you answer the practice questions on Module IV?   YES   NO
5. Did you answer the practice questions on Module V?   YES   NO
6. How many articles did you read associated with Modules III, IV, and V?
   0   1   2   3   4   5   6
7. Circle the number of times you contacted the learning coach between the Module II test and the Module V test.
   0   1   2   3   4   5   6   7   8   9   10
8. The session you had with the learning coach was for instructional purposes and was very helpful to you in preparing for this examination.
   Certainly right   Probably right   Neutral   Probably wrong   Certainly wrong
9. Write in the number of times you listened to the tapes in the following locations (e.g., car--3).
   Car ____   Home ____   Work ____   Other ____
10. Did you use sources other than the learning coach and instructional materials (workbook, tape, text, readings) to help you with these modules? Please list (professor, community library, other tutor, discussion group, etc.).
   a.
   b.
   c.
   d.
Part 1                                    QUESTIONNAIRE III
The following information will help us to measure the effectiveness of the instructional materials (workbook, tape, text, readings). Circle the appropriate response.
1. The Modules VI, VII, and VIII tapes were useful to you in understanding the material covered by this test.
   Certainly right   Probably right   Neutral   Probably wrong   Certainly wrong
2. The Modules VI, VII, and VIII coursebook material was helpful to you in understanding the textbook.
   Certainly right   Probably right   Neutral   Probably wrong   Certainly wrong
3. Did you answer the practice questions on Module VI?   YES   NO
4. Did you answer the practice questions on Module VII?   YES   NO
5. Did you answer the practice questions on Module VIII?   YES   NO
6. How many articles did you read associated with Modules VI, VII, and VIII?
   0   1   2   3   4   5   6   7   8
7. Circle the number of times you contacted the learning coach between the Module V test and the final.
   0   1   2   3   4   5   6   7   8   9   10
8. The session you had with the learning coach was for instructional purposes and was very helpful to you in preparing for this examination.
   Certainly right   Probably right   Neutral   Probably wrong   Certainly wrong
9. Write in the number of times you listened to the tapes in the following locations (e.g., car--3).
   Car ____   Home ____   Work ____   Other ____
10. Did you use sources other than the learning coach and instructional materials (workbook, tape, text, readings) to help you with these modules? Please list (professor, community library, other tutor, discussion group, etc.).
   a.
   b.
   c.
   d.
Part 2
This additional data will help us to gather information concerning the effectiveness of the course materials.
If you used the tape or outline, check the box--if not, leave the box blank. Comment on each of the tapes and outlines.
Module I       Tape [ ]   Outline [ ]
Module II      Tape [ ]   Outline [ ]
Module III     Tape [ ]   Outline [ ]
Module IV      Tape [ ]   Outline [ ]
Module V       Tape [ ]   Outline [ ]
Module VI      Tape [ ]   Outline [ ]
Module VII     Tape [ ]   Outline [ ]
Module VIII    Tape [ ]   Outline [ ]
APPENDIX D
EXAMPLES OF DESIGN ELEMENTS
Example 1
Question to Elicit Active Participation
Module I (Transcript page 8)
The four P's are an easy and efficient way of organizing your thoughts about marketing and for communicating ideas about marketing to others. Because the marketing mix and external environment is important to remember, try drawing the diagram that is on page 7 just from memory. PAUSE.
Example 2
Preparatory Procedure to Establish a Set to Learn the Material to Follow
Module I (Transcript page 12)
In addition to the text there are two outside readings; the first by Ted Levitt is entitled "Marketing Myopia," and the second by Phillip Kotler is entitled "The Major Tasks of Marketing." Both were assigned because of the overview that they offer of marketing as a field in general. Both articles are also very different from each other. The first by Levitt gives you a feeling for the scope of marketing activities that a firm might engage in and a feeling for the excitement in the field in terms of the opportunities that exist for individuals in the field of marketing and for the creativity by the individuals working in the field. Levitt stresses in his article the marketing concept. That is, if you recall, an orientation of the firm toward the consumer.
After reading the article you might want to ask yourself or think about the following. Why do you think Ford began to offer automobiles in color? Another question that you might ask yourself is: Why do you think Ford expanded to manufacture trucks?
Try to jot down some of the answers. PAUSE.
Kotler's article offers the typology for looking at management's marketing tasks. He relates these to theory about the firm and the market. After reading the article you might ask yourself how useful Kotler's categories and divisions are.
You might also ask yourself what discipline that typology depended upon. Write down the answers. PAUSE.
Example 3
Provision for Confirming Feedback to Responses Elicited
Module III (Transcript pages 2-3)
I'm going to pause for a moment, and during that time ask you to jot down the types of information that you think are available or could be available within the firm that would help the sales manager. PAUSE.
Discuss your list with your learning coach. There are a number of data banks that you could have come up with. Let's look at some that you might have thought about.
First, and the most obvious, of course, is the information pertaining to sales; that is, a sales manager would of course be interested in knowing how many of each product the company has sold in the past six months or a year.
Similarly, you might suspect that the sales manager might want to know where the products were sold.
The latter might mean geographic regions in general or could mean urban vs. suburban, metropolitan vs. nonmetropolitan, or any other divisions which would prove illuminating.
Secondly, sales managers may want to know to whom the products were sold. This information can be used to determine what promotions, sales distributions, or pricing policies may be necessary or which might be modified to more effectively market the product.
Thirdly, another major source of internal information which is typically kept by the firm, but which is overlooked frequently, is cost. A sales manager really will want to know how much it costs to make a particular product, how much to add a particular feature to a product, or how much it costs to promote a particular product. These types of data are important in order to determine the profitability of marketing a particular product.
Fourthly, another source of information available to the firm relates to the customer--how they buy, and why they buy. Sometimes this is recorded, but more often than not it is information that is only known to the salesman.
It is not necessarily transmitted to the marketing or product managers who really must make the decision relative to the marketing efforts.
Example 4
Directions to "Look at" or "Find"
Module III (Transcript page 9)
Sometimes procedures for providing information and the process of information utilization in decisionmaking is summarized under the umbrella of a marketing information system.
If you turn to page 186 of your text you will note in Figure 9-1 a simplified sketch of what a possible marketing information system looks like. I will pause for you to find the page. PAUSE.
You will note that on the left side of the diagram are the two major areas where information can be derived. One represents the external environment and one represents the internal environment. The external, for instance, on the top relates to the economy, the consumer, the government, competition, society, and the like. . . .
Example 5
Directions to "Look at" or "Find"
Module IV (Transcript page 1)
In your text on page 287, Figure 14-1, you will find the diagram of what is entitled "A Simple Communication Network." PAUSE. Communication networks can be far more complicated and consequently far more difficult to understand. The diagram shown, however, summarizes the key facets of communication.
Yet one might even further simplify the diagram by eliminating what are marked the encoding and decoding sections. Basically a communication network has a sender, a receiver, and something over which communication is sent.
In the diagram 14-1 the sender is called the source, the receiver is called the destination, and the communication itself is the signal that is being sent from the source to the destination.
Example 6
Request for Overt or Written Response
Module V (Transcript page 1)
The Factors That Influence Price
Let's take a few moments and think about what factors you can think of at this time that influence price, or that are in some way related to price, or that can be considered part of the price paid for goods or services. List those factors on a piece of paper. PAUSE.
One factor that you may have immediately thought of is quality. Two different goods may cost exactly the same price, but if the quality of the two varies, then in actuality the price of the better quality good is cheaper than the price of the lower priced good which is of poorer quality.
Abstract
Little systematic application of the growing theory and strategies of evaluation to educational programs has been made in higher education. Universities offering courses that make extensive use of instructional materials could benefit from evaluation information. This is especially true where the materials are expensive to produce.