THE RELATIONSHIP BETWEEN PROCESS AND PRESAGE
CRITERIA OF COLLEGE TEACHING EFFECTIVENESS
by
John Lariviere Conklin
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
September 1972
INFORMATION TO USERS
This dissertation was produced from a microfilm copy of the original document.
While the most advanced technological means to photograph and reproduce this
document have been used, the quality is heavily dependent upon the quality of
the original submitted.
The following explanation of techniques is provided to help you understand
markings or patterns which may appear on this reproduction.
1. The sign or "target" for pages apparently lacking from the document photographed is "Missing Page(s)". If it was possible to obtain the missing page(s) or section, they are spliced into the film along with adjacent pages. This may have necessitated cutting through an image and duplicating adjacent pages to ensure complete continuity.
2. When an image on the film is obliterated with a large round black mark, it is an indication that the photographer suspected that the copy may have moved during exposure, thus causing a blurred image. You will find a good image of the page in the adjacent frame.
3. When a map, drawing, or chart, etc., was part of the material being photographed, the photographer followed a definite method in "sectioning" the material. It is customary to begin photographing at the upper left-hand corner of a large sheet and to continue photographing from left to right in equal sections with a small overlap. If necessary, sectioning is continued again, beginning below the first row and continuing on until complete.
4. The majority of users indicate that the textual content is of greatest value; however, a somewhat higher quality reproduction could be made from "photographs" if essential to the understanding of the dissertation. Silver prints of "photographs" may be ordered at additional charge by writing the Order Department, giving the catalog number, title, author, and specific pages you wish reproduced.
University Microfilms
300 North Zeeb Road
Ann Arbor, Michigan 48106
A Xerox Education Company
73-730
CONKLIN, John Lariviere, 1942-
THE RELATIONSHIP BETWEEN PROCESS AND PRESAGE
CRITERIA OF COLLEGE TEACHING EFFECTIVENESS.
University of Southern California, Ph.D., 1972
Education, curriculum development
University Microfilms, A XEROX Company, Ann Arbor, Michigan
UNIVERSITY OF SOUTHERN CALIFORNIA
THE GRADUATE SCHOOL
UNIVERSITY PARK
LOS ANGELES, CALIFORNIA 90007
This dissertation, written by
John Lariviere Conklin
under the direction of his Dissertation Committee, and approved by all its members, has been presented to and accepted by The Graduate School, in partial fulfillment of requirements of the degree of
DOCTOR OF PHILOSOPHY
Dean
Date ................
DISSERTATION COMMITTEE
Chairman
PLEASE NOTE:
Some pages may have
indistinct print.
Filmed as received.
University Microfilms, A Xerox Education Company
TABLE OF CONTENTS
Page
LIST OF TABLES.............................. iv
Chapter
I. THE PROBLEM 1
Introduction
Statement of the Problem
Purpose of the Study and Questions to be
Answered
Limitations
Delimitations
Assumptions
Hypotheses
Procedures
Definition of Terms
Organization of Remaining Chapters
II. REVIEW OF THE LITERATURE.................... 13
Teacher Effectiveness
Student Evaluations
Selected Variables
Summary
III. PROCEDURES................................... 47
Student Questionnaires
Instructor Questionnaire
Statistical Treatment
Statistical Analysis
Procedural Problems
Investigation of Hypotheses Concerning
Group Comparisons
Hypotheses Involving Bivariate Correlations
Hypotheses Involving the Multiple Stepwise
Regression Analysis
Supplementary Findings
IV. FINDINGS AND SELECTED DISCUSSIONS 70
V. SUMMARY, CONCLUSIONS AND RECOMMENDATIONS. . 96
Summary
Conclusions
Recommendations
REFERENCES........................................... 113
APPENDICES........................................... 125
Appendix A - Faculty & Course Guide
Questionnaire ............................ 126
Appendix B - Faculty & Course Guide:
Directions for Students and Administrators 134
Appendix C - IBM 1230 Document Number 510
Answer Sheet.............................. 136
Appendix D - Sample Data Description: All
Respondents.............................. 138
Appendix E - Faculty Questionnaire........ 141
Appendix F - Directions and Introduction
to Faculty Questionnaire................. 145
Appendix G - Second Letter of Transmittal . 147
Appendix H - Simple Data Description of All
Continuous Variables: Means, Standard
Deviations, and Sample Size............. 149
Appendix I - Overall Teaching Effectiveness
Scores for Adjunct and Full-time
Instructors.............................. 151
Appendix J - Correlation Matrix: Full-Time
Instructors Only.......................... 154
Appendix K - Correlation Matrix: All
Instructors Including Adjuncts........... 158
LIST OF TABLES
Table Page
1. Factors Covered by the Faculty & Course
Guide Questionnaire...................... 49
2. A Working Table Designed to Generate a
Teaching Effectiveness Score (TES). . . . 53
3. Generation of an Overall Teaching
Effectiveness Score From an Instructor's
Teaching Effectiveness Score............. 55
4. Hypotheses Receiving an Analysis of
Variance as a Statistical Treatment . . . 63
5. Hypotheses Receiving a Bivariate
Correlation as a Statistical Treatment. . 65
6. Hypotheses Receiving a Stepwise Multiple
Regression Analysis as a Statistical
Treatment................................ 67
7. Summary of the Analysis of Variance of
Overall Teaching Effectiveness Scores For
Adjunct and Full-Time Instructors .... 71
8. Summary of the Analysis of Variance of
Overall Teaching Effectiveness Scores For
Male and Female Instructors............. 72
9. Summary of the Analysis of Variance of
Overall Teaching Effectiveness Scores For
Rank of Instructor........................ 73
10. Summary of the Analysis of Variance of
Overall Teaching Effectiveness Scores For
Instructors Holding EdD and PhD Degrees . 74
11. Summary of the Analysis of Variance of
Overall Teaching Effectiveness Scores For
Inbreeding................................ 75
12. Summary of the Analysis of Variance of
Overall Teaching Effectiveness Scores For
Doctorate and Non-Doctorate Holding
Instructors.............................. 76
13. Summary of the Analysis of Variance of
Overall Teaching Effectiveness Scores For
Religion................................... 78
14. Summary of the Bivariate Correlations For
Hypotheses Eight (H8) Through Twenty-
Three (H23).............................. 80
15. Summary of the Multiple Regression Analysis
For All Instructors Including Adjuncts. . 90
16. Summary of the Multiple Regression Analysis
For Full-Time Instructors Only............ 93
CHAPTER I
THE PROBLEM
Introduction
The days of the ivory tower professor are undoubtedly a thing of the past. Professors of Education no longer lead the professionally cloistered lives of their predecessors. The public, whether for financial or intellectual reasons, has become increasingly interested in education and is demanding to play a more active role in the educational process. The recent clamour that schools be held accountable is evidence of this trend.
Certainly pressure toward improved education is nothing new, having always existed within the profession itself. Monetary rewards in the form of salary raises have been offered as an incentive for teachers to continue their own education, ostensibly as a means of improving their instructional skills; merit pay has been experimented with as a reward for the effective teacher. In-service and workshop programs have been widely used to help raise the quality of instruction. And of course, on an individual basis, the competent teacher is always striving to find new and better ways of improving his teaching skills.
How exactly does one go about this job of improving his instructional skills? Unfortunately, though the problem is a very important one, no simple or definitive solution exists. It seems there are as many answers as there are educators. Not only is there a lack of agreement on a definition of effective teaching, but in addition there is no consensus on how to ascertain effective teaching. Though various methods are widely used, many of them lack a rationale other than expedience. Thus, the problem of improving instructional skills affects future teachers preparing for the profession and those already in the field interested in enhancing their teaching skills.
It would seem that the most auspicious time for a teacher to grapple with this problem would be early in his training. All too often a lack of knowledge about effective teaching can manifest itself in life-long bad professional habits. How then can a neophyte learn?
Bandura & Walters' (1963) Modeling Theory indicated that people learn by observing meaningful others and that when the learner believes that this other has been rewarded for his action, the learner will model his behavior after the other. Charles Silberman (1970) in Crisis in the Classroom reaffirmed the validity of the modeling theory concept by citing research which showed that student teachers model their master teachers' classroom methods.
Most students of education are receiving teaching images from their education professors; these are the last teachers they contact as students before they teach. The same situation occurs with instructors of in-service courses. Therefore, the prospective teacher must be provided with extremely competent models. Only those instructors perceived by the student as being effective should serve as models.
Factors which can influence teacher effectiveness must be identified so that Schools of Education can provide the most effective models for prospective teachers.
Statement of the Problem
Since the last teacher model observed by the student of education is usually a professor of education, that professor should provide a competent model by being an effective teacher. It is ironic that some Schools of Education ignore basic educational theory in granting tenure and promotion on the basis of non-teaching activities. Teaching effectiveness should be the one major criterion used for granting permanent status. Such a weeding-out process would thus provide a larger pool of better teaching models for future educators.
Thus, it becomes necessary for Schools of Education to be aware of factors affecting teacher effectiveness so that they may modify their hiring policies in accordance with this information, thereby providing the most effective models for prospective teachers. This study concerns itself with ascertaining the specific relationships that exist between teaching effectiveness and selected variables.
Purpose of the Study and Questions to be Answered
The purpose of this study was to ascertain what specific variables were related to teaching effectiveness and what that relationship was. After a review of the related literature, variables were selected for this study; they are presented in Chapter III.
Limitations
1. This study was a form of institutional research (Dressel & Pratt, 1971) conducted at the School of Education at the University of Southern California; as such its findings are not generalizable beyond the scope of professional Schools of Education similar in make-up to the University of Southern California.
2. The nature of the survey instrument, called the Faculty and Course Guide, as a numerical rating scale may limit the expressed perceptions of the student participants with regard to their measurement of teaching effectiveness.
Delimitations
1. This study restricted itself to the measurement of teaching effectiveness by means of process criteria and did not concern itself with product or presage criteria.
2. This study was delimited to a study of the teaching effectiveness of instructors within the School of Education at the University of Southern California.
3. Observations of teaching effectiveness were likewise delimited to those students within the School of Education at the University of Southern California.
Assumptions
This study was based on the following assumptions:
1. The Faculty and Course Guide was a valid and reliable student evaluation tool to measure teaching effectiveness.
2. A questionnaire sent to the faculty requesting information about the variables yielded reliable information.
3. The respondents answered truthfully.
4. Students were capable of identifying effective teachers.
5. The random sample of students and classes utilized in this study was actually similar to classes and students of the School of Education at USC.
Hypotheses
H1: There is no significant difference between the mean teaching effectiveness scores of Adjunct Instructors and full-time Instructors.
H2: There is no significant difference between the mean teaching effectiveness scores of males and females.
H3: There is no significant difference between the mean teaching effectiveness scores of instructors with different professorial ranks.
H4: There is no significant difference between the mean teaching effectiveness scores of instructors possessing an EdD and instructors possessing a PhD.
H5: There is no significant difference between the mean teaching effectiveness scores of USC graduates and non-USC graduates.
H6: There is no significant difference between the mean teaching effectiveness scores of instructors with the Doctorate and non-Doctorate holding instructors.
H7: There is no significant difference between the mean teaching effectiveness scores of instructors of different religions.
H8: There is no significant first-order correlation between the teaching effectiveness scores and the instructor's age.
H9: There is no significant first-order correlation between the mean teaching effectiveness scores and the height of instructors.
H10: There is no significant first-order correlation between the mean teaching effectiveness scores of instructors and their degrees of commitment to their religions.
H11: There is no significant first-order correlation between the mean teaching effectiveness scores and the total number of years taught on the higher education level.
H12: There is no significant first-order correlation between the mean teaching effectiveness scores and the number of years taught at the University of Southern California.
H13: There is no significant first-order correlation between the mean teaching effectiveness score and the total number of years taught.
H14: There is no significant first-order correlation between teaching effectiveness scores and previous years of practical experience in the area taught.
H15: There is no significant first-order correlation between the teaching effectiveness scores and the number of hours spent holding administrative posts by instructors.
H16: There is no significant first-order correlation between teaching effectiveness scores and counseling activities.
H17: There is no significant first-order correlation between teaching effectiveness scores and the number of hours spent on consulting jobs held by the instructor during the teaching semester.
H18: There is no significant first-order correlation between the mean teaching effectiveness score and the number of hours spent in preparation for class presentations.
H19: There is no significant first-order correlation between teaching effectiveness scores and the number of hours spent on non-instruction related research.
H20: There is no significant first-order correlation between the mean teaching effectiveness scores and the number of pages authored by the instructor during the semester.
H21: There is no significant first-order correlation between the mean teaching effectiveness scores of instructors and their base contract salaries.
H22: There is no significant first-order correlation between teaching effectiveness scores and the number of students registered in class.
H23: There is no significant first-order correlation between teaching effectiveness scores and the instructor's self-evaluations.
H24: There is no significant multiple correlation between the aforementioned continuous variables and the teaching effectiveness of all instructors including Adjuncts.
H25: There is no significant multiple correlation between the aforementioned continuous variables and the teaching effectiveness of full-time instructors only.
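The statistical treatments applied to these hypotheses (elaborated in Chapter III) are a one-way analysis of variance for the group-comparison hypotheses, bivariate first-order correlations for H8 through H23, and stepwise multiple regression for H24 and H25. The first two treatments can be sketched as follows; the data and variable names here are synthetic illustrations, not the study's actual figures.

```python
# A minimal sketch of two of the statistical treatments, on synthetic data.
# anova_f computes the one-way ANOVA F-ratio used for the group-comparison
# hypotheses (e.g., H1: adjunct vs. full-time mean scores); pearson_r
# computes the first-order correlation used for H8 through H23.

def anova_f(groups):
    """One-way ANOVA F-ratio: between-group over within-group mean square."""
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

def pearson_r(xs, ys):
    """First-order (Pearson product-moment) correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical teaching effectiveness scores for two groups (cf. H1):
adjunct = [3.1, 3.4, 2.9, 3.6]
full_time = [3.8, 4.0, 3.5, 3.9]
f_ratio = anova_f([adjunct, full_time])

# Hypothetical instructor ages paired with scores (cf. H8):
ages = [34, 41, 50, 62]
scores = [3.9, 3.6, 3.4, 3.0]
r = pearson_r(ages, scores)
```

In either case the computed F-ratio or r would then be compared against the critical value for the chosen significance level to decide whether the null hypothesis can be rejected.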
Procedures
The survey was conducted in four distinct phases. Selected students and instructors of the School of Education at the University of Southern California were the participants.
In the initial phase of the survey the Faculty and Course Guide was designed, written, and distributed by volunteers to selected classes listed in the school catalogue. The students were asked to respond to a list of questions regarding the teaching effectiveness of the instructor. The data were collected and analyzed, and the procedures for administration and collection of the evaluation forms were perfected.
The following year, the improvements having been made, the Faculty and Course Guide was distributed, administered, and collected, and these results were used in the present study.
In phase II the faculty questionnaire was designed and administered, and the data were collected. The instructors chosen were those who had participated in the student evaluation. The items included on the questionnaire were those listed as the variables of this study.
In phase III the data were coded and prepared for the statistical treatment, which included the various correlational techniques available.
The final phase involved the preparation of the manuscript, which included the interpretation of the data, the findings, and conclusions.
The procedures are fully elaborated in Chapter III.
Definition of Terms
Administrative Posts. A University rank (from Department Chairman to Dean) requiring time away from class preparation is designated as an administrative post.
Criteria of Teacher Effectiveness. The effective teacher is the one who produces desirable changes in student behavior (product criteria); whose teaching behaviors are judged as "good" by various raters (process criteria); and who possesses characteristics which are predictors of teaching effectiveness (presage criteria).
Product Criteria. The effective teacher is defined as the one whose students make specific gains. The gains may be cognitive or affective, norm-referenced or criterion-referenced, but must be behaviorally measured change.
Process Criteria. The effective teacher is defined as having performed certain tasks while teaching. These may be specific teacher activities, teacher/student interaction activities, or student activities which result from teacher direction. They are specific activities which take place within teaching-learning situations.
Presage Criteria. The effective teacher is defined as the one who possesses certain characteristics, attributes, and/or knowledge. These characteristics, attributes, and/or knowledge are used to predict who the effective teacher will be. Thus, any teacher characteristic which is used to predict effectiveness will be defined as a presage criterion.
Counseling Activities. Student/teacher interaction outside of class time is considered to be counseling activities.
Faculty and Course Guide. A student evaluation instrument which was designed to assess teaching effectiveness on the college level is labelled the Faculty and Course Guide.
Instructor. The person who teaches a course, regardless of rank, is considered to be the instructor.
Previous Practical Experience. Previous practical experience must be in the area taught by the instructor; i.e., teaching in the sixth grade does not count for an instructor of administration, but being a manager of a camera store would count.
Teaching Effectiveness Score. The teaching effectiveness score is a continuous scale developed from the responses on the Faculty and Course Guide, including such criteria as: relevance, interest, structure, rapport, feedback, general teaching skills, and the encouragement of student use of higher-order thinking skills.
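The study's exact scoring procedure appears in its Tables 2 and 3, which are not reproduced in this chapter. As a hypothetical sketch only, a score of this kind could be computed by averaging each student's ratings across the listed criteria and then across respondents; the criterion names below follow the definition above, while the rating scale and data are illustrative assumptions.

```python
# Hypothetical sketch of a teaching effectiveness score (TES). The study's
# actual generation procedure is given in its Tables 2 and 3; here we simply
# average ratings over the criteria named in the definition above, across
# all student respondents. The 1-to-5 rating scale is an assumption.
CRITERIA = ["relevance", "interest", "structure", "rapport",
            "feedback", "general_teaching_skills", "higher_order_thinking"]

def teaching_effectiveness_score(responses):
    """Mean rating over every criterion for every respondent."""
    ratings = [resp[c] for resp in responses for c in CRITERIA]
    return sum(ratings) / len(ratings)

# Two hypothetical student response sheets for one instructor:
responses = [
    {c: 4 for c in CRITERIA},
    {c: 5 for c in CRITERIA},
]
tes = teaching_effectiveness_score(responses)  # 4.5 on this synthetic data
```

Because the result is a single continuous value per instructor, it lends itself directly to the ANOVA, correlation, and regression treatments named in the hypotheses.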
Organization of Remaining Chapters
Chapter II presents a review and summary of related literature and research.
A description of procedures, the instrument design, and methodology of this study is presented in Chapter III.
Chapter IV presents a description of the data, the findings, and conclusions drawn.
The summary, conclusions, and recommendations are found in Chapter V.
CHAPTER II
REVIEW OF THE LITERATURE
The present study deals with the relationship of certain variables to effective teaching. Research related to the present study is reviewed in this chapter under three general headings: teacher effectiveness; student evaluations; and certain variables related to teacher effectiveness.
Teacher Effectiveness
The task of identifying effective teachers (or
effective teaching) is crucial to teacher education,
certification, selection, and promotion, and--insofar
as teaching contributes to the total social welfare--to
ultimate human survival [Mitzel, 1960, p. 148].
Every student has at some time in his education criticized or commended a teacher for his ineffectiveness or effectiveness. All involved in the educational process seem intuitively to have an idea of what each of these terms means to him; and yet there is a glaring lack of universality about the meaning of teacher effectiveness.
Before one can hope to attempt any evaluation of teacher effectiveness, there is need for defining the behaviors of effective teachers in such a way as to permit measurement. Mitzel (1960) defined these behaviors as criterion measures: ". . . any set of observations that may be used as standards for evaluative purposes [p. 1481]."
Concerning the need for criterion measures of teacher effectiveness, Cook and Neville (1971) said:
Every teacher would benefit from a systematic appraisal of his or her efforts . . . Basic criteria for teacher performance need to be developed to ensure that faculty are properly evaluated [p. 1].
Guild (1967) recognized the same problem in writing about dental instructors. Conversely, Brain (1965) indicated that there was a lack of agreement on criterion measures. Mitzel (1960) said:
. . . more than a half-century of research effort has not yielded meaningful, measurable criteria . . . No standards exist which are commonly agreed upon as the criteria of teacher effectiveness [p. 1482].
In a review of the literature of the past 25 years, Neeley (1968) concurred. Among those who have historically recognized the need for criterion measures of teacher effectiveness are Beecher (1949) and Meriam (1906). However, Cortez (1967) indicated there was a "lack of agreement on what should be considered as criteria of teacher effectiveness and its criterion measures [p. 31]."
Many researchers have more specifically analyzed the nature of the problem of criterion measures. Barr (1961) said, "One of the biggest problems in evaluating teacher effectiveness is in setting up criterion measures which are specifically defined but not rigid [p. 135]." The same problem was recognized by Brodbeck (1964) and Ryans (1960a). Cortez (1967) pinpointed the problem by saying:
The task of establishing well-defined and flexible criterion measures is a major problem involving the critical aspects of design, rationale, reliability and validity, and agreed-on definition [p. 18].
Ryans (1960b) criticized studies of teacher effectiveness for their "poor design, inadequate analysis and lack of rationale [p. 1487]." Finally, Mitzel (1960) stated emphatically that "teacher effectiveness criterion measures must have relevance; reliability; freedom from bias and practicality [p. 1482]."
Not all researchers are so critical, however. Many have written of the formidable progress they feel has been made in the area of setting up criterion measures of teacher effectiveness. Cortez (1967) has written:
Investigators are beginning to arrive at some agreement concerning predictors and measures of teacher effectiveness . . . investigators of teacher effectiveness are moving forward to find a common basis for the assessment of teacher effectiveness [p. 20].
He added:
There is an encouraging trend toward joint efforts of educators and authorities in evaluation of teaching competence to establish an agreed-on definition of teacher effectiveness, its criteria, and its criterion measures [pp. 31-32].
The preponderance of research yielding data utilizing common criteria is further evidence of empirical progress made in the field of teacher effectiveness. Many studies reflected this trend toward the use of specific criterion measures. Morsh and Wilder (1954) examined over 360 studies dealing with effective instruction and came up with 20 predictors used to assess teacher effectiveness. Hampton (1951) found 20 important factors in determining teacher competence. Trabue (1953) was able to group teacher traits into six major categories and, based on responses from 820 college executives, concluded that 15 of the traits in these categories had "great value." Large universities have suggested specific criteria by which merit is achieved (Paulsen, 1964). Gage (1961) grouped Beecher's (1949) 104 data-gathering devices used in the Wisconsin Studies and put them in seven categories which could further be defined as criteria. In an attempt to find the "abilities and patterns of behavior of good and poor teachers," Peronto (1961) cited certain "elements of strength found in the performance of 47 good and 47 poor teachers [p. 92]." "Evidence of common criterion measures used to assess teacher effectiveness on all levels has been found [Riley, Ryan and Lifshitz, 1959, pp. 55-56]." Quick and Wolfe (1965), in their study which attempted to identify the "ideal teacher," furnished further evidence which suggested agreement among college educators on criterion measures.
The AERA Report of the Committee on the Criteria of Teacher Effectiveness (1952) proposed that the criteria of teacher effectiveness be classified along a range from ultimate to proximate. The ultimate criteria refer to the end product, pupil progress, while the proximate refer to the characteristics a teacher brings with him to the teaching situation. Mitzel (1960) proposed parallel categories which he referred to as product (ultimate) and presage (proximate). In addition, he utilized a third classification, called process criteria, which referred to teachers' classroom behaviors.
All of the research studies of criterion measures of teacher effectiveness reviewed lend themselves to classification by Mitzel's convenient categories.
Product Criteria - Product criteria depend for definition upon a set of goals toward which teaching is directed. These goals are most economically stated in terms of changes in behavior on the part of students . . . These effects are variously called student gains, student growth, or student changes, but they all involve measurement of change in student behavior, a portion of which logically can be attributed to the influence of individual teachers.
Process Criteria - Process criteria are most often described and measured in the classroom in terms of conditions, climates, or typical situations involving the social interactions of students and teacher. One type of process criterion is obtained from observations of teacher behavior . . . Another type of process criterion involves student behavior in the classroom. Examples are the extent to which students exhibit affection for the teacher, attentive listening, or conformity to classroom routines.
Presage Criteria - Presage criteria, so-called here because of their origin in guessed predictions, are from a logical standpoint completely removed from the goals of education . . . In a sense they are pseudo-criteria, for their relevance depends upon an assumed or conjectured relationship to other criteria, either process or product. Characteristically, presage criteria lack chronological proximity to the interplay of behavior in the classroom . . . There are at least four types of presage variables in common use as criteria in teacher effectiveness research:
(a) teacher personality attributes
(b) characteristics of teachers in training
(c) teacher knowledge and achievement
(d) in-service teacher status characteristics
Another large group of presage criteria includes such teacher-training variables as scholastic honor-point-ratios, marks in education courses, success in student teaching, and critic-teacher evaluations [Mitzel, 1960, pp. 1483-1484].
Product Criteria
The following criteria can be considered product criteria of teacher effectiveness: the attainment of immediate objectives (Paulsen, 1964); the teachers' effects on pupil behaviors (Lehman, 1961; Tead, 1964; Brain, 1965); learning outcomes (Simpson & Brown, 1952); the promotion of student thought (Musella & Rusch, 1968); student achievement of course objectives (Miller, 1972); and general student achievement (Guild, 1967; Cook & Neville, 1971). Trabue (1953), in an analysis of the reactions of a jury of experts, found that the following product criteria emerged as important: teacher behavior which inspires independent student thought, encourages independent student responsibility, and exhibits infectious enthusiasm which inspires students to want to teach. Finally, McKeachie (1969b) said,
The teacher whose students make good progress toward educational goals is an effective teacher regardless of how he looks or what techniques he uses [p. 212].
Process Criteria
In a review of studies pertaining to the relationship between the processes teachers use and the end products in terms of student goal achievement, Rosenshine and Furst (1971) found:
The five variables which yield the strongest relationships with measures of student achievement are: clarity, variability, enthusiasm, task orientation and/or businesslike behavior, and student opportunity to learn [p. 54].
They also found one variable which had a negative relationship with measures of student achievement; this was pupil criticism. Indirect measurement, a term utilized by Cook & Neville (1971), can be subsumed under the heading of process criteria. They have defined indirect measurement as:
. . . a measure of what the teacher does to facilitate learning; for example, the teacher selects instructional objectives, selects course materials, establishes a learning environment, prepares academically, organizes and presents materials, diagnoses students, interacts with students, etc. [p. 2].
Other leaders in the field have emphasized the importance of teacher behavior in the classroom (Paulsen, 1964; Rapp, 1964; Miller, 1972).
Six facets of process criteria which were emphasized by McKeachie, Isaacson, & Milholland (1964) were skill; overload in student assignments; structure; feedback to the students; group interaction; and student-teacher rapport. Hildebrand, Wilson, & Dienst (1971) analyzed components of effective teacher performance and came up with the following process criteria: analytic/synthetic approach; organization/clarity; instructor-group interaction; instructor-individual student interaction; and dynamism/enthusiasm. Hughes (1959) offered the following: structure/discipline; building a positive self-concept for students; and creating an environment for problem solving and creativity. In a summary of early studies of why teachers fail, the following process criteria were most conspicuously lacking: techniques, discipline, effort, initiative, and adaptability (Barr, Burton & Brueckner, 1938). Cortez (1967) emphasized course planning, communicative ability, presentation of subject matter, and organizational responsibility. Gray (1969) presented the following criterion models for teacher evaluation: linguistic, performative, and expressive. Lundstedt (1966) said, "The effective communicator is generally the effective teacher." He emphasized "a sense of timing" as a process criterion for the effective communicator. Through a factor analysis of data on teacher behavior, Gibb (1955) defined four identifiable common factors: (1) friendly, democratic; (2) communication; (3) order, organization; and (4) academic emphasis. Among Jensen's (1961) criteria, the functions of the teacher as instructor and counselor could be designated as process criteria of successful teachers.
The following teacher behaviors have been found to be important for effective instruction: planning, clearly stating objectives, making the course relevant to the students, and utilizing good communications skills (Tead, 1964). Quick & Wolfe (1965) emphasized the following: planning, good communication skills, tolerance and fairness. Three prominent patterns of observable classroom behavior (process criteria) were found by Ryans (1967):

. . . friendly, understanding, sympathetic teacher behavior; responsible, businesslike, systematic teacher behavior; and stimulating, imaginative teacher behavior [p. 59].

Eble (1970) listed eight process criteria as valued most highly by students. He found the effective teacher:

1. was a dynamic and energetic person
2. explained clearly
3. had an interesting style of presentation
4. seemed to enjoy teaching
5. had a genuine interest in students
6. was friendly toward students
7. encouraged class discussion
8. discussed points of view other than his own [pp. 99-100].

In a list of effective teaching behaviors, Perry (1969) found the following process criteria and listed them in order of importance:

. . . preparation, teacher interest, . . . tests which utilize understanding, fairness in evaluation, effective communicator, encouraging intelligent, independent student thought; logical organizer, motivates students, treats students with respect, and acknowledges questions as best he can [p. 17].
Also in order of importance, Musella & Rusch (1968) found that the effective use of questions, speaking ability, organization of subject matter, and extensive and effective use of discussion were teaching behaviors which most promoted thinking. Trabue's (1953) poll of college executives yielded the following process criteria in their order of popularity:

1. friendly, democratic, tolerant and helpful in his relations with students
2. understands the problems frequently encountered by college students
3. careful preparation and organization for classes
4. infectious enthusiasm for teaching
5. demonstrates appropriate skills and methods
6. helps students help themselves.

Harvey & Barker (1970) came up with the following process criteria: organization of course, preparation for class, skill as lecturer and discussion leader, variety in classroom techniques and assignments, and ability to arouse interest. A factor analysis conducted by Holmes (1971) yielded four factors, three of which measured the quality of an instructor's presentations: the instructor's process of evaluation, student-instructor interaction, and clarity of the tests. Riley, Ryan & Lifshitz (1959) found ten common criterion measures, of which the following are process criteria: organization of subject matter; ability to speak and explain; ability to stimulate independent student thought; and fairness and tolerance. Phillips (1964) found that students themselves favored a highly structured class with "highly visible" tests. Finally, in his analysis of good and poor teachers, Peronto (1961) cited the following process criteria as important:

Interest in pupil response; use of illustrative materials; well developed assignments; good notebooks and outside reading; wealth of commentary remarks; frequent use of pupil's experience; ability to stimulate interest; socialization of class work; supervised study and willingness to experiment [p. 92].

In general, the teacher with the widest range of behavior tends to be superior to one with a narrow range of behavior (Hughes, 1959).
Presage Criteria
Presage criteria, the previous traits, skills, knowledge and abilities a teacher brings with him to the teaching situation (Mitzel, 1960), are considered important predictors of teacher effectiveness. However, Ryans (1960b) offered the following words of caution:

A basic problem in the study of the prediction of teacher effectiveness is determining in what way and to what extent various data descriptive of teachers (teacher behaviors and/or conditions affecting teacher behaviors, both of which may be subsumed under teacher characteristics) are either antecedents or concomitants of some specified criterion of teaching competence [p. 1487].

Mitzel (1960) warned also that the relevance of presage criteria "depends on an assumed or conjectured relationship to other criteria [p. 1484]" (process or product).
In spite of such cautions, many researchers utilize presage criteria in their definitions of effective teaching. Barr (1961) listed 42 presage criteria which he subsumed under the topics of personality traits, competencies, and knowledge and skills. Tead (1964) said the effective teacher possesses ". . . some philosophy of life which places his teaching in some larger frame of reference and significance [p. 594]." The presage criteria of knowledge of subject matter and intelligence were stressed as important in assessing teacher effectiveness (Barr, 1938; Peronto, 1961; Rapp, 1964; Lundstedt, 1966; Perry, 1969; Harvey & Barker, 1970).

Other researchers have offered more extensive and specific lists of presage criteria. The following are likely to be positively associated with teaching effectiveness:

. . . measured intellectual abilities, achievement in college course, general culture and special subject-matter knowledge, professional information, student teaching marks, emotional adjustment, attitudes favorable to students, generosity in appraisals of the behavior and motives of other persons, strong interest in reading and literary matters, interest in music and painting, etc. [Ryans, 1960b, p. 1490].
The AERA, in its Report of the Committee on the Criteria of Teacher Effectiveness (1952), offered the following "proximate" criteria as important for consideration in assessing teaching effectiveness:

Teachers' values or evaluative attitudes; teachers' knowledge of educational psychology and mental hygiene; teachers' emotional and social adjustment; teachers' knowledge of methods of curriculum construction; teachers' knowledge of subject matter; teachers' interest in the subject matter; teachers' grades in practice teaching courses; teachers' grades in education courses; teachers' intelligence [pp. 243-244].

Pullias, Lockhart, & Bond (1963) presented the following as the most nearly universal traits of great teachers: integrity or authenticity; enthusiasm or zest; directness or nearness to reality; perspective or length and breadth of view; freedom of mind, specifically freedom of imagination; breadth of interest or sensitivity to a wide spectrum of life; and an abiding concern for the individual learner. In an analysis of factors affecting teaching behaviors which most promote thinking, Musella & Rusch (1968) listed attitudes towards subject and students, and knowledge of subject.
Trabue (1953) conducted a poll of college executives concerning their opinions about the effective college instructor. The poll produced the following presage criteria: the effective college instructor possesses a high academic record in the special field, takes a broad view of educational problems, regards himself primarily as a college teacher (as opposed to a subject specialist), continues professional study, is healthy and vigorous, has a good personality, and, most important, is emotionally stable and mature. Riley, Ryan & Lifshitz (1959) found the attributes most highly valued on all levels of effective teaching were knowledge of and attitude towards subject and the instructor's personality.
Quick & Wolfe (1965) said that the "ideal professor" possessed the presage criteria of enthusiasm and knowledge of his subject; affection for college youth; a good voice; and, finally, that he was scholarly and an active researcher. The following presage criteria were found to be useful in measuring the behavior of teachers judged successful:

. . . professional judgment; sociability; work habits; motivation and value; initiative and creativity; general first impression; member of school staff; and member of community [Jensen, 1961, pp. 72-73].

Cortez (1967) recommended the following, in their order of importance, as significant in the appraisal of teaching effectiveness: knowledge of subject matter, human relations, personality traits, and appearance. Isaacson, McKeachie, & Milholland (1963) emphasized culture--"artistic, polished, imaginative, effectively intelligent, etc. [p. 112]"--as the only trait consistently rated high when students and other teachers were rating effective teachers. This finding was replicated at Ohio State University, where variables inconsistent with the effectiveness item were found. Isaacson (1963) concluded that teachers most likely to be seen in a good light by their students possessed "surgency, culture and emotional stability [p. 117]." Guild (1967), in his study of dental instructors, suggested the following presage criteria for use in evaluating effective teaching: relations with others, knowledge of job, and knowledge of teaching and learning. In their review of 360 studies, Morsh and Wilder (1954) found the most frequent predictors of teacher effectiveness to be the following presage traits:

. . . intelligence, scholastic achievement, knowledge of subject matter, age and experience, cultural background, teaching ability and aptitude, professional attitude toward and interest in teaching, emotional stability, social adjustment, and personality traits [pp. 2-7].
Ryans (1960b) offered the following presage criteria which may contribute to the model of the teacher:

Superior intellectual abilities, above average school achievement, good emotional adjustment, attitudes favorable to pupils, enjoyment of pupil relationships, generosity in the appraisal of the behavior and motives of social and school activities appear to apply very generally to teachers judged by various kinds and sets of criteria to be outstanding [p. 1490].

And finally, the importance of personal factors, including "healthy living techniques," in the model for effective teachers was emphasized by Morton (1964).
A definition of good teaching, drawn by Miller from a composite of the literature on the subject, serves as a good summary of the findings of this literature search on effective teaching:

A good teacher personifies enthusiasm for his students, the area of competence and life itself. He knows his subject, can explain it clearly, and is willing to do so--in or out of class. Class periods are interesting, and at times alive with excitement. He approaches his area of competence and his students with integrity that is neither stiff nor pompous, and his attitude and demeanor are more caught than taught [pp. 26-27].
Student Evaluations
The use of student evaluations is about a half-century old. As early as 1924 Harvard began publishing its annual Confidential Guide to courses, and the University of Washington began publication of a similar guide in that year (Cook & Neville, 1971). The Purdue Rating Scale for Instruction was developed in 1926, and according to Remmers & Weisbrodt (1965), research which has accompanied its various refinements has yielded the following conclusion:

A third of a century of use . . . by many teachers and a very considerable amount of experimental research . . . have demonstrated that student evaluation is a useful, convenient, reliable and valid means of self-supervision and self-improvement for the teacher [p. 1].

The tradition of student evaluations of teacher effectiveness, established so many years ago, has become increasingly popular and widespread. Though its popularity as a tool has at times waned (Gustad, 1967), favorable empirical data have caused renewed interest. Gustad's (1967) findings showed a "substantial decline in the use of systematic student ratings between 1961 and 1966 [p. 270]." According to Miller (1972), no evidence was available for the period between 1966 and 1971, but "a significant increase in systematic student evaluations of teaching has taken place [p. 15]." Gaff & Wilson (1970) said student ratings were the most prevalent form used to obtain evidence about teaching.
Many important studies of teacher effectiveness have utilized student evaluations (Isaacson, 1962; Yamamoto, 1966; Musella, 1968; Harvey & Barker, 1970). Informal student opinions, one of the 15 most frequently used sources of information for determining teacher effectiveness, were used 41 per cent of the time (Austin & Lee, 1966). A survey done by Gaff & Wilson (1970) showed that 72 per cent of the faculty favored a formal evaluative procedure, and of those, 82 per cent felt students should be involved in the evaluation.
It is not difficult to understand the reasons for the popularity of student evaluations. In addition to the valuable feedback provided for the instructor, which can and does lead to improved instruction (Adams, Siegel, & Macomber, 1964; McKeachie, 1969a), one need merely examine the goals of the use of the tool to understand its potential. McKeachie (1969b) listed these as follows:

(a) To make possible comparative judgments of teaching effectiveness of different instructors
(b) To help instructors improve their teaching
(c) To improve student morale and stimulate student thinking about educational objectives
(d) To provide information relevant to students' choice of courses (as in published evaluations of courses and teachers) [p. 219].
There are many who distrust the tool, however.

Many educators argue against the use of systematic student ratings on the grounds that students cannot accurately assess teaching effectiveness. It seems curious, then, that so many of them do use 'informal student opinion' as a basis for judging an instructor's performance . . . [Kent, 1967, p. 319].

Kent (1967) continued, "A second reason commonly given for failure to use systematic student ratings is distrust of the validity and reliability of the devices used [p. 319]." Because this is a major concern of researchers, the question of reliability and validity warrants a closer look.
Remmers, Gage, & Rummel (1960) defined validity and reliability as follows: "Validity of an evaluation device is the degree to which it measures what it intends to measure [p. 111]," and "Reliability is the consistency with which a test yields the same results in measuring whatever it does measure [p. 117]."
In assessing validity, Lehman (1961) said, "Students are perceptive, and they become more so when they realize their opinions are seriously regarded [p. 353]." Gustad (1967) pointed out that students "are virtually the only direct observers . . . they are reasonably competent if they are asked the right questions [p. 276]."

Skeptics might claim that student subjectivity undermines the validity of the tool. This appears unlikely, however, as Hawkins (1963) found that there appeared to be no difference between objective and subjective evaluation of teachers as viewed by many different evaluators. Hildebrand, Wilson, & Dienst (1971) and Woodburne (1966) found that students were able to discriminate between best and worst teachers with a high level of significance.
In terms of the reliability of student evaluations, Drucker & Remmers (1951) showed that student ratings of instructors correlate well (.40 to .68) with ratings of the same instructors made by alumni ten years after graduation. Remmers (1963) found that student ratings of teachers were as reliable as the presently available mental and educational tests if the n was greater than 25. On the secondary level, pupil ratings on most items have been found to be highly reliable (Bryan, 1937). In general, student evaluations of instructor effectiveness have been found to be valid and reliable tools (Remmers, 1963; Cortez, 1967; Fahey, 1970).
Several specific validity problems are possible. The question has been raised as to whether the grading system of the instructor influences the validity of the student evaluation. Spaights (1967) and Stewart & Malpass (1966) found that the grade a student received made a difference in the way he rated the instructor. Bryan (1937) found a slight tendency for pupils with high marks to rate teachers higher than pupils with low marks; however, this was not a significant difference.

Other researchers have found no relationship between the rating received by the instructor and the grade the student expected (Heilman & Armentrout, 1936; Langen, 1966; Holmes, 1971). However, Holmes (1971) cautioned that, if students' grades are not what they expected, the students will tend to depreciate the instructor's performance in areas other than the grading system.
Many other validity problems present themselves. Much research has dealt specifically with variables which might affect student ratings. These have ranged from instructor differences (i.e., personality factors) to student backgrounds (grades, majors, sex, etc.). The instructor's personality could conceivably influence his ratings. Empirical evidence, however, has refuted this argument (Elliott, 1949; Russell, 1951; McKeachie, 1969b).

Whether or not student ability affected their ratings of instructors was investigated by McKeachie (1969b). He found that there was a relationship. Certain instructors were found to be most effective with high-ability students, and these high-ability students rated those instructors more positively than did low-ability students. Instructors most effective with low-ability students were rated higher by them as a group than by the high-ability students.
Student background was investigated for its possible effect on student ratings. Findings indicated that students were not influenced by their sex, grade level, major area, or GPA of grades previously received from the teacher (Rayder, 1968). Remmers & Brandenburg (1927) found the raters' sex to be unrelated to their ratings. They also found no relationship between the ratings and the difficulty of the course.

Whiteman (1972) found no significant relationship between student ratings and GPA or the student's anticipated grade in the course. He did find that the following students, as compared to others in their classes, tended to rate their instructors higher: students (1) in small classes, (2) in upper division classes, (3) who heard the professor was good, (4) who had a genuine interest in the subject matter, (5) who prepared assignments daily, (6) who did considerable independent work out of class, (7) who said too much work was demanded of them, (8) who frequently participated voluntarily, (9) who sought explanations for unclear materials, (10) who looked up minor points not understood, (11) who monopolized class discussions, (12) who made relationships between new and old material and between the course and other courses, and (13) who were getting as much as anticipated from the course.
Hildebrand, Wilson, & Dienst (1971), whose study of student evaluations of teacher effectiveness is one of the most comprehensive and thorough, said,

In general, student ratings of best teachers showed only negligible correlations with academic rank of instructor, class levels, number of courses previously taken in the same department, class size, required versus optional course, course in major or not, sex of respondent, class level of respondent, grade-point average, and expected grade in course [p. 42].
The following important statements have been made by authorities in the field about student evaluation forms or questionnaires:

In general, many of the rating forms that I examined seemed to suffer from either a lack of organization and a tendency to ask too many questions or from overgenerality in the phrasing of items [Kent, 1967, p. 325].

Remmers (1963) said:

Most rating forms could be improved, but it should be recognized that the element of bias can never be completely removed. Such devices are by their very nature 'biased' in that they depend on the judgments of human beings who are necessarily subjective in their judgments. Thus they are limited by 'the characteristics of the human rater--his inevitably selective perception, memory and forgetting, his lack of sensitivity to what may be psychologically and socially important, his inaccuracies of observation . . . [p. 329].

Miller (1972) offered a suggestion for making the tool more reliable. He said:

It is important that this procedure be standardized for greater reliability. Likewise, some objective and standardized procedure for distributing, collecting and delivering the questionnaires should be developed [p. 29].
Kirchner (1969) concurred and added that a neutral individual should administer the form, as higher ratings were obtained when the instructor being evaluated was present.

Concerning the tool's design and construction, Wrightstone (1960) identified four major types of tools used in rating teacher effectiveness:

1. rating scales
2. rank-order method
3. forced-choice technique
4. paired-comparison method

Of these, the rating scales were the most extensively used. Guilford (1954) named the five categories of rating scales:

1. numerical
2. graphic
3. standard
4. cumulated points
5. forced choice

Human error, both in the construction and completion of the forms, reduced the validity and reliability of ratings (Guilford, 1954). Wrightstone (1960) cited the following rater's errors:

Halo effect--rate a trait high or low because of the way another trait was rated; and logical error--due to preconceived ideas of rater and ambiguousness of trait being rated [p. 961].

Finally, Remmers (1964) offered the following word of caution about designing student evaluation forms: ". . . forced-choice technique of rating requires extensive experimental work to be constructed [pp. 340-341]."
Authorities in the field have varied opinions concerning the use of student ratings of teacher effectiveness. Dressel (1964) was generally opposed to their use because they usually emphasized a professor's errors and ignored his strengths. Morton (1964) indicated that student evaluations have been "subject to much abuse: poor student judgment, prejudice, etc.--or poor questionnaire [p. 521]." Kerlinger (1971) was strongly opposed to the use of student evaluations because of the negative effects he claimed they had on instructors. He said they caused hostility and alienation, undermined the instructor's autonomy and responsibility, and thereby lessened the motivation of the instructor. Kent (1967) also cautioned about the effect student evaluations would have on instructor motivation:

It is frequently maintained that, because many students would rather be entertained than educated, the use of student ratings will lead the teacher to become a popularity-seeker [p. 338].

Quoting from an article by Max Lerner in the October 31, 1965 issue of the Washington Star, Kent continued:

If the teacher gets too self-conscious, if he strives for approval, then he is running after strange gods. The poor devil is always tempted to overdramatize his subject, oversimplify it, make things 'clear' . . . [p. 338].

Others considered student evaluations to be valuable tools (Morton, 1961; Dressel, 1967; Schwartz, 1968). Finally, Renner (1967) said, in support of student evaluations of teacher effectiveness, "The students are ultimate consumers of the teacher's efforts, and they know best whether he has been effective or not [p. 14]."
Selected Variables
In the quest to ascertain predictors of teacher effectiveness, many of the following presage variables have provided avenues of exploration.
Class Size
This area has yielded the most research. Edmondson & Mulder (1924) found student achievement in classes of different sizes to be about the same. Hudelson (1928), in 46 separate experiments, found only eight differences large enough to be significant; six of these favored large classes. More current research has found that class size had little or no effect on the quality of learning (Hatch & Bennet, 1965) unless the classes were extremely large, with 111 or more students (Simpson & Brown, 1952). Other researchers found that small classes produced slightly more favorable results (Cheydleur, 1945; Macomber, 1957 & 1960; Nachman & Opochinsky, 1958; Siegel, Adams & Macomber, 1960; Gage, 1961; Feldhusen, 1963; McKeachie, 1967).
Sex
Empirical evidence has found that there was not a significant relationship between the sex of the instructor and teacher effectiveness as assessed by various raters (Hauss, 1969; Hess, 1969; Noble, 1969). Student ratings of teacher effectiveness do not seem to be affected by the sex of the instructor, according to findings by Remmers (1927) and Lehman (1961).

Studies which did find sex-related differences in teaching style were those of Simpson (1952), who found the male to be significantly higher in terms of student learning, and McKeachie (1971), who found that women teachers who were rated high in structure were more effective than men.

Research has yielded some interesting points about sex-related differences. On the elementary level, Ryans (1960) found men more permissive, emotionally stable, and democratic in the classroom, while women were more responsible and businesslike. In the secondary school he found women might be more effective than men on seven out of nine specified criteria.
Religion
There is a paucity of research on the relationship between religion and teaching effectiveness, possibly because of its controversial nature. Gustad (1961) found, in a study of faculty evaluations at 584 institutions, that an "other" category was the second highest correlate in teacher evaluation. This item was used to deal with such factors as Christian character, and church membership and activity.
Height
Although not directly related to teacher effectiveness, Feldman (1971) found that there is job discrimination between tall and short college graduates. According to him, tall men (6'2" and taller) received average starting salaries 12.4 per cent higher than those of college graduates under six feet tall.
Age
A significant difference existed between teaching effectiveness and the instructor's age, favoring the middle-aged group (Hauss, 1969). Teachers under age 55 were at a distinct advantage (Ryans, 1960), and Simpson (1952), who compared age in ten-year groups to a "total learning quality score (TLQS)," found:

Age 20-30: low TLQS
Age 30-40: high TLQS
Age 40 and up: TLQS continuously drops

From this evidence it appeared that age and teacher effectiveness have a curvilinear relationship.
Self-Evaluation
Bryan (1962) concluded in his dissertation that teachers who perceived themselves as having positive attitudes toward students tended to be rated higher than others.
Years of Experience
A relationship was found between teacher effectiveness and teaching experience (Hauss, 1969). It was also found that the very experienced groups were at a disadvantage (Ryans, 1960b). The number of years of teaching experience significantly correlated with teaching effectiveness up to ten years, after which there was no relationship (Hess, 1969). Remmers (1927) concluded that teachers with fewer than five years of experience tended to be rated lower than those with eight or more years of experience.

Many researchers have found no relationship between teaching effectiveness and years of experience (Noble, 1969; Hawkins, 1963; McMullen, 1927). Though Ryans (1960b) concurred, he also indicated that there was evidence of an increase in teacher effectiveness which was positively correlated with experience during the early years of teaching.

The indication here is that there is possibly a curvilinear relationship between teaching effectiveness and years of experience, with effectiveness increasing with experience up to about ten years and then showing either decline or no relationship.
Years of Experience at Currently
Employed School Districts
No significant relationship existed between the number of years of teaching experience at the currently employed district and teaching effectiveness (Noble, 1969).
Nature of Degree
Newell (1967), in an analysis of the teaching effectiveness of professors in schools of dentistry, compared the teaching effectiveness of instructors with various types of professional and scholarly degrees (DDS, MD, PhD, etc.) and found that there was a significant difference between the teaching effectiveness of instructors with various degrees.
Highest Earned Degree
Hauss (1969) found no significant relationship between level of preparation beyond the BA degree and teaching effectiveness on the elementary and secondary levels. On the college and university levels, instructors with the BA were rated significantly lower than those with an MA or a doctorate (Elliott, 1949).
Previous Practical Experience
Previous practical experience appeared to confer no significant advantage or disadvantage in formal or informal evaluations of teaching effectiveness (Hawkins, 1963).
Salary
Hilgert (1964) believed that salary increases should be based on the professor's competence. Empirically, this is still not the case; the effective instructor received no more remuneration than his ineffective peers (Hess, 1969; Hauss, 1969; Noble, 1969).
Rank
Rank was found to be a significant predictor of an instructor's teacher effectiveness rating (McDaniel & Feldhusen, 1970). Langen (1966) found:

The median associate professor and the median assistant professor are more 'superior' than 'competent' and the median instructor is 'competent plus.' Even the bottom 10 per cent of associate professors are 'competent plus' and the bottom 10 per cent of instructors seldom go below 'competent minus' [p. 25].

The associate professor was rated higher in teacher effectiveness than other ranks (Elliott, 1949). More recent research found that instructors and assistant professors received lower ratings of their teaching effectiveness than did associate professors and professors (Gage, 1961).
Work Load Variables
The five variables which follow are work load variables. Concerning work load in general:

Best and worst teachers engage in the same professional activities and allocate their time among academic pursuits in about the same ways. The mere performance of activities associated with teaching does not assure that the instruction is effective [Hildebrand, Wilson, et al., 1971, p. 42].
Number of Pages Authored
Professors with published research were rated higher than professors without publications (Riley, Ryan & Lifshitz, 1959). Writing reinforced teaching efforts, and students considered a professor's knowledge to be authoritative when he had published a substantial amount (Hilgert, 1964). Conversely, McDaniel & Feldhusen (1970) found significant negative correlations between the composite instructor rating and books second-authored and articles first-authored. Their research also found a positive relationship with booklets second-authored.
Outside Commitments (Consultations)
"In most cases the time of the work week seems to
be divided approximately 2/3 for instructional duties and
1/3 for non-instructional duties [Stickler, 1960, p. 84]."
Later on, Stickler(1960) stated,
I could find no definitive studies relating to time
for out-of-class conferences with students, committee
work, professional services (such as consultations)
(emphasis mine) or required (or appropriate) extra-
curricular activities in computing the total faculty
load [p. 91].
Administrative Posts
Administrative duties were included as part of the
work load (Stickler, 1960). The number of hours spent on
administrative duties was found to have been a significant
negative predictor of the composite instructor's rating of
effective teaching (McDaniel & Feldhusen, 1970).
Research
Research might be expected to have an adverse effect
on teaching effectiveness because, as Stickler (1960) said:
"Research usually requires more of the time of a college or
university faculty member than any other activity [p. 89]."
There are arguments both for and against doing research
while teaching. Martin & Berry (1969) advocated a
separation of research and teaching. On the other hand, it
has been said that research is a necessary corequisite to
the teaching/learning process, provided that the teaching
is not neglected and that the research is of real value for
success in teaching (Dupont, 1961). There was also a broad
competency model: "In medium and large institutions of
higher education, good researchers are good teachers
[Miller, 1969, p. 68]."
Student Counseling
McKeachie (1969b) defined counseling as three
distinctly different types of activities:
1. program planning
2. remedial work
3. individualized teaching
Douglas & Romine (1950) considered counseling activities
part of the faculty load, and Remmers (1963), reviewing the
research, found that an instructor's "popularity in
extraclass activities . . . is probably not appreciably
related to student ratings of that teacher [p. 368]."
Summary
Teacher Effectiveness
A need has been found for the development of common
criterion measures in the evaluation of teacher
effectiveness.
Product Criteria. The teacher whose students
gained the most, both cognitively and inspirationally, was
the most effective.
Process Criteria. The most effective teachers were
well prepared and organized, enthusiastic, and had good
rapport with individual students as well as with the class
as a whole.
Presage Criteria. A positive relationship was found
between intelligence, knowledge of subject matter, human
relations skills, and personality traits on the one hand
and teacher effectiveness on the other; no relationship was
found between experience, remuneration, or the sex of the
instructor and teacher effectiveness.
Student Evaluations
Student evaluations have been widely used in
studies of teacher effectiveness. The student evaluation
questionnaires used have been found to be valid and reliable
tools in the assessment of teacher effectiveness.
Selected Variables
This search of the literature concluded with a
survey of the related literature on the specific variables
used in this study. No related literature could be located
on adjunct versus full-time professors or on staff
inbreeding.
CHAPTER III
PROCEDURES
This chapter presents a description of (1) the
instrumentation, (2) design, (3) sample, (4) data
collection, and (5) statistical analyses.
Student Questionnaire
Design of the Instrument
In the 1970-71 school year, Jack Housden, the
President of the Education Graduate Organization (EGO) at
the University of Southern California, appointed a committee
of graduate students, of which this researcher was chairman,
to design an instrument to evaluate the teaching
effectiveness of instructors in the School of Education.
The form which had been in use prior to the formation of
the committee was used only by the administration, and
therefore students had no access to its results. The
committee decided to design an objective student evaluation
form whose results would be made available to the student
body.
The committee first analyzed approximately 25 student
evaluation forms from various colleges and universities.
Based upon this information the committee decided
on a forced-choice rating scale as the technique to be
employed in the format of the questionnaire.
The committee observed that the promotion of higher
order thinking skills was not represented in the
questionnaires examined. Therefore, an examination
of Bloom's (1956) Taxonomy of Educational Objectives:
The Classification of Educational Goals: Handbook
1--Cognitive Domain aided in the development of the
questions (Table 1) found under the "higher order thinking"
factor. The remainder of the questions were organized
under such factors as relevance, interest, structure,
rapport, feedback, pedagogy, and two general factors related
to the course and the instructor, respectively. The selection
of these categories was based upon an analysis of the
other student evaluation forms which were studied.
The student evaluation questionnaire developed was
named the Faculty & Course Guide (F&CG) (see
Appendix A).
Content Validity
A search of the related literature showed that the
effective teacher was structured and/or well organized, had
good rapport with his students, demonstrated good pedagogical
skills, influenced student behavior, possessed enthusiasm
(which made the course interesting), and provided
valuable feedback.
TABLE 1
FACTORS COVERED BY THE FACULTY & COURSE GUIDE QUESTIONNAIRE

Factor                        Question Number*
Structure                     5, 8, 10, 14, 15, 22
Rapport                       14, 16, 18, 19, 20
General teaching skill        11, 12, 13, 21, 23
Higher order thinking         5, 6, 7, 17
Interest                      2, 3, 4
Feedback                      24, 26
Relevance                     1
Attendance                    27
"G" Factor (course)           9
"G" Factor (instructor)       25

* For the specific wording of any question, see Appendix A
The Faculty & Course Guide had questions dealing
with the above-mentioned factors, validated by a review of
related literature; in addition, the factors of relevance,
attendance, and general factors relating to the course and
instructor were included (Table 1).
Sample
The sample for the pilot study was the students who
attended classes listed in the 1970 Fall catalogue for the
School of Education at USC. The sample for the present
study was the students who attended education classes
listed in the 1971 Fall catalogue and whose classes met in
their assigned rooms on the USC campus.
Administration
The F&CG was administered twice. The pilot study
in 1970 was conducted by EGO. Representatives
who volunteered for the job brought the course instructor a
packet including the questionnaire, answer sheets, and
number two pencils. The instructor was then responsible
for taking these to class. Though the F&CG questionnaire
was designed to be self-administering, it was discovered
that this, in addition to the collection procedures, needed
further clarification. These factors adversely affected
the validity and reliability of the pilot study.
In the present study the distribution, administration,
and collection procedures were refined and standardization
was more carefully observed. Student volunteers
from EGO were selected and instructed on the procedures for
distribution, administration, and collection of the F&CG
questionnaire. The F&CG questionnaire was distributed to
each class by a student volunteer; the administration of
each questionnaire was the same (see Appendix B); and the
completed questionnaires were immediately returned to the
EGO office by a student volunteer.
Data Collection
The students responded on IBM 1230 #510 answer
sheets (Appendix C), which were returned to the EGO office,
cleaned up, and prepared for data analysis by the student
administrators.
Data Analysis
Through the use of a visual scanner, the data on
the answer sheets were key-punched onto data cards. A
program was written so the data could be treated by course
number. A single sheet for each instructor, showing raw
score responses, percentage responses, the "n," and the
course number, was returned for analysis. A sample print-out
can be found in Appendix D. Although summary data are
presented, Appendix D shows the computer print-out and thus
how the raw scores and percentages were provided for data
analysis.
Teaching Effectiveness Score. For purposes of this
study a Teaching Effectiveness Score (TES) was developed.
A working table was designed (Table 2) for the purpose of
obtaining this score. The questions of the F&CG were
grouped into factors, the names of which are found in the
first column of Table 2. Responses to the questions were
then analyzed and ordered from "best" to "worst," where
"best" indicated the highest effectiveness on each specific
factor. Responses which did not concern effective
teaching were not used in the computation and were shaded
in on the table.
Questions five and fourteen were eliminated because
their responses were too open to interpretation and thus
their value was questionable.
The columns formed under the headings "Best,"
"Good," "Fair," "Poor," and "Worst" were filled in with
the percentage of student responses found in that answer
slot of the F&CG questionnaire. Percentage scores in each
column were added and the sum was placed in the "sub-total"
slot at the bottom of the chart. The sub-totals were then
assigned weights (Edwards, 1957). The sub-total of the
"Best" column was multiplied by a positive two; the sub-total
of the "Good" column was multiplied by a positive
1.5; the sub-total of the "Fair" column was multiplied by a
positive one; the sub-total of the "Poor" column was
multiplied by a negative 1.5; and the sub-total of the "Worst"
column was multiplied by a negative two. The sum of these
five products became the TES. The extreme possible Teaching
Effectiveness Scores (TES) were ±5,000.

TABLE 2
A WORKING TABLE DESIGNED TO GENERATE A
TEACHING EFFECTIVENESS SCORE (TES)

                                            Percentage Responding
Factor               Question Number   Best   Good   Fair   Poor   Worst
Relevance            1
Interest             2, 3, 4
"G" Factor (course)  9
Structure            8, 10, 15, 22
Higher Order
  Thinking           6, 7, 17
Rapport              16, 18, 19, 20
Feedback             24, 26
Demonstrated
  Teaching Skills    11, 12, 13, 21, 23
"G" Factor
  (instructor)       25
Attendance           27
SUB-TOTALS                             ____   ____   ____   ____   ____
WEIGHTS                                x 2.0  x 1.5  x 1.0  x -1.5 x -2.0
PRODUCTS                               ____   ____   ____   ____   ____
TES (sum of the five products)
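The weighting scheme just described can be sketched in code. The following is an illustrative reconstruction, not part of the original study; the function name and dictionary layout are assumptions.

```python
# Weights assigned to the five response-column sub-totals, as described
# above: Best +2.0, Good +1.5, Fair +1.0, Poor -1.5, Worst -2.0.
WEIGHTS = {"best": 2.0, "good": 1.5, "fair": 1.0, "poor": -1.5, "worst": -2.0}

def teaching_effectiveness_score(subtotals):
    """Compute a TES from the five column sub-totals of percentage responses."""
    return sum(WEIGHTS[col] * subtotals[col] for col in WEIGHTS)
```

With 25 scored questions, a class answering "Best" on every question produces a "Best" sub-total of 2,500 and hence a TES of 5,000, which is consistent with the ±5,000 extremes.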
A TES was generated only for classes with a sample
size ("n") greater than 25 or, if the "n" was less than 25,
when the sample size represented 70 per cent or more of the
class size. Two classes were eliminated from the study
because of this restriction.
The TES thus developed was used for each course in
correlating class size and the instructor's self-rating of
a specific course.
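The sample-size restriction described above amounts to a simple eligibility rule. The sketch below is a hypothetical rendering of it; the function name is invented here.

```python
def tes_eligible(n, class_size):
    """A class receives a TES only when more than 25 students responded,
    or when the respondents cover at least 70 per cent of the class."""
    return n > 25 or n >= 0.70 * class_size
```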
Overall Teaching Effectiveness Score (OTES). Since
the present study was concerned with general characteristics
of an instructor, a score of his general teaching performance,
as opposed to his specific performance in one
class, was necessary. All classes taught by an instructor
were averaged together in a way that did not interfere
with the sample size. The effect was to increase the
size of the sample used in generating the score and thus
increase its validity.
An OTES was generated by averaging together all the
TES of an instructor. The specific process included
multiplication of each course TES by the "n" of the sample;
addition of these products; and finally, the division of
the summed products by the total "n" (Table 3). The OTES
(an average) thus developed was used for each instructor
in correlating the remaining variables and in the stepwise
multiple regression analysis (BMD02R).
TABLE 3
GENERATION OF AN OVERALL TEACHING EFFECTIVENESS SCORE
FROM AN INSTRUCTOR'S TEACHING EFFECTIVENESS SCORES

Class Size    10       29       25       Total "n": 64
TES           3470     3530     3281
"n" x TES     34700    102370   82025    Sum: 219095
OTES          219095 / 64 = 3423
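The averaging in Table 3 is a sample-size-weighted mean of the course TES values. A minimal sketch, with the function name and data layout assumed:

```python
def overall_tes(class_results):
    """Weighted average of an instructor's course TES values,
    each weighted by that course's number of respondents "n"."""
    total_n = sum(n for n, _ in class_results)
    return sum(n * tes for n, tes in class_results) / total_n
```

Using the figures shown in Table 3, `overall_tes([(10, 3470), (29, 3530), (25, 3281)])` evaluates to 219095 / 64, approximately 3423.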
Instructor Questionnaire
Design
A questionnaire was designed to gather information
from the instructors which paralleled the variables used
in this study:
1. Adjunct vs. full-time Instructor
2. Sex of the instructor
3. Professorial rank
4. EdD vs. PhD
5. Inbreeding (USC graduate vs. other institution)
6. Doctorate vs. no Doctorate
7. Religious affiliation
8. Age of the instructor
9. Height of the instructor
10. Degree of religious commitment
11. Years of teaching experience on the college
level
12. Number of years teaching experience at USC
13. Total number of years of teaching experience
(any level)
14. Previous practical experience in related area
15. Hours spent in administrative posts
16. Total number of hours per week of counseling
students, in addition to class time
17. Field experience--consultations
18. Hours of preparation for classes
19. Total number of hours per week devoted to
research (other than for classes or lectures)
20. Total number of pages written as an author of a
textbook or article done during the semester of
the evaluation
21. Salary
22. Number of students registered in the class
23. Instructor's self-evaluation
Most of the questions on the faculty questionnaire
were straightforward; however, questions relating to variables
11 through 20 called for a review of the related
literature on the collection of faculty and work load
data.
The total work load of a faculty may be analyzed
into fairly discrete functions--teaching, research,
administration, advising, and others that may be
identified [Enochs, 1960, p. 20].
The questionnaire analyzed these discrete
functions in units of time. Stecklein (1961) indicated,
"Man's ability to estimate time spent is amazingly well
developed [p. 4]." He made the statement in relation to the
collection of work load data, thus providing the rationale
for the analysis of discrete functions in units of time.
All questions were reduced to fit one sheet of
paper (see Appendix E).
Sample
The sample consisted of selected instructors in the
School of Education at USC during the Fall semester of
1971. These were the instructors who were evaluated by the
student questionnaires.
Administration
Each of the 84 instructors in the School of Education
whose classes had been evaluated was sent a packet
through the inter-office or United States mail. The packet
included a letter of explanation, instructions to be
followed, and a questionnaire to be completed (see
Appendices E and F).
Data Collection
Enclosed with the explanation, instructions, and
questionnaire was a self-addressed, stamped envelope in
which the faculty questionnaire was to be returned upon
completion. After three weeks, 53 of the 84 questionnaires
had been returned. The researcher then sent a new letter
(see Appendix G) with the same questionnaire and a
self-addressed, stamped envelope to the non-respondents. This
yielded an additional 14 responses. Due to the personal
nature of the questionnaire, five instructors returned the
questionnaire blank and indicated that they did not wish to
participate in the study. Thus, of the original 84
instructors, 67 returned a completed questionnaire. Three
more instructor questionnaires were eliminated because they
were incomplete. Therefore the sample of 84 was reduced
to 64. The data were then coded for the statistical
treatment.
Statistical Treatment
The statistical methods considered for the present
causal-comparative survey were the various correlational
methods available, the analysis of variance, a mean
analysis, and a simple data description. All the statistical
procedures used were run on an IBM 360 computer using the
BIOMED statistical package series.
Review of Related Literature
Since the present study concerned itself with the
relationship between teaching effectiveness and the selected
variables, a review of selected references on correlational
research was included.
Fox (1969) cautioned researchers involved in
correlational studies by indicating that the total amount
of data collected could be summarized in a few two-digit
decimals. He further cautioned that this is too limited a
goal for dissertation-type research.
Isaac and Michael (1971) set out the steps involved
in a correlational study and suggested that the researcher
select the correlational approach that fits the problem.
Simple correlations could be obtained by the use of the
Pearson r, Rho, and the point biserial correlation
coefficient. However, Downie and Heath (1970) suggested:
"Making predictions with a single predictor is not the most
efficient scheme that can be arranged," and they further
pointed out that, "The efficiency or accuracy of prediction
may be improved by using more than one predictor to predict
a single criterion [p. 140]."
Kelly, Beggs, McNeil, Eichelberger & Lyon (1969)
said that multiple factors underlie human behavior. They
further stated that most statistics fail to reflect the
complexities of behavior because they study only one
or two variables. Kelly, et al. (1969) also pointed out
that past research was limited because the statistical
models which previous investigators had used were
self-limiting.
Since the advent of the high-speed digital computer,
newer statistical models have become easier to
utilize. One tool, multiple regression analysis, can aid a
research investigator in constructing a statistical model
which will not place constraints on the original research
question (Kelly, et al., 1969).
Guilford (1965) pointed out some of the principles
of multiple correlation:
(1) a multiple correlation increases as the size
of correlations between dependent and independent
variables increases and
(2) a multiple correlation increases as the size
of the intercorrelations of independent variables
decreases [p. 403].
Multiple regression analysis is a more powerful
statistical tool than bivariate correlation methods, since
it yields more predictive data. Therefore, statistical
computer packages, making for efficiency in the application
of multiple regression analysis as well as other correlational
methods, were selected as part of the statistical
treatments used in this research.
Statistical Analysis
The statistical procedures conducted in this study
were an analysis of variance, bivariate correlations,
multiple regression analysis, mean analysis, and simple
data description.
Analysis of Variance
Hypotheses one (H1) through seven (H7) were
investigated through an analysis of variance. H1, which
included the variable adjunct vs. full-time instructor, was
investigated by comparing the mean OTES and the variability
of these scores for both groups. H2, which included the
variable sex, was investigated by comparing the mean OTES
and the variability of the scores for male and female
instructors. H3, which represented the variable rank, compared
the mean OTES and the variability of the scores of Instructors,
Assistant Professors, Associate Professors, and Full
Professors. H4 referred to the variable type of Doctorate,
and compared the mean OTES and variability of the scores of
instructors possessing the EdD and PhD. H5 concerned itself
with the inbreeding variable; the researcher compared the
mean OTES and variability of the scores of instructors who
were USC graduates and those who were graduates of other
universities. H6 concerned itself with analyzing the mean
OTES and the variability of scores of instructors who
possessed the Doctorate and those who did not. Finally,
H7, which included the variable religion, was investigated
by comparing the mean OTES and variability of scores of
Jews, Protestants, Catholics, those with no religion, and
others.
The data were run on an IBM 360 computer and the
BIOMED statistical package BMD01V was used. Means, standard
deviations, and an F-ratio were computed. Table 4
summarizes by hypothesis and variable the data which were
treated through the analysis of variance.
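The one-way analysis of variance behind these comparisons reduces to an F-ratio of between-group to within-group mean squares. The sketch below illustrates that computation in plain Python; it is not the BMD01V code, and the function name is an assumption.

```python
def f_ratio(groups):
    """One-way ANOVA F: between-group mean square over within-group mean square."""
    k = len(groups)                            # number of groups
    n_total = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))
```

A non-significant F, as reported in Chapter IV, indicates that the group means do not differ more than would be expected from within-group variability alone.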
Correlations
Bivariate Correlation. H8 through H21, pertaining
to instructors, were investigated across and within two
separate groups: group one was "all instructors including
adjuncts" and group two was "full-time instructors only."
Data to be treated were the OTES (the dependent variable)
and the variables found in the hypotheses (the independent
variables). H8 through H14 (H8 - Birthdate; H9 - Height;
H10 - Religious commitment; H11 - Teaching experience on
the college level; H12 - Teaching experience at USC; H13 -
Total teaching experience on all levels; H14 - Years of
practical experience) were investigated across both groups.
H15 through H21 (H15 - Hours of administration; H16 - Hours
of counseling; H17 - Hours of consultation; H18 - Hours of
preparation; H19 - Hours of research; H20 - Number of pages
authored; H21 - Salary) were investigated solely with the
"full-time instructors only" group.
TABLE 4
HYPOTHESES RECEIVING AN ANALYSIS OF VARIANCE
AS A STATISTICAL TREATMENT

Hypothesis Number    Discrete Variables
1                    Adjunct versus full-time instructors
2                    Men versus women
3                    Rank
4                    EdD versus PhD
5                    Inbreeding (USC graduate vs. other)
6                    Doctorate versus no Doctorate
7                    Religious affiliation

A third group of hypotheses (H22 - Class size and
H23 - Instructor's self-evaluation) was investigated. The
data collected for H22 and H23 were unique, inasmuch as
they represented course-related data as opposed to data
about the instructors' OTES. The Teaching Effectiveness
Score (TES) for each course was used as the dependent
variable, while class size (H22) and instructor's
self-evaluation (H23) were the independent variables. Table 5
summarizes the groups and dependent variables of the
hypotheses and their related variables which received
first-order correlations as their statistical analyses.
The data were run on the IBM 360 computer and the
BMD02D and the BMD02R statistical packages were used.
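The first-order (bivariate) correlations described here are Pearson product-moment coefficients. As an illustration of the underlying formula, not of the BMD02D code itself:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)
```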
TABLE 5
HYPOTHESES RECEIVING A BIVARIATE CORRELATION
AS A STATISTICAL TREATMENT

Groups:              Instructors (All)    Full-time          All Courses Sampled
                     Including Adjuncts   Instructors Only   in this Study
Dependent variable:  OTES Generated for Each Instructor      TES for Each Course

Hypothesis
Number      Variables
8           Birthdate                        X      X      Y
9           Height                           X      X      Y
10          Religious commitment             X      X      Y
11          Teaching experience, college     X      X      Y
12          Teaching experience, USC         X      X      Y
13          Total experience                 X      X      Y
14          Practical experience             X      X      Y
15          Hours of administration                 X      Y
16          Hours counseling                        X      Y
17          Hours consultations                     X      Y
18          Hours preparation                       X      Y
19          Hours research                          X      Y
20          Pages authored                          X      Y
21          Salary                                  X      Y
22          Class size                       X      X      Y
23          Instructor's self-evaluation     X      X      Y

X - Independent Variable
Y - Dependent Variable

Stepwise Multiple Regression Analysis. H24, dealing
with the "all instructors including adjuncts" group, and
H25, dealing with the "full-time instructors only" group,
were statistically investigated by using a multiple
correlational technique.
The predicted variable was the OTES and the predictor
variables were those found in H8 through H21. H24,
which dealt with the "all instructors including adjuncts"
group, had the following predictor variables: Birthdate;
Height; Religious commitment; Teaching experience on the
college level; Teaching experience at USC; Total teaching
experience on all levels; and Years of practical experience.
H25 dealt with the "full-time instructors only" group. The
predictor variables included those mentioned above and
Hours of administration; Hours of counseling; Hours of
consultation; Hours of preparation; Hours of research; Number
of pages authored; and Salary.
Intercorrelations between all the above variables
were generated and multiple predictors of the OTES were
developed through the stepwise multiple regression
analysis.
The BMD02R statistical package was used on an IBM
360 computer.
Table 6 summarizes by hypothesis and group the
data which were treated through the stepwise multiple
regression analysis.
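Stepwise multiple regression builds the prediction equation one predictor at a time. The sketch below is a simplified forward-selection routine using ordinary least squares; it is not a reproduction of the BMD02R algorithm (which also applies F-to-enter and F-to-remove tests), and the names are illustrative.

```python
import numpy as np

def forward_stepwise(X, y, names, steps=3):
    """Greedily add, at each step, the predictor column that most
    increases the R-squared of an OLS fit with intercept."""
    def r_squared(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(min(steps, X.shape[1])):
        best = max(remaining, key=lambda c: r_squared(chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
    # Report each selected predictor with the cumulative R-squared.
    return [(names[c], r_squared(chosen[:i + 1])) for i, c in enumerate(chosen)]
```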
Mean Analysis. A close look was taken at the
individual OTES of adjunct and full-time instructors to
gain a fuller understanding of their scores.
Simple Data Description. The data were run through
a BMD01D statistical package. The resulting information
yielded the mean, standard deviation, standard error of the
mean, sample size, maximum score, minimum score, and the
range of each of the 15 continuous variables that were
studied.
TABLE 6
HYPOTHESES RECEIVING A STEPWISE MULTIPLE REGRESSION
ANALYSIS AS A STATISTICAL TREATMENT

Variables                                H24: All Instructors   H25: Full-time
                                         Including Adjuncts     Instructors Only
Birthdate                                X                      X
Height                                   X                      X
Religious commitment                     X                      X
Teaching experience at college level     X                      X
Teaching experience at USC               X                      X
Total experience                         X                      X
Practical experience                     X                      X
Hours of administration                                         X
Hours counseling                                                X
Hours consulting                                                X
Hours of preparation                                            X
Hours of research                                               X
Pages authored                                                  X
Salary                                                          X
OTES (dependent variable)                X                      X

Procedural Problems
Missing Data
Because of the personal nature of the faculty
questionnaire, certain instructors chose to refrain from
answering specific questions. Three specific instructors
answered so few questions that their data were eliminated
from the study. Other missing data were treated as
follows.
Salary Data. One full-time Professor and one
Instructor failed to include their salary data on the faculty
questionnaire. A median score of personnel of the same
rank was found and used as an estimate of each instructor's
salary.
Birthdate Data. Birthdate data were missing for
two female instructors. The mean year of birth, 1923, was
used as an estimate of their year of birth and inserted in
their data cards.
Height Data. There were two cases where height was
missing. The researcher knew one instructor and estimated
that instructor's height, while the mean height of five feet
ten inches was used as an estimate for the other.
Commitment to Religion. Absence of this information
eliminated these data from the study because a mean
score as an estimate of religious commitment would be
meaningless.
Data Processing. With the use of the computer the
researcher lost some flexibility in handling the data. The
above adjustments were necessary to compensate for the
missing data because if one piece of continuous data was
not present, the rest of that instructor's data would have
been lost to the analysis.
The overall effect of substituting a mean score for
the missing data was a more conservative estimate of the
correlation. The closer the scores lie to the mean, the
less discrimination, and thus the lower the correlation
that will be generated (Guilford, 1965).
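The mean-substitution strategy described above can be illustrated with a small sketch; the function name and the use of None to mark missing values are assumptions made here for illustration.

```python
def impute_with_mean(values):
    """Replace missing entries (None) with the mean of the observed entries."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]
```

Substituted values sit exactly at the mean, so they contribute nothing to the covariance term of a correlation and pull the estimate toward zero, the conservative effect Guilford (1965) describes.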
CHAPTER IV
FINDINGS AND SELECTED DISCUSSIONS
This study was concerned with finding the variables
associated with the teaching effectiveness of college
instructors. In order to investigate seven hypotheses
concerning discrete variables, an analysis of variance was
conducted. Continuous variables were investigated by means
of first-order and multiple correlations. An Overall
Teaching Effectiveness Score (OTES)--an average teaching
effectiveness score generated from student evaluations--
was one of the statistics used in the analysis. A group
mean OTES was used with the dichotomous variables and the
OTES was used as the dependent variable in the various
correlational procedures.
Discussions are scattered throughout this chapter.
These discussions are provided for the reader's benefit and
present statements regarding pertinent past research and
further analyses of selected findings.
The findings are discussed by hypothesis, with the
exception of supplementary findings, which are presented at
the end of the chapter.
Investigation of Hypotheses Concerning
Group Comparisons
H1: There is no significant difference between the mean
teaching effectiveness scores of Adjunct Instructors
and full-time Instructors.

TABLE 7
SUMMARY OF THE ANALYSIS OF VARIANCE OF OVERALL
TEACHING EFFECTIVENESS SCORES FOR
ADJUNCT & FULL-TIME INSTRUCTORS

            Adjunct         Full-Time       F-ratio
            n = 28          n = 39
            mean    S.D.    mean    S.D.
            2907    1162    2962    658     0.06 N.S.

The group mean OTES of Adjunct Instructors was 2907
and the group mean OTES of full-time Instructors was 2962,
indicating no difference between means. The F-ratio,
0.06 (65 degrees of freedom), was not significant.
Therefore, the research hypothesis was supported.
It must be noted, however, that there appeared to
be great variability in the standard deviations of the
two groups. The Adjunct group standard deviation (SD)
of 1162 and the full-time Instructor group SD of 658 called
for further analysis of the individual OTES of the two
groups (see Appendix I).
The mean OTES of all instructors including Adjuncts
was 2938 with an SD of 896 (see Appendix H). This mean and
SD were used in an analysis of the individual OTES of the
two groups--Adjuncts and full-time. Eighty-one per cent of
the full-time Instructors' OTES fell within one standard
deviation of the group mean, and 100 per cent within two
standard deviations. Seventy-five per cent of the Adjunct
Instructors' OTES fell within one standard deviation of the
mean, 11 per cent fell within two standard deviations, and
14 per cent deviated from the mean by more than two
standard deviations.
Discussion. Two of the highest and lowest OTES
were found among Adjunct Instructors. Although the F-ratio
was not significant, it appeared that the great variability
in the scores of Adjuncts was due to the fact that they
as a group are different from other School of Education
members, and these findings warrant attention.
H2: There is no significant difference between the mean
teaching effectiveness scores of males and females.

TABLE 8
SUMMARY OF THE ANALYSIS OF VARIANCE OF OVERALL
TEACHING EFFECTIVENESS SCORES FOR MALE & FEMALE INSTRUCTORS

            Female          Male            F-ratio
            n = 12          n = 55
            mean    S.D.    mean    S.D.
            2797    1126    2970    847     0.37 N.S.

The mean OTES of female Instructors was 2797 with
an SD of 1126, while the mean OTES of male Instructors was
2970 with an SD of 847. The F-ratio, 0.37, was not
significant. Therefore, the research hypothesis was supported:
there is no significant difference between the teaching
effectiveness of males and females.
Discussion. These findings support the findings of
Hauss (1969), Hess (1969), and Noble (1969) that there are
no sex-related differences in teaching effectiveness.
H3: There is no significant difference between the mean
teaching effectiveness scores of instructors with
different professorial ranks.

TABLE 9
SUMMARY OF THE ANALYSIS OF VARIANCE OF OVERALL TEACHING
EFFECTIVENESS SCORES FOR RANK OF INSTRUCTOR

Lecturers & Instructors (grouped)    Assistant Professors
n = 14                               n = 19
mean    S.D.                         mean    S.D.
2944    1254                         2852    647

Associate Professors                 Full Professors      F-ratio
n = 12                               n = 22
mean    S.D.                         mean    S.D.
3220    671                          2858    947          0.50 N.S.

The grouped data for Lecturers and Instructors
yielded a mean OTES of 2944 with an SD of 1254. The mean
OTES for the Assistant Professor group was 2852 with an SD
of 647, while the highest mean OTES was attained by the
Associate Professor group, with an OTES of 3220 and an SD of
671. The Full Professor group had a mean OTES of 2858 with
an SD of 947. The F-ratio, 0.50 (65 degrees of freedom),
was not significant. Therefore, the research hypothesis
was supported: there is no significant difference between
the teaching effectiveness of instructors of various ranks.
Discussion. The highest mean OTES for any group of
instructors was found with the Associate Professor group.
This finding supports the findings of Elliott (1949) and
Langen (1966), who discovered that the Associate Professor
is rated higher than other ranks, although there may not be
statistically significant differences.
H4: There is no significant difference between the mean teaching effectiveness scores of instructors possessing an EdD and instructors possessing a PhD.
TABLE 10
SUMMARY OF THE ANALYSIS OF VARIANCE OF OVERALL TEACHING
EFFECTIVENESS SCORES FOR INSTRUCTORS HOLDING EdD AND PhD DEGREES

              Ed.D.                Ph.D.
              n = 29               n = 26
              mean      S.D.       mean      S.D.      F-ratio
              2922      982        3061      846       0.31 N.S.
The mean OTES for instructors with an EdD was 2922 with an SD of 982, while the mean OTES for instructors possessing the PhD was 3061 with an SD of 846. The F-ratio, 0.31 (53 degrees of freedom), was found to be not significant. It was therefore concluded that the research hypothesis was supported: there is no difference between the teaching effectiveness of instructors with EdD and PhD degrees.
H5: There is no significant difference between the mean teaching effectiveness scores of USC graduates and non-USC graduates.
TABLE 11
SUMMARY OF THE ANALYSIS OF VARIANCE OF OVERALL TEACHING
EFFECTIVENESS SCORES FOR INBREEDING

  Highest Earned       Highest Earned
  Degree at USC        Degree non-USC
  n = 36               n = 31
  mean      S.D.       mean      S.D.      F-ratio
  2832      1072       3063      630       1.10 N.S.
The mean OTES for instructors whose highest earned degree was granted by USC was 2832 with an SD of 1072, while non-USC graduates had a mean OTES of 3063 and an SD of 630. An F-ratio of 1.10 with 65 degrees of freedom was not significant and the research hypothesis of no difference was supported. Thus, it is safe to say that there is no significant difference between the teaching effectiveness of instructors who received their highest earned degree from USC and those who did not.
H6: There is no significant difference between the mean teaching effectiveness scores of instructors with the Doctorate and non-Doctorate holding instructors.
TABLE 12
SUMMARY OF THE ANALYSIS OF VARIANCE OF OVERALL TEACHING
EFFECTIVENESS SCORES FOR DOCTORATE AND
NON-DOCTORATE HOLDING INSTRUCTORS

              Non-Doctorate        Doctorate
              n = 11               n = 56
              mean      S.D.       mean      S.D.      F-ratio
              2706      840        2985      907       0.89 N.S.
Instructors who do not possess a Doctorate had a group mean OTES of 2706 with an SD of 840, while instructors who possess a Doctorate had a mean OTES of 2985 with an SD of 907. The F-ratio, 0.89 (65 degrees of freedom), was not significant. Therefore, the research hypothesis was supported. There is no difference between the teaching effectiveness of instructors who possess or do not possess the Doctorate.
Discussion. This finding tends to extend to the university level the findings of Hauss (1969) that there is no significant relationship between teaching effectiveness on the elementary and secondary level and the level of preparation beyond the B.A. degree.
H7: There is no significant difference between the mean teaching effectiveness scores of instructors of different religions.
There were five religious categories under which the respondents were grouped. They were: Jewish, Protestant, Catholic, No religion (which included "agnostic," "atheist," and "personal ethics," but no specific affiliation), and an Other category (which included four Latter-day Saints, one Quaker, one Seventh-Day Adventist, and two Unitarians). The groups' means and standard deviations can be found in Table 13. The F-ratio computed was 0.16 and was found to be not significant for four variables and 56 degrees of freedom. Therefore, the research hypothesis was supported. There is no difference between the teaching effectiveness of instructors of various religious affiliations.
Hypotheses Involving Bivariate Correlations
Simple first-order correlations were run on Hg
through The continuous variables included in Hg
through were correlated with the dependent variable
OTES, and and ^ 3 were correlated with the TES of each
class.
TABLE 13
SUMMARY OF THE ANALYSIS OF VARIANCE OF OVERALL TEACHING
EFFECTIVENESS SCORES FOR RELIGION

  Jewish       Protestant   Catholic     No Religion  Other
  n = 6        n = 35       n = 5        n = 9        n = 9
  mean  S.D.   mean  S.D.   mean  S.D.   mean  S.D.   mean  S.D.   F-ratio
  3186  604    2948  1018   2789  439    2842  830    2952  1020   0.16 N.S.

There were three separate ways used to handle the data reported. The data for H8 through H14 were run for both the Adjunct and full-time Instructors combined, with an "n" of 64. The second group of data was for full-time Instructors only. This second group included data from H15 through H21 and had an "n" of 36. The third method used to handle the data was unique. These data related to specific classes, the class size, and the instructor's self-evaluation of his effectiveness in a class. These data were related to the TES of each class, and since there were 97 classes, the "n" was 97 (see Table 14).
Age
H8: There is no significant first-order correlation between the teaching effectiveness scores and the instructor's age.
The measure used to show a relationship between age and the OTES of an instructor was his birthdate. Therefore, a negative correlation would show a positive relationship between the teaching effectiveness of an instructor and his age. With an "n" of 64 the independent variable, instructor's birthdate, was correlated with the OTES and a correlation coefficient of .00 was generated. The research hypothesis of no relationship was supported and it was found that there is no relationship between teaching effectiveness and the age of the instructor.
TABLE 14
SUMMARY OF THE BIVARIATE CORRELATIONS FOR
HYPOTHESES EIGHT (H8) THROUGH TWENTY-THREE (H23)

  Hypothesis                               Dependent         Correlation
  Number       Independent Variable        Variable     N    Coefficient
  8            Instructor birthdate        OTES         64       .00
  9            Instructor height           OTES         64       .10
  10           Religious commitment        OTES         64      -.19
  11           College Teaching            OTES         64       .01
               Experience
  12           USC Teaching Experience     OTES         64       .06
  13           Total Teaching Experience   OTES         64       .13
  14           Practical Experience        OTES         64       .11
  15           Hours administration        OTES         36       .04
  16           Hours counseling            OTES         36       .07
  17           Hours consulting            OTES         36       .12
  18           Hours preparation           OTES         36       .00
  19           Hours research              OTES         36       .25
  20           Pages authored              OTES         36       .01
  21           Salaries                    OTES         36       .19
  22           Class size                  TES          97      -.33**
  23           Self-evaluation             TES          97       .17

  ** p < .01
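The significance judgments in Table 14 follow from converting each r to a t statistic with n - 2 degrees of freedom. A sketch using the table's own r and n values; the critical values quoted are standard two-tailed t-table entries supplied here for illustration.

```python
import math

def t_from_r(r, n):
    """Convert a Pearson correlation to its t statistic (df = n - 2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Class size vs. TES (H22): r = -.33 with n = 97 classes.
t_class_size = t_from_r(-0.33, 97)
# Hours of research vs. OTES (H19): r = .25 with n = 36 instructors.
t_research = t_from_r(0.25, 36)

# Approximate two-tailed critical values from a standard t table:
# about 2.63 at p = .01 for df = 95, and about 2.03 at p = .05 for df = 34.
print(abs(t_class_size) > 2.63)   # True: significant beyond .01
print(abs(t_research) > 2.03)     # False: not significant at .05
```

This is why the modest r of -.33 is significant with 97 classes while the larger-looking pattern behind r = .25 is not with only 36 instructors: the same coefficient carries more evidential weight at a larger n.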
Height
H9: There is no significant first-order correlation between the mean teaching effectiveness scores and the height of instructors.
Instructors' heights were correlated with teaching effectiveness as measured by the OTES and a correlation coefficient of .10 was found. With an "n" of 64 this was found not to be a significant relationship. The research hypothesis was supported and it was found that there is no relationship between teaching effectiveness and the height of the instructor.
Degree of Religious Commitment
H10: There is no significant first-order correlation between the mean teaching effectiveness scores of instructors and their degree of commitment to their religions.
The independent variable (degree of religious commitment) was correlated with the dependent variable (OTES) and a correlation coefficient of -.19 was found. With an "n" of 64 the correlation coefficient was found to be not significant, and the research hypothesis of no relationship between teaching effectiveness and the degree of religious commitment was supported.
College Level Teaching Experience
H11: There is no significant first-order correlation between the mean teaching effectiveness scores and the total number of years taught on the higher education level.
A test of the relationship between teaching effectiveness as measured by the OTES and the number of years teaching on the college level was conducted. A correlation coefficient of .01 was generated and, with an "n" of 64, was found to be not a significant relationship. Therefore the research hypothesis of no relationship between years of college teaching experience and effective teaching was supported.
Years of Teaching Experience at USC
H12: There is no significant first-order correlation between the mean teaching effectiveness scores and the number of years taught at the University of Southern California.
Teaching effectiveness (OTES) was correlated with the number of years of teaching experience at the currently employed institution (USC). A correlation coefficient of .06 was found. With an "n" of 64 this correlation coefficient was found to be not a significant relationship and therefore the research hypothesis was supported. There is no relationship between teaching effectiveness and the number of years of teaching experience at the currently employed institution, which lends support to the findings of Noble (1969).
Total Teaching Experience
H13: There is no significant first-order correlation between the mean teaching effectiveness score and the total number of years taught.
With total teaching experience as the independent variable and the OTES as the dependent variable, a correlation coefficient of .13 was generated. With an "n" of 64 this relationship was found to be not significant. Therefore, the research hypothesis of no relationship between teaching effectiveness and total years of teaching experience was supported.
Discussion. H11, H12, and H13 tended to support the findings of McMullen (1927), Ryans (1960b), Hawkins (1963), and Noble (1969), who found that the amount of previous teaching did not influence teaching effectiveness.
Practical Experience
H14: There is no significant first-order correlation between teaching effectiveness scores and previous years of practical experience in the area taught.
The correlation coefficient found to exist between the OTES and the number of years of practical experience other than teaching experience was .11. This did not deviate significantly from zero correlation when the "n" was 64. Thus, the research hypothesis of no relationship between teaching effectiveness and previous practical experience was supported.
Discussion. This finding supported Hawkins' (1963)
research which showed that previous practical experience
had no significant positive or negative effect on teaching
effectiveness ratings.
Hours Spent in Administrative Duties
H15: There is no significant first-order correlation between teaching effectiveness scores and the number of hours spent holding administrative posts by instructors.
With the number of hours spent by an instructor on administrative duties as the independent variable and the OTES as the dependent variable, a correlation coefficient of .14 was found. With an "n" of 36 this correlation coefficient was found to be not significantly different from zero correlation. Therefore, the research hypothesis of no relationship was supported.
Discussion. These findings do not support the findings of McDaniel & Feldhusen (1970), who found administrative duties to be a negative predictor of teaching effectiveness.
Student Counseling
H16: There is no significant first-order correlation between teaching effectiveness scores and counseling activities.
With the number of hours spent counseling students
as the independent variable and the OTES as the dependent
variable, a correlation coefficient of .07 was generated.
With an "n" of 36 the resultant correlation coefficient was
found to be not significant. The research hypothesis of no
relationship between the number of hours spent per week in
counseling students and teaching effectiveness was
supported.
Discussion. These findings support Remmers' (1963) expert opinion that extra-class activities with students were not related to student ratings of a teacher.
Consultations
H17: There is no significant first-order correlation between teaching effectiveness scores and the number of hours spent on consulting jobs held by the instructor during the teaching semester.
When the number of hours per week spent in extra-university consultations was used as the independent variable and the OTES was used as the dependent variable, a correlation coefficient of .12 was found. This correlation coefficient was found to be not significantly different from zero when the "n" was 36. Therefore, the research hypothesis was supported that there is no relationship between teaching effectiveness and the amount of time spent on consultations.
Class Preparation
H18: There is no significant first-order correlation between the mean teaching effectiveness score and the number of hours spent in preparation for class presentations.
Hours spent in preparation for class was used as the independent variable and was correlated with the OTES (a measure of teaching effectiveness). The correlation coefficient generated was .00. Thus, the research hypothesis was supported. No relationship exists between hours spent in preparation for class and teaching effectiveness.
Hours Devoted to Research
H19: There is no significant first-order correlation between teaching effectiveness scores and the number of hours spent on non-instruction related research.
The independent variable, hours of research, was correlated with the dependent variable, the OTES. The correlation coefficient generated was .25 (n = 36), which was found to be not significant. Thus, the research hypothesis was supported. There is no relationship between the number of hours spent in research and teaching effectiveness.
Discussion. Expert opinion involving this question varied. Some felt that time spent on research would have an adverse effect on teaching effectiveness (Stickler, 1960; Martin & Berry, 1969). Others felt that time spent in research would only help make the teacher more effective (Dupont, 1961). The findings of this study refute both sets of opinions by finding no relationship between teaching effectiveness and time spent in research.
Number of Pages Authored
H20: There is no significant first-order correlation between the mean teaching effectiveness scores and the number of pages authored by the instructor during the semester.
The correlation coefficient derived from the relationship between the number of pages authored during the semester of investigation and the OTES was .01. The "n" was 36 and the correlation coefficient was found to be not significant. Therefore the research hypothesis of no relationship between teaching effectiveness and the number of pages authored was supported.
Discussion. Research findings were mixed regarding the effects of instructor authorship. Instructors who are also authors have been rated higher (Riley, Ryan & Lifshitz, 1959) and lower (McDaniel & Feldhusen, 1970). This research refutes both of these previous findings with a finding of no relationship between teaching effectiveness and authorship.
Salaries
H21: There is no significant first-order correlation between the mean teaching effectiveness scores of instructors and their base contract salaries.
When instructors' salaries were used as the independent variable and the OTES was used as the dependent variable, a correlation coefficient of .19 was generated. With an "n" of 36 this was found to be a non-significant correlation coefficient. Therefore, the research hypothesis was supported. There is no relationship between instructors' salaries and teaching effectiveness.
Discussion. The findings of this research, that there is no relationship between salary and teaching effectiveness, support the findings of Hess (1969), Hauss (1969), and Noble (1969), who reported similar findings on the elementary and secondary level.
Class Size
H22: There is no significant first-order correlation between teaching effectiveness scores and the number of students registered in class.
When class size was used as the independent variable and the instructor's TES for the class was used as the dependent variable, a correlation coefficient of -.33 was found. With an "n" of 97 a correlation coefficient of .33 was found to be significant beyond the .01 level. The research hypothesis that there is no relationship between the TES and class size was rejected.
Discussion. The findings of this research tend to support an overwhelming body of research (Cheydleur, 1945; Macomber & Siegel, 1957 & 1960; Nachman & Opochinsky, 1958; Siegel, Adams, & Macomber, 1960; Gage, 1961; Feldhusen, 1963; McKeachie, 1969) which has found a significant relationship between class size and effective teaching, favoring smaller classes.
Instructor's Self-Evaluation
H23: There is no significant first-order correlation between teaching effectiveness scores and the instructors' self-evaluations.
The independent variable, instructor's self-evaluation, was correlated with the dependent variable, the TES, and a correlation coefficient of .17 was obtained. The sample size was 97 and the correlation coefficient was found to be not significant. Therefore, the research hypothesis of no relationship between the TES and the instructor's self-evaluation was supported.
Hypotheses Involving the Multiple
Stepwise Regression Analysis
H24: There is no significant multiple correlation between the aforementioned continuous variables and teaching effectiveness of all instructors including Adjuncts.
As shown in Table 15, the best single predictor of the OTES of the total teaching sample including Adjunct Instructors is the degree of religious commitment an instructor has. The correlation coefficient is -.1940. Thus, the less commitment to religion an instructor has, the greater the OTES. The next best predictor of the OTES is total teaching experience. With this predictor entered, the multiple correlation jumps to .2615, with 7 per cent of the variance accounted for. The third best predictor is the number of years teaching at USC. This predictor variable increases the multiple correlation coefficient to .3477 and the multiple R accounts for 12 per cent of the variance. The fourth best predictor of the OTES is years of practical experience. With this predictor added, the multiple correlation increases slightly to .3543 and thus accounts for 13 per cent of the variance. Height is the fifth best predictor of the OTES and slightly increases the multiple correlation to .3610, and the multiple R accounts for 13 per cent of the variance. The final predictor computed was the birthdate of the instructor, which barely increased the multiple correlation to .3651. The variance accounted for remained at 13 per cent. The F-level was insufficient for further computation because the other variable, teaching experience at the college level, would have such a minimal effect on the multiple correlation as to be insignificant.
The significance level for a multiple correlation with an "n" of 64 is .53, which was not met. Therefore, the research hypothesis of no multiple correlation between the continuous variables of the study and the teaching effectiveness of the total staff including Adjuncts was supported.

TABLE 15
SUMMARY OF THE MULTIPLE REGRESSION ANALYSIS
FOR ALL INSTRUCTORS INCLUDING ADJUNCTS

  Step                                   Multiple               Increase
  No.    Variable Entered                R           R2         in R2
  1      Religious Commitment            0.1940      0.0376     0.0376
  2      Total Teaching Experience       0.2615      0.0684     0.0308
  3      Years Teaching at USC           0.3477      0.1209     0.0525
  4      Practical Experience            0.3543      0.1255     0.0047
  5      Height                          0.3610      0.1303     0.0048
  6      Birthdate                       0.3651      0.1333     0.0030

  F-Level Insufficient for Further Computation
  n = 64
  p < .05 = .53
  OTES = Dependent Variable
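The stepwise analyses reported in Tables 15 and 16 can be read as a forward selection that, at each step, enters the predictor producing the largest increase in R2. A sketch of that procedure on randomly generated stand-in data; the predictors and outcome here are hypothetical, not the study's variables.

```python
# Forward stepwise regression sketch (illustrative data only).
import numpy as np

rng = np.random.default_rng(0)
n = 36
X = rng.normal(size=(n, 5))              # five hypothetical predictors
y = X[:, 0] * 0.5 + rng.normal(size=n)   # hypothetical OTES-like outcome

def r_squared(X_sub, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

selected, r2 = [], 0.0
for step in range(X.shape[1]):
    # Enter the remaining predictor that raises R^2 the most.
    remaining = [j for j in range(X.shape[1]) if j not in selected]
    best = max(remaining, key=lambda j: r_squared(X[:, selected + [j]], y))
    new_r2 = r_squared(X[:, selected + [best]], y)
    print(f"step {step + 1}: add x{best}, multiple R = {new_r2 ** 0.5:.4f}, "
          f"increase in R2 = {new_r2 - r2:.4f}")
    selected.append(best)
    r2 = new_r2
```

As in the tables, the multiple R never decreases as predictors enter, and the per-step increase in R2 shrinks once the useful predictors are in; a program would stop when the F-level for entry becomes insufficient.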
H25: There is no significant multiple correlation between the aforementioned continuous variables and teaching effectiveness of full-time Instructors only.
As shown in Table 16, the best single predictor of teaching effectiveness in full-time staff members only is the number of hours spent in research. The correlation coefficient generated was .25, which accounts for 6 per cent of the variability in OTES. The second best predictor of effective teaching is the number of years at USC, which helped to generate a multiple correlation coefficient of .29 (accounting for 9 per cent of the variance). The total number of years of teaching experience was the third best predictor, with a multiple R of .49, which accounted for 24 per cent of the variability in OTES. Salary became the fourth best predictor and increased the multiple correlation to .60, thus accounting for 36 per cent of the variability in scores. The fifth best predictor of the dependent variable, OTES, was college teaching experience, which increased the multiple R to .63 and accounted for 40 per cent of the variance. The number of hours spent in administrative duties was the sixth best predictor of the dependent variable, the OTES. The multiple correlation coefficient was .65 and accounted for 42 per cent of the variability in the OTES. This figure was significant at the .05 level for an "n" of 36. The research hypothesis of no difference was rejected.

TABLE 16
SUMMARY OF THE MULTIPLE REGRESSION ANALYSIS
FOR THE FULL-TIME INSTRUCTORS ONLY

  Step                                   Multiple               Increase
  No.    Variable Entered                R           R2         in R2
  1      Hours of research               .2525       .0638      .0638
  2      Years at USC                    .2931       .0859      .0221
  3      Total teaching experience       .4875       .2376      .1518
  4      Salary                          .6014       .3617      .1241
  5      College teaching experience     .6295       .3963      .0346
  6      Hours of administration         .6465*      .4180      .0217
  7      Hours of consultation           .6553*      .4294      .0114
  8      Practical experience            .6597*      .4352      .0058
  9      Birthdate                       .6627*      .4392      .0040
  10     Number of pages authored        .6659*      .4434      .0042
  11     Hours of preparation            .6676*      .4457      .0023
  12     Height                          .6679*      .4461      .0004
  13     Religious commitment            .6683*      .4467      .0006

  F-Level insufficient for further computation
  n = 36
  p < .05 = .64
  OTES = Dependent Variable
The following variables contributed to the multiple correlation and are presented in order of their contribution: hours of consultation, previous practical experience, birthdate, number of pages authored, hours spent preparing for class, height of the instructor, and the instructor's degree of religious commitment. The multiple correlation coefficient generated with the above variables included was .67, which accounted for 45 per cent of the variability in the OTES. Therefore, the research hypothesis of no multiple correlation between teaching effectiveness and the above mentioned variables for full-time instructors only was rejected.
Supplementary Findings
When the multiple regression analysis was conducted, intercorrelations between all variables were generated. Some of the more obvious findings (such as the relationship between age and years of teaching experience) are not reported in this chapter, but can be found in Appendices J and K.
There was a significant correlation found between birthdate and religious commitment. The correlation coefficient of -.34 was significant at the .05 level with an "n" of 36.
A relationship was found to exist between height and salary. A correlation coefficient of .54 was generated, which was significant at the .001 level. This supports the findings of Feldman (1971), who found that height was a significant predictor of salary.
A correlation coefficient of .37 was generated
between height and the number of hours spent counseling
students. This was significant at the .05 level.
Finally, a negative relationship was found to exist
between hours of consulting and hours of preparation for
class. A correlation coefficient of -.38 was generated
which was found to be significant at the .05 level.
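An intercorrelation matrix of the kind behind these supplementary findings can be sketched as follows; the three columns are hypothetical stand-ins constructed so that height and salary correlate, not the study's data (which are in Appendices J and K).

```python
# Sketch of the intercorrelation matrix behind the supplementary findings.
# All column values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
height = rng.normal(70, 3, size=36)               # inches, hypothetical
salary = 400 * height + rng.normal(0, 800, 36)    # built to correlate
hours_counseling = rng.normal(5, 2, size=36)      # hypothetical

variables = np.vstack([height, salary, hours_counseling])
corr = np.corrcoef(variables)     # 3 x 3 symmetric correlation matrix
print(np.round(corr, 2))
# corr[0, 1] is the height-salary coefficient; in the study that
# coefficient was .54, significant at the .001 level.
```

Scanning the off-diagonal entries of such a matrix is how incidental relationships, like height with salary or consulting hours with preparation hours, surface alongside the hypothesized ones.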
Summary Discussion
No single criterion (with the exception of class
size) was significantly related to an effective teaching
score.
No significant multiple correlation was found between the Overall Teaching Effectiveness Score (OTES) and the variables studied when considering all instructors including Adjuncts.
A significant multiple correlation was found between OTES and research activities, various teaching experiences, and administrative time when full-time Instructors only were studied.
CHAPTER V
SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
Summary
The last university instructors whom both future and in-service teachers observe are usually instructors in the Department or School of Education. Therefore, it is imperative that Schools of Education provide effective teachers as models for future and in-service teachers.
A review of the literature indicated the potential
of selected predictor variables of teaching effectiveness
and these were used in this study.
Purpose
The study was conducted to identify selected presage
criteria (predictor variables) associated with effective
college teaching in Schools of Education.
Hypotheses
Through a review of the related literature, vari
ables were selected from which the following hypotheses
were developed.
H1: There is no significant difference between the mean teaching effectiveness scores of Adjunct Instructors and full-time Instructors.
H2: There is no significant difference between the mean teaching effectiveness scores of males and females.
H3: There is no significant difference between the mean teaching effectiveness scores of instructors with different professorial ranks.
H4: There is no significant difference between the mean teaching effectiveness scores of instructors possessing an EdD and instructors possessing a PhD.
H5: There is no significant difference between the mean teaching effectiveness scores of USC graduates and non-USC graduates.
H6: There is no significant difference between the mean teaching effectiveness scores of instructors with the Doctorate and non-Doctorate holding instructors.
H7: There is no significant difference between the mean teaching effectiveness scores of instructors of different religions.
H8: There is no significant first-order correlation between the teaching effectiveness scores and the instructor's age.
H9: There is no significant first-order correlation between the mean teaching effectiveness scores and the height of instructors.
H10: There is no significant first-order correlation between the mean teaching effectiveness scores of instructors and their degrees of commitment to their religions.
H11: There is no significant first-order correlation between the mean teaching effectiveness scores and the total number of years taught on the higher education level.
H12: There is no significant first-order correlation between the mean teaching effectiveness scores and the number of years taught at the University of Southern California.
H13: There is no significant first-order correlation between the mean teaching effectiveness score and the total number of years taught.
H14: There is no significant first-order correlation between teaching effectiveness scores and previous years of practical experience in the area taught.
H15: There is no significant first-order correlation between the teaching effectiveness score and the number of hours spent holding administrative posts by instructors.
H16: There is no significant first-order correlation between teaching effectiveness scores and counseling activities.
H17: There is no significant first-order correlation between teaching effectiveness scores and the number of hours spent on consulting jobs held by the instructor during the teaching semester.
H18: There is no significant first-order correlation between the mean teaching effectiveness score and the number of hours spent in preparation for class presentations.
H19: There is no significant first-order correlation between teaching effectiveness scores and the number of hours spent on non-instruction related research.
H20: There is no significant first-order correlation
between the mean teaching effectiveness scores and
the number of pages authored by the instructor during
the semester.
H21: There is no significant first-order correlation between the mean teaching effectiveness scores of instructors and their base contract salaries.
H22: There is no significant first-order correlation
between teaching effectiveness scores and the number
of students registered in class.
H23: There is no significant first-order correlation between teaching effectiveness scores and the instructors' self-evaluations.
H24: There is no significant multiple correlation between the aforementioned continuous variables and teaching effectiveness of all instructors including Adjuncts.
H25: There is no significant multiple correlation between the aforementioned continuous variables and teaching effectiveness of full-time instructors only.
Procedures
Sixty-four instructors and 1,942 students of the School of Education at the University of Southern California were the participants in the study. Ninety-seven classes listed in the Fall, 1971 Catalogue which were meeting in their assigned rooms were given the Faculty & Course Guide (F&CG) for the students to complete, evaluating the teaching effectiveness of their instructors. The instructors who were evaluated were to respond to a questionnaire which requested information paralleling the variables found in Hypotheses one (H1) through twenty-three (H23). A Teaching Effectiveness Score (TES) was generated from the student responses on the F&CG for each class, and an Overall Teaching Effectiveness Score (OTES)--an average teaching effectiveness score--was generated for each instructor. These scores became the dependent variables.
The investigation of the hypotheses included the following data manipulation:
1. An analysis of variance for the discrete variables (Hypotheses one through seven) with the mean OTES being the statistic used.
2. A bivariate correlation of the variables found in H8 through H21 was conducted with the OTES being the dependent variable.
3. Another bivariate correlation was conducted on the variables in H22 and H23 with the TES as the dependent variable.
4. An analysis of the mean OTES of full-time and of Adjunct Professors was conducted.
5. A stepwise multiple regression analysis was conducted on the continuous variables included in the hypotheses. One analysis was done with all Instructors including Adjuncts (H24) and one with full-time Instructors only (H25).
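The scoring scheme described above--a TES per class, averaged into one OTES per instructor--can be sketched as follows; the instructor names and score values are hypothetical.

```python
# Sketch of the TES -> OTES aggregation described in the Procedures.
# The (instructor, TES) records below are hypothetical examples.
from collections import defaultdict

class_tes = [("Smith", 3100), ("Smith", 2900), ("Jones", 2700),
             ("Jones", 2500), ("Jones", 2600)]

by_instructor = defaultdict(list)
for instructor, tes in class_tes:
    by_instructor[instructor].append(tes)

# OTES = mean TES over the instructor's classes
otes = {name: sum(scores) / len(scores)
        for name, scores in by_instructor.items()}
print(otes)   # {'Smith': 3000.0, 'Jones': 2600.0}
```

This is why the analyses carry two sample sizes: the per-class TES yields n = 97 (one record per class), while the per-instructor OTES yields n = 64 (one record per instructor).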
Findings
1. There was found to be no significant difference between the mean OTES for Adjuncts and full-time Instructors. A further analysis of the individual OTES of Adjuncts and full-time Instructors found that Adjunct Professors' scores possessed greater variability. However, the F-ratio computed was not significant.
2. There were no significant differences between
the OTES of male and female instructors.
3. There were no significant differences between
the mean OTES of instructors of different
ranks; however, Associate Professors achieved
the highest mean OTES.
4. There were no significant differences between the OTES of instructors who held the PhD or EdD.
5. There were no significant differences in the
OTES of instructors who were USC graduates and
those who were not.
6. There were no significant differences in OTES between instructors holding the Doctorate and those not holding a Doctorate.
7. There were no significant differences in the
OTES of instructors of different religions.
8. There was no significant relationship between the OTES and the age of the instructor.
9. No relationship between the OTES and height was
found.
10. No relationship between the OTES and an instructor's degree of commitment to religion was found.
11. No relationship between the OTES and years of
college level teaching experience was found.
12. No relationship between the OTES and years of experience at the currently employed institution was found.
13. No relationship between the OTES and the total
number of years of teaching experience was
found.
14. No relationship between the OTES and the number of years of practical experience in the area taught was found.
15. No relationship between the OTES and the number
of hours spent at administrative duties was
found.
16. No relationship between the OTES and the number
of hours spent counseling students was found.
17. No relationship between the OTES and hours spent on extra-university consultation assignments was found.
18. No relationship between the OTES and the number
of hours spent preparing for classes was found.
19. No relationship between the OTES and the number
of hours spent on research was found.
20. No relationship between the OTES and the number
of pages authored by the instructor was found.
21. No relationship between the OTES of instructors
and their base contract salaries was found.
22. A very significant negative relationship
between class size and the instructor’s TES was
found.
23. No relationship between the TES of an instructor and his self-evaluation was found.
24. There was no significant multiple correlation between the OTES and the aforementioned continuous variables when all Instructors, including Adjuncts, were included.
25. There was a significant multiple correlation between the OTES and the aforementioned continuous variables when only full-time Instructors were included in the multiple regression analysis.
26. Supplementary findings included a positive relationship between height and salary, and between height and hours spent counseling. A negative relationship was found between birthdate and degree of religious commitment, and between hours spent counseling students and hours spent preparing for class.
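The multiple-correlation analysis reported above — regressing the OTES on several continuous instructor variables at once and correlating the criterion with its regression prediction — can be sketched in a few lines. This is only an illustration: the predictor names, coefficients, and data below are invented for the sketch and are not the study's actual variables or results.

```python
import numpy as np

# Hypothetical data for three continuous predictors of an overall
# teaching-effectiveness score (OTES); names and values are assumed.
rng = np.random.default_rng(0)
n = 40
hours_research = rng.uniform(0, 20, n)
years_teaching = rng.uniform(1, 30, n)
salary = rng.uniform(8, 25, n)  # thousands of dollars, invented

# Criterion built with a known linear signal plus noise.
otes = 50 + 1.2 * hours_research + 0.4 * years_teaching + rng.normal(0, 2, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), hours_research, years_teaching, salary])
beta, *_ = np.linalg.lstsq(X, otes, rcond=None)
predicted = X @ beta

# The multiple correlation R is the simple correlation between the
# criterion and the prediction from all predictors jointly.
R = np.corrcoef(otes, predicted)[0, 1]
print(round(R, 3))
```

Because the invented criterion contains a strong linear signal, R here comes out high; with real data the size of R, and its significance, depends on how much criterion variance the predictors share.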
Conclusions
1. There was a greater degree of variability in the teaching effectiveness of Adjuncts than of full-time Instructors, even though this variability was found to be not significant.
2. Female instructors are as effective as male instructors.
3. Instructors of various ranks are equally effective teachers.
4. Instructors who hold the PhD are as effective as instructors who hold the EdD.
5. USC graduates are as effective as non-USC graduates. Thus, inbreeding does not affect the quality of teaching on a staff.
6. Instructors who hold the Doctorate are as
effective as those who do not hold the
Doctorate.
7. Instructors of various religions are equally
effective teachers.
8. Instructor's age was not related to effective teaching.
9. Instructor's height was not related to effective teaching.
10. Degree of commitment to religion had no relationship to effective teaching.
11. Teaching experience at the college level had
no relationship to effective teaching.
12. Teaching experience at the currently employed
institution had no relationship to teaching
effectiveness.
13. Total number of years of teaching experience
is not related to effective teaching.
14. Previous practical experience in the area
taught did not increase the instructor's
effectiveness as a teacher.
15. Hours spent in administrative posts was not
related to teaching effectiveness.
16. The number of hours an instructor spent counseling was not related to effective teaching.
17. The number of hours an instructor spent on extra-University consultations had no relationship to teaching effectiveness.
18. The number of hours of preparation for class
an instructor spent had no relationship to
effective teaching.
19. The number of hours spent in research had no
effect on teaching effectiveness.
20. The number of pages authored by an instructor
had no effect on his teaching effectiveness.
21. An instructor's salary was not related to his
effectiveness as a teacher.
22. There was a relationship between class size and
effective teaching favoring smaller classes.
23. Instructors were not able to rate their teaching ability accurately.
24. When considering all Instructors (full-time and
Adjuncts) there were no multiple predictors of
teaching effectiveness.
25. When considering full-time Instructors only,
there were multiple predictors of teaching
effectiveness and the best single predictor was
the number of hours spent in research, followed
by various types of teaching experience, salary
and hours spent on administrative duties.
26. Supplementary Conclusions:
a. The older an instructor, the more he was committed to religion.
b. Height appeared to be a predictor of
instructor's salary.
c. Height was also related to the number of
hours an instructor spent counseling
students.
d. The more time an instructor spent counseling students, the less time he spent preparing for class.
Recommendations
Student Evaluations
1. It is recommended that student evaluations of
instructor effectiveness be conducted. They
provide valuable feedback to instructors,
assist the administration in making salary,
tenure, and promotion decisions, and aid stu
dents in selecting their classes.
2. The use of home-made versions, such as the F&CG which was designed for this study, must be discouraged, as the design and validation of such instruments are costly and time consuming. A proven instrument, already in use, is more valid and reliable.
3. Include questions from proven instruments in the evaluation instrument. Good examples of questionnaires which have already been validated are those developed by the Center for Research and Development in Higher Education, Berkeley (Hildebrand, Wilson & Dienst, 1971); Cortez (1969); and the University of Michigan (McKeachie, 1969b).
4. Devise the instrument so that a short form (ten to twenty questions) of one of the above recommended instruments is used in conjunction with approximately five questions that each department designs around its specific goals and objectives.
5. Provide design formats and validation checks
for each department's use.
6. It is recommended that the University take
responsibility for the administration, data
collection, and distribution of the instrument
and its results.
7. Devise an elaborate system of administration
and data collection so as to insure student
anonymity. The student should feel confident
that his responses would not be recognized by
his instructor.
8. The results must be made available to instructors, the administration, and student leaders by the University.
9. The feedback provided by student evaluations will be used by instructors to upgrade instruction.
10. Instructors and student leaders also will use
the results in their advisement of students.
11. Limit the number of copies of the published results of the student evaluations, though everyone should be permitted access to them through advisement. This will eliminate unnecessary waste, as students use only a few of the 300 or more published pages.
12. The first time an instructor teaches a course, the results will be made available only to him (Hildebrand et al., 1971).
13. It is recommended that all instructors be allowed to respond to the evaluation and that their responses be printed with the evaluation.
Improving University Instruction
1. Since there is such variability in the scores
of Adjunct Professors, closer supervision of
the student evaluations of these instructors
must be conducted. Those instructors whose
evaluations are significantly lower than their
peers should be helped by University policy
geared towards improving instruction. If
future evaluations remain consistently poor,
these instructors must be eliminated from the
staff.
2. Since it was found that an instructor's teaching effectiveness was not related to his professional activities, it is recommended that University professors be allowed to spend their time as they see fit and not be required by the University to spend a certain amount of time conducting specific activities.
3. The University should become aware of the fact that a great degree of the variability in salaries is accounted for by height, and thus be cautious about allowing this factor to influence salaries. (Height is not significantly related to teaching effectiveness, number of pages authored, or research; therefore, its relationship to salary appears to be the sole determiner.)
4. Student evaluations should be one criterion, but not the sole criterion, for granting salary increases, promotions, and tenure.
5. Since male and female instructors are equally effective teachers, and because of the small representation of female instructors found, it is strongly recommended that the population of female instructors be increased.
Future Research
1. It is recommended that "work load" or activity studies of elementary and secondary school teachers be conducted to see if selected professional activities of teachers influence classroom effectiveness.
2. Since some of the findings of this study have supported past research (salary and experience are not related to teaching effectiveness; smaller classes are favored over larger ones; and all college instructors, good and bad, engage in the same activities), it is recommended that future research concern itself not with the above variables but with how and why height influences salary. Similar studies of height must be conducted across wider populations so as to provide greater generalizability.
3. The best single predictor of teaching effectiveness of all instructors found in the multiple regression analysis was the degree of religious commitment. Since religious commitment is in the realm of attitudes, more research must be done in finding relationships between teaching effectiveness and instructor attitudes.
4. One of the lesser purposes of this study was
to conduct research on certain taboos. This
is why sex, religion and degree of religious
commitment were included as variables. The
variable of race which the researcher would
have liked to consider could not be included
as the population of non-White instructors was
too small. It is recommended that future studies be conducted on the teaching effectiveness of instructors of various races. The
information can prove valuable as it might
affect the validity of student evaluations of
teaching effectiveness. It is also recommended
that the reactions of Black and White students
to Black and White instructors be studied and
analyzed to see what differences, if any,
exist.
REFERENCES
Adams, J. F.; Siegel, L. ; & Macomber, F. G. Recorded
student interviews in improving instruction. In
H. A. Estrin & D. M. Goode (Eds.), College and
University Teaching. Dubuque, Iowa: William
Brown Company, 1964.
American Educational Research Association. Report of the
committee on the criteria of teacher effectiveness.
Review of Educational Research, 1952, 22, 238-263.
American Psychological Association. Publication manual
of the American Psychological Association. Washing
ton, D. C.: American Psychological Association, Inc., 1967.
Astin, A. W., & Lee, C. B. T. Current practices in the evaluation and training of college teachers. The Educational Record, 1966, 47, 361-365.
Bandura, A., & Walters, R. H. Social learning and
personality development. New York: Holt, Rinehart
& Winston, 1963.
Barr, A. S. Teaching ability and its correlates.
Wisconsin studies of the measurement and prediction
of teacher effectiveness. Madison, Wisconsin:
Dunbar Publications, 1961.
Barr, A. S.; Burton, W. H.; and Brueckner, L. J. Super
vision. New York: Appleton-Century-Croft, 1938.
Beecher, D. The evaluation of teaching. New York:
Syracuse University Press, 1949.
Bloom, B. S.; Engelhart, M. D.; Furst, E. J.; Hill, W. H.
& Krathwohl, D. R. Taxonomy of educational objec
tives: The classification of educational goals:
Handbook I cognitive domain. New York: David McKay
1956.
Brain, G. Evaluating teacher effectiveness. National
Education Association Journal, February 1965, 35-36.
Brodbeck, M. Logic and scientific method in research on
teaching. In N. L. Gage (Ed.), Handbook of research
on teaching. Chicago: Rand-McNally, 1964.
Bryan, Q. R. The influence of certain characteristics of
teachers and teacher raters on the quality of formal
teacher appraisal. Unpublished doctoral disserta
tion, University of Southern California, 1962.
Bryan, R. C. Pupil rating of secondary school teachers.
New York: Teachers College, Columbia University,
1937.
Cheydleur, F. D. Criteria of effective teaching in
basic French courses. Bulletin of the University
of Wisconsin, 1945, August.
Cook, J. M., & Neville, R. F. The faculty as teachers:
A perspective on evaluation. Report 13. Washington,
D. C.: ERIC Clearinghouse on Higher Education,
one Dupont Circle, 1971.
Cortez, J. D. The design and validation of an evaluative
instrument to appraise the teaching effectiveness
of the college instructor. Dissertation Abstracts,
1967, 28, 2435-A.
Douglass, H. R., & Romine, S. Teaching load. In W. S.
Monroe (Ed.), Encyclopedia of educational research.
New York: Macmillan Co., 1950, pp. 1454-1461.
Downie, N. M., & Heath, R. W. Basic statistical methods.
(3rd ed.) Evanston: Harper & Row, 1970.
Dressel, P. L. Teaching, learning and evaluation. In
H. A. Estrin, & D. M. Goode (Eds.), College and
university teaching. Dubuque, Iowa: William Brown
Company, 1964.
Dressel, P. L. Evaluation of instruction. Journal of
Farm Economics, 1967, 49, 299.
Dressel, P. L., & Pratt, S. B. The world of higher
education. San Francisco: Jossey-Bass, 1971.
Drucker, A. J., & Remmers, H. H. Do alumni and students
differ in their attitudes towards instructors?
Journal of Educational Psychology, 1951, 42, 129-
143.
Dupont, G. E. The relationship between teaching and
research. In R. J. Deferrari (Ed.), Quality of
college teaching and staff. Washington, D. C.:
The Catholic University of America Press, 1961.
Eble, K. E. Project to improve college teaching.
Academe, 1970, 4, 3-6.
Edmondson, J. B., & Mulder, F. J. Size of class as a
factor in university instruction. Journal of
Educational Research, 1924, 9, 1-12.
Edwards, A. Techniques of attitude scale construction.
New York: Appleton-Century-Crofts, 1957.
Elliott, D. N. Characteristics and relationships of
various criteria of teaching. Unpublished doctoral
dissertation, Purdue University, 1949.
Fahey, G. L. Student rating of teaching: Some question
able assumptions. In Institute For Higher Education
(Ed.), Student evaluation of teaching: Presenta
tions at a conference. Pittsburgh: University
of Pittsburgh, 1970.
Feldhusen, J. R. The effects of small- and large-group
instruction on learning of subject matter, attitudes,
and interests. Journal of Psychology, 1963, 55,
357-362.
Feldman, S. D. The presentation of shortness in everyday
life— height and heightism in American society:
Toward a sociology of stature. (Rev. ed.) Unpub
lished paper presented to the 1971 meeting of
American Sociological Association. Denver, Colorado,
1971.
Fox, D. J. The research process in education. San
Francisco: Holt, Rinehart & Winston, 1969.
French, J. W., & Michael, W. B. Standards for educational
and psychological tests and manuals. Washington,
D. C.: American Psychological Association, 1966.
Gaff, J. G., & Wilson, R. C. The teaching environment:
A study of optimum working conditions for effective
college teaching. Berkeley, California: Center for
Research and Development in Higher Education, 1970.
Gage, N. L. The appraisal of college teaching. Journal
of Higher Education, 1961, 32, 17-22.
Gage, N. L. (Ed.) Handbook of research on teaching.
Chicago: Rand-McNally, 1963.
Gage, N. L. Paradigms for research on teaching. In
American Educational Research Association, Handbook
of research on teaching. Chicago: Rand-McNally,
1964.
Gibb, C. A. Classroom behavior of the college teacher.
Educational and Psychological Measurement, 1955, 15,
254-263.
Gray, C. E. The teaching model and evaluation of teaching
performance. Journal of Higher Education, 1969, 40,
636-642.
Guild, R. Criterion problem in instructor evaluation.
Journal of Dental Education, 1967, J3J3, 270-279.
Guilford, J. P. Fundamental statistics in psychology and
education. (4th ed.) San Francisco: McGraw-Hill,
1965.
Guilford, J. P. Psychometric methods. (2nd ed.) New
York: McGraw-Hill, 1954.
Gustad, J. W. Evaluation of teaching performance: Issues
and possibilities. In C. B. T. Lee (Ed.), Improving
college teaching. Washington, D. C.: American
Council on Education, 1967, pp. 265-281.
Gustad, J. W. Policies and practices in faculty evalua
tion. Washington, D. C.: Committee on College
Teaching, American Council on Education, 1961.
Hampton, N. D. An analysis of supervisory ratings of
elementary teachers graduated from Iowa State
Teachers College. Journal of Experimental Research,
1951, 20, 179-215.
Harvey, J. N., & Barker, D. G. Student evaluation of
teaching effectiveness. Improving College and
University Teaching. 1970, 18, 275-278.
Hatch, W. R., & Bennet, A. Effectiveness in teaching.
New Dimensions in Higher Education. Washington,
D. C.: U. S. Department of Health, Education, and
Welfare, 1965, p. 2.
Hauss, W. F. The relationship of preparation and
experience to teaching effectiveness. Unpublished
doctoral dissertation, University of Southern
California, 1969.
Hawkins, E. E. Identification of outstanding elementary
teachers by subjective and objective means. Unpub
lished doctoral dissertation, University of
Southern California, 1963.
Heilman, J. D., & Armentrout, W. D. The rating of
college teachers on ten traits by their students.
Journal of Educational Psychology, 1936, 27, 197-216.
Hess, C. A. The relationship of selected variables to
teacher effectiveness as assessed by the building
principal. Unpublished doctoral dissertation,
University of Southern California, 1969.
Hildebrand, M., & Wilson, R. C. Effective university
teaching and its evaluation. Berkeley, California:
Center for Research and Development in Higher
Education, 1970.
Hildebrand, M.; Wilson, R. C.; & Dienst, E. R. Evaluating
university teaching. Berkeley, California: Center
for Research and Development in Higher Education,
1971.
Hilgert, R. L. Teacher or researcher? The Educational
Forum, 1964, 28, 463-468.
Holmes, D. S. The teaching assessment blank: A form
for the student assessment of college instructors.
The Journal of Experimental Education, 1971, 39,
34-38.
Hudelson, E. Class size at the college level. Minnea
polis: University of Minnesota Press, 1928.
Hughes, M. M. Assessment of the quality of teaching in
elementary schools. University of Utah, 1959.
Isaac, S., & Michael, W. B. Handbook in research and
evaluation. San Diego: Robert R. Knapp Publishers,
1971.
Isaacson, R. L.; McKeachie, W. J.; & Milholland, J. E.
Correlation of teacher personality variables and
student ratings. Journal of Educational Psychology,
1963, 54, 110-111.
Isaacson, R. L.; McKeachie, W. J.; Milholland, J. E.;
Lin, Y. G.; Hofeller, M.; Baerwalt, J. W.; & Zinn,
K. L. The dimensions of student evaluations of
teaching. In W. J. McKeachie, R. L. Isaacson, &
J. E. Milholland, Research on the characteristics
of effective college teaching, cooperative research
project number OE 850. Ann Arbor: The University
of Michigan, 1964.
Jensen, L. E. A non-additive approach to the measure of
teacher effectiveness. Wisconsin studies of the
measurement and prediction of teacher effectiveness.
Madison: Dunbar Publications, 1961.
Kelly, F. J.; Beggs, D. L.; McNeil, K. A.; Eichelberger,
T.; & Lyon, J. Research design in the behavioral
sciences: Multiple regression approach. Carbondale
and Edwardsville, Illinois: Southern Illinois Press,
1969.
Kent, L. Student evaluation of teaching. In C. Lee (Ed.),
Improving college teaching. Washington, D. C.:
American Council on Education, 1967.
Kerlinger, F. N. Student evaluation of university
professors. School and Society, 1971, 99, 353-356.
Kirchner, R. P. A control factor in teacher evaluation
by students. Unpublished research paper. Lexington,
Kentucky: College of Education, University of
Kentucky, 1969.
Langen, T. D. F. Student assessment of teaching effectiveness. Improving College and University Teaching,
1966, 14, 22-25.
Lehmann, I. J. Evaluation of instruction. In P. Dressel
(Ed.), Evaluation in higher education. Boston:
Houghton Mifflin, 1961.
Lundstedt, S. Criteria for effective teaching.
Improving College and University Teaching, 1966,
14, 27-31.
Macomber, F. G., & Siegel, L. Experimental study in
instructional procedures: Final report. Oxford,
Ohio: Miami University Press, 1960.
Macomber, F. G., & Siegel, L. A study of large-group
teaching procedures. Educational Research, 1957,
38, 220-229.
Martin, T. W., & Berry, K. J. The teaching-research
dilemma: Its sources in the university setting.
The Journal of Higher Education, 1969, 40, 691-703.
McDaniel, E. D., & Feldhusen, J. F. Relationships
between faculty ratings and indexes of service and
scholarship. Paper presented at the 78th annual
convention, American Psychological Association,
1970.
McKeachie, W. J. Student ratings of faculty. AAUP
Bulletin, 1969a, 55, 439-444.
McKeachie, W. J. Teaching tips: A guidebook for the
beginning college teacher. Lexington, Mass.:
D. C. Heath, 1969b.
McKeachie, W. J., et al. Student ratings of teacher
effectiveness. American Educational Research
Journal, 1971, 8, 435-445.
McKeachie, W. J.; Isaacson, R. L.; Milholland, J. E.
Research on the characteristics of effective college
teaching, cooperative research project number OE
850. Ann Arbor: The University of Michigan, 1964.
McMullen, L. B. The service load in teacher training
institutions of the United States. New York:
Bureau of Publications, Teachers College, Columbia
University, 1927.
Meriam, J. L. Normal school education and efficiency in
teaching. New York: Teachers College, Columbia
University, 1906.
Miller, R. I. Evaluating faculty performance. San
Francisco: Jossey-Bass Publishers, 1972.
Mitzel, H. E. Teacher effectiveness. In C. Harris (Ed.),
Encyclopedia of Educational Research. (3rd ed.)
New York: Macmillan, 1960.
Morsh, J. E., & Wilder, E. W. Identifying the effective
instructor: A review of quantitative studies,
1900-1952. San Antonio: Air Force Personnel and
Training Research Center, 1954.
Morton, R. K. Evaluating college teaching. Improving
College and University Teaching, 1961, 9, 122-123.
Morton, R. K. Evaluating college teaching. In H. A.
Estrin & D. M. Goode (Eds.), College and University
Teaching, Dubuque, Iowa: William Brown Company,
1964.
Musella, D., & Rusch, R. Student opinion on college
teaching. Improving College and University
Teaching, 1968, 16, 137-140.
Nachman, M., & Opochinsky, S. The effects of different
teaching methods: A methodological study. Journal
of Educational Psychology, 1958, 49, 245-249.
Neeley, M. A teacher's view of teacher evaluation.
Improving College and University Teaching, 1968, 16,
207-209.
Newell, D. Evaluation of teachers. University of Ken
tucky: College of Dentistry Conference on
Evaluation of Teaching and Teachers. Preconference
readings, 1967.
Noble, N. L. The relationship of teacher preparation
and experience to the appraisal of classroom
effectiveness by the principal. Unpublished
doctoral dissertation, University of Southern
California, 1969.
Paulsen, F. R. Professors can improve teaching. In
H. A. Estrin & D. M. Goode (Eds.), College and
University Teaching. Dubuque, Iowa: William Brown
Company, 1964.
Peronto, A. L. Patterns of effectiveness and ineffective
ness. Wisconsin Studies of the Measurement and
Prediction. Madison, Wisconsin: Dunbar Publica
tions, 1961, pp. 88-98.
Perry, R. R. Criteria of effective teaching in an
institution of higher education. Toledo, Ohio:
Office of Institutional Research, University of
Toledo, 1969.
Phillips, B. N. The "individual" and the "classroom
group” as frames of reference in determining teacher
effectiveness. Journal of Educational Research,
1964, 58, 128-131.
Pullias, E. V. ; Lockhart, A.; Bond, M. H.; Clifton, M.;
and Miller, D. M. Toward excellence in college
teaching. Dubuque, Iowa: William Brown Company,
1963.
Quick, A. F., & Wolfe, A. D. The ideal professors.
Improving College and University Teaching, 1965, 13,
133.
Rapp, M. A. Making teaching more effective. In H. A.
Estrin & D. M. Goode (Eds.), College and University
Teaching. Dubuque, Iowa: William Brown Company,
1964.
Rayder, N. F. College student ratings of instructors.
The Journal of Experimental Education, 1968, 37,
76-81.
Remmers, H. H. Rating methods in research on teaching.
In N. L. Gage (Ed.), Handbook of research on
teaching. Chicago: Rand McNally, 1963.
Remmers, H. H., & Brandenburg, G. C. Experimental data
on the Purdue rating scale for instructors.
Educational Administration and Supervision, 1927,
13, 519-527.
Remmers, H. H.; Gage, N. L.; & Rummel, J. F. A practical
introduction to measurement and evaluation. New
York: Harper Brothers, 1960.
Remmers, H. H., & Weisbrodt, J. A. Manual of instructions
for the Purdue rating scale for instruction. West
Lafayette, Indiana: University Book Store, 1965.
Renner, R. R. A successful rating scale. Improving
College and University Teaching, 1967, 15, 12-14.
Riley, J. E.; Ryan, B. V.; & Lifshitz, M. The student
looks at his teacher. (2nd ed.) New Brunswick:
Rutgers University Press, 1959.
Rosenshine, B., & Furst, N. Research on teacher perfor
mance criteria. In B. O. Smith (Ed.), Research in
teacher education - a symposium. Englewood Cliffs,
New Jersey: Prentice-Hall, 1971.
Russell, H. E. Inter-relations of some indices of
instructor effectiveness: An exploratory study.
Unpublished doctoral dissertation. University of
Pittsburgh, 1951.
Ryans, D. G. Characteristics of teachers. Washington,
D. C.: American Council on Education, 1960a.
Ryans, D. G. Prediction of teacher effectiveness. In
C. Harris (Ed.), Encyclopedia of educational research.
(3rd ed.) 1960b, 1486-1491.
Schwartz, R. Student power - in response to the questions.
The Future Academic Community: Continuity and Change.
American Council on Education, Annual Meeting,
background papers, 1968.
Siegel, L. ; Adams, J. F.; & Macomber, F. G. Retention
of subject matter as a function of large-group
instructional procedures. Journal of Educational
Psychology, 1960, 51, 9-13.
Silberman, C. E. Crisis in the classroom: The remaking
of American education. New York: Vintage Books,
1970.
Simpson, R. H., & Brown, E. S. College learning and
teaching. Urbana, Illinois: University of Illinois
Bulletin, 1952.
Smith, R. A.; Basil, D.; Bodaken, E. M.; Cliff, N.; &
Shugarman, P. M. Summary statement of report of
ad hoc committee ". . . to ascertain the possibility
of evaluating teaching performance in the university."
Unpublished document presented to the University
Senate, University of Southern California, Spring,
1972.
Spaights, E. Students appraise teachers' methods and attitudes. Improving College and University Teaching, 1967, 15, 15-17.
Stecklein, J. E. How to measure faculty work load.
Washington, D. C.: American Council on Education,
1961.
Stewart, C. T., & Malpass, L. F. Estimates of achieve
ment and ratings of instructors. Journal of Educa
tional Research, 1966, 59, 347-350.
Stickler, W. H. Working material and bibliography on
faculty load. In K. Bunnell (Ed.), Faculty work
load. Washington, D. C.: American Council on
Education, 1960.
Tead, 0. Teacher self-evaluation. In H. A. Estrin &
D. M. Goode (Eds.), College and University Teaching.
Dubuque, Iowa: William Brown Company, 1964.
Tead, 0. Twelve suggestions for improving teaching.
In H. A. Estrin & D. M. Goode (Eds.), College and
University Teaching. Dubuque, Iowa: William Brown
Company, 1964.
Trabue, M. R. Judgments by 820 college executives of
traits desirable in lower-division college teachers.
Journal of Experimental Education, 1953, 21, 337-341.
Whiteman, S. L. Analysis of student appraisal of
instruction form. Appendix B in R. I. Miller,
Evaluating faculty performance. San Francisco:
Jossey-Bass, 1972.
Woodburne, L. S. Guidelines for student ratings of
teachers. Paper presented at the 21st National
Conference on Higher Education, Chicago, March 15,
1966.
Wrightstone, J. W. Rating methods. In C. Harris (Ed.),
Encyclopedia of Educational Research. (3rd ed.)
1960, 961-964.
Yamamoto, K., & Dizney, J. F. Eight professors - a study
on college students' preferences among their
teachers. Journal of Educational Psychology, 1966,
57, 146-150.
APPENDICES
APPENDIX A
FACULTY & COURSE GUIDE QUESTIONNAIRE
COPY
EDUCATION GRADUATE ORGANIZATION (EGO)
FACULTY AND COURSE GUIDE QUESTIONNAIRE
Please fill out this questionnaire carefully and conscien
tiously. It is potentially a very valuable tool to aid all
education students in course selection.
Directions:
a. Use #2 lead pencils only.
b. Mark on answer sheet only one response for
each item.
c. Submit only one questionnaire per class taken
this semester (you should receive one in each
class--if not you may pick up and return your
questionnaire at the EGO office, WPH Room 303).
d. Name of test: "EGO Faculty and Course Guide."
Identification Number:
a. This box is located in the upper right hand
corner of the front page.
b. Row #1 - Indicate the course department: (0)AD,
(1)CE, (2)EL, (3)EX, (4)HE, (5)IT, (6)PS, (7)SE,
(8) SP, (9)TE; for example: Psychology-6.
c. Row #2, 3 & 4 - Indicate the course number -
for example: ED PS 511
d. Row #5 - Semester: (0)Spring, (1)First summer session, (2)Second summer session, (3)Third summer session, (4)Fall.
e. Row #6 & 7 - Year semester began - for example:
1970 use '70.
f. Row #8, 9, & 10 - Indicate your instructor's three-digit number - for example: Frank Fox - 058 (separate sheet provided for each semester)
Course Evaluation:
Try to confine your responses to the course itself.
1. The course was relevant to:
1. the nature of the subject (i.e., the discipline)
2. issues or concerns of 'everyday life'
3. seeking future solutions
4. any two or three of the above
5. none of the above
2. How would you rate the subject matter of the course?
1. extremely interesting most of the time
2. interesting most of the time
3. sometimes interesting, sometimes not
4. uninteresting most of the time
5. extremely uninteresting most of the time
3. How would you rate the class assignments?
1. extremely interesting most of the time
2. interesting most of the time
3. sometimes interesting, sometimes not
4. uninteresting most of the time
5. extremely uninteresting most of the time
4. How would you rate the reading material for the course?
1. extremely interesting most of the time
2. interesting most of the time
3. sometimes interesting, sometimes not
4. uninteresting most of the time
5. extremely uninteresting most of the time
5. How important was memorizing unimportant details for
your exams?
1. extremely important
2. fairly important
3. somewhat unimportant
4. unimportant
5. no exam
6. How important was the integration and/or application
of course and related materials for your exam?
1. extremely important
2. fairly important
3. somewhat unimportant
4. unimportant
5. no exam
7. How important was creative and original thinking for
your exam?
1. extremely important
2. fairly important
3. somewhat unimportant
4. unimportant
5. no exam
8. Concerning the papers in your course - how much did you
learn from them?
1. a great deal
2. a moderate amount
3. a fairly small amount
4. nothing at all
5. no papers
9. Compared to other courses you've had in graduate school
(if this is your first graduate course, answer in relation to all college courses) this course was:
1. one of the best
2. above average
3. average
4. below average
5. one of the poorest
10. If the instructor had mimeographed his lecture notes
and had given them to you at the beginning of this
course along with reading assignments, do you feel
that you would have met the objective stated in the
course description as well as if you had attended
class?
1. yes
2. most likely
3. maybe
4. highly unlikely
5. no
11. The teacher speaks carefully enough and makes understanding him easy:
1. almost always
2. most of the time
3. sometimes
4. not much of the time
5. almost never
12. Did the teacher restate the text?
1. almost always
2. most of the time
3. sometimes
4. not much of the time
5. almost never
13. Did the teacher read directly from his notes?
1. almost always
2. most of the time
3. sometimes
4. not much of the time
5. almost never
14. Did the teacher refuse to stray from the topic for
discussion?
1. almost always
2. most of the time
3. sometimes
4. not much of the time
5. almost never
15. Did the teacher stray from the topic for irrelevant
discussion?
1. almost always
2. most of the time
3. sometimes
4. not much of the time
5. almost never
16. Did the teacher grasp the point of students' questions
and answer them clearly - understandably?
1. almost always
2. most of the time
3. sometimes
4. not much of the time
5. almost never
17. How often did your teacher stimulate independent
thinking?
1. almost always
2. most of the time
3. sometimes
4. not much of the time
5. almost never
18. To what degree did the teacher encourage student
participation?
1. almost always
2. most of the time
3. sometimes
4. not much of the time
5. almost never
19. How tolerant was your teacher of variant points of
view?
1. extremely tolerant
2. fairly tolerant
3. somewhat tolerant
4. fairly intolerant
5. extremely intolerant
20. Concerning the teacher's willingness to speak to
students outside of class time, he:
1. encouraged outside meetings
2. met with students at a mutually opportune time
3. saw students only during office hours
4. discouraged outside meetings
5. refused to meet with students
21. Most of the time the teacher's way of handling the
material was:
1. extremely interesting
2. fairly interesting
3. sometimes interesting - sometimes not
4. fairly uninteresting
5. extremely uninteresting
22. In his presentation of the course, how organized was
the teacher?
1. extremely well organized
2. well organized
3. organized
4. poorly organized
5. extremely poorly organized
23. The teacher was enthusiastic:
1. almost always
2. most of the time
3. sometimes
4. not much of the time
5. almost never
24. Concerning the written assignments in the course, to
what extent were they given careful grading and
criticism:
1. a great deal
2. a fairly large amount
3. a moderate amount
4. a small amount
5. no written assignments
25. Compared to other teachers you've had in college,
this teacher was:
1. one of the best
2. above average
3. average
4. below average
5. one of the poorest
26. In your opinion grading for the course was:
1. very fair
2. fair
3. unfair
4. very unfair
5. no grades
27. Concerning your attendance, you cut:
1. never
2. 10 per cent of the classes or less
3. 30 per cent of the classes or less
4. 50 per cent of the classes or less
5. more than 50 per cent of the classes
Your Department: answer 28 or 29
28. (1) AD, (2) CE, (3) EL, (4) EX, (5) HE
29. (1) IT, (2) PS, (3) SE, (4) SP, (5) TE
Your Goals at USC: answer 30 or 31
30. (1) Pay Credit, (2) Credential, (3) M.A. or M.S.,
(4) Pre-Comp, (5) Pre-Qual
31. (1) Pre Oral, (2) Personal, (3) Other
APPENDIX B
FACULTY AND COURSE GUIDE:
DIRECTIONS FOR STUDENTS AND ADMINISTRATORS
FACULTY AND COURSE GUIDE
DIRECTIONS FOR STUDENTS AND ADMINISTRATORS
INTRODUCTION:
"This is the Faculty and Course Guide questionnaire,
evaluation section. The results of this survey will be
available to students next semester. It takes only 10 to
15 minutes to complete this form. Questions one through
nine deal with the course and questions 10 through 26 deal
with the instructor's handling of the course."
ASK FOR TWO VOLUNTEERS TO PASS OUT AND COLLECT FACULTY AND
COURSE GUIDES, ANSWER SHEETS, AND #2 PENCILS.
PASS OUT FACULTY AND COURSE GUIDES, ANSWER SHEETS, AND #2
PENCILS.
"You will need a #2 pencil to fill in the answer
sheet. Make sure your marks are black and shiny. If you
wish to change an answer, make sure you erase completely."
"Please look at your answer sheet. Do not write
in your name; you should, however, write in the name of the
instructor and the course number. Look at the identification
number square at the upper right hand corner of the
answer sheet." (HELP EVERYONE TO FIND IT.) "The directions
on how to fill this out are on the first page under
the Faculty Coding List. Finally, note that the answer
numbers go from left to right. Are there any questions?"
"Begin!"
ASK VOLUNTEERS TO COLLECT FACULTY AND COURSE GUIDES, ANSWER
SHEETS, AND #2 PENCILS WHEN COMPLETED, AND RETURN THEM TO
THE EGO OFFICE (IF POSSIBLE) OR FILE THEM NEAR THE FRONT
DOOR OF THE CLASSROOM AND ARRANGE BEFOREHAND TO HAVE AN
ADMINISTRATOR PICK THEM UP.
APPENDIX C
IBM 1230 DOCUMENT NUMBER 510 ANSWER SHEET
[Facsimile of the IBM 1230 Document Number 510 answer sheet.
Legible elements: blanks for GRADE OR CLASS, SEX, DATE OF
BIRTH, INSTRUCTOR, and NAME OF TEST; marking directions; an
IDENTIFICATION NUMBER grid; and machine-scored response
positions numbered 1 through 150. The mark-sense grid itself
is not recoverable from the transcription.]
APPENDIX D
SAMPLE DATA DESCRIPTION: ALL RESPONDENTS
EGO FACULTY AND COURSE GUIDE QUESTIONNAIRE - ALL DEPARTMENTS
N = 2417

                    NUMBER RESPONDING               PER CENT RESPONDING
                1     2     3     4     5   OMIT     1    2    3    4    5  OMIT
Question  1   903    84   120  1179   122     9     37    3    5   49    5    0
Question  2   639   852   646   158   114     8     26   35   27    7    5    0
Question  3   484   903   646   188    95   101     20   37   27    8    4    4
Question  4   406   806   738   220   132   115     17   33   31    9    5    5
Question  5   242   386   386   737   554   112     10   16   16   30   28    5
Question  6   758   656   185   136   543   139     31   27    8    6   22    6
Question  7   308   524   377   527   544   137     13   22   16   22   23    6
Question  8   818   656   242    88   503   110     34   27   10    4   21    5
Question  9   690   718   527   185   193   104     29   30   22    8    8    4
Question 10   517   237   381   332   863    87     21   10   16   14   36    4
Question 11  1597   548   180    57    25    10     66   23    7    2    1    0
Question 12   139   225   482   475   878   218      6    9   20   20   36    9
Question 13    97   140   382   419  1311    68      4    6   16   17   54    3
Question 14   110   238   450   603   961    55      5   10   19   25   40    2
Question 15    95    82   533   676   976    55      4    3   22   28   40    2
Question 16  1415   631   244    65    43    19     59   26   10    3    2    1
Question 17   786   698   502   235   157    39     33   29   21   10    6    2
Question 18  1369   595   284   113    37    19     57   25   12    5    2    1
Question 19  1317   649   215   133    52    51     54   27    9    6    2    2
Question 20   996   990   166    64     3   198     41   41    7    3    0    8
Question 21   839   787   468   223    68    32     35   33   19    9    3    1
Question 22   873   750   539   158    63    34     36   31   22    7    3    1
Question 23  1364   697   241    69    26    20     56   29   10    3    1    1
Question 24   650   483   296   200   417   371     27   20   12    8   17   15
Question 25   948   735   412   140   138    44     39   30   17    6    6    2
Question 26   904   839    68    44   198   364     37   35    3    2    8   15
Question 27  1314   890   129    22     7    55     54   37    5    1    0    2
Question 28   326   352   454   255   182   848     13   15   19   11    8   35
Question 29   108   225   237   114   161  1572      4    9   10    5    7   65
Question 30    16   460   827   332   404   378      1   19   34   14   17   16
Question 31    51    72    84     3    11  2196      2    3    3    0    0   91
APPENDIX E
FACULTY QUESTIONNAIRE
COPY
FACULTY QUESTIONNAIRE
ID #____

Personal Data

1. Department: ____________          2. Sex: M / F
3. Date of Birth: ___/___/___        4. Height: Ft.___ In.___
5. Name of the religion into which you were born: ____________
   Degree of present commitment: (check one)
   weak   0   1   2   3   4   5   strong
6. Name of the religion to which you now subscribe, if
   different from above: ____________
   Commitment:
   weak   0   1   2   3   4   5   strong
7. Highest earned degree held: ____________
8. Name of the institution from which the above degree
   was earned: ____________
9. Academic rank: ____________     Adjunct ___ or Full time ___
10. Yearly University salary (only for your regular
    appointment, not including summer school or off-campus
    extras): ____________
11. Teaching experience: (include present year as a full
    year)
                                          Full time    Part time
    a. Elementary or Secondary School      ________     ________
    b. College or University               ________     ________
    c. Years at USC                        ________     ________
    d. Total years teaching experience     ________     ________
12. Related professional experience:
a. Elementary or Secondary School
Administration
b. College or University Administration
c. Related practical experience: ______
(job title)
Work Load--Fall Semester, 1971
(Approximate to the best of your knowledge)
University-Related
Hours per week
1. University related research: (other
than class or lectures)_________________ _______________
2. Counselling students:___________________ _______________
3. University related Administrative
Posts:___________________________________________________
4. Class time and preparation:____________________________
Extra-University Commitments
Hours per week
5. Consultations and their related
research time: __________________________________________
6. Other extra-University field
experiences: _______________
(job titles)
7. Total number of pages written as an author
of a text or article during the Fall 1971
semester:________________________________ _______________
(pages)
Self Evaluation
List courses taught this Fall Semester 1971, the total
number of semesters you have taught each course, and give
a personal evaluation of your effectiveness as an
instructor for each course.
COURSE            SEMESTERS TAUGHT      EFFECTIVENESS SCALE
(title)           (include present)     (please check)
                                        weak                          strong
1. ____________      ________           0  1  2  3  4  5  6  7  8  9  10
2. ____________      ________           0  1  2  3  4  5  6  7  8  9  10
3. ____________      ________           0  1  2  3  4  5  6  7  8  9  10
4. ____________      ________           0  1  2  3  4  5  6  7  8  9  10
APPENDIX F
DIRECTIONS AND INTRODUCTIONS TO FACULTY QUESTIONNAIRE
COPY
UNIVERSITY OF SOUTHERN CALIFORNIA SCHOOL OF EDUCATION
EDUCATIONAL SURVEY
The purpose of this survey is to secure information
about faculty members in the School of Education at the
University of Southern California. The results of this
survey will be used in a doctoral dissertation to show what
relationships exist between student ratings and the
selected variables covered by this questionnaire. You have
been chosen for this survey as a follow-up to your
participation in the first step of the research--specifically,
the Education Graduate-Students' Organization Faculty and
Course Guide.
Upon completion of the enclosed form, please mail
to: Jack Conklin, 1240-1/2 Mariposa Street, Glendale,
California, 91205. For your convenience, a self-addressed,
stamped envelope has been enclosed. Be assured that this
information will be completely confidential and no one but
the researcher will know your name.
DIRECTIONS: THE INFORMATION REQUESTED CONCERNS THE 1971
FALL SEMESTER ONLY. PLEASE COMPLETE EACH ITEM, INDICATING
"NONE" OR "NO INFORMATION" WHEN THIS IS AN APPROPRIATE
RESPONSE.
APPENDIX G
SECOND LETTER OF TRANSMITTAL
COPY
Dear Faculty Member:
Several weeks ago I sent you a questionnaire. I
have not yet received the completed questionnaire from you
and am most anxious to have this for my doctoral
dissertation.
Once again let me assure you that the results of
the questionnaire will be strictly confidential. Data
will not be used in any way to identify any individual
professor; only group data will be used.
If you still prefer to omit certain items, would
you please return the questionnaire partially completed.
Your cooperation will be greatly appreciated.
Sincerely yours,
Jack Conklin
APPENDIX H
SAMPLE DATA DESCRIPTION OF ALL CONTINUOUS VARIABLES:
MEANS, STANDARD DEVIATIONS, AND SAMPLE SIZE
SAMPLE DATA DESCRIPTION

Variables                        Mean       SD     Sample
Year of Birth                    1923       10       65
Height                             10        4       65
Religious Commitment                3        2       64
Salary                        $14,800   $3,200       35
College Teaching Experience         9        8       66
USC Teaching Experience             7        7       67
Total Teaching Experience          16       10       67
Practice Experience                 7        8       65
Hours Research                      6        6       37
Hours Counseling                   10        7       37
Hours Administration                6        6       37
Hours Preparation                  17        8       37
Hours Consulting                    9        9       36
Pages Authored                    103      190       37
OTES                             2939      896       67
APPENDIX I
OVERALL TEACHING EFFECTIVENESS SCORES FOR
ADJUNCT AND FULL-TIME INSTRUCTORS
OVERALL TEACHING EFFECTIVENESS SCORES
OF ADJUNCT INSTRUCTORS

0000*       3346
0644        3346
two SD      3420
1423        3445
1841        3531
one SD      3747
2214        3801
2265        one SD
2352        3943
2410        two SD
2555        4855
2562        5994
2807
2816        Mean 2906
mean        SD 1162
2935        n 28
2989
3140
3234
3263
3265

Group Mean 2934
Group SD 896

* -0098 was converted to 0000 for statistical purposes
OVERALL TEACHING EFFECTIVENESS SCORES
OF FULL-TIME INSTRUCTORS
1337 3084
1496 3095
1948 3122
1998 3154
one SD 3167
2278 3199
2390 3224
2441 3397
2453 3423
2504 3427
2512 3435
2582 3750
2736 3783
2784 3817
2814 one SD
2928 3932
mean 4169
2975 4313
2985
2999 Mean 2962
3045 SD 658
n 36
Group Mean 2934
Group SD 896
APPENDIX J
CORRELATION MATRIX:
FULL-TIME INSTRUCTORS ONLY
APPENDIX J
CORRELATION MATRIX: FULL-TIME INSTRUCTORS ONLY
Variable
Numbers
       1        2        3         4          5          6          7          8
1    1.000   -0.113   -0.340*   -0.505**   -0.601***  -0.651***  -0.711***  -0.470**
2             1.000   -0.244     0.544***   0.280      0.250      0.153      0.262
3                      1.000    -0.043      0.074      0.161      0.355*     0.238
4                                1.000      0.760***   0.727***   0.476**    0.333*
5                                           1.000      0.932***   0.755***   0.234
6                                                      1.000      0.732***   0.228
7                                                                 1.000      0.260
8                                                                            1.000
(Continued)
CORRELATION MATRIX: FULL-TIME INSTRUCTORS ONLY (Continued)
Variable
Numbers
       9       10       11       12       13       14       15
 1   0.163    0.003    0.024   -0.214   -0.059    0.101   -0.012
 2   0.125    0.371*  -0.076   -0.210    0.136    0.204    0.156
 3   0.177   -0.236   -0.167    0.239    0.190   -0.057   -0.013
 4   0.309    0.471**  0.038    0.037    0.108    0.173    0.196
 5   0.262    0.215   -0.011    0.281    0.008    0.167    0.040
 6   0.195    0.163   -0.033    0.212    0.030    0.156    0.047
 7  -0.033   -0.003   -0.123    0.348*  -0.022    0.106    0.136
 8  -0.112    0.104   -0.198    0.342*   0.156   -0.036    0.105
 9   1.000    0.124   -0.156   -0.060   -0.078    0.244    0.253
10            1.000   -0.038   -0.317    0.202    0.203    0.074
11                     1.000    0.078   -0.072   -0.212    0.041
12                              1.000   -0.376*  -0.085   -0.002
13                                       1.000    0.127    0.122
14                                                1.000   -0.011
15                                                         1.000
CORRELATION MATRIX: FULL-TIME INSTRUCTORS ONLY
(Continued)
V a r i a b l e
Numbers____________________ Meaning
1 Birthdate
2 Height
3 Religious Commitment
4 Salary
5 College level teaching experience
6 Teaching experience at USC
7 Total teaching experience (all levels)
8 Previous practical experience
9 Hours spent in research
10 Hours spent counseling students
11 Hours spent in administrative posts
12 Hours spent in preparation for classes
13 Hours spent in extra-University consultations
14 Number of pages authored
15 Overall Teaching Effectiveness Score
(OTES)
* p < .05 (r = .32)
** p < .01 (r = .42)
*** p < .001 (r = .52)
N = 36
APPENDIX K
CORRELATION MATRIX:
ALL INSTRUCTORS INCLUDING ADJUNCTS
APPENDIX K
CORRELATION MATRIX: ALL INSTRUCTORS INCLUDING ADJUNCTS
       1        2         3         4          5          6          7          8
1    1.000    0.017   -0.277*   -0.504**   -0.595**   -0.623**   -0.348**   -0.000
2             1.000   -0.203     0.195      0.195      0.135      0.153      0.093
3                      1.000     0.074      0.101      0.229      0.139     -0.194
4                                1.000      0.849**    0.770**    0.157      0.013
5                                           1.000      0.704**    0.104     -0.059
6                                                      1.000      0.061      0.126
7                                                                 1.000     -0.111
8                                                                            1.000

Variable
Numbers          Meaning
1                Birthdate
2                Height
3                Religious Commitment
4                College Level Teaching Experience
5                Years Teaching at USC
6                Total Teaching Experience
7                Previous Practical Experience
8                Overall Teaching Effectiveness Score (OTES)

* p < .05
** p < .01
N = 64