RELATIONSHIP BETWEEN DIFFICULTY LEVELS OF
ASSIGNED ENGLISH TEXTS AND READING ABILITY
OF COMMUNITY COLLEGE STUDENTS
by
Walter Dana Gibson
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Education)
February 1971
71-16,411
GIBSON, Walter Dana, 1934-
RELATIONSHIP BETWEEN DIFFICULTY LEVELS OF
ASSIGNED ENGLISH TEXTS AND READING ABILITY
OF COMMUNITY COLLEGE STUDENTS.
University of Southern California, Ph.D., 1971
Education, higher
University Microfilms, A Xerox Company, Ann Arbor, Michigan
Copyright © by
WALTER DANA GIBSON
1971
THIS DISSERTATION HAS BEEN MICROFILMED EXACTLY AS RECEIVED
UNIVERSITY OF SOUTHERN CALIFORNIA
THE GRADUATE SCHOOL
UNIVERSITY PARK
LOS ANGELES, CALIFORNIA 90007
This dissertation, written by Walter Dana Gibson, under the direction of his Dissertation Committee, and approved by all its members, has been presented to and accepted by The Graduate School, in partial fulfillment of requirements of the degree of

DOCTOR OF PHILOSOPHY

Dean

DISSERTATION COMMITTEE
TABLE OF CONTENTS

LIST OF TABLES
LIST OF ILLUSTRATIONS

Chapter
I. PRESENTATION OF THE PROBLEM
   Introduction
   The Problem
   Importance of the Problem
   Organization of the Remainder of the Study
II. A REVIEW OF THE RELATED LITERATURE
   Community College Enrollments
   Reading and Academic Success
   Readability Measurement
   Reading Measurement
III. PROCEDURES AND SOURCES OF DATA
   The College and Its Communities
   The Selection of Classes and Texts
   Sources of Data and Procedures Used
   Readability Analysis of Texts
   Treatment of the Data
IV. ANALYSIS OF THE DATA
   Results of the Nelson-Denny Reading Test and the Informal Reading Inventory
   Results of Readability Analysis of Texts
   Comparisons of Results
V. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
   Summary of the Study
   Findings
   Conclusions
   Educational Implications and Recommendations
   Recommendations for Further Research
BIBLIOGRAPHY
LIST OF TABLES

Table
1. Purdue English Placement Test Results for the Graduates of the Five High Schools Who Enrolled in the Community College
2. Percentage of the Sample of the Total English Enrollment
3. Distribution of the Sample According to Sex
4. Distribution of the Sample According to Chronological Age
5. Teachers' Experience and Assignment of the Sample Classes
6. Books Used in This Investigation
7. Gunning FOG Index Formula
8. Reading Level by Grade Using the Gunning FOG Index
9. Illustrative Gunning FOG Index Worksheet Using Data from Basic Writer and Reader
10. Illustrative Fry Readability Graph Worksheet Using Data from Basic Writer and Reader
11. Illustrative McLaughlin SMOG Index Worksheet Using Data from Basic Writer and Reader
12. Reading Scores Which Characterized Students as Poor, Average, and Superior in This Investigation
13. Percentages of Poor, Average, and Superior Readers as Determined by the Nelson-Denny Reading Test
14. Vocabulary Scores on the Nelson-Denny Reading Test
15. Comprehension Scores on the Nelson-Denny Reading Test
16. Total Vocabulary and Comprehension Scores on the Nelson-Denny Reading Test
17. Reading Rate Scores on the Nelson-Denny Reading Test
18. Percentage of Accuracy of Oral Reading by Students on the Informal Reading Inventory
19. Percentage of Comprehension of Oral Reading by Students on the Informal Reading Inventory
20. Number of Students at the Independent, Instructional, and Frustration Levels According to the Informal Reading Inventory
21. Readability Ratings of the 11 Required English Texts
22. Comparison of Mean Reading Scores of the Sample on the Nelson-Denny Reading Test with the Mean Readability Scores of the Texts
23. Percentages of Students Reading At, Below, or Above the Mean Readability Level of the Assigned English Text According to the Nelson-Denny Reading Test
24. Number of Students Reading At, Below, or Above the Mean Readability Level of the Assigned English Texts According to the Informal Reading Inventory
25. Distribution of Poor, Average, and Superior Readers According to Sex
26. Percentile Rank of the Sample According to the Nelson-Denny Reading Test
LIST OF ILLUSTRATIONS

Figure
1. Fry's Graph for Estimating Readability
CHAPTER I

PRESENTATION OF THE PROBLEM

Introduction
The Educational Policies Commission of 1964 proposed that the nation's goal of universal opportunity must be extended to provide at least two additional years for high school graduates. The open door policy of the community colleges seems to approach the goal of the Commission (55:1). It implements the American dream of universal education (39:3).
The community college salvages human resources (111:43) by having an open door which affords many individuals a second chance to fulfill desires for education beyond the secondary level (11:274). Today's community college students appear to be more heterogeneous than ever before. The open door policy admits students who are more representative of the total population, mentally, socially, and economically (100:453).
Myran (89:1) wrote:

   The "open door" of the community college is becoming a "double door"; with the two-way traffic involving greater penetration of the college into the life of the community, and greater participation of the community in the life of the college.
The effectiveness of a community college is best measured by its responsiveness to the changing demands of education in general and to the needs of the community in particular (101:1).
Since every student is required to take an English class, each student is affected by his ability to read the English textbook. Preliminary examination of doctoral level research concerning readability of textbooks at the community college level revealed that comparatively few studies had been conducted.
The Problem

This study was primarily concerned with comparing the reading ability of community college students with the readability level of their assigned English textbooks. The research was confined to the English Department of an urban medium-sized community college in southeast Los Angeles, California. The specific areas investigated were: (1) the difficulty levels of the assigned textbooks, (2) the reading ability of the students, and (3) the ability of the students to read and comprehend the assigned texts.
Hypotheses

To appraise the extent to which the selected English textbooks meet the needs of poor readers, average readers, and superior readers, the following hypotheses were tested.

Hypothesis I.—There is a difference between the readability level of the assigned text and the reading level of the poor readers.

Hypothesis II.—There is a difference between the readability level of the assigned text and the reading level of the average readers.

Hypothesis III.—There is no difference between the readability level of assigned texts and the reading level of the superior readers.
Questions to be answered

1. At what levels of accuracy, comprehension, and speed do the students read as determined by the Nelson-Denny Reading Test?

2. What are the difficulty levels of the textbooks assigned to the students as determined by the Gunning FOG Index Readability Formula, the McLaughlin SMOG Grading Readability Formula, and the Fry Graph for Estimating Readability?

3. How closely do the grade placements, as measured by the Nelson-Denny Reading Test, compare to the difficulty level of the assigned text as measured by the formulas named in question 2 above?

4. What percentage of the students have poor reading ability, average reading ability, or superior reading ability as defined in this study?

5. What is the oral reading level of poor reading students, average reading students, and superior reading students as measured by an Informal Reading Inventory?

6. Is sex difference related to reading ability?
Assumptions

In planning the investigation, it was necessary to make certain assumptions:

1. Although present readability formulas are still in the process of development, they demonstrate sufficient reliability and validity for the purposes of this study. It was assumed that objective comparisons of the readability levels of textbooks can be made.

2. It was assumed that the Nelson-Denny Reading Test for High Schools and Colleges had high coefficients of comparability with similar tests and would give similar grade placements.
Limitations of the study

This study mainly emphasized the relationship between the readability levels of assigned English textbooks in seven different types of English classes and the reading ability of the students enrolled in these classes. The findings in this study were necessarily influenced by the reliability and validity of the instruments used in the appraisal. The study was confined to seven selected English classes in one community college in southeast Los Angeles.
Definitions of terms

Certain terms which are important to this investigation are clarified here.

Informal reading inventory.—An informal reading inventory is a non-standardized informal appraisal of a student's reading and comprehension based on published or unpublished classroom instructional materials.
Readability.—The term readability is related to that quality of a reading selection which makes it interesting and understandable to those for whom it is written (46:442). It also refers to the ease with which a reader can read and understand the written material.
Readability index.— The readability index is the
score obtained by applying a readability formula to written
materials. It is usually measured in terms of school grade.
Readability formula.— A readability formula is a
device for quantitatively measuring the reading ability that
is required to read and understand a written selection.
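Such a device can be made concrete. The sketch below follows the published form of the Gunning FOG Index used in this study (grade level = 0.4 times the sum of average sentence length and the percentage of words of three or more syllables); the vowel-group syllable counter is a common approximation introduced here, not part of Gunning's procedure.

```python
import re

def fog_index(text):
    """Estimate a Gunning FOG grade level for a passage.

    Published form of the formula:
    0.4 * (average sentence length + per cent of words with 3+ syllables).
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # Approximate syllables as runs of vowels (a rough heuristic).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    hard_words = [w for w in words if syllables(w) >= 3]
    average_sentence_length = len(words) / len(sentences)
    per_cent_hard = 100 * len(hard_words) / len(words)
    return 0.4 * (average_sentence_length + per_cent_hard)
```

A passage of short monosyllabic sentences scores near the primary grades, while long sentences heavy with polysyllables push the index toward the college level.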
Reading ability.—The term reading ability refers to the many degrees or levels of reading proficiency as measured by the Nelson-Denny Reading Test for High Schools and Colleges.
Reading level.—The reading level is the school year and month that a student places on a standardized reading test.
Poor readers.—Community college students scoring 11.9 or below on the Nelson-Denny Reading Test were considered poor readers.

Average readers.—Community college students scoring 12.0 to 14.0 were considered average readers.

Superior readers.—Community college students scoring 14.1 and above were considered superior readers.
Importance of the Problem

Americans in all walks of life, according to Freer (41:2), find it increasingly difficult to keep up with all the reading required of them. Rice and Scofield (99:20) felt that the single most important factor leading up to the community college drop-out today was the scientific knowledge explosion of the twentieth century. Knowledge, they felt, was geometrically progressing at a rate which was beyond human comprehension. With business, industry, higher education, and other areas making giant strides in technological research, accrued learning doubles every eight years (64:1).
A 1963 study by Winther and others (124:35) revealed in their areas of research that 47 per cent of students dropped out during the freshman year, with an additional 20 per cent dropping out in the third, fourth, and fifth years of school.
A total of 381,000 students were dropped from colleges and universities in 1967 because of academic underachievement. By 1980, it is estimated that the annual failout rate will reach one million. Numerous studies of the attrition rate in colleges make it clear that fewer than 40 per cent of the students who enroll as freshmen in the community colleges will continue as students after their first semester or quarter of attendance in that or any other educational institution.
Fullerton Junior College, which is considered an average community college, found that more than 1,000 students from an enrollment of 12,000 day and evening students were reading at the ninth grade level or below (127:2). Teachers on the community college, college, and adult levels have generally depended upon other teachers to equip the students with all the necessary skills of reading.
Statements appear frequently in educational journals decrying the average student's lack of reading ability. To succeed academically, many students may need help in reading and in study skills. Such a program has been developed by Frank Christ (19) at Loyola University of Los Angeles.
A serious problem for the community colleges was pointed out by Shaw and Townsend (104:30), who discovered that many college texts were several levels beyond the independent reading level of college students. Although Smith (107:56) found that definite steps of difficulty appeared to be present among two graded elementary textbook series, this type of gradient of difficulty was not available among textbooks adopted for community college students. Very few community college teachers know the readability level of the text they adopt for classes. Publishers have a problem writing texts with low-level vocabulary and high interest for adult students. Schrader's investigation (102:37) revealed that most of the adult students believed that books of this type were too elementary or too childish.
Klare (62:14) recalled that there was reason to believe that readability was more important in voluntary reading, since the person who was not required to read would stop altogether if he could not proceed efficiently. Readability formulas provided a partial solution to the problem of fitting material to the reading level of the students. Bormuth (14:2) stated that suitability could be achieved by manipulating the materials to make them coincide with the student's reading level, and by selecting and using those materials suited to the student's comprehension ability.
The selection of texts for any course in a community college is a difficult and complex problem for the department or faculty committee. Doyle (28:1) found the selection of reading material for freshman community college English classes a perplexing task. Ferguson (33:40) and Wallace (119:42), in two separate studies of commercial textbooks, agreed that authors should give more attention to readability formulas when writing textbooks. In four separate studies, Cornelsen (21:53), Beurman (10:55), Grant (47:65), and Madrid (84:54) discovered a wide range in reading level from chapter to chapter in individual texts.
In a normal high school graduating class, as in any average high school grade level, it is estimated by many reading authorities that one-third of the class will be reading at grade level, one-third below grade level, and one-third above grade level. Since the Master Plan for Higher Education in California provides for direct admission of the academically superior students from high school to the University of California and to the California State Colleges, the average and below average readers are left to enter the community colleges. Regular college textbooks create a handicap for the community college student who does not read up to grade level.
Organization of the Remainder of the Study

This dissertation has been divided into five chapters and a bibliography. Chapter I contains a statement of the problem, with hypotheses and questions to be answered. Assumptions, limitations of the study, definitions of terms, and a discussion of the importance of the problem are also included.
Chapter II presents a review of the research literature related to the problem and a brief history of the development of readability formulas along with a discussion of the Informal Reading Inventory.
The procedures and nature of the testing situation are described in Chapter III. Also described in that chapter are the Nelson-Denny Reading Test for High Schools and Colleges, the Informal Reading Inventory, the Gunning FOG Index Readability Formula, the McLaughlin SMOG Grading Readability Formula, and the Fry Graph for Estimating Readability.
The findings are presented in Chapter IV. Chapter V contains a summary of the investigation, conclusions, general recommendations, and suggestions for further study and research.
CHAPTER II

A REVIEW OF THE RELATED LITERATURE

This chapter contains a review of research pertinent to the main topics of this project.
Community college English teachers spend many hours working on the problem of textbook selection. Each teacher hopes to find the book that will fit the departmental objectives for the course, his own criteria of student needs, and the student's reading ability. A text that meets all the objectives and criteria does not exist. Shaw and Townsend (104:31-36) observed, "We are almost totally in the dark about the use of texts and the relation of the teaching of a course to the apparent difficulty of the text." While many textbook selection committees may try to follow some type of pre-set guidelines, Angell (4:122-123) felt that textbooks were selected mainly because of an author's famous name built through his prominence in the field, and his publication of journal articles and research findings.
As English teachers become more aware of the role of readability in textbook selection as related to the reading ability of their students, a good portion of the problem may be solved.
This chapter is divided into four major sections. Section one deals with community college enrollments. Section two develops the relationship of reading to academic success, the relationship of reading training and academic success, and the predicting of academic success through reading. Section three gives an account of the factors involved in readability measurement as well as the major contributions of authors to such measurement at the secondary and college levels. The last section deals with the formal and informal measurement of reading ability.
Community College Enrollments

The United States has experienced a tremendous growth in population, population mobility, and technological knowledge. According to some predictions, the population of the country may reach a new high of 230 million by 1975, with about one-half of the population being 25 years old or less. At that time, approximately three-fourths of the total population may be living in urban centers (87:7).
As a result of the country's exploding population and its accompanying increase in technology from business, industry, and higher education, there has been a growing need to offer a system of education to supplement the traditional four-year institution. The community college appears to be providing the needed system.
One count showed that in 1967, 22 per cent of all undergraduates in the United States were enrolled in community colleges (65:4). It has been predicted that by 1978 approximately three million students will be enrolled in the 1,500 public, independent, and church-related community colleges (2:1).
California's public community colleges now enroll over 73 per cent of all full-time lower division students in the public higher educational system of the state (1:4). It has been estimated that by 1977, 85 per cent of all lower division students in the state will be enrolled in community colleges.
Even now substantial increases have been noted in the enrollments of all California community colleges (92:3). The enrollment of full-time and part-time students in the fall semester of 1967 showed an average increase of 7.2 per cent over the fall enrollment of 1966. The spring semester of 1968 had an increase of 9.1 per cent in the total enrollment over the spring semester of 1967.
Reading and Academic Success

Comparatively little reading was required of the early college students because textbooks and other source materials were rare and few in number (7:4-15). As books became more plentiful and libraries were enlarged, more reading and varied types of library use were required of the college students.
Today's students are confronted with decidedly different reading tasks than were early college students or even students of a decade ago (97:44-45). In addition to the selected text, a student now must keep up with rapidly developing areas of knowledge by reading many types of published and unpublished professional presentations in his field and related areas. The reading is not limited to local research, but includes national and international research findings as well. With all this required reading, a student not only must be able to read, but also must be able to read well to succeed academically.
Relationship of reading and academic success

Witty and Lehman (126:47-56) and others (95:324-330) noted that in the second and third decades of this century some college students were poor readers with poor comprehension and thus could not be expected to succeed in college work until the reading deficiency was corrected (12:243).
More recent studies have shown results very similar to those mentioned above. In a study of Harvard College freshmen, it was concluded that there was a positive relationship between reading and college achievement. From these findings, the researchers concluded that reading classes on the college level were justified (3:387-396). Mathews, Larsen, and Butler (85:499-505), working at a different institution, found a positive correlation between reading ability and scholarship. Using the STEP reading subtest, Endler and Steinberg (30:694-699) noted the subtest scores correlated highly with grade-point averages.
Relationship of reading training and academic success

Few people read at the rate and at the level of comprehension of which they are capable (41:5). There is evidence that students who had professional help and made a systematic effort substantially improved their reading ability. This fact has presented an interesting challenge to colleges. Broom (16:11) found that college students spend an average of 60 per cent of their study time in reading.
From a comprehensive study, it was estimated that 95 per cent of college entrants lacked adequate study skills, comprehension skills, and reading speed (51:353). After studying and testing entering freshmen for eight years, Halfter and Douglass (52:42) concluded that two-thirds of the entering college freshmen lacked reading skills necessary for academic success.
At Texas A. & M., 160 male students were tested during their freshman and junior years. These students tended to develop faster rates of reading; however, more effective gains were made by those who received specialized reading training. Furthermore, there were no gains between the freshman and junior years in comprehension unless the students had received specialized training to develop this specific skill (61:471-475). At Cornell University those who had taken reading instruction had a higher grade point average than the control group who did not take reading training. Those who had taken reading training also were less likely to drop out of college than either members of the control group or their classmates who were not involved in the study (29:198-200). McCord (79:214-215) gave I.Q. tests before and after an adult course in speed reading and found that I.Q. scores tended to be higher after the course.
In a 19-year follow-up study, Howden (58) found that those who were good readers when tested in the fifth and sixth grades were still good readers as adults, and those who were poor readers remained so; those who had been average readers formed a distribution which overlapped the scores made by both poor readers and good readers. This would tend to show that those who were poor readers and some of those who were average readers during the middle grades would need to take specialized reading courses in the community college or in college.
In the subject area of science,

   the expectation that students entering freshman science will be able to read that subject adequately is based upon a false conception of reading. A person does not learn to read once and for all, however competent and complete the materials. Different levels and different subjects demand varying skills in reading. (41:5)
A student assigned a chapter in history must do four things in order to master the assignment, according to Pauk (98:2):

1. He must read
2. He must comprehend
3. He must remember
4. He must integrate to some degree the contents of the chapter into his mental framework.
Predicting academic success

The over-all average of high school grades is used by some authorities in predicting academic success. A search of the literature shows that reading is also a reliable means of predicting academic success. When comparing Cooperative English Test scores in the School of Education of a large university, Centi (17:102-105) found the highest scholastic ranking group of students to be superior on all scores of the test over the lowest scholastic ranking student group. This also included superiority on the reading section of the examination. Using the data from 1,951 freshmen from a southern university, Webb and McCall (121:660-663) found the Cooperative English Test: Reading Examination was the best predictor of first-quarter grades.
Sweeney (112:2640-2641) reported that success in college English courses was closely related to reading proficiency, while other research found that verbal comprehension was a predictive factor of academic success (24:611-616). Using 245 students in professional education courses, Lunn (76:1490-1491) concluded that the better a student's reading comprehension was, the better were his chances of success. Cummiskey (22:812) reported reading ability influenced the final test scores in a college general psychology course.
Readability Measurement

The term "readability" has come to be used in three ways, according to Klare (62:1):

1. To indicate legibility of either handwriting or typography.
2. To indicate ease of reading due to either the interest-value or pleasantness of writing.
3. To indicate ease of understanding or comprehension due to the style of writing.

The third use of the term is the one which has been developed in this section.
Readability research in education began in the 1920's (18:142). Since that time, formulas have moved through a series of developments that finds the modern formulas based on two major factors, the word factor and the sentence factor.
The word factor

As early as 900 A.D., Jewish scholars compiled an analysis of written material by making a word frequency count of the Torah. Their study was the comparison of usual meanings and unusual meanings of words and individual ideas (75:544-552).
Word frequency counts during the late 1800's and this century have been made in many languages, with the first known Russian language study being done in 1889 by a Russian, N. A. Rubakin. He analyzed over 10,000 manuscripts written by soldiers, artisans, and farmers. From this comprehensive study he concluded that most people understood the words on his 1,500-word list, whereas the main hindrances to readability were "unfamiliar vocabulary and excessive use of long sentences" (63:36).
Kaeding's tabulation of nearly 11 million German words in 1898 showed that 50 per cent were monosyllables and only 8.4 per cent had four or more syllables (86:89). Lorge (75:545) felt that Kaeding's monumental work, even though it was done with the German language, established the word count as a method of recognizing basic vocabulary.
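A tabulation in the spirit of Kaeding's count can be sketched briefly. The vowel-group syllable rule below is an approximation introduced here for illustration; Kaeding's figures rested on an exhaustive hand count.

```python
import re
from collections import Counter

def syllable_distribution(text):
    """Per cent of words at each syllable count, as in a Kaeding-style
    tabulation (syllables approximated as runs of vowels)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(max(1, len(re.findall(r"[aeiouy]+", w))) for w in words)
    return {n: 100 * c / len(words) for n, c in sorted(counts.items())}
```

Applied to a large sample, the resulting percentages answer questions of the kind Kaeding posed: what share of running words are monosyllables, and what share carry four or more syllables.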
In 1921, predating the first recognized readability formula by two years, Edward L. Thorndike made the most comprehensive count of word frequency in written English. In his list, 10,000 words were rated according to their frequency of occurrence and subdivided into groups of 1,000 words each (113). Thorndike reasoned that the words that should be taught to children first were the ones appearing most frequently in his word sampling. In 1931, Thorndike published an expanded list of 20,000 words (114), and in 1944, in collaboration with Lorge, he extended the list to the 30,000 most commonly used words (115).
Chall (18:17) attributed the appearance in 1923 of the first quantitative readability study to Thorndike's word study published in 1921. The word list, she felt, gave formula makers an objective means of labeling vocabulary ease or difficulty. Klare and Buck (63:47) named Thorndike the father of the readability formula, because five of the first seven formulas made use of Thorndike's word list and because 10 formulas published by 1954 had used one of the three word lists as a basis of their computation.
The sentence factor

Modern readability researchers have considered the premise that the longer a sentence was, the more difficult it would be. The excessive use of long sentences increased the difficulty level of written material. Sentence length was considered a second major factor in modern readability formulas.
L. A. Sherman, a professor of English literature in the late nineteenth century, proposed a study of writing style by quantitatively analyzing sentence structure and length (105:256-268). He also noted that each author tended to have a distinctive pattern of sentence lengths and that over the years there was a decrease in sentence length. This was probably due to two main factors: the standardization of punctuation and the increase in the use of simple sentences (71:57).
G. U. Yule used the sentence length analysis to establish the authorship of the medieval literary work, De Imitatione Christi. Other works known to be by Thomas à Kempis showed a similarity in sentence length patterns; and, as a result of Yule's investigation, Thomas à Kempis was credited with being the author (130:363-390).
Since writers show consistency in sentence length
patterns, samples can be taken from various parts of a book
to be used in modern readability formulas. Flesch (34:200-201)
cautioned researchers against applying modern readability
formulas to the literature of the past. Because this
literature was consistently written with greater sentence
length, modern readability formulas using sentence length as
a computing factor cannot be used for accurate measurement.
Early formulas

Lively and Pressey are credited by Chall (18:17) as
being the authors in 1923 of the readability formula which
most closely parallels today's formulas. Some later writers
used the concepts of their formula in their own works. This
method, which took approximately three hours per book to
evaluate, was based on a systematically selected sample of
1,000 words. The formula used a diverse vocabulary range
with a value of difficulty based on the frequency rank of
Thorndike's first word list, the number of words not on the
list, as well as the number of different words (72:389-398).
j Lewerenz's first technique, published in 1929,
jproved to be one of the most unusual measures of readabil
ity. He took a systematically spaced 1,000-word sample and
j
recorded the number of words beginning with different let
ters of the alphabet. From a grand total of the entire
alphabet, he found the percentages of words beginning with
! i
!"w," "h," and "b" (easy words) and those beginning with "i" j
land "e" (hard words). Each percentage for the five letters
was then given a value from a table of norms, with the
average value of the five being the grade placement (67:
I
I |
11-16).
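Lewerenz's letter count can be sketched as follows. His table of
norms, which converted each percentage to a grade value, is not
reproduced in the source, so this sketch (whose function name is
illustrative, not Lewerenz's) stops at the five raw percentages:

```python
def lewerenz_letter_percentages(text):
    """Percentage of words beginning with each of Lewerenz's five
    letters: 'w', 'h', 'b' were taken as signals of easy words,
    'i' and 'e' of hard words. In the 1929 technique each
    percentage was then looked up in a table of norms (not
    reproduced here) and the five grade values were averaged."""
    words = [w.lower() for w in text.split() if w[0].isalpha()]
    total = len(words)
    return {letter: 100.0 * sum(1 for w in words
                                if w.startswith(letter)) / total
            for letter in "whbie"}
```

Applied to a short all-lowercase sample, the function returns one
percentage per letter; the real procedure used a 1,000-word sample.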
His second technique, which appeared a year later,
was completely different from his first technique. The new
one measured vocabulary diversity by establishing a ratio
between the total number of words used and the words
appearing in the first 500 on Clark's vocabulary list;
vocabulary difficulty was measured by establishing a ratio
between simple words derived from Anglo-Saxon and difficult
words derived from Greek and Latin (70:4-6).

Vocabulary interest, which he described as an
estimate of image-bearing or sensory words, was added in
1935 to the second technique (69:236). Four years later he
made two more additions by considering a word count of
polysyllabic words and vocabulary mass (68:151-156).
Patty and Painter published in 1931 their formula
which was calculated to measure vocabulary burden for grades
4 through 12. Their method was based upon a sample word
count from the third complete line on every fifth page. The
words were weighted according to their value from
Thorndike's first word list, after which the average
word-weighted value and the index number were calculated
(96:23-32).
Very few studies have been done on the adult level,
and those that have been completed have been done in
specialized areas. The first to use adult material for his
criteria was Ojemann, who in 1934 published his work based
entirely on parent-education articles from periodicals for
adults. Using 17 quantitative and qualitative factors, he
"found that the conceptual difficulty of the text is
probably of greater importance than rarity of vocabulary as
measured by a word list" (18:45). Ojemann did not propose
a formula, but arranged in order of difficulty 16 500-word
samples which were to be used in grading like materials
(91:11-32).
McClusky's study in 1934 was one of the few to use
speed as a readability index instead of comprehension. He
studied ideas per 100 words, average letters per word, words
per sentence, and various types of nouns. He then concluded
that easy material was written with easy vocabulary and
simple sentences, whereas difficult material was written
with unfamiliar or technical vocabulary and complex
sentences (78:276-282).
The most extensive study in the history of
readability was published in 1935 by Gray and Leary, both of
whom were greatly concerned with difficulty factors of
general-type materials in books and magazines for adults of
limited or average reading ability. The authors also
constructed an adult reading test which was developed with
both fiction and non-fiction paragraphs. Through the use of
summary questions and detail questions, they had some idea
of how well the adults read (49).
The Gray-Leary study commenced with 289 factors
related to readability, the final step being the setting up
of five factors in a regression equation for predicting
readability. The five factors were: (1) number of personal
pronouns, (2) number of different hard words, (3) average
number of words per sentence, (4) percentage of different
words, and (5) number of prepositional phrases (48). This
five-factor equation, which had a high multiple correlation
coefficient, was usually preferred by other researchers over
their eight-factor equation or any of their nine four-factor
equations.
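The form of such a regression equation can be sketched as below.
The weights and intercept here are illustrative placeholders only,
since Gray and Leary's published coefficients are not quoted in
this chapter:

```python
def gray_leary_score(personal_pronouns, different_hard_words,
                     avg_words_per_sentence, pct_different_words,
                     prepositional_phrases):
    """Weighted sum of Gray and Leary's five factors.

    NOTE: these weights and this intercept are placeholders for
    illustration; they are NOT Gray and Leary's published values.
    """
    weights = (0.1, -0.05, -0.02, -0.01, -0.03)
    intercept = 5.0
    factors = (personal_pronouns, different_hard_words,
               avg_words_per_sentence, pct_different_words,
               prepositional_phrases)
    # The prediction is a linear combination of the five factors.
    return intercept + sum(w * f for w, f in zip(weights, factors))
```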
In their analysis of magazines, Gray and Leary were
the first to note that magazines tend to follow a
distinctive pattern of sentence length (49:168-171).
Magazines were classified as easy, average, or difficult on
the basis of four factors: percentage of monosyllables,
percentage of different words, percentage of simple
sentences, and average sentence length. Magazines which
were most popular and had the widest circulation were rated
as average in difficulty, while the magazines with the
lowest circulation were rated most difficult (49:175-193).
One researcher found a correlation of .73 between the
Gray-Leary and Winnetka formulas when both were applied to
the same popular magazines, same issues, and same samples
(120:100-105).
Part of the Gray-Leary formula was used by Kessler
in 1941 as a basis for examination of 35 books for leisure
reading in high school biology (60:260-264). He used only
two factors from the Gray-Leary formula, namely sentence
length and hard words from a sample of 10 paragraphs of
approximately 100 words. He did not combine the two
elements, but the 35 books matched the grade placement
ratings from Lewerenz's list of over 2,700 books rated for
the Los Angeles School District (66).
Between 1935 and 1938 Morriss and Halverson
formulated at the Readability Laboratory, Teachers College,
Columbia University, the "idea analysis technique" (88).
The unpublished manuscript featured a method of word
analysis that put less emphasis on Thorndike's word count
and differed from other formulas in that words were analyzed
only in context. Words were classified into four
categories: (1) words learned early in life; (2) localisms
used by limited groups of the population; (3) words of
concrete ideas; and (4) abstract words. This method never
proved popular because it was unpublished and cumbersome to
use.
Modern efficient formulas

The work of Lorge (74:229-233) in 1939 was the
beginning of the modern formulas that started a trend to
emphasize efficiency and simplicity of use. Lorge started
his search for an efficient formula by using all five of the
Gray-Leary factors, to which he added a weighted index for
words obtained from Thorndike's word count. From this he
calculated a maximum value of .77, which he raised slightly
by combining parts of these elements along with parts of
Morriss and Halverson's word classification. Lorge felt
that for the small increase in prediction, this latter
combination was too cumbersome and too time-consuming to
calculate. After further work he reduced his formula to
three factors. He computed average sentence length, number
of prepositional phrases, and number of difficult words, and
then substituted these in a formula. In 1948, the numerical
constants of the formula were changed slightly to make it
more accurate (73:141-142).
When Lorge was doing his original computations, he
was able to reduce the factors to three and still maintain
the predictive accuracy. He used as a criterion 376
selections from McCall-Crabbs' Standard Test Lessons in
Reading, Books II, III, IV, and V. These 376 standardized
selections were later used by Flesch, Dale, Chall, and
others in developing their readability formulas.
In 1939, Yoakum put out in mimeographed form his
readability formula based on the use of only one factor.
The formula was similar to Lively and Pressey's published
work in 1923. It also used only a weighted index of
vocabulary difficulty. Lorge in his first work in 1939 used
the same one-factor base. Yoakum revised this formula in
1948 (129), and again it was available only in mimeographed
form. It was not until 1955 that the formula appeared in
print, in the appendix of his book Basal Reading
Instruction.
Thorndike's second word list of 20,000 words,
published in 1932, was used by Yoakum to tabulate his
weighted index of vocabulary difficulty. Words taken from a
systematically selected sample of 10 pages per book were
assigned a number value if they fell into the fourth
thousand and above in frequency. Those in the fourth
thousand were assigned a value of four, those in the fifth
thousand were assigned a value of five, etc. Grade level
difficulty could be found by referring the total of index
numbers to a conversion table found in the 1948
instructions.
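Yoakum's weighting scheme can be sketched directly. The
frequency-rank mapping passed in below is a stand-in for
Thorndike's 1932 list, and the final grade-level conversion is
omitted, since neither the list nor the 1948 table is reproduced
in the source:

```python
def yoakum_index(words, rank_of):
    """Total Yoakum's weighted index of vocabulary difficulty.

    `rank_of` maps each sampled word to its frequency rank in
    Thorndike's 20,000-word list (supplied by the caller here).
    A word in the fourth thousand scores 4, the fifth thousand 5,
    and so on; words in the first three thousand score nothing.
    Yoakum then converted the total to a grade level with his
    1948 table, which this sketch does not model."""
    total = 0
    for word in words:
        band = (rank_of[word] - 1) // 1000 + 1  # 1-based thousand-band
        if band >= 4:
            total += band
    return total
```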
Yoakum believed that a sample of 10 selected pages
evenly distributed throughout a book would give a
reliability reasonably close to the true measure. A larger
sample would be better, but the time needed to compute the
formula would be greater and not worth the slight increase
in reliability (129:1).
Textbooks from 1930 to 1945 were rated by Yoakum,
who found a definite trend toward simplification. Although
he believed in simplification of school textbooks, he
expressed concern lest the trend toward simplification go
too far. He foresaw the time when all manuscripts would be
checked for readability before publication, leading to "more
accurate and skillful use of textbooks" (128:309).
Flesch, associated with the Readability Laboratory
at Columbia University, was the only researcher to publish
his works as popular books. Two of his books, The Art of
Plain Talk in 1946 and The Art of Readable Writing in 1949,
generated a great deal of interest in readability among
laymen.
The Associated Press retained Flesch as a
readability consultant to bring clarity to news writing. In
1951 he wrote The AP Writing Manual, which was used as a
writing guide by the staff.
In the beginning, Flesch (36) tested a number of
formulas and felt they were not satisfactory for adult
materials because of the emphasis on vocabulary and the lack
of emphasis on abstract words. Using the McCall-Crabbs
Standard Test Lessons as his criterion, he published in 1943
a three-factor formula which required computation of average
sentence length, number of affixed morphemes as a measure of
abstraction, and number of personal references. After these
three factors were computed from systematically selected
samples of 100 words, the results were substituted in a
formula. His work appeared in The Art of Plain Talk in
1946; a revision of the formula was published in The Art of
Readable Writing in 1949.
In 1948 the formula was revised by separation into
two parts, Reading Ease and Human Interest. To compute the
Reading Ease formula from systematically selected 100-word
samples, Flesch counted the number of syllables per 100
words, determined the average number of words per sentence,
and then substituted these two factors in the Reading Ease
equation. The ease with which the formula could be applied
made it the most popular and most widely used readability
formula.
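The 1948 Reading Ease equation is commonly stated as RE = 206.835
minus 1.015 times (average words per sentence) minus 84.6 times
(average syllables per word). A minimal sketch follows, with a
rough vowel-group syllable estimate standing in for the hand count
Flesch assumed:

```python
import re

def count_syllables(word):
    """Rough syllable estimate: runs of vowels, with a crude
    silent-e adjustment. Only an approximation of a hand count."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Reading Ease = 206.835 - 1.015*(words per sentence)
    - 84.6*(syllables per word), the commonly cited 1948 form."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))
```

Higher scores indicate easier text; Flesch read the score against a
table of difficulty bands rather than as a grade level.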
The Human Interest section was also computed from
systematically selected 100-word samples. In this part,
Flesch counted the number of personal words in each 100
words and the number of personal sentences per 100 words,
and then substituted these two factors in a human interest
equation (37:221-233).
Flesch published two more formulas in 1954 and 1958
which were primarily for use by writers. The rather complex
formulas had an "r" count to measure realism and
concreteness, and an "e" count to measure energy, forceful
delivery, and vividness. It was necessary to use a table to
interpret the results (35, 38).
The second most frequently used readability formula
was published in 1948 by Dale and Chall (23:11-20) as the
result of their search for a simple but reliable formula
for use with adult materials. The formula was based on the
McCall-Crabbs passages and had a correlation of .70.

The formula was developed to rectify shortcomings
in the original Flesch formula, such as the unreliability of
the count of affixes. A two-factor formula was researched
by using a multiple-regression technique.

One hundred-word samples from every tenth page of a
book were selected to be rated. The average sentence length
in words and the number of words outside the Dale list of
3,000 were computed and then substituted in a formula. A
table was then consulted to give a more accurate grade level
of difficulty. Klare (62:22) rated the Dale-Chall formula
as "consistently more accurate than others in comparisons."
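The two-factor computation is commonly given with the constants
below. The familiar-word set passed in stands in for the Dale
list of 3,000 words, which is far too long to reproduce here:

```python
import re

def dale_chall_raw(text, familiar_words):
    """Raw score in the commonly cited 1948 Dale-Chall form:
    0.1579 * (percent unfamiliar words) + 0.0496 * (average
    sentence length), plus a 3.6365 adjustment when more than
    5 per cent of the words are unfamiliar. `familiar_words`
    is a caller-supplied stand-in for the Dale 3,000 list."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    pct_difficult = (100.0 * sum(1 for w in words
                                 if w not in familiar_words)
                     / len(words))
    score = (0.1579 * pct_difficult
             + 0.0496 * len(words) / len(sentences))
    if pct_difficult > 5.0:
        score += 3.6365
    return score
```

As the chapter notes, Dale and Chall then referred the raw score to
a correction table to obtain a grade level.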
In 1957, Gillie (45:214-217) presented in the
Journal of Applied Psychology a shorter version of Flesch's
Level of Abstraction readability formula. Even Flesch
agreed that his formula was complicated to compute. This
gave the Gillie version a distinct advantage, for it was
easier to apply and gave the same total score as the
original.
Farr, Jenkins, and Patterson (32:333-337) claimed
that simplification of Flesch's Reading Ease formula was
possible by changing the syllable count to a count of
one-syllable words. The authors' research on readability at
the adult level established a correlation of .91 between the
two factors. Validation was made through a score comparison
of 360 passages using the Flesch formula and the new
formula, which resulted in a correlation coefficient of .93.
They also published a table for their formula to speed
computation. Together Farr and Jenkins also aided the
movement toward making formula applications easier and
quicker by eliminating some of the tedious computations.
They developed tables for Flesch's Reading Ease and Human
Interest formulas (31:275-278).
Gunning believed there was a need for clarity and
understandability in writing. His chief concern was with
writing in general, with readability, and with other areas
of communication. Serving as a private consultant, he
alerted management to the importance of readability in
industrial and business materials, communications, and
publications.

His readability counseling firm worked with more
than 70 large daily newspapers and a number of magazines
(18:147-148). His work with the United Press wire service
resulted in greater readability of releases and a shortening
of lead paragraphs.
His readability formula, the FOG Index, and 10
principles of clear writing are found in his book The
Technique of Clear Writing, published in 1952 (50). Gunning
assigned increased values for hard words and sentence
length, and the FOG Index was the grade level required for
understanding the material. Wheeler and Smith (123:397-399)
developed in 1954 a formula similar to Gunning's in that it
used many of the same factors. Gunning's formula is
discussed in greater detail in the next chapter, as it was
one of the three formulas used in the research of this
study.
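As a preview of the fuller treatment in the next chapter, the FOG
Index is commonly stated as 0.4 times the sum of the average
sentence length and the percentage of words of three or more
syllables. A minimal sketch, again using a rough vowel-group
syllable estimate and omitting Gunning's full counting rules (for
example, his exclusions for proper nouns and compound words):

```python
import re

def fog_index(text):
    """FOG Index in its commonly cited form:
    0.4 * (average sentence length
           + percent of words with three or more syllables)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # Rough polysyllable test: three or more vowel groups.
    hard = sum(1 for w in words
               if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * hard / len(words))
```

The result is read directly as the school grade level required to
understand the material.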
In 1943, McElroy (81:11-13) made available to the
clients of Readability Associates a Fog Count which closely
resembled Flesch's Reading Ease formula and Gunning's FOG
Index. While at first his count of separate sounds might
seem to be based on phonetic sounds, a close analysis would
disclose that it was a simple count of syllables. The
author claimed that anyone could apply the formula in five
minutes if he could speak English and count to 40. The Fog
Count could be made without the use of word lists, charts,
or tables (83).

McElroy's practical application of readability in
industry through Readability Associates resulted in a
greater awareness of the need for clear and understandable
writing.
In 1953, Forbes and Cottle (40:185-190) made their
contribution to the readability field by publishing a
specialized formula used to check the readability of
standardized psychological tests and inventories such as
intelligence, personality, interest, and reading. They
measured the difficulty of the instructions as well as the
test by taking 100-word samples from the beginning, middle,
and end. They gave weighted value to the words above the
4,000 most common as found in the 1942 Thorndike Junior
Century Dictionary. A table yielded the corresponding grade
level.

The new formula was found to correlate .95 with the
mean of five popular readability formulas when using 27
standardized tests. The Forbes-Cottle formula was easy to
apply and greatly reduced computation time.
Current formulas

Fry developed his Readability Graph in Uganda and
first presented it to British readers in 1963. A special
revision for use in the United States was published in 1968.
Omitting proper nouns, the formula used 100-word samples
selected at random from the beginning, middle, and end of a
book. Great variability in sentence length or in the
syllable count for the three selections required the user to
select several more samples to be averaged in before using
the graph. Two factors were used in the count: the average
sentence length and the average number of syllables. The
approximate grade level was then found by plotting both
averages on the Readability Graph.
The Readability Graph correlated .94 with the
Dale-Chall formula, .96 with the Flesch formula, and .98
with the SRA formula. Fry's Readability Graph will be
discussed in greater depth in the next chapter, as it was
one of the three readability formulas used in this study
(43:513-578, 575-578).
In 1969 McLaughlin was one of the first readability
researchers to mention the use of a computer in his working
out of the regression equation for his SMOG Grading formula.
From 30 selected sentences (10 from the beginning, 10 from
the middle, and 10 near the end of the written material),
McLaughlin counted every word with three or more syllables.
Then he estimated, to the nearest perfect square, the square
root of the number of polysyllabic words counted. By adding
three to the approximate square root, he arrived at the
reading grade a person must have reached to fully understand
the text. This formula was the last of the three used in
assessing readability in this study and is discussed in
greater detail in Chapter III (83:639-646).
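The counting procedure McLaughlin describes can be sketched as
follows, with a rough vowel-group syllable estimate standing in
for a hand count (his procedure assumes 30 sentences; shorter
inputs here are for illustration only):

```python
import math
import re

def smog_grade(sentences):
    """SMOG Grading as described: count the words of three or
    more syllables in the sampled sentences, take the square
    root of the nearest perfect square, and add three."""
    poly = 0
    for sentence in sentences:
        for word in re.findall(r"[A-Za-z']+", sentence):
            if len(re.findall(r"[aeiouy]+", word.lower())) >= 3:
                poly += 1
    root = math.isqrt(poly)
    # Round to whichever perfect square is nearer the count.
    if poly - root * root > (root + 1) ** 2 - poly:
        root += 1
    return 3 + root
```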
Reading Measurement

In order for the teacher to become aware of his
pupils' needs in reading, he must utilize certain devices,
techniques, and procedures. These means of appraisal make
up the process of measurement. Measurement of reading
ability is an essential part of reading instruction, and the
methods of measurement of reading abilities have progressed
and developed along with the methods of instruction itself.

In an examination of the measurement of reading
ability, one finds two types of instrumentation, the
standardized reading test and the informal reading test.
Both types of reading tests have improved to the present
level of sophistication.
Standardized tests

Standardized tests are measuring devices which
include norms or standards of achievement for a specific
age, series of ages, grade level, or combinations of these.
They provide instructions for administration and scoring
(117:309). These tests measure specific reading skills
(110:132).

Standardized reading tests are either the survey
type or the diagnostic type. The survey type tests specific
acquired reading skills which determine a pupil's level of
achievement in such basic reading skills as vocabulary,
comprehension, and speed. The diagnostic type tests
readiness for future learning as well as difficulties in
learning by showing the student's strengths and weaknesses
(57:166).
The 15-year period from 1915 to 1930 was called the
"boom" period in test development. During this time,
standardized tests were developed for all the subject areas
(116:5). In 1915, the first standardized reading test, the
Gray Standardized Oral Reading Paragraphs, was published.
This was soon followed by more reading tests, mainly of the
silent reading type (108:157).
In the history of standardized test development,
reading was one of the last areas to have a standardized
test. The reason for this has been stated by Smith:

     . . . oral reading procedure was the only one in
     general use, and oral reading proved to be an unwieldy
     and uneconomical product to measure by means of
     standardized group tests. Furthermore, the entire
     subject of reading was so complicated that it was
     difficult to analyze it into elements which seemed
     sufficiently significant to warrant testing. This very
     complication challenged the test makers, who directed
     their attention to this subject and succeeded
     eventually in making rather clear-cut analyses, in
     which speed and comprehension in silent reading stood
     out as highly important and at the same time testable
     features of the reading process. (108:161)
Courtis discussed the first attempt to determine
standard scores for some of the measurable elements of
reading in the Fourteenth Yearbook of the National Society
for the Study of Education. In the same year Starch
reported a silent reading test which he had devised. This
test named the chief elements of reading: "(1) the
comprehension of the matter read, (2) the speed of reading,
and (3) the correctness of the pronunciation" (108:161).
Once the elements of reading had been identified,
many more reading tests appeared. Among these were the
Brown Silent Reading Test, the Kansas Silent Reading Test,
Courtis's Silent Reading Tests, and Monroe's Standardized
Silent Reading Test. The wide use of these tests was a
powerful factor in stressing silent reading in the classroom
(108:161-162).
Much research was carried out to determine more
about the accomplishments and the problems of educational
measurement. This scientific investigation entered into the
area of reading tests, but not before the tests were already
published and in use. Many of these investigations
concerned themselves with the standardization and
application of reading tests. Most of these studies were
carried out in school classrooms (108:186).
Harris (53) in the 1956 edition of How to Increase
Reading Ability included an annotated list of over 50
popular reading tests taken from a more comprehensive list
in the Fifth Mental Measurements Yearbook. In the 1961
edition (54), he included over 16 pages of reading tests
with annotations.
Informal reading tests

As early as the fifth century B.C., Socrates used
the informal oral method to test his students (109:12).
Oral testing was the primary method of testing until the end
of the nineteenth century. Under this system the teacher or
examiner asked a question, the student answered it, and the
examiner arrived at an immediate subjective evaluation of
the answer (116:1).
With oral recitation testing as a precedent, it is
natural that when printed matter became more available, oral
reading would be used to evaluate a student's reading
ability. Horace Mann made an informal appraisal of
students' oral reading ability by having them read from the
newspaper (56:158).
By 1920, a list of errors was used in rating oral
reading. This list was similar to today's rating scales in
that it included insertions, omissions, mispronunciations,
substitutions, and repetitions as errors (122:308-312).
Kender describes the informal reading test:

     An informal reading test is one which evaluates an
     individual's performance in reference to his own
     ability, in contrast to the standardized test which
     compares the individual's performance to the
     performance of others. Such a test is constructed from
     functional materials and can be published or
     unpublished. An informal reading test yields
     information concerning the level at which a student can
     read independent of instruction, the level at which he
     can profit from instruction, and the highest level he
     can understand when someone reads or talks to him. In
     addition, an informal reading test yields information
     concerning a student's particular strengths and
     weaknesses as they are related to his total reading
     ability. (59:3-4)
The Informal Reading Inventory (8:438-487), which
has appeared in the numerous editions of Betts' Foundations
of Reading Instruction, has become the most popular of the
informal reading appraisal instruments. He rated the
student as having achieved the independent level when the
student had 99 per cent accuracy in pronunciation and at
least 90 per cent accuracy in comprehension. The
instructional level was attained with 95 per cent accuracy
in pronunciation and 75 per cent accuracy of comprehension.
The frustration level was reached when the student had 90
per cent accuracy in pronunciation with 50 per cent or less
in accuracy of comprehension. The administration of the
Informal Reading Inventory is discussed in detail in the
next chapter, as it is one of the appraisal instruments used
in this investigation.
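Betts' cut points as reported above can be read as a simple
classifier. The "between levels" label for scores falling outside
the published cut points is an assumption of this sketch; Betts'
fuller administration rules are not modeled:

```python
def betts_level(pct_pronunciation, pct_comprehension):
    """Classify a reader against Betts' criteria as reported:
    independent   = 99% pronunciation, at least 90% comprehension;
    instructional = 95% pronunciation, 75% comprehension;
    frustration   = 90% pronunciation, 50% or less comprehension.
    Scores between cut points get an assumed 'between levels'."""
    if pct_pronunciation >= 99 and pct_comprehension >= 90:
        return "independent"
    if pct_pronunciation >= 95 and pct_comprehension >= 75:
        return "instructional"
    if pct_pronunciation <= 90 and pct_comprehension <= 50:
        return "frustration"
    return "between levels"
```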
Smith stated, "By rule of thumb, the independent
level may be thought of as one grade below the instructional
level and the frustration level one grade above it"
(106:10).
Using 10 paragraphs of increasing difficulty, Fry
developed a scale to appraise the student's independent,
instructional, and frustration levels (42). Dolch has
proposed two methods for informal testing of reading. One
method calls for reading out of any book available
(27:124-125, 165) and the other has been developed in a
printed form for use with his list of 220 basic sight words
(26:175-184).

Austin (5:338-345), Botel (15), and Betts (9) rate
informal reading as useful in evaluating a student's
performance.
CHAPTER III

PROCEDURES AND SOURCES OF DATA

In Chapter I the problem for this investigation was
explained and discussed. The relationship of reading to
academic success in college, the measurement of the
readability of books, and the formal and informal
measurement of students' reading ability as found in the
research were developed in Chapter II.
This chapter contains information on: (1) the
college and its communities, (2) the selection of the
classes and texts to be measured, (3) the standardized test
used to measure the reading ability of the students in the
selected English classes and the informal measure used to
evaluate reading levels of selected students, (4) the three
readability formulas used to appraise the readability levels
of the texts, and (5) the treatment of the data.
The College and Its Communities

     The college with its 83-acre campus in the south
     central area of greater Los Angeles serves a district
     in which there are many minority group families. With
     a student body of about 5,000, the college estimates
     that between 40 and 50 percent of its students are
     disadvantaged, and that this group is predominantly
     black. (6:36)
Transfer credit courses, occupational and technical
programs, and foundation classes compose the schedule of
classes. The standard college grading system of A for
superior work, B for good work, C for average work, D for
inferior work, and F for failing work is followed. Auditing
of classes is not permitted. A policy of probation and
dismissal has been adopted by the Board of Trustees after
being established under Section 131 of Title V of the
California Administrative Code (20:34-35).
The community college district is composed of three
unified school districts, two of which have one high school
each, while the other has three high schools.

In the district with three high schools (City A),
the city population is highly transient, with 45 per cent of
the people having a residency of less than eight years. The
city is 61 per cent black. Sixty per cent of the labor
force works outside the city, and the median income is
$4,700 (25).
High school 1, with a predominantly black
enrollment, had a drop-out rate of 23.4 per cent, and the
tenth-grade reading scores were the lowest in the state. On
the Purdue English Placement Test given at the college
(Table 1), 55 per cent of the graduates of high school 1
scored below the 10th percentile, 34 per cent between the
10th and 42nd percentile, and 10 per cent above the 42nd
percentile (25:19).
At high school 2, with a predominantly black
enrollment, 67 per cent scored below the 10th percentile on
the Purdue Test, 25 per cent between the 10th and 42nd
percentile, and 6 per cent above the 42nd percentile.

High school 3, with a school enrollment of 50 per
cent black and the remainder white with a small percentage
of brown, had 30 per cent below the 10th percentile, 50 per
cent between the 10th and 42nd percentile, and 18 per cent
above the 42nd percentile.
The second city (City B), with a population of
about 50,000, has very little manufacturing and could be
considered a "bedroom community," as most of the populace
work outside the city. Most dwellings are single-family;
there are a few apartment houses, but more are beginning to
appear (77). In the one high school the enrollment of 2,300
was predominantly white; 236 students had Spanish surnames,
and one student was black. On the Purdue Test, 19 per cent
scored below the 10th percentile, 42 per cent between the
10th and 42nd percentile, and 38 per cent above the 42nd
percentile.

                         TABLE 1

      PURDUE ENGLISH PLACEMENT TEST RESULTS FOR THE
         GRADUATES OF THE FIVE HIGH SCHOOLS WHO
            ENROLLED IN THE COMMUNITY COLLEGE

                                Between 10th
  City and School   Below 10th    and 42nd    Above 42nd
                    Percentile   Percentile   Percentile

  City A
    High School 1       55           34           10
    High School 2       67           25            6
    High School 3       30           50           18

  City B
    High School 4       19           42           38

  City C
    High School 5       26           40           33
The third city (City C), which has no city taxes,
is zoned one-third industrial, one-third commercial, and
one-third residential. The average income of the
approximately 40,000 residents was $6,057 (94).

The population of the one high school (high school
5) was predominantly white; 20 per cent of the students had
Spanish surnames, and 10 students were black. Twenty-six
per cent of the students enrolling at the community college
scored below the 10th percentile on the Purdue English
Placement Test, 40 per cent scored between the 10th and the
42nd percentile, and 33 per cent scored above the 42nd
percentile.
I
i
The Selection of Classes and Texts

Separate preliminary discussions were held with the Dean of Instruction, the Dean of the Evening Division, and the Chairman of the English Department. Although the research project had been approved by the college and was ready to commence, it was decided to postpone the testing for eight weeks. This delay would allow the instructors four weeks for review and final examinations without interruption at the end of the fall semester and four weeks at the beginning of the spring semester for the organization of classes.
The sample

A sample of convenience was used in the selection of the English classes to be used in this investigation. A chart of the different types of English classes meeting during the spring semester, along with the number of sections and their meeting times, was drawn up. From this chart, the Dean of the Evening Division and the Chairman of the English Department selected the seven classes of differing types to be used in the investigation. Remedial reading and developmental reading classes were excluded from the list. English Literature 41b was also excluded for administrative reasons.
Table 2 is an analysis of the seven classes used. There were a total of 21 sections of English classes available, with only one section in American Literature, two sections in Communications 201, three sections each in Spelling-Vocabulary and Reading and Composition B, and four sections in each of the remaining three types of classes.

TABLE 2

PERCENTAGE OF THE SAMPLE OF THE TOTAL ENGLISH ENROLLMENT

                                           Total       Sample    Percentage
Class                         Sections   Enrollment    Number     of Total

Communications 200                4         135          26          19
Communications 201                2          74          34          46
Spelling-Vocabulary               3          86          25          29
Grammar A                         4         146          35          24
Reading and Composition A*        4         150          30          20
Reading and Composition B*        3          98          28          29
American Literature*              1          23          22          96

Total                            21         712         200          28

*College transfer credit classes.
The student enrollment in the sections ranged from 23 in American Literature to 150 in Reading and Composition A. The number of students in the sample ranged from 22 in American Literature to 35 in Grammar A, for a grand total of 200. The percentage of the total of the sample ranged from 19 per cent in Communications 200 to 96 per cent in American Literature.

In summary, of the 21 sections of the seven classes available for inclusion in the sample, one-third, or seven sections, were used. Out of the total English enrollment of 712 students, 200, or 28 per cent of the total, were tested.

An analysis of Table 3, which gives the number and percentage of males and females in each class, shows an almost even distribution of males and females in the total sample of 200, with 52 per cent or 103 males and 48 per cent or 97 females. Three classes had more females, one was evenly divided, and in three classes the number of males dominated. In three of the basic type English classes there were more females, while in the three college transfer classes, one was evenly divided and the other two had more males. Both the number of males and females in the classes ranged from 36 per cent to 64 per cent.

TABLE 3

DISTRIBUTION OF THE SAMPLE ACCORDING TO SEX

                                    Male                Female
Class                        Number  Percentage   Number  Percentage   Total

Communications 200             12       46          14       54          26
Communications 201             16       47          18       53          34
Spelling-Vocabulary             9       36          16       64          25
Grammar A                      20       57          15       43          35
Reading and Composition A      15       50          15       50          30
Reading and Composition B      18       64          10       36          28
American Literature            13       59           9       41          22

Total                         103       52          97       48         200
Table 4 represents the distribution of the sample according to chronological age. The means of the classes ranged from 26 years to 30 years 11 months, with the youngest student in any of the seven classes being 18 years of age and the oldest slightly over 53 years of age.

TABLE 4

DISTRIBUTION OF THE SAMPLE ACCORDING TO CHRONOLOGICAL AGE

                                    Mean                  Range
Class                          Years  Months    (Years-Months)

Communications 200              30      11       18-6  to  50-0
Communications 201              26       1       18-0  to  45-1
Spelling-Vocabulary             26       2       18-1  to  53-2
Grammar A                       25       1       18-0  to  50-0
Reading and Composition A       29       9       18-6  to  44-0
Reading and Composition B       26       0       18-4  to  41-6
American Literature             28       8       20-3  to  45-9

The teachers

The number of years of teaching experience of the seven teachers (Table 5) ranged from five years to 29 years, with a mean teaching experience of 18.3 years. Four of the seven teachers were full-time teachers at the college and three were part-time instructors. One of the part-time instructors was a full-time high school English teacher, one was a high school director of student activities and author, while the third was an attorney-at-law with a teaching background.

TABLE 5

TEACHERS' EXPERIENCE AND ASSIGNMENT OF THE SAMPLE CLASSES

                               Years of Teaching
Class                             Experience       Full-time   Part-time

Communications 200                    27               X
Communications 201                     5                           X
Spelling-Vocabulary                   19               X
Grammar A                             16                           X
Reading and Composition A             20               X
Reading and Composition B             29               X
American Literature                   12                           X

The classes

The catalogue description of the seven selected classes is as follows:

200 - COMMUNICATIONS - 3 units, 3 hours.
Prerequisite: None. This non-transferrable course in communications is planned to qualify students to
express their ideas more efficiently in oral and
written English. The student is expected to become
more proficient in the use of the dictionary, to
acquire skill in the pronunciation and spelling of
words, to increase his vocabulary, to learn the
essentials of grammar necessary for accurate expression, to master sentence structure, to learn
to write satisfactory paragraphs, and to acquire
efficiency and pleasure in reading.
201 - COMMUNICATIONS - 3 units, 3 hours.
Prerequisite: Communications 200. Emphasis is
placed on developing writing skills and on reading
material of increasingly varied and complex types.
D - ENGLISH (Spelling and Vocabulary) - 2 units, 2
hours.
Prerequisite: None. English D supplements English
A by helping the student gain mastery of the words
he needs in order to communicate with others.
Analysis of the cause of difficulties and drill to
provide competence in spelling are an important
part of the course. The student uses both direct
and indirect methods to acquire a suitable vocabulary and to read for precise meanings.
A - ENGLISH (Grammar) - 3 units, 3 hours.
Prerequisite: Designated score in the English
Placement test or approval of counselor. The student learns the mechanics of English expression.
In addition to drill in grammar and punctuation,
the student spends approximately one-third of the
course practicing the development of simple paragraphs.
31A - READING AND COMPOSITION - 3 units, 3 hours.
Prerequisite: Pass Subject A examination or English
A. English 31A, the basic English course required
in every college and university, is designed for the
mature student who needs practice in presenting his
ideas and opinions, as well as factual information,
in an accurate, clear and understanding manner. This
course gives help in solving problems of research and
in writing term papers for other departments. It
also trains the student to evaluate non-fiction of
various types. U.C.L.A. equivalent, English 1A.
31B - READING AND COMPOSITION - 3 units, 3 hours.
Prerequisite: English 31A. This course stresses the value of the humanities as illustrated in carefully selected examples of the short story, the novel, the drama, and poetry. The student acquires an appreciation of the distinctive technical characteristics of each type of literature. He also learns how the principles of good composition govern effective writing as exemplified in literary masterpieces. The reading approach to writing is definitely emphasized. The ideas discovered in the readings serve as topics for many experiences in purposeful writing. U.C.L.A. equivalent, English 1B.
45B - AMERICAN LITERATURE - 3 units, 3 hours.
Prerequisite: English 31AB. The second semester takes up the study of Transcendentalism, with reminders of the contributions of Emerson and Thoreau. Walt Whitman and Emily Dickinson are particularly emphasized, and strains of the earlier Puritanism, Transcendentalism, and Symbolism are traced through their works, covering the period from the Civil War to the present. U.C.L.A. equivalent, English 30B. (20:79-82)
The schedule of testing was arranged so that the Nelson-Denny Reading Test was given to six of the selected classes in the span of one week, with the remaining class being tested the following week.

A letter was sent to each of the faculty members involved. The letter explained (1) the purposes for the testing and that the investigator would administer and score the test, (2) that each student would receive his results on the test, and (3) that the teacher would receive the class reading score results as well as the findings of the readability analysis of the required texts.

The letter was followed by a personal contact with each of the seven consenting faculty members to assure as smooth a testing program as possible with a minimum interruption of classroom procedure. Preliminary arrangements for the oral interviews were also made with the instructors at this time. The superb cooperation of the college faculty was noted.
The texts

Texts used in the seven different classes were selected either by the English Department or by the individual English instructor. Table 6 lists the 11 required English texts used in the seven classes with author, publisher, and copyright date. New copies of each text were obtained on loan from the college bookstore. Here again, the cooperation was outstanding.
Sources of Data and Procedures Used

To measure the students' silent reading ability, the Nelson-Denny Reading Test was administered in seven different types of English classes. To appraise the students' oral reading ability, the Informal Reading Inventory was administered to selected students from the same seven classes. This section contains a description of the two instruments, the testing procedures followed, and the method used in tabulating the research data.

TABLE 6

BOOKS USED IN THIS INVESTIGATION

Class               Author              Title                        Publisher, Copyright Date

Communications      Griggs, Bludworth,  Basic Writer and Reader      American Book Co., 1961
200                 Llewellyn
Communications      Kocher, Ross        Success with Sentences       Macmillan, 1966
201
Spelling-           Craven              Sight and Sound              Dickenson, 1967
Vocabulary          Mullen              Toward Better Vocabulary     Scott, Foresman, 1963
Grammar A           Laird, Gorrell      Course in Modern English     Prentice-Hall, 1960
Reading and         Birk, Birk          Readings for Understanding   Odyssey, 1959
Composition A                           and Using English
                    Birk, Birk          Understanding and Using      Odyssey, 1965
                                        English
Reading and         Voltaire            Candide                      Penguin, 1947
Composition B       Stallman, Watters   Creative Reader              Ronald Press, 1962
                    Wharton             Ethan Frome                  Scribner, 1939
American            Bradley, Beatty,    American Tradition in        Norton, 1967
Literature          Long                Literature, Vol. II
The Nelson-Denny Reading Test

Both the original version (1929) and the revised version (1960) of the Nelson-Denny Reading Test for High Schools and Colleges are composed of a 100-item vocabulary subtest and a 36-item comprehension subtest. There are two forms available, Form A and Form B. The authors used the two forms to determine a split-half reliability coefficient. These coefficients were reported as: vocabulary subtest, .93; comprehension subtest, .81; total score, .92; and reading rate, .93 (90). Validity coefficients ranged from .38 to .47 (93:1077-1078).

The authors suggest including the test in a battery of college entrance examinations to aid in predicting academic success (44:130). Intelligent decisions in possible curricular choices may be enhanced with reading scores available, because courses demanding well-developed reading skills can be included or excluded according to individual scores.
The Nelson-Denny Reading Test can be used as a comparative diagnostic instrument by examining specific strengths and weaknesses of percentile rank scores. Comparing the three key areas of the test (vocabulary, comprehension, and reading rate) could point out the area or areas in which a student needs further improvement.

As a screening device, the test can help identify the superior readers, who could be scheduled into a developmental reading program. On the other end, students needing clinical or remedial programs would also be identified.

To select those students who would benefit from remedial instruction, the authors recommend the supplementary use of a non-reading type intelligence test. The Stanford-Binet Scale, the Lorge-Thorndike Intelligence Test, and the Henmon-Nelson Test of Mental Ability have been suggested as appropriate intelligence tests to be used with the Nelson-Denny Reading Test.

The total test time is 30 minutes, comprising 10 minutes for the vocabulary subtest and 20 minutes for the comprehension subtest. The reading rate is marked after one minute in the comprehension subtest section and is part of the 20-minute allotment.
The University of Michigan has worked out a cut-time administration of the test for use with superior reading groups that need a higher test ceiling. Total working time is 22 1/2 minutes, comprising 7 1/2 minutes for the vocabulary subtest and 15 minutes for comprehension and rate.
The test is easy to score, with the raw score computed at one point for each correct answer in the 100 vocabulary items and at two points for each of the 36 reading comprehension items. Raw scores can be converted into standings in the norm group by using the separate percentile rank of score tables for grades 9 through 16 in the Examiner's Manual. The manual also has special adult percentile norms for the cut-time administration. The raw score is the basis for arriving at grade level equivalents on the appropriate norm table. The grade equivalent norms range from 7.0 to 14.0 for all four subscores.
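The scoring rule just described amounts to a weighted sum, which can be sketched as follows (the function and parameter names are illustrative, not part of the test materials):

```python
def nelson_denny_raw_score(vocab_correct, comp_correct):
    """Raw score per the rule above: one point per correct vocabulary
    item (100 items), two points per correct comprehension item
    (36 items)."""
    assert 0 <= vocab_correct <= 100 and 0 <= comp_correct <= 36
    return vocab_correct + 2 * comp_correct

# A perfect paper would therefore earn 100 + 2 * 36 = 172 raw points.
```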
Nearly 21,000 cases were used for norming both the A and B forms of the 1960 revision of the Nelson-Denny Reading Test. The groups ranged from 545 to 1,877. A single table in the manual lists by grade level the forms and subtests and the statistical norms. The Statistical Abstract of the United States was used as the basis for stratified random sampling of the population norm.

The test contains carefully constructed items which are scaled according to their difficulty. The soundness of the test construction has been a factor in its continued popularity (13:585). Two reviewers in the Sixth Mental Measurements Yearbook rate the Nelson-Denny Reading Test as most appropriate for use with upper-level high school students and college students (118:1078-1080).
Informal Reading Inventory

A list of 15 students was prepared for each of the seven classes. Since there were no superior readers (14.1 or above) in three of the classes, the lists for those classes were prepared by selecting five from the top scores, five from the middle scores, and five from the lower scores on the Nelson-Denny Reading Test. In one class, the list had 13 names: three superior readers (above 14.1), five average readers (12.0 to 14.0), and five poor readers (below 11.9). In the remaining three classes, there were five names in each of the three classifications. From these lists, 21 volunteers were accepted for oral reading appraisal, one from each classification in each of the seven classes. This phase of the research was completed in eight nights over a three-week period.
When the student appeared at the library conference room with his text, several minutes were spent establishing rapport. Most students were interested in further discussion of their own results on the standardized test. A great deal of interest in the research project was shown by the students during both the oral interviews and classroom visitations.

For the oral reading segment, the following procedure was used:
1. Before beginning the oral reading, the student was advised that five questions would be asked at the end of the reading. The necessity of using the tape recorder for more accuracy was explained.

2. Each student was asked to read several pre-selected continuous paragraphs at sight from his textbook. The investigator followed the reading in another text.

3. Predetermined comprehension questions pertaining to the material were then asked by the investigator and the responses recorded on a sheet prepared in advance for this purpose.

4. A short playback of the tape was made to allow each student to hear his own voice and to check on the accuracy of the recording.

5. Using the tape at a later date, the investigator appraised the oral reading by marking a duplicated copy of
the page from the student's textbook, using symbols for substitutions, mispronunciations, words pronounced by the investigator, disregard of correct pronunciation, insertions, hesitations, repetitions, and omissions. These were tallied for computation into a percentage score. The tapes were played three times to increase accuracy. The criteria set forth by Betts for an Informal Reading Inventory were:

a. The independent level is achieved when the student can read with 99 per cent accuracy in pronunciation and at least 90 per cent accuracy of comprehension.

b. The instructional level is achieved with 95 per cent accuracy in pronunciation and 75 per cent accuracy of comprehension.

c. The frustration level is achieved with 90 per cent accuracy in pronunciation and 50 per cent or less in accuracy of comprehension (8:438-487).

The first 100 words of each reading selection were used for standardization of comparison. The scores were evaluated in percentages for the accuracy and comprehension sections, which allowed comparison of each student's performance with Betts' rating criteria.
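A minimal sketch of classifying one performance against Betts' thresholds follows. The criteria state exact cut points only; how scores falling between levels are treated here (anything short of the instructional thresholds counts as frustration) is this sketch's own assumption, and the names are illustrative:

```python
def betts_level(pronunciation_pct, comprehension_pct):
    """Classify an oral-reading performance against Betts' criteria as
    quoted above.  Treating every performance below the instructional
    thresholds as frustration is an assumption of this sketch, since
    the criteria list only the three cut points."""
    if pronunciation_pct >= 99 and comprehension_pct >= 90:
        return "independent"
    if pronunciation_pct >= 95 and comprehension_pct >= 75:
        return "instructional"
    return "frustration"
```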
Readability Analysis of Texts

The readability level of the 11 assigned English textbooks was determined by using three separate readability formulas. This permitted a comparison of the students' reading ability with the readability of the required texts.

The appraisal of the readability level of the texts was made by applying Gunning's FOG Index, Fry's Readability Graph, and a comparatively new readability formula, McLaughlin's SMOG Grading formula.

This section presents the formulas and their application to the 11 English textbooks.
Gunning FOG Index

In Part I of The Technique of Clear Writing, Gunning sets forth his readability formula to be used as a basis for measuring the fog in a piece of writing (hence the name given to the formula, the FOG Index). This gives the approximate reading grade level necessary to read and understand the material.

The formula is based on an average sentence length and a hard word count, which are also the two basic factors found in the other two formulas used in this investigation.
Application of Gunning's FOG Index.--The FOG Index reading grade level is computed as follows:

1. The words (outlines) in each sample of 100 words are counted to the end of a complete sentence. Therefore, the words in a sample may be more or less than 100. If the writing is long, the researcher may wish to take several samples of 100 words each, spaced evenly throughout the material.

2. Sentences are then counted in thought groups as they are in the Flesch formula.

3. The average thought group is found by dividing the number of words (outlines) in the sample by the number of thought groups.

4. The hard-word (hard-outline) count is composed of polysyllabic words, defined as those words which contain three or more syllables. Capitalized words are omitted. Exceptions, as listed by the author, are "words made up of one or more easy words such as 'bookkeeper' and 'butterfly.' Other exceptions are verb forms that make three syllables by adding 'ed,' 'ly,' or 'es.'"

5. The percentage of hard words (hard outlines) used in the computation of the formula is found by dividing the number of hard words (hard outlines) by the number of words (outlines) in the sample.

6. The average sentence length is added to the percentage of hard words.

7. The raw score is found by taking the results from the last step (6) and multiplying them by .4 (50:36-37).

8. The approximate reading grade level is found by rounding off the raw score. It does not need to be converted by use of a separate table as in other formulas. See Table 7 for an outline of the formula.
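Steps 3 through 8 reduce to simple arithmetic once the hand tallies of steps 1, 2, and 4 are in. A minimal sketch, with illustrative names:

```python
def fog_grade(outlines, sentences, hard_outlines):
    """Steps 3-8 of Gunning's FOG Index, starting from the hand tallies
    of steps 1, 2, and 4: words (outlines) in the sample, sentences
    (thought groups), and hard outlines of three or more syllables."""
    avg_sentence_length = outlines / sentences           # step 3
    pct_hard = hard_outlines / outlines * 100            # step 5
    raw_score = (avg_sentence_length + pct_hard) * 0.4   # steps 6-7
    return round(raw_score)                              # step 8
```

Run on the first sample of Table 9 (page 1 of Basic Writer and Reader: 91 outlines, 6 sentences, 3 hard outlines), this gives grade 7, agreeing with the worksheet; the worksheet's raw score of 7.36 reflects its rounded intermediate values (15.1 + 3.3).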
Reading levels.--Gunning stated:

If your copy tests 13 or more, you are beyond the danger line of reading difficulty. You are writing on the college level of complexity and your reader is likely to find it heavy going even though he is paying close attention. Copy with a Fog Index of 13 or more runs the danger of being ignored or misunderstood. (50:38) (See Table 8.)

Nine samples evenly spaced throughout the 11 books were used as the readability basis of this investigation. Table 9 is one of the completed work sheets using the FOG Index.
TABLE 7

GUNNING FOG INDEX FORMULA

Step
1. Outlines in sample
   (Count outlines in sentences which end nearest 100-word total)
2. Number of sentences (thought groups) in the sample
3. Average sentence length
   (Divide the total number of outlines in the sample by the number of sentences: Step 1 ÷ Step 2)
4. Number of hard outlines
   (Outlines with three syllables or more)
5. Percentage of hard outlines
   (Number of hard outlines divided by number of outlines in sample: Step 4 ÷ Step 1)
6. Average sentence length plus percentage of hard words
   (Step 3 + Step 5)
7. Raw score
   (Step 6 x .4)
8. Grade level
   (Raw score rounded off to nearest round number)

TABLE 8

READING LEVEL BY GRADE USING THE GUNNING FOG INDEX

Reading Level by Grade         Fog Index

College graduate                   17
College senior                     16
College junior                     15
College sophomore                  14
College freshman                   13
   (danger line)
High school senior                 12
High school junior                 11
High school sophomore              10
High school freshman                9
   (easy reading range)
Eighth grade                        8
Seventh grade                       7
Sixth grade                         6

Source: Robert Gunning, The Technique of Clear Writing (New York: McGraw-Hill Book Co., 1952), p. 38.

TABLE 9

ILLUSTRATIVE GUNNING FOG INDEX WORKSHEET USING DATA FROM BASIC WRITER AND READER

        Number of  Number of  Average   Hard Outlines   Avg. Sent. Length   Raw     Grade
Page    Outlines   Sentences  Sentence  Number   Pct    plus Pct Hard       Score   Level
                              Length

  1        91          6       15.1        3      3.3        18.4            7.36      7
 49        98         10        9.8        6      6.1        15.9            6.36      6
105       107          7       15.2        1       .9        16.1            6.44      6
155        99          7       14.1       18     18.2        32.3           12.92     13
203        97          9       10.7       12     12.8        23.5            9.40      9
256       100          6       16.7        3      3.0        19.7            7.88      8
307        95          9       10.5       11     11.6        22.1            8.84      9
358        90         10        9.0       13     14.4        23.4            9.36      9
409        89          5       17.4       12     13.5        30.9           12.36     12

Total     866         69      118.5       79     83.8       202.3           80.92     79
Averages   96.2        7.7     13.2        8.8    9.3        22.5            8.99      9

Fry Readability Graph

In 1968, Fry of Rutgers University presented an adaptation of his Readability Graph designed for use in the
United States (43:513-516, 575-578). The author has been concerned with the fact that although readability formulas have been available for years, few have been used during that time. The difficulty in computation and the length of time involved have kept them from widespread use.

Fry's contribution to readability analysis was to develop a formula that was both accurate and simple to use. This, he hoped, would result in wide use of his readability formula.

As with most modern formulas using two basic factors, Fry's formula uses sentence length and a syllable count. Final plotting of the two scores on a graph gives the grade level equivalent.
Application of Fry's Readability Graph.--Fry's readability level formula is computed as follows:

1. Three 100-word samples are selected from the beginning, middle, and end of a book. Proper nouns are omitted.

2. The number of sentences in each 100-word sample is totaled. Since exact 100-word samples are used, any fragmented sentence is estimated to the nearest tenth. The average of the three samples is computed; the resulting number is one of the two to be plotted on the graph.

3. The number of syllables in each sample is totaled. The average of the three samples is the second number to be plotted on the graph. If there is great variability in the syllable count or in sentence length, more samples may be selected and averaged in. The author suggests that the researcher "count every syllable over one in each word and add 100."

4. The average number of sentences and the average number of syllables are plotted on the graph (Fig. 1). This gives the approximate grade level of readability.
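Steps 2 and 3 are plain averaging; only the final graph lookup is visual. A sketch of the arithmetic, with illustrative names and hypothetical sample counts (the graph lookup itself is not reproduced here):

```python
def fry_plot_coordinates(samples):
    """Reduce steps 2-4 to their arithmetic: each sample is a
    (sentences, syllables) pair counted over exactly 100 words, and the
    result is the (average sentences, average syllables) point to plot
    on Fry's graph."""
    n = len(samples)
    avg_sentences = sum(s for s, _ in samples) / n
    avg_syllables = sum(y for _, y in samples) / n
    return round(avg_sentences, 1), round(avg_syllables, 1)

# Three hypothetical 100-word samples:
# fry_plot_coordinates([(5.0, 140), (6.0, 150), (7.0, 160)]) -> (6.0, 150.0)
```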
On the lower grade levels, there was a close agreement between the readability levels of the Spache formula and the Readability Graph. The latter was recommended as probably the most efficient at the primary grade level (43:576).

In a study using 10 books and four other formulas, the Readability Graph correlated .98 with the SRA formula, .96 with the Flesch formula, .94 with the Dale-Chall formula, and .78 with the Botel formula. The Botel formula does not use sentence length as a factor, which could account for the lower correlation (43:576).
For purposes of this investigation, nine samples evenly spaced throughout the books were taken to give greater accuracy in arriving at the readability level of the books. The results of both the sentence length count and the syllable count were averaged before applying to the Readability Graph. Table 10 is a sample of one of the 11 worksheets using the Readability Graph.

[Fig. 1.--Fry's Graph for Estimating Readability. Source: Journal of Reading, II, No. 7 (April, 1968), 577. The graph plots syllables per 100 words against sentences per 100 words. Directions: Randomly select 3 one-hundred-word passages from a book or an article. Plot average number of syllables and average number of sentences on the graph to determine the area of readability level. Choose more passages per book if great variability is observed.]
TABLE 10

ILLUSTRATIVE FRY READABILITY GRAPH WORKSHEET USING DATA FROM BASIC WRITER AND READER

Page     Words    Sentences   Syllables

  1       100        3.3         127
 49       100        7.2         136
105       100        3.8         143
155       100        5.0         165
203       100        5.3         159
256       100        6.0         115
307       100        5.2         146
358       100        7.5         162
409       100        3.3         158

Total     900       46.6       1,311
Average   100        5.2         147

Grade 9

McLaughlin's SMOG Grading

In 1969, McLaughlin presented his SMOG Grading formula, which is designed to measure the grade level at which the reader would need to read in order to completely comprehend the material. The word SMOG, a "Simple Measure of Gobbledygook" (82:210), is a tribute to Gunning's FOG Index and a reference to the author's birthplace, London, where smog was first named (83:641).

Application of McLaughlin's SMOG Grading.--The SMOG Grading formula is applied as follows:

1. Thirty sentences, comprising 10 consecutive sentences from the beginning, 10 from the middle, and 10 from the end of the material, are selected. A sentence is defined as a group of related words ending with one of the three standard end marks of punctuation.

2. All the polysyllabic words in these 30 sentences are counted. A polysyllabic word is defined as one having three or more syllables.

3. The nearest square root of the total number of polysyllabic words is estimated. If the total number of polysyllabic words should fall roughly between two perfect squares, the lower square root is taken.

4. The SMOG Grade, which is the reading level necessary for a person to fully understand the written material, is obtained by adding three to the square root.
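Steps 3 and 4 can be sketched directly, along with the nine-page averaging used in this investigation. The names are illustrative, and truncating the averaged count to an integer before taking the root is this sketch's own reading of "the lower square root is taken":

```python
import math

def smog_grade(polysyllabic_count):
    """SMOG Grade for a 30-sentence sample: three plus the lower square
    root of the polysyllabic-word count (step 3's rule for counts that
    fall between perfect squares)."""
    return 3 + math.isqrt(polysyllabic_count)

def smog_grade_from_pages(counts_per_10_sentences):
    """The adaptation described in the text: average the per-page
    counts (each taken over 10 sentences), multiply the average by
    three to approximate a 30-sentence sample, then grade as above."""
    avg = sum(counts_per_10_sentences) / len(counts_per_10_sentences)
    return smog_grade(int(avg * 3))
```

On Table 11's counts for Basic Writer and Reader, the average of 17 polysyllabic words per 10 sentences gives 51 per 30 sentences; the lower square root is 7, so the grade is 10, matching the worksheet.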
For the purposes of this investigation, three sets of samples from each book, or nine evenly-spaced pages, were analyzed (Table 11). It was necessary to average the polysyllabic word count, then multiply the average by three before proceeding with Step 3 of the formula.

McLaughlin used 390 passages from the 1961 edition of the McCall-Crabbs Standard Test Lessons in Reading to standardize the polysyllabic count. To set up his regression equation relating to the polysyllabic count of each lesson, he used "as the indicator of the reading difficulty of each lesson the grade of subjects showing complete comprehension."

By using a computer to run his trials and errors, he arrived at his formula, which has a standard error of about 1.5 grades. This gives a prediction within three grades in 68 per cent of the cases.

Since the computed grade level is the reading level necessary for the reader to fully understand the material, the reading grade level may be higher than with other formulas, which accept and measure a lower level of understanding (83:639-646).

TABLE 11

ILLUSTRATIVE McLAUGHLIN SMOG INDEX WORKSHEET USING DATA FROM BASIC WRITER AND READER

Page     Sentences   Polysyllabic Words

  1         10              20
 49         10               8
105         10              12
155         10              18
203         10              18
256         10               2
307         10              19
358         10              14
409         10              43

Total       90             154
Average     10              17

Grade 10
Treatment of the Data

After data on the reading ability of the 200 students in the sample were gathered, special sheets were prepared for IBM key punching. Poor, average, and superior readers were categorized in each of the seven English classes for analysis. The readability ratings of the 11 textbooks from three readability formulas were also prepared.

The Biomedical 03D Computer Program, originally developed by the Health Service Computing Laboratory at the University of California at Los Angeles, was utilized on the IBM 365 computer at the Computer Service Laboratory of the University of Southern California. The program provided absolute and relative frequencies for the three classifications of readers as well as means, correlation matrices, and other data for the 13 test categories.
CHAPTER IV

ANALYSIS OF THE DATA

The first section of this chapter considers the reading ability of the students in the sample as determined by the Nelson-Denny Reading Test for High Schools and Colleges plus the results of the oral Informal Reading Inventory. The second section details the findings of the textbook analysis using the three readability formulas. The last major section presents a comparison of the mean reading scores of the sample with the mean readability scores of the texts; the percentages of students reading below, at, and above the readability level of the texts; a comparison of the Informal Reading Inventory scores and the readability of the book; the distribution of poor, average, and superior readers according to sex; and the reading percentile ranks of the sample.
Results of the Nelson-Denny Reading Test and the Informal Reading Inventory

As explained in Chapter I under the definitions of terms, a student reading 11.9 or below would be considered a poor reader, while the average reader would range from 12.0 to 14.0, and those reading 14.1 and above would be considered superior readers. Table 12 presents these criteria in graphic form.

TABLE 12

READING SCORES WHICH CHARACTERIZED STUDENTS AS POOR, AVERAGE, AND SUPERIOR IN THIS INVESTIGATION

     Poor              Average           Superior

11.9 and below        12.0-14.0      14.1 and above
Table 13 lists the number and percentage of poor, average, and superior readers in the seven English classes as measured by the Nelson-Denny Reading Test using the evaluative criteria presented above.

As may be seen in Table 13, three classes had no superior readers. Communications 200 had no superior readers and only one average reader. Communications 201 had no
TABLE 13
PERCENTAGES OF POOR, AVERAGE, AND SUPERIOR READERS
AS DETERMINED BY THE NELSON-DENNY READING TEST

Class                         Poor Reader     Average Reader    Superior Reader
                              No.    %        No.    %          No.    %
Communications 200            25     96       1      4          --     --
Communications 201            32     94       2      6          --     --
Spelling-Vocabulary           22     88       3      12         --     --
Grammar A                     21     60       13     37         1      3
Reading and Composition A     13     43       14     47         3      10
Reading and Composition B     12     43       9      32         7      25
American Literature           5      23       6      27         11     50
Total readers                 130    65       48     24         22     11
superior readers and only two average readers. Spelling-Vocabulary had no superior readers and only three average readers. One class, Grammar A, had only one superior reader.

The poor readers ranged from 23 per cent to 96 per cent; the average readers from 4 per cent to 47 per cent; and in the four classes that had superior readers, the range was from 3 per cent to 50 per cent.

Of the total sample of 200, 130 students or 65 per cent were rated as poor readers, 48 students or 24 per cent as average readers, and 22 students or 11 per cent as superior readers.
The Nelson-Denny Reading Test for High Schools and Colleges was administered to measure the reading ability of the sample of 200 community college students. The test measured three main areas: vocabulary, comprehension, and reading rate.
The vocabulary scores as reported in Table 14 show the poor readers ranging from 14 per cent in the American Literature class to 91 per cent in Communications 201. The average readers ranged from 9 per cent in Communications 201 to 60 per cent in Reading and Composition A. Superior readers were found in only four sections, from 9 per cent
TABLE 14
VOCABULARY SCORES ON THE NELSON-DENNY READING TEST

Class                         Poor Reader     Average Reader    Superior Reader
                              No.    %        No.    %          No.    %
Communications 200            22     85       4      15         --     --
Communications 201            31     91       3      9          --     --
Spelling-Vocabulary           18     72       7      28         --     --
Grammar A                     18     51       14     40         3      9
Reading and Composition A     8      27       18     60         4      13
Reading and Composition B     5      18       15     54         8      28
American Literature           3      14       8      36         11     50
Total readers                 105    52.5     69     34.5       26     13
in Grammar A to 50 per cent in American Literature.

The comprehension scores as given in Table 15 show an unusual pattern. The three classes having no superior readers were the three basic classes: Communications 200 and 201 and Spelling-Vocabulary. One class, Communications 201, had no average or superior readers and reported 100 per cent as poor readers. Communications 200 had only one average reader and Spelling-Vocabulary had two.

The poor readers ranged from 32 per cent in American Literature to 100 per cent in Communications 201. The average readers ranged from 4 per cent in Communications 200 to 36 per cent in American Literature. The superior readers in four classes ranged from 6 per cent in Grammar A to 32 per cent in American Literature.
Table 16 presents the total vocabulary and comprehension scores with the comprehension being weighted two for each correct response and the vocabulary being weighted one for each correct response.
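The weighting just described is a simple linear combination. A minimal sketch (the subtest counts are hypothetical examples, not scores from the sample):

```python
# Total score for Table 16: one point per correct vocabulary response
# plus two points per correct comprehension response.
def total_score(vocab_correct: int, comp_correct: int) -> int:
    return vocab_correct * 1 + comp_correct * 2

# Hypothetical example: 40 vocabulary and 25 comprehension items correct.
print(total_score(40, 25))  # 40 + 50 = 90
```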
Three classes reported no superior readers, and one class had only one. One class had one average reader, one class had two, and one class had three.

The poor readers ranged from 23 per cent in American Literature to 96 per cent in Communications 200. The
TABLE 15
COMPREHENSION SCORES ON THE NELSON-DENNY READING TEST

Class                         Poor Reader     Average Reader    Superior Reader
                              No.    %        No.    %          No.    %
Communications 200            25     96       1      4          --     --
Communications 201            34     100      --     --         --     --
Spelling-Vocabulary           23     92       2      8          --     --
Grammar A                     22     63       11     31         2      6
Reading and Composition A     18     60       8      27         4      13
Reading and Composition B     15     54       9      32         4      14
American Literature           7      32       8      36         7      32
Total readers                 144    72       39     19.5       17     8.5
TABLE 16
TOTAL VOCABULARY AND COMPREHENSION SCORES ON THE NELSON-DENNY READING TEST

Class                         Poor Reader     Average Reader    Superior Reader
                              No.    %        No.    %          No.    %
Communications 200            25     96       1      4          --     --
Communications 201            32     94       2      6          --     --
Spelling-Vocabulary           22     88       3      12         --     --
Grammar A                     21     60       13     37         1      3
Reading and Composition A     13     43       14     47         3      10
Reading and Composition B     12     43       9      32         7      25
American Literature           5      23       6      27         11     50
Total readers                 130    65       48     24         22     11
average readers ranged from 4 per cent in Communications 200 to 47 per cent in Reading and Composition A. In the four classes having superior readers, the range went from 3 per cent in Grammar A to 50 per cent in American Literature.

Using a reading rate based upon one minute's reading, two classes placed no students in the superior reader column in reading rate scores (Table 17). Communications 200 and 201 had no superior readers. Spelling-Vocabulary had one average reader. American Literature showed no average readers, 36 per cent poor readers, and 64 per cent superior readers.

The poor readers ranged from 36 per cent in American Literature to 92 per cent in Communications 200. Six classes reported average readers, with the range from 4 per cent in Spelling-Vocabulary to 20 per cent in Grammar A. Five classes had superior readers, ranging from 12 per cent in Spelling-Vocabulary to 64 per cent in American Literature.

Using the criteria for the Informal Reading Inventory as discussed in Chapter III, 21 students were rated on oral reading using their required text. Table 18 lists the percentage of accuracy of the oral reading from 88 per cent to 100 per cent, and from 12 errors in pronunciation to no errors. Two students had perfect scores.
TABLE 17
READING RATE SCORES ON THE NELSON-DENNY READING TEST

Class                         Poor Reader     Average Reader    Superior Reader
                              No.    %        No.    %          No.    %
Communications 200            24     92       2      8          --     --
Communications 201            30     88       4      12         --     --
Spelling-Vocabulary           21     84       1      4          3      12
Grammar A                     23     66       7      20         5      14
Reading and Composition A     21     70       2      7          7      23
Reading and Composition B     11     39       4      14         13     47
American Literature           8      36       --     --         14     64
Total readers                 138    69       20     10         42     21
TABLE 18
PERCENTAGE OF ACCURACY OF ORAL READING BY STUDENTS
ON THE INFORMAL READING INVENTORY

Errors in     Number of    Percentage
100 Words     Students     of Accuracy
0             2            100
1             4            99
2             3            98
3             4            97
4             1            96
5             2            95
6             1            94
7             2            93
8             --           92
9             1            91
10            --           90
11            --           89
12            1            88
The comprehension scores of the students when questioned concerning the material read ranged from 20 per cent to 100 per cent (Table 19). Four students had perfect scores of 100 per cent, six had 80 per cent, seven had 60 per cent, three had 40 per cent, and only one had 20 per cent.

Using Betts' criteria for the 21 students, four were rated as reading at the independent level, six at the instructional level, and 11 at the frustration level. Of those tested, 19 per cent read at the independent level, 29 per cent at the instructional level, and 52 per cent at the frustration level (Table 20).
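Betts' criteria pair oral-reading accuracy with comprehension. The thresholds in the sketch below are the commonly cited Betts cut-offs (roughly 99 per cent accuracy with 90 per cent comprehension for the independent level, and 95 per cent with 75 per cent for the instructional level); they are an assumption for illustration, not a restatement of the exact criteria given in Chapter III:

```python
# Illustrative classifier using commonly cited Betts cut-offs; the
# thresholds are assumptions, not the exact criteria of this study.
def betts_level(accuracy: float, comprehension: float) -> str:
    if accuracy >= 99 and comprehension >= 90:
        return "independent"
    if accuracy >= 95 and comprehension >= 75:
        return "instructional"
    return "frustration"

print(betts_level(100, 100))  # independent
print(betts_level(97, 80))    # instructional
print(betts_level(88, 20))    # frustration
```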
Results of Readability Analysis of Texts

The mechanics of rating the 11 required English textbooks used in the seven English classes in this investigation were developed in detail in the preceding chapter. Three readability formulas were used: Gunning's FOG Index, Fry's Readability Graph, and a comparatively new formula, McLaughlin's SMOG Grading formula.

Table 21 lists the 11 texts, the grade level rating of the three formulas, and the mean grade level. The Gunning and Fry formulas correlated at the same grade level on
TABLE 19
PERCENTAGE OF COMPREHENSION OF ORAL READING BY
STUDENTS ON THE INFORMAL READING INVENTORY

Percentage of      Number of
Comprehension      Students
100                4
80                 6
60                 7
40                 3
20                 1
0                  --
TABLE 20
NUMBER OF STUDENTS AT THE INDEPENDENT, INSTRUCTIONAL, AND FRUSTRATION
LEVELS ACCORDING TO THE INFORMAL READING INVENTORY

Class                         Independent    Instructional    Frustration
Communications 200            --             1                2
Communications 201            --             1                2
Spelling-Vocabulary           --             1                2
Grammar A                     1              1                1
Reading and Composition A     --             1                2
Reading and Composition B     2              --               1
American Literature           1              1                1
Total percentage              19             29               52
TABLE 21
READABILITY RATINGS OF THE ELEVEN REQUIRED ENGLISH TEXTS

Book                                           Gunning FOG   Fry       McLaughlin SMOG   Mean
Basic Writer and Reader                        9             9         10                9.3
Success with Sentences                         8             8         10                8.7
Sight and Sound                                11            9         11                10.3
Toward Better Vocabulary                       13            College   15                14.0
Course in Modern English                       11            10        11                10.7
Reading for Understanding and Using English    10            10        12                10.7
Understanding and Using English                13            College   15                14.0
Candide                                        7             7         9                 7.7
Creative Reader                                7             7         8                 7.3
Ethan Frome                                    8             8         9                 8.3
American Tradition in Literature, Vol. II      9             9         12                10.0
nine of the 11 books, differed only one grade level on one of the books, and two grade levels on the remaining book. The Gunning formula takes much more time to compute than the simpler Fry Readability Graph.

The McLaughlin formula generally rated the texts one or two grades higher, and in one case three grades higher, than the Gunning or Fry formulas. The readability grade level may tend to be higher than other formulas because the McLaughlin formula places the book at the grade level for complete understanding.

One book used in Communications 200, Basic Writer and Reader, ranged from grade 6 to grade 13 in the nine pages analyzed in computing the Gunning formula (Table 9). Both the Gunning and Fry formulas rated the book at grade 9 and the McLaughlin formula at one grade higher. The mean for the three formulas was 9.3.
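Of the three formulas, Gunning's and McLaughlin's are direct computations (Fry's requires his graph). A rough sketch of the published versions, taking pre-counted totals rather than parsing a passage; the counts in the example are hypothetical:

```python
import math

def fog_index(words: int, sentences: int, polysyllables: int) -> float:
    # Gunning FOG: 0.4 * (average sentence length
    #                     + per cent of words of three or more syllables)
    return 0.4 * (words / sentences + 100 * polysyllables / words)

def smog_grade(polysyllables_in_30_sentences: int) -> float:
    # McLaughlin SMOG: 3 + square root of the polysyllable count
    # found in 30 sentences
    return 3 + math.sqrt(polysyllables_in_30_sentences)

# Hypothetical counts: a 100-word passage of 8 sentences with 10
# polysyllabic words, and 36 polysyllables in a 30-sentence sample.
print(round(fog_index(100, 8, 10), 1))  # 0.4 * (12.5 + 10) = 9.0
print(round(smog_grade(36), 1))         # 3 + 6 = 9.0
```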
Comparisons of Results

The comparisons of the mean reading scores of the sample with the mean readability scores of the texts are found in Table 22. The table lists the class, the required text or texts, the range of reading scores and mean, the mean of the readability formulas, and the differences
TABLE 22
COMPARISON OF MEAN READING SCORES OF THE SAMPLE ON THE NELSON-DENNY READING TEST
WITH THE MEAN READABILITY SCORES OF THE TEXTS

Class and Book                                   Class Reading          Readability      Difference
                                                 Range          Mean   Formulas (Mean)  of Means
Communications 200                               -7.0 to 12.5   8.6
  Basic Writer and Reader                                              9.3              + .7
Communications 201                               -7.0 to 12.5   8.8
  Success with Sentences                                               8.7              - .1
Spelling-Vocabulary                              -7.0 to 13.7   9.4
  Sight and Sound                                                      10.3             + .9
  Toward Better Vocabulary                                             14.0             +4.6
Grammar A                                        -7.0 to 14.0+  11.0
  Course in Modern English                                             10.7             - .3
Reading and Composition A                        7.5 to 14.0+   11.8
  Readings for Understanding and Using English                         10.7             -1.1
  Understanding and Using English                                      14.0             +2.2
Reading and Composition B                        8.6 to 14.0+   12.4
  Candide                                                              7.7              -4.7
  Creative Reader                                                      7.3              -5.1
  Ethan Frome                                                          8.3              -4.1
American Literature                              9.3 to 14.0+   13.3
  American Tradition in Literature, Vol. II                            10.0             -3.3
between the reading means and the readability means.

A plus score in the difference column indicates that the text readability mean is above, or more difficult than, the students' reading mean, while a minus score indicates that the text readability level is below, or easier than, the mean reading score of the class.
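The convention amounts to a single signed subtraction; a minimal sketch using two of the values from Table 22:

```python
# Difference of means = text readability mean - class reading mean:
# positive means the text is harder than the class, negative easier.
def difference_of_means(readability_mean: float, reading_mean: float) -> float:
    return round(readability_mean - reading_mean, 1)

print(difference_of_means(9.3, 8.6))   # +0.7: Basic Writer and Reader
print(difference_of_means(7.3, 12.4))  # -5.1: Creative Reader
```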
The main points of interest presented in Table 22 are the differences in means. Five of the means of the 11 required texts are within 1.1 years of the reading means. Toward Better Vocabulary, required in the basic Spelling-Vocabulary class, has a mean readability that is 4.6 years above the reading mean of the class. It is .3 above the top score of the reading range of the class.

In Reading and Composition A, the text Understanding and Using English has a mean readability of 14.0, which is 2.2 years above the mean class reading ability of 11.8.

Candide, Creative Reader, and Ethan Frome, the three required texts for Reading and Composition B, rate -4.7, -5.1, and -4.1 respectively from the class reading mean of 12.4. Ethan Frome was rated as the most difficult book of the required trio with a mean readability rating of 8.3. The lowest student reading score was 8.6. The other two texts were rated at 7.7 and 7.3.
The text for American Literature, American Tradition in Literature, Vol. II, was analyzed to be 3.3 years below the mean reading level of the class. The class's mean reading level was 13.3, but the readability level of the text was 10.0.

Table 23 is a significant table because it shows, according to the Nelson-Denny Reading Test, the percentage of students reading below, at, or above the readability level of the text, thus graphically presenting the easy texts and the difficult texts.
Sixty-five per cent of the students in the basic class, Communications 200, read below the readability level of the required text, Basic Writer and Reader. In another basic class, Communications 201, 56 per cent of the class read below the required text, Success with Sentences.

In a third basic class, Spelling-Vocabulary, the majority of students read below the mean readability level of the texts. In Sight and Sound, 68 per cent read below; and in Toward Better Vocabulary, 100 per cent read below the mean readability level of the required text.

In the course entitled Grammar A, the required text was Course in Modern English. Forty-three per cent read below and 57 per cent read above the level of the book.
TABLE 23
PERCENTAGES OF STUDENTS READING AT, BELOW, OR ABOVE THE MEAN READABILITY LEVEL
OF THE ASSIGNED ENGLISH TEXT ACCORDING TO THE NELSON-DENNY READING TEST

Class and Book                                   Mean Readability   Percentage of Students
                                                 Level of Text      Below    At    Above
Communications 200
  Basic Writer and Reader*                       9.3                65       --    35
Communications 201
  Success with Sentences*                        8.7                56       --    44
Spelling-Vocabulary
  Sight and Sound*                               10.3               68       --    32
  Toward Better Vocabulary                       14.0               100      --    --
Grammar A
  Course in Modern English*                      10.7               43       --    57
Reading and Composition A
  Readings for Understanding and Using English   10.7               26       3     70
  Understanding and Using English*               14.0               90       --    10
Reading and Composition B
  Candide                                        7.7                --       --    100
  Creative Reader*                               7.3                --       --    100
  Ethan Frome                                    8.3                --       --    100
American Literature
  American Tradition in Literature, Vol. II*     10.0               5        --    95
Total number                                                        96             104
Total percentage                                                    48             52

*Used in computing total.
Reading and Composition A had two required texts. One of the texts, Understanding and Using English, had a readability level of 14.0, and 90 per cent of the class read below this level.

An analysis in Reading and Composition B found 100 per cent of the class reading above the readability level of the three texts, Candide, Creative Reader, and Ethan Frome. In the last class to be analyzed, American Literature, 95 per cent of the class read above the readability level of American Tradition in Literature, Vol. II.

Using the same seven texts as were used in the Informal Reading Inventory, the total number of students reading below the mean readability levels of the texts was 96, or 48 per cent of the entire sample. Those reading above the mean readability levels of the texts totalled 104, or 52 per cent of the entire sample.

The Informal Reading Inventory was administered to rate the oral reading of those students reading below, at, or above the mean readability levels of their texts. Table 24 indicates a total of 11, or 52 per cent, reading below; six, or 29 per cent, at; and four, or 19 per cent, above.

Table 25 lists the distribution of poor, average, and superior readers according to sex in each of the seven
TABLE 24
NUMBER OF STUDENTS READING AT, BELOW, OR ABOVE THE MEAN READABILITY LEVEL OF THE
ASSIGNED ENGLISH TEXTS ACCORDING TO THE INFORMAL READING INVENTORY

Class and Book                                   Mean Readability   Number of Students
                                                 Level of Text      Below    At    Above
Communications 200
  Basic Writer and Reader                        9.3                2        1     --
Communications 201
  Success with Sentences                         8.7                2        1     --
Spelling-Vocabulary
  Sight and Sound                                10.3               2        1     --
Grammar A
  Course in Modern English                       10.7               1        1     1
Reading and Composition A
  Understanding and Using English                14.0               2        1     --
Reading and Composition B
  Creative Reader                                7.3                1        --    2
American Literature
  American Tradition in Literature, Vol. II      10.0               1        1     1
Total number                                                        11       6     4
Total percentage                                                    52       29    19
TABLE 25
DISTRIBUTION OF POOR, AVERAGE, AND SUPERIOR READERS ACCORDING TO SEX

Class               Group               Men            Women          Total
                                        No.    %       No.    %       No.    %
Communications 200  Poor readers        11     42      14     54      25     96
                    Average readers     1      4       --     --      1      4
                    Superior readers    --     --      --     --      --     --
Communications 201  Poor readers        15     44      17     50      32     94
                    Average readers     1      3       1      3       2      6
                    Superior readers    --     --      --     --      --     --
Spelling-           Poor readers        6      24      16     64      22     88
Vocabulary          Average readers     3      12      --     --      3      12
                    Superior readers    --     --      --     --      --     --
Grammar A           Poor readers        9      26      12     34      21     60
                    Average readers     10     29      3      8       13     37
                    Superior readers    1      3       --     --      1      3
Reading and         Poor readers        6      20      7      23      13     43
Composition A       Average readers     8      27      6      20      14     47
                    Superior readers    1      3       2      7       3      10
Reading and         Poor readers        7      25      5      18      12     43
Composition B       Average readers     7      25      2      7       9      32
                    Superior readers    4      14      3      11      7      25
American            Poor readers        4      18      1      5       5      23
Literature          Average readers     2      9       4      18      6      27
                    Superior readers    7      32      4      18      11     50
Totals              Poor readers        58     29      72     36      130    65
                    Average readers     32     16      16     8       48     24
                    Superior readers    13     7       9      4       22     11
classes. In the first five classes listed, women had a greater percentage of poor readers, while men had a greater percentage in the last two. The men had more average readers in five of the classes, one class had an even number, and in American Literature the women had a greater number. Of the four classes containing superior readers, a greater percentage were men in three of the classes and women in one of the classes.

Men and women are almost equally divided in their number in the total sample. Checking the totals of poor, average, and superior readers in all classes, the men have a greater percentage of average and superior readers, while the women have a greater percentage of poor readers.

Fifty-six per cent of the total sample, or 112 students, read between the 1st and 25th percentiles according to the Nelson-Denny Reading Test; 41 students, or 20 per cent, between the 26th and 50th percentiles; 29 students, or 15 per cent, between the 51st and 75th percentiles; and 18 students, or 9 per cent, above the 75th percentile (Table 26).
TABLE 26
PERCENTILE RANK OF THE SAMPLE ACCORDING TO THE NELSON-DENNY READING TEST

Class                         N      1-25          26-50         51-75         76-99
                                     No.    %      No.    %      No.    %      No.    %
Communications 200            26     24     92     2      8      --     --     --     --
Communications 201            34     32     94     2      6      --     --     --     --
Spelling-Vocabulary           25     20     80     4      16     1      4      --     --
Grammar A                     35     16     46     11     31     8      23     --     --
Reading and Composition A     30     10     33     12     40     5      17     3      10
Reading and Composition B     28     9      32     6      22     9      32     4      14
American Literature           22     1      5      4      18     6      27     11     50
Total                         200    112    56     41     20     29     15     18     9
CHAPTER V
SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
This study was conducted to compare the reading ability of community college students with the readability of their required English textbooks. This chapter contains a summary of the procedures followed in this investigation, a list of findings and conclusions, some curricular applications, and general recommendations.
Summary of the Study

The Problem

Since this study was primarily concerned with comparing the reading ability of community college students with the readability level of their assigned English texts, it was necessary to obtain the following data: (1) grade levels of assigned texts according to three readability formulas, (2) grade level placement of students on a standardized reading test, and (3) percentage of accuracy of oral pronunciation and comprehension on the Informal Reading Inventory. Answers were sought to the following questions:

1. At what levels of accuracy, comprehension, and speed did the students read as determined by the Nelson-Denny Reading Test?

2. What were the difficulty levels of the textbooks assigned to the students as determined by the Gunning FOG Index Readability Formula, the McLaughlin SMOG Grading Readability Formula, and the Fry Graph for Estimating Readability?

3. How closely did the grade placements as measured by the Nelson-Denny Reading Test compare to the difficulty level of the assigned text as measured by the formulas in question 2 above?

4. What percentage of the students had poor reading ability, average reading ability, or superior reading ability as defined in this study?

5. What was the oral reading level of poor reading students, average reading students, and superior reading students as measured by the Informal Reading Inventory?

6. Is sex difference related to reading ability?
The sample

A sample of convenience was used in this investigation. In seven different types of English classes having a total enrollment of 712, 200 students were tested. The community college used in the investigation had a total population of approximately 5,000.

Method of gathering data

Two hundred students enrolled in seven different types of English classes were administered the Nelson-Denny Reading Test during a regular class session. The test was given by the investigator and an assistant. All tests were scored by the investigator and twice checked for accuracy by assistants.

Twenty-one volunteers, one poor reader, one average reader, and one superior reader from each of the seven classes, were administered the Informal Reading Inventory using the required class text. This oral test was scored, taped, and checked by the investigator.
Treatment of the data

Special IBM key punching sheets were prepared for the 13 test categories and the three reading categories of the 200 students in the sample. The readability ratings from three readability formulas as applied to the 11 texts were also prepared.

The IBM computer at the University of Southern California's Computer Service Laboratory provided the absolute and relative frequencies for the three classifications of readers plus means, correlation matrices, and other data on the 13 test categories.
Findings

The findings which present answers to the six questions are summarized below.

Nelson-Denny Reading Test

1. Over one-half of the total sample of 200 students were rated as poor readers on the Vocabulary subtest of the Nelson-Denny Reading Test. Poor readers numbered 105, or 52.5 per cent; average readers numbered 69, or 34.5 per cent; superior readers totaled 26, or 13 per cent.

2. On the Comprehension subtest, 144 students, or 72 per cent, rated as poor readers; 39 students, or 19.5 per cent, as average readers; and 17 students, or 8.5 per cent, as superior readers.

3. For the total vocabulary and comprehension score, correct vocabulary responses were counted as one and correct comprehension responses as two. Poor readers numbered 130, or 65 per cent; average readers numbered 48, or 24 per cent; and superior readers numbered 22, or 11 per cent.

4. The one-minute speed test of reading rate resulted in 138 students, or 69 per cent, being rated as poor readers; 20 students, or 10 per cent, as average readers; and 42 students, or 21 per cent, as superior readers.
Textbook difficulty levels

1. Rating five required texts for four basic classes with three readability formulas (Gunning's FOG Index, Fry's Readability Graph, and McLaughlin's SMOG Grading formula) disclosed Basic Writer and Reader rating at grade 9.3, Success with Sentences at 8.7, Sight and Sound at 10.3, Toward Better Vocabulary at 14.0, and Course in Modern English at 10.7.

2. In the three college transfer classes, six required texts were rated with the three formulas. Reading for Understanding and Using English rated at 10.7, Understanding and Using English at 14.0, Candide at 7.7, Creative Reader at 7.3, Ethan Frome at 8.3, and American Tradition in Literature, Vol. II at grade 10.
Comparison of students' grade placement according to the Nelson-Denny Reading Test and the difficulty levels of the required texts according to readability formulas

1. The readability levels of the five texts used in the four basic classes ranged from three months below the mean reading level of the class to 4.6 years above the mean reading level of the class. Basic Writer and Reader was rated .7 above the mean reading level of the class, Success with Sentences .1 below, and Course in Modern English .3 below. In one course using two books, Sight and Sound was rated .9 above and Toward Better Vocabulary was rated 4.6 above the mean reading level of the class.

2. In the three college transfer classes, only one book rated above the mean reading level of the class, and all the rest rated below. Those ranking below the mean ranged from 1.1 years to 5.1 years below. One class used Readings for Understanding and Using English, which rated 1.1 below the mean reading level of the class, and also Understanding and Using English, which rated 2.2 years above the mean reading level of the class. Another class used three books: Candide, which rated 4.7 below; Ethan Frome, 4.1 below; and Creative Reader, 5.1 below. The last class used one text, American Tradition in Literature, Vol. II, which rated 3.3 below the mean reading level of the class.
i
Percentages of students with I
poor, average, and superior
Ireading ability
1. According to the Nelson-Denny Reading Test, 130
i
students, or 65 per cent of the total sample of 200, had
poor reading ability; 48 students, or 24 per cent of the
total, had average reading ability; and 22 students, or 11
per cent, had superior reading ability.
Informal Reading Inventory

1. Twenty-one students read from their assigned texts for the Informal Reading Inventory. The accuracy of oral reading ranged from 88 per cent to 100 per cent, and from 12 errors in pronunciation to no errors. The median was 97 per cent, with two students having perfect scores.

2. The comprehension scores of the same students, when asked questions pertaining to the material they had read, ranged from 20 per cent to 100 per cent. Four students had perfect scores of 100 per cent, six had 80 per cent, seven had 60 per cent, three had 40 per cent, and only one had 20 per cent. The median score was 60 per cent.

3. Using Betts' criteria for the 21 students, four, or 19 per cent, were rated as reading at the independent level; six, or 29 per cent, at the instructional level; and 11, or 52 per cent, at the frustration level.
Sex differences in reading ability

1. Men and women were almost equally divided in the total number in the sample. There were 103 men (52 per cent) and 97 women (48 per cent). A check of the totals of poor, average, and superior readers in all classes disclosed that the men had a greater percentage of average and superior readers, and the women had a greater percentage of poor readers. The men had 3 per cent more superior readers and 8 per cent more average readers than the women. The women had 14 per cent more poor readers than the men.
The hypotheses

This section contains an examination of the hypotheses stated in the first chapter in respect to the findings of this study.

Hypothesis I. There is a difference between the readability level of the assigned text and the reading level of the poor readers.

According to the Nelson-Denny Reading Test, 130 students, or 65 per cent of the total sample of 200, were rated as poor readers. The mean readability level was determined by applying three readability formulas to the required texts. Ninety-six students, or 48 per cent of the total sample, read below the mean readability level of the texts. The hypothesis was judged to be upheld.
i
Hypothesis II. There is a difference between j
the readability level of the assigned text and
the reading level of the average readers. !
The average reader was defined as reading between
12.0 and 14.0. The mean readability level of two books was i
rated at 14.0. The remaining nine books were rated at 10.7 I
or below. Since nine of the 11 books were too easy for the {
average reader, the hypothesis was considered to be sup
ported.
j
i
Hypothesis III. There is no difference between the readability level of assigned texts and the reading level of the superior readers.

There were 22 students, or 11 per cent of the total sample, rated as superior readers according to the Nelson-Denny Reading Test. A superior reader was defined in this study as one who read above 14.0. None of the means of the 11 texts rated by three readability formulas were above 14.0; therefore, all 11 required texts were below the reading level of the superior reader. The null hypothesis was rejected and thus judged to be untenable.
Conclusions

In light of the findings, the following general conclusions have been made:

1. The objectives of community college basic English classes would seem to be best served by a multi-text approach.

2. In any given class the range of reading skills will be wide and will vary from class to class.

3. The readability levels of most reading materials selected for community college English classes do not coincide with the reading ability of the students.

4. Most instructors are not adequately informed as to the difficulty level of the texts or the reading ability of the students.
Educational Implications and j
Recommendations !
Suggestions for curricular application appear to be
justified by the findings and conclusions in this investi
gation based on the data collected.
t
1. To evaluate student reading skills or textbook
difficulty, more than one reading test or readability
121
formula should be used.
2. In publishing basic English skill texts, pub
lishers should make attempts to adjust readability of texts
to reading ability of students.
3. In the four basic classes, showing 64 per cent
of the students reading below the mean readability level of
the text, a textbook committee should screen and then rec
ommend books that would aid in meaningful learning experi-
t
ences.
4. A range of over seven years' difference in the
reading ability of the students in the basic classes sug
gests a multi-level texts approach. In-service training
for instructors in the use of multi-level texts would be
necessary.
5. Supplemental material for use in all English
classes should be developed by the staff and should be
readily available to all English instructors in a centrally
located service area. Adequate clerical personnel should
service this central area.
6. Greater use of audio-visual materials is needed
in classroom instruction. Helpful materials should be de
veloped by the staff, and adequate media already commer-
jcially produced should be purchased. Delivery and pick-up
122
service to the classrooms should be instituted for audio
visual equipment.
7. Since 66 per cent of the students in basic classes are reading below the tenth grade and 45 per cent below the ninth grade, further screening for reading inadequacies should be instituted. A well-developed reading-study skills program presented in a reading-study skills center should heighten the learning experiences for these students.
8. Anthologies having potential merit should be examined by a textbook selection committee. From a selection of possibly five, the anthologies could be rated by Gunning's FOG Index. Every selection in the text should be rated. Competent clerical help could be used in the computation of the formula under the direction of a reading specialist.
9. Reading test scores of their students should be readily available to English instructors for use in class planning.
10. Texts three, four, and five years below the mean readability level of the college transfer classes lack intellectual challenge. Texts for these classes should be rated with a readability formula before adoption.
11. There is a need for English instructors to have a general awareness of readability concepts along with the readability formulas themselves. The quantitative nature of readability formulas presents instructors with an added dimension for textbook selection.
Recommendations for Further Research
In concluding this study, the following areas are
suggested as additional research projects:
1. A parallel study to this investigation, using data from the day school of the same community college, could be undertaken.
2. A program could be developed for the IBM computer that would present the readability of a text after being programmed with basic data on polysyllabic words and sentence lengths.
3. More research is needed to interpret the upper levels of standardized college reading tests in terms of grade level.
4. Readability levels of all texts used in the
college should be rated to determine what is expected of
the community college students in reference to reading.
BIBLIOGRAPHY
1. Alt, Weston M. Aspects of the Junior College. Sacramento, Calif.: California State Department of Education, Division of Higher Education, 1968.

2. Anderson, Kenneth E., et al. The American Two-Year College in Transition. Lawrence, Kan.: University of Kansas, 1969.

3. Anderson, Irving H., and Dearborn, Walter F. "Reading Ability as Related to College Achievement." Journal of Educational Psychology, II (April, 1941), 387-396.

4. Angell, Robert C. "Reading in the Social Sciences." Reading for Life: Developing the College Student's Lifetime Reading Interest. Edited by Jacob N. Price. Ann Arbor, Mich.: University of Michigan Press, 1959.

5. Austin, Mary C., and Huebner, Mildred H. "Evaluating Progress in Reading through Informal Procedures." The Reading Teacher, XV (March, 1962), 338-343.

6. Berg, Ernest H., and Axtell, Dayton. Programs for Disadvantaged Students in the California Community Colleges. Oakland, Calif.: Peralta Junior College, 1968.

7. Berry, Betty T. "A Study of Reading Comprehension at the College Freshman Level." Unpublished Ph.D. dissertation, University of Southern California, 1931.
8. Betts, Emmett A. Foundations of Reading Instruction. New York: American Book Co., 1946.

9. ________. Foundations of Reading Instruction. New York: American Book Co., 1957.

10. Beurman, Donald D. "The Readability of Business Mathematics Textbooks." Unpublished Master's project, University of Southern California, 1959.

11. Blocker, Clyde E., and Richardson, Richard C., Jr. The Two-Year College: A Social Synthesis. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1965.

12. Book, William T. "How Well College Students Read." School and Society, XXIV (August 20, 1927), 242-247.

13. Booker, A. Fourth Mental Measurements Yearbook. Edited by Oscar Krisen Buros. Highland Park, N.J.: Gryphon Press, 1953.

14. Bormuth, John R. Development of Readability Analysis. Chicago: University of Chicago, Office of Education, 1969.

15. Botel, Morton. Guide to the Botel Reading Inventory. Chicago: Follett Publishing Co., 1961.

16. Broom, Mybert E. "An Analysis of Certain Factors Affecting Reading Achievement at the College Level." Unpublished Ph.D. dissertation, University of Southern California, 1931.

17. Centi, Paul. "Intellective and Language Factors Related to College Success." Peabody Journal of Education, XL (September, 1962), 102-105.

18. Chall, Jeanne S. Readability: An Appraisal of Research and Application. Columbus, Ohio: Ohio State University, Bureau of Educational Research, 1958.

19. Christ, Frank L. "The SR/SE Laboratory: A Systems Approach to Reading/Study Skills Counseling." A paper presented at the National Reading Conference meeting, Los Angeles, December 5-7, 1968.
20. Compton College. The College Plan: 1969-1970. Compton, Calif.: N.p., 1969.

21. Cornelsen, Allen D. "A Study of the Readability of General Business Books." Unpublished Master's project, University of Southern California, 1959.

22. Cummiskey, Cletus J. "The Use of Tape Recordings as a Supplementary Study Aid in General Psychology." Dissertation Abstracts, XXI (October, 1960), 812.

23. Dale, Edgar, and Chall, Jeanne S. "A Formula for Predicting Readability." Educational Research Bulletin, XXVII (January 21, 1948), 11-20.

24. D'Amico, Louis A.; Bryant, J. H.; and Prahl, Marie. "The Relationship between MAT Scores and Achievement in Junior College Subjects." Educational and Psychological Measurement, XIX (Winter, 1959), 611-616.

25. Department of Housing and Urban Development. Compton Model Cities: Comprehensive Demonstration Plan. San Francisco: N.p., 1970.

26. Dolch, Edward W. Psychology and Teaching of Reading. Champaign, Ill.: The Garrard Press, 1951.

27. ________. "Testing Reading with a Book." Elementary English, XXVIII (March, 1951), 124-125, 165.

28. Doyle, Marvyl. "Readability as a Key for Evaluating Junior College Freshman English Anthologies." Unpublished Ed.D. dissertation, University of Southern California, 1961.

29. Eller, William. "Future Trends in Materials and Methods for College Reading as Suggested by Current Research." Evaluating College Reading Programs. Fourth Yearbook of the Southwest Reading Conference for Colleges and Universities. Fort Worth, Texas: Texas Christian University Press, 1955.

30. Endler, Norman S., and Steinberg, Danny. "Prediction of Academic Achievement at the University Level." Personnel and Guidance Journal, XLI (April, 1963), 694-699.
31. Farr, James N., and Jenkins, James J. "Tables for Use with the Flesch Readability Formulas." Journal of Applied Psychology, XXXIII (June, 1949), 275-278.

32. ________, ________, and Paterson, Donald. "Simplification of Flesch Reading Ease Formula." Journal of Applied Psychology, XXXV (October, 1951), 333-337.

33. Ferguson, Margaret Ann. "A Study of the Readability of Secretarial Office Practice Textbooks." Unpublished Master's project, University of Southern California, 1958.

34. Flesch, Rudolf F. The Art of Readable Writing. New York: Harper and Bros., 1949.

35. ________. How to Make Sense. New York: Harper and Bros., 1954.

36. ________. Marks of Readable Style: A Study in Adult Education. New York: Bureau of Publications, Teachers College, Columbia University, 1943.

37. ________. "A New Readability Yardstick." Journal of Applied Psychology, XXXII (June, 1948), 221-233.

38. ________. A New Way to Better English. New York: Harper and Bros., 1958.

39. Florida State Department of Education. The Community Junior College in Florida's Future. Tallahassee, Fla.: Florida State Department of Education, 1957.

40. Forbes, Fritz W., and Cottle, William C. "A New Method for Determining Readability of Standardized Tests." Journal of Applied Psychology, XXXVII (June, 1953), 185-190.

41. Freer, Imogene J. "A Study of the Effect of a College Reading Program upon Grade-Point Average in Odessa College, Odessa, Texas." Unpublished doctoral dissertation, Michigan State University, 1965.
42. Fry, Edward B. "Oral Reading Paragraphs." Los Angeles: University of Loyola Reading Clinic, 1958. (Mimeographed.)

43. ________. "A Readability Formula That Saves Time." Journal of Reading, XI, No. 7 (April, 1968), 513-516, 575-578.

44. Garrett, Harley F. "A Review and Interpretation of Investigations of Factors Related to Scholastic Success in Colleges of Arts and Science and Teachers Colleges." Journal of Experimental Education, December, 1949, p. 130.

45. Gille, P. J. "A Simplified Formula for Measuring Abstraction in Writing." Journal of Applied Psychology, XLI (August, 1957), 214-217.

46. Good, Carter V., ed. Dictionary of Education. 2nd ed. New York: McGraw-Hill Book Company, Inc., 1959.

47. Grant, Jessie K. "An Evaluation of the Readability of Selected Economics Textbooks." Unpublished Master's project, University of Southern California, 1958.

48. Gray, William S., and Leary, Bernice E. "What Makes a Book Readable?" Journal of Adult Education, VI (October, 1934), 408-411.

49. ________. What Makes a Book Readable . . .: An Initial Study. Chicago: University of Chicago Press, 1935.

50. Gunning, Robert. The Technique of Clear Writing. New York: McGraw-Hill Book Company, Inc., 1952.

51. Hadley, L. S. "New College Students Lack Study Techniques." School and Society, LXXXV (November 9, 1957), 350-353.

52. Halfter, Irma T., and Douglass, Frances. "Inadequate College Readers." Journal of Developmental Reading, I (Summer, 1958), 42-45.
53. Harris, Albert J. How to Increase Reading Ability. New York: Longmans, Green and Co., 1956.

54. ________. How to Increase Reading Ability. New York: David McKay Company, Inc., 1961.

55. Hartman, Neal E. Correlates of Educational Outcome for Junior College Remedial Students. St. Louis, Mo.: Florissant Valley Community College, 1968.

56. Hildreth, Gertrude. Learning the Three R's. Minneapolis, Minn.: Educational Publishers, Inc., 1947.

57. Horrocks, John E., and Schoonover, Thelma I. Measurement for Teachers. Columbus, Ohio: Charles E. Merrill Publishing Co., 1968.

58. Howden, Mary E. "A Nineteen-Year Follow-Up Study of Good, Average, and Poor Readers in the Fifth and Sixth Grades." Dissertation Abstracts, XXIX (June, 1968).

59. Kender, Joseph P. An Analysis of Factors Associated with Informal Reading Tests at the Eighth-Grade Level. Boston: International Reading Association, 1968.

60. Kessler, Edward. "Readability of Selected Contemporary Books for Leisure Reading in High School Biology." Science Education, XXV (October, 1941), 260-264.

61. Kingston, Albert J., and George, Clay E. "The Effects of Special Reading Training upon the Development of College Students' Reading Skills." Journal of Educational Research, L (February, 1957), 471-475.

62. Klare, George R. The Measurement of Readability. Ames, Iowa: Iowa State University Press, 1963.

63. ________, and Buck, Byron. Know Your Reader: The Scientific Approach to Readability. New York: Hermitage House, 1954.
64. Lee, Wayne D. "Why Bother to Teach Critical Reading Skills to College Reading Classes?" A speech delivered at the National Reading Conference, Tampa, Florida, December 2, 1967.

65. Lefstad, Dana J. Community Junior Colleges. Denver, Colo.: Colorado State Department of Education, 1967.

66. Lewerenz, Alfred S. Books Evaluated by Means of the Vocabulary Grade Placement Formula. Los Angeles: Los Angeles City School District, 1937.

67. ________. "Measurement of the Difficulty of Reading Materials." Educational Research Bulletin (Los Angeles City School District), VIII (March, 1929), 11-16.

68. ________. "Selection of Reading Materials by Pupil Ability and Interest." Elementary English Review, XVI (April, 1939), 151-156.

69. ________. "A Vocabulary Grade Placement Formula." Journal of Experimental Education, III (1935), 236.

70. ________. "Vocabulary Grade Placement of Typical Newspaper Content." Educational Research Bulletin (Los Angeles City School District), X (September, 1930), 4-6.

71. Lewis, M. M. Language in Society. New York: Social Sciences Publisher, 1948.

72. Lively, Bertha A., and Pressey, S. L. "A Method for Measuring the 'Vocabulary Burden' of Textbooks." Educational Administration and Supervision, IX (October, 1923), 389-398.

73. Lorge, Irving. "The Lorge and Flesch Readability Formulae: A Correction." School and Society, LXVII (February 21, 1948), 141-142.

74. ________. "Predicting Reading Difficulty of Selections for Children." Elementary English Review, XVI (October, 1939), 229-233.
75. Lorge, Irving. "Word List as Background for Communication." Teachers College Record, XLV (May, 1944), 543-552.

76. Lunn, Mervel S., Jr. "The Prediction of Success of Students Enrolled in Professional Education Courses at the University of Oklahoma." Dissertation Abstracts, XXII (November, 1961), 1490-1491.

77. Lynwood Chamber of Commerce. "Standard Industrial Survey Report." Lynwood, Calif., September, 1968.

78. McClusky, Howard Y. "A Quantitative Analysis of the Difficulty of Reading Materials." Journal of Educational Research, XXVIII (December, 1934), 276-282.

79. McCord, Hallack. "Increase in Measured I.Q." Journal of Developmental Reading, V (Spring, 1962), 214-215.

80. McElroy, John. The Limping Language of Industry. Columbus, Ohio: Readability Associates, 1949.

81. ________, et al. Guide for Air Force Writing. Air Force Manual 11-13. Maxwell, Ala.: Department of the Air Force, Maxwell Air Force Base, Air University, June, 1953.

82. McLaughlin, G. Harry. "Clearing the SMOG." Journal of Reading (International Reading Association), XIII, No. 3 (December, 1969), 210.

83. ________. "SMOG Grading: A New Readability Formula." Journal of Reading (International Reading Association), XII, No. 8 (May, 1969), 639-646.

84. Madrid, Ernest W. "A Study of the Readability of Gregg Shorthand Textbooks." Unpublished Master's project, University of Southern California, 1960.

85. Mathews, Ernest G.; Larsen, Robert P.; and Butler, Gibbon. "Experimental Investigation of the Relation between Reading Training and Achievement in College Composition Classes." Journal of Educational Research, XXXVIII (March, 1945), 499-505.
86. Miller, George A. Language and Communication. New York: McGraw-Hill Book Company, Inc., 1951.

87. Moen, Norman W. Minnesota Junior College Faculty Interests and Concerns. Minneapolis: University of Minnesota Press, 1968.

88. Morriss, Elizabeth C., and Halverson, Dorothy. "Idea Analysis Technique." Unpublished paper on file in Columbia University Library, 1938.

89. Myran, Gunder A. Community Services: An Emerging Challenge for the Community College. Washington, D.C.: American Association of Junior Colleges, 1969.

90. Nelson, M. J., and Denny, E. C. Examiner's Manual: The Nelson-Denny Reading Test. Revised by James I. Brown. Boston: Houghton Mifflin Co., 1960.

91. Ojemann, Ralph H. "The Reading Ability of Parents and Factors Associated with Reading Difficulty of Parent Education Materials." University of Iowa Studies in Child Welfare, VIII (1934), 11-32.

92. Orange Coast Junior College District. A Comparison of the California Junior College Active Enrollments with the Orange Coast College and Golden West College Enrollments, Spring, 1968. Costa Mesa, Calif.: Orange Coast Junior College District, 1969.

93. Orr, David B. Sixth Mental Measurements Yearbook. Edited by Oscar Krisen Buros. Highland Park, N.J.: Gryphon Press, 1965.

94. Paramount Chamber of Commerce. "Standard Industrial Survey Report." Paramount, Calif., November, 1968.

95. Parr, F. W. "Teaching College Students How to Read." Journal of Higher Education, II (June, 1931), 324-330.

96. Patty, W. W., and Painter, W. I. "Improving Our Method of Selecting High-School Textbooks." Journal of Educational Research, XXIV (June, 1931), 23-32.
97. Pauk, Walter J. "Basic Skills Needed in College Reading." International Reading Association Conference Proceedings, III (1958), 44-45.

98. ________. "College Reading Instruction: Past, Present, Future." A paper presented at the College Reading Association Conference, Knoxville, Tennessee, April 4-6, 1968.

99. Rice, Gary A., and Scofield, William. "A Contrast between the 'Successful' and 'Dropout' Student at Yakima Valley College." Yakima, Wash., March, 1969.

100. Roueche, John E., and Hurlburt, Allan S. "The Open Door College: The Problem of the Low Achiever." Journal of Higher Education, XXXIX (November, 1968), 453.

101. Scheidt, Omar A. Annual Report of the President. Yakima, Wash.: Yakima Valley College, 1968.

102. Schrader, Gene. An Evaluation of Adult Basic Education Programs in Wyoming: A Follow-Up Study. Riverton, Wyo.: Central Wyoming College, 1968.

103. Shaw, Philip B. "College Reading-Improvement Programs of the Future." International Reading Association Conference Proceedings, VI (1961), 49.

104. ________, and Townsend, Agatha. "Diagnosis of College Reading Programs by Use of Textbooks." The Reading Teacher, XIV (September, 1960), 31-36.

105. Sherman, L. A. "The Literary Sentence-Length in English Prose" and "The Decrease in Predication." Analytics of Literature. Boston: Ginn and Co., 1893.

106. Smith, Edwin H., and Smith, Marie P. Teaching Reading to Adults. Washington, D.C.: National Association for Public School Adult Education, 1962.
107. Smith, Maryl King. "An Analysis of the Readability of Selected First and Second Grade Reading Textbooks to Determine Specific Elements of Reading Difficulty." Unpublished Master's project, University of Southern California, 1962.

108. Smith, Nila Banton. American Reading Instruction. Newark, Del.: International Reading Association, 1965.

109. Stanley, Julian C. Measurement in Today's Schools. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1964.

110. Strang, Ruth. Diagnostic Teaching of Reading. New York: McGraw-Hill Book Company, Inc., 1969.

111. Sugarman, Michael N. "What About the Dropouts?" Michigan Education Journal, XLIV (April, 1967), 43.

112. Sweeney, Mary Rose. "A Study of the Relationship of the Quantity of High School English to College Performance in English." Dissertation Abstracts, XXII (February, 1962), 2640-2641.

113. Thorndike, Edward L. A Teacher's Word Book. New York: Teachers College, Bureau of Publications, Columbia University, 1921.

114. ________. A Teacher's Word Book of 20,000 Words. New York: Teachers College, Bureau of Publications, Columbia University, 1931.

115. ________, and Lorge, Irving. The Teacher's Word Book of 30,000 Words. New York: Teachers College, Bureau of Publications, Columbia University, 1944.

116. Thorndike, Robert L., and Hagen, Elizabeth. Measurement and Evaluation in Psychology and Education. New York: John Wiley and Sons, Inc., 1955.

117. Tinker, Miles A., and McCullough, Constance. Teaching Elementary Reading. New York: Appleton-Century-Crofts, Inc., 1962.
118. Townsend, Agatha, and Orr, David B. Sixth Mental Measurements Yearbook. Edited by Oscar Krisen Buros. Highland Park, N.J.: Gryphon Press, 1965.

119. Wallace, Dorothy M. "A Study of the Readability of Gregg Shorthand Textbooks from Shorthand Plates." Unpublished Master's project, University of Southern California, 1960.

120. Walther, Cyrilla. "The Reading Difficulty of Magazines." School Review, LI (February, 1943), 100-105.

121. Webb, Sam C., and McCall, John N. "Predictors of Freshman Grades in a Southern University." Educational and Psychological Measurement, XIII (Winter, 1953), 660-663.

122. Wheat, Harry G. The Teaching of Reading. Boston: Ginn and Co., 1923.

123. Wheeler, L. R., and Smith, E. H. "A Practical Readability Formula for the Classroom Teacher in the Primary Grades." Elementary English, XXXI (November, 1954), 397-399.

124. Winther, Sven F., et al. A Longitudinal Study of the Beginning Freshman Class of 1963 at the University of New Mexico. Albuquerque, N.M.: University of New Mexico, 1969.

125. Witty, Paul A. "Practices in Corrective Reading in Colleges and Universities." School and Society, LII (November 30, 1940), 554-563.

126. ________, and Lehman, Harvey C. "Teaching the College Student How to Study." Education, XLVIII (September, 1927), 47-56.

127. Wortham, Mary H. "Reading: Emerging Issues in the Two-Year Colleges." A speech delivered at the Annual Convention of the National Council of Teachers of English, Honolulu, Hawaii, November, 1967.
128. Yoakum, Gerald A. "The Reading Difficulty of School Textbooks." Elementary English Review, XXII (December, 1945), 304-309.

129. ________. "Revised Directions for Using the Yoakum Readability Formula." Pittsburgh, Pa., 1948.

130. Yule, G. Udny. The Statistical Study of Literary Vocabulary. Cambridge, England: Cambridge University Press, 1944.