This dissertation has been microfilmed exactly as received. 66-10,551

SUTHERLAND, Samuel Philip, 1932-
A FACTOR ANALYTIC STUDY OF TESTS DESIGNED TO MEASURE READING ABILITY.
University of Southern California, Ph.D., 1966
Education, psychology

University Microfilms, Inc., Ann Arbor, Michigan

SAMUEL PHILIP SUTHERLAND. All Rights Reserved. 1967

A FACTOR ANALYTIC STUDY OF TESTS DESIGNED TO MEASURE READING ABILITY

by
Samuel Philip Sutherland

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Educational Psychology)

June 1966

UNIVERSITY OF SOUTHERN CALIFORNIA
THE GRADUATE SCHOOL
UNIVERSITY PARK
LOS ANGELES, CALIFORNIA 90007

This dissertation, written by SAMUEL PHILIP SUTHERLAND under the direction of his Dissertation Committee, and approved by all its members, has been presented to and accepted by the Graduate School, in partial fulfillment of requirements for the degree of DOCTOR OF PHILOSOPHY.

Dean

DISSERTATION COMMITTEE
Chairman

DEDICATION

To my wife, Joann, for her patience, unselfishness, and constant encouragement.

ACKNOWLEDGMENTS

The writer is deeply indebted to several persons who have played important roles in the preparation of this study. Special appreciation is given to Dr. Newton Metfessel, Chairman of the Dissertation Committee, for his assistance, invaluable criticism, and encouragement throughout the study, and to Dr. Kenneth D. Hopkins for his advice and help with the computer programs. The suggestions given by Dr. J. P. Guilford and Dr. William B. Michael were most helpful. The assistance of Superintendents Dr. W. Tracy Gaffey and Fred Sparks, and Principals Frank Thompson, Will Longenecker, Lew Moore, and Bill Baker in obtaining subjects and assisting in test administration is gratefully acknowledged. The editing and typing of the manuscript by Mrs. Jessie Levine are also deeply appreciated.

TABLE OF CONTENTS

DEDICATION
ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES

Chapter
I. INTRODUCTION
   Area of Investigation
   Need for Research in This Area
   Background of the Problem
   Statement of the Problem
   Questions to Be Answered
   Procedures
   Delimitations of the Study
   Definitions of Terms
   Outline of the Research
   Outline of Remainder of the Dissertation
II. REVIEW OF RELEVANT LITERATURE
   Factor Analyses of Reading Tests
   Correlational Studies of Reading Tests
   Relationship Between Reading and Other Mental Abilities
   Skills Related to Reading Ability
   Mode of Response as a Variable
   Conclusions
III. PHASE I: THE PRELIMINARY INVESTIGATION -- PROCEDURES, FINDINGS, AND CONCLUSIONS
   Introduction
   Procedures
      Instruments Under Investigation
      Subjects
      Administrative Procedures
      Scoring
      Statistical Analysis
   Findings
      Findings Related to Question 1
      Findings Related to Question 2
      Findings Relating to Question 3
   Conclusions
IV. PROCEDURES -- PHASE II: FINAL INVESTIGATION
   Test Instruments
   Reference Tests: Bases for Selection
   Subjects
   Administration Procedures
   Scoring
   Statistical Analysis
V. FINDINGS
   Findings Relating to the Question, "What Is the Nature of the Vocabulary Subtest Factor(s)?"
   Findings Relating to the Question, "What Is the Nature of the Comprehension Subtest Factor(s)?"
   Relationship to Previous Studies
   Generalization to Phase I
   Characteristics of the Test Battery
   Sex Differences
VI. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
   Summary
      Area of Investigation
      Background of the Problem
      Questions to Be Answered
      Procedures
      Findings
      Supplementary Findings
   Conclusions
   Recommendations for Test Users
   Recommendations for Further Research
BIBLIOGRAPHY
APPENDICES
   I. TESTS USED IN PHASE I
   II. TESTS USED IN PHASE II
   III. LISTING OF TAXONOMY OF EDUCATIONAL OBJECTIVES

LIST OF TABLES

1. Summary of the Research Design
2. Factor Pattern After Orthogonal Transformation
3. Centroid and Orthogonal Rotated Factor Loadings Together with Communalities
4. Rotated Factorial Matrix
5. Factor Pattern for Normal Group (Showing Factor Loadings)
6. Summary -- Review of Literature
7. Order of Administration -- Reading Tests
8. Classification of Items According to the Taxonomy of Educational Objectives
9. Matrix of Factor Loadings -- Reading Tests (.30+ = Significant Loading)
10. Eigenvalues and Cumulative Proportion of Variance -- Reading Tests
11. Intercorrelations among Standardized Reading Tests, Fifth Grade Subjects
12. Rotated Factor Matrix -- Reading Tests
13. Intercorrelations between Subtest Scores: Taxonomy Classification
14. Rotated Factor Matrix
15. Rotated Factor Matrix Showing Factor Loadings
16. Means, Standard Deviations, and Reliability Coefficients -- California Reading and Selected Reference Tests
17. Intercorrelations among the California Reading Tests and Selected Reference Tests
18. Unrotated Factor Matrix -- California Reading Tests and Selected Reference Tests
19. Rotated Factor Matrix Comparing Boys and Girls -- California Reading Tests and Selected Reference Tests
20. Comparison between Boys and Girls Means and Standard Deviations on California Reading Tests and Selected Reference Tests

LIST OF FIGURES

1. Graphic Illustration of Data in Table 12 Showing Factor Loadings on Factors I and III
2. Graphic Illustration of Data in Table 12 Showing Factor Loadings on Factors I and II
3. Graphic Illustration of Data in Table 12 Showing Factor Loadings on Factors II and III
4. Model of the Structure-of-Intellect
5. Model of Semantic Category Showing Those Factors Selected as Reference Factors
6. Factorial Structure of Vocabulary Subtest (From California Reading Test) Imposed Upon "Structure-of-Intellect" Model
7. Factorial Structure of Comprehension Subtest (From California Reading Test) Imposed Upon "Structure-of-Intellect" Model

CHAPTER I

INTRODUCTION

The most important single question concerning any measuring instrument is, "What does it measure?"
It is the skills, abilities, traits, knowledge, processes, and other factors which contribute to test score variance that concern test users. Early in the testing movement it was assumed that tests measured what they appeared to measure. Through the years, however, it became obvious that appearances are quite untrustworthy, and for this reason validity based upon observation of the test items (face validity) is not recommended by the Committee on Technical Recommendations (23). They recognize only those validities based upon acceptable correlational procedures and/or rigorous content analysis.

In some cases there is a lag between recommendations and practice, so that test users are furnished with inadequate information concerning the nature of the sources of variance in the various test results. This appears to be true of the very popular survey reading tests which are common to most children's school experience and are required in California schools at certain grade levels. After several years of experience discussing reading test scores with psychologists, principals, and teachers, and noting the high correlations between reading and aptitude scores, the writer felt that there was a need for greater precision and understanding of what these reading tests measure.

Area of Investigation

This study was an attempt to answer the question, "What do survey reading tests measure?" This is the problem of the "validity" of the standardized, group-administered reading tests which are given to over a million school pupils each year. Since these are achievement or ability tests rather than aptitude or intelligence measures, the attempt is to measure current status rather than to predict future performance. Therefore, the concern is with content and construct validity rather than predictive validity.

Need for Research in This Area

There appears to be imprecision, ambiguity, and diversity of opinion concerning the nature of the skills and/or abilities which are being measured by survey reading tests. Since this category of tests is one of the most used in the country, and important decisions (e.g., grouping, evaluation of instruction, and grade placement) are influenced by them, it is important that there be a thorough understanding of the factors which contribute to test variance. As Cronbach said:

    Every test is to some degree impure, and very rarely does it measure exactly what its name implies. Yet the test cannot be interpreted until we know what factors determine scores. (7:120)

The Technical Recommendations place primary responsibility for publication of validity data upon the manuals which accompany the tests (23). In the manuals which accompany the survey reading tests in question there is a dearth of validity data; seldom are any data of an empirical nature given. In most cases the manuals comment only that, while the test items were being written, certain aspects of the reading process were kept in mind.

To demonstrate the problem, the following test descriptions, taken from the test manuals, are presented. In one manual, under the interrogative heading "What are the reading skills tested?" the following statement is made: "So far as possible questions on each passage are distributed among five general categories of skills identified."
These skills are then listed as "ability to understand direct statements," "ability to interpret and summarize," "ability to see motives," "ability to observe the organization," and "ability to criticize" (62:20). Nowhere is there any empirical evidence included which might validate these labels.

The Iowa Test of Basic Skills (ITBS) manual attempts to classify each item in the test according to its stated category, but its authors admit that the classification is "somewhat subjective" (66:23). No evidence of any empirical validation of their classification is given.

In another manual, the only mention of a validity-type statement is that the questions were "designed to measure various aspects of reading comprehension" (63:4). This manual goes on to list some of these aspects, such as "ability to select the main thought," "ability to understand the literal meaning," "ability to see the relationships among the ideas," and "ability to determine the meaning of a word from context."

In another manual, from a leading test publisher, the reading comprehension test is described as being able to

    . . . reveal strengths and weaknesses in several general areas, among which are: following specific instructions, finding sources and doing reference work, comprehending factual information, making proper inferences and drawing valid conclusions from materials read. (69:45)

The test manuals claim that their respective tests measure a variety of skills, but no evidence of such is provided in the manuals other than the usual statement that the items were constructed with these certain skills in mind. Thorndike and Hagen, in discussing reading tests, recognized the problem and gave as a solution the statement that "the potential user must examine each test in which he is interested . . . in order to judge whether that particular test is the one best suited for his purpose" (13:295). It is doubtful, however, that the typical test user has a more substantial basis for judgment than the test makers.

Background of the Problem

It seems apparent that the question, "What do reading tests measure?" is inadequately answered by the respective test manuals. They obviously are trying to measure "reading" of some kind. Good practice dictates that, prior to the construction of a measuring instrument, that which is to be measured be defined in operational terms. In the present area of study this indicates that one should answer the question "What is reading?" before one can answer the question "What do reading tests measure?" For this reason some effort was expended toward locating a definition, or "goal of reading instruction," which could be stated in measurable terms. This effort proved quite fruitless. Although there are many definitions of reading, most are too broad or too vague to be of value from the standpoint of measurement.

Probably the least satisfactory definition, from the measurement view, is that given by Burton. He defined reading as

    . . . the vital part of a rich and varied program of learning experiences through which an individual learns to know and manage himself, to know and to mingle with other people, to know and to utilize his environment. (6:vii)

Further on in his book he defined reading in a "limited sense" as "a form of thinking, problem solving, or reasoning involving such functions as analyzing, discriminating, evaluating and synthesizing" (6:27).
This is rather typical of many definitions and presents a common difficulty: it involves too many vague and undefined words. Compounding the problem is the inclusion of such terms as "problem solving," "analyzing," "evaluating," and others which historically have come under "aptitude," "mental abilities," or "intelligence" rather than ability or achievement. This trend is increasing the difficulty for the test maker.

The purpose of reading tests is not, however, to measure the theoretical constructs which leading theorists devise, but to measure reading as taught in the classroom. At present, reading instruction, especially in the primary grades, is aimed primarily at vocabulary building and word attack skills rather than, as Burton stated, at "evaluating," "criticizing," "managing oneself," or "mingling with other people" (6:vii). Therefore, one would expect the same lag between theory and practice to apply to both classroom instruction and the measuring instruments.

It appears that current reading theory includes constructs which as yet are neither well defined nor reliably measured. It might be quite some time before instruments are devised which do measure these and other aspects of reading. It is important, however, to know what current tests do measure. The test manuals implicitly, and sometimes explicitly, indicate that sight vocabulary as well as some of the higher mental processes are being measured reliably by their survey reading tests. It is the purpose of this paper to determine if this goal has, in fact, been reached.

Statement of the Problem

There is no empirical basis for determining which sources account for reading test score variance.

Questions to Be Answered

1. How many factors are needed to account for reading test score variance?
2. Does the typical vocabulary (or word meaning) and paragraph meaning (or comprehension) division represent a true dichotomy?
3. Can the test items be regrouped in such a way that separate factors will emerge (using Bloom's Taxonomy of Educational Objectives (2) as a model)?
4. What is the nature of the vocabulary subtest factor(s) (using Guilford's "Structure-of-Intellect" (71) as a model)?
5. What is the nature of the comprehension subtest factor(s) (using Guilford's "Structure-of-Intellect" as a model)?

Procedures

Phase I (the preliminary investigation) was devised to discover possible answers to the first three questions in the preceding section. Five of the most popular survey reading tests were administered to a sample of fifth grade students, and the resulting raw scores were factor analyzed.

Phase II (the final investigation) was devised to discover possible answers to Questions 4 and 5. In this phase a battery of six semantic tests was used as reference tests and was administered, along with a survey reading test, to a sample of seventh grade pupils. Each reference test represented a factor which could have been present in the reading test as a source of variance. The scores of the entire battery were factor analyzed to determine which of the factors would account for the reading test score variance.

Delimitations of the Study

The subjects in this study were pupils of the fifth and seventh grades in public schools in Southern California. Swineford (49) and Anderson (17) have shown that the factor patterns for different age levels are similar but not identical; therefore, caution is urged in generalizing to other grade levels.
It is also reasonable to assume that pupils who have received instruction in schools whose curricular goals differ in emphasis from those found in this study might produce different results.

The tests analyzed in this study were group-administered survey reading tests and were not designed to represent all possible types of reading tests. It is reasonable to assume that other kinds of reading tests might yield different results.

Definitions of Terms

Survey reading test. This is a test of general reading ability. It is designed to measure a variety of skills but to yield only one or two scores. It is the opposite of a "diagnostic test," which yields several scores.

Validity. A test is valid to the extent that what it measures or predicts is known. According to the California test glossary:

    a. Content validity: How well the content of the test items samples the universe of possible test items.
    b. Construct validity: Concerns the description of psychological qualities or "constructs" which a test measures.
    c. Predictive validity: "Relates to how well predictions made from the test are confirmed by data collected at a later time."
    d. Concurrent validity: Refers to how well test scores match measures of "contemporary criterion performance." (61:18)

Factor analysis. This is a statistical process designed to determine the sources of variance operating in a pattern of intercorrelations. Each source of variance which emerges is called a "factor."

Factor loading. A numerical value which indicates the saturation of a variable on a given factor is known as a factor loading. It is the end result of a factor analysis.

Communality. Communality indicates that portion of the variance of each variable which is correlated with the other variables in the battery. It is obtained by summing the squared factor loadings in each row of a factor matrix.
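A minimal computational sketch may make the last two definitions concrete. The Python fragment below is illustrative only and is not part of the original analysis; the loading values are read from the first three rows of Table 9 in Chapter III, with decimals restored.

    # Communality: the sum of the squared factor loadings in each row of a
    # factor matrix. Loadings echo the first three rows of Table 9 (Chapter III).
    import numpy as np

    # Rows = tests, columns = factors I-IV.
    loadings = np.array([
        [0.84, 0.10, 0.23, 0.06],    # California Vocabulary
        [0.88, 0.23, 0.11, 0.02],    # California Comprehension
        [0.86, -0.03, 0.07, -0.14],  # Iowa Vocabulary
    ])

    communalities = (loadings ** 2).sum(axis=1)
    print(communalities.round(3))  # the variance each test shares with the battery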
Outline of the Research

Table 1 contains a summary of the research design in tabular form.

TABLE 1
SUMMARY OF THE RESEARCH DESIGN

Area of Investigation: Survey reading test validity.
Need for Research: Very little empirical data are given in the test manuals.
Statement of Problem: We do not know which factors contribute to test score variance.
Findings of Literature: (1) Most reading tests are factorially simple; (2) most reading tests measure "verbal comprehension."
Questions to Be Answered: (1) How many factors account for reading test variance? (2) What is the nature of the factors involved in reading score variance?

Phase I -- Procedures: Administer reading test battery; factor analyze the resulting intercorrelations. Subjects: 5th grade pupils, N = 264. Instruments: Reading tests from the following achievement test batteries (5th grade level): S.T.E.P., SRA, Metro, Iowa, California. Statistical Analysis: Factor analysis (principal axes), orthogonal rotation, varimax criterion.

Phase II -- Procedures: Administer representative reading test and battery of reference tests; factor analyze the resulting intercorrelations. Subjects: 7th grade pupils, N = 248. Instruments: California Reading Test; Wide Range Achievement; Sentence Selection; Arithmetic; Verbal Analogies I and III; Word Classification(a). Statistical Analysis: Factor analysis (principal axes), orthogonal rotation, varimax criterion.

(a) From Guilford's battery.

Outline of Remainder of the Dissertation

Chapter II contains a review of relevant literature. All articles pertaining to the topic of reading test validity were reviewed. Only those articles which reported empirical data or gave conclusions based on reports of empirical data were utilized. Articles based on subjective opinion or face validity were not included.

Chapter III presents Phase I of the study. This section describes the first phase in its entirety, including procedures, subjects, findings, and conclusions.

Chapter IV develops the procedures of Phase II of the study. Here the subjects, test instruments, and other descriptive materials concerning the second phase are presented. All the information needed to replicate this study is included in this chapter.

Chapter V presents the findings of the study. Included are the results of the statistical applications to data obtained in Phase II. These findings are then related to the questions asked in the introductory chapter.

Chapter VI summarizes the entire study, discusses some conclusions made on the basis of the findings, and puts forth some recommendations pertaining to test validity and further research in this area.

CHAPTER II

REVIEW OF RELEVANT LITERATURE

Studies which are relevant to the present study have been published sporadically since the early 1920's. The primary motivations for these studies may be classified in three ways: (1) those concerned with the reading process itself, (2) those concerned with the nature of mental abilities, and (3) those dealing primarily with the nature of reading tests. Whatever the primary motivation or interest, the researchers utilized tests which were similar to those being investigated in this study. Therefore, the studies will be classified not according to the interest of the author, but according to the statistical methodology and/or results. This chapter will discuss the following: (1) factor analyses of reading tests, (2) correlational studies of reading tests, (3) the relationship between reading and other mental abilities, (4) skills related to reading ability, (5) mode of response as a variable, and (6) a summary of previous studies.

Factor Analyses of Reading Tests

Since this paper is concerned with the factorial validity of reading tests, factor studies are the most relevant. Studies reviewed in this section are those which included the administration of a battery of tests of a verbal nature and a factor analysis of the resulting correlation matrix.

One of the earliest researchers to do this was Schneck (43). In an attempt to discover more of the nature of verbal and numerical abilities, he administered five verbal and several numerical tests. The five verbal tests were: (1) a vocabulary test in which the subject was to choose the best synonym for the stimulus word, (2) a vocabulary test in which the testee was to choose the best antonym for the stimulus word, (3) a verbal analogies test, (4) a sentence completion test, and (5) a "disarranged sentences" test. The intercorrelations among the first three tests were high. The sentence completion test correlated moderately with the first three, and the disarranged sentences test showed little correlation with the others. A factor analysis of the entire battery of verbal and numerical tests led Schneck to conclude: (1) a general factor accounts for most of the variance of the verbal tests, and the vocabulary tests load highest on this factor; and (2) verbal and numerical abilities have very little in common.

Chein (21) administered fourteen verbal, numerical, and spatial tests to a group of college students. Among the verbal tests were a sentence completion, verbal analogies, grammatical analogies, verbal generalization, and a grammatical generalization.
Factor analysis of the entire battery yielded three factors: (1) a verbal factor, (2) a numerical factor, and (3) a spatial factor. These are represented by columns I, II, and IV in Table 2.

TABLE 2
FACTOR PATTERN AFTER ORTHOGONAL TRANSFORMATION

Test                              I (V)    II (N)   III      IV (S)   h2
1. Sentence completion            .633     .355     -.006    .041     .528
2. No. series completion          .325     .501     .076     .403     .525
3. Minnesota Form Board           .267     -.012    -.109    .675     .539
4. Verbal analogies               .657     .300     .094     .257     .597
5. Numerical analogies            -.015    .478     .305     .422     .500
6. Grammatical analogies          .578     .271     .438     .016     .600
7. Spatial analogies              .277     .488     .001     .564     .633
8. Arithmetic reasoning           .365     .601     .094     .344     .622
9. Anagrams                       .276     .304     .293     .256     .320
10. Verbal generalizations        .475     .053     .205     .404     .434
11. Numerical generalizations     .239     .368     .433     .400     .540
12. Grammatical generalizations   .520     .314     .412     .115     .552
13. Spatial generalizations       .282     .109     .260     .622     .546
14. Mod. Kelley Spatial           .317     .201     -.104    .754     .720
Proportion of variance            .169     .125     .063     .190     .547

SOURCE: I. Chein, "An Empirical Study of Verbal, Numerical and Spatial Factors in Mental Organization," The Psychological Record, III (May, 1939), 71-94.

Davis (25) constructed nine tests specifically designed to measure his nine hypothesized "basic reading skills": (1) knowledge of word meaning, (2) ability to select the appropriate meaning for a word in context, (3) ability to follow the organization of a passage, (4) ability to follow the main thought of a passage, (5) ability to answer questions answered directly in the passage, (6) ability to answer questions inferred in the passage, (7) ability to make inferences, (8) ability to determine the mood of the author, and (9) ability to determine the author's viewpoint. These tests were administered to college freshmen. Factor analysis yielded nine "components" which corresponded to the nine hypothesized skills. As it stands, this result was rather unique in breaking reading into components. However, Thurstone reanalyzed Davis' data using a centroid solution and concluded that one factor could account for most of the test variance. Residuals following the removal of the first factor were insignificant (.00-.07). Thurstone called the one factor "reading ability" and concluded that "the question still remains to be investigated by new tests in the hope of identifying fundamental parameters of reading ability" (54:187). In the debate between these two men, Thurstone appears to have the greater support. Davis, in his original analysis, used the principal components method with unity in the diagonal, which includes a good deal of error variance and invariably produces as many factors as there are variables (tests). It appears that Davis' residuals were due largely to the unreliability of his homemade instruments.

Robinson and Hall (42) analyzed the correlations resulting from the administration of a large number of tests to college students. Included in the battery were tests of geography, history, art, chart reading, map reading, vocabulary, spatial relations, inductive reasoning, and rate of reading, as well as grade point averages. They used Thurstone's centroid method of analysis and orthogonal rotations, and found the following factors: (1) "attitude," (2) inductive reasoning, (3) verbal, (4) not interpretable, (5) chart reading, and (6) rate of reading. In this analysis the only factor of a "reading" nature was the "verbal" factor.

Cassell (19), in a more recent study, administered several tests to ninth grade pupils.
The test battery included the Iowa Tests of Social Concepts, Natural Science, Correctness of Expression, Quantitative Thinking, Interpretation of Social Studies, Literature, and Vocabulary; the Cooperative English Comprehension; the D.A.T. Verbal Reasoning, Numerical Reasoning, Spelling, and Sentences tests; and the California Test of Mental Maturity (language and nonlanguage). The resulting factor analysis and rotation yielded six factors: (1) correctness of expression, (2) intelligence (CTMM), (3) quantitative, (4) natural science, (5) reading competency, and (6) basic social studies. This is another instance of a general reading test factor, and of the resistance of this factor to breaking down. The results of this analysis are shown in Table 3.

TABLE 3
CENTROID AND ORTHOGONAL ROTATED FACTOR LOADINGS TOGETHER WITH COMMUNALITIES (N=124)

(The body of this table is illegible in the transcript. The variables were the CTMM language and nonlanguage scores; the D.A.T. Verbal Reasoning, Numerical Ability, Language Usage -- Spelling, and Language Usage -- Sentences scores; the Cooperative English Reading Comprehension total score; and the Iowa Tests of Educational Development scores for Basic Social Studies, Basic Natural Science, Correctness of Expression, Quantitative, Interpretation of Social Studies, Interpretation of Natural Science, Interpretation of Literature, and Vocabulary.)

NOTE: Decimal points omitted; loadings of 300 or more considered statistically significant at the 1 per cent level.

SOURCE: R. N. Cassell and E. J. Stancik, "Factorial Content of the Iowa Tests of Educational Development and Other Tests," Journal of Experimental Education, XXIX (December, 1960), 193-196.

Woodrow (58) administered fifty-two "mental tests" to a sample of college students. Tests of a verbal nature in this battery included sentence completion, constituting definitions, opposites and similarities, social judgment, recognition of the mental states of the speaker, memory of names and faces, opposites, disarranged sentences, disarranged syllables, analogies, the Otis Intelligence Scale (advanced, Form A), an alphabet test, a skeleton words test, memory span, and word building (anagrams).
In spite of ample opportunity for factorial breakdown, the verbal factor remained intact as no subfactor of a verbal nature emerged. Anderson and Slivinske (17) analyzed the California Achievement Test Battery including arith metic, reading, and language and the California Test of Mental Maturity as well as report card grades in the various content areas. Their fourth grade sub jects produced four factors: (1) achievement test (loadings in the three achievement tests), (2) school grades, (3) CTMM language, and (4) CTMM nonlanguage. Their fifth and sixth grade subjects produced four factors: (1) verbal (reading and CTMM language), (2) school grades, (3) CTMM nonlanguage, and (4) arithmetic. In the researches which have been reviewed thus far one conclusion is quite apparent— most of the variance on reading tests, whatever their format or content, can be attrib' ced to one factor, and this factor is the mental ability which has come to be called "verbal comprehension." French, in an attempt to establish a library of factorially pure tests, surveyed factorial studies which had been published up to that time (1951). Concerning the verbal compre hension factor, he wrote: It is the writer's view that verbal comprehension has exhibited remarkable resistance against breaking down into sub-factors. A large number of analyses have included enough different types of verbal tests that there have been many opportunities for the appearance of sub-factors. (30:87) He further made a cogent distinction between "compre hension" of the English language and "production," as it is involved in fluency tests. He saw these as distinct factors. Lennon completed an extensive review of the research literature concerning the topic "what can be measured in reading," and concluded: It seems entirely clear that numerous super ficially discreet reading skills to which separate names or titles have been attached are in fact so closely related as far as any test results reveal that we must consider them virtually identical. (38:333) 23 He further concluded that, at present, it is possible to recognize and perhaps reliably Measure the following components of reading ability: (1) a general verbal factor, (2) comprehension of explicitly stated material, (3) comprehension of implicit or latent meaning, and (4) appreciation. The "general verbal factor" which he mentioned has been acknowledged by many, but the separation of his second and third factors is not supported by the research except on occasion, and then in a rather vague manner. A study by Langsom (36) deserves special atten tion because of the comprehensiveness of her test battery and the uniqueness of her results. Her specific purpose was to discover more of the nature of the reading process. Using the freshman class of Hunter College in New York (all female), she administered seventeen different reading tests and four numerical. Her battery of reading tests was perhaps the most comprehensive of all the test batteries reviewed. Using Thurstone's centroid method, the factor analysis produced four verbal factors and one numerical. The four verbal factors she identified as: (1) general verbal; (2) a word factor, with loadings of some, but not all of the vocabulary tests; (3) a perception factor primarily concerned with speed; and (4) a 24 relationship factor which had loadings in four reading comprehension tests and an arithmetic test. 
Her factor number one seems to be the well-known verbal factor, and her factor number four perhaps is the factor that has come to be known as Spearman's "g." The uniqueness of her results is the existence of a word factor which is separate from verbal, and which has loadings of .4 and .5 on three vocabulary tests. Langsom's study is the one study reported in the literature which gives hope of separating verbal and word knowledge factors (see Table 4). Correlational Studies of Reading Tests Some investigators have administered reading tests and reported the resulting correlations without having gone further with any factor analytic solutions. This is not satisfactory as far as determining the factorial content of the tests, but it does give addi tional information. Typical of the correlational studies are the two reported here. Dewey (27) studied the problem of the relation ship between the ability to secure facts and the ability to do differential thinking using historical material as stimuli. He constructed four forms each of tests for acquisition of facts, and of tests for TABLE 4 ROTATED FACTORIAL MATRIX Test I II III IV V h* 1 .395 .494 -.086 -.108 .283 .4992 2 .654 .298 -.002 .071 -.066 .5259 3 .614 .349 .013 .034 .066 .5046 4 .452 .544 .170 .022 .253 .5936 5 .618 .091 .135 .108 .348 .5412 6 .637 .260 -.217 .120 .060 .5385 7 .570 .391 .017 .165 -.016 .5056 8 .548 .164 .575 .065 .393 .8164 9 .338 -.014 .101 .000 .612 .4991 10 .563 .299 .453 .031 .486 .8488 11 .718 -.056 -.015 .025 .354 .6445 12 .684 .189 .107 .072 -.022 .5207 13 .612 .432 .156 .253 -.001 .6494 14 .603 .018 .510 -.109 .408 .8024 15 .564 .311 .263 -.141 .442 .6993 16 .556 .306 .012 .079 .087 .4166 17 .438 .542 .125 .286 .111 .5953 18 .116 .000 .031 .603 .413 .5487 19 .212 .206 .012 .705 .193 .6216 20 .243 .163 -.080 .510 .006 .3521 21 .000 .454 -.059 .186 -.002 .2447 SOURCE: R. S. Langsom, HA Factorial Analysis of Reading Ability," Journal of Experimental Education, X (September, 1941), 57-63. 26 inferential thinking. The resulting correlations between the two types of tests ranged from .38 to .65. He gave no information concerning the reliability of each form of test, which is certainly an important variable especially in homemade tests, but he did con clude, on the basis of his findings, that these two abilities are different. Traxler administered a variety of published reading tests one year apart. He obtained both reliabilities (tests-retest) and intercorrelations between the various published reading examinations. The correlations ranged from .77 to .80. He concluded that "the correlations do not suggest any important difference between the Iowa, Nelson-Denney, Schank, and Traxler tests from the standpoint of reliability and validity" (55:421). These studies give somewhat different conclu sions, Dewey claiming to have found two different abilities and Traxler concluding that his tests were measuring essentially the same thing. Relationship Between Reading and Other Mental 55iliii.es Since test makers have as a goal to measure that which is being taught, it is essential that the 27 goals of the various instructional courses be well defined. As stated previously, the answer to the question "What is reading?" is an essential prelude to the question "What shall our reading tests measure?" It appears that the trend among leading reading theorists is to subsume all types of thinking under the label of "reading," whether the stimuli be printed words, spoken words, natural phenomena, things, or internal stimuli. 
As a result, reading becomes essentially the same thing as mental abilities; indeed, one of the primary mental abilities, verbal comprehension, uses reading tests as its most representative measure. Conceptually it is relatively simple to distinguish reading ability from verbal comprehension. From the measurement standpoint, however, it has been extremely difficult to measure verbal comprehension apart from a reading task. This is demonstrated by the high correlation (.78) between language IQ and reading vocabulary comprehension scores as measured by the California Achievement Test Battery (69:12). There appears, however, to be a need to distinguish "reading skills" from "verbal comprehension." Certainly there are students who are essentially nonreaders but who are able to handle words when spoken orally. Verbal comprehension is an aptitude, while reading is more in the nature of an achievement factor which depends upon a specific course of instruction. From the measurement standpoint, test authors have been unable to clearly distinguish the two factors.

Many of the investigators whose articles were reviewed in preparation for this study were concerned primarily with discovering the nature of the "basic" cognitive or mental abilities. Four of these primary mental abilities have been consistently established: (1) "general," which apparently corresponds to Spearman's "g"; (2) a verbal factor; (3) a numerical factor; and (4) a spatial factor.

At this point a study by Swineford (49) deserves special attention. Her primary purpose was to learn more of the nature of the three factors labeled "general," "verbal," and "spatial." The general factor was so named because it had loadings from all the tests in her battery. This factor is assumed to account for the fact that nearly all "mental tests" are positively correlated. The verbal and spatial factors are "group" factors; they are common to some, but not to all, tests. These factors can best be defined by describing the tests which measure them. Swineford felt that the best measures of the general factor are those tests which involve numerical and deductive reasoning. She used an arithmetic test, a series completion test, and a verbal deduction test as measures of the general factor. Her verbal factor was measured by the word meaning section of the Traxler Silent Reading Test, a general information test, and a reading comprehension test. The spatial tests involved drawing mirror images of a geometric figure, the ability to visualize objects, and identifying an "unfolded" geometric figure. Table 5 shows the results of her factor analysis of these nine tests for normal pupils.

Swineford analyzed the patterns of these three factors for both sexes, different age levels, and school achievement levels. She came to the following conclusions concerning the general and verbal factors:

    On the basis of the foregoing evidence, the general bi-factor defined by the present battery of nine tests may be described with confidence as general mental ability, for it exhibits the characteristics of general mental ability. It is positively correlated with mental tests and with school achievement. It increases with chronological age during the period represented by Grades V-X. It is possessed in greater amount by normal pupils than by dull pupils, where brightness is defined by grade placement. There are no significant sex differences. . . .
    [The verbal factor] may be described as that part of an understanding of words and phrases which is independent of general mental ability. There is some growth in this factor from Grade V to Grade X. Deficiency in it may be in part responsible for retardation in school. (49:63, 65)

TABLE 5
FACTOR PATTERN FOR NORMAL GROUP (SHOWING FACTOR LOADINGS)

Test                    General Factor   Verbal   Spatial
Arithmetic              .736             --       --
Series Completion       .768             --       --
Deduction               .624             --       --
General Information     .768             .411     --
Reading Comprehension   .694             .316     --
Word Meaning            .699             .566     --
Punched Hole            .638             --       .606
Drawings                .610             --       .504
Visual Imagery          .641             --       .333

SOURCE: Frances Swineford, "The Nature of the General, Verbal, and Spatial Bi-Factors," Supplementary Educational Monographs, November, 1948, pp. 1-71.

Other researchers who discussed one or more of the four factors (general, verbal, numerical, and spatial) mentioned above are Thurstone (52), Guilford (33), Robinson and Hall (42), Schneck (43), Anderson and Slivinske (17), Fortna (70), Cassell (19), Langsom (36), and Davis (25). In these studies "verbal comprehension" was best represented by tests which typically are also classified as "reading tests," in most cases vocabulary or word meaning. The two studies in which a separation between reading and verbal comprehension factors occurred were those by Cassell, in which the CTMM was somewhat separate from "reading competency," and Langsom, in which a "word factor" separated from general verbal. In both cases the separation was not complete, but it does give rise to the possibility of separation with greater refinement of the instruments.

Skills Related to Reading Ability

Rate of reading has been studied a good deal in its relationship to comprehension. Many of the earlier reading tests included rate as one subscore. It was found, however, that rate is difficult to measure separately from comprehension because it depends quite heavily upon the type of material read, as well as the mental set established by the test directions. For this and other reasons, most recently published reading tests do not include rate scores.

Robinson and Hall (42) studied the comparison between the ability to read maps and charts and prose reading. They concluded, on the basis of factor analytic study, that prose and nonprose reading demand different skills.

Mode of Response as a Variable

In a study concerning the mode of response and its effects, Sims (45) administered four vocabulary tests to his students. In Test A, the subject responded orally to the stimulus word, as in the Binet vocabulary. Test B was a typical multiple-choice test in which the subject was to choose which of four words was "most like" the stimulus word. Test C was a matching format in which words in one column were to be matched with their synonyms in another column. Test D was like many of the earlier reading tests and asked the subject to mark words which he felt he knew. Intercorrelations among the first three tests were high, ranging from .74 to .93. The fourth test had low intercorrelations with the other three.
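Computationally, comparisons such as these rest on an ordinary Pearson intercorrelation matrix over the several scorings of the same pupils. The sketch below uses fabricated score vectors, not Sims's data; three scorings are generated to share a common ability while the fourth is independent, mimicking the reported pattern.

    # Pearson intercorrelations among four scorings of the same pupils.
    # All score vectors here are fabricated placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pupils = 50
    ability = rng.normal(size=n_pupils)                  # shared vocabulary ability
    test_a = ability + 0.4 * rng.normal(size=n_pupils)   # oral response
    test_b = ability + 0.4 * rng.normal(size=n_pupils)   # multiple choice
    test_c = ability + 0.4 * rng.normal(size=n_pupils)   # matching
    test_d = rng.normal(size=n_pupils)                   # "mark words you know"

    # np.corrcoef treats each row as one variable.
    print(np.corrcoef([test_a, test_b, test_c, test_d]).round(2))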
Conclusions Table 6 is a summary, in tabular form, of the findings of the relevant published studies. The following conclusions are based upon these findings: 1. The reading tests analyzed in these studies have heavy loadings on the "verbal" factor and not much on other factors. 2. The "verbal" factor has been quite resistant to factorial breakdown. 3. The "verbal" factor has appeared consistently when verbal tests are included in the battery. This verbal factor is distinct from general reasoning and spatial ability. REVIEW 34 v > H ¥ 9 ¥ a n oa 91 ut Ah 9 « ► k 9 Ha 9a a a t e a « >a H <H 0 I S ¥ a d 9 9 9a Ha o u b d 0 d b IS 1 i t i a bb a o Ha s » oh 0 > H 0 9H 'H u a k 9a 9 dk a 9 9 3 0 > Z It 9 9 aa ud bb do dd 9 ' H 0 0 9aa d V 9 C 9 bflHOH 9 9 bOU fi 11 o a 1 MHO bbdoo kkdM 99909 HH 9 0 ddHkd aaato 9 90 9V ookod 0 0 9 a 9 >>>QM d 0 •H 9a 99 •HH ma OB HO 9 0 9 0 wd 9 9 o a kd 99 >M < 1 0 3 0 0 0 •H a 9 an 9 OH d H HH o M a 9 H 0 9 k a9H MO 9 9 9 H a HH d H 9 AM9 9 BO k OHH 9 H 0 9 9 0 9 0 0 9 0 9 9 H 9 MH u aa a OH 9 9H 9 9 9 a k 9 | aoivoi d k 9 9 k 9 9 9 k 0 9 k MbdOO 0) N H 9) 0) 0 H W 9 0 < 3 < * H d« M OH 0 ¥ 9 9 d d * flA H a HH a £W 9H £H01 0 < £01 0 9< Cflv 0* «S^ 0 0 H a o 3 H a H a 9 99 0 oa 9 k k a 9 afl ¥ d£H 9 ooo 0 0 Zv9 m 9 HH 0 a n9a h k 9 9H 0 H 9 A£a 3 9 JO 9 k 90 >,9 OBOAdkk > OH 9 >>>) MbO HH Mk dd • 0 0 HH< na HH ' 0 9H 9 92 9 H k 9 9 • 03HKKA 30 • £ - 9 99 1 o a a 09 • >«0 H 0 9 9 9 OdH 30H 9H 3 na a U 9 CO Ma 9 H Hk 9 9 AH kk 0 390 aaM 90 ZH 9 0 9 1 09 9 9 9 0 0 9 MO b'H 00 do ak HHH9 MOM 0 o a nadoda 9 OX H9kdH90H k9doa39Haan k o a o a h 9 a «3 OHH03 9 0909a UOOH&ZMKUfflM HH> > H HHH b H 9 do h n a H 9 9 HH H 9 0 3 ao a M9 9 h d an 9 9 adMM <099 9 9 9 9 M H 9 3 3 9 b 0 3 a k 9 9 Ha MM a MM 0-H 9 0 OH 9 9 k k 3 9 9 9 3 3 03MH k£ MM H a d d 9 k d d H 9 9 0 H 9 9 9 H S J Z H > J J 9 H 0 Q 0 0 H 9 aid Hi o H 9 3 Mk H oa u WB do oo H 9 9 9 9 0 9 3 9 0 9 0 H9 kH h H h aa H d 9 0 k H 0 H H 09 H 9 H a 9 b dk 9 H k d 0 k HO £ a 9 9 H 0 Sa k 9 a a 9 a 0 9 A 3 a 3 9 • 9 > M Z < S s | 1 < 1 03 « u « Q 33 i h * u 0 k 3 h a H 9 9 OH03OX9 9 9 b 90 0 9H99H9a9a k > B9 99909 9 HM aa Hak Hk d k H ad 9 0 H9 o a H a o a 3 9H 9H9k9k3kHk £ kH H 9H3k9H 9 0 9 9 9 9 >oaka 9a9a o a9 990900 0 0 0 OK aQMZOH H H > 0 0 0 H 0 10 HO) H HO 9H 0) 9aH UWV I 0 I 99da 9k 3 9 Aao £93 a k£ 9 9 h 9 -a 0 k 9 0>aM h a h 9 o 9 a oh k99aH oaa d > 9 3 9 9 9 9 9 3a t/!a 9 0 9 9 h a da 9 OH Hk H a 9 9 dHH a odH o h h a 9 h h h a 9 9 9a h H dH OH 9 a d£ 9 9999 an o a SH d 9b 9 Pi 9 90 a 9 9 Man a mo b k 9 H 9 k 0 0 9 Md93 HH H 9 0 oh Hoi 99 naa a 9 3 H 9 H 9M M 99 9 3 9'na 9 0 9 dd9aad9 uaa h a a 9 9h 0 9 0 1 9 H H H d 9 bH k k MH £ a 9 9a99M k 9 k k 0 99k a 9 0 H O H 0 0 9 9 H 9 £ H M d dA OO O BA999HA99 9 0 AO 9 9 AHH daHK d MOOMK S0QQ<0<M<I ! ' [ j A OOO o i o 2 v SM M °5 • d oj MO 9h da M9 HH OB 9 0 9 9£ £ oHSna k £ 9 S 0 H 90kh0k >MUUZ< HH H > HHH H a d 9 9 a 9 9 M •H £ 9 u M< 9 b9 3 9 9 0 H a M MH M MH OH 9 Odd 90 kkS 9k H 3A 0 3MH0HM Had dH 9 39 H 9 9 OH 9 3H HSJZHKJ9 9 9 k 0 0 0 M 10 0) h h H Oj : 0 9 93 kH 9 >hn HHt^ OHH <[flv CHAPTER III PHASE I: THE PRELIMINARY INVESTIGATION— PROCEDURES, FINDINGS, AND CONCLUSIONS Introduction The main thrust of previous factor analytic studies of reading tests has been toward the existence of a general reading test factor, rather than toward factorial breakdown. None of the published studies employed, on a broad scale, current tests. There was a possibility, therefore, that instruments which are now being used reliably measure some different factors. 
A review of some of the test manuals would lead one to believe so. One manual, for example, indicates that its test measures "learning the mind of the author" (62), and another test manual claims to measure "reading graphs" (69). If these tests do indeed measure these factors plus others, as claimed, it would indicate that a good deal of improvement has been made in the last decade in the measurement of reading.

The present study was made in an effort to determine whether these current reading tests were susceptible to factorial breakdown or whether they, like their predecessors, were heavily loaded on a general reading factor. The five tests selected were the reading tests from the most popular achievement batteries. They make up at least 90 per cent of the group standardized reading tests administered in the state of California. This phase of the study was designed to answer the question, "How many factors are needed to account for reading test score variance?" This deals with the quantitative aspect of reading test scores; that is, "how many" factors are needed to describe the variance.

Procedures

Instruments Under Investigation

The five reading tests selected for investigation were those which had been approved by the state of California for use in public schools. They are the California Reading Test (California); the Iowa Test of Basic Skills (Iowa), reading section; the Metropolitan Reading Test (Metro); the Science Research Associates Reading Test (SRA); and the Sequential Test of Educational Progress (S.T.E.P.), reading section. The other test approved by the state, the Stanford Reading Test, was undergoing restandardization at the time the data were being gathered and was not included in the study. Descriptions of the selected tests are given in detail in Appendix I.

Subjects

The subjects used in this study comprised the entire fifth grade population of five public elementary schools located in the communities of Saugus and La Puente in Southern California. In previous testing these schools closely resembled the standardization population in intelligence and achievement test results. The communities served by these schools consist primarily of persons in the upper-lower and lower-middle classifications of socio-economic status on the Warner Scale (16), as subjectively evaluated by the investigator.

Administrative Procedures

Testing time was spread over a two-week period. The students, in groups of sixty to 100, were taken to the school auditorium each day for five consecutive days. The tests were administered by a school psychologist or the school principal, with classroom teachers acting as proctors. The published directions were followed as closely as possible. The order of administration was varied in the different schools so that all tests were given equal sequential treatment, according to the schedule shown in Table 7.

TABLE 7
ORDER OF ADMINISTRATION -- READING TESTS

School         1st Day      2nd Day      3rd Day      4th Day      5th Day
A              Iowa         S.T.E.P.     California   SRA          Metro
B              Metro        Iowa         S.T.E.P.     California   SRA
C              SRA          Metro        Iowa         S.T.E.P.     California
D              California   SRA          Metro        Iowa         S.T.E.P.
E (Group 1)a   S.T.E.P.     California   SRA          Metro        Iowa
E (Group 2)    SRA          Metro        Iowa         S.T.E.P.     California

(a) Because of the large number of classes at school E, they were divided into two groups for the administration of the tests.

Scoring

Student responses were recorded on machine scoring answer sheets, and these were hand scored by the investigator. A score was obtained for each subtest of each test, as advocated in the manual. With the exception of the S.T.E.P. test, this division was "vocabulary" (or word knowledge) and "comprehension" (or paragraph meaning). The S.T.E.P. test does not have a vocabulary section, but does have two equivalent comprehension sections. Each subject obtained ten scores (five tests x two subtests) in this initial scoring.

Each item of each test was then classified according to the categories of the Taxonomy of Educational Objectives (2).
The classification was handled by the investigator in the following manner. Each item was classified on three separate occasions. When the three labels agreed, the item was classified accordingly. When there was lack of agreement, further analysis was made until a decision was reached. This method seemed to produce a satisfactory degree of consistency. The results of this classification are shown in Table 8. A complete listing of the Taxonomy of Educational Objectives is given in Appendix III.

TABLE 8
CLASSIFICATION OF ITEMS ACCORDING TO THE TAXONOMY OF EDUCATIONAL OBJECTIVES
(Numbers are item numbers from each test)

(The categories used were: 1.11, Knowledge of Terminology; 1.12, Knowledge of Specific Facts; 1.20, Knowledge of Ways and Means of Dealing with Specifics; 1.30, Knowledge of Universals; 2.10, Comprehension -- Translation; 2.20, Comprehension -- Interpretation; 2.30, Comprehension -- Extrapolation; and 4.10, Analysis of Elements. The item-by-item assignments for the California, Iowa, S.T.E.P., SRA, and Metropolitan tests are illegible in the transcript.)

The items were placed into seven categories. For the most part these were in the "knowledge" and "comprehension" categories of the Taxonomy. A few items of the S.T.E.P. test were placed in the "evaluation" classification. The "following directions" subsection of the California test was not classified, but was scored as a group. All the vocabulary items of all the tests were considered to be in classification 1.11 (knowledge of terminology). Each answer sheet was rescored according to the above-mentioned categories, giving a total of twenty-three Taxonomy scores plus four "vocabulary" scores. These scores were then prepared for statistical analysis.

Statistical Analysis

The mean scores, standard deviations, intercorrelations, and factor analyses were computed on Western Data Processing equipment using Bi Med Program No. 17, a principal axes factor analysis solution. Squared multiple-correlation coefficients were used as estimates of communality and placed in the diagonals. Axes were rotated by an analytic method using the varimax criterion, maintaining orthogonality.
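Bi Med Program No. 17 itself is not reproduced in this study. The Python sketch below shows the stated procedure in generic form: squared multiple correlations placed in the diagonal of the correlation matrix, principal axes extracted from the reduced matrix, and an orthogonal varimax rotation. The function names and numerical details are assumptions of this sketch, not the original program.

    # Principal-axes factoring with SMC communality estimates, then varimax.
    # R is a correlation matrix such as the one in Table 11 below.
    import numpy as np

    def principal_axes(R, n_factors):
        """Principal-axes loadings with SMCs as communality estimates."""
        R = np.asarray(R, dtype=float).copy()
        # Squared multiple correlation of each variable with all the others.
        smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
        np.fill_diagonal(R, smc)                # replace unities in the diagonal
        vals, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
        vals, vecs = vals[::-1], vecs[:, ::-1]  # largest first
        vals = np.clip(vals[:n_factors], 0.0, None)
        return vecs[:, :n_factors] * np.sqrt(vals)

    def varimax(loadings, max_iter=100, tol=1e-6):
        """Orthogonal rotation by Kaiser's varimax criterion."""
        L = np.asarray(loadings, dtype=float)
        p, k = L.shape
        T = np.eye(k)
        score = 0.0
        for _ in range(max_iter):
            B = L @ T
            # One step of the varimax criterion, solved with an SVD.
            U, S, Vt = np.linalg.svd(L.T @ (B ** 3 - B * (B ** 2).sum(axis=0) / p))
            T = U @ Vt
            if S.sum() < score * (1.0 + tol):
                break
            score = S.sum()
        return L @ T

    # Usage: rotated = varimax(principal_axes(R, n_factors=4))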
Squared multiple-correlation coefficients were used as estimates of communality and placed in the diagonals. Axes were rotated by an analytic method using the varimax criterion, maintaining orthogonality. 42 Findings These findings relate to the questions listed in Chapter 1. Findings Related to Question 1 Table 9 presents the findings relating to Question 1, "How many factors are needed to account for reading test score variance?1 ’ These are the results of the principal factor analysis prior to rotation. It may be seen that all the tests had heavy loadings on Factor I, and no significant load ings on any of the other factors. Apparently most of the variance of these tests can be accounted for by one factor. This is further shown by the large eigenvalue and per cent of total variance on the first factor, as shown in Table 10. Further evidence of the factorial similarity of these various tests can be seen in the fairly high intercorrelations among them (.59-.82). Table 11 gives these intercorrelations. The highest correla tion was between California Vocabulary and California Comprehension; the lowest correlation was between S.T.E.P. number one and SRA Vocabulary. 43 TABLE 9 MATRIX OF FACTOR LOADINGS— READING TESTS (30+ « SIGNIFICANT LOADING) Variable (Tests) I Factors "11 III IV California V 84 10 23 06 C 88 23 11 02 Iowa V 86 -03 07 -14 C 89 -02 -15 -03 S.T.E.P. Subtest I-C 83 21 -08 -04 Subtest II-C 83 15 -19 -04 SRA V 80 -29 01 -08 c 89 -09 11 -11 Metro V 88 -10 16 -04 c 85 -14 -06 -06 KEY: V * Vocabulary or word meaning subtests. C = Comprehension or paragraph meaning subtests. NOTE: Decimals omitted. 44 TABLE 10 EIGENVALUES AND CUMULATIVE PROPORTION OF VARIANCE READING TESTS Factors I II III IV Eigenvalues 7.308 .2523 .1774 .0568 Cumulative Proportion of Total Variance .938 .9699 .9927 1.000 TABLE 11 INTERCORRELATIONS AMONG STANDARDIZED READING TESTS FIFTH GRADE SUBJECTS (N=263) Tests 1 2 3 4 5 6 7 8 9 10 1. California V — 82 72 72 71 65 67 71 78 68 2 . c - 76 75 77 74 62 77 76 72 3. Iowa V - 76 67 70 68 73 80 76 4. c - 74 78 71 81 75 79 5. S.T.E.P. Subtest I-c - 75 59 74 71 68 6. Subtest II-c - 61 74 68 70 7. SRA V - 78 75 72 8. c - 76 79 9. Metro V - 77 0. c — KEY: V » Vocabulary or word meaning subtests. C > Comprehension or paragraph meaning subtests. NOTE: Decimals omitted. 01 46 Findings Related to Question 2 This question asked if the typical "vocabulary" or "word meaning" sections and "comprehension" or "paragraph meaning" were actually factorially distinct. Obviously, with such a large principal factor as that obtained, it is indicated that the two subsections of these tests have much in common. However, there may be a small amount of variance which is common only to the vocabulary subtests and/or a small amount of variance that is common only to the comprehension subtests. Rotating the axes was done to determine if such a vocabulary and/or comprehension factor would emerge. Table 12 shows the results of this rotation. Figures 1, 2, and 3 illustrate the data in Table 12. Three factors emerged. Factor I comes closest to being a comprehension factor, and is best represented by the S.T.E.P. tests. In all cases the comprehension tests have higher loadings on this factor than do any of the vocabulary tests. Factor II is best represented by SRA Vocabulary, but is difficult to define. The tests with highest loadings are SRA Vocabulary and Comprehension and Metro Vocabulary and Comprehension. 
Factor III is best represented by the California tests, both Vocabulary and Comprehension. Again this is difficult to define in terms of either vocabulary or comprehension. The best conclusion seems to be that the S.T.E.P. tests show promise of becoming distinctly comprehension, but none of the tests can claim to have reached a true vocabulary-comprehension dichotomy. The tests, for the most part, measure the same thing or things.

TABLE 12
ROTATED FACTOR MATRIX—READING TESTS

Tests                       I      II     III     IV
California  V             3847   4332   6660   -0168
            C             5464   3574   6331   -0500
Iowa        V             4333   5405   4806   -2216
            C             6028   5571   3491   -0968
S.T.E.P. Subtest I-C      6336   3528   4583   -0055
         Subtest II-C     6729   3930   3522   -0933
SRA         V             3119   7257   3273    0014
            C             5446   6381   3607    0435
Metro       V             3537   6130   5475   -0312
            C             4694   6334   3521   -1367

KEY: V = Vocabulary or word meaning subtests. C = Comprehension or paragraph meaning subtests.
NOTE: Decimals omitted.

FIGURE 1. Graphic illustration of the data in Table 12, showing factor loadings on Factors I and III (comprehension subtests plotted as x, vocabulary subtests as o).
FIGURE 2. Graphic illustration of the data in Table 12, showing factor loadings on Factors I and II.
FIGURE 3. Graphic illustration of the data in Table 12, showing factor loadings on Factors II and III.

An interesting observation can be made. There is a tendency for tests of one series to "cling" together. This is especially true of the California tests. In each array in Figures 1, 2, and 3 they can be seen to be quite close to each other. The same is true of the two Iowa tests, the Metro tests, and the SRA tests. It was expected that the S.T.E.P. tests would cling together, of course, since they were designed to be parallel. (They are so similar, one wonders why there is any division at all.) Even though there appears to be a possibility of separating comprehension, as represented by S.T.E.P., from another factor (SRA Vocabulary), no one series comes close to that goal. If a testee obtained significantly different vocabulary and comprehension scores, they probably would not be the result of a factorial difference between the two subtests.

Findings Relating to Question 3

This question asked if it would be possible to classify the items of the various tests according to the Taxonomy of Educational Objectives to produce factors. The classification of items was made, the tests were scored on that basis, and the results were analyzed by the same factor program (Bi Med Program No. 17) as used in the previous analysis. Table 13 shows the intercorrelations of these part scores. As would be expected, the intercorrelations were positive, and for the most part significant. Table 14 shows the rotated factors. If the classification of items according to the Taxonomy did indeed produce some breakdown into factors, it would be indicated by the patterns of the rotated factors. The subgroups of items of the same classification would be expected to cling together. For example, all the groups in classification 1.12 should load on more or less the same factor. This result was not obtained. No pattern was established which would justify the use of these particular categories of items. On the contrary, there was a pronounced tendency for items of the same test to cling together, regardless of how they were classified.
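The rescoring step behind these part scores lends itself to a simple illustration. The Python sketch below shows how category part scores can be produced from an item-to-category assignment; the assignment shown is a hypothetical fragment, not the actual Table 8 classification.

    # Rescoring answer sheets by Taxonomy category -- illustrative only.
    # The category assignments below are hypothetical, not those of Table 8.
    from collections import defaultdict

    category_of = {          # item number -> Taxonomy category (hypothetical)
        1: "1.12", 2: "2.20", 3: "2.10", 4: "1.12", 5: "2.20",
    }

    def part_scores(item_correct):
        # item_correct maps item number -> 1 (right) or 0 (wrong).
        totals = defaultdict(int)
        for item, correct in item_correct.items():
            totals[category_of[item]] += correct
        return dict(totals)

    answers = {1: 1, 2: 0, 3: 1, 4: 1, 5: 1}
    print(part_scores(answers))    # {'1.12': 2, '2.20': 1, '2.10': 1}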
TABLE 13
INTERCORRELATIONS BETWEEN SUBTEST SCORES: TAXONOMY CLASSIFICATION

(Variables 1-27; each row shows correlations with the variables that follow it.)
1. California Voc: 72 67 78 73 71 59 64 67 48 55 73 60 65 53 70 65 60 52 49 70 53 51 53 56 63 63
2. Iowa Voc: 68 80 61 64 64 69 68 55 53 77 64 57 54 74 65 59 55 52 72 51 63 54 65 66 68
3. SRA Voc: 75 57 60 55 46 64 62 56 67 60 47 51 64 58 51 55 60 75 55 57 52 59 68 61
4. Metro Voc: 66 66 65 62 67 54 59 75 66 60 54 74 64 60 57 56 74 55 63 55 65 69 67
5. California Following Directions: 77 57 59 61 44 46 64 52 64 51 67 61 54 52 50 64 41 49 50 49 58 55
6. California 2.10: 55 60 63 50 47 70 49 62 56 68 61 55 51 42 67 50 55 50 54 63 60
7. California 2.20: 60 54 51 48 66 49 49 39 62 53 48 49 47 62 48 55 45 54 61 61
8. California 1.12: 58 40 45 68 50 60 51 64 60 52 44 47 62 51 46 44 52 54 52
9. Iowa 1.12: 59 53 75 64 63 55 74 65 60 55 54 71 57 54 55 53 65 60
10. Iowa 1.20: 48 64 53 40 49 56 49 45 41 45 63 48 49 44 43 56 52
11. Iowa 2.10: 61 56 36 36 52 47 45 38 46 59 41 56 46 53 51 50
12. Iowa 2.20: 71 62 62 80 74 65 58 56 78 60 63 58 67 72 65
13. Iowa 2.30: 50 48 62 60 52 42 49 62 48 50 49 50 54 52
14. S.T.E.P. 1.12: 56 75 68 65 45 40 59 50 41 47 44 58 48
15. S.T.E.P. 1.20: 66 59 52 39 43 58 42 47 44 43 51 44
16. S.T.E.P. 2.20: 82 74 63 49 75 55 62 55 61 73 64
17. S.T.E.P. 2.30: 67 52 54 66 50 49 46 51 59 53
18. S.T.E.P. 4.10: 50 44 60 47 42 44 48 57 52
19. SRA 1.12: 50 56 42 44 43 43 49 50
20. SRA 1.20: 66 46 49 51 47 54 52
21. SRA 2.20: 62 60 55 60 73 69
22. SRA 2.30: 45 41 50 56 47
23. Metro 1.11: 54 60 64 60
24. Metro 1.12: 55 59 56
25. Metro 1.20: 60 59
26. Metro 2.20: 68
27. Metro 2.30

KEY: Voc = Vocabulary subtest. 2.10, 2.20, etc. = categories of the Taxonomy of Educational Objectives.
NOTE: Decimals omitted.

TABLE 14
ROTATED FACTOR MATRIX

Tests (Factors I-XIV)
California Vocabulary:  183 413 499 281 319 128 224 250 109 054 047 027 029 012
Iowa Vocabulary:        228 390 262 243 475 108 371 268 026 072 040 129 063 008
SRA Vocabulary:         471 252 264 217 422 256 053 323 101 015 032 099 007 016
Metro Vocabulary:       221 360 339 291 470 178 244 357 071 013 009 087 023 021
California Following
  Directions:           139 405 642 158 266 203 155 056 007 010 020 051 027 007
California 2.10:        253 404 603 082 353 041 160 034 081 070 029 070 034 012
California 2.20:        244 267 270 128 429 136 403 066 026 130 035 112 060 015
California 1.12:        106 429 309 167 276 143 493 039 115 064 029 011 021 002
Iowa 1.12:              353 477 257 271 329 191 159 039 115 000 161 118 043 030
Iowa 1.20:              553 262 150 209 326 128 109 029 017 009 018 052 012 002
Iowa 2.10:              229 184 185 420 449 148 114 042 062 026 068 027 001 009
Iowa 2.20:              364 489 243 329 439 109 300 024 073 063 018 123 034 060
Iowa 2.30:              295 365 155 485 314 146 157 112 027 026 077 018 002 003
S.T.E.P. 1.12:          117 702 334 086 192 071 168 058 129 042 103 024 038 036
S.T.E.P. 1.20:          294 525 224 096 251 119 097 044 003 216 013 057 001 003
S.T.E.P. 2.20:          230 723 221 129 403 216 162 083 001 005 037 137 003 032
S.T.E.P. 2.30:          208 680 203 233 232 211 171 082 005 039 080 066 003 033
S.T.E.P. 4.10:          172 636 187 175 254 119 102 104 071 096 004 102 038 032
SRA 1.12:               193 351 247 081 287 302 125 120 025 024 008 309 001 001
SRA 1.20:               226 275 140 165 364 509 126 055 057 014 015 041 003 001
SRA 2.20:               432 380 287 188 411 329 223 126 128 020 044 008 082 005
SRA 2.30:               331 327 139 128 299 172 219 070 326 005 015 022 001 000
Metro 1.11:             227 228 186 166 653 130 129 030 007 073 045 045 031 050
Metro 1.12:             157 281 221 156 527 216 044 010 060 027 163 010 013 017
Metro 1.20:             130 263 178 201 601 114 200 123 143 063 031 064 020 057
Metro 2.20:             352 392 235 048 573 157 140 097 107 105 032 001 015 007
Metro 2.30:             268 266 276 104 562 174 208 120 010 122 065 077 082 013

NOTE: Decimals omitted.

Factor I has the highest loadings from SRA Vocabulary, SRA 2.20, and Iowa 1.20. Factor II has the highest loadings from the five S.T.E.P. subgroups. Factor III has the highest loadings from California Vocabulary, California Following Directions, and California 2.10. Factor IV is most heavily loaded by Iowa 2.10, Iowa 2.20, and Iowa 2.30. Factor V is clearly a Metro factor, being most heavily loaded by the five Metro subtests. Factor VI is most heavily loaded by the SRA 1.12, 1.20, and 2.20 subtests. Factor VII is seen as a combination of Iowa Vocabulary, California 2.20, and California 1.12. The remaining seven factors are rather minor and for the most part have no significant loadings. Apparently classification of items, as performed in this study, fails to yield meaningful factors.

Conclusions

1. Most of the variance in the instruments under investigation can be accounted for by one rather large factor.

2. Differences between vocabulary and comprehension subtests in factorial composition are minor. For the most part they measure the same factor. There appears to be a potential for separating comprehension from vocabulary.

3. Item classification, according to the categories of the Taxonomy of Educational Objectives, did not result in a factorial breakdown of the tests.

CHAPTER IV
PROCEDURES—PHASE II: FINAL INVESTIGATION

This chapter describes the test instruments, subjects, administration, scoring, and statistical procedures used in the second phase of the study. This phase was designed to relate to Questions 4 and 5 in Chapter I. Using Guilford's "Structure-of-Intellect" model (71) as a frame of reference, an attempt was made to define those factors which contribute to reading test score variance.

Test Instruments

The purpose of this study was to determine the factorial validity of current reading tests. In an interview with Dr. Guilford he suggested that only one reading test be used, since this would maximize the potential for the greatest number of factors to emerge; the more tests of a similar nature in the battery, the greater the probability of blurring the factors. For this reason, only one test was used. Of the reading tests utilized in Phase I of the study, the California Reading Test showed the greatest reliability and the highest loading on the general factor; therefore it was chosen as the representative of reading tests.

In order to determine the factorial content of a test it is necessary to include in the battery tests which have a known factorial content and which are fairly pure. Guilford's model seemed to provide the best theoretical frame of reference, and reference tests to meet the above criteria. For these reasons, tests from Guilford's battery were selected.

Reference Tests: Bases for Selection

Figure 4 shows a geometric representation of Guilford's "Structure-of-Intellect" model.
Each cube within the large figure represents a potential factor of intellectual ability. The various categories of this theoretical model are defined as follows:

OPERATIONS—Major kinds of intellectual activities or processes; things that the organism does with the raw materials of information.
  Cognition—Immediate discovery, awareness, rediscovery, or recognition of information in various forms; comprehension or understanding.
  Memory—Retention or storage, with some degree of availability, of information in the same form in which it was committed to storage and in response to the same cues in connection with which it was learned.
  Divergent production—Generation of information from given information, where the emphasis is upon variety and quantity of output from the same source. Likely to involve what has been called transfer.
  Convergent production—Generation of information from given information, where the emphasis is upon achieving unique or conventionally accepted best outcomes. It is likely that the given (cue) information fully determines the response.
  Evaluation—Reaching decisions or making judgments concerning the goodness (correctness, suitability, adequacy, desirability, etc.) of information in terms of criteria of identity, consistency, and goal satisfaction.

CONTENTS—Broad classes of information.
  Figural content—Information in concrete form, as perceived or as recalled in the form of images. The term "figural" implies some degree of organization or structuring. Different sense modalities may be involved, e.g., visual, auditory, kinesthetic.
  Symbolic content—Information in the form of signs, having no significance in and of themselves, such as letters, numbers, musical notations, and other "code" elements.
  Semantic content—Information in the form of meanings to which words commonly become attached, hence most notable in verbal thinking and in verbal communication.
  Behavioral content—Information, essentially nonverbal, involved in human interactions, where awareness of the attitudes, needs, desires, moods, intentions, perceptions, thoughts, etc., of other persons and of ourselves is important.

PRODUCTS—Forms that information takes in the organism's processing of it.
  Units—Relatively segregated or circumscribed items of information having "thing" character. May be close to Gestalt psychology's "figure on a ground."
  Classes—Recognized sets of items of information grouped by virtue of their common properties.
  Relations—Recognized connections between units of information based upon variables or points of contact that apply to them.
  Systems—Organized or structured aggregates of items of information; complexes of interrelated or interacting parts.
  Transformations—Changes of various kinds of existing or known information or in its use.
  Implications—Extrapolations of information, in the form of expectancies, predictions, known or suspected antecedents, concomitants, or consequences. (71:2)

FIGURE 4. Model of the Structure-of-Intellect: a cube with three facets—OPERATIONS (Cognition, Memory, Divergent Production, Convergent Production, Evaluation), CONTENTS (Figural, Symbolic, Semantic, Behavioral), and PRODUCTS (Units, Classes, Relations, Systems, Transformations, Implications). SOURCE: J. P. Guilford and R. Hoepfner, "Current Summary of Structure-of-Intellect Factors and Suggested Tests," A Report from the Psychological Laboratory, University of Southern California, December, 1963.

There are 120 (6 x 5 x 4) factors hypothesized in this model.
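The combinatorics of the model can be made concrete with a short sketch. The Python fragment below simply enumerates the cells of the model; the category names are those of Figure 4, while the code itself is only an illustration by the present writer.

    # Enumerating the cells of the Structure-of-Intellect model.
    from itertools import product

    operations = ["cognition", "memory", "divergent production",
                  "convergent production", "evaluation"]
    contents = ["figural", "symbolic", "semantic", "behavioral"]
    products = ["units", "classes", "relations", "systems",
                "transformations", "implications"]

    cells = list(product(operations, contents, products))
    print(len(cells))                          # 120 hypothesized factors

    semantic = [c for c in cells if c[1] == "semantic"]
    print(len(semantic))                       # 30 semantic factors

The elimination argued in the next paragraphs narrows these thirty semantic cells to the cognition and evaluation rows and drops the transformations column, leaving ten candidate factors.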
Interest was centered on those factors which could potentially be involved in reading test score variance. These were selected by the following process of elimination. In the CONTENTS facet only the "semantic" category was appropriate, since the reading tests are, of course, verbal in content. This eliminated the figural, symbolic, and behavioral categories. Of the OPERATIONS categories, "memory" was eliminated because the stimulus reading material was always in front of the testee. The divergent and convergent production categories also were eliminated because the reading tests did not require any product or original material, only recognition of the correct response (multiple-choice format). Thus, "cognition" and "evaluation" were the only operations which could be involved in reading tests. Of the six PRODUCTS, only one seemed inappropriate—transformations. By eliminating the unlikely categories, the following remained: cognition and evaluation of semantic stimuli producing units, relations, systems, classes, and implications.

Reference tests were selected from Guilford's battery which measure these factors. Figure 5 is a model of the semantic category showing those factors which were selected for use in this study. Following is a list of the tests which are measures of the selected factors, and the factors which they measure:

1. Wide Range Vocabulary (cognition, semantic, units (CMU))
2. Word Classification (cognition, semantic, classes (CMC))
3. Verbal Analogies I (cognition, semantic, relations (CMR))
4. Mathematics Aptitude (cognition, semantic, systems (CMS))
5. Verbal Analogies III (evaluation, semantic, relations (EMR))
6. Sentence Selection (evaluation, semantic, implications (EMI))

These six tests are described briefly in Appendix II.

FIGURE 5. Model of the semantic category of the Structure-of-Intellect, with X marking those factors selected as reference factors. SOURCE: J. P. Guilford and R. Hoepfner, "Current Summary of Structure-of-Intellect Factors and Suggested Tests," A Report from the Psychological Laboratory, University of Southern California, December, 1963.

Subjects

The subjects used in this study consisted of 248 seventh grade pupils in Hemet Junior High School, Hemet, California. This was the entire seventh grade population of the school, minus approximately 8 per cent who were absent or who for some other reason had had incomplete testing. Seventh grade pupils were selected because they had completed regular formal reading instruction, and because this was the youngest age at which some of the reference tests seemed appropriate. The results of other tests indicated that this population was slightly above the national average in mental maturity (e.g., October 1964, mean CTMM IQ = 103.6). The mean chronological age was thirteen years and two months, with a range from eleven years, eleven months to fifteen years, six months at the time of testing (May 1964). The community in which the subjects lived is small (population under 10,000), semi-rural, upper-lower and lower-middle socio-economically on the Warner Scale, with a few slums and some upper-middle class families. The school has less than 5 per cent non-Anglo-American students.
Administration Procedures

The tests were administered by the investigator and a trained school counselor. The subjects were brought into the school auditorium three classes at a time (N = approximately 90) for 100-minute periods, on three consecutive days. The regular classroom teachers acted as proctors. When available, published directions and time limit suggestions were rigidly adhered to. When no directions were available, the investigator wrote directions in advance so that each group was treated equally. When no suggested time limit was available, testing was continued until at least 90 per cent of the pupils indicated they were through by placing their pencils on the table.

All groups were given the tests in the same order. On the first day of testing, Word Classification, Verbal Analogies III, and Sentence Selection were administered, in that order. On the second day the California Test was given, and on the third day Wide Range Vocabulary, Verbal Analogies I, and Mathematics Aptitude were administered. Students were not prepared for these tests. On the first day of testing they were told they would be taking tests as part of a research project. They also were told that some of the results would be known to the teacher and that they would be informed as to how they performed.

Scoring

All the tests were hand scored by the writer. A total of eight scores was obtained for each subject, including one score for each of the six reference tests and separate scores for the California Reading Test Vocabulary and Comprehension sections. These raw scores were punched into IBM punch cards for statistical analysis.

Statistical Analysis

Means and standard deviations were obtained by computer (Bi Med Program No. 03M) through the facilities of Western Data Processing. The formulas used were:

Mean: $\bar{X} = \dfrac{\sum X}{N}$

Standard deviation: $s = \sqrt{\dfrac{\sum X^{2} - (\sum X)^{2}/N}{N}}$

Correlations between tests were obtained by the formula:

$r_{xy} = \dfrac{N \sum XY - (\sum X)(\sum Y)}{\sqrt{\left[N \sum X^{2} - (\sum X)^{2}\right]\left[N \sum Y^{2} - (\sum Y)^{2}\right]}}$

This correlation is a product moment correlation.

The factor analysis and rotations of the axes also were accomplished by Bi Med Program No. 03M. This is a principal factor solution developed by Hotelling and adapted for computers by Jacobi (11). One of the properties of this factor solution is that each successive factor contributes a decreasing amount of the total communality. The first factor accounts for the maximum possible amount of variance. The second factor accounts for the maximum amount of the residual variance after the first factor is removed, and so on until all the communality is accounted for. In function the principal axes method resembles a centroid solution.

Initially, the squared multiple correlation of each variable with the other remaining variables was used as an estimate of communality. This produced negative eigenvalues, however, indicating that the estimates did not preserve the positive (Gramian) quality of the correlation matrix. In such an instance, Harman recommended use of unity (1.00) in the diagonal in place of the estimate of communality (11:187). This was done, and the data were analyzed. When unity was employed, all sources of variance, including error and specific variance, were accounted for (8:52). Rotation of the axes from their initial position was done using an analytic method developed by Kaiser (11). Simple structure was reached according to the varimax criterion of simplifying the columns rather than the rows. The axes were kept orthogonal to each other.
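The communality decision just described can be sketched in a few lines. The following Python fragment is an illustration by the present writer, not Bi Med Program No. 03M: it checks whether the reduced correlation matrix, with squared multiple correlations in the diagonal, remains Gramian, and falls back to unities when it does not.

    # Checking SMC communality estimates and falling back to unities,
    # following Harman's recommendation -- an illustrative sketch only.
    import numpy as np

    def reduced_matrix(R):
        smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))   # squared multiple correlations
        Rh = R.copy()
        np.fill_diagonal(Rh, smc)
        if np.min(np.linalg.eigvalsh(Rh)) < 0:        # negative eigenvalue: not Gramian
            np.fill_diagonal(Rh, 1.0)                 # use unity instead
        return Rh

    # Example with a small synthetic correlation matrix:
    R = np.array([[1.0, 0.6, 0.5],
                  [0.6, 1.0, 0.4],
                  [0.5, 0.4, 1.0]])
    print(np.round(reduced_matrix(R), 3))

With unities in the diagonal, the solution analyzes the full correlation matrix, which is why error and specific variance are then retained along with common variance.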
CHAPTER V
FINDINGS

The purpose of this chapter is to present the findings as they relate to Questions 4 and 5 listed in Chapter I. The following organization will be adhered to:

1. Findings relating to the question, "What is the nature of the vocabulary subtest factor(s)?"
2. Findings relating to the question, "What is the nature of the comprehension subtest factor(s)?"
3. Relationship to previous studies
4. Generalization to Phase I
5. Characteristics of the test battery
6. Sex differences

Findings Relating to the Question, "What Is the Nature of the Vocabulary Subtest Factor(s)?"

The Vocabulary subtest is an almost pure measure of the factor cognition, semantic, units. Evidence of this is seen in Table 15, which shows the final rotated factor matrix. In this table variable 1 is the Vocabulary subtest.

TABLE 15
ROTATED FACTOR MATRIX SHOWING FACTOR LOADINGS

Variables (Tests)          CMU I   CMR II   EMI III   CMS IV   EMR V   CMC VI
1. Vocabulary               852     124      236       167      144     212
2. Comprehension            683     243      188       352      270     295
3. Word Classification      332     205      153       231      228     846
4. Verbal Analogies I       244     904      158       170      183     175
5. Sentence Selection       283     155      913       176      124     131
6. Wide Range               819     221      179       208      251     156
7. Math Aptitude            334     191      204       854      162     222
8. Verbal Analogies III     296     190      130       152      890     201

KEY: CMU = Cognition, semantic, units; CMR = Cognition, semantic, relations; EMI = Evaluation, semantic, implications; CMS = Cognition, semantic, systems; EMR = Evaluation, semantic, relations; CMC = Cognition, semantic, classes.
NOTE: Decimals omitted.

The columns (factors) can be identified by the variables which load highest on them. Column I is labeled cognition, semantic, units because of the high loading of the reference test for that factor (variable 6, Wide Range Vocabulary). Column II is labeled cognition, semantic, relations because of the high loading of the reference test for that factor (variable 4, Verbal Analogies I). Each of the six columns (factors) can be thus identified. The Vocabulary test can be seen to load heavily on Factor I and not significantly on any others. Actually, the Vocabulary subtest has a higher loading on this factor than does the reference test. Factor I has already been identified as cognition, semantic, units. The factorial nature of the Vocabulary subtest in percentage form is:

Cognition, semantic, units: 72 per cent
Error variance: 19 per cent
Unknown variance: 9 per cent

The above percentages show the sources of variance of the Vocabulary subtest. The 72 per cent was calculated by the formula $(a_{jr})^{2} \times 100$, where $a_{jr}$ is the factor loading of test $j$ (in this case Vocabulary) on factor $r$ (in this case cognition, semantic, units). The error percentage was obtained by squaring the reliability coefficient obtained from the test manual (69:8) and subtracting the result from 100, i.e., $(1 - r_{tt}^{2}) \times 100$. The 9 per cent is what remains after subtracting 72 per cent plus 19 per cent from 100. It can be seen that most of the reliable variance has been accounted for by the one factor.

The factorial structure of the Vocabulary subtest imposed on the semantic dimension of Guilford's model is shown in Figure 6. In Guilford's theoretical model there are thirty (6 x 5) semantic factors. In this study the Vocabulary test was found to measure one of them.

FIGURE 6. Factorial structure of the Vocabulary subtest (from the California Reading Test) imposed upon the "Structure-of-Intellect" model; X marks the significant loading. SOURCE: J. P. Guilford and R. Hoepfner, "Current Summary of Structure-of-Intellect Factors and Suggested Tests," A Report from the Psychological Laboratory, University of Southern California, December, 1963.
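The arithmetic behind the percentages reported above is easily verified. A worked example follows, using the Table 15 loading of .852 and the manual reliability of .90 reported for the Vocabulary subtest in Appendix I; the same computation applies to the Comprehension loadings discussed next.

$(a_{jr})^{2} \times 100 = (.852)^{2} \times 100 \approx 72.6 \approx 72 \text{ per cent (CMU)}$

$(1 - r_{tt}^{2}) \times 100 = (1 - .90^{2}) \times 100 = 19 \text{ per cent (error)}$

$100 - 72 - 19 = 9 \text{ per cent (unknown)}$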
Findings Relating to the Question, "What Is the Nature of the Comprehension Subtest Factor(s)?"

In this study the largest factor by far in the Comprehension test was the cognition, semantic, units factor. This was the same factor found in the Vocabulary test (Table 15, page 70). Comprehension is variable 2, and the high loading on Factor I (CMU) is apparent. This variable also has significant loadings on the factors cognition, semantic, systems (CMS) and cognition, semantic, classes (CMC). It may be seen that in this analysis the Comprehension test was not as pure as the Vocabulary test:

Cognition, semantic, units: 47 per cent
Cognition, semantic, systems: 12 per cent
Cognition, semantic, classes: 9 per cent
Unknown: 12 per cent
Error: 20 per cent

The above illustrates the factorial structure of the Comprehension test. The percentages were obtained in the following manner. The factor percentages were obtained by the formula $(a_{jr})^{2} \times 100$, where $a_{jr}$ is the factor loading of test $j$ (in this case Comprehension) on factor $r$. The error percentage was obtained by squaring the reliability coefficient obtained from the manual (69:8) and subtracting the result from 100, i.e., $(1 - r_{tt}^{2}) \times 100$. The 12 per cent unknown variance was obtained by subtracting the error and other known amounts from 100.

Although the presence of the factors CMS and CMC in the Comprehension test is different from the Vocabulary factorial structure, the practical difference is not great. Both tests measure largely cognition, semantic, units, and the second largest source of variance is error. This would indicate that if a subject obtained significantly different scores on these two tests, it probably would be due to the unreliability of the tests or to some condition in the test situation rather than to a true difference in abilities. Some scores, of course, will differ because of true factorial differences, but in individual cases it would not be possible to distinguish true differences from differences due to error. In comparing the two tests (Vocabulary and Comprehension) it can be seen that both measure, for practical purposes, the factor cognition, semantic, units, but the Vocabulary test does it better from the standpoint of purity.

Imposing the factorial structure of the Comprehension test on Guilford's "Structure-of-Intellect" model, a clearer conception of what the test does and does not measure can be obtained, as shown in Figure 7. Of the thirty potential factors, the Comprehension test measures one prominently and two others to a smaller degree.

FIGURE 7. Factorial structure of the Comprehension subtest (from the California Reading Test) imposed upon the "Structure-of-Intellect" model; X marks the significant loadings. SOURCE: J. P. Guilford and R. Hoepfner, "Current Summary of Structure-of-Intellect Factors and Suggested Tests," A Report from the Psychological Laboratory, University of Southern California, December, 1963.

Relationship to Previous Studies

The factor cognition, semantic, units is, according to Guilford, the well-established factor "verbal comprehension" (71:4). As the name implies, this factor deals with verbal stimuli rather than numerical or symbolic stimuli. It involves the comprehension or understanding of the stimuli rather than production or "fluency." In several studies, vocabulary tests have been the best
measure of this factor (19, 33, 36, 43, 49). It is not surprising that the California Vocabulary Test is largely verbal comprehension. The finding that the Comprehension test also measures largely the one factor is consistent with most other studies, and demonstrates again the lack of success in making a reading test which is factorially complex. By comparing tests and analytic studies, French (30) concluded that this verbal comprehension factor is the major source of variance in Thurstone's Verbal P.M.A. test. Apparently this factor emerges from most batteries which include tests of verbal content.

The Comprehension test showed significant loadings on two other factors. One, cognition, semantic, systems, is, according to Guilford, the well-known "general reasoning" factor, or Spearman's "g" (71:4). The most representative tests of this factor are arithmetic reasoning or arithmetic "word problems," and tests of deductive reasoning (49). The other, cognition, semantic, classes, or "conceptual classification," is unique to Guilford's battery, and no other reference to it was found.

Generalization to Phase I

Phase I of this study gave evidence that reading tests of various publishers measured essentially the same factor. It was possible that the one large group factor in the Phase I battery could have broken down into two or more factors in another battery. This did not happen, however, and it appears reasonable to conclude that the prominent factor found in Phase I is "verbal comprehension," and the tests in that study are largely measures of this. The nature of the minor differences among the tests was not studied in this analysis, and generalizations are to be avoided.

Characteristics of the Test Battery

Table 16 shows the means and standard deviations for the tests, and Kuder-Richardson (Formula No. 21) reliability coefficients (13). The latter coefficients are conservative estimates of internal consistency reliability, and do not account for variation over a period of time. Probably the reliability coefficient for the Sentence Selection test is spuriously low, since its correlations with the other tests are much higher (see Table 17). The obtained reliabilities of the Word Classification, Verbal Analogies I and III, and Sentence Selection tests would indicate that these were not "pure" measures of their respective factors with this group of subjects. Further evidence of this is the high intercorrelation among the tests and the significant or near-significant loadings of these tests on the "verbal comprehension" factor.

TABLE 16
MEANS, STANDARD DEVIATIONS, AND RELIABILITY COEFFICIENTS—CALIFORNIA READING AND SELECTED REFERENCE TESTS

Tests                       Means    Standard Deviations   Reliability Coefficients(a)
California Vocabulary       38.121        10.996                 .90
California Comprehension    48.754        13.645                 .88
Word Classification         18.020         4.422                 .50
Verbal Analogies I           6.529         2.910                 .45
Sentence Selection           9.996         2.552                 .33
Wide Range                  44.141        14.952                 .89
Math Aptitude                7.552         5.670                 .84
Verbal Analogies III         9.573         3.137                 .52

(a) Kuder-Richardson Formula No. 21.

TABLE 17
INTERCORRELATIONS AMONG THE CALIFORNIA READING TESTS AND SELECTED REFERENCE TESTS

(Each row shows correlations with the variables that follow it.)
1. California Vocabulary: 76 59 47 54 79 59 52
2. California Comprehension: 68 57 54 79 69 61
3. Word Classification: 52 44 60 61 57
4. Verbal Analogies I: 43 53 50 49
5. Sentence Selection: 52 51 39
6. Wide Range: 61 59
7. Math Aptitude: 48
8. Verbal Analogies III

NOTE: Decimals omitted.
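The Kuder-Richardson Formula No. 21 values in Table 16 can be checked directly from the reported means and standard deviations. As a worked example, for the California Vocabulary subtest, taking the item count of 60 from the Phase I test description in Appendix I:

$r_{21} = \frac{k}{k-1}\left(1 - \frac{\bar{X}(k - \bar{X})}{k s^{2}}\right) = \frac{60}{59}\left(1 - \frac{38.121\,(60 - 38.121)}{60 \times 10.996^{2}}\right) \approx .90$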
There is a possibility that had these tests been purer measures, they might have "pulled" a greater amount of variance from the reading test being studied. This is doubtful, however, since the fairly pure Wide Range Vocabulary test pulled such a large amount of the reliable variance. Had there been larger amounts of variance unaccounted for, perhaps these tests could have pulled a larger portion, but there was very little unaccounted variance in the reading test scores after the verbal comprehension factor was removed.

Table 17, page 80, shows the intercorrelations among the tests. All of the correlations are positive and significant beyond the .01 level of confidence. The highest correlations are those of the Vocabulary and Comprehension tests with the Wide Range Vocabulary.

An examination of the principal factor matrix yields more information on the characteristics of the test battery. Table 18 shows the matrix prior to rotation. The first factor, which pulls the maximum amount of variance, is heavily loaded by each variable. The highest loadings are those of the Vocabulary, Comprehension, and Wide Range Vocabulary tests. This first principal factor accounts for 63 per cent of the total variance.

TABLE 18
UNROTATED FACTOR MATRIX—CALIFORNIA READING TESTS AND SELECTED REFERENCE TESTS

Tests                      I     II    III    IV     V     VI    VII   VIII
Vocabulary                841   222   285   054    212   073   299    138
Comprehension             901   022   149  -054    076  -027  -305    250
Word Classification       793  -208   037  -242   -264   441   015   -565
Verbal Analogies I        701  -334  -508  -009    365   015   043    012
Sentence Selection        682   523  -398   240   -206   070  -027   -010
Wide Range                868   097   241   087    232  -029  -119   -317
Math Aptitude             791   064  -071  -436   -185  -369   079   -040
Verbal Analogies III      732  -406   097   419   -286  -175   054    012
Cumulative Proportion
of Total Variance         628   710   784   846    904   950   977   1.000

NOTE: Decimals omitted.

Inherent in the principal axes method of analysis is bi-polarity of all factors after the first, and this may be seen in the table. These negative relationships are difficult to explain on the basis of logic or content, and this is one reason why rotation of the axes is necessary. Bi-polar factors are more easily explained with measures of attitudes or other noncognitive traits such as love-hate, introversion-extraversion, or creativity-rigidity.

Sex Differences

The data were analyzed to determine if any sex differences were present. Table 19 shows the results of this analysis. Using the "standard errors of factor coefficients" table in Harman (11:441), it was found that none of the differences in factor loadings on variables 1 and 2 was significant (1.95 x σ; .05 level). It was concluded that the factorial structure of the Vocabulary and Comprehension tests was similar for the male and female subjects of this study. Table 20 shows the means and standard deviations of the boys and girls on the various tests.
TABLE 19
ROTATED FACTOR MATRIX COMPARING BOYS AND GIRLS—CALIFORNIA READING TESTS AND SELECTED REFERENCE TESTS

Tests (Girls/Boys)              I        II       III      IV       V        VI
Vocabulary (California)      802/871  091/160  278/210  145/184  211/111  294/193
Comprehension (California)   716/650  247/226  189/194  323/386  300/242  239/344
Word Classification          415/261  205/218  121/179  253/220  202/264  813/849
Verbal Analogies I           248/252  901/900  164/161  157/191  210/162  160/196
Sentence Selection           236/312  150/167  934/882  162/196  094/164  093/169
Wide Range                   842/776  250/221  126/245  243/188  162/382  164/111
Math Aptitude                354/302  178/203  209/202  845/863  164/173  226/209
Verbal Analogies III         282/286  208/165  102/160  144/171  904/872  159/254

NOTE: Decimals omitted. In each cell the first value is for girls, the second for boys.

TABLE 20
COMPARISON BETWEEN BOYS AND GIRLS: MEANS AND STANDARD DEVIATIONS ON CALIFORNIA READING TESTS AND SELECTED REFERENCE TESTS

                              Means                       Standard Deviations
Tests                       Girls-Boys   Significance    Girls-Boys   Significance
Vocabulary (California)       3.803        .01             -1.472      Not Sig.
Comprehension (California)    2.433        Not Sig.         -.626      Not Sig.
Word Classification           -.093        Not Sig.         -.870      .01
Verbal Analogies I            -.095        Not Sig.          .341      .01
Sentence Selection             .221        Not Sig.         -.133      .01
Wide Range                    4.051        .01             -1.338      Not Sig.
Math Aptitude                  .372        Not Sig.         -.265      Not Sig.
Verbal Analogies III           .263        Not Sig.          .067      Not Sig.

Boys obtained significantly lower mean raw scores on the Vocabulary and Wide Range Vocabulary tests. No other mean differences reached a significant level. A test for the significance of differences between standard deviations was also obtained. As can be seen, the boys were significantly more variable on the Word Classification and Sentence Selection tests, and the girls were significantly more variable on Verbal Analogies I. In spite of these differences, the factor pattern remained essentially the same for each sex.

CHAPTER VI
SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS

Summary

Area of Investigation

This study was designed to help discover the factorial validity of standardized, group-administered survey reading tests.

Background of the Problem

Each school year over two million survey reading tests are administered to students in schools throughout the nation. The results of these tests are used in such functions as grouping, evaluation of curriculum, and evaluation of individual progress. As with any measuring instrument it is necessary, for correct interpretation of test scores, that the validity of these tests be known. It is therefore pertinent to ask the question, "What do survey reading tests measure?" The test manuals which accompany the most popular of the survey reading tests give little or no evidence concerning this question of validity. For the most part these manuals state only that the test items were written with certain aspects of the reading process in mind. None of the manuals provides empirical data to support its claims.

A review of the literature was conducted to obtain background information and research which would aid the test user in determining the validity of the tests in question. Several studies, dating back to the 1920's, were found which related to this problem. Articles by Cassell (19), Chein (21), Woodrow (58), Schneck (43), and Robinson and Hall (42) all indicated that reading test score variance could be attributed largely to one factor, variously termed "reading competency," "reading," "verbal," or "verbal comprehension."
French (30), in the process of compiling a library of factored tests, also came to the conclusion that reading test score variance could be attributed to one factor which was quite resistant to factorial breakdown. He labeled this one factor "verbal comprehension." Langsom (36) also attributed a large portion of the obtained test score variance to the verbal comprehension factor, but her study was unique in that she found a "word" factor which separated from the other verbal factor. The conclusion drawn from the review of the relevant studies was that most reading test score variance is attributable to the one factor, "verbal comprehension," and not much else. The manuals of current reading tests lead one to believe that there are several factors involved in their respective tests. This research was conducted to help determine whether current reading tests do, in fact, measure a variety of factors or whether they, like their predecessors, measure largely one factor.

Questions to Be Answered

1. How many factors are needed to account for reading test score variance?
2. Does the typical vocabulary (or word meaning) and paragraph meaning (or comprehension) division represent a true dichotomy?
3. Can the test items be regrouped in such a way that separate factors will emerge (using Bloom's Taxonomy of Educational Objectives (2) as a model)?
4. What is the nature of the vocabulary subtest factor(s)?
5. What is the nature of the comprehension subtest factor(s)?

Procedures

Phase I: Preliminary Investigation

The first phase of the study was designed to deal with Questions 1, 2, and 3 in the previous section. The five most used survey reading tests in the state of California were selected for study. These were the reading tests from the following achievement test batteries: (1) California Achievement Tests, Form W, Grades 4-6, 1957; (2) Iowa Tests of Basic Skills, Form 1, Grades 3-9, 1955; (3) Metropolitan Achievement Tests, Form AM, Grades 5-6, 1959; (4) SRA Achievement Series, Form A, Grades 4-6, 1954; and (5) Sequential Tests of Educational Progress, Form 4A, 1957.

These tests were administered to approximately 250 fifth grade public school pupils in Southern California. The tests were hand scored, and the scores were punched onto IBM cards. Intercorrelations among the tests were obtained and factor analyzed on Western Data Processing machines. The factor program used was a principal axes method, and squared multiple-correlation coefficients were used as estimates of communality. The axes were rotated analytically, using the varimax criterion and maintaining orthogonality.

Following this initial analysis the test items were reclassified according to the Taxonomy of Educational Objectives. It was found that the items fell into eight categories, mostly in the Knowledge and Comprehension sections. The tests were rescored on the basis of this classification, and these new part scores were punched onto IBM cards. Intercorrelations, factor analyses, and rotations of the axes were computed, using the same factor program as in the initial analysis.

Phase II: Final Investigation

This phase of the study was designed to obtain answers to Questions 4 and 5. To get at the nature of the factors, certain reference tests from Guilford's "Structure-of-Intellect" battery were selected. The California Reading Test was selected as the representative of all survey reading tests.
The six reference tests and the Reading Test were administered to 250 seventh grade pupils in a public junior high school in Southern California. The tests were hand scored and the scores were punched onto IBM cards. Statistical analysis was accomplished by Bi Med Program No. 03M, using a principal factor method with orthogonal rotations and the varimax criterion.

Findings

The following findings are numbered according to the "Questions to Be Answered" to which they relate:

1. All of the survey reading tests had factor loadings of .80 or more on the first principal factor and no significant loadings on any other factor in the unrotated factor matrix. Ninety-three per cent of the total variance was accounted for by this first principal factor.

2. Following rotation, three significant factors emerged. The first of these pulled heavier loadings from all the comprehension subtests than from any of the vocabulary subtests. The range of loadings on this factor for the comprehension subtests was .46 to .67. The range for the vocabulary subtests was .31 to .43. All of the loadings were statistically significant. The differences in loadings between the comprehension subtests and the vocabulary subtests were small but significant. The S.T.E.P. tests had the highest loadings on this factor. The two other factors which emerged had equally heavy loadings from both vocabulary and comprehension subtests. None of the three factors was clearly either vocabulary or comprehension.

3. No meaningful factors emerged as a result of grouping the items according to the Taxonomy of Educational Objectives. The items had a definite tendency to cling to other items from the same test rather than to items from the same Taxonomy classification.

4. It was found that the California Reading Vocabulary subtest loaded exclusively on the factor cognition, semantic, units (verbal comprehension). The loading on this factor was .85.

5. The California Reading Comprehension subtest had a high (.68) loading on the factor cognition, semantic, units. It had smaller loadings on the factors cognition, semantic, systems (.35) and cognition, semantic, classes (.30).

Supplementary Findings

In addition to those findings which related directly to the questions, one other aspect of the factorial validity of reading tests was studied: sex differences. The data were divided by sex, and each part was factor analyzed by the same factor program (Bi Med Program No. 03M). The results showed no significant sex differences in terms of factor loadings.

Conclusions

The following conclusions are based upon the findings which relate to the five questions used throughout this study:

1. How many factors are needed to account for reading test score variance? All of the reading tests which were studied in Phase I of this research were heavily loaded on the first principal factor. The second largest source of variance in each case was error. For practical purposes, the tests were measuring one general factor.

2. Do the typical "vocabulary" (or word meaning) and "paragraph meaning" (or comprehension) dimensions represent a true dichotomy? The amount of overlap between these two subtests was greater than any degree of separation. This was especially true of the two subtests of any one test. The two California subtests (Vocabulary and Comprehension), for example, were much closer in factorial composition to each other than either one was to any other subtest in the battery.
3. Can the test items be regrouped in such a way that separate factors will emerge (using Bloom's Taxonomy of Educational Objectives as a model)? No separate factors emerged as a result of the regrouping of the items. There was a strong tendency for items of a given test to cling to each other rather than to items from other tests which fell into the same Taxonomy category.

4. What is the nature of the vocabulary subtest factor(s)? The Vocabulary subtest was a fairly pure measure of verbal comprehension. This is a factor which has been recognized for some time. It is the verbal factor which, together with general and spatial reasoning, contributes most of the variance of intelligence test scores. It is the "verbal" factor in Thurstone's Primary Mental Abilities Test (P.M.A.). The Vocabulary subtest did not have significant loadings on any other factor in this battery of tests.

5. What is the nature of the comprehension subtest factor(s)? The largest factor present in the Comprehension subtest was verbal comprehension. This accounted for 47 per cent of the total variance. The second largest factor present in this subtest was general reasoning. This accounted for 12 per cent of the total variance. Nine per cent of the total variance was attributed to the factor cognition, semantic, classes (Guilford's terminology). The practical significance of these two lesser factors was not great, since error variance accounted for a larger portion of the total variance than did either of these.

Recommendations for Test Users

1. It appears that a good deal of administrative time could be saved, without significant loss of reliability or validity, by using tests with a vocabulary type of format. This would eliminate the time-consuming paragraph-and-question format. Since the vocabulary format appears to be a purer measure of verbal comprehension, more precise interpretations could be made.

2. Even though most of the test manuals advise using only whole test scores, they often imply that item-by-item analysis has diagnostic value. At the present time there is no evidence that this is true. It is better to use only whole test scores.

3. If an individual obtains significantly different vocabulary and paragraph meaning subtest scores, this is more probably due to error of measurement than to true differences in any reading skill.

Recommendations for Further Research

The greatest need appears to be for basic research into those factors which differentiate pupils in reading. Until there is an awareness of the cognitive and perceptual determinants of reading success, it will be difficult to construct more precise instruments than are currently available.

BIBLIOGRAPHY

Books

1. Anastasi, Anne. Psychological Testing. New York: The MacMillan Company, 1954.
2. Bloom, B. S. (editor). Taxonomy of Educational Objectives. New York: David McKay Company, 1956.
3. Bond, Guy L., and Eva Bond Wagner. Child Growth in Reading. Chicago: Lyons and Carnahan, 1955.
4. Buros, Oscar K. (editor). The Fifth Mental Measurements Yearbook. Highland Park, New Jersey: The Gryphon Press, 1959.
5. Burt, Cyril. The Factors of the Mind. New York: The MacMillan Company, 1941.
6. Burton, William H. Reading in Child Development. Indianapolis: The Bobbs-Merrill Company, Incorporated, 1959.
7. Cronbach, Lee J. Essentials of Psychological Testing. 2nd edition. New York: Harper and Brothers, 1960.
8. Fruchter, B. Introduction to Factor Analysis. Princeton, New Jersey: D. Van Nostrand Company, Incorporated, 1954.
9. Guilford, J. P.
Fundamental Statistics in Psychology and Education. 2nd edition. New York: McGraw-Hill Book Company, Incorporated, 1950.
10. ______. Psychometric Methods. New York: McGraw-Hill Book Company, Incorporated, 1954.
11. Harman, Harry H. Modern Factor Analysis. Chicago: The University of Chicago Press, 1960.
12. Thomson, G. Factorial Analysis of Human Abilities. London: University of London Press, 1951.
13. Thorndike, R. L., and E. Hagen. Measurement and Evaluation in Psychology and Education. 2nd edition. New York: John Wiley and Sons, Incorporated, 1961.
14. Thurstone, L. L. Primary Mental Abilities. Chicago: University of Chicago Press, 1938.
15. Vernon, P. E. The Structure of Human Abilities. New York: J. Wiley and Sons, 1951.
16. Warner, W. L., et al. Social Class in America. Chicago: Science Research Associates, 1949.

Periodicals

17. Anderson, Harold E., and A. T. Slivinske. "A Study of Intelligence and Achievement at the Fourth, Fifth, and Sixth Grade Levels," Journal of Experimental Education, XXXI (Summer, 1963), 425-432.
18. Brody, L. "Comparable Tests of Verbal and Non-Verbal Reasoning," Journal of Educational Psychology, XXXI (March, 1945), 166-184.
19. Cassell, R. N., and E. J. Stancik. "Factorial Content of the Iowa Tests of Educational Development and Other Tests," Journal of Experimental Education, XXIX (December, 1960), 193-196.
20. Chant, S. N. F. "Multiple Factor Analysis and Psychological Concepts," Journal of Educational Psychology, XXVI (April, 1935), 263-272.
21. Chein, I. "An Empirical Study of Verbal, Numerical and Spatial Factors in Mental Organization," The Psychological Record, III (May, 1939), 71-94.
22. Coan, Richard W. "Facts, Factors, and Artifacts: The Quest for Psychological Meaning," Psychological Review, LXXI (March, 1964), 123-140.
23. Committee on Test Standards. "Technical Recommendations for Psychological Tests and Diagnostic Techniques," Psychological Bulletin, LI (March, 1954), 201-235.
24. Cureton, E. E. "The Principal Compulsions of Factor Analysts," Harvard Educational Review, IX (May, 1939), 287-295.
25. Davis, F. B. "Two New Measures of Reading Ability," Journal of Educational Psychology, XXXII (May, 1942), 365-372.
26. ______. "Fundamental Factors of Comprehension in Reading," Psychometrika, IX (March, 1944), 185-197.
27. Dewey, J. C. "The Acquisition of Facts as a Measure of Reading Comprehension," Elementary School Journal, XXXV (October, 1935), 346-348.
28. Eysenck, H. J. "The Logical Basis of Factor Analysis," American Psychologist, VIII (March, 1953), 105-113.
29. Feder, D. D. "Comprehension Maturity Tests," Journal of Educational Psychology, XXIX (November, 1938), 597-606.
30. French, John W. "The Description of Aptitude and Achievement Tests in Terms of Rotated Factors," Psychometric Monographs, 1951, pp. 1-278.
31. Gates, A. I. "An Experimental and Statistical Study of Reading and Reading Tests," Journal of Educational Psychology, XII (September, 1921), 303-314.
32. Gipe, Melvin W., and T. A. Shellhammer. "A Study of Standardized Group Testing Programs in California Public Schools," California Schools, XXXII (1961), 3-16.
33. Guilford, J. P., et al. "A Factor Analysis Study of Human Interests," Psychological Monographs, LXVIII (1954), 1-38.
34. Guilford, J. P. "A Factor-Analytic Study of Navy Reasoning Tests with the Air Force Aircrew Classification Battery," Educational and Psychological Measurement, XIV (Summer, 1954), 361-373.
35. Humphreys, L. G. "The Organization of Human Abilities," American Psychologist, XVII (July, 1962), 475-483.
36. Langsom, R. S. "A Factorial Analysis of Reading Ability," Journal of Experimental Education, X (September, 1941), 59-63.
37. Leavell, V. W. "A Comparison of Basic Factors in Reading Patterns with Intelligence," Peabody Journal of Education, XVI (November, 1938), 140-146.
38. Lennon, R. T. "What Can Be Measured?" The Reading Teacher, XV (March, 1962), 326-337.
39. Merrifield, Philip R., and Norman Cliff. "Factor Analytic Methodology," Review of Educational Research, XXXII (1963), 510-522.
40. Michael, W. B. "Overview of Symposium," Educational and Psychological Measurement, XVIII (Autumn, 1958), 455-461.
41. Reed, J. C., and R. S. Pepper. "The Interrelationship of Vocabulary, Comprehension and Rate Among Disabled Readers," Journal of Experimental Education, XXV (June, 1957), 331-337.
42. Robinson, F. P., and W. E. Hall. "An Analytical Approach to the Study of Reading Skills," Journal of Educational Psychology, XXXVI (October, 1945), 429-445.
43. Schneck, M. N. R. "The Measurement of Verbal and Numerical Abilities," Archives of Psychology, 1929, pp. 1-49.
44. Shank, S. "Student Responses in the Measurement of Reading Comprehension," Journal of Educational Research, XXII (September, 1930), 113-125.
45. Sims, V. M. "The Reliability and Validity of Four Types of Vocabulary Tests," Journal of Educational Research, XX (September, 1929), 91-96.
46. Stalnaker, J. M. "Results from Factor Analysis with Special Reference to Primary Mental Abilities," Journal of Educational Research, XXXIII (May, 1940), 698-704.
47. Stephenson, W. "Tetrad-differences for Verbal Subtests," Journal of Educational Psychology, XXII (April, 1931), 255-267.
48. Stolurow, L. M., and J. R. Newman. "A Factorial Analysis of Objective Features of Printed Language Presumably Related to Reading Difficulty," Journal of Educational Research, LII (March, 1959), 243-251.
49. Swineford, Frances. "The Nature of the General, Verbal, and Spatial Bi-Factors," Supplementary Educational Monographs, November, 1948, pp. 1-71.
50. Taylor, E. A., and J. H. Crandall. "A Study of the 'Norm Equivalence' of Certain Tests Approved for the California State Testing Program," California Journal of Educational Research, XIII (September, 1962), 186-192.
51. Tenopyr, M. L., and W. B. Michael. "A Comparison of Two Computer-Based Procedures of Orthogonal Analytic Rotation with a Graphical Method When a General Factor Is Present," Educational and Psychological Measurement, XXIII (Autumn, 1963), 587-597.
52. Thurstone, L. L. "A New Concept of Intelligence," The Educational Record, XVII (October, 1936), 441-450.
53. ______. "Current Misuse of the Factorial Method," Psychometrika, II (June, 1937), 73-76.
54. ______. "Note on a Re-Analysis of Davis' Reading Tests," Psychometrika, XI (September, 1946), 185-188.
55. Traxler, A. E. "One Reading Test Serves the Purpose," Clearing House, XIV (March, 1940), 419-421.
56. ______, et al. "Ten Years of Research in Reading," Educational Records Bulletin, 1941, pp. 1-193.
57. Tryon, R. L. "General Dimensions of Individual Differences: Cluster Analysis vs. Multiple Factor Analysis," Educational and Psychological Measurement, XVIII (Autumn, 1958), 477-495.
58. Woodrow, H. "The Common Factors in Fifty-Two Mental Tests," Psychometrika, IV (June, 1939), 99-107.
59. Wrigley, C. "Objectivity in Factor Analysis," Educational and Psychological Measurement, XVIII (Autumn, 1958), 463-476.
60. Wyman, J. B., and M. Wendle. "What Is Reading Ability?" Journal of Educational Psychology, XII (January, 1921), 518-531.

Tests, Manuals, and Reports

61. California Test Bureau. Glossary of Measurement Terms. Monterey, California: California Test Bureau, 1960.
62. Cooperative Test Division. "Teacher's Guide," Sequential Tests of Educational Progress. Princeton, New Jersey: Educational Testing Service, 1959.
63. Durost, W. N. (editor). Directions for Administration, Metropolitan Achievement Tests. Chicago: World Book Company, 1959.
64. Esser, B. F. "A Preliminary Factor Analysis of the Scholastic Test Mathematics Section," Educational Testing Service Report, TDR-62-2, June, 1962. (Mimeographed.)
65. French, J. W., R. B. Ekstrom, and L. A. Price. Manual for Kit of Reference Tests for Cognitive Factors. Princeton, New Jersey: Educational Testing Service, 1963.
66. Lindquist, E. F., and A. N. Hieronymus. Manual, Iowa Tests of Basic Skills. Boston: Houghton Mifflin Company, 1956.
67. ______. Teacher's Manual, Iowa Tests of Basic Skills. Boston: Houghton Mifflin Company, 1955.
68. Thorpe, L. P., D. W. Lefever, and R. A. Naslund. Examiner Manual, SRA Achievement Series. Chicago: Science Research Associates, Incorporated, 1956.
69. Tiegs, E. W., and W. W. Clark. Manual, California Achievement Tests. Hollywood: California Test Bureau, 1957.

Unpublished Materials

70. Fortna, Richard O. "A Factor-Analytic Study of the Cooperative School and College Ability Tests and Sequential Tests of Educational Progress." An Abstract of a Paper Presented at the 40th Meeting of the California Educational Research Association, Monterey, California, March, 1962.
71. Guilford, J. P., and R. Hoepfner. "Current Summary of Structure-of-Intellect Factors and Suggested Tests." A Report from the Psychological Laboratory, University of Southern California, December, 1963.
72. Khan, L. "Factor Analysis of Certain Aptitude and Personality Variables." Unpublished Ph.D. dissertation, University of Southern California, 1959.
73. Michael, William B. "Contributions of Factor Analysis to the Understanding of Intelligence." A Report Based on a Presentation at the Psychology Colloquium held at the University of California, Santa Barbara, November 13, 1961.

APPENDIX I
TESTS USED IN PHASE I

DESCRIPTIONS OF TESTS USED IN PHASE I

TEST: California Reading Test of the California Achievement Test Battery
Publisher: California Test Bureau
Authors: E. W. Tiegs and W. W. Clark
Date: 1957. Form: W. Level: Grades 4-6.
Subtests:
  Vocabulary. Format: multiple choice; choose the one of four words which is opposite to the stimulus word, in four content areas (mathematics, science, social studies, general). Number of items: 4 x 15 = 60. Administration time: 8 minutes.
  Comprehension. Format: multiple choice; following directions, reference skills, and answering questions about a paragraph. Number of items: 86. Administration time: 60 minutes.
Reliability data: Kuder-Richardson Formula No. 21 (N = 200): Vocabulary = .90; Comprehension = .92.
Validity data:
  Content: items originally were selected from a "careful study" of current (1934) curriculum objectives and courses of study. Later editions were based on nation-wide testing programs. Items were evaluated by experts as to importance.
  Construct: correlations with other measures:
    CTMM Language: Vocabulary .78, Comprehension .83 (N = 200)
    CTMM Non-Language: Vocabulary .70, Comprehension .76 (N = 200)
    Metro Vocabulary: Vocabulary .70 (N = 124)
    Stanford Word Meaning: Vocabulary .75 (N = 118)
    Metro Reading: Comprehension .84 (N = 124)
    Stanford Paragraph Meaning: Comprehension .77 (N = 118)

TEST: Vocabulary and Reading Comprehension Tests of the Iowa Tests of Basic Skills (ITBS)
Publisher: Houghton Mifflin Company
Authors: E. F. Lindquist and A. N.
TEST: Vocabulary and Reading Comprehension Tests of the Iowa Tests of Basic Skills (ITBS)
Publisher: Houghton Mifflin Company
Authors: E. F. Lindquist and A. N. Hieronymus (Directors of Preparation)
Date: 1956
Form: I
Level: 5th Grade

Format:
  Vocabulary -- Multiple choice: select the one of four words which is the same in meaning as the stimulus word.
  Reading Comprehension -- Answer questions about a paragraph.
Number of Items: Vocabulary, 43; Reading Comprehension, 74.
Administration Time: Vocabulary, 17 minutes; Reading Comprehension, 55 minutes.
Reliability Data: Split-half method: Vocabulary = .92; Reading = .94.
Validity Data: Based upon discrimination of the items and item difficulty.
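A note on the split-half figures above: half-test correlations are conventionally stepped up to full-test length with the Spearman-Brown formula. The statement below is that general correction, not a computation reproduced from the ITBS manual:

    r_{full} = \frac{2\,r_{half}}{1 + r_{half}}

where r_half is the correlation between scores on the two half-tests. For example, a half-test correlation of about .89 would step up to 2(.89)/(1 + .89), or approximately .94, the magnitude of the Reading coefficient reported above.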
TEST: Word Knowledge and Reading Tests of Metropolitan Achievement Test Series
Publisher: World Book Company
Editor: W. N. Durost
Date: 1959
Form: A
Level: Intermediate (Grades 5-6)

Format:
  Word Knowledge -- Multiple choice: choose the one of four alternates which is most like the stimulus word.
  Reading -- Multiple choice: answer questions from a paragraph.
Number of Items: Word Knowledge, 55; Reading, 44.
Administration Time: Word Knowledge, 14 minutes; Reading, 25 minutes.
Validity Data: Based on study of curricula and the judgment of experts.

TEST: Reading Test of SRA Achievement Series
Publisher: Science Research Associates
Authors: L. P. Thorpe, D. W. Lefever, and R. A. Naslund
Date: 1956
Form: A
Level: Grades 4-6

Format:
  Vocabulary -- Multiple choice: choose the one of the alternatives which is most similar to the stimulus word, which is found in context in a paragraph.
  Comprehension -- Answer questions about a paragraph.
Number of Items: Vocabulary, 50; Comprehension, 50.
Administration Time: 65 minutes for the total test.

TEST: Sequential Test of Educational Progress (S.T.E.P.) Reading
Publisher: Educational Testing Service
Author: None given
Date: 1957
Form: A
Level: Grades 4, 5, and 6

Format:
  Part One -- Multiple choice: answering questions about a paragraph.
  Part Two -- Multiple choice: answering questions about a paragraph.
Number of Items: Part One, 35; Part Two, 35.
Administration Time: Part One, 35 minutes; Part Two, 35 minutes.
Reliability Data: Alternate-form reliability high.
Validity Data: S.T.E.P. scores correlate .70 with grade point average (7th grade). Based upon item difficulty and item discrimination.

APPENDIX II

TESTS USED IN PHASE II

DESCRIPTIONS OF TESTS USED IN PHASE II

TEST: Wide Range Vocabulary Test
Publisher: The Psychological Corporation
Number of Items: 100
Format: Multiple choice: select the one of the 4-5 words which is most like the stimulus word.
Time Limit: Untimed; about 10 minutes.

TEST: Word Classification
Publisher: Aptitude Project, University of Southern California
Number of Items: 40
Format: Choose the one of four words which does not belong with the others.
Time Limit: 14 minutes

TEST: Mathematics Aptitude Test R-1
Publisher: Aptitude Project, University of Southern California
Number of Items: 30
Format: a : b :: c : ? Multiple choice: choose the one of five words which best completes the analogy.
Time Limit: 12 minutes

TEST: Mathematics Aptitude Test R-1
Publisher: Educational Testing Service
Number of Items: 30
Format: Arithmetic problems given verbally. Multiple choice: choose the best of five alternates.
Time Limit: 20 minutes

TEST: Verbal Analogies III
Publisher: Aptitude Project, University of Southern California
Number of Items: 20
Format: a : b :: c : ? Multiple choice: choose the one of five words which best completes the analogy.
Time Limit: 6 minutes

TEST: Sentence Selection
Publisher: Aptitude Project, University of Southern California
Number of Items: 18
Format: Given a statement, choose the one of three sentences which is most likely true, based on the given statement.
Time Limit: 8 minutes

APPENDIX III

LISTING OF TAXONOMY OF EDUCATIONAL OBJECTIVES

CONDENSED VERSION OF THE TAXONOMY OF EDUCATIONAL OBJECTIVES

Cognitive Domain

KNOWLEDGE

1.00 KNOWLEDGE
Knowledge, as defined here, involves the recall of specifics and universals, the recall of methods and processes, or the recall of a pattern, structure, or setting. For measurement purposes, the recall situation involves little more than bringing to mind the appropriate material. Although some alteration of the material may be required, this is a relatively minor part of the task. The knowledge objectives emphasize most the psychological processes of remembering. The process of relating is also involved in that a knowledge test situation requires the organization and reorganization of a problem such that it will furnish the appropriate signals and cues for the information and knowledge the individual possesses. To use an analogy, if one thinks of the mind as a file, the problem in a knowledge test situation is that of finding in the problem or task the appropriate signals, cues, and clues which will most effectively bring out whatever knowledge is filed or stored.

1.10 KNOWLEDGE OF SPECIFICS
The recall of specific and isolable bits of information. The emphasis is on symbols with concrete referents. This material, which is at a very low level of abstraction, may be thought of as the elements from which more complex and abstract forms of knowledge are built.

1.11 KNOWLEDGE OF TERMINOLOGY
Knowledge of the referents for specific symbols (verbal and non-verbal). This may include knowledge of the most generally accepted symbol referent, knowledge of the variety of symbols which may be used for a single referent, or knowledge of the referent most appropriate to a given use of a symbol.

*To define technical terms by giving their attributes, properties, or relations.
*Familiarity with a large number of words in their common range of meanings.

(*Illustrative educational objectives selected from the literature.)

1.12 KNOWLEDGE OF SPECIFIC FACTS
Knowledge of dates, events, persons, places, etc. This may include very precise and specific information such as the specific date or exact magnitude of a phenomenon. It may also include approximate or relative information such as an approximate time period or the general order of magnitude of a phenomenon.

*The recall of major facts about particular cultures.
*The possession of a minimum knowledge about the organisms studied in the laboratory.

1.20 KNOWLEDGE OF WAYS AND MEANS OF DEALING WITH SPECIFICS
Knowledge of the ways of organizing, studying, judging, and criticizing. This includes the methods of inquiry, the chronological sequences, and the standards of judgment within a field as well as the patterns of organization through which the areas of the fields themselves are determined and internally organized. This knowledge is at an intermediate level of abstraction between specific knowledge on the one hand and knowledge of universals on the other. It does not so much demand the activity of the student in using the materials as it does a more passive awareness of their nature.

1.21 KNOWLEDGE OF CONVENTIONS
Knowledge of characteristic ways of treating and presenting ideas and phenomena. For purposes of communication and consistency, workers in a field employ usages, styles, practices, and forms which best suit their purposes and/or which appear to suit best the phenomena with
which they deal. It should be recognized that although these forms and conventions are likely to be set up on arbitrary, accidental, or authoritative bases, they are retained because of the general agreement or concurrence of individuals concerned with the subject, phenomena, or problem.

*Familiarity with the forms and conventions of the major types of works, e.g., verse, plays, scientific papers, etc.
*To make pupils conscious of correct form and usage in speech and writing.

1.22 KNOWLEDGE OF TRENDS AND SEQUENCES
Knowledge of the processes, directions, and movements of phenomena with respect to time.

*Understanding of the continuity and development of American culture as exemplified in American life.
*Knowledge of the basic trends underlying the development of public assistance programs.

1.23 KNOWLEDGE OF CLASSIFICATIONS AND CATEGORIES
Knowledge of the classes, sets, divisions, and arrangements which are regarded as fundamental for a given subject field, purpose, argument, or problem.

*To recognize the area encompassed by various kinds of problems or materials.
*Becoming familiar with a range of types of literature.

1.24 KNOWLEDGE OF CRITERIA
Knowledge of the criteria by which facts, principles, opinions, and conduct are tested or judged.

*Familiarity with criteria for judgment appropriate to the type of work and the purpose for which it is read.
*Knowledge of criteria for the evaluation of recreational activities.

1.25 KNOWLEDGE OF METHODOLOGY
Knowledge of the methods of inquiry, techniques, and procedures employed in a particular subject field as well as those employed in investigating particular problems and phenomena. The emphasis here is on the individual's knowledge of the method rather than his ability to use the method.

*Knowledge of scientific methods for evaluating health concepts.
*The student shall know the methods of attack relevant to the kinds of problems of concern to the social sciences.

1.30 KNOWLEDGE OF THE UNIVERSALS AND ABSTRACTIONS IN A FIELD
Knowledge of the major schemes and patterns by which phenomena and ideas are organized. These are the large structures, theories, and generalizations which dominate a subject field or which are quite generally used in studying phenomena or solving problems. These are at the highest levels of abstraction and complexity.

1.31 KNOWLEDGE OF PRINCIPLES AND GENERALIZATIONS
Knowledge of particular abstractions which summarize observations of phenomena. These are the abstractions which are of value in explaining, describing, predicting, or in determining the most appropriate and relevant action or direction to be taken.

*Knowledge of the important principles by which our experience with biological phenomena is summarized.
*The recall of major generalizations about particular cultures.

1.32 KNOWLEDGE OF THEORIES AND STRUCTURES
Knowledge of the body of principles and generalizations together with their interrelations which present a clear, rounded, and systematic view of a complex phenomenon, problem, or field. These are the most abstract formulations, and they can be used to show the interrelation and organization of a great range of specifics.

*The recall of major theories about particular cultures.
*Knowledge of a relatively complete formulation of the theory of evolution.

INTELLECTUAL ABILITIES AND SKILLS

Abilities and skills refer to organized modes of operation and generalized techniques for dealing with materials and problems.
The materials and problems may be of such a nature that little or no specialized and technical information is required. Such information as is required can be assumed to be part of the individual's general fund of knowledge. Other problems may require specialized and technical information at a rather high level such that specific knowledge and skill in dealing with the problem and the materials are required. The abilities and skills objectives emphasize the mental processes of organizing and reorganizing material to achieve a particular purpose. The materials may be given or remembered.

2.00 COMPREHENSION
This represents the lowest level of understanding. It refers to a type of understanding or apprehension such that the individual knows what is being communicated and can make use of the material or idea being communicated without necessarily relating it to other material or seeing its fullest implications.

2.10 TRANSLATION
Comprehension as evidenced by the care and accuracy with which the communication is paraphrased or rendered from one language or form of communication to another. Translation is judged on the basis of faithfulness and accuracy, that is, on the extent to which the material in the original communication is preserved although the form of the communication has been altered.

*The ability to understand non-literal statements (metaphor, symbolism, irony, exaggeration).
*Skill in translating mathematical verbal material into symbolic statements and vice versa.

2.20 INTERPRETATION
The explanation or summarization of a communication. Whereas translation involves an objective part-for-part rendering of a communication, interpretation involves a reordering, rearrangement, or a new view of the material.

*The ability to grasp the thought of the work as a whole at any desired level of generality.
*The ability to interpret various types of social data.

2.30 EXTRAPOLATION
The extension of trends or tendencies beyond the given data to determine implications, consequences, corollaries, effects, etc., which are in accordance with the conditions described in the original communication.

*The ability to deal with the conclusions of a work in terms of the immediate inference made from the explicit statements.
*Skill in predicting continuation of trends.

3.00 APPLICATION
The use of abstractions in particular and concrete situations. The abstractions may be in the form of general ideas, rules of procedures, or generalized methods. The abstractions may also be technical principles, ideas, and theories which must be remembered and applied.

*Application to the phenomena discussed in one paper of the scientific terms or concepts used in other papers.
*The ability to predict the probable effect of a change in a factor on a biological situation previously at equilibrium.

4.00 ANALYSIS
The breakdown of a communication into its constituent elements or parts such that the relative hierarchy of ideas is made clear and/or the relations between the ideas expressed are made explicit. Such analyses are intended to clarify the communication, to indicate how the communication is organized, and the way in which it manages to convey its effects, as well as its basis and arrangement.

4.10 ANALYSIS OF ELEMENTS
Identification of the elements included in a communication.

*The ability to recognize unstated assumptions.
*Skill in distinguishing facts from hypotheses.

4.20 ANALYSIS OF RELATIONSHIPS
The connections and interactions between elements and parts of a communication.

*Ability to check the consistency of hypotheses with given information and assumptions.
‘Ability to check the consistency of hypotheses with given information and assumptions. 125 ♦Skill in comprehending the interrelation ships among the ideas in a passage. 4.30 ANALYSIS OF ORGANIZATIONAL PRINCIPLES The organization, systematic arrangement, and structure which hold the communication together. This includes the "explicit" as well as the "implicit" structure. It includes the bases, necessary arrangement, and the mechanics which make the communication a unit. ♦The ability to recognize form and pattern in literary or artistic works as a means of understanding their meaning. ♦Ability to recognize the general techniques used in persuasive materials, such as adver tising, propaganda, etc. 5.00 SYNTHESIS The putting together of elements and parts so as to form a whole. This involves the process of working with pieces, parts, elements, etc., and arranging and combining them in such a way as to constitute a pattern or structure not clearly there before. 5-10 PRODUCTION OF A UNIQUE COMMUNICATION The development of a communication in which the writer or speaker attempts to convey ideas, feelings, and/or experiences to others. ♦Skill in writing, using an excellent organization of ideas and statements. ♦Ability to tell a personal experience effectively. 5.20 PRODUCTION OF A PLAN, OR PROPOSED SET OF OPERATIONS The development of a plan of work or the pro posal of a plan of operations. The plan should satisfy requirements of the task which may be given to the student or which he may develop for himself. 126 ♦Ability to propose ways of testing hypotheses. ♦Ability to plan a unit of instruction for a particular teaching situation. 5.30 DERIVATION OF A SET OF ABSTRACT RELATIONS The development of a set of abstract relations either to classify or explain particular data or phenomena, or the deduction of propositions and relations from a set of basic propositions or symbolic representations. ♦Ability to formulate appropriate hypotheses based upon an analysis of factors involved, and to modify such hypotheses in the light of new factors and considerations. ♦Ability to make mathematical discoveries and generalizations. 6.00 EVALUATION Judgments about the value of material and methods for given purposes. Quantitative and qualitative judgments about the extent to which material and methods satisfy criteria. Use of a standard of appraisal. The criteria may be those determined by the student or those which are given to him. 6.10 JUDGMENTS IN TERMS OF INTERNAL EVIDENCE Evaluation of the accuracy of a communication from such evidence as logical accuracy, con sistency, and other internal criteria. ♦Judging by internal standards, the ability to assess general probability of accuracy in reporting facts from the care given to exactness of statement, documentation, proof, etc. ♦The ability to indicate logical fallacies in arguments. 127 6.20 JUDGMENTS IN TERMS OF EXTERNAL CRITERIA Evaluation of material with reference to selected or remembered criteria. *The comparison of major theories, gener alizations, and facts about particular cultures. ♦Judging by external standards, the ability to compare a work with the highest known standards in its field— especially with other works of recognized excellence.