This dissertation has been microfilmed exactly as received.  69-5073

VAN KEKERIX, Donald Lorraine, 1925-
TRANSFORMATIONAL PROCESSING OF SENTENCES CONTAINING ADJECTIVAL MODIFIERS.

University of Southern California, Ph.D., 1968
Psychology, general

University Microfilms, Inc., Ann Arbor, Michigan
TRANSFORMATIONAL PROCESSING OF SENTENCES
CONTAINING ADJECTIVAL MODIFIERS
by
Donald Lorraine Van Kekerix
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Psychology)
August 1968
UNIVERSITY OF SOUTHERN CALIFORNIA
THE GRADUATE SCHOOL
UNIVERSITY PARK
LOS ANGELES, CALIFORNIA 90007
This dissertation, written by
Donald Lorraine Van Kekerix
under the direction of his Dissertation Committee, and approved by all its members, has been presented to and accepted by the Graduate School, in partial fulfillment of requirements for the degree of
DOCTOR OF PHILOSOPHY

Dean
Date
DISSERTATION COMMITTEE
Chairman
ACKNOWLEDGEMENTS
The author wishes to express thanks to Norman Cliff
for his encouragement and for the many helpful suggestions
provided during the preparation of this study. Thanks are
also due Daniel Davis and Maurice Van Arsdol for their
pointed criticisms and willingness to cope with an unfa­
miliar subject matter.
Grateful acknowledgement is also extended to the
Douglas Aircraft Company for scholarship support during the
school year 1965-1966.
Finally, special thanks go to my wife for support,
both moral and financial, as well as patience beyond com­
prehension. Thanks also to my daughter, Lorraine, and my
wife for serving as the alternate judges in scoring the
responses for these experiments.
TABLE OF CONTENTS

ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES

Chapter
I. INTRODUCTION AND BACKGROUND
     Modern Psycholinguistics
II. MODELS OF LANGUAGE BEHAVIOR
     Generative Grammars
     Transformational Grammars
     Transformational Depth Model
     Surface Structure Models
     Adjectival Modifiers
     Predictions for Experiment I
     A More Rigorous Test of Embedding Operations
     Predictions for Experiment II
III. EXPERIMENT I
     Design and Procedure
IV. RESULTS OF EXPERIMENT I
     Scoring Method
     Analysis of the Data
V. DISCUSSION OF EXPERIMENT I
     Extension of the Transformational Depth Model
     Refutation of d̄
     Interval Scale Properties
     Summary of Results
VI. EXPERIMENT II
     Review of the Problem
     Design and Procedure
VII. RESULTS OF EXPERIMENT II
     Scoring Procedure
     Data Analysis
VIII. DISCUSSION OF EXPERIMENT II
     Confirmation of the Hypothesis
     Interval Scale Properties
     Sentence Reversibility
     Overload and Chunking Hypothesis
IX. CONCLUSIONS AND RECOMMENDATIONS
     Extension of the Transformational Depth Model
     Chunking Hypothesis
     Disjoint Sentence Hypothesis
X. SUMMARY
REFERENCES
APPENDIXES
A. STIMULUS SENTENCE SETS
B. BASIC DATA
C. INSTRUCTIONS TO SUBJECTS
LIST OF TABLES

1. The Neg, P, and Q Sentence Family
2. Mean Number of Words Recalled after Each Sentence Type
3. Analysis of Variance: Experiment I
4. Individual Comparisons of Treatment Means for Experiment I
5. Means of the Differences Between Singularly Transformations as Independent Estimates of Sentence Complexity
6. Mean Number of Words Recalled after Various Sentences and Sentence Pairs
7. Analysis of Variance for Mean Recall after Pairs of Sentences of Varying Degrees of Relationship
8. Individual Comparisons of Treatment Means: Experiment II
9. Analysis of Variance of Recall for Reversible and Non-Reversible Sentences
LIST OF FIGURES

1. Tree Diagram or P-Marker of a Simple Sentence
2. Miller's Geometric Representation of the Transformational Relationships Between the Neg, P, and Q Sentences
3. Tree Diagram or P-Marker for a Kernel Sentence with Johnson's Operation Counts and Yngve's Depth Index
4. Tree Diagrams of Adjectivally Modified Sentences
5. Mean Words Recalled for Sentences of Several Transformational Depths
6. Mean Number of Words Recalled for Sentences of Different d̄
7. Calculated and Observed Differences in Number of Words Recalled Between K and Transformed Sentences
8. Mean Words Recalled for Sentences of Different Transformational Depths
9. P-Marker Illustrating the Identity of the Surface Structure of Kernel and Adjective Modified Sentences
CHAPTER I
INTRODUCTION AND BACKGROUND
A concern for the nature of human language has been
a preoccupation of the philosophers since the Golden Era of
Greece. Any meaningful concept of human thought must first
explain the mechanisms and processes of the symbolic system
that humans utilize in making their thoughts manifest
(Smith and Miller, 1966) . Psychologists have looked at the
problems of language usage since the early days of the sci­
ence. By the first decade of this century, the principal
areas of concern had been identified in the following rep­
resentative studies: Perception of objects (Cattell, 1886),
derivation of meaning (Jakobsen, 1911), comprehension of
sentences (Bagley, 1900), second language learning (Lukens,
1894), and acquisition of language by the infant (Dewey,
1894). Much of this early work in the psychology of lan­
guage was vitiated by the generally inadequate grammars
available. One probable reason for the inconclusive nature
of the early studies was the inability to successfully deal
with the interaction of the semantic and syntactic compo­
nents of language. The onset of Behaviorism with its
avoidance of such covert mental processes as thinking may
have hastened the decline of the language studies.
The convergence of a number of non-psychological
events in the past twenty years has triggered a new and
generally more fruitful interest in the psychology of lan­
guage, or psycholinguistics. These events were the rapid
advances in the technology of communications including
television and the communication satellites. Shannon's development of Information Theory encouraged attempts to treat language in a quantitative manner. Another factor was the creation of electronic computers of sufficient memory capacity to handle the complex variables language involves, coupled with the practical goal of achieving machine translation. A final factor was the development of new systems of descriptive grammars that were based upon the interaction of the semantic and syntactic components (Chomsky, 1957, 1965, 1966b; Hockett, 1967; Lamb, 1966).
Modern Psycholinguistics
The first significant event in modern psycholinguis­
tics was the appearance of the monograph: Psycholinguis­
tics , edited by Osgood and Sebeok (1954). In the same year
Mowrer (1954) devoted the presidential address of the Amer­
ican Psychological Association to an exposition on the psy-
:chology of language as higher order conditioning. During
■ the remainder of the fifties the psycholinguists attempted
to weave together models of language encoders and decoders
based upon various learning theories, extensions of Infor­
mation Theory and to some extent the work of descriptive
linguists such as Fries (19 52). By the sixties psycholo­
gists had learned of the revolution in linguistics created
by Chomsky's generative transformational grammar and^ began
investigating the implications of the theory.
Maclay (1964) has classified psycholinguistic re­
search into three basic types, in terms of the investiga­
tors' views as to the nature of language. The first type
views language as a collection of words. Generally such
investigators are more interested in learning processes
than in language processes. Artificial or natural language
items happen to be the stimuli they employ in their verbal
learning and paired-associates paradigms. The interest in
the verbal materials is usually restricted to identifica­
tion of strength of association or other parameters useful
for controlling the stimulus properties of the word. This
position is exemplified by Underwood (1965) and Jenkins
(1963) .
The second classification is reserved for those
studies that treat language as a set of discrete word
classes (Maclay, 1964). These studies typically look to
the data for frequency of utterance of the various gram­
matical classes, or derive the relationship between the
grammatical classes. Studies of this group are illustrated
4
I by Howe's studies of the nature of the relationship of j
i
J
adverbs, adjectives and verbs (1963, 1966). Also included j
!in this class are the studies of pauses or hesitations in
j
j natural speech situations like those of Goldman-Eisler j
| |
j (1964), and Ladefoged and Broadbent (1960) study of speech
■ segmentation.
Maclay's final category is reserved for those j
studies that conceive of language as a large set of clauses i
with rather complex interactions (Maclay, 1964). Wales and;
}
iMarshall (1966) suggest that this group should be further
:divided into the surface structure and deep structure stud- :
ies. They list Osgood (1963), Yngve (1960), and Johnson
(1965) as Phrase-Structure or surface structure studies and
models. The surface structure of a sentence is its pho­
netic or orthographic representation, what we see or hear
(Postal, 1964b). The term Phrase-Structure refers to the
generative grammar used to produce the surface structure of
the sentence. A simplified Phrase-Structure grammar is
presented below. Osgood (1963) has presented a model of a
language user that uses Phrase-Structure rules to produce
levels or hierarchies of symbols while employing a Markov
process of probabilistic generation of the items on any
given level.
The deep structure or transformational view of
grammar uses the Phrase-Structure rules found in the sur-
face structure models, the crucial difference being that
terminal strings of the Phrase-Structure grammar are re­
arranged in one sweeping revision of the string rather than
in steps of one symbol (Gleason, 1965). Thus the emphasis
is on the relationship between sentence types. The stud­
ies of Miller (1962) and his students (Mehler, 1963;
McMahon, 1963; Miller and McKean, 1964; Miller and Isard,
1963, 1964; and Marks and Miller, 1964) illustrate the con­
cern for the relationship of various sentence types.
The research reported in this paper was intended to
seek behavioral correlates to sentences exhibiting marked
differences between their surface structure and deep struc­
ture. Postal (1964b) and Halliday (1964) relate surface
structure to the phonetic representation of the sentence
while deep structure provides a description of the trans­
formational derivation of the sentence and provides the se­
mantic interpretation of the sentence. The simple subject-
verb-object, or kernel sentence (K), undergoes a marked
complication in the deep structure when an adjective modi­
fier is added (Thomas, 1965; Bach, 1964). Phrase-Structure
grammars on the other hand treat only the surface structure
and find the adjective modifier a simple item to process (Yngve, 1960; Johnson, 1966; Martin and Roberts, 1966). The purpose of the present research was to obtain behavioral estimates of the storage requirements in short-term memory (STM) for sentences of controlled length that vary in terms of adjectival modification or the number of singularly transformations in their derivation. Such data
should be relevant to the determination of the relation of
grammar to the cognitive structure.
CHAPTER II
MODELS OF LANGUAGE BEHAVIOR
A brief description of the new generative and
transformational grammars will be presented to insure that
the distinction between deep and surface structure is clear.
For a more detailed exposition of the grammars and their
psychological implications, the reader is referred to
Chomsky (1965), Miller and Chomsky (1963), and Katz and
Postal (1964). Simpler, but more derivative versions are
available in Thomas (1965) and Bach (1964) and a programmed
text by Roberts (1964) .
Generative Grammars
The objective of a modern grammar is to enumerate
and describe the sentences of a language relying on formal
(non-semantical) statements. A grammar accomplishes this
by the use of rules that can be applied in such a manner
that they generate (describe) all of the sentences of the
language and none of the non-sentences of the language.
The grammar is to contain a basic symbol, S, and rules
specifying that the symbol is to be rewritten in a certain
manner to produce a string of symbols. The rules are applied sequentially until the terminal vocabulary, the spoken words, are produced (Chomsky, 1957, 1965). It must
be stressed that the rules do not act upon words but the
strings of symbols that underlie the sentence. The pro­
cessing of a string or sentence is called the derivation of
the string, and is often represented by a tree diagram or
P-marker, see Figure 1. A derivation is an expansion of
the basic symbol by a sequence of rewritings. Below are a
set of unordered Phrase-Structure rules adapted from Miller
(1962). The three dashes, ---, are read as "is rewritten
as"; S is the basic symbol and initiates sentence encoding.
The lists of words separated by commas in the last few
rules indicate that the particular terminal vocabulary item
is optional.
Given: S
S --- NP + VP
NP --- D + N, Bill, John, ...
VP --- V + NP
D --- the, a, ...
N --- boy, girl, ball, ...
V --- hit, struck, ...
With these six simple rules and limited vocabulary
it is possible to form sentences such as: John hit the
girl, The boy struck the girl. In the first sentence John
is derived from the symbol NP while the girl is derived
from the symbol VP. These relationships are most readily
seen in the linguists' tree or P-marker illustrated in
Figure 1. The terminal vocabulary items are included to
aid in the identification. The rules have been applied
sequentially, with the left most symbol being expanded
first within any given expression.
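A concrete sense of how such rules operate can be conveyed programmatically. The short Python fragment below is only an illustrative sketch of the six rules listed above (it is not drawn from Miller or Chomsky); it expands the leftmost nonterminal at each step until only terminal vocabulary items remain.

```python
import random

# The six Phrase-Structure rules adapted from Miller (1962); items in a list
# of expansions are the optional choices for that symbol.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["D", "N"], ["John"], ["Bill"]],
    "VP": [["V", "NP"]],
    "D":  [["the"], ["a"]],
    "N":  [["boy"], ["girl"], ["ball"]],
    "V":  [["hit"], ["struck"]],
}

def derive(symbols=("S",)):
    """Expand the leftmost nonterminal until only terminal vocabulary remains."""
    string = list(symbols)
    while True:
        for i, sym in enumerate(string):
            if sym in RULES:                      # leftmost nonterminal found
                expansion = random.choice(RULES[sym])
                string[i:i + 1] = expansion       # rewrite it in place
                break
        else:
            return " ".join(string)               # no nonterminals left

print(derive())   # e.g. "the boy struck the girl" or "John hit a ball"
```

Repeated runs produce only strings of the small language the rules define, such as The boy struck the girl or John hit a ball, and never a non-sentence of that language.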
Fig. 1.— Tree diagram or P-marker of a simple sentence.
The P-S rules can be applied to only one symbol at
a time. Further, a symbol must be rewritten in that manner
whenever it occurs. A convenience that has been adopted to
reduce the number of rules is to specify that certain sym­
bols may be rewritten as nulls. This is indicated by en­
closing the symbol in parentheses. Thus the sample grammar
above could be simplified to handle both versions of NP
with the rule: NP --- (D) + N. P-S rules, though affording a more parsimonious approach, cannot treat the symbols differentially in differing contexts (Postal, 1964a).
Transformational Grammars
The transformational rules suggested by Chomsky
(1957) permit rewritings of an entire string in one opera­
tion; in addition, the rules are context-sensitive, that
is, they may be applied only to strings of a certain speci­
fied structural description. The Question transformation
rule specifies that strings having the structure NP+Aux+V
+NP can be rewritten as Aux+NP+V+NP. A transformation
which specifies how a single sentence is to be rewritten is
called a singularly transformation. Another and more in­
teresting set of transformations rewrites several strings
into a single sentence. There are transformations which
rewrite the strings underlying The boy hit the girl and
The girl is fat, into the single string underlying The boy
hit the fat girl. This class of generalizing or combina­
tory transformations is especially interesting to the study
of language processing, since so many of the sentences of
everyday communication involve combinatory sentences. In
addition to the singularly and generalizing transformations
there are lower level transformations which provide for
agreement of subject and verb as to number and tense of
verbs. Such morphological transformations are beyond the
syntactic levels and as such are not relevant to the prob­
lems considered here.
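The structure-sensitive character of such a rule can be made explicit in a small sketch. The fragment below is an illustration only (it is not Chomsky's formal statement, and the labeled-constituent representation is assumed purely for the example); it applies the Question transformation just when a string has been analyzed as NP+Aux+V+NP.

```python
def question_transform(constituents):
    """Sketch of the singularly Question transformation:
    a string analyzed as NP + Aux + V + NP is rewritten as Aux + NP + V + NP."""
    labels = [label for label, words in constituents]
    if labels != ["NP", "Aux", "V", "NP"]:
        return constituents          # structural description not met; rule does not apply
    np1, aux, v, np2 = constituents
    return [aux, np1, v, np2]

# "The bear does eat the honey" -> "does the bear eat the honey"
sentence = [("NP", "the bear"), ("Aux", "does"), ("V", "eat"), ("NP", "the honey")]
print(" ".join(words for label, words in question_transform(sentence)))
```

The same rewriting applied to a string that does not meet the structural description leaves it untouched, which is what distinguishes a context-sensitive transformation from the context-free P-S rules above.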
It must be emphasized that the grammars described above are not intended to be models of a language user, but may provide useful insights into the nature of what it is that a language user must know (Katz, 1964; Chomsky,
1966a). Chomsky has expressed the opinion that a grammar
which ignores the user is an empty exercise (Chomsky,
1964).
Transformational Depth Model
Lees (1964) has detailed a linguist's version of
the processes that must occur within the listener before a
sentence can be comprehended. In most respects Lees has
paralleled the model of a language user described by Miller
(1962). Miller, who appears to be the psychological spokesman for Chomsky, has presented what may be the sanctioned transformational model of a language user (Miller, 1962). The model might be classed as an analysis-by-synthesis
model, in that it supposes that the user develops for each
sentence processed, the base or deep structure plus a set
of operational or transformational tags to indicate the
processing the base structure underwent to derive the ter­
minal string. Since a given base can be subjected to vari­
ous transformations, it can be used to derive a series of
related sentences. The simplest representation of the base
is the simple active declarative sentence which Miller
labels the Kernel or K. Table 1 presents the family of
Passive, P, Question, Q, and Negative, Neg, sentences
related to the K sentence: The bear eats the honey.
Miller and his colleagues have presented evidence that sen­
tence matching latencies are related to the transforma­
tional history of a sentence, that is the number of trans­
formations required to derive the surface structure from
the base or deep structure (Miller, 1962).
The transformational depth hypothesis can most readily be demonstrated in Miller's geometric model illustrated in Figure 2. The surface structure K is set at zero
depth of transformation. The P, Q, and Neg transformations
are one transformation removed from K and so are said to
be at a depth of one transformation. Several transforma­
tion rules may be applied sequentially to the same base.
The PQ, NegQ, and NegP sentences in Table 1 are the prod­
ucts of such sequences of two transformations. Note that
in the cube these transformations are separated from K by
two legs. The triple transformed sentence NegPQ lies at a
depth of three transformations. Transformational process­
ing seems to be step-wise without any particular order;
thus, NegP might be processed in either order PNeg or NegP.
The model in Figure 2 does not treat the common transforma­
tions of the Emphatic, E, and the Imperative, I, (Thomas,
1965) but could be used to deal with most actual sentences
by substituting I or E for one of the surfaces to diagram
the relationships.
Fig. 2.— Miller's geometric representation of the transformational relationships between the Neg, P, and Q sentences. (The sentence types K, Neg, P, Q, NegP, NegQ, PQ, and NegPQ occupy the corners of a cube, with K and NegPQ at opposite corners.)
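The geometry of the cube amounts to counting transformation tags: each vertex is the kernel base plus a subset of {Neg, P, Q}, and its transformational depth is simply the size of that subset. The following lines are a minimal illustrative sketch of that bookkeeping, not a psychological model:

```python
from itertools import combinations

# Each sentence in the cube is the kernel base plus a set of transformation
# tags; transformational depth is the number of tags in the set.
TRANSFORMS = ("Neg", "P", "Q")

family = {frozenset(tags): len(tags)
          for r in range(len(TRANSFORMS) + 1)
          for tags in combinations(TRANSFORMS, r)}

for tags, depth in sorted(family.items(), key=lambda kv: kv[1]):
    label = "".join(sorted(tags)) or "K"
    print(f"{label:7s} depth {depth}")   # K at 0; Neg, P, Q at 1; ... NegPQ at 3
```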
TABLE 1
THE Neg, P, AND Q SENTENCE FAMILY

Sentence                               Construction                  Symbol
The bear eats the honey.               Kernel                        K
The honey was eaten by the bear.       Passive                       P
The bear doesn't eat the honey.        Negative                      Neg
Does the bear eat the honey?           Question                      Q
The honey isn't eaten by the bear.     Negative-Passive              NegP
Is the honey eaten by the bear?        Passive-Question              PQ
Doesn't the bear eat the honey?        Negative-Question             NegQ
Isn't the honey eaten by the bear?     Negative-Passive-Question     NegPQ
Miller and his co-workers at Harvard have tested the transformational depth hypothesis quite extensively over the Neg, P, Q family of sentences for the case of the singularly transformations up to a depth of three for the NegPQ transformation. Miller's first study (Miller, 1962) employed a sentence matching task, where S had to find sentences related to a target sentence by a specific transformational history. The latencies for matching were related to the transformational depth of the sentences as estimated by the cube model. Sentences involving one transformation took longer to discover than did K sentences, and sentences with two transformations required more time than the one transformation Neg, P, and Q sentences. In addition, the latencies of the two-transformation sentences were predictable by summing the latencies for the two components. The NegP latency could be approximated by the sum of the latency of K plus the K - Neg and the K - P differences. Miller and McKean (1964) in a refined version replicated the findings. McMahon (1963) indicated that when S evaluates the truth of active and passive sentences, the passive sentences generated longer latencies. Mehler (1963) demonstrated that learning difficulty could be related to the transformational depth of a sentence. A "triggered" recall task by Blumenthal (1966) demonstrated superior recall of actives over passives regardless of the word class used to trigger the recall. Tannenbaum, Percy, and Evans (1964) asked S to generate K, P, N, and NegP sentences from strings V + N + N. The latencies were ordered in the fashion predicted by the Transformational Depth theory. Slobin (1966) had children evaluate the truth of K, P, Neg, and NegP and found the ordering predicted for the K and one transformation sentences, but the negative transformation created greater latencies than the more complex NegP. In a rather different task Coleman (1964) had Ss read text either of active K type sentences or the same material in passive voice with nominalizations of verbs and numerous adjectives. Comprehension was significantly better for the active forms. Gough (1965, 1966) found the latencies to evaluate and verify K and transformed sentences supported the Miller model. This still obtained when a three second delay was introduced to allow processing to be completed.

The Miller model was extended to include the Emphatic, E, transformation by Savin and Perchonock (1966). They found that the various transformations created differential loading of the short-term memory. The E transformation was discovered to be the most difficult of the single transformations.
A different kind of support is derived from the
study by Krossner (1967) utilizing the grammar and symbols
of the game WFF'N PROOF to demonstrate that self-embedding
increases the sentence complexity. This reinforces the
1964 findings of Miller and Isard (Miller and Isard, 1964).
Not all investigators have been able to make the
cube work. Clark (1965) in a sentence completion task
found evidence that different sets of actors and objects
were offered for passives and actives. Blumenthal (1966)
found that her subjects violated the base plus transforma­
tion tag schema when they process multiply self-embedded
sentences. They produced a matrix sentence plus several
semi-sentences. This could as easily be attributed to ex­
ceeding what Herdan (1964) calls the "grammar load" of a
language, or Miller's magic number (Miller, 1956). Savin
(in press) has demonstrated that the base plus T-Marker
explanation holds for the single self-embedded sentences.
Martin and Roberts (1966, 1967) have alleged that the Mil­
ler results were an artifact arising from differing length
of the sentences. However, their sentences are of mixed
types including both singularly and combinatory transforma­
tions. Their K contains an embedded adjective and an em­
bedded adverb.
The studies by Miller and Isard (1964) and Miller
and McKean (1964) failed to demonstrate the expected in­
crease in complexity that accompanied expanding the verb
structure from John hit to John would have been hit. In
fact, both Miller and Isard (1964) and Marks and Miller (1964) found that a considerable proportion of all recall errors involved reporting a verb structure of greater complexity than the presented sentence contained. Fodor and Garrett (1966) have suggested that the base plus T-marker
holds only for the simplest transformations of the Neg, P, Q family and cite the learning of several types of passives in which the truncated versions are more readily learned than is the complete sentence, though the truncated version is derived from the full sentence and has a depth of one more transformation. Truncation refers to the re-
placement of the agent by a pronoun. The truncation of the
passive, John was hit by the ball, would be, John was hit
by it. The limitations of the Transformational Depth Model
listed above, plus the development of several surface
structure models of language processors led to the develop­
ment of two psychological models which treat the non-Kernel
sentences of Table 1 rather differently than does the
Miller model.
Surface Structure Models
The two surface structure models to be discussed
are based upon a sentence generating device proposed by
Yngve (1960). Martin and Roberts (1966) have adapted the
Yngve depth hypothesis to create a model of a language user
which deals solely with the surface structure of the
sentence. Johnson (1965, 1966) also uses Yngve's generator
as the basis of his error prediction model. Details of the
two models are presented below.
Yngve feels that sentence production is from top
to bottom of the P-marker or tree. The base strings are
developed and then the lexical items are selected. Follow­
ing the sequence on the P-marker in Figure 3 may clarify
the process. "S" is the start signal for the generator.
The rewriting rules, presented above, are stored in perma­
nent memory (PM). The PM is searched for the rule for re­
writing S. The expansion reads: S --- NP + VP. The symbol NP is transferred to the processor unit while the symbol VP is placed in STM. The expansion of NP is located in the PM collection of rewrite rules. The expansion NP --- D + N is transferred as before, the left-most symbol going to the
processor and the right symbol being placed in STM. When N
enters the STM it is placed to the left of VP. The re­
writing rule for D is next processed and the terminal vo­
cabulary item "the" is selected and transmitted to the out­
put device. When the output "the" is produced the STM con­
tains two symbols, N and VP. This is a structural depth of
two. The next item in STM, the N, now enters the processor
and is treated in the same manner. When the terminal vo­
cabulary item boy is produced the STM contains the symbol
VP so that boy is at a depth of one. Yngve uses the small
italic d to indicate the depth of a word. The final word
of any sentence always has a d of zero. In Figure 3 the
d for each word is listed. Yngve also notes that the d can
be calculated by assigning ones to all left branches of a
tree and zero to all right branches. Depth of an item is the sum of the branches traversed in going from the word to S. The largest d is designated D and is considered an in­
dex of sentence complexity.
Fig. 3.— Tree diagram or P-marker for a kernel sentence with Johnson's operation counts and Yngve's depth index. (The sentence diagrammed is "The bear eats the honey"; the Yngve depths of its words are 2, 1, 1, 1, and 0.)
Martin and Roberts (1966) refined Yngve's D index of complexity by computing the mean of the ds to take into account the influence of sentence length. The mean d, or d̄, for the sentence in Figure 3 is 1.00. Martin relates difficulty of learning to the d̄ of the sentence. The greater the mean depth the greater the difficulty.
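The depth computation can be written out concretely. The Python fragment below is an illustrative sketch (not taken from Yngve or from Martin and Roberts); it encodes the P-marker of Figure 3 as nested pairs, assigns each terminal word its d as the number of left branches on its path from S, and then takes the mean.

```python
def word_depths(node, left_branches=0):
    """Return (word, d) pairs for the terminal words of a binary P-marker,
    where d is the number of left branches on the path from S to the word."""
    if isinstance(node, str):                        # terminal vocabulary item
        return [(node, left_branches)]
    left, right = node
    return (word_depths(left, left_branches + 1)     # a left branch adds one
            + word_depths(right, left_branches))     # a right branch adds nothing

# The P-marker of Figure 3, "The bear eats the honey":
# S -> (NP, VP), NP -> (D, N), VP -> (V, NP).
tree = (("The", "bear"), ("eats", ("the", "honey")))

depths = word_depths(tree)      # [('The', 2), ('bear', 1), ('eats', 1), ('the', 1), ('honey', 0)]
D = max(d for _, d in depths)   # Yngve's index of complexity: 2
d_bar = sum(d for _, d in depths) / len(depths)   # Martin and Roberts' mean depth: 1.00
print(depths, D, d_bar)
```

The values reproduce those listed in Figure 3, with the final word of the sentence at a d of zero.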
Johnson (1965, 1966) has utilized Yngve's model to
predict errors in learning the word to word associations
within sentences. Instead of the depth index, Johnson
counts the operations involved in deriving the lexical
items and gauges complexity by the number of operations in­
volved. The sentence nodes in Figure 3 are numbered in the
sequence that they are processed. The count of operations
for producing each word is listed on the line labeled
Johnson Count. In the case of sentences having the same
overall count, Johnson assigns the greater difficulty to
the sentence having the larger difference in count between
adjacent vocabulary items. These two models are differen­
tially sensitive, in that left branching sentences increase
difficulty for both models. Right branching adds nothing
to the Martin and Roberts mean depth, but does add opera­
tion counts to Johnson's model.
Both Johnson and Martin indicate that sentence
difficulty can be assessed by attending to the surface
structure of the sentence. The Miller transformational
depth model on the other hand suggests that the deep struc­
ture of the string determines the complexity of the sentence.
This suggests that several interesting comparisons can be
made between the predictions that each model will make for sentences having identical surface structure but different deep structures. A sentence can be constructed with the surface structure of the sentence in Figure 3, but with a transformational depth of two or four by the simple stratagem of using an adjective modifier for D in either or both of the NP structures. For example, Big bears eat the honey, and The bears eat wild honey are each two-transformation sentences, while Big bears eat wild honey is a four-transformation sentence.
Adjectival Modifiers
The case of the adjective modifier is one of the few combinatory or generalizing transformations that has been worked out in detail by the linguists. Smith (1961) has provided most of the processes, though Fillmore (1963) has suggested a variation in the sequencing of operations. In the grossest simplification the adjective modifier is a representation of an embedded sentence. The adjective modifier tame, in the sentence, The tame bear eats meat, is the representation of the sentence, The bear is tame. This sentence is embedded in the matrix sentence, The bear eats meat.

The stages in deriving an adjectivally modified sentence are

1. derive the matrix sentence NP+V+NP
2. derive the constituent sentence NP+be+Adj
3. embed the constituent in the matrix
4. reduce the constituent to Adj by deleting NP+be

One restriction exists: NP of the matrix must be identical with NP of the constituent sentence (Smith, 1961).
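These four stages can be mimicked in a few lines of code. The sketch below is informal; it is not Smith's or Fillmore's formal statement, and the splitting of an NP into determiner and noun by whitespace is an assumption made only for the illustration.

```python
def embed_adjective(matrix, constituent):
    """Generalizing (adjective-embedding) transformation, sketched informally.

    matrix:      (NP, V, NP)        e.g. ("the bear", "eats", "meat")
    constituent: (NP, "is", Adj)    e.g. ("the bear", "is", "tame")
    The constituent's NP must be identical with an NP of the matrix (Smith, 1961).
    """
    np_c, be, adj = constituent
    subj, verb, obj = matrix
    if np_c not in (subj, obj):
        raise ValueError("constituent NP must match an NP of the matrix")

    # Steps 3 and 4: embed the constituent, then reduce it to Adj by deleting NP + be.
    def modify(np):
        det, noun = np.split(" ", 1) if " " in np else ("", np)
        return f"{det} {adj} {noun}".strip() if np == np_c else np

    return (modify(subj), verb, modify(obj))

print(" ".join(embed_adjective(("the bear", "eats", "meat"),
                               ("the bear", "is", "tame"))))
# -> "the tame bear eats meat"
```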
In terms of the transformation depth model this
is a depth of two transformations for each adjective modi­
fier. A singly modified sentence has a transformational
depth of two and the doubly modified sentence would lie at a depth of four transformations, markedly different in terms of difficulty.
The adjectivally modified sentence can be drastically altered in surface structure by utilizing the optional feature of D, in the rule NP --- (D) + N, to delete the determiner from one NP and allow it to appear for the next, e.g., Bears eat the sweet honey, or The big bears eat honey. These two sentences differ in d̄ (Martin and Roberts, 1966) by 0.40. The trees for these sentences are presented in Figure 4. The other possible versions of the sentence all have a tree identical to that in Figure 3, with a mean depth of 1.00, or the same as the K sentence in the figure. If the Adjectival, A, sentences were so formed as to have a d̄ of 1.00, Martin would predict that K and A sentences would be of equal difficulty. However when the optional
Fig. 4.— Tree diagrams of adjectivally modified sentences. (First panel: object modified sentence; second panel: subject modified sentence.)
determiner rule is applied to delete the determiner from the unmodified NP structure, the mean depth for the subject modified sentence (Asub) is 1.20 for the sentence above. Mean depth for the object modified sentence (Aob) is 0.80. If we utilize the deletion option in the modified NP the d̄ is 1.00. Thus a comparison of A sentences with K sentences in the Martin model varies with the exercise of the deletion option. When D is deleted from the modified NP, Martin finds them equivalent to K sentences in difficulty. If the Asub version and the Aob form are compared to K, K is judged to be more difficult than Aob, but less complex than Asub. However Martin has since indicated that d̄ is
related to difficulty of learning by a U-shaped function
with the optimum depth being 1.25 (Personal communication).
This would fit the results obtained by Mehler and Carey
(1968) in which sentences with a d̄ of 1.00 had shorter verification latencies than did sentences with a d̄ of 0.75.
However this presents an entirely new set of predictions:
K again remains intermediate, but Asub now is the easiest
and Aob the most difficult in terms of their deviation from
1.25. Johnson's operation count is thirteen for each of
the sentences, thus he would predict equal performance.
Miller on the other hand finds K at zero depth and both A
sentences at a depth of two. Thus K is easier than A sen­
tences, which are equally difficult.
The A sentence may also be compared with the sentences of the Neg, P, Q family that lie at a depth of two transformations. That is, the difficulty of the A sentence should fall somewhere within the range of the NegP, PQ, and NegQ sentences, being easier than the NegPQ sentences, but more difficult than the Neg, P, and Q transformations.
Since the Yngve model (Yngve, 1960) is the source
of both Martin and Roberts and Johnson's models, and pur­
ports to estimate sentence complexity by the load placed
on short-term memory, it was decided to utilize a short­
term memory task to permit maximum sensitivity for the
Martin and Johnson models. Savin and Perchonock (1965)
have presented an ingenious paradigm for estimating short­
term memory, STM, loads. Starting with Miller's concept of a finite STM (Miller, 1956), Savin employs an Archimedean principle of measurement by displacement. The procedure is to load the STM with the sentence of interest and then obtain an estimate of the load created by observing how many additional unrelated words (filler words) from a categorized list can be recalled in addition to the sentence. Sentences with simple surface structure and low transformational depth have fewer items to store. This lower demand for STM storage space than would be required by a complex sentence should result in more filler words recalled for the simpler sentence. In other words, the measure is an index of the amount of STM not used to store the sentence.
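Computationally the index reduces to a mean filler-word count per sentence type over trials on which the sentence itself was correctly recalled. The fragment below sketches that computation; the trial records in it are invented solely for illustration and are not data from this study or from Savin and Perchonock.

```python
from collections import defaultdict

# Each trial records the sentence type presented, whether the sentence itself
# was correctly recalled, and how many filler words were recalled after it.
# These values are hypothetical, used only to illustrate the computation.
trials = [
    {"type": "K",     "sentence_correct": True,  "filler_recalled": 6},
    {"type": "K",     "sentence_correct": True,  "filler_recalled": 5},
    {"type": "Asub",  "sentence_correct": True,  "filler_recalled": 4},
    {"type": "Asub",  "sentence_correct": False, "filler_recalled": 3},  # dropped
    {"type": "NegPQ", "sentence_correct": True,  "filler_recalled": 4},
]

def mean_filler_recall(trials):
    """Displacement-style index: mean filler words recalled per sentence type,
    counting only trials on which the sentence itself was correctly recalled."""
    totals, counts = defaultdict(int), defaultdict(int)
    for t in trials:
        if t["sentence_correct"]:
            totals[t["type"]] += t["filler_recalled"]
            counts[t["type"]] += 1
    return {typ: totals[typ] / counts[typ] for typ in totals}

print(mean_filler_recall(trials))   # higher means imply lower STM load
```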
The Neg, P, Q singularly transformation family has
been investigated by Savin and Perchonock (1965) and strong
support for the transformational depth model was obtained,
with some support for the Katz and Postal (1964) suggestion
that the NegQ is really at a depth of one transformation,
and NegPQ is at a depth of two. In addition they explored
the Emphatic transformation and found it to be extremely
difficult.
It was decided to replicate the Savin and Perchonock
study using the K, P, Neg, PQ, NegP, and NegPQ singularly
transformed sentences (see Table 1) in addition to the
Asub and the Aob sentences described above, to test the
hypothesis that the transformational depth model could be
extended to the case of the adjective modifier. The inclu­
sion of the two variations of adjectival modifications per­
mitted a test of Martin's d as an index of complexity, as
well as testing the existence of the U-shaped function he
described. The operation count of Johnson was also tested
against the predictions of the transformation depth model.
Predictions for Experiment I
The specific predictions that will be tested under
the various models are:
Miller
1. Asub = Aob
2. K > Asub and Aob
3. Asub and Aob = NegP and PQ
4. Neg and P > Asub and Aob
5. Asub and Aob > NegPQ

Martin
Monotonic
1. Aob > K > Asub
U-shaped
1. Asub > K > Aob

Johnson
1. K = Aob = Asub
A More Rigorous Test of Embedding Operations
Confirmation of the above predictions will provide
support for the transformational depth model. However a
more convincing demonstration that the processes described
by Smith (1961) and Thomas (1965) really are utilized can
be presented. The previous discussion of the embedding
operations has indicated that S utilizes the bases of two
sentences, the matrix and the constituent plus the opera­
tional tags to create or understand sentences with an ad­
jectival modifier. Upon hearing the sentence, S stores
representations of the matrix string (D)+Ns+V+(D)+No, where
Ns is the subject and No the object; and the constituent
string: D+Ns+be+Adj, plus the operator tags to embed Adj.
Note that in the sentence, Asub, of Figure 4, the surface string reads: Adj+Ns+V+D+No. In Asub, the Adj is located in the higher order structure NP, while in the constituent Adj falls in the VP structure.

The left to right or top to bottom processing of models like Martin's and Johnson's would indicate real differences in the two uses of Adj. It should also be noted that the terminal string lodged in STM is now four symbols longer in the transformational version. Martin's findings (1966, 1967) suggest that length is a significant complication. Johnson's operation count would indicate six more operations for deriving the constituent string, again a significant complication.

From the transformational interpretation of the adjective modifier, S holds the two representations in STM when a modified sentence is heard. If the modified sentence were to be immediately followed by the constituent sentence related to the modifying adjective, this would be redundant information in the case that transformational processing did actually occur. In that instance the increment in load on the STM would not be markedly increased. Comparisons of the STM load of the A sentences, and A sentences with a trailing true constituent (Ct) sentence, would indicate the nature of the processing. Any significant increase in STM load for the A+Ct pairs is evidence
that a surface structure processing is utilized such as
suggested by Martin (1966) and Johnson (1966). Surface
structure processing could be further substantiated if A
sentences were paired with false constituents (Cf), and the
STM loads of A+Ct equalled the load of A+Cf. The Cf sentences would be constructed using Adj of A and the unmodified noun of A. Control for familiarity effects would be established by presenting A paired with constituent (C) type sentences that shared no lexical items with the A sentence. For Experiment II the following symbols will be used for the constituent sentence types. For the object modified sentence of Figure 4:

     The bear eats the sweet honey.
     Ct = The honey is sweet.
     Cf = The bear is sweet.
     Cn = The dog is lazy.

Returning to the representations in memory still another time, the matrix string underlies a K sentence. It might be possible to approximate the STM load of the A sentence by presenting K paired with a sentence of C type that utilized the noun of NP in K (Thomas, 1965).

The results of Savin and Perchonock (1965) suggest that the increments in load related to the various singularly transformations have interval scale properties. The independence of the various transformational operations was demonstrated by the process of subtracting out the storage costs, in words, for each of the three basic transformations: Neg, P, and Q. For example, Neg is estimated by the differences of K - Neg, P - NegP, and PQ - NegPQ. If Neg displays interval scale properties, the three differences will all be equal. The adjectival embedding transformations might be expected to behave in the same manner. The difference in STM load created by a sentence with double modification should be twice as large as the increment in load caused by the singly modified sentence. Assuming K to be the zero point, we can predict a monotonic relationship between memory load and the number of adjectival modifiers.
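The additivity implied by interval scale properties can be checked directly. The fragment below uses hypothetical mean-recall values (not the observed data, which are taken up in the results chapters) to show the three independent estimates of the Neg increment:

```python
# Hypothetical mean filler-word recall per sentence type (not the observed data).
recall = {"K": 5.5, "Neg": 4.8, "P": 4.9, "NegP": 4.2, "PQ": 4.6, "NegPQ": 3.9}

# Three independent estimates of the storage cost of the Neg transformation;
# interval scale properties imply the three differences should be (roughly) equal.
neg_estimates = [recall["K"] - recall["Neg"],
                 recall["P"] - recall["NegP"],
                 recall["PQ"] - recall["NegPQ"]]
print(neg_estimates)   # e.g. all near 0.7 under perfect additivity
```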
j Predictions for Experiment II
Several specific predictions as to the number of
words recalled can be made for the A sentences and A plus
C sentence pairs:
1. A+Ct = A
2. A+Ct > A+Cf
3. K+Cf = A
4. A+Cf > A+Cn
5. K - AA = 2 (K - A)
6. K > A > AA

Practical considerations of subject availability
and drastic performance changes over longer sessions led
to a decision to conduct two separate experiments. This
also permitted replication of the comparison of the K and
A sentences. Study I sought to determine the increments in
STM load attributable to the adjective modification of a K
sentence by comparing recall to the recall for the K, and
the Neg, P, Q family. Study II examined the relationship
of the K sentences that act as components of the A sen­
tences and related these to the models of language users
presented above.
CHAPTER III
EXPERIMENT I
Design and Procedures
Design. The independent variables introduced are the syntactic structures designated K, P, Neg, PQ, NegP, NegPQ, and A (see Table 1), representing respectively transformational depths of 0, 1, 1, 2, 2, 3, and 2. A second independent variable was the location of the adjective modifier in either the object or subject position. The five word length was intended as a control variable (Martin and Roberts, 1966, 1967). The dependent variable was the number of words from the filler list recalled in addition to the correct recall of the sentence.

Preliminary investigations had indicated that between subject variance accounted for nearly half of the total variance. Therefore a within subject design seemed to offer the greatest sensitivity to the treatment effects.

Materials. A set of fifty subject-verb-object K sentences of five word length was formed. Twenty-four sentences were selected from this pool with a table of random numbers (Edwards, 1960). These sentences served as the
base for generating four sets of stimuli sentences. A set containing three each of the K, Neg, P, NegP, PQ, NegPQ, Asub, and Aob symbols was formed. The twenty-four symbols were also randomly arranged with the restriction that not more than two symbols of the same type might be adjacent. The symbol in the first position was paired with the first K sentence. The sentence was transformed to create a sentence of the form indicated by the symbol. All pairs were treated in the same manner to form Order I (see Appendix A). The process was repeated three times to create the remaining orders. The use of the same set of K sentences for all orders was intended to reduce effects of lexical novelty (Cohen, 1966). The word lists to be appended to the stimuli sentences were derived from Savin and Perchonock (1965). A set of 60 strings used by Savin was systematically reordered to generate four sets of 24 strings. The items were generated by taking item one of string one and item two of string two, ... item eight of string eight. Then the process cycled back to item one of string two, etc. When the downward sweeps had been exhausted the procedure was reversed and proceeded from bottom to top. One change in the Savin list was made to reduce phonetic confusion: the word lion was substituted for sheep, as Ss in preliminary studies seemed to confuse ship and sheep. Each string of filler words contained eight one-syllable words, one from each of the following categories: nature, animals, vehicles, units of time, furniture, weather, clothing, and color. See Appendix A for examples.
Subjects. Twenty subjects were recruited from the
subject pool of the introductory psychology classes at the
University of Southern California. The Ss participated
for class credits. Subjects were required to be native
speakers of English. By pure chance ten male and ten fe­
male subjects were recruited. The ages of the subjects
ranged from 17 to 21 with the modal age being 19.
Procedure. The Ss were run in small groups at
times convenient to the Ss. Assignment to a stimulus order
set was determined by order of signing up for the study.
All Ss were tested in groups of two or three, except for
group IV, where four showed up for the initial sitting and
a single subject was run at a later time.
When S entered the experimental room, he was pre­
sented with a pack of nine numbered IBM cards and requested
to supply the usual biographical data on the top card of
the pack. In addition, S was asked to rate his fluency in
English and to indicate his usual grade in English courses.
A dittoed explanation of the nature of the task was distributed to S (see Appendix C). E instructed the Ss by reading the material. The instructions outlined the logic of the measurement procedure. The S was told, "your task
is to remember the sentence correctly, plus as many of the
words as possible. The data will be useless if you do not
get the sentence correct. . . . Listen carefully to both
the sentence and the list."
Questions were answered by rereading the appropri­
ate portions of the sheet. If confusion still existed,
the sheet was paraphrased. After S indicated that the pro­
cedure to be followed was understood, five practice sen­
tences were presented. Ss were given unlimited recall
time. After S_ indicated that the recall was complete, E
read the practice sentence and list to S to provide instant
feedback. To encourage higher levels of performance, E
complimented the group upon their performance with remarks
like "Great! one trial learning!" "I don't think we will
need to do more than five practice sentences."

Each sentence was read aloud by E in a normal conversational tone. A pause of approximately a second separated the sentence and the filler list. The list was read in a monotone at a rate of about one word per second. The
last word was immediately followed by "go," this signaled
S to begin writing his recollection of the sentence and
string. It was stressed that the sentence must be recalled
before the lists were attempted. Pilot studies had indicated that recalling the filler words first greatly
increased sentence errors.
When all Ss had completed writing their responses,
E announced the next trial by saying "Sentence n is:______ .
Sentences 2, 4, 11, 15, and 20 were read back to S immedi­
ately after the recall session to maintain motivation and
to reinforce the notion that the sentence must be correctly
recalled. A two minute break was introduced after the
twelfth sentence, as our pilot study showed marked deterio­
ration in performance after 16 trials. The second half of
the set was presented in the same manner. After the trials
had been completed, S was asked to evaluate the task in
terms of difficulty and subjective interest. They also
were asked to indicate if they would sign up for a similar study. Any additional comments could be added to the questionnaire. The location and time at which information regarding the outcome of the study and their performance scores would be available were presented to the Ss. They were asked not to discuss the study with anyone until the results became available.
CHAPTER IV
RESULTS OF EXPERIMENT I
Scoring Method
The responses of the Ss were scored by an adaptation of the method used by Mehler (1963). Each sentence recalled was judged correctly recalled if it satisfied any of the following criteria: (a) was identical to the presented string, (b) varied from the presented material in verb tense, (c) substituted a definite article for an indefinite article or vice versa, (d) replaced a word with a synonym, or (e) added an article to sentences in which the determiner had been deleted by the optional determiner rule. For example, the presented sentence, Bears eat the sweet honey, might be recalled The bears eat the sweet honey.

The total number of correctly recalled filler words was computed for each correct sentence. The second appearance of a given filler word in the recall list was scored as an error. The sentences and word lists were scored by E and again by an alternate judge.1 The degree of agreement was high, with only five differences in the sentence scores and two differences in words recalled. The sentence differences were settled in favor of the alternate judge as a correction for possible bias on the part of E.

1My wife, Edith Van Kekerix.
The number of filler words correctly recalled after
the recall of the sentence was used to form the basic data
of Experiment I. The three filler word counts for each
treatment type were averaged to yield the basic data pre­
sented in Appendix B. Averaging was used to minimize the
learning effect that had been observed in a pilot study in
which sentence types were presented in blocks. The ran­
domization of the order plus the averaging should minimize the possibility that the learning effect favored any treatment condition. Another reason for averaging the scores was to minimize the effects of occasional poor performances due to attentional factors or problems arising from the method of presentation. The means in Appendix B which are starred, *, indicate that S failed to recall one of the sentences of that type. The average presented is for the words recalled for the two correct sentences. The responses of one S were deleted from the analysis on the basis of incompleteness. Nine sentences were incorrectly recalled, and no words were recalled for four of the correct sentences. Examination of the responses did not indicate any systematic relationship of errors to the syntactic complexity of the sentences or to the lexical items of the sentences. There was a tendency to incorporate items from the filler list into the sentences.
Analysis of the Data
The means for the treatments were computed and are
presented in Table 2. The observed means support the order
predicted by the Transformational Depth Theory, in that the
adjectivally modified sentences lead to recall superior to
the NegPQ sentences and inferior to the Neg and P sentences,
which are inferior to the K sentences. The mean recall of the various sentence types is plotted against their transformational depth in Figure 5. The mean number of words recalled for each sentence was also plotted against Martin and Roberts' d̄ in Figure 6. The plot fails to describe either the monotonic or "U" shaped curve the d̄ predicted.
TABLE 2
MEAN NUMBER OF WORDS RECALLED AFTER EACH SENTENCE TYPE

Sentence Type   Mean Recall     Sentence Type   Mean Recall
K               5.45            NegP            4.22
P               4.89            Asub            4.30
Neg             4.77            Aob             4.17
PQ              4.56            NegPQ           3.84
Fig. 5.— Mean words recalled after sentences of several transformational depths. (Mean number of words recalled plotted against transformational depth, 0 to 3.)

Fig. 6.— Mean number of words recalled after sentences of different d̄. (Mean number of words recalled plotted against Martin's mean depth, 0.80 to 1.40.)
Analysis of Variance
The basic data was entered into a List X Treatments
X Subjects analysis of variance for repeated measures
(Winer, 1962) . The results of that analysis are presented
in Table 3. The highly significant treatment sum of
squares was further partitioned into the individual compar­
isons shown in Table 4. The t values related to the vari­
ous comparisons were evaluated by Scheffé's test for multiple comparisons (Edwards, 1960). The critical value for t at the .05 level is 4.34. The Scheffé test is rather conservative, but makes no assumptions about the orthogonality of the various comparisons.
TABLE 3
ANALYSIS OF VARIANCE: EXPERIMENT I

Source                            SS      df      MS      F       p
Between Subjects                 84.75    18
  Orders                         25.42     3     8.473    2.145   n.s.
  Subjects within groups         59.33    15     3.955
Within Subjects                  74.85   133
  Sentences                      34.08     7     4.868   14.84    .01
  Order X Sentences               6.25    21      .298            n.s.
  Sentences X Subjects
    within groups                34.52   105      .328
TABLE 4
INDIVIDUAL COMPARISONS OF TREATMENT MEANS FOR EXPERIMENT I

Comparison                        t(a)     p(b)
K vs Asub & Aob                   7.20     .01
Asub & Aob vs PQ & NegP           1.17     n.s.
Asub vs Aob                                n.s.
Asub & Aob vs Neg & P             4.49     .05
Asub & Aob vs NegPQ               2.46     n.s.

(a) df = 105.
(b) Significance level evaluated by the Scheffé test.
Individual Comparisons
The observed t of 7.20 between the K and the adjec­
tivally modified sentences is significant at the .01 level.
This supports the prediction that recall after K sentences
will be superior to the A sentences. The difference be­
tween the sentences at a depth of one transformation and
the A sentences yielded a t of 4.49, significant at the .05
level. The difference between the A sentences and the
NegPQ sentence of depth three missed being significant at
the .05 level.
The lack of a significant difference between the
A sentences and PQ and NegP sentences supported the
rationale that placed the A sentences at a depth of two
transformations.
The lack of significant difference between the two
types of A sentences, t near zero, leads to rejection of
Martin and Roberts' d̄ as a useful index of sentential com­
plexity.
CHAPTER V
DISCUSSION OF EXPERIMENT I
Extension of the Transformational Depth Model
Probably the most important finding of the study
was the lack of significant differences between the A sen­
tences and the NegP and PQ sentences. This is interpreted
as indicating that the demands of the two generalizing
transformations on STM storage are roughly equivalent to
the demands created by two singularly transformations. If
this is the case, the Transformational Depth model can be
used to account for the combinatory sentences as well as
the sentences derived from single bases. Confirmation of such findings will greatly enhance the utility of the Transformational Depth model by extending the classes of sentences that it can handle. Several investigators have sought to test the application of the model to combinatory transformations other than the adjectival modifier. Blumenthal (1966) and Fodor and Garrett (1967) find that the Transformational Depth model's predictions are not supported for multiply self-embedded sentences. Savin (in press) indicates that the single self-embedded sentence does significantly reduce the number of words recalled in a recall task similar to the one used in this study.
The apparent contradiction of the two sets of findings may
be related to the levels of complexity of the sentences
used by Blumenthal and Fodor, as compared to the sentences
of Savin and the present study. The studies yielding non­
support of the Transformational Depth theory used consider­
ably more complex sentences than the studies which support
the theory. Savin uses sentences of the order of two
transformations, e.g., The man the woman saw left the of­
fice. The sentences of Fodor and Garrett range in depth
from six to ten. The sentences in the present study also
lie at depths of two transformations. This might be inter-
preted to indicate that Transformational Depth theory is
capable of explaining only the simplest sorts of language
usage, as Fodor and Garrett (1966) suggest. An alternative
is that suggested by Cohen (1966) and Stolz (1967), that the
more complex sentences are beyond the encoding ability of
many of the Ss.
Refutation of d
The highly significant differences between K and
the two types of A sentences indicate that Martin's d is
not a very useful index of sentence complexity. This sup-
ports the findings of Mehler and Carey (1968). The lack
of significant differences between the two types of A
sentences is even more punishing to mean depth as the sepa­
ration in mean depth is greater than between K and the A
sentences. The finding of significant differences between
K, Neg, and P, which all share a mean depth of 1.00, tends
to reinforce the rejection of mean depth as an explanation
of the observed results. This is in agreement with Martin
and Roberts' (1967) finding that d was not a significant
factor in a sentence learning task.
The significant differences obtained between the K
sentences and the various singularly transformations repli-
cate the findings of Miller (1962), Mehler (1963), Marks
and Miller (1964), Miller and McKean (1964), McMahon (1963)
and Savin and Perchonock (1965). This indicates that the
effect is reliable over several methods of investigation:
sentence matching, learning, and immediate recall. The con-
firmation over several different samples of subjects pro-
vides further evidence for the generality of the Transfor-
mational Depth theory as an index of language processing
difficulty.
Interval Scale Properties
Miller (1962) indicates that the singularly trans-
formations seem to be applied in a relatively independent
fashion. He also presents evidence that the latencies re-
quired to locate sentences involving sequential singularly
transformations can be estimated by summing the latencies
of the individual transformations involved. The latency of
going from K to NegP could be predicted by the sum of the
latencies for going from K to Neg and the latency of K to
P. The study of Miller and McKean (1964) replicated this
effect. Savin and Perchonock (1965) present further evi­
dence for this interval scale property of the singularly
transformation.
The differences between K and the various singu­
larly transformation means were analyzed to determine if
the interval scale properties existed in the present data.
The nature of the stimulus set precludes obtaining the di-
rect measurement of the difference K - Q. The value for Q
must be derived from the difference of P - PQ or the dif-
ference NegP - NegPQ. The differences between K and Neg
and K and P are directly obtainable. These differences are
presented in Table 5 along with the standard errors of the
means of the differences.
Given the value of K and the differences between K
and the Neg, P, and Q transformations it is possible to
predict the values of all singularly transformations of
depth two, i.e., NegP, PQ, and NegQ as well as NegPQ. In
the present case only the NegP prediction can be indepen-
dent of the other estimates of the parameters. The ob-
served difference between K and Neg is 0.68 words while the
difference K - P is 0.56 words. The independent estimate
of the difference, K - NegP, is 1.24 words, while the ob-
served difference in recall between K and NegP is 1.22
words. With the standard errors presented in Table 5 the
distributions intersect suggesting that the deviation from
the prediction is attributable to chance.
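The additive logic of this prediction can be set out explicitly. The following is a minimal illustrative sketch, not part of the original analysis; the difference scores are those reported in the text, in words of filler recall.

```python
# Illustrative sketch only: the additive (interval scale) prediction for NegP.
# The difference scores below are the observed values reported in the text.
k_minus_neg = 0.68            # observed K - Neg difference in words recalled
k_minus_p = 0.56              # observed K - P difference in words recalled

predicted_k_minus_negp = k_minus_neg + k_minus_p   # 1.24 words, the independent estimate
observed_k_minus_negp = 1.22                       # observed K - NegP difference

print(predicted_k_minus_negp, observed_k_minus_negp)
# The 0.02-word deviation lies well within the standard errors shown in Table 5.
```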
TABLE 5
MEANS OF THE DIFFERENCES BETWEEN SINGULARLY TRANSFORMATIONS
AS INDEPENDENT ESTIMATES OF SENTENCE COMPLEXITY
Sentence Type K - X Standard Error
Passive
K - P 0.56 0.171
Neg - NegP 0.55 0.151
mean 0.555
Negative
K - Neg 0.68 0.155
P - NegP 0.67 0.177
PQ - NegPQ 0.75 0.133
mean 0.70
Question
P - PQ 0.36 0.158
NegP - NegPQ 0.42 0.133
mean 0.39
Although the estimates of the differences of K - Q
are not independent of the observations being predicted, a
value for the difference was obtained by averaging the dif-
ferences P - PQ and NegP - NegPQ. This value was utilized
to predict the differences of K - PQ and K - NegPQ. The
average of the Q differences proved to be 0.39 words. The
difference of K - PQ was estimated to be 0.56 plus 0.39
words or 0.95 words. The actual difference observed was
0.86 words. The difference of K - NegPQ can be approxi­
mated by the sum of the differences K - Neg, K - Q, and
K - P. The sum of the values 0.68, 0.39, and 0.56 equals
1.63 words. The observed difference is 1.61 words. The
observed differences between K and the various singularly
transformations have been plotted against the approxima-
tions computed with the estimated parameters described
above. See Figure 7. The curve is fitted by eye to illus-
trate the interval properties suggested by the successful
prediction based upon the independent estimates of the
parameters Neg and P.
[Fig. 7.—Calculated and observed differences in number of words recalled between K and singularly transformed sentences.]
Summary of Results
To review the predictions under Transformational
Depth Theory and their resolution:
Prediction                        Outcome
K > Asub & Aob                    Supported
Asub = Aob                        Supported
Asub & Aob = NegP & PQ            Supported
Neg & P > Asub & Aob              Supported
Asub & Aob > NegPQ                Not Supported
The predictions under the alternative surface
structure models were rejected by the significant differ-
ences between K and the A sentences in the case of Johnson's model. That dif-
ference also led to rejection of Martin and Roberts' pre­
dictions as did the lack of difference between Asub and
Aob. Martin and Roberts' predictions do hold for the case
of the NegPQ.
The bulk of the findings of Study I have tended to
support the Miller model of Transformational Depth. In
most regards the findings also replicate the findings of
Savin and Perchonock (1965), the sole exception being the
case of the NegP transformation sequence. Savin found that
NegP yielded fewer words than did NegPQ. Savin found a re­
call of 3.85 words after the NegPQ, while NegP produced a
recall score of 3.48 words. In the present study the re-
call for NegP was 4.22 words and the recall for NegPQ was
3.84 words. While the present study most resembles Savin
and Perchonock (1965) in procedure, the rank ordering of
Savin's sentence types in terms of estimated complexity
does deviate from the usual findings of other investigators
for this instance. The inversion in expected order ob­
served in Savin's data may be an artifact of the sentences
employed.
CHAPTER VI
EXPERIMENT II
Review of the Problem
The basic assumption of the Transformational Depth
Theory of sentence complexity is that the number of gram­
matical rules that are applied in deriving the sentence in
either the encoding or decoding process will determine the
complexity of the sentence and its difficulty in learning
and recall tasks. The greater number of rules used in the
derivation will also make greater demands on the storage
capacity of the Short-Term Memory, STM. The Subject-Verb-
Object sentence, with an adjectival modifier of either the
subject or object, creates a substantially greater load on
the STM due to the fact that it is derived from two base
strings and utilizes two transformation rules. The unmodi­
fied S-V-O sentence is derived from one base and lies at a
depth of zero transformations.
Study II attempted to determine whether the Ss actually
are processing the adjectival or A sentence as two bases
and the processing tags for the derivation of the output
string as is suggested by Miller (1962) and Mehler (1963).
If such processing does occur the S carries into memory the
bases for the matrix and constituent sentences which are
the components of A. For example, the A sentence: The
bear eats the sweet honey would be entered into STM as the
syntactic and lexical representations of the matrix: The
bear eats the honey, and the constituent: The honey is
sweet, plus the transformation or T-markers for embedding
the constituent into the matrix and for deleting the noun
phrase and verb from the constituent. When an S is pre-
sented with the above A sentence in a recall task, the de-
scribed decoding operations are utilized to process the
sentence. Now, if the A sentence were to be followed im-
mediately with the constituent sentence, the appended sen-
tence would be entirely redundant and should not place any
additional load on the STM.
If the processing is not by the T-marker and base
structure analysis, but rather on the basis of the surface
structure of the sentences, appending the constituent would
result in an additional base in storage and hence an incre­
ment in storage requirements. The surface structure models
like Johnson's (1965, 1966) are concerned with the opera­
tions that generate adjacent items. The A sentence illus­
trated above, presents the adjective-noun pair in the or­
der, sweet honey, while in the constituent the order is
reversed and a verb is positioned between the noun and
adjective. The operations of going from adjective to noun
in the A sentence are not likely to facilitate the noun­
verb-adjective sequence in the constituent sentence.
Study II used both A and K type sentences with, and
without, appended sentences of the constituent structure
to examine the storage requirements for the adjectival mod­
ifier. The constituent type sentences are labeled Ct for
the actual constituent of the A sentence. The Cf sentences
are of the same structure, but the adjective and noun were
not paired in the A sentence, though the words are used in
the sentence. Cn refers to sentences which are totally
unrelated lexically to the A sentence. The symbol AA will
designate a sentence having adjectival modification of both
the subject and the object. For the A sentence of Fig­
ure 4, The bear eats sweet honey, the Ct is The honey is
sweet. The Cf for the same sentence is, The bear is sweet
and the Cn might be, The picture is fuzzy.
A transformational encoding of the A sentences will
lead to the support of the following predictions as to the
number of words recalled in addition to the sentence or
sentences.
(1) A = A+Ct (4) A+Cf > A+Cn
(2) A = K+Cf (5) K - AA = 2(K - A)
(3) A+Ct > A+Cf (6) K > A > AA
Design and Procedure
Design. The design of the second experiment paral­
lels the first study reported here. One independent vari-
able was the relationship of the appended constituent
sentences to the sentences they follow. The three types
of relationships are described in the materials section.
A second independent variable was the type of sentence to
which the constituent was appended. The dependent vari­
able, as in the first study, was the number of filler words
recalled in addition to the correct recall of the sentence
or sentences. Length of sentences was controlled as was
lexical familiarity, the latter by using the same set of
sentences in all possible treatment types.
Subjects. The Ss for Experiment II were thirty
students recruited from the introductory psychology classes
at the University of Southern California. Participation as
subjects was a class requirement for the 17 females and 13
males tested. The ages ranged from 17 to 23, with the
modal age being 19.
Materials. A set of 21 kernel or K sentences were
randomly selected from the supply of five-word sentences
constructed for Study I. An additional three sentences
were constructed with adjective modifiers of both subject
and object (AA). The same three doubly modified (AA) sen-
tences were used for all of the stimulus sets. The 21 K sen-
tences were randomly assigned to the nine K conditions and
the twelve A conditions. The A sentences were constructed
by adding an adjective modifier to either the subject or
object noun and exercising the optional rule on the deter­
miner to maintain the five word length. Care was exer­
cised in the selection of the adjectives to avoid obvious
associations with the filler list items. Order One was
created by randomly assigning each of the three K condi­
tions: K, K+Cf, and K+Cn to three of the K sentences. The
four A conditions: A, A+Ct, A+Cf, A+Cn were each assigned
to three of the A sentences in a random fashion.
An example of the Ct condition has been presented
above in the discussion of the embedding process. A Cf
pair might be: Vines cover the dead branch. The vine is
dead. The Cn for the above sentence might be: The honey
is sweet. For the K series of sentences there is no Ct
condition. A K+Cf pair would be: The vines cover the
branch. The vine is dead. The Cn for the K sentence
above could be: The honey is sweet.
After the first stimulus set was constructed, three
more sets were created by systematically rotating the con­
dition of the A and K sentences of set one. An A sentence
which appeared on list one in the A+Ct condition, would
appear in the A+Cn condition in set two and as an A sentence
in set three and the A+Cf condition in the fourth set. The
rotations were intended to spread any advantage that might
accrue to a given lexical combination over all conditions.
Since there are only three K conditions, the K sentences
repeat conditions in the fourth set. The same strings of
filler words used in Study I were randomly assigned to the
sentence sets. The four sets of sentences and the asso­
ciated filler lists are presented in Appendix A.
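A minimal sketch may clarify how each sentence cycles through the treatment conditions across the four sets. The sketch is purely illustrative; the condition orders and the indexing convention are assumptions, and the actual assignments are those listed in Appendix A.

```python
# Purely illustrative sketch of the condition rotation described above; the
# condition orders and indexing are assumptions, not the actual Appendix A lists.
A_CONDITIONS = ["A+Ct", "A+Cn", "A", "A+Cf"]  # rotation order given in the text
K_CONDITIONS = ["K", "K+Cf", "K+Cn"]          # only three K conditions, so set four repeats

def condition_for(sentence_index, set_index, conditions):
    """Return the condition a given sentence receives in a given stimulus set."""
    return conditions[(sentence_index + set_index) % len(conditions)]

# An A sentence that is A+Ct in set one moves through A+Cn, A, and A+Cf:
print([condition_for(0, s, A_CONDITIONS) for s in range(4)])
# A K sentence repeats its set-one condition in the fourth set:
print([condition_for(0, s, K_CONDITIONS) for s in range(4)])
```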
Procedure
The testing of small groups proceeded as for Expe­
riment I. One alteration in the procedure was that E cued
that a pair of sentences was to be presented by saying,
"Number n is a pair, . . Feedback was provided on
trials 3, 8, 14, 16, and 20. A two minute rest period was
introduced after the twelfth sentence. Several Ss re­
ported drawing complete blanks on a given sentence; when
this occurred the sentence was repeated at the end of the
session.
CHAPTER VII
RESULTS OF EXPERIMENT II
Scoring Procedure
The method of scoring was essentially the same as
that detailed in Experiment I. Credit for correct consti­
tuent sentences was given to several Ss who used ditto
marks beneath the common words of the A or K sentence to
indicate the identical items. As in Experiment I, two in­
dependent scorings were made. In cases of differences in
judgment between E and the alternate judge,2 the alternate
judge's opinion prevailed. The agreement between the
judges was high. Only seven disputed ratings of sentences
occurred over 720 judgments. Three of those seven involved
the responses of an S displaying a rather remarkable pen­
manship. Three differences in number of words recalled
were settled by jointly counting the responses.
The data of two Ss were dropped from the analysis
on the basis of incompleteness. One S missed 15 of the 24
sentences and failed to recall any words for three of the
correct sentences. The other subject missed nine sentences.
2 Miss Lorraine Van Kekerix.
No systematic relationship of errors to sentence complexity
seemed to exist. Investigation revealed that the two Ss
involved were not native speakers of English. Two addi­
tional Ss were recruited to maintain equal N for each stim­
ulus set.
Data Analysis
The three observations for each treatment type were
averaged to yield the basic data for Experiment II. See
Appendix B. Averaging was employed to minimize the learn­
ing effects that might be present as well as smoothing out
any extremes in performance attributable to attentional
factors. The mean number of words recalled for each treat­
ment type is presented in Table 6.
TABLE 6
MEAN NUMBER OF WORDS RECALLED AFTER VARIOUS SENTENCES
AND SENTENCE PAIRS
Sentence Type      K      K+Cf   K+Cn   A      A+Ct   A+Cf   A+Cn   AA
Number of Words    5.07   3.87   3.25   4.43   4.31   3.90   3.20   3.19
The mean number of words recalled for each of the
treatment conditions is plotted against the estimated
transformational depth of the treatment types in Figure 8.
[Fig. 8.—Mean words recalled for sentences of different transformational depth.]
Analysis of Variance
The basic data was entered into a List X Treatment
X Subjects analysis of variance for repeated measures
(Winer, 1962). The summary of that analysis is presented
in Table 7. The only significant F arises from the Treat­
ment effects. Following the significant Treatment effect,
orthogonal comparisons of the treatment means were made.
In addition to the planned comparisons several post hoc
comparisons were also made. The post hoc comparison t's
were evaluated by Scheffé's F' (Edwards, 1960). The com-
parisons and the associated t's are presented in Table 8.
The probabilities associated with Scheffe's F' are indi­
cated by an asterisk. The critical value of F' at the .05
level is 3.83 and at the .01 level the critical value is
4.40.
Individual Comparisons
The major prediction that the mean recall for A+Ct
would equal recall for A was supported by a non-significant
t value of 0.805.
The predicted difference in number of words re­
called between the K and the A and A+Ct sentences was sup­
ported at the .01 level, t=5.38. This replicated the
findings of Experiment I. The secondary prediction that
the A+Ct condition would lead to more words recalled than
TABLE 7
ANALYSIS OF VARIANCE FOR MEAN RECALL AFTER PAIRS OF
SENTENCES OF VARYING DEGREES OF RELATIONSHIP

Source                           SS        df     MS      F       P
Between Subjects                 120.12    27
  Orders                         6.10      3      2.03            n.s.
  Subjects within groups         114.02    24     4.75
Within Subjects                  150.97    196
  Sentences                      90.36     7      21.56   69.55   .001
  Orders X Sentences             7.47      21     0.355   1.14    n.s.
  Sentence X Subjects
    within groups                53.14     168    0.31
Totals                           271.09    233
TABLE 8
INDIVIDUAL COMPARISONS OF TREATMENT MEANS:
EXPERIMENT II

Comparison              t        df      P
Planned
  A vs A+Ct             0.805    168     n.s.
  A vs AA               8.32     168     .01
  K vs A & A+Ct         5.38     168     .01
  K+Cf vs A             3.85     168     .01
  A+Ct vs A+Cf          3.61     168     .01
Post hoc
  K+Cf vs K+Cn          4.16     168     .05*
  A+Cn vs A+Cf          4.70     168     .05*
  AA vs A+Cn                     168     n.s.
*Significance level evaluated by Scheffé's test.
would the A+Cf condition was also supported at the .01
level with a t of 3.61. The comparison of the lexically
novel appended sentence with the lexically familiar sen­
tence was significant at the .01 level for the A+Cf vs A+Cn
pair. A post hoc comparison of the parallel set, K+Cf vs
K+Cn yielded a t of 4.16, which is significant at the .05
level when evaluated by the Scheffé t'.
The AA treatment mean proved to be significantly
smaller than the A treatment mean with a t of 8.32 and a
probability of less than .01, supporting prediction number
six.
Prediction number two anticipates that the recall
for the K+Cf and the A conditions will not be significantly
different. The observed difference of 0.56 words yielded a
t of 3.85, significant at the .01 level, leading to rejec-
tion of prediction two. A post hoc comparison of AA with
A+Cn yielded a near zero t, as did the differences of the
pair K+Cf vs A+Cf and the pair K+Cn vs A+Cn.
The expected value of K - AA, according to predic­
tion five, is twice the observed difference between K and
A. The K - A difference is 0.64 words, leading to an ex­
pected K - AA difference of 1.28. The observed K - AA
difference is 1.88 words, or 0.60 words greater than the
expected difference. With a standard error of 0.0903, the
difference of K - A has confidence limits at the .05 level
of 0.4549 and 0.8251. With the K - A difference doubled,
the .05 confidence interval would fall between 1.0949 and
1.4651. The standard error for the K - AA difference is
0.142, and the .05 confidence limits fall between 1.5889
and 2.1711. Thus the fifth prediction is not supported.
CHAPTER VIII
DISCUSSION OF EXPERIMENT II
The basic hypothesis being tested by these two
studies has been that Ss do actually employ an encoding and
decoding process for simple English sentences containing
adjectival modifiers that is adequately described by the
Transformational Depth Theory. The results obtained in
the first study gave some strong support to the contention
that A sentences are processed as base and T-markers. Ex­
periment II sought to confirm the results of Experiment I
and establish the effect under more rigorous control condi­
tions.
The significant differences that were obtained be-
tween the K sentences and the A sentences in the second
study tend to confirm the findings of Experiment I and in­
dicate that the additional demands on STM storage created
by the addition of an adjectival modifier are stable ef­
fects over the two samples, and probably generalizable to
the population of college sophomores.
Confirmation of the Hypothesis
Possibly the most significant finding of Study II
was the lack of significant difference between the A and
the A+Ct treatments. This is interpreted as a strong indi­
cation that S is indeed encoding the adjectivally modified
sentence as matrix plus constituent plus T-markers. The
transformational encoding is further supported by the sig­
nificant difference in storage requirements for the A+Ct
and the A+Cf conditions. This demonstrates that the lack
of difference between A and A+Ct cannot be attributed
solely to the fact that the two sentences share the same
lexical items. The A+Cf condition has an equal number of
shared lexical items in the pair of sentences.
Further evidence bearing on the role of the familiarity
of the vocabulary items was the
significant difference between the A+Cf and the A+Cn con-
ditions. This difference clearly demonstrates that the
effect of familiarity is indeed potent. The differences
attributable to lexical differences in the above pair are
replicated in the comparison of K+Cf and K+Cn. This series
of significant differences indicates that the effect of the
lexical familiarity is reliable over treatment types. A
non-significant difference between the Cf and Cn treatments
for the K and A conditions indicates that both are probably
estimates of the same encoding operation.
From the preceding discussion it can be inferred
that the A+Ct condition cannot be accounted for by the
lexical items involved. It can be demonstrated with the
above comparisons that lexical familiarity can only be a
partial explanation of the results of Experiment II.
The results of the second study can also be ana­
lyzed from the viewpoint of Martin and Roberts (1966, 1967).
If their suggestion that sentence length is the crucial
variable in the recall and learning of sentences is correct, then A+Ct
represents an 80 per cent increase in length, with atten­
dant increases in difficulty and storage requirements.
Surely this is a sufficient increase to alter the storage
requirements of the A+Ct sentences. The differences be­
tween A and A+Cf and K and K+Cf illustrate that the 80 per
cent increase in the length of the recalled strings may be
a real factor in the cases where the additional string is
not the embedded constituent sentence.
The findings of Experiment II are not inconsistent
with the several published studies that have utilized em­
bedded constructions. A study by Wales, reported in Wales
and Marshall (1966), indicated that the learning of sen­
tences containing self-embedded clauses was more difficult
than sentences without embedding. Savin (in press), using
the same task as the present studies, found fewer words re­
called for self-embedded sentences than for the same seman­
tic content cast in sentences that were right branching
(sentences containing relative clauses) (cited in Fodor and
Garrett, 1966). Miller and Isard (1964) also found self-
embedded sentences to be more difficult. Stolz (1967)
found the sentences used by Miller and Isard to be so dif­
ficult that half his subjects were not able to perform his
comprehension task. The embedded adjective seems to pos­
sess the same properties as the self-embedded sentence in
terms of increased storage requirements.
Interval Scale Properties
The results of Experiment I indicated that the A
sentence exhibited the same requirements on STM storage
that the two singularly transformations did. In addition
the study replicated Savin and Perchonock's (1965) finding
of interval scale properties for the storage requirements
of the Neg, P and Q transformations. Given the values of
K and the above transformations it is possible to compute
estimates of the sequential application of the singularly
transformations such as NegP or PQ. If the embedding
transformations are comparable to the singularly transfor­
mations, we might expect the same sort of estimates to be
possible, given K and K - A values.
The differences between the A and A+Ct conditions
might be considered as an estimate of the storage require­
ments for the constituent structure. The observed differ­
ence is 0.12 words. In the same manner the differences
between K and A might be considered to contain the embed­
ding and deletion transformations plus the base for the
constituent or 0.64 words. Subtracting out the 0.12 words
for the base leaves 0.52 words attributable to the trans-
formations.
The cost in terms of storage space for the lexical
items in the appended sentences can also be estimated by
the subtractive process. The K+Cf - K+Cn difference of
0.62 words is one estimate of the cost of the lexical items
of the subject portion of the appended sentence. The dif­
ference between A+Cf and A+Cn is 0.70 words or virtually
identical to the previous estimate of the lexical items of
the subject portion of the appended sentence.
The difference between the A and the A+Cf sentences
might also be considered as an estimate of the storage re­
quired to store the syntactic component of the constituent
sentence. The observed difference is 0.53 words. The dif­
ference between A+Cf and A+Cn may be considered as an esti­
mate of the storage required for the lexical items of a
constituent sentence. If these values are used to approxi­
mate the storage requirements of the Cf sentence in the
K+Cf condition we have K of 5.07, minus Cf syntax of 0.53, minus
Cf lexical of 0.70, or 3.84 words recalled. The observed
recall of 3.87 words agrees very closely with the predicted
value.
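The chain of subtractions above can be summarized in a brief illustrative sketch, not part of the original analysis, using the treatment means of Table 6.

```python
# Illustrative sketch only: the subtractive component estimates described above,
# using the treatment means of Table 6 (words of filler recall).
means = {"K": 5.07, "K+Cf": 3.87, "K+Cn": 3.25,
         "A": 4.43, "A+Ct": 4.31, "A+Cf": 3.90, "A+Cn": 3.20, "AA": 3.19}

constituent_base = means["A"] - means["A+Ct"]                        # 0.12 words for the extra base
transformation_cost = (means["K"] - means["A"]) - constituent_base   # 0.52 words for the transformations
cf_syntax = means["A"] - means["A+Cf"]                               # 0.53 words for constituent syntax
cf_lexical = means["A+Cf"] - means["A+Cn"]                           # 0.70 words for the novel lexical items

predicted_k_cf = means["K"] - cf_syntax - cf_lexical                 # 3.84 words predicted
print(round(predicted_k_cf, 2), means["K+Cf"])                       # versus the observed 3.87
```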
The interval scale properties of the storage re­
quirements for the various components leads to the assump­
tion that the recall scores for the various A conditions
could be estimated by subtracting the difference of K - A
from the appropriate K+ sentence. The observed K - A
difference is 0.64 words. The predicted recall score for
A+Cf is the recall score of K+Cf minus 0.64, or 3.87 minus
0.64 or 3.23 words. The observed A+Cf recall is 3.90.
This is 0.67 words greater than the predicted value. It is
just as though the embedded adjective is "free." This
seems hard to defend in light of the significantly differ­
ent recall scores for the K and A sentences in both this
study and Experiment I. The predicted recall score for the
A+Cn condition is 3.25 minus 0.64, or 2.61. The observed
recall is 3.20, 0.59 words greater than the predicted
value. Again the adjective seems to be "free." Obviously
something is occurring in these treatments that causes the
K+Cf and the A+Cf sentences to be processed in the same
manner.
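The "free" adjective pattern can be seen directly by laying the predictions beside the observations. The sketch below is illustrative only and simply restates the subtraction described above.

```python
# Illustrative sketch only: predicted versus observed recall for the appended-
# sentence conditions, using the K - A difference of 0.64 words (Table 6 means).
k_minus_a = 0.64
predicted_a_cf = 3.87 - k_minus_a     # K+Cf recall minus 0.64 = 3.23 words
predicted_a_cn = 3.25 - k_minus_a     # K+Cn recall minus 0.64 = 2.61 words

observed_a_cf = 3.90                  # 0.67 words above the prediction
observed_a_cn = 3.20                  # 0.59 words above the prediction
# In both cases the embedded adjective appears to be "free."
print(predicted_a_cf, observed_a_cf, predicted_a_cn, observed_a_cn)
```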
The expected interval properties of the embedded ad-
jective were also utilized to predict the recall score for
the doubly modified or AA sentences. The observed recall
for the A sentences was 4.43, or 0.64 words less than re­
called for the K sentences. AA was predicted to be equal
to K minus two times the difference of K and A, or 3.79
words. The observed value is 3.19 words. The difference
of 0.60 words is again about the estimated cost of the
single embedded adjective. In this case the recall is less
than the predicted value.
Sentence Reversibility
A possible explanation of what happened in the case
of the AA sentences to increase the storage requirements
for the doubly modified sentences, may be the reversible
nature of several of the sentences employed in this study.
A reversible sentence is one in which the subject and ob­
ject nouns can be interchanged to create another acceptable
sentence. For example: The dog chases the cow, can be
rewritten: The cow chases the dog, by reversing the sub­
ject and object. A recent study by Turner and Rommetveit
(1967) has indicated that children were better able to imi­
tate, recall, comprehend, and produce non-reversible sen­
tences than sentences of comparable length and structure
that were reversible. The three AA sentences used for all
stimuli sets were examined for reversibility. One proved
to be non-reversible: Simple foods please young children.
A second sentence seemed reversible in both the nouns and
the adjective modifiers: Kind thoughts encourage good
deeds. The final sentence was reversible in terms of the
adjective modifiers: Wild burros eat desert plants.
Examination of the responses to the three AA sen­
tences indicated that a systematic difference in response
to the non-reversible and reversible sentences seemed
likely. The mean recall for the non-reversible sentence
was 3.53. The reversible sentences were clustered at 3.05
and 2.99 words recalled. The number of words recalled were
entered into a one way analysis of variance. See Table 9.
The obtained F of 3.89, with two and 81 degrees of freedom,
is significant at the .05 level. Comparison of the non-
reversible sentence against the two reversible sentences
yielded a t of 7.93 which is significant at the .01 level.
The reversibility of the two sentences had obviously low­
ered the recall for the AA sentences. If the recall score
observed for the non-reversible sentence were used as an
estimation of recall for the AA sentences, the difference
between the predicted value of 3.79 and the observed value
of 3.53 words recalled would be reduced to 0.26 words.
Using the standard error of the K - AA difference as a
conservative estimate of the standard error of the non-
reversible sentences, the .05 confidence limits of the AA
sentences are 3.24 and 3.82. Since this intersects with
.05 confidence limits of the K - 2(K - A) estimates of the
AA recall scores of 3.6049 and 3.9751, the adjusted differ­
ence is probably not significant. The reversibility factor
explains the failure of the predicted recall of AA sentences
and also may offer some answers to the problem of the
"free" adjective noted in comparing A+Cf to K+Cf.
TABLE 9
ANALYSIS OF VARIANCE OF RECALL FOR REVERSIBLE
AND NON-REVERSIBLE SENTENCES
Source SS df MS F P
Between Sentences 4.14 2 2.07 3.89 .05
Within 43.16 81 0.532
Total 47.30 83
The appended sentence in the A+Cf condition might
create the same sorts of problems that the reversible sen­
tences created in the AA condition. In the A+Cf sentence
pairs, the adjective is used with both the subject and the
object of the matrix sentence. This undoubtedly calls for
some bookkeeping rules to assign the correct NP to the Cf
sentence, or to prevent recall as an AA sentence. The re­
versible sentences in the AA condition required 0.50 more
words of storage than did the non-reversible sentence. If
the cost, in words recalled, for A was 0.64 words less than
for K and the cost of the bookkeeping rules was 0.50 words,
the difference, K - (A+Cf) might be estimated by the sum of
the components. The computed estimate of that difference
is 1.14 words while the observed difference was 1.17.
On the other hand this will not provide any expla­
nation for the equality of recall for K+Cn and A+Cn. If
the reversibility is a factor in the A+Cf conditions it
might be just as reasonable to suggest that the expectan­
cies should be like that of the AA sentences. This ob­
viously did not occur.
Overload and Chunking Hypothesis
All of the preceding analyses have been based upon
a transformational encoding of the adjectival modifier and
have failed to provide an adequate explanation for all of
the results of this study. It would seem that the Trans­
formational Depth Theory can only explain a limited class
of events as Fodor and Garrett (1966) claim. The results
of Study I are a remarkably good fit of the predictions
generated by the Transformational Depth Theory. In the
present study the results of the K, A, and A+Ct treatments
have been successfully predicted by the transformational
depth of the sentences. The AA sentences satisfied the or­
dinal prediction, but failed to support the interval scale
prediction. By appeal to the effect of the reversible sen­
tences in the AA set, it was possible to suggest that the
interval scale prediction might also be satisfied. The
failure of the Transformational Depth Theory occurs only in
those sentence pairs K+Cf, K+Cn, A+Cf, and A+Cn. All these
control sentences fail to show the predicted differences
attributable to the embedded adjective.
The element common to the set of deviant treatments
is that all are sentence pairs. The well behaved treat­
ments on the other hand involve only single sentences with
the exception of the A+Ct condition. The Ct sentence is
entirely redundant, if encoding is in a transformational
fashion, and seems to be perceived as a single sentence.
It should also be noted that the S was operating in a con-
dition of maximum load on the STM. This might be a con­
tributing factor to the dilemma of the "free" adjective.
The list of filler words plus the sentences insured full
utilization of the STM capacity. Miller (1956) has sug­
gested that the finite nature of the memory capacity can be
increased or circumvented to an appreciable degree by the
process of "chunking," that is using higher order units to
subsume a number of items. Cohen (1966) suggested that the
hearer's analysis might not always be to the deep structure
of the sentence. Processing to a lesser level could be a
form of chunking in that embedded constituents would be
dropped from the items in storage. In the presence of
heavy loading on the STM, the S perceived that his capacity
was being taxed. Presumably this was an uncomfortable
situation which S tried to relieve by the expediency of
chunking. This chunking could have taken the form of
making an analysis of only the surface structure of sen-
tences when the perceived load exceeded some threshold
value.
The surface and the deep structure of the A sen­
tences are quite disparate as Chapter II has indicated.
The surface and deep structure of the K sentences on the
other hand are very similar. The A and K sentences employed in
these studies have identical surface structures. Figure 9
illustrates the surface structure of the A and K sentences
of Experiment II. The illustration is of the subject modi­
fied A sentence. The D in parentheses indicates that in
the K form the Adj is replaced by a determiner. By utiliz­
ing a second modifier the tree can also represent the sur­
face structure of the AA sentences.
Because the well behaved sentences were all single
sentences while the deviant results were associated with
the sentence pairs, it is suggested that the Ss perceived
the pairs as being markedly more difficult in the presence
of the load created by the filler words. When E announced
"number j is a pair," felt overloaded and shifted to an
analysis of the surface structure to relieve the load. In
a normal conversation such a shift might reduce comprehen­
sion if the deep structure were not congruent with the sur­
face structure. In the recall task set for S in this study,
comprehension was not a necessary part of the processing.
[Fig. 9.—P-marker illustrating the identity of the surface structure of Kernel and Adjective-modified sentences. The starred items (D, Adj) are interchangeable to form subject or object modified sentences.]
Assuming S does regress to a surface analysis of
the sentence pairs, the storage required for the A and the
K sentences would be identical, for they share the same
structure. The structure of the constituent type sentences
are all identical, as well. Thus the S, who switched to
surface structure processing, stored the same syntactical
information for each of the four deviant treatment types.
The sole source of difference in the four types lies in the
number of lexical items in the constituent that are shared
with the matrix or A sentence. In the case of the Cf con­
stituents for the A sentences all the lexical items are
common with the exception of the verb "be." The K+Cf con­
dition employs a constituent with two common items. In the
Cn conditions all words in the constituent are novel. As
indicated earlier, familiarity is a significant factor.
The recall scores cluster into familiar and non-familiar
lexical items in the constituent. By this analysis it is
possible to account for all of the unsupported predictions
except the interval scale prediction for the AA treatment.
CHAPTER IX
CONCLUSIONS AND RECOMMENDATIONS
The two experiments described in this report have
tended to support the Transformational Depth Theory. Sev­
eral of the results seem to be particularly meaningful to
the general theory of language processing as well as to
the specific hypotheses generated by the Transformational
Depth Theory that were tested in these experiments.
Extension of the Transformational
Depth Model
Possibly the most important finding was the lack of
a significant difference between the A and the A+Ct sen­
tences in Experiment II. This finding, in conjunction with
the significant difference observed between the K and A
sentences provides substantial evidence that S was actually
employing a decoding process of base plus T-markers. The
significant difference between A+Ct and K+Cf indicates that
the use of the adjectival modifier can represent a reduc­
tion in load on the STM over the storage of the two compo­
nent sentences. The high frequency of everyday usage of
the adjective modifier, rather than a string of K type sen­
tences supports Yngve's (1960) contention that the grammat-
ical forms which make the least demands on STM would be
the most frequently used. In addition to the possible re­
duction in load on the STM, the embedded adjective reduces
the likelihood that the listener will fail to associate
the constituent sentence with the NP of the matrix sen­
tence. The listener's expectancies are that the sentence
will exhaust the description of the NP, or that the con­
tinuity will be indicated by the substitution of a pronoun
for the NP.
The serial presentation of the sentence pairs in
Experiment II increased the time that the S was required to
hold the sentence representations in STM, increasing the
chance that decay might have occurred. The significant
differences between the A+Ct treatment and all of the other
treatments involving serially presented sentence pairs dem­
onstrates that decay of the memory trace cannot account for
the results of Study II. The increases in presentation
time for the pairs of sentences over the single sentences
average approximately four seconds. Gough (1966) utilized
a three second delay in his verification task to allow sen­
tence processing to be completed before verification. He
finds that the delay does not alter the accuracy of per­
formance that decay of the trace would imply. Sachs (1967)
found recall for the syntactic structure of the sentence
declined over time while recall of the semantic content
held constant. Sachs' delays are presented in terms of
the number of syllables of connected discourse intervening
between the sentence and the recall signal. At twenty syl­
lables the effect began to appear. The average stimulus
pair in the present study never exceeded twenty syllables.
In addition to the time required to present the twenty syl­
lables, Sachs' method required presentation of the recall
test items creating another time delay. It is not reason­
able to assume that the delays introduced in the second
study are comparable in magnitude to Sachs'.
Chunking Hypothesis
A second significant finding of the second experi­
ment was the apparent shift in the processing methodology
for the sentence pairs that did not utilize the true con­
stituent as the appended sentence, that is, processing was
different for sentences in which the appended sentence was
not related to the matrix by embedding. This apparent in­
ability of the Transformational Depth Theory to account
for all data has been commented upon by Fodor and Garrett
(1966). They cite the results of Miller and Isard (1964),
Blumenthal (1966), and their study (Fodor and Garrett,
1967) as evidence that the theory can only account for a
few special cases. The sentences used in their studies are
remarkably complex multiply self-embedded sentences of the
type: The vase that the maid that the agency hired dropped
broke on the floor. The estimated transformational depth
is six. Stolz (1967) used the Miller-Isard sentences in a
comprehension task which required Ss to list the clauses in
the self-embedded sentence. Approximately half of the Ss
were unable to analyze the sentences even though the
printed version of the sentence was available to them and
three readings were presented to provide ample prosodic
cues. Stolz feels that the sentences involve more grammar
than the average S knows. Herdan (1964) suggests that the
"grammar load" of a language is that portion of the grammar
of a language that is actually employed in everyday usage.
In his terms the multiply self-embedded sentences exceed
the "grammar load" of English.
In the experiments reported here, it is not likely
that the sentential complexity exceeds the grammar load of
English. The perceived load explanation that was presented
in the discussion of Study II, is probably attributable to
either the list of unrelated filler words or to the length
of the two strings to be recalled. The shift to a surface
structure processing could represent an extremely valuable
piece of information if the exact cross-over point could
be established. A surface structure analysis is likely to
yield semantic errors in comprehension in those cases in
which the deep structure is not congruent with the surface
structure (Katz and Postal, 1964; Postal, 1964b). For
accuracy in communication it would seem necessary to either
avoid sentences of such complexity that overload conditions
were perceived, or to use only sentences in which surface
and deep structures were similar. Numerous applications
to the writing of educational materials and operational
manuals for industrial and military equipment are possible.
An alternative hypothesis of why the non-embedded
constituent sentences seemed to be recalled as the surface
structure processing model would predict, was alluded to
in the opening paragraphs of this discussion. It is very
likely that the S failed to perceive that the pair K+Cf
were related in the same fashion that the derived matrix
and constituent of the A sentence are related. Without the
context of connected discourse the connection might be
rather difficult to notice, especially if the task is com­
plicated by the need to recall the list of filler words.
A simple test of the disjoint sentence hypothesis
against the perceived depth hypothesis can be made. If the
sentence pairs of Study II were presented with filler lists
of varying lengths the two hypotheses lead to different
predictions. The disjoint sentence processing expects the
A and K series to exhibit the same equality of recall re­
gardless of the filler load. The overload hypothesis would
expect the reduced filler load to permit deep structure
processing which would lead to differing numbers of re­
called words for the A and K series of sentence pairs.
Thus recall for A+Cf would be significantly less than for
K+Cf when some part of the load created by the eight filler
items was removed.
Disjoint Sentence Hypothesis
Experiment I has demonstrated that the generalized
or embedding transformations associated with adjective mod­
ification of NPs may be treated much the same as the equiv­
alent number of singularly transformations when assessing
the complexity of sentences. The finding that sentences
containing a single adjective modifier had approximately
the same STM storage requirements as did sentences contain­
ing two singularly transformations suggests a number of
experiments to map out more clearly the properties of this
transformation.
The sentences of the first study used only a single
adjective modifier. In Experiment II adjective modifica­
tion was applied to both the subject and object NP. In
both instances the increase in the number of adjectives led
to significantly fewer words recalled. There are other pat-
terns for the use of adjective modifiers. A single NP can
be modified with an infinite number of adjectives. On a
practical level it would be interesting to determine if
double modification of a subject NP resulted in the same
magnitude of decrement observed in the AA sentences of
Study II. The sentence, Tall pine trees shade the cabin,
would be compared with, Tall trees shade the log cabin.
Bach (1964) suggests that the encoding of the individual
constituent sentences for the multiply modified NP produces
a structure that seems more complex than the situation de­
mands. He feels that a more appropriate encoding would
combine the individual constituents into a coordinated con­
stituent, e.g., The trees are tall and are pine. If the
coordinated constituent is employed the sentence with the
doubly modified NP should lead to greater recall scores
than for the AA sentence.
The results of Experiment I, as well as the find­
ings of Savin and Perchonock (1965) confirm Miller's (1962)
findings that the singularly transformations are indepen­
dently applied and exhibit some properties of interval
scales. If the embedded adjective behaves in the same
fashion, sentences involving a singularly transformation
and an embedded adjective should exhibit a constant amount
of decrease in recall when compared with sentences involv­
ing the same singularly transformation without the adjec­
tive modifier. For example the difference of Neg - A Neg
should equal the difference P - AP.
Refutation of Mean Depth
The results of Study I have replicated the findings
of Mehler and Carey (1968) and Martin and Roberts (1967)
that the surface structure complexity index d is not a very
useful predictor of learning or recall performance. The
extreme lability of d, as demonstrated by the exercise in
subject and object modification of Experiment I, suggests
that d, over a wide range, seems to be rather independent
of the sentences' perceived complexity. The relationship
between d and sentence learning that Martin found in his
1966 study may have been a fortunate selection of sentences
in which surface structure was correlated with the com­
plexity of the deep structure. The most telling finding
of Martin and Roberts was the extremely low scores attrib­
uted to the "kernel." The so-called "kernels" were of the
form: She will (certainly) visit her (two) sisters. The
parentheses are added to indicate the embedded adverb and
adjective. In light of the deep structure complexity of
the "kernel," Martin's finding offers real support to the
Transformational Depth Theory rather than refuting the the­
ory as he suggests.
The above criticism of the Martin and Roberts'
study illustrates one of the problems of comparisons be­
tween the studies of various investigators. The Es either
assume that the S is an absolute idiot and raid the primers
and first grade readers for stimulus materials, or the
assumption is made that S knows all of the grammar of Eng-
lish, and testing involves only the most complex forms.
The present studies possibly err in the direction of the
primer, though some effort was expended to form sentences
of interest. A much needed investigation is a determina­
tion of the "grammar load" of everyday spoken English.
With such a baseline it would be far more likely that the
psycholinguistic studies might focus upon the significant
aspects of the grammar rather than attacking willy-nilly.
Another neglected aspect of the psycholinguistic
study has been the context of the sentence. The sterile
isolated sentences of these studies are not in all likeli­
hood eliciting a very natural response. The paradigm of
Sachs (1967) is a refreshing contrast in that it looks at
sentence processing in connected discourse. If such
studies can tease out the relationship between the strings
of sentences involved, meaningful discussion of the compre­
hension of paragraphs should be possible.
CHAPTER X
SUMMARY
Psycholinguistic research has indicated that the
number of grammatical transformations (transformational
depth) required to derive a sentence from the "base" or
"kernel" structure, is positively related to the difficulty
experienced in processing the sentence. The singularly
transformations, e.g., negative, question, and passive,
which revise a single sentence, have been most extensively
studied. The present experiments were designed to test the
Transformational Depth Model's extension to the transfor­
mations which combine several sentences into a single sen­
tence. A second purpose was to test the Miller-Chomsky
Transformational Depth Model against two surface structure
models.
Transformational grammars view the adjective modi­
fier as the embedded representation of a constituent (C)
sentence. In the sentence, The green tree produces kapok,
green is the embedded transformation of the C sentence, The
tree is green. The C is combined with the matrix (M) sen­
tence, The tree produces kapok, to produce the adjectivally
modified (A) sentence above by an embedding transformation
and a deletion transformation. The sample sentence has a
depth of two transformations. The A sentences were com­
pared with sentences having a depth of one, two, three, or
no singularly transformations in an immediate recall task.
The measure of interest was the unused short-term memory
capacity, estimated by the number of words from a list that
could be recalled in addition to the sentence. An analysis
of variance yielded a significant sentence type effect.
Comparisons of the treatment means indicated that the two
combinatory transformations used in adjectival modification
reduced the number of words recalled by the same amount as
did two singularly transformations. The results of com­
parisons between the singularly transformations replicated
the findings of previous investigators. The predictions of
the two surface structure models were not supported.
A second experiment tested more rigorously the
validity of the hypothesis of transformational processing
of adjectival modifiers. The S, on hearing an A sentence,
reduces the sentence to representations of M and C, plus
the transformational processing information. If S were
asked to recall a pair of sentences, the load on memory
would not increase when A was paired with C, as C is en­
tirely redundant. Control for familiarity of vocabulary
was accomplished by sentences of the C construction using
the adjective and unmodified noun of A. The sample sen­
tence would be paired with, The kapok is green, or the
novel C, The ham is salty. A parallel set of sentences,
having the construction of M, were paired with C sentences
with shared or novel vocabulary items to test for interval
scale properties of the reduction in recall created by the
adjective, as were single sentences containing two adjec­
tives. Analysis indicated the additional true C sentence
did not reduce recall scores, while all false Cs resulted
in significantly lower recall scores. The predicted dif­
ference in recall between the modified and unmodified sen­
tences paired with false C sentences were not observed,
however.
The conclusions drawn were: (a) the Transforma­
tional Depth Model of language processing more adequately
described the data than did either of the surface structure
models tested; (b) the reduction in recall score after the
adjectivally modified sentence appeared to be a reliable
event over the samples tested; (c) the Transformational
Depth Model of language processing can be extended to the
combinatory transformations involved in adjectival modifi­
cation; and (d) that the second experiment should be re­
peated with several alterations to determine why the A plus
false pairs led to recall scores equal to the scores for
the M with false C pairs.
REFERENCES
Bach, E. W. An introduction to transformational grammars.
New York: Holt, Rinehart and Winston, 1964.
Bagley, W. C. The appreciation of the spoken sentence: a
study in the psychology of language, American
Journal of Psychology, 1901, 12, 80-134.
Blumenthal, A. L. Observations with self-embedded sen­
tences, Psychonomic Science, 1966, 6, 453-454.
Cattell, J. M. The time it takes to see and name things,
Mind, 1886, 11, 64.
Clark, H. H. Some structural properties of simple active
and passive sentences, Journal of Verbal Learning
and Verbal Behavior, 1965, 4, 65-70.
Cohen, L. J. Comment. In Lyons, J., & Wales, J. C.
(Eds.), Psycholinguistic papers. Edinburgh:
Edinburgh University Press, 1966. Pp. 174-185.
Coleman, E. B. The comprehensibility of several grammati­
cal transformations, Journal of Applied Psychology,
1964, 48, 186-190.
Chomsky, N. A. Syntactic structures. 's-Gravenhage:
Mouton, 1957.
Chomsky, N. A. Current issues in linguistic theory. The
Hague: Mouton, 1964.
Chomsky, N. A. Aspects of the theory of syntax. Cambridge,
Mass.: MIT Press, 1965.
Chomsky, N. A. Cartesian linguistics. New York: Harper
& Row, 1966. (a)
Chomsky, N. A. Topics in the theory of generative grammar.
The Hague: Mouton, 1966. (b)
Dewey, J. The psychology of infant language, Psychological
Review, 1894, 1, 63-66.
Edwards, A. L. Experimental design in psychological re­
search. New York: Holt, Rinehart and Winston,
1960.
Fillmore, C. J. The position of embedding transformations
in a grammar, Word, 1963, 19, 208-231.
Fodor, J. A., & Garrett, M. Some reflections on competence
and performance. In Lyons, J., & Wales, J. C.,
Psycholinguistic papers. Edinburgh: Edinburgh
University Press, 1966.
Fries, C. C. The structure of English. New York: Har-
court, Brace & World, 1952.
Gleason, H. A., Jr. Linguistics and English grammar. New
York: Holt, Rinehart and Winston, 1965.
Goldman-Eisler, F. Discussion and further comment. In
Lenneberg, E. H. (Ed.), New directions in the
study of language. Cambridge, Mass.: MIT Press,
1964. Pp. 109-130.
Gough, P. B. Grammatical transformations and speed of
understanding, Journal of Verbal Learning and Ver­
bal Behavior, 1965, 4, 107-111.
Gough, P. B. The verification of sentences: The effect
of delay of evidence and sentence length, Journal
of Verbal Learning and Verbal Behavior, 1966, 5, 492-496.
Halliday, M. A. K. Some notes on deep grammar, Journal of
Linguistics, 1966, 2, 57-67.
Herdan, G. The advanced theory of language as choice and
chance. Berlin: Springer-Verlag, 1966.
Hockett, C. F. Language, mathematics and linguistics.
The Hague: Mouton, 1967.
Howe, E. S. Probabilistic adverbial qualification of ad­
jectives, Journal of Verbal Learning and Verbal
Behavior, 1963, 1, 225-242.
Howe, E. S. Verb tense, negatives and other determinants
of the intensity of evaluative meanings, Journal
of Verbal Learning and Verbal Behavior, 1966, 5,
147-155.
Jakobsen, E. On meaning and understanding, American Jour-
nal of Psychology, 1911, 12, 553.
Jenkins, J. J. Mediated associations: Paradigms and
associations. In Cofer, C. N. , & Musgrave, B.
(Eds.), Verbal behavior and learning. New York:
McGraw-Hill, 1963. Pp. 210-244.
Johnson, N. F. Linguistic models and functional units of
language behavior. In Rosenberg, S. (Ed.), Direc­
tions in psycholinguistics. New York: Macmillan,
1965. Pp. 29-65.
Johnson, N. F. The influence of associations between ele­
ments on structured verbal responses, Journal of
Verbal Learning and Verbal Behavior, 1966, 5,
369-374.
Katz, J. J. Mentalism in linguistics, Language, 1964, 40,
124-137.
Katz, J. J., & Postal, P. M. An integrated theory of
linguistic description. Cambridge, Mass.: MIT
Press, 1964.
Krossner, W. J. The evaluation of strings in an artificial
logical language, Psychonomic Science, 1967,
321-322.
Ladefoged, P., & Broadbent, D. E. Perception of sequence
in auditory events. Quarterly Journal of Experi­
mental Psychology, 1960, 12, 162-170.
Lamb, S. M. Outline of stratificational grammar. Washing­
ton, D.C.: Georgetown University Press, 1966.
Lees, R. B. Model for a language user's knowledge of gram-
matical form, Journal of Communication, 1964, 14,
74-85.
Lukens, H. Preliminary report on the learning of language,
Pedagogical Seminar, 1894, 3, 324-360.
Maclay, H. Linguistics and language behavior, Journal of
Communication, 1964, 14, 66-73.
Marks, L., & Miller, G. A. The role of semantic and syn-
tactic constraints in the memorization of English
sentences, Journal of Verbal Learning and Verbal
Behavior, 1964, 3, 1-5.
Martin, E., & Roberts, K. H. Grammatical factors in sen-
tence retention, Journal of Verbal Learning and
Verbal Behavior, 1966, 5, 211-218.
Martin, E., & Roberts, K. H. Sentence length in the free-
learning situation, Psychonomic Science, 1967,
535-536.
McMahon, L. Grammatical analysis as a part of understand-
ing a sentence. Unpublished doctoral dissertation,
Harvard University, 1963.
Mehler, J. Some effects of grammatical transformations on
the recall of sentences, Journal of Verbal Learning
and Verbal Behavior, 1963, 2, 346-351.
Mehler, J., & Carey, P. The interaction of veracity and
syntax in the processing of sentences, Perception
and Psychophysics, 1968, 3, 109-111.
Miller, G. A. The magical number: Seven plus or minus two.
Psychological Review, 1956, 63, 81-97.
Miller,
Miller,
j
i
; Miller,
:Miller,
Miller,
Miller,
Mowrer,
Osgood,
Osgood,
100
G. A. Some psychological studies of grammar,
American Psychologist, 1962, 17_, 748-762.
G. A. Language and psychology. In Lenneberg, E. H.
(Ed.), New directions in the study of language.
Cambridge, Mass.: MIT Press, 1964. Pp. 89-107.
G. A., & Chomsky, N. Finitary models of language
users. In Luce, R. D., Bush, R. R., & Galanter, E.
(Eds.), Handbook of mathematical psychology.
Vol. II. New York: Wiley, 1963. Pp. 419-491.
G. A., & Isard, S. Some perceptual consequences
of linguistic rules, Journal of Verbal Learning and
Verbal Behavior, 1963, 2^ 217-228.
G. A., & Isard, S. Free recall of self-embedded
English sentences, Information and Control, 1964,
7, 292-303.
G. A., & McKean, K. O. A chronometric study of
some relations between sentences, Quarterly Journal
of Experimental Psychology, 1964, 16_, 297-308.
0. H. The psychologist looks at language, American
Psychologist, 1954, 9_, 660-694.
C. E., & Sebeok, T. A. Psycholinguistics: A sur­
vey of theory and research problems. Part 2, sup­
plement to Journal of Abnormal and Social Psychol­
ogy, 1954, 49^, whole issue. Reprinted by Indiana
University Press, 1965.
C. E. On understanding and creating sentences,
American Psychologist, 1963, 18^ 735-751.
Postal, P. M. Limitations of phrase-structure grammars.
In Fodor, J. A., & Katz, J. J. (Eds.), The struc­
ture of language. New York: Prentice-Hall,
1964. (a) Pp. 137-151.
Postal, P. M. Underlying and superficial linguistic
structure, Harvard Educational Review, 1964, 34,
246-266. (b)
Roberts, P. English syntax: A programed introduction to
transformational grammar. New York: Harcourt,
Brace & World, 1964.
Sachs, J. S. Recognition memory for syntactic and semantic
aspects of connected discourse, Perception and
Psychophysics, 1967, 2^ 437-442.
Savin, H. B. Grammatical structure and the immediate re­
call of English sentences: Embedded clauses.
(In press.)
Savin, H. B., & Perchonock, E. Grammatical structure and
the immediate recall of English sentences, Journal
of Verbal Learning and Verbal Behavior, 1965, £,
348-353.
Slobin, D. I. Grammatical transformations and sentence
comprehension in childhood and adulthood, Journal
of Verbal Learning and Verbal Behavior, 1966, _5,
102
Smith, C. S. A class of complex modifiers in English,
Language, 19.61, 31_, 342-365.
Smith, F., & Miller, G. A. (Eds.). The genesis of lan­
guage . Cambridge, Mass.: MIT Press, 1966.
Stolz, W. S. A study of the ability to decode grammati­
cally novel sentences, Journal of Verbal Learning
and Verbal Behavior, 1967, 6^, 867-873.
Thomas, O. Transformational grammar and the teacher of
English. New York: Holt, Rinehart and Winston,
1965.
Turner, E. A., & Rommetveit, R. The acquisition of sen­
tence voice and reversibility, Child Development,
1967, 38, 649-660.
Tannenbaum, P. H., Evans, R. R., & Williams, F. An experi­
ment in the generation of simple sentence struc­
tures, Journal of Communication, 1964, 1£, 113-117.
Underwood, B. J. The language repetoire and some problems
in verbal learning. In Rosenberg, S. (Ed.), Direc­
tions in psycholinguistics. New. York: Macmillan,
1965. P p . 99-120.
Wales, R. J., & Marshall, J. C. The organization of lin­
guistic performance. In Lyons, J. , S t Wales, R. J.
(Eds.), Psycholinguistic papers. Edinburgh:
Edinburgh University Press, 1966. Pp. 29-80.
Winer,
Yngve,
103
B. J. Statistical principles in experimental de­
sign. New York: McGraw-Hill, 1962.
V. H. A model and an hypothesis for language
structure., Proceedings, American Philosophical
Society, 1960, 104, 444-466.
i
APPENDIXES
APPENDIX A
STIMULUS SENTENCE SETS
KERNEL SENTENCES USED IN EXPERIMENT I AND TRANSFORMATION FOR EACH ORDER
Sentence                                    I       II      III     IV
 1. A lottery selects the victims.          NegPQ   K       P       NegP
 2. The detectives discover the clues.      P       NegPQ   Neg     PQ
 3. The tourists pay the taxes.             NegP    Neg     PQ      K
 4. A submarine supplies the laboratory.    K       PQ      NegP    Neg
 5. The dealer buys the products.           Aob     K       NegPQ   Asub
 6. The Indians raid the ranches.           P       Neg     K       PQ
 7. The Europeans eat the bugs.             Neg     K       P       NegP
 8. The Gods envy the mortals.              PQ      P       Asub    K
 9. The awning shades the bench.            Aob     Aob     Neg     P
10. The wall conceals the garage.           K       Asub    NegPQ   Neg
11. The pelicans eat the crabs.             NegP    Neg     Aob     Asub
12. The cliffs surround the camp.           Asub    P       PQ      K
13. The results please the boss.            NegPQ   Aob     NegP    P
14. The truth shocks the nation.            P       NegP    Aob     NegPQ
15. The patterns please the artist.         Neg     NegPQ   K       Aob
16. The whales eat the squid.               Asub    PQ      Aob     Neg
17. The practice improves the memory.       PQ      NegP    Asub    Aob
18. A knife scratches the diamonds.         NegP    NegPQ   K       P
19. The kangaroos avoid the clover.         K       P       Asub    NegPQ
20. The soldiers ambush the forces.         Aob     PQ      P       NegP
21. A cross marks the graves.               Neg     Asub    NegPQ   K
22. The stress prevents the growth.         NegPQ   Aob     PQ      Asub
23. The walls border the street.            Asub    NegP    Neg     Aob
24. A Christian observes the holiday.       PQ      Asub    NegP    NegPQ
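The codes in the table follow the transformational shorthand used in the appendixes: K denotes the untransformed kernel, while Neg, P, and Q mark the negative, passive, and question transformations, and they combine (NegPQ, for example, is a negative passive question); Asub and Aob denote, on the reading assumed here, an adjectival modifier embedded on the subject or on the object noun. The fragment below is only an illustrative sketch of what the labels denote; the wordings, and the adjectives "honest" and "cheap," are invented for the example and are not the sentences presented to subjects.

# Illustration only: hand-written renderings of one kernel sentence under each
# code in the table above. These renderings were not used in the experiment.
RENDERINGS = {
    "K":     "The dealer buys the products.",               # kernel
    "Neg":   "The dealer does not buy the products.",       # negative
    "P":     "The products are bought by the dealer.",      # passive
    "PQ":    "Are the products bought by the dealer?",      # passive question
    "NegP":  "The products are not bought by the dealer.",  # negative passive
    "NegPQ": "Aren't the products bought by the dealer?",   # negative passive question
    "Asub":  "The honest dealer buys the products.",        # adjective on the subject noun (assumed reading)
    "Aob":   "The dealer buys the cheap products.",         # adjective on the object noun (assumed reading)
}

for code, sentence in RENDERINGS.items():
    print(f"{code:6s} {sentence}")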
STIMULUS SET I— EXPERIMENT II
1. The dealer buys local products. The dealer is local.
rock, cow, ship, day, chair, sleet, hat, blue.
2. The patterns rest the eyes. The eyes are sad. bush,
dog, bus, hour, desk, rain, hat, green.
3. The girl drops the pencils. The leaves are tender.
tree, dog, truck, month, desk, snow, shirt, white.
4. Rebel leaders control the army. grass, cat, bus,
week, bed, snow, dress, blue.
5. The stress stops normal growth. The dog is hungry.
tree, cow, ship, year, lamp, storm, coat, red.
6. The truth shocks the mayor. stream, dog, car, hour,
rug, rain, shirt, black.
7. The staff plans the battles. The burn is painful.
rock, horse, truck, month, desk, sleet, hat, blue.
8. Wild burros eat desert plants. bush, lion, bus,
month, rug, storm, dress, green.
9. Radical ideas thrill the minority. The dishes are
new. tree, lion, truck, day, bed, snow, shirt, green.
10. Heavy tractors pull the missiles. The tractors are
heavy. rock, cow, car, year, rug, hail, coat, white.
11. The gulls attack the pelicans. The gulls are hungry.
stream, horse, train, year, chair, storm, suit, green.
12. The magician visits sick children. The men seem lazy.
bush, cat, car, week, chair, snow, hat, green.
13. The old horse trots home. The home is old. tree,
dog, ship, hour, lamp, rain, coat, red.
14. The bank offers commercial services. The services are
commercial. bush, cat, car, day, bed, snow, hat, red.
15. The Gods envy the mortals. rock, cow, car, year, rug,
storm, shirt, black.
16. A soldier cleans the cannon. The mountains are
rugged. tree, horse, ship, hour, lamp, sleet, coat,
green.
17. Kind thoughts encourage good deeds. rock, lion, car,
day, chair, hail, dress, black.
18. The file contains secret papers. bush, dog, bus, hour,
desk, rain, hat, green.
19. The hedge conceals the garden. The hedge is tall.
grass, lion, truck, hour, bed, sleet, dress, blue.
20. A submarine supplies the laboratory. grass, horse,
truck, month, rug, storm, suit, red.
21. A military band plays marches. The marches are mili­
tary. stream, lion, bus, year, bed, snow, hat, white.
22. Simple foods please young children. tree, cat, ship,
year, lamp, snow, coat, black.
23. The awnings shade narrow streets. The streets are
narrow. bush, cow, bus, day, desk, hail, hat, green.
24. The natives eat insect grubs. stream, dog, train,
hour, chair, rain, dress, white.
STIMULUS SET II— EXPERIMENT II
1. The natives eat insect grubs. The grubs are insects.
grass, lion, ship, hour, desk, sleet, suit, white.
2. A submarine supplies the laboratory. The laboratory
is busy. rock, cow, bus, year, rug, storm, hat,
black.
3. The old horse trots home. The horse is old. tree,
lion, car, hour, lamp, sleet, coat, green.
4. Wild burros eat desert plants. rock, cat, car, day,
chair, hail, dress, black.
5. The magician visits sick children. The children are
sick. bush, dog, bus, hour, bed, sleet, dress, blue.
6. A soldier cleans the cannon. grass, lion, truck,
hour, bed, rain, hat, green.
7. The staff plans the battles. The staff is old.
stream, horse, truck, month, rug, storm, suit, red.
8. The bank offers commercial services. The bank is
commercial. grass, horse, bus, year, bed, storm, hat,
white.
9. Kind thoughts encourage good deeds. tree, cat, ship,
year, lamp, snow, coat, black.
10. The file contains secret papers. The hedge is tall.
bush, cow, bus, day, desk, hail, hat, green.
11. Simple foods please young children. stream, dog,
train, hour, chair, rain, dress, white.
12. The hedge conceals a garden. The gulls are hungry.
grass, horse, truck, hour, lamp, sleet, shirt, white.
13. The Gods envy the mortals. The tree is dead. rock,
cow, ship, day, chair, sleet, hat, blue.
14. The stress stops normal growth. The stress is normal.
bush, dog, bus, hour, desk, rain, hat, green.
15. Heavy tractors pull the missiles. The missiles are
heavy. tree, dog, bus, month, desk, snow, hat, white.
16. The truth shocks the mayor. The need is great. tree,
cat, bus, week, bed, snow, dress, blue.
17. Rebel leaders control the army. The men seem lazy.
tree, cow, ship, year, lamp, storm, coat, red.
18. The girl drops the pencils. rock, horse, truck,
month, desk, sleet, hat, blue.
19. The awnings shade narrow streets. The dog is hungry.
bush, horse, bus, month, rug, storm, dress, green.
20. Radical ideas thrill the minority. tree, lion, truck,
day, bed, snow, shirt, green.
21. The gulls attack the pelicans. The pelicans are
large. rock, cow, car, year, rug, hail, coat, white.
22. A military band plays marches. stream, horse, train,
year, chair, storm, suit, green.
23. The patterns rest the eyes. bush, cat, car, week,
chair, snow, hat, green.
24. The dealer buys local products. tree, dog, ship,
hour, lamp, rain, coat, red.
STIMULUS SET III— EXPERIMENT II
1. The file contains secret papers. The papers are
secret. rock, horse, truck, month, desk, sleet, hat,
blue.
2. Simple foods please young children. tree, cow, ship,
year, lamp, storm, coat, red.
3. The staff plans the battles. tree, cat, bus, week,
bed, snow, dress, blue.
4. The hedge conceals the garden. bush, dog, truck,
month, desk, snow, shirt, white.
5. Heavy tractors pull the missiles. The burn is pain­
ful. stream, dog, bus, hour, desk, rain, hat, green.
6. The truth shocks the mayor. The need is great. rock,
cow, ship, day, chair, sleet, hat, blue.
7. The gulls attack the pelicans. grass, horse, truck,
hour, lamp, sleet, shirt, white.
8. The bank offers commercial services. stream, dog,
train, hour, chair, rain, dress, white.
9. Rebel leaders control the army. The army is rebel.
bush, cow, bus, day, desk, hail, hat, green.
10. A submarine supplies the laboratory. The house is
small. tree, cat, ship, year, lamp, snow, coat,
black.
11. A military band plays marches. The band is military.
stream, horse, bus, year, bed, storm, hat, black.
12. Kind thoughts encourage good deeds. stream, horse,
truck, month, rug, storm, suit, red.
13. The girl drops the pencils. The pencils are new.
grass, lion, truck, hour, bed, rain, hat, green.
14. Radical ideas thrill the minority. The minority is
radical. bush, dog, bus, hour, bed, sleet, hat,
black.
15. The pattern rests the eyes. The dishes are new. rock,
cat, car, day, chair, hail, dress, green.
16. The old horse trots home. tree, lion, car, hour,
lamp, sleet, coat, blue.
17. A soldier cleans the cannon. rock, cow, bus, year,
rug, storm, shirt, black.
18. Wild burros eat desert plants. grass, lion, ship,
hour, desk, sleet, suit, white.
19. The magician visits sick children. tree, dog, ship,
hour, lamp, rain, coat, red.
20. The stress stops normal growth. The growth is normal.
bush, cat, car, week, chair, snow, hat, green.
21. The Gods envy the mortals. The Gods are ancient.
stream, horse, train, year, chair, storm, suit, green.
22. The natives eat insect grubs. The staff is tired.
rock, cow, car, year, rug, hail, coat, white.
23. The dealer buys local products. The products are
local. tree, lion, truck, day, bed, snow, shirt,
green.
24. The awnings shade narrow streets. The awnings are
narrow. bush, horse, bus, month, rug, storm, dress,
green.
STIMULUS SET IV— EXPERIMENT II
1. The hedge conceals a garden. stream, cow, ship, year,
rug, sleet, coat, green.
2. The bank offers commercial services. The tree is
dead. bush, cow, bus, year, lamp, snow, hat, red.
3. The old horse trots home. The burn is painful.
stream, dog, train, day, chair, rain, dress, white.
4. A submarine supplies the laboratory. The pencils are
new. bush, lion, truck, year, lamp, hail, dress,
white.
5. The staff plans the battles. The battles are fierce.
grass, dog, truck, month, bed, snow, hat, green.
6. Heavy tractors pull the missiles. grass, horse,
truck, month, rug, storm, suit, red.
7. Simple foods please young children. stream, lion,
bus, year, bed, snow, hat, white.
8. The Gods envy the mortals. The mortals are ancient.
bush, dog, bus, hour, desk, rain, hat, green.
9. The magician visits sick children. The magician is
sick. rock, dog, truck, week, lamp, rain, hat, green.
10. A soldier cleans the cannon. The soldier is tall.
bush, dog, bus, year, rug, sleet, dress, green.
11. Wild burros eat desert plants. tree, cat, ship, hour,
rug, snow, coat, white.
12. Rebel leaders control the army. The leaders are
rebels. grass, cow, car, month, rug, snow, dress,
white.
13. The natives eat insect grubs. The natives are in­
sects. tree, dog, truck, month, bed, hail, coat,
green.
14. The awnings shade narrow streets. stream, horse, bus,
day, lamp, storm, hat, black.
15. The pattern rests the eyes. The need is great.
rock, horse, car, hour, lamp, snow, suit, white.
16. A military band plays marches. The dishes are new.
tree, cow, ship, hour, chair, storm, coat, white.
17. Radical ideas thrill the minority. The ideas are
radical. stream, horse, train, year, chair, suit,
green.
18. Kind thoughts encourage good deeds. bush, cat, car,
week, chair, snow, hat, black.
19. The gulls attack the pelicans. tree, lion, truck,
day, bed, snow, shirt, green.
20. The stress stops normal growth. stream, cat, car,
day, rug, hail, suit, black.
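Across the four stimulus sets above, each Experiment II item consists of a test sentence, in some conditions a second short sentence, and always an eight-word filler list given in a fixed category order. A minimal sketch of one item as a record follows; the field names and the category labels are inferred from the lists and are not the study's terminology.

from dataclasses import dataclass
from typing import List, Optional

# Category labels are guesses at the eight fixed categories visible in the
# filler lists above (e.g., "rock, cow, ship, day, chair, sleet, hat, blue").
FILLER_CATEGORIES = ["nature", "animal", "vehicle", "time span",
                     "furniture", "weather", "clothing", "color"]

@dataclass
class Item:
    test_sentence: str
    comparison: Optional[str]   # second sentence; None for items without one
    fillers: List[str]          # eight words, one per category, in fixed order

# Item 1 of Stimulus Set I, transcribed from the appendix.
item = Item(
    test_sentence="The dealer buys local products.",
    comparison="The dealer is local.",
    fillers=["rock", "cow", "ship", "day", "chair", "sleet", "hat", "blue"],
)

assert len(item.fillers) == len(FILLER_CATEGORIES)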
APPENDIX B
BASIC DATA
BASIC DATA FOR EXPERIMENT I
(Mean Number of Words Recalled after Each Sentence Type)
Subject                        Sentence Type
Number   Group   K      P      Neg    PQ     NegP   Asub   Aob    NegPQ
36 I 4.67 3.67 4.33 3.67 2.67 3.67 4.00 3.33
31 I 4.33 4.00 3.33 4.00 4.00 4.67 3.00 3.67
30 I 4.67 5.00 3.67 4.00* 3.67 3.67 4.00 3.67
34 I 5.33 4.67 3.33 4.33 3.33 3.00 4.33 2.33
35 I 4.33 4.67 4.00* 3.67 4.00 4.33 4.00 3.67
33 II 5.00 5.33 4.33 3.67 4.00 4.00 3.00 2.67
32 II 4.33 4.33 4.67 3.00 3.00 2.67 4.00 3.00
29 II 4.67 4.67 4.33 4.33 4.67 4.00 3.33 3.33
39 II 6.33 5.67 5.00 4.67 4.67 4.33 4.67 4.00*
41 II 6.33 6.00 6.00 5.33 5.00 6.00* 5.67 5.33
44 III 7.33 6.67 6.00 6.33 5.67 6.33 6.33 6.00
45 III 4.67 5.33 4.33 4.00 3.33 3.67 4.00 3.33
49 III 6.67 4.67 5.00 5.00 4.67 4.67 5.00* 4.33
38 III 5.67 4.00* 5.33 4.33 3.67 4.33 3.33 3.33
42 IV 6.67 4.33 5.67 5.67 5.33 4.00 5.33 4.00
40 IV 6.33 5.33 4.67 5.33 5.00 5.00 4.00 4.00*
46 IV 5.00 5.00 5.00 5.00 4.67 4.67 3.00 4.33
43 IV 7.00 6.33 6.33 6.00 5.00 5.33 5.33 6.00
47 IV 4.33 3.33 5.33 4.33 4.00* 3.33 3.00 2.67
*Indicates a mean based upon the recall of two sentences.
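Each row of the table above is one subject's mean recall for each sentence type. As a minimal sketch of how these columns could be summarized, the fragment below averages across subjects; only the first two rows are entered as examples, and this is an illustration, not the analysis reported in the body of the study.

# Minimal sketch: column means for the Experiment I table above. Only the rows
# for subjects 36 and 31 are entered here; the remaining rows would be added
# the same way.
SENTENCE_TYPES = ["K", "P", "Neg", "PQ", "NegP", "Asub", "Aob", "NegPQ"]

rows = {
    36: [4.67, 3.67, 4.33, 3.67, 2.67, 3.67, 4.00, 3.33],
    31: [4.33, 4.00, 3.33, 4.00, 4.00, 4.67, 3.00, 3.67],
}

means = {t: sum(r[i] for r in rows.values()) / len(rows)
         for i, t in enumerate(SENTENCE_TYPES)}
print(means)   # {'K': 4.5, 'P': 3.835, ...}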
BASIC DATA FOR EXPERIMENT II
(Mean Number of Words Recalled after Each Sentence Type)
Subject                        Sentence Type
Number   Group   K      K+Cf   K+Cn   A      A+Ct   A+Cf   A+Cn   AA
53 I 3.00 3.33 3.00 3.67 3.33 2.67 1.67 1.67
54 I 5.67 4.67 4.00 5.00 4.33 5.33 2.33 3.33
55 I 5.00 4.67 4.33 4.50* 5.00 4.33 3.33 4.67
62 I 5.67 4.33 4.00 4.67 5.33 4.00 4.33 3.33
76 I 4.33 3.33 2.00* 3.33 3.33 3.33 2.33 1.67
65 I 4.67 2.67 2.33 3.33 3.33 3.00 2.00 2.00
81 I 5.00 4.00 3.67 4.67 4.67 4.33 2.50* 4.00*
89 II 5.00 2.67 3.33 4.00 4.00 3.67 4.00 3.67
91 II 4.33 3.00 3.33 3.67 3.00 3.67 3.33 3.33
92 II 6.00 4.67 4.33 4.67 4.33 3.67 3.33 3.33
87 II 5.00 4.00 3.00 4.33 4.67 3.33 2.67 2.33
85 II 6.67 5.33 4.67 6.00 5.67 5.67 5.33 5.00
88 II 5.33 4.67 3.67 4.33 4.33 4.33 3.33 3.33
42 II 6.00 4.00 4.00 4.67 4.33 3.50* 3.00 3.00
51 III 6.00 3.33 4.00 5.33 4.33 4.33 3.33 3.33
60 III 4.00 3.00* 2.00 3.33 3.00 2.67 3.00 2.33
45 III 5.00 1.00 2.00 4.67 4.33 3.67 2.00 4.00
74 III 4.33 4.00 1.00* 4.67 4.33 3.67 2.67 2.67
73 III 5.67 5.33 4.00 5.33 5.33 5.00 4.00 4.33
75 III 5.00 4.00 3.00 4.00 4.67 4.33 3.00* 2.33
58 III 6.33 4.00 3.33 5.33 5.33 4.00 4.67 4.00
56 IV 3.33 2.33 1.67 3.00 2.67 2.33 1.67 2.67
57 IV 4.67 3.67 4.00 4.67 4.00* 4.33 2.50* 2.67
77 IV 6.67 6.33 4.67 6.00 5.67 5.00 4.67 4.00
78 IV 5.33 5.00* 3.00 5.33 5.67 5.00 4.33 4.67
80 IV 4.67 4.00 2.33 4.33 4.33 3.67 3.67 3.33
43 IV 5.00* 3.67 3.00 3.67 3.33 2.33 3.00 2.00
79 IV 4.33 3.33 3.33 3.67 4.00 4.00 3.67 2.50
*This mean is based upon recall of two correct sentences.
APPENDIX C
INSTRUCTIONS TO SUBJECTS
SHORT-TERM MEMORY FOR WORDS
This is a study in how people use language. We
hope to get at this by measuring the load on the short­
term memory created by several different kinds of simple
sentences of five or six words. To measure the load cre­
ated by a sentence, we use the concept of a memory capacity
or memory span. A person can repeat seven or eight digits
in immediate recall; this is his memory span for numbers.
There are also memory spans for other units, including
unrelated words.
In sentences the words are not unrelated. They
seem to group together into larger units; perhaps the subject
and predicate are units that are stored. If this is the case,
the person can store more words as sentences than as unrelated
lists. We could measure how much of the short-term memory was
not used by the sentence to estimate the load on memory the
sentence creates.
To measure the amount of the memory capacity not
used to store the sentence, we fill the memory to capacity
with a list of unrelated words. This is done by reading
the sentence and a list of words immediately after the sen­
tence and asking that both be recalled. For example:
Vines cover the dying tree.
(list) ROCK, SHEEP, BUS, MONTH, LAMP, SLEET,
HAT, GREEN
As subjects, your task will be to remember the
sentence correctly, plus as many of the words as possible.
However, the data will be useless if you do not get the
sentence correct.
The list of words will be drawn from the eight
categories listed on the board. The words will always ap­
pear in the order given on the board. You may list the
words in any order they come to mind. Use the category
list to jog your memory. There are a limited number of
words in each category, so they will repeat in some random
fashion. Please do not feel that you must guess a word
for each category. We are interested in your memory span,
not your ability to guess. The usual performance is 3 to
6 words plus the sentence correct.
We will try several practice sentences on the
back of this sheet and then move into the experiment
proper. The "for keeps" responses will be listed on the
back of IBM cards, four sentences to a card. Please use
the cards horizontally. Write the sentence and below
list the words recalled as in the sample sentence above.
Listen carefully to both the sentence and the list.
When I reach the end of the list, I will say "go." Imme­
diately write down the sentence and as many of the words
as you can remember. The sentence must be correct. When
you finish please look at me so that I will know you are
ready for the next sentence. Are there any questions? 
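The scoring rule stated in these instructions (the sentence must be reproduced correctly before any filler words count, and the words may be recalled in any order) can be summarized in a short sketch. The exact-wording test below is an assumption made for the illustration; it is not offered as the criterion the scorers actually applied.

def score_trial(sentence, word_list, recalled_sentence, recalled_words):
    """Count filler words recalled, but only when the sentence itself was
    reproduced correctly. An exact, case-insensitive match is assumed here
    as the correctness test, which is a simplification."""
    if recalled_sentence.strip().lower() != sentence.strip().lower():
        return None   # sentence wrong: the trial yields no usable count
    targets = {w.lower() for w in word_list}
    recalled = {w.lower() for w in recalled_words}
    # Order of recall does not matter; guesses not on the list are ignored.
    return len(recalled & targets)

# The practice item from the instructions above:
n = score_trial(
    "Vines cover the dying tree.",
    ["rock", "sheep", "bus", "month", "lamp", "sleet", "hat", "green"],
    "Vines cover the dying tree.",
    ["hat", "rock", "green", "cow"],   # "cow" was not on the list, so it is not counted
)
print(n)   # 3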