DECISION MAKING: AN INDIVIDUALLY
PARAMETERIZED DETERMINISTIC MODEL
by
Richard Shelly Lynn
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Psychology)
August 1970
71-12,399

LYNN, Richard Shelly, 1934-
DECISION MAKING: AN INDIVIDUALLY PARAMETERIZED DETERMINISTIC MODEL.

University of Southern California, Ph.D., 1970
Psychology, experimental

University Microfilms, A XEROX Company, Ann Arbor, Michigan

THIS DISSERTATION HAS BEEN MICROFILMED EXACTLY AS RECEIVED
UNIVERSITY OF SOUTHERN CALIFORNIA
THE GRADUATE SCHOOL
UNIVERSITY PARK
LOS ANGELES, CALIFORNIA 90007
This dissertation, written by

Richard Shelly Lynn

under the direction of his Dissertation Committee, and approved by all its members, has been presented to and accepted by The Graduate School, in partial fulfillment of requirements of the degree of

DOCTOR OF PHILOSOPHY

Dean

Date: August 1970

DISSERTATION COMMITTEE

Chair
PLEASE NOTE: Some pages have small and indistinct type. Filmed as received. University Microfilms
ACKNOWLEDGMENTS
I wish to acknowledge my appreciation for the advice and criticism offered by Drs. Norman Cliff, Ronald Weitzman, and William Michael, my dissertation committee. I am thankful to Dr. Albert Marston for setting me back on the correct track during a difficult period in the research. I am indebted to colleagues Frank Dean and Maurice Braun for their suggestions on evaluation of theory parameters. I wish to acknowledge the influence of the papers of Nico Frijda, Earl Hunt, and Lee Gregg and Herbert Simon.

My graduate training in psychological measurement began and continued because of the kindness and encouragement of Dr. J. P. Guilford. I shall always be grateful to my wife and children, who kept faith during the long years of study, research, and writing.
TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF ILLUSTRATIONS

Chapter

I. PURPOSE OF THE STUDY

II. BACKGROUND OF THE STUDY
        Computer Models
        Artificial Intelligence
        Existing Game-Playing and Problem-Solving Machines

III. THE EXPERIMENT
        The Game
        Experimental Procedure
        Subjects
        Apparatus
        Analysis of Verbal Behavior

IV. THE MODEL
        Induction of Process Rules
        Development of the Computer Program
        Simulation Program Structure and Function
        Derivation of Individual Parameter Values

V. EVALUATION OF THE MODEL .......... 48
        Simulation of Individual Subjects
        Sensitivity of Model to Parametric Variation
        Cross-Validation of the Model

VI. SUMMARY AND CONCLUSIONS .......... 75

APPENDIXES
        Appendix A: Game Instructions .......... 84
        Appendix B: Excerpts from All Protocols .......... 87
        Appendix C: Synopsis of Decisions for All Subjects .......... 113
        Appendix D: The Simulation Program Source Code .......... 118
        Appendix E: Example Simulations .......... 126

REFERENCES .......... 132
LIST OF TABLES

Table                                                        Page
1. Pre-Game Questionnaire Responses .......... 19
2. Parameter Estimates .......... 47
3. Summary Statistics .......... 49
4. Meaning of Bits .......... 59
5. Average Simulation Error Per Cent Associated with Each Model Variation .......... 60
6. Partitioning of Average Error of Simulation .......... 63
7. Error Per Cent Summary .......... 70
8. Error Matrices for 16 Analyzed Hands .......... 71
9. Error Matrices for 14 Holdout Hands, Using Parameters Estimated from the 16 Analyzed Hands .......... 72
10. Error Matrices for 14 Holdout Hands, Using Parameters Estimated from the Holdout Hands .......... 73
LIST OF ILLUSTRATIONS

Figure                                                       Page
1. Flow Chart of Decision-Making Model .......... 35
CHAPTER I
PURPOSE OF THE STUDY
The purpose of the research was to study and develop methodology applicable to computer simulation of manifest behavior in simple decision making or game playing situations, and to the evaluation of the adequacy of the simulation. It was not the goal of the research to learn anything about problem solving, choice making, or any other kind of cognitive behavior. The techniques to be described might, however, be employed by researchers who do have such goals. The research was not experimental, in the usual sense of this word. No hypotheses regarding natural laws were entertained or tested. No assumptions were made about the population of Ss sampled, and no generalizations beyond the particular sample were made. The sample of Ss was too small and too homogeneous, and the sample of stimuli too limited, to permit such generalization. From its inception, the research was not experimental, but methodological.
A simple experimental task requiring a sequence of binary choices was presented in the form of a one-person game. The particular game used was not important to the purpose; the basic research approach would have been the same for a wide variety of tasks. It is of course simpler if the choices are binary, but this is not necessary. The stimuli should be easily quantifiable to facilitate analysis, simulation, and evaluation. The memory load imposed by the task should be assumed to be negligible; i.e., all stimuli should remain in view after presentation. Each S must receive the same stimuli in the same sequence. The methodology is especially applicable in situations in which the effects of each decision are propagated throughout the remainder of the sequence. The methodology, in whole or in part, has potential applications to the study of a variety of natural decision making problems. Possible examples are: bid or no bid on a request for proposal, acceptance or rejection of a proposal, securities portfolio selection, and personnel decisions such as hiring, promoting, layoff, and firing. However, no analysis of any natural decision making was performed to support the suggestion of applicability.

Some of the techniques employed in this research were suggested by the background literature. Other techniques were invented, only to be discovered later in more extensive reading. The methodology represented by the particular combination of techniques is offered as being unique.
The model is based upon formally stated processes which are strictly deterministic. A formal process model may be heavily dependent upon stochastic variables for several reasons. A theorist may take the approach that random variables may be used to simulate the effects of a number of variables he chooses to ignore. Another theorist may admit that he uses random variables to bridge the gaps in his knowledge of the natural processes. Without explicit reference to the need to span areas of ignorance, a theory may posit that the behavior being simulated, at some points in an otherwise deterministic process, is fundamentally random by nature. Stochastic elements were completely excluded from the model in this research partly for esthetic reasons and partly because the quantitative evaluation employed demanded it.
The computer program mechanizing the model makes decisions which are in effect predictions of what an individual S will do. When these predictions are in error, the program is set back on the correct track. If an incorrect move were allowed to stand, the program and S being simulated would diverge, and the situations faced by the machine and the human would no longer be the same. The bases from which the machine made its subsequent moves would be different from those used by the S. The setting-back-on-the-track technique is fundamental and essential to the methodology of the research. This technique is necessary because of the conditional prediction effect, described by Feldman (1963). The use of setting-back-on-the-track in this research is due to Feldman.
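The setting-back-on-the-track technique can be sketched in a few lines. The following is an illustrative modern rendering, not the dissertation's Fortran program; the names `predict` and `apply_choice` are hypothetical stand-ins for the model's decision rule and the game's state update.

```python
def simulate_on_track(cards, subject_choices, predict, apply_choice, state):
    """Run the model over one hand, counting mispredictions while always
    applying the subject's actual choice, so model and subject never diverge."""
    errors = 0
    for card, actual in zip(cards, subject_choices):
        predicted = predict(state, card)            # model's deterministic decision
        if predicted != actual:
            errors += 1                             # record the misprediction...
        state = apply_choice(state, card, actual)   # ...but keep S's actual move
    return errors
```

The essential point is the last line of the loop: the state is advanced with the subject's move, never the model's, so every prediction is made from the situation the subject actually faced.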
The choice of a programming language in which to express the model is definitely a methodological matter, with significant implications for psychological theory. Some of these implications have been described by Newell and Simon (1963b):

    To what extent do we make implicit assumptions of psychological theory when we decide to write a simulation program in an information-processing language? . . . The mode of expression almost surely influences the thought. . . . It is probable that psychological postulates enter the simulation by way of the structure of the programming language. . . . We conclude that a list-processing language like IPL-V is a (weak) psychological theory. It is an assertion that the elementary information processes that will be discovered to underlie human behavior are easily constructable from the instructions of the list-processing system, that its memory is organized in terms of lists and list structures of associated symbols, and that it is capable of executing sequences of behaviors organized as hierarchical list structures. (pp. 422-425)
It must not be understood from the above quotation that programming languages other than IPL (Information Processing Language) could not fairly easily express the same structures or that IPL must necessarily mechanize only structures of a certain kind. What is true is that the use of IPL tends to facilitate characteristic approaches to the modeling of human behavior. This is one of the reasons for not using IPL in the present research; it would have tended to give direction to the analysis and simulation process when what was wanted was a more naive approach. The programming language chosen was a dialect of Fortran. Fortran is perhaps more neutral with respect to psychological theory implications than other higher level languages, and has the considerable advantage of being available as a standard (if not the only) language at nearly every computing installation.
As a matter of methodology, the analysis which led to the model was intended to be free of hypotheses, no matter how plausible, about the processes the Ss brought to bear upon the decision making task. No particular notions of memory processes or cognitive concepts were assumed prior to detailed protocol examination. It was intended that the protocols speak for themselves, at least in early analysis.
The model predicts the behavior of individual Ss, not general or ideal behavior. The idiographic approach was selected because of an interest in predicting individual variations in behavior. Several Ss were used, not just one or two. The model was to be structurally homogeneous for all Ss. To serve as a model for several different individuals, parameterization was needed. Just as Weizenbaum's (1966) Eliza program works from different scripts to produce different verbal behavior, the model in this research works from profiles of different parameter values to behave as different subjects.
Frijda (1967) asks,

    When a program is constructed on the basis of a given set of data, how can we make sure that it applies to other sets? With abstract tasks there is not too much difficulty. One varies the inputs for both subjects and program. (p. 66)

This implies cross-validation. The model is individually parameterized, using parameter estimates which are optimal for the sample of behavior analyzed. The process of finding optimal values for these parameters capitalizes upon chance. Any model which is supposed to simulate human behavior, but which is not cross-validated, is of unknown generality or specificity. (However, a model which attempts to predict the behavior of several Ss is necessarily of some generality, in its structure at least.) The research employs cross-validation as a means of evaluating the generality of the estimated values of individual parameters.
Fundamental to the original purpose of the research was the development of a quantified measure of goodness of simulation, preferably a measure which could be directly related to a classical distribution function. In contrast with this approach would be the usual reliance upon subjective evaluation of model output, used with nearly all process models. Stochastic theories, typified by statistical learning theories, are also evaluated by essentially subjective techniques, but even when highly quantified techniques are used, they "must be interpreted as validations of the laws of probability rather than of the psychological assumptions of the theories, and . . . the classical tests of statistical significance cannot properly be applied to testing theories of this kind" (Gregg and Simon, 1967, p. 275). Because the model was not to include stochastic elements, there was the hope that classical tests of statistical significance could properly be applied.
It was the purpose of this research to study a methodology composed of techniques and approaches which were determined prior to the gathering of any experimental data. In summary, the methodology included: (a) a strictly deterministic process model, (b) with predictions kept on the correct track, (c) mechanized by a computer program written in Fortran, (d) based upon decision making processes empirically induced from verbatim protocols, (e) which predicted the behavior of several Ss, based upon individual parameterization, (f) with cross-validation input data, and (g) evaluated by quantitative measures of goodness of fit.
CHAPTER II
BACKGROUND OF THE STUDY
Computer Models
A theory to account for individual playing behavior in a simple card game was defined pursuant to detailed analysis of verbatim protocols. The theory is similar to an information processing model. This class of models has been described by Feigenbaum (1963) as follows:

    a. These are models of mental processes, not brain hardware. They are psychological models of mental function. No physiological or neurological assumptions are made, nor is any attempt made to explain information processes in terms of more elementary neural processes.

    b. These models conceive of the brain as an information processor with sense organs as input channels, effector organs as output devices, and with internal programs for testing, comparing, analyzing, rearranging, and storing information.

    c. The central processing mechanism is assumed to be serial; i.e., capable of doing only one (or a very few) things at a time.

    d. These models use as a basic unit the information symbol; i.e., a pattern of bits which is assumed to be the brain's internal representation of environmental data.

    e. These models are essentially deterministic, not probabilistic. Random variables play no fundamental role in them. (pp. 297-298)
Similarly, Simon and Newell (1966) remark:

    Information-processing theories of human thinking employ unobserved entities—symbols—and unobserved processes—elementary information processes. The theories provide explanations of behavior that are mechanistic without being physiological. That they are mechanistic—that they postulate only processes capable of being effected by mechanism—is guaranteed by simulating the behavior predicted on ordinary digital computers. Simulation provides a basis for testing the predictions of the theories but does not imply that the protoplasm in the brain resembles the electronic components of the computer. (p. 337)
Basically, the theory posits that the subject inputs relevant features of the environment as information, processes this information through stored procedures, changes the environment, and modifies the stored information. Differential employment of specific processes would ideally be sufficient to account for all observed individual differences in behavior. The theory is expressed as a process model (Gregg and Simon, 1967) in the form of a computer program. Computer simulation of psychological processes is one facet of the research. Frijda (1967) describes functions in which this simulation can be of value:

    Computer programs can serve as unambiguous formulations of a theory. The program language is precise; the meaning of a given process is fully defined by what it does. . . . Computer simulation is a means to demonstrate and test the consistency and sufficiency of a theory. If the behavioral data which the theory wants to explain are in fact reproduced by running the program, the theory has been proved capable of explaining these facts. . . . Extensive experimentation is possible by running different versions of the program; decreasing or increasing fit with behavioral data can indicate the role of various components and parameters. (p. 59)
Artificial Intelligence
The research was centered on quantitative methodology in psychological theory construction and testing. Because of its computer mechanized aspects, it has a secondary relationship with artificial intelligence. The goal of artificial intelligence is the construction of machines that exhibit behavior one would call intelligent if it were observed in animals.
Slagle (1967) writes:

    The main purposes of Artificial Intelligence and therefore heuristic programming are to understand (human) intelligence and to use machine intelligence to acquire knowledge and solve intellectually difficult problems. A researcher having the first purpose is a psychologist. He observes subjects thinking aloud while trying to solve intellectually difficult problems. He constructs a model of such problem-solving in the form of a computer program. He notes how the performance of his program deviates from the performance of his subjects. He observes the subjects some more and constructs an improved model. The cycle is repeated over and over. There are two important advantages in embodying a model in a computer program. The model is completely specified and consequences of the model may be obtained by simply running the program on a computer. A researcher having the second purpose is interested in getting intelligent behavior and does not care whether the computer uses methods that people use. (pp. 3-4)
Slagle thus identifies both psychological and machine performance interests within the area of artificial intelligence. In contrast are Feigenbaum and Feldman (1963). They draw a distinction between artificial intelligence research and simulation of cognitive processes research:

    An artificial intelligence researcher interested in programming a computer to play chess would be happy only if his program played good chess, preferably better chess than the best human player. However, the researcher interested in simulating the chess-playing behavior of a given individual would be unhappy if his program played better (or worse) than that individual, for this researcher wants his program to make the same moves as the human player, regardless of whether these moves are good, bad, or indifferent. (p. 269)
Existing Game-Playing and Problem-Solving Machines
Three categories of machines may be distinguished. In the first and largest category are machines designed to behave intelligently without simulating human techniques. In the second are machines that are supposed to simulate generalized or idealized human techniques and processes. In the third and smallest category are machines that attempt to simulate specific individuals. Category 1 machines may employ search techniques that are virtually impossible for humans to use, being based upon extreme depth of search and superhuman speed and precision. Few of the game-playing machines reported in the literature examined appear clearly to be in categories 2 or 3, though the authors of the NSS chess player indicate that their machine uses some human techniques (Newell, Shaw, and Simon, 1958; see De Groot, 1965, p. 376). The General Problem Solver (GPS) seems to be a category 2 problem-solving system (Newell and Simon, 1963a; Ernst and Newell, 1967). Category 3 programs are Feldman's (1963) binary choice simulator, Clarkson's (1963) trust investment selection program, and Johnson's (1964) concept-formation model. The General Game Playing Program (GGPP) of Thomas Williams (1965) can play many games, by following rules and making legal moves. Thiele, Lemke, and Fu (1963) programmed a machine to play a modified game of hearts. Balzer (1966) has written a machine program that decides which cards to pass in the game of hearts. Berlekamp (1963) has described a program for solving no-trump double-dummy bridge problems. Except for GPS and the Johnson model, the aforementioned systems seem to be moribund. Still very much alive is the checker player of Samuel (1959). No published references have been located on the chess player of McCarthy at Stanford. Greenblatt (1967) at MIT has described a successful chess player. Both chess machines are in current research and development.
Feigenbaum's (1963) EPAM system demonstrates certain phenomena usually associated with human behavior in paired associates and serial anticipation learning of nonsense syllables. Laughery and Gregg (1962) developed a program that would follow a scheme to simulate the behavior of individual Ss in serial learning. Simon and Kotovsky (1963) say they have modeled human techniques in serial pattern learning. Findler (1966) presents an attempt to model behavior in a complex man-machine interactive problem.

Hunt has done a significant amount of research in this area and presents a good review of computer simulation studies (1968). He warns, "Teachers of general psychology should be aware that, as of this date, no program has been shown to simulate human problem solving, although there have been several programs which solve problems" (p. 160). In an overview of artificial intelligence studies, Solomonoff (1966) reviews no work of much significance to psychological theory.
CHAPTER III
THE EXPERIMENT
The Game
Instructing the subject to verbalize about his behavior is a good way to get information about aspects of behavior that are difficult to observe. In a competitive game, concurrent or post hoc verbalization would disclose strategy, etc., to the opponents and disrupt the competitive aspect of the play. Therefore, a one-person game was considered necessary. A simple card game of unknown origin was selected. It is coincidental that one S vaguely recalled a similar two-handed game. In the game, a 24-card deck of ordinary playing cards is used, ace through queen of one red and one black suit. The top 10 cards, one at a time, are exposed. The player must accept five of the 10 cards. The goal of the game is to maximize the sum of the values of the five selected cards, with ace worth one point and queen worth 12 points. The reds and blacks are summed separately, and the larger sum is the recorded score. (See the game instructions in Appendix A.)
The game was not analyzed prior to the experimental sessions. It is possible that an optimum strategy does exist, involving the expected value of the unexposed reds and blacks. Computation of these expected values requires fractions with denominators which quickly become too large for unaided calculation in any reasonable time. Regardless of the rules of thumb, heuristics, or calculations a S might use, the game consists of a sequence of binary decisions, each of which is necessarily influenced by the effects of previous decisions in the same hand. When the first of the 10 possible cards to be offered is exposed, S must decide to accept or reject it. S may reject the first five cards, in which case the acceptance of the last five is forced. If the offered card is accepted, it is held by S; otherwise it is left on the table face up. Within the first few cards, S will accept one. Now color (suit) necessarily becomes of some importance. Only the suit with the highest sum counts for the score on a hand. A priori, a bias for accepting cards of the same color as the first would seem reasonable. If S has accepted what he considers a high card of one color, he might thereafter accept only cards of that color. If he has selected only one or two cards with a modest partial sum, he may switch to the other color if a card of sufficiently high magnitude is offered. Whether his hand has one or both suits, he should select cards which will tend to give him the highest final score. He might wish to hold out for only high cards, but he knows or quickly learns that the unseen remaining cards might all be lower than the ones he has passed up. As more cards are displayed, the player has increasingly more information about the unexposed cards, but he has fewer opportunities left to make selections. The minimum number of choices is five, which obtains if S either accepts or rejects all of the first five offered cards. The maximum number of choices is nine, consisting of four rejections and five acceptances, in any order.
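The mechanics of the game just described can be made concrete in a short sketch. This is an illustrative modern rendering under the stated rules (accept exactly five of the top 10; score is the larger of the red and black sums), not the dissertation's program; the names are hypothetical.

```python
def play_hand(top_ten, policy):
    """Expose the top 10 cards one at a time. `policy(accepted, card)`
    returns True to accept. Cards are (value, color) pairs, color 'R' or
    'B', values 1 (ace) through 12 (queen). Returns the accepted cards
    and the number of free accept/reject decisions actually made."""
    accepted, rejected, decisions = [], 0, 0
    for card in top_ten:
        if len(accepted) == 5:
            break                      # quota of five filled; hand over
        if rejected == 5:
            accepted.append(card)      # acceptance of the rest is forced
            continue
        decisions += 1                 # a free binary choice
        if policy(accepted, card):
            accepted.append(card)
        else:
            rejected += 1
    return accepted, decisions

def score(accepted):
    """Reds and blacks are summed separately; the larger sum counts."""
    red = sum(v for v, c in accepted if c == 'R')
    black = sum(v for v, c in accepted if c == 'B')
    return max(red, black)
```

The minimum of five and maximum of nine free choices described above fall out of the quota logic: rejecting (or accepting) the first five cards ends the free choices after five decisions, while four rejections interleaved with acceptances stretch them to nine.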
Experimental Procedure
The 12 Ss were recruited and tested by an independent experimenter as part of a separate research project. Each S was administered the same sequence of decks, 30 in each of two one-hour sessions spaced one week apart. During the play, E displayed the top 10 cards, one at a time, calling out the color and value. S was instructed to "think aloud" as he decided whether to accept or reject each card. After all hands had been played, E conducted an inquiry, asking S what processes he employed. The verbalization was tape-recorded, and the verbatim transcription of the recordings constituted the experimental data.
Subjects
The 12 Ss, six male and six female, were enrolled in introductory psychology classes at the University of Southern California. They volunteered to participate in the experiment in partial fulfillment of an obligation imposed upon all such students. No monetary reward or other incentive to participate was offered. No incentive was employed to motivate high scores in the experimental task.

Subject responses to a pre-game questionnaire are summarized in Table 1.

The grade point average is based upon A = 4 points. The last four columns of the table are based upon responses to the following question and instructions. "Which of the following games have you played during the past 12 months? Please answer each of the following with a check mark." The games specified were blackjack, bridge, canasta, hearts, pinochle, poker, and rummy. The checks could be made under "often" (average of once per week), "sometimes" (average of once per month), "seldom" (average of four times a year), and "never."

TABLE 1

PRE-GAME QUESTIONNAIRE RESPONSES

Subject                                         Grade    Enjoy   Frequency of Playing 7 Games
Number   Sex  Class   Age  Major                Average  Cards?  Often  Sometimes  Seldom  Never
 1       F    Fresh   19   Dental hygiene       2.75     Yes     0      2          1       4
 2       F    Soph    18   Drama                3.25     No      0      0          2       5
 3       F    Soph    20   Elem educ            2.3      Yes     0      1          3       3
 4       M    Soph    19   Philosophy           3.0      No      0      0          0       7
 5       M    Fresh   19   Bus admin            3.0      No      0      3          2       2
 6       M    Fresh   18   Premedical           3.2      Yes     0      2          2       3
 7       F    Soph    19   Physical therapy     2.7      Yes     0      2          2       3
 8       M    Senior  22   Political science    3.25     Yes     0      2          2       3
 9       M    Fresh   19   Premedical           3.0      Yes     0      2          2       3
10       F    Fresh   18   Biology              3.25     Yes     0      0          1       6
11       M    Junior  20   Mech engrg           3.25     Yes     0      0          5       2
12       F    Fresh   17   Political science    3.0      Yes     0      0          3       4
Apparatus
Ordinary playing cards were used as stimulus materials. Because the decks consisted of only two suits, one red and one black, aces through queens, each standard 52-card deck yielded two 24-card decks. A computer program was used to shuffle the cards in a pseudorandom arrangement. Sixty independent shufflings were used to produce the 60 decks used in the experiment. For each trial, the integers 1 through 24 were randomly permuted. With the cards arranged black ace through black queen and red ace through red queen, numerals representing the permuted integers were marked on the backs with a felt-tip pen. Arranging the decks with the numerals in monotonically increasing order produced the desired random shuffle. By marking the trial number also on the back of each card, it was simple to present each S with the same sequence of stimuli by reordering the cards between sessions. A tape recorder was used to record S's response to each stimulus and his concurrent verbalization. A room ordinarily used for research with human Ss was employed.
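The marking-and-sorting shuffle described above can be sketched as follows. This is a software analogue of the procedure, not the 1970 program: the i-th card of the ordered deck receives the i-th permuted integer as its "mark," and arranging the marks in increasing order yields the shuffled deck.

```python
import random

def ordered_deck():
    # Black ace through black queen, then red ace through red queen.
    return [(v, c) for c in ('B', 'R') for v in range(1, 13)]

def marked_shuffle(rng=random):
    """Shuffle by randomly permuting the integers 1-24, 'marking' each
    card of the ordered deck with one integer, and sorting by mark."""
    deck = ordered_deck()
    marks = list(range(1, 25))
    rng.shuffle(marks)                      # the permuted integers
    marked = list(zip(marks, deck))         # one mark per card back
    marked.sort(key=lambda mc: mc[0])       # arrange marks in increasing order
    return [card for _, card in marked]
```

Sorting by a random permutation of 1-24 is equivalent to drawing a uniformly random ordering of the deck, which is why sixty independent permutations gave sixty independent deck orders.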
21
Analysis of Verbal Behavior
Ss were instructed to "think aloud" while responding to the offered cards. They were not, however, encouraged to be especially productive verbally. To have reminded Ss frequently to explain why they accepted or rejected each card would have been disruptive. Furthermore, it may have been threatening in instances where choices might have seemed unwise upon second thought. Requiring a reason for each choice would have tended to produce behavior that was more than ordinarily analytic for most Ss. Whatever the disadvantages of oversoliciting verbal behavior concurrent with choice making, it appears in retrospect that the instructions should have strongly encouraged each S to give a reason for each choice, even if in a highly abbreviated manner. E could have reminded Ss to give reasons when verbalization became too sparse. Only a few Ss gave reasons for most of their choices. This more or less meager verbal component to the game-playing behavior dictated that the analysis focus upon a single bit of information at each choice point rather than upon stated reasons for making a move. This facilitated automatic analysis of the behavior, but it was not included in the preliminary research scheme.
After all 60 hands were played, each S was asked if he had developed any strategy or style of play. The reasons given were not sufficient in themselves to permit development of a theory to account for the playing behavior. Ss did not necessarily play the way they said they had. However, results of this inquiry, plus the verbalization during play, were useful in suggesting a model and eliminating some a priori expectations. For example, Ss were not quantitative in approach. One S stated that he played as if both colors were equally likely to be offered, although he had the opportunity to notice from the exposed cards that the cards not yet offered would be relatively richer in one color or the other. No complex strategies were verbalized, hence the model had to be correspondingly simple. Any theory developed had to be consistent with the verbalizations during play or inquiry, taking into consideration, however, the inconsistency in playing behavior itself.
CHAPTER IV
THE MODEL
Induction of Process Rules
The first session provided an opportunity for the Ss to overlearn the game and to develop stable playing behavior. The second session of 30 hands provided the experimental data. The transcriptions were arranged with two hands per page. To permit cross-validation, alternate pages were set aside before detailed analysis was begun. Sixteen hands thus constituted the corpus for derivation of a feasible model and parameter values. Basic to the research was the notion of a single model nomothetic in structure but idiosyncratic in function, with the idiosyncrasy due to individual parameter values. No a priori ideas about the structure dictated the analysis, however. Having no information about the subjects except that contained in the protocols, the analyst did not know that of the 12 students only one had an academic major interest related to quantitative methods. Point-counting strategies for the game of blackjack have been well publicized, so some similarly quantitative approach was the object of a shallow search through the protocols, especially the postplay summary statement by the Ss. No point-counting strategy was apparent. One protocol was then selected for detailed analysis, move by move. The protocol selected was that of S11, a relatively verbal S, who also had the highest average score over the entire 60 hands.
The recorded verbal content alone was not enough to
permit induction of a theory to account for the decisions
made during play. (See the protocol of S11 in Appendix B.)
Serious protocol analysis was begun with an attempt to encode
detailed behavior. It was hoped that some basis for
understanding might emerge once the essential elements were
exposed by stripping away the not-very-helpful verbal material.
Hand 31 was thus encoded: eight decisions. It became
apparent, even with this very small sample, that context was
involved and that no simple coding of each decision would be
adequate. What seemed better was encoding of the entire
hand. For S11 on H31, the very first hand analyzed, no
understanding emerged from the sequence of the successive
decisions, which were related, of course, to the offered
cards. Examining the eight decisions as a whole suggested
that a 7 was high enough to accept as the first card, a 12
was high enough to accept in the opposite color, and that a
4 was too low to accept any time. Two more hands were analyzed
for these three parameters, after which it seemed
that three were insufficient. The insufficiency was indicated
in H35. After the first accepted card, no others were
rejected. The notion of dominance was defined to facilitate
a description of taking cards of opposite colors. A color
was said to be dominant if the sum of the values of the
cards already accepted of that color was greater than the
corresponding sum for the other color. Before any cards
were taken, neither color was dominant, and tied sums are
not especially rare. The first four hands were scored for
five parameters: highest reject before acceptance, first
acceptance, lowest acceptance of nondominant card, highest
reject of dominant card, and highest reject of nondominant
card. Then, because more context information seemed important,
the dominant sum at the time of the decision making
was linked to four of the previously mentioned parameters,
and lowest acceptance of a dominant card was added. That
made a total of 10 parameters. In addition, the number of
changes in dominance (which might be called color switching)
was recorded, as was the number of forced acceptances. Note
that forced acceptances were not counted as choices, and no
notice was taken of the color or magnitude of these cards in
any analysis. The first seven hands were scored or rescored
for these 12 kinds of information. Because color switching
may occur four times in a hand, the number of entries for a
hand under this scoring scheme may exceed 12.
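The dominance rule above can be sketched directly; the following is a minimal illustration (hypothetical Python helper names; the original analysis was manual and the later simulation was written in Fortran):

```python
def dominant_color(hand):
    """Return 'red', 'black', or None for a list of accepted (color, value) cards.

    A color is dominant when the sum of accepted card values in that color
    exceeds the sum in the other color; an empty hand or tied sums leave
    neither color dominant, as defined in the text.
    """
    red = sum(v for c, v in hand if c == "red")
    black = sum(v for c, v in hand if c == "black")
    if red > black:
        return "red"
    if black > red:
        return "black"
    return None

# Before any cards are taken, neither color is dominant.
assert dominant_color([]) is None
# 7 red against 6 black: red is dominant.
assert dominant_color([("red", 7), ("black", 6)]) == "red"
# Tied sums, which the text notes are not especially rare, leave neither dominant.
assert dominant_color([("red", 7), ("black", 7)]) is None
```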
In the first seven hands analyzed, S11 both accepted
and rejected 6 in considering the first card to count toward
his score. In trying to put this into context, it was noted
that a 6 was accepted as the fifth or third offer, but not
as the first. This called for addition of another statistic—
the number of cards rejected before first acceptance. To
facilitate subsequent computation of goodness of simulation,
another statistic was added to the list— number of choices.
All 16 hands were scored for these 14 statistics. The analyst
then attempted to organize all this information to develop
the first model, one that would describe the play of
S11.
Three pieces of information were recorded about the
first acceptance: highest reject before acceptance, the
number of cards offered before acceptance, and the magnitude
of the first accepted card. On H51, S11 accepted a 5 after
having already rejected a 5 and a 6. This, plus his accepting
a 6 as the fifth- or third- but not the first-offered
card, led to this induction: the threshold for first acceptance
changed as a function of the number of cards offered.
For S11, it was true that he always accepted a 7 or higher
and accepted as low as 5 on the third or subsequent offer.
This kind of behavior— changing the threshold as play progressed—
became a candidate for inclusion in the model.
The structure of the model related to the first acceptance
seemed satisfactory.
Attention next turned to acceptance of nondominant
color cards— color switching. For the 16 trials analyzed,
the highest reject of nondominant cards ranged from 2 to 12.
There were six rejects of 12. Yet S11 four times accepted
nondominant cards such as 6, 8, and 9. Clearly, something
besides magnitude would be needed to describe his behavior
in accepting the nondominant color. That something was a
context factor— the dominant sum at the time of the decision.
Thirteen times out of 16 when S accepted a nondominant
color he shifted the dominance margin— the amount by
which the dominant sum exceeded the nondominant sum. Put
another way, his partial score at that point would be higher
if he took the nondominant color card. Twice he took as the
28
second card a card of the same magnitude but opposite color.
But twice he rejected the same kind of offer. He verbalized
about "opening up options" by taking a card of opposite
color. The analyst formed the hypothesis that Sll's play
with respect to nondominant cards could be described by this
I
rule: if you have already accepted a card, accept the same
value in the opposite color to open up options. After acÂ
cepting two or more cards, you accept a card of nondominant
color only if that color will become dominant if the card is
accepted. On H60, Sll took an 11 red when his partial sum
in black was 19. This was his only instance of taking a
nondominant card, after already taking two cards, when it
did not shift the color dominance. It was assumed to be
anomalous behavior since it did not "open up options" and
led to a below-average score.
A consistent description of behavior with respect
to decisions about dominant-color cards was difficult. He
both accepted and rejected 2, 3, and 4 of dominant color.
No simple threshold for acceptance would be adequate. An
attempt was made to take context into account. Already recorded
was the dominant total at the time of the decision.
Perhaps the desirability of a card was relative to what he
already had in his hand. A detailed analysis of the partial
sums of the cards already accepted indicated that no consistent
rule could be formed by including partial sums.
Similarly, taking into account the number of choices
remaining did not yield consistency for S11. Magnitude
alone was tentatively selected as the only parameter of the
decision to take a card of dominant color.
The first model, based upon the protocol of S11, was
expressed as five process rules.
Considering first acceptance:
1. Accept first or second offer if greater than or
equal to 7.
2. Accept third or subsequent offers if greater
than or equal to 5.
Considering subsequent acceptances:
3. If only one card has been accepted, "open up
options" by accepting a card of equal value in the other color.
4. If more than one card has been accepted, "stick
to" the dominant color by accepting only cards greater than or
equal to 6 in the dominant color.
5. However, accept a card of nondominant color if it
gives a higher partial score.
These five process rules refer only to card-
magnitude values. A fundamental notion to be tested was
that other Ss would behave similarly, but with some variation
in the magnitudes, which constituted thresholds. Implicit
in the five rules is another kind of threshold: the
point at which the magnitude threshold is changed. The
offer number on which the threshold for first acceptance
changes can be viewed as a threshold. It was predicted that
some Ss would require more than just a higher partial score
in accepting a nondominant card. They might require the
partial score to be changed by a minimum amount. Similarly,
it was predicted that other Ss might not accept opposite but
equal cards as the second acceptance. They might not want
to "open up options." The latter two predictions were a
priori, as no other protocols had been analyzed. They were
hypotheses to be tested.
The five process rules were rewritten to permit
seven parameters. With slight wording change, the rules as
hypothesized after analysis of S11 were used throughout the
remainder of the research.
1. Considering first acceptance, reject values less
than P1 until offer P2.
2. Then first acceptance must be as large as P3.
3. Considering subsequent acceptances, if the number
of cards already accepted is less than P4, accept a card
of the nondominant color if it will result in a shift in the
dominance margin (gain) by as much as P5.
4. If the number of cards already accepted is equal
to or greater than P4, then the gain must be as much as P6.
5. Accept a card of the dominant color, or of
either color if neither is dominant, if it is as much as P7.
Closely related to these five process rules is a
description of the seven parameters.
P1— Initial threshold for acceptance of first card,
in terms of card magnitude.
P2— Offer number on which the initial threshold is
changed.
P3— Secondary threshold for acceptance of first
card, coming into effect if S does not accept a
card before offer P2.
P4— Number of cards accepted before changing the gain
threshold (for acceptance of nondominant suit
cards). P4 is inversely related to the number
of choices remaining.
P5— Initial gain threshold (for accepting nondominant
suit cards).
P6— Secondary gain threshold, coming into effect if
S has accepted P4 cards.
P7— Threshold for acceptance of cards of dominant
suit, in terms of card magnitude.
The parameter values for S11, based upon the original
five process rules, would be 7, 3, 5, 2, 0, 1, and 6.
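The five parameterized rules can be restated as a single decision function. The sketch below is a hypothetical Python re-expression (the original program was Fortran, and the function and variable names here are invented), initialized with S11's estimated values:

```python
def decide(card, offer_no, hand, p):
    """Accept (True) or reject (False) one offered card under the process rules.

    card: (color, value); offer_no: 1-based offer number;
    hand: accepted (color, value) cards so far; p: dict with keys 'P1'..'P7'.
    Forced acceptances are outside the rules and are not modeled here.
    """
    color, value = card
    if not hand:
        # Rules 1-2: first-acceptance threshold drops from P1 to P3 at offer P2.
        return value >= (p["P1"] if offer_no < p["P2"] else p["P3"])
    red = sum(v for c, v in hand if c == "red")
    black = sum(v for c, v in hand if c == "black")
    dominant = "red" if red > black else "black" if black > red else None
    if dominant is None or color == dominant:
        # Rule 5: dominant color (or either color on a tie) taken at P7 or above.
        return value >= p["P7"]
    # Rules 3-4: nondominant color taken if the gain (the shift in the
    # dominance margin) reaches P5, or P6 once P4 cards have been accepted.
    gain = min(red, black) + value - max(red, black)
    return gain >= (p["P5"] if len(hand) < p["P4"] else p["P6"])

# S11's parameter values from the original five rules: 7, 3, 5, 2, 0, 1, 6.
s11 = dict(P1=7, P2=3, P3=5, P4=2, P5=0, P6=1, P7=6)
assert not decide(("red", 6), 1, [], s11)      # 6 rejected as the first offer
assert decide(("red", 6), 3, [], s11)          # but accepted on the third offer
assert decide(("black", 7), 1, [("red", 7)], s11)   # "open up options": gain 0
# Taking 11 red against 19 black shifts nothing; the rules reject it.
assert not decide(("red", 11), 5, [("black", 10), ("black", 9)], s11)
```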
Once the structure of the first model was erected,
the next step was to see how well it fit other Ss. The
process of analyzing the protocol of S11 was tedious, taking
many hours of labor. To maximize the return from this kind
of labor, it was decided to select as the next subject to be
analyzed one whose behavior differed from S11's.
Before the next protocol was selected for analysis,
the similarity of responses between Ss was computed by finding
the correlation between the Ss based upon the cards accepted.
The maximum number of choices per hand was nine,
because of the forced acceptance rule, but each of the 160
cards that could enter into the scoring was included in the
correlation computation. The correlations with S11 were
ranked, and S6 was selected. He had a low correlation with
S11 and was reasonably verbal.
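The similarity computation can be illustrated with an ordinary Pearson correlation over 0/1 accept vectors, one entry per scoreable card. This is a sketch under that assumption; the dissertation does not give the exact formula used:

```python
from math import sqrt

def accept_correlation(a, b):
    """Pearson correlation between two equal-length 0/1 vectors of
    accept (1) / reject (0) decisions, one entry per scoreable card."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

# Identical choice patterns correlate perfectly; opposite ones negatively.
assert accept_correlation([1, 0, 1, 0], [1, 0, 1, 0]) == 1.0
assert accept_correlation([1, 0, 1, 0], [0, 1, 0, 1]) == -1.0
```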
S6 was not difficult to analyze with respect to the
first three parameters, which refer to the first acceptance.
The lowest acceptance was 7, except for a 6 taken after four
consecutive rejects. The first three parameters were
therefore 7, 5, and 6. S6 was less consistent in decisions
about nondominant cards. Seven times he took nondominant
cards that did not increase his partial score. Defining
gain to be the difference in partial score to be realized if
a nondominant card were taken, it was noted that only six of
the 13 acceptances of nondominant cards resulted in positive
gains, three yielded zero gains, and four resulted in negative
gains. Perhaps the very magnitude of the nondominant
card accepted was the determining aspect of the decision.
However, the magnitudes were 9, 4, 12, and 11 for the instances
of negative gain; only two were "face" cards. For
the three instances of zero gain, all were second acceptances,
equal in magnitude but opposite in color, with magnitudes
of 7, 6, and 9. No simple hypothesis about accepting
nondominant cards on the basis of magnitude could be supported
by the recorded play.
Values for the three parameters related to acceptance
of nondominant cards were estimated to be 2, 0, and 2,
meaning take nondominant cards for zero or greater gain as
the second acceptance, but raise this threshold to 2 for any
subsequent acceptances.
With respect to decisions about taking dominant
cards, S6 was much more consistent than S11. While the
optimum threshold for acceptance of dominant cards yielded
eight instances of inconsistency for S11, the corresponding
number for S6 was only two. S6 both accepted 5 and rejected
6, but he accepted 5 only twice and rejected only one 6. It
was decided to set the parameter at 5 because, in his summary
statement, S6 mentioned "4 or possibly 5 as a minimum"
in reference to taking what has been defined as dominant.
For S6, the seven parameter values were estimated
to be 7, 5, 6, 2, 0, 2, and 5. Over-all, S6 seemed to fit
the model induced from the behavior of S11 rather well. In
fact, there were fewer inconsistencies for S6 than for S11
(7 versus 10), given that the model was an acceptable description
of the way Ss made decisions. Therefore, the
first model appeared to have some general validity, based
upon this tiny sample.
Development of the Computer Program
The process rules were transformed into a computer
program flow chart (Figure 1). By ordinary programming
techniques, several procedures, each performing only a small
part in the administration and playing simulation, were
written in Fortran and checked out. It should be noted that
what was simulated were the decisions to accept or reject
Figure 1. Flow Chart of Decision-Making Model
the offered cards. The cognitive behavior that accompanied
these decisions was not observable and, therefore, could not
be simulated. The programming language itself has conventions
that tend to obscure the psychological functions the
program procedures perform. The administrator functions are
mixed in with the subject functions in most of the procedures.
Still, the coding reflects the seven process rules
accurately, and it should be understandable to almost everyone
acquainted with Fortran. Most existing computer programs
related to artificial intelligence or game playing
were written in IPL or LISP. Those languages were designed
for these very applications, but are not readily available
everywhere.
Simulation Program Structure
and Function
In its latest version, the computer program that
mechanizes the model, based upon analysis of S11's protocol
and partly supported by analysis of S6's, consists of a main
procedure and nine entry points (subroutines). The following
description should be read in conjunction with the
source code in Appendix D and the flow chart (Figure 1).
MAIN calls SHUFFL, which reads in the already arranged
stimulus cards and the responses of each S to each
37
card. MAIN then goes into the outermost loop, iterated once
per S, by calling NXSUB, which reads in subject identificaÂ
tion and parameter values and prints them out. MAIN then
|
|ca11s NXDECK, which gets the identifications of the next
deck, the cards for the trial, and S_'s responses to them.
NXDECK also heads a new page with information about
the hand, initializes several program variables, and estabÂ
lishes the initial thresholds for first acceptance (P^) and
for acceptance of nondominant cards (P5). MAIN then calls
NXCARD, an administrator function that offers each card in
turn. If, by the rules of the game, S is forced to accept
the card, MAIN has S_ do so immediately after NXCARD offers
it. Beginning with offer Pj, the threshold for first acÂ
ceptance is changed from ^ to Pj. If the magnitude of the
card is less than the threshold, MAIN calls REJECT, which
compares the decision of the model with that recorded for
the human S.. REJECT helps keep count of the number of erÂ
rors and choices made. If the model is correct in rejecting
the offer, control is passed to statement 10, and another
card is offered. If the model has incorrectly rejected, an
error message is printed and control passes to MAIN’s call
on ACCEPT. The model is not allowed to diverge from £'s
track. Along with REJECT, ACCEPT keeps count of errors and
choices. ACCEPT compares the model's acceptance with S's
recorded decision. If they match, the card is added to S's
hand. If the hand is not complete, control is returned to
MAIN. Otherwise, ACCEPT calls COUNT, which first determines
if there has been any change in S's hand since it was last
called. This is to correspond with the assumption that
human Ss probably do not count up their hands again unless
there has been a change. They remember. The partial red
and black scores are counted, and it is determined which
color is dominant. The partial score for the hand (the
final score if five cards have been accepted) is the red sum
if red is dominant, the black sum if black is dominant, and
the sum in the color of the last card accepted if neither is
dominant.
COUNT returns to ACCEPT, which then calls QUIT.
QUIT keeps count of the total numbers of errors and choices
over all hands, computes the error percentage, and prints certain
information at the end of each hand and at the end of
each subject's session. QUIT returns to ACCEPT, which returns
to MAIN at statement 1. However, if ACCEPT is called
after a model decision that does not match the corresponding
human S's decision, the card is not added to the hand, and
control is passed to statement 10 in MAIN, after recording
the error and printing a message. The model is kept on S's
track. The first card acceptance loop in MAIN continues
until the human S accepts a card, either forced or voluntary.
Forced acceptances are represented by the coding beginning
at statement 41 in MAIN. Program control remains in
a short loop until five cards are accepted, in the case of
forced acceptance. (Once S has rejected so many cards that
an acceptance is forced, all subsequent acceptances are also
forced, obviously.) If acceptance is not forced, then control
in MAIN is passed to statement 5, which calls COUNT.
The next statement tests whether the card under consideration
is of the dominant or equal color. If it is, then if
its magnitude is greater than or equal to P7, the model decides
to accept it.
The error checking described for first acceptance is
performed similarly for acceptance/rejection of dominant and
nondominant cards. Whether the decision was to accept or
reject a dominant card, control is passed to statement 4,
where another card is offered. (Any time ACCEPT takes the
fifth and last card, control goes through COUNT and QUIT, as
described.) If the card under consideration is of the nondominant
color, control in MAIN is passed to statement 7.
If as many as P4 cards have already been accepted, then the
threshold for acceptance of nondominant cards is changed from P5
to P6. The threshold is for the minimum gain that would be
realized by accepting the nondominant card. MAIN calls
GAIN, which calculates this value. If the gain is as great
as the threshold, the card is accepted; otherwise it is rejected.
Incorrect decisions regarding nondominant cards are corrected
and recorded, and a message is printed. Whether the
decision was to accept or reject, control is passed to
statement 4 in MAIN to look at the next offer. Note that
the program is never permitted to make a move that differs
from the recorded move of the S it is simulating.
Because of the many IF statements, a verbal description
of the flow of control is awkward, although the program
itself is actually as simple as the process rules that constitute
the model.
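The conditional-prediction bookkeeping, in which the model is put back on the subject's recorded track after every wrong prediction, can be sketched as a replay harness (hypothetical Python; not the original MAIN/REJECT/ACCEPT):

```python
def replay(hands, params, decide):
    """Count model/subject disagreements over recorded hands.

    hands: per hand, a list of (card, offer_no, hand_so_far,
    subject_accepted, forced) tuples, where hand_so_far already reflects
    the subject's recorded history.  Forced acceptances are not choices.
    Returns (errors, choices, error per cent).
    """
    errors = choices = 0
    for hand in hands:
        for card, offer_no, hand_so_far, subject_accepted, forced in hand:
            if forced:
                continue
            choices += 1
            if decide(card, offer_no, hand_so_far, params) != subject_accepted:
                errors += 1
            # Play continues from the recorded decision either way, so the
            # model can never diverge from the subject's track.
    return errors, choices, 100.0 * errors / choices if choices else 0.0

# Toy hand: a threshold rule disagrees with the subject on one of three choices.
toy = [[(("red", 9), 1, [], True, False),
        (("black", 2), 2, [("red", 9)], False, False),
        (("red", 3), 3, [("red", 9)], True, False)]]
threshold_6 = lambda card, n, hand, p: card[1] >= 6
assert replay(toy, None, threshold_6)[:2] == (1, 3)
```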
Derivation of Individual
Parameter Values
The analysis necessary to derive the seven parameter
value estimates for Ss 11 and 6 was lengthy and tedious.
Partly in anticipation of a more complicated model with more
parameters, many context statistics were determined for each
choice point. As many as 30 or more statistics were recorded
for a single hand. The analyst attempted to discover
process rules that might account for divergences from the
seven-parameter model. The tedious analysis was carried out
on the protocols of S1 and S2. It was thereafter decided
not to complicate the model further with more parameters,
even though this promised to improve upon the proportion of
identical choices between the human and simulated subjects.
The number of statistics for each hand was cut to 10, which
were needed to estimate the existing seven parameters. Any
one hand might have more or fewer than 10 because of inapplicability
or multiple changes in dominant color. The
statistics were highest reject before first acceptance and
the associated offer number, the first acceptance and associated
offer number, the gain (plus or minus) taken in accepting
nondominant and the number of cards already accepted
before making the decision, the gain (zero or plus)
not taken in rejecting nondominant and the number of cards
already accepted, the highest reject of dominant, and the
lowest acceptance of dominant. (Dominant, in general, includes
both colors if partial sums are equal.)
The simulation program was modified to play each
hand exactly as S had played it (the model made no decisions)
and to output information that would help define the
10 statistics. From this point on, the original protocols,
with their fairly meager verbalization, were not referenced.
The verbalization had helped in inducing the basic model,
and in estimating parameter values for S11 and S6, but S11
and S6 were among the most verbal Ss. Only the decision
itself— accept or reject— was used in subsequent analysis.
This modified program played the 16 hands of the 12 Ss, and
the results were compared with the already completed analyses
of Ss 11, 6, 1, and 2. Analysis of S3, using the program
output, took only one hour, instead of the tens of
hours required for the earlier analyses. This was due in
part to the decision not to look for more than seven model
parameters, but was possible only because the output highlighted
the information needed.
Because only the binary information at each choice
point was to be included in the analysis, the statistics for
any hand would be the same for all Ss playing it the same
way. Reasons for making a particular decision, only rarely
stated, might differ among various Ss, but this information
was not considered. To save analysis time, it was noted
which hands had been played the same way by two or more Ss,
and thereafter each particular play of a hand was analyzed
for the 10 statistic values only once. The last eight
analyses took only about five hours total, as compared with
tens of hours each for the first few. This great economy
was due to the restricted set of statistics recorded, the
assistance of a computer program that abstracted the significant
information, and the one-time analysis of hands
played in a given way by any number of Ss.
Only the gathering of statistics was computer-aided.
Estimation of parameter values was a matter of judgment by
the analyst. For some of the parameters, this judgment was
easy to make. P7 is the threshold for acceptance of dominant
cards. For all Ss, the highest reject was greater than the
lowest acceptance, over the 16 hands. Thus, no matter what
estimate of P7 was chosen, there would be some errors of
simulation in accepting dominant cards. The estimate chosen
was the one that would yield the fewest errors. P7 may be
considered a one-dimensional parameter, but the other six
parameters belong to two sets that may be considered to be
three-dimensional: accept card value X until Y cards have
been offered or accepted, then accept card value Z. This
made it more difficult to find the point of minimum error.
The search for reasonable parameter estimates was
systematic, involving the construction of several sets of
two-dimensional tables and careful search for points of
overlap or inconsistency between rejected and accepted
values. The elements in the tables represented mainly the
extrema, e.g., highest reject and lowest acceptance.
After the parameter values were estimated, the simulation
program played the 16 hands by inputting the estimates
and behaving at the choice points accordingly. The
simulation results were generally good. Only one unusual S
was simulated with less than 90 per cent accuracy. Because
the parameter estimates were made through a systematic, but
not completely deterministic, technique, there remained the
possibility that the estimates were not optimal. Examining
only the extrema might overlook some useful information.
Early spot checks indicated that the critical region,
around the threshold, was similar whether all information
for each hand or just the extrema were included. Still,
there were differences that might affect the over-all accuracy
of simulation. Consideration of all information was
out of the question for a manual effort, feasible only if it
could be computer-aided. Several of the modules of the
simulation program were used intact and several others were
drastically modified, and a program was developed that computed
the error of simulation associated with every possible
parameter value. The MAIN and NXSUB procedures were completely
rewritten to record the errors associated with each
possible P7 estimate in a one-dimensional array, and the
errors associated with the possible P1-P2-P3 and P4-P5-P6
triples in three-dimensional tables. For each S, 1,120
error-table entries were computed. To estimate optimum
parameter values, it was sufficient to select those associated
with the fewest errors. In many cases, the minimum
error appeared at more than one point in a table, and several
rules were considered for resolving the ambiguity. The
rule finally chosen was the simplest: to use the lowest
parameter estimate if two or more yielded the same number of
errors. For example, if three errors would result whether
the P7 estimate were 4, 5, or 6, the one selected would be
4.
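The exhaustive enumeration and its tie-breaking rule can be sketched as follows (a hypothetical re-expression; the original was the rewritten Fortran MAIN and NXSUB):

```python
from itertools import product

def best_triple(error_fn, values=range(0, 13)):
    """Score every parameter triple and return (errors, triple) for the best.

    Iterating in lexicographic order and replacing the incumbent only on a
    strictly smaller error count implements the tie-breaking rule in the
    text: of several minima, keep the lowest-valued parameter estimate.
    """
    best = None
    for triple in product(values, repeat=3):
        err = error_fn(*triple)
        if best is None or err < best[0]:
            best = (err, triple)
    return best

# Toy error surface with a tie: (2, 0, 0) and (3, 0, 0) both give one error,
# so the lower triple is kept, mirroring the 4-versus-5-versus-6 example.
errs = lambda a, b, c: 1 if a in (2, 3) and b == c == 0 else 5
assert best_triple(errs) == (1, (2, 0, 0))
```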
By means of this exhaustive-enumeration-like analysis,
the 84 (7 x 12) parameter estimates were made in a matter
of minutes, once the computer output was available. To design,
code, and check out the program to perform the exhaustive
analysis took longer by an order of magnitude than
the manual but computer-output-aided analysis. Were several
dozens of Ss to be analyzed, the time advantage would be
reversed. Of the 84 parameter estimates, 29 were different
from those made earlier. However, the goodness of simulation
was improved for only three Ss. This is attributable
to the minima (least errors) appearing several times in some
tables. Several sets of estimates were optimal. Some differences
were due to errors in the manual analysis. The
averages presented in Table 2 omit the aberrant S4.
In a sense, there are only three parameters in the
model, not seven. The parameters within the triples (P1-
P2-P3 and P4-P5-P6) are not independent, but in fact closely
related. Excluding S4, P1 was greater than or equal to P3
for nine out of 11 Ss. If P2 = 1, then P1 must equal P3,
and the converse is also true. P5 was less than or equal to
P6 in all cases, although this is not necessary. If P4 = 1,
then P5 must equal P6, and the converse is true also. P7 is
unrelated to the other parameters. There was most variation
in the first triple, less in the second, and least in P7,
corresponding to nine, seven, and four different sets,
excluding S4.
TABLE 2
PARAMETER ESTIMATES

                 Derived via Manual Analysis    Derived via Exhaustive
                                                (Computer) Analysis
Subject              Parameter Number               Parameter Number
Number           1  2  3  4  5  6  7            1  2  3  4  5  6  7
1                6  1  6  1  0  0  7            6  4  7  1  0  0  6
2                8  2  7  4  0  5  6            8  1  8  1  0  0  7
3                7  3  6  4  0  6  4            7  3  6  4  0  6  4
4                4  4  3  3 10  0  2            4  2 12  1 10 10  2
5                7  2  6  2  0  6  5            7  2  6  2  0  6  5
6                7  5  6  2  0  2  5            7  4  2  2  0  1  5
7                7  2  6  3  2  3  5            7  2  6  1  1  1  5
8                7  4  6  2  0  3  4            7  4  2  1  0  0  4
9                5  2  6  2  2  2  4            5  2  6  1  8  8  4
10               6  1  6  3  1  3  5            6  1  6  1  1  1  5
11               7  3  5  2  0  3  6            7  3  5  1  0  0  6
12               7  2  7  2  6  6  4            7  1  7  1  6  6  4
Mean (rounded)   7  2  6  2  1  4  5            7  2  6  1  1  3  5
Mode             7  2  6  2  0  3  4,5          7  1,2,4  6  1  0  0  4,5
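The parameter relations described above can be checked mechanically against the computer-derived estimates in Table 2. The values below are transcribed from that table, with S4 omitted as in the text (a hypothetical Python check):

```python
# Computer-derived estimates (P1..P7) from Table 2, aberrant S4 omitted.
estimates = {
    1: (6, 4, 7, 1, 0, 0, 6),   2: (8, 1, 8, 1, 0, 0, 7),
    3: (7, 3, 6, 4, 0, 6, 4),   5: (7, 2, 6, 2, 0, 6, 5),
    6: (7, 4, 2, 2, 0, 1, 5),   7: (7, 2, 6, 1, 1, 1, 5),
    8: (7, 4, 2, 1, 0, 0, 4),   9: (5, 2, 6, 1, 8, 8, 4),
    10: (6, 1, 6, 1, 1, 1, 5), 11: (7, 3, 5, 1, 0, 0, 6),
    12: (7, 1, 7, 1, 6, 6, 4),
}
p = {s: dict(zip("1234567", v)) for s, v in estimates.items()}

# P1 >= P3 for nine of the 11 synoptic Ss; P5 <= P6 in every case.
assert sum(v["1"] >= v["3"] for v in p.values()) == 9
assert all(v["5"] <= v["6"] for v in p.values())
# P2 = 1 exactly when P1 = P3, and P4 = 1 exactly when P5 = P6.
assert all((v["2"] == 1) == (v["1"] == v["3"]) for v in p.values())
assert all((v["4"] == 1) == (v["5"] == v["6"]) for v in p.values())
# Nine, seven, and four different sets for the two triples and P7.
assert len({(v["1"], v["2"], v["3"]) for v in p.values()}) == 9
assert len({(v["4"], v["5"], v["6"]) for v in p.values()}) == 7
assert len({v["7"] for v in p.values()}) == 4
```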
CHAPTER V
EVALUATION OF THE MODEL
Simulation of Individual Subjects
The model was nomothetic in structure and idiosyncratic
in function: nomothetic because a single model was
used to describe the behavior of the 11 synoptic Ss and
idiosyncratic because the model predicted the peculiar behavior
of these Ss if initialized with their peculiar
parameter estimates.
A model can be evaluated by the amount of agreement
between predictions generated by the model and observations
of natural behavior. Because the computer program mechanizing
the game-playing model was set back on the correct track
after each incorrect prediction, the measurement of agreement
was simple. The program played through the 16 analyzed
hands, computing certain statistics for each hand and summary
statistics across the hands (Table 3).
The model shows the least agreement for S4, which
TABLE 3
SUMMARY STATISTICS
Subject 1 2 3 4 5 6 7 8 9 10 11 12
Number of errors 8 12 10 19 6 7 4 11 9 8 10 9
Number of choices 125 122 128 120 129 126 126 131 130 128 134 121
Number forced 30 33 26 36 28 31 29 24 24 24 18 36
Per cent correct 93.6 90.2 92.2 84.2 95.3 94.4 96.8 91.6 93.1 93.8 92.5 94.2
Per cent error 6.4 9.8 7.8 15.8 4.7 5.6 3.2 8.4 6.9 6.2 7.5 5.8
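The per cent rows in Table 3 follow directly from the error and choice counts, since forced acceptances are excluded from the choices. A minimal check (hypothetical helper name):

```python
def percent_correct(errors, choices):
    """Per cent of unforced choice points where model and subject agreed."""
    return round(100.0 * (choices - errors) / choices, 1)

# Reproducing rows of Table 3: S1 (8 errors, 125 choices) and S4 (19, 120).
assert percent_correct(8, 125) == 93.6
assert percent_correct(19, 120) == 84.2
```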
was because S4 rejected virtually all offers of black cards,
regardless of magnitude. If the model had a parameter for
color bias, better agreement could have been achieved, but
at the expense of introducing an ad hoc modification for a
single S. This would have been antithetical in a model that
should be descriptive of general behavior. No test for
color bias was made for any S, but no such bias was manifest
except in S4.
The model was based upon the behavior of S11. However,
this did not lead to predictions for S11 that were
more nearly accurate than those for other Ss. The fit of
the model was better for seven Ss and worse for three (synoptic)
Ss. The S whose behavior was best simulated by the
model was S7. His optimal parameter estimates were the most
nearly typical. The typicalness of S7 is further gauged by
the fact that he alone failed to play a single hand in a
unique way. That is, his pattern of choices was identical
to that of at least one other S on each hand, while S4
played nine hands uniquely.
Among the synoptic Ss, the percentage of disagreements
between the model and observed behavior ranged from
3.17 to 9.84, with a mean of 6.57. The possible range of
error per cent was from 0 to 100. The obtained range, for
the 11 Ss on the 16 hands, was from 0 to 71 per cent.
Averaging across the Ss puts the range for the hands from 1
to 16 per cent. Some hands, apparently, were much easier to
simulate than others. When the decisions are easy for Ss to
make, because of extremes of card magnitudes, then of course
the decisions can be predicted with less error.
The only goodness-of-fit measure presented is error
per cent. Another measure considered was phi, the fourfold
point correlation coefficient. Phi is quite sensitive to
the direction of the error— whether model-accepted but S-rejected
or vice versa— while error per cent is completely
insensitive to it. Phi is undefined for the several cases
of zero marginal frequencies. When phi is unity, proportion
correct is also unity, but the converse is not true because
of the undefined phi cases. Error per cent was selected as
it is well behaved and easily interpreted. Tests for the
statistical significance of proportions or of differences
between proportions are inappropriate because the decisions,
upon which the error per cent was based, were highly correlated
because of the conditional prediction effect. Unfortunately,
then, evaluation of the model as a description of
the behavior of this sample of Ss remains subjective. A
sometimes suggested test of computer program output
agreement with recorded natural behavior is a kind of
Turing*s test (Turing, 1963, p. 11). Turing called it the
"imitation game." Stylized protocols of human subjects and
computer program output, where the program did not use the
conditional prediction technique, could be presented to
naive judges. The judges would then try to segregate the
machine and human protocols. However, this would not test
the model, but only whether machines can be programmed to
play the game involved, an obviously unnecessary demonstraÂ
tion.
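The two goodness-of-fit measures discussed above can be sketched in a few lines. The dissertation's program was written in Fortran (Appendix D); this Python fragment is only an illustrative reconstruction, and the function names are not taken from the original source. It shows why phi was rejected: with a zero marginal frequency (e.g., the model and the subject accept every card) error per cent is zero but phi is undefined.

```python
import math

def error_per_cent(model_accepts, s_accepts):
    """Per cent of choices on which model and subject disagree."""
    errors = sum(m != s for m, s in zip(model_accepts, s_accepts))
    return 100.0 * errors / len(model_accepts)

def phi(model_accepts, s_accepts):
    """Fourfold point correlation; None when a marginal frequency is zero."""
    a = sum(m and s for m, s in zip(model_accepts, s_accepts))          # both accept
    b = sum(m and not s for m, s in zip(model_accepts, s_accepts))      # model only
    c = sum((not m) and s for m, s in zip(model_accepts, s_accepts))    # subject only
    d = sum((not m) and (not s) for m, s in zip(model_accepts, s_accepts))
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return None  # undefined: at least one zero marginal frequency
    return (a * d - b * c) / math.sqrt(denom)
```

Unlike error per cent, phi distinguishes the direction of disagreement (the b and c cells enter its numerator with opposite signs), which is the sensitivity the text describes.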
In discussing the model evaluation problem, Newell (1966) notes that

in assessing the validity of the program to describe or explain the subject's behavior, two things are missing to which psychologists have been accustomed. First, there is no acceptable way to quantify the degree of correspondence between the trace of the program and the protocol. This is not a problem of making the inference definite or public. Trace and protocol can be laid side by side. . . . However, comparison still must be made between an elaborate output statement and a free linguistic utterance. Although a human can assess each instance qualitatively, there are no available techniques for quantifying the comparison, or summarizing the results of a large set of comparisons.

Second, the program has been created partly with the subject's protocol in view. Thus, something analogous to the calculation of degrees of freedom used in fitting curves with free parameters to data is appropriate. But programs are not parameterized in any simple way and no analytic framework yet exists for allowing for degrees of freedom. (pp. 3-4)
The present research cannot claim to have solved the problem of assessment, but error per cent based upon conditional prediction is a reasonable quantification of comparisons, and the model was cross-validated with protocols that were not in view during program creation.
Sensitivity of Model to Parametric Variation
The behavior of the computer program was of course completely determined by the parameter values. The predictive ability of the computer program was determined by the proportion of variance in natural behavior accounted for in terms of the parameters. Information on the relative importance of the parameters can be obtained by comparing models with different parameterizations. If a parameter were added to the original model, differences between the predictive ability of the original and augmented models could be attributed to the added parameter. By generating model variations of different strengths, attributable to different parameterizations, the predictive validity of the individualizing parameters can be estimated.
During the manual analysis of the protocols, some parameters were considered in addition to the seven. For example, after the simulations were performed and the optimal parameter estimates computed, it seemed that the triplet for accepting nondominant was insufficient to predict the associated behavior. Although during the original analysis of S11 the notion of gain seemed important, in several hands for each of the Ss the magnitude of the nondominant offerings apparently led to acceptance of a card that could not hope to lead to an over-all higher score. There seemed to be a "face card" effect, and there was some mention of this in the summary statements. It was decided, however, not to add a parameter for face card effect in accepting nondominant offerings. This and any other possible parameters would have complicated the model. The goal of the research was not to obtain the most accurate simulation possible, but to explore and understand methodology.
It can safely be contended that additional parameters would have resulted in a better average fit. It will be shown that each of the seven parameters contributed some covariance in the prediction process. It would be no surprise, and no contribution to theory, to discover that adding parameters (predictors, in effect) resulted in greater correspondence between model and S behavior. As long as there were non-zero correlation between the additional parameter and human play and non-unity correlation with other parameters, an increase in correspondence is a mathematical inexorability. It was more interesting to make the model less predictive by removing the effects of parameters in certain ways. By using exactly the same MAIN program, it is possible to examine the power of weaker models.
Consider a model with no change in threshold for first acceptance. Using the existing simulation program, we get exactly this simpler model by setting P2 = 1 and P3 = P1; only P1 can have any variation in the first triplet. In effect, the model then has only five parameters. Another simplification would permit no change in threshold for gain in accepting nondominant. This is achieved by setting P4 = 1 and P6 = P5, leaving only one variable in the second triplet. Were both of these simplifications made, the model would have only three parameters. If P7 were set to some fixed value, instead of being allowed to take on the value optimal for each S, then it would no longer function as a parameter, but as a constant. The elements of the triplets must be treated together, as they are closely connected. The triplets and P7 could all be set constant, and the result could no longer be termed a parametric model.
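The tying scheme just described can be expressed as a small routine. The original program was Fortran; this Python sketch is illustrative only, and its dictionary representation of the seven parameters (keys 1 through 7 for P1 through P7) is a hypothetical convenience, not the original data layout.

```python
def weaken(params, three_first=True, three_nondom=True):
    """Collapse the model's triplets as described in the text:
    setting P2 = 1 and P3 = P1 removes the change in threshold for
    first acceptance; setting P4 = 1 and P6 = P5 removes the change
    in gain threshold for accepting nondominant suit cards."""
    p = dict(params)  # params: {1: P1, 2: P2, ..., 7: P7}
    if not three_first:
        p[2] = 1       # no threshold change with successive offers
        p[3] = p[1]    # only P1 varies in the first triplet
    if not three_nondom:
        p[4] = 1       # no gain-threshold change as the hand grows
        p[6] = p[5]    # only P5 varies in the second triplet
    return p
```

With both simplifications applied, only P1, P5, and P7 remain free, yielding the three-parameter model the text mentions.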
If the sample had been very large, then a model that had only average values as constants would still contain information about human behavior. Such a model could predict only average behavior. The errors of simulation would vary for each S, but there would be an average error. Nullifying the effect of the various parameters by holding them constant would lead to increases in average error, if the parameters had any predictive power. For example, allowing only P7 to vary, holding the triplets to modal values, would yield a virtually one-parameter model. The difference in average error between such a model and one in which all values were constant would measure the effect of P7.
There are 32 possible model variations of different strengths mechanized by the same MAIN program. The first acceptance and nondominant acceptance parameters may have one or three elements each, and the parameter estimates used in the simulation may be the idiosyncratic optimal values or held constant to the average (modal) value. The 32 variations belong to four submodel groups, according to the number of elements involved in making the decisions for first acceptance and for acceptance of nondominant suit cards. Because the several variations are frequently referred to in the following discussions, it is convenient to label these four submodel groups with more or less arbitrary numeric identifiers.
Submodel 0—no change in threshold for accepting first card as more cards are offered and rejected; only one parameter involved in accepting first card. No change in gain threshold (for accepting nondominant suit cards) as more cards are added to hand; only one parameter involved in accepting nondominant suit cards.

Submodel 1—only one parameter involved in accepting first card. Three parameters involved in accepting nondominant suit cards, due to change in gain threshold as more cards are added to hand.

Submodel 2—three parameters involved in accepting first card, because of a change in threshold for acceptance. Only one parameter involved in accepting nondominant suit.

Submodel 3—three parameters involved in accepting first card and three parameters involved in accepting nondominant suit cards.
There are three distinct kinds of decisions: (1) accepting cards of dominant suit, (2) accepting cards of nondominant suit, and (3) accepting first card. The parameters involved in making these three kinds of decisions may be optimal or modal. There are eight ways that the three kinds of decisions can be made (2³ = 8). Because parameters can be optimal or modal (two states), binary notation is convenient. More or less arbitrarily, the low order three bits of a five bit binary number can be used to convey information about these three kinds of decisions. Table 4 shows the meanings of all five bits. The bits are numbered from low order to high order, from 1 to 5. The two high order bits are used to identify which submodel group is being considered. The five bits identify the four submodels and the eight variations within each submodel. Octal notation is more compact than binary notation, and at some points in the following discussion, variations are identified by an octal number. Variations 00-07 are in submodel 0, 10-17 are in submodel 1, etc., according to the first octal character. The second octal character identifies the variation within the submodel and represents the low order three bits in the binary identifier.
As described earlier, there are three kinds of decisions, and predictions of these decisions can be based upon optimal or modal parameter estimates. In Table 5, the variations within each submodel group are listed in order of the number of kinds of decisions based upon optimal parameter estimates. There is one none-optimal variation, three one-optimal variations, three two-optimal variations, and one three-optimal variation.

TABLE 4
MEANINGS OF BITS

Bit   True (=1)                                False (=0)
5     Three first take parameters              P2 = 1, P3 = P1; one first take parameter
4     Three nondominant take parameters        P4 = 1, P6 = P5; one nondominant take parameter
3     First take parameter(s) optimal          First take parameter(s) modal
2     Nondominant take parameter(s) optimal    Nondominant take parameter(s) modal
1     Dominant take parameter optimal          Dominant take parameter modal

TABLE 5
AVERAGE SIMULATION ERROR PER CENT ASSOCIATED WITH EACH MODEL VARIATION

Variation within Submodel    Submodel 0       Submodel 1       Submodel 2       Submodel 3
Binary       Octal           Constant First   Constant First   Changing First   Changing First
Identifier   Identifier      Constant Gain    Changing Gain    Constant Gain    Changing Gain
000          0               10.57            10.57            10.16            10.16
001          1                9.71             9.71             9.30             9.30
010          2                9.70             9.28             9.29             8.87
100          4                9.78             9.78             8.72             8.72
011          3                8.84             8.42             8.43             8.01
101          5                8.92             8.92             7.86             7.86
110          6                8.91             8.48             7.85             7.43
111          7                8.05             7.63             6.99             6.57

The submodel identifier is an octal number which stands for bits 5 and 4, while the binary identifier for variation within submodel gives bits 3, 2, and 1. All of the 32 entries in this table are accounted for by the expression: 10.57 - .86B1 - .87B2 - .79B3 - .41B5 - .42B2B4 - .65B3B5.
Variation 00 uses three distinct values (P1, P5, and P7), all of which are modal, while P2, P3, P4, and P6 are nullified in effect. Variation 37 uses seven values (the seven parameters), all of which are optimal. The variations have from zero to seven model elements that can take on idiosyncratic optimal values. It can be shown that the number of idiosyncratic elements in a variation is given by the expression B1 + B2 + B3 + 2B2B4 + 2B3B5, where B1 represents bit 1, etc. The products B2B4 and B3B5 represent interactions (joint effects). The predictive power of a variation is related to the number of idiosyncratic elements. The expression should be compared with the expression for error of simulation, to be described later.
For the sample of Ss examined, there is no change in threshold for nondominant, on the average. That is, P4 is modally 1, and the threshold for gain in this case is always P5. Therefore, when B2 is false, B4 gives no additional information. Variations represented by B2 false have the nondominant take parameters modal. The modal value of P4 is 1, which means that there is no change in threshold and thus only one nondominant parameter; B2 false implies B4 false, for this sample. Thus, some of the variations are redundant. The pairs of identical variations are 00-10, 01-11, 04-14, 05-15, 20-30, 21-31, 24-34, and 25-35. There are only 24, not 32, distinct variations.
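The bit encoding and the redundancy argument above can be checked mechanically. The following Python sketch (illustrative only; the original program was Fortran) enumerates the 32 two-character octal identifiers, computes the number of idiosyncratic elements from the expression B1 + B2 + B3 + 2B2B4 + 2B3B5, and collapses the pairs that are identical for this sample because the modal P4 of 1 makes bit 4 vacuous whenever bit 2 is false.

```python
def bits(octal_id):
    """Five bits from a two-character octal identifier, low order (B1) first."""
    n = int(octal_id, 8)
    return [(n >> i) & 1 for i in range(5)]  # [B1, B2, B3, B4, B5]

def idiosyncratic_elements(b):
    """Count of model elements free to take idiosyncratic optimal values."""
    B1, B2, B3, B4, B5 = b
    return B1 + B2 + B3 + 2 * B2 * B4 + 2 * B3 * B5

def canonical(b):
    """Collapse redundant variations: with the modal P4 = 1, bit 4
    adds nothing when the nondominant parameters are modal (B2 false)."""
    B1, B2, B3, B4, B5 = b
    if not B2:
        B4 = 0
    return (B1, B2, B3, B4, B5)

variations = ["{}{}".format(hi, lo) for hi in range(4) for lo in range(8)]  # "00".."37"
distinct = {canonical(bits(v)) for v in variations}
```

Enumerating in this way reproduces the counts in the text: variation 00 has zero idiosyncratic elements, variation 37 has all seven, and only 24 of the 32 variations are distinct.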
The simulation program played the 16 hands based upon the 24 variations for each of the 12 Ss. The results are shown in Table 5. Based upon the average errors of simulation associated with each variation, it is possible to isolate the contribution of each bit and of each interaction.
The interactions are between B2 and B4, and between B3 and B5. If there are three parameter elements for acceptance of nondominant, then the effect of allowing idiosyncratic variation is greater than for the case of only one parameter element, which is the basis for the interaction. Because, in this sample, the modal values are such that B2 false implies B4 false, B4 can have no contribution of its own. By examining the differences among the average errors in Table 5, it is possible to write an expression that gives these errors as a function of the bit values. The expression is exact. Thus the sensitivity of the model to its constituent parameters can be estimated. Each error of simulation yields about 0.8 per cent error, considering the approximately 127 choices predicted. For the experimental sample, the error of simulation for the several variations is given by 10.57 - .86B1 - .87B2 - .79B3 - .0B4 - .41B5 - .42B2B4 - .65B3B5. To determine the error reduction associated with each bit, the average error per cent for the variation represented by the binary number with only that bit true is subtracted from the average error per cent for variation 00 (all bits false). The difference in error is the reduction associated with setting that bit true. The results of the several subtractions are given in Table 6.
TABLE 6
PARTITIONING OF AVERAGE ERROR OF SIMULATION

Bit(s)     Binary Identifier    Octal Identifier    Error Reduction
B1         00001                01                  .86
B2         00010                02                  .87
B3         00100                04                  .79
B4         01000                10                  .00
B5         10000                20                  .41
B2 & B4    01010                12                  .42
B3 & B5    10100                24                  .65
The difference between 00 and 37 (all bits true) is 4.00, but the sum of the first order effects associated with each bit is only 2.93; there remains 1.07 to be accounted for by interaction effects. If there were no obvious interaction effects to look for, it would be necessary to make many subtractions, according to a regular search scheme. In the present case, the obvious interactions are between the number of parameter elements in a decision and whether these elements are optimal or modal. The difference between 00 and 24 is 1.85, but this can be partitioned into a B3 effect, a B5 effect, and a B3B5 interaction effect. The B3B5 reduction is given by 1.85 - .79 - .41 = .65. Similarly, the difference between 00 and 12 is 1.29, which is the sum of the B2 and B2B4 effects. (The B4 effect is zero.) The B2B4 interaction is 1.29 - .87 = .42.
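The fitted expression and the subtraction scheme above can be written out directly; the coefficients below are taken from the text, while the function itself is an illustrative reconstruction rather than the dissertation's Fortran code.

```python
BASE = 10.57  # average error per cent for variation 00 (all bits false)
COEF = {"B1": 0.86, "B2": 0.87, "B3": 0.79, "B4": 0.00, "B5": 0.41,
        "B2B4": 0.42, "B3B5": 0.65}

def predicted_error(B1, B2, B3, B4, B5):
    """Average simulation error per cent as a function of the bit values."""
    return BASE - (COEF["B1"] * B1 + COEF["B2"] * B2 + COEF["B3"] * B3
                   + COEF["B4"] * B4 + COEF["B5"] * B5
                   + COEF["B2B4"] * B2 * B4 + COEF["B3B5"] * B3 * B5)
```

For example, variation 24 (bits 3 and 5 true) yields 10.57 - 1.85 = 8.72, and variation 37 (all bits true) yields 10.57 - 4.00 = 6.57, matching the corresponding entries of Table 5.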
According to Table 4, setting any one bit true must yield a non-negative contribution to the strength of the model. This is obvious for bits 1, 2, and 3 from the fact that they stand for optimal parameter values. Bits 4 and 5 each stand for the addition of two parameters to the model, which implies greater predictive strength. The research did not examine the theoretical foundations for the assumption that the effects represented by the bits are necessarily additive in the manner shown. The calculation of interaction effects is dependent upon this assumption. It should be mentioned parenthetically, however, that the partitioning described is valid for two other error tables not included in this report.
The coefficient for B4 is zero, as implied previously. It can be seen from the expression that B1, B2, and B3 each contribute to error reduction by about one error, while the B2B4 interaction and B5 each account for about half an error, and the B3B5 interaction accounts for about three-quarters of an error.

The relative contributions of each of the model elements can be expressed in a slightly different way, using the same error information:

Dominant acceptance optimal instead of modal . . . . . 0.86
Nondominant acceptance optimal instead of modal  . . . 0.87
First acceptance optimal instead of modal  . . . . . . 0.79
Three nondominant acceptance parameters instead of one
    If parameters optimal  . . . . . . . . . . . . . . 0.42
    If parameters modal  . . . . . . . . . . . . . . . 0.00
Three first-acceptance parameters instead of one
    If parameters optimal  . . . . . . . . . . . . . . 1.06
    If parameters modal  . . . . . . . . . . . . . . . 0.41
Table 5 indicates that the model is not very sensitive to parametric variation. There are several reasons for this. The conditional prediction effect tends to keep the error per cent low, reducing the range of variation. The subjects tended to play in much the same way, although the individual variation was sufficient to be interesting. (See Appendix C for a synopsis.) The dispersion of idiosyncratic optimal parameter estimates about the mode is the source of increased error of simulation when the model element is set to the modal value. This dispersion was small for the experimental sample. Also, for many of the parameters, several estimates were optimal for the particular Ss. When more than one estimate was associated with optimal prediction, the convention used was to select the numerically smallest of the optimal estimates. The modal value was one of the optimal values in many cases.
In spite of the fact that no one parameter accounted for more than about one error on each hand, on the average, the combined effect of the parameters was very significant. The difference between the performances of models using all idiosyncratically optimal parameter estimates and models using constant parameter values (the modal values) is 10.57 - 6.57 = 4.00. Based upon correlated samples, and using an appropriate normalizing transform, t = 6.62, df = 10, p < .001. The model predicts significantly better if it incorporates individually optimal parameter estimates, even though the effect of any single parameter is not very great.
Cross-Validation of the Model
The model was derived from analysis of behavior on 16 trials out of 60. The first 30 were considered as practice trials. Fourteen trials were set aside from the second 30 as cross-validation data. No information in these trials was used, or even known, before the model had been derived and mechanized. If the model could predict behavior only on the analyzed hands, it would hardly be of interest. If, on the other hand, the model could predict almost as well the behavior on the cross-validation sample, it could provide the basis for further research and improvement, if the decision-making behavior in this game were intrinsically interesting.
Both the model structure and the individual parameter estimates were empirically derived from the analyzed protocols. The parameter estimates were especially subject to capitalization upon chance. By playing the 14 holdout hands using the parameter estimates that were optimal for the analyzed hands, this capitalization is completely eliminated. Table 7 records the results of this cross-validation. The cross-validation shrinkage for the strongest variation (37) is from 6.57 per cent to 7.86 per cent, a shrinkage of less than 20 per cent. This difference is statistically significant, based upon a t-test of the difference between the correlated samples. Using the appropriate normalizing transform, arcsine square root (p/100), t = 1.91 with df = 10, p < .05. The choices of each subject were determined jointly by the situation (the offered cards as stimuli), individual propensity, and personal-situation interaction. The parameter estimates were determined strictly from the recorded choices, and therefore from these three factors (plus "chance"). The cross-validation shrinkage of 20 per cent reflects the elimination of the personal-situation interaction factor. But this shrinkage is confounded with whatever differences in model structure might have arisen from the situation (the particular 16 hands analyzed). The truth is that the structure of the model was induced from the behavior of just one S and originally supported by analysis of just one other S. Still, had the 14 hold-out hands been the behavior originally analyzed, the model might have been structurally different. The independence of the model structure from the particular stimuli in the analyzed 16 hands cannot be tested directly. If the model used parameter estimates which were optimal for the 14 cross-validation hands, and if the predictions for these hands were significantly inferior to the corresponding predictions for the 16 analyzed hands, also using optimal parameter estimates, then it could be supposed that the model was determined largely from the characteristics of the 16 hands. The results summarized in Table 7 show that the model predicts better for the cross-validation hands than for the analyzed hands, with an average error of only 5.65 per cent as compared with 6.57 per cent, for the strongest variation. This supports the conclusion that the model structure is valid for behavior which could not have influenced its derivation. That the difference is in favor of the cross-validation hands can be attributed partly to easier predictions, but no analysis for this ease of prediction factor was carried out.
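The significance tests above rest on the arcsine square root normalizing transform applied to per-cent errors before a correlated-samples t-test. The dissertation does not show its computation, so the following Python sketch is a hypothetical reimplementation of that procedure, not the original code; the function names are invented for illustration.

```python
import math

def arcsine_transform(p):
    """Normalizing transform for a percentage p: arcsin(sqrt(p/100))."""
    return math.asin(math.sqrt(p / 100.0))

def paired_t(xs, ys):
    """t statistic for correlated samples of per-cent errors,
    computed on the transformed values; df = n - 1."""
    diffs = [arcsine_transform(x) - arcsine_transform(y) for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n), n - 1
```

Applied to the per-subject error percentages of Table 7 (omitting S4, hence df = 10), this is the form of test reported for both the optimal-versus-modal comparison and the cross-validation shrinkage.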
Table 7, Error Per Cent Summary, gives the error of prediction for six combinations of variation and sample. The information for each subject is averaged over the 16 hands of the analyzed sample and the 14 hands of the holdout sample. Tables 8, 9, and 10 give this information for
TABLE 7
ERROR PER CENT SUMMARY

                      Cross-Validation Sample (14 Hands)                      Analyzed Sample (16 Hands)
          All Parameters  All Parameters  All Parameters  All Parameters  All Parameters  All Parameters
Subject   Optimal*        Modal*          Optimal         Modal           Optimal         Modal
          (Variation 37)  (Variation 00)  (Variation 37)  (Variation 00)  (Variation 37)  (Variation 00)
1              7.2             7.2             6.3             7.2             6.4             9.6
2              7.0             6.1             3.5             7.0             9.8            13.1
3              8.0             8.0             7.1             8.0             7.8            10.9
4             21.2            28.8            14.4            28.8            15.8            33.3
5              3.5             4.4             3.5             4.4             4.7            10.1
6              7.0             7.8             3.5             8.7             5.6             7.1
7              6.2             8.0             6.2             6.2             3.2             6.4
8              9.0            12.6             7.2            10.8             8.4             9.2
9              8.2            10.9             5.5             9.1             6.9            15.4
10             9.8            10.7             6.3             9.8             6.2            11.7
11             9.6             7.9             6.1             6.1             7.5            11.2
12            11.0            17.0             7.0            15.0             5.8            11.6
Average
(omitting
S4)            7.86            9.14            5.65            8.39            6.57           10.57

*Optimal or modal with respect to the 16 analyzed hands, not the 14 cross-validation hands.
TABLE 8
ERROR MATRICES FOR 16 ANALYZED HANDS
ONE FIRST, ONE NONDOM, FIRST MODAL, NONDOM MODAL, DOM MODAL (16) 0, 10
SUBJECT   DECK NUMBERS
          31 32 35 36 39 40 43 44 47 48 51 52 55 56 59 60   AVRG   OVERALL
1 12.5 0.0 0.0 2B.6 0.0 22.2 11.1 0.0 12.5 0.0 12.5 1A.3 12.5 12.5 0.0 12.5 9.A5 9.60
2 A2.9 1A.3 0.0 0.0 0.0 0.0 A2.9 22.2 0.0 0.0 25.0 0.0 20.0 12.5 11.1 22.2 13.32 13.11
3 12.5 0.0 22.2 22.2 0.0 11.1 12.5 0.0 12.5 0.0 1A.3 1A.3 20.0 12.5 11.1 11.1 11.02 10.9A
* 22.2 20.0 22.2 2B.6 55.6 33.3 12.5 75.0 50.0 11.1 66.7 50.0 0.0 0.0 A2.9 50.0 33.75 33.33
5 12.5 0.0 22.2 28.6 0.0 12.5 12.5 0.0 12.5 22.2 12.5 12.5 0.0 0.0 11.1 0.0 9.95 10.08
6 12.5 0.0 12.5 0.0 0.0 1A.3 11.1 12.5 0.0 0.0 0.0 0.0 12.5 11.1 0.0 22.2 6.80 7.'1A
T 12.5 0.0 33.3 28.6 0.0 0.0 12.5 0.0 0.0 0.0 12.5 0.0 0.0 0.0 11.1 0.0 6.91 6.35
B A2.9 0.0 22.2 12.5 0.0 0.0 12.5 0.0 12.5 11.1 25.0 0.0 0.0 0.0 11.1 0.0 9.36 9.16
9 3T.5 0.0 22.2 12.5 0.0 33.3 12.5 0.0 12.5 11.1 66.7 12.5 0.0 12.5 11.1 11.1 15.97 15.38
10 33.3 0.0 33.3 25.0 0.0 22.2 12.5 0.0 0.0 0.0 22.2 0.0 0.0 11.1 11.1 22.2 12.07 11.72
11 12.5 0.0 22.2 33.3 0.0 0.0 12.5 0.0 12.5 0.0 33.3 0.0 0.0 12.5 11.1 22.2 10.76 11.19
12 22.2 16.7 0.0 0.0 11.1 16.7 12.5 0.0 AO. 0 0.0 22.2 12.5 12.5 0.0 11.1 12.5 11.87 11.57
AVERAGE 23.00 A.25 17.71 18.32 5.56 13.81 1A.80 9.1A 13.75 A.63 26.07 9.67 6.A6 7.06 11.90 15.51 12.60 12.A7
AVERAGE
NINUS SA 23.07 2.81 17.30 17.39 1.01 12.03 15.01 3.16 10.A5 A.OA 22.38 6.01 7.05 7.70 9.09 12.37 10.68 10.57
THREE FIRST, THREE NONDOM, FIRST OPT, NONDOM OPT, DOM OPT (16) 37
SUBJECT   DECK NUMBERS
          31 32 35 36 39 40 43 44 47 48 51 52 55 56 59 60   AVRG   OVERALL
1 12.5 0.0 0.0 0.0 0.0 11.1 11.1 0.0 12.5 0.0 12.5 1A.3 12.5 12.5 0.0 0.0 6.19 6.AO
2 1A.3 1A.3 0.0 1A.3 0.0 11.1 28.6 11.1 0.0 0.0 25.0 0.0 0.0 12.5 11.1 11.1 9.59 9.8A
3 37.5 0.0 0.0 11.1 0.0 11.1 0.0 0.0 12.5 0.0 0.0 1A.3 20.0 12.5 0.0 11.1 8.13 7.81
A 11.1 20.0 11.1 1A.3 22.2 11.1 0.0 25.0 33.3 11.1 0.0 25.0 1A.3 1A.3 1A.3 33.3 16.28 15.83
5 0.0 0.0 11.1 1 A. 3 0.0 0.0 12.5 0.0 12.5 11.1 0.0 0.0 0.0 0.0 11.1 0.0 A.5A A.65
6 0.0 0.0 0.0 0.0 0.0 1A.3 11.1 12.5 0.0 0.0 0.0 0.0 12.5 11.1 0.0 22.2 5.23 5.56
T 0.0 0.0 0.0 1A.3 0.0 0.0 12.5 0.0 0.0 0.0 0.0 11.1 0.0 0.0 11.1 0.0 3.06 3.17
8 71 .A 0.0 0.0 12.5 0.0 11.1 0.0 0.0 12.5 11.1 12.5 0.0 1A.3 0.0 0.0 0.0 9.09 8. AO
9 0.0 1A.3 11.1 25.0 0.0 0.0 0.0 0.0 12.5 11.1 0.0 0.0 1A.3 12.5 0.0 11.1 6.99 6.92
10 16.7 0.0 0.0 0.0 0.0 0.0 12.5 0.0 0.0 0.0 11.1 11.1 0.0 11.1 11.1 22.2 5.99 6.25
It 12.5 0.0 11.1 11.1 0.0 0.0 12.5 0.0 12.5 0.0 22.2 0.0 0.0 12.5 11.1 11.1 7.29 7.A6
12 11.1 0.0 0.0 0.0 11.1 0.0 0.0 0.0 AO.O 0.0 11.1 12.5 0.0 0.0 0.0 12.5 6.15 5.79
AVERAGE 15.59 A.05 3.70 9.7A 2.78 5.82 B.AO A.05 12.36 3.70 7.87 7.36 7.32 8.25 5.82 11.23 7.38 7.3A
AVERAGE
MINUS SA 16.00 2.60 3.03 9.33 1.01 S.3A 9.16 2.15 10.A5 3.03 8.59 5.75 6.69 7.70 5.95
*•2?
6.57 6.57
TABLE 9
ERROR MATRICES FOR 14 HOLDOUT HANDS, USING PARAMETERS ESTIMATED FROM THE 16 ANALYZED HANDS
ONE FIRST, ONE NONDOM, FIRST MODAL, NONDOM MODAL, DOM MODAL (16) 0, 10
SUBJECT   DECK NUMBERS
          33 34 37 38 41 42 45 46 49 50 53 54 57 58   AVRG   OVERALL
1 12.9 12.9 22.2 0.0 0.0 11.1 0.0 20.0 0.0 0.0 11.1 0.0 12.9 0.0 7.) T.2
2 0.0 0.0 0.0 0.0 0.0 11.1 0.0 99.9 0.0 0.0 11.1 11.1 11.1 0.0 9.4 6.1
9 14.T 12.9 11.1 0.0 0.0 11.1 0.0 11.1 0.0 11.1 11.1 16.T 12.9 0.0 8.1 8.0
4 16.T 0.0 14.9 99.9 20.0 11.1 66.T 3T.9 40.0 9T.1 11.1 0.0 28.6 99.6 28.0 28.8
9 0.0 12.9 11.1 0.0 0.0 0.0 0.0 12.9 0.0 0.0 11.1 0.0 12.9 0.0 4.9 4.4
6 12.9 0.0 11.1 0.0 0.0 12.9 0.0 93.9 0.0 0.0 11.1 11.1 12.9 0.0 7.4 T.8
T 12.9 0.0 12.9 22.2 0.0 11.1 0.0 12.9 0.0 0.0 11.1 0.0 22.2 0.0 T.4 8.0
a 22.2 0.0 14.9 22.2 0.0 11.1 0.0 12.9 12.9 11.1 11.1 16.T 3T.9 0.0 12.2 12.6
a 0.0 12.9 0.0 93.9 0.0 11.1 0.0 12.9 12.9 11.1 29.0 16. T 12.9 0.0 10.9 10.9
to 12.9 0.0 11.1 90.0 0.0 12.9 0.0 93.9 0.0 0.0 11.1 0.0 12.9 0.0 10.2 10.T
it 0.0 0.0 12.9 99.9 0.0 11.1 0.0 12.9 0.0 11.1 11.1 0.0 12.9 0.0 7.4 T.9
12 0.0 0.0 0.0 42.9 20.0 29.0 0.0 12.9 0.0 9T.1 11.1 0.0 90.0 40.0 18.9 17.0
AVERAGE 0.80 4.1T 10.02 19.TO 9.99 11.9T 9.94 20.30 9.42 19.29 12.2T 6.02 19.T4 T.96 10.90 10.78
AVERAGE
NINUS S4 8.08 4.99 9.49 18.94 1.82 11.62 0.0 18.T4 2.2T 9.24 12.9T 6.9T 18.94 9.44 9.00 9.14
THREE FIRST, THREE NONDOM, FIRST OPT, NONDOM OPT, DOM OPT (16) 37
SUBJECT   DECK NUMBERS
          33 34 37 38 41 42 45 46 49 50 53 54 57 58   AVRG   OVERALL
1 0.0 12.9 22.2 0.0 0.0 22.2 0.0 20.0 0.0 0.0 11.1 0.0 12.9 0.0 T.2 T.2
2 11.1 0.0 0.0 0.0 0.0 22.2 0.0 93.3 0.0 0.0 11.1 0.0 11.1 0.0 6.9 T.O
9 0.0 12.9 11.1 0.0 0.0 0.0 0.0 22.2 11.1 11.1 0.0 16.T 29.0 0.0 7.8 8.0
4 0.0 14.9 28.6 39.9 0.0 0.0 99.9 29.0 0.0 42.9 22.2 28.6 0.0 44.4 19.9 21.2
9 0.0 12.9 11.1 0.0 0.0 0.0 0.0 12.9 0.0 0.0 11.1 0.0 0.0 0.0 9.4 9.9
4 12.9 0.0 11.1 0.0 0.0 12.9 0.0 39.9 0.0 0.0 11.1 11.1 0.0 0.0 6.9 T.O
T 12.9 0.0 12.9 11.1 0.0 11.1 0.0 12.9 0.0 0.0 11.1 0.0 11.1 0.0 9.9 4.2
8 11.1 0.0 14.9 22.2 0.0 22.2 0.0 0.0 0.0 11.1 0.0 16.T 29.0 0.0 8.8 9.0
* 11.1 12.9 0.0 39.3 12.9 0.0 0.0 0.0 0.0 0.0 0.0 16.T 29.0 0.0 7.9 8.2
10 12.9 0.0 11.1 9T.9 0.0 29.0 0.0 39.3 0.0 0.0 11.1 0.0 0.0 0.0 9.9 9.0
11 11.1 0.0 12.9 99.9 0.0 22.2 0.0 12.9 0.0 11.1 11.1 0.0 12.9 0.0 9.0 9.6
12 11.1 0.0 0.0 0.0 20.0 12.9 0.0 0.0 11.1 42.9 22.2 0.0 0.0 40.0 11.4 11.0
AVERAGE T.TS 9.94 11.21 14.24 2.T1 12.90 2.T8 IT .06 1.09 9.92 10.19 T.4T 10.19 T .04 8.99 6.97
AVERAGE
NINUS S4 0.44 4.99 8.69 12.90 2.99 19.44 0.0 16.94 2.02 6.99 9.09 9.96 11.11 9.64 T.40 7.06
TABLE 10
ERROR MATRICES FOR 14 HOLDOUT HANDS, USING PARAMETERS ESTIMATED FROM THE 14 HOLDOUT HANDS
ONE FIRST, ONE NONDOM, FIRST MODAL, NONDOM MODAL, DOM MODAL (14) 0, 10
SUBJECT   DECK NUMBERS
          33 34 37 38 41 42 45 46 49 50 53 54 57 58   AVRG   OVERALL
1 12.9 12.9 22.2 11.1 0.0 11.1 o.o 20.0 0.0 0.0 U.l 0.0 0.0 0.0 7.2 7.2
2 0.0 0.0 0.0 II.1 0.0 11.1 0.0 33.3 0.0 0.0 11.1 U.l U.l 0.0 4.3 7.0
3 14.T 12.9 11.1 11.1 0.0 11.1 0.0 U.l 0.0 U.l U.l 14.7 0.0 0.0 0.0 0.0
4 14.T 0.0 14.3 33.3 20.0 11.1 44.7 37.9 40.0 97.1 U.l 0.0 20.4 95.4 20.0 20.0
9 0.0 12.9 11.1 11.1 0.0 0.0 0.0 12.9 0.0 0.0 U.l 0.0 0.0 0.0 4.2 4.4
» 12.9 0.0 11.1 II.1 0.0 29.0 0.0 33.3 0.0 0.0 U.l U.l 0.0 0.0 0.2 0.7
T . 12.9 0.0 12.9 11.1 0.0 11.1 0.0 12.9 0.0 0.0 U.l 0.0 U.l 0.0 9.9 4.2
S 22.2 0.0 14.3 11.1 0.0 U.l 0.0 12.9 12.9 11.1 U.l 14.7 29.0 0.0 10.9 io.a
9 0.0 12.9 0.0 22.2 0.0 11.1 0.0 12.9 12.9 U.l 29.0 14.7 0.0 0.0 0.0 9.1
10 12.9 0.0 11.1 3T.9 0.0 29.0 0.0 33.3 0.0 0.0 U.l 0.0 0.0 0.0 9.3 9.0
It 0.0 0.0 12.9 22.2 0.0 11.1 0.0 12.9 0.0 U.l U.l 0.0 0.0 0.0 5.0 4.1
12 0.0 0.0 0.0 20.4 20.0 29.0 0.0 12.9 0.0 97.1 11.1 0.0 33.3 40.0 14.3 19.0
AVERAGE 0.00 4.1T 10.02 10.4T 3.33 13.44 9.94 20.30 9.42 13.23 12.27 4.02 9.09 T.94 9.00 10.09
AVERAGE
MINUS S4 0.00 4.99 9.43 IT.12 1.02 13.09 0.0 10.74 2.2T 9.24 12.37 4.97 7.32 3.44 0.23 0.39
THREE FIRST, THREE NONDOM, FIRST OPT, NONDOM OPT, DOM OPT (14) 37
SUBJECT   DECK NUMBERS
          33 34 37 38 41 42 45 46 49 50 53 54 57 58   AVRG   OVERALL
1 12.9 12.9 22.2 0.0 0.0 U.l 0.0 20.0 0.0 12.9 0.0 0.0 0.0 0.0 4.9 4.3
2 0.0 0.0 0.0 0.0 0.0 U.l 0.0 U.l 0.0 0.0 U.l U.l 0.0 0.0 3.2 3.5
3 0.0 12.9 11.1 0.0 0.0 0.0 0.0 22.2 U.l U.l 0.0 14.7 12.5 0.0 4.9 7.1
4 0.0 0.0 14,3 11.1 20.0 U.l U.l 0.0 0.0 14.3 33.3 20.4 20.4 22.2 13.9 14.4
9 0.0 12.9 11.1 0.0 0.0 0.0 0.0 12.9 0.0 0.0 U.l 0.0 0.0 0.0 3.4 3.9
4 0.0 0.0 11.1 0.0 0 .0 0.0 0.0 U.l 0.0 0.0 U.l U.l 0.0 0.0 3.2 3.9
T 12.9 0,0 12.9 11.1 0.0 U.l 0.0 12.9 0.0 0.0 U.l 0.0 U.l 0.0 5.9 4.2
G 11.1 0,0 14.3 11.1 0.0 22.2 0.0 0.0 0.0 U.l 0.0 14.T 12.5 0.0 T.l T.2
9 11.1 12.9 0.0 22.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 14.7 12.5 0.0 5.4 5.5
10 0.0 0.0 11.1 29.0 12.9 12.9 0.0 U.l 0.0 0.0 U.l 0.0 0.0 0.0 4.0 4.3
11 0.9 0.0 12.9 22.2 0.0 U.l 0.0 12.9 0.0 U.l U.l 0.0 0.0 0.0 9.0 4.1
12 0.0 0.0 0.0 0.0 0.0 29.0 0.0 12.9 0.0 42.9 0.0 0.0 0.0 20.0 7.2 7.0
AVERAGE 0.94 4. IT 10.02 0.94 2.71 9.41 0.93 10.44 0.93 0.90 0.33 0.40 4.43 3.52 4.10 4.30
AVERAGE
NINUS S4 4.29 4.99 9.43 0.33 1.14 9.4T 0.0 11.41 1.01 0.04 4.04 4.97 4.42 1.02 5.40 5.45 U >
each hand, and the same information averaged over Ss. Table 8 gives this information for the 16 analyzed hands, using the parameter estimates which were optimal or modal for the 16 analyzed hands, according to the rules for forming the variations of different strengths. Table 9 gives the errors of prediction for the 14 holdout hands using the parameter estimates which were optimal or modal for the 16 analyzed hands. Table 10 shows the results of using the parameter estimates which were optimal or modal for the 14 holdout hands in the prediction of behavior on the 14 holdout hands.
CHAPTER VI
SUMMARY AND CONCLUSIONS
The purpose of the research, as outlined previously, was essentially methodological. The basic approach and most of the techniques applied were established before any data were collected, and in fact before the game was selected. The outcomes of the research will be discussed with respect to the stated purpose.

Five process rules constitute the model. Decisions to accept offered cards are based solely upon the rules of the game, the color and magnitude of the offered card, and the previous decisions. The model is completely deterministic.
The computer program is successfully kept on the correct track by very simple features in the source code associated with the entry points ACCEPT and REJECT. After a prediction which differs from the recorded human choice, an error of simulation is scored and the program is forced to make the recorded choice. Thus, there are no problems associated with the conditional prediction effect.
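The forcing scheme just described can be sketched compactly. The original mechanization was Fortran, with ACCEPT and REJECT as entry points; the Python below is an illustrative reconstruction with invented names, showing only the control logic: every prediction is scored against the recorded choice, and the game state is then advanced by the subject's actual choice, so later predictions are conditioned on the human history rather than on the model's own errors.

```python
class HandState:
    """Minimal stand-in for the program's game state (hypothetical)."""
    def __init__(self):
        self.history = []

    def apply(self, card, accepted):
        """Advance the state by a (card, choice) pair."""
        self.history.append((card, accepted))

def play_hand(predict, recorded_choices, state):
    """Score one hand under conditional prediction.

    predict(state, card) -> bool is the model's accept/reject rule;
    recorded_choices is the subject's protocol as (card, choice) pairs.
    Returns the error per cent for the hand."""
    errors = 0
    for offered_card, human_choice in recorded_choices:
        model_choice = predict(state, offered_card)
        if model_choice != human_choice:
            errors += 1                      # an error of simulation is scored
        state.apply(offered_card, human_choice)  # forced to the recorded choice
    return 100.0 * errors / len(recorded_choices)
```

Because the state is always advanced by the human choice, a single wrong prediction cannot cascade into a divergent sequence of later predictions, which is the conditional prediction effect discussed earlier.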
Evaluating the use of Fortran as the programming language has two aspects. Certainly the model was simple enough to express in Fortran, and only basic skills were needed to develop and test the program. The main purpose, however, for using Fortran was to make public all of the model mechanization. The understandability of the source code was not tested or measured in any way, but most scientists do have some familiarity with Fortran or access to others who do. The source code presented in Appendix D does include some dialect features associated with a particular manufacturer's implementation of Fortran, but the code could quite easily be rewritten in any other dialect. The successful use of Fortran in this research is therefore a reasonable but unproven conclusion.
The only evidence for the strictly empirical
derivation of the model is in the details of the analysis
process described in Chapter IV. The analysis was
successfully free of any conscious bias. No assumptions
regarding cognitive processes or the structure of the model
were held in advance of protocol analysis. The process
rules induced here are not unique; other rules might emerge
from the same protocol data for other analysts. Had the
first protocols analyzed been those of Ss other than S11
and S6, somewhat different rules could be expected. Had
the sample Ss been different, the model
might have been very different. Nonexperimental anecdotal
support for this is found in the fact that a haphazard
sample of two scientific computer programmers, upon reading
the rules of the experimental game, at once proceeded to
try to discover an optimal playing strategy based upon very
complex probability calculations. It would appear in
retrospect that to derive empirically a more nearly general
model it would be necessary to ensure a more diverse
subject sample.
The model was reasonably successful in predicting
the behavior of 11 individual Ss, each of whom played the
game at least somewhat differently. A twelfth S, unlike any
of the others, had a very strong color bias which was not
accounted for by the model. The model has seven numeric
parameters which were fitted to individual S behavior. Each
S had a unique set of parameter estimates. One of the
purposes of the research was to develop a model which was
general in structure but individual in function. The
generality of structure is indicated by the fact that a
single short main computer program predicted behavior for
all subjects, with 93.4 per cent accuracy for 11 synoptic
subjects. The individuality of function was provided by the
incorporation of individually fitted parameter estimates.
The importance of these individualizing parameters is seen
from the fact that when their values are held constant
instead of being allowed to vary according to individual
differences, model performance is very significantly
degraded. The results may be interpreted to support the
conclusion that individuals can be characterized in terms
of their parameter estimates, and that the parameterized
model can account for individual differences in behavior.
The research purposed to develop a measure of the
goodness of simulation. The only measure presented was
error per cent. For the case of sequential decisions, the
conditional prediction effect obtains, so that error per
cent reflects the highly interdependent decisions made by
the model. It would be invalid to use error per cent in the
usual formulae for the significance of a proportion, the
significance of differences between proportions, etc. The
research did not develop a method to test whether a model
predicted significantly better than chance. For the 14
cross-validation hands, the average error per cent was
5.65 per cent. The question which should be asked and
answered is this: does this model account for all but an
insignificant proportion of the variance in the observed
natural behavior? For the author, this remains a
challenging open question.
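For concreteness, error per cent is simply errors over choices. A minimal Python sketch; the error and choice counts shown are hypothetical, not taken from the tables:

```python
def error_per_cent(errors, choices):
    """Per cent of a subject's choice points at which the model's
    prediction differed from the recorded choice."""
    return 100.0 * errors / choices

# The denominator may differ from subject to subject, as it did for
# 10 of the 12 Ss in this study.  Hypothetical counts:
per_subject = [error_per_cent(e, n) for (e, n) in [(3, 60), (7, 56)]]
average = sum(per_subject) / len(per_subject)
```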
On the other hand, it may be that error per cent is
quite satisfactory for many purposes. In classical
statistical hypothesis testing, the researcher chooses,
presumably in advance, a type I error per cent he is
willing to tolerate. For experimental decision-making
situations like the one used in this research, the model
can incorporate increasingly many parameters, using up
degrees of freedom, until every single decision point is
represented by a parameter. As the number of parameters
approached the number of choice points, the error per cent
would approach zero, inexorably. The researcher could
establish in advance an error per cent tolerable for his
purposes, and add parameters to his model until this value
was reached. Comparing models in terms of average error per
cent is valid: error per cent is directly based upon
numbers of errors, which are arrived at by counting. Ratio
scale status for the error per cent measure is not required
for the arithmetic operations used in the t-tests, however.
The theoretical distribution of this error per cent measure
is not known (to the author, at any rate). For each S, the
denominator used in calculating the error per cent might be
different, as it was for 10 of the 12 Ss in this study. The
conditional prediction effect upon the distribution is
probably not amenable to any closed-form analysis. It is,
however, reasonable to assume that this measure may be
manipulated as belonging to an interval scale, with
tolerable inaccuracy.
The research developed a technique for studying the
effects of the parameters on the predictive power of the
model. The technique involves the generation of models of
varying strengths by selectively nullifying the effects of
the various parameters, and calculation of the error
reduction associated with particular parameter
combinations. The technique permits the derivation of an
exact expression for the contribution of the parameters to
the predictive strength of the model. Arbitrary though it
might seem, the bit notation used facilitates the
calculations of parameter contributions, while the octal
identifiers for the variations are much more convenient
than mnemonic identifiers composed of English words. The
technique involving model variation and error partition
emerged from the search for a means to test simulation
adequacy in terms of classical hypothesis testing. That
search is not over, but the methodology described in this
report is offered as part of a feasible approach to that
goal.
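The bookkeeping behind the variations can be sketched as a bit mask over the seven parameters, with each variation labelled by the octal form of its mask. The estimates and neutral defaults below are hypothetical; only this bookkeeping idea follows the text:

```python
def make_variation(fitted, defaults, mask):
    """Keep the fitted estimate for each parameter whose bit is set;
    nullify (hold at a neutral default) every other parameter."""
    return [f if (mask >> i) & 1 else d
            for i, (f, d) in enumerate(zip(fitted, defaults))]

fitted   = [7, 3, 5, 4, 2, 1, 6]      # one subject's estimates (hypothetical)
defaults = [0, 0, 0, 0, 0, 0, 0]      # neutral values disabling each rule

full    = make_variation(fitted, defaults, 0o177)  # all seven parameters active
weakest = make_variation(fitted, defaults, 0o000)  # every parameter nullified
mixed   = make_variation(fitted, defaults, 0o005)  # octal label: variation 005
```

Error reduction for a parameter combination is then the difference in error per cent between the corresponding variations.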
APPENDIX A
GAME INSTRUCTIONS
The card game you are about to play was developed as
part of a research program designed to define a methodology
for understanding observed behavior in a problem-solving
(game-playing) situation.
1. The card deck for this game has 24 cards (12 of
one red suit, ace through queen, and 12 of one black suit,
ace through queen). Each deck has been randomly shuffled
using a list of random numbers, and the sequence of decks is
itself a random arrangement (there is no pattern in the
presentation of successive decks). Each person will play the
same hands in the same sequence.
2. I will show you the top 10 cards, one at a time.
As each card is exposed, I will call out the color and value
of the exposed card and you decide whether to accept or
reject it. "Think aloud" as you make your choice. Of the
top 10 cards, you must accept five.
3. The numbered cards have a value equal to the
number on the card; the ace is worth one point, the jack,
11 points, and the queen, 12 points. The goal of the game
is to get the maximum total score for the five cards you
select. The values of the red cards and the black cards
will be added separately, and the larger sum will be your
total score for that particular hand. Thirty hands of the
same game will be played during each one-hour session. You
should be able to improve your performance with practice.
4. You are asked to do all of your thinking aloud,
and a tape recording will be made during the course of play.
You may use pencil and paper if you wish, provided you
concurrently verbalize what you are doing.
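The scoring rule of item 3 can be stated compactly in code. This Python sketch is illustrative, borrowing the simulation program's convention of black values positive and red values negative:

```python
def hand_score(selected):
    """Sum red and black values separately (rule 3); the larger sum
    is the score for the hand.  Black cards are positive integers,
    red cards negative."""
    black = sum(c for c in selected if c > 0)
    red = sum(-c for c in selected if c < 0)
    return max(black, red)

# Black 10, queen (12), and 8 against red jack (11) and 3:
hand_score([10, 12, 8, -11, -3])   # blacks total 30, reds 14: score 30
```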
APPENDIX B
EXCERPTS FROM ALL PROTOCOLS
S1 (after H30)
I think it works best to try to get all cards of the
same color instead of mixing them.
(Summary comment, on inquiry)
I found that the best system that I worked out was
trying to get all of one color, instead of mixing the two
colors and having half red and half black, and get all reds
or blacks, which generally gave a higher score.
S2 (at start of H4)
B10 I accept it; I'm going to work on blacks and
high numbers.
R11 I think I'll accept that too so I can work on
both.
R3 I reject it.
B12 That settles it; I'm going to be working on
the blacks now.
(At start of H26)
R? I accept it and hope I don't see a high black.
B10 Oh darn! Since the 10 is higher I'll take
that.
B6 I accept it and work on blacks.
(After accepting three cards on H40)
R11 Darn! Since the jack and 10 are higher than
the 7 and 9, I'll switch back to reds again and
accept it.
(Summary statement)
It's best to wait for a high card when you're first
starting and stick with the high cards in that one color.
If two high cards come up in a row, one in each color, then
you take them both and wait for the higher cards in
whichever color shows up. But half the time, when the last cards
are coming around, you have to gamble, and that's what I do.
S3 (on H5, after accepting four cards)
B9 I'm holding 12 black in my hand, 26 red in my
hand. There are nine cards showing. I'll
leave it and gamble that the last card is red.
(On H21, after taking two cards)
B5 I'll take it, because I'm building blacks and
the other cards have been small, so there might
not be many high cards in this hand.
(On H23, after taking B8 and Bll)
R9 Six cards showing. I'll take the red 9, in
case the rest of the cards are red.
R11 I'll take it; high red which gives me a higher
score in the red than in the black.
(Summary statement, on inquiry)
My strategy would be that I wait for a 7 or better
in the first three cards before I accept a card and the
color of that card determines which color I will start
collecting, trying always to get five of one color. If a high
card shows up in each color in the first three cards, I
usually take them both and let the color of the next high
card decide which color I will build on.
S4 (after H1)
Since I started collecting blacks, it doesn't seem
worthwhile to take high reds since in all probability that
section won't be counted anyway, and I don't know how the
blacks will be stacked because of the random distribution.
(After H4)
It seems as though in the previous hands, when I had
started accepting one color, high cards in the other color
would come up and I didn't choose those and it seems as
though the large numbers have been coming in series, so I
decided to collect high numbers in both colors.
(During H7)
After you've passed up one or two high cards in a
color (red) because you've started with the other color
(black) you may as well continue with the blacks and reject
all reds.
(During H21)
. . . the theory now being that now that I've
started with the black series I'll take any blacks over
five, and reject the rest.
(On H31, the beginning of the second session)
R7 I accept, because it's red and on the way up
the stairs today I decided to go with the reds
on all hands today. On some hands I might have
less of a chance, but on the over-all 30 hands,
I thought it would be nice to have some kind of
a system.
(On H32, considering first card offered)
B9 Reject, black. It looks like a nice card and
I'm tempted to take it, but I'm going to go
ahead with the reds so that I can have the
security of knowing that I'm doing something.
(After H39)
Once again, I think it is just as well to go on with
the reds, because although it might not always be best on an
individual hand, since there is no way of knowing what is
coming up on the different hands or what their distribution
is, sticking to the reds seems to work well over-all. The
fact that so many blacks came up this time is just a special
case.
(During H44, after B12, B7, B6)
I have the urge, when I see a high black card on the
first card, to accept, but I decide to be consistent with
the reds because it doesn't mean anything, it's just like a
good omen.
(At start of H54)
B9 Accept. I just want to see what happens if I
go with the blacks this time.
(After H54)
My new philosophy is this: since it doesn't seem to
make much difference whether I stick to the reds or change
to the blacks, I'm going to play each hand as I please for
the remainder of the session.
(During H56)
As it turns out, I would have been better off to
change to blacks when the black 9 came up because the black
9 and 12 would have been higher than the red 11 and 8, but
it's too late now and I'll have to take the last three.
(At start of H57)
Accept, red, and I'm going back to my original
system for this session and accept only reds, because when you
have to play 30 hands, it's better to stick with one system.
(Summary, after H60)
Since you're not gaining or losing anything, it
seems at first that it would be better to play each hand by
ear and kind of try a system for each hand, but then,
because of my own temperament, I decided to go with the reds
for the second session. It would seem that it is better to
have some system rather than none at all, and over the long
haul it might be more beneficial to go with the reds, even
though I knew definitely that I'd miss out on a few hands.
S5 (H4 complete)
B10 I'll take the black 10; high black.
R11 I'll take the red jack, too, so I can accept
high cards in either color.
R3 I'll let the red 3 ride; too low.
B12 I'll take the black queen, high black, and now
I'm looking for blacks.
B6 I'll take the black 6. I'd rather take a
fairly high black early than be stuck with the
low cards later in the hand.
B8 I'll take the black 8, high black, and there
may not be a higher black left.
(At start of H8)
R6 I'll let the red 6 go; I'm looking for
something higher to start with.
B7 I'll take the black 7. Sevens are high enough
to start with, and I have a feeling there will
be more high blacks.
(H17, after rejecting R5 and accepting R9)
R6 I'll take the red 6 because it looks like there
might be a string of reds.
(Summary, after H60)
What I did, I examined the first few cards coming
out, and saw whether they were high cards or low cards, and
after I had accepted two relatively high cards in one color,
I stuck with that color and tried to get as many in that one
color as I could, because that's how the high point hands
came. That basically was my strategy. That is, to try to
get as many cards as possible in one suit and try to get
them high.
S6 (after H1)
For the first time around, I wasn't exactly sure how
they would come up but I suspected that I wouldn't take
anything lower than a 7 or 8. Other than that I had no real
strategy for the first time around.
(Start of H20)
R3 I'll probably never take a 3, especially so
early in the hand. I'll pass it up.
B2 Same with the 2.
R8 I'll take that; I'd take that most any time.
(After taking R7, R10, B9, on H43)
B12 I didn't want this, but I better take it
because the 12 and 9 are higher than the 7 and 10.
(Summary statement, after H60)
First of all a hunch that I had in the beginning
proved out: that it was never really worth taking lower
than a 4 and possibly a 5. I think I learned a little later
in the game than I should have that I shouldn't have split
in many cases where I did, early in the game. That is, I'd
have two or three good cards in one color and a high card
would come up in the other color, and I was so impressed
that I grabbed it real quick, without realizing that my
chances were so slim of bettering my hand in this second
color, even though it might be a jack or queen. Lastly,
whenever I had three [cards] in one color, it was never
worth taking the opposite color; you'd never build up
anything on two cards.
S7 (summary statement, after H60)
I'd wait for the first high card to accept, and if
it were red, for example, I'd pretty much play the reds.
Sometimes I'd have to wait through as many as five cards
before a card, high enough to take, would come up. I tried
to get them all of one color, even if I had to take a rather
low number like a 4 or a 3, rather than to take a high card
of the other color because getting all five cards of one
color seemed to give more points than taking three of one
color and two of another. This, generally, seemed to work
best and is what I tried to do.
S8 (at the start of H31)
R7 I'll reject it; too low.
B7 Reject it; too low.
B12 Accept, high.
B4 Reject; too low.
R12 Accept; high and possibility of going either
way.
(Summary statement, after H60)
If a card comes up which is a 7 or an 8 in either
the red or black suit, you take that as the first card;
below that you wait until you get one of those. Then you
build on the color of the first high card accepted, always
taking a 7 or above in that color. Then, toward the end,
when there are only three or four cards left and you still
need two cards to complete the hand, you take any card in
the suit you're building on no matter how low it is. Also,
if, in the beginning, you have rejected three or four cards
for being too low, and two high cards come up, one in each
color, take both of them and see what color the next high
card is, and accept it, build on that color, and take any
cards of that color that come up after that.
S9 (H24, complete)
B2 Pass; low card.
B3 Pass; still too low.
R9 Take it.
B10 Take it.
R1 Pass.
B6 I will take it; semi-high.
R7 Will not take; stick to blacks.
R8 I will take it.
R10 I'll take that one.
(Summary statement)
Basically, I think how I did try to play the game
was if a card was a 5 or higher, I would try to take that
card; if it was lower, I would just pass. And whatever
color it would be, I would try to stick to that color
instead of trying to split colors. I would split points or
whatever and not come up so high. Usually, when we got past
four cards, I would take [pause]. You see, if the cards
were coming out quite low, usually I would try to take the
color that came out the worst high, which doesn't make
sense. But say if it was a split color and they were coming
out black and reds, and they were coming out quite low, I
would try to take that color that came up next. But,
basically, what I did try to do, within the first three or
four cards, I would try to take the card that would be 5 or
higher and basically stick to that color, unless I saw that
the opposite color was coming up and quite high cards, I
would try to maybe take one of those cards and build up the
other color.
S10 (H18, complete)
R4 Reject it; too low.
B7 Accept it; high.
B2 Reject; too low.
R2 Reject it; low.
B9 Accept it; high.
B11 Accept it; it's high.
B6 Accept it; it's black and it's high.
B10 Accept it; it's high.
(Summary statement)
I guess it would be take the first high card and,
depending on what color it is, take mostly the cards that
are of that color. Or else, if you have a high black and
right away have a high red, if the next card is a high
black, take all the rest blacks, or if the next card is a
high red, take all the rest red.
S11 (H4, complete)
B10 I'll take it.
R11 I'll take that, too.
R3 Don't want it.
B12 I'll take it.
B6 No.
B8 I'll take it.
R1 No, I don't want it.
R6 No.
B9 Yes.
(H26, complete)
R? Yes.
B10 Yes, I'll take that, too.
B6 No; too low.
R3 No.
R5 Not too low any more; I'll take the 5.
B5 No; I'm going for reds.
R6 I'll take the 6.
R12 Take it.
S11, session 2 (H31)
R7 Accept, high.
B7 Reject, black.
B12 Accept, high.
B4 Reject, low.
R12 Accept, high.
R6 Accept, red.
R4 Reject, too low.
R10 Accept, high red.
B8
R2
Score 35R
(H32)
B9 Accept, high
R4 Reject, low.
B3 Reject, low.
Rl Reject, low.
R2 Reject, low.
R11 Accept, high.
B1 Reject, low.
B8 Accept.
R12 Accept.
B6 Accept.
Score 23R, 23B
(H35)
R3 No, low.
R5 No, too low.
B4 No, low.
Rl No, low.
R6 Take it, high enough.
B6 Take it, high enough.
R11 Take it, high red.
R4 Take it, red.
R10 Take it, high red.
R7
Score 31R
(H36)
R4 Too low.
R2 Too low.
R6 I'll take it, high enough.
B10 Yes, I'll take it, high and I can work both
ways.
B5 No, too low.
B1 No, too low.
B2 I'll take it.
R? Yes, high red.
B9 Yes, I'll take it.
B12
Score 21B
(H39)
B11 Take it, high.
B? Yes, black.
B2 No, low.
B7 Yes, high.
R5 No, red.
R2 No, red.
R12 Nope, red.
R10 No.
B8 Accept.
B4 Accept.
Score 39B
(H40)
R6 Too low.
B7 Yes, take it, high enough.
B2 No, too low.
R10 Yes, I'll take it, high.
R4 No, low.
R2 No, low.
B9 Yes, high.
R11 Yes, high.
B10 Yes, high black.
B4
Score 26B
(H43)
R7 Yes, that's high enough.
R10 Yes, high red.
B2 No, low.
B4 No, low black.
B? No, black.
B5 No, black.
R4 Yes, red.
B12 Nope.
R9 Yes, high red [had to accept].
R5 Yes [had to accept].
Score 35R
(H44)
B12 Yes, high.
B7 Yes, high enough
B6 Take it.
B8 Yes, high black.
R6 No, red.
R3 No, red.
R2 No, red.
B2 No, low.
R12 No, red.
R9 Accept.
Score 33B
(H47)
B4 Too low.
B1 Too low.
B11 Well, I hope enough blacks come up; I'll take
the jack.
R3 No, red.
B12 Take it, high black.
B2 Yes, I'll take it, I don't think too many more
blacks will come, so I'll take the 2.
R6 No, red.
R1 No, red.
B3 Accept.
B8 Accept.
Score 36B
(H48)
R8 Yes, it's high enough.
B3 No, too low.
B6 Too low, black.
B2 No, low black.
R7 Take it, red and high enough.
R1 No, I think better reds will come up.
R12 Yes, high red.
B11 Nope, black.
R11 Accept.
B12 Accept.
Score 38R
(H51)
R5 No, low.
B6 No, too low.
B5 It's kind of hard to make a choice, they're all
mediocre; I'll take the 5.
B1 No, too low.
B12 That, I'll take; high black.
B4 Yes, black.
R10 No, I don't want the 10, it's red.
B3 Yes, I'll take the 3, it's black.
B10 I'll take it, high black.
R9
Score 34B
(H52)
B5 No, low.
R1 Too low.
R3 Too low.
B9 That's good, I'll take it.
R9 I'll take the red 9 too, open up my options.
B8 I'll take the 8, high black.
R2 No, too low.
R11 Eight cards, I'll take the jack, and take the
higher of the next two cards coming up.
R12 Can't be any higher than that, take it.
B2
Score 32R
(H55)
B7 That's pretty good, I'll take it.
B4 That's too low.
R1 No, low red.
B1 Too low.
R4 No, it's red.
B11 I'll take it, high black.
R12 It's the seventh, no it's red, I'll take the
last three.
R5 Accept.
R2 Accept.
R9 Accept.
Score 18B
(H56)
R11 That's good, high red.
R1 Too low.
R3 I'll take it, red.
R8 That too, high red.
B4 No, black.
B? No, it's black.
B12 No, it's black.
B10 Black [reject].
B5 Accept.
B1 Accept.
Score 22R
(H59)
B8 That's high enough [accept].
B11 Yes, high black.
R12 No, I don't want that, red.
R2 No, that's red.
B4 Yes, I'll take the 4, black.
B12 Yes, high black.
B2 It's too low [reject].
R11 No, it's red.
B9 I'll take it.
B3
Score 44B
(H60)
B9 Take it.
B10 Take it, high black.
B3 No, too low.
R11 Take it, open up my options.
B11 It'll be black [his option; accept].
B5 Too low.
R7 It's red.
R4 Nope.
B2 It's the ninth card; no, I'll take my chances
on the last card.
R8 Accept.
Score 30B
(Summary statement)
Well, the strategy varies with the first three or
four cards laid down. Usually it depended on whether the
first card thrown down, as far as color was concerned, was
a 7 or higher, and if any other card came by of the same
color, 6 or higher, I would usually stay with that color.
But if a card of a very high magnitude, let's say a jack or
queen, came by of the opposite color, I would take it and
open up my options, in most cases. Then I would take the
next higher card that came along in either color, and then
play it out in that color. That worked the best, in most
cases. The hands I had the most trouble with most often
were the hands in which the mediocre cards were thrown down,
let's say a 4, 5, or a 6, in sequence. In one hand,
especially, the first three or four cards were like that and I
didn't know what to do. I tried to assume that about as
many black cards as red cards would be in the deck. That's
what it came to when I got to about seven or eight cards in
a deck and I held four of one color, I would assume that of
the remaining cards, half would come up black and half red,
and I didn't pay too much attention to the cards that had
already been thrown down, as far as the number of cards of
each color. That, in general, is the strategy that developed.
S12 (H4, complete)
B10 I'll take that; high number.
R11 No, it's red.
R3 No, red and not high enough.
B12 Yes.
B6 Take that.
B8 Take that.
R1 No.
R6 No.
B9 Take that.
(Summary statement)
I think the best approach was to pick the color of
the first high card that came up and then continue to choose
that color of card. I took a lot of chances because I found
out that usually I got better results when I took chances
than when I played it safe and just picked the first five.
Number of Times (Hands) S Played Uniquely

S      Times
1        2
2        4
3        4
4        9
5        3
6        5
7        0
8        1
9        1
10       3
11       2
12       8
For the 16 Hands, the Sets of Ss
Responding Identically

Hand   Sets of Ss                                   Number of Ways Played
31     (1, 3) (4, 12) (5, 6) (7, 11)                        9
32     (1, 3, 5, 6, 7, 8, 9, 10, 11)                        4
35     (1, 2, 12) (3, 5, 8, 9, 11) (7, 10)                  6
36     (1, 5, 7) (2, 6, 12) (8, 9)                          7
39     (1, 2, 3, 5, 6, 7, 8, 9, 10, 11)                     3
40     (1, 10) (2, 7, 8, 11) (4, 9)                         7
43     (1, 6) (3, 4, 5, 7, 8, 9, 10, 11, 12)                3
44     (1, 3, 5, 7, 8, 9, 10, 11, 12)                       4
47     (2, 6, 7, 10) (3, 5, 8, 9, 11)                       5
48     (1, 2, 3, 6, 7, 10, 11, 12) (4, 8, 9)                3
51     (1, 5, 7) (2, 8) (4, 9)                              8
52     (1, 3) (2, 6, 7, 8, 10, 11) (9, 12)                  5
55     (2, 3) (4, 5, 7, 8, 9, 10, 11)                       5
56     (1, 2) (3, 9, 11) (4, 5, 7, 8, 12) (6, 10)           4
59     (1, 6) (2, 10) (5, 7, 8, 9, 11, 12)                  5
60     (1, 12) (2, 6, 10, 11) (3, 9) (5, 7, 8)              5
The average number of ways each hand was played by
the 12 Ss was 5.2; excluding S4, the average was 4.6.
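As a quick arithmetic check, the sixteen per-hand counts above do average to 5.2:

```python
# Number of ways each of the 16 hands was played, in table order
ways = [9, 4, 6, 7, 3, 7, 3, 4, 5, 3, 8, 5, 5, 4, 5, 5]
round(sum(ways) / len(ways), 1)   # 83 / 16 = 5.1875, which rounds to 5.2
```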
APPENDIX C
SYNOPSIS OF DECISIONS FOR ALL SUBJECTS
[Synopsis tables for decks 31, 32, 35, 36, 39, 40, 43, 44,
47, 48, 51, 52, 55, 56, 59, and 60: each table lists the
ten cards offered in deck order (red cards as negative
values) and marks with an A each card accepted by each of
the 12 subjects. The column alignment of the accept marks
is not legible in the scanned source.]
APPENDIX D
THE SIMULATION PROGRAM SOURCE CODE
C     OS/360 FORTRAN H
C     *************** MAIN PROGRAM ***************
      COMMON / MAINC / IDECK, CARDNR, P1, P2, P3, P4, P5, P6, P7,
     &                 THRSH1, THRSH2, CARD, NRIH, FORCED, DOMARG
      INTEGER IDECK, CARDNR, P1, P2, P3, P4, P5, P6, P7,
     &        THRSH1, THRSH2, CARD, NRIH, FORCED, DOMARG
      CALL SHUFFL
    1 IF ( IDECK .EQ. 1 ) CALL NXSUB
      CALL NXDECK
   10 CALL NXCARD
      IF ( CARDNR .EQ. 6 ) GOTO 41
      IF ( CARDNR .EQ. P2 ) THRSH1 = P3
      IF ( IABS(CARD) .LT. THRSH1 ) CALL REJECT(&10)
      CALL ACCEPT (&1,&10)
    4 CALL NXCARD
      IF ( ( CARDNR - NRIH ) .LE. 5 ) GOTO 5
   41 FORCED = FORCED + 1
      CALL ACCEPT(&1,&4)
      GOTO 4
    5 CALL COUNT
      IF ( CARD*DOMARG .LT. 0 ) GOTO 7
      IF ( IABS(CARD) .LT. P7 ) CALL REJECT(&4)
      CALL ACCEPT(&1,&4)
      GOTO 4
    7 IF ( NRIH .GE. P4 ) THRSH2 = P6
      CALL GAIN ( NET )
      IF ( NET .LT. THRSH2 ) CALL REJECT(&4)
      CALL ACCEPT(&1,&4)
      GOTO 4
      END
C     OS/360 FORTRAN H
      SUBROUTINE QUIT
      COMMON / MAINC / IDECK, CARDNR, P1, P2, P3, P4, P5, P6, P7,
     &                 THRSH1, THRSH2, CARD, NRIH, FORCED, DOMARG
      INTEGER IDECK, CARDNR, P1, P2, P3, P4, P5, P6, P7,
     &        TBLACK, TRED, TOTAL(2), DECKNR, LSTDKN, NDECKS,
     &        THRSH1, THRSH2, CARD, NRIH, FORCED, DOMARG
     &      , SCORE, HAND
      COMMON / OTHERC / HAND(5), CHOICE, ERRORS, DECKNR, LSTDKN,
     &                  TBLACK, TRED
      INTEGER TOFORC / 0 /
      DATA TOCHOI, TOTERR / 2*0.0 /
C**********************************************************************
      PCTOK  = 100.*((CHOICE - ERRORS)/CHOICE)
      PCTERR = 100.0 - PCTOK
      PRINT 902, HAND
      PRINT 903, SCORE
      PRINT 904, ERRORS, CHOICE, FORCED, PCTOK, PCTERR
      PRINT 901, DECKNR
C
C     SUM CHOICES, ERRORS, AND FORCED
C
      TOCHOI = TOCHOI + CHOICE
      TOTERR = TOTERR + ERRORS
      TOFORC = TOFORC + FORCED
C     IF THIS SUBJECT IS FINISHED, PRINT SUMMARY DATA
      IF ( DECKNR .NE. LSTDKN ) RETURN
      PCTOK  = 100.*(( TOCHOI - TOTERR ) / TOCHOI )
      PCTERR = 100.0 - PCTOK
      PRINT 905
      PRINT 904, TOTERR, TOCHOI, TOFORC, PCTOK, PCTERR
      TOCHOI = 0.
      TOTERR = 0.
      TOFORC = 0
C
      ENTRY COUNT
      INTEGER OLDNR, OLDECK
      IF ( NRIH .EQ. OLDNR .AND. DECKNR .EQ. OLDECK ) RETURN
      OLDNR  = NRIH
      OLDECK = DECKNR
      TBLACK = 0
      TRED   = 0
      DO 1 I = 1, NRIH
      IF ( HAND(I) .GT. 0 ) GOTO 4
      TRED = TRED + HAND(I)
      GOTO 1
    4 TBLACK = TBLACK + HAND(I)
    1 CONTINUE
      DOMARG = TBLACK + TRED
      IF ( DOMARG ) 10, 11, 12
   10 SCORE = TRED
      RETURN
   11 IF ( HAND(NRIH) .LT. 0 ) GOTO 10
   12 SCORE = TBLACK
      RETURN
  901 FORMAT ('0DECKNR =', I4)
  902 FORMAT ('0TOOK =', 5I5)
  903 FORMAT ('0SCORE =', I5)
  904 FORMAT ('0NUMBER OF ERRORS ='F5.0/'0NUMBER OF CHOICES ='F5.0/
     & '0NUMBER FORCED ='I5/'0PERCENT CORRECT ='F5.1/'0PERCENT ERROR ='
     & F5.1)
  905 FORMAT (T20, '*SUMMARY*')
      END
      SUBROUTINE NXSUB
      COMMON / OTHERC / HAND(5), CHOICE, ERRORS, DECKNR, LSTDKN,
     &                  TBLACK, TRED
      COMMON / MAINC / IDECK, CARDNR, P1, P2, P3, P4, P5, P6, P7,
     &                 THRSH1, THRSH2, CARD, NRIH, FORCED, DOMARG
      INTEGER IDECK, CARDNR, P1, P2, P3, P4, P5, P6, P7,
     &        TBLACK, TRED, TOTAL(2), DECKNR, LSTDKN, NDECKS,
     &        THRSH1, THRSH2, CARD, NRIH, FORCED, DOMARG
     &      , IDECKA(16), POOL(10,16), SDPOOL(10,16,12)
     &      , PARAMS(7), HAND, DECK(10), SUBNR
      EQUIVALENCE ( PARAMS(1), P1 )
     &      , ( TOTAL(1), TBLACK ), ( FORCEL, FORCED )
      LOGICAL SUBDID(10), FORCEL
      REAL*8 HEADER(10)
C*******************************************************************************
      READ(5,900,END=1) SUBNR, PARAMS
      IDECK = 1
C     NEXT READ MAKES THE INPUT COMPATIBLE WITH ANALYSIS PROGRAM
      IF ( SUBNR .NE. 1 ) GOTO 5
      READ 901, HEADER
      PRINT 902, HEADER
    5 PRINT 903, SUBNR, PARAMS
      RETURN
    1 STOP
C
      ENTRY REJECT(*)
      CHOICE = CHOICE + 1.
      IF ( .NOT. SUBDID(CARDNR) ) RETURN 1
      CHOICE = CHOICE - 1.
      ERRORS = ERRORS + 1.
      PRINT 904, CARD
      RETURN
C     RETURN 1 FOR VALID REJECT.  RETURN 0 FOR INVALID REJECT
C
      ENTRY ACCEPT(*,*)
      IF ( .NOT. FORCEL ) CHOICE = CHOICE + 1.
      IF ( .NOT. SUBDID(CARDNR) ) GOTO 2
      NRIH = NRIH + 1
      HAND(NRIH) = CARD
      IF ( NRIH .LT. 5 ) RETURN
      CALL COUNT
      CALL QUIT
      RETURN 1
    2 PRINT 905, CARD
      IF ( .NOT. FORCEL ) ERRORS = ERRORS + 1.
      RETURN 2
C     RETURN 0 AFTER VALID ACCEPT.  (NORMAL RETURN)
C     RETURN 1 AFTER ACCEPTING 5 CARDS.  (AFTER QUIT)
C     RETURN 2 FOR INVALID ACCEPT BY MODEL
      ENTRY NXCARD
      CARDNR = CARDNR + 1
      CARD = DECK(CARDNR)
      RETURN
C
      ENTRY GAIN( NET )
      INTEGER OTHER, THIS
C     BLACK = 1, RED = 2
C     THIS = COLOR(CARD)
      THIS = 1
      IF ( CARD .LT. 0 ) THIS = 2
      OTHER = 3 - THIS
      NET = IABS(CARD + TOTAL(THIS)) - IABS(TOTAL(OTHER))
      RETURN
C
      ENTRY NXDECK
      DECKNR = IDECKA(IDECK)
      DO 3 I = 1, 10
      SUBDID(I) = SDPOOL(I, IDECK, SUBNR)
    3 DECK(I) = POOL(I, IDECK)
      PRINT 910, SUBNR, DECKNR, DECK, SUBDID
      IDECK = IDECK + 1
      IF ( IDECK .EQ. (NDECKS+1) ) IDECK = 1
      DO 4 I = 1, 5
    4 HAND(I) = 0
      CARDNR = 0
      NRIH = 0
      THRSH1 = P1
      THRSH2 = P5
      ERRORS = 0.
      CHOICE = 0.
      FORCED = 0
      RETURN
      ENTRY SHUFFL
      IDECK = 1
      READ 906, LSTDKN, NDECKS, IDECKA
      READ 907, POOL
      PRINT 908, POOL
C     READ SUBDID POOL
      READ 909, (((SDPOOL(I,J,K), I=1,10), J=1,NDECKS), K=1,12)
      RETURN
C
  900 FORMAT (8I5)
  901 FORMAT (10A8)
  902 FORMAT ('1', 10A8)
  903 FORMAT ('1SUBJECT NUMBER', I3/
     & '0CONSIDERING FIRST ACCEPTANCE.  REJECT VALUES LESS THAN', I3/
     & ' UNTIL OFFER', I3/
     & ' THEREAFTER FIRST ACCEPTANCE MUST BE AS LARGE AS', I3/
     & '0CONSIDERING SUBSEQUENT ACCEPTANCES.'
     & /' IF THE NUMBER OF CARDS ALREADY ACCEPTED IS LESS THAN', I3/
     & ' ACCEPT A CARD OF THE NON-DOMINANT COLOR'
     & /' IF IT WILL RESULT IN A SHIFT IN THE DOMINANCE MARGIN (GAIN) BY AS MUCH AS', I3/
     & ' IF THE NUMBER OF CARDS ALREADY ACCEPTED IS EQUAL OR GREATER,'
     & /' THEN THE GAIN MUST BE AS MUCH AS', I3/
     & '0ACCEPT A CARD OF THE DOMINANT COLOR, OR OF EITHER COLOR'
     & /' IF NEITHER IS DOMINANT, IF ITS VALUE IS AS MUCH AS', I3)
  904 FORMAT ('0', T35, 'MODEL REJECTED', I4, '; S ACCEPTED IT')
  905 FORMAT ('0MODEL ACCEPTED', I4, '; S REJECTED IT')
  906 FORMAT (2I5, 16I3)
  907 FORMAT ((8X, 10(I3,1X)/))
  908 FORMAT ('1ALL DECKS'/('0', 10I5))
  909 FORMAT (10(I1,1X))
  910 FORMAT ('1SUBNR =', I4, 5X, 'DECKNR =', I4/
     & '0DECK    ', 10I5/' SUB-TOOK', 10I5)
      END
APPENDIX E
EXAMPLE SIMULATIONS
SUBJECT NUMBER 11
CONSIDERING FIRST ACCEPTANCE.  REJECT VALUES LESS THAN  7
 UNTIL OFFER  3
 THEREAFTER FIRST ACCEPTANCE MUST BE AS LARGE AS  5
CONSIDERING SUBSEQUENT ACCEPTANCES.
 IF THE NUMBER OF CARDS ALREADY ACCEPTED IS LESS THAN  1
 ACCEPT A CARD OF THE NON-DOMINANT COLOR
 IF IT WILL RESULT IN A SHIFT IN THE DOMINANCE MARGIN (GAIN) BY AS MUCH AS
 IF THE NUMBER OF CARDS ALREADY ACCEPTED IS EQUAL OR GREATER,
 THEN THE GAIN MUST BE AS MUCH AS  0
ACCEPT A CARD OF THE DOMINANT COLOR, OR OF EITHER COLOR
 IF NEITHER IS DOMINANT, IF ITS VALUE IS AS MUCH AS  6
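The subsequent-acceptance rules printed above (the model's parameters P4 through P7) can be sketched as follows. This is an interpretive paraphrase of the FORTRAN decision section in Python, not part of the original program; the function and argument names are illustrative, and for subject 11 the early-gain threshold p5 is not legible in the printout:

```python
def accept_subsequent(card, hand, p4, p5, p6, p7):
    """Decide a subsequent (non-first) acceptance for an offered card.

    Red cards are negative.  For subject 11 above: p4 = 1 (hand-size
    cutoff), p6 = 0 and p7 = 6 (thresholds); p5 is the early-gain
    threshold (value not legible in the printout).
    """
    tblack = sum(c for c in hand if c > 0)
    tred = sum(c for c in hand if c < 0)
    domarg = tblack + tred
    if card * domarg < 0:                  # card is of the non-dominant color
        this_total = tred if card < 0 else tblack
        other_total = tblack if card < 0 else tred
        gain = abs(card + this_total) - abs(other_total)   # as in ENTRY GAIN
        threshold = p6 if len(hand) >= p4 else p5
        return gain >= threshold
    # card is of the dominant color, or neither color dominates
    return abs(card) >= p7
```

With the hand (12, -7), black dominates by 5; a red -10 yields a gain of |-10 - 7| - 12 = 5, so with p6 = 0 it is accepted, while a black 3 fails the p7 = 6 value test and is rejected.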
SUBNR =  11     DECKNR =  31
DECK       [illegible in scan]
SUB-TOOK   [illegible in scan]
                    MODEL ACCEPTED   7 ; S REJECTED IT
TOOK =   -7   12  -12   -6  -10
SCORE =  -35
NUMBER OF ERRORS =   1.
NUMBER OF CHOICES =   8.
NUMBER FORCED =   0
PERCENT CORRECT =  87.5
PERCENT ERROR =  12.5
DECKNR =  31
SUBNR =  11     DECKNR =  32
DECK       [illegible in scan]
SUB-TOOK   [illegible in scan]
TOOK =    9  -11    8  -12  [fifth value illegible]
SCORE =   23
NUMBER OF ERRORS =   0.
NUMBER OF CHOICES =   7.
NUMBER FORCED =   3
PERCENT CORRECT = 100.0
PERCENT ERROR =   0.0
DECKNR =  32
SUBNR =  11     DECKNR =  35
DECK       -3   -5    5   -1   -6    6  -11   -6  -10   -7
SUB-TOOK    0    0    0    0    1    1    1    1    1    0
                    MODEL REJECTED  -6 ; S ACCEPTED IT
TOOK =   -6    6  -11   -6  -10
SCORE =  -31
NUMBER OF ERRORS =   1.
NUMBER OF CHOICES =   9.
NUMBER FORCED =   0
PERCENT CORRECT =  88.9
PERCENT ERROR =  11.1
DECKNR =  35
SUBNR =  11     DECKNR =  39
DECK       11    9    2    7   -5   -2  -12  -10    8    6
SUB-TOOK    1    1    0    1    0    0    0    0    1    1
TOOK =   11    9    7    8    6
SCORE =   39
NUMBER OF ERRORS =   0.
NUMBER OF CHOICES =   8.
NUMBER FORCED =   2
PERCENT CORRECT = 100.0
PERCENT ERROR =   0.0
DECKNR =  39
SUBNR =  11     DECKNR =  36
DECK       -6   -2   -6   10    5   12   -9    9   12  [one value lost in scan]
SUB-TOOK    0    0    1    1    0    0    1    1    1    0
                    MODEL REJECTED   2 ; S ACCEPTED IT
TOOK =   -6   10    2  [two values illegible]
SCORE =   21
NUMBER OF ERRORS =   1.
NUMBER OF CHOICES =   9.
NUMBER FORCED =   0
PERCENT CORRECT =  88.9
PERCENT ERROR =  11.1
DECKNR =  36
SUBNR =  11     DECKNR =  40
DECK       -6    7    2  -10   -6   -2    9  -11   10    6
SUB-TOOK    0    1    0    1    0    0    1    1    1    0
TOOK =    7  -10    9  -11   10
SCORE =   26
NUMBER OF ERRORS =   0.
NUMBER OF CHOICES =   9.
NUMBER FORCED =   0
PERCENT CORRECT = 100.0
PERCENT ERROR =   0.0
DECKNR =  40
j S U B N R • I I O E C K N R • . 4 3
j D E C K - T - 1 0 2 4 9 5 - 4 1 2 - 9 - S
; S U B . T O O K 1 1 0 0 0 0 1 0 1 1
! M O O E L R E J E C T E O - 4 1 S A C C E P T E D I T
i T O O K » - 7 - 1 0 - 4 - 9 - 5
! S C O R E • - 3 5
N U M B E R O T E R R O R S - 1 .
: N U M B E R O F C H O I C E S * S .
I N U M B E R F O R C E O • 2
| P E R C E N T C O R R E C T • 8 T . 5
P E R C E N T E R R O R > 1 2 . 5
i O E C K N R • 4 3
SUBNR =  11     DECKNR =  47
DECK        4    1   11   -3   12    2   -6   -1    3    5
SUB-TOOK    0    0    1    0    1    1    0    0    1    1
                    MODEL REJECTED  -2 ; S ACCEPTED IT
TOOK =   11   12    2  [two values illegible]
SCORE =   36
NUMBER OF ERRORS =   1.
NUMBER OF CHOICES =   8.
NUMBER FORCED =   2
PERCENT CORRECT =  87.5
PERCENT ERROR =  12.5
DECKNR =  47
SUBNR =  11     DECKNR =  44
DECK       12    7    6    8   -6   -3   -2    2  -12   -9
SUB-TOOK    1    1    1    1    0    0    0    0    0    1
TOOK =   12    7    6    8   -9
SCORE =   33
NUMBER OF ERRORS =   0.
NUMBER OF CHOICES =   9.
NUMBER FORCED =   1
PERCENT CORRECT = 100.0
PERCENT ERROR =   0.0
DECKNR =  44
SUBNR =  11     DECKNR =  48
DECK       -8    3    6    2   -7   -1  -12   11  -11   12
SUB-TOOK    1    0    0    0    1    0    1    0    1    1
TOOK =   -8   -7  -12  -11   12
SCORE =  -38
NUMBER OF ERRORS =   0.
NUMBER OF CHOICES =   8.
NUMBER FORCED =   2
PERCENT CORRECT = 100.0
PERCENT ERROR =   0.0
DECKNR =  48
SUBNR =  11     DECKNR =  51
DECK       [partly illegible in scan]
SUB-TOOK   [partly illegible in scan]
                    MODEL REJECTED   4 ; S ACCEPTED IT
                    MODEL REJECTED   3 ; S ACCEPTED IT
TOOK =    9   12    4    3   10
SCORE =   34
NUMBER OF ERRORS =   2.
NUMBER OF CHOICES =   9.
NUMBER FORCED =   0
PERCENT CORRECT =  77.8
PERCENT ERROR =  22.2
DECKNR =  51
SUBNR =  11     DECKNR =  55
DECK       [partly illegible in scan]
SUB-TOOK   [partly illegible in scan]
TOOK =    7   11   -9   -2   -9
SCORE =   18
NUMBER OF ERRORS =   0.
NUMBER OF CHOICES =   7.
NUMBER FORCED =   3
PERCENT CORRECT = 100.0
PERCENT ERROR =   0.0
DECKNR =  55
SUBNR =  11     DECKNR =  52
DECK        9   -1   -3    9   -9    8   -2  -11  -12    2
SUB-TOOK    0    0    0    1    1    1    0    1    1    0
TOOK =    9   -9    8  -11  -12
SCORE =  -32
NUMBER OF ERRORS =   0.
NUMBER OF CHOICES =   9.
NUMBER FORCED =   0
PERCENT CORRECT = 100.0
PERCENT ERROR =   0.0
DECKNR =  52
SUBNR =  11     DECKNR =  56
DECK      -11   -1   -3   -8    4    9   12   10    9    1
SUB-TOOK    1    0    1    1    0    0    0    0    1    1
                    MODEL REJECTED  -3 ; S ACCEPTED IT
TOOK =  -11   -3   -8    9    1
SCORE =  -22
NUMBER OF ERRORS =   1.
NUMBER OF CHOICES =   8.
NUMBER FORCED =   2
PERCENT CORRECT =  87.5
PERCENT ERROR =  12.5
DECKNR =  56
SUBNR =  11     DECKNR =  59
DECK       [partly illegible in scan]
SUB-TOOK   [partly illegible in scan]
                    MODEL REJECTED   6 ; S ACCEPTED IT
TOOK =    5   11    1   12    8
SCORE =   37
NUMBER OF ERRORS =   1.
NUMBER OF CHOICES =   9.
NUMBER FORCED =   0
PERCENT CORRECT =  88.9
PERCENT ERROR =  11.1
DECKNR =  59
SUBNR =  11     DECKNR =  60
DECK       [partly illegible in scan]
SUB-TOOK    1    1    0    1    1    0    0    0    1  [one flag lost in scan]
                    MODEL REJECTED -11 ; S ACCEPTED IT
TOOK =    8   10  -11   11   -8
SCORE =   30
NUMBER OF ERRORS =   1.
NUMBER OF CHOICES =   9.
NUMBER FORCED =   1
PERCENT CORRECT =  88.9
PERCENT ERROR =  11.1
DECKNR =  60
                   *SUMMARY*
NUMBER OF ERRORS =  10.
NUMBER OF CHOICES = 134.
NUMBER FORCED =  18
PERCENT CORRECT =  92.5
PERCENT ERROR =   7.5
REFERENCES
Balzer, R. M.  A mathematical model for performing a complex task in a
        card game.  Behavioral Science, 1966, 11, 219-226.

Berlekamp, E. R.  Program for double-dummy bridge problems: A new
        strategy for mechanical game learning.  Journal of the
        Association for Computing Machinery, 1963, 10, 357-364.

Clarkson, G. P. E.  A model of the trust investment process.  In E. A.
        Feigenbaum and J. Feldman (Eds.), Computers and Thought.
        New York: McGraw-Hill, 1963.  Pp. 329-346.

De Groot, A. D.  Thought and Choice in Chess.  New York: Basic Books,
        1965.

Ernst, G. W. and Newell, A.  Some issues of representation in a
        general problem solver.  Proceedings of the Spring Joint
        Computer Conference, 1967, 583-600.

Feigenbaum, E. A.  The simulation of verbal learning behavior.  In
        E. A. Feigenbaum and J. Feldman (Eds.), Computers and Thought.
        New York: McGraw-Hill, 1963.  Pp. 297-309.

Feigenbaum, E. A. and Feldman, J.  Simulation of cognitive processes.
        In E. A. Feigenbaum and J. Feldman, Computers and Thought.
        New York: McGraw-Hill, 1963.  Pp. 269-276.

Feldman, J.  Simulation of behavior in the binary choice experiment.
        In E. A. Feigenbaum and J. Feldman (Eds.), Computers and
        Thought.  New York: McGraw-Hill, 1963.  Pp. 329-346.

Findler, N. V.  Human decision making under uncertainty and risk.
        Kybernetik, 1966, 3, 82-93.

Frijda, N. H.  Problems of computer simulation.  Behavioral Science,
        1967, 12, 59-67.

Greenblatt, R. D.  The Greenblatt chess program.  Proceedings of the
        Fall Joint Computer Conference, 1967, 801-810.

Gregg, L. W. and Simon, H. A.  Process models and stochastic theories
        of simple concept formation.  Journal of Mathematical
        Psychology, 1967, 4, 246-276.

Hunt, E. B.  Computer simulation: Artificial intelligence studies and
        their relevance to psychology.  In P. R. Farnsworth (Ed.),
        Annual Review of Psychology.  Palo Alto, Calif.: Annual
        Reviews, 1968.  Pp. 135-168.

Johnson, E. S.  An information processing model of one kind of problem
        solving.  Psychological Monographs, 1964, 78 (4, Whole No.
        581).

Laughery, K. R. and Gregg, L. W.  Simulation of human problem-solving
        behavior.  Psychometrika, 1962, 27, 265-282.

Newell, A.  On the analysis of human problem solving protocols.
        Presented at International Symposium on Mathematical and
        Computational Methods in the Social Sciences, 1966.  AD658863.

Newell, A., Shaw, J. C., and Simon, H. A.  Chess-playing programs and
        the problem of complexity.  IBM Journal of Research and
        Development, 1958, 2, 329-335.

Newell, A. and Simon, H. A.  GPS, a program that simulates human
        thought.  In E. A. Feigenbaum and J. Feldman (Eds.), Computers
        and Thought.  New York: McGraw-Hill, 1963a.  Pp. 279-293.

Newell, A. and Simon, H. A.  Computers in psychology.  In R. D. Luce,
        R. R. Bush, and E. Galanter (Eds.), Handbook of Mathematical
        Psychology.  New York: Wiley, 1963b.  Pp. 361-428.

Samuel, A. L.  Some studies in machine learning using the game of
        checkers.  IBM Journal of Research and Development, 1959, 3,
        210-229.

Simon, H. A. and Kotovsky, K.  Human acquisition of concepts for
        sequential patterns.  Psychological Review, 1963, 70, 534-546.

Simon, H. A. and Newell, A.  Information processing in computer and
        man.  In W. R. Brode (Ed.), Science in Progress.  New Haven:
        Yale University Press, 1966.  Pp. 333-362.

Slagle, J.  Unpublished lecture notes for UCLA Extension course, The
        heuristic programming approach to artificial intelligence,
        1967.

Solomonoff, R. J.  Some recent work in artificial intelligence.
        Proceedings of the IEEE, 1966, 54, 1687-1697.

Thiele, T. N., Lemke, R. R., and Fu, K. S.  A digital computer
        card-playing program.  Behavioral Science, 1963, 8, 362-368.

Turing, A. M.  Computing machinery and intelligence.  In E. A.
        Feigenbaum and J. Feldman (Eds.), Computers and Thought.
        New York: McGraw-Hill, 1963.  Pp. 11-35.

Weizenbaum, J.  ELIZA, a computer program for the study of natural
        language communication between man and machine.
        Communications of the Association for Computing Machinery,
        1966, 9, 36-45.

Williams, T. G.  Some studies in game playing with a digital computer.
        Unpublished doctoral dissertation, AD634 821, Carnegie
        Institute of Technology, 1965.