HOW TO USE INTUITIONS IN PHILOSOPHY
by
Brian Talbot
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PHILOSOPHY)
December 2009
Copyright 2009 Brian Talbot
Acknowledgements
Thank you to my advisors, James Van Cleve, Janet Levin, Kadri Vihvelin, Stephen Finlay, and
Richard John, for their feedback, guidance, support, and for believing that this was a viable and
worthwhile project.  Thank you to all the friends who listened to me talk interminably about
intuitions, who put up with my sometimes ridiculous ideas, who argued with me when I needed
arguing with, who proofread, edited, and helped keep my head above water:  Julia Staffel, Adam
Keeney, Geoff Georgi, Brian Bowman, Robert Shanklin, and Jeff Pyor.  Thank you to the friends
who I didn’t talk philosophy with, and who consequently helped keep me sane, especially Daniel
and Joaquin.  Thank you to my mom, my sister, my grandparents, my aunt and uncle, and my
cousins for their love and support.  Thank you to all the students who I used as guinea pigs and
sounding boards when developing many of these ideas.  Finally, I started this dissertation
(unknowingly) while teaching summer classes at the Center for Talented Youth, and some of the
core ideas got developed in classrooms there over the years; thank you to CTY for letting me
teach what I wanted, and to my CTY kids for helping me to love philosophy.
Table of Contents
Acknowledgements
Abstract
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Bibliography
Abstract
Intuitions currently play a central evidential role in much of the practice of philosophy.  There
are, however, a number of concerns raised about this role.  Some have claimed that intuitions are
in no way good evidence; others, that they are evidence about only a limited range
of (mostly mental) phenomena.  I argue that these are empirical claims, and that the proper way to
evaluate them is to come to an understanding of what intuitions are, how they are generated, and
what factors affect them.  Further, I argue that this should be based on a systematic, empirically-
founded theory of the general operation of our unconscious minds.  This means that an evaluation
of philosophical methodology is intertwined with an understanding of psychology and cognitive
science.  I present the outlines of such a theory based on a synthesis of a wide range of
psychological research; in so doing I engage with several current debates in psychology and
cognitive science about issues including the innateness of knowledge, mental modularity,
unconscious intelligence, and the structure of our concepts.  Our intuitive faculties are, it turns
out, a good source of data – better in many ways than our conscious minds – about certain
domains of philosophy, but not all.  This means that philosophers generally, no matter their
interests, should look to a psychologically informed understanding of the mind to see if intuitions
will be good evidence in their research, and also to see the pitfalls and biases intuitions about
their topic might succumb to.  To illustrate this process, I consider the application of intuitions to
specific questions in epistemology and the metaphysics of causation.
Chapter 1
Introduction
Philosophers use intuitions when doing philosophy.  Not exclusively, not always, and
perhaps not all philosophers, but most of us and quite often.  Intuitions often play the role that
observation does in science – they are data that must be explained, the confirmers or falsifiers of
theories.  However, worries crop up when we consider whether or not intuitions should play this
role in philosophy:  intuitions give what are provably wrong answers from time to time; intuitions
are not shared by all, and people sometimes have intuitions that conflict with those of others (or
other of their own intuitions); and intuitions are in certain ways mysterious, since we do not know
how they come about.  Because of this, there is controversy around the use of intuitions in
philosophy.  While many philosophers argue that we can continue to use intuitions as we have
traditionally done, some have called for the rejection of all intuitions, or the drastic limitation of
their use.  Others, so-called experimental philosophers, have argued that our methods of gathering
intuitions must change, and that they should be gathered from larger populations, using the
methods of experimental psychology.
In this dissertation I will articulate and argue for an alternative to these three views of the
role of intuitions in philosophy.  My view is that we must take many of the worries about
intuitions quite seriously, and that if we are to responsibly use intuitions at all these worries
should be addressed.  If we are to deal with the problems associated with intuitions, with their
mysteriousness, their fallibility, and their conflicts, we must understand what intuitions are, how
they come to be, when they work, and when they do not.  This understanding cannot be gained
through the use of traditional philosophical methods:  introspection and reasoning based on a
priori knowable premises.  Nor can applying experimental rigor to our gathering of intuitions
give us this understanding, because (as we will see) it will not tell us the source of our intuitions
or enough about the factors that affect them.  Instead, we must look to current psychological
research, which sheds a great deal of light on intuitions and the processes which generate them.
Psychologists, however, are typically not interested in the exact same questions and ends
as are philosophers, nor are they entirely in agreement on how intuitions come about.  For that
reason we cannot simply consult a text which will tell us all we need to know about intuitions.
There is a great deal of synthesizing, sorting out, and applying of psychological findings which
needs doing.  This should be aimed at constructing a systematic, empirically-based theory of the
mental processes behind intuitions in general, which I call a general theory of intuitions
(henceforth I will underline the initial definition of any key terms I use).  Once we have a general
theory of intuitions, in order to determine the extent to which intuitions are useful sources of
evidence about philosophical questions, we will need to consider first what standards a potential
source of evidence needs to meet to be a good source of evidence, and then ask whether or not
intuitions meet those standards.  The answers to both of these questions will likely vary from
philosophical domain to philosophical domain.  However, my view is that the correct general
theory of intuitions will show us that we not only can use intuitions in philosophy (within limits),
but also that we sometimes should.  It will also show us when we should not, by helping us
understand the limits of what intuitions can do and the factors that cause intuitions to go wrong.
This understanding will help us to resolve conflicts between intuitions, to know when to accept
the theories for which they are evidence, and to know when to not reject the ones they go against.
In this chapter I will argue in more detail for the claim that we have legitimate reasons to
worry about the use of intuitions in philosophy, and that addressing these worries requires a
psychologically informed general theory of intuitions.  I will explain what an intuition is, and
argue that psychological studies of intuitions can give us useful information about the kinds of
intuitions we use in philosophy.  In chapter 2 I will give and argue for my general theory of
intuitions.  In chapter 3 I will explain some factors that in certain circumstances can affect the
generation of our intuitions, and discuss some of the implications of these for the use and
gathering of intuitions in philosophy.  In chapter 4 I will return to the concerns I raise in this
chapter about the use of intuitions, and see what light my general theory of intuitions sheds on
them.  In chapters 5 and 6 I apply the findings of chapters 2 and 3 to the use of intuitions to
answer certain questions in philosophy; I show how the general theory of intuitions gives us
reason to worry about some intuitions used in epistemology, but gives support for the use of
intuitions in the study of causation.
Concerns About Intuitions
It is uncontroversial that intuitions are important to philosophy as it is currently practiced.
Intuitions are used as evidence for and against theories in all domains of philosophy, although
they are not always called “intuitions;” let’s use the following (rough) working definition of
“intuition” for now (with more elucidation to come later):  an intuition is a relatively unreflective
reaction that a proposition is true or false.  So when philosophers talk about “what we would say”
about a certain case, they are often talking about our intuitive reactions to that case; this is also
often true when they assert that something is clearly or obviously true.  In philosophy of mind,
intuitions about pain are used by David Lewis to argue for a version of functionalism (Lewis,
1983).  Intuitions about understanding Chinese are used by John Searle to argue against what he
calls “strong AI” (Searle, 1980).  In the philosophy of action, intuitions about playing video
games are used by Michael Bratman to argue that we can try to do something without intending
to do it (Bratman, 1987).  Intuitions are used as counter-evidence against compatibilism (Bok,
1998).  One of the most famous uses of intuitions as counter-evidence comes from epistemology:
Gettier cases (Gettier, 1963).  In metaphysics, intuitions are appealed to to argue for theories of
causation (e.g., Lewis, 1993), and against them (by pointing out that they have counter-intuitive
consequences, such as transitivity) (Hall, 2000).  In ethics, Judith Jarvis Thomson uses intuitions
about violinists and carpets to argue for her claim that abortion can be morally acceptable even if
the fetus has a right to life (Thomson, 1986).  Bernard Williams uses intuitions about killing rebels as
counter-evidence against utilitarianism (Williams & Smart, 1973).  In the philosophy of language,
Tyler Burge uses intuitions about “arthritis” to argue for meaning externalism (Burge, 1979), and
Saul Kripke uses intuitions about Gödel to argue against a descriptivist view of names (Kripke,
1972).  This list goes on and on.
Most arguments and counter-arguments in philosophy do not rest solely on intuitions, but
a lot rides on their veracity.  Theories are advanced on the basis of their agreement with our
intuitions.  Plausible and useful theories, such as the justified true belief theory of knowledge or
the theory that causation is counter-factual dependence,  have been either wholly discarded or
have become very controversial because of intuitions against them; often no more viable theory
has been offered in the place of the ones discredited.  Generally authors do not give reasons why
we should accept the intuitions they are using; instead, they just use them.  There are no widely
agreed upon views of the sources of intuitions, and certainly none based solidly on empirical
research.  The degree of reliance on intuitions, combined with the lack of understanding of what
they are, should make us worry about them.  That is a lot of weight to rest on a foundation the
solidity of which we have little understanding.
The mysteriousness of intuitions is not the only reason to be concerned about them.  We
also know for certain that many intuitions are false.  For example, our intuitions about the relative
sizes of sets (e.g. that there are more integers than even integers) conflict with proven theorems.
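The conflict can be made precise with the standard demonstration (a worked example added for concreteness; the notation is mine).  Two sets have the same cardinality just in case there is a bijection between them, and

\[
f : \mathbb{Z} \to 2\mathbb{Z}, \qquad f(n) = 2n
\]

is such a bijection: it is injective ($2m = 2n$ implies $m = n$) and surjective (every even integer $2k$ is $f(k)$).  So $|\mathbb{Z}| = |2\mathbb{Z}|$, contrary to the intuition that a set must be larger than its proper subsets.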
A single person will sometimes have conflicting intuitions such that at least one of them must not
be true.  The problem of moral luck, for example, is due to the fact that “it is intuitively plausible
that people cannot be morally assessed for what is not their fault…” (Nagel, 1979) while at the
same time a person whose actions led to a bad result intuitively seems more blameworthy than a
person who acted the same but with no bad result.  Different people also have different (and
contradictory) intuitions about the same case or claim, again showing that at least some of these
intuitions must be false.  Robert Cummins points out that there was wide-spread divergence in
intuitions about Hilary Putnam’s twin earth examples when they were first published (he says that
this disagreement has died out to some extent because those that did not share Putnam’s
intuitions were not invited to the next conference) (Cummins, 1998).  Meta-ethicists, to give
another example, argue about whether it is intuitively possible for a person to recognize the truth
of moral claims but be unmotivated by them.  Thus, we know that intuitions are not always
correct.  This gives us reason to worry about the intuitions philosophers rely on – if some are
false, do we have reason to think that not all (or most or at least half) are false?   I will consider
some possible objections to this later; for now, let’s continue considering reasons to worry about
intuitions.
Robert Cummins (1998) has a two-pronged argument for the conclusion that intuitions are
“epistemologically useless” (p.125) and that we should stop using them entirely.  His first line of
attack begins with the claim that a source of evidence needs a way to be calibrated; that is, if we
are to use it, we need a way of checking whether or not it is accurate and of determining under
what conditions it is more (or less) reliable.  In order to calibrate some source of evidence about
X, we need to be able to get correct answers about X without using that source, since in order to
calibrate something we need to be able to check its results.  For example, if we want to calibrate a
scale, we need a way of determining how much things weigh without using that scale.  If
philosophical intuitions are to be a source of evidence, we need to be able to calibrate them,
which means we need some alternate (not based in intuitions) way of determining the answers to
the sorts of philosophical questions about which we normally consult our intuitions.  However,
Cummins argues, if we have intuition-independent access to answers to philosophical questions,
then we do not need intuitions.  And if we do not have such access, then we cannot calibrate
intuitions and should not use them.
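Cummins’ requirement can be pictured with a minimal sketch (the function and numbers below are hypothetical illustrations, not anything from Cummins): estimating reliability is a comparison against an independently obtained answer key, so without such a key there is nothing to compute.

```python
# Toy sketch of Cummins' calibration requirement (hypothetical example).
# A source's reliability is its agreement rate with answers obtained
# WITHOUT using that source.

def calibrate(verdicts, independent_answers):
    hits = sum(v == a for v, a in zip(verdicts, independent_answers))
    return hits / len(verdicts)

# The scale case: reference masses are known independently of the scale.
scale_readings = [1.0, 2.0, 3.1, 5.0]
reference_masses = [1.0, 2.0, 3.0, 5.0]
print(calibrate(scale_readings, reference_masses))  # 0.75

# The philosophical case: the second argument -- an intuition-independent
# answer key -- is exactly what Cummins argues we either lack (so no
# calibration is possible) or, if we had it, would render intuitions
# unnecessary.
```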
Cummins’ second line of attack is to consider what the possible sources of intuitions are.
For each, he concludes that they are not a good source of evidence, either because they are
inherently not useful or because they are likely subject to biases.  For example, he says that
intuitions might be due to our explicitly held theories; if they are, they obviously cannot be used
as evidence for these theories.  Alternately, intuitions might reflect tacitly held theories.
Cummins claims that tacitly held theories should be expected to be inaccurate because they are
“more guided by effectiveness than by accuracy.” (Cummins, 1998, p. 123)  By this he means
that whatever methods we use to tacitly form theories will be aimed at producing theories that
help us survive and accomplish our goals, rather than at producing correct theories.  He also
claims that these methods will be overly swayed by the beliefs of those close to us.
Hilary Kornblith (2006) criticizes the use of intuitions in epistemology, but his arguments
can plausibly be generalized to other areas of philosophy.  He argues that intuitions are best seen
as giving us evidence only about our concepts.  However, many of us are interested in learning
about more than our concepts; we want to learn about things themselves.  For example, we do not
want to know just what our concept of “knowledge” is, we want to know what knowledge itself
is.  To the extent to which we build our theories on intuitions, we are circumscribing the
ambitions of philosophy, limiting what our theories are about.  This criticism is echoed by
defenders of the use of intuitions such as Frank Jackson, whose view of the role of intuitions is
that “[c]onceptual analysis [based in part on intuitions] is not being given a role in determining
the fundamental nature of our world; it is, rather, being given a central role in determining what
to say in less fundamental terms given an account of the world stated in more fundamental
terms.”  (Jackson, 1998, 44)
Several philosophers, such as Michael Bishop and J.D. Trout (2005) and Richard Miller
(2000), have argued that intuitions are not a good source of evidence by considering how fruitful
the use of intuitions has been.  Consider the state of philosophy, they say.  There is little
agreement on most key issues, we have produced few theories that have been very successful or
survived criticism, and philosophy has accomplished little of practical significance in the last few
hundred years.  There is nothing about the subject matter of philosophy that makes these results
inevitable; most of us believe that there are answers out there to be found, and at least some
philosophical disciplines can produce useful results.  This gives us reason to think that we are
studying the right stuff but in the wrong way.  Some aspects of our methodology – logic and
rigorous thought – are beyond criticism, and thus, they say, the blame for philosophy’s lack of
success falls on our use of intuitions.
Finally, intuitions are criticized sometimes by experimental philosophers.  Several
philosophical experiments have recently suggested that intuitions vary from culture to culture, or
that they can be swayed by various irrelevant factors (Swain, et al, 2008).[1]  These findings are
problematic, because they raise the question, “Which intuitions should we use as evidence?”
Absent a way of resolving these conflicts, intuitions do not look like a good source of evidence
since at least one set of these conflicting intuitions must be wrong, and we have no way of
determining which.

[1] I will put the debate about the methodology of these experiments to the side because I am not aware of any conclusive arguments that they are fatally flawed.
Each of these reasons to worry about intuitions is empirically based.  It is an empirical
fact that intuitions are widely used, that they are mysterious (as of now), and that we know they
are wrong at times and do not know how often they are right; further, the arguments of the several
critics I have cited are all based on empirical claims:  that we either cannot determine how
reliable intuitions are or we do not need to use them, that their sources make them unreliable, that
they have not done us any good so far, and that they cannot tell us much of philosophical interest.
This is a crucial point, because a proper response to these arguments should address these
empirical claims.  It should show that intuitions are a reliable source of evidence about
philosophical matters, and how practical concerns about their use – concerns about our inability
to calibrate them or to choose between culturally varying intuitions – can be addressed.  As I will
argue later, the proper responses to these claims must be empirically based as well.  However,
such responses have not been forthcoming from philosophers.  In the next section I will consider
some pre-eminent responses to critics of intuitions to show the types of arguments defenders of
intuitions generally employ.
Some Philosophical Responses
Responses to these criticisms have been numerous and wide-ranging.  Rather than trying
to survey all of them, in this section I will look at responses of certain types, before I argue that
no responses of these types will successfully address all of the criticisms from the previous
section.
One of the most common defenses of intuitions is to argue that the use of intuitions in
philosophy is inescapable or inevitable.  This has been argued by Ernest Sosa (2005), George
Bealer (1998), and Lawrence BonJour (1998).  Typically, philosophers arguing this point bring
up intuitions about logical truths as ones that we cannot help but rely on, logic being without
doubt the cornerstone of good philosophy.  Intuitions about modal facts – facts about what is
necessary or possible – are also claimed to be essential to philosophy.  If we cannot help but use
intuitions as a source of evidence when we do philosophy, then philosophers had better take them to
be a good source of evidence.  Bealer (1998) and BonJour (1998) have gone on to give accounts
of what intuitions are and how they could work.  These accounts start from the assumption that
intuitions are a trustworthy source of evidence, and explain what they could be such that that is
the case.
Another type of response is to argue that we are justified in trusting our intuitions despite
the kinds of criticisms I have given in the previous section.  Sosa (1998) and Joel Pust (2000), for
example, argue that we have worries about the trustworthiness of sense perception (e.g. that
sense perception is demonstrably not always reliable) similar to our worries about intuitions.
Since it is obviously perfectly acceptable to rely on our senses despite these worries, it must
similarly be perfectly acceptable to rely on intuitions despite our worries about them.  Richard
Foley (1998) has argued that, even when a given intellectual faculty of ours (such as our intuitive
faculty) is called into question, if we take that faculty to be trustworthy we are justified in
employing that faculty to evaluate the criticisms of it.  Those of us, he argues, who take our
intuitive faculties as trustworthy are justified in relying on intuitions when making arguments in
their defense.
I have only considered a few responses to criticisms of intuitions (although these are
some of the most prominent ones), but these are enough to give us the sense of the sort of
responses that philosophers make.  What is important to recognize about these responses is that
they are purely philosophical responses to these criticisms – they use only the traditional, a priori,
tools of philosophy.  As we will see, any such response will be unsatisfying.
Why These Responses are Unsatisfying
These responses do not engage directly with the (quite forceful) criticisms of intuitions I
brought up earlier.  Critics of intuitions are largely worried about the ability of intuitions to
accurately tell us facts of philosophical importance.  They argue that we cannot determine
whether or not intuitions are reliable because we cannot calibrate them; they argue that they can
only tell us about our concepts, and not anything more important; they argue that they must be
useless because they have not been very useful so far.  The responses to these criticisms give us
reasons to want to trust intuitions, or reasons why we might be justified in trusting intuitions, but
no evidence that they are particularly reliable or useful.  Critics of intuitions could accept many of
the claims the defenders of intuitions make while still maintaining their critical stance.  Even if it
is true that we are justified in using intuitions, this is small consolation for those that are
concerned about the accuracy of intuitions, since it would be a shame for our philosophical
theories to be false, because based on unreliable intuitions, even if we were wholly justified in
believing them.  The claim that intuitions are essential to philosophy is disputable, but even if it is
true, it is as much a condemnation of philosophy as a defense of intuitions, since if someone were
to claim that fortune telling could only be done using crystal balls (or tarot cards or astrology or
tea leaves), we would not take this as a defense of crystal balls but as a reason not to try to tell
fortunes.  Accounts of how intuitions work given by defenders of intuitions similarly miss the
point, because they start from the assumption that intuitions are reliable despite the critics’ claims
that we have no evidence that this is true.
My main goal here is to argue that responses to the worries I have raised about intuitions
are bound to be unsatisfying as long as they are based on a priori claims.  This does not require
me to engage with the actual responses to these worries so much as to make a more general
argument about the nature of the worries I have raised and the sort of evidence that would be
needed to address them.  However, for the sake of completeness let me say a bit more about some
specific arguments in defense of intuitions.  Those of you interested more in the main thrust of
this chapter can skip to the end of this section.
First, let’s consider the argument that worries about intuitions are similar to worries about
perception.  We know some sense experience is false, just as we know some intuitions are false,
so why not doubt all of our sense experience?  Since that is ridiculous, perhaps the fact that
intuitions go astray sometimes does not give us reason to doubt intuitions in general.  However,
there is a crucial difference between intuition and sense perception.  There are (at least) two kinds
of concerns about the trustworthiness of sense perception:  those based on merely-logically-
possible skeptical scenarios such as Descartes’ evil demon, and those based on known failings of
our senses.  I am not raising Cartesian-style skeptical worries about intuitions, but rather pointing
out concerns akin to the second kind of concerns about perception.  With regards to these second
kinds of concerns, the difference between intuition and perception is that by examining when our
senses deceive us, or when they conflict, we have come to a good understanding of when our
senses go wrong, an understanding that we lack when it comes to intuitions.  Thus, while the
unreliability of some sense experience need not cast doubt on all sense experience, because we
have ways to distinguish reliable from unreliable sense experience, the unreliability of some
intuitions does cast doubt on the reliability of intuition in general.
Second, let’s consider the argument that intuitions are essential to the practice of
philosophy.  As I have already said, this is really no defense of intuitions at all, since the
argument could just as well be considered a reductio argument against philosophy.  That to the
side, there are good reasons to doubt the necessity of intuitions to philosophy.  Two sorts of
intuitions are cited by Bealer and BonJour as absolutely necessary to philosophy:  modal
intuitions (intuitions about what is necessary or possible) and intuitions about logic.  Let’s accept
for the sake of argument that the practice of philosophy requires access to facts about what is
necessary and possible.  Does this mean that it requires modal intuitions?  That is only true if
modal intuitions are our only means of access to modal facts, but that is not the case.  Since
actuality entails possibility, our ordinary (non-intuitive) means of information gathering about the
actual world give us access to a vast number of modal facts.  Since logical contradiction entails
logical impossibility, our (non-intuitive) use of logic gives us access to other facts.  The facts we
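Put in standard modal notation (my gloss; these principles hold in any normal modal logic validating axiom T), the two routes are:

\[
\varphi \rightarrow \Diamond\varphi \qquad \text{(whatever is actual is possible)}
\]
\[
\text{if } \vdash \neg\varphi \text{, then } \vdash \Box\neg\varphi \text{, i.e. } \vdash \neg\Diamond\varphi \qquad \text{(whatever is contradictory is impossible)}
\]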
have access to in these ways are probably insufficient to do everything we might want in
philosophy (metaphysics might suffer quite a bit if limited in this way), but that does not mean
that people interested in philosophy in general must accept the modal intuitions as evidence.
Exploring this issue in the detail it truly deserves would require at least a paper or chapter of its
own, so because it is really only tangential to the task at hand I will leave it at this.
The argument for the necessity of intuitions about logic is that our use of logic can only
be justified by intuitions.  Rather than showing how logic can be non-intuitively justified
(something I have thoughts about but no solid argument for), let me sketch some concerns for the
claim that logic is intuitively justified.  I am going to assume that almost everyone is justified in
using at least basic propositional logic (two-valued logic using negation, and, or, and the conditional).
I think this assumption is plausible given our practice of criticizing people for beliefs which
violate standard logic without further inquiry as to what they know or do not know about logic.
On that assumption, let’s ask if (almost) everyone has the sorts of intuitions that justify their use
of logic.  The standard method of using intuitions as evidence works in this way:  if one has an
intuition of the sort “All Xs are Y” or any other type of general or categorical intuition, and also
has the intuition that some specific X is not Y, the specific intuition trumps, or defeats, the more
general one (in the absence of a method of explaining away the specific intuition).  In such cases
of intuitive conflict, one is not justified in believing the general claim, at least not based on one’s
intuitions alone.  Let’s assume (although I think this is false) that the general population finds
sentences that express the various rules of basic logic intuitive.  There is a large body of research
that shows that a large percentage of the population has intuitions about specific arguments that
violate the rules of basic logic.  For example, a large percentage of subjects (at least 50% in a
number of studies) finds certain instances of arguments that deny the antecedent or affirm the
consequent valid (e.g. Barrouillet, et al, 1997).  It would seem then that these people are not
intuitively justified in accepting basic logic.  There is, of course, much more to be said here; one
might argue with the research (although these results are quite robust) in a number of ways, or
with their interpretation, or one might claim that there are ways of explaining away the intuitions
about the specific invalid arguments.  Again, this is not the place to get into these details
(although I am developing this argument elsewhere), but I will say that I think any way of
explaining away these intuitions (so that only the general intuitions about the rules of logic stand)
will require appealing to some non-intuitively based facts, which makes more plausible the claim
that there is a non-intuitive source of justification of logic.
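For concreteness, the validity facts at issue can be checked mechanically; the sketch below (an illustration of my own, not code from the cited studies) enumerates truth assignments, since an argument form is valid just in case no assignment makes its premises true and its conclusion false.

```python
# Brute-force validity check for two-variable propositional argument forms
# (illustrative sketch; not code from Barrouillet et al.).
from itertools import product

def implies(a, b):
    return (not a) or b

def valid(premises, conclusion):
    # Valid iff no truth assignment makes all premises true and the
    # conclusion false.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens (p -> q, p, therefore q): valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))        # True

# Affirming the consequent (p -> q, q, therefore p): invalid.
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))        # False

# Denying the antecedent (p -> q, not p, therefore not q): invalid.
print(valid([lambda p, q: implies(p, q), lambda p, q: not p],
            lambda p, q: not q))    # False
```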
But let’s return to my main point:  traditional philosophical responses to criticisms of
intuitions will be unsatisfying because they are based on the traditional, a priori, tools of
philosophy, while the criticisms of intuitions are grounded in the empirical.  A priori arguments
can trump arguments based on empirical premises, but this is not to be expected with respect to
the criticisms made of intuitions because, as I will argue below, none of the claims about
intuitions made by their critics – that intuitions cannot be calibrated, that they are based on
unreliable sources, that they cannot give us useful results, that they can only tell us about our
concepts, that they vary from culture to culture or situation to situation – are necessarily false (nor
do they look like examples of contingent a priori falsehoods).  As we will see, in order to
evaluate the role of intuitions in philosophy and to respond to the worries I have raised, we need
to know how intuitions work in the actual world, and that means we need to get our hands dirty in
psychology.  But first, let’s explore in some more detail what intuitions are.
What Are Intuitions and Where Do They Come From?
In colloquial use, “intuition” refers to a faculty and also to the deliverances of that
faculty.  I will use the term to refer only to the deliverances, in part because that is how the term
is used in contemporary philosophy, and in part because I believe that different intuitions come
about by different processes, so that there really is no single faculty of intuition.  Intuitions are
had by people; I will call a person who has a given intuition an intuitor.  Intuitions have
propositional content – one never has an intuition that is not an intuition that P.  Since intuitions
have content, they represent things as being a certain way, in the broadest sense of the words
“being a certain way,” since intuitions can represent possibilities, and in the broadest sense of the
word “things,” since properties and relations can be the subject matter of intuitions.  I will call the
things that are represented by an intuition the subject matter of the intuition.  So, if Fred has the
intuition that murder is wrong, Fred is the intuitor, “murder is wrong” is the content of the
intuition, and murder and wrongness are both the subject matters of the intuition.
But what is the intuition?[2]  An intuition is not its content, just as beliefs and desires are
not their contents.  Rather, an intuition is a kind of experience (Ernest Sosa (1998) claims that
intuitions are a kind of disposition, although he is not committed to this view, but that
misrepresents how we use the word – no one seems to talk about an intuition without talking
about something that has actually happened to them – although saying something is intuitive to
someone may be attributing a disposition to them).[3]
 George Bealer (1998) calls an intuition a
seeming – an intuition is some content seeming to be true – and I think this captures the
phenomenon well.  Not every intuition involves an equally strong seeming-to-be-true, and not
every intuition brings with it an acceptance of its content by the intuitor (consider, for example,
intuitions the content of which we know to be false).

[2] This paragraph is a survey of the consensus view of intuitions, unless otherwise stated.  See Sosa, 1998, Bealer, 1998, Pust, 2000, BonJour, 1998, and Cohen, 1986, for more detail on the claims in this paragraph.

[3] See Pust, 2000, for a discussion of why accounts that allow intuitions to be dispositions, or non-occurrent in some way (thus not necessarily experiences), fail.  Pust’s argument is aimed at Sosa’s account of intuitions (Sosa, 1998), but Sosa is not committed to intuition not being occurrent, and his view is not intended to be incompatible with this claim (see Sosa, 1998, p. 259-260).
Not every seeming-to-be-true is an intuition.  Right now the proposition “I am typing”
seems true to me, but that seeming is not an intuition, but rather a case of perception.  It also
seems true to me that George Washington was the first president of the United States, but that
seeming is not an intuition, but rather a case of recollection.  No one considers seemings of these
sorts to be intuitions.  In a number of places, intuitions have been distinguished from seemings
due to perception or recollection (e.g. Bealer, 1998, Sosa, 1998), or from hunches and guesses
(Bealer, 1998).  These are good distinctions to make, to an extent.  However, I am not willing to
go as far as some in distinguishing intuitions from all recollections or hunches.  My reason is this:
currently, our only way to identify a given seeming as an intuition in the wild (so to speak) is
based on phenomenology, what the experience of the seeming is like.  The experience of seeing
that I am typing, or recalling that George Washington was the first president, is clearly different
from the experience of clear cases of philosophical intuitions such as the intuition that torture is
wrong or that Smith does not know that the man who will be hired has ten coins in his pocket.
No one would mistake such clear cases of perception or recollection for intuitions; in fact, I
expect that no one would ever mistake any case of a perceptual seeming for an intuition, nor
would they mistake any case of conscious recollection for an intuition (by “conscious
recollection” I mean a recollection that one experiences as a recollection).  However, I do not
know for sure that some sorts of recollection do not have the same phenomenological feel as
intuitions.  For all I know (and in fact some research supports this claim), some types of
unintentional, or automatic, recall might not feel like recall at all, but rather just the same as clear
cases of the sorts of things philosophers want to call intuitions.  Because of this, there may be
cases where philosophers use “intuition” to refer (unknowingly) to something that seems true
because it is remembered, or that is a sort of hunch or (perhaps unconscious) guess.
Because of this, limiting intuitions to seemings that are not cases of any sort of
recollection, or any type of hunch, might give us a definition of the term “intuition” that does not
capture the extension of the term as it is actually used.  I think we can safely say that intuitions
are not seemings due to immediate perception (although they may indirectly be due to perception;
it is hard to claim that experience plays no causal role in my intuitions about the badness of
suffering, for example) or conscious recollection.  Saying just what sort of seeming they are,
though, will be quite difficult given the lack of terms that describe the phenomenology of our
various experiences.  So, unfortunately, I will not be able to give a complete and satisfactory
definition of the term; the best I can hope to do is narrow it enough to use it well.  This will
require one more narrowing-down.
Intuitions must be the product of processes that are not wholly transparent to the intuitor
(the person having the intuition).  There must be some aspect of the process that generates
intuitions that the intuitor is not directly aware of – either the data used to generate the intuition,
or the way in which that data is used.  I say this because intuitions are at least conceivably a
source of evidence – no critic of intuitions claims that the use of intuitions as evidence is some
sort of category mistake – so whatever intuitions are, they must be something that could possibly
be evidence.  However, if an intuition that P was based entirely on transparent processes, then the
fact that P seems true would provide one with no evidence for P.  After all, if the seeming that P
is produced by a transparent process, then the intuitor is aware of all of the evidence this seeming
is based on, and of the way in which they arrived at the seeming that P based on that evidence.
The seeming itself adds nothing to their reasons for believing P:  if their evidence and reasoning
is good, then the fact that P seems true based on that gives them no more reason to believe P, and
if their evidence or reasoning is flawed, the fact that P seems true based on it gives them no
reason to believe P.  Since intuitions are mental states that are supposed to give us evidence for
their content, they cannot be based solely on transparent mental processes.
So here is the best “definition” I can give for “intuition” (scare quotes are being used here
since this definition is, as admitted above, incomplete):  an intuition is the seeming to be true
(although not necessarily acceptance of or belief in) of some proposition that is not a perceptual
seeming, or due to conscious recollection, or due entirely to transparent mental processes.  Given
the last aspect of this definition, we should expect that intuitions are at least partly generated or
affected by unconscious mental processes.[4]  As far as we know, every mental phenomenon
involves some mental processes – even vision, which on many accounts involves the world as it
is being “given” directly to us, involves mental processing (see, e.g., Mack & Rock, 1998).  Since
intuitions cannot be based solely on transparent processes, and are mental phenomena, they most
likely involve some unconscious mental processes.  Because of this, defenses of intuitions based
on traditional philosophical tools are not adequate.

[4] Although the word “unconscious” has associations with Freudian psychoanalysis, I intend to use it in as theory-neutral a way as possible (the term is used by a great number of researchers who do not endorse Freud’s views).
Intuitions and Traditional Philosophy
Discussions of intuitions that use only the traditional tools of philosophy – introspection
and a priori reasoning – can only draw limited conclusions about whether or not what we
normally call “intuitions” are useful evidence.  Since the mental events we normally call
“intuitions” must be generated by non-transparent processes (on the assumption that they are even
possibly evidence), we cannot determine through introspection how they are produced, since we
cannot introspect about non-transparent processes.  Further, since there are an enormous number
of possible processes that could give rise to the types of seemings that we normally call
“intuitions,” which range in reliability from infallible to wholly inaccurate, reasoning about these
seemings a priori will tell us little about their accuracy.  For the same reasons, neither a priori
reasoning nor introspection can argue against the numerous other concerns raised above – that
intuitions vary from culture to culture, that they only tell us about our concepts, that they are not
good evidence because they have failed us for thousands of years.  There is nothing logically or
conceptually or analytically true about seemings-to-be-true that are not perceptual or based on
conscious recollection or transparent processes that makes these concerns inapplicable.  We may
be able to argue a priori that we are justified in using these sorts of seemings as evidence, but
while that might be satisfying from a certain standpoint, I find it distinctly underwhelming given
that if such justification is had in this case it is had despite the cited worries about the accuracy of
intuitions.
As long as one’s theory of what intuitions are fails to specify facts about their reliability,
we will be unable to evaluate their reliability a priori.  We could, of course, give a stipulative
definition of what “intuition” refers to (perhaps something produced by a certain kind of reliable
process) and a philosophical theory of how they work given this, but if we do this we still have
open the question of whether or not the mental phenomena we experience and normally call
“intuitions” are actually intuitions, and this cannot be answered a priori or through introspection
for the majority of interesting philosophical intuitions.  Of course, there will be some classes of
intuitions (notably those about logical facts) the reliability or accuracy of which we can
determine, and which we might thus be able to rule out a priori as being produced by certain types of
processes, but as Cummins has pointed out, for much of philosophy we lack standards by which
to calibrate our intuitions and thus we cannot determine a priori the extent to which they are
produced by reliable processes.
So, we cannot use the tools of philosophy alone to determine if philosophy should
continue as it is because we cannot tell if our mental phenomena of a certain type are good
evidence about philosophical questions.  If we are going to have a fruitful discussion of the use of
intuitions in philosophy, we need empirical information, information about our minds.  Empirical
information about our minds comes from psychology.
This point works against both defenders and critics of intuitions.  As I have pointed out,
defenders of intuitions have based their arguments too much on a priori claims.  But critics of
intuitions have overlooked empirical results as well.  Cummins cites almost no psychological
findings in his attack on intuitions; his discussion of the sources of intuitions makes claims about
their biases that are not well supported by research.  Similarly, the claim that intuitions can only
tell us about our concepts is an empirical one – many mental processes are able to tell us about
the external world, so intuitions in principle could as well – but no research is given to support it.
Criticisms based on variations in intuitions rely tacitly on the claim that there is no way to choose
between these intuitions – that intuitions in one circumstance or of one group cannot be shown to
be less reliable than others – which is certainly not a necessary truth.  This leaves open the
possibility that intuitions may be defended on empirical grounds, since it is certainly possible
that the facts do not support the claims made by the critics of intuitions.  Such a defense is a
project for a later chapter, but I at least want to underline the fact that my goal is not to call upon
psychology to argue categorically for or against the use of intuitions in philosophy.  In the end, I
think (and I will argue) that intuitions will be quite useful, even necessary, for some philosophical
projects but not all, and that they are not appropriate for others, or under certain conditions;
further, there are principled means of determining all of this.  I think that even where
psychological research indicates that certain intuitions are likely to be inaccurate, or that whole
categories of intuitions are not good evidence, this will overall benefit philosophy.  This has the
potential to resolve some problems due to conflicting intuitions, since some of the conflicting
intuitions may be shown to be unreliable and not to be taken seriously; it also has the potential to
free some domains of philosophy from the burden of having to conform to our intuitions, a
burden that has been too heavy to bear in many cases (e.g. articulating a theory of knowledge).
However, before we can look to psychology and try to articulate a general theory of intuitions in
order to apply it to their use in philosophy, there is a crucial objection to this project that must be
responded to.  This will occupy the remainder of this chapter.
Objection:  Philosophical Intuitions are Sui Generis
The rest of this dissertation draws upon psychological research on intuitions.  But this is
only appropriate if psychologists have done research on the sort of things philosophers mean by
“intuitions.”  Philosophers such as George Bealer and Antti Kauppinen have argued that the
intuitions used in philosophy are of a different kind than things studied by psychologists (or
psychologically-oriented philosophers) – Bealer says that they are a sui generis type of
propositional attitude – and thus that current psychological research has no bearing on philosophy
(Bealer, 1998, Kauppinen, 2007).[5]  In this section I will argue that that is not the case, and that the
types of intuitions philosophers call upon are of the same kind as those psychologists currently
study.

[5] This is somewhat of an extension of Kauppinen’s claims, but I think a reasonable one.
There are certainly reasons to hope that the mental phenomena that some psychologists
study are of the same kind as philosophical intuitions:  if not, we cannot respond to Cummins’
calibration argument.  That argument is based on the premise that we cannot determine whether
or not intuitions are reliable without having non-intuitive access to the answers to philosophical
questions.  However, if psychologists who study intuitions about non-philosophical subjects are
studying the same sort of thing that philosophers use, then we can calibrate philosophical
intuitions in the following way:  by learning what sorts of information intuitions are normally
based on and what factors affect them for good and for ill – calibrating them in non-philosophical
domains, where we do have non-intuitive access to correct answers – and then transferring what
we have learned to philosophical intuitions (more on this later).
That said, a number of aspects of philosophical intuitions supposedly set them apart from
what psychologists study.  The most obvious is their subject matter:  intuitions on philosophical
questions are not often studied by psychologists.[6]  However, a difference in subject matter
between two propositional attitudes does not automatically make them different kinds of
propositional attitudes – my belief that Jupiter is striped and my belief that grass is green are the
same sort of attitudes (both are beliefs), despite being about different subjects.  Other cited
differences are:  philosophical intuitions are extremely stable, philosophical intuitions are
“robust,” and philosophical intuitions “present themselves as necessary.”  Let’s consider each of
these in turn and see if they really differentiate philosophical intuitions from what psychologists
have studied.

[6] However, psychologists have studied intuitions about or directly relevant to ethics (Greene, et al, 2001), essentialism (Smith & Sloman, 1994), and logic (Barrouillet, et al, 2000), among other topics.
George Bealer cites the stability of philosophical intuitions as evidence that they are
distinct from ordinary intuitions (Bealer, 1998).  It is claimed that philosophical intuitions tend
not to change over time, and they tend to be resistant to changes in reflective belief.  For example,
my understanding and acceptance of Cantor’s diagonalization proof does not make it any less
intuitive to me that there is nothing larger than infinity.  However, intuitions about philosophical
topics are not always stable over time.  Experimental philosophers have shown that intuitions
about a thought experiment can be changed by the consideration of other thought experiments
immediately before (Swain, Alexander, and Weinberg, 2008).  There is a great deal of anecdotal
evidence to support the claim that philosophical intuitions change over time.  Many of the
philosophers I have spoken to on this topic have agreed that some of their intuitions have changed
as they develop their views, as they learn more about a topic, and over the course of their
careers.[7]  So it seems that many philosophical intuitions do change over time.  The intuitions that
psychologists study can also be remarkably stable.  Psychologists who study implicit learning and
unconscious attitudes – learning and attitudes of which one is not aware – often measure these by
studying intuitions.  Unconscious attitudes tend to be very stable, lasting years and surviving
changes in conscious attitudes and beliefs (Wilson, et al, 2000).  For example, unconscious
stereotypes tend to persist even when their holder consciously believes that they are false; they
still are manifested in the subject’s unreflective behavior.  Implicitly learned rules or patterns can
survive exposure to ambiguous or partly contradictory data and can cause intuitions despite the
subject believing that they have learned nothing (Lewicki, et al, 1989).  Stability over time and
through changes of belief is not unique to philosophical intuitions, and is not characteristic of all
philosophical intuitions.

[7] David Kaplan, for example, talks about experiencing “a resurgence of an atavistic Fregeanism” in his “Afterthoughts”.  Thanks to Geoff Georgi for pointing out this example.
Antti Kauppinen (2007) argues that philosophers are interested in robust rather than
surface intuitions, and that experimental philosophers (and by extension psychologists who study
intuitions) elicit intuitions of the wrong type in their studies.  Robust intuitions are intuitions had
(1) by “competent users of the concept in question,” (2) “in sufficiently ideal conditions,” and (3)
“influenced only by semantic considerations.”  Kauppinen is interested in the use of intuitions to
analyze concepts, but it makes sense to generalize his criteria so as to apply to intuitions about
things beyond concepts; obviously intuitions formed in ideal conditions are better evidence than
those that are not, and we also want intuitions influenced only by consideration of the things themselves,
rather than irrelevant factors.
If we extend Kauppinen’s definition of “robust intuition” in this way, then either
psychologists study robust intuitions at least some of the time or the use of intuitions in
philosophy is hopelessly problematic.  The reason is this:  if psychologists do not study robust
intuitions at least some of the time, then we cannot determine if any intuitions are robust.  In
order to determine if a given intuition is robust or not, we need to determine if it is influenced by
irrelevant factors and formed under ideal circumstances.  In order to do this, we need to determine
what factors might affect the intuition (by determining what factors affect intuitions generally),
and what circumstances are conducive to having accurate intuitions.  This brings up Cummins’
calibration argument again:  we cannot learn this if we do not have non-intuitive access to correct
answers.  If we cannot study the accuracy of intuitions about non-philosophical questions and
apply what we learn to philosophical intuitions, then we cannot determine what intuitions are
robust.  Further, there is reason to think that psychologists are interested in robust intuitions,
since many do study the different factors that affect the accuracy of intuitions, and the data these
tend to be based on.  Psychologists are interested in whether intuitions are robust, just not
(usually) in robust intuitions about philosophical questions.
According to Bealer (1998) and Joel Pust (2000), the big difference between
philosophical and ordinary intuitions is that philosophical intuitions “present themselves as
necessary.”  Bealer does not explain quite what this means, saying he is unsure how to analyze it
(Bealer, 1998, p. 27).  For Pust, when one has the philosophical intuition that P, one is also
disposed to have the intuition that necessarily P – one would have the intuition that necessarily P
if they considered the question of P’s necessity (Pust, 2000, p. 97).  The principle of charity
demands that we amend this a bit, however.  Consider my intuitions about a Gettier case:  when I
have the intuition that the person described in the thought experiment does not know some thing,
I am not inclined to also intuit that they necessarily lack this knowledge, because it is obvious to
me that they do not.  Further, there are many philosophical intuitions which are about possibility,
such as the intuition that it is possible for Aristotle not to have studied with Plato (Kripke, 1972).
There are two things Bealer and Pust might plausibly mean when they say philosophical
intuitions present themselves as necessary:  philosophical intuitions are intuitions about modality,
so that every intuition that P is accompanied by the dispositions to have the intuition that
necessarily P, or possibly P, or something of that sort;[8] alternately, philosophical intuitions are
always accompanied by dispositions to have intuitions about entailment whose content is of the
form “Necessarily if this situation is the case, then P.”[9]  It is important that both of these
interpretations treat philosophical intuitions as having associated dispositions to have other
intuitions, because it would be misrepresenting the phenomenology of intuitions to say that every
intuition one has is explicitly about modality or entailment.

[8] This was suggested to me by Kadri Vihvelin.
[9] This was suggested to me by Geoff Georgi.
Let’s look at the first interpretation first.  Perhaps every philosophical intuition that P is
accompanied by a disposition to intuit either “Necessarily P,” or “Possibly P,” or even “P is
possible.”  This does not differentiate intuitions as studied by psychologists from philosophical
intuitions, since most intuitions studied by psychologists are elicited by cases which are or seem
actual.  Presumably these would be accompanied by the disposition to have an intuition about
possibility, since these intuitions are about actual cases.  Further, some psychologists study
intuitions about categories, and some of what is intuitive about categories will be intuitively
necessary, such as “Dogs are animals,” or “Chairs are furniture.”  Thus, if we interpret Bealer and
Pust as claiming that all philosophical intuitions are about modality, or are associated with
dispositions to have related modal intuitions, they are not distinct in this way from what some
psychologists study.
So let’s consider the second interpretation.  Perhaps every philosophical intuition that P is
accompanied by a disposition to intuit that “Necessarily if this situation obtains, P.”  Under this
view, philosophical intuitions have to do with entailment.  This does seem to capture the
difference between my intuitions about Gettier cases, for example, where I am inclined to say that
the situation entails a lack of knowledge, and intuitions in psychological studies of character, for
example, where I doubt a subject would assert, “Given these facts, this person is necessarily
likeable,” if asked.  But why make this claim about all philosophical intuitions?  Absent these
accompanying dispositions, philosophical intuitions can still do a great deal of the work they are
used to do.  Imagine that I had the intuition about a Gettier case “Smith does not know (e),” but I
lacked the disposition to have the intuition “Necessarily in this situation Smith does not know
(e).”  This is still a counterexample to the justified-true-belief theory of knowledge, since it is still
a case of a person who has a justified true belief but fails to know that which they believe.  The
same applies to other philosophical counterexamples:  to provide counter-evidence to a theory
that X is sufficient for Y, one needs only the intuition that Y is absent in at least one situation
where X is present, regardless of whether or not this seems necessarily true about this situation;
similarly, to provide counter-evidence to a theory that X is necessary for Y, one only needs the
intuition that in some situation Y occurs without X, whether or not this seems necessarily true
about this situation.  Philosophical intuitions can do a lot of the work that they do even if they say
nothing about entailment.  I do not see the motive for the second interpretation of Bealer and Pust’s claim if it is read as a requirement for something to fill the role that philosophical intuitions do; if it is only meant as a descriptive claim (that, as a matter of fact, all of our philosophical intuitions happen to be accompanied by intuitions about entailment), they need to do a lot more work to provide a solid inductive argument for this conclusion.
So, the first interpretation of the claim that philosophical intuitions “present themselves as necessary” does not differentiate philosophical from non-philosophical intuitions, and the second interpretation does not give a real requirement for something to be a philosophical intuition, and so also does not set philosophical intuitions apart from those studied by psychologists.[10]

10. Further, psychologists study so-called intuitions that one would expect to be accompanied by intuitions about entailment, although psychologists, not being interested in necessity, never ask about it.  For example, psychologists study so-called intuitions about category membership (e.g., Murphy, 2002), and such studies are crucial to our understanding of how the unconscious mind works.  Not all category membership is necessary – one can be categorized by job, for example, or by personality type – but quite a bit is.  For example, the so-called intuitions “Dogs are animals” and “That is a dog” could both be accompanied by intuitions about entailment:  “Necessarily, if these (or any) facts obtain, dogs are animals,” and “Necessarily, if these facts obtain (e.g. this is a poodle), this is a dog.”

Characteristics associated with philosophical intuitions, such as stability or modal content, are not shared by all philosophical intuitions (and some are not shared by most).  They (and robustness) are also possessed by many intuitions that psychologists study.  Thus, it seems that philosophical intuitions are a subset of the type of things many psychologists study, and psychology can teach us quite a bit about how and when they work.
Conclusion
Intuitions have played a very significant role in analytic philosophy.  Many of us are
concerned about this, and rightly so.  However, these concerns do not merit completely
abandoning the use of intuitions.  Instead, we should understand how intuitions come about and
what factors can affect them.  Unfortunately, intuitions cannot be studied solely philosophically, so understanding how they come about and what factors affect them – generating a general theory of intuitions – requires looking to empirical research from psychology.  In the next chapter, I will present and argue for an (incomplete) general theory of
intuitions.  I will then return to the concerns I have raised about the use of intuitions in
philosophy and see what this theory tells us about them.
Chapter 2
Introduction
I argued in the previous chapter that to evaluate the role of intuitions in philosophy we
need a general theory of intuitions – an account of the processes that, in general, give rise to
intuitions. In this chapter I will present the general theory of intuitions that I think best fits what is
known about the functioning of our unconscious minds.  I focus on the unconscious since
intuitions must be generated at least in part by non-transparent mental processes, and of course
such processes are by their nature unconscious.  It is likely that in some circumstances intuitions
will come about due to processes other than the ones I discuss in this chapter, or be affected by
factors which are not accounted for or explained by my general theory of intuitions.  I will
enumerate some of the most important of these in the next chapter.  However, absent evidence
that some other process or factor generates or affects an intuition, my claim is that we should
assume that that intuition is a result of the processes described in this chapter; that’s why this is a
general theory of intuitions.  The processes described in this chapter generate intuitions which
have a better chance at being good evidence than those generated or affected by the factors we
will discuss in the next chapter.  That does not mean that intuitions generated by the processes
described in this chapter will always be good evidence; there are subjects in philosophy about
which intuitions are much less likely to be reliable (I will discuss one of these subjects –
knowledge – in Chapter 5).
In order to understand how our unconscious mind generates intuitions, we need to
understand how the unconscious mind learns.  Much, potentially all, of what our intuitions are
based on must be learned; even those philosophers who believe that philosophical inquiry
generates a priori knowledge admit that competence with the concepts philosophers investigate
must be acquired (in other words, learned) through experience.  For this reason, I will spend a
significant amount of time discussing how it is that our unconscious mind learns, in addition to
discussing how it applies what it has learned to make the judgments that we experience as
intuitions.
From this point forward, when I use a term which refers to a mental property, event, state,
or object, or even to the mind itself, I am referring to the unconscious version of that property or
object unless otherwise noted.  “Mind” will refer to the unconscious mind, “belief” will refer to
unconscious beliefs, “judgment” will refer to unconscious judgments, and so forth.  The sole
exception is “experience,” which refers to both conscious and unconscious experiences.[11]  I
understand that some will object to the idea that certain mental phenomena can have unconscious
versions.  For example, one might think that one cannot have a belief which is not at some point
consciously occurrent, and thus that there is really no such thing as an unconscious belief.  My
use of such terms is not intended to reflect any substantive claims about the nature of mental
phenomena; instead I use these terms for convenience and ease of understanding.  All I mean by
“unconscious X” is a mental thing that functions in the unconscious mind, or plays the role in our
unconscious mental lives, analogously to X in our conscious mental lives.  So by “unconscious
belief” I mean the unconscious mental thing that plays the role in the unconscious mind that
belief does in the conscious mind – something, for example, that our unconscious mind takes to
be true in the sense of judging in conformity with it.  I will also throughout this chapter (and the
next) often anthropomorphize the unconscious mind; I will talk about it “using concepts” or
“making judgments” as if it had the ability to act intentionally.  This is meant purely
metaphorically, and again is done for the sake of ease of reading and writing.  I will use the following
convention to refer to concepts (both conscious and unconscious):  when I refer to a concept, I
will give what it is the concept of in small capitals.  Thus, “HUMAN” refers to a concept (or an
unconscious concept), whereas “human” refers to a type of being.  Finally, I will tend to focus on
examples of non-philosophical intuitions, “ordinary” concepts, and the learning of non-
philosophically contentious facts.  I have argued that philosophical and non-philosophical
intuitions are of the same kind, so by talking about the one I can talk about the other; however,
most psychological research on intuitions or learning is on non-philosophical topics, and in
addition, using such examples tends to bring in less baggage than using more philosophically
oriented examples.

11. I understand that it is odd to refer to unconscious experiences, but there certainly are unconscious mental events that play the same role in unconscious learning as ordinary experience does.
In order to make this chapter easier to read for the first time, and easier to use as a
reference after the first reading, I will first present an overview of my view.  In the overview I
will present my view without argument or evidence.  Subsequently, I will give detailed,
empirically supported arguments for the contentious aspects of my view.
Overview of My View[12]

12. I refer to the theory I explain in this chapter as “my view” because it is the view I endorse.  I certainly cannot claim that the view was originated by me.  Some aspects of it, and most of the arguments for it that I present here, I came up with myself (based on the results of studies I had read), but much of it comes either directly or indirectly from theories proposed by psychologists I have read.  After two years of reading and re-reading on this subject, it is hard to say exactly where every idea I discuss comes from; this theory is heavily influenced by (and parts are adapted directly from) the work of Steven Sloman, Daniel Kahneman, Shane Frederick, Eleanor Rosch, Carolyn Mervis, Ap Dijksterhuis, Pawel Lewicki, Thomas Hill, Tilman Betsch, Timothy Wilson, and Jonathan Schooler.
The unconscious mind learns through association.  To learn that dogs play with sticks, for
example, we have to form an association between dogs and playing with sticks. I will henceforth
use the term “learn” loosely, in a non-factive sense, so that one can learn something which is not
true.  Associations occur between experiences, and an experience of X is any mental event with X
in its content – a thought about X, perceiving X, believing X, judging X to be the case, etc.
Unconscious mental events with content count as experiences in my view.  I accept that this may
be an extension of the term “experience” as it is normally used in English, and I do not intend any
substantive claims about what the nature of experience “really is.”  The important point is that
unconscious thoughts about X – for example, the unconscious judgment that X is the case, or
unconsciously recalling X, or unconscious perceiving X – which occur at the same time as some
other experience – say, noticing that Y is the case – can lead to associations between X and Y.
Associations are formed between contents of the same experience, or between contents of
experiences which occur at roughly the same time (I do not have a view on how close together in
time experiences must be for associations to form).  To form an association between dogs and
stick-playing, I have to have an experience (or roughly simultaneous experiences) that has both
dog content and stick-playing content.  How that content is represented will make a difference to
what association is formed.  If my experience of a dog is as a dog – that is, I have an experience
with content something like “that is a dog” – then I can associate dogs with stick-playing.  If my
experience of a dog is of a dog, but does not represent it as a dog – I see only the shape and color
of the dog, and do not realize on any level that the thing with that shape and color is a dog – then
I will form an association between that shape and color and stick-playing.[13]

13. I am intentionally remaining agnostic about certain issues in the philosophy of perception, i.e. does our experience have non-conceptual content?  I do not think that the answer to this question makes much of a difference to my view.  Both the conceptualist and the non-conceptualist think that some of our experience has conceptual content, and also that all kinds of sense data make it into the content of our experience.  Whether or not this data is all conceptualized, as long as it is in the content of our experience, it can be a part of an association.
Association occurs automatically and effortlessly, by which I mean it occurs whether or
not we notice it or intend for it to occur, and it occurs even when we lack attention, energy, or
will.  In all likelihood we cannot prevent associations from occurring (except by preventing
experiences):  any and all experiences had at roughly the same time get associated together.  If an
association of the relevant sort already exists, then when two things are experienced together the
already existing association gets strengthened.
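To make this bookkeeping concrete, the following is a minimal sketch in Python of the kind of process the associative model describes.  Everything in it – the class name, the unit increment, the treatment of an experience as a set of contents – is an illustrative assumption of mine, not a claim about how the brain implements association:

```python
from collections import defaultdict
from itertools import combinations

class AssociativeMind:
    """Toy model: associations form between all contents experienced
    at roughly the same time, and strengthen with repetition."""

    def __init__(self):
        # strength[frozenset({x, y})] = how strongly x and y are associated
        self.strength = defaultdict(float)

    def experience(self, contents):
        """An experience is modeled as a set of contents, e.g.
        {"dog", "stick-playing"}.  Every co-occurring pair is associated
        automatically; existing associations get strengthened."""
        for x, y in combinations(sorted(set(contents)), 2):
            self.strength[frozenset((x, y))] += 1.0

    def association(self, x, y):
        return self.strength[frozenset((x, y))]

mind = AssociativeMind()
for _ in range(20):                    # many dog encounters
    mind.experience({"dog", "wagging tail", "stick-playing"})
mind.experience({"dog", "statue"})     # a single odd encounter

assert mind.association("dog", "stick-playing") > mind.association("dog", "statue")
```

Note that the keys here are whatever the experience actually represents:  if the dog is experienced only as a shape and a color, the association forms with that shape and color, not with “dog.”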
There are facts about our brains that limit and guide how we associate our experiences.
For example, there is good evidence that we have an innate mental grammar that affects how we
learn language, so that language learning is due to a combination of experience and innate factors.
However, as I will argue later, we do not have good evidence that there are many such innate
structures that affect our unconscious learning, and so in the absence of good evidence that our
learning about some topic is affected by one, we should assume that it is not.
Associations are used by our unconscious minds in two ways.  They use them to
categorize things.  If I see that some object has a certain shape, has a wagging tail, and barks, and
those traits are associated with the concept DOG, then I will categorize the objects as a dog.  If I
note that some action is intentional, produces suffering, and brings about no good outcome, and
those are associated with BAD, then I am likely to categorize the act as bad.  We also use
associations to make inferences.  If I see that something is a dog, and associate dogs with stick-
playing, then I am likely to infer that that thing will like to play with sticks, even if I have no
direct evidence that this is true.  If I judge that some act is bad, and associate the performance of
one bad act with the performance of other bad acts, I may infer that the agent is likely to do other
bad things in the future.  Categorization and inference are really the same process – categorization
is inference from trait possession, or membership in one category, to membership in another
category, rather than inference about the other traits a thing possesses.  Both inference and
categorization are types of judgments.  Categorization and inference judgments which are based
on unconsciously learned associations are unconscious judgments.  Unconscious judgments can
be consciously manifested – we can be aware of the content of these judgments, in the form of
intuitions – but sometimes they are not.  The processes which lead up to them are never
conscious.  It is worth noting again that unconscious judgments are a form of experience (as I am
using the term) and can strengthen pre-existing associations.
Unconscious judgments occur only in the presence of stimulus experiences; when one experiences X, one will tend to make judgments that Y (when X and Y are associated), but such unconscious judgments will not occur spontaneously.  Not every association will cause judgments in accordance with it; whether or not an association between X and Y triggers a Y judgment when X is experienced will depend on how strongly X and Y are associated together, and on how strongly X is associated with other traits; the stronger the association (especially relative to other associations with X), the more likely the judgment is.
categorization can occur in multiple steps; if X is associated with Y, and Y is associated with Z,
then experiencing X may cause an inference that Y, which (since it is a Y experience) may cause
an inference that Z.  This can occur because unconscious judgment, like association, is automatic
and effortless.
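Continuing the same toy sketch (this assumes the AssociativeMind class above), judgment-triggering and multi-step inference might be rendered as follows; the relative-strength rule and the particular threshold are my stand-ins for details that are left open here:

```python
def judgments(mind, stimulus, rel_threshold=0.5):
    """Y is judged from X only when the X-Y association is strong
    relative to X's other associations; each judgment is itself an
    experience, so inference can chain from X to Y to Z."""
    judged = set()
    frontier = [stimulus]
    while frontier:
        x = frontier.pop()
        # gather everything associated with x, and how strongly
        links = {}
        for pair, s in mind.strength.items():
            if x in pair:
                (y,) = pair - {x}
                links[y] = s
        if not links:
            continue
        strongest = max(links.values())
        for y, s in links.items():
            if y != stimulus and y not in judged and s >= rel_threshold * strongest:
                judged.add(y)        # the unconscious judgment "Y"
                frontier.append(y)   # ...which can trigger further judgments
    return judged

print(judgments(mind, "dog"))
# {'stick-playing', 'wagging tail'} -- 'statue' is too weakly associated
```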
When we have learned enough about some type of thing, we form something that
behaves like an abstract representation of that class of thing and the traits associated with it,
which I will call an unconscious concept (unconscious concepts are sufficiently unlike “normal”
concepts that I avoid referring to them simply as “concepts”).  Unconscious concepts are used to make judgments about things (or about the category captured by the concept) without the need to recall any instance of the unconscious concept other than the one under consideration.  For example, as I begin to learn about fairness, I may judge some action to be fair by comparing it to other actions I classify as fair.  As I learn more and more about fairness, by experiencing more fair and unfair actions, I construct an abstract representation of what fairness is and what tends to be true about fair
acts; this is my unconscious concept FAIR.  In the future, when I make judgments that some action
is fair or not, I do so by comparing facts about the action to the traits I associate with FAIR.  The
exact mechanism by which this occurs is not entirely clear – there are competing models that
describe this process.  However, it must work roughly in the following manner:  objects are
judged to be instances of a given concept if they have a large enough percentage of the traits
associated with the concept, with “adjustments” made for possessing traits strongly associated
with the concept (i.e. traits more strongly associated with the concept are weighted more heavily
when categorizing).  If I categorize an act as fair, I may then make judgments about it based on
associations between FAIR and other unconscious concepts or traits; for example, I may judge that
the act is good (an instance of the unconscious concept GOOD).  I do this without comparing this
action to any other fair or unfair action.
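That rough rule – enough of the concept’s traits, weighted by strength of association – can be sketched directly.  The scoring function and cutoff below are just one of the competing models mentioned above, and the trait weights for FAIR are invented purely for illustration:

```python
def matches_concept(observed_traits, concept_weights, cutoff=0.6):
    """Judge an object an instance of a concept when a large enough
    (weighted) share of the concept's associated traits is present."""
    total = sum(concept_weights.values())
    present = sum(w for trait, w in concept_weights.items()
                  if trait in observed_traits)
    return present / total >= cutoff

# Hypothetical association strengths between traits and the concept FAIR
FAIR = {"equal shares": 3.0, "consent": 2.0,
        "no deception": 2.0, "needs considered": 1.0}

act = {"equal shares", "consent", "no deception"}
print(matches_concept(act, FAIR))   # True: 7/8 of the weighted traits present
```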
Unconscious concepts are formed through learning of the sort I have been describing,
which means that they are clusters of associations (this is one thing that distinguishes them from
concepts as philosophers generally think of them).  Consider, for example, my unconscious
concept DOG.[14]  As I have experiences with dogs, I form associations between a number of traits:
tail wagging, friendliness, fur, being a certain shape, stick-chasing, and so forth.  Some of these
traits are very strongly associated with all of the others; for example, seeing that something is a
certain shape may often be enough for me to judge that it is a dog, and thus that it will like stick-
chasing and be friendly.  Speaking from experience, a dog statue seen out of the corner of my eye
can seem like a dog based only on shape, although that same statue may not seem like a dog when
seen square on, because it has many other traits – material, color, and so forth – which are
associated with STATUE, and STATUE and DOG are incompatible concepts.  Traits other than shape
are less strongly associated with DOG, and the presence of any one of these may not be enough to
cause one to categorize something as a DOG or to infer that it will have DOG-associated traits.  However, if enough of these are present, DOG judgments are likely to occur.

14. Again, I prefer to discuss non-philosophical concepts because I can make relatively uncontroversial claims about what traits are associated with them that we use in our judgments; this is much more difficult to do when it comes to philosophical concepts like GOOD or KNOWLEDGE.
Notice that nothing about this story requires that any traits will be necessary for
categorization.  In fact, for many unconscious concepts, there will be few or no traits such that
things which lack those traits will not be unconsciously judged to be instances of the concept.[15]
This is because of the way unconscious concepts are formed:  to put things overly simply, one
might first form the concept from experience with one object; upon seeing a new object with
many (but not all) traits in common with that object one might classify it as a member of the
category; a third object with some traits in common with both the first and the second object, but
none had by both, might have enough traits which are associated with the concept to be judged as a member of the category.

15. See Rosch & Mervis (1975) and Mervis & Rosch (1981) for early research indicating that many of the categories we naturally use lack (many) necessary and sufficient conditions for membership, although much of their research is not on unconscious categorization.
Intuitions are the conscious manifestations of unconscious judgments.  Intuitions have
propositional content.  Typically, the content of the intuitions we use in philosophy are expressed
in sentences.  For example, when philosophers have the intuition that harvesting the organs of a
healthy patient to save the lives of five other patients is wrong, they generally do so by hearing
this sort of case described, and having the feeling that the sentence expressing the moral judgment
about the case is true.  Arguably, intuitions need not always be about sentences; one might see a
doctor harvesting the organs of an innocent patient and feel that it is wrong, even if that feeling is
not immediately expressed in language.  I have some ambivalence about how to describe cases
like this, since one might say that one must at least think “That is wrong,” to have the intuition
that the behavior described is wrong, but that question is only tangential.  We can agree that most
intuitions philosophers are interested in directly involve linguistic expressions of their content.
Since intuitions are unconscious judgments made manifest, in order to have an intuition of this
sort, one must be able to linguistically express the unconscious judgment they have made.
However, not every unconscious judgment can be linguistically expressed.  In order for an
unconscious judgment to be linguistically expressed, the various parts of the judgment must be
associated with terms.  But much of what we unconsciously learn is not associated with any
words.  Thus, we should expect that some unconscious judgments will not be directly manifested,
or manifest-able, in intuitions.  For example, as I will discuss in Chapter 5, we can learn to
unconsciously judge that some way of thinking reliably generates true beliefs by repeatedly
associating that way of thinking with truth.  This looks like a judgment that the concept RELIABLE
applies to these ways of thinking.  However, these learned associations need never involve
associations with the word “reliable” because we can experience truth and the ways of thinking
together without also experiencing utterances of the word “reliable,” nor thinking about the word.
We might then not ever have the intuition that a given way of thinking is reliable, even though we
unconsciously judge that it is.
For much the same reason, we should not expect that information that we accept as true,
and which is expressed linguistically, will always properly influence our intuitions or
unconscious judgments.  We might, for example, believe consciously the sentence “X is bad”
without unconsciously judging that X is bad, if the term “bad” were not associated to the right
degree with the unconscious concept BAD, or if it were, but the traits of X are such that it is more
easily unconsciously judged good than bad.  Thus, our intuitions and other unconscious
judgments will not always be in line with or influenced by what we believe or know.
Unconscious learning operates separately from conscious learning, although not entirely
separately.  As we have just discussed, our unconscious concepts might be disassociated from
language in some cases, so that what we consciously judge to be X does not influence what we
unconsciously learn to be a member of X.  Likewise, since conscious learning need not involve
repeated simultaneous experiences of things, we might learn something consciously while never
forming unconscious associations that reflect this learning.  Conversely, we can unconsciously
learn that two things are associated from experiences we do not notice, and so not realize we have
learned anything.  We can also have experiences which we believe do not reflect the world in
general; since unconscious learning is effortless and automatic, the unconscious beliefs we form
as a result might conflict with what we consciously believe.  However, our conscious beliefs do
play a role in unconscious learning, since what we consciously believe affects the judgments we
make consciously, and these judgments are experiences that will form associations that
potentially affect future unconscious judgments.
The fact that unconscious learning and judgment can and often does come apart from
conscious learning and judgment is part of why those unconscious judgments of which we are
aware – intuitions – are philosophically interesting.  They tell us things we did not already know,
because they can reflect information we do not consciously pick up on, and/or employ
information in novel and useful ways.  The unconscious takes in and learns from a phenomenal
amount of information, much of which we do not consciously use because we are not aware of it,
or do not attend to it, or have difficulty thinking about it because we cannot easily speak about it.
It is also able to use more of this information than our conscious minds can:  our unconscious keeps track of all, or a great deal of, this information over huge periods of time, and employs all of it whenever it makes a judgment, whereas our conscious minds can only think about a very small amount at once.  The unconscious does this by storing this information in the form of associations; although associations may fade over time (the jury is still out on this), the fact still remains that by forming associations between the contents of all sorts of experiences our unconscious mind is doing something equivalent to tracking a phenomenal number of connections simultaneously, and brings much of this to bear whenever it makes a judgment.  As I
will argue in a later chapter, this cannot help but produce useful data about some domains of
philosophy.
The other reason why the existence of unconscious concepts is important is that it allows
for the possibility of reliable general intuitions.  General intuitions are intuitions about the truth of
general claims, such as claims about categories of things, or concepts, rather than about single
objects.  For example, the intuitions “Dogs are animals” or “Causation is transitive” are general
intuitions because they are about types of things or phenomena, rather than any specific dog or
causal sequence.  It would be much more difficult to have reliable general intuitions if we did not
have concepts, because in order to make a reliable judgment about some category, we would have
to think about a very large subset of instances of that category.  Given unconscious concepts,
however, we can in principle do the equivalent simply by accessing the knowledge associated
with the concept.  To judge whether or not dogs are animals, we might simply see if there is a
strong association between DOG and ANIMAL.  Unfortunately, a) we do not know that general
intuitions are formed in this way (or how they are formed at all), and b) although general
intuitions formed in this way are evidence for their content, they are relatively easily defeasible
evidence.  After all, strong associations can exist between two unconscious concepts even though
there is no necessary connection between the two.  There is no good reason to think that the
unconscious will have an intuition of the form “All Xs are Y” only when there is no evidence that
some Xs are not Y.
Let me quickly summarize the main points of my general theory of intuitions:
• Intuitions are the conscious manifestations of unconscious judgments
• Unconscious judgments are judgments the processes behind which occur unconsciously
• Some unconscious judgments are never consciously manifested
• Unconscious judgments – either inferences or categorizations – that X is Y are the product of learned associations between X (or things sufficiently like X) and Y
• Associations between X and Y always occur, automatically and effortlessly, whenever we have experiences with X and Y in their content at roughly the same time (if such associations already exist, these experiences strengthen them)
• Given sufficient experience with a type of thing, we will form an unconscious concept of that thing; such a concept is a network of associations
• Our unconscious judgments need not be linguistically expressible, nor will linguistically encoded information always activate the relevant associations or concepts
The remainder of this chapter will present empirically based arguments for various
aspects of the general theory of intuitions I have just given.  These arguments are largely
psychological, rather than philosophical (although I do make arguments about philosophically
contentious claims about the mind), and are necessary because my views are not uncontroversial.
However, it is not crucial that you understand these arguments in order to understand the
philosophical arguments I make in later chapters; if you are uninterested in cognitive science or
psychology, feel free to skip to the next chapter.
Why Think that the Unconscious Learns and Judges Through Association?
The foundation of my view of intuitions is my view of unconscious learning and
judgment.  Let’s call my view the associative model of unconscious learning and judgment (“the
associative model” for short); the view claims that unconscious learning occurs through the
formation of associations between the contents of experiences, and unconscious judgments are
the result of these associations.  I am not claiming that all unconscious learning is based on pure
association:  some, such as that involved in language acquisition, is influenced by innate
tendencies.  However, my view is that our default assumption should be that unconscious learning
about a given subject occurs purely through association, and we should doubt this only when we
have good evidence that there is an innate tendency that influences learning in certain
circumstances or about certain subjects.
Like all broad empirical claims, the associative model of unconscious learning and
judgment is not going to ever be definitively proven.  Instead of trying to definitively prove it,
then, I will argue for the associative model in the following manner:  I will show that at least
some unconscious learning occurs through association, and that this learning occurs in a variety
of domains.  I will then show that the view that unconscious learning is associative explains
several important and interesting general facts about how we make unconscious judgments.
These facts, taken together, give us good evidence that the associative model is correct.  I will then consider some possible counter-evidence to the view that the unconscious learns through association, and argue for each either that the evidence does not exist or that it is consistent with my view.  At that point I will rest my case.  This is how broad empirical theories typically
get argued for:  by presenting data which suggests the theory in the first place, showing how the
theory accounts for other known data (thus showing that it has predictive/explanatory power), and
looking for and failing to find falsifying data.  Since the associative model of unconscious
learning is central to my view – many of my other claims are based in part on it – I will devote a
great deal of attention and effort to arguing for it.
Associative learning occurs
Why think associative unconscious learning ever occurs?  One obvious answer to this
question is, “Because quite a large number of psychologists tell us it does,” (see Sloman, 1996,
for example), but we should also consider more direct evidence of associative learning. Clear
evidence for it comes from research on what is called “implicit learning.”  Implicit learning is
learning that occurs without awareness that anything has been learned, and of which the learner is
often never becomes aware; thus, it is a perfect example of unconscious learning.  The type of
implicit learning which most clearly demonstrates learning through association is learning to
detect hidden covariations, which are covariations that are very subtle or are hard to
(consciously) detect because one or both of the covarying factors is hard to detect.
Numerous studies of hidden covariation detection demonstrate that people can learn to
detect covariations without realizing that they have learned anything, or that there is anything to
learn; at the end of the study, the subjects studied do not realize that there were any covariations in
the data at all.  For example, one important series of studies required subjects to look at “brain
scans” – images that look somewhat like brains, and are composed of various symbols (#, %, @,
and so forth) (see, for example, Lewicki, et al, 1994).  In the training phase of the experiment,
subjects were shown various scans and told which ones were from intelligent people and which
were not.  Later, subjects were shown new scans and asked to judge which were intelligent.
Unbeknownst to the subjects, there was a covariation between intelligence and the percentage of
a “brain scan” that consisted of a certain symbol.  13% of the symbols in an intelligent scan were
that symbol, whereas in non-intelligent scans 11% were.  After being trained, subjects were able
to identify scans as intelligent in accordance with the covariation at greater than chance levels –
that is, scans with the right percentage of the certain character were more likely to be identified as
intelligent.  However, when asked later, subjects stated that they had not learned anything, that
there was no pattern to the data, and could not identify the rule they had learned when presented
with plausible alternatives (Lewicki, et al, 1989, Lewicki, personal communication).  In fact,
other subjects were given the data used in these experiments, told there was a pattern in it, and
given unlimited time to find the pattern.  They were unable to do so (Lewicki, et al, 1987).  Thus,
the learning in these brain scan studies was unconscious.  Another, related, experiment involved a
covariation between changes in the pitch of a tone played when subjects viewed a brain scan and
the intelligence of the scan (Lewicki, et al, 1994).  The differences in pitch were subtle and not
consciously detected – one pitch used was D, and the other was “one third the distance between D
and E flat,” a difference that even a trained musician would be hard pressed to notice.[16]  Although after training subjects identified scans as intelligent in the presence of the appropriate tone, they
again did not realize that there was any pattern in the data or that they had learned anything,
showing that the learning that occurred was not conscious.  Other studies of hidden covariation
detection involved patterns so complex they could not be consciously tracked without writing
things down (Lewicki, et al, 1987); again, subjects learned the patterns but were not aware of
having done so.  Studies have also involved covariations between more evident things:  gender
and sadness, hair length and kindness, color of card and rewards, objects and pleasantness, but
while subjects learned the covariations, they were not aware of this learning or that the covariations existed (Hill, et al, 1989, Lewicki, 1986, Bechara, et al, 1997, Olson & Fazio, 2001).

16. This was verified by a highly-trained professional musician.
In each of these experiments, subjects unconsciously learned that if X occurred, Y was
more likely to occur (e.g. if a scan had 13% of a certain symbol, it was more likely to be
intelligent than otherwise, or if a person was of a certain gender, they were more likely to be sad
than otherwise).  They learned this through repeated experiences in which X and Y occurred
together, and repeated experiences of Y not occurring in the absence of X.  This looks exactly
like learning through association.  In each of these cases, subjects learned unconsciously:  they learned something that they could not articulate and of whose content they were unaware, and were not even aware that they had learned anything at all; in some cases, subjects learned something
that other subjects could not learn consciously (Lewicki, et al 1994, Lewicki, et al, 1987).  Thus,
we can see that we can learn unconsciously through association.  Further, we know that we
can learn in this way about things in a variety of domains, and using a variety of types of
experiences.  Subjects learned to make judgments about intelligence or social facts such as
personality traits, or evaluations of pleasantness, forecasts of the future, and so forth.  They
learned from visual experiences, auditory experiences, linguistically presented information, or
their own judgments (of sadness, for example).  Thus, we know that associative unconscious
learning can occur in a variety of contexts and use a variety of kinds of inputs.  This suggests the
hypothesis that unconscious learning about any subject and from any inputs is associative (except
in atypical cases).
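As a sanity check that bare co-occurrence counting suffices for learning of this kind, here is a small simulation loosely modeled on the brain scan studies.  The 13%/11% figures come from the studies described above; the binary feature, the thresholds, and the decision rule are my simplifications (in particular, the learner is handed a candidate feature, which sidesteps the problem of discovering it):

```python
import random
random.seed(0)

def scan(intelligent):
    """A toy 'brain scan' of 100 symbols; the target symbol makes up
    13% of intelligent scans and 11% of non-intelligent ones on average.
    Returns the count of the target symbol."""
    p = 0.13 if intelligent else 0.11
    return sum(random.random() < p for _ in range(100))

# Training: count co-occurrences of a feature with each label.
counts = {("high", "smart"): 0, ("high", "dull"): 0,
          ("low", "smart"): 0, ("low", "dull"): 0}
for _ in range(2000):
    smart = random.random() < 0.5
    feature = "high" if scan(smart) >= 12 else "low"
    counts[(feature, "smart" if smart else "dull")] += 1

# Test: judge by whichever label co-occurred more with the feature.
correct = 0
for _ in range(1000):
    smart = random.random() < 0.5
    feature = "high" if scan(smart) >= 12 else "low"
    guess = counts[(feature, "smart")] > counts[(feature, "dull")]
    correct += guess == smart
print(correct / 1000)   # reliably above 0.5, i.e. above chance
```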
Associative learning has explanatory power
The associative model of unconscious learning explains facts about our unconscious
mind, facts that modern psychologists consider central to an understanding of how we learn about
and judge the world around us.  These facts are:
1. Given a sufficient amount of learning, categorization judgments seem based on data
about categories  which is encoded abstractly (concepts), rather than on comparisons
to previously encountered specific instances of the category (exemplars).
2. Concepts exhibit what are called typicality effects:  certain instances of a concept are
more easily recalled than others, more often used as examples of the concept, and
more readily judged to be instances of the concept than others.
3. Concepts can be put into hierarchies, with concepts higher in the hierarchy including
concepts lower down.  Certain levels in this hierarchy are basic levels, and concepts
at the basic level are easier to think about.
The first of these is currently a subject of debate, although I will argue for it later; each of these has informed current theories of how we learn and conceptualize.
learning which accounts for them thus has great explanatory power.  Let’s see how the associative
model of unconscious learning explains each of these.
Consider someone who encounters an object they have never seen before.  Their mind
attempts to categorize it.  One way of doing this is to compare it to specific objects they have seen
before, exemplars of various categories, and see which it matches most closely – it does not look
like this dog or this dog or this dog, it does look like this cat but not this cat or that cat, it does
look like this raccoon and that raccoon and that raccoon, so it probably is a raccoon.  Another
way of doing the same thing would be to pick out the various traits the object has – four legs,
puffy striped tail, dark circles around the eyes – and then see what category of things has most or
all of those traits associated with it – DOG has “four legs” associated with it, but that’s it, CAT has
all of them, but “puffy striped tail” and “dark circles around the eyes” are only weakly associated
with CAT, but RACCOON has all of these traits strongly associated with it, so this object probably
is a raccoon.  I will argue later in this chapter that, given enough experience with members of a
category, we tend to make judgments in this second way, by comparing what we know about
objects to be categorized with what we know about categories, not with what we know about
exemplars of those categories.  This requires that we have things in our minds that behave like
representations of categories – that we abstract away from our experience of individual things –
although it does not require that these things actually be representations (I am agnostic about
whether or not our unconscious minds can represent[17]).  For now, let’s assume that my promised
argument will be persuasive, and that we do categorize novel things using abstract representations
of categories.  How does the associative model of unconscious learning account for this?

17. I am agnostic as to whether unconscious concepts are representations or not, as I am not certain what it takes for something to be a representation, or to represent.  I leave this question to the reader; my point is, even if unconscious concepts are not representations (and concepts as they are normally thought of are), unconscious concepts fill the role in our unconscious mental lives that concepts do in our conscious mental lives, and thus the psychologically observed fact that we use what seem like abstract representations to make judgments is consistent with our using unconscious concepts.
Concept acquisition has been shown to take repeated exposure to exemplars of the
concept-to-be-formed (Murphy, 2002).  If, as the associative model of unconscious learning
claims, we associate every thing we experience at the same time, then every time we experience
an exemplar of a category we should associate all of its traits with each other.  That is, every time
we experience a raccoon, we form associations between striped tails and circles around eyes (as
well as between the raccoon we just saw and these traits).  Over time, these traits will be more
strongly associated with each other than with any exemplar of the category, since they have been
encountered with each other more often than with any category exemplar.  Once that occurs, upon
encountering things which possess some or all of these traits, we will make judgments about them
without recalling any exemplar of the category.  Consider dogs once more.  As we encounter
many dogs over time, we start to associate certain traits with each other – tail wagging, fur, stick
chasing, and so forth.  We do so because these traits tend to occur together.  We also associate
these traits with specific dogs, the dogs that we have encountered that exhibit these traits.  For a
while, the associations between a specific dog, say Lassie, and dog-traits will be almost as strong
as that between one dog-trait and another.  This is because we have encountered relatively few
dogs, so have experienced Lassie together with stick-chasing almost as many times as we have
experienced stick-chasing with tail wagging.  Thus, when we encounter a new dog, say Rin Tin
Tin, and we see that this thing is furry, four legged, has a wagging tail, we are likely to
unconsciously recall Lassie and compare Rin Tin Tin to her in order to judge that Rin Tin Tin is a
dog.  However, as the range of dog exemplars we have encountered grows, the associations
between one dog-trait and another will become much stronger than that between any given dog
and any given dog-trait.  That is because we will have encountered stick-chasing and tail-wagging
together much more often than we have encountered Lassie and stick-chasing together.  After a
while, when we encounter a new thing –  Fido – and see that it chases sticks and has a wagging
tail, we need not recall any dog exemplar at all.  Instead, since we have a strong association
between stick-chasing, tail-wagging, and other dog-traits, we judge that Fido is a dog based only
on its having of these traits, and is likely to have these other traits.  The associative model of
learning predicts that, over time and with greater experience of instances of a category, we will
make judgments about new instances of the category without referring to category exemplars.
Instead, we will use clusters of associated traits, which I call “unconscious concepts,” that
function like abstract representations of categories.
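The arithmetic behind this exemplar-to-concept shift is worth spelling out.  Assuming, purely for illustration, that each individual dog is encountered about five times, while dog-traits co-occur on every encounter with any dog:

```python
ENCOUNTERS_PER_DOG = 5   # illustrative assumption

def strengths(dogs_met):
    # Trait-trait links (e.g. tail-wagging with stick-chasing) strengthen
    # on every dog encounter; an exemplar-trait link (e.g. Lassie with
    # stick-chasing) strengthens only on encounters with that very dog.
    trait_trait = dogs_met * ENCOUNTERS_PER_DOG
    exemplar_trait = ENCOUNTERS_PER_DOG
    return trait_trait, exemplar_trait

print(strengths(1))    # (5, 5): early on, Lassie competes with the trait cluster
print(strengths(50))   # (250, 5): with experience, trait-trait links dominate
```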
Let's now consider how the associative model of unconscious learning explains fact 2, the
fact that some instances of a category or concept are taken to be better examples of the category or concept than others, and these are more easily and readily thought of:
Think of a fish, any fish.  Did you think of something like a trout or a shark, or did you think of an
eel or a flounder?  Most people would admit to thinking of something like the first:  a torpedo-
shaped object with small fins, bilaterally symmetrical, which swims in the water by moving its tail
from side to side.  Eels are much longer, and they move by waving their body in the vertical
direction.  Although all of these things are technically fish, they do not all seem to be equally good
examples of fish.  The typical category members are the good examples – what you normally think
of when you think of the category.  (Murphy, 2002, 22)
A typical category member is a category member one is more likely to think of when thinking of an example of the category.[18]  Here are some facts about typicality:  People tend to strongly agree
on how typical different members of categories are.  People are much quicker to judge that a
typical instance of a category is a member of that category than an atypical member.  When
learning new categories, people tend to learn that typical category members are category
members before the atypical.  People learn about categories more quickly when they are taught
using typical examples.  All of these facts about typicality are all the result, at least in part, of
unconscious processes:  it is not that we try to recall typical exemplars when thinking of a
category, for example, we just do so automatically.
What characterizes typical members of categories, other than these typicality effects, is
that they have more traits in common with more members of the category than do atypical
members (Rosch & Mervis, 1975).  In other words, if a category member is typical, it will have a
lot of different traits in common with other members of the category, whereas an atypical member
will not have as many.  For example, a robin is a typical bird and a penguin is an atypical bird.
Robins share many traits with most other types of birds – they have feathers, they lay eggs, they
fly, they live in trees, they make nests, and so forth.  Penguins share relatively few traits with
most other birds – they have feathers, lay eggs, have beaks, but they do not fly or live in trees or
make nests.  So if a trait is possessed by a typical category member, it is more likely to also be
possessed by other members of a category than a trait possessed by an atypical member.  Typical
category members also tend to not have traits possessed by members of similar categories (Rosch
& Mervis, 1975).  Bird and mammal are similar categories:  they are both categories of animals,
but not too specific of categories (not as specific as Doberman or dolphin); robins are much more
clearly distinct from various types of mammals, for example, than are penguins, since penguins
have quite a bit in common with sea lions or walruses.  If we are to explain typicality effects –
why we have an advantage in learning and thinking about typical category members – we should
do so by employing the fact that typical category members are more like other category members
and more unlike members of other categories than atypical category members.
The associative model explains typicality effects.  When we acquire the unconscious
concept BIRD, we do so by experiencing many instances of a wide range of birds – robins,
sparrows, pigeons, chickens, penguins, eagles, etc.  Most of these birds have many traits in
common – they have feathers, beaks, fly, etc. – but of course not every bird has every one of
these traits.  Each of these traits will get associated with the concept BIRD to some degree; the
traits shared by more birds are more strongly associated with the concept because one experiences
birds as having these traits more often.  Typical category members have many of these traits,
more than atypical members.  Thus, when encountering a typical category member, we should
expect to judge that it is a member of the category very quickly, because it has a number of traits
associated with the category, whereas atypical members have fewer associated traits, and so
should be categorized more slowly.  Further, because typical members are more unlike members
of other categories, when categorizing them we are unlikely to have to decide between competing
categories to place them in.  When recalling an example of the category, we should expect to
recall typical examples rather than atypical ones, because they are more strongly associated with
the concept in virtue of having more traits associated with the concept.  When we think “bird,”
and activate all the traits associated with it, we are much more likely to recall a bird which has
more associative links to the concept (that is, a typical bird) than one which has fewer.
Associative learning also explains why typical category members are learned more quickly and
are the best exemplars to learn about a concept from.  As we learn about a category, it is easier to
judge that a typical category member is a member than a non-typical member, because a typical
member will have more associative links both to the concept and to previously viewed exemplars
of the concept.  Thus, even at early stages of concept acquisition, typical exemplars will be easier
to make judgments about, and so should be learned more quickly.  Further, since typical
exemplars also have more traits in common with most category members, learning by exposure to
them allows subjects to acquire associations between more concept-relevant traits than would be
gained by using atypical members, and thus the concept can be learned faster (by exposure to
fewer exemplars) when learned from typical than atypical members.  Associative learning
explains typicality effects.
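The trait-overlap story can be made quantitative with a family-resemblance score of the kind Rosch & Mervis (1975) computed:  credit each category member with the category-wide frequency of each of its traits.  The bird traits below are illustrative, not data:

```python
from collections import Counter

birds = {
    "robin":   {"feathers", "beak", "lays eggs", "flies", "nests in trees"},
    "sparrow": {"feathers", "beak", "lays eggs", "flies", "nests in trees"},
    "eagle":   {"feathers", "beak", "lays eggs", "flies"},
    "penguin": {"feathers", "beak", "lays eggs", "swims"},
}

# How often each trait occurs across members of the category
trait_freq = Counter(t for traits in birds.values() for t in traits)

def typicality(member):
    """Family-resemblance score: traits shared with many category members
    score high.  On the associative model these are also the traits most
    strongly associated with the concept, so high scorers should be
    categorized and recalled fastest."""
    return sum(trait_freq[t] for t in birds[member])

for b in sorted(birds, key=typicality, reverse=True):
    print(b, typicality(b))   # robin and sparrow score highest; penguin lowest
```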
How does the associative model account for fact 3, about the “basic level” of conceptual
hierarchies?  First, let’s look at what these facts are.  We can think of concepts as being organized
into hierarchies (Murphy, 2002).  Concepts higher up in the hierarchy include the extensions of
concepts below them in their extension.  For example, for the concepts ANIMAL, DOG, and PITBULL,
ANIMAL is higher up than DOG, and DOG is higher than PITBULL, because every dog is an animal,
and every pitbull is a dog.  Concepts at certain levels in the hierarchy seem easier to think about
in certain ways; these concepts are called basic level concepts and we have a “basic level
advantage” when we think about them.  Our unconscious is faster to categorize things as being
instances of concepts at this level than as instances of concepts at other levels, and faster to recall
examples of these concepts than concepts at other levels (Mervis & Rosch, 1981).
Just like typical category members, basic level concepts have certain properties in
common, properties that an explanation of the basic level advantage ought to rely on:  members of
basic level concepts share a large number of traits with each other, and share relatively few traits
with members of related concepts (Murphy, 2002).  Higher level concepts, on the other hand,
tend to be more distinct from concepts at the same level of the hierarchy than are concepts at
lower levels, but have fewer traits which are shared by all instances of the concept.  Lower level
concepts have more traits which are shared by all instances of the concept than do concepts above
them, but also have more traits in common with related concepts at the same level.  To illustrate
these facts, consider the following three concepts:  ANIMAL, DOG, and PITBULL.  Of these
concepts, DOG is basic level (for most people).  Members of ANIMAL are more different from
members of PLANT (a concept at the same hierarchical level) than members of DOG are from members of CAT (a concept at the same level as DOG), but there are fewer traits that all or most animals have in common than
there are traits all or most dogs have in common.  Conversely, every pitbull has more in common
with every other pitbull than every dog does with every other dog, but a pitbull is more like a
Labrador (a member of a concept at the same level as PITBULL) than a dog is like a cat.
The associative model explains the advantages we have when thinking about basic level
concepts.  A concept is a cluster of associations, and judging that an object is an instance of a concept requires that the object is experienced as having a sufficient number of traits associated strongly enough with that concept.  Basic level concepts have a larger number of traits that are shared by all or most instances of the concept than do higher level concepts.  For this reason, most objects will have a higher percentage of the traits associated with the basic level category they belong to than of the higher level categories they belong to.  When we see the average dog, it should have almost every trait associated with DOG, but only some of the traits associated with ANIMAL, so the judgment that it is a dog is easier and faster to make than the judgment that it is an animal.  Further, since this judgment will have been easier on many occasions, especially when learning the concepts DOG and ANIMAL (DOG being easier to learn because dogs share more traits than animals do), we should make more judgments that dogs are dogs than that they are animals, which makes a stronger association between dog-traits and DOG than between dog-traits and ANIMAL.
Why, then, is it easier to apply basic level concepts than lower level concepts?  Why is it
easier to judge that a dog is a dog than a pitbull or labrador or german shepherd?  We are much
less likely to have extensive experience with any given breed of dog than we have with dogs in
general, so that we are likely to have weaker associations between PITBULL and pitbull-traits than
between DOG and dog-traits.  Since these associations are weaker, we should be slower to
recognize a dog as a pitbull than as a dog.  This is not the case for every person, however.  People
who live and work extensively with dogs (breeders, for example) may be so accustomed to
thinking of dogs as specific breeds, and have so much experience with specific breeds, that they
have as strong or stronger associations between specific-breed-traits and specific-breed-concepts
than between dog-traits and DOG.  This fits with the facts about basic level categories:  for experts
on specific categories, lower levels of conceptualization in the category of expertise can become
the basic level (Lakoff, 1987).
This same reasoning also explains why it is easier to recall examples of basic level
concepts than other levels of concepts; we have stronger associations between exemplars of DOG
and “dog” than we do between any given exemplar of ANIMAL and “animal,” because specific
exemplars of DOG have more of the traits associated with the concept than do exemplars of
ANIMAL.  And because we are less likely to have encountered as many pitbulls as we have dogs,
we will have weaker associations between pitbull exemplars and PITBULL or pitbull-traits and
PITBULL, so that recalling an example of a pitbull is slow.  We have, however, strong associations
between dog-traits and DOG, so that when we think about DOG we activate very strong
associations which quickly bring to mind a breed of dog which exemplifies these traits.  Thus,
associative learning explains why we think most readily about basic level concepts.[19]

[19] There are likely to be other reasons for this as well.  For example, Gricean considerations mean we should be more inclined to tell others that something is a dog as opposed to an animal, which means we are more likely to have the word “dog” associated with dog-traits than the word “animal.”
We have seen that the unconscious mind does learn by association in a wide range of
circumstances, and that positing that it learns in general by association explains some important
facts about unconscious judgment.  This makes the hypothesis that the unconscious generally
learns and judges by association seem a strong one.  We should now consider possible evidence
against this hypothesis.
Massive modularity
The associative model of unconscious learning and judgment looks like what a cognitive
scientist would call a domain general account of learning and judgment.  A domain general
account of learning and judgment is an account that posits a single mechanism (or a few
mechanisms) to account for learning and judgment in all, or most, cognitive domains.[20]
 The most
direct challenge to domain general accounts of cognition is the massive modularity thesis.  The
typical version of the massive modularity thesis is that the (unconscious) mind consists of a large
number of mental modules, which are responsible for all or most mental life.  A mental module is,
on the “traditional” account (which we owe to Fodor, 1983), a domain specific, encapsulated,
inaccessible, innate part of the mind.  I recognize that that definition is not terribly helpful, and I
will shortly explain what each of those terms means.  But first, a quick and non-technical
explanation of what a mental module is:  a mental module is a part of the mind that is built to
handle just one specific type of task or problem.  The massive modularity thesis is that such
modules are responsible for pretty much all unconscious cognition.  The conflict between this
thesis and domain general accounts of cognition, such as the associative model, should be apparent:  according to the massive modularity thesis, since modules are built into our heads, more or less from birth (more on this soon), there are no domain general cognitive processes.  Since the associative model posits a domain general cognitive process, the traditional massive modularity thesis is incompatible with it.

[20] From this point forward, “cognitive” and “cognition” refer to unconscious mental processes; the types of theories I am going to discuss are non-starters if one takes them to describe the operations of conscious cognition.
In the rest of this section, I give different versions of the massive modularity thesis, some
of which are compatible with the associative model of unconscious learning, and some which are
not.  For those versions that are incompatible with the model, I will discuss the evidence for these
versions and argue that this evidence is quite weak.  A full-on response to every argument raised
by proponents of massive modularity would be a dissertation in itself, and I do not hope to sway
any committed proponents of this view here; rather, I hope to give enough of an argument to
convince those sitting on the fence with regards to massive modularity that my view is the better
one.
In order to understand different versions of the massive modularity thesis, we have to
understand the different traits modules might have.  There are some traits that have to do with
neurobiology that I will not discuss here, since the associative model is agnostic about such
matters.  The important potential traits of mental modules have to do with the inputs, outputs,
processes, and databases used in cognition.  In performing some cognitive task, such as deciding
what to eat, the mind (or some part of it) typically uses some inputs, which are factors that may,
and usually do, change to some degree every time the task is performed.  When deciding what to
eat, inputs might be how hungry the person is, what food they have nearby, and what food they
have eaten recently.  In addition to inputs, the mind, or systems which are part of the mind, can
also sometimes draw upon a database of information.  This database differs from the input in that it
is relatively stable, changing little (if at all) each time the task is performed.  When deciding what
to eat, the mind’s database might consist in part of the knowledge of what the person likes to eat,
or what food is healthy.  The inputs and information in the database get combined through some
process; the mind might for example weigh how convenient some food is versus how enjoyable it
is to eat versus how hungry the person is.  The result of these processes is an output, which in this
example would be a decision as to what to eat.
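The division of labor described here can be rendered schematically in code.  The following Python sketch is purely illustrative – the factors, weights, and food names are all invented – but it shows how inputs (which change each time) and a database (which is relatively stable) are combined by a process into an output:

from dataclasses import dataclass

@dataclass
class Inputs:                  # changes every time the task is performed
    hunger: float              # 0.0 (sated) to 1.0 (starving)
    foods_nearby: list
    eaten_recently: list

DATABASE = {                   # relatively stable across performances
    "likes": {"pasta": 0.9, "salad": 0.4, "soup": 0.6},
    "healthy": {"salad": 0.9, "soup": 0.7, "pasta": 0.3},
}

def decide_what_to_eat(inp):
    # The "process": weigh enjoyment against health, discounting recent meals.
    def score(food):
        enjoyment = DATABASE["likes"].get(food, 0.5)
        health = DATABASE["healthy"].get(food, 0.5)
        penalty = 0.5 if food in inp.eaten_recently else 0.0
        # The hungrier one is, the more enjoyment dominates health.
        return inp.hunger * enjoyment + (1 - inp.hunger) * health - penalty
    return max(inp.foods_nearby, key=score)   # the output: a decision

print(decide_what_to_eat(Inputs(0.8, ["pasta", "salad", "soup"], ["pasta"])))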
Mental modules, on the traditional, Fodorian, view, are domain specific, encapsulated,
and inaccessible.  These are facts about the inputs, outputs, processes, and databases of the
modules.  Domain specificity has to do with limits on the inputs and outputs of the module.  If the
social judgment module, for example (which is a module many cognitive scientists claim exists
(e.g. Scholl & Leslie, 1999)) were domain specific with regards to its inputs, it would only accept
a certain type of data.  This can be more or less limited – the social judgment module might only
take “social” data (whatever that means), whatever the source, or it might use only a very specific
type of data, such as facial expressions and body language.  Modules also have domain specific
output, which means they only produce output related to the tasks they exist to handle.  Modules
are supposed to be encapsulated, which means that they generally have a database of information
that only they can draw upon (although they may also be able to draw upon other sources as
well).  So, if the mind has a shape processing module, the module may know that cubes have
eight vertices (this is stored in the shape processing module’s database) but no other module can
use this information.  Modules are also supposed to be inaccessible, meaning that no other
module or mental system can use their processes – no other system can use the rules or
algorithms that are part of that module, although other systems might have similar (or even
functionally identical) rules and algorithms – nor can any other system “see” those processes as
they work; they can only see the outputs of the processes.  To summarize, on the traditional view,
mental modules have a limited range of inputs and outputs, have their own databases that no other
module can use, and employ only their own processes to deal with inputs and information in their
databases.
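These three properties can be mimicked with ordinary programming notions of privacy.  The following Python sketch is a toy, not a claim about neural implementation; the shape facts are just the cube example from above.  It makes the database and process private, so that other systems can see only the outputs:

class ShapeModule:
    def __init__(self):
        # Encapsulated database: only this module can consult it.
        self.__facts = {"cube": {"vertices": 8, "faces": 6}}

    def __count_vertices(self, shape):         # inaccessible process
        return self.__facts.get(shape, {}).get("vertices")

    def classify(self, shape):                 # only the output is visible
        n = self.__count_vertices(shape)
        return f"{shape}: {n} vertices" if n else f"{shape}: unknown"

module = ShapeModule()
print(module.classify("cube"))     # other systems can use the output...
# module.__facts                   # ...but not the database (AttributeError)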
Modules are also supposed to be innate on the traditional view.  What this means is
somewhat up for grabs.  In its weakest version, it means that some of the databases, inputs,
outputs, or processes that make up a module are partly genetically determined.  Anyone who
accepts that there are mental modules will accept that they are innate in this weak sense, because
anyone who accepts that there are mental modules will accept that their workings are influenced
to some degree by biological facts about our brain and nervous system that are genetically
determined, if only because the having of mental modules is enabled by the having of a brain.  I
am concerned with a stronger, more controversial view of the innateness of mental modules.
According to this strong view of innateness, all (or the majority) of the allowable inputs,
allowable outputs, databases, and processes of every module are entirely genetically determined;
by genetically determined I mean “determined by our genes and not other factors,” as opposed to
merely influenced by our genes.  Saying that trait X is genetically determined does not mean that
having the appropriate gene is sufficient for having trait X, since of course there are necessary
conditions for having any biological traits, such as being alive, eating sufficiently, and so forth.
But, other than enabling our genes to be expressed, these factors do not influence genetically
determined traits.  I will call modules whose inputs, outputs, processes, and databases are almost completely genetically determined, strongly innate mental modules.  The version of the massive modularity
thesis that holds that most or all of our unconscious cognition is due to strongly innate mental
modules is the strongly innate massive modularity thesis.
Massive modularity theorists need not adopt the traditional account of mental modules
entirely.  While everyone (myself included) accepts that mental architecture is partly genetically
determined, many proponents of massive modularity reject all but the very weak version of
innateness (Karmiloff-Smith, 1992).  Such theorists believe that our mental life is due largely to a
large number of systems that operate on a specific, somewhat limited, range of inputs, use their
own proprietary databases and processes in processing these inputs, and only produce a limited
range of outputs, but that the inputs, outputs, databases, and processes (or some subset of the
four) of these mental systems are determined mostly by environmental or other non-genetic
factors.[21]

[21] In fact, some proponents of massive modularity want to drop domain specificity of inputs also, pointing out that certain mental systems can seemingly use any sort of input whatsoever (one example given is the practical reasoning module (Carruthers, 2005)).  If we consider a mental module to be a part of the mind whose processes and database are not accessible to other parts of the mind, but which is not completely genetically determined, and which can operate on more or less any input and produce a wide range of outputs, then the associative learning system I advocate looks just like a mental module.  This works as long as we consider episodic and semantic memory as producing inputs, rather than as databases, since these are accessible by other mental systems, namely the conscious mind.  This, though, is not to advocate massive modularity:  massive modularity requires not only that all of our cognition be due to mental modules, but also that there be a great number of modules, and calling the domain general associative learning system a mental module greatly reduces the number of modules needed to account for our mental lives.
The associative model of unconscious learning and judgment is compatible with versions
of the massive modularity thesis that see mental modules as innate only in the weak sense.  This
is because unconscious concepts, which are responsible for most of our unconscious judgments
according to the associative model, are in essence mental modules.  Unconscious concepts are
clusters of associations.  When one judges at some specific time that the unconscious concept X
applies to some object, one can only make that judgment based on information that activates
associations with X.  Thus, unconscious concepts are domain specific, because they only use a
limited range of information which is specified in advance of the judgment by the associations
that constitute the unconscious concept.  Judgments about Xs can only use information that is
associated with X and not with other concepts (at least, not with concepts not themselves
associated with X).  Thus, unconscious concepts are encapsulated.  Likewise, other concepts do
not use information associated with X (given that that information is not also associated with
them).  Thus, unconscious concepts are inaccessible.  Unconscious concepts look like mental
modules, and according to the associative model of unconscious learning, we have quite a few of
them.  Thus, the associative model of unconscious learning is compatible with the massive
modularity thesis, provided one is willing to accept that modules are only weakly innate
(that they are shaped in small part by genetic factors, but mostly by environmental ones).
Many proponents of massive modularity do not accept that mental modules are only
weakly innate.  They think that we have a large number of mental modules whose processes,
inputs, databases, and outputs are genetically determined.[22]
 I accept that we have some such
modules.  We seem to have strongly innate sense processing modules.  We also probably have
some sort of innate grammar module; although it is not clear exactly how this works, it might be
strongly innate.  It is likely that we have some sort of strongly innate module which imitates other
people and which helps us to determine something about what they are feeling.  I differ from
these theorists with regard to the number of strongly innate mental modules we have; my view is
that we have only a limited number of these, although I am agnostic about how many and what
they do,[23]
and that absent strong evidence that we have a module associated with a given task, we
should assume that performance on that task is due to domain general processes such as
associative learning.  My view is that modules which do exist send their outputs to the
unconscious mind, where these outputs are used in associative learning and judgment; my
associative model is also incompatible with massive modularity in that it requires that the inputs
and databases for most judgments are learned, rather than innate.

[22] Samuels (1998) cites Pinker, Jackendoff, and Cosmides & Tooby as making such claims.
[23] I do rely on social judgment being largely not due to modules, as many of the arguments I make about unconscious learning are based on research on unconscious social judgment.
Before I consider (and criticize) arguments for strongly innate massive modularity, I want
to point out how prima facie wrong it should seem.  Strongly innate modules have genetically
determined outputs.  This means that the judgments that these mental modules produce are
determined by our genes.  Our genes are the way they are because of evolution.  Evolution takes
an enormously long time in humans.  In order for a trait to have evolved in us, it would have to be
advantageous for much of our history.  We make unconscious judgments whose contents contain
concepts we did not use for most of our history as a species.  We can have intuitions, for example,
that someone is an accountant, or that they like to play computer games, or that they are likely to have arterial disease (Ambady & Rosenthal, 1992).  Examples like this can be cited from almost
every area of human judgment – social judgment, psychological judgment, moral judgment,
object categorization, animal categorization, and so forth.  The ability to produce such judgments
cannot be largely genetically determined.  We simply cannot have evolved an accountant-
recognizing module.  Even if these judgments are due to more general modules (a job recognizing
module for example or a human categorizing module), the outputs of such modules cannot be
genetically determined, because we have to learn that people can be categorized as accountants.
If the outputs of a module are not genetically determined, then the inputs and databases of that
module are not genetically determined.  If we have to learn that accountant is a category which
can be applied to people, then we must also have to learn how to apply this category.  This means
that there are a huge number of judgments which we make unconsciously that are not the result of
strongly innate mental modules.
Further, even when we look at outputs which could be innate, because humans have
needed to make such judgments for most of our history – judgments that a person is not
trustworthy, for example, or that some decision is a good one – many of these cannot involve
only genetically determined databases, or be based only on genetically determined inputs.  Some
people intuitively mistrust individuals who are car salesmen.  This means they must have “car
salesmen are untrustworthy” in the database for their trustworthiness-judging-module.  Some
people intuitively prefer certain types of soft drinks, or certain types of cars, or certain types of
computers.  This means that one’s preference judgment module must have stuff in its database
that is not genetically determined.  If the databases for these modules have some information
which is learned, then there cannot be genetically determined limits to the inputs to these
modules.  If my trust-module database has “do not trust car salesmen” in it, then “X is a car
salesman” must be a valid input into the module.  But this is not a valid input for every person.
Thus, what inputs are accepted by a module is not genetically determined.  One might argue that
the specific inputs which are accepted are not genetically determined, but only the general types
of inputs.  The trust-module might accept information about a person's job only because our
genes tell it to do so.  Consider, though, the research on implicit learning cited earlier in this
chapter.  In some studies people were taught to make judgments about intelligence based in part
on what tone they were hearing, or based on what percentage of a certain character appeared in an
image they saw.  In other studies, people were taught to judge niceness based on hair length.  If
these sorts of social judgments can be based on these sorts of normally totally irrelevant data,
then what limitations are our genes putting on the inputs which can be accepted by the relevant
modules?  It does not seem like there are many, if any at all.  If these modules do not have
genetically determined limits on the inputs they can accept, then we have no reason to think
modules in general do.
Thus, we have very good reasons to reject strongly innate massive modularity out of
hand, since we can see that a huge range of unconscious cognition is not due to strongly innate
mental modules.  What reasons are there to accept strongly innate massive modularity?  The main
arguments for this thesis are based on evolution, pathology, and development.
The evolutionary argument for strongly innate massive modularity
The evolutionary argument is as follows:
[T]here is a great deal of evidence from across many different levels in biology to the effect that
complex functional systems are built up out of assemblies of sub-components... we should expect
[this] to be true of cognition also, provided that it is appropriate to think of cognitive systems as
biological ones, which have been subject to natural selection.  (Carruthers, 2005,  p. 8)
This author goes on to claim that minds are biological systems subject to natural selection.  In
other words, the argument goes:  complex functional systems (complex systems that do stuff) that
are the results of natural selection are built up out of smaller, functional, parts (parts that do
things on their own).  This is because a) it is very difficult for complex systems to evolve without
their parts evolving first on their own, given that the more complex a system gets, the more
unlikely it is to evolve at all, and b) it would be very difficult for some part to evolve unless it
served some function on its own, since parts that do not function on their own can only be
selected for once the rest of the complex functional system is in place (before then, there is no
advantage to these parts existing).  So, since the mind is a complex functional system that is the
result of natural selection, it should be built up out of smaller, functional parts.  That’s massive
modularity.
It is not always explicit that this argument is for strongly innate massive modularity, but
it is.  One central claim of the argument is that our mind is a complex biological system whose
structure is the product of evolution.  This claim must be that the complex structure of the mind is
almost entirely the product of evolution, rather than just partly the result of evolution or enabled
by the results of evolution, since the argument relies on claims about how evolution produces
complex systems.  If the argument were that the complexity of the mind arises out of other
(possibly non-complex) facts that are the product of evolution, then it would fail, since the non-
complex facts that give rise to the complexity of the mind need not be composed of small sub-
systems.
The argument can be given more formally as follows:
1. If X is a complex biological system, then X is (most likely) made up of simpler
subsystems, that individually serve useful functions.
2. The mind is a complex biological system.
3. Thus, the mind is made up of simpler subsystems, which individually serve useful
functions.
4. Simple subsystems which individually serve useful functions are mental modules.
I’ve already sketched why this argument is plausible, but let’s consider the premises more
closely.  Why think premise 1 is true?  It certainly is not true if “biological system” means any
system which is made up of organic molecules, or any living system, since such a system could
be built (for example by technologically sophisticated human beings) so as to be complex but not
made up of simple and functional subsystems.  Alternately, such a system could be shaped by its
environment so that it is complex but not built of functional subsystems.  For example, imagine a
Turing machine were to evolve, and then be programmed to perform a variety of complex
operations by environmental factors (such as human beings).  Turing machines are simple in the
sense that there are a small set of straightforward rules which govern their operation, but they can
become complex through programming.  Premise 1 is also not true if “biological system” means
any system whose traits are determined by their genes, because we can imagine future humans
designing complex organisms using genetic engineering which are not made up of simple and
functional subsystems.  While these are far-fetched examples, they point out a crucial issue:  the
argument only works for systems whose complexity is due largely to evolution.  If the complexity
arises from design or environmental factors, it need not consist of functional sub-systems.  Thus,
premise 1 is only true if it is about systems whose complexity is due (almost) entirely to genetic
evolution.
Given that, we can read premise 2 as saying “The mind is a complex system whose traits
are due (almost) entirely to genetic evolution.”  Traits which are due entirely to genetic evolution
are strongly innate.  So premise 2 then means “The mind is a complex system whose traits are
strongly innate.”  The traits that make the mind complex, according to proponents of massive
modularity, are the numerous functions it can perform (Carruthers, 2005).  So premise 2 means
“The mind is a system whose abilities to perform numerous functions are strongly innate.”  But
this is the claim that the argument is supposed to support, and the claim that people like me deny.
Thus, this evolutionary argument is not really an argument at all.
Even if one was inclined to think that the complexity of the human mind was the result of
evolution, one need not think it is the result of genetic evolution.  Another form of evolution is
cultural evolution.  New practices arise due to chance or human ingenuity.  Cultures that have
useful practices survive, and pass these on, while those that do not die out.  Cultural evolution can
affect our unconscious mental lives as long as a) the unconscious mind can learn, and b) cultural
practices can be learned at least partly unconsciously.  We have seen in previous sections that a)
is demonstrably true in a wide range of cases, and b) seems quite likely since cultural practices
are practiced – that is, they manifest themselves in behavior which can be seen or heard.  Mental
complexity might be evolved, but this does not mean it is largely genetic or innate.
The mind's complexity could also be the result of the interaction of a relatively few
biological systems; this goes against the “massive” part of the massive modularity thesis.  There
are numerous examples of very simple systems interacting in incredibly complex ways.  The
three body problem is an obvious example – the motion of two gravitationally attracting bodies can be easily predicted, but the motion of three bodies has so far resisted any general mathematical expression.[24]
 A human example comes from conscious learning.  Using just our sensory
systems, memory, language, and our conscious minds, a human can learn a huge number of
games (for example), producing a very complex and varied mental life.  This mental life does not
require additional biological complexity, above and beyond the systems posited, to explain.  Even
if the systems given are modules, there are only a few of them needed to give rise to a complex
mental life.  This shows that mental complexity need not be due to massive modularity.

[24] This meets the technical definition of complexity, which has to do with the ability to encode or communicate how a system operates.  The equations of physics give us a way to express the behavior of physical systems in a simple way.  A system whose behavior cannot be encoded or expressed other than by stating that behavior outright is maximally complex.
The pathology argument for strongly innate massive modularity
So we see that the evolutionary argument for strongly innate massive modularity is weak
at best.  Let us now turn to another argument for strongly innate massive modularity, the
argument from pathology.  This argument points out that certain mental functions can be harmed
without harming other mental functions; the evidence for this is generally from people with
certain kinds of brain damage (Carruthers, 2005, 2003).  This is evidence that these functions
occur in isolated parts of the brain, and that other functions do not depend on them.  This looks
like evidence for mental modules, since it is evidence that the databases and processes used for
different mental functions are somewhat independent.  However, much of this evidence is not
evidence for strongly innate mental modules, since even if the databases and processes
underlying certain abilities “live” independently in isolated parts of our brains, this does not require that these databases or processes are genetically determined.  Further, this evidence is not
evidence for massive modularity, because there are only a few functions that have been shown to
be isolated in this way, many of which are ones that we already know are due in part to innate
mental modules (such as visual processing or language ability).
There are, however, some arguments from pathology that are evidence for strongly innate
modules, and that are indirect evidence for massive modularity.  These are based on claims about
autism and Williams’ syndrome.  These disorders are developmental – they affect people from
very early ages – and have strong genetic components (Carruthers, 2005, 2003).  Because of this,
it would seem that the cognitive functioning they affect has strong innate components.  It is
claimed that these disorders affect only single, or relatively few, types of cognitive functioning;
that is, that their effects are isolated to a narrow range of abilities.  If true, this would indicate that
these abilities are isolated from the rest of our cognition.  Further, the types of cognition they are
supposed to affect – social cognition and practical problem solving – are the sorts of cognition
that people like me think are prototypical cases of non-modular cognition.  If these sorts of
cognition are due to mental modules, then the case for domain general cognitive processes looks
very weak.
According to Carruthers, “autism is a developmental condition in which children show
selective impairment in the domain of naïve psychology – they have difficulty in understanding
and attributing mental states to both themselves and others, but can be otherwise [of] normal intelligence.  Conversely, children with Williams' syndrome are socially and linguistically precocious, but have severe difficulties in the domain of practical problem solving.” (Carruthers,
2003).  Autism is supposed to affect social cognition only, while Williams' syndrome affects
practical problem solving only.
These claims are based on quite shaky evidence.  It is not clear at all that autism only
affects “naive psychology.”  People with autism exhibit repetitive or ritualistic behavior (this
behavior is necessary for a diagnosis of autism) (American Psychiatric Association, 1994).  They
also generally have difficulties in communicating that are not easily explained by a lack of naïve
psychology, such as delays in learning to communicate or repetitive speech (American
Psychiatric Association, 1994).  These facts undermine the claim that autism affects an isolated,
modular, cognitive function.  There are also many theories of autism that explain autistic peoples’
problems with social judgments, difficulty communicating, and repetitive behavior by appealing
to one or more basic processing problems.  For example, one theory holds that autism is due to a
basic difference in perceptual processing between autistic and non-autistic people (Mottron, et al
2006).  There is evidence that people with autism have fairly significant differences from non-
autistic people in the size of their cerebellums, a part of the brain implicated in coordinating cognition with external stimuli, and because “children with autism are only impaired on social perception tasks when there is more than one cue,” there is reason to think that a lack of ability to pay attention to and integrate various stimuli plays a crucial role in autism (Preston & de Waal,
2002).  This suggests that autism does not arise from deficits in an isolated module, but from an
impairment in a part or parts of the brain that are tightly intertwined with a number of integrated
functions.  Even if autism were a problem with only social judgments, some theorists think it is
due to disorders of “mirror neurons,” which are brain systems which do some basic processing of
facial expressions and body language (Williams, et al, 2001).  These theories, if true, do not show
that we have a social judgment module, but rather that there is a module which produces a certain
kind of output that is very important to learning to make social judgments.  This is not evidence
for massive modularity, just as the fact that blind people cannot read does not show that we have
a reading module, but rather that learning to read is dependent on the output of a certain
perceptual module.
There are similar problems with arguments based on Williams' syndrome.  People with
Williams' syndrome typically have general cognitive deficiencies (lower than average IQ), and
problems with spatial perception (Karmiloff-Smith, 2007).  In addition, while they often have
large vocabularies and are extremely outgoing, they generally do exhibit disabilities in making
social judgments (“their social behavior is as inappropriate as that of individuals with autism...”
Karmiloff-Smith, 2007, p.R1036), and their word choice is often "syntactically correct, but
semantically just a little off the mark." (Finn, 1991).  Thus, it is inaccurate to say that people with
Williams' syndrome exhibit deficits with a single cognitive function, or that they exhibit deficits
in isolated mental systems.
What we see from this is that arguments from pathology at best give evidence that we
have some mental modules, but not for massive modularity, and that the facts as we currently
know them are consistent with the claim that we have relatively few isolated cognitive systems.
The developmental argument for strongly innate massive modularity
The final type of argument for strongly innate massive modularity that I will discuss is
the argument from development.  These arguments are similar to the “poverty of the stimulus”
arguments that are made to support the existence of an innate grammar module.  These arguments
point to the fact that children display certain abilities at an extremely early age – they seem to
understand something about physics, for example, and psychology – and that it is hard to see how
they can have these abilities unless they are innate (Carruthers, 2003).
Space does not permit me to explore every version of this argument that can be made, so
instead I will give three general replies.  The first is simply to point out that showing that we have
some abilities early in development is, at best, evidence for some strongly innate modules, not
evidence for strongly innate massive modularity.  The second is based on an argument made by
Jerry Fodor (2000).  Fodor points out that innate abilities need not be due to innate modules, but
rather to innate “knowledge.”  Remember that mental modules have their own, isolated,
processes, inputs, outputs, and databases.   Innate knowledge is something like a database, but it
need not be isolated (we could in principle use innate knowledge of psychology to make non-
psychological judgments).  Innate knowledge of physics could explain children’s early facility
with physics without needing to appeal to an innate physics module, which would have its own
inaccessible processes, inputs, and database.  The final response is to point to research that
indicates that children may be better at learning than are adults partly because they have less
working memory than do adults (Kareev, 1994, Kareev, et al, 1997).  Apparently, statistical
analysis predicts that correlations are more easily spotted when individuals have less information
to work with at one time (within certain limits).  This might be able to explain the rapid
acquisition of abilities by children in absence of what seems to us to be sufficient data to learn
from.  Alternately, a combination of these factors – innate knowledge, some mental modules, and
rapid learning of correlations – might explain the remarkable speed at which children develop
mentally.  We need not appeal to massive numbers of strongly innate modules.
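The statistical point Kareev appeals to is easy to verify by simulation.  In this Python sketch, all numbers are illustrative assumptions rather than values from the cited studies: a modest true correlation is “noticed” only when the sample correlation exceeds a detection threshold, and small samples cross that threshold far more often than large ones.

import numpy as np

rng = np.random.default_rng(0)
rho = 0.3          # assumed true population correlation
threshold = 0.5    # assumed sample correlation needed to "notice" the link

def detection_rate(n, trials=2000):
    # Fraction of samples of size n whose sample correlation exceeds threshold.
    cov = [[1.0, rho], [rho, 1.0]]
    hits = 0
    for _ in range(trials):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        if np.corrcoef(xy[:, 0], xy[:, 1])[0, 1] > threshold:
            hits += 1
    return hits / trials

for n in (5, 10, 30):              # small "working memory" vs. larger samples
    print(n, detection_rate(n))    # the rate falls as the sample size grows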
In conclusion, the massive modularity thesis is only incompatible with the associative
model of unconscious learning and judgment if we believe that all or most unconscious cognition
is due to mental modules which operate on innately specified inputs, using innately specified
databases of information, to produce innately specified outputs.  The evolutionary argument for
this view begs the question, and ignores the fact that our mental lives may be complex due to
cultural evolution, or the complex interaction of just a few mental systems.  Further, it ignores the
fact that we demonstrably have unconscious mental systems which do not operate on innately
specified inputs, which do not use only innately specified databases, and which produce outputs
which cannot be innately specified.  The argument for strongly innate massive modularity from
pathology is based on a very simplistic and inaccurate understanding of certain pathologies, and
the argument from development is based on data that can be explained without embracing
strongly innate massive modularity.  Thus, the massive modularity thesis should not be seen as a
threat to the associative model of unconscious learning and judgment.
The associative model versus unconscious thinking
Some psychologists claim that the unconscious thinks, which contradicts the associative
model of unconscious learning and judgment.  The view of the unconscious that I have been pushing
holds that the unconscious does not think at all, as we normally conceive of thinking.  Pre-
theoretically, thinking involves reflecting – considering information one has already gathered –
and putting information together based on this reflection in ways that one had not done when the
information was gathered.  Thinking can and does occur in the absence of direct stimuli; we can
simply think about what is already in our head and come to new conclusions, without needing
new inputs.  To use an analogy, we might think of one’s mind as a filing cabinet full of files; as
we go through life we put the information we gather from different experiences into different
files.  When we think, we move information from one file to another, copy information into additional files, or discard information altogether.  Applying this metaphor to the associative model of unconscious learning, the unconscious puts information into files as it is gathered, and makes judgments by consulting these files.  However, according to the associative model, the
unconscious does not reflect – it does not perform operations on files in the absence of immediate
stimuli – and it only adds information, never deleting associations – it does not take information
out of files (although it may be that associations deteriorate over time).  If there were evidence that the unconscious thought in the traditional sense – that it added or removed information from “files” in the absence of immediately experienced stimuli – this would be evidence against the
associative model.
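The contrast can be put as a toy data structure.  On the associative model the unconscious is, so to speak, append-only: it can file information as it arrives and consult the files to judge, but the reorganizing operations that constitute thinking are simply not available to it.  A sketch, with invented labels:

files = {}

def file_away(label, item):
    # What the unconscious CAN do: add information as it is experienced.
    files.setdefault(label, []).append(item)

def consult(label):
    # Judging: read off what is already in a file.
    return files.get(label, [])

def reorganize(source, destination, item):
    # "Thinking" in the pre-theoretical sense: moving or removing filed
    # information offline.  The associative model denies that the
    # unconscious has any operation like this.
    files[source].remove(item)
    files.setdefault(destination, []).append(item)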
Traditionally, the view that the unconscious thinks is associated with Freud and
psychoanalysis.  Psychoanalysis has been alternately repudiated and (partially) supported by
research psychology; in doing a meta-analysis of the current literature on unconscious cognition,
Anthony Greenwald (1992) argues that we are currently on the third major re-analysis of
psychoanalysis and the unconscious.  There is much evidence about how the unconscious
operates, Greenwald argues, but convincing evidence of unconscious thinking (in the sense I am
using the term) has failed to turn up (Greenwald, 1992).  Greenwald argues that what
evidence has been found can be readily accounted for by associative processes of the sort I have
been outlining.
Some psychologists dispute this, claiming to have found experimental evidence of
unconscious thought.  I will focus on the work of one psychologist, Ap Dijksterhuis, who has
recently and strongly advocated the view that the unconscious thinks (Dijksterhuis & van Olden, 2006; Dijksterhuis & Nordgren, 2006); since he is a strong advocate of the claim, we can assume he will have canvassed the literature for support for his view, and since he is a recent advocate, we can assume that he has not overlooked relatively modern research.  It being notoriously difficult to show that there is no evidence for a claim, I will be content if I can show that Dijksterhuis has not turned up evidence of unconscious thought; this will show that such evidence is not likely to currently exist.
The experiments Dijksterhuis cites as evidence of unconscious thought employ the
following paradigm:  subjects are presented with data relevant to a decision or evaluative
judgment.  They are given more data than can be easily consciously tracked.  Some subjects give
their decision/evaluation immediately after being presented with the data.  Other subjects are
given time to think.  A third group is distracted for some time so that they cannot consciously
think, and then make their judgment.  The third group makes better decisions or judgments, both
objectively and subjectively (generally being more satisfied with their judgment), than the other
two groups.  Since this third group could not think consciously about their decision, they
presumably thought unconsciously about it if they thought about it at all.  Further, since their
better judgments occurred only after a delay, presumably thinking occurred.  For example, in one
experiment all participants were given a great deal of data about a series of apartments, and asked
to pick which was preferable.  The researchers had determined which apartment was better
through complex means which I am willing to trust for the sake of argument.  The group that was
distracted before making their decision picked better apartments than the other two groups.
When considered in the light of other research, these experiments fail to strongly support
Dijksterhuis’ claim.  An extremely similar experiment by Betsch et al. led those experimenters to
the conclusion that humans keep an unconscious running tally of evaluations (Betsch, et al,
2001).  That is, as we are exposed to information that says something good or bad about some
thing or things, we associate this data, and its good/bad-ness, with that thing.  When we are asked
to evaluate the thing, we simply consult the “score” we have generated along the way.  Betsch et
al.’s experimental paradigm was extremely similar to Dijksterhuis’.  Subjects were presented with
information, all of which they could not consciously keep track of but that they were conscious
of.  This information was in the form of a stock ticker running along the bottom of a TV screen,
reporting the price changes of hypothetical stocks, that they had to read out loud.  Subjects could
not keep track of this information consciously because they were instructed to pay attention to
advertisements flashed above the ticker (subjects were told that the ads were the focus of the
experiment, and that reading the stock prices was to distract them from the ads).  At the end of the
experiments, subjects were asked to rank the stocks from best to worst.  Subjects were able to do so with a high degree of accuracy, but were unable to consciously report stock prices, average
prices, or how they ranked the stocks.
In postexperimental interviews, the majority of [the] participants stated that it was neither possible
for them to deliberately evaluate the shares while reading the values nor was it possible for them
to remember any particular aspect of the distributions.  Most of them had no confidence in their
judgments and were not willing to believe that in fact their judgments were highly systematic.
(Betsch, et al, 2001, 249)
This shows that the process by which these stocks were ranked was unconscious.  Although some
subjects were distracted for a time before ranking the stocks, others were not distracted between
the presentation of the information and being questioned about it (Betsch, et al. 2001, experiment
2). Both groups gave accurate answers.  This gives evidence that the unconscious does not need
to think about the data after it is collected and “tabulated” – rather, it processes it as it is collected.
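The running-tally idea is simple enough to state as code.  In this Python sketch (the stock names and price changes are invented), each observation updates a per-stock score at the moment it is experienced; ranking later just reads off the scores, with no further processing of the raw data:

from collections import defaultdict

tally = defaultdict(float)

def experience(stock, price_change):
    # Called once per observation, as the information is encountered;
    # the raw datum itself is not stored.
    tally[stock] += price_change

# A stream of more information than could be consciously tracked.
stream = [("A", +2), ("B", -1), ("A", +1), ("C", +3), ("B", -2), ("C", +1)]
for stock, change in stream:
    experience(stock, change)

# Asked to rank the stocks, we simply consult the accumulated scores.
print(sorted(tally, key=tally.get, reverse=True))   # ['C', 'A', 'B']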
Why, then, did subjects in Dijksterhuis’ studies only perform better when they were
distracted before responding?  Ironically, Dijksterhuis gives a possible answer to this question
(that goes against his own view) in another paper, in a discussion of insight (Dijksterhuis, 2004).
Insight, he argues, is often due to something called “incubation” – keeping an idea simmering in
the back of one’s head, so to speak, until an answer comes to consciousness unbidden.  And
incubation need not involve unconscious thinking:
The empirical evidence for incubation available these days is usually not explained by
unconscious thought.  Instead, incubation is seen as fruitful because one is distracted from the
problem at hand.  Not thinking about a problem for a while may lead people to forget wrong
heuristics or inappropriate strategies in general.  Distraction, then, allows people to give the
problem a fresh look (Dijksterhuis, 2004, 588).
In Dijksterhuis' experiments, incubation could result in subjects falling back on their intuitive
reactions to the data, and discarding (or forgetting) their conscious reactions.[25]
Subjects in these
experiments were given information that they knew they were supposed to make judgments about
(unlike in Betsch, et al, where subjects thought the stock information was an irrelevant
distraction).  Both subjects who were asked to consciously deliberate on this information, and
those asked for judgments with no time for deliberation, probably had consciously calculated
answers ready when asked for them, since they expected to be asked for these answers at the time
the information was given to them.  People who have both a conscious and an intuitive answer to
a question often present the conscious answer (and may not even be aware of the intuitive
answer).  On the other hand, distraction may have prevented the distracted subjects’ conscious
minds from blocking the answers their unconscious minds had already generated.  This was not
necessary in Betsch's experiments because the subjects were presented with so much information
that they could not even attempt to process it consciously, and which they were told they should
not process consciously (because it was irrelevant).  Subjects in Betsch's experiments,  unlike
those in Dijskterhuis', were not likely to generate conscious judgments about the data which
would interfere with presenting their intuitive judgments.
Thus, the best evidence Dijksterhuis found for unconscious thinking is not very good
evidence  that it occurs.  What's more, there is some evidence that the unconscious does not think.
Recall the analogy of the mind as a filing cabinet.  One potential effect of thinking is the
movement of information from one file to another, or the removal of information from a file
altogether.  When either occurs, the information should no longer be in the (first) file.  This is of
course not the only possible effect of thinking, but it is a type of thinking which cannot be easily
explained by association, since association of files never leads to a loss of information from
either file.  There is evidence that associations, once formed, persist even if there is reason to
“delete” them.  For example, one’s memory of an event can sometimes be intentionally changed
by researchers, so that one cannot consciously access the first memory formed at the time of the
event (Loftus, 2003). The process by which the memory is changed involves manipulation of the
unconscious – subjects are not aware that their memories are being altered.  Many researchers
argue that the initial memory persists, and can still be accessed in the right conditions, generally
conditions which involve unconscious recall (Wilson, et al, 2000).  In other research,
unconsciously formed attitudes, such as stereotypes, have been shown to be extremely persistent.
Attitudes are evaluations of things, people, events, and so forth; on my model they are
associations between these things or concepts and positive or negative concepts.  Once an attitude
is unconsciously formed, it is very difficult to change, even in the face of evidence that it is
clearly false or misplaced, and despite the possession of a contradictory conscious attitude
(Wilson, et al, 2000). “Implicit attitudes, like old habits, change more slowly…” (Wilson, et al,
2000, 104)  Unconscious attitudes have been shown to survive cognitive dissonance, which
typically results in changes in conscious attitudes (Wilson, et al, 2000).  Reflecting on this, one
researcher writes, “Useful though the nonconscious pattern detector is, it is tied to the here-and-
now…  Nor can the adaptive unconscious muse about the past and integrate into a coherent self-
narrative…”  (Wilson, 2002, 50-51)  Evidence that the unconscious mind thinks would be
evidence against the associative model of unconscious learning, but this evidence has not been
found.
Summary
We have seen that the unconscious mind does learn about a wide range of domains by
association, so that we know that unconscious associative learning occurs and is widespread.  The
associative model explains other observed facts about how our unconscious minds learn and
judge.  This is good evidence that the associative model is a good model of how our unconscious
minds learn and judge in general.  We have also examined some potential counter-evidence to the
model, and seen that it is actually consistent with the associative model.  The associative learning
model is simple, because it posits one mechanism which can explain most of our unconscious
judgment (although, again, we know that some of our unconscious judgment does come from
innate concepts).  A model which is simple, fits the given data, has strong explanatory and
predictive power, and is not falsified by counter-evidence that we know of, is a good scientific
theory.
What Experiences Get Associated?
I have claimed that everything in the contents of (roughly) simultaneous experiences gets
unconsciously associated together.  This does not mean that all possible information about objects
we encounter gets associated together; associations only occur between information that is
experienced, either consciously or unconsciously.  This also does not mean that all, or even most,
of these associations are used in unconscious judgments.  For example, if I once saw a bear with a
cowboy hat, I am not likely to have a strong association between bears and cowboy hats,
especially relative to the associations I have between cowboy hats and other things (like
cowboys), so seeing something wearing a cowboy hat in the future will not make me more likely
to classify it as a bear.  It also does not mean that all this information is stored forever – some of
it may be lost over time, as memories may decay without being reinforced, and information
stored in short term memory may not make it into long term memory; this is an issue which needs
more research.[26]
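Stated as a mechanism, the claim is just pairwise reinforcement over whatever co-occurs.  Here is a minimal Python sketch, reusing the bear-in-a-cowboy-hat example from above (the items and weights are illustrative):

from collections import defaultdict
from itertools import combinations

links = defaultdict(float)

def experience(contents):
    # Strengthen the association between every pair of co-experienced items.
    for a, b in combinations(sorted(contents), 2):
        links[(a, b)] += 1.0

experience({"bear", "cowboy hat", "forest"})
experience({"cowboy", "cowboy hat", "horse"})
experience({"cowboy", "cowboy hat", "boots"})

# A single encounter leaves only a weak bear/cowboy-hat link relative to the
# cowboy/cowboy-hat link, so a hat alone will not trigger bear-classification.
print(links[("bear", "cowboy hat")], links[("cowboy", "cowboy hat")])  # 1.0 2.0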
Why think that everything we experience together gets associated together?  The
judgments we make about a given type of thing change over time.  This indicates that we make
novel associations, since in order for my concept X to change, I have to form new associations
between X and something else.  The experiments on implicit learning discussed above show that
we can learn to make judgments based on information we have no conscious awareness of, or that
we do not consciously believe is relevant to these judgments.  These studies involved learned
associations between a wide range of stimuli.  I have not seen evidence of a boundary on the
types of experiences that can be associated together.  Other studies, such as those by Betsch et al
that I cited above, demonstrate that we unconsciously learn from information even when the
amount of information we are given at one time is too much for us to consciously keep track of
(Betsch, et al, 2001).[27]
 The simplest explanation for this is that we form associations between
everything we experience, and not just some subset of things.  Given all this, we should expect
that associations can and do occur between the contents of any experiences that occur at roughly
the same time (leaving open for now the question of what span of time counts as simultaneous
enough).
[26] For example, see Ebbesen & Rienick, 1998.  They found that subjects’ recall of information told to them decreased rapidly until the point at which subjects recalled that information; their degree of recall remained relatively constant after that point, at least within the time limits of the study (28 days).  On the other hand, subjects’ recall of attributes of people they had themselves perceived did not decrease significantly within the limits of the study.  This is not to say that subjects remembered everything about the people they had seen, but that if they recalled something within a day, they would also remember it after 28 days.  It isn’t clear to what degree subjects forgot things they had initially noticed about people.  It could be that subjects noticed many items that they forgot within 24 hours; on the other hand, it may be that they didn’t notice everything they could have, and remembered everything they had initially noticed.  This suggests, however, that association with exemplars may be fairly stable in the long term, whereas explicit categorical information (i.e. information we are told about concepts) may be much easier to forget.
For a discussion of why information is forgotten, see Wixted, 2004.  He argues that information needs to be consolidated by the hippocampus in order to be stored for the long term.  If demands are placed on the hippocampus’ resources before this can occur, some information may be lost before being consolidated.  Formation of new memories and “mental exertion” are some such demands.  For example, if one engages in a series of novel experiences, the later experiences may demand resources that prevent the earlier ones from being fully consolidated.  Thus, information from the earlier experiences may not make it into our concepts.
[27] Similar results have been shown for pattern tracking (Lewicki, et al, 1987) and preference judgments (Dijksterhuis, 2004).
When I say “experiences,” I am referring to a wide range of mental events – not only
perceptual experiences, but also judgments (both conscious and unconscious).  It should be
obvious that we consciously learn from information we perceive and consciously notice, but we
also unconsciously learn from this as well.  Anyone who has consciously practiced a skill until they can perform it almost instinctively has experienced this process.  Studies on implicit learning also
illustrate this, since in some of these studies subjects formed associations in part based on
information they were consciously aware of (e.g. they were explicitly told which “brain scans”
were intelligent, and associated this intelligence with aspects of the brain scan).  Implicit learning
research also shows that information we perceive but do not consciously notice can affect future
unconscious or intuitive judgments; for example, subjects in experiments have learned
covariations involving changes in musical notes too subtle for them to consciously discern.
However, I am also claiming that we unconsciously learn from our intuitive or
unconscious judgments (this is quite important because it means that our intuitions reinforce
themselves – every time we have the intuition that P, we “learn” that P, making it more likely that we will intuit that P in the future).  Why think that intuitive judgments are the sorts of things that can
form associations?  This has been demonstrated in a series of studies on unconscious learning
(Hill, et al, 1989, and Lewicki, et al, 1989).  In these studies, experimenters created a situation in
which subjects were unconsciously trained to notice a hidden correlation between two factors.
Some of these studies were similar to the “brain scan” studies discussed above, in which the
correlation was between a certain type of “brain scan” and an intelligence rating.  In another
study, personality traits of people were associated with the lengths of their legs.  In another study,
differing positions of a number in the visual field were associated with different patterns of
preceding stimuli.  Each of these studies consisted of a training phase in which the subjects were
repeatedly presented with the perceivable part of the correlation (the brain scan, the images of the
body, or the stimuli patterns) and also with the correlated information (the intelligence rating, the
personality traits, the number in the visual field).  After the subjects had been trained (although no
subjects realized that they had learned anything), they were then presented only with one aspect
of the correlation – the certain type of brain scan, the leg length, the pattern – but not the other.
They were not told whether or not the scan was intelligent, or what traits the person had; in one
experiment, the number they had been trained to spot did not actually appear on the screen after
they had been exposed to the pattern they had learned to associate with it.  This is the “testing
phase” of the experiment.  In each case, after a number of exposures to the first stimuli in the
testing phase, subjects responded as if the correlation they had already learned had been
reinforced; they got better at responding in accordance with the covariation they had previously
learned, even though the covariation no longer occurred in the stimuli they were explicitly
presented.  In the number pattern experiments, for example, subjects never even saw the number
in the visual field during the testing phase, but they reported seeing it, and got better at “seeing”
it where it should have been had the correlation been maintained.  This shows that, during the
testing phase of these experiments, the association between some X and some Y was reinforced.
However, the experience of Y did not come from the external world.  Subjects could only have
been experiencing Y by judging Y to be present in the items presented to them.  In some of these
experiments, these experiences were the conscious manifestations of unconscious judgments;
they were conscious because subjects reported having them (e.g. they reported seeing the number
where it ought to be in the pattern), and they were the manifestations of unconscious judgments
because the subjects were not able to consciously determine what stimuli they “ought” to be
experiencing.  In other studies (e.g. in brain scan studies) subjects made unconscious judgments
that were not directly consciously manifested; e.g. they judged brain scans with the appropriate
properties to be intelligent.  These judgments were unconscious again because the subjects were
not aware of what properties were correlated with intelligence, and they were not directly
consciously manifested because the subjects never consciously “saw” these properties – that is,
they were not consciously aware of what percent of the brain scan was a certain symbol.  In both
of these types of cases, the association between X and Y that had been created during the training
phase was reinforced.  This shows that unconscious judgments, both consciously manifested as
intuitions and not, can create or reinforce associations.
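To make the structure of this claim concrete, the following is a minimal computational sketch of the loop just described.  It is an illustration only, not a model drawn from the cited studies; the class name, the threshold, and the increment are invented assumptions.

    # A minimal sketch of the self-reinforcing associative loop described in
    # the text.  The threshold, increment, and names are illustrative
    # assumptions, not parameters from any cited study.

    class AssociativeMind:
        def __init__(self, threshold=5, increment=1):
            self.strength = {}          # (x, y) -> association strength
            self.threshold = threshold  # strength needed to trigger a judgment
            self.increment = increment  # how much one co-experience adds

        def experience(self, x, y):
            """Co-experiencing x and y strengthens their association."""
            key = (x, y)
            self.strength[key] = self.strength.get(key, 0) + self.increment

        def perceive(self, x):
            """Perceiving x alone: any strongly associated y is unconsciously
            judged present, and that judgment itself counts as an experience,
            reinforcing the association."""
            judged = [y for (a, y), s in self.strength.items()
                      if a == x and s >= self.threshold]
            for y in judged:
                self.experience(x, y)   # the judgment of y reinforces the link
            return judged

    mind = AssociativeMind()
    for _ in range(5):                  # "training phase": x and y co-occur
        mind.experience("brain scan A", "intelligent")
    mind.perceive("brain scan A")       # "testing phase": x alone
    print(mind.strength)                # strength is now 6, with no new input

The point is the last line:  during the “testing phase,” the association strengthens even though the second member of the pair was never presented, which is the pattern the studies just discussed report.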
Are there some domains that cannot be learned about through association?  Strictly
perceptual experiences can clearly be parts of associations, as can categorization of natural kinds.
This does not seem surprising, given that lower animals also probably learn through association.
But it does not by itself show that our unconscious minds can form associations using uniquely human experiences, such as moral or social judgments.
Is our unconscious limited in the types of content it can learn from and form associations
between?  It does not seem so.  We can learn to make social judgments through association,
showing that social judgments and experiences of facts relevant to them are the sorts of things
that can be parts of associations.  This is demonstrated by the fact that we make accurate
judgments about people’s personalities, tendencies, emotions, future behavior, and so forth based
on relatively little information (Ambady & Rosenthal, 1992[28]).  These judgments are very likely
to be largely intuitive judgments for several reasons.  First, subjects did not improve their
performance significantly by having more information (30 seconds of observation was about as
good as 4 minutes), even though conscious reasoning generally improves given more time to
think and take in information.  Second, subjects performed better when making judgments based
on non-verbal cues.  Non-verbal cues are harder to use in a conscious manner than verbal cues, because we are less likely to have words for non-verbal cues like facial expressions, gestures, or ways of moving, or to be able to articulate how we use them in judgment.[29]
[28] This is a meta-analysis of 44 studies.
Other studies have
shown that people tend to pick up more on non-verbal cues unconsciously (Gilbert & Krull,
1988).  Since accurate social judgments are often made unconsciously, or due to unconscious
processes, the unconscious must be able to learn to make social judgments.  Moral judgments can
also be made based on associative learning.  One example comes from a study in which subjects
were told facts about two people:  a convicted sex offender and the prosecutor who convicted
them (Wilson, et al, 2000). They were shown photographs of both while the facts were recited.
Later, half the subjects were told that a mistake was made and the photographs had been
incorrectly switched around.  Subsequently, subjects were asked questions about the traits of each
person under both rushed and normal conditions, and also asked other questions to reveal their
attitudes towards the two people.  Some of these questions were designed to measure explicit, or
conscious, attitudes, and some to measure implicit, or unconscious attitudes (rushed answers to
questions, for example, are more likely to elicit unconscious judgments (Kahneman & Frederick,
2001)).  What came out was that the subjects who had the images switched had more negative unconscious attitudes towards the prosecutor (whom they had initially thought was the sex offender), and more positive unconscious attitudes towards the sex offender (whom they had initially thought was the prosecutor), than did the subjects who had not had the images switched.  However, their
conscious attitudes were basically the same.  What this shows is that the initial moral judgments
were associated with the two people in a way that was still accessible to the unconscious mind.
[29] For example, humans have numerous facial expressions for which we have no words and which we cannot identify as meaningful, yet which we regularly use without realizing it (Gladwell, 2005).
A variety of other experienced information can become part of an association:  history (that is, what events preceded or led up to something),[30] heat/cold or pleasure/pain, emotions,[31] frequency of occurrence,[32] and information presented linguistically.[33]  Given all of this, there
seems no principled reason why any two experiences cannot be associated, given that they are
experienced at roughly the same time, and much reason to think they can.
Unconscious Concepts:  Judgment Based on Abstractions
There has been a debate in psychology since at least the 1970s about how our mind stores
information about categories and uses this information in making judgments.  I will be focusing
on the aspects of this debate that apply to unconscious learning and judgment.  The two main
camps are exemplar theories and abstractionist theories.[34]  Exemplar theorists hold that categories in the mind are represented by examples of category members (potentially every example the subject can remember).[35]  When a person makes an unconscious categorization
judgment, they compare the thing the judgment is about to some or all exemplars of various
categories.  Abstractionists, on the other hand, think that data about categories is stored abstractly, that there is something in our unconscious minds like a representation of the category itself, rather than just its members, and that judgments about objects are often made based on these abstractions, often without recall of any specific instance of the category (Murphy, 2002).
[30] This can be seen in e.g. Stadler, 1989, or in the success of classical conditioning.
[31] Wilson, et al, 2000.  See Wilson, 2002, chapter 7 for an in-depth discussion of unconscious emotions and judgments about emotions.  Wilson argues that we can make unconscious judgments about emotions we are not actually feeling; for example, he cites a study in which subjects were approached by an attractive woman in a dangerous setting.  Subjects reacted more as if attracted to the woman than they did in a safer setting, suggesting that subjects judged they were attracted to the woman based on cues normally associated with attraction (heart racing, etc.).
[32] Begg, et al, 1992.
[33] Most experiments on unconscious judgment involve the subjects being told some information about items or people they will go on to make judgments about.
[34] This terminology is adapted from Klein, et al, 1992.
[35] See Medin & Schaffer, 1978.  This is the locus classicus of the exemplar theory.  There are, however, modern versions with somewhat different details, such as the Exemplar Based Random Walk Theory (Nosofsky & Palmeri, 1997), which claims that, rather than using all exemplars in making judgments, all exemplars “race” to be used, but only some are actually recalled quickly enough to contribute to the decision.  This view potentially accounts for some data that typical exemplar theories cannot (such as basic level advantages; see below).
This
second view includes several related theories, such as prototype theories, schema theories, and
rule-based theories, the details of which are not important here.  The so-called classical theory of
categories – that concepts are represented by sets of necessary and sufficient conditions – is an
extreme abstractionist view, but abstractionists need not believe that traits are either necessary or sufficient for category membership (Lakoff, 1987).  I have claimed earlier that one piece of
evidence for the associative model of unconscious learning is that it explains why and how we
store the data we use to make judgments about categories abstractly.  This requires that we
actually do store category data abstractly.[36]  Defending the associative model, then, requires
arguing that abstractionist views of categorization and judgment are superior to exemplar
theories.  However, since the associative model does not require that we always use abstractions
to make unconscious judgments (in fact, it claims that we will only do so in certain conditions), I
will not try to defend extremely strong abstractionist theories, but only those that claim that we
often use abstractions when making unconscious judgments.
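Before turning to the evidence, the structural difference between the two camps can be made vivid with a toy contrast, under stated assumptions:  items are feature vectors, similarity is (negative) feature mismatch, and a prototype is a per-feature majority vote.  None of this is drawn from the cited papers (it is not, for example, the Exemplar Based Random Walk Theory); it only illustrates where each view locates the summarizing work.

    # A toy contrast between exemplar and prototype (abstractionist)
    # categorization.  Features, similarity, and data are all invented.

    def similarity(a, b):
        """Negative count of mismatched features."""
        return -sum(x != y for x, y in zip(a, b))

    def exemplar_classify(item, exemplars_by_cat):
        """Exemplar view: compare the item to every stored example."""
        def score(cat):
            return sum(similarity(item, e) for e in exemplars_by_cat[cat])
        return max(exemplars_by_cat, key=score)

    def make_prototype(exemplars):
        """Abstract a category: majority value on each feature."""
        return tuple(max(set(col), key=col.count) for col in zip(*exemplars))

    def prototype_classify(item, prototypes_by_cat):
        """Abstractionist view: compare the item to one stored summary."""
        return max(prototypes_by_cat,
                   key=lambda cat: similarity(item, prototypes_by_cat[cat]))

    exemplars = {"dog": [(1, 1, 0), (1, 1, 1), (1, 0, 1)],
                 "cat": [(0, 0, 0), (0, 1, 0), (0, 0, 1)]}
    prototypes = {c: make_prototype(es) for c, es in exemplars.items()}
    print(exemplar_classify((1, 1, 1), exemplars))    # "dog", via examples
    print(prototype_classify((1, 1, 1), prototypes))  # "dog", via a summary

On the exemplar view all of the summarizing happens at the moment of judgment, by consulting stored examples; on the abstractionist view it happened earlier, when the prototype was formed – which is why only the latter requires something like a stored representation of the category itself.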
There is a great deal of data that supports exemplar theories of categorization.  In a great
number of studies (unsurprisingly produced by exemplar theorists), people’s categorization
judgments have conformed better to the predictions of exemplar theories than to those of abstractionist theories.[37]
 Recently, however, significant criticisms of these studies have been raised.  These
studies typically involve subjects learning artificial categories that are generated especially for the study.
[36] The associative model does allow for the use of exemplars in some cases of judgment (it is not unique among abstractionist theories in that regard).  See, for example, Murphy, 2002, Klein, et al, 1992, Palmeri & Nosofsky, 1995.
[37] See Murphy, 2002, for an overview.
These categories have very few members, and are not representative of the categories
people normally use:  members of the categories have very little in common, many features of the
categories are “irrelevant or misleading,” and they contain members who “are opposite in every
respect [from one another.]” (Murphy, 2002, 104)  Further, because of their size, the extensions of the categories are relatively easily memorized, meaning that the conscious use of exemplars is heavily favored in these studies.  They do not shed much light on how unconscious judgments
are made under normal conditions.  Further, at least one researcher has argued that, in fact, a great
deal of the data generated in the studies is consistent with abstractionist theories (although I
cannot explain these arguments well, not understanding the math behind them).[38]  Thus, the great deal of evidence seeming to support exemplar theories of categorization does not strongly support the claim that categorization is in general based on the use of exemplars.
On the other hand, there is clear evidence that judgments can be made without consulting
exemplars at all.  Subjects in a series of studies were asked to make judgments about the character
traits of themselves and other people they knew (Klein, et al, 1992, Klein, et al, 2001). They were
then tested on their recall of any behaviors consistent with the traits they had made judgments
about.  Recalling a memory once makes it and related memories easier (quicker) to recall in the
near future; this is a phenomenon known as priming.  In these studies, if the subject had based
their judgments on exemplars – examples of the person asked about exhibiting the character trait
– some memories of the behavior being exhibited should have been primed, and should have
subsequently been recalled more quickly than in a control condition.  This was not found when
the subjects were asked questions about themselves, or about traits they were very familiar with
in other people.  This indicates that in these conditions the subjects had not used exemplars.  (The fact that exemplars went unused only in these conditions suggests that we only form abstract representations of a person given sufficient exposure to them, and that certain traits only become part of these abstractions given a sufficient number of experiences of that person exhibiting that trait.)
[38] Smith & Minda, 2000.  However, see Nosofsky, 2000, for a response to this argument.
These studies shed light on unconscious judgment for three reasons.  First, while these judgments
were conscious in the sense that the subjects were aware that they were reporting on the trait of
some person, the subjects simply reported their feelings about the person without reasoning
(consciously) to these feelings.  Second, people typically rely on intuitive judgments in situations
when they do not expect to justify the judgments they make (Sloman, 2001); this looks like such a
case.  Third, unconscious judgment is automatic – it occurs whether we want it to or not,
whenever we are in a situation where the unconscious can produce a judgment on a topic
(Sloman, 2001).  In this study, these subjects were asked questions that their unconscious minds
should be able to make judgments about.  Thus, we should expect them to have made intuitive
judgments in addition to whatever conscious judgments they  may have made.  If these intuitive
judgments were based on recall of exemplars, then these exemplars should have been primed.
Since they were not, we can conclude that exemplars were not used in making these unconscious
judgments.
Partly abstractionist theories also do a better job of accounting for things psychologists
have learned about human categorization.  The clearest set of facts that exemplar theories do a
poor job of explaining is the hierarchical structure of concepts.  As discussed above, concepts can be seen as hierarchical, where the extensions of some concepts are subsets of the extensions of others.  Concepts at a certain level – the basic level – are more easily used, which we will call the basic level advantage.[39]  Levels above the basic level are called superordinate and levels below it are called subordinate.  The basic level advantage, it turns out, is hard to explain within exemplar theories.
Why is this?  For exemplar theories, when I see a dog, for example, I recall things I have
previously seen that are similar in some way to what I am now seeing – same size, same shape,
same color, etc.  If enough of them are dogs, then I classify this thing as a dog.  However, since
every dog is an animal, I ought to recognize that this thing is an animal just as fast as I recognize
that it is a dog.  This does not normally happen – one basic level advantage is that items are
recognized as belonging to basic level categories more quickly than superordinate level
categories.  In order to account for this, it seems that not every dog exemplar in my memory can
have the “animal” label attached to it – if they all did, then every exemplar that confirms this is a dog would also confirm it is an animal, and I ought to make both classifications in roughly the same amount of time.  However, if this is the case, then there must be a reason why some dog exemplars in my memory have the “animal” label, and some do not.  There are Gricean reasons why we do not normally articulate the fact that every dog we see is an animal, but these do not
explain unconscious classifications of entities as dogs but not animals.  There seems to be nothing
about classifying something as a dog that suppresses our ability to classify it in other ways, since
I can recognize a dog as also being hungry, or nice, or mean, and so forth.  It also does not make
sense in the exemplar theory to say that recognizing something as a dog prevents me from
recognizing it as a member of a higher level category, since that requires some representation of
categories independent of exemplars; in order to “know” that recognizing this thing as an animal
should be suppressed if I have already recognized it as a dog, the relationship between the two
categories must be represented somehow.  But exemplar theories do not want categories themselves to be mentally represented.
[39] Term borrowed from Murphy, 2002.  The discussion of this advantage is based in large part on arguments made in Murphy, 2002.
While I am not ruling out the possibility of a non-ad hoc
explanation of basic level advantages that comes from exemplar theory, I know of none at this
time.  Further, any such explanation will have to add a layer of complexity to exemplar theories,
because the theories by themselves do not predict basic level advantages, while abstractionist
theories have no such problems and, in fact, the associative model predicts basic level advantages
(as discussed above).
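The argument of the last few paragraphs can be put in miniature.  The sketch below simply encodes the assumption that every stored dog exemplar carries both the “dog” and “animal” labels; everything in it is invented for illustration.

    # If every dog exemplar bears both labels, the evidence for "dog" and for
    # "animal" accrues identically, so a simple exemplar model predicts equal
    # recognition speed for both - i.e., no basic level advantage.

    def evidence(item, exemplars, label):
        """Count label-bearing exemplars similar to the item."""
        return sum(label in e["labels"] for e in exemplars
                   if e["shape"] == item)

    exemplars = [{"shape": "dog-shaped", "labels": {"dog", "animal"}}
                 for _ in range(100)]
    print(evidence("dog-shaped", exemplars, "dog"))     # 100
    print(evidence("dog-shaped", exemplars, "animal"))  # 100 - same evidence

Since the evidence is the same, the model gives no reason for “dog” to be recognized faster than “animal”; avoiding this prediction requires stripping the “animal” label from some exemplars, which, as argued above, demands some represented relationship between the categories.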
Lots of other evidence is advanced against exemplar theories – for example, they do not
seem to explain facts about how humans normally make inductions between category levels
(from dogs to animals, for example) (Murphy, 2002), and results of some studies on category
learning seem to involve use of rules to categorize, rather than exemplars (e.g. Smith & Minda,
1998) – but much of this evidence can potentially be explained by people using abstractions
consciously, while using exemplars when making unconscious judgments.  Rather than get
further into these debates, let’s summarize what we have so far.  We have clear evidence that people do not always use exemplars when making intuitive judgments, and evidence of categorization phenomena that exemplar theories do not currently explain and seem unlikely to explain in the future.  In addition, the main evidence for exemplar theories comes from studies of a very
limited type of categorization.  At best this evidence shows that we use exemplars for a limited
range of judgments.  This is compatible with most abstractionist views, and certainly with the
associative model.
Unconscious and Conscious Concepts and Judgments at Odds
Our unconscious concepts can come apart from concepts (in the ordinary sense) in a
number of ways.  One important point that we have not spent much time on so far is that
unconscious concepts need not be hooked up to our language in the same way that (many
philosophers believe) ordinary concepts are.  It would be odd if our ordinary concept DOG did not
determine to a very significant extent our use of the word “dog.”  However, it very well could be
that, for some people, the unconscious concept DOG does not affect how they use the word “dog,”
and/or that it is not activated when the word “dog” is heard.  For an unconscious concept to be
activated by exposure to a word, that word has to be associated with that concept.  This is
similarly true if an unconscious concept is to affect how a word is used.  One might wonder how we can label a given unconscious concept if not by looking at the words that are associated with it; in my view we can often do so by seeing what it is a concept of.  In other words, if I
acquire an unconscious concept through repeated exposure to dogs, then it is the unconscious
concept DOG, whether or not it has anything to do with my use or understanding of the word
“dog.”   This is a crucial point because a) there are a number of words we can use that are likely
to not be strongly associated with much in our unconscious minds, perhaps because we have not
used them enough; b) there are a number of unconscious concepts we may have that are not
associated with words (or with the “right” words), because while we have experienced certain
things a number of times, we may not have consistently heard or articulated words to label them
at those times.  This means that our linguistic abilities and our abilities to make unconscious
judgments can and likely will come apart to some extent.
So, our unconscious learning is based in part on information we are not aware of or do not consider important; the unconscious mind tracks more information than we can track consciously, both because it can take in more information at one time and because (as it forms unconscious concepts) it aggregates information from more experiences than we can consciously attend to at once; and words are not always associated with our unconscious concepts as we would expect.  For these reasons, we should expect unconscious judgments and concepts to
sometimes be at odds with our conscious judgments and concepts.  There is extensive
experimental evidence that confirms this.  We have already discussed some of this evidence –
evidence that we can unconsciously learn without ever being consciously aware of it, for
example.  Let us quickly survey some other evidence.
There are many anecdotal examples of this:  I have numerous concepts
that I can employ consciously (usually based on a great deal of thought) but that so far have never
been the contents of any intuitions.  Philosophers can also easily cite a number of cases in which
they have intuitions which conflict with their conscious knowledge or beliefs.  Numerous
examples can be found in psychological research as well.  Data on eye tracking in a
categorization task, which is supposed to be an indicator of unconscious “thought” processes even
if the final categorization decision reflects conscious judgment as well, suggests that for many
subjects, WHALE is almost as strongly associated with the category FISH as with MAMMAL
(Nederhouser & Spivey, 2004), even though we all know that whales are not at all fish.  Research
on judgments of preferences shows that people’s conscious decision making sometimes does a worse job of tracking their preferences than their unconscious judgments do, showing partly that the conscious mind is easily confused but also that sometimes we are not consciously aware of what our preferences are or why we have them (Dijksterhuis, 2004, Wilson & Schooler, 1991).  Research
on attitudes – essentially evaluative beliefs (“Pizza is tasty,” or “Horror movies are not good”) –
shows that people can have unconscious attitudes and beliefs that they heartily disagree with or
are not aware of (Wilson, et al, 2000, Greenwald & Banaji, 1995).  Research indicates that people
have certain implicit goals and desires that they are not aware of (Wilson, 2002).  One can have
intuitive responses that are based on information one knows to be false or irrelevant, such as the
feeling phobics get that the object of their phobia is dangerous or frightening, even when they
know it cannot hurt them (such as when it is only an image).  Further, these unconscious beliefs,
attitudes, desires, and judgments do not change even when one’s conscious beliefs, attitudes,
desires, and judgments change in response to consciously accepted information.
These various studies are part of a large tradition of research in a number of different
domains on conflicts between unconscious/intuitive judgment and conscious/rational judgment.
The ones cited include studies of object categorization, judgments of preference, social
judgments, and evaluation.  In each of these, subjects had unconscious judgments which did not
reflect their conscious judgments, or vice versa.  This supports the claim that unconscious
learning and judgment can and often do come apart from conscious learning and judgment.
What about General Intuitions?
The previous discussion has mostly focused on intuitions whose content is a singular proposition:  “This is a dog,” or “This is good.”  However, sometimes general philosophical theses are claimed to be supported directly by intuitions.  That is, the theses themselves are
claimed to be intuitive.  For example, Thomas Nagel says, “Prior to reflection it is intuitively
plausible that people cannot be morally assessed for what is not their fault…” (Nagel, 1979)
Other philosophers claim that some things are intuitively impossible; the claim that no object is both entirely red and entirely green is one such claim.  Claims of this sort are also general, since
they are logically equivalent to “All Xs are not Y.”  Let’s call an intuition whose content is a
proposition about some class of things a general intuition.  My view on general intuitions is that we do not know enough about them at this point to really evaluate how good they are as evidence in normal circumstances, although we do know enough to say that in certain situations they will not be reliable.  Because we do not know enough about how general intuitions work, this
section will be a bit speculative.  What I will do is sketch the (realistic) best-case scenario for
how general intuitions work, and discuss what evidence there is for it.  I will focus on intuitions
whose content has the form “All Xs are Y,” because these are the ones most commonly appealed to, and because we can expect other general intuitions to have content for which there is a logically equivalent conditional.
Assuming we have unconscious concepts for X and Y, what might generate the intuition
“All Xs are Y?”  The most plausible candidate is that such intuitions would be based on the
strength of the association between X and Y.  This leaves open the possibility that our general
intuitions are often inaccurate.  Strong associations can exist between two concepts even if not
every exemplar of one concept is an exemplar of the other.  Take, for example, the concepts CAR
and WHEELED.  These two concepts are very strongly associated in my mind because I see
hundreds, if not thousands, of cars every day, and virtually every one of them has wheels; this has been true
for almost my entire life.  I have seen some cars without wheels – I have even taken the wheels
off of cars – but very few.  Thus, we should expect a strong association between CAR and
WHEELED; if this association is what generates general intuitions, then I might have the general intuition “All cars have wheels.”  (Speaking from experience, the statement “All cars have wheels” does feel true.)
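A toy version of this proposal makes the worry explicit.  The counts and the threshold below are invented, and the sketch is not meant as a model of any data; it only shows that a strength-based trigger tracks frequency, not exceptionlessness.

    # Association-driven general intuitions can overgeneralize: strength
    # tracks how often X and Y co-occur, not whether exceptions exist.

    def association_strength(experiences, x, y):
        """Fraction of x-experiences in which y was also experienced."""
        with_x = [e for e in experiences if x in e]
        return sum(y in e for e in with_x) / len(with_x)

    def general_intuition(experiences, x, y, threshold=0.95):
        """Intuit "All Xs are Y" whenever the association is strong enough,
        even if some remembered X lacked Y."""
        return association_strength(experiences, x, y) >= threshold

    # Thousands of wheeled cars, a handful of wheelless ones:
    experiences = [{"car", "wheeled"}] * 2000 + [{"car"}] * 3
    print(general_intuition(experiences, "car", "wheeled"))  # True anyway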
This is not the best-case scenario for general intuitions on the associative model,
however.  It is also consistent with the associative model that we will only have a general
intuition with the content “All Xs are Y” when we have a strong association between X and Y,
and when we have never experienced (or cannot recall) an X that is not Y.  And, in fact, there is
some evidence to suggest that, when people have general intuitions, the unconscious mind
sometimes tries to recall exceptions to the content of the intuition (see Klein, et al, 2001, and Klein, et al, 2002), indicating that it is feasible for our unconscious minds to “block” a general intuition when there is evidence that its content is false.  However, it is not clear that this is
generally the case.  In the studies cited, many subjects still had general intuitions for which they
could recall exceptions, indicating that knowing of an exception does not guarantee that general
intuitions will be blocked.  There are also survival advantages to having a relatively low threshold
for producing general intuitions; that is, to having minds which generate general intuitions even
when there is known data that contradicts them (Queller, et al, 2006).  This is because it is often
more efficient or safer to overgeneralize about things, especially when the cost of not quickly
identifying something as a member of a certain category (e.g. as dangerous) is high.
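The best-case variant can be sketched the same way:  the same strength check, but with the intuition blocked whenever an exception can be recalled.  Again, this is purely illustrative and self-contained.

    # Best-case variant: the intuition "All Xs are Y" is blocked whenever an
    # exception can be recalled.  Values are invented for illustration.

    def blocked_general_intuition(experiences, x, y, threshold=0.95):
        with_x = [e for e in experiences if x in e]
        strength = sum(y in e for e in with_x) / len(with_x)
        recalls_exception = any(y not in e for e in with_x)
        return strength >= threshold and not recalls_exception

    experiences = [{"car", "wheeled"}] * 2000 + [{"car"}] * 3
    print(blocked_general_intuition(experiences, "car", "wheeled"))  # False

On this variant my “All cars have wheels” intuition is blocked by the recalled wheelless cars; the studies cited suggest the actual mechanism sits somewhere between the two sketches.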
There is another way general intuitions could be generated.  They might be generated by
recall of exemplars of concepts.  This would be likely to occur in situations where an intuitor
considers whether or not all Xs are Y, but lacks a strong association between X and Y, or has an
association between the concepts which is not articulable (for example because X did not have the
word “X” associated with it). When (and if) this occurs, what exemplars are likely to be recalled?
There are four kinds of exemplars that people have been shown to recall readily in
various circumstances.  These are typical exemplars (Mervis & Rosch, 1981), salient exemplars (e.g. Fabiani & Donchin, 1995),[40] recently encountered exemplars (Tan & Ward, 2000), and confirmatory exemplars (Kunda, 1990, Nickerson, 1998).  Typical exemplars are those that are
highly representative of a concept, salient exemplars are those that stand out from others, and confirmatory exemplars are those which confirm hypotheses under consideration.
[40] The specific experiments in that study did not look into speed of recall of salient things, but they did study degree of recall, concluding that salient objects were more likely to be recalled.  See also Rojhan, K. & Pettigrew, T.F., “Memory for Schema-Relevant Information:  A Meta-Analytic Resolution,” British Journal of Social Psychology, 1992, v.31, n.2, 81-109, and Palmeri & Nosofsky, 1995.  This latter article doesn’t explicitly address salience; however, the authors perform a series of studies of categorization learning.  In each study, stimuli which are “exceptions” to a hypothesized rule governing category membership (but still members of the category) were more easily remembered than any other stimuli.  These stimuli would be salient, because they are distinct from the average member of the category (which conformed to the rule).  The study also seems to show that salient stimuli were better remembered than prototypes.  However, this may not be the case.  Subjects were more confident about whether or not they had seen the exceptional cases before, showing that they recalled the exceptional cases they had seen, and also that they could distinguish these cases from other cases.  Subjects were less confident about whether or not they had seen prototypical cases before.  This may show only that it is easy to confuse prototypical stimuli with other stimuli (since prototypes are more like other members of the category, Rosch (1975)), so that, even if a prototype is recalled, it is hard to be sure it was actually recalled rather than merely familiar-seeming.
Some of these
provide better evidence than others:  if a trait is possessed by a typical exemplar, it is more likely to also be possessed by other members of a category than a trait possessed by an atypical exemplar (Rosch & Mervis, 1975).  But even typical exemplars are not particularly good evidence, since they will still have many traits which are not possessed by all members of the category.  General intuitions based on a combination of these types of exemplars, especially on a mix of typical and salient exemplars, are better evidence for their content; typical exemplars give traits which are had by most category members, and salient exemplars are likely to be atypical cases, such that if both they and the typical exemplars have the same trait, it is somewhat likely to be shared by all category members.  However, even in this ideal circumstance we cannot expect these general intuitions to be entirely accurate, and we have no way of judging whether or not a given general intuition is based on this ideal combination of exemplars or on an evidentially inferior mix.
It seems, then, that general intuitions should be provisionally considered as evidence for
their content, but not very strong evidence.  The general intuition “All Xs are Y” indicates that
Xs are quite often Y, but does not rule out exceptions to this regularity.  This type of evidence can
quite easily be defeated by counter-evidence such as specific intuitions about cases which violate
the rule.
Conclusion
To summarize this chapter:  our unconscious thought processes are largely associative
and our unconscious learning is based on association.  This learning can be shaped by innate
mental structures, such as our perceptual systems, or those that influence language learning, but
in the absence of strong evidence that such structures affect learning or judgment about a given
domain, we should assume that it is due entirely to associative processes.  Whenever two
experiences occur close together in time, a connection is formed between the contents of these
experiences.  This is an association.  The more times two things are experienced together, the
stronger the association between the two becomes.  When an association between two things is
strong enough, experiencing one will cause the unconscious mind to think of the other.  This
allows the unconscious mind to categorize objects and to make inferences.  Given sufficient
experiences with a type of thing, our unconscious mind comes to store data about that type of
thing in abstract form; it forms an unconscious concept of that type of thing.  Unconscious
concepts are clusters of associations between properties that were often experienced together.
These unconscious concepts are used to make judgments; if some thing has a sufficient number of
traits that are associated with that unconscious concept, it will be judged to be an exemplar of that
concept, and if something is taken to be an exemplar of an unconscious concept, our unconscious
minds will typically infer that it has traits associated with the concept.  Unconscious concepts
might also be used to make judgments about types of things; these are general intuitions.  Our
unconscious minds form associations between the contents of all sorts of experiences – memories,
judgments, emotions, and so forth.  Any mental event which has X in its content is an experience
of X.  Often we are not aware of our experiences, or not aware that we are learning from them, or
not aware what it is we are learning from them.  Experiences need not have content which is, or
can be, expressed linguistically, which means that we can form associations between things
which we cannot easily express in words.  For these reasons, our unconscious judgments and
concepts need not reflect our conscious beliefs or judgments, and we will sometimes not be able
to have intuitions which reflect what we have unconsciously learned about the world, because our
unconscious minds cannot put this learning into words.
Chapter 3
Introduction
In the previous chapter we discussed the normal functioning of our intuitive faculties:
how the unconscious gathers information, what sort of information it gathers, how it puts that
information together, and how this leads to the formation of intuitions.  That general theory of intuitions presents intuitions at their best; to the extent that our intuitive faculties deviate from this normal functioning, their evidential status is worse off.  And, unfortunately,
there are a number of factors that can affect the normal functioning of our intuitions.  In order to
have a robust and useful understanding of when our intuitions are and are not a good source of
evidence in philosophy, we need to understand these factors that degrade the normal functioning
of our intuitive faculties.  That will take up most of this chapter.  I do not expect to give an
exhaustive list of these, both because it is likely that some of them are as yet undiscovered or
unverified, and also because not all of them are relevant to the practice of philosophy.  I will
instead focus on those most relevant to philosophical methodology.  At the end of the chapter, I
will give some thoughts on the general upshot of some of this discussion for the practice of
philosophy.
Directional Motivation
Our motivations affect how our mind works in ways that sometimes affect the intuitions
we have.  There are three main motivations discussed in the psychological literature on
motivational effects on thought:  accuracy motivation, which is the motivation to draw the correct
conclusion, regardless of what it is; structural motivation, which is the motivation to come to
some conclusion (relatively quickly), regardless of what it is; and directional motivation, which is
the need to come to a specific conclusion, usually due to self-interest (terms from Kunda, 1990,
and Kruglanski & Freund, 1983).  To illustrate each, consider my visit to the doctor.  During the
checkup, the doctor asks me questions related to my health, and I am motivated to give accurate
answers so that the doctor has the information they need to properly diagnose me.  Before the
checkup, however, when I am waiting nervously in the waiting room, another person in the
waiting room starts talking to me and asks me a number of rather probing questions; here, I just want to give them some answer to satisfy their curiosity and get them to stop bothering me, but I do not much care whether what I tell them is accurate.  At the end of the checkup, the doctor tells me that she needs to wait for test results to diagnose me.  At this point, I try to predict what the diagnosis will be, but because I want to think all is well for me, I am inclined to predict that the doctor will give me a clean bill of health.  This example illustrates accuracy motivation, structural
motivation, and directional motivation respectively.
It has been demonstrated that these motivations affect the way we think, and that
structural and directional motivations mostly affect it for the worse (accuracy motivation can
cause problems under some circumstances, however (Kunda, 1990)).[41]  They affect the amount of
thought and attention we give propositions (Kruglanski & Freund, 1983, Kruglanski & Mayselles,
1987, Ditto & Lopez, 1992) and they affect the type of thought processes we use in thinking
about propositions (Kunda, 1990).  I am going to focus my discussion on directional motivation
(the motivation to make some specific judgment) because it is more likely to cause bad intuitions
than is accuracy motivation, and because there is little research on structural motivation.
[41] For many years it was argued that we could not tell whether it was motivation affecting our thought formation or the existence of prior beliefs that caused the motivations.  For example, say you have a friend who may or may not be telling you the truth.  You tend to interpret what he says so as to make him seem honest.  This may be because you are motivated to think he is honest, or it may be that you became his friend because you thought he was an honest person, and thus you think you have good reason to interpret his behavior as you do.  There is now good evidence that motivations themselves affect thought processes (Kunda, 1990).
In study after study, when different subjects are given the same information, the
judgments of people with directional motivations vary systematically from those of people given
no directional motivation.  Directional motivation causes people to judge other people as more
likable, more competent, or harder working; to judge themselves to have different personality
traits (depending on which traits they are told are more desirable); to judge some studies to be
stronger than others; or to differently interpret the causes of events, or their likelihood.[42]  How
does this happen?  Reasoning methods that generate the desired conclusions tend to be chosen
over others (Kunda, 1990), information consistent with the conclusion one wishes to draw is more
easily recalled than information inconsistent with that conclusion (Kunda, 1990, Sanitoso, et al,
1990), and information inconsistent with that conclusion seems less plausible (Ditto & Lopez,
1992, Kunda, 1990).  These factors are likely linked, because memories that are more quickly and
easily recalled seem more plausible than those which are not (Schwartz & Vaughn, 2002,
Baranski & Petrusic, 1998), so since information consistent with a desired conclusion is more
quickly recalled than information inconsistent with it, it will also seem more plausible.
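The chain just described – motivation speeds recall of consistent information, and quicker recall is read as plausibility – can be caricatured in a few lines.  Every value and name below is invented for illustration; nothing is taken from the cited studies.

    # A cartoon of the recall-plausibility chain: directional motivation
    # speeds recall of conclusion-consistent memories, and faster recall is
    # experienced as greater plausibility.  All values are invented.

    def recall_latency(memory, desired_conclusion, bias=0.5):
        base = 1.0  # made-up baseline, in seconds
        if memory["supports"] == desired_conclusion:
            return base - bias  # motivation-consistent: recalled faster
        return base + bias      # motivation-inconsistent: recalled slower

    def plausibility(latency):
        return 1.0 / latency    # quicker recall feels more plausible

    memories = [{"claim": "he was rude once", "supports": "dislike"},
                {"claim": "he helped me move", "supports": "like"}]
    for m in memories:
        p = plausibility(recall_latency(m, desired_conclusion="like"))
        print(m["claim"], round(p, 2))
    # The motivation-consistent memory returns faster and so feels stronger.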
While directional motivation affects the way we reason, the question we are interested in is whether it is likely to affect our intuitions.  I will argue that directional motivation can affect our
intuitions both directly and indirectly.  Let’s look at its direct effects first, and consider three different types of intuitions:  intuitions about recalled things, or things we know about mostly from recollection; intuitions about novel things (things we experience for the first time at about the time we have the intuition); and intuitions about general statements or propositions.  For
example, the intuition “I could not have had parents other than my actual parents,” is an intuition
of the first kind, the intuition “This person in this thought experiment I have just heard could not have had parents other than those s/he actually has,” is of the second kind, and the intuition “No one could have parents other than those they actually have,” is of the third kind.
[42] See Kunda, 1990, for an extensive review of the literature on this topic.  See also Freund & Kruglanski, 1993, Kruglanski & Freund, 1983, Ditto & Lopez, 1992.
Given that
directional motivation affects recall, it should clearly be able to affect intuitions about recalled
objects, or intuitions about objects that we mostly know about due to recall.  Directional
motivation should also be able to affect some of our general intuitions, those based on exemplars
(see Chapter Two), since which exemplars are recalled can be affected by directional motivation,
and this could affect the judgment made based on those exemplars.
What about judgments about novel objects?  These are judgments that are based on
information presented (more or less) in the present, and not stored in (long term) memory, so it is
not as obvious that directional motivation’s effect on recall will have an effect on these intuitions.
However, studies have shown that directional motivation can affect intuitions about novel things.
Most studies of the effects of directional motivation on judgment involved recall of information
from long-term memory, but there have been some that have not.  In two experiments, subjects
were told that they would work on a project with a stranger (Neuberg & Fiske, 1987, experiments
1 and 3).  Some subjects were told that they would be rewarded for the project based on their
group’s results, and some were told that they would be rewarded based on their individual results.
Linking the reward to group performance was supposed to give subjects a motivation to like their
partner, as they would need to work well with them.  Subjects were given a brief description of
the partner, which took them an average of 75 – 95 seconds to read, and then were immediately
given a questionnaire whose first question was on the likeability of their partner.  Rated
likeability was higher for those subjects who had a motivation to find the partner likeable.  Since
the subjects were reacting to information they had been given a very short time before, this was
not based on biased recall from long-term memory, but should be based on biased interpretation
of information given to them at that moment.  Other studies involving similar paradigms –
subjects given an incentive to like a person, given information about that person, and asked about
their likeability – with somewhat greater but still short time spans before testing (4 minutes in one
case (Neuberg & Fiske, 1987, experiment 2), 7.5 minutes in another (Berscheid, et al, 1976)), similar results were found:  subjects’ ratings of the other person were affected by motivation.  The experimenters in these studies did not test whether or not these results were due in part to unconscious processes, but it is quite likely that they were:  first, because it is hard to believe that the subjects systematically decided to try to like the stranger in each case, and, second, because the subjects did not seem to have much time to think about the material presented to
them.  It seems that subjects paid differing amounts of attention to aspects of the object being
categorized in accordance with their motivation, or dismissed or reinterpreted information that
was inconsistent with their preferred view (an effect which motivation can cause).  Either way,
their directional motivations affected the information they took in, and this in turn should have an
effect on intuitions.  Thus, directional motivation can affect unconscious judgment both about
objects being recalled, and of objects experienced for the first time.
Directional motivation can also affect intuitions in another way.  If one is systematically
motivated to make certain judgments – e.g. that what they do is good – this will lead to
associative learning that will affect future judgments.  For example, if I am forced to steal to
survive, and motivated to judge my own actions as good, or at least as acceptable, then I am
likely to come to associate stealing with moral acceptability (at least, more so than the average
person).  If this happens often enough – for example, if I often think about the one time I had to
steal – then I should form a fairly strong association between stealing and moral acceptability,
such that in the future when I hear about others stealing I will be less likely to judge their actions
harshly.  This does not require directional motivation to directly affect intuitions in order to affect
them at all.  Even if directional motivation only directly affected non-intuitive judgments, by
affecting the reasoning methods one consciously employs, for example, it could lead to intuitive
judgments consonant with one’s motivations.  This is especially likely to occur when the concept
or category that is the subject of the intuition is one that the intuitor is likely to think about or
make judgments about often, as this will allow for there to be a great many conscious judgments
affected by the directional motivation, which in turn will lead to stronger associations, and thus to
intuitions in line with the motivation.
Having a directional motivation does not mean that one’s intuitions will always turn out
as one hopes.  One cannot always ignore evidence against a judgment, especially if that evidence
is obvious and strong.  Research shows that, when subjects are given a motivation to think X, but
are aware of evidence against X, their ability to think X is limited by that evidence – they are less
confident in their judgment, their judgment is not as strong (in the case of non-binary judgments, like “so and so is nice” versus “so and so is very nice”), or they fail to make the judgment at all:
Prior self-concepts similarly appear to constrain directional shifts toward desired [judgments
about] selves. Prior beliefs about how performance reflects ability appear to constrain motivated
perceptions of the ability of a person or of a sports team. And prior beliefs about the strength of
scientific methods appear to constrain motivated evaluations of scientific research. The existence
of such constraints indicates that prior knowledge is accessed in the process of arriving at desired
conclusions; the existence of bias implies that not all relevant prior knowledge is accessed.
[citations omitted] (Kunda, 1990, p. 493)
The evidence cited for this is mostly limited to judgments based on memory search, but this idea
should apply to intuitions about novel objects as well.  When one encounters a novel object that
one is motivated to categorize in a certain way, it should be hard to completely ignore features of
that object that weigh against that categorization if they are sufficiently salient.  Directional
motivation will make the most difference in borderline or difficult cases, since the things to be
categorized in these cases will not have as many salient cues giving their proper category
membership.  It will also make a difference when it exists over long periods of time, as biased
judgments about borderline cases will lead to change in the “shape” of an unconscious concept,
moving its borders and making once central cases open to biased judgment.  This is of special
concern to philosophers using intuitions as counter-examples, because these intuitions are often
about borderline or odd cases (which are the most likely to get overlooked by the originator of the
thesis being challenged).
The Illusory Truth Effect
Let’s say I am asked, “Intuitively, does responsibility imply freedom?”  Coincidentally, I
have recently been told that it does.  Will having been told this affect my response to the
question?  The short answer is “yes.”  The term illusory truth effect refers to the effect that being
exposed to a proposition, whether or not it is believed at the time of exposure, has on future
responses to questions about that same subject; this exposure makes people more likely to
respond as if the proposition were true.[43]  So, having heard recently that responsibility implies
freedom, the statement is more likely to feel true (intuitively) when I am asked about it.
In numerous studies, subjects have reported that familiar sentences seem more true than
unfamiliar sentences, all other things being equal (e.g. when subjects did not consider other
evidence of the truth or falsity of these sentences at the time of judgment).[44]  This occurs even
when the subjects were told the sentences were false on first exposure (Begg et al, 1992, Gilbert
et al, 1993).  For example, in one study subjects were exposed to statements from a variety of
sources and told which sources were telling the truth and which were not; when tested later on
which were true and which false, they rated 66% of the true statements as true, 59% of the false
statements as true, and 45% of the novel statements (ones they had not heard before) as true
(Begg, et al, 1992, experiment 1).
[43] I have borrowed this term from Begg, et al, 1998, who use it in a somewhat more limited way than I do.
[44] See Begg, et al, 1992, for an overview.
Similar studies have been done with sentences involving nonsense words (although subjects were told that these words were words in Hopi) (Gilbert, et al,
1990). The use of nonsense words in these studies eliminates any possibility of subjects using
logic or prior belief to answer the questions asked of them; they could only answer based on their
recall of the initial sentences or based on what felt true, that is, on intuition.  Other studies have
shown that exposure to sentences one is told to be false can affect future judgments beyond ones
of the sort “Is this sentence true?”  For example, one study involved exposure to sentences about
an imaginary criminal.  Later, subjects were asked to sentence the criminal; sentencing was
harsher when the subjects had been exposed to (known) false sentences that cast the criminal in a
bad light (however, these effects only occurred when the subjects were not allowed to think much about these sentences on first exposure) (Gilbert, et al, 1993).
Mere exposure to a statement can cause people to later have intuitions that reflect a
seeming acceptance of the statement, even in the absence of belief in it.  As in the case of
directional motivation, these effects are significant but they do not completely dominate all other
causes of intuition.  In one study, ratings of truth increased from 45% to 59% based on familiarity
(Begg, et al, 1992); in another, subjects exposed to word lists gave wrong (but plausible) answers
from the list 21% of the time (Kelley & Lindsay, 1993); in one study of anchoring, the difference
between answers given by subjects not exposed to an anchor and subjects exposed to an anchor
was only about 10% of the anchor value (Wilson, et al, 1996).  These results show that the mere exposure effect affects intuitions, but that the effect of exposure is mediated by other considerations, such as knowing the limits of what a plausible answer to a question could
be.  Further, the effects of mere exposure can be eliminated by prior knowledge of the actual
answer to the question.[45]
[45] See, for example, Wilson, et al, 1996.
Verbal Overshadowing
The line between intuitions and judgments due to conscious processes can be blurry,
because it can be difficult to recall how a given feeling came about in ourselves, and even more
difficult to tell how one came about in another person.  Feelings that are reported as being
intuitions may in fact not be intuitions by the definition we are using.  I bring this up because the
problem I am about to discuss may not be due to unconscious processes generating unreliable
intuitions, but rather due to unreliable conscious judgments being reported as intuitions.  In either
case, though, this problem can affect the data we get when we gather intuitions.
It has been found that causing people to verbalize aspects of tasks that are normally performed non-verbally changes how they perform these tasks, and changes it for the worse; this is verbal overshadowing.  This is probably most easily understood by first considering some examples.
Some of the first research into verbal overshadowing was on face recognition (see Ryan
& Schooler, 1998, for an overview).  It was found that asking subjects to describe a face they had
previously seen significantly decreased their ability to tell that face apart from others (this
decrease was not just statistically significant, but fairly large – subjects went from almost 80%
accuracy to below 50%, Schooler, et al, 1997); similar effects were seen with color recognition,
(Schooler, et al, 1997), taste recognition (specifically with wine tasting, Melcher & Schooler,
1996) and music recognition (cited in Melcher & Schooler, 1996).  Studies have also been done
with preference judgments; subjects asked to list their reasons for making preference judgments,
or to say how important different facts were to their preference judgments made worse judgments
of preference than subjects who performed these tasks without verbalization (“worse” here being
measured in several ways, among them by their long-term happiness with their choice (Wilson &
Schooler, 1991)).  Subjects asked to articulate their thought processes showed decreased ability to
solve “insight problems” – these are problems where one does not carefully and consciously work
through a process to the solution, but rather where the solution suddenly comes to one seemingly
out of nowhere (Schooler, et al, 1993).
One thing all of these studies have in common is that they involved tasks that people
normally perform partly without conscious thinking or deliberation.  Face recognition, or taste or
sound recognition, in large part is automatic and non-reflective; we see/hear/taste something and
simply recognize it, typically without thinking through the similarities and differences to what we
have seen/heard/tasted before.  Similarly, preference or evaluative judgments are quite often made partly without deliberation or conscious thought – aspects of these judgments may be conscious and deliberative, but other aspects are automatic, emotional, or unconscious.  The
same is true for solving insight problems (by definition).
Another commonality all of these have is that the subjects who experienced verbal
overshadowing were put in a position where they were very likely to use conscious processes to
perform these tasks, even when not explicitly asked to do so.  In face recognition studies, for
example, subjects were not told to pick out the face that looked like the description they gave.
Rather, subjects were asked to describe a face, and later asked to recognize it; it is to be expected that subjects who had given a description would employ different strategies to recognize the face than they otherwise would (i.e. they would use their description to some extent), but this was not asked of them by the experimenter.  Subjects in preference studies were asked to think about and
articulate their reasons for forming their preferences, which should cause them to try to bring the
reasons behind their preferences to mind, which we often do not do at all.  Subjects in studies of
insight problems were asked to explain their thought processes as they thought, rather than
allowed to let the answers to these problems come as they normally do, in an unarticulated flash
of insight.  Interestingly, there is no verbal overshadowing problem in studies where subjects
verbalize about tasks that they normally perform using conscious thought processes, such as
memorizing lists or solving analytic (rule-based) problems (Ryan & Schooler, 1998).  There are
two points here:  one is that it is not verbalization itself that is problematic, it is that
verbalization is likely to cause subjects to rely on different methods to perform tasks than they
normally do; the second is that this can occur even when subjects are not explicitly told to use
these different methods.
A final point is that in many of these studies there remains a mental process that could, in
principle, allow subjects to perform the given task well, but subjects are not relying on it.
Verbalization does not, for example, affect one’s visual experience, or even one’s initial
perception of the face seen, since the description occurs after the initial seeing of the face.
Intuitive judgments occur automatically and effortlessly, whether we want them to or not, and
whether or not we also make conscious judgments about these tasks; what verbal overshadowing
does is cause subjects to use a conscious, non-intuitive process to perform a task, rather than
reporting the results of the unconscious abilities they still possess.
Another interesting fact about verbal overshadowing is that experts in a domain – wine
experts, for example – seem unaffected by it.  This raises the question of whether or not
philosophers will be affected by verbal overshadowing.  I will return to this question at the end of
this chapter, when I discuss how we might alter our gathering of intuitions given the various
effects I have been discussing.
Context Effects
A subject’s intuitions about a thought experiment can be affected by previously (and
recently) presented thought experiments.[46]
 This is an instance of a regularly observed
phenomenon in psychology called order effect; the order effect is that giving information, or
asking questions, in one order can generate different responses in subjects than administering
them in a different order.  Intuitions can also seemingly be changed by the way in which they are
elicited, which is called the compatibility effect.[47]  For example, asking subjects to choose one bet
over another will generate different (and contradictory) results from asking them to give the
lowest price for which they would be willing to sell the bet (Shafir, 1998); the majority of
subjects in one experiment said they would prefer bet A over bet B, yet subjects asked to price the
two bets priced B higher than A (showing they would be less willing to give up B).
Compatibility and order effects are thought to be due to context making one aspect of the
situation or thought experiment more salient than it would normally be; judgments then tend to be
more affected by that aspect than otherwise.  This process is generally unconscious, and is
resistant to conscious efforts to avoid it (Slovic, et al, 2002).
These effects can be relatively unproblematic in cases where the feature subjects paid
more attention to (due to context) was one they ought to be paying more attention to than they
normally did, but that is rarely the case when soliciting philosophical intuitions, as we typically
do not know what features one ought to be basing one’s intuitions on.  These effects can also be
used to our advantage in constructing thought experiments; if we wish to test whether or not a
certain feature makes a difference in intuitions (or what sort of difference it makes), we can
make these features more salient by asking questions in a certain way or by preceding the thought
experiment with others that emphasize them.  This sort of subtle manipulation of attention can be
more effective than overtly asking subjects to pay attention to a feature, which may cause them to
engage conscious judgment processes.
[46] For example, Swain, et al, forthcoming, or Shafir, 1998.
[47] The compatibility effect also refers to changes in non-intuitive responses based on how questions are asked.
Some Miscellaneous Biases
There are numerous other biases that affect unconscious judgment, and that have been
experimentally studied, but that are likely to affect intuitions about only certain domains or types
of cases.  Some of these, such as systematic biases in thinking about probabilities versus
frequencies, may not make a difference in the sorts of intuitions that most philosophers care
about.[48]
 I will briefly discuss a couple of biases that plausibly affect intuitions about specific
questions or domains in philosophy.  The point of this discussion is to highlight the importance of
doing further investigation into the psychological literature when one wishes to consult intuitions
in philosophy, since even a solid general understanding of how intuitions work may not be
enough to avoid the use of unreliable intuitions.
One interesting bias is hindsight bias:  when subjects know that an event occurred,
their estimations of how likely it was to occur rise, often to the point that the event seemed
inevitable (Schwartz & Vaughn, 2002).  Further, they often judge (contrary to fact) that they had
always thought it was that likely to occur.  This is most likely due to the fact that, once an event
occurs, its occurrence becomes much easier to recall.  Another bias with likely similar causes is
the emotional amplification effect:  subjects’ emotional reactions to events become stronger when
the cause of the event is seen as less normal (Kahneman & Miller, 2002).  For example, subjects
were asked to imagine two men who both missed their flights.  Both got to the airport 45 minutes late, but
one man’s flight had been delayed 30 minutes, so that he only missed it by 15 minutes, whereas
the other missed his flight by the full 45 minutes.  Subjects reported that the man who had
missed his flight by only 15 minutes would be more upset (Kahneman & Tversky, 1982).  The
emotional amplification effect seems to occur because the less normal cause is more easily
imagined not happening than the more normal cause; in other words, the alternative (the event not
happening) seems more likely (Kahneman & Miller, 2002).
[48] Although see Bishop & Trout, 2005, for an argument that these biases are exactly the sort of thing philosophers doing epistemology should be concerned with.
These two biases have important implications for the use of philosophical intuitions in
areas like ethics and the metaphysics of causation.  For example, I think that these biases are
responsible in large part for intuitions about the act/omission distinction and certain types of
moral luck (more on some of this in Chapter 4).  The emotional amplification effect causes us to
feel more strongly about the results of acts than of omissions, because (research suggests
(Kahneman & Miller, 2002)) it is easier to imagine an act not occurring than an omission.  Since
the key intuitive data that underlies making the metaphysical act/omission distinction is that acts
seem morally different than omissions, the emotional amplification effect gives us a reason to
think that this distinction is not a real metaphysical one.  Hindsight bias plausibly accounts for
intuitions about cases of moral luck of outcomes; thought experiments which generate moral luck
problems are always such that two agents performed very similar actions but caused very
different outcomes.  Due to hindsight bias, we may feel the agents knew (or should have known)
that their actions would cause the outcome they did, and this feeling may cause our moral
intuitions.  This, if true, dispels the problem of moral luck of outcomes, since these intuitions do
not conflict with the intuitive principle that no one is morally responsible for what is outside their
control.
Another philosophically interesting bias is the fundamental attribution error.  The
fundamental attribution error is the tendency of subjects to judge that a person has some character
trait based on their behavior, when that behavior is equally well explained by circumstances
beyond the person’s control, and subjects are aware of these circumstances:
Basketball players who are randomly assigned to shoot free throws in badly lighted gyms may, on
average, be judged as less capable than players who are randomly assigned to shoot free throws on
a well-lighted court; politicians who are randomly assigned to read pro-choice speeches may, on
average, be judged as more pro-choice than politicians who are randomly assigned to read pro-life
speeches; students who are randomly assigned to receive bad news may, on average, be judged as
more chronically depressed than students who are randomly assigned to receive good news.
[citations omitted] (Gilbert, 2002, 169)
It has been called, “as robust and reliable a phenomenon as any in the literature on person
perception.”[49]  This is clearly of relevance to intuitions in the domains of ethics, mind, and
philosophy of action.  In fact, Darren Dromsky has argued that this bias explains the intuitions
that give rise to the problem of moral luck (at least with regards to outcomes), and thus that these
intuitions can be safely ignored (Dromsky, 2004).  Various explanations have been given for the
fundamental attribution error; I think that it can best be explained by the nature of concepts and
unconscious category judgments.  The fundamental attribution error occurs when subjects
observe some behavior (of a person) that can be explained by circumstance or by one of that
person’s character traits.  They judge that the person has the character trait.  In these experiments,
we should expect that subjects have a stronger association between the behavior and the trait than
between the behavior and the circumstances, because the circumstances are usually such that the
subjects are unlikely to have previously encountered them much.  We rarely see people shooting
free throws on badly lit courts; we rarely see politicians being randomly assigned to read
speeches.  Thus, upon seeing someone miss a free throw, we are more likely to have the concept
BAD BASKETBALL PLAYER activated than PLAYING IN THE DARK.
[49] Gilbert, 2002, p. 169, quoting Quattrone, G.A., “Overattribution and Unit Formation:  When Behavior Engulfs the Person,” Journal of Personality and Social Psychology, 1982, v.42, 593-607.
Whose Intuitions Should We Use?
The existence of the biases and effects I have discussed brings up a number of questions
about how we should gather and use intuitions.  We obviously want to build our thought
experiments so as to avoid these biases as much as possible, and in interpreting results we need to
keep these effects in mind.  I have, for example, argued elsewhere that verbal overshadowing
raises problems for an approach to gathering intuitions suggested by Antti Kauppinen.[50]
 In what
remains of this chapter, I want to consider what these effects tell us about the question, “Whose
intuitions should we use when doing philosophy?”
The short answer to this question is, “Usually, non-philosophers.”  There are good
empirical reasons for this answer, many of which are implicit in the concern often raised about
intuitions:  that they are too “theory laden.”
My general theory of intuitions suggests that the intuitions of non-philosophers should
generally be no less reliable than those of philosophers.  Intuitions come from unconscious
learning, which comes from experience.  Philosophers and non-philosophers live in the same
world, and encounter mostly the same things and properties.  We should expect them to have
largely the same sorts of experiences, and to have these experiences with similar frequency.
Thus, we should expect philosophers and non-philosophers to have roughly the same raw
materials from which unconscious judgments can be generated.  There are differences between
the sorts of experiences philosophers and non-philosophers will tend to have.  Philosophers as a
group are more likely to think carefully about certain types of things, to have discussed these
things at great length, and to have developed theories about these things.  This makes
philosophers’ intuitions generally less trustworthy as evidence than the intuitions of non-
philosophers.
[50] See my “Ethical Intuitions, Expert Judgment, and Empirical Philosophy” (manuscript).
Consider the biases discussed in this chapter:  directional motivation, the illusory truth
effect, and verbal overshadowing.  Directional motivation exists when one has a motivation to
want to draw a specific conclusion rather than (or in addition to) the motivation to draw the right
conclusion.  It can affect intuitions both directly and indirectly.  At any given time, having
directional motivation can cause one to have a different intuition than one would have had if one
had lacked that motivation at that moment.  Over time, directional motivation can shape one’s
unconscious concepts (due to learning from intuitions directly affected by one’s motivation), so
that, even if that motivation were to disappear, one would continue to have intuitions that reflect
that motivation.  The illusory truth effect is the result of exposure to a proposition, including
thinking about it; this can cause one to be more likely to have the intuition that this proposition is
true.  Exposure can also affect intuitions indirectly.  Over time, repeated mere exposure effects can
shape one’s unconscious concepts so that these concepts reflect “belief” in the proposition one
was exposed to, even when too much time has elapsed for mere exposure to have a direct effect.
Finally, verbal overshadowing occurs when one is pushed to consciously reason through a normally
intuitively made judgment.  This negatively affects the reliability of the intuitive
judgment/decision.  It should also affect intuitions directly and indirectly; directly, by negatively
affecting intuitions when one tries to articulate the process behind them, and indirectly, by
negatively affecting numerous intuitions so as to shape unconscious concepts.
Philosophers’ intuitions are generally much more likely to have been affected by these
biases than the intuitions of non-philosophers.  Philosophers are more likely to have a career or
personal interest in the truth of a given theory, either because it is a theory they advocate (or
dispute), or because it is related to one they advocate/dispute and they can see this connection.
For many, potentially most, areas of philosophy, non-philosophers either do not have a stake in
what theory turns out to be correct or if they do they do not realize it because they have not
thought about the issues enough to see how they are connected.  This is not to say that non-
philosophers are less intelligent than philosophers, or less interested in deep questions, but
just to say that many of the questions philosophers study do not immediately and obviously
connect to questions that non-philosophers find compelling and have thought about at any length
(as we all realize when we try to explain and motivate our research to our friends and relatives).
Thus, when it comes to philosophical questions, philosophers are more likely to be affected by
directional motivation than non-philosophers.  Philosophers are also, obviously, more likely to
have talked about the philosophical questions that we try to explore using intuitions, and are more
likely to have theorized about the nature of the concepts or properties which we study using
intuitions, than are non-philosophers.  This means that we are more likely to have our
unconscious concepts affected by the illusory truth effect and verbal overshadowing either
directly, because we have talked about the subject recently, or indirectly, because our talking
about it in the past affected our intuitions then, which affected our unconscious concepts, which
affects our intuitions now.
Now, it might be argued that, at least with regards to verbal overshadowing, we need not
worry about philosophers’ intuitions.  Experts in a domain – wine experts, for example – seem
unaffected by verbal overshadowing.  Perhaps philosophers would be unaffected by verbal
overshadowing due to our expertise in the domain of philosophy.  The simplest explanation of
why verbal overshadowing does not affect experts in a domain is that these experts are verbal
experts:  they are trained to recognize and articulate their judgments and judgment procedures
(Gladwell, 2005, Melcher & Schooler, 1996).   Achieving this sort of verbal expertise requires
training of a sort – either explicit or implicit.  The kind of training it requires is a large number of
exposures to paradigm cases of X and Y (where “X” and “Y” are the terms one is to achieve
verbal expertise in using), and feedback about what terms ought to be used and whether or not
one used the correct terms.  I am dubious of the claim that many philosophers get this.  Most of us
spend most of our time thinking about non-paradigmatic cases, because these are the most likely
to have been overlooked by those we disagree with, and thus make the best counter-examples.
Think of the number of papers written about trolley cases, which are quite strange and atypical
cases, and how few papers (comparatively) are written about giving to charity out of the goodness
of one’s heart (a paradigmatically good act) or about cold-blooded contract killing (a
paradigmatic wrong act).  Further, we get relatively little univocal feedback about our
philosophical judgments, partly because there is so little agreement in philosophy, partly because
philosophy is often quite solitary, and partly because we spend so much time thinking about odd
cases that are the subject of disagreement.  I do think that it might be possible to train people to
avoid some level of verbal overshadowing when it comes to philosophical intuitions (see again
my “Ethical Intuitions, Expert Judgment, and Empirical Philosophy,”) but I am dubious of the
claim that most philosophers have this training at this moment.
Thus, we see that, in general, philosophers and non-philosophers are equally likely to have
the experiences necessary to build the unconscious concepts needed to have accurate intuitions
about what we want to study, but that philosophers are more likely than non-philosophers to have
their intuitions affected by biases.  This means that non-philosophers are, generally, a better
source of intuitions than philosophers.  It is not the case that every non-philosopher will have
reliable intuitions about every subject.  Non-philosophers will also be subject to biases of the sort
discussed above, so that we need to be careful when choosing non-philosophers to query.  In
addition, we can never be sure that any given person is not subject to some odd quirk of history or
character such that their intuitions about a given subject are off.  For this reason, we should
always solicit intuitions from large groups.  This will automatically correct for any non-
systematic errors individuals make.
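To see why aggregation washes out non-systematic error, consider a minimal simulation sketch (my own illustration; the accuracy figure of 0.65 and the group sizes are hypothetical values chosen only to make the point vivid).  If each subject independently reports the correct intuition with probability 0.65, the majority verdict of a large group is far more reliable than any individual verdict:

    import random

    # A toy model:  each subject independently has the correct intuition
    # with probability p (errors are non-systematic, i.e. independent).
    def majority_accuracy(p=0.65, group_size=101, trials=10_000):
        correct_majorities = 0
        for _ in range(trials):
            correct_votes = sum(random.random() < p for _ in range(group_size))
            if correct_votes > group_size / 2:
                correct_majorities += 1
        return correct_majorities / trials

    print(majority_accuracy(group_size=1))    # roughly 0.65:  a lone subject
    print(majority_accuracy(group_size=101))  # near 1.0:  majority of a large group

Note that this only holds for independent errors; a bias shared across the whole group (of the sorts discussed above) survives aggregation untouched, which is why care in choosing whom to query remains necessary.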
There may be subjects about which philosophers will have better intuitions than non-
philosophers.  Perhaps there are aspects of experience to which philosophers are more sensitive
than non-philosophers.  Or perhaps the fact that philosophers tend to have terms that capture
distinctions or relations that ordinary speech does not, allows philosophers to access certain
unconscious concepts linguistically in ways that non-philosophers cannot.  Although these are possible, I
see reasons to be dubious.  If philosophers are differently sensitive to experiences than non-
philosophers, this is probably due to differences in how we consciously allocate our attention.
This difference would probably be due to our theories of what is and is not important.  If that is
the case, then our intuitions might only be circular evidence for our theories.  These issues need
further research.  Given the current state of our knowledge, we have empirically based reasons to
be more doubtful of the philosophical intuitions of philosophers than of non-philosophers; absent
empirical evidence contradicting these, when we do use intuitions as evidence for and against our
theories, we should use the intuitions of non-philosophers.
Conclusion
In this chapter we have discussed problems that will affect intuitions but that fall outside
a systematic account of the general processes behind intuitions.  When we use philosophical
intuitions, we need to do our best to make sure that these problems do not arise.  When we use
philosophical intuitions, we want a subject’s intuitions to accurately reflect what
they have learned (in an unbiased manner), not to be thrown off by attempts to articulate
unconscious judgment processes, and not to tell us merely what the subject wants to believe or
what they have recently been told.   Of course, even when intuitions do reflect the best type of
unconscious judgment and learning, they may not be good evidence; we still need to consider
what a general theory of intuitions tells us about how these intuitions come about and about their subject,
and ask if the process which gives rise to them can generate good evidence about that subject.
But when the factors discussed in this chapter are affecting a given intuition (or are likely to be
doing so – we typically cannot know for sure) then that intuition is not going to be good evidence
for its content.
Chapter 4
Introduction
In Chapter 1 I argued that there are a number of compelling reasons to worry about the
use of intuitions as evidence about philosophical questions, and that assessing the role intuitions
ought to play in philosophy in light of these worries requires what I call a “general theory of
intuitions:”  an empirically founded theory of the source(s) of intuitions in general.  In Chapter 2 I
presented and argued for a general theory of intuitions, and in Chapter 3 I presented some specific
biases and problems with intuitions that can arise in certain cases, but that do not plague
intuitions in general.  Given this new understanding of what intuitions are, how they work, and
when they will not work, we can return to the questions and concerns raised in Chapter 1 about
the role of intuitions in philosophy.  This chapter will be concerned with talking in a somewhat
general way about the extent to which intuitions can and should be used as evidence in
philosophy and with looking at what Chapters 2 and 3 tell us about the concerns raised in Chapter
1.  I say “somewhat general way” because I do not think there is a simple answer to the question
“should intuitions be used as evidence in philosophy;” some intuitions should be used as evidence
about some philosophical claims, but to a large extent determining when they should and should
not be used must be done on a case by case basis.  For that reason, I can only outline the sort of
thought process one should go through to make that sort of determination.  In the next two
chapters I will illustrate this thought process by applying it to some specific intuitions.
The most fundamental worry about intuitions is:  can they be a reliable source of
evidence about philosophical issues?  I will turn my attention to this question before addressing
the more specific arguments against the use of intuitions I discussed in Chapter 1.  To answer this
question, we have to consider two different types of things one might use intuitions to
learn about:  concepts, and what I will call “things themselves.”  Some philosophers have seen
themselves as doing (at least some of the time) conceptual analysis:  striving to understand our
concepts.  A conceptual analyst in epistemology might ask, “When we think about knowledge,
what are we talking about?  What are the necessary and sufficient conditions on our application of
the concept KNOWLEDGE?”  On the other hand, some philosophers (or the same philosophers at
other times) are interested in the study of “things themselves:”  things that are not concepts.  In
epistemology, one might say, “I am not interested in what our concept of knowledge is, I am
interested in what knowledge itself is.”  Rather than getting into a discussion of the respective
merits and demerits of either approach to philosophy, let’s consider what our empirically founded
understanding of intuitions tells us about the use of intuitions for each philosophical project.
Intuitions and “Things Themselves”
Let’s consider some facts about the capabilities of our unconscious minds.  These
capabilities are the product of the ways in which our unconscious minds learn.  The unconscious
is superior to the conscious mind at processing information in some ways.  In fact, when we use
conscious faculties to make judgments that are usually made unconsciously, the results are often
inferior to what the unconscious mind would have produced (e.g., Wilson & Schooler, 1991).
Our unconscious minds can track relationships that occur over longer periods of time, or are more
complex, than our conscious minds can track without mechanical assistance (Lewicki, et al,
1992).  The reason for this is that the unconscious learns from all of our experiences – they all go
to form or strengthen associations – and so tracks and uses vast amounts of information,
essentially unlimited by the limits on working memory that hamper conscious reasoning.  Our
unconscious is also sensitive to information that our conscious minds will not normally notice.
Since it learns from all experience, our unconscious detects, processes, and makes judgments
based on information which we consciously consider irrelevant, or that we are not conscious of.
This is important because this information can actually be relevant to judgments without our
knowing it.  Our unconscious can use information for which we have no words, whereas this can
be quite difficult for our conscious minds.[51]
Since the unconscious operates automatically and
effortlessly, it is less sensitive to distraction and to other mental demands than is the conscious
mind – it keeps noticing and processing information even when the conscious mind is
overwhelmed (e.g. Betch, et al, 2001, Dijksterhuis, 2004).
So, our unconscious minds gather and aggregate huge amounts of information, more than
we can consciously (at least without mechanical assistance), and much of which we have little
conscious access to.  For this reason, we should expect it to be superior to our conscious minds in
making a number of types of judgments.  And it demonstrably is.  Some specific examples of
judgments that the unconscious excels at making are social judgments and judgments about our
own mental states:  as discussed in Chapter 2, our unconscious ability to notice patterns often
exceeds our ability to do so consciously, our unconscious minds make better preference or
evaluative judgments than our conscious minds (in certain situations); further, we are able to
ascertain the feelings of others, predict their behavior, and judge when they are honest without
knowing how we do so (Ambady & Rosenthal, 1992), and our unconscious has access to
information about certain beliefs, desires, motivations, and opinions that our conscious has no
direct access to (Wilson, 2002).
Intuitions are the products of unconscious judgment, based on unconscious learning.
Since all of the above cited facts are facts about how our unconscious minds learn and make
judgments about the external world, they all shed light on intuitions about things themselves.  It
seems that intuitions should often be an excellent source of evidence about certain types of facts
about things themselves:  those that can be learned through experience.  Are any of these of
philosophical interest?
[51] See for example Murphy, 2002, or research on infants such as Campos, et al, 1978.
Yes.  There are a number of philosophical subjects that we should expect to be able to
learn about partly through experience.  Let’s consider three:  responsibility, intention, and
causation.  None of these can be directly experienced, but each of these manifests itself in facts
that we experience, and by looking at our experiences we should be able to learn about each of
these.  And, in fact, learning about each of these demands the sorts of abilities that our
unconscious minds possess to a greater degree than our conscious minds.  Attributions of
responsibility, or intentionality, or causality, should be sensitive to subtle, hard to detect,
distinctions between people, or mental states, or physical relationships.  Both causation and
responsibility come in degrees, and the amount of each which should be attributed most likely
depends on a multitude of small and easily overlooked factors present in different situations.  The
ability to make accurate judgments about these three should require the ability to put together vast
amounts of minute detail and notice patterns that occur over long periods of time.  These are
exactly the sorts of things our unconscious mind is capable of doing better than our conscious
mind.  Thus, we have reason to think that in some cases intuitions about responsibility, intention,
or causation will be based on real and important distinctions that we would be likely to overlook
using only our conscious faculties.  Parallel arguments can plausibly be made for numerous topics
in metaphysics, ethics, epistemology, the philosophy of action, and the philosophy of mind.
This is by no means an argument that all intuitions are good evidence about all things
themselves.  There are without a doubt philosophical subjects that experience can tell us little to
nothing about; abstract objects are probably a good example.  And there will be intuitions that, for
one reason or another, are bad evidence due to the sorts of biases I discussed in Chapter 3, or
other flukes of our minds.  Given the fact that intuitions are just the result of a great deal of
information aggregation, we should not treat them as the sort of trump card they are often seen as
in philosophy; even a very good information aggregator is only as good as the information it has
been given, and there is no guarantee we have been exposed to the right information to rule
conclusively on any given case.
Figuring out whether or not intuitions are good evidence about a thing itself requires two
things:  one, a general theory of intuitions, and two, some knowledge about that thing itself.
Given these, there are a series of stages one must go through in determining the quality of
evidence intuitions about the thing itself give us.  The first stage is to investigate whether or not
intuitions are even potentially a good source of evidence about that thing, given what is known
about it. In order to determine if intuitions are a potential source of evidence about some thing,
one must consider what information the unconscious mind must be able to detect and use in order
to give any insight into that thing, and then see, based on the general theory of intuitions, whether
or not the unconscious mind can detect and use this information.  For example, in Chapter Six I
talk about how it is that the unconscious mind might learn about causation, assuming that
causation is (or is generally tracked by) counterfactual dependence.  Counterfactual dependence
seems to involve facts about what things would be like in other possible worlds, and one might be
concerned that our unconscious minds have no epistemic access to other possible worlds.  In
order to show that our intuitions about causation could possibly be evidence about causation, one
would have to explain how it is that our unconscious minds could learn about counterfactual
dependence from experience with only the actual world.  If this had turned out to be impossible,
and if we knew (or had good reason to believe) that causation involved counterfactual
dependence, then we would have had to say that intuitions about causation could not be evidence
about causation.
Once we have determined that the unconscious mind can use the sort of information it
needs to use for intuitions to be evidence about some thing itself, we need to determine if the
unconscious mind does use this sort of information, and how well it uses it.  This will tell us
which, if any, intuitions about that thing itself we should use as evidence.  This requires us to use
our general theory of intuitions to figure out how sensitive our unconscious minds are to the
appropriate information, what sorts of biases there may be in the information available to us, and
what sort of mistaken information the unconscious is likely to learn from.  I illustrate this process
in Chapters Five and Six.  In each chapter, I discuss the evidence that our unconscious minds
misuse information relevant to learning about a certain phenomenon, and see if this misuse can
give rise to mistaken intuitions. This process will help us to determine if the unconscious
concepts we study using intuitions really give us insight into the concepts or properties we are
trying to study in all cases, some cases, or none.
If we go through these steps and have reason to think, based on what we know about
some thing itself and what we know about how our unconscious mind works, that intuitions could
be evidence about the thing itself, and that they are reliable enough for use, then we can and
should use intuitions about that thing itself when philosophically investigating it.
This no doubt raises the following question in some of your minds:  how do we come to
know enough about a thing itself to tell what kind of information our minds must be sensitive to
to give us good intuitions about it?  This better not involve intuitions about that thing itself, since
we do not yet know if these intuitions are trustworthy.  This is a larger question than I can
satisfactorily answer here, but I will throw out a couple of ideas.  One way would be to start with
central, indisputable cases; the indisputability of these cases might be due to convention.  For
example, we might all agree that kicking an innocent puppy for fun is bad, not because we have
the intuition that it is bad but because we all agree that badness is whatever property kicking an
innocent puppy and other central cases share.  We might also learn about a thing non-intuitively
by looking at the role it plays in our lives.  If knowledge is anything, it is something that is
supposed to govern our mental lives in certain ways.  If we can determine some of these ways
non-intuitively, we can learn something about knowledge.  There are other miscellaneous ways of
learning about things:  for example, we can learn that something is valuable by learning that some
people value it.
I do not want to dismiss this question of how we can non-intuitively learn about things
themselves in order to evaluate whether or not we can intuitively learn about things themselves,
as I think it is a challenging question.  But it seems to me that only a few philosophers think that
all knowledge of things themselves rests entirely on intuitions, so the claim that we can learn
something about things themselves non-intuitively is relatively uncontroversial.  Given this,
answering the question “Should we use intuitions as a source of evidence about this thing itself”
becomes a partly philosophical and partly psychological question.
Intuitions and Conceptual Analysis
A discussion of the use of intuitions for conceptual analysis is in many ways an inversion
of the discussion of the use of intuitions to learn about things themselves.  While the claim that
intuitions can tell us about things themselves is somewhat controversial, and it requires a decent
amount of arguing to show how intuitions can tell us about things themselves given what we now
know about them, it is generally taken as a given that intuitions are good evidence about
concepts.  In fact, conceptual analysts may see themselves as more or less immune to the sorts of
worries I have raised about the use of intuitions, and thus may think my project irrelevant to their
practice; this is because many see conceptual analysis as a prototypically armchair practice, to be
conducted in an entirely a priori manner.[52]
 So, in this section rather than arguing that we should
take intuitions more seriously as a source of evidence, I am going to argue that we should be a bit
more concerned about them as evidence with regards to conceptual analysis.
My argument (in brief) is:
1.  Our intuitions are either caused by or potentially influenced by our unconscious minds.
2.  Our unconscious minds a) can and do ignore information that is apparent to our conscious minds, b) use information that is not apparent to our conscious minds, c) ‘see’ information that is not actually present, and d) make mistakes about the information that they are given.
3.  We cannot analyze concepts well using intuitions if we do not know when (or if) our unconscious minds are doing a) – d).
In order to have a chance at determining this, we need the help of my general theory of intuitions.  Premise 1 has already been
extensively argued for earlier, so I will not say anything more about it here.  Premise 2 is also a
consequence of my general theory of intuitions, but I think it is worth a quick review.
Why believe claim a), that our unconscious minds sometimes ignore information which is
apparent to our conscious minds?  This is a consequence of the associative model of unconscious
learning and judgment.  A given piece of information, call it I, can only be used in an
unconscious judgment if it activates an association we have previously made.  This can only
happen if we have previously had an experience like the experience we now have of I.  If we have
never experienced anything like I before, I will activate no associations now and consequently be
ignored; if we have experienced I before, but in a different mode than we now experience it, it
will now activate no associations and consequently be ignored.  For example, imagine that I am
presented with a thought experiment about some action, and part of the information given to me
in the thought experiment is that the action is supererogatory.  If I have never had any experience
with supererogation, then I will have no associations with supererogation in my mind, and my
unconscious will be unable to use the given information (I might also have had just a little such
experience, but not enough to form strong enough associations).  Alternately, perhaps I have thought
about supererogation before, but I have always used the phrase “above and beyond the call of
duty.”  Consequently, although I do have some associations relevant to the information I am
given in this thought experiment, these cannot be activated by what I am told (unless
someone rephrases what I am told).  In both of these versions of the example, it does not matter if
I completely consciously understand that supererogation is relevant to the judgment I am asked to
make; because of the lack of the appropriate associations, my unconscious mind can make no use
of this information.
[52] See Fumerton, 1999, for an example of an argument based on the claim that conceptual analysis is an a priori practice.
The inverse of claim a) is claim b):  our unconscious minds also draw upon information
that our conscious minds do not notice.  This is a clear consequence of the associative model:
since our unconscious minds can learn from information we do not notice, or do not consciously
employ, we should expect some unconscious judgments to reflect this.
Claim c) is that our unconscious minds sometimes ‘see’ things that we consciously do not
because those things are not there to be seen.  We have all likely experienced the phenomenon of
‘seeing’ what we want (or expect) in the world around us, even when the world does not in fact
conform to our desires or expectations and those who lack those desires or expectations do not
draw the conclusions we do.  We should typically attribute this to unconscious processes, since
people who are perfectly sincere and well intentioned can still experience this phenomenon.  This
can occur when we have strong associations between what we are currently experiencing and
some thing else.  In some of these cases, we may infer that this other thing is present, even when
it is not.  This is seen in some studies on hidden covariation detection (cited in Chapter 2) (Hill, et
al, 1989, Lewicki, et al, 1989).  In these studies subjects unconsciously learned that some X and
Y were correlated, so that where there was an X, they should expect to find a Y.  Once the
subjects had learned the correlations, they were given data in which X was present without Y.
Subjects reacted to this data as if Y was present:  subjects who were to spot the position of a
number indicated that it was in the place it should have been based on the pattern they had
learned, even though no number was there; subjects who had previously been exposed to data
which correlated likeability with leg length continued to rate people as more or less likeable
in accordance with the length of their legs even when they were given no information about the
person’s personality, and so forth.  Over time, subjects’ unconscious ‘belief’ in the correlation
between X and Y increased, as demonstrated by their responses, which increasingly conformed to
the pattern they had initially been taught.  The only explanation for this is that, when presented
with X, their unconscious minds also ‘saw’ Y in the data they were given, despite it not being
there, and learned from this ‘experience.’[53]
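The self-reinforcing character of this process can be made concrete with a toy model (my own sketch, not a model from the cited studies; the learning rate and threshold are arbitrary illustrative values).  Once the X–Y association is strong enough, the system ‘fills in’ Y whenever X appears and treats the filled-in Y as a genuine co-occurrence, strengthening the association further:

    LEARN_RATE = 0.1         # arbitrary illustrative value
    FILL_IN_THRESHOLD = 0.5  # association strength at which Y gets 'filled in'

    weight = 0.0  # strength of the X -> Y association

    # Training phase:  X and Y genuinely co-occur.
    for _ in range(10):
        weight += LEARN_RATE * (1.0 - weight)

    # Test phase:  X now appears *without* Y.
    for _ in range(10):
        if weight > FILL_IN_THRESHOLD:
            # The 'hallucinated' Y counts as a co-occurrence, so the
            # association keeps growing even though Y is never presented.
            weight += LEARN_RATE * (1.0 - weight)

    print(round(weight, 3))  # higher than at the end of training

On this sketch, the association’s continued growth during the test phase mirrors the subjects’ increasingly pattern-conforming responses.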
Claim d) – that our unconscious minds sometimes misinterpret the data they are given –
is true for similar reasons.  Given the right associations, our unconscious mind can make
categorization judgments that are inappropriate.  For example, imagine I am told that Smith has
come to believe that there is a sheep in a field even though they are, in fact, looking at a rock.
Perhaps I have a great deal of experience of people making mistakes about what they see due to
eye problems, or inattention.  If I have these associations, I might unconsciously take Smith to be
a bad witness overall, or to have poor vision, even if this is not stated in the thought experiment,
or even if it is explicitly or implicitly ruled out in the thought experiment.
Given what we know about how our unconscious minds learn and judge, we cannot
expect them to use all or only the information we consciously believe they are presented with, and we
can also expect that our unconscious mind will sometimes use information presented to it in
surprising ways.  This is premise 2 of my argument.
[53] I am not trying to make any claims about perception here; it may be that subjects had no perceptual experiences of data that was not there.  It is enough for my point that they reacted as if they had.
Premise 3 says that we cannot analyze concepts well using intuitions if we cannot tell
when our unconscious minds are doing what premise 2 claims they can do.  In order to
understand why this is we need to think about how we analyze concepts using intuitions.  It is
uncontroversial that conceptual analysis often relies on intuitions about specific cases.  As a
starting point for the analysis of some concept C, we typically consider some scenario and ask if
some thing T that is part of the scenario intuitively falls within the extension of C.  Once we have
in this way established some things that are and are not within the extension of the concept, we
can then begin the actual analysis of C.  This involves trying to determine the traits that are
necessary and sufficient for concept membership by looking at the traits shared by those things
that are members of concept C and not had by those which are not members.  Naturally, when we
do this we consider traits that are apparent to us in a certain way; let’s use the term obvious traits
to refer to those traits of a thing described in a thought experiment which are either part of its
description or that we would consciously infer are had by it from that description.  There are a
limited number of conclusions we can draw about C from considering intuitions about T and its
obvious traits.  If T intuitively falls within the extension of C, the natural conclusion is that none
of T’s obvious traits are incompatible with being a member of C, and that some subset of T’s
obvious traits are sufficient for membership in C.  The question then becomes, what are those
traits that are sufficient for being a member of C?  To find out, we might find another thing, AT,
which is also intuitively a member of concept C, and that differs from T in some way(s).
Naturally, we conclude that no trait not shared by both T and AT is necessary to be a member of
C, and that some subset of T and AT’s shared obvious traits are sufficient for membership in C.
If T is intuitively not in the extension of C, then the natural conclusion is that T lacks a trait
necessary to be a member of C (or has some obvious trait which is incompatible with being a
member of C), and that no subset of T’s obvious traits are sufficient to be a member of C.  To
investigate further, we can compare T with a different thing (DT) that is in the extension of C; we
know that some obvious trait had by DT but not by T is necessary for being a member of C.
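The inference pattern just described can be summarized in a short sketch (my own illustration; the cases and traits are hypothetical stand-ins, loosely modeled on a knowledge-style example).  Given intuitive verdicts about cases and lists of their obvious traits, the ‘natural conclusions’ amount to a set intersection and a set difference:

    # Intuitive verdicts about hypothetical cases, with their obvious traits.
    members = {                             # intuitively in the extension of C
        "T":  {"justified", "true", "believed"},
        "AT": {"justified", "true", "believed", "perceptual"},
    }
    non_members = {                         # intuitively not in C
        "NT": {"true", "believed"},         # e.g. a lucky guess
    }

    # No trait outside this intersection is necessary for membership in C.
    candidate_necessary = set.intersection(*members.values())

    # Some subset of the traits a member has and a non-member lacks must be
    # doing the work, on the natural interpretation.
    candidate_sufficient_pool = members["T"] - non_members["NT"]

    print(candidate_necessary)        # {'justified', 'true', 'believed'}
    print(candidate_sufficient_pool)  # {'justified'}

Premise 3’s worry, developed next, is that both computations take the trait lists at face value; if the unconscious ignores obvious traits or responds to hidden ones, the lists themselves misdescribe what the intuitions are actually tracking.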
The reason premise 3 is true is that all of these natural conclusions are undermined by the
fact that our unconscious minds can make use of ‘traits’ that are not obvious to our conscious
minds – either because they do not detect them or because they are not there – and can make
mistakes about or ignore obvious traits.  If T is intuitively taken to be a member of C, this may be
because our unconscious minds have ignored some obvious trait had by T that is incompatible
with C; thus, we cannot conclude that none of T’s obvious traits are incompatible with C
membership.  It may also be that T’s membership in C is due not to T’s obvious traits, but due to
hidden traits; these are traits that our unconscious minds detect (which may or may not actually
be present) but that are not readily apparent to our conscious minds.  Thus, we cannot conclude
that any subset of T’s obvious traits is sufficient for membership in C.  If T is intuitively not a
member of C, this may be because our unconscious minds have overlooked some obvious trait of
T’s which is necessary for C membership (so that our unconscious takes T to lack this necessary
trait even though it does not); alternately, T may have a hidden trait which is incompatible with
membership in C.  We cannot conclude that no subset of T’s obvious traits are sufficient for
membership in C, nor that any of T’s obvious traits are incompatible with membership in C.  If
we have two things, one of which is intuitively a member of C and one of which is not, we cannot
conclude that some obvious trait had by one and not the other is necessary for membership in C,
because one may lack a hidden trait necessary for C membership, or because our unconscious
minds may have mistakenly judged that one lacked a trait necessary to be in the extension of C.
Finally, for two things that are both intuitively members of C, we cannot conclude that no trait
not shared by them is necessary for C membership, because our unconscious minds may be
mistakenly attributing traits to one or another that they obviously lack.[54]
[54] I said earlier in this paper that my argument is relevant even to philosophers who do not attempt to construct necessary and sufficient conditions via analysis (as long as they use intuitions).  Let me briefly explain why that is, now that I have given the meat of my argument.
Some philosophers who do not think concepts can be analyzed into necessary and sufficient conditions for concept membership still think that we can determine necessary conditions for concept membership (Williamson may be an example of such a philosopher with regard to ‘knowledge’ (see Williamson, 2000), although it is hard to tell whether or not he considers himself to be discussing the concept of knowledge or knowledge itself).  Since I point out that the issues discussed in premise 2 undermine our ability to determine what traits are necessary for concept membership, my argument is clearly relevant to these philosophers.
Other philosophers may not be interested in necessary conditions for concept membership at all (they may, for example, accept a view of concepts as ‘family resemblance categories’ for psychological or Wittgensteinian reasons).  These philosophers may still be interested in determining traits that are relevant to concept membership.  The issues I raise apply to their work as well; if obvious traits can potentially be ignored by our unconscious minds, then the fact that some obvious trait is correlated with intuitive membership in C does not tell us that it is actually relevant to membership in C.  Likewise, the fact that some trait is obviously lacking in many things which are members of C does not give us very good evidence that it is not relevant to C membership, since that trait may be a hidden trait of those things.
I understand that the above paragraphs were quite abstract, so let me give an example of
the way in which we might misinterpret intuitions in conceptual analysis if we do not look to an
empirical understanding of how the mind works:  the intuitions that generate the problem of
resultant moral luck.  To stay simple, this discussion will focus on a single, specific psychological
bias that might affect these intuitions, rather than something from my general theory of intuitions.
The problem of resultant moral luck stems from the fact that, while we generally tend to think
that ‘people cannot be morally assessed for what is not their fault…’ (Nagel, 1979), intuitions
about specific cases often conflict with this general maxim.  To generate the problem,
philosophers who write on moral luck use thought experiments that describe people who by
stipulation perform identical actions with identical intentions, but which result in different
outcomes due to forces outside of the agents’ control.  For example, a thought experiment might
describe two people who both drive home somewhat drunk (to exactly the same degree), in
exactly the same circumstances, with exactly the same mental states, with one running a person
over and the other not due merely to luck (this is taken from Nagel, 1979).  In response to these
thought experiments, we typically have the intuition that one person is to be morally evaluated
differently than the other.  There is some disagreement in the literature as to exactly what moral
concept this is supposed to shed light on, but for simplicity’s sake let’s say that we have the
intuition that one person is more blameworthy than the other.  This is supposed to tell us
something about our concept of blameworthiness – namely that it conflicts with the general
maxim about assessing people for what is outside of their control, and that blameworthiness can
be influenced by sheer luck.  However, this claim is based on the natural interpretation of our
intuitions:  since the only obvious trait that differs between the two situations described in the
thought experiments is the outcome of the action, we naturally conclude that having a certain type
of outcome is necessary for ascriptions of the concept ‘blameworthy,’ and that no description of
an action that does not contain the outcome of the action is sufficient for application of the
concept.
What can psychological research tell us about these conclusions?  People have an intuitive
bias that is known as hindsight bias (discussed in Chapter 3).  To repeat my earlier discussion,
this bias causes people, once they know the outcome of an action or event, to intuitively believe
that the outcome was more or less inevitable, and that prior to the action or event people could
have or did know that the outcome was inevitable (Schwartz & Vaughn, 2002).  This bias can
affect intuitions about moral luck in several different ways, which I will illustrate through
discussion of the drunk driving example I cited above.  First, hindsight bias is likely to make it
intuitively seem that it was almost inevitable that the driver who ran the pedestrian over would do
so, and almost inevitable that the other would not.  This is an example of the unconscious
attributing hidden traits to the two drivers, and also ignoring their obvious traits (the stipulation
that there was no difference in their situations).  Second, because of this bias it is likely that
people take the drivers to have been in different epistemic situations:  we unconsciously take the
driver who ran the person over to have been able to predict that he or she would do so.  Again,
this involves unconsciously attributing hidden traits to the drivers and ignoring their obvious
traits.  For this reason, we may not want to conclude that application of the concept of
blameworthiness is based on factors outside an agent’s control such as the outcome of their
actions; rather, it may be based only on the actions people take and their expectations of what the
outcomes of those actions will be.[55]
 Given this, there may not be a problem reconciling our
intuitions with general maxims about blame, since from the perspective of our intuitive faculties,
these drivers could control whether or not they drove home given how dangerous their driving
was and that they knew they were or were not likely to run someone over.  Deciding this is not of
course something we can do by consulting psychological research alone (nor based on the
evidence I have cited alone).  We still need to do conceptual analysis of the concept of
blameworthiness, informed not only by a better understanding of what affects our intuitions about
these specific cases but also by consideration of other intuitions and what we know about related
concepts.  However, this example does illustrate how an understanding of psychological research
can give us a way of understanding our intuitions and combining them with our general views
that we would not otherwise have had.
[55] See Dromsky, 2004, for a different take on what psychology tells us about moral luck.
What we see, then, is that not every intuition that seems to be about a given concept is
good evidence about that concept, and even when intuitions are good evidence about the concepts
we are trying to study, they may not always tell us what they seem to tell us.  In order to
determine when our intuitions are good evidence about the concepts they seem to reflect, and
what they in fact tell us about those concepts, we need an understanding of the sorts of
information that they are likely to be based on and the factors that affect them, the sort of
understanding that can only come from a general theory of intuitions.  Thus, when we do
conceptual analysis we first need to look at the intuition pumps we are using and try to determine
what in those pumps our unconscious minds are likely to use, what they are likely to ignore, what
they might misinterpret, and what they might add of their own accord.  Given a sophisticated
understanding of our unconscious minds, we can build intuition pumps that better test what we
really hope to test.
Responding to Concerns About the Use of Intuitions in Philosophy
Let’s review the main worries I raised in Chapter 1 about the use of intuitions as evidence
in philosophy.  They were:
- We do not know where intuitions come from or what factors affect them
- We know some are false and have no way of determining when they are reliable
- They can only tell us about concepts, and not things themselves (Kornblith, 2006)
- Their potential sources are not good sources of evidence (Cummins, 1998)
- If we can calibrate them, we do not need them, and if we cannot calibrate them, we should not use them (Cummins, 1998)
- We can tell that they are a bad source of data because they have not been fruitful so far (Bishop & Trout, 2005, Miller, 2000)
- Intuitions conflict, and we cannot tell which to use (Weinberg, et al, 2001, Swain, et al, 2008)
So far, we have addressed the first four worries (even where I have not mentioned Cummins or
Kornblith by name).  Let’s consider what my general theory of intuitions tells us about the others.
The Calibration Argument
Calibration is the process of determining how accurate an observation tool is, and what
sort of distortions in measurement or results it is prone to.  Calibration is necessary so that we do
not rely on a tool to give us results it is not capable of giving accurately, and so that we can
correct for errors.  If a tool is not calibrated, then we will not know when it is misleading us.
Intuitions are supposed to serve the function of an observation tool in philosophy.  They tell us
facts about the world such as whether a certain person knows something, or whether some action
is good or bad, and so forth.  If we are to use this tool, then we must calibrate it.  Cummins argues
that calibration requires checking the results of the observational tool against results obtained
without the tool that are known to be accurate.  If we calibrate using results obtained using the
tool, all we are doing is checking the consistency of the tool’s results, but not their accuracy.
According to Cummins, the only way to calibrate intuitions in philosophy would be to use results
obtained from non-intuitively based sources.  However, if we have non-intuitively based sources
of philosophical knowledge, then we do not need intuitions.  If intuitions could be calibrated, they
would be unnecessary, and if they can’t be calibrated, then they shouldn’t be used.
Before I get to my reply to Cummins, I want to discuss other replies that have been made
to his argument.  The first reply is that we might have partial non-intuitively based philosophical
knowledge, and want to use intuitions to extend this knowledge (let’s not worry for now about
how this would be possible[56]).[57]  We might know some facts about causation, or responsibility,
without intuition, but not all of them.  We could use the facts we know to calibrate our intuitions,
and once intuition is calibrated (assuming it turns out reliable), we can learn more about causation
or responsibility or whatever using it.  The proponents of this idea see calibration as occurring
within a domain in philosophy.  That is, if we can calibrate some of our intuitions about fairness,
for example, then this gives us license to use our other intuitions about fairness (to the degree the
calibrated ones checked out properly).  However, calibrating intuitions about fairness does not
give us any basis to use intuitions about natural laws, for example; we would first have to
calibrate intuitions in this domain.
[56] One way this could be possible would be some form of stipulation of some of the extension of a term.  We might say, “I don’t know what the nature of causation is, but the term ‘causation’ will refer to whatever these things have in common:” followed by a partial list of situations we wish “causation” to refer to.
[57] This idea arises in an unpublished paper by Weinberg, et al, 2005.
This points out the main flaw in this form of calibration.  If the accuracy of our
philosophical intuitions in one domain is not supposed to tell us about the accuracy of our
philosophical intuitions in another domain, then why does the accuracy of some intuitions in one
domain give us reasons to think that other intuitions in that domain will be accurate?[58]  It may
turn out that intuitions in a domain tend to be accurate together, but this is an empirical question
that we can only answer by learning about how our minds work.  We have good reasons to
suspect that accuracy about some intuitions within a domain does not confer accuracy on other
intuitions within that domain.  Some of the interesting philosophical debates that revolve around
intuitions start from disagreement on just a few key cases, despite large-scale agreement on other
cases.  In other words, the two debating parties agree in most of their intuitions about the domain,
but they disagree on some crucially important cases.  There are numerous examples of such
debates in ethics.  We all tend to agree on most ethical issues:  murder is typically bad, as is lying
and betrayal; helping the unfortunate is good; etc.  However, when we get to the very difficult
cases, intuitions start to diverge.  We disagree, for example, on whether a right to life always
trumps the right to control one’s body.  If we could calibrate ethical intuitions overall by checking
them against certain known cases, then it would seem that both sides of these debates have
accurate (or reliable) intuitions.  Of course, this is certainly possible, since calibration doesn’t
guarantee reliability.  But it points out the lack of utility of this form of calibration.  It fails to give
us a form of calibration that will resolve conflicts of intuition, and resolving conflicts of intuition
is one of the goals of studying intuitions.  The interesting cases of conflicts of intuitions occur
when either two different people have different intuitions, but share many others (metaethicists
who disagree on the amoralist case are examples of this), or where one person has two conflicting
intuitions (moral luck is an example of this).  In the first case, the two philosophers who disagree
will typically look equally reliable.  Calibration will not help us to resolve this conflict.  In the
second case, if many of the person’s intuitions about the domain turn out to be reliable, calibration
will not help them to resolve their personal conflict of intuitions.
[58] Of course, it gives us some reason, but not a very strong one.
This does not mean that Cummins’ argument succeeds.  It relies on the unstated
assumption that philosophical intuitions are sui generis:  that they are different in kind from, and
generated by a different process than, other intuitions.  If philosophical intuitions are not sui
generis, we can calibrate them by studying ordinary intuitions and determining how they occur,
and when they are and are not reliable.  To see why this might work, consider an example
Cummins gives in his paper.  In making his calibration argument, Cummins talks about
calibrating telescopes:  we calibrate a telescope by looking at something we already know about
and seeing if the telescope tells us what it ought to.  If it does, presumably we can properly use
the telescope to look at things we do not already know about – if we know the telescope tells us
about nearby mountains, then we are justified in using it to look at something like the moon.  If
this works for telescopes, it ought to work for intuitions:  we can calibrate them using information
not from the domain we are interested in (the moon), but based on information about how they
work in another domain (mountains).  By studying how intuitions work in non-philosophical
domains, we can learn what sort of data they are likely to be sensitive to and what data they are
likely to ignore.  We can learn what factors make them more or less accurate and how they do so.
We can then consider what sort of data we need them to be sensitive to if they are going to give
us good evidence about specific philosophical questions (since this will vary from question to
question) and how likely those factors are to affect intuitions elicited about these questions.  As
we have already seen, we have good reasons to think that this process will allow us to calibrate
and use intuitions in a variety of philosophical domains.  We have good reasons to think that
intuitions about, for example, causation or intentionality will be reliably accurate, because we
know how these intuitions come about, since we know how intuitions work in general, and we
can see that intuitions that come about in that way should be good evidence about these subjects.
The Last Two Worries
My responses to the last two concerns – intuitions have not been fruitful, and so are not a
good source of data, and they conflict, and so are not a good source of data – are quite similar.
Let’s consider the first worry first:  if intuitions can potentially tell us so much, why has
philosophy done so little?  The answer to this may be that we have not been using only the right
intuitions.  Psychological research shows that intuitions can go astray (in predictable ways).  If
we use any and all intuitions as evidence, we are likely to become deadlocked as we try to
reconcile conflicting intuitions when one side of the conflict is potentially not very good
evidence.  However, understanding the psychological mechanisms behind intuitions may allow us
to determine when intuitions go astray, and to use only those that have not.  To see this, consider
how one might reply to experimental philosophy’s claim that intuitions conflict either cross-culturally or based on circumstances.
I will focus on one of the most famous pieces of experimental philosophy:  the work of
Jonathan Weinberg, Shaun Nichols, and Stephen Stich on epistemic intuitions (2001).  They
advance the following claims:  intuitions about knowledge and justification tend to vary from
culture to culture, and this undermines our reasons for using intuitions about knowledge and
justification as evidence for epistemic theories.  They argue that when intuitions about
epistemology differ between groups, we have no good reason to choose one set of intuitions as
evidence over the other.  If we have no principled way to choose between the intuitions, and
intuitions are supposed to be the basis for our theory, we have no way to choose between a theory
based on one set of intuitions or the other.  It is inappropriate to choose a philosophical theory
arbitrarily, or based on provincial considerations (e.g., we are more accustomed to one theory),
especially when that theory is normative, as are theories of justification or knowledge.  Thus,
differences in intuitions about epistemology between groups are problematic for those who wish
to found their theories on intuitions.
The evidence Weinberg, Nichols, and Stich give that there are cultural variations in
intuitions comes from experiments in which they presented versions of various classical
epistemological thought experiments (such as Gettier cases) to subjects from different cultural
backgrounds, specifically East Asians and Westerners, and to subjects of different socio-
economic status.  They found statistically significant variations in reactions to some of these
thought experiments.  From this, they concluded that there are differences in intuitions between
the groups, and thus that basing theories of knowledge or justification on intuitions is
problematic.
Consideration of psychological findings on cultural differences undermines Weinberg,
Nichols, and Stich’s argument.  Research cited favorably by Weinberg, Nichols, and Stich
indicates that it may be possible to reconcile these differences in intuitions, or to reasonably
prefer one set of intuitions to the other, given sufficient understanding of the underlying
psychology.  According to them, “Richard Nisbett and his collaborators have shown that there are
large and systematic differences between East Asians and Westerners on a long list of basic
cognitive processes including perception, attention and memory.” (Weinberg, et al, 2001)  The
existence of differences in cognitive processes between the groups Weinberg, Nichols, and Stich
tested opens up the possibility that these differences may have caused differences in interpretation
of the thought experiments used, just as I argue above that hindsight bias might cause intuitors to
interpret thought experiments about moral luck in ways their creators did not intend.  If this is the
case, it may be that when these differences are controlled for or factored out, Asians and
Westerners agree about the thought experiments.  Alternately, the processes that generate one
culture’s intuitions may be systematically better at generating the intuitions in question; one
group may be prone to a bias that makes their intuitions less relevant to epistemology.  If either of
these were the case, then Weinberg, Nichols, and Stich’s results do not give us evidence of a
problem for the use of intuitions in epistemology.  Although I do not have the space to do an
exhaustive review of Weinberg, Nichols, and Stich’s results given findings on cultural cognitive
differences, by considering one example I can show that it is plausible that the discovered intuitive
differences are due to cultural cognitive differences, and thus show that this research would
benefit greatly by examining the psychology behind our intuitions.
According to the researchers cited by Weinberg, Nichols, and Stich, East Asians are
supposed to be more subject to hindsight bias than Americans, whereas American subjects are
more likely than East Asians to make what is called the “fundamental attribution error.”
(Norenzayan, et al, 2002).[59]  As discussed above, hindsight bias is the tendency, once one knows
how something turns out, to think that that outcome was more or less inevitable and predictable
from the outset.  The fundamental attribution error is the tendency to “make inferences about the
dispositions of others even when situational forces explain the behavior quite nicely.”  (Gilbert,
2002, p.169)  For example, in one study, “students who are randomly assigned to receive bad
news may, on average, be judged as more chronically depressed than students who are randomly
assigned to receive good news” (ibid).  These differences could explain differential reactions to
thought experiments such as Gettier cases.
[59] Weinberg, Nichols, and Stich cite Nisbett, et al, 2001, whereas I am citing Norenzayan, et al, 2002.  However, the two papers share two co-authors, and both refer to similar bodies of research.
What is typical of a Gettier case is that a person is described who uses a belief forming
method (such as deductive reasoning) which normally is a good one to use; they end up forming a
true belief, but because of details about their situation, this is due largely to luck.  Westerners
were more likely than Asians to say that the person described in Gettier cases only believed
something, rather than knew it.  Note that in Gettier cases there is a fact – believing something
true due to luck – which is perfectly well explained by a quirk of the situation the believer finds
themselves in.  A person prone to the fundamental attribution error, and thus likely to make
character judgments based on single events, might instead explain this fact as due to the
character of the believer and think that the person described normally relies on luck in their
reasoning.  How might this affect their intuitions?  It is plausible that the ways in which one
typically reasons and forms beliefs make a difference as to whether or not one knows any given
proposition they believe; this is the core of virtue epistemology.  If we are tacit virtue
epistemologists, then seeing someone as a person who normally relies on luck to form their
beliefs would give us reason to think that a specific belief they formed in this way would not
count as something they know.  However, if we saw that person as someone who did not
normally rely on luck to form true beliefs, then the fact that luck played a role in this case might
not entail a lack of knowledge (in this case).  Thus, Westerners’ greater tendency to say that
people described in Gettier cases do not know what they believe makes sense given their greater
tendency to commit the fundamental attribution error.
Likewise, cultural differences in intuitions about Gettier cases might also be due to
hindsight bias.  This bias makes people prone to think that the way things actually turned out was
inevitable and predictable from the outset.  In Gettier cases, one comes to form a true belief
through luck; however, if coming to this true belief was inevitable, or predictable, then it looks
less like the product of luck.  If the role of luck is part of why the believer in Gettier cases fails to
know what they believe, as many philosophers claim, then it makes sense that people who see
luck as less of a factor in the situation described also tend to think that the believer in question
really knows what they believe.  Asians’ greater tendency to experience hindsight bias might
explain why they have a greater tendency to intuit that people in Gettier cases really do know
what they believe.
This is only the sketch of an argument, and more research – both philosophical and
empirical – is needed.  However, this should be enough to make plausible the following:  cultural
differences in intuitions about Gettier cases may not be due to anything about how different
cultures see or understand knowledge, but rather due to how they interpret thought experiments.
These differences can potentially be predicted and controlled for, and given that the two biases I
have discussed seem to explain the differences in intuitions, it is plausible that intuitions do
largely agree cross-culturally.  Alternately, an argument might be made that we are not really
interested in the epistemic intuitions of people who tend to commit the fundamental attribution
error (or people who are prone to hindsight bias).  In either case, we can potentially resolve the
conflict in intuitions, given an understanding of the psychology behind them.  This does not mean
that we will resolve this conflict, or all such conflicts, but it means that we cannot use these
conflicts to argue against the use of intuitions without much more work.
Conclusion
We have seen that the general theory of intuitions gives us reason to think intuitions are a
good source of evidence about certain things themselves, and that determining when this is so is a
task that combines philosophical and psychological understanding.  We have seen that if we wish
to use intuitions as a source of evidence about concepts, we need to use an understanding of the
psychology behind intuitions to guide our construction and interpretation of thought experiments.
Finally, we have seen that the various worries raised against the use of intuitions in Chapter 1 can
be replied to, to a certain extent, once we understand how our minds generate intuitions, and that
we should not discard intuitions as a source of evidence based on these worries.  In the next two
chapters I will look at some specific uses of philosophical intuitions in order to give examples of
how an understanding of the psychology behind intuitions can be deployed in philosophy.
Chapter 5
Introduction
Here is a famous thought experiment: Smith has the justified belief that Jones will get the
job that the two of them applied for. He also has the justified belief that Jones has 10 coins in his
pocket. Based on these two beliefs, Smith forms the additional belief that the man who will get
the job has 10 coins in his pocket. Jones does not get the job, but Smith does. It turns out that
Smith had 10 coins in his pocket.[60]  Did Smith know that the man who will get the job had 10
coins in his pocket?  Here is another, about justification rather than knowledge:  Norman is
clairvoyant; he has some form of extra-sensory perception that, in the right circumstances, is
completely accurate.  However, he has no reason to think he is a clairvoyant, nor any reason to
think he is not, and he has no reasons to think clairvoyance is or is not possible. One day,
Norman’s ESP causes him to believe that the President is in New York City.  This belief is true,
although Norman has no reason to hold it other than his ESP.[61]  Is Norman’s belief justified?
[60] This thought experiment is adapted from Gettier, 1963.
[61] This thought experiment is from BonJour, 1985.
The above two thought experiments are prototypical examples of one of the standard
tools in the epistemologist’s toolbox.  These types of thought experiments attempt to elicit what I
will call specific case epistemic intuitions:  intuitions about whether or not a given epistemic
phenomenon (e.g. knowledge or justification) is or is not present in a specific scenario.  These
types of thought experiments are standard tools for epistemologists because specific case
epistemic intuitions are taken to accurately tell us whether or not epistemic phenomena really are
present in the scenarios described.  Relying on that assumption, specific case epistemic intuitions
are used to support theories that agree with them and criticize those that conflict with them, a
practice I am sure we are all familiar enough with.  In this chapter, I will use my general theory of
intuitions along with psychological data specifically relevant to epistemic intuitions to argue that
we should drastically limit this use of specific case epistemic intuitions.[62]  My argument will be
based on the claim that we have good reasons to be suspicious of the accuracy of certain
intuitions about whether or not epistemic phenomena are present or absent in a specific situation
and that we have a viable alternative to using them.
[62] My target is only intuitions about specific cases, not more general intuitions about epistemic properties such as “Knowledge is valuable.”
In making this argument, I want to do more than just make a point about the practice of
epistemology.  I also want to illustrate how we can generally apply our understanding of the
psychological mechanisms behind intuitions to questions of the following sort: “Can intuitions
about (philosophical subject) X be good evidence about X?”  As articulated in the previous
chapter, my general strategy for answering this sort of question has two steps:  first, using what
we know about subject X, figure out what our unconscious minds would have to be capable of to
generate good evidence about X (typically, what information they would have to be sensitive to
and how they would have to be able to use it); second, using what we know about how the
unconscious mind works, see if this is plausibly the case.
Before I give my argument, I want to point out that, given my general theory of
intuitions, specific case epistemic intuitions look like pretty good evidence.  Epistemology is
concerned, in part, with telling us how we ought to think.  One sign of a good way of thinking is
that its use is correlated with the production of true results.  This sort of correlation is, prima
facie, the sort of thing our unconscious minds ought to be good at spotting, which suggests that
we ought to use our specific case epistemic intuitions as evidence in epistemology.  However,
when we look more closely we will see that many specific case epistemic intuitions (and many of
the most interesting ones) are not in fact good evidence about how we ought to think and inquire.
The Arguments
Most contemporary epistemologists – internalists and externalists alike – acknowledge
that there is a strong connection between justification and truth, so that the traits that make a belief
justified must be truth-conducive, meaning, roughly, that beliefs with these traits are likely to be
true.  Laurence BonJour (1978) explicitly endorses this view, as does Ernest Sosa (BonJour &
Sosa, 2003), among others (see David, 2001, DePaul, 2001).  My arguments are aimed at
epistemologists who share this view, and I will talk for the rest of the chapter as if this view is
correct.  This view should not be surprising, since truth is an end that epistemic phenomena are
characteristically related to.  There is of course controversy between epistemologists about what
exactly “truth-conduciveness” means, what traits of justified beliefs are truth-conducive, and how
much truth-conduciveness is required for a belief to count as known or justified; in this chapter I
will remain agnostic about these debates and advance arguments that do not rely on any particular
solution to them.
Since having truth-conducive traits is a necessary condition for a belief’s being known or
justified, if our unconscious minds are going to accurately identify which beliefs are known or
justified, and produce intuitions that they are known and justified, they should be able to
recognize when beliefs have truth-conducive traits.  This, we should expect, involves our
unconscious mind accurately identifying traits as truth-conducive.[63]  Given my general theory of
intuitions, this ability must be learned.
[63] One might disagree, saying, “In order to identify beliefs as justified, our intuitions need only identify a belief as having some set of traits (whichever ones make them justified); they do not need to be able to identify these traits as being truth-conducive.”  If that were true, however, then it turns out to be (in a certain sense) coincidental that justification is tied to truth-conduciveness, since our unconscious minds do not consider truth-conduciveness when making judgments about justification.  Even if the traits that make beliefs justified are necessarily truth-conducive, it would be a coincidence that our intuitions pick up on them to identify beliefs as justified.  This is incompatible with the stance that justification is strongly tied to truth.
My first argument is that, given what we know about how our unconscious minds learn,
we should expect them to make serious mistakes in identifying traits as truth-conducive, mistakes
that will lead to mistaken intuitions about justification or knowledge.  They might identify a trait
as less truth-conducive than it in fact is, and thus claim that a belief with that trait is not justified
(or known) when it in actuality is, or they might over-estimate the truth-conduciveness of a trait,
and identify a belief as justified or known when it is not.  This may be what is happening in
Gettier cases; we mistakenly take beliefs described in these cases to have not been formed in a
truth-conducive manner, and this is why these beliefs intuitively seem not to be known.  My
second argument is that, given what we know about how our unconscious minds use what they
have learned, we should expect them to sometimes ignore stipulations in thought experiments
about truth-conduciveness.  They might take a trait (stipulated to be truth-conducive) to be non-
truth-conducive in a thought experiment because normally in that situation the trait would not be
truth-conducive.  This may be occurring in anti-reliabilist thought experiments such as that about
Norman (described in my opening paragraph); we take Norman’s ESP to be unreliable because
we have learned there is no such thing as reliable ESP, and that learning trumps what we are told
is the case.
Argument 1
Learning about truth-conduciveness
Our unconscious minds learn through experience.  Thus, they learn about the truth-
conduciveness of traits through experience.  This occurs in one of two ways.  One is learning that
a given trait is truth-conducive by seeing that its occurrence is correlated with (or causes) the
occurrence of true beliefs (or learning that it is not truth-conducive by seeing that its occurrence is
correlated with false beliefs).  The other involves learning that one trait is truth-conducive
through experience with that trait, and then inferring that another trait is truth-conducive
because it has some relevant commonalities with the first.  Both of these procedures require
correlating the possession of a trait with the truth or falsity of beliefs.  Let us consider some of the
ways we might come to discover these correlations.
Every time one forms a belief, one represents the content of that belief as true.  This gives
one reason to believe that the traits that belief has are truth-conducive – after all, as far as the
believer is concerned, they are had by a true claim.  The more beliefs one forms that have a given
trait, the more “evidence” one has that it is truth-conducive.  It should be apparent that this is not
a good way of learning about truth-conduciveness.  It is independent both of the actual truth-
conduciveness of traits and also of their reflectively judged truth-conduciveness.  Everyone uses
cognitive processes that are not actually truth-conducive.  Some examples of this are obvious and
easy to scoff at – 31% of Americans believe in astrology, for example.[64]  Other examples are less
obvious but even more pernicious; studies show that people in general tend to be overconfident,
and this is especially true of higher-educated professionals (Russo & Schoemaker, 1992, Montier,
2005).[65]  Even worse, everyone (philosophers included) uses cognitive processes that we
ourselves (in our more reflective moments) take to be not truth-conducive; we can all be self-
deceptive, or trust people who tell us what we want to hear, or intentionally overlook evidence
which goes against our views.  We use these processes out of habit or epistemic akrasia.  Since
we experience anything we consciously come to believe as true, no matter how we came to
believe it, experience will potentially teach us that actual (and known) bad processes are truth-
conducive, and we should expect it to lead us to overestimate the truth-conduciveness of even good
traits of beliefs.
[64] Taylor, 2003.
[65] This leads to another non-truth-conducive cognitive process, which is trusting expert judgments in certain fields (and not in others).  Experts in fields such as business (especially investing) tend to do no better than chance in making predictions, yet are widely seen as good to consult on business matters (Surowieki, 2005, Montier, 2005).  On the other hand, weathermen tend to be both extremely accurate in their predictions and also in assessing the accuracy of their predictions, but are popularly seen as almost completely useless (Surowieki, 2005, Montier, 2005).  If you want a good scare, look up studies on the predictive success of doctors as compared to their confidence in their own judgments (Montier, 2005).
Another way of learning about truth-conduciveness is by having intuitions that represent
newly formed beliefs as true or false.  As we have seen, intuitions are a type of experience that
can be associated with other experiences; thus, the intuition “This is true” had at the same time as
a claim is considered will cause that claim (and its salient traits) to be associated with truth.
When we reason to some belief (identifying it as having traits we consciously think to be truth-
conducive), we often have intuitions that agree or disagree with what reason tells us.
Unfortunately, our intuitions often misidentify true beliefs as false, because many of the
reasoning methods we use regularly generate counter-intuitive results (which is why we use them
rather than relying on intuitions).  This will lead to us learning false things about truth-
conduciveness.
As an illustration, consider what we might learn about the cognitive processes that
produce our beliefs.  Consider the range of important and reliable reasoning methods that
regularly generate counter-intuitive results – methods which say something is false which
intuitively seems true, or vice versa.  These include science, math, statistics, decision theory, and
logic.  It is well known that each of these reasoning methods generates some counter-intuitive
results about extreme or outlier cases – infinite sets, or travel at the speed of light, for example –
but each of them also generates counter-intuitive results about central, common cases.  Folk-
science goes wrong about Newtonian mechanics such as the behavior of levers, inertia, or
forces.[66]  Math learners make numerous intuitive mistakes about relatively basic algebra
concepts, such as canceling with complex fractions.[67]  Statistical reasoning and reasoning about
probabilities are other well-studied areas in which proper reasoning generates counter-intuitive
results; for example, people regularly have the intuition that X&Y is more probable than either X
or Y by themselves.[68]  Basic decision theory generates counter-intuitive results.  Studies have
shown subjects to sometimes prefer decisions that they know are less likely to result in good
outcomes to those which are more likely, even given the same costs and potential benefits for
both.[69]  Finally, certain logical errors, such as affirming the consequent and denying the
antecedent, are intuitively compelling to many subjects in many cases, and certain valid argument
forms, such as modus tollens, can generate arguments that can seem intuitively invalid.[70]
[66] Zietsman & Clement, 2005; Chi, 2005.
[67] Confrey, 1990.
[68] This is demonstrated by the famous Linda the bank teller experiment:  “We first presented the description of Linda to a group of 142 undergraduates at UBC and asked them to check which of two alternatives was more probable:  Linda is a bank teller. (T)  Linda is a bank teller and is active in the feminist movement (T&F)…  Overall, 85% of respondents indicated that T&F was more probable than T…”  (Tversky & Kahneman, 2002)
[69] Consider the following experiment, in which subjects were given a choice of two bowls from which to randomly draw a jelly bean: “The small bowl always contained 10 beans, one of which was red.  The large bowl always contained 100 beans, and anywhere between 5 and 9 of them… were red… [Subjects] would win $1 if they drew a red bean… the experimenter called attention to the respective probabilities in the bowls… 82% of subjects made one or more nonoptimal choices.”  (Denes-Raj & Epstein, 1994)  As evidence that these errors are intuitive errors, note that subjects in the study who made bad choices reported knowing that the odds were against them but that the worse decision looked better; some subjects who made good choices reported feeling that the bad choices were tempting, despite knowing that they were bad.
[70] Rader & Sloutsky, 2002; de Neys, et al, 2003.  For some of these examples one might argue that it is not decision theory, or statistics, or logic itself that is generating counter-intuitive results, but rather that there is something in how people encounter the information they are presented with, or how they interpret problems, that leads to their getting answers that are wrong from the standpoint of these reasoning methods.  For example, people might be confused by how some arguments are represented, and this causes them to give incorrect responses, or people might be misled by superficial facts about the data presented to them about decisions.  However, it does not matter why people get the wrong answers in these cases – whether they stem from an intuitive misunderstanding of logic or statistics, or from misperceptions of data – as long as in relatively normal circumstances people have intuitions that conflict with the results of proper reasoning.  (This concern was brought to my attention by Stephen Finlay)
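The probabilistic error in footnote 68 can be stated precisely.  For any claims X and Y, the probability calculus requires

    \[ P(X \wedge Y) \;\le\; \min\bigl(P(X),\, P(Y)\bigr) \]

so a conjunction can never be more probable than either of its conjuncts, and the intuition that T&F is more probable than T is thus guaranteed to be mistaken.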
The list given above is far from exhaustive, but it should demonstrate the following:
there are extremely important reasoning methods – mathematics, logic, statistics, scientific
reasoning – that generate counter-intuitive results.  These results are not just about odd or
outlying cases, but about common, simple, central cases in each domain.  This does not show that
all, or most, of our intuitions about the domains of these reasoning processes will be wrong, but it
does suggest that conflicts between intuition and truth will be fairly common and significant.
There are more general reasons to expect this as well.  Why would we develop and use elaborate
reasoning mechanisms for domains in which our intuitions were trustworthy?  We tend to simply
trust our intuitions generally in domains in which they are clearly reliable – social judgments and
judgments based on our preferences  – only doubting them when we have salient reasons to do
so.[71]  We should expect, as the examples in the previous paragraph illustrate, that we use
conscious reasoning when we either have no intuitions on a subject or we do not trust our
intuitions.  Thus, intuitive representations of beliefs formed due to complex reasoning methods
should represent these beliefs as false more often than they are actually false.  One way our
unconscious mind will learn how truth-conducive our cognitive processes are is to see whether
they produce true-seeming (i.e., intuitively true) beliefs.  This involves correlating their use with
the intuitively judged truth or falsity of their results.  This is a way of inaccurately learning how
truth-conducive these cognitive processes are.[72]
[71] See Ambady & Rosenthal, 1992, and Dijksterhuis, 2004, for discussion of the trustworthiness of our intuitions in several domains, and Wilson, et al, 2002, for discussion of our disregarding of intuitions when given salient reasons to do so.
[72] We can point out similar processes for learning about other types of traits; any traits that we normally use to identify the products of conscious reasoning as true will often be associated with falseness, because any traits we look to in order to form beliefs consciously, rather than intuitively, should regularly identify as true beliefs which are counter-intuitive.
The last method of learning I will discuss involves recall.  We can learn that traits are or
are not truth-conducive by discovering that some belief is true (or false) and recalling the traits
the belief had that led us to adopt it in the first place.  This is also a less than ideal way of learning
about the truth-conduciveness of traits.  Research shows that when events are surprising or
unpleasant we are more likely to look for explanations for them than when events are as expected
and/or pleasant (Wong & Weiner, 1981, Hilton, 1990).  Other research shows that subjects prefer
not to change beliefs they already have (Kunda, 1990).  We are much more likely to be surprised
when we find out that what we believe is false than if we find out it is true, and since we prefer
for our beliefs to be confirmed, discovering that our beliefs are false is also more likely to be
unpleasant than discovering they are true.  Together, these indicate that having our beliefs
disconfirmed is more likely to be surprising or unpleasant than having them confirmed, and thus
we are more prone to consider why we held a belief when it turns out to be wrong than when it
turns out to be right.  This in turn means we are more likely to discover how we generated a false
belief than a true one.  Because of this, the number of times we correlate traits with false results
should be disproportionate to the number of times they actually produce false beliefs.  This will
make traits look less truth-conducive than they actually are.
To illustrate, consider the following example (numbers are invented):  let’s say that when
we find out we are wrong, we recall the reasons for our belief 40% of the time.  When we turn out
to be right, we only recall the reasons for belief 20% of the time.  Consider a belief forming
mechanism that is 70% reliable (which is fairly trustworthy).  If we form 100 beliefs with this
mechanism, 30 of them will be false.  Of those, we will consciously recall that we formed 12 of
them through this mechanism.  Of the 70 correct beliefs, we will recall that we formed 14 of them
through this mechanism.  That means that we will learn that almost half of the beliefs formed by
this belief forming mechanism are false, which makes this mechanism look hardly more reliable
than flipping a coin, significantly worse than it actually is.
Let’s summarize what we have seen.  In order for our specific case epistemic intuitions to
be accurate, our unconscious minds need to be able to accurately recognize whether or not
specific traits are truth-conducive.  This requires that our unconscious minds learn whether or not
these traits are truth-conducive.  This will involve learning from experience.  We have seen
several ways of learning which are likely to give rise to error, by teaching us to under- or over-
estimate the truth-conduciveness of traits; although this is not an exhaustive list, these are some of
the main routes to acquiring this “knowledge.”  Thus, we have good reason to think that our
unconscious minds will make mistakes in assessing the truth-conduciveness of traits, which will
lead to mistakes in identifying whether or not beliefs are known or justified.
However, most of us never expected our intuitions to be right in every case; some degree of
error is acceptable.  In order to show that we should be concerned about the possibility of
mistaken specific case epistemic intuitions, I have to show that these are serious errors.  I will do
so in the next section.
Mistakes about truth-conduciveness are serious
If we do not expect our intuitions to be infallible, what is an appropriate standard of
accuracy to expect from them if we are to consider them good evidence?  Although I lack an
account of exactly what it takes for something to be evidence, I can give at least one standard:
nothing can be a good source of evidence if it is known to make errors that cannot be corrected
for.  If we expect evidence-producing-tools to make errors, and we know that we cannot correct
for these errors, then we are in a position where not only is there the mere possibility (in the sense
that anything is possible) that any piece of “evidence” they produce is wrong, there is also a very
real possibility of this, which undermines the justification we have for accepting the evidence.
What do I mean by “errors that can(not) be corrected for?”  To correct for an error in
some data is to prevent it from having a serious impact on the conclusions drawn from that data.
For example, if we knew that 10% of our specific case epistemic intuitions were wrong, we could
correct for this, because we would know that the small number of intuitions that conflict with
theories accounting for the rest of our intuitions were likely false and could be discarded.  If
we knew that a large percentage of intuitions (but not the majority) were inaccurate, and errors
were randomly distributed, we could correct for this by gathering the intuitions of large groups of
people.  If errors are non-randomly distributed, and we know how they are distributed, we can
correct for this by either avoiding the conditions that are likely to lead to error, or taking the
negations of inaccurate intuitions.  However, errors cannot be corrected for if they are
unpredictable and systematic.  These are errors that we can expect to be shared between subjects
(systematic) and thus non-randomly distributed, but whose occurrence, frequency, and type we
cannot predict.  These cannot be corrected for by looking at the intuitions of large groups,
because they are systematic, nor by avoiding certain cases or taking the negations of
answers, because their occurrence is unpredictable.  And because the frequency of errors is
unpredictable, we cannot know that we have a low error rate and correct for errors in the
appropriate way.  If we have reason to think that our specific case epistemic intuitions make
unpredictable systematic errors, we know they are not a good source of evidence.
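The difference between these kinds of error can be illustrated with a minimal simulation in Python (the setup and all numbers are hypothetical, purely for illustration):  polling a large group corrects randomly distributed errors, but a shared, systematic error survives aggregation.

    import random
    random.seed(0)

    TRUTH = True  # the fact the intuitions are supposed to track

    def majority_intuition(n_people, random_error, shared_bias):
        votes = []
        for _ in range(n_people):
            answer = TRUTH
            if shared_bias:                     # systematic: everyone errs together
                answer = not answer
            if random.random() < random_error:  # idiosyncratic noise, person by person
                answer = not answer
            votes.append(answer)
        return sum(votes) > n_people / 2        # the group's majority verdict

    # Random errors wash out in a large group...
    print(majority_intuition(1000, random_error=0.3, shared_bias=False))  # True
    # ...but a shared bias does not.
    print(majority_intuition(1000, random_error=0.3, shared_bias=True))   # False

And when errors are both systematic and unpredictable, we cannot even tell which of these two situations we are in, which is why no aggregation strategy can correct for them.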
Do we have reason to think this?  We know that there are forces that push people to
overestimate the truth-conduciveness of cognitive processes they actually use.  This should lead
to systematic errors, since people tend to use similar cognitive processes.  These errors should be
unpredictable, because we do not know the extent to which this will affect intuitions, nor do we
know how much overestimating the truth-conduciveness of one process will affect judgments
about another.  (Remember that one way to learn that a cognitive process is truth-conducive is to
learn about the truth-conduciveness of another and infer about the first from that.)  Similarly, we
know that there are forces that push people to underestimate the truth-conduciveness of complex
reasoning methods.  These should again produce systematic errors, because the studies that show
that these reasoning methods produce counter-intuitive results show that these results are
widespread in the population (these are not studies in abnormal psychology).  And, again,
because we do not know how strong this effect will be, or how it will be generalized from process
to process, these errors are unpredictable.  We also know that there are forces that cause people to
underestimate the truth-conduciveness of any cognitive process (due to biases in when people
search their memory).  These errors should be systematic as well, because people in general have
been shown to tend to have similar biases in when they recall explanations for events, and these
errors are unpredictable for the same reasons we saw for the previous types of errors.  Finally, a
further complication:  we do not know how all these different sources of error will interact.  They
may reinforce each other.  They may cancel each other out.  Since the level and kind of
interaction is unpredictable, this makes the frequency and types of errors that will occur
unpredictable.  We should expect at least some mistakes in our specific case epistemic intuitions,
as the result of inaccuracies in identifying cognitive processes as truth-conducive or not, and we
cannot correct for these mistakes.  For that reason, there is the very real possibility that a
substantial part of the foundation upon which our epistemic theories are based is wrong.
My argument does not require mistakes about the truth-conduciveness of our cognitive
processes to be radical.  It need not be the case that we mistake completely non-truth-conducive
processes for perfectly reliable ones, or take wholly reliable processes to be always inaccurate.
Nor need we make mistakes about the truth-conduciveness of a cognitive process in general.  It is
enough that we make mistakes about the degree of truth-conduciveness of our cognitive processes
and that we can make mistakes about a cognitive process’s truth-conduciveness in certain
circumstances only.  It is quite plausible that being justified requires the use of cognitive
processes that are more than just slightly-truth conducive (consider the question reliabilists face
about just how reliable a process must be for it to give rise to justified beliefs or knowledge – a
51% or even 70% reliable process might not be reliable enough).  And it is possible that acquiring
knowledge requires using even more truth-conducive cognitive processes.  So mistaking an 80%
reliable process for a 70% reliable one could have an impact on our epistemic intuitions.
What I have given so far is a fairly abstract account of why we should expect our
unconscious minds to make errors in judging the truth-conduciveness of cognitive processes,
errors that are of the sort which undermine the evidential status of intuitions based on such
judgments.  Later, I will illustrate how these various factors might affect our intuitions
(specifically, our intuitions about Gettier cases).  But first I want to give my second argument
against the use of epistemic intuitions, because this second argument will head off one objection
to my first, and so that I can apply both arguments to cases at the same time (for efficiency’s
sake).
Argument 2
My second worry about epistemic intuitions has to do with cases where we accurately
learn to assess the actual truth-conduciveness of traits, but are called upon to ignore that for the
sake of a thought experiment.  For example, in the Norman case cited above, we are called upon
to imagine a person who has reliable ESP.  This contradicts what most of us have learned through
experience, which is that there is no such thing, and that forming beliefs like Norman has is not
truth-conducive.  My concern is that, when it comes to intuitive judgments, learning often trumps
stipulation; our intuitive faculties will apply what experience has taught them, rather than the
“facts” described in a thought experiment.
Intuitions are generated by automatic and uncontrollable mental processes.  Unconscious
judgments occur automatically, whenever something we experience has been strongly associated
with something else, whether we want it to or not.  We also know that unconscious categorization
can contradict our consciously held beliefs, and ignore very salient information, when that
information does not activate any of our associations.  Based on this, when should we expect our
experience to be the basis of an intuition, rather than what has been stipulated to be the case in a
thought experiment?  For this to happen requires that there be a strong association, based on
consistent categorization in the past, between a certain set of traits and a certain type of judgment.
For example, if a belief described in a thought experiment has traits of a sort, or very similar to
a sort, that people have consistently classified in the past as non-truth-conducive,
then we should expect them to automatically judge it to be non-truth-conducive, regardless of
how its truth-conduciveness is described in the thought experiment.
This may seem like a more radical criticism of intuitions and thought experiments than I
intend.  After all, thought experiments often describe odd circumstances, and stipulate that facts
are not as we normally take them to be.  Am I claiming that none of these can be trusted?  Hardly;
thought experiments that describe situations unlike those we have ever encountered in our lives –
for example, thought experiments from the personal identity literature about teleportation – cannot
be expected to trigger unconscious judgments which contradict the facts they describe, because
we lack past experiences with situations of these sorts and thus the sorts of associations I am
describing.  Thought experiments which describe situations like ones we have previously
encountered, and which ask us to take them differently than we normally would, can also be
trusted when they give us enough information to trigger the right sorts of associations.  For
example, consider a thought experiment about a person who commits a crime but who is
supposed to be a good person; even though we might normally consider criminals to be bad
people, if the thought experiment gave us enough facts about this person – that their crime was
justified by sufficiently described circumstances – we can expect associations with the evaluation
“good person” to be triggered, and we can expect our unconscious to categorize the person
consistently with the design of the thought experiment.  It is only thought experiments where we
are asked to set aside our normal judgments, and where we are not given any facts to trigger a
new judgment to replace it – where quite abnormal facts are stipulated, rather than motivated by
description – that we should suspect that our intuitions may be based on what we have learned
rather than the facts as set out in the thought experiment.
Applying the arguments to cases
Here are our concerns:  some specific case epistemic intuitions may be based on mistaken
judgments about the truth-conduciveness of cognitive processes, and other specific case epistemic
intuitions may not be based on the stipulations we expect them to be based on.  There are some
classes of specific case epistemic intuitions for which we can immediately rule out these
possibilities.  Some beliefs are produced by cognitive processes that any epistemologist will
consider sufficiently truth-conducive to produce knowledge or justification.  For example, we can
all agree that seeing normal objects in normal (or ideal) circumstances is truth-conducive enough
to give us justification and knowledge.[73]  Consider a belief produced by such a cognitive process
that we intuit is justified or known.  If we have this intuition, presumably our intuitive faculties
have identified the cognitive process that produced the belief as truth-conducive.  And, since we
know it really is, we know that our intuitive faculties cannot have made a mistake about this.  We
can trust these sorts of intuition (unless someone else raises some other worry about the accuracy
of specific case epistemic intuitions).  This means that we can in good faith use intuitions that tell
us that beliefs due to sense perception, use of logic, and so forth are justified or known.  This
gives us an essential starting place for epistemology.[74]
[73] If we restrict this class of beliefs to only those which are uncontroversially truth-conducive enough to be known or justified, we sidestep controversies about what exactly truth-conduciveness is and how much is required for knowledge or justification.
[74] Even critics of the use of intuitions in epistemology, such as Kornblith (2006) and Weinberg (2006), agree that we do not want a way of doing epistemology that is overly radical and could completely depart from what we take ourselves to be studying when we do epistemology.  Being able to use at least some specific case epistemic intuitions helps us to avoid this possibility.
However, most of the intuitions that have been interesting to philosophers are not of this
type.  Let’s call problematic epistemic intuitions all those specific case epistemic intuitions other
than intuitions that beliefs with uncontroversially truth-conducive traits are justified or known.
Intuitions about Gettier cases are problematic epistemic intuitions, since they are supposed to
involve beliefs with uncontroversially truth-conducive traits, but are intuitions that these are not
instances of knowledge.
One might argue that Gettier cases are not about beliefs with uncontroversially truth-
conducive traits, but this goes against the intent behind these cases; Gettier cases are supposed to
involve clearly justified beliefs, and so are supposed to involve beliefs with truth-conducive traits.
On this assumption, what is the problem with Gettier cases?  The problem is that we cannot rule
out the possibility that our intuition that Smith (or whoever) does not know what he believes is
due to an under-estimation of the truth-conduciveness of the cognitive process Smith uses to form
this belief.  All that is required for this sort of under-estimation to occur is for us to have learned
that beliefs formed in certain circumstances are not as reliable as they in fact are (or as they are
supposed to be in this thought experiment).  Perhaps, for example, we have improperly learned
not to trust our own deductive conclusions when we reason from false premises (as people in
several Gettier cases do).  This would cause us to unconsciously judge that a person described in
a Gettier case has formed a belief in a non-truth-conducive way, and this would cause us to have
the intuition that their belief did not count as knowledge.
This may seem very odd to you – several commentators have told me that their intuitions
about Gettier cases do not seem to them to be based on judgments about truth-conduciveness.
This seeming is irrelevant; our intuitive faculties are opaque to introspection, and they can and do
operate in ways contrary to our conscious expectations.  While we may feel that our intuitions
about Gettier cases have nothing to do with truth-conduciveness, this feeling is not good data
about the actual source of these intuitions.  Rather than asking ourselves if we feel that our
intuitions are tracking the right stuff in the right way, we need to consider more impersonal
evidence.
Anti-reliability intuitions, such as our intuitions about Norman, are another type of
problematic epistemic intuition.  Norman’s belief (that the President is in New York) is produced
by an uncontroversially truth-conducive cognitive process, since the process is stipulated to be
reliable.  We have the intuition that Norman is not justified in his belief.  However, everything we
have learned about psychics and ESP has (most likely) shown us that it is not a reliable source of
evidence.  This should give us an inclination to have the intuition that Norman’s belief is not
justified.  The question, then, is whether or not the stipulation that Norman’s ESP is reliable is
used by our unconscious mind at all; if it is not, then this intuition does not go against reliabilism,
since it is not an intuition that a reliably formed belief is unjustified.  It would be unsurprising if
the stipulation was not attended to by our unconscious minds.  For it to be used, the word
“reliable” or “accurate” must be associated in the right way with the unconscious concepts
relevant to our intuitions about justification.  However, many of us may not have heard or thought
the word “reliable” or “accurate” often enough and at the right times to have formed a strong
association between those words and the appropriate unconscious concepts; further, even if this
association did exist, it might not be as strong as the association between ESP and unreliability.
We certainly are able to use these terms consciously with perfect facility, but this does not
guarantee that they are hooked up in our unconscious minds in the right way.
It is important to note that I am not claiming that our intuition about Norman’s belief, or
our intuitions about Gettier cases, are wrong.  Determining whether or not they are would require
defending a theory of what justification and knowledge are and when they can be had, which is
beyond what I want to do in this paper.  It may very well be that, according to the correct theory
of justification, Norman is not justified, and people in Gettier cases do not know, and that nothing
has gone wrong with our intuitive faculties’ judgments.  My claim here is only that we might be
in error and we cannot tell if we are or not, and that that “might” is stronger than a “might” of
mere logical possibility; it is a might that gives us actual and good reasons to mistrust our
intuitions. [75] I acknowledge that this by itself is not enough reason to discard the intuition – for
that I need to argue (as I will below) that we have a better source of evidence than this intuition.
To summarize:  we have seen that we can make mistakes about the truth-conduciveness
of cognitive processes.  Such mistakes can lead to mistakes about whether or not a belief is
known or justified.  These mistakes will be largely undetectable for two reasons.  First, we cannot
currently predict whether or not we make mistakes about truth-conduciveness.  Second, for
problematic epistemic intuitions we cannot tell the difference (either introspectively or from the
third-person perspective) between intuitions caused by mistakes about truth-conduciveness and
those based on proper judgments about truth-conduciveness.  The possibility that we make such
errors and are not aware of them undermines the evidential status of our intuitions about many
important cases in epistemology, such as Gettier cases or anti-reliability thought experiments such as the one about Norman, but it is not enough by itself to show that we should not use these intuitions.  In a moment, I will consider a way of doing epistemology without specific case
epistemic intuitions.  The existence of alternate sources of evidence about epistemic phenomena
that are immune to the types of errors I have discussed in this paper shows that we should reject
the use of problematic specific case epistemic intuitions.  First, however, I would like to suggest a
method of testing our intuitions for errors of the sort I am concerned with.
[75] It is also possible that our intuitive faculties got the correct results based on incorrect judgments.  This latter result is still worrisome; we do not want to rely on our evidential sources getting us correct results through fortuitous accidents.
A Test
Here’s a problem:  we are concerned that our unconscious minds are making incorrect
judgments about truth-conduciveness in certain cases, either because they have learned false
things about truth-conduciveness, or because they are ignoring stipulations in thought
experiments.  We cannot tell introspectively if they are or are not.  And we cannot tell if we are
making mistakes by looking at the intuitions we produce, since we do not know what correct
intuitions would look like.  How can we tell if we are making incorrect judgments about truth-
conduciveness?
One way would be to see if we have other intuitions that reflect these incorrect judgments.  So, we might ask people if, intuitively, Smith is justified in his belief, or if Norman is intuitively using a reliable source of data.  If we are making mistakes about truth-conduciveness, then these
should be reflected in our intuitions.  Unfortunately, we should not expect people to accurately
report their intuitions about these test questions if they are making these mistakes.  The reason is
that Smith is clearly supposed to be justified and Norman clearly is using a reliable source of data
– that’s built right into the thought experiments.  When the correct answer is so salient and
obvious to the conscious mind, we should expect people to report that answer, and not the
contradictory workings of their unconscious.
We need a more subtle test; fortunately, it exists.  Psychologists who have studied
automatic evaluations have developed tests to determine if people have unconsciously employed
certain evaluative concepts (Fazio, 2001).  Automatic evaluation is a sort of unconscious
judgment (an unconscious value judgment) that often never reaches the conscious mind.  For this reason, such evaluations can be very difficult to detect.  However, psychologists have found that if someone
makes the automatic evaluation that “This is good,” then they will be quicker to identify other
things as good for a short period of time afterwards (and if they automatically evaluate something
as bad, they will be quicker to identify other things as bad for a short time afterwards).  So, to test
for automatic evaluations, subjects are presented with a stimulus (the one that may or may not be automatically evaluated), and then rapidly given another stimulus and asked an evaluative question about it.  If subjects regularly respond more quickly to the question than does the control group, we have learned how they are automatically evaluating the first stimulus.  We can adapt this
sort of test (called the affective priming test) to epistemology.  Roughly, we would present
subjects with the Norman case, or a Gettier case.  Then we would rapidly present them with
something that could be evaluated as reliable or unreliable, and ask them to make this evaluation.
If they do so faster than normal, we can conclude that they are unconsciously making similar judgments about the first thought experiment.  This needs some development; I have some doubts that our unconscious judgments about reliability are associated with the word “reliable,” and so we will need some way of asking for evaluations that will be affected by unconscious judgments about reliability, but I am confident this can be found.
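To make the logic of the adapted test concrete, here is a minimal sketch of the comparison it would turn on.  Everything in it is hypothetical: the reaction times, group sizes, and effect size are invented for illustration, not data from any actual study, and a real experiment would need a proper significance test.

```python
from statistics import mean, stdev

# Hypothetical reaction times (in milliseconds) on a question like
# "Is this reliable?", asked immediately after reading a vignette.
# If the Norman case primes an unconscious "unreliable" judgment,
# subjects who just read it should answer matching evaluative
# questions faster than subjects who read a neutral vignette.
norman_primed = [612, 598, 641, 585, 603, 577, 620, 594]
control = [655, 649, 671, 660, 638, 667, 645, 659]

def summarize(label, times):
    print(f"{label}: mean = {mean(times):.0f} ms, sd = {stdev(times):.0f} ms")

summarize("Norman-primed", norman_primed)
summarize("Control      ", control)

# A consistently faster mean in the primed group is the signature the
# affective priming paradigm looks for.
print(f"Priming effect: {mean(control) - mean(norman_primed):.0f} ms")
```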
Doing Epistemology Without All These Specific Case Epistemic Intuitions
So far we have shown that we have real reasons to worry about the reliability of certain
of our epistemic intuitions.  We have not established that these epistemic intuitions are always, or
mostly, wrong – part of the worry is that we cannot currently diagnose how often they make
mistakes.  To argue that one should reject a potential source of evidence, one needs to show either
that it is no source of evidence at all – for example, by showing that it is no more accurate than
guessing – or show that better sources of evidence exist.  Although I take myself to have shown
that certain specific case epistemic intuitions are not very good evidence about epistemic
phenomena, I do not think I have shown that they are not evidence about these phenomena at all.
Is there a way to construct theories of epistemic phenomena without using the classes of
intuitions I am claiming are suspect?
Here is a sketch of one way we might do so. [76] It is an approach to epistemology that
Wayne Riggs (2006) calls “value-driven epistemology.”  As Riggs describes value-driven
epistemology,
a preferred way of beginning … an epistemological investigation is by asking what epistemic
values are involved, and how those values get transmitted or transferred to the object under
investigation. This will by no means exhaust the epistemological work to be done, but it will both
provide important details about the phenomenon being investigated as well as placing additional
constraints on the possible answers to other questions that might be asked.  (Riggs, 2006, p 27)
People pursue knowledge and they seek to hold beliefs that are justified.  They do so, in part,
because knowing and being justified are valuable states to be in.  Now, of course, people can and
do pursue specific pieces of knowledge for their own idiosyncratic reasons which shed little to no
light on knowledge in general; for example, I might seek to know the phone number of a good
local pizza place in order to be able to eat good pizza, and this pursuit is not indicative of any
larger truth about knowing.  All the same, there seems to be something generally and
characteristically valuable about knowledge (or the state of knowing) and justification (or being
justified).  To borrow an apt phrase, “knowledge is the ‘good stuff’” (Weinberg, et al, 2001); put another way, “knowing [is] a state that we want ourselves and others to be in.” (Jones, 1997)
[76] Another method might be the sort of naturalism advocated by Hilary Kornblith (2002), in which we consider the role the term “knowledge” plays in science.
Why is this?  There are a range of answers.  For example, some philosophers see knowledge as
valuable because identifying others as having knowledge identifies them as people to trust. [77] Others see knowledge as valuable because knowledge is more stable than beliefs that are not known to be true. [78]  If we had a better sense of why epistemic phenomena were valuable, then we
would be in a position to figure out what they were, by figuring out what phenomena were
valuable in that way.  This latter part of the project does not require specific case epistemic intuitions; it requires careful, conscious evaluation of the costs and benefits of various methods of reasoning. [79] If, for example, the state of knowing was valuable because it allowed others to
recognize the knower as someone whose beliefs should be trusted, then knowledge should be
such that determining when X knows or does not know Y does not require information that third
parties have no access to.  If, on the other hand, knowledge and justification get their value from
how they help agents to guide their own thought processes, this speaks in favor of internalist
epistemic theories.  Here, then, is a way to develop theories of epistemic phenomena.  Rather than
trying to develop theories by looking at specific examples and trying to determine what all the
examples of justification have in common that non-justified beliefs lack, we start with general
facts we know about epistemic phenomena – their value – and develop our theories from these.
Classifying specific cases into “known” and “not known” and so forth comes last, not first.
Why think that this sort of project has any chance at success?  First, it is plausible that we
can eventually agree on an account of the value of epistemic phenomena.  We already know
something about where epistemic value comes from (truth, in part), which is arguably more than we can say about the value of moral phenomena.
[77] This is a view advocated by Colin McGinn and David Armstrong.
[78] Plato claims this (in the Meno), as does Timothy Williamson (2000).
[79] For an example of this, which relies on no specific case epistemic intuitions, see Bishop & Trout (2005).
Second, there is room for a plurality of views.
If there is disagreement on what the value of knowledge is, we can still (potentially) agree that
our opponents are talking about something of value even if it is not what is properly called
knowledge or justification. [80] This sort of disagreement looks more like merely verbal
disagreement than substantive disagreement about the nature of things, and it seems likely that
philosophers who disagree in this way will devote more of their time to exploring how to pursue
their goals rather than arguing about word use.  Third, to find states with the appropriate value we
can bring to bear time-tested and robust methodologies, such as the use of statistics and the
scientific method.  Finally, we are more likely to be successful in discovering an account of
knowledge that satisfies only explicit criteria (i.e. that establishes knowledge as valuable in the
appropriate way) than we are to find one that both satisfies this sort of criteria and agrees with our
specific case intuitions.  We have always had some idea of our goals in developing theories of
knowledge, whether it be refuting skepticism or giving norms of belief formation, while at the
same time we have wanted to be faithful to our intuitions.  We thought that doing one was
helping us do the other, but as I have been arguing, we have reason to think the two can come into
conflict.  Given potential and actual conflicts between these two putative sources of evidence
about epistemic phenomena, we are better off only using one of them.
[80] Intractable disagreement would not be surprising, given the range of goals epistemologists are interested in.  Descartes, for example, wanted certainty, whereas others are interested in reasoning that leads to good outcomes (see Bishop & Trout, 2005).
Chapter 6
Introduction
Many philosophers claim that it is intuitive that causation is transitive:  that, intuitively,
whenever some event A causes an event B, and B causes an event C, A causes C.  In the literature
on causation, transitivity is often seen as an obvious fact about causation that does not need to be
demonstrated or argued for.  Ned Hall, for example, states, “That causation is, necessarily, a
transitive relation on events seems to many a bedrock datum...” (Hall, 2000, p.198)  David Lewis,
in his first full articulation of his counterfactual theory of causation, says, without offering any
argument in support, “Causation must always be transitive.” (Lewis, 1993, p.200)  More recently,
Jonathan Schaffer writes, “[T]ransitivity is intuitive …. The transitive inference feels virtually
analytic.” (Schaffer, 2005, p. 309)
What does it mean to say that causation is intuitively transitive?  Certainly, it is often the
case that, for some specific sequence of events A, B, and C, when A causes B and B causes C, we
have the intuition that A causes C.  While this is evidence that causation is in general transitive,
any single such intuition is extremely weak evidence for transitivity, since such a case is perfectly
compatible with causation not being transitive; sometimes when A is friends with B, and B with
C, A is also friends with C, but the “friends with” relation is not transitive.  Given enough single
cases consistent with transitivity we could make an inductive argument that causation was
transitive, but philosophers are typically not in the habit of supporting their metaphysical claims
using this sort of enumerative inductive argument.  When philosophers say that it is intuitive that
causation is transitive, they do not only mean that we often have intuitions about specific
sequences of events that are consistent with causation being transitive.  What they mean is that a
proposition like “Causation is transitive” (or some equivalent) is intuitively true.  Let’s call
intuitions with content equivalent to “Causation is transitive” transitivity intuitions.
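Since the argument turns on exactly what transitivity requires, a small illustration may be useful.  The following sketch, with invented names and pairs, checks a finite relation for transitivity and shows that a relation can contain transitive-looking triples without being transitive, which is why single cases are such weak evidence.

```python
def is_transitive(relation):
    """True iff whenever (a, b) and (b, c) are in the relation,
    (a, c) is in it as well."""
    return all((a, c) in relation
               for (a, b) in relation
               for (b2, c) in relation
               if b == b2)

# An invented "friends with" relation.  It contains one triple that
# looks transitive (Ann-Bea, Bea-Cal, Ann-Cal) ...
friends = {("Ann", "Bea"), ("Bea", "Cal"), ("Ann", "Cal"), ("Cal", "Dee")}

# ... yet the relation as a whole is not transitive, because
# Bea-Cal and Cal-Dee hold while Bea-Dee does not.
print(is_transitive(friends))  # False
```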
There are many thought experiments offered as counterexamples to causal transitivity.
These all present cases in which we have the intuition that event A causes event B and B causes
event C, but also have the intuition that A does not cause C.  For example:
[A] man's finger is severed in a factory accident. He is rushed to the hospital, where an expert
surgeon reattaches the finger, doing such a splendid job that a year later, it functions as well as if
the accident had never happened. The accident causes the surgery, which, in turn, causes the finger
to be in a healthy state a year later. But, intuitively, the accident does not cause the finger to be in
a healthy state a year later.

(Hall, 2000, giving an example from Kvart, 1991)
Let’s call the intuitions generated by these examples anti-transitivity intuitions; they all have the
following content:  “A caused B, B caused C, A did not cause C.”
The existence of both transitivity and anti-transitivity intuitions presents us with an
interesting problem.  How do we deal with the conflict between these intuitions? [81] There are four
ways one could deal with this conflict:  reject both sets of intuitions, reject the anti-transitivity
intuitions, reject the transitivity intuitions, or somehow show that they are not actually in conflict.
As we will see, we should at least take the third option (rejecting the transitivity intuitions)
because they are not very good evidence.  However, this does not rule out the fourth option
(showing that the intuitions are not in conflict) for the following reason:  rejecting our transitivity
intuitions does not mean that we should think causation is not transitive, it only means that we
stop using these intuitions as evidence that causation is transitive.  We may have good reasons to
think that causation is transitive other than these intuitions.  For example, the best theory of causation we can generate may entail the transitivity of causation; this would give us an incentive to try to show that our anti-transitivity intuitions can be reconciled with causal transitivity.
[81] As an aside, this conflict is not only between the intuitions of different intuitors but also between the intuitions of single intuitors.  Philosophers such as Ned Hall admit to possessing both sets of intuitions (Hall, 2000), and I have also done surveys of non-philosophers in which many of the respondents reported having both types of intuitions.
As with the last chapter, the point of this chapter is not only to make an argument about
the use of certain intuitions in philosophy, it is also to illustrate the application of the ideas I have
developed in the first four chapters of this dissertation.  In Chapter 4 I claimed that deciding
whether or not to use intuitions as evidence about a given philosophical question or topic was a
multi-step process.  First, we have to use the general theory of intuitions, along with some
philosophical understanding of a topic, to see if intuitions about a given phenomenon are
potentially good evidence about that phenomenon; this involves asking if our intuitions could be
based on the sort of information they would need to be based on to tell us about this phenomenon.
Second, we have to look at specific psychological findings along with the general theory of
intuitions to determine if our intuitions actually are based on that information, and if there are any
potential biases that cause them to go astray.  I will illustrate both of these steps in this chapter,
and show how we can at the end of these steps conclude that certain intuitions are good evidence
about a philosophical issue.  In order to best illustrate this, I will explore questions that have not
(to the best of my knowledge) been raised by (m)any philosophers, for example: “Are any of our
intuitions about causation good evidence about causation?”
This chapter has the following structure: first, I discuss whether or not intuitions about
causation are potentially good evidence about causation (and conclude that they are); second, I
discuss how reliable these intuitions are; third, I argue that we have no good reason to reject anti-
transitivity intuitions; fourth, I argue that we have good reason to treat transitivity intuitions as
very weak evidence; and finally I talk about what, if anything, this tells us about the nature of
causation.
Can Intuitions about Causation be Good Evidence about Causation?
This is a question that, as far as I know, is not on the table in current debates about
causation, except to the extent that some philosophers think that no intuitions are good evidence
about anything in philosophy.  In epistemology, a number of philosophers such as Hilary
Kornblith, Michael Bishop, or J.D. Trout have proposed radical changes to methodology that
move away from the use of intuitions, but changes of this sort do not seem to be much under
discussion in the study of causation.  However, I think philosophers should at least consider the
use of intuitions in the study of causation for two reasons.  First, we have a general duty as
philosophers to think about how we ought to do philosophy, and there are good reasons to think
that methods that work in one domain of philosophy might not work in others.  Second, as I have
argued previously, an empirically-based understanding of intuitions will not justify the use of
intuitions in all areas of philosophy.  There are likely to be subjects about which our intuitions
cannot give us good evidence.  So, for each issue we work on as philosophers we need to ask
ourselves whether or not intuitions can be good evidence about it.
One might raise a Humean worry about intuitions about causation:  we never come into
direct sensory contact with causation, and since our intuitions (when reliable) are based ultimately
on learning from experience, our intuitions cannot even possibly be reliable evidence about
causation.  To reply to this, we need to consider how our unconscious minds might learn about
causation such that they can generate reliably accurate intuitions about it.  I think that the best
place to start in addressing this Humean worry is counterfactual dependence; whether or not
causation is counterfactual dependence, it is widely accepted that counterfactual dependence is a
good test for causation.  Roughly, when B counterfactually depends on A, B and A both did
occur, but if A had not occurred, then B would not have occurred (given certain caveats, such as that we look at the closest possible world in which A does not occur).  With regard to causation, if E counterfactually depends on C, it is almost always the case that C causes E, and if NE does not counterfactually depend on NC, it is almost always the case that NC did not cause NE.  Learning
about counterfactual dependence – that is, learning what counterfactually depends on what (and
what does not) – will certainly teach us quite a bit about what causes what (and what does not).
Can our unconscious minds learn about counterfactual dependence?
For any given instance of putative causation – say, for example, a rock hitting a window,
which then breaks – we certainly cannot directly experience the counterfactual scenario in which the alleged cause does not occur.  We never experience counterfactuals, since they are, by
definition, not there to be experienced.  This does not mean that we cannot learn about
counterfactual dependence through experience.  The counterfactual dependence of E on C involves two facts:  1) C and E both actually occurred, and 2) had C not occurred, E would not
have occurred.  The first fact is easily available to our senses.  The second can be learned about
by experience.  I know what happens in the counterfactual scenario in which the rock did not hit
the window because I have witnessed (countless times) windows not being hit by rocks.
Further, when checking for counterfactual dependence we are supposed to check the
closest counterfactual situation.  We learn what the closest counterfactual situation is like by
learning what normally happens in situations like the counterfactual situation; that is, what happens in situations like the counterfactual one that are not atypical (and thus not further away in the space of possible worlds).  The normal outcome of the counterfactual scenario (given roughly the same
circumstances) will be good evidence about what the closest possible counterfactual world looks
like.  One way to learn what the normal outcome of some event is is through repeated experience,
since even if at some previous time a window did not break when a rock did not hit it, that
occurrence might have been wildly atypical, and be bad evidence that when this window was not
hit by this rock, it would not break.
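Here, as a way of fixing ideas, is a minimal sketch of the kind of frequency-based learning this paragraph describes.  The events and counts are invented; the point is only that repeated experience of situations with and without the candidate cause supports an estimate of what would normally happen in the closest counterfactual situation.

```python
# Invented observations: each pair records whether a rock was thrown
# at the window and whether the window broke.
observations = ([(True, True)] * 40 + [(True, False)] * 2 +
                [(False, False)] * 200 + [(False, True)] * 1)

def effect_rate(obs, cause_present):
    """Frequency of the effect among situations where the candidate
    cause was (or was not) present."""
    outcomes = [effect for cause, effect in obs if cause == cause_present]
    return sum(outcomes) / len(outcomes)

# If the effect is common when the cause occurs but rare in otherwise
# similar situations where it does not, experience supports the
# judgment that the effect counterfactually depends on the cause.
print(f"P(broke | thrown)     = {effect_rate(observations, True):.2f}")   # ~0.95
print(f"P(broke | not thrown) = {effect_rate(observations, False):.2f}")  # ~0.00
```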
So we can potentially learn if some specific E counterfactually depends on some specific
C through a) experience of C and E, and b) numerous experiences of situations involving the
absence of events like C.  This sort of learning seems potentially able to give us the ability to
make accurate judgments that some specific event caused some other specific event, and making
enough of these judgments allows us to create an unconscious concept (or theory) of causation
that could generate more general intuitions about causation.  Further, this sort of learning would
also allow us to learn about causation even if it is not counterfactual dependence, but rather
something like law-like connections between events, or constant conjunction of events, or
probabilistic connections between events (other theories of causation advanced by philosophers).
Thus, we have a response to the Humean worry:  the unconscious mind could plausibly
learn about causation by learning how different types of events in the world covary with other
types of events in the world.  This is something our unconscious mind is good at, as we discussed
at great length in Chapter 2.  But being sensitive to the right information is not enough for us to
have properly reliable intuitions about causation.  We also need to be sensitive to it in the right
way – there must not be something about us, our situations, or how our minds work such that our
use of this information will lead to unpredictable systematic errors about causation.  In the next
section I will bring up some worries one might raise about our unconscious minds’ ability to
accurately learn about counterfactual dependence, and argue that, while we can and do make
errors about counterfactual dependence, these are predictable and can be corrected for.
Do We Accurately Learn About Causation?
In this section I will in many ways have a conversation with myself.  While the Humean
worry has, to some extent, been raised by others (Hume, for example, although he was worried
about our general knowledge of causation, not just intuitions on the subject), no one in
philosophy has, as far as I know, raised empirically-founded concerns about the accuracy of our
intuitions about causation or how we learn about causation, so it may seem odd for me to go out
of my way to address them.  However, remember that the point of this chapter is not only to
engage with current debates in metaphysics but also to give an example of the sort of discussions
one can and should have about intuitions once one has an empirically-founded understanding of
them.
There are a number of worries one might raise about our ability to unconsciously learn
about causation by learning about counterfactual dependence, but I am only going to raise one.
There is a great deal of data that suggests that humans are very bad at noticing and thinking about
patterns; specifically, we often see patterns where there are none.  If our intuitions about
causation are based on unconscious learning, and we are supposed to unconsciously learn about
causation by noticing patterns in our environment, this suggests that we should be suspicious of
our intuitions about causation. [82] However, as we will see, the sorts of mistakes we tend to make
can be corrected for, and thus do not undermine the evidential status of our intuitions about
causation.
There are three main sources of evidence that we might see patterns that do not exist:
research on belief in the law of small numbers, research on the hot hand fallacy, and research on
illusory correlations.  Let’s consider each in turn, and see what they tell us about our ability to
detect patterns.
[82] One might point out that this only casts doubt on our intuitions which say that one thing caused another: if we notice patterns that are not really there, this should only lead us to think there are more causal connections in the world than there really are.  Thus, one might argue, we do not need to worry about anti-transitivity intuitions because they involve a lack of a causal connection.  This of course is not entirely true, since in order for an intuition to be an anti-transitivity intuition, intuitively A must cause B and B cause C (but A must intuitively not cause C); if we have reason to generally doubt our intuitions that one thing caused another, then we must also doubt anti-transitivity intuitions.
The law of small numbers
Belief in the law of small numbers is a bias named by the psychologists Amos Tversky
and Daniel Kahneman in a 1971 paper.  Belief in the law of small numbers is the belief that small
samples of data should contain the same distribution of properties as the larger population they
are taken from.  For example, a person who knows that a coin has a 50% chance of coming up
heads and a 50% chance of coming up tails, and thus that the set of all coin flips should contain
half heads and half tails results, might think that a fair coin flipped only four times should also
contain half heads and half tails results.  In fact, a set of four fair coin flips is less likely than not to have an equal number of heads and tails (the probability of exactly two heads and two tails in four fair flips is C(4,2)/2^4 = 6/16 = 3/8).  This belief results in people making mistakes about whether or not sets of data
are randomly or non-randomly generated (Falk & Konold, 1997).  Further, this leads people to
see patterns that are not really there.  If people expect the distribution of data in small data sets to
be like the distribution in a larger population, then they will generalize inappropriately from small
data sets to larger populations.  For example, given a small series of binary outcomes (1s and 0s,
X and Os, heads and tails) that does not have an even distribution of the two outcomes, they will
tend to expect that a larger population generated in the same way will have the same distribution
of outcomes; in other words, they will see a pattern in the small data set that they will apply to a
larger population.  This sort of mistake is partly responsible for phenomena such as our excessive
trust in experts; if we only know of a few outcomes of an expert’s judgment, and most of these
are successful, we will tend to (unjustifiably) assume that the entire population of that expert’s
judgment will contain roughly that rate of success.
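A short simulation makes the point vivid.  The first line verifies the 3/8 figure exactly; the rest estimates how often samples of various sizes come close to mirroring the true 50/50 rate.  The sample sizes, tolerance, and trial count are arbitrary choices for illustration.

```python
import random
from math import comb

# Exact probability of two heads and two tails in four fair flips:
print(comb(4, 2) / 2**4)  # 6/16 = 0.375

def near_half_rate(sample_size, tolerance=0.1, trials=100_000):
    """Fraction of samples whose heads-proportion falls within
    `tolerance` of the true rate of 0.5."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if abs(heads / sample_size - 0.5) <= tolerance:
            hits += 1
    return hits / trials

# Small samples frequently fail to mirror the population; larger
# samples mirror it far more often.
for n in (4, 20, 100):
    print(f"n = {n:3d}: within 10% of half heads in {near_half_rate(n):.0%} of samples")
```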
Although studies of this sort of mistake tend to not talk about unconscious pattern
spotting or intuitions, the general theory of intuitions predicts that our unconscious mind will act
as if it believes in the law of small numbers.  Our unconscious mind is built to make inferences
from the data presented to it.  If it is presented with only a small set of data, it will make
inferences from this.  If the small set of data seems to follow a pattern (even if this is simply due
to chance), our unconscious mind will expect future members of that category to follow the same
pattern.  Thus, we should expect our unconscious mind to see patterns that are not really there
(see e.g. Lewicki, 1987).  However, this does not mean that our unconscious minds are generally
bad at spotting patterns.  We will spot “patterns” in small data sets inappropriately, but as these
data sets get larger – as we have more experiences of the relevant sort – what we have “learned”
about the alleged pattern will only be reinforced if the pattern continues to be exhibited.  If it does
not, we will have stronger associations that go against the previously seen pattern, and will be
unlikely to unconsciously infer as if that pattern holds.  In other words, we should only expect a
belief in the law of small numbers to be evidenced by unconscious learning when our
unconscious learns from only a small amount of data about certain types of events.  This means
belief in the law of small numbers gives rise to predictable errors; we should not trust causal
intuitions that are based on event sequences of a type we have only a small amount of experience
with.
The hot hand fallacy
The hot hand fallacy is a very specific type of mental bias that only occurs in sports-
related contexts, but it is supposed to be evidence for a more general cognitive bias.  The hot
hand fallacy involves the (potentially tacit) belief that certain basketball players are more likely to
shoot in streaks than chance predicts. [83] Streak shooting occurs when players tend to make and
miss baskets in clusters; rather than successful baskets being evenly or randomly distributed among all attempts, they tend to come in a row more often than chance would predict.
[83] Similar beliefs occur in other sports, but the main focus of study has been basketball, probably because the initial study of the phenomenon was about basketball.
This belief
is supposed to be fallacious because (it is claimed) statistical analysis shows that it is not
warranted by the data; performing a variety of statistical tests on data from numerous professional
basketball games and controlled free throw shooting gave no evidence for the hot hand (Gilovich,
Vallone, Tversky, 1985).  This fallacy involves the seeing of a pattern – in a streak shooter, the
ability to make a shot more readily than normal covaries with their having made previous shots;
when a streak shooter has made several previous shots, they are more likely than normal to make
the next shot – that is not there.  Since most people do not consciously track data on players’
success shooting baskets with much care, the seeing of this covariation is likely due (at least in
part) to unconscious processes.  If this belief is really fallacious, it shows us that our unconscious
is biased to see false covariations.
This is one of the most direct types of evidence for the claim that people see false
covariation in series of experiences, and it is also quite worrisome because people who watch
basketball tend to have a very large set of data to learn from and there does not seem to be
anything special about basketball which gives rise to this fallacious belief.  If the belief in streaky
basketball shooting really is fallacious, this seems to be evidence that we might generally see
patterns that are not really there.
However, there is a great deal of dispute about whether or not belief in the hot hand is
really fallacious.  The dispute is largely based on different types of statistical analyses, and those of you not in the mood for some math can skip this paragraph with the take-away lesson that the claim that we commit the hot hand fallacy is fairly dubious.  For the rest of you, there are
two main reasons for disputing the hot hand fallacy.  First, it has been argued that the statistical
tests used by Gilovich, Vallone, and Tversky to show that there are no streak shooters were not
appropriate, because they would have been unable to detect streak shooters in the data if they
exist (Korb & Stillwell, 2003).  Second, Gilovich, Vallone, and Tversky’s results come from
looking at their data in a certain way – they looked at each basketball player individually, and
showed that there was no evidence for the hot hand in each player’s results.  However, when one
aggregates the data from all players and considers it as a mass, suddenly one sees evidence for the
hot hand (Wardrop, 1995).  What does it mean to “aggregate the data from all players?”  In the
original study, the researchers compared player X’s chance of making a shot after having made
the previous shot to their chance of making a shot in general.  In order to demonstrate a hot hand,
X’s chance of making the second shot in a row would have to be significantly higher than their
normal odds of making a shot.  When one aggregates the data, though, one puts together all data
from all players to calculate the average chance of a player making a shot after making their
previous shot, and the average chance of a player making a shot after missing their previous shot.
When the data is aggregated, data from good players – who are likely to make a higher
percentage of their shots in general – is mixed with data from average players – who are less
likely to make shots in general.  To determine if streak shooting occurs, we compare the average chance of making a shot after one has already made a shot to the average chance of making a
shot; if one’s chances go up after making one shot, then we seem to have evidence for streak
shooting.  When data is aggregated, the chance of making a shot at all is based on data from both
more and less skilled players, but the data on making a second shot in a row is based more on data
from good players than from less skilled players.  This is because good players are more likely to
have made a first shot than are less skilled players.  Since good players are also more likely to
make shots in general than are less skilled players, they are thus more likely to make the second
shot than are less skilled players.  Because good players make up a higher percentage of the data
on second shots than they do on first shots, the data on second shots will show a higher
percentage of them being made than the general data on shooting shows.  Thus, it will seem that
one is more likely to make a shot given that they have already made a shot than they would be to
make the first shot.  As the statistician who demonstrated this points out, it is unlikely that any
basketball fan can or will (even unconsciously) monitor the shooting tendencies of every player
individually.  Instead, it makes more sense to think that they will monitor the aggregated shooting
tendencies of players, and in this case the belief in the hot hand does not demonstrate the seeing
of a false covariation.
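Because this aggregation effect is easy to lose in prose, here is a small simulation of it.  The two shooting percentages and shot counts are invented, and each simulated shot is independent, so by construction no player has a hot hand; the aggregated data nonetheless shows a higher success rate after a made shot.

```python
import random

random.seed(1)

# Two invented players with constant, independent shot probabilities.
SKILL = {"good": 0.6, "average": 0.4}
SHOTS_PER_PLAYER = 10_000

after_make, after_miss = [], []  # next-shot outcomes, pooled across players

for skill in SKILL.values():
    shots = [random.random() < skill for _ in range(SHOTS_PER_PLAYER)]
    for prev, nxt in zip(shots, shots[1:]):
        (after_make if prev else after_miss).append(nxt)

rate = lambda outcomes: sum(outcomes) / len(outcomes)
print(f"P(make | previous made)   = {rate(after_make):.3f}")   # ~0.52
print(f"P(make | previous missed) = {rate(after_miss):.3f}")   # ~0.48

# The gap appears not because anyone shoots in streaks, but because
# made shots are disproportionately contributed by the better player,
# who is also more likely than average to make the following shot.
```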
Illusory correlations
The final piece of evidence that people see false covariations comes from research on
illusory correlations.  This research explores human perception of covariations in data that does
not actually demonstrate covariations, largely from studies of clinicians (e.g. Chapman &
Chapman, 1967).  While the results of these studies are interesting, they do not show that we
should worry about our intuitions about causation.  The studies show that people are apt to see
false covariations when they expect, prior to seeing the data, that those covariations will occur (de
Jong, et al, 1998). [84] This does not mean that humans have a tendency to generally see false
covariations.  We already know that people have confirmation biases (see Chapter 3 on
directional motivation) that incline them to look for data that confirms their expectations and
discount data that does not.  Clinicians who are inclined to believe that X and Y are correlated
might exhibit this confirmation bias by looking for cases in their data where X and Y occur
together, rather than by looking at how frequent these cases are in comparison to cases where
they do not occur together.  Further, they might also look for evidence that the cases in which X
and Y do not occur together are not relevant; we see this even in philosophy, where we can be prone to explain away intuitions that work against our theories and focus on those that confirm
them.  Research also shows that people’s thought can be affected by the desire to confirm what
they want to believe, and this can explain some perception of illusory correlations.  Finally, even
if illusory correlations were the result of unconscious biases in how we see covariations, rather
than in how we look at data in general, they would cause predictable errors.  Since they occur in
cases in which subjects have prior reasons to expect covariations to occur which are independent
of the actual data the covariations are seen in, we can discount intuitions about such cases.  This
is true in any case, because we already know that people exhibit confirmation biases.  This means
we should discount intuitions about causation where the intuitor has some reason to believe that
causation occurred where that reason is not based in their experience of causation. [85]
[84] This may also explain, in part, the hot hand fallacy.  Since the existence of hot hands is extremely plausible, we may be predisposed to see evidence for them in our experience.
[85] Most cases in which we intuit that C caused E are cases in which we are disposed to think that events like C cause events like E, even before experiencing C and E themselves.  These dispositions, however, are based in our understanding and experience of causation; that is, they are based on experience with the relationship between C and E.  This is a perfectly acceptable reason to be disposed to think C caused E.  One could have an unacceptable disposition to think that C caused E if, for example, C causing E would be good for their career, or fits their theory (rather than their experience) of causation, or if it accorded with their moral or metaphysical views.
Summary
In conclusion, although there is some evidence that people do unconsciously “see”
correlations when these do not really exist, we can predict when this is likely to occur.  It is likely
to occur when people are dealing with small sets of data, and when people are dealing with data
about which they have preconceived expectations about causation that are independent of their experience of causation.  As long as we avoid use of intuitions about these sorts of cases, I know
of no good, empirically-based reasons to be suspicious of intuitions about causation in general.
Should We Reject Anti-Transitivity Intuitions?
Now that we have seen that we have no reason to reject intuitions about causation
generally, we must consider another way of resolving the conflict between transitivity and anti-
transitivity intuitions.  Perhaps there is something wrong with anti-transitivity intuitions.  As we
discussed in Chapter 3, there are a number of ways that specific intuitions can go wrong, even if
one expects that intuitions in general can give one accurate answers on a subject.  Perhaps one of
these causes us to have anti-transitivity intuitions despite the fact that causation is transitive.
There are three major factors that can cause our intuitions to go wrong.  These are
directional motivation, the illusory truth effect, and verbal overshadowing.  Directional
motivation occurs when one is motivated to draw a specific conclusion, rather than draw the right
conclusion.  For example, a person might see it as good for their career to hold certain views of
causation, and this might unconsciously affect the information they attend to and eventually affect
their intuitions.  The illusory truth effect occurs when a person is exposed to a certain proposition,
for example by reading a sentence that expresses it.  For a period of time after this, the
proposition will seem more true.  Verbal overshadowing refers to the fact that thinking
consciously about how to make judgments on a subject interferes with our ability to have accurate
intuitions about it.  As discussed in Chapter 3, each of these three factors is likely to affect
philosophers’ intuitions about causation.  Philosophers who work directly on causation are likely
to have career-related motivations to believe certain things about causation. Philosophers who do
not work directly on causation will often see how the metaphysics of causation intersects with
domains in which they do work, or about which they hold theories, and this gives them a
motivation to have certain views on causation.  I am not accusing anyone of fraud or deceit; these
biases operate on an unconscious level. Philosophers are also very likely to be affected by the
illusory truth effect and verbal overshadowing because they are likely to have discussed the
metaphysics of causation to a substantial degree.
However, this does not give us reason to reject anti-transitivity intuitions, because non-
philosophers also have these intuitions, and non-philosophers are not likely to be experiencing the
above three biases.  I have done some empirical research on the intuitions of non-philosophers
with regards to transitivity.  I conducted a survey of 130 undergraduates in a general education
philosophy course. [86] The majority of them had never taken a philosophy course before.  I
presented them with a series of thought experiments, each of which described three or more
events.  For each thought experiment, they were asked if the first event caused the second, the
second caused the third, and if the first caused the third.  95% of the respondents reported in at
least one case that the first event caused the second, the second caused the third, but the first did
not cause the third.  The average respondent gave that report to five out of eight cases I presented.
It is unlikely that undergraduates have discussed or thought about the metaphysics of causation in
any way, and certainly not to the extent that would lead to directional motivations, illusory truth effects, or verbal overshadowing.  One might argue that one’s views on causation can impact
their views on free will or moral responsibility, and thus that undergraduates might have some
directional motivation to hold certain views on causation.  While we cannot rule this out a priori,
it is implausible and the sort of thing one would have to show strong empirical support for before
it should be taken seriously. [87]
[86] I will be the first to admit that my study had some issues; I think some of the prompts were confusing, and it is possible that the questions might have elicited different answers were they worded differently.  However, I think that the huge amount of agreement is very good evidence that anti-transitivity intuitions are widespread among non-philosophers, and puts the burden of empirical proof on any philosopher who wishes to dispute this.
[87] When pre-testing my survey, I did notice that some subjects consciously employed something like the counterfactual dependence test on the questions.  This seemed relatively rare (and was contrary to the directions given).  This issue needs further study, and I think that a study needs to be done using similar thought experiments but putting subjects in conditions which tend to rule out conscious reasoning.  For now, the evidence indicates that undergraduates do have anti-transitivity intuitions, but further study is required.
That said, there are reasons why one might worry about the anti-transitivity intuitions
even of non-philosophers.  Anti-transitivity intuitions might be the result of the
representativeness heuristic.  As explained in an article by Thomas Gilovich and Kenneth
Savitsky,
[t]he representativeness heuristic consists of the reflexive tendency to assess the “fit” or similarity
of objects and events along salient dimensions and to organize them on the basis of one
overarching rule:  “Like goes with like.” (Gilovich & Savitsky, 2002, p. 618)
One interesting consequence of the representativeness heuristic is that people tend to think that
causes should resemble their effects (Gilovich & Savitsky, 2002).  For example, in one study
subjects were told about a tribe who hunts wild boar and turtles.  Some subjects were told that the
tribe hunted boar for meat and turtles for their shells, while the others were told that the tribe
hunted boar for their tusks and turtle for their meat.  The subjects were asked to guess other
characteristics of the tribe – what they looked like, their temperament, etc.  Subjects tended to
assign traits to the tribe that accorded with what they ate – those told that the tribe ate boar
assessed them as more aggressive (and hairier).  Turtle eaters were assessed as living longer and
swimming better.  This result was despite the fact that the tribespeople in both conditions hunted
the same animals.  One might think that this heuristic affects anti-transitivity intuitions for the
following reason:  most of the common anti-transitivity thought experiments involve events with
opposite polarities.  One event, either the first or last in the described causal chain, is perceived as
strongly negative and the other is perceived as positive (or at least neutral).  For example, in the
thought experiment devised by Igal Kvart, a man has his finger cut off (a negative event), which
causes him to have it reattached, which causes it to be healthy a year later (a positive event).
Perhaps the representativeness heuristic makes us feel that negative events cannot cause positive
outcomes, and vice versa, and thus see the initial and final events in these causal chains as not
being causally related.
I tested this possibility empirically by including in my survey of undergraduates
examples where the initial and final events had the same polarity.  70% of the subjects reported
anti-transitivity intuitions about the example with a good initial event and good final event, and
45% reported anti-transitivity intuitions about the example with a bad initial event and bad final
event.  The lower result on the bad/bad example may be due to its being a poor example, as
almost 30% of the subjects reported that the second event in the causal chain did not cause the
third, so that to them this series of events was not even a potential counterexample to transitivity.
The good/good case is structurally quite similar to Kvart’s thought experiment.  In it, person A
bakes a peanut butter cake for person B, and leaves it at B’s house.  B’s roommate sees the cake and eats it.  It turns out that B is allergic to peanuts, and if they had eaten the cake they would have been horribly sick.  Subjects reported that baking the cake and leaving it at the house caused the roommate to eat the cake, and the roommate eating the cake caused B to be healthy in the end, but that baking the cake and leaving it at the house did not cause B to be healthy in the end.
Subjects’ responses to these cases give us good evidence that subjects can and do have anti-transitivity intuitions that cannot be explained by the representativeness heuristic.
Another claim one might make is that people’s anti-transitivity intuitions are not really
intuitions about causation, but rather intuitions about responsibility.  Perhaps people called upon
to make judgments about causation employ a heuristic and use judgments about responsibility in
lieu of judgments about causation.  It does seem that judgments of responsibility are sometimes
easier to make than causal judgments; it is often very clear that someone should or should not be
blamed (or praised) for an outcome even when it is not clear what to say about their causal
relationship to the outcome.
This possibility is undermined by the fact that people have anti-transitivity intuitions
about cases that do not involve human beings.  For example, in the study I did of undergraduates,
I used the following thought experiment:
The Desert Lily is a rare flower that only blooms once a day, exactly at noon.  However, it will not
bloom when it is wet.  G has a Desert Lily in his garden.  He has also installed a machine that
detects moisture in the atmosphere, and if it detects more than a certain amount, opens an umbrella
over the Desert Lily.  At 11:55 am, it starts to rain on G’s garden, heavily enough that the Desert
Lily would be very wet by noon.  The machine quickly opens its umbrella over the Desert Lily.
The Desert Lily is dry at 12:00 and blooms.  Is the rain one of the causes of the Desert Lily
blooming at noon this day?  Is the rain one of the causes of the umbrella opening over the Desert
Lily?  Is the umbrella opening one of the causes of the Desert Lily blooming at noon this day?
72% of respondents reported anti-transitivity intuitions in response to this thought experiment.  I
also gave a thought experiment involving computer viruses and computers that generated anti-
transitivity intuitions in 70% of respondents.  It is extremely implausible that subjects judge
computer viruses or garden machines as being capable of being responsible for outcomes (in the
sense of the word “responsible” that differs in meaning from “caused”). [88] At least some anti-
transitivity intuitions seem to be clearly based on judgments about causation, not responsibility.
[88] Judith Jarvis Thomson, however, does argue that we can and should hold non-agents responsible; at the very least, we can hold them at fault (Thomson, 2003).  Her reasons for claiming this are quite weak; she claims that only such a view can properly account for the truth of sentences like “The gardener’s failing to water the plants caused the plants to die.”  Her argument is that there is no law that connects the failure to water to the death of the plants.  But that is only true if one considers only laws of physics.  I think that is too narrow-minded; humans cause things to happen all the time, and not only through purely physical interaction, but through social interactions as well.  If causation is to be governed by laws, it should be governed by social laws in addition to physical ones.  Given this, it is plausible that there is some mixture of social and physical laws that explains why a gardener’s failure to water would result in the death of the plants he was paid to water (whereas the Queen’s failure to water them did not cause their death).
One might argue that anti-transitivity thought experiments all involve odd causal chains
with which we have little experience.  Thus, even if we can generally have reliable intuitions
about causation based on learning from experience, we should not expect experience to allow us
to have reliable intuitions about the cases described in anti-transitivity thought experiments.  Does
this argument hold water?  First, anti-transitivity thought experiments do often involve odd causal
chains; my Desert Lily thought experiment is a good example of this.  But they do not always.
Kvart’s thought experiment, in which a finger is severed in a factory and reattached by doctors,
involves relatively normal causal processes.  Further, we experience situations that violate
transitivity on a regular basis in real life.  For example, consider a case in which my sick friend
sneezes on me, in response to which I take a bunch of vitamin C, which helps me avoid getting
sick myself.  In that case we have a clear causal chain from the sneeze to the not getting sick, but
intuitively the sneeze does not cause me to remain healthy.  Another ordinary example is the
following:  A sits down to play video games, but doing so reminds him that he has a big test to
study for, so he studies for it and does well (one can substitute any leisure time activity for video
games, and any responsibility for studying, and get an example which occurs regularly in our
lives).  These sorts of cases are ones we are familiar with from our daily lives; given that we have
good reason to think that our intuitions about causation are generally accurate about “ordinary”
sequences of events, we should not discount our intuitions about these cases.  Since these
intuitions are in accord with our intuitions about the odder anti-transitivity cases, we should not
reject our anti-transitivity intuitions because they are sometimes elicited by outlying cases.
Intuitions about causation, in general, are good evidence about causation.  There does not
seem to be any reason to doubt the evidentiary value of anti-transitivity intuitions in general.
This means that we have good evidence that causation is not transitive.  Do we have good
intuitive evidence that causation is transitive?
Should We Reject Transitivity Intuitions?
What is the intuitive support for the claim that causation is transitive?  There are specific
case intuitions that are consistent with transitivity, but for most non-transitive relations there will
be situations in which sets of objects behave in ways which are consistent with the relation being
transitive, so single cases consistent with causal transitivity are not very good evidence that
causation is transitive.  There are also direct transitivity intuitions:  intuitions with the content
“Causation is transitive,” or something equivalent.  We will discuss those in a moment.  Finally,
there is the fact that much of our natural, unreflective talk about causation and use of the concept
is consistent with transitivity.  For example, it is intuitive that we can cause E2 by causing E1: I
can cause my floor to become clean by causing someone to decide to clean my floor.
When children deny that they brought about some bad outcome (e.g. making their brother cry),
we point out that they caused something that, in turn, caused that outcome (e.g. they broke their
brother’s action figure).  [Footnote 89: This latter type of evidence for transitivity was brought to my attention by Kadri Vihvelin.]
I think this last type of evidence is not terribly strong evidence that causation is transitive.
Causation might look transitive quite often without actually being transitive; perhaps most of the
time when A causes B and B causes C, A also causes C (but this is not entailed by A causing B
and B causing C).  The fact that we can cause E2 by causing E1 is not very strong evidence that
causation is transitive, because there are many cases in which X can be done by doing Y, but
where doing Y does not also involve doing X.  For example, I can go to work by going to an
address in Los Angeles, but going to that address does not entail that I am going to work (since I
could work someplace else, or could go to that address for some other reason).
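To make the point schematic (the notation is mine, introduced only for clarity): writing C(x, y) for “x causes y,” the observation that causal chains usually extend amounts to

\[ P\big(C(a,c) \mid C(a,b) \wedge C(b,c)\big) \approx 1, \]

and this can hold even if \(\forall x, y, z\; \big(C(x,y) \wedge C(y,z) \rightarrow C(x,z)\big)\) is false.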
If there is strong evidence that causation is transitive, it comes from our transitivity
intuitions.  Now, we probably all agree that no one, or almost no one, hears the sentence
“Causation is transitive,” and has a strong intuition that it is true unless they have been studying
the philosophy of causation for a long time, in which case this intuition is quite likely to be due in
part to mere exposure effects and verbal overshadowing.  This should not be surprising; it would
be quite odd if most human beings had a word like “transitive” associated with their unconscious
concept CAUSE.  Instead, the sorts of general sentences about the transitivity of causation which
are intuitively compelling are sentences more like “For any three events, if the first causes the
second and the second causes the third, then the first always causes the third.”  [Footnote 90: Certainly every philosopher I have talked to about this has held out general sentences like this as the ones that are intuitive.]  The intuitiveness of a sentence like this is not very good evidence for its truth.
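For explicitness, the quoted sentence formalizes (in the notation introduced above, which is mine) as

\[ \forall e_1, e_2, e_3\; \big(C(e_1, e_2) \wedge C(e_2, e_3) \rightarrow C(e_1, e_3)\big), \]

and the sneeze case, taken at face value, supplies a counterexample of the form

\[ C(s, m) \wedge C(m, h) \wedge \neg C(s, h), \]

where s is the sneeze, m my taking of medicine, and h my remaining healthy.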
As discussed in Chapter 2, although we do not know for sure how general intuitions come
about, the best-case scenario for them is that they directly evoke rules encoded in our unconscious
concepts.  For example, I might have my unconscious concepts of DOG and ANIMAL strongly
associated, so that I am likely to have the intuition that dogs are animals.  Since this strong
association exists only if I typically experience something as being an animal when I experience
it as being a dog, it is evidence of a strong connection in reality between the things associated.
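One rough way to put this evidential condition, in a probabilistic gloss of my own (not a claim drawn from the psychology literature): the DOG-ANIMAL association is strong only if, across my experience,

\[ P\big(\text{ANIMAL is deployed} \mid \text{DOG is deployed}\big) \approx 1, \]

and a conditional frequency that high is itself defeasible evidence that whatever falls under DOG nearly always falls under ANIMAL.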
Let’s call these intuitions abstraction based general intuitions; they are general intuitions
(intuitions about a type of thing, rather than some specific instance of a type), and they are direct
expressions of facts about our unconscious concepts, which are in a sense abstractions from
experience.  In order for us to have an abstraction based general intuition about a sentence – for
example, in order for the sentence “Dogs are animals,” to seem intuitively true – the sentence
must use words that are associated in the right way with the relevant unconscious concepts.  This
requires that certain of the words in the sentence have been experienced at the same time as the
unconscious concepts they are supposed to express were deployed.  The sentence “Les chiens
sont des animaux,” is not intuitively true to most Americans, even though the concepts DOG and
ANIMAL are strongly associated for most everyone, because the words in the sentence are not
associated with the concepts DOG and ANIMAL.  Even an American who spoke some French, and
could (with effort) understand the sentence would not find it intuitively true; rather, they would
only find the English translation of the sentence intuitive, because while they understand the
French terms, they have not associated them with the relevant concepts strongly enough yet.  This
is reflected in research on mathematical intuitions in second language learners; it is quite hard to
do relatively simple math in a second language even though a large number of mathematical
propositions are intuitively true (at least when expressed in one’s first language) (Holt, 2008).
In order for a sentence to express an abstraction based general intuition, the words in the
sentence must be connected to the unconscious concepts involved in the right way.  We cannot, of
course, observe this connection directly.  So, when we find some general sentence intuitive, and
we want to know if it likely expresses an abstraction based general intuition (and is thus fairly
good evidence for its content), we have to ask ourselves if it is plausible that the words in the
sentence are associated with the relevant unconscious concepts.  In the present case, we must ask
if we have reason to believe that the words in the sentence “For any three events, if the first
causes the second and the second causes the third, then the first always causes the third,” (or
sentences like this) are associated with our unconscious concept CAUSE.  I find this implausible,
because it seems very unlikely that when we think about specific chains of events (which are
generally how we learn about causation) we think about any of these words (other than
“causes/caused”) very often.
If the words in that sentence are not associated with CAUSE, then the sentence probably
does not express an abstraction based general intuition.  But why then is it intuitive?  We
probably also have general intuitions that are due to unconscious consideration of exemplars of
concepts, rather than direct accessing of facts encoded in our unconscious concepts.  Let’s call
these exemplar based general intuitions.  As we saw in Chapter 2, when we unconsciously
consider exemplars of concepts, we typically recall prototypical exemplars, salient exemplars,
and exemplars that confirm the hypothesis under consideration.  Prototypical exemplars are those
exemplars that are “normal,” those that are like a great number of other exemplars of the concept.
Salient exemplars are those that stand out in our memory for one reason or another.
It should be no surprise that prototypical exemplars of causation look transitive; the most
prototypical examples of causation (at least to philosophers) are things like billiard ball collisions.
Likewise, salient examples of causation are also likely to be transitive, because what is likely to
stand out in our memory are surprising causal chains; these will generally be fairly interesting
because the first event in the chain was not expected to cause the last yet it did.  And, of course,
exemplars recalled due to confirmation bias should be expected to look transitive.  So, the general
intuition that causation is transitive is likely due to unconscious recollection of exemplars, and
does not give us a significant amount of information about causation beyond what we already
had.  We already knew that many examples (and quite common ones) of causal chains look as if
they could be transitive; we did not need a general intuition to tell us that.
General intuitions are also, at best, inductive evidence for their content, of a sort that is
very vulnerable to counter-examples.  Not all claims based on inductive evidence are so
vulnerable.  In science, for example, we often have extremely strong inductive evidence for
claims, evidence so strong that when confronted with a single counter-example, or even a few
counter-examples, we reject the counter-example rather than the claim they are supposed to
falsify.  We can do this by re-interpreting the counter-example, or seeing it as an error or artifact
of our measuring process, or by simply remaining agnostic on its status.  General intuitions,
however, can come about either due to consideration of only a few exemplars, or due to
associations formed by the majority, but not the entirety, of a certain type of experience.  Because of
this, when we have any decent counter-example to a general intuition, we should see the content
of that general intuition as falsified.
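The asymmetry can be stated precisely (a small formal observation I add here): since a universal claim entails each of its instances, its probability can never exceed the probability of any instance:

\[ P\big(\forall x\, \varphi(x)\big) \le P\big(\varphi(a)\big) \quad \text{for any } a. \]

So once a counter-example makes \(P(\varphi(a))\) low, the universal claim can stay credible only if we have independent grounds for doubting the counter-example, and general intuitions rarely supply grounds of that strength.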
The sort of intuitions that philosophers generally point to as evidence that causation is
transitive, and which are the only ones they plausibly could point to, do not give us much good
evidence about the nature of causation, and, even if they did, the claims they support would be
quite vulnerable to counter-example.  Thus, we do not have very good intuitive evidence that
causation is transitive.
Reconciling Intuitions
This does not mean that causation is not transitive.  What it shows is that we have
evidence that a good theory of causation should respect – anti-transitivity intuitions – and that a
good theory of causation should be consistent with this evidence.  And it shows that if one wishes
to argue that causation is transitive, one must furnish evidence for transitivity other than our
intuitions about transitivity.  For example, one might argue that the theory that best fits the
evidence we have about causation entails that it is transitive.  It is possible to articulate a theory
of causation according to which both causation is transitive and our anti-transitivity intuitions are
correct.
Consider, for example, Jonathan Schaffer’s theory of contrastive causation (Schaffer,
2005).  According to Schaffer, causation is a quaternary relation.  In order for X to cause Y, the
occurrence of X, rather than some other event (X’), must cause Y to occur, rather than some other
event (Y’); so causation is a relation between X, Y, X’, and Y’.  This theory allows our anti-
transitivity intuitions to be consistent with the transitivity of causation.  Consider the example of
my friend sneezing on me, which causes me to take medicine, which causes me to stay healthy,
yet where the sneeze did not cause me to stay healthy.  There are three (relevant) potential causal
connections in this story:  the sneeze causing me to take medicine, the medicine causing me to
stay healthy, and the sneeze causing me to stay healthy.  Let’s consider each within the
contrastive causation framework.  The sneeze’s occurring, rather than not occurring, causes me to
take medicine rather than not, since if the sneeze had not occurred I would not have taken
medicine.  Taking medicine, rather than not taking medicine, causes me to stay healthy, rather
than get sick, since in that situation had I not taken medicine I would have gotten sick.  The
interesting question is:  is it the case that the sneeze’s occurring, rather than not occurring, causes
me to not get sick, rather than get sick?  It seems this is not the case, and that this is still a
counterexample to transitivity.
Schaffer has a response to this sort of case.  He argues that the middle contrast (taking
medicine rather than not taking medicine) changes when we talk about what the sneeze causes
versus what causes me to remain healthy.  Under Schaffer’s analysis, the sneeze causes the
event of me taking medicine in the presence of germs, in contrast to the event of me not taking
medicine in the absence of germs.  However, what causes me to remain healthy rather than get
sick is me taking medicine in the presence of germs, as opposed to the event of me not taking
medicine in the presence of germs.  This is true because the contrast to the first event – the sneeze
– is a non-sneeze.  The sneezing causes the taking of medicine, but the contrast cause to taking
medicine is the case that would occur if the sneezing had not occurred, which involves an absence
of germs.  This contrast situation is not one in which I would have gotten sick had I not taken the
medicine.  Thus, there is no causal chain with all the same relata connecting the sneeze to the
health, and thus the fact that the sneeze does not cause the health does not imply that causation is
intransitive.  At the same time, Schaffer’s theory respects our intuitions, because he agrees that
the sneeze causes the medicine taking, and the medicine taking causes me to be healthy.
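The structure of Schaffer’s reply can be displayed schematically (the rendering and labels are mine, following Schaffer, 2005).  Write C(c, c*, e, e*) for “c rather than c* causes e rather than e*.”  The two links in the story are then

\[ C\big(\textit{sneeze},\ \textit{no sneeze},\ \textit{dose}_{\text{germs}},\ \textit{no-dose}_{\text{no germs}}\big) \]
\[ C\big(\textit{dose}_{\text{germs}},\ \textit{no-dose}_{\text{germs}},\ \textit{health},\ \textit{sickness}\big), \]

and since the effect-contrast of the first link (no dose, without germs) is not the cause-contrast of the second (no dose, with germs), the links share no middle pair, and no instance of transitivity is generated.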
Conclusion
I have attempted to demonstrate something not about causation itself but about theories
of causation:  a good theory of causation must respect our anti-transitivity intuitions and cannot
assume the transitivity of causation.  I have also attempted to demonstrate several things about the
practice of philosophy:  I have shown that psychology can justify our use of intuitions about
certain issues in philosophy; I have shown how one can respond to psychologically informed
criticisms of the use of certain types of intuitions; and I have shown how we can use what we know
about how intuitions work to resolve conflicts between intuitions, by arguing on principled
grounds that one intuition is better evidence than another.  We have seen both a criticism of and a
defense of intuitions, based both on my general theory of intuitions and an understanding of
specific cognitive biases.  The sort of back and forth we see in this chapter is something that I
believe should become part of the normal course of philosophy.  But, one might ask, is this still
philosophy?
It is.  Philosophers have used intuitions since philosophy began.  We have always
understood that we have to be careful when we use intuitions, because not every intuition is
correct, and not every intuition tells us what it on the surface seems to.  Whether we wished to
do conceptual analysis, or understand things themselves, we have understood that intuitions are
just one sort of data, and that putting them together with our other data to form theories requires
not only informing our theories with our intuitions, but also informing our use of intuitions with
our theories.  I am not suggesting that we give up philosophical analysis.  I am not suggesting that
we give up the idea of reflective equilibrium.  What I am suggesting is that we should know more
about intuitions in order to better use them as evidence.  Understanding when to use or not use
intuitions is not a merely psychological question; as we have seen, it requires answering
philosophical questions.  Further, when this hybrid of psychology and philosophy tells us that we
can and should use intuitions as evidence about some subject, our project has just started.  We
still need to do philosophy to determine what it is that intuitions tell us, and what theory best
accounts for these facts.  The sort of practice I advocate is philosophy; it is just philosophy that
better understands its evidence and how to use it.
Bibliography
Allen, S.W. & Brooks, L.R., “Specializing the Operation of an Explicit Rule,” Journal of
Experimental Psychology: General, 1991, v.120, n.1, 3-19.
Ambady, N. & Rosenthal, R. “Thin Slices of Expressive Behavior as Predictors of Interpersonal
Consequences:  A Meta-Analysis,” Psychological Bulletin, 1992, v.111, n.2, 256-274.
American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders IV,
1994, Washington, D.C.:  American Psychiatric Association.
Babey, S.H., Queller, S., Klein, S.B. “The Role of Expectancy Violating Behaviors in the
Representation of Trait Knowledge:  A Summary-Plus-Exception Model of Social Memory,”
Social Cognition, 1998, v.16, n.3, 287-339.
Baranski, J.V. & Petrusic, W.M., “Probing the Locus of Confidence Judgments:  Experiments on
the Time to Determine Confidence,” Journal of Experimental Psychology:  Human Perception
and  Performance, 1998, v.24, n.3, 929-945.
Barrouillet, P., Grosset, N., Lecas, J., “Conditional Reasoning by Mental Models:  Chronometric
and Developmental Evidence,” Cognition, 2000, v.75, 237-266.
Bealer, G., “Intuition and the Autonomy of Philosophy,” in DePaul, M.R. & Ramsey, W., eds.,
Rethinking Intuitions, 1998, Lanham, MA:  Rowman & Littlefield Publishers.
Bechara, A., Damasio, H., Tranel, D., Damasio, A.R., “Deciding Advantageously Before
Knowing the Advantageous Strategy,” Science, 1997, v.275, n.5304, 1293-1295.
Begg, I.M., Anas, A., Farinacci, S., “Dissociation of Processes in Belief:  Source Recollection,
Statement Familiarity, and the Illusion of Truth,” Journal of Experimental Psychology:  General,
1992, v.121, n.4, 446-458.
Berscheid, E., Graziano, W., Monson, T., Dermer, M., “Outcome dependency: Attention,
attribution, and attraction,” Journal of Personality and Social Psychology, 1976, v.34, 978-989.
Betsch, T., Plessner, H., Schwieren, C., Gutig, R., “I Like It but I Don't Know Why:  A Value-
Account Approach to Implicit Attitude Formation,” Personality and Social Psychology Bulletin,
2001, v.27, n.2, 242-253.
Bishop, M.A. & Trout, J.D., Epistemology and the Psychology of Human Judgment, 2005, New
York:  Oxford University Press.
Bok, H., Freedom and Responsibility, 1998, Princeton, NJ:  Princeton University Press.
BonJour, L., “Can Empirical Knowledge Have a Foundation?” American Philosophical
Quarterly, 1978, v.15, n.1, 1-13.
BonJour, L., The Structure of Empirical Knowledge, 1985, Cambridge, MA:  Harvard University
Press.
BonJour, L., In Defense of Pure Reason, 1998, Cambridge:  Cambridge University Press.
BonJour, L. & Sosa, E., Epistemic Justification:  Internalism vs. Externalism, Foundations vs.
Virtues, 2003, Malden, MA:  Blackwell Publishing.
Bratman, M.E., Intentions, Plans, and Practical Reason, 1987, Boston:  Harvard University Press.
Brooks, L.R., Norman, G.R., Allen, S.W., “Role of Specific Similarity in a Medical Diagnostic
Task,” Journal of Experimental Psychology: General, 1991, v.120, n.3, 278-287.
Burge, T., “Individualism and the Mental,” Midwest Studies in Philosophy, 1979, v.4, 73-121.
Campos, J.J., “The Visual Cliff and Fear of Heights,”
http://babycenter.berkeley.edu/VisualCliff.htm
Carruthers, P., “The Case for Massively Modular Models of Mind,” in Stainton, R. (ed),
Contemporary Debates in Cognitive Science, 2005, Oxford:  Blackwell.
Chapman, G.B. & Johnson, E.J., “Incorporating the Irrelevant,” in Gilovich, T., Griffin, D.,
Kahneman, D., (eds.) Heuristics and Biases:  the Psychology of Intuitive Judgment, 2002, New
York:  Cambridge University Press.
Chapman, L.J. & Chapman, J.P., “Genesis of Popular but Erroneous Psychodiagnostic
Observations,” Journal of Abnormal Psychology, 1967, v.72, 193-204.
Chi, M.T.H. “Commonsense Misconceptions of Emergent Processes:  Why Some Misconceptions
are Robust,” The Journal of the Learning Sciences, 2005, v.14, n.2, 161-199.
Cleeremans, A. & McClelland, J.L., “Learning the Structure of Event Sequences,” Journal of
Experimental Psychology: General, 1991, v.120, n.3, 235-253.
Cohen, L.J, The Dialogue of Reason, 1986, New York:  Oxford University Press.
Confrey, J. “A Review of the Research on Student Misconceptions in Mathematics, Science, and
Programming,” Review of Research in Education, 1990, v.16, 3 – 56.
Cummins, R. “Reflections on Reflective Equilibrium,” in DePaul, M.R. & Ramsey, W., eds.,
Rethinking Intuitions, 1998, Lanham, MA:  Rowman & Littlefield Publishers.
Daniel, D.B. & Klaczynski, P.A., “Developmental and Individual Differences in Conditional
Reasoning:  Effects of Logic Instructions and Alternative Antecedents,” Child Development,
2006, v.77, n.2, 339-354.
David, M., “Truth as the Epistemic Goal,” in Steup, M., (ed) Knowledge, Truth, and Duty, 2001,
New York:  Oxford University Press.
de Jong, P.J., Merckelbach, H., Bögels, S., Kindt, M., “Illusory Correlation and Social Anxiety,”
Behavior Research and Therapy, 1998, v.36, 1063-1073.
Denes-Raj, V. & Epstein, S. “Conflict Between Intuitive and Rational Processing: When People
Behave Against Their Better Judgment,” Journal of Personality and Social Psychology, 1994,
v.66, n.5, 819-829.
De Neys, W., Schaeken, W., d’Ydewalle, G., “Inference Suppression and Semantic Memory
Retrieval:  Every Counterexample Counts,” Memory & Cognition, 2003, v.31, n.4, 581-595.
DePaul, M.R., “Value Monism in Epistemology,” in Steup, M., (ed) Knowledge, Truth, and Duty,
2001, New York:  Oxford University Press.
Dijksterhuis, A., “Think Different:  The Merits of Unconscious Thought in Preference and
Decision Making,” Journal of Personality and Social Psychology, 2004, v.87, n.5, 586-598.
Dijksterhuis, A. & Nordgren, L.F., “A Theory of Unconscious Thought,” Perspectives on
Psychological Science, 2006, v.1, n.2, 95-109.
Dijksterhuis, A. & van Olden, Z., “On the Benefits of Thinking Unconsciously:  Unconscious
Thought can Increase Post-Choice Satisfaction,” Journal of Experimental Social Psychology,
2006, v.42, 627-631.
Ditto, P.H. & Lopez, D.F., “Motivated Skepticism:  Use of Differential Decision Criteria for
Preferred and Nonpreferred Conclusions,” Journal of Personality and Social Psychology, 1992,
v.63, n.4, 568-584.
Ditto, P.H., Scepanskiy, J.A., Munro, G.D., Apanovitch, A.M., Lockhart, L.K. “Motivated
Sensitivity to Preference Inconsistent Information,” Journal of Personality and Social Psychology,
1998, v.75, n.1, 53-69.
Doris J.M. & Stich, S. “As a Matter of Fact: Empirical Perspectives on Ethics,”  in Jackson, F. &
Smith, M. eds., The Oxford Handbook of Contemporary Analytic Philosophy,  2005, Oxford:
Oxford University Press, 114-152.
Dromsky, D., “There is No Door:  Finally Solving the Problem of Moral Luck,” The Journal of
Philosophy, 2004, v.51, n.9, 1-20.
Dunning, D., Meyerowitz, J.A., Holzberg, A.D., “Ambiguity and Self-Evaluation:  The Role of
Idiosyncratic Trait Definitions in Self-Serving Assessments of Ability,” Journal of Personality
and Social Psychology, 1989, v.57, n.6, 1082-1090.
Ebbesen, E.B. & Rienick, C.B. “Retention Interval and Eyewitness Memory for Events and
Personal Identifying Attributes,” Journal of Applied Psychology, 1998, v.83, n.5, 745-762
Epley, N. & Dunning, D., “Feeling ‘Holier than Thou:’ Are Self-Serving Assessments Produced
by Errors in Self- or Social Prediction?” Journal of Personality and Social Psychology, 2000, v.79,
861-875.
Fabiani, M. & Donchin, E., “Encoding Processes and Memory Organization: A Model of the von
Restorff Effect,” Journal of Experimental Psychology: Learning, Memory, and Cognition, 1995,
v.21, n.1, 224-240.
Falk, A., Desire and Belief, 2004, Lantham, MD:  Hamilton Books.
Falk, R. & Konold, C., “Making Sense of Randomness:  Implicit Encoding as a Basis for
Judgment,” Psychological Review, 1997, v.104, n.2, 301-318.
Fazio, R.H.,  “On the automatic activation of associated emotions:  an overview,”  Cognition and
Emotion, 2001, 15, 115-141.
Finn, R., “Different Minds,” Discover, 1991, v.12, 55-58.
Fodor, J.A., A Theory of Content and Other Essays, 1990, Cambridge:  The MIT Press.
Fodor, J.A., The Mind Doesn’t Work that Way, 2000, Cambridge:  The MIT Press.
Freund, T., Kruglanski, A.W., Shpitzajzen, A., “The Freezing and Unfreezing of Impressional
Primacy,” Personality and Social Psychology Bulletin, 1983, v.11 n.4, p. 479-487.
Fumerton, R., “A priori philosophy after an a posteriori turn,” Midwest Studies in Philosophy,
1999, XXIII, 21-33.
Galanter, E.H., & Smith, W.A.S., “Some Experiments on a Simple Thought-Problem,” American
Journal of Psychology, 1958, v.71, 359-366.
Gettier, E.L., “Is Justified True Belief Knowledge?” Analysis, 1963, v.23, 121-123.
Gilbert, D.T. “How Mental Systems Believe,” American Psychologist, 1991, v.46, n.2, 107-119.
Gilbert, D.T. “Inferential Correction,” in Gilovich, T., Griffin, D., Kahneman, D., (eds.)
Heuristics and Biases:  the Psychology of Intuitive Judgment, 2002, New York:  Cambridge
University Press.
Gilbert, D.T. & Krull, D.S. “Seeing Less and Knowing More:  The Benefits of Perceptual
Ignorance,” Journal of Personality and Social Psychology, 1988, v.54, n.2, 193-202.
Gilbert, D.T., Krull, D.S., & Malone, P.S. “Unbelieving the Unbelievable:  Some Problems in the
Rejection of False Information,” Journal of Personality and Social Psychology, 1990, v.59, n.4,
601-613.
Gilbert, D.T., Tafarodi, R.W., & Malone, P.S., “You Can’t Not Believe Everything You Read,”
Journal of Personality and Social Psychology, 1993, v.65, n.2, 221-233.
Gilovich, T. & Savitsky, K., “Like Goes with Like:  The Role of Representativeness in Erroneous
and Pseudo-Scientific Beliefs,” in Gilovich, T., Griffin, D., Kahneman, D., (eds.) Heuristics and
Biases:  the Psychology of Intuitive Judgment, 2002, New York:  Cambridge University Press.
Gilovich, T., Vallone, R., Tversky, A., “The Hot Hand in Basketball: On the Misperception of
Random Sequences,” Cognitive Psychology, 1985, XVII, 295–314.
Gladwell, M., Blink, 2005, New York:  Allen Lane.
Graesser, A.C., Baggett, W., Williams, K. “Question-Driven Explanatory Reasoning,” Applied
Cognitive Psychology, 1996, v.10, S17-S31.
Greenwald, A.G., “New Look 3:  Unconscious Cognition Reclaimed,” American Psychologist,
1992, v.47, n.6, 766-779.
Greenwald, A.G. & Banaji, M.R., “Implicit Social Cognition:  Attitudes, Self-Esteem, and
Stereotypes,” Psychological Review, 1995, v.102, n.1, 4-27.
Hall, N., “Causation and the Price of Transitivity,” Journal of Philosophy, 2000, v.97, 198-222.
Heuer, R.J., Jr., Psychology of Intelligence Analysis, 1999, Center for Study of Intelligence,
Central Intelligence Agency.
Hill, T., Lewicki, P., Czyzewska, M., Boss, A., “Self-Perpetuating Development of Encoding
Biases in Person Perception,” Journal of Personality and Social Psychology, 1989, v.57, 373-387.
Hilton, D.J. “Conversational Processes and Causal Explanation,” Psychological Bulletin, 1990,
v.107, n.1, 65-81.
Holt, J., “Numbers Guy:  Are Our Brains Wired for Math?” New Yorker, 2008, March 3, 42-47.
Hughes, A.D., & Whittlesea, B.W.A., “Long Term Semantic Transfer:  An Overlapping
Operations Account,” Memory & Cognition, 2003, v.31, n.3, 401-411.
Jackson, F., From Metaphysics to Ethics: A Defense of Conceptual Analysis, 1998, Oxford:
Oxford University Press.
Jones, W.E. “Why Do We Value Knowledge?” American Philosophical Quarterly, 1997, v.34,
n.4, 423-439.
Kahneman, D. & Frederick, S. “Representativeness Revisited:  Attribute Substitution in Intuitive
Judgment,” in Gilovich, T., Griffin, D., Kahneman, D. (eds), Heuristics and Biases, 2002, New
York:  Cambridge University Press.
Kahneman, D. & Miller, D.T., “Norm Theory: Comparing Reality to Its Alternatives,” in
Gilovich, T., Griffin, D., Kahneman, D. (eds), Heuristics and Biases, 2002, New York:
Cambridge University Press.
Kahneman, D. & Tversky, A., “The Simulation Heuristic,” in Kahneman, D., Slovic, P.,Tversky,
A. (eds.), Judgment Under Uncertainty:  Heuristics and Biases, 1982, Cambridge:  Cambridge
University Press.
Karmiloff-Smith, A., “From Meta-Processes to Conscious Access:  Evidence from Children’s
Metalinguistic and Repair Data,” Cognition, 1986, v.23, 95-147.
Karmiloff-Smith, A., Beyond Modularity:  A Developmental Perspective on Cognitive Science,
1992, Boston: MIT Press.
Karmiloff-Smith, A., “Williams Syndrome,” Current Biology, 2007, v.17, R1035-R1036.
Kauppinen, A., “The Rise and Fall of Experimental Philosophy,” Philosophical Explorations,
2007, v.10, n.2, 95-118.
Kelley, C.M. & Lindsay, D.S., “Remembering Mistaken for Knowing:  Ease of Retrieval as a
Basis for Confidence in Answers to General Knowledge Questions,” Journal of Memory and
Language, 1993, v. 32, 1-24.
Kiess, H.O., Statistical Concepts for the Behavioral Sciences, 2002, Boston:  Allyn & Bacon.
Kirby, K.N., “Probabilities and Utilities of Fictional Outcomes in Wason’s Four-Card Selection
Task,” Cognition, 1994, v.51, 1-28.
Klayman, J. & Ha, Y. “Confirmation, Discontinuation, and Information in Hypothesis Testing”
Psychological Review, 1987, v.4, no.2, 211-228.
Klein, S.B, Cosmides, L., Tooby, J., Chance, S., “Priming Exceptions:  A Test of the Scope
Hypothesis in Naturalistic Trait Judgments,” Social Cognition, 2001, v.19, n.4, 443-468.
Klein, S.B, Cosmides, L., Tooby, J., Chance, S., “Decisions and the Evolution of Memory:
Multiple Systems, Multiple Functions,” Psychological Review, 2002, v.109, n.2, 306-329.
Klein, S.B, Loftus, J., Trafton, J.G., Fuhrman, R.W., “Use of Exemplars and Abstractions in Trait
Judgments:  A Model of Trait Knowledge About the Self and Others,” Journal of Personality and
Social Psychology, 1992, v.63, n.5, 739-753.
Korb, K., Stillwell, M., “The Story of the Hot Hand:  Powerful Myth or Powerless Critique?”
presented to the International Conference of Cognitive Science, 2003.
Kornblith, H., “Appeals to intuition and the ambitions of epistemology,” in Hetherington, S., (ed.)
Epistemology Futures, 2006, Oxford: Oxford University Press.
Knowles, E.S., & Condon, C.A., “Why People Say ‘Yes’: A Dual-Process Theory Of
Acquiescence,” Journal of Personality and Social Psychology, 1999, v.77, n.2, 379-386.
Kripke, S., Naming and Necessity, 1972, Boston:  Harvard University Press.
Kruglanski, A. W. & Freund, T., “The freezing and unfreezing of lay-inferences: Effects on
impressional primacy, ethnic stereotyping, and numerical anchoring,” Journal of Experimental
Social Psychology, 1983, 19, 448-468.
Kruglanski, A.W. & Mayselless, O., “Motivational Effects in the Social Comparison of
Opinions,” Journal of Personality and Social Psychology, 1987, v.53, n.5, p. 834-842.
Kunda, Z., “Motivated Inference:  Self-Serving Generalization and Evaluation of Causal
Theories,” Journal of Personality and Social Psychology, 1987, v.53, n.4, 636-647.
Kunda, Z., “The Case for Motivated Reasoning,” Psychological Bulletin, 1990, v.108, n.3, 480-
498.
Kvart, I., “Transitivity and Preemption of Causal Relevance,” Philosophical Studies, 1991, LXIV,
126-160.
Lakoff, G., Women, Fire, and Dangerous Things, 1987, Chicago:  University of Chicago Press.
Lewicki, P., “Nonconscious Biasing Effects of Single Instances on Subsequent Judgment,”
Journal of Personality and Social Psychology, 1985, v.48, n.3, 563-574.
Lewicki, P. ,“Processing Information About Covariations that Cannot Be Articulated,” Journal of
Experimental Psychology:  Learning, Memory, and Cognition, 1986, v.12, n.1, 135-146.
Lewicki, P., Czyzewska, M., Hoffman, H., “Unconscious Acquisition of Complex Procedural
Knowledge,” Journal of Experimental Psychology:  Learning, Memory, and Cognition. 1987,
v.13, 523-530.
Lewicki, P., Hill, T., Czyzewska, M., “Nonconscious Acquisition of Information,” American
Psychologist, 1992, v.47, n.6, 796-801.
Lewicki, P., Hill, T., Czyzewska, M., “Nonconscious Indirect Inferences in Encoding,” Journal of
Experimental Psychology:  General, 1994, v.123, n.3, 257-263.
Lewicki, P., Hill, T., Sasaki, I., “Self-Perpetuating Development of Encoding Biases,” Journal of
Experimental Psychology:  General, 1989, 118, 323-337.
Lewis, D., “Causation,” in Sosa, E. & M. Tooley Causation, 1993, New York:  Oxford University
Press [originally printed in Journal of Philosophy (1973), p. 556-567]
Lin, E.L. & Murphy, G.L., “Effects of Background Knowledge on Object Categorization and Part
Detection,” Journal of Experimental Psychology: Human Perception and Performance, 1997,
v.23, n.4, 1153-1169.
Linsky, B. & Zalta, E.N., “Naturalized Platonism versus Platonized Naturalism,” The Journal of
Philosophy, 1995, v.92, n.10, 525-555.
Loftus, E.F., “Make-Believe Memories,” American Psychologist, 2003, v.58, n.11, 867-873.
Machery, E., Mallon, R., Nichols, S., Stich, S., “Semantics, Cross-Cultural Style,” Cognition,
2004, v.92, B1- B12.
Mackie, J.L., Ethics:  Inventing Right and Wrong, 1977, excerpted in Darwall, S., Gibbard, A.,
Railton, P., Moral Discourse & Practice, 1997, New York:  Oxford University Press.
Malt, B.C., “An On-Line Investigation of Prototype and Exemplar Strategies in Classification,”
Journal of Experimental Psychology: Learning, Memory, and Cognition, 1989, v.15, n.4, 539-
555.
Markman, A.B. & Ross, B.H., “Category Use and Category Learning,” Psychological Bulletin,
2003, v.129, n.4, 592-613.
McKinley, S.C. & Nosofsky, R.M., “Investigation of Exemplar and Decision Bound Models in
Large, Ill-Defined Category Structures,” Journal of Experimental Psychology:  Human Perception
and Performance, 1995, v.21, n.1, 128-148.
Medin, D.L. & Schaffer, M.M, “Context Theory of Classification Learning,” Psychological
Review, 1978, v.85, n.3, 207-238.
Melcher, J.M. & Schooler, J.W., “The Misremembrance of Wines Past: Verbal and Perceptual
Expertise Differentially Mediate Verbal Overshadowing of Taste Memory,” Journal of Memory
and Language, 1996, v.35, 231-245.
Memon, A. & Bartlett, J., “The Effects of Verbalization on Face Recognition in Young and Older
Adults,” Applied Cognitive Psychology, 2002, v.16, 635-650.
Mervis, C.B. & Rosch, E., “Categorization of Natural Objects,” Annual Review of Psychology,
1981, v.32, p. 89-115.
Miller, R.B., “Without Intuitions,” Metaphilosophy, 2000, v.31, n.3, p. 231-250.
Montier, J., “The Folly of Forecasting:  Ignore All Economists, Strategists, & Analysts,” Global
Equity Strategy, 2005, (Dresdner, Kleinwort, Wasserstein).
Mottron, L, Dawson, M., Soulieres, I., Hubert, B., Burack, J., “Enhanced Perceptual Functioning
in Autism:  An Update, and Eight Principles of Autistic Perception,” Journal of Autism and
Developmental Disorders, 2006, v.36, 27-43.
Murphy, G.L., The Big Book of Concepts, 2002, Cambridge, MA:  MIT Press.
Nagel, T., “Moral Luck,” Mortal Questions, 1979, reprinted in Perry, J. & Bratman, M., eds.,
Introduction to Philosophy, 1986, 468-476.
Nederhouser, M. & Spivey, M. “Eye Tracking and Simulating the Temporal Dynamics of
Categorization,” 2004, presentation to the 26th Annual Meeting of the Cognitive Science Society.
Neilens, H.L., Handley, S.J., Newstead, S.J., “Dual Processes and Training in Statistical
Principles,” in B.G.Bara, L. Barsalou, & M. Bucciarelli (Eds.). Proceedings of the 27th Annual
Conference of the Cognitive Science Society, 2005, 1612-1617.
Neuberg, S. L. & Fiske, S. T., “Motivational influences on impression formation: Dependency,
accuracy-driven attention, and individuating information,” Journal of Personality and Social
Psychology, 1987, 53, 431-444.
Nichols, S., Stich, S., Weinberg, J. “Meta-Skepticism: Meditations in Ethno-Epistemology,” in
Luper, S., (ed.) The Skeptics, 2003, Aldershot, U.K.:  Ashgate Publishing.
Nickerson, R.S., “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of
General Psychology, 1998, v.2, n.2, 175-220.
Nisbett, R.E., Peng, K., Choi, I. & Norenzayan, A., “Culture and Systems of Thought:  Holistic
Versus Analytic Cognition,” Psychological Review, 2001, v. 108, n. 2, 291-310.
Norenzayan, A., Smith, E.E., Kim, B.J., Nisbett, R.E.,  “Cultural preferences for formal versus
intuitive reasoning,” Cognitive Science, 2002, 26, 653-684.
Nosofsky, R.M., “Exemplar Representation Without Generalization? Comment on Smith and
Minda’s (2000) ‘Thirty Categorization Results in Search of a Model,’” Journal of Experimental
Psychology:  Learning, Memory, and Cognition, 2000, v.26, n.6, 1735-1743.
Nosofsky, R.M. & Palmeri, T.J., “An Exemplar Based Random Walk Model of Speeded
Classification,” Psychological Review, 1997, v.104, n.2, 266-300.
Olson, M.A. & Fazio, R.H., “Implicit Attitude Formation Through Classical Conditioning,”
Psychological Science, 2001, v.12, n.5, 413-417.
Palmeri, T.J. & Blalock, C., “The Role of Background Knowledge in Speeded Perceptual
Categorization,” Cognition, 2000, v.77, B45-B57.
Palmeri, T.J. & Nosofsky, R.M., “Recognition Memory for Exceptions to the Category Rule,”
Journal of Experimental Psychology:  Learning, Memory, and Cognition, 1995, v.21, n.3, 548-568.
Perruchet, P. & Gallego, J. “A Subjective Unit Formation Account of Implicit Learning,” in
Berry, D., (ed.) How Implicit is Implicit Learning, 1997, New York:  Oxford University Press.
Perruchet, P., & Vinter, A., “The Self Organizing Consciousness:  A Framework for Implicit
Learning,” in French, R.M., & Cleeremans, A., (eds.) Implicit Learning and Consciousness, 2002,
New York:  Taylor and Francis.
Preston, S.D. & de Waal, F.B.M., “Empathy:  Its Ultimate and Proximate Bases,” Behavioral and
Brain Sciences, 2002, v.25, 1-72.
Pronin, E., Gilovich, T., Ross, L. “Objectivity in the Eye of the Beholder:  Divergent Perceptions
of Bias in Self Versus Others,” Psychological Review, 2004, v.111, n.3, 781-799.
Pust, J.E., Intuitions as Evidence, 2000, New York:  Garland.
Queller, S., Schell, T., Mason, W., “A Novel View of Between-Categories Contrast and Within-
Category Assimilation,” Journal of Personality and Social Psychology, 2006, v.91, n.3, 406-422.
Rader, A.W. & Sloutsky, V.M. “Processing of Logically Valid and Logically Invalid Conditional
Inferences in Discourse Comprehension,” Journal of Experimental Psychology:  Learning,
Memory, and Cognition, 2002, v.28, n.1, 59-68.
Reber, A.S., “Implicit Learning and Tacit Knowledge,” Journal of Experimental Psychology:
General, 1989, v.118, n.3, 219-235.
Riggs, W., “The Value Turn in Epistemology,” in Hendricks, V. & Pritchard, D.H., (eds.) New
Waves in Epistemology, Aldershot: Ashgate.
Rips, L.J, “Similarity, Typicality, and Categorization,” in Vosniadou, S. & Ortony, A., (eds.)
Similarity and Analogical Reasoning, 1989, New York: Cambridge University Press, 21-59.
Roediger, H.L., III, “Implicit Memory:  Retention Without Remembering,” American
Psychologist, 1990, v.45, 1043-1056.
Rosch, E. & Mervis, C.B., “Family Resemblances:  Studies in the Internal Structure of
Categories,” Cognitive Psychology, 1975, v.7, 573-605.
Rossnagel, C.S., “Revealing Hidden Covariation Detection:  Evidence for Implicit Abstraction at
Study,” Journal of Experimental Psychology:  Learning, Memory, and Cognition, 2001, v.27, n.5,
1276-1288.
Russo, J.E. & Schoemaker, P.J.H., “Managing Overconfidence,” Sloan Management Review,
1992, v.33, n.2, 7-17.
Ryan, R.S. & Schooler, J.W., “Whom Do Words Hurt? Individual Differences in Susceptibility to
Verbal Overshadowing,” Applied Cognitive Psychology, 1998, v.12, S105-S125.
Samuels, R., “Evolutionary Psychology and the Massive Modularity Hypothesis,” The British
Journal for the Philosophy of Science, 1998, v.49, 575-602.
Sanitoso, R., Kunda, Z., Fong, G.T., “Motivated Recruitment of Autobiographical Memories,”
Journal of Personality and Social Psychology, 1990, v.59, n.2, 229-241.
Schaffer, J., “Contrastive Causation,” The Philosophical Review, 2005, v.114, n.3, 297-328.
Scholl, B.J. & Leslie, A.M., “Modularity, Development, and ‘Theory of Mind,’” Mind and
Language, 1999, v.14, n.1, 131-153.
Schooler, J.W., Fiore, S.M, Brandimonte, M.A., “At a Loss From Words:  Verbal Overshadowing
of Perceptual Memories,” The Psychology of Learning and Motivation, 1997, v.37, 291-340.
Schooler, J.W., Ohlsson, S., Brooks, K., “Thoughts Beyond Words: When Verbalization
Overshadows Insight,” Journal of Experimental Psychology:  General, 1993, v.122, 166-183.
Schroyens, W., Schaeken, W., Handley, S., “In Search of Counter-Examples:  Deductive
Rationality in Human Reasoning,” The Quarterly Journal of Experimental Psychology, 2003,
v.56a, n.7, 1129-1145.
Schwartz, N. & Vaughn, L.A., “The Availability Heuristic Revisited:  Ease of Recall and Content
of Recall as Distinct Sources of Information,” in Gilovich, T., Griffin, D., Kahneman, D., (eds.)
Heuristics and Biases:  the Psychology of Intuitive Judgment, 2002, New York:  Cambridge
University Press.
Schwitzgebel, E., “A Phenomenal, Dispositional Account of Belief,” Nous, 2002, v.36, n.2, 249-
275.
Searle, J.R., “Minds, Brains, and Programs,” Behavioral and Brain Sciences, 1980, v.3, 417-424.
Seger, C.A., “Implicit Learning,” Psychological Bulletin, 1994, v.115, n.2, 163-196.
Shafir, E., “Philosophical Intuitions and Cognitive Mechanisms,” in DePaul, M.R. & Ramsey,
W., eds., Rethinking Intuitions, 1998, Lanham, MA:  Rowman & Littlefield Publishers.
Siegler, R.S., “Unconscious Insights,” Current Directions in Psychological Science, 2000, v.9,
n.3, 79-83.
Sloman, S.A., “The Empirical Case for Two Systems of Reasoning,” Psychological Bulletin,
1996, v.119, n.1, 3-22.
Sloman, S.A., “Two Systems of Reasoning,” in Gilovich, T., Griffin, D., Kahneman, D., (eds.)
Heuristics and Biases:  the Psychology of Intuitive Judgment, 2002, New York:  Cambridge
University Press.
Slovic, P., Griffin, D., Tversky, A., “Compatibility Effects in Judgment and Choice,” in Gilovich,
T., Griffin, D., Kahneman, D., (eds.) Heuristics and Biases:  the Psychology of Intuitive
Judgment, 2002, New York:  Cambridge University Press.
Smith, A.D., The Problem of Perception, 2002, Boston:  Harvard University Press.
Smith, E.E., Langston, C., Nisbett, R., “The Case for Rules in Reasoning,” Cognitive Science,
1992, v.16, 1- 40.
Smith, E.E. & Sloman, S.A., “Similarity- Versus Rule-Based Categorization,” Memory &
Cognition, 1994, v.22, n.4, 377-386.
Smith, J.D. & Minda, J.P., “Prototypes in the Mist:  The Early Epochs of Category Learning,”
Journal of Experimental Psychology:  Learning, Memory, and Cognition, 1998, v.24, n.6, 1411 –
1436.
Smith, J.D. & Minda, J.P., “Thirty Categorization Results in Search of a Model,” Journal of
Experimental Psychology:  Learning, Memory, and Cognition, 2000, v.26, n.1, 3 – 27.
Sosa, E. “Minimal Intuition,” in DePaul, M.R. & Ramsey, W., eds., Rethinking Intuitions, 1998,
Lanham, MA:  Rowman & Littlefield Publishers.
Stadler, M.A., “On Learning Complex Procedural Knowledge,” Journal of Experimental
Psychology:  Learning, Memory, and Cognition, 1989, v.15, n.6, 1061-1069.
Stanovich, K.E. & West, R.F., “Individual Differences in Reasoning,” in Gilovich, T., Griffin,
D., Kahneman, D., (eds.) Heuristics and Biases:  the Psychology of Intuitive Judgment, 2002,
New York:  Cambridge University Press.
Stich, S., The Fragmentation of Reason, 1990, Cambridge, MA:  MIT Press.
Strack, F., Mussweiler, T., “Explaining the Enigmatic Anchoring Effect: Mechanisms of
Selective Accessibility,” Journal of Personality and Social Psychology, 1997, v.73, n.3, 437-446
Surowiecki, J. The Wisdom of Crowds, 2005, New York:  Anchor Books.
Swain, S., Alexander, J., & Weinberg, J.M., “The Instability of Philosophical Intuitions:  Running
Hot and Cold on Truetemp,” Philosophy and Phenomenological Research, 2008, v.76, n.1, 138-
155.
Tan, L. & Ward, G., “A Recency-Based Account of the Primacy Effect in Free Recall,” Journal
of Experimental Psychology:  Learning, Memory, and Cognition, 2000, v.26, n.6, 1589-1625.
Taylor, H. “The Religious and Other Beliefs of Americans 2003,” The Harris Poll #11, 2003,
http://www.harrisinteractive.com/harris_poll/index.asp?PID=359
Taylor, S.E. & Brown, J.D., “Illusion and Well-Being: A Social Psychological Perspective on
Mental Health,” Psychological Bulletin, 1988, v.103, n.2, 193-210.
Thomas, A.K., and Loftus, E.F., “Creating Bizarre False Memories Through Imagination,”
Memory & Cognition, 2002, v.30, n.3, 423-431.
Thomson, J.J., “A Defense of Abortion,” Philosophy and Public Affairs, 1971, v.1, n.1, 47-66.
Thomson, J.J., “Causation: Omissions,” Philosophy and Phenomenological Research, 2003, v.66,
n.1, 81-103.
Tversky, A. & Kahneman, D., “Belief in the Law of Small Numbers,” Psychological Bulletin,
1971, v.76, n.2, 105-110.
Tversky, A. & Kahneman, D., “Judgments of Representativeness,” in Kahneman, D., Slovic,
P.,Tversky, A. (eds.), Judgment Under Uncertainty:  Heuristics and Biases, 1982, Cambridge:
Cambridge University Press.
Tversky, A. & Kahneman, D. “Extensional versus Intuitive Reasoning:  the Conjunction Fallacy
in Probability Judgment,” in Gilovich, T., Griffin, D., Kahneman, D., (eds.) Heuristics and
Biases:  the Psychology of Intuitive Judgment, 2002, New York:  Cambridge University Press.
Tzelgov, J., Yehene, V., Kotler, L., Alon, A., “Automatic Comparisons of Digits Never Learned:
Learning Linear Ordering Relations,” Journal of Experimental Psychology:  Learning, Memory,
and Cognition, 2000, v.26, n.1, 103-120.
Wardrop, R.L, “Simpson's Paradox and the Hot Hand in Basketball,” The American Statistician,
1995, v.49, n.1, 24-28.
Wason, P. C., "Reasoning", in Foss, B. M., (ed.) New horizons in psychology, 1966, New York:
Penguin.
Weinberg, J.M., “What’s Epistemology For?  The Case for Neopragmatism in Normative
Metaepistemology,” in Hetherington, S., (ed.) Epistemology Futures, 2006, Oxford:  Oxford
University Press.
Weinberg, J.M., Crowley, S., Gonnerman, C., Swain, S., Vandewalker, I., “Intuition and
Calibration,” unpublished, 2005.
Weinberg, J.M., Nichols, S., Stich, S., “Normativity and Epistemic Intuitions,” Philosophical
Topics, 2001, v. 29, n. 1-2, 429-460.
Weinstein, N.D. & Klein, W.M., “Resistance of Personal Risk Perceptions to Debiasing
Interventions,” in Gilovich, T., Griffin, D., Kahneman, D., (eds.) Heuristics and Biases:  the
Psychology of Intuitive Judgment, 2002, New York:  Cambridge University Press.
Weatherson, B., “What Good Are Counterexamples?” Philosophical Studies, 2003, v.115, 1-31.
Williams, B., “Truth in Ethics,” Ratio, 1995, v.8, n.3, 227-242.
Williams, B. & Smart, J.J.C., Utilitarianism:  For and Against, 1973, Cambridge University Press.
Williams, D.A. & Braker, D.S., “Influence of Past Experience on the Coding of Complex
Stimuli,” Journal of Experimental Psychology: Animal Behavior Processes, 1999, v.25, n.4, 461-
474.
Williams, J.H.G., Whiten, A., Suddendorf, T., Perrett, D.I., “Imitation, Mirror Neurons, and
Autism,” Neuroscience and Biobehavioral Reviews, 2001, 25, 287-295.
Williamson, T. Knowledge and Its Limits, 2000, Oxford:  Oxford University Press.
Williamson, T., “Philosophical ‘Intuitions’ and Skepticism About Judgment,” Dialectica, 2004,
v.58, n.1, 109-153.
Wilson, T.D., Strangers to Ourselves:  Discovering the Adaptive Unconscious, 2002, Cambridge:
Harvard University Press.
Wilson, T.D., Houston, C.E., Etling, K.M., & Brekke, N., “A New Look at Anchoring Effects:
Basic Anchoring and Its Antecedents,” Journal of Experimental Psychology:  General, 1996, v.125, n.4,
387-402.
Wilson, T.D., Lindsey, S. & Schooler, T.Y., “A Model of Dual Attitudes,” Psychological Review,
2000, v.107, n.1, 101-126.
Wilson, T.D. & Schooler, J.W., “Thinking Too Much:  Introspection Can Reduce the Quality of
Preferences and Decisions,” Journal of Personality and Social Psychology, 1991, v.60, 181-192.
Wixted, J.T. “The Psychology and Neuroscience of Forgetting,” Annual Review of Psychology,
2004, v.55, p.235-269.
Wong, P.T.P. & Weiner, B. “Why People Ask ‘Why’ Questions, and the Heuristics of
Attributional Search,” Journal of Personality and Social Psychology, 1981, v.40, n.4, 650-663.
Zietsman, A. & Clement, J. “The Role of Extreme Case Reasoning in Instruction for Conceptual
Change,” The Journal of the Learning Sciences, 1997, v.6, n.1, 61-89. 