ENKRASIA, RATIONALITY, AND COMMITMENT
by
Samuel H.M. Shpall
__________________________________________________
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PHILOSOPHY)
August 2011
Copyright 2011 Samuel H.M. Shpall
EPIGRAPH
Now, no one goes willingly toward the bad or what he believes to be bad…
Plato, Protagoras
Being enlightened, the argument goes on, and seeing where his real advantage lay, he
would realize that it was in acting virtuously. And, since it is well established that a man
will not act deliberately against his own interests, it follows that he would have no choice
but to become good. Oh, the innocence of it!
Dostoevsky, Notes from Underground
DEDICATION
To my parents
ACKNOWLEDGMENTS
This material has benefited from the comments of many brilliant individuals, among
them Andrew Alwood, Brian Bowman, John Brunero, Kory DeClark, Nate Gadd, Geoff
Georgi, Benjamin Kiesewetter, Ryan Hay, Hsin-Wen Lee, Alida Liberman, Errol Lord,
Ryan Millsap, Shyam Nair, Kenny Pearce, Lewis Powell, Johannes Schmitt, Robert
Shanklin, Julia Staffel, Jonathan Way, Aness Webster, and Luka Yovetich. I am lucky to
have all of them as philosophical advisors and luckier to have many of them as friends.
Scott Altman generously agreed to my extremely late request that he serve on my
defense committee. I am very appreciative and look forward to hearing his thoughts.
I owe special thanks to George Wilson. Though he has not been especially
involved with this project, his friendship, support, and guidance have done much to make
it possible.
Gary Watson has given me an important and unique perspective on many of the
central issues of this dissertation, and has been an especially insightful critic of my views
about moral commitments and promissory obligation. He has also turned my attention to
topics in the philosophy of action that I regard as intimately related to the current project
and which I hope to explore in future work. Gary is a model of the ideal philosophical
temperament—he is curious, rigorous, generous, kind, and, above all, not prone to
missing the forest for the trees—and I hope that some of this temperament has rubbed off
on me. I am very grateful for his interest in and encouragement of my work.
Steve Finlay has probably been the harshest critic of my ideas; for this reason, and
many others, he has long had my unmitigated respect. It was in Steve’s seminar on
metaethics that I first became persuaded that I had enough to say about moral issues to
warrant calling myself a philosopher. And it was Steve’s comments on my paper for that
seminar that convinced me that I knew next to nothing about good philosophical writing.
In a way, these were the two most important lessons I learned during my early years of
graduate school. Though I’m not confident that I have convinced him of much, I do hope
that his skepticism about the existence of rational norms has been somewhat shaken by
my arguments. In any case, I thank Steve for his generosity and for his consistent friendly
advice.
Jake Ross, whom I regard as part human and part counterexample-generating
machine, has played a fundamental role in shaping my philosophical development. I have
always found him engaging, challenging, and possessed of much good sense (except
when he starts talking about infinities or Jean-Luc Godard!). It was Jake who most
acutely recognized my undeveloped potential as a philosopher—a difficult task, given
that this potential was buried under a mass of incomplete education, sloppy writing, and
an ostensibly literary, but ultimately just imprecise, mode of expressing myself. And it
was Jake who developed this potential by pushing me in the right ways to think more
rigorously, write more succinctly, and speak my mind more directly. He has, throughout
the entire dissertation writing process, proven one of my most insightful advisors: most of
the biggest lingering worries I have about this material are worries that Jake brought to
my attention months or years ago, and that I have been thinking about since. I am grateful
for the many illuminating conversations we have had about a very diverse range of
topics, and for the energetic spirit with which Jake always approaches philosophical
inquiry.
My greatest philosophical debt is to Mark Schroeder. Mark has been my primary
advisor for somewhere in the neighborhood of four years, and in this time has
communicated so many vital lessons to me that I worry about having monopolized his
time. (Fortunately, most of his advisees have similar worries. We have concluded that,
contrary to appearances, “Mark” is actually a set of indistinguishable persons
masquerading as a single individual. I will speak as if the name has a unique referent
merely as a matter of convenience.) Mark is so knowledgeable, and such a systematic
thinker, that it is sometimes hard to feel like you can add anything to a conversation in
which he is participating. I am grateful to him for regularly fostering the illusion that I
was accomplishing this feat; he probably deserves some kind of layman’s Oscar for his
performances. It would not be an exaggeration to say that this dissertation owes its
existence to Mark as much as to me: his dedication as an advisor approaches the Platonic
form of that commitment. I will always recall with fondness our long, brisk walks
through the USC campus. We probably looked like madmen barreling along Trousdale
Parkway, but those conversations were challenging, enjoyable, and a main ingredient in
my growth as a philosopher.
Finally, I would like to thank the non-philosophers in my life who have made my
philosophical accomplishments (such as they are) a possibility. My friends have always
reminded me, in their own ways, that the most important thing about being a philosopher
is that it might help me to be a better person. They have also allowed me to stop being a
philosopher whenever I choose—a major reason for my still being moderately sane.
There are too many people to name here, but they know who they are and how much I
owe them.
My sister Suzanne and my parents, Bill and Sherry, have always supported me,
and my intellectual pursuits, in the most wholehearted fashion and with an amazing
degree of love, understanding, and good judgment. I am sometimes embarrassed to have
been so smiled upon by the blind draw of fortune; I am always proud to be a member of
such a wonderful family. There is no way to express my gratitude and there is no way I
could love them any more.
I would not close without adding one more note of thanks, to that wonderful muse named
philosophy. Like most things worth loving she can be an agonizing nightmare and an
unmerciful tyrant. But in the end I am hopelessly enamored of her charms. As Boethius
says at one point in his Consolation: “She had stopped singing, but the enchantment of
her song left me spellbound.”
Samuel H.M. Shpall
April 2011
TABLE OF CONTENTS
Epigraph
Dedication
Acknowledgments
Abstract
Introduction
Chapter One: The Non-Akrasia Requirement
Chapter Two: Wide Scoping and Rational Requirement
Chapter Three: Two Myths about Akrasia
Chapter Four: Wide and Narrow Scope
Chapter Five: Moral and Rational Commitment
Chapter Six: Applications
Bibliography
Appendices
Appendix A: Some Thoughts about the Instrumental Requirement
Appendix B: Rational Norms and Explanatory Priority
Appendix C: Moral Commitments and “Success Enough”
ABSTRACT
This dissertation argues that we cannot understand the error of akrasia—the error of
believing that you ought to do something, but not intending to do that thing—without
understanding the concept of rational commitment. I propose an account of this notion
that links it intimately to the sort of moral commitment that we invoke in speaking of
promises. I then argue that the normative notion of commitment that emerges is useful in
helping us to sort out several issues of interest in practical philosophy, issues pertaining
not just to the nature of rational norms and promissory obligation, but also to moral
dilemmas, exclusionary reasons, and the structure of our normative theories.
INTRODUCTION
It seems to me that the main ideas developed in this dissertation are all traceable to one
exceedingly simple observation I made several years ago. In first attempting to think
through the difficult contemporary discussions of the norms of rationality, I found it odd
that none of the authors I’d read appealed to the natural thought that we can be committed
to having certain attitudes in virtue of having others. This claim was so obvious to me,
and so clearly affirmed by our everyday speech, that I suspected it was of no importance.
Yet I did feel an insistent nagging, which was only exacerbated as I realized how many
writers discussed related issues without mentioning the word commitment at all.
A major part of my philosophical education has been the gradual—or better, the
painfully slow—cultivation of the capacity to distinguish ideas that are small from ideas
that are big—or better, less small. When I began thinking about these issues, I had very
little sense that the development of that initial thought could turn into a dissertation, and I
was prone to getting sidetracked by other ideas, related or unrelated, that were ultimately
unpromising or simply too minor to warrant sustained investigation. Indeed, I had little
optimism that I would ever finish a dissertation, given that I had almost nothing to say
about the nature of commitment besides some inarticulate paraphrases of that original
kernel. I owe the emergence of actual arguments, accounts, and applications mostly to the
foresight and persistence of Steve Finlay, Jake Ross, and especially Mark Schroeder, and
to two mutually beneficial narcotics: caffeine and anxiety. It took a lot of slogging along,
a lot of frustration, and a lot of tension headaches, but somehow the road eventually led
to Rome.
I think this is one reason why writing a dissertation is valuable. It is a Platonic
undertaking—a process of excavation that essentially involves squinting and agonizing
and dead ends—but it hopefully eventuates in that rare state of understanding we call
self-knowledge.
I started off by thinking about two related issues: the nature of akrasia, and the
contemporary debate about the form of rational requirements. The insight about
commitment was to provide my distinctive answer to the problem of akrasia, and my
distinctive take on the debate about the form of rational requirements. Let me introduce
the project by saying a few things about each of these topics.
Socrates famously denied the possibility of akrasia—where akrasia is, for our
purposes, believing that you ought to x and lacking the intention to x.[1] He was wrong to
do so. Though the Socratic intuition can be pumped (“Do you really believe that you
shouldn’t watch television if you’re still plumped on your ass in front of the tube?”), the
contrary one is overwhelmingly more compelling. Think especially of cases involving
temptation. I know several cigarette smokers who, upon reflection, would consider
themselves akratic. Likewise, I know of cases of infidelity in which the guilty party is of
the same opinion. On several occasions I have known—not merely believed—that I ought
to refrain from having another drink, or to refrain from taking a certain drug, and I have
nonetheless gone on to intentionally imbibe. Similar cases can be multiplied with ease.
And it is simply not plausible to contend that in all of these cases the agents in question
are massively self-deceived about their normative beliefs. It is much more reasonable to
suppose that to err is human. We can consciously act against what we judge to be best,
because we are not automata but flawed creatures of passion, whim, and pettiness.
This argument is an appeal to intuition, and I think it is decisive. But the Socratic
view about akrasia also has a controversial implication about the motivational power of
(all things considered) normative beliefs: it implies that they are necessarily tied to
motivation in the extremely strong sense that they produce, or are always accompanied
by, an effective motivational state, one that actually produces the relevant action or
intention.[2] The claim that such normative beliefs are always motivationally efficacious is
controversial in metaethics.[3] The claim that they always produce, or are accompanied by,
sufficient motivation for intention or action, even in the face of contrary desires, is not
popular.

[1] Plato (1997: Protagoras 358d).
[2] For the notion of effective desire see Frankfurt (1971).
For these reasons I believe it is safe to assume that akrasia is possible.[4] In my
view, it is all too possible—I often worry that I am systematically akratic. (When
combined with the initially appealing view in the philosophy of action[5] that being akratic
undermines one’s agency and hence one’s personhood, this worry can get very disturbing
very quickly.) But if akrasia is possible, it seems like it has got to be some kind of error
or failure of agency. The interesting question to ask about akrasia is not whether it is
possible but what kind of failure it is.
In recent years, the case of akrasia—for a long time a topic in its own right—has been
incorporated into the increasingly large investigation of the norms of rationality. In other
words, it has been suggested that the problem with akrasia is that it is necessarily
irrational.
My first goal, in chapters one and two, is to explore some relatively technical
issues about this suggestion. I agree that akrasia is necessarily irrational. It turns out,
though, that it isn’t entirely clear how to capture this fact in the form of a non-akrasia, or
enkrasia, requirement.[6] We’ll leave those details for later. For the purposes of this
introduction, it is more important to provide some background on why I take the project
of the first two chapters to be worthwhile.

[3] For the most plausible version of this doctrine of motivational internalism, see Smith
(1994), who argues that such normative beliefs motivate insofar as one is rational. See
also Korsgaard (1986).
[4] These reasons also suffice to undermine Davidson’s (1969) view about how to solve the
“paradox” of akrasia. Davidson’s famous paper is too confusing to discuss at any length
here. The relevant point is that Davidson accepts something close to the Socratic intuition
in motivating his paradox: “in so far as a person acts intentionally he acts in the light of
what he imagines (judges) to be better” (73). Positing a necessary connection between
normative judgments and motivation is what generates the issue; this is good reason, in
my view, for denying such a connection (denying Davidson’s P2).
[5] E.g. Frankfurt (1976).
In recent philosophical discussion of the requirements of rationality, the enkrasia
norm has often taken a back seat to the norm of means-end coherence. What follow are
some reflections about why the enkrasia norm is worthy of study in its own right.
The subject of instrumental rationality—a subject whose roots go all the way back
to Kant’s distinction in the Groundwork between categorical and hypothetical
imperatives—has received a startling amount of attention from contemporary theorists.[7]
This attention is deserved. The relation that obtains between intending an end, believing
that a means is necessary for that end, and intending that means is a relationship that
appears to be one of the most basic and indubitably normative relationships that we
observe in everyday life. Indeed, according to many commentators, Hume maintained
that the only form of practical irrationality was means-end incoherence. Some
contemporary Humeans have adopted this view, arguing that the instrumental principle is
the only principle of practical reason.[8]
This skepticism about the reach of practical reason
is controversial, and in my view indefensible. But it is reasonable to assume that by
coming to have a deeper understanding of the nature of the instrumental norm we may
put ourselves in a better position to resolve more general issues about the scope of
practical reason.
Nevertheless, one of my motivations for considering the enkrasia norm in detail is
the worry that the investigation of the very general topics at issue (the structure and
content of the norms of rationality, for instance) may ultimately be constrained by
excessive focus on a single test case. The focus on the enkrasia norm will serve as a
useful lens through which we can filter the large amount of research on the instrumental
principle. In particular, it will allow us to evaluate some important objections to certain
formulations of that principle, and to see if the objections raise general problems. For
example, we will see in chapter one that a certain type of transmission argument against
“wide scope” principles (to be defined later) will be easier to run against the enkrasia
norm than against the norm of means-end coherence. This is so because the argument
turns on the claim that you cannot give up one of the attitudes governed by the norm, and
it is more plausible to contend that you could be incapable of giving up a belief than an
intention. Since beliefs are evidence-responsive, it is hard to argue that they can be
formed and revised without changes in evidence; whereas many theorists would agree
that intentions are the kind of attitude that we can form, and revise, more or less
spontaneously, and on the basis of poor reasons or no reasons at all. Understanding these
comparative issues will be an important part of any adequate evaluation of the arguments.

[6] Here I pay terminological homage to one of my main influences, John Broome. See for
example his (ms). Other writers label this the principle of conscience, as we’ll see in
chapter one.
[7] Kant (1997), Korsgaard (1997), Hampton (1998), Wallace (2001), Raz (2005), Setiya
(2007), Schroeder (2009), Finlay (2009), Way (2010a).
[8] Hubin (2001).
As I’ll note in chapter one, many writers who endorse one formulation of the
instrumental principle endorse (or at least have in mind) an analogous formulation of the
principle of enkrasia. This makes comparative evaluations like the ones I will undertake a
crucial component of a rigorous study of these norms.
Furthermore, the case of enkrasia has an important simplifying feature, namely
the fact that it relates only two attitudes (normative belief and intention), whereas
instrumental rationality relates three (intention for the end, belief about the necessary
means, and intention for the means). With each attitude that must be included as a
component of the principle there arise additional logical possibilities concerning how the
principle is to be formulated. For the propositional operators may now either range over
that attitude or not. But this means that while there are several similar formulations of the
enkrasia and instrumental norms, there is an additional type of formulation of the
instrumental principle that is prima facie attractive. This is one in which whatever
normative term is employed (for example ‘ought’ or ‘rational requirement’) takes
intermediate rather than wide or narrow scope.[9] This is not a logical possibility in the
case of enkrasia.[10]
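For concreteness, the scope options can be displayed schematically. The notation below is an editorial sketch, not the author’s official formulation (the relevant principles are defined formally later in the dissertation): R is a rationality operator, B belief, I intention, and m ⇒ e abbreviates “m is a necessary means to e.”

```latex
\begin{align*}
&\text{Enkrasia, narrow scope:}     && B(O\varphi) \rightarrow R\,I(\varphi)\\
&\text{Enkrasia, wide scope:}       && R\bigl(B(O\varphi) \rightarrow I(\varphi)\bigr)\\[4pt]
&\text{Instrumental, narrow scope:} && \bigl(I(e) \wedge B(m \Rightarrow e)\bigr) \rightarrow R\,I(m)\\
&\text{Instrumental, wide scope:}   && R\bigl(\bigl(I(e) \wedge B(m \Rightarrow e)\bigr) \rightarrow I(m)\bigr)\\
&\text{Instrumental, intermediate:} && B(m \Rightarrow e) \rightarrow R\bigl(I(e) \rightarrow I(m)\bigr)
\end{align*}
```

With only two attitudes the operator sits either inside or outside the single conditional, so no intermediate placement exists for enkrasia; with three attitudes, one of them (here the means-end belief, as on Way’s view) can be left outside the operator’s scope.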
In addition, I regard the enkrasia norm as more fundamental, in terms of the place
it deserves in the theory of normativity, than the norm of means-end coherence. First,
cases of akrasia are, I suspect, more common than cases of means-end incoherence.
According to an unorthodox view proposed by Steve Finlay, it is in fact impossible to
violate the instrumental norm.[11] Whether or not this view is correct, it is plausibly harder
than is commonly supposed to provide real life examples of means-end incoherence.[12] (A
rough and ready argument: intentions are motivational states; so intending an end
constitutively involves being motivated towards it; being motivated towards an end
constitutively involves being motivated towards the perceived means to the end; so
intending an end constitutively involves being motivated towards the perceived means.
This doesn’t show that the motivation to the means is effective, of course.)

[9] Way (2010a) endorses this intermediate scope view, where the means-end belief is
outside the scope of the propositional operator. There are other ways to hold an
intermediate scope view, e.g. to have the intention for the end outside the scope of the
normative term. Note that my talk of propositional operators is just a convenience; I do
not mean to take a stand on whether deontic concepts like ‘ought’ operate on
propositions, states of affairs, or whatever.
[10] The investigation of enkrasia also gives us a new point of reference from which we can
criticize ambitious views that take as a starting point a particular theory of instrumental
rationality. I am thinking specifically about Kieran Setiya’s (2007) cognitivist theory of
intentions, which he motivates by arguing against other versions of the instrumental
principle. I think a cognitivist interpretation of the enkrasia norm is unlikely to be
successful. To put the point as briefly as possible: according to the cognitivist, intentions
are just beliefs that you will. But then this means that if enkrasia is rationally required, it
is because you cannot rationally [believe that you ought to x and lack the belief that you
will x]. That just seems false. I believe that I ought to stretch today, but also believe that I
won’t stretch, because I get so consumed by writing that it often slips my mind. This is
not irrational.
[11] Finlay (2008), Hume (1978).
[12] At least if the principle enjoins us to take the necessary means to our ends. See
Appendix A for a different conception of the instrumental requirement, which is more
commonly violated.
But the same is not true for akrasia. As I’ve suggested, we are all familiar with
smokers who believe that they should quit, just as we are probably familiar with over-
indulgers of various stripes who believe that they should be more temperate, carnivores
who believe that they should stop eating meat, and individuals who believe that they
should be faithful to their significant others while nonetheless committing infidelities. To
put the point simply, if contentiously: all things considered normative beliefs are not
necessarily motivating, but intentions are.[13] So akrasia is plausibly more common than
means-end incoherence.
Besides being more commonplace, I would suggest that these cases are more
interesting than cases of means-end incoherence, partially because they involve, in a most
explicit way, a deep conflict between the agent’s judgments and her practical
implementation of these judgments. The akratic agent is in some way at war with herself.
(You might initially regard this formulation as hyperbolic. But see Davidson (1969: 89):
“What is special in incontinence is that the actor cannot understand himself: he
recognizes, in his own intentional behavior, something essentially absurd.”) Cases of
means-end incoherence may involve an analogous gulf between judgment and intention.
But when they do, they are often also cases of akrasia.[14]

[13] The same point could be put in terms of degrees. If such normative beliefs do
necessarily motivate, then the motivation is not always as strong as the motivation
typically provided by (or accompanying) intentions.
[14] That is, they involve the agent’s believing that he ought to take the means while failing
to intend to do so. There is a complication, which is why I say “often”. In a case in which
an agent intends an end that she believes she ought not to intend, it will not be the case
that her failure to intend the means counts as akratic, since she does not believe that she
ought to take the means. Nonetheless, there is something very much like akrasia going
on. Perhaps we could describe it by saying that she believes that relative to her end, she
ought to take the means, and yet fails to do so. The possibility of restricted akrasia of this
form is an interesting one that deserves more consideration than I can give it here.

Related to this is the fact that the enkrasia principle is central to an account of
practical normativity because it aims to account for a rational constraint on the transition
from cognitive to practical attitudes. In this respect it is unique. Principles of theoretical
rationality, such as closure or consistency norms, govern relations between beliefs. The
instrumental principle governs relations between intentions. (The instrumental principle’s
norm on intentions is mediated by a belief, but the belief is a non-normative one
concerning the causal relationship between the means and the end.) But the enkrasia
principle governs the relation between a normative belief and an intention. Since
intentions are at least partially practical attitudes, which have the function of, among
other things, facilitating action, it can seem puzzling that there can be normative pressure
to form them on the basis of a cognitive attitude alone. But this pressure certainly exists;
it is a salient feature of our experience of agency long before we formulate the
philosophical issues that it raises.
I’ve only really set up the first two chapters in detail. This is intentional; to overly
anticipate what follows would spoil the fun and probably confuse things unnecessarily.
I’ll close by briefly returning to the notion of commitment, that kernel of insight I alluded
to above.
Believing that you ought to x commits you to intending to x. So being akratic is
irrational partially because it involves violating a commitment. What does this mean?
The annoying answer is that you need to read this dissertation, in particular chapter five,
to find out. The Platonic answer—probably a better one—is that you already know what
it means. The Talmudic answer—really just a stylistic variation of the Platonic one—is:
why are you feigning skepticism about a concept that is obviously indispensable?
The general idea is that you appeal to commitments of this sort all the time,
especially if you’re a philosopher, so you must have some sense of what I’m getting at.
You say that Rawls commits himself to an implausible view about a person’s essence. Or
you say that believing in the existence of novels commits one to an ontology of abstracta.
Or you say that accepting a theory commits you to denying its negation.
You should likewise be willing to say that believing that you ought to help
Granny commits you to intending to help Granny. Besides being intuitively true, this can
explain some things that cannot be explained by the related fact that you are rationally
required to avoid having this belief and lacking the corresponding intention.[15] And
appreciating this natural insight about what I’ll call rational commitment opens the door
for thinking about the nature of commitment as a more general category of normative
relation or concept.
One of the main arguments of the dissertation is that this is a good door to open. I
am probably less concerned than prudence requires me to be about whether everything I
say about the nature of commitment is true. Pretty much every important philosophical
theory is false. It might even be said that being right puts one in bad company. It should
surely be said that being wrong in an interesting way is about the best we can hope for.
Besides amplifying our conception of the norms of rationality, thinking about
commitment prompts us to look at a whole range of topics in a new light: issues of
promissory obligation and, more generally, the normative status of agreements; the moral
(and rational) import of reactive attitudes such as regret; arguments for the possibility of
moral dilemmas; the relationship between norms that enjoin ideal behavior and
comparative judgments of less than ideal agents; and more. There is a lot of territory to
be explored. This dissertation is probably best conceived as a first pass at such an
expedition.
[15] Throughout the dissertation, I use ‘corresponding intention’ in similar contexts to refer
to the intention to x that is at least partially based on one’s belief that he ought to x.
CHAPTER ONE: THE NON-AKRASIA REQUIREMENT
There is something wrong with you if you are akratic—that is, as a first approximation, if
you believe that you ought to φ and you nonetheless lack the intention to φ. Since there is
something wrong with akrasia, we should be able to formulate a principle that accurately
captures what it is that is wrong. Arriving at a true non-akrasia (aka enkrasia) principle is
the first main task of this dissertation.
I begin this chapter by introducing a folk moral version of the enkrasia
requirement—the narrow scope ought version—and arguing that it faces a decisive
objection, which I call the problem of fallibility. I then consider our options for
reformulating the principle. Specifically, I distinguish between two general strategies for
avoiding the problem of fallibility: employing a normative concept other than ‘ought’
(e.g. reason or rational requirement), and altering the principle’s logical form. This
chapter considers the first strategy. I argue that it is not sufficient. This motivates the
project that I undertake in chapter two, where I offer a detailed analysis of “wide
scoping” (to be defined later). Ultimately, I argue that both strategies are necessary: the
single best formulation of the enkrasia requirement is a wide scope principle that employs
the concept of rational requirement rather than the concept of ought.
I conclude the present chapter by considering an important objection to every
formulation of the principle that employs a belief about what the agent ought to do.
Following Jake Ross, I call this objection the three envelope problem, and I discuss the
refinements this problem necessitates.
1.1 Ewing’s Problem, Conscience, and Enkrasia
A commonplace assumption of our folk morality is that an agent’s conscience should
play a fundamental role in his reasoning about how to behave. We often counsel a friend
with the words “let your conscience be your guide.” Or we say, when someone is faced
with difficult decisions, that the only thing to do is to “follow your heart.” More
poetically, perhaps, we may suggest that “you gotta do what you gotta do.” These
maxims express a deep fact about the way many of us pre-theoretically conceive of a
central constraint on sound practical deliberation: it requires staying true to oneself, to
one’s own convictions about what is right.
It is natural to want to integrate these intuitions about the role of “conscience”
into our theory of practical reasoning. However, there is an immediate problem with
doing so, as A.C. Ewing pointed out nearly sixty years ago.[1] We don’t think it would
always be appropriate for people to follow their conscience. A suicide bomber may very
well think that killing children is what he ought to do. But we think, typically, that he’s
wrong. He shouldn’t kill children. So he shouldn’t follow his conscience. Call this the
problem of the fallibility of conscience.[2]
This problem shows that we will have to accommodate these intuitions about the
role of conscience in a more nuanced way. On the one hand, we should want an accurate
theory of practical reasoning to tell us what kind of normative pressure attaches to the
verdicts of an agent’s conscience. We want to preserve, in other words, something of the
pervasive and forceful commonsense intuitions just mentioned. On the other hand, we do
not want a theory that makes agents infallible authorities about what they ought to do. We
should take it for granted that people can be wrong about how they should act, just as
they can be wrong about nearly everything else.
We get Ewing’s problem by considering the role of conscience in practical
reasoning, and noticing the tension between the intuition that you ought to follow your
conscience and the intuition that your conscience is not infallible. We should notice,
though, that Ewing’s problem is more general than he made it look.[3] By appealing to our
intuitions about an agent’s conscience we implicitly restrict the class of relevant cases to
those in which an agent believes that he has overriding moral reason to act in a certain
way.[4]

1. Ewing (1953).
2. Also known as Ewing’s Problem. See Piller (2007).
3. This is also pointed out in Finlay (2010).
But the problem of fallibility arises in a far broader class of cases. Whenever an
agent believes that he ought to do something—in the general sense of thinking that he has
most reason to do that thing—he seems to be under some kind of normative pressure to
do it. But if we say that he ought to do it, we put ourselves in the uncomfortable position
of making everyone his own infallible authority about what to do. Since people can be
immoral, irrational, or mistaken about relevant facts, we should take it for granted that we
need to avoid endorsing the conclusion that they necessarily act as they ought to insofar
as they act in accordance with their beliefs about what they ought to do.
The more general problem can be articulated as I stated it in the introduction—
that is, in terms of the phenomenon of akrasia, or believing that you ought to φ and
consciously failing to intend to φ. We intuitively think that there is something wrong with
being akratic. But does this mean that we ought to be enkratic? How can we make sense
of this claim given that agents sometimes have false beliefs about what they ought to do?
And if it’s not true that agents ought to be enkratic, how can we capture the normativity
of enkrasia—or is there simply no normative force that attaches to the verdicts of our
all-things-considered judgments about how to act? In the next few sections I attempt to sort
out our possibilities.
1.2 Three Normative Relations
Our preliminary reflections have already ruled out one possible formulation of the non-
akrasia norm. We noted that it isn’t always the case that an agent ought to do what he
thinks he ought to do. Sometimes an agent mistakenly believes that he ought to φ, and in
these cases his acting in accordance with this belief will result in his performing an action
that he ought not to perform.
While I think that this conclusion is fairly obvious, it is absolutely crucial to be
clear on the fact that here, and in the remainder of this chapter, I stipulate a usage for the
term ‘ought’, namely ‘having most reason’, and use ought only in that sense. This is
perfectly compatible with a whole range of positions concerning how many “senses”
ought really has. All the reader need grant is that ought does have this one “objective”
sense—and he should grant this, it seems to me, because this sense of ought is a central
one in both moral philosophy and ordinary language.[5]

4. Or at least this morally loaded interpretation is a natural consequence of our concept of conscience.
This restriction does not
compromise my investigation in these sections. On the contrary, it is natural to start with
some basic, relatively ubiquitous normative concepts and see what can be done with
them. A fair amount of confusion may be avoided by sticking clearly to one sense of a
term that is frequently employed in an ambiguous manner.
This means that the following enkrasia principle is untenable:
O*: If you believe that you ought to φ, then you ought to φ.
It is just not true that, for example, believing that you ought to murder the President
makes it the case that you ought to murder him. There is no necessary connection
between beliefs of the form ‘I ought to φ’ and the corresponding actions of φ-ing; in other
words, such beliefs are perfectly capable of being false. So this formulation of the
principle is too strong, in the sense that it gives beliefs about what one ought to do way
more legitimizing power than they intuitively have. In other words, O* should be rejected
because it implausibly entails that a belief of the form ‘I ought to φ’ makes φ-ing what the
agent with the belief ought to do.[6]

5. For varying positions on the debate about the senses of ought see Ewing (1953), Broome (2004), Piller (2007), Finlay (2010), and Kolodny and MacFarlane (ms).
6. You might deny that the formulation gives beliefs this legitimizing power by contending that there is merely a necessary connection between the possession of such beliefs and its being the case that you ought to φ. (The idea would be that it isn’t that the belief makes it the case that you ought to perform the action; rather, it is simply a brute normative fact that the two come together.) But this is a suspicious move. Prima facie it is clear that I can have mistaken beliefs about what I ought to do. Imagine that I have such a belief (e.g. that I ought to kill Obama). Now if it is necessarily true that, given my possession of this belief, it is true that I ought to kill Obama, the likely explanation is that the belief is what makes this true. If I did not have the belief, we would probably conclude that I ought to refrain from killing Obama.
There is a closely related principle that we should likewise reject:
O: If you believe that you ought to φ, then you ought to intend to φ.
This formulation is problematic for reasons similar to the ones that doom O*. If there is
no necessary connection between beliefs of the form ‘I ought to φ’ and corresponding
actions of φ-ing, then there are slim grounds for supposing that there could be such a
necessary connection between such beliefs and corresponding intentions to φ. Again, this
would accord beliefs of the form ‘I ought to φ’ an implausible legitimizing power.
I take it that the vast majority of philosophers will share my skepticism about
principle O. Nonetheless, let us imagine for a moment that this principle were true.
Notice that it would then be the case that agents could not have mistaken beliefs about
what they ought to do, at least in the sense that these beliefs would necessarily entail that
the agent ought to have a corresponding intention. But why should this restricted class of
beliefs be so special? We can have mistaken beliefs about almost everything.[7] We can
mistakenly believe that it is Wednesday, or that we desire the well-being of an ex-partner;
we can mistakenly believe that it would be rational to believe a certain proposition. In
general, such mistaken beliefs do not intuitively have the power to make it the case that
we ought to have further attitudes that antecedently we ought not to have had. For
example, believing irrationally that it is Wednesday does not seem to completely
legitimate belief in the proposition that tomorrow is Thursday; there is still something
wrong with my believing that tomorrow is Thursday, even if I believe that today is
Wednesday; it is simply not the case that I ought to believe either proposition (assume for
simplicity that I have scant evidence for my belief that today is Wednesday). Mistaken
beliefs do not legitimate further attitudes in this strong sense; the idea that they do is as
implausible as the idea that mistaken beliefs can legitimate actions in this strong sense.
7. The likely exceptions being Cartesian beliefs like ‘These are thoughts’. But it would take a hefty argument to assimilate beliefs about what we ought to do into this special class of beliefs.
People can have false and/or irrational beliefs about what they ought to do, and
respecting this obvious fact requires us to reject both O* and O, since these principles
posit necessary connections between believing that you ought to φ and φ-ing, and
between believing that you ought to φ and intending to φ. Similar considerations will be revisited
in more rigorous detail in chapter two, but for now I take them to establish a good prima
facie case against these principles. At the very least we should hope to do better.
We need a weaker formulation of the enkrasia principle, one that does not get us worried
about infallibility and necessary connections between beliefs and corresponding actions
or intentions. There are two general strategies for weakening the principle that I will
consider in the first two chapters of this dissertation. First, we might weaken the principle
by invoking a different normative concept in the consequent of the conditional.[8] Second,
we might weaken it by changing its structure.
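The two strategies can be displayed schematically. The notation below is mine and purely illustrative (it is not a formalization the text itself adopts): ‘B’ marks the agent’s belief, ‘O’ the ought operator, and ‘N’ a placeholder for some weaker normative operator.

```latex
% First strategy: keep the narrow scope structure, weaken the operator.
\mathrm{B}(\text{you ought to } \varphi) \rightarrow \mathrm{N}(\text{you intend to } \varphi)
% Second strategy: keep ought, but widen its scope over the whole conditional.
\mathrm{O}\bigl(\mathrm{B}(\text{you ought to } \varphi) \rightarrow \text{you intend to } \varphi\bigr)
```

On the first reading the normative operator governs only the consequent; on the second it governs the conditional as a whole, so ‘you ought to intend to φ’ cannot be detached from the belief alone.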
This chapter will consider the first strategy. Here is one way to motivate this kind
of approach.
The ought relation is a decisive one. If I know that I stand in the ought relation to
some action, and I fail to intend to perform the action, then I am definitely failing in a
crucial way.[9] But not all normative relations are decisive. The obvious candidate for a
weaker one is the reason relation. If I have a reason to φ, it doesn’t follow that I ought to
φ, or that I am failing in any way if I don’t intend to φ. I may have a stronger reason not
to φ. Nonetheless, having a reason is normatively significant. If I have a reason to φ then,
absent any reasons to the contrary, it follows that I ought to φ.[10]

8. The conditional being: If you believe that you ought to φ, then you [insert normative term, e.g. ‘ought to’, or ‘have a reason to’] intend to φ.
9. Though the discussion of the three envelope problem at the end of this chapter will show us the sense in which such a claim must be qualified. For this claim see e.g. Broome (1999).
The Reason Relation
According to this line of thought, the non-akrasia norm should be formulated as follows:
Reason: If you believe that you ought to φ, then you have a reason to intend to φ.[11]
But this principle will not do. First, it runs into a problem similar to the one that plagued
O*. Though Reason doesn’t entail that agents are infallible in their judgments about what
they ought to do, it does entail something that is troubling in a similar way, namely that
agents’ judgments about what to do create reasons. Call this the bootstrapping worry.[12]
The principle implies bootstrapping of an illegitimate sort in the following way.
Imagine an agent who has little reason to go to the market and pretty good reason to stay
home—suppose he has already done all the shopping he needs for the week, he is trying
to be as frugal as he can, and he has a lot of work to get done tonight. The worry is that if
Reason is true, this agent might be able to make it the case that he ought to intend to go to
the market just by (falsely) believing that he ought to. For if believing that you ought to φ
gives you a reason to intend to φ, then the strength of this reason will, in at least some
cases, tip the scales in favor of an option for which the agent antecedently had less
reason.[13]
In this type of case, the bootstrapping problem is, as should be obvious, closely
related to the problem of fallibility with which we began.
10. More in-depth discussions of the distinction between decisive and non-decisive (or pro tanto) normative relations will occur throughout this dissertation. See especially chapters four and five.
11. See Schroeder (2004). He dismisses this strategy in Schroeder (2009).
12. See Bratman (1981), Broome (1999).
13. See Broome (1999) for a similar argument. As he says, your belief that you ought to φ “cannot make itself true even in this special case” (4).
Stated differently, the problem is that if we grant that people can be mistaken
about what they ought to do, it seems odd to claim that the very fact that someone
believes that she ought to φ gives her a reason to intend to φ. Why should reasons for
intention come so cheap—why should we think that false or irrational beliefs can
generate such reasons? Shouldn’t reasons for intention be derived from what is good
about the object of the intention, and not from a totally independent (and fallible)
judgment about it? Many of us have the intuition that our judgment that the suicide
bomber has no reason to intend to kill children is rightly unaffected by our knowledge
that he believes that he ought to. And likewise our judgment that an agent should on
balance stay home from the market is rightly unaffected by our knowledge that he
believes that he ought to go. There is simply no obvious justification for the idea that the
possession of a false and/or irrational belief could ground the existence of a new
normative reason,[14] as long as normative reasons are construed as (something like) facts
that are of the right sort to make actions/attitudes the ones we ought to have.[15]
There is one tempting way to respond to this line of argument. We might say that
the reason that gets bootstrapped by the normative belief is a very insignificant one. So
though such beliefs do create reasons, they do so in an innocuous way. For example,
imagine that the reasons we get in these situations are mere tiebreaking reasons—that is,
they have only enough weight to tip the scales in favor of an option if the agent has
sufficient reason to intend one of two or more actions, and the belief concerns one of
these sufficiently supported options. The weight of the generated reason might be so
insignificant that often, when we think about them, we will claim that there is no reason
at all.[16]
14. Actually the difficulty I mention in the text is more general, since it is unclear why even true or rational beliefs would ground the existence of new normative reasons.
15. This general picture of normative reasons is shared by philosophers with very different theoretical proclivities. See e.g. Scanlon (1998) and Broome (2004).
16. See Schroeder (2008) and (2009).
This is an interesting view about the weight of reasons, and it is especially helpful
in explaining some pragmatic facts about reason ascriptions. It might well be true that
reasons can have such insignificant, merely tiebreaking weights; and it may also be true
that when we say that there is a reason, there is a general conversational presumption to
the effect that the reason cited is a fairly weighty one. One view that I find somewhat
seductive claims that this is the correct account of a different phenomenon, namely the
phenomenon of what I would call, for lack of a better term, inertial reasons.[17] But both
this view about reasons and the application I’ve suggested to ‘inertial reasons’ are highly
contentious; more importantly, accepting them does not give us a satisfactory response to
the bootstrapping worry. For the ‘weak reason’ view would only give us a response to
bootstrapping at the price of making akrasia insignificant. Having a merely tiebreaking
reason to avoid akrasia means that being akratic constitutes a failure in only a small range
of cases (that is, ties). This shows that there is a second, and more important, problem
with the proposal embodied by Reason, a problem that acceptance of the weak reason
view does not help us to solve.

17. Define an inertial reason as a reason to φ (or to intend to φ) that an agent has in virtue of two facts: first, the fact that he has sufficient reason to φ or ψ, but insufficient reason to φ rather than ψ, and insufficient reason to ψ rather than φ; second, the fact that he has already expended cognitive resources in coming to the conclusion that he ought to φ and/or in coming to intend to φ. The main reason for postulating the existence of these reasons is as a way to answer Buridan’s ass puzzles. I’m deliberating about whether to take the milk carton on the left or its qualitative duplicate on the right. I recognize that, though I ought to choose one of them, I have no reason to choose either over the other. How do I get myself to act? (I assume that the perfectly rational agent does act here.) Plausibly I “plump”—that is, I simply pick one of them. Suppose I intend to pick the one on the left. If this intention does not change the normative landscape in any way, then it would be just as rational for me to revise the intention as it would be to implement it by reaching for the milk carton on the left. I have the admittedly weak but nonetheless persistent intuition that this is wrong. Having formed the intention, it seems to me that implementing it is slightly more rational than revising it. Hence it seems to me that intentions, in these special kinds of cases, generate tiebreaking reasons in virtue of their role as streamliners of our cognitive energies. Again, this kind of reason is only suited for resolving this particular kind of choice dilemma, and cannot substantively alter any other sort of decision problem. For a similar thought see Bratman (1987), whose view I discuss in chapter six. Note that in chapter three I begin to offer an account of commitment that I’ll argue does better at capturing this inertial feature of intentions.
Here’s the problem. The enkrasia principle needs to be strict.[18] It needs to
sanction the conclusion that there is something going wrong on every occasion in which a
person is akratic. But the reason formulation is not strict. I can believe that I ought to φ,
not intend to φ, and still be in conformity with the principle. For the mere fact that I have
a reason to φ does not make it the case that there is anything at all wrong with my failing
to intend to φ. I might have a better reason not to φ (for example that φ-ing is difficult).[19]
An example: I believe that I ought to spend Saturday afternoon working. I also believe
that spending the afternoon watching March Madness would be more fun. If my
judgment that I ought to spend the day working only entails that I have a reason to do so,
then there might be nothing wrong with my deciding to spend the afternoon watching
basketball, since I have reasons to have fun as well. This is a strange result, since we
started off by saying that we intuitively suppose that good reasoning, proper functioning,
or some general norm of rational agency requires avoiding akrasia.
18. For this terminology, and the first explicit appeal to this form of argument, see Broome (1999).
19. Consider the analogous claim about the highly contested formulation of the principle of instrumental rationality. In that case it is likewise true that any plausible formulation must be strict. There is something wrong when an agent fails to take the known necessary means to his intended ends, regardless of whether we endorse his pursuit of those ends. We need to be able to make a distinction between the instrumentally rational axe-murderer, who carries out his meticulous plan to perfection, and his instrumentally irrational counterpart. (One illustration of this: when we see movies with characters such as these, we often “admire” the former killer’s calculation and laugh at, or even malign, the latter’s ineptitude. Experiences with fictions are illuminating because they neutralize the natural abhorrence we feel towards actual individuals with evil ends. Our disgust with these individuals can lead us to disregard the fact that there are clearly ways in which they can be exceedingly rational.)

It might be thought that there is a tension between the desire to preserve the
strictness of the non-akrasia principle and the desire to weaken it so as to avoid
sanctioning agents’ infallibility. In fact there is no tension. We want to be able to say that
there is something going wrong in every case in which an agent fails to be enkratic. But
we also want to say that it is not the case that an agent ought to form the enkratic
intention on every occasion. These statements are perfectly compatible. There is
something wrong with the suicide bomber if he genuinely believes that killing children is
what he ought to do, and yet he does not intend to kill children. This does not imply,
however, that he ought to kill children. It only implies that there is something wrong with
him if he is in the state described. What is wrong with him may be relatively
inconsequential, at least in terms of the normative force that attaches to it—it may well be
massively outweighed by what would be going right with him if he refrained from
bombing.[20]
By analogy, consider again a case of means-end incoherence.[21] Imagine that I am
horribly drunk but out of booze, and I form the intention to get some more. I know that to
get another bottle I need to drive to the liquor store. However, instead of getting in my
car, I remain in my house, continually getting distracted by the television. There is
something going wrong here. If I am going to keep intending to get more alcohol, then I
am irrational for failing to form the intention to drive to the liquor store. Of course this
doesn’t mean that I ought to intend to drive anywhere, since I shouldn’t even think about
driving in my condition. It just means that I am (luckily) instrumentally irrational.
Similarly, imagine that Max believes that he ought to break up with his girlfriend because
she doesn’t like gangster movies. If he doesn’t intend to break up with her then he is
definitely irrational in some respect. But it might be that this lack of an intention is, on its
own or in relation to many other mental states of Max’s (like his deep love for his
girlfriend, his belief that she is the best woman he has ever met, etc.), far more defensible
than the intention to break up with her would be. It is highly implausible to suggest that
Max ought to intend to end the relationship. Such an intention would be manifestly worse
than the form of irrationality he actually exhibits.

20. On this point see Arpaly (2003), especially chapter two. On one reading, Arpaly shows that agents may be irrational in the sense of having incoherent sets of attitudes (like an intention to act against what they believe they ought to do), and we will still want to say that the akratic intention can be more rational than the enkratic one would have been. I agree with this point, but, as should be expected, I disagree with her claim (61) that there is nothing special about the error exhibited by the akratic agent. I’ll say more about these issues in chapter three.
21. Cases like this are discussed in Wallace (2001).
These reflections bring us to another way of weakening the enkrasia principle,
which I will turn to in just a moment. First, though, I want to anticipate a worry about my
treatment that it would be natural to have at this point. I have invoked the notion of
strictness in my arguments against versions of the enkrasia principle that appeal to
reasons we have for avoiding akrasia. But I have not offered a precise characterization of
this concept; rather, I have appealed to the vague idea that a strict normative relation is
one whose violation entails that you are definitely failing in some way. This vagueness is
unfortunate but intentional. Strictness is seldom discussed in the literature, and it is
poorly understood.[22] One of the central positive contributions of this dissertation will be
an account of strictness that I take to be more illuminating than the one I have borrowed
from Broome. But the details of this account will have to wait until chapters four and
five. Hopefully the reader will see the force of the objection I have been pushing even
before such an account is provided. It should be clear that there is some fundamental
difference between formulations of the enkrasia norm that entail that all instances of
akrasia are to be avoided, and formulations that do not entail this. That distinction is all
we need for the time being.
The Rational Requirement Relation
We have considered two normative relations that can obtain between an agent and an
intention, the ought relation (where this is defined as the relation of having most reason)
and the reason relation, and we have seen how utilizing these relations by placing them in
the consequent of the relevant enkratic conditional leads to implausible results. I’ll now
mention a third type of normative relation, which might be thought immune from similar
criticisms. This is the relation of a rational requirement, and it gives us the following
formulation of the enkrasia norm:
RR: If you believe that you ought to φ, then you are rationally required to intend to φ.

22. Bratman (1987) seems to be getting at something close to this. Schroeder (2009) and Finlay (2010) also consider strictness (or “stringency”) as a property of related principles.
The subject of rational requirements is a hot one in contemporary philosophical debate.
There is no widespread agreement about what rational requirements are.[23] This
makes it difficult to say anything brief and uncontroversial about how to construe the
normative relation invoked in this interpretation of the principle. For now I will confine
myself to the following observations.
First, let us take a break from the case of enkrasia and think for a moment about a
separate norm that most philosophers would judge to be a paradigmatic case of a norm of
rationality. Consider the following one premise closure principle for belief:
Closure: If you believe that p, and you believe that p entails q, and you are considering
whether q, then you are rationally required to believe that q.
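The contrast drawn in the surrounding discussion can be put schematically. Again the notation is mine, introduced only as an illustration (‘B’ for belief, ‘RR’ for rational requirement, ‘O’ for ought):

```latex
% What Closure says:
\bigl(\mathrm{B}(p) \wedge \mathrm{B}(p \rightarrow q) \wedge \text{considering } q\bigr) \rightarrow \mathrm{RR}\bigl(\mathrm{B}(q)\bigr)
% What Closure does not say:
\bigl(\mathrm{B}(p) \wedge \mathrm{B}(p \rightarrow q) \wedge \text{considering } q\bigr) \rightarrow \mathrm{O}\bigl(\mathrm{B}(q)\bigr)
```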
Notice first what this principle doesn’t say. It doesn’t say that if you believe that p, and
you believe that p entails q, and you are considering whether q, then you ought to believe
that q. It might be, for example, that you have no good reason to believe that p in the first
place. In that case, you have no good reason to believe that q, and so presumably it
shouldn’t follow from any true principle that you ought to believe that q.[24] What the
principle does say is that you are under some kind of rational pressure to believe that q if
you have the other two beliefs. Specifically, rationality demands or requires of you that
you believe that q, if you have the other beliefs. What we need to know, first and
foremost, is whether this concept of rational requirement is relevantly weaker than the
concept of ought. If it is not, it will not help us avoid the problem of fallibility.

23. See the wide range of views in Broome (2004), Kolodny (2005), Southwood (2008), Hussein (unpublished).
24. This is, of course, an informal analogue of the problem of fallibility for the narrow scope ought version of the enkrasia principle.
If we take a step back and consider Closure, I think it should be clear that the
problem of fallibility is still lurking. Imagine that I believe that I created the universe, and
that if I created the universe, I am worthy of adulation. Closure has it that I am rationally
required to believe that I am worthy of adulation. But this claim seems odd. In this
situation you would undoubtedly counsel me to revise my beliefs, and not to believe that
I am worthy of adulation. On independent grounds, then, we should suspect that the
conception of rational requirements entailed by Closure is at variance with our everyday
judgments of how agents should rationally conduct themselves (at least assuming that
rational advising should line up, in the normal case, with what rationality requires). So it
seems that even without a firm conception of the nature of the technical term ‘rational
requirement’, we have reason to doubt that Closure is true. And an analogous argument
calls RR into doubt.
Second, let us see if we can construct an argument by assuming only a minimal
and extremely natural picture of rational requirements. Proposal: to be rationally required
to have attitude A is for you to be irrational to some degree if you lack A. In other words,
a rational requirement is a necessary condition for ideal rationality.[25] This seems like a
pretty theory-neutral way of specifying the content of ‘rational requirement’. After all, it
seems uncontroversial to say that a moral requirement is something that you must comply
with in order to be ideally moral, a legal requirement something you must comply with in
order to be ideally law-abiding, a requirement of etiquette something you must comply
with in order to be ideally polite, etc.
Now imagine that I believe that I ought to jump to the moon. RR, together with
our stated gloss, implies that I cannot be fully rational unless I intend to jump to the
moon. But that just seems ludicrous. Intuitively, the only way I can get into a fully
rational state is by discarding my irrational belief. Supplementing an irrational belief with
an irrational intention seems like exactly the wrong way to proceed.

25. For the record, I think that all the participants in the debates about rational requirements that I cite in this dissertation would accept this characterization.
Here is a third consideration. Recall that we are considering the view that the
narrow scope principle RR is not susceptible to the problem of fallibility, the proposed
justification being that positing a necessary rational connection between ought-beliefs
and intentions is substantially weaker than positing a necessary connection between such
beliefs and what we ought to intend. This justification requires a somewhat dismissive
view of this kind of rational connection, or in other words a weak account of what it
means to be rationally required. But this sort of view wreaks havoc on our common sense
practices of rational assessment, and we should be skeptical of it for this reason.
Here is a case that should illustrate the point. Bob has almost no money left this
month—just enough to buy groceries for his family. Nonetheless, Bob puts all his
remaining dough on the number seven at the roulette table. He knows that it is unlikely
that he will win, and he knows that losing will be a catastrophe for his family, but he
gambles anyway.
Bob is deeply irrational. In fact, his immoral conduct depends for its immorality
on the irrationality of his gamble. Even if Bob wins he deserves to be blamed. (If his wife
is at all responsible, she will give him a good talking to independently of whether he
comes home with a sack of bills.)
Was Bob rationally required to refrain from gambling in this fashion? Here’s why
we should answer affirmatively. If he wasn’t rationally required to refrain from gambling,
then we need some other technical term to capture the obvious fact that Bob acted
irrationally in gambling. But then that technical term—call it ‘smational
requirement’—will be what picks out the notion of fundamental practical importance to
us. In that case, we could just dispense with rational requirements and make do with
smational requirements.
If this is right, then it is hard to see how we could dismiss the problem of
fallibility in the context of defending RR. Upon reflection, the relation of rational
requirement seems to be of deep practical significance, as evidenced by our views about
when reactive attitudes like blame and praise are appropriate. In fact, many of our most
central reactive attitudes align with what rationality demands from us, and not with what
we ought to do.[26]
So it would appear wrong to dismiss this relation as substantially
weaker than the ought relation.[27]
Some philosophers will try to resist this sort of reasoning by claiming that we
should conceive of rationality as nothing beyond coherence between one’s mental
states.[28]
On this picture, rationality merely requires that you do not have certain
combinations of mental states. One of these combinations is believing that you ought to φ
and consciously failing to intend to φ. So when you have such a belief and you intend to
φ, you come to be coherent, no matter the content of your belief. And this is all that
rationality demands from you.
We should be careful about jumping to this conclusion. There is more than one
way of coming to have coherent mental states when you believe that you ought to φ and
as yet do not intend to φ: you can intend to φ, but you can also give up the belief that you
ought to φ. Moreover, it is certainly possible that in a wide range of situations rationality
requires you to give up the belief. You might have insufficient evidence for the belief, for
example, and it is prima facie likely that rationality requires you to form beliefs only
when you have sufficient evidence that they are true.
29
So it might well turn out that since
the only way of becoming coherent that allows you to conform to the evidential norm is
by revising your belief, you cannot rationally form the intention. If you cannot rationally
form the intention, we would be hard pressed to explain how rationality could require you
to form it.

Furthermore, it is not entirely clear what could be meant by ‘rational requirement’
if these requirements were so easy to come by (or alternatively, so detachable[30]). If any
normative relation gets grounded in, or comes into existence merely in virtue of, the
suicide bomber’s belief that he ought to kill children, it should not be a decisive kind of
normative relation. It should not come close to offsetting the fact that, all things
considered, he ought not to kill children or intend to kill children. But on the face of it, to
say that someone is rationally required to intend to φ appears to invoke precisely this kind
of decisive relation. As I suggested above, the most natural interpretation of the phrase
‘rational requirement’ accords this concept a fundamental place in the theory of
normativity—commensurate with the central practical and evaluative role it seems to
play in everyday life. Indeed, as we will see in the next section, it is plausible that what
rationality requires of us is a “guide to life” in a way that what we ought to do is not. If
anything in the ballpark of this observation is correct, then to be rationally required to
intend to φ is to stand in a fundamental and weighty relation to this intention. As I will
put things in chapter two, to be rationally required to intend to φ is for rationality to issue
a decisive verdict, a verdict that implies the absence of conflicts and the definitive
(rational) impermissibility of not intending to φ. So we should be immediately skeptical
about a narrow scope principle like RR that sanctions such a lax interpretation of rational
requirements.

[26] Again, assuming that ought=most objective normative reason. In Bob’s case, he is
blameworthy even if he ought to put his money on seven.

[27] I have appealed to the legitimacy of reactive attitudes as a way of cashing out the
import of rational requirements, but it is important to be clear that I have not claimed that
only rational requirements ground the legitimacy of these attitudes. All I’m assuming is
that the systematic appropriateness of reactive attitudes like praise, blame, condemnation,
etc. is a reliable indicator of the centrality of the normative entities that ground these
attitudes. As we will see in chapter five, grounding reactive attitudes is a key feature of
commitments as well, and it gives us important insights into their normative character.

[28] See Smith (1994) and Scanlon (2007).

[29] Again, we need only appeal to our everyday practices to justify this claim. If I tell you
that I believe in ghosts, you will probably ask me why. And if I cannot support this belief
with any evidence, you will be fully justified in concluding that I am irrational in having
it.

[30] To be clear, a relation is detachable if it takes scope only over the consequent of a
conditional, as RR is in ‘If you believe that you ought to φ, then RR (φ)’. We can
‘detach’ the relation when the antecedent is satisfied—e.g. when you do believe that you
ought to φ.
Thus far in this chapter we have been concerned with one particular set of issues. I have
argued that the natural attempts to revise the enkrasia principle by employing a different
normative relation in the consequent of the conditional do not succeed in answering the
problem of fallibility.[31] The narrow scope reason version sanctions an implausible sort of
bootstrapping, which is analogous to, if not quite as troubling as, the detachment of an
ought. Moreover, it fails to make the enkrasia principle strict—fails, in short, to capture
the fact that the akratic agent is necessarily failing in some important respect. And even
absent a fully fleshed out theory of rational requirements, the narrow scope rational
requirement version seems to merely invite the problem of fallibility again in a slightly
different form. I conclude that this strategy for weakening the enkrasia principle cannot
succeed on its own. In the next chapter we consider the popular and natural alternative.
In the remainder of this chapter I turn my focus to a different set of issues. What
do you have to believe in order to fall under the scope of the enkrasia norm? The
dominant view, which I have so far assumed, is that you must believe that you ought to φ.
It turns out that this cannot be right, at least as long as we are talking about the ‘most
reason’ sense of ought. The next section explains the nature of this mistake.
[31] Note that I have not claimed to exhaustively canvass the candidates for this normative
relation. Indeed, a central part of the positive project of the dissertation will be to argue
that there is another normative concept, the concept of commitment, which can be
employed in the consequent of a narrow scope enkrasia conditional. But I have
considered the two obvious possibilities, and made as thorough an examination of them
as I could. Since no alternatives have been suggested in scholarly discussions, and I can
think of no better options myself, I take the argument to be as exhaustive as can be
expected.
1.4 The three envelope problem[32]
One interesting problem has been in the background of the preceding discussion. This is a
problem for all of the formulations of the enkrasia principle that I have thus far
considered, because it is a problem with any formulation that employs a normative belief
about what you ought to do. As we shall see, this three envelope problem demonstrates
that uncertainty can force us to rethink our conceptualization of the phenomenon of
akrasia. In particular, it shows that the sort of belief to which the enkrasia requirement
attaches is more plausibly a belief about what’s rational than a belief about what you
have most reason to do. More generally, the problem shows that beliefs about what form
of conduct would be rational are the beliefs that structure a well-functioning agent’s
deliberation.
In the following exposition, I will assume for simplicity’s sake that any adequate
formulation of the enkrasia requirement will be a wide scope principle. By this I mean
that the normative term (e.g. ‘rational requirement’) will range over the entire
conditional—that is, the entire sentence ‘If you believe that you […] to φ, then you intend
to φ’—rather than just the consequent. The reader need not grant this assumption yet; it
will be explored further in the following chapter. But note that I have offered what I take
to be compelling objections to the candidate narrow scope formulations of the enkrasia
norm in this chapter; so supposing that the principle must be a wide scope one is well
motivated at this point. In any case, nothing in my discussion of the three envelope
problem depends on this assumption. But as we need a principle to work with, it might as
well be one that gets us near the truth. I suggest the following:
WSRR: You are rationally required to be such that (if you believe that you ought to φ,
then you intend to φ).[33]

[32] In this section I draw on Jacob Ross (ms), who suggests these formulations of the wide
scope rational requirement, and who is also responsible for recognizing the relevance of
the issues about uncertainty highlighted by the three envelope problem to the study of
enkrasia. The name of the problem is his, though its general structure is a classic one (see
e.g. Regan (1980), Kolodny and MacFarlane (ms)).
Now consider the following situation. Jane is given the choice of three envelopes. She
knows that the first contains one thousand dollars. She also knows that either the second
or the third contains fifteen hundred dollars, and the other one is empty; but she has no
evidence about which of the two contains the money.
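The verdict that follows can be checked with a bit of expected-value arithmetic. The sketch below is my own illustration, not part of the original case description: the dollar amounts come from the example, and assigning credence 1/2 to each of the two hypotheses about envelopes two and three is the natural way to model Jane's total lack of evidence.

```python
# Jane's epistemic situation in the three envelope case.
# Envelope one: $1000 for certain.
# Envelopes two and three: one holds $1500, the other nothing,
# with no evidence favoring either -- so credence 1/2 in each.

def expected_value(outcomes):
    """Expected value of an act, given (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

ev = {
    "one":   expected_value([(1.0, 1000)]),
    "two":   expected_value([(0.5, 1500), (0.5, 0)]),
    "three": expected_value([(0.5, 1500), (0.5, 0)]),
}
# ev == {'one': 1000.0, 'two': 750.0, 'three': 750.0}

# Relative to Jane's evidence, envelope one maximizes expected value,
# even though she knows the objectively best act is taking whichever
# of two or three actually contains the $1500.
best_by_evidence = max(ev, key=ev.get)
assert best_by_evidence == "one"
```

The arithmetic makes the tension vivid: the evidentially best option (envelope one, at $1000 expected) is guaranteed not to be the objectively best option (the right one of two or three, worth $1500).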
Most people who are presented with this problem have the clear intuition that
though Jane ought to [choose either envelope two or envelope three], she would
definitely be irrational if she failed to choose envelope one.[34] And, if we are being
scrupulous about sticking to the stipulated meaning of ought, then we must take this to be
the correct diagnosis. But then the three envelope problem threatens to undermine any
attempt to formulate a principle of enkrasia that employs, as the agent’s relevant
normative belief, a belief about what she ought (has most reason) to do. For in three
envelope-like situations, an agent knows only that she has most reason to perform one of
multiple options, without knowing which one in particular. And it seems like, in at least
one fundamental sense, the best course of action for her is to do what it would be
irrational of her to avoid doing: choosing the guaranteed one thousand dollars in envelope
one.
But if choosing envelope one is intuitively the best course of action, and Jane
knows that it’s the best course of action, then is it really reasonable to suppose that she is
guilty of akrasia when she chooses it? We commonly assume that akrasia is irrational, or
improper, because it amounts to some kind of deep attitudinal incoherence—as I put it
(perhaps hyperbolically) in the Introduction, it involves being at war with oneself. But
Jane isn’t incoherent on the face of it, and she isn’t at war with herself. She is perfectly
consistent and reasonable.
[33] In chapter two I will argue that this version is preferable to its analogues for the
normative relations of ‘reason’ and ‘ought’.

[34] I am assuming that Jane gets no special satisfaction from risky gambling. Also, note
that rejecting the inference from ‘Jane ought to choose two or Jane ought to choose three’
to ‘Jane ought to choose [two or three]’ will not help evade the three envelope problem.
In either case Jane would be irrational if she acted in accordance with her ought-belief(s).
The three envelope problem definitively shows that uncertainty complicates our
attempts to formulate a satisfactory enkrasia norm. In what follows I explore two
different ways of responding to this problem.
The first strategy we might pursue is to modify the normative belief that gets
invoked in the principle. The natural modification would be to move from a belief about
what you ought (have most reason) to do to a belief about what it would be irrational of
you to fail to do. This is a natural modification because it is modeled on our intuitions
about Jane’s case. In her case, the limited evidence available makes it so that taking the
first envelope makes a great deal more sense than her other options. I suggest that this
leaves us with the principle:
WSRR*: You are rationally required to be such that (if you believe that you are rationally
required to φ, then you intend to φ).
By switching from an ought-belief to a rational requirement-belief, we more accurately
recognize the focal point of the agent’s rational deliberations. Instead of concerning
herself with what she has most reason to do, we are now suggesting that the properly
functioning agent concerns herself with what it would be (most) rational to do. This does
not mean that agents need to have the concept of ‘rational requirement’ in order to fall
under the scope of the enkrasia norm. All they need believe is that it would be irrational
of them to fail to intend to do something. We all believe this kind of thing quite
commonly, and Jane surely believes it (or should believe it) in the three envelope case.
It is only when an agent believes that an option is rationally required and fails to
intend that option that we may legitimately accuse her of the distinctive akratic error with
which we are concerned. Note that we need not build in, at this stage, any complex theory
of what would constitute the rational requirement or rational optimality. All we need to
appeal to is the intuition that drives the three envelope problem in the first place: that
there is a fundamental sense in which not choosing envelope one would be a practical
error, and that there is no analogous sense in which not choosing envelope two or three
would be a practical error. If we grant this intuition, then WSRR* indeed represents an
improvement upon WSRR.[35]
In a moment, I will consider another strategy for avoiding the three envelope
problem. Before I do so, two explanatory notes are in order. First, I want it to be clear
that I regard WSRR* as the fundamental truth about the requirement of enkrasia.[36] This is
the best version of the enkrasia principle available. It will take a good deal of argument in
chapter two to defend this wide scope rational requirement principle against objections,
and to establish that it is preferable to the wide scope ought version of the enkrasia
requirement. But I want to lay my cards on the table ahead of time. It is especially
important for the reader to understand that the principle I go on to propose in this section
is not a serious alternative to WSRR*. It is at best a useful supplement: it brings some
broad, confusing issues into sharp focus, but it does not have the generality to be a
serious competitor.
Second, I want to be totally explicit about the fact that I make a potentially
misleading simplification in subsequent chapters of this dissertation.[37] Though I regard
WSRR* as the fundamental truth about the enkrasia requirement, I use WSRR as my
working representative of the wide scope rational requirement version of the enkrasia
principle. This is a convenient fiction. Adopting it makes the exposition smoother, both
because few writers have paid attention to the difficulties that necessitate the
abandonment of WSRR, and because WSRR* is a bit more cumbersome. Though I will
write as if this is my preferred version of the enkrasia requirement, and none of my
[35] Of course it would not represent an improvement if there were other cases in which
WSRR fared better. But I do not know of such cases.

[36] This claim is compatible with the main claim of chapters three and four, namely that
such a principle represents only part of the overall theory of enkrasia. In my view,
WSRR* is the best formulation of the enkrasia requirement available. Nonetheless, there
is in addition a true narrow scope commitment principle that actually explains the
existence of the requirement.

[37] A simplification that, I should note in fairness, has been objected to by Steve Finlay.
As I go on to note in the text, I have judged it easier to adopt this fiction than to
consistently reword the claims of the writers I’m engaging with.
arguments will be the worse for it, the reader should bear this warning in mind. Strictly
speaking, beliefs about what you ought to do only ground the application of the enkrasia
requirement when they entail beliefs about what you are rationally required to do. The
discussion below elaborates this claim.
Let’s return to the challenge posed by the three envelope problem. We have seen
the best way to respond. But consider the attempt to avoid the problem in the following
way.
The problem, in sum, was that ignorance can make it the case that it is rational for
us to do what we know to be objectively sub-optimal. Perhaps, though, we can eliminate
such cases by fiat. For example, we might do so by distinguishing between atomic and
non-atomic options.
Let us call an option atomic just in case that option could be the effective output of
a rational process of practical deliberation. By ‘effective output’ I mean that if
deliberation concluded in the formation of an intention to choose this option, an agent
could act intentionally without doing anything beyond implementing the intention. So
‘taking out the trash’ is an atomic option, whereas ‘taking out the trash or watching more
television’ is not. The former is an atomic option because if my deliberation concluded in
the intention to take out the trash, I would have gotten myself into a state that could itself
produce the action of taking out the trash. The latter is not an atomic option because if my
deliberation concluded in the intention to take out the trash or watch more television, I
would not have gotten myself into a state that could itself produce action. In order to act
on the basis of my intention I would need to deliberate further. I could simply act
randomly, of course, but in this case my action would not be a matter of implementing
my intention.
Now we could claim that a belief about what you ought to do can indeed ground
the application of the enkrasia requirement in so far as the belief concerns an atomic
option. There just is no such belief in the three envelope case. For in this case, what
Jane believes she has most reason to do is to choose [either envelope two or envelope
three, I know not which], which is not an atomic option. This intention, if it were the
conclusion of Jane’s deliberation about what to do, could not alone produce action. For
Jane knows that she is required to act: taking any of the envelopes is much better than
refraining. But in order to act rationally she must form an intention about which one to
pick. So if we permit only atomic options, Jane does not have any belief that could
ground the application of the enkrasia requirement. In one sense, then, we avoid the
problem: the resulting principle does not convict Jane of error when she intends to take
the first envelope.
Notice, however, the limited scope of this solution. Invoking atomic options does
not permit us to convict Jane of rational error if she fails to choose envelope one. It
merely allows us to say that she is not guilty of irrationality if she does choose it. But we
want to say that she would be guilty of akrasia if she didn’t choose it, given that she
believes it would be irrational not to.
In fact, obtaining even this limited result is a bit more complicated than it looks.
Imagine that we tweak the case by adding a choice point—a point at which the agent is
forced to choose what to do a second time around. The game is set up so that Jane first
needs to choose whether to take envelope one or to refrain. If she refrains, then she has a
choice between envelopes two and three.
In this case refraining is an atomic option by my definition, since Jane’s only
options at the first choice point are φ-ing and refraining from φ-ing. And refraining is
what she ought to do—she has most reason to choose [two or three, she knows not
which], and thus to refrain from choosing one. Yet clearly Jane is irrational if she fails to
choose envelope one: the expected utilities are the same as in the original version of the
problem. The addition of a choice point complicates things by making more options
count as atomic, so we need to restrict our attention in another way.
Call something a total present plan if it specifies, for every salient atomic option
relative to some end, whether to perform that action. So for example, a total present plan
for Rob’s marketing may include the intention to go to the store, the intention to buy milk
x rather than milk y (assuming x is available), and the intention to buy milk y rather than
milk z (assuming x is not available). Now we may restrict our principle to the atomic
options that would figure in total present plans. To figure out what her total present plan
is, Jane must consider her atomic options and their relationships at both related choice
points: choosing envelope one, refraining and choosing envelope two, or refraining and
choosing envelope three. The latter two options are total present plans since they are
conjunctions of an atomic option per choice point. But they are irrational, for the reasons
we’ve given.
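The two-stage game and its total present plans can be given a toy encoding. This sketch is illustrative only: the plan labels are mine, and the credences are the same 50/50 spread over envelopes two and three as in the original case.

```python
# Total present plans in the two-stage version of the game:
# at the first choice point Jane takes envelope one or refrains;
# if she refrains, she then chooses envelope two or envelope three.
# Each plan conjoins one atomic option per choice point.

def expected_value(outcomes):
    """Expected value of a plan, given (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

plans = {
    ("take one",):               expected_value([(1.0, 1000)]),
    ("refrain", "choose two"):   expected_value([(0.5, 1500), (0.5, 0)]),
    ("refrain", "choose three"): expected_value([(0.5, 1500), (0.5, 0)]),
}

# Adding a choice point makes 'refrain' an atomic option, but the
# expected values are unchanged: both refraining plans fall short,
# so only the plan of taking envelope one survives scrutiny.
best_plan = max(plans, key=plans.get)
assert best_plan == ("take one",)
```

This is why restricting attention to total present plans, rather than to atomic options alone, is needed: 'refrain' is atomic at the first choice point, yet every complete plan that begins with it is evidentially inferior.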
This further stipulation allows us to avoid changing the normative belief invoked
in the enkrasia principle by requiring that the belief be of a very specific type, as in:
WSRR^: You are rationally required to be such that (if you believe that you ought to φ,
and φ-ing is an atomic option that is part of a total present plan, then you intend to φ).[38]
It might be objected that requiring an atomic option is unduly restrictive. For
example, in order to vindicate the atomic option strategy we need ‘not φ-ing’ to fail to be
an atomic option, since not choosing envelope one appears equivalent to choosing two or
three.[39] But one might have thought that not doing something could be the effective
output of practical deliberation, and thus an atomic option.
However, this thought may be misleading. In some cases, namely those in which
the options under consideration are confined to doing something and refraining from
doing it, not φ-ing is indeed an atomic option. If I receive a telephone call, and I
deliberate about whether to answer it, then both answering and not answering are atomic
options. Each can be an effective output of my deliberation. But in other cases, like the
three envelope case, not φ-ing is not an atomic option, because it is deliberatively
equivalent to a disjunction of options (assuming the background of an intention to choose
some envelope).

[38] For a similar strategy, using atomic options but not total present plans, see Ross (ms).
Jake suggested to me something similar to the formulation in the text.

[39] This is not strictly speaking true, since not φ-ing may just be the action of refraining
from φ-ing and doing nothing at all. But we can grant to the objector that the agent’s
deliberation takes place against a background of the intention to choose some envelope.
Given this stable intention, not φ-ing does seem equivalent (“deliberatively equivalent”,
as I say in the text) to choosing [two or three].
Though I think we can in this way defend the modified version of the atomic option
strategy, it’s important to emphasize that the three envelope problem illustrates a very
general lesson. Often our knowledge about our reasons is only partial. We may know that
we have most reason to perform one of a set of actions without knowing which one it is.
As an extreme example consider the act of choosing numbers in a lottery. We know that
we have most reason to φ, where φ-ing is the action of choosing the winning numbers.
But we can’t act on this knowledge. What this shows is that ignorance often forces us to
think in terms of what is rational, or what makes sense from our epistemic perspective,
rather than thinking in terms of what we have most reason to do. In other words,
ignorance can make it the case that we have no legitimate normative belief to plug into
the enkrasia principle, at least insofar as it’s construed as a principle about intending to
do what we believe we ought to do (and where ought=most objective normative reason).
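The lottery point can be made vivid with a toy calculation. The lottery parameters below are invented purely for illustration; the lesson is that knowing *that* some pick is best gives no evidential edge to any particular pick.

```python
# A toy lottery: pick one number from 1..1000; the single winning
# number pays $1,000,000. We know we have most reason to pick the
# winning number, but our credence is spread evenly over all picks,
# so every particular pick has the same expected value -- the
# existential 'most reason' knowledge cannot guide the choice.

numbers = range(1, 1001)
prize = 1_000_000

def expected_value(pick):
    # Each number wins with probability 1/1000, regardless of which
    # pick it is, so the argument plays no role in the calculation.
    return (1 / len(numbers)) * prize

evs = {n: expected_value(n) for n in numbers}
assert len(set(evs.values())) == 1  # all picks evidentially on a par
```

From the agent's epistemic perspective, then, no pick is rationally mandatory, even though one pick is objectively best; deliberation has to be conducted in terms of what is rational given the evidence.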
But it still seems like we can be rationally required to intend to do certain things
in such cases; and it still seems like we can be akratic when we fail to do the things that
we believe it’s irrational to fail to do. It is totally natural, in other words, to regard beliefs
of the form ‘it would be irrational for me to fail to φ’ as paradigmatic instances of the
kind of belief that grounds a requirement of enkrasia.
The point is, I think, even more far-reaching. It is natural to suppose that our most
important (practical) normative concept is whatever concept gets invoked in the sort of
beliefs that legitimately terminate rational processes of practical reasoning. In other
words, having the function of ending rational deliberation seems to be a feature of only
the most central (practical) normative concept. For example, coming to any of the
following conclusions would be, at least sometimes, insufficient grounds for curtailing
deliberation about whether to φ: I have a reason to φ; I am prudentially, morally, or
etiquettely (sorry!) required to φ; I’ve promised to φ; and so on. We have argued that ‘I
ought to φ’ should be added to this list, and that ‘I am rationally required to φ’—or, if you
want to be more cautious, ‘My failure to φ would be irrational’—should not be. So if the
assumption of this paragraph is correct, we have shown that ‘rational requirement’, or
‘irrational if not’, is the most important normative concept that we appeal to in practical
deliberation. This is the right result. Rationality is the guide to life.
Let me reiterate that regardless of what we think of the strategy of isolating atomic
options and total present plans, WSRR^ cannot be a complete solution to the three
envelope problem. For this principle does not imply that you are irrational if you fail to
choose envelope one in the original case. What it helps us capture is the fact that there is
no atomic option that you believe you ought to choose. In other words, it captures the
sense in which the principle’s application must be restricted if it is to retain a belief about
what you ought to do in the antecedent of the conditional. This is good, but it isn’t
enough: one of the big lessons of the three envelope problem is that when there is no
available ought-belief about an atomic option, it is clearly irrational to fail to do what you
know it would be irrational to fail to do. Thus WSRR^ cannot replace WSRR*.
Why not just make do with WSRR*? In a way my contention is that we can and
should. WSRR^ is merely a byproduct of WSRR*. It is less general, more contentious, and
equivalent in those cases in which it delivers a verdict at all. However, I think the
discussion of WSRR^ permits us to grasp some important insights. First, it shows quite
explicitly the very limited extent to which we can vindicate the classical definition of
akrasia.[40] Secondly, WSRR^ clarifies the connection between the actions it is rational of
you to perform and the actions you have reason to perform. Since a great deal of recent
literature concerns the relationship between reasons and rationality, it is helpful to draw
out the connections between the normative import of beliefs about what you ought to do
and the normative import of beliefs about what you are rationally required to do.[41]

[40] See Plato (1997), Aristotle (2000), Hare (1952), Davidson (1970), Watson (1977), and
just about every other writer who discusses the issue. To be fair, many of these authors do
not have a technical definition of ‘ought’ on the table—most of them do not explicitly
define the concept at all. Nonetheless, I do take it to be clear that the lessons of the three
envelope problem have not been appreciated.
I don’t mean to draw any global conclusions on the basis of this point. The idea is
just that the discussion of this chapter brings some of these large issues into sharp focus.
At minimum, then, WSRR^ is profitably mentioned as a way of delving into these issues,
and deserves the attention I have given it in anything hoping to be a thorough
examination of the enkrasia norm.
In this chapter I have attempted to respond to the problem of fallibility, a problem that
makes the narrow scope ought formulation of the enkrasia principle extremely
unattractive. I have given arguments against other narrow scope formulations, which
attempt to mitigate the force of the problem by invoking reasons or rational requirements
instead of oughts, in support of my claim that a satisfactory way of weakening the
enkrasia principle in response to the problem of fallibility must incorporate the wide
scope strategy. I have promised to discuss this strategy at length in the following chapter.
I concluded by presenting the three envelope problem and analyzing the sense in
which it forces a reexamination of how we conceptualize the requirement of enkrasia.
Though it does necessitate refinements, I have shown how the problem can be
surmounted, and how it may help us to draw some interesting connections between the
normativity of rationality and the normativity of reasons.
[41] For an influential and provocative discussion of the relationship between reasons and
rationality see Kolodny (2005).
CHAPTER TWO: WIDE SCOPING AND RATIONAL REQUIREMENT
In the previous chapter I rejected several potential formulations of the enkrasia
requirement. In this chapter I will be concerned to evaluate the most plausible wide scope
versions of the principle. John Broome has explicitly defended one such version, and
several other philosophers have explicitly or implicitly endorsed similar theses about
enkrasia.[1] I consider first the wide scope ought version, and discuss some objections to it
that have been raised. The main claim of the chapter is that this version has significant
problems. After motivating its rejection, I attempt to pay off a debt I incurred in chapter
one by offering some further reflections on the nature of the normative relation of
‘rational requirement’. Having a fuller story about what rational requirements are allows
me to more convincingly introduce and critique the version of the enkrasia principle that I
find most plausible, the wide scope rational requirement formulation. I revisit the
objections to wide scope principles as they apply to the wide scope rational requirement,
and I argue that this principle avoids some of the most important problems for other
formulations.
Nonetheless, I conclude the chapter by suggesting that the principle, though true,
does not constitute a satisfying theory of enkrasia. Considered on its own, the principle
captures, and explains, only part of the important normative connection that holds
between beliefs of the form ‘I ought to φ’ and corresponding intentions. As we progress, I
will put this point by saying that I accept the wide scope thesis, but not the wide scope
theory, about the rationality of enkrasia—where the latter is conceived as the claim that
the wide scope thesis does capture, and explain, everything important about the
normative connection between beliefs of the form ‘I ought to φ’ and corresponding
intentions. The investigation of the wide scope thesis being concluded, I turn in the
remainder of the dissertation to an investigation and critique of the wide scope theory.
[1] Broome (1999, 2007, ms), Wallace (2001), Brunero (2010), and Way (2010a and
2010b).
One chief motivation for this aspect of the project is my view that much
philosophical discussion of these issues has failed to adequately distinguish two central
questions: first, the question of what we can truly say about the rationality of enkrasia;
and second, the question of what explains these truths. The theory of rationality that
emerges in later chapters aims to supplement our answers to both questions.
2.1 Wide Scoping
In chapter one we discussed two structurally analogous strategies for weakening the
narrow scope ought version of the enkrasia principle. They were structurally similar
because they traded on replacing the occurrence of the ought-relation in the consequent
of the enkrasia conditional with a weaker normative relation like ‘reason’ or perhaps
‘rational requirement’. The wide scope strategy is of a different sort. It weakens the
principle by relocating the problematic occurrence of the normative relation. In one
popular incarnation, it does so in the following manner:
WSO: You ought to be such that (if you believe that you ought to φ, then you intend to
φ).[2]
This is the version of the wide scope strategy influentially employed in the early work of
John Broome.[3] Before investigating the viability of this strategy, I want to offer the
following preliminary observations.
First, it's important to note how the wide scope construal of the principle is a formal departure from the other two proposed solutions: it alone alters the principle's logical properties. Because the "ought" occurs outside the scope of the conditional, there are two ways to satisfy WSO: you can intend to φ, or you can stop believing that you ought to φ. This allows the wide scoper to avoid our original problem of infallibility. The wide scope principle does not imply infallibility. It does not imply that, given his belief, the suicide bomber ought to intend to murder innocents. He can equally satisfy the principle by discarding his belief that he ought to do this. Typically, the proponent of the wide scope view will add something to the effect that other considerations (e.g. epistemic requirements) will determine which of the two ways of satisfying the principle is required. I will discuss this issue later on in detail.

[2] A reminder: here and throughout the remainder of this dissertation, I act as if I have not introduced the three envelope problem and refined the principle accordingly. That is, I represent the belief that grounds the enkrasia requirement as a belief of the form ‘I ought to φ’, instead of as a belief of the form ‘I would be irrational if I failed to φ’.
[3] Broome (1999). Broome has since changed his view—see e.g. his (2005)—and moved to a rational requirement formulation. Other proponents of wide scope ought principles include Hill (1973), Gensler (1985), Hampton (1998), and Wallace (2001).
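To make the scope contrast explicit, the two readings can be schematized as follows (a gloss I am adding, with 'B' abbreviating your believing that you ought to φ, 'I' your intending to φ, and 'O(...)' the ought operator):

```latex
% Narrow scope: the ought attaches only to the consequent
B \rightarrow O(I)
% Wide scope (WSO): the ought takes the whole conditional in its scope
O(B \rightarrow I)
```

Only the second formula leaves two routes to compliance: making the consequent true (intending to φ) or making the antecedent false (giving up the belief).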
Second, the wide scope strategy does not require that any particular normative
relation be utilized in the formulation of the principle. Wide scoping has been
influentially used as a way to vindicate the intuition that there is a sense in which you
ought to avoid akrasia, and I will be concerned, in the next section, with questioning this
account. But this is a contingent fact about the typical philosophical implementation of
the wide scope strategy, and certainly not a reflection of any formal constraints to which
it gives rise. As we will see, this strategy might prove less fruitful when it employs the
ought-relation than when it employs the rational requirement relation.
To be clear, since the wide scope view is compatible with whatever normative
relation its proponent wants to employ, we are immediately confronted with the wide
scope analogues of Reason and RR in addition to WSO (which is the analogue of O):
WS-Reason: You have reason to be such that (if you believe that you ought to φ, then you
intend to φ).
WSRR: You are rationally required to be such that (if you believe that you ought to φ,
then you intend to φ).
We can reject the first of these principles. Putting the occurrence of the reason relation
outside the scope of the conditional allows it to escape the problem of bootstrapping, but
it does nothing to address the problem of strictness.
Third, the wide scope view is typically taken as a solution to several problems,
problems with important similarities to the one we are investigating. For example, wide
scope views are commonly deployed in explaining instrumental rationality, epistemic
rationality, and the nature of our obligation to keep promises.[4] In considering objections
to the wide scope view I will have these other applications of wide scoping in mind. I will
try to show, in particular, how the case of enkrasia is different, in certain important
respects, from the case of instrumental rationality.
In the next three sections I offer criticisms of WSO.
2.2 First Problem for WSO: The Evil Demon
The first problem for the account of enkrasia that appeals to WSO is what we will call the
evil demon problem. In my view, this problem represents the clearest and strongest
challenge to the wide scope ought account. It shows, to put it starkly, that ‘ought’ is the
wrong normative concept for the job.
WSO says that we ought to be a certain way: namely, we ought to be such that,
when we believe that we ought to φ, we either give up this belief, or we intend to φ. But a
thought experiment purports to show otherwise. Imagine that an evil demon appears right
as you are forming the belief that you ought to help Aged Anne across the street with her
groceries. The demon threatens to kill you unless you akratically refuse to help Anne.[5] It
would seem that this threat clearly determines what you ought to do. You have far better
reason to save your life than to enkratically assist the woman. In order to save your life, you must persist in your akrasia. But if this is right, then it cannot be true that you always ought to avoid akrasia. So it is not true that you ought to satisfy WSO simpliciter.

The evil demon problem constitutes a simple objection to the wide scope ought version of the enkrasia principle. It purports to show that sometimes we ought to be akratic, and thus that the principle cannot be true.

[4] See Schroeder (2004) for a review of these arguments and for objections to wide scoping, some of which I discuss below. The question of promissory obligation will emerge as a main theme of chapters five and six.
[5] If you don't like spooky things like demons, imagine a gang of mad scientists instead.
There is, however, one important complication. In the setup of this problem, I
have claimed that the demon commands you to be akratic—that is, he commands you to
continue believing that you ought to help Anne and refrain from intending to help her. But
on reflection it might be doubted that you could continue to have this belief. You know
that, given the demon’s threat, you ought to refrain from helping. The goodness of
helping would be far outweighed by the badness of your death. So it can seem unlikely
that you could be akratic in this case, even if you wanted to; doing so would apparently
require the ability to continue believing something for which you have no evidence.
Perhaps if we deny, plausibly, that it is possible to believe things at will, then we will be
tempted to dismiss the problem.
But this solution is too quick. For there are, in principle, several ways in which
you could still manage to be akratic in this situation. For one, you might just fail to form
the belief that you ought to refrain from helping. Perhaps you do not believe that the
demon is real; or maybe you just are a bit dim-witted when it comes to forming beliefs
about what you ought to do in the presence of demons. In either of these cases you can
still be akratic—you can fail to intend to help the woman even though you believe you
ought to help her. And it seems clear that you have most reason to be precisely this way;
you ought to do what saves you from death, even though it requires refraining from
performing a small act of kindness.
Alternatively, you might accept a contradiction—that you ought to help and that
you ought not to help—and thus be akratic no matter what you do. This would certainly
involve irrationality, but that doesn’t make it impossible. And if akrasia is possible in
such a case, and you ought to be akratic, then WSO is false.
A Comparison: The Evil Demon and the Instrumental Principle
I have shown that the evil demon case allows for a compelling argument against WSO. It
will now be instructive, I think, to see whether similar problems arise when we offer the
same objection to an analogous (wide scope ought) version of the instrumental principle.
There are two reasons for this sort of comparative analysis. First, as I made clear in the
Introduction, a subsidiary task of these initial chapters is to use the investigation of
enkrasia as a tool for better understanding the substantial existing literature on means-end
coherence. And the evil demon objection has been discussed in this literature. Second, it
is useful to compare an analogous objection to the instrumental principle at a dialectically
important juncture, in order to firm up, or unsettle, our convictions about its application
to enkrasia. In this case, I’ll argue that the comparison does more firming up than
unsettling.
It is natural to suppose that the evil demon objection is more persuasive against
the principle of means-end coherence. For in that case the demon’s threat—to kill you
unless you are means-end incoherent—doesn’t obviously change the particulars of the
relevant attitudes, as it seemed disposed to do in the enkrasia case. In the latter case, the
threat seemed quite likely to override the agent’s original ought judgment and thus make
it difficult for the agent to be akratic with respect to that judgment. (You cannot
akratically refrain from helping Anne if you no longer believe that you ought to help
her.) But in the case of means-end coherence there is no ought judgment in play, so the
demon’s threat seems unlikely to change the content of the agent’s attitudes. For
example, the agent already intends the end—say to go to the store—and believes (more
or less irrevocably, given that the evidence is held fixed) that he must drive in order to get
there. So he ought to refrain from intending to drive in order to avoid being struck dead
by the demon. Thus he ought to be instrumentally irrational.
This is indeed my view about the comparison. The evil demon problem is even
more clearly compelling as an objection to the wide scope ought version of the
instrumental principle than to WSO. What conclusion should we draw on the basis of
this? If we are not persuaded by my defense of the objection in the case of enkrasia, does
the case of instrumental rationality matter at all?
I think that it does.[6] Many proponents of this formulation utilize its analogue in the case of instrumental rationality, and in other cases as well.[7] Moreover, it is a striking feature of this area of philosophical discourse that the overwhelming majority of writers who endorse one version of the instrumental principle endorse, or will endorse if asked, the analogous version of the enkrasia principle, and other principles besides.[8] These
authors are right to want a unified account of these phenomena. A view according to
which the enkrasia requirement employs a different normative concept or property than
the requirement of means-end coherence (or the requirement of intention consistency, or
belief closure for that matter) is prima facie unattractive. There is presumably a reason
why these norms are characteristically discussed together, under the heading of
‘coherence and consistency requirements’ or ‘requirements of rationality’.[9] And even if
this were just a misguided assumption, the theoretical goals of simplicity and elegance
would establish a presumption in favor of a unified account of these intuitively related phenomena. I take these considerations quite seriously. Consequently, I find the evil demon problem to be the most persuasive objection to the wide scope ought version of the enkrasia principle.

[6] To be clear, I do think that the objection presents us with a compelling problem for WSO. I am just supposing that it might not for the sake of argument.
[7] For example, the main contributor to these debates, John Broome (1999, 2007).
[8] Note that by ‘analogous’ I mean employing the same normative concept/property and having the same logical form. One interesting exception is Way (2010a and 2010b), who will come up for discussion in a moment. Since he endorses an intermediate scope version of the instrumental principle, he could not endorse a perfectly analogous version of the enkrasia principle; as I noted in the Introduction, the fact that enkrasia relates only two attitudes makes such an intermediate scope reading impossible.
[9] Note that I am more concerned here to defend the view that these principles should employ the same normative concept than the view that they should have the same logical form. If they did not employ the same concept, then speaking of them as importantly related would be an error. The same is not true with respect to logical form. In my view, there are probably some narrow scope rational requirements, even though requirements like enkrasia are wide scope. If this were true, it wouldn't change the fact that these requirements are importantly related. To put it one way, they issue from the same source or domain (namely rationality).
Way’s Response to the Evil Demon
In recent work, Jonathan Way gives a response to the evil demon problem that it will be useful to briefly consider.[10]
Following Way, we might claim that WSO is not conclusive simpliciter. It does
not really tell us how we ought to be all things considered. Rather, when it employs the
ought-relation [“you ought to be such that…”] it should be read as employing the
‘conclusive object-given reason-relation’. An object given reason is a reason that
supports the content (or object) of an attitude. For example, an evidential reason to
believe that p is an object given reason because the reason makes p more likely to be true
(from one’s perspective, at least). A state given reason is a reason that supports only the
state of having an attitude, and not the content of the attitude. For example, a prudential
reason to believe that p is a state given reason because it is a reason that derives
exclusively from the benefits of being in the state of having this belief.[11] Consider the
evil demon objection to the wide scope ought version of the instrumental principle. Even though you have strong (indeed, conclusive) state-given reason to refrain from intending to drive in the evil demon example (a reason provided by the demon's threat), Way contends that you still have conclusive object given reason to intend to drive. Thus it remains true that
you ‘ought’ to intend to drive in this restricted sense of ought. The upshot is that, for
Way, the wide scope version of the instrumental principle (and, by analogy, WSO) must
be a claim about only object-given reasons.
[10] Way (2010a).
[11] E.g. ‘that a Republican won the Massachusetts senate race’ is an object given reason to believe that Democrats will lose seats in November; whereas the fact that believing the Democrats will lose seats in November will make you depressed is a state given reason not to have this belief.
It is important to note how this claim fits into the current dialectic. We are
investigating wide scoping as a strategy for weakening the enkrasia principle, and
considering the most historically influential kind of wide scope principle, the principle
employing a wide scope ought. But now it appears that in order to counter objections to such principles, the advocate of these views needs to retreat from a claim about ought to a
claim about object given reasons. This maneuver is striking. Since many philosophers
claim that rational attitude formation is a matter of attending exclusively to your object given reasons,[12] this can make it seem as if we are simply moving towards principles that employ the relation of rational requirement.[13] This intuition is buttressed by the
observation (illustrated by the evil demon problem) that what you ought to do may be to
act on your state given reasons.
While powerful, the distinction between object and state given reasons—or, as it
is sometimes more generally stated, between right and wrong kinds of reasons—is a
thorny one.[14] There is considerable controversy about how best to utilize this distinction
and about where it is applicable. For example, it is unclear whether the object-state
distinction can capture the intuitions about the right and wrong kinds of reasons that it is designed to help articulate. And it is unclear whether the distinctions apply to reasons for action as well as reasons for attitudes such as fear, admiration, humor, and so on.[15]

[12] Strictly speaking, this claim cannot be right. You have countless object given reasons that you cannot possibly know about. You are not irrational for failing to attend to these reasons. What might be true is the nearby claim that rational attitude formation involves attending to only your apparent or accessible object given reasons.
[13] To be clear, I take it that Way would accept this point; in offering a principle framed in terms of conclusive object given reasons, he takes himself to be offering a rational requirement. But one need not have this account of rationality, so it's worth noting, as I do in the next paragraphs, that the move to object-given reasons introduces another substantive candidate for the normative relation to be employed in the principle.
[14] In the noncommittal spirit of this paragraph, I should emphasize here that I don't mean to be conflating the distinction between object and state given reasons with the right-wrong kind distinction. It is an open question, given what I've said, how these distinctions relate. All I mean to indicate is that some authors, e.g. Parfit (ms), offer the object/state distinction as a way of illuminating the difference between the right and wrong kind of reasons.
Furthermore, once we have retreated from the claim that the principle employs a
wide scope ought, the ground is cleared for an exploration of normative relations that
might work better. Crucially, we have acknowledged that the principle no longer
preserves the force or authority of the ought-relation, which we granted was a matter of
comparatively little philosophical dispute.[16] (It is, for example, not particularly
controversial to claim that I ought to avoid painful death at the hand of a malevolent
demon.)
I want to remain neutral about most of the controversial issues concerning the
wrong kind of reasons problem, and in particular about whether the rational status of
attitudes can be analyzed merely in terms of responding to object given reasons. And,
since one of the main goals of this chapter is to investigate the relative plausibility of
employing the relation of rational requirement in the enkrasia principle, it will be useful
to set Way’s suggestion to the side for the moment. I will return to related issues in
section 2.9.
2.3 Second Problem for WSO: Transmission
The second problem for the wide scope account is what I will call the transmission
problem.[17] This characterization is somewhat misleading, because there is no single,
agreed-upon problem of transmission. Instead, there is a general style of objection that has been implemented in several ways. I will distinguish them presently, but it is useful to attempt to categorize the basic idea before getting into the many complications that inevitably arise.

[15] For general discussion of these problems see D'Arms and Jacobson (2000), Hieronymi (2005), and Schroeder (2010).
[16] Though I'll remind the reader that the conclusions of the first chapter should have considerably shaken our confidence if we previously thought that the ought-relation was the central normative concept for our practical standpoint. The three envelope problem shows that it is really the concept of rational requirement (or something in that ballpark) that guides deliberation.
[17] See Raz (2005), Setiya (2007), Schroeder (2009), and Dreier (2009).
The main point driving transmission arguments is that reasons for action seem to
transmit from ends to means. For example, if I have reason to lose weight, then I have
reason to exercise, assuming that I will not lose weight unless I exercise. And if I have
conclusive reason to lose weight, then presumably I have conclusive reason to exercise,
assuming that a reason for an end of a certain weight transmits a reason of at least that
weight to necessary means to that end.[18] But now recall WSO:
WSO: You ought to be such that (if you believe that you ought to φ, then you intend to φ).
If reasons to be a certain way transmit to means to being that way, then the reasons we
have to satisfy WSO will presumably transmit to the means of satisfying it. The point of
transmission arguments is to try to show that principles like WSO transmit reasons and
oughts when they shouldn’t.
One very controversial transmission argument goes like this. You have reason to
satisfy WSO. Indeed, you have conclusive reason to be just as you ought to be. (Note that
this is trivially true so long as we are clear, as I’ve tried to be, that we are taking ‘ought’
to mean ‘has most objective reason.’) So it would seem that you have at least some
reason both to not believe that you ought to φ, and to intend to φ. For satisfying a disjunct
is a way of satisfying a disjunction. But we want to avoid the conclusion that you
necessarily have reason to intend to φ. Therefore, WSO is false.[19]
Notice that this argument relies on a strong assumption, one that we didn’t need in
the exercise case. In that case it was totally natural to think that the reasons got transmitted, but that was arguably because exercising was a necessary means to losing weight. In this case, reasons get transmitted to merely sufficient means. Many will reject this formulation of the transmission problem precisely because they deny that reasons transmit to sufficient means.[20]

[18] This assumption is quite natural. If I have conclusive reason to lose weight—say I have heart problems—then it seems like I have equal or better reason to do what it takes to lose weight, namely exercise, if I won't lose weight without doing it.
[19] For this form of argument see Raz (2005). For discussion see Millsap (unpublished).
Though I think that there are interesting things to say about the transmission of
reasons to sufficient means, I do not have a considered view about it. And even if I did, I
would not want to rest much on such a controversial case, since the point of this section is
to canvass the different forms of transmission arguments and evaluate their prospects
quite generally. So I will focus, in what follows, on stronger versions of these arguments,
even though I think it is worth bearing the sufficient means cases in mind.
The crucial claim in stronger transmission arguments is that there are cases in
which it is not plausible to contend that an agent ought to satisfy WSO by giving up his
belief that he ought to φ, and thus that it follows from the wide-scope principle that he
ought to satisfy WSO by intending to φ.[21]
There seem to be at least two potential types of cases. Though these have been
distinguished in the literature, it will be helpful to attempt a more rigorous exploration of
them.
First, if an agent is incapable of changing his belief that he ought to φ, then it
seems that intending to φ is a necessary means to satisfying WSO. And since he ought to
satisfy WSO, it appears to follow that he ought to intend to φ. We will call this form of
the argument the necessity detachment version of the transmission problem, since it
proposes a case in which a necessary means to satisfying the WSO conditional is
satisfying the consequent.
Second, if an agent rationally believes that he ought to φ, it seems strange to say
that he ought to satisfy WSO by giving up this belief. Provided the agent is rational in
holding the belief, the objector argues, the only way he can satisfy the wide scope principle is by intending to φ. We will call this form of the argument the normative detachment version of the transmission problem, since it proposes a case in which a necessary means to satisfying the conditional is satisfying the consequent, provided the agent is to remain rational.

[20] For criticism of this view see Broome (2005).
[21] The general structure of this argument comes from Greenspan (1975). See also Schroeder (2004), Kolodny (2005), Way (2010a), and Finlay (2010).
Necessity Detachment
The necessity detachment argument turns on the application of a means-end transmission
principle to WSO. In a simple and intuitively compelling incarnation, the transmission
principle says that if you have reason to do A, and doing B is a necessary means to doing
A, then you have a reason at least as strong to do B. Generalizing, we may extend the
transmission principle to ‘ought’, since we are analyzing this in terms of reasons:
T1: If you ought to do A, and doing B is a necessary means to doing A, then you ought to do B.
Recall that, structurally speaking, WSO allows for two methods of compliance: if you
believe that you ought to φ, you can comply by either intending to φ, or by revising the
belief. But now imagine that you are incapable of revising the belief. For example,
imagine that the belief has been implanted in you by nefarious scientific geniuses in a
way that renders it completely incorrigible. Then it seems to follow that the only way you
can comply is by forming the intention. And therefore by T1 you ought to form the
intention. But this is just the conclusion that we wanted to avoid. It represents a range of
cases in which the WSO collapses into O; or, to put it differently, a range of cases in
which we seem forced to detach the conclusion that you ought to intend to φ, even if φ-
ing is heinous or moronic.
22
If this argument succeeds, then it is a worrisome objection to the wide scope
ought account of the enkrasia principle. I am not sure whether it does succeed. Let me mention two concerns that we might have about it. As will become clear, I take the first less seriously than the second.

[22] See Setiya (2007) and Way (2010).
First, we might worry about the claim that there are cases in which agents cannot
revise their beliefs (which is precisely what’s supposed to make intending a necessary
means to satisfying WSO). There are many possible interpretations of the modality of
‘cannot’, of course, and it is hard to know what exactly the proponent of the transmission
argument is proposing. Perhaps it will be said that this vagueness takes the bite out of the
objection.
However, there are surely cases in which there is an extremely powerful sense in
which the agent in question cannot revise his belief. Presumably the nefarious scientists
could make it impossible for an agent to revise his belief in the sense that, given the
continued operation of the physical laws of our universe, there is no possible future in
which this agent effects such a revision. In this case, though the agent can revise the
belief on some weak interpretation of ‘can’ (e.g. logical possibility), it is natural to insist
that this sense of possibility is simply not relevant to our evaluation of the strength of this
version of the transmission argument.
For one thing, logical possibility seems genuinely irrelevant to the determination
of means. The concept of a means to an end is a paradigmatically practical concept. If
you asked me how you ought to get to the San Fernando Valley, and I told you that you
ought to fly there on your jetpack, then you would interpret my words as facetious, even
though flying there on a jetpack is surely possible. Indeed, getting to the San Fernando
Valley via jetpack is not just logically possible—it is metaphysically possible, and
physically possible as well. This seems to indicate that our concept of a means is
restricted in some sense to practically available options. As a consequence, it seems fairly
unpersuasive to object to the counterexample on the grounds that there is some sense in
which the scientists’ subject can revise his belief. There may be such a sense, but we need
an argument for thinking this is a good reason to reject the view that the subject’s
forming the intention is a necessary means to complying with WSO.
Let me try to summarize the dialectic more explicitly. The objector claims that the
case of the nefarious scientists should get us worried about WSO, because it is a case in
which the agent’s formation of the intention is a necessary means to his complying with
WSO. If it is such a case, then this appears to show that WSO entails the same implausible
bootstrapping, in a certain range of cases, that we identified as the main reason for
rejecting O. The defender of WSO attempts to deny that the case is one in which forming
the intention is a necessary means to compliance. This introduces the worry about the
conception of ‘means’ that I outlined above. If (B or C) is a necessary means to A, when
exactly can the fact that you cannot C entail that B itself is a necessary means to A?
What needs to be vindicated for this version of the transmission argument to work
is the following argument (with ‘O’ standing for ‘you ought to’):
1. O(A) [Stipulation. A=comply with WSO].
2. (B or C) is a necessary means to A [Entailed by WSO. B=revise belief, C=form
corresponding intention].
3. You cannot C [Premise motivated by scientists case].
4. So, B is a necessary means to A [From 2 and 3].
5. If B is a necessary means to A, and O(A), then O(B) [Necessary Means Transmission].
6. Therefore, O(B) [From 1, 4, and 5].
The question is what validates this chain of reasoning—in particular, the move from 3 to
4. In other words, what justification is there for restricting necessary means to the things
you can, in some unspecified sense, actually do?
I began to suggest above that standard claims to the effect that doing B is a
necessary means to doing A seem sensitive to an agent’s capacities in the way that the
proponent of the transmission argument supposes. For instance, if I ought to lose weight,
and in order to lose weight I must either exercise or not eat the ten glazed donuts I had for
breakfast, then it is natural to conclude that I ought to exercise, since I cannot go back in
time and make a more respectable showing at the breakfast table. This is compatible with
the fact that I ought not to have eaten the donuts. The idea is just that we typically take a
specification of necessary means to be somehow tied to an agent’s present capacities, or
‘live options’ in Greenspan's terminology.[23] So the objection seems to rely only on our
intuitive conception of a means. Until that conception is shown to be flawed, the
objection seems to be successful.
A second worry about this argument is, I think, much deeper. It concerns the
ultimate justification for transmission principles themselves. Rarely are such
justifications explicitly provided: the principles are often simply taken to be intuitively
compelling.[24] If we ask for an explanation of why transmission principles are true, one natural line to pursue appeals to some version of ‘ought implies can’.[25] Here is how it might go:
Assume that I ought to lose weight, and cannot do so without exercising. Then by
hypothesis, I cannot [lose weight and not exercise]. But if ought implies can, or
O(A) implies C(A)
then if you cannot do something, it follows that it’s not the case that you ought to. In
other words
~C(A) implies ~O(A)
So it follows that it’s not the case that I ought to [lose weight and not exercise]. But since
the fact that I ought to lose weight is logically equivalent to the fact that I ought to [lose
weight and [exercise or not exercise]], and we have seen that it's not the case that I ought to [lose weight and not exercise], we can conclude that I ought to [lose weight and exercise].[26]

[23] Greenspan (1975).
[24] An exception is Finlay (2008).
[25] Something like this is suggested, though not spelled out, in Greenspan (1975), Howard-Snyder (2006), and Way (2010b).
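For readers who prefer a display, the prose derivation above can be compressed into schematic steps (my own reconstruction, using the 'O' and 'C' operators already in play, with 'L' for losing weight and 'E' for exercising):

```latex
\begin{align*}
&1.\; O(L) && \text{[assumption: I ought to lose weight]}\\
&2.\; \neg C(L \land \neg E) && \text{[assumption: I cannot lose weight without exercising]}\\
&3.\; O(A) \rightarrow C(A) && \text{[ought implies can]}\\
&4.\; \neg C(A) \rightarrow \neg O(A) && \text{[3, contraposition]}\\
&5.\; \neg O(L \land \neg E) && \text{[2, 4]}\\
&6.\; O(L) \leftrightarrow O(L \land (E \lor \neg E)) && \text{[logical equivalence]}\\
&7.\; O(L \land E) && \text{[1, 5, 6]}
\end{align*}
```

The final step infers O(L and E) from the equivalence in 6 together with 5; whether the wide scoper can accept the inference principle this requires is itself a substantive question about disjunctive oughts.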
The trouble is that this explanation can seem quite problematic if we are antecedently skeptical about ‘ought implies can.’ Indeed, I find myself in this curious
position. I want to vindicate necessary means transmission, because it seems obvious. But
I am inclined to deny the natural explanation for it. Unfortunately, it would be too much
of a digression here to investigate the viability of ‘ought implies can’ and how this relates
to the justification of transmission principles. It is important to realize, however, that
even if we deny that ought implies can, this does not by itself undermine the transmission
objection. There may well be other ways of vindicating T1. I leave the reader to reach his
own conclusions about the strength of this version of the transmission argument against
WSO.
A concluding thought
Let us return for the moment to our comparison case, that of instrumental rationality. It
should be clear that the necessity detachment objection is a stronger objection to the wide
scope ought principle of enkrasia than it is to the analogous version of the instrumental
principle. This is a result of the fact that in the case of the instrumental principle, the wide
scoper postulates three possible ways of being instrumentally rational—intending the
means, giving up the means-end belief, or giving up the intention for the end.[27] So to
construct a case for the necessity detachment objection, we need one in which both the
belief and the intention for the end are unchangeable. But intentions are in at least one
important sense easier to change than beliefs. Beliefs aim at representing the way the world is—on one conception, they are necessarily responsive to evidence—and so it is very difficult to spontaneously form or relinquish beliefs.

26. An interesting question that I haven’t answered for myself: does this argument rely on an inference principle that the wide scoper already probably wants to reject—namely that from O(A or B), and not O(A), we get O(B)? (The wide scoper will want to reject this if he thinks that, e.g., it can be the case that I ought to [fight or serve] even though it is not the case that I ought to fight, and not the case that I ought to serve.) Or is it just that from O(A and (B or not B)) and not O(A and not B) we get O(A and B)?
27. Schroeder (2004).

Intentions are more subject to
an agent’s control. Most of us agree that an agent can form the intention to φ knowing
full well that he has insufficient reason for φ-ing.[28] And we should all agree that agents
typically consider several options when thinking about what to do, and often regard
multiple options as permissible. Indeed, in these cases it is common for agents to form
intentions and revise them on the basis of relatively minor considerations. So it seems
that temporary paralysis of intention is less common a phenomenon than temporary
paralysis of belief. Of course, there are cases of motivational compulsion in which some
would want to say that it is very difficult, or impossible, for the compelled agent to revise
his intention. And it’s likely that nefarious scientists who could make beliefs impossible
to revise could do the same for intentions. The point is just that the range of cases that
will support the necessity detachment version of the transmission argument seems
narrower in the case of the instrumental principle. Therefore, there is at least cause to
suspect that this version of the transmission problem is more profitably articulated as an
objection to the wide scope ought enkrasia principle than to the analogous version of the
instrumental principle.
Normative Detachment
We now turn to the second sort of transmission argument that can be marshaled against
WSO. In this type of case, which I labeled the normative detachment version of the
transmission argument, an agent is rationally required to keep believing (falsely) that he
ought to φ. Then the objector asks us to share his intuition that it would be wrong for the
agent to give up his belief; accordingly, it’s not the case that he ought to give it up. But
then he must form the intention in order to satisfy WSO. Again, this amounts to the kind
of illegitimate transmission that persuaded us to reject O.
28. Even if there are cases of epistemic akrasia, they are surely less common than cases of practical akrasia.
For example, consider Al, a cigarette smoker in the 1800s who believes that he
ought to keep smoking. Al likes to smoke, and the unhealthy effects of smoking are not
yet widely recognized. His belief that he ought to keep smoking is justified; giving it up
would be irrational. So it can seem like the wide-scope principle implies that Al ought to
intend to keep smoking, insofar as we have the intuition that it’s not the case that he
ought to revise his rational belief. But this is a counterintuitive result—smoking is bad for
him, whether he knows it or not.
The proponent of WSO will respond in the following manner. While it is true that
it would be rational for Al to continue believing that he ought to smoke, it is also true that
he ought to give up this belief. The “ought” we are employing in the wide-scope principle
is the ought of ‘most objective reason’, as I have taken care to emphasize. Thus it can be, and often is, the case that an agent ought to do something irrational. Again, this was one
of the main lessons of chapter one.
I believe that this is the right response. However, I will register one puzzling
aspect of it for the sake of completeness.
It is certainly true that what it’s rational to believe and what you ought to believe
can come apart. But the worry that one might have with this response is that in these
cases, giving up one’s belief might involve being akratic with respect to the belief. (We
may define epistemic akrasia as that error you are guilty of when you believe that you
ought to believe that p, and yet you fail to believe that p.[29]) Suppose that Al does believe
that he ought to believe that he ought to smoke. Then in order to evade this version of the
transmission argument, the defender of WSO must say that Al is not required to intend to
smoke, because he can equally comply with the principle by akratically revising his belief. But this is at least a curious thing to say in defense of a principle that’s intended to capture what it is that’s wrong with akrasia.[30]

29. Again, note that, strictly speaking, my view is that theoretical akrasia is the error of believing that you would be irrational if you failed to believe that p, and not believing that p.
In my view the argument of the preceding paragraph does not constitute an
especially worrisome objection to WSO. All I’m claiming is that it leaves us with a mild
theoretical curiosity that perhaps helps to indicate that ‘ought’ is the wrong man for the
job. Since the rational requirement formulation avoids this issue, this could be taken as a
point in its favor.
2.5 Third Objection to WSO: Symmetry[31]
There is one more objection to the wide scope program in general that we need to
address. Call it the symmetry problem. The wide scope account posits a systematic
symmetry—there are generally two (or more) equally good ways of satisfying any wide
scope principle. WSO, for example, posits that an agent can be enkratic, when he judges
that he ought to φ, in one of two ways: he can intend to φ, or he can stop believing that he
ought to φ. The proponent of a symmetry argument objects that these are not equally
good ways of being enkratic. In fact, giving up the belief that you ought to φ is on the
face of it a terrible way of being enkratic. Giving up this belief can be what we call
rationalization: the vice of convincing yourself that it’s not the case that you ought to do
something, merely because you don’t want to do that thing.[32]

30. Note that a more worrisome version of the argument could be given if we had reason to deny the possibility of epistemic akrasia. In that event the case of Al would become a further case of necessity detachment—a case in which the only possible way for Al to comply with WSO is by forming the intention. Though I am skeptical about the existence of epistemic akrasia, I do not have arguments to support this skepticism, so I don’t pursue this line in the text.
31. The symmetry problem is one of the important threads running through this dissertation. My most careful and extended discussion of it will come in chapter four. The treatment here is more superficial, but serves its purpose in the chapter’s dialectic.
32. Schroeder (2004), Kolodny (2005). Example: I believe I should donate to charity, but I want to keep my money for myself. So I think of reasons why I shouldn’t donate to charity—for instance, that so few of my wealthier friends do. I convince myself that it’s not the case that I should donate to charity so that I don’t feel any pressure to do what I don’t have the desire to do.

So contrary to the wide scope view, it is claimed that there is sometimes only one way to exhibit the virtue of enkrasia: you have to do what you think you ought to do. This doesn’t mean that agents cannot rationally or virtuously change their normative beliefs about what they ought to do. That would be absurd: the suicide bomber should change his belief about what he ought to do. However, if he really believes that he ought to murder innocents, he shouldn’t count as exhibiting the virtue of enkrasia when he gives up that belief.

Now it is true that there are other types of cases in which agents behave in an epistemically enkratic way, and as a result revise their beliefs about what they ought to do. These are not cases of rationalization. But cases of rationalization still seem problematic for the wide scoper.

Proponents of wide-scope principles may respond to symmetry worries in the following way:[33]

‘It’s true that my version of the enkrasia requirement is in one sense symmetric. But I have other relevant principles—for example, a principle concerning the rational formation of beliefs. This principle tells me whether an agent ought to have a certain belief. So my account has the resources to distinguish cases in which an agent ought to change his normative belief from cases in which he ought to intend to do what he believes he ought to do. Thus my account delivers the appropriate asymmetry, even though WSO is symmetric.’

33. Similar responses are discussed in Schroeder (2009), Finlay (2010), and Way (2010a and 2010b).

This is the right response. But to see why I am not completely satisfied with it, consider two different versions of the symmetry objection. The first says that WSO predicts a symmetry that does not obtain. The second says that WSO fails to capture an asymmetry that does obtain. In my view, the response cited in the previous paragraph is sufficient to undermine the first version of the objection. For the wide scoper is right to claim that WSO, once it is embedded in the proper theory of rationality, will get the right results about what rationality requires of an agent in a given situation (e.g. it will not predict that rationalizing is permissible). Nonetheless, the response does not succeed in answering the second version of the symmetry worry. The problem here is not that WSO gets the wrong results; it is rather that the principle fails to capture the depth of an asymmetry that is crucial to the way we think about enkrasia.
Here’s a preliminary stab at this idea. There seems to be something fundamentally
asymmetric about the relation that obtains between a belief that you ought to φ and
intending to φ. There is always something rational about forming the intention to φ on
account of your belief that you ought to φ. This is true even though it is sometimes less
than rationally optimal to go through this process of intention formation. But the same
cannot be said for revising one’s normative beliefs: it is not the case that there is always
something rational about doing so.[34] So it is natural to worry that an account of enkrasia
that appeals to WSO alone may miss something important. We will return to this issue at
the end of the chapter, and at length in chapter four.
2.6 Concluding remarks about WSO
I have offered several objections to WSO. Some are less than decisive. However, the
cumulative effect of these objections is to raise serious doubts about the viability of this
formulation of the enkrasia principle. Now I want to make one more observation in the
hopes of motivating our move to the wide scope rational requirement formulation.
Recall the discussion of normative detachment, and my case of Al the smoker.

34. It might be objected that there is in fact no asymmetry, since the relevant alternative is not the action of revising one’s normative belief simpliciter, but rather the action of revising it on account of a belief that it is unjustified. I agree that there is always something rational about the latter form of mental action. But the explanation is that this is a paradigm case of theoretical enkrasia. If one believes that p, and believes that one’s belief that p is unjustified, then the same asymmetry obtains: there is always something rational about revising the belief that p, but not always something rational about revising the belief that p is unjustified, even though the latter may be what is rationally required.

One way of presenting that argument is to say that we will get detachment in so far as Al ought to avoid irrationality—that is, in so far as he ought to avoid irrationally giving up
his normative belief. I mentioned that it would surely be objected that WSO merely says
that you ought to satisfy the conditional, and not that you are rationally required to satisfy
the conditional. Thus, it will be claimed that Al ought to give up his belief,
notwithstanding the fact that this would be irrational. But this reply left us with a
lingering curiosity. For it is strange to think that the enkrasia principle should sanction
irrationality of this sort. (It is particularly hard to swallow because this would, in cases
like Al’s, amount to permitting a kind of epistemic akrasia—an error that is related to the
error whose prohibition we are attempting to formulate.) Granted, we have already
become comfortable with the idea that what you ought to do and what it would be
rational for you to do can come apart. But I would suggest that it is this very idea that
makes it look odd to contend that the enkrasia principle could employ the ought-relation.
For we should (even pre-theoretically) consider akrasia a rational failure, and not
necessarily a failure to do what you ought to do or be as you ought to be.
Some evidence for this claim is the fact that people can akratically do what they
ought to do. Imagine that Sarah believes irrationally that she ought to refrain from
marrying her wonderful boyfriend Clarence, on the grounds that he isn’t Jewish;
nonetheless, she forms the intention to marry him anyway. In this sort of case, since
Sarah intends as she ought, it is difficult to see what would motivate the idea that her
error can be captured by appealing to what she ought to have done differently. We might
try to salvage the view by saying that she has intended as she ought, but for the wrong
reasons. But this will not always be plausible; Sarah might have married Clarence for all the
right reasons.[35] We might instead claim that she ought to be such that she has rational
beliefs. But this is simply false: on the ‘most objective reason’ reading that we are
employing, it will often be the case that agents ought to have irrational beliefs (for
example, if the beliefs happen to be true, even though the agent has little evidence for
them).
35. See Arpaly (2003), chapter two, and chapter three of this dissertation.
Thus we turn now to what I regard as a more promising line: considering the
version of the wide scope view that employs the relation of rational requirement.
2.7 Reflections on ‘rational requirement’
The concept of rational requirement is now routinely invoked in writing on various issues
pertaining to practical and theoretical normativity, but it is rarely explained or analyzed.
In this section I will offer some remarks intended to clarify the nature of the concept that
I intend to employ throughout the rest of this dissertation.
The most natural interpretation of the phrase ‘rational requirement’ is ‘something
that you must satisfy on pain of irrationality.’ If you are rationally required to have
attitude A, then you are irrational if you don’t have attitude A. Moreover, it is lacking this
very attitude that makes you irrational. In other words, lacking A is intrinsically
irrational—irrational on its own, and not just irrational because it happens to have certain
kinds of consequences, e.g. leading to the formation of other attitudes that are irrational.
(The importance of this additional condition will emerge below.) For example, in the
three envelope case, lacking the intention to choose envelope one is intrinsically
irrational: it is irrational not because it will lead to further attitudes of questionable status,
but simply because lacking this intention itself impugns the agent’s rationality. If we
know the relevant details of the case, and we know that the agent does not intend to take
envelope one, then we are already in a position to conclude that the agent is irrational.
This is what grounds the judgment that the agent is rationally required to intend to take
the first envelope.
There is a useful way of explicating this notion of requirement with a bit of
technical jargon. A requirement is the kind of normative relation that is all things
considered, or, as I prefer, decisive, relative to some well circumscribed normative
domain D. (We can conceive of these ‘well-circumscribed domains’ as intuitively
appealing candidates for sources of normative truths, or kinds of normative
considerations: rationality, morality, and perhaps prudence and good taste would be
plausible suggestions for legitimate values of D.) One way of cashing this out is by
62
saying that if you are D-required to φ, then there is no D-permissible option aside from φ-ing.[36]
The existence of the requirement means that nothing else is needed to determine
the verdict of D in the given case. The requirement is the verdict.
Another, more cautious way to elucidate this distinction is by appealing to two
different categories of weighing concepts. A decisive weighing concept delivers a final
verdict that encompasses all the relevant considerations of the domain in question. A non-
decisive weighing concept delivers a more restricted verdict. Other things being equal,[37] it is not legitimate to derive conclusions about the final verdict of a domain until we have
gotten all the non-decisive considerations together and evaluated them. A decisive
concept represents the outcome of this task.[38] Non-decisive (or pro tanto) normative
concepts lack this quality of finality. Standing in a non-decisive relation to φ-ing does not
alone entail anything about whether φ-ing is D-permissible or, more cautiously,
ultimately favored by D. For example, having a moral reason to φ does not entail that it is
morally impermissible to fail to φ, or that morality would convict you of any type of error
at all if you failed to φ.
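Put schematically (the notation is mine, not the dissertation's; $R_D$, $P_D$, and $r_D$ abbreviate D-requirement, D-permissibility, and a non-decisive D-reason), the contrast is:

```latex
\begin{align*}
\text{Decisive:}\quad & R_D(\varphi) \rightarrow \forall\psi\,\big(\psi \neq \varphi \rightarrow \neg P_D(\psi)\big)\\
\text{Non-decisive:}\quad & r_D(\varphi) \not\rightarrow \neg P_D(\neg\varphi)
\end{align*}
```

So a moral reason to φ, for instance, leaves open that failing to φ is morally permissible, whereas a moral requirement to φ does not.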
Thus I use ‘rational requirement’ to pick out the decisive normative relation of the domain of rationality. If you are rationally required to have attitude A, then you cannot be fully rational without having A—you must be irrational to some degree if you fail to have A. And if you are rationally required to avoid akrasia, then you are necessarily irrational any time you are akratic.[39]

36. This assumes that you cannot be under conflicting D-requirements. I am inclined to endorse such a “no conflicts” principle. Compare Broome (1999). See chapter five for further discussion.
37. Of course, I do not mean to commit myself to the view that in practical decision making we can never rationally or appropriately act on a reason without forming an all things considered judgment. In many cases rational action requires the curtailing or foregoing of deliberation. The point is just that decisive concepts have as part of their function the role of integrating and weighing non-decisive considerations.
38. Note that this sketch is more cautious because it permits, for example, ‘ought’ to be decisive even if we allow for the possibility of supererogation. On this conception you can permissibly fail to do what you ought to do, and ‘ought’ is still decisive because it encompasses all the relevant considerations and delivers a final verdict.
2.8 A Defense of the Wide Scope Rational Requirement Formulation
We now return to the general objections to wide scope principles that I have canvassed in
this chapter, considering them in relation to
WSRR: You are rationally required to be such that (if you believe that you ought to φ,
then you intend to φ).
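The placement of the requirement operator is what distinguishes the formulations; schematically (regimentation mine, with Bel for belief and Int for intention):

```latex
\begin{align*}
\text{Wide scope (WSRR):}\quad & RR\big(\mathrm{Bel}(O\varphi) \rightarrow \mathrm{Int}(\varphi)\big)\\
\text{Detached (narrow scope):}\quad & \mathrm{Bel}(O\varphi) \rightarrow RR\big(\mathrm{Int}(\varphi)\big)
\end{align*}
```

The transmission arguments below probe whether, in special cases, the first formula commits us to instances of the second.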
The evil demon
The first problem we considered for the wide scope ought version of the enkrasia
principle was the evil demon problem. The objection attempted to show that there were
cases in which agents ought to be akratic.
Does this argument undermine WSRR as well? Not obviously. On the face of it,
we can be rationally required to avoid akrasia even if we ought to be akratic. As we’ve
seen, what rationality requires from us can frequently conflict with what we ought to do.
Here is a contentious way to put the point. Suppose that rationality is just a matter
of conforming to your apparent object-given reasons. The threat of the demon gives you
only state-given reasons: it gives you reasons that have nothing to do with the object of
your intention (helping Annie), and everything to do with the cost of having the intention.
So the reasons you get from the demon’s threat are simply not relevant to what it’s
rational for you to do, on this conception of rationality. They are, however, relevant to
what you ought to do.
The general flavor of this response does not depend on appealing to the apparent
object-given reason account of rationality. We can restate it in more neutral terms. The
point is that the requirements of the rational domain need not line up with the
requirements of other domains, e.g. the requirements of prudence, or of objective normative reasons. So for every account of rationality that distinguishes the content of rational requirements from the content of other requirements, there will be a corresponding way of evading the evil demon problem.

39. In the next chapter I emphasize this strictness thesis, which I introduced in chapter one.
There is a residual worry, however, that cuts more deeply. Think back to what I
had to say about the three envelope problem at the end of chapter one. There I contended
that the problem showed that rationality is nothing short of “the guide to life.” In other
words, I argued that the concept of rational requirement, if it is to line up with our most
important common sense practices of rational evaluation, is to be interpreted as the
central normative concept of our practical standpoint—the concept that closes good
practical deliberation by figuring as the ground of enkratic reasoning. But it can seem like
the evil demon problem amounts to a stark challenge to this view. For it presents us with
a case in which rationality requires an agent to avoid akrasia even though it is abundantly
clear that his life will be better if he doesn’t. Does this show that my claim about the
centrality of ‘rational requirement’ was overly optimistic? Is rationality really more like a
system of parochial norms, to be flouted when they conflict with other norms that are
more important?
I think not, but I’m not sure how to defend this view. I regard this dialectical
situation as a useful way of framing a deep puzzle about how we conceive of the nature
of reasons and rationality. The puzzle is general and is often referred to as the wrong
kinds of reasons problem; I have already invoked it and admitted that I do not have a
developed theory about how to solve it. It is the puzzle of explaining why some
considerations seem to both count in favor of attitudes (and perhaps actions) and to not
count in favor of the rationality of those attitudes (and perhaps actions). In the present
case, the demon’s threat seems to count in favor of being akratic (having a certain belief
and lacking the corresponding intention); but the threat does not seem sufficient for
making akrasia rational. In a more familiar case, a reward seems to both count in favor of
having a belief, and to have no bearing on the rational status of the belief.[40]

40. See, most famously, Pascal (1908). I will note here that I think it is easy to underestimate the severity of this problem by too quickly dismissing certain types of considerations—most frequently, pragmatic considerations in the case of belief—as not being reasons at all. Sports provide some interesting examples. In one clear sense, athletes have very strong reasons to have certain irrational beliefs (example: Kobe believes ‘I will make the three at the buzzer’ even though he’s shooting 30% for the game and 45% lifetime in similar circumstances): these beliefs tend to have good consequences. If you were to claim that Kobe has no reason for this belief, or that it is in no way rationally justified by its consequences, it might be reasonable to respond that you are obviously not a fan of the Lakers.

We might attempt to mitigate the worry by drawing attention to the following disanalogy between the three envelope and evil demon cases. In the former, it is up to the agent to do what it would be irrational of her to fail to do (intend to take envelope one). In the latter, however, it is plausibly not up to the agent whether she can be akratic. For she might be incapable of retaining her antecedent belief that she ought to assist Annie once the demon’s threat is delivered; and if she loses this belief, she cannot be guilty of akrasia with respect to it. So perhaps we could argue that the relevant attitudinal change must be ‘up to you’ in order for it to be rationally required.

I am inclined to think that something like this strategy is promising, but I have worries about it. First, even if it were granted that the application to the present case works, it is not totally obvious that the solution would generalize. Consider an analogous evil demon objection to the wide scope rational requirement version of the instrumental principle. In that case, the demon threatens to torture you unless you are means-end incoherent. But it isn’t obvious that it is not up to you to be guilty of this.[41]

41. It isn’t obvious, but I think it is probably true. Sketch of an argument: if you intend an end, you are disposed to pursue it by intending the means; once the demon threatens you, you cannot be disposed to pursue the means; so once the demon threatens you, you cannot intend the end. See Finlay (2009).

A second worry about this approach is more interesting. It begins with the observation that we often call attitudes like phobias irrational. Indeed, it seems part of the definition of a phobia that it is an irrational, as opposed to a rational, fear. But presumably it is not always up to people to get rid of their phobias. Many would love to be rid of them, and take steps toward getting rid of them, but cannot accomplish this. So
it seems that we intuitively regard certain attitudes as ones that rationality requires us to
revise even though they are for all practical purposes immune to revision.
Another attempted solution to our puzzle involves a retreat from at least one of the claims I made about the decisiveness of ‘rational requirement’ in section 2.7. Maybe
the right thing to say about the evil demon case is that it is one in which rational
requirements conflict. On the one hand, the agent is required to avoid akrasia. On the
other, she is required to be akratic. We can still distinguish between the sources of these
requirements in some sense, perhaps, but not by saying that the injunction against being
akratic is a rational requirement, while the injunction against getting tortured is e.g. a
prudential one. This would involve giving up on the ‘No-Conflicts’ axiom for
requirements, which I endorsed. But it would not impugn the distinction between decisive
and non-decisive concepts. It would just mean that the final verdict of the rational domain
is a bifurcated one in a certain range of cases.
Finally, we might claim that there are, as it were, escape clauses implicitly built
into the content of rational requirements. So WSRR does not, strictly speaking, require all agents at all times to avoid akrasia: there are special circumstances that constitute defeaters
for the requirement, e.g. circumstances in which satisfying the requirement would have
catastrophic effects. I do not find this solution attractive. It gives up on what I’ve called
the strictness thesis, the claim that all instances of akrasia are instances of a substantial
agential failure. This is implausible: it does seem that there is something wrong with
being akratic, even in a case as strange as the one we are considering. The suggestion is
also, on one interpretation at least, incompatible with the claim I will make (in chapters
four and five) that we paradigmatically conceive of requirements as inescapable.
The evil demon problem raises, or underlines, some deep and troubling issues.
They are issues for all theorists who are attempting to understand the nature of
rationality, and I do not know of anyone (myself included) who has provided a
compelling answer to them. I have suggested some general forms of response, but they
need more filling out, and I can’t endorse them wholeheartedly. This is frustrating.
Hopefully the discussion at least serves the function of introducing these issues in a
relatively original way.
Transmission
The second form of argument against WSO was what we called the transmission problem.
This broke up into two separate cases, which I labeled necessity detachment and
normative detachment. But broadly speaking the problem was that, in at least some cases,
the wide scope view seemed to endorse the conclusion that it was specifically engineered
to avoid—namely, detachment of the consequent of the enkrasia conditional. Does this
problem arise for WSRR?
Normative Detachment
One advantage of WSRR is that it completely evades the argument from normative
detachment. It is true that Al is rationally required to be such that [if he believes that he
ought to smoke, then he intends to smoke]. And it is true that Al is rationally required to
intend to smoke in our case. Put it this way: given what he is in a position to know and
what he likes, smoking makes most sense. It’s not true that he ought to smoke, since there
are facts that he is not in a position to know that make it the case that smoking is very
bad. But we are comfortable with the idea that you can be rationally required to intend to
do something that you ought to intend to avoid (again, think of the three envelope
problem, or Williams’ gin and petrol case[42]).
The fact that WSRR avoids the issue of normative detachment will obviously seem
like no advantage at all if you are completely unmoved by the problem in the first
place. I claimed that the proponent of WSO had a satisfactory response to the objection,
but that this response left him to deal with a “lingering curiosity”—the fact that in some
cases, he would be committed to the view that an agent ought to satisfy the non-akrasia requirement by being epistemically akratic. Since I find this mildly weird, I regard the normative detachment argument as grounding a mild reason to favor WSRR over WSO.

42. Williams (1981). The partygoer rationally intends to drink the petrol because it appears to be gin and tonic. He would be irrational if he didn’t intend to drink it, because he really wants a gin and tonic. But he ought to refrain from drinking it.
Necessity Detachment
In order to run the necessity detachment argument against WSRR, we would need to
defend an analogue of the original necessary means transmission principle (T1) for the relation of rational requirement. That is, it would have to be the case that

T2: If you are rationally required to have attitude A, and having attitude B is a necessary means to having A, then you are rationally required to have B.
If T2 were true, then we could construct counterexamples to WSRR by giving cases in
which an agent could not revise her belief that she ought to φ. For in those cases the
necessary means to satisfying WSRR would be forming the intention to φ.
However, T2 does not have the intuitive plausibility of its predecessor. Imagine
that Genghis is rationally required to believe that he is worthy of praise. Suppose, though,
that the only way for Genghis to believe that he is worthy of praise is for him to intend to
invade Samarkand using living prisoners’ bodies as shields. Even though having such an
intention is a necessary means to having a belief that he is rationally required to have, it
does not seem like Genghis is (necessarily) rationally required to have the intention. For
it is plausible to think that the intention may be manifestly irrational on independent
grounds—plausible to think, in other words, that there is insufficient reason for the object
of the intention. If this is right, then Genghis’ weird psychological makeup should not be
able to bootstrap the intention into having the status of being rationally required.
This is why, in introducing my account of ‘rational requirement’, I added the
claim that to be rationally required to have attitude A requires that the irrationality of not
having A be intrinsic. Genghis’ intention to invade Samarkand using body shields is, let’s
suppose, intrinsically irrational. The only thing that recommends it is its strange
relationship to another attitude that he is rationally required to have. But this is not
sufficient to ground the claim that he is rationally required to so intend.
One natural thing to wonder is why, if this argument is at all persuasive, a similar argument does not falsify T1. For example, imagine that Genghis ought to treat his soldiers better. But suppose that his idiosyncratic psychology makes it the case that in order to do so, he must first kill ten percent of them. [43] By T1 it appears to follow that Genghis ought to kill ten percent of his soldiers. This is, again, a troubling result. But notice the following disanalogy between the cases. Though it is antecedently plausible to contend that Genghis ought to treat his soldiers better, once we find out that the only way for him to do so is to kill ten percent of them, we are inclined to revise our judgment about what he ought to do—we are disposed to recoil from the claim that he ought to treat them better. (At least this is the view that seems to fall out of the ‘live option’ conception of means that we tentatively adopted in 2.4.1.)
It seems that reasons against the necessary means also count as reasons against the end. Generalized to oughts, the claim is that if you ought to refrain from performing the necessary means to an action, then you ought to refrain from performing the action itself. To put it formally, the following contra-means transmission principle seems true:

CT1: If x is a necessary means to y, and you ought to refrain from x-ing, then you ought to refrain from y-ing. [44]

But compare:

CT2: If having B is a necessary means to having A, and you are rationally required to avoid having B, then you are rationally required to avoid having A.

CT2 is not plausible. Suppose I have the irrational belief that I don’t exist. The only way I will revise this belief, and form the exceedingly rational belief that I do exist, is by forming the intention to perform acts of excruciating self-mutilation. Assume that such an intention is irrational. Then if CT2 were true, it would follow that having the belief that I exist is irrational. But this is absurd.

We should reject CT2, and we should reject T2. Transmission principles do not apply as straightforwardly to the rational status of attitudes as they do to reasons for action. The necessity detachment argument is a more compelling objection to WSO than it is to WSRR.

To conclude this section, I will note that the argument against T2 is strengthened if we accept the view that state-given reasons are irrelevant to the rational status of attitudes. As Jonathan Way shows, object-given reasons clearly do not transmit from ends to means. [45] I have by now made it clear that there are big, controversial issues that this distinction between state-given and object-given reasons brings to the fore and that I do not know how to resolve. So I have given an argument against T2 that appeals only to intuitions about the rational status of certain attitudes. It is worthwhile to note that object-given reasons do not transmit, though, since this might be what explains our intuitions.

[43] Note that the truth of this claim requires a certain interpretation of claims about means that we invoked above in discussing the case of the nefarious neuroscientists, namely the idea that S’s psychological incapacity to x is sufficient for making it the case that x is not a means for S to y. If this were not true, then Genghis’ inability to treat his soldiers well without killing ten percent of them would not entail that killing the ten percent is a necessary means to treating them well.

[44] For example, we have all probably offered conditional advice of the form: you ought to go to the party and talk to Lisa, but if you cannot refrain from hitting on her and making things uncomfortable for everyone, then you ought to stay home. My claim is that we will retract the unqualified ought when we realize that a psychologically necessary means is impermissible. This is relevant to the actualist-possibilist debate about moral obligation, but it would be too much of a digression to introduce that material here. See Jackson and Pargetter (1986) and Ross (2011 ms).

[45] Way (2010a: 227). See also Setiya (2007). Here is one of Way’s examples. Harry has object-given reason to intend to give money to charity. However, he is psychologically incapable of intending to give unless he also intends to tell all his friends about his donation—so intending to tell is a necessary means to intending to give. But it doesn’t follow that Harry has object-given reason to intend to tell. Harry might have no reason to tell, e.g. when telling is not necessary for giving. (For example, this would be the case if the opportunity to tell came after the opportunity to give.)
Symmetry
Our final objection to the wide scope ought version of the enkrasia principle was that it
was symmetric, sanctioning two ways of rationally conforming. But there is an important
difference, I claimed, between the two ways in which we can get ourselves into
conformity with the enkrasia principle. Intending to φ because you have the belief that
you ought to φ always exhibits a rational virtue, or at least demonstrates that something is
going right with your cognitive powers. On the other hand, giving up the belief that you
ought to φ only sometimes exhibits a rational virtue. It is rational to change your beliefs
in response to evidence. But giving up the belief on account of your lack of an intention
to φ, and/or your distaste for φ-ing, should not count as exhibiting a good disposition of
practical thought.
As I noted above, there are two ways to read the symmetry worry, and both carry over to the present application. On the first reading, WSRR is objectionable because it
allows that giving up a normative belief (perhaps by rationalizing) can constitute a
satisfaction of the enkrasia principle. There is a satisfying response to this objection. The
enkrasia principle need not and should not be concerned with whether an agent is guilty
of rationalization. It is simply unfair to suggest that a principle is untrue merely because it
fails to explain everything in the vicinity that we would like explained. That is why a
satisfying theory of rationality will need much more than a true enkrasia principle.
But now consider the second version of the symmetry objection. In this
incarnation, the objector claims not that WSRR licenses an illegitimate symmetry, but
instead that the principle fails to adequately capture the important asymmetry that we’ve
articulated. This objection is not unfair. It does not claim that WSRR is false; it only
claims that the principle is not enough to constitute a satisfying theory of enkrasia.
I hope it’s clear that the objection from symmetry should have the same status
against WSRR as it did against WSO. If it is a problem, then it is a general problem for
wide scope principles. In the terminology that I will use from here on out, the problem
underlines how even if we grant the wide scope thesis—the truth of a wide scope
principle such as WSRR—we are still faced with the task of analyzing the plausibility of
the wide scope theory. The wide scope theory of enkrasia is something like the claim that
everything fundamental about the rational import of enkrasia is captured, and explained,
by WSRR.
Ultimately, the symmetry problem leads me to desire more explanatory resources
than can be provided by the wide scope theory. I think there is something fundamental
about the relation between believing that you ought to φ and intending to φ beyond the
fact captured by WSRR. Specifically, I think that believing that you ought to φ commits
you to intending to φ. The nature of this sort of commitment will be the central topic of
the remainder of this dissertation.
There is a related point that has not animated this discussion but which deserves mention. [46] A theory that appeals solely to wide scope principles has very limited explanatory resources, because wide scope principles apply to all agents independently of what they are like, what they care about, etc. This means that any explanation of wide scope requirements must appeal to perfectly general features of agency and nothing else. But it is difficult to give adequate explanatory theories on the basis of such thin resources; it is much easier to explain situational requirements, ones that are based on suppositions. This difference between rational requirements and rational commitments will be stressed in later chapters, especially when I argue that commitments are escapable and agent-dependent. Though I do not want to endorse the view that rational commitments explain the existence of rational requirements, I do think that this view is coherent and has some things to be said in its favor. [47]

My conclusion is that a theory of the rationality of enkrasia, and of the normativity of rationality more generally, must be supplemented: wide scope rational requirements are not enough. The relation of commitment seems to me well poised to play a central explanatory role. In chapter four I develop this argument in greater detail, and in chapter five I fill out my suggestion by providing an account of the nature of commitment that aims to unify several apparently separate strands of thinking about this concept. First, though, we take a slight detour to consider some incompatible views about the nature of rational norms.

[46] This point is more central, and better expressed, in Schroeder (2004) and (2009).

[47] In previous versions of this material I did endorse this view. I am now agnostic about it. I provide some reflections in Appendix B.
CHAPTER THREE: TWO MYTHS ABOUT AKRASIA
In the first two chapters I’ve aimed to provide a characterization and defense of the most
plausible version of the enkrasia norm, the wide scope rational requirement formulation.
In later chapters, my concern will be to criticize the adequacy of this principle as a stand-alone theory, and to offer an account of the notion of commitment that I regard as the
essential supplement to the conception of rationality so far articulated. The present
chapter takes a step back from the main thread of the dissertation to consider two
prominent views about akrasia that are incompatible with our preliminary conclusions.
The first view, defended by Joseph Raz and Niko Kolodny, is the view that rational requirements like the non-akrasia principle are in an important sense myths. These requirements are myths because there is no distinctive error of the sort that the relevant principles suppose (for instance, there is no distinctive error of akrasia). [1] Instead, there are only local failures to respond appropriately to reasons—e.g. the failure to believe in accordance with your reasons for belief, or the failure to intend in accordance with your reasons for intention. This is supposed to be compatible with the view that whenever someone violates a putative rational requirement they are doing something rationally impermissible. The idea is simply that the irrationality is a matter of responding incorrectly to one’s reasons, and is not a matter of violating a principle like WSRR. So, the story goes, such principles are substantially misleading: they purport to capture distinctive and important types of rational errors, but they are in fact merely true generalizations about when we can know that an agent must have inappropriately responded to her reasons in one of a variety of ways.

Let us call this the NDE (“No distinctive error”) view, as its main claim is that rational requirements are not to be construed as principles that describe central types of distinctive rational failure, but rather as downstream generalizations derivable from more fundamental truths about reasons and appropriate response. These truths about reasons are not distinctive in the relevant sense, for what explains the nature of the akratic error, on this account, is not any intrinsic relation between beliefs about what you ought to do and intentions. In fact, there is really no such thing as the akratic error at all. There are simply failures to respond correctly to reasons for belief and intention. The most we can say is that when you are akratic, you must have made one of these errors. But the error you’ve made may be exactly the same error as another agent has made in a case that isn’t a case of akrasia.

[1] Joseph Raz (2005); Kolodny (2005, 2007). I should note that Raz’s view is explicitly a view about the norm of means-end coherence, and he does not discuss akrasia per se. Whether or not he takes his conclusions to extend to the case of akrasia is unimportant for our purposes, since we have been operating under the assumption that a unified treatment of these related domains is preferable to a piecemeal one.
In the first part of this chapter I’ll suggest that the NDE view cannot vindicate the
intuitive asymmetry that I have claimed to be an integral part of our commonsense
thinking about akrasia. Then I’ll remind the reader of an assumption that I’ve been
operating with all along. The assumption is that a good theory of akrasia must entail that
every instance of akrasia is an instance of decisive irrationality. Following the discussion
in chapter one, and foreshadowing further consideration of this topic in chapter four, we
will call this assumption the strictness thesis. I’ll discuss how the NDE view attempts to
make good on this desideratum, and argue that the attempt cannot be successful.
This discussion will afford us a natural transition to the second prominent view
about akrasia that is incompatible with my treatment. This is the view, arguably defended
by Nomy Arpaly, that akrasia is not always irrational in a major way.
On one interpretation, Arpaly attacks the motivating assumption whose
acceptance I have taken for granted in the first two chapters; and if she is right to reject
the strictness thesis, then my argument against the NDE view cannot succeed. Worse, the
falsity of the strictness thesis would call my entire project of providing a theory of akrasia
into question—the idea all along has been to give an account of the sense in which
akrasia is invariably a major rational failing. (This would not be a problem for me alone;
it is a problem for anyone who hopes to invoke the concept of rational requirement in
giving a theory of what’s wrong with akrasia.) In the latter sections of the chapter I’ll try
to show why Arpaly’s arguments are not good arguments against this thesis, and I’ll give
my own diagnosis of what these arguments show.
3.1 The NDE view and an asymmetry intuition
There is an asymmetry embedded in our commonsense understanding of akrasia and
enkrasia that a good theory should capture. This asymmetry has already been articulated
in the previous chapters, and it will be the topic of further consideration in chapter four,
but for now we can locate it by considering the following hypothetical situations:
Bob and Jenny 1: We know that Bob believes that he ought to ask Jenny out on a date. We do not know anything about Bob’s justification for so believing. Later we discover that Bob has, at least partially on the basis of this belief, formed the intention to ask Jenny out.

Bob and Jenny 2: We know that Bob believes that he ought to ask Jenny out on a date. We do not know anything about Bob’s justification for so believing. Later we discover that Bob has revised this belief.
The asymmetry I’m interested in can be seen in the difference between what we can
conclude about these two cases. In the first, we can conclude that there is at least
something rational about what Bob has done. In other words, there is something
necessarily rational about forming enkratic intentions. In the second case, however, we
cannot conclude that there is something rational about what Bob has done. There may
be—he might have had insufficient reason to have his belief in the first place, and this
realization might have prompted him to revise it. But there may be nothing rational about
such revision. Bob might have been rationally required to retain his belief.
What I want to emphasize here is that there is a relevant difference between the
two ways in which agents can potentially come into conformity with the rational
requirement of enkrasia. When an agent has a belief about what he ought to do, he may
come to satisfy WSRR in one of two ways: by revising this belief, or by forming the
corresponding, enkratic intention. But the latter method is distinguished from the former
in that it always amounts to some kind of rational success. Of course, this success may be
qualified—if Bob’s belief is ridiculous, then his intention will inherit much of this
ridiculousness. But nonetheless we would admit that there is something to be said for it. [2]
Here’s another way to put the point. There is something distinctive about being
enkratic over and above the fact that it entails that you’ve avoided akrasia. For you can
equally avoid akrasia by giving up your belief about what you ought to do. But doing that
isn’t necessarily rational in any respect. So forming enkratic intentions is a distinctive
mode of rational conduct, one that we care about and assess positively in its own right;
and it cannot be reduced to the mere avoidance of akrasia.
Notice that this claim—that forming enkratic intentions is a distinctive mode of
rational conduct that we care about—is borne out by our commonplace practices of
rational assessment. For we often say, of people whose values or goals are in our opinion
inadequately justified, that they are nonetheless achieving some kind of rational success
when they consistently pursue these values and goals. (‘She’s crazy to care so much
about money, but you’ve got to give it to her—she works so hard that she’s sure to be
filthy rich someday.’ Similar modes of attenuated but nonetheless approving assessment
are present in cases of theoretical coherence as well: ‘His views are ridiculous, but at
least he’s consistent.’)
Moreover, the claim is corroborated by our practices of offering advice. As I mentioned in the Introduction, we commonly counsel others to follow their hearts, to be guided by their consciences, and to do what they’ve got to do. This is not always because we agree with their views about what ought to be done. It is sometimes because we are agnostic about this question, and regard the rational pressure to be enkratic as the deciding factor in a normative situation that is otherwise uncertain. [3]

[2] Ironically, it is Kolodny (2005, 2007) who has most frequently pressed a similar objection against defenders of wide scope views. See also Schroeder (2004, 2009). My formulation of the asymmetry is subtly different from the versions given by these authors. This topic receives more sustained attention in chapter four.
The problem for the NDE view is that it has a very tough time capturing this fact.
According to this view, our intuitions about the irrationality of akrasia are underpinned
by facts about what particular attitudes we have reason to form or revise. So WSRR is
true in virtue of the fact that an agent who believes that she ought to x either has
insufficient reason for that belief or insufficient reason to lack the intention to x. It is true
that an akratic agent has got to be irrational, the proponent of the NDE view says, but
only because she commits at least one of these errors. There is nothing of real
consequence in the connection between a belief about what you ought to do and the
intention to do it; it is rather that lacking this connection entails that some other error has
been made.
The thing to note is that the asymmetry I’ve attempted to motivate is precisely a view about the special nature of this very connection. There is something directional about it: there is something that’s necessarily true of you if you come to satisfy it in one way that isn’t necessarily true of you if you come to satisfy it in another way. But if the enkrasia requirement were merely a generalization that captured the fact that an agent must have failed to respond appropriately to his reasons, then there would not be anything interesting to say about the connection. Failures to respond appropriately to reasons for belief and intention are not distinctive—they are general and occur in all sorts of cases. [4]

[3] Imagine that my friend Sally asks me if she should break up with her boyfriend Tom. I am unsure about what to think. However, I glean from the conversation that Sally thinks she should break up with Tom. I might well say to her that, though I don’t know whether the relationship is worth preserving, I’m inclined to say that she should end it given that she seems to believe she should. Now the best analysis of this phenomenon might be the following: on independent grounds I am unsure about what Sally should do, but her normative belief strikes me as a further reason to believe that she should end the relationship, perhaps because I regard her position as epistemically privileged. But this need not be the case. I could regard her as epistemically impaired by her powerful feelings for Tom and nonetheless give the same advice. That is, I might remain agnostic about whether Sally has independently sufficient reasons to break up with Tom, and nonetheless counsel her to do so because she believes she should. Similarly, I might counsel a friend who is cultivating a Stoic temperament to go on a five day silence retreat, even if I think the retreat, and Stoicism more generally, are awful ideas. Of course, this recommendation would have to be qualified (“What you should really do is relinquish this preposterous worldview…”), but qualified advice is still advice.
Since the NDE view is completely deflationary about the connection between ought-beliefs and intentions—the connection is only interesting as an epistemic marker, a kind of heuristic for the presence of error—it cannot vindicate this robust story about the nature of the connection.

Let me put the point in one other way. The proponent of WSRR may well suppose that this principle captures a constraint on a distinctive form of attitudinal consistency at which rational agents aim. Indeed, as I suggested in the Introduction, most of us do often take the connection between beliefs about what we ought to do and corresponding intentions to be a distinctive and central element of rational conduct and self-governance. This is why we have expressions such as “let your conscience be your guide” and “follow your heart”, and why we often appeal to them in the course of giving advice. Moreover, in this section I’ve argued that even more should be said: there is an asymmetrical feature of this connection between ought-beliefs and intentions that a satisfying theory should illuminate. [5]

Now the NDE view is quite incompatible with all of this. The distinguishing feature of the view is that it assimilates akrasia, and other putatively special rational failings, into the homogeneous class of inappropriate responses to reason. [6] So, taken on its own at least, the NDE view has no explanation of our intuitions about the distinctiveness of akrasia, and it has no explanation of the difference between Bob and Jenny 1 and Bob and Jenny 2.

[4] E.g. if the error that grounds my “akrasia” is simply my refraining from forming an intention that I have conclusive reason to form, then I am guilty of exactly the same error whether or not I have a corresponding ought-belief.

[5] As I’ll argue in the next chapter, proponents of the wide scope theory cannot vindicate this either. This is part of what justifies the introduction of the notion of rational commitment.

[6] See Raz (2005: 24): “The answer is that there is no distinctive form of rationality or of normativity that merits the name instrumental rationality or normativity. In particular there is no specific form of rationality or of normativity that concerns the relations between means and ends.”
3.2 Revisiting a desideratum for the theory of akrasia
From the outset I have claimed that a satisfactory theory of akrasia must entail that every
case of akrasia is a case of genuine, substantial irrationality. Moreover, the theory should
explain this intuition: it should help us to understand the sense in which the akratic error
is a major error. We have called this claim the strictness thesis.
Why should we accept the strictness thesis? Thus far I have for the most part
taken this thesis as a kind of guiding intuition, an intuition that I expect many to share but
that I have not been especially concerned to defend. In chapter one I said that, if akrasia
is a major error, then presumably it has to be a major error on every occasion. Let me
make a few brief points about this claim.
First, I take the strictness thesis to be intuitively compelling because its denial
would be arbitrary. Akrasia is an error that we characterize formally; it involves being in
a certain state of attitudinal conflict. Since all cases of akrasia are of the same formal
type, it would be arbitrary to contend that some are irrational and some are not.
The second reason the strictness thesis is compelling is that the akratic error
appears to be constituted by this formal conflict, and not by any other particular or
idiosyncratic features of agents’ situations. In other words, agents are susceptible to
akrasia independently of the substantive defensibility of their attitudes. People with
manifestly absurd normative beliefs can be akratic with respect to these beliefs. They are
akratically irrational merely in virtue of the fact that they have beliefs about what they
ought to do without the corresponding intentions. Since being akratic requires only this
formal disconnect between ought-beliefs and intentions, we would seemingly be hard
pressed to find a relevant ground for denying the irrationality of akrasia in some subset of
cases.
Finally, I should add that writers on akrasia, and rational requirements more
generally, have almost invariably taken the strictness thesis for granted. This doesn’t
show that the thesis is true. But it might well be some evidence that it is. In any case, we
will consider the most plausible extant objection to the strictness thesis below in section
3.4, and we will see that the objection does not succeed. First we examine the NDE
view’s relationship to this thesis.
3.3 Why the NDE view cannot capture the desideratum
Proponents of the NDE view apparently aim to capture this feature of our intuitive judgments about akrasia. [7] In any case, if the thesis is true, then they should want to capture it. Let us try to understand how they could vindicate this thesis.

According to the NDE view, being akratic entails that the agent is guilty of a major error of one kind or another. Specifically, it entails one of these two mistakes:

E1: Agent believes that she ought to x, and she has insufficient reason for this belief.

E2: Agent lacks the intention to x, and she has insufficient reason for lacking this intention.

So the idea is that every case of akrasia is either a case of an E1 error or a case of an E2 error (or both). Since E1 and E2 are major errors, the strictness thesis is vindicated.
The problem is that this view will not in fact ensure the truth of the thesis. It won’t suffice because of the existence of cases of mere sufficiency. Consider the following case, and two hypotheses about it:

Tomato Soup: Amy is shopping in her local supermarket, looking at one of several cans of Campbell’s tomato soup. She wants to buy one can total, and is considering whether she ought to grab can C.

[7] See Raz (2005: 3): “The second response, however, is reinforced by the thought that a person with evil or worthless ends who fails to take the proper means to his ends is, perhaps luckily, irrational in a way that is indifferent to the character of his ends. He is irrational in the same way as someone whose ends are worthy, but who fails to take the means toward them.”
Hypothesis 1: Amy has sufficient reason to believe that she ought to grab can C, but she also has sufficient reason to believe that she ought to grab another can. In other words, it is rationally permissible for her to believe that she ought to grab the can she is looking at, but it is also rationally permissible for her to believe that she ought to grab any of the other relevantly similar cans within reach.

Hypothesis 2: Amy has sufficient reason to intend to grab can C, but she also has sufficient reason to intend to grab another can. In other words, it is rationally permissible for her to intend to grab the can she is looking at, but it is also rationally permissible for her to intend to grab any of the other relevantly similar cans within reach.
Now imagine that Amy is standing in the aisle, paralyzed with indecision. She believes
that she ought to grab can C, but she does not intend to grab it. Amy is clearly guilty of
akrasia. Can the NDE view capture this fact?
It cannot. By hypothesis, akrasia just is E1 or E2. But Amy has not committed either of these errors. She has sufficient reason for her belief, so she is not guilty of E1. And she does not have insufficient reason for lacking the intention that she lacks—we’ve said in Hypothesis 2, after all, that she has sufficient reason to intend to grab other cans—so she is not guilty of E2. So the account of akrasia suggested by the NDE view has the unwelcome consequence of falsifying the strictness thesis.
I regard this as a strong argument against the NDE view. In the interest of
fairness, however, let’s consider two points that might be made in its favor. The first is an
objection to my construal of the case. The second is an amendment that might be made to
the NDE view itself.
An objection that might be pressed against me is that I’ve made the false assumption in Tomato Soup that Amy is rationally permitted to believe that she ought to choose can C. (The objection thus flatly rejects Hypothesis 1.) Here is what might be said in defense of this view.
Amy has no reason to choose any can over any other, at least among the ones that are within reach. So while it is rationally permissible for her to intend to choose can C—after all, she is rationally required to intentionally choose one of the cans—it is not rationally permissible for her to believe that she ought to choose it. All she may rationally believe is that she ought to choose one of the cans within reach. Thus the situation is not a counterexample to the NDE view, since it is in fact a case in which Amy violates E1.
There are at least two worries about this proposed way out for the NDE view.
First, it is plausible that Amy is rationally permitted to have the belief that she ought to
choose can C. For Amy is, by hypothesis, rationally required to believe that she ought to
choose one of the cans within reach. Since any of the cans within reach will serve her
equally well, and she only wants one of them, it is natural to think that she is permitted to
form the belief that, for any one of these cans, she ought to choose it.
Notice a strange consequence of denying this picture. If it’s true that we cannot
permissibly reason from ‘O(A or B, it matters not which)’ to ‘O(A)’, then it follows that
we can never engage in direct enkratic reasoning in cases of mere permissions. For
enkratic reasoning requires as its basis a belief of the form ‘I ought to x’, where x is an
action. The view has the implication that in such cases we must reason directly from a
belief of the form ‘I ought to x or y, it matters not which’ to an intention to x, or an
intention to y.8 But imagine Enkratic Ernie, a man who is maximally diligent about
engaging in enkratic reasoning. Ernie is so diligent that he cannot intend without first
engaging in such reasoning; moreover, Ernie’s reasoning is always sound. The view
under consideration implies that Ernie cannot intend in Tomato Soup. But that is weird,
because not intending is irrational, and Ernie needn’t be.9
8 Another way to bring out the oddity of this implication is by noting that it falsifies the
following fairly intuitive principle linking the rational status of intentions and ought-
beliefs:

Backwards Enkrasia: If it is rationally permissible to intend to x, then it is rationally
permissible to believe that you ought to x.

9 You might say that it’s irrational for Ernie to have this studious policy of only forming
intentions by way of enkratic reasoning. But that isn’t necessarily the case. Imagine that
such processes are no burden on his time or energy. Then there would seemingly be no
grounds for thinking that his policy is irrational.

I recognize that what I’ve said thus far is likely to be contentious. I am not too
worried about this, because there are other reasons for thinking that Tomato Soup suffices
to undermine the NDE view.

First, imagine that we grant the objection to my treatment, giving up on the claim
that Amy is rationally permitted to believe that she ought to choose can C. Still, we are
well short of a satisfying response to the worry. For we can grant that Amy has a belief
that she is not rationally permitted to have, and still worry that this error is not the one we
are pointing to when we say that Amy is being akratic. So while it may be true that she is
guilty of this error, it remains to be seen why we should regard this error as constitutive
of akrasia. And plausibly Tomato Soup helps to show why it is counterintuitive to regard
Amy’s belief as the sole locus of her akrasia.

Second, the bigger problem is that this solution will not provide a general defense
of the NDE view from counterexamples like the one I’ve sketched. Recall that the NDE
view is supposed to be a view about rational requirements generally.10 The issue is that
when we move to considering other rational norms the current solution will be
inapplicable.

10 More precisely, it is explicitly general in Kolodny’s work. In Raz, it is confined to the
case of the requirement of instrumental rationality, but it is plausible to assume that Raz
too thinks the view has broader reach.

Consider the case of means-end coherence. Amy intends to purchase can C, and
she believes that taking it off the shelf is necessary for purchasing it. Nonetheless, she
stands in the aisle paralyzed, for some reason failing to form the intention to take the can
off the shelf. Amy is clearly means-end incoherent. But the NDE view cannot capture this
fact. Means-end incoherence, according to the NDE view, is either: (a) intending an end
for insufficient reason, (b) believing for insufficient reason that a means is necessary for
that end, or (c) lacking an intention for the means for insufficient reason. But Amy is
guilty of none of these errors. She has sufficient reason to intend to purchase can C, and
sufficient reason to believe that taking it off the shelf is necessary for purchasing it. And
she has sufficient reason for lacking the intention to take the can off the shelf—she could
rationally intend to take one of the other cans within reach off the shelf instead. So the
NDE view appears straightforwardly incompatible with the strictness thesis.
Let us now consider an amendment that might be offered as a way to save the
NDE view. Originally, the view amounted to the claim that what we often take to be the
distinctive requirement of non-akrasia is really a generalization from more mundane
failures of reason-responsiveness. Akrasia just is an indication that the akratic agent has
at least one of two particular attitudes for insufficient reason. For this reason the view is
deflationary about the connection that I have been suggesting must be central to a
satisfactory understanding of akrasia.
The question is whether we can tweak the NDE account of akrasia in order to
deal with the cases of mere sufficiency that plague it. The natural strategy would be to
extend the set of possible errors that are indicated by akrasia. The new and improved
view would have to claim that the errors underlying our judgments about the irrationality
of akrasia are:
E1: Agent believes that she ought to x, and she has insufficient reason for this belief.

E2: Agent lacks the intention to x, and she has insufficient reason for lacking this
intention.

E3: Agent believes that she ought to x, and she lacks the intention to x, and she has
insufficient reason for this combination of attitudes.
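The relational character of E3 can be brought out in a simple shorthand (mine, not the text’s): write B for the belief that one ought to x, I for the intention to x, and S(·) for “has sufficient reason for”.

```latex
% E1: the ought-belief itself is insufficiently supported.
E_1:\quad B \;\wedge\; \neg S(B)
% E2: the absence of the intention is insufficiently supported.
E_2:\quad \neg I \;\wedge\; \neg S(\neg I)
% E3: the combination is insufficiently supported, even where each
% conjunct, taken alone, is rationally permissible.
E_3:\quad B \wedge \neg I \;\wedge\; \neg S(B \wedge \neg I)
```

Only E3 takes the pair of attitudes, rather than either attitude on its own, as the object of rational assessment.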
The refined view suggests the following analysis of Tomato Soup (or, if you prefer, the
analogous case for the instrumental principle). Though it’s true that Amy has sufficient
reason for her belief that she ought to choose can C, and true that she has sufficient
reason for lacking the intention to choose can C, she nonetheless has insufficient reason
for [believing that she ought to choose can C and lacking the intention to choose it]. So
her akrasia is still at root a failure of reason-responsiveness.
Here’s the problem with the amended view. The view says that Amy has
insufficient reason for a conjunction of attitudes. But it also says that she has sufficient
reason for each of the attitudes in the conjunction. This means that whatever it is that
makes the conjunction insufficiently supported by reason cannot have anything to do
with the independent justification of Amy’s individual attitudes. After all, that these
attitudes are themselves sufficiently supported by reasons is built into the structure of the
case—otherwise we would not have needed to add a further condition to the NDE
account. So then the only thing left to appeal to, as the ground of Amy’s violation of E3,
is the connection between her belief and her intention. And any appeal to this connection
is an abandonment of the NDE view.
Let me conclude the section by elaborating on this last claim. As I noted in
section 3.1, the NDE view is the view that our intuitions about the irrationality of akrasia
are underpinned by facts about what particular attitudes we have reason to revise.
Proponents of the view insist that we have mistakenly assumed akrasia to be essentially
about a connection between attitudes, when it is really just a marker for run-of-the-mill
failures to respond to reasons for particular attitudes. We’ve seen, however, that in order
to vindicate the strictness thesis, the NDE view must appeal to facts about the
irrationality of certain sets of mental states, all of whose individual constituents are
rationally permissible. This shows that akrasia cannot be characterized as essentially an
indication that one or more of the akratic agent’s attitudes (or lacks of an attitude) is
insufficiently supported by reason. Akrasia must, on the contrary, be explained in terms
of the rational connection that obtains between ought-belief and intention.
In other words, the NDE view has got things backwards. It is not a plausible
alternative to the picture of rational norms that I’ve been defending.
3.4 How not to misinterpret Arpaly
We turn now to a consideration of one apparent challenge to the strictness thesis posed by
the arguments of Nomy Arpaly.11 Arpaly maintains that agents often act rationally
against their best judgment.12 Taken at face value, this claim might be assumed to entail
the claim that some instances of akrasia are not irrational.

11 Arpaly (2003), chapter 2; see also Audi (1990) and McIntyre (1993).
But things are not so simple. On one interpretation of her main contentions,
Arpaly is not at all concerned to argue against the strictness thesis; her goal is just to
point out a few ways in which she thinks the literature on akrasia has been misguided.
However, on another interpretation, some of her remarks do appear to be incompatible
with a certain element of the strictness thesis. This section will aim to make the content
of her worries more precise. I will show that Arpaly does not give us good reasons to part
with the strictness thesis, whether or not she intends to be giving us such reasons. In the
next section I’ll discuss some of the general lessons that her treatment of akrasia affords
us.
Though I think it is clear that Arpaly does not deny the strictness thesis in the way
that she is often taken to, misinterpretations of her on this score are prone to arise. At
several places, Arpaly does seem to be making the claim that some instances of akrasia
are not instances of irrationality. For example, in an early passage she sets her own view
in opposition to several orthodox positions, including the view “that akrasia is never
rational” (22). Later, in summing up the account of rationality she has given, she claims
to have argued—contra Davidson and the majority of philosophers who have written
about these matters—that “acting against one’s best judgment is not always irrational”
(62). Moreover, the name of her chapter (“On Acting Rationally Against One’s Best
Judgment”) might seem to imply that she regards certain cases of akrasia as cases in
which the agent is guilty of no irrationality, and thus as a denial of the strictness thesis.
However, more careful formulations of her position make it clear that this is not
what Arpaly is after. After saying that her chapter will argue that acting against one’s best
judgment can sometimes be rational, Arpaly adds an important elaboration:
12 Note that Arpaly regards akrasia as a relation between an agent’s best judgment (in my
terminology, her ought-belief) and her actions, whereas I have been regarding it as a
relation between this kind of belief and intention. For the purposes of this chapter the
distinction is of little consequence. I discussed this issue, and motivated my
characterization, in chapter one.
Or rather, to be more precise, I would like to argue that sometimes an agent is more rational for
acting against her best judgment than she would be if she acted in accordance with her best
judgment. I still agree that every agent who acts against her best judgment is, as an agent, less
than perfectly rational…13
This passage and others like it make it difficult to maintain that Arpaly’s real target in
these sections is the strictness thesis. For that thesis is just the claim that every instance of
akrasia is an instance of a distinctive, substantial error. And this is compatible with the
claim that an agent can be guilty of this error and still be, on the whole, more rational in a
given situation in virtue of having consummated his akrasia.
Consider the now famous case of Huck Finn, which Arpaly calls a case of
“inverse akrasia.”14 We suppose that Huck believes that he ought to return Jim, a
runaway slave, to his owner Ms. Watson. But when the opportunity to do so comes along,
Huck cannot bring himself to act in accordance with his judgment.15 So Huck akratically
helps Jim escape.
Arpaly’s point is that in this situation, Huck is plausibly performing the rational
action, and doing so for exactly those reasons that a rational agent would act on in the
circumstances—sensitive consideration of Jim’s humanity, recognition of the unfairness
of the institution of slavery, etc. It can’t be a mark against Huck’s rationality that he does
the right thing for the right reasons. So it can’t be that acting akratically necessarily
makes someone more irrational than they would be if they acted enkratically. If Huck
were to return Jim to Ms. Watson he would be, on balance, more irrational than he
actually is.
13 Arpaly (2003: 36).

14 The example is originally from Bennett (1974).

15 For the moment I am just accepting Arpaly’s description of the case. In the next section
I will argue that it is significantly more complicated than she allows.
Now this position strikes me as quite plausible. But it is not a denial of the
strictness thesis. For Huck is, at least on this interpretation of his psychology, guilty of a
distinctive, substantial form of irrationality. He is at war with himself; his conduct defies
his own ultimate standards about how he should be. This is clearly compatible with the
claim that Huck would be guilty of a worse form of irrationality if he acted in accordance
with his judgment. After all, it is no part of the strictness thesis that akrasia is the only
error capable of impugning one’s rationality.
There is another strand of thought in Arpaly, though, that can more plausibly be
seen as an attack on the thesis. At numerous places Arpaly seems to claim that, though
akrasia is always irrational, it is not, as I’ve been saying, a “substantial” or “major”
defect. For example, just below the passage I quoted above, she says that
[T]here are cases where following her best judgment would make the agent significantly
irrational, while acting akratically would make her only trivially so—and as rational as most of us
ever are.16
And later, in attempting to formulate the main conclusions of her chapter, she claims to
have argued that
A theory of rationality should not assume that there is something special about an agent’s best
judgment. An agent’s best judgment is just another belief, and for something to conflict with
one’s best judgment is nothing more dramatic than ordinary inconsistency in belief, or between
beliefs and desires.17
So Arpaly’s real concern about the strictness thesis seems to be that it overstates the
sense in which akrasia is necessarily an error. It is always an error, but perhaps not
always a significant one.
16 Arpaly (2003: 36).

17 Arpaly (2003: 61).
In one way this claim is difficult to evaluate. Nowhere have I given an account of
significance or substantiality from which we could derive conclusions about what rational
defects count as significant or substantial. (Nor has Arpaly given such an account.)
Nonetheless, I think it’s clear that the view that akrasia is a trivial or minor error is
intuitively unappealing. Let me conclude this section with some reflections on why I
think this is so.
First, notice how the claims that Arpaly makes to elaborate this conception are
themselves extremely controversial. In the first quotation, she says that acting akratically
will, in some cases, make the agent no less rational than most of us ever are. But it is
unclear what this is supposed to show. It might be that most of us are significantly
irrational much of the time—in that case, being only as irrational as most of us would not
show that one is not significantly irrational. More importantly, the claim that Huck Finn
is only “trivially” irrational is precisely what’s at issue. As I’ve indicated, the fact that he
is more rational when he acts against his best judgment than he would be if he acted in
accordance with it does nothing to show that he is not nonetheless significantly irrational
in virtue of his having acted against this judgment.
Likewise, the claim that an agent’s best judgment is “just another belief”, and that
the akratic conflict is no more “dramatic” than other species of attitudinal conflict, is very
hard to interpret. Most writers on theoretical reason simply assume that having
inconsistent beliefs is an obvious, distinctive, and substantial form of theoretical
irrationality. At least according to the orthodoxy, then, Arpaly’s analogy should support,
rather than controvert, the strictness thesis. In addition, the analogy to a conflict of belief
and desire is perplexing. For the view that it’s irrational to have desires that conflict with
one’s beliefs about what one ought to do is at least highly controversial, and, I think, false
on a face-value interpretation.18
18 I may believe that I ought to go to the dentist and nonetheless desire to avoid the
dentist. As long as I also desire to go to the dentist, and this desire is stronger than the
desire to avoid going (perhaps: stronger proportionally to my reasons), I am guilty of no
irrationality whatsoever. Some would take a view even more divergent from the one
suggested by Arpaly’s analogy, namely the view that there are no rational norms that
relate belief and desire at all.

Finally, there are positive reasons for thinking that the akratic agent is,
necessarily, guilty of a major error. Think again about Huck. While we may agree
wholeheartedly with Arpaly that Huck achieves a vital sort of rational success by doing
the right thing for the right reasons, we still typically take his situation to be emblematic
of a deep kind of conflict, one that is to be avoided by rational agents. Huck is internally
divided; he cannot reconcile his practical judgment with the way he feels compelled to
act. We routinely think that this means something important. On the account I’ve been
defending, one thing it means is that Huck is rationally required to revise either his values
or the way he acts. If we didn’t judge his predicament to be one in which this substantial
and interesting rational defect was a key element of his normative situation, then I have
trouble seeing why we would ever have found the case particularly interesting in the first
place, or why we would continue to find it interesting.

I trust that many readers will share this intuition about the seriousness of the
akratic error. In any event, what I’ve argued in this section is that Arpaly does not
provide us with good reasons for rejecting the strictness thesis. Absent such reasons, I
feel justified in continuing to assume it.

3.5 What Huck Finn shows

In this final section I want to offer some more general reflections on the lessons we
should take from Arpaly’s discussion. I’ll be interested to make three main points. First,
cases of “inverse akrasia”, while fascinating, are far more complex than it may initially
appear, and we need to be careful about how we categorize them. Second, the intuition
that drives Arpaly’s treatment of these cases is at bottom the same intuition that drove
Broome and many subsequent thinkers to give wide scope accounts of the norm of
enkrasia (accounts like the one I endorsed in chapter two). As a first gloss, it is simply the
intuition that there are no necessary truths about which way of avoiding akrasia is the
most rational—this varies from case to case. Third, the proper analysis of cases of inverse
akrasia entails that there is a deep issue in the theory of rationality about the method by
which we weigh competing rational considerations in order to arrive at what I’ll call, for
lack of a less silly name, an agent’s rationality score. Though the subsequent chapters of
this dissertation will not offer a solution to this last problem, they will underscore the
importance of coming to grips with it.
The case of Huck Finn is more complicated than Arpaly lets on. It is complicated
because it is extremely difficult to determine what attitudes Huck has. As described, he is
supposed to believe that he ought to return Jim. But it is reasonable to wonder if he does
really believe this, or, even if he does, whether this is the end of the story. We are all
aware of cases in which people take themselves to believe things that they do not really
believe—for example, a young adult may take himself to believe that being gay is
immoral, when what he really believes is that being gay is immoral according to most of
his friends and acquaintances. So it might be that Huck “believes” that he ought to return
Jim in this non-standard sense only. Perhaps he regards it as the thing that most people
would do, or as the thing that a “good” person would do, without believing that he ought
to do it.
Alternatively, it might be that Huck simply has conflicting beliefs about what he
ought to do—he believes both that he ought to return Jim, and that he ought to help Jim
escape. This latter situation is hard to imagine, but gets easier if we assume that each of
these beliefs is implicitly relativized to a certain set of considerations. So for example,
Huck might believe that he ought “morally” to return Jim, but that all things considered
he ought to help him escape. (These classes of considerations, and the relativity of the
beliefs, obviously need not be psychologically transparent.) Of course, I’m not here
presenting a view about the particulars of the case, which is fictional, but rather
attempting to bring the many possibilities into focus. The point is just that it isn’t even
clear that Huck is akratic, since it isn’t clear what account to give of his mental states. If
he has no genuinely all things considered belief that he ought to return Jim, then he is not
guilty of akrasia.19
Next observe that the intuition Arpaly is appealing to in analyzing these cases is
really the same intuition that we appealed to in order to motivate wide scope accounts of
the enkrasia norm. Recall that we followed Broome in rejecting narrow scope versions of
the norm because they permitted bootstrapping of an implausible kind: any old belief
about what you ought to do was sufficient to make it the case that you ought to, or were
rationally required to, intend to perform that action. But, I argued, believing that you
ought to kill innocent children, or believing that you ought to count the blades of grass on
the moon, cannot alone make the corresponding intentions the ones that (e.g.) rationality
requires you to have. This would cheapen the notion of rational requirement to the point
of severing any connection to our commonsense conception of rationality. For in ordinary
life we would never say that the agent who forms these corresponding, enkratic intentions
is ideally rational in doing so.
Huck Finn is just another illustration of this basic point, though this might not be
immediately obvious. Suppose that Huck is indeed akratic. Arpaly is concerned with the
comparative rationality of Huck’s akratic action and his (counterfactual) enkratic action.
She claims that, were he enkratic, Huck would be less rational overall, since he would
have acted in a rationally indefensible way in addition to having a rationally indefensible
belief about how he ought to act. Along the same lines, we observed that the rational
requirement of enkrasia must permit the suicide bomber to come into conformity with the
norm by discarding, rather than following through on, his rationally indefensible belief
about how he ought to act. These are two ways of observing that potential processes of
reasoning, like the (possible) enkratic reasoning of Huck and the suicide bomber, can
have something rational about them even though the alternatives would be more rational.

19 What about the case in which he has two conflicting but nonetheless genuine all things
considered ought-beliefs? I’m inclined to think that he has got to be akratic and enkratic
in such a case. Note also that the same complications arise for Arpaly’s own cases, e.g.
the case of Sam the student, who we are to imagine forming the belief that he ought to
become a hermit in order to pass his final exams. As Arpaly notes, this is a case in which
Sam’s belief that he ought to become a hermit is in tension with the overwhelming
majority of his other mental states. As such, I think it is plausibly a case in which Sam
has conflicting beliefs about what he ought to do.
What the case of Huck does importantly underline is that a complete theory of
rationality needs an apparatus for generating the kinds of comparative judgments that
Arpaly makes (e.g. the judgment that actual akratic Huck is more rational than
counterfactual enkratic Huck). Rationality comes in degrees, and we need a way to
measure these degrees: we need a way of calculating an agent’s rationality score.
This type of calculation can seem like a triviality, but this is not so. The claim that
actual Huck is more rational than counterfactual Huck is very intuitive. What explains it,
then? Is it that actual Huck has more attitudes/actions that rationality requires of him? Or
is it that he performs one extremely important action that rationality requires him to
perform? Arpaly’s treatment seems to indicate something like the latter view, but it is not
totally clear how best to capture this general idea.20
The question of whether the number of rationally required (or rationally
impermissible) attitudes is the main ingredient in an agent’s rationality score, or whether
the nature of particular attitudes is what matters most, is in my view merely the tip of the
iceberg. The account of rational commitment that I begin to articulate in the next chapter
will introduce a further complication that a complete theory will have to address. As I’ll
argue, we can be—and all too often are—rationally committed to having attitudes that we
are rationally required to avoid having. For example, a philosopher can be rationally
committed to believing an obvious consequence of his theory even if his belief in the
theory (and hence his actual or possible belief in its consequences) is itself rationally
indefensible. This means that there is some rational pressure on the philosopher to believe
the consequence of his theory, while there is also rational pressure on him—pressure of a
more decisive or weighty sort—to give up his belief in the theory and its consequences.
Similarly, I argued earlier in this chapter that there is always something rational about
forming an enkratic intention, even though it might be more rational in a given case for
the agent to give up her ought-belief. What this account necessitates is a way of weighing
these potentially competing rational pressures.

20 One complication that arises if we adopt such a picture concerns whether practical
relevance similarly determines theoretical rationality. Are irrational beliefs more
irrational in virtue of being disposed to lead to bad practical consequences? I feel some
pressure to say that they are, but then we seem to stray from the initially attractive view
that beliefs are rational simply in virtue of the degree to which they are responsive to the
agent’s evidence. Again, I think the issues here can seem relatively unimportant, but are
ultimately deep and perplexing.
Let me emphasize the importance of this point, which I take to be insufficiently
appreciated, by way of an example that I’ll be invoking in the next chapter as well.
Suppose that I irrationally believe a moral theory M.21 Imagine that M is not a completely
crazy theory. It contains some important moral truths, but it also happens to be
inconsistent—not obviously inconsistent, but clearly so once submitted to enough
rational scrutiny. Perhaps, for example, M tells me that I am morally obligated to donate
ten dollars to the earthquake victims in Haiti, but that I am not morally obligated to
donate any money to fight world hunger, while giving no principled account of the
difference between the cases. Suppose further that I apply M with as rigorous a
conscience as I can muster. Though I am guilty of irrationality in believing M, I am
presumably also worthy of substantial rational praise for my diligent application of the
moral theory I genuinely believe to be true. In addition, I’m presumably worthy of some
praise for believing M over less defensible competitors. The question is how to account
for the complexity of this sort of situation in a theoretical determination of my rationality
score.
We can see the force of the problem when we reflect on the following fact: many
of us believe a theory like M. Simply considering the wide range of competing moral
theories that philosophers accept should leave us with the sense that most of us believe
false moral theories. And it is highly unlikely that each of these competing theories is
equally rationally defensible.22 What’s more, the majority of people do not subject their
moral beliefs to the kind of scrutiny that is characteristic of the philosophical sophisticate.
So there is overwhelming reason to think that at least a lot of regular people, whom we
commonly consider capable of extremely rational action, are consistently acting (or
intending) in ways that violate what rationality requires. Still, though this may temper our
inclination to consider them the absolute paragons of rationality, it doesn’t undermine the
intuition that they achieve substantial rational success.

21 Strictly speaking, it would be more accurate to talk in this section of normative (rather
than moral) theories, because we don’t always take morality to be decisive in determining
what we ought to do.
If believing, intending, and acting in accordance with what rationality requires
were the only way of being rational, then most of us would probably be guilty of
unadulterated irrationality most of the time. But that’s absurd. Most of us are a far cry
from being fully rational, but we achieve qualified rational successes all the time. Man is
a rational animal; there has got to be something that distinguishes us from the other
members of the animal kingdom, and the idea that it’s merely our susceptibility to (and
constant violation of!) rational norms is hard to swallow. As my account of commitment
emerges in the next few chapters, I hope it will become increasingly apparent that being
in accordance with what rationality requires is often just too much to ask of us. We are
human beings and not angels. Sometimes satisfying our commitments is success
enough.23
22 It might be objected that though the competing theories are not equally defensible,
believing each of them is nonetheless rationally permissible. This claim brings up
complicated issues about maximizing and satisficing that I cannot here treat with the
necessary care. I will say, however, that the view that a belief can be rationally
permissible when it is less supported than a competing belief is, at the very least, deeply
perplexing. Furthermore, the point I go on to make in the main text is completely
independent of these issues—for it is not at all reasonable to contend that it is rationally
permissible to believe any one of the radically divergent and inconsistent moral theories
believed by regular people. For more thoughts on this argument see chapter four.
23 In chapter five, I argue for a deep connection between rational and moral commitment.
A parallel question then emerges: is satisfying our moral commitments, and not satisfying
our moral requirements, sometimes success enough? Though I will not consider this
question at length in this dissertation, I think it is a fundamental one. I offer some
preliminary reflections on it in Appendix C.
CHAPTER FOUR: WIDE AND NARROW SCOPE
In the first two chapters I distinguished between the wide scope thesis and the wide scope
theory with respect to the phenomenon of enkrasia. The wide scope enkrasia thesis is the
claim that there is a true statement of the requirement of enkrasia that employs a
normative concept taking wide scope over the relevant conditional. I have endorsed this
thesis and provided my preferred formulation of the enkrasia requirement. But I have also
expressed skepticism about the wide scope theory of enkrasia: the theory that the true
wide scope enkrasia principle itself constitutes a satisfying, general view about the
domain. This chapter motivates such skepticism. It does so by presenting the version of
the symmetry problem that I have already introduced in earlier chapters, and
arguing that the introduction of the notion of rational commitment is the way to solve this
problem.
4.1 A review of the dialectic
One of the main contemporary issues in the theory of rationality concerns the nature of
so-called rational requirements. It is commonly assumed that such requirements specify
ways in which we may fall short of some important kind of rational ideal. So for
example, it is commonly assumed that we are rationally required to avoid having
manifestly contradictory beliefs, on the grounds that an agent who has manifestly
contradictory beliefs falls short of the ideal of the rational agent in an obvious way. But
though philosophers generally agree that rationality is the source of at least some norms
of this kind, they disagree about how these requirements are to be conceived.[1]
In its most well known incarnation, this disagreement has focused upon a
distinction between wide and narrow scope formulations of the relevant principles. Take
the putative norm of non-akrasia (henceforth ‘enkrasia’) as illustration. Assume that it is
necessarily irrational to both believe that you ought to x and lack the intention to x. There
are two distinct ways of capturing this fact:
[1] Though see Finlay (2009) for some skepticism about the existence of norms of
rationality.
RR:
If you believe that you ought to x, then you are rationally required to intend to x.
WSRR:
You are rationally required to be such that (if you believe that you ought to x, then you
intend to x).
In WSRR, the concept of rational requirement is said to take wide rather than narrow
scope, since it ranges over the whole conditional clause.[2]
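The scope contrast can be put schematically. The following is a sketch in an informal deontic-style notation of my own, not an official formalism from the text: R abbreviates "rationally required", B stands for belief, and I for intention.

```latex
% Schematic rendering of the two principles.
% R = 'rationally required', B = belief, I = intention (informal notation).
\begin{align*}
\text{RR:}   \quad & B(\text{you ought to } x) \rightarrow R\,I(x) \\
\text{WSRR:} \quad & R\big(B(\text{you ought to } x) \rightarrow I(x)\big)
\end{align*}
```

In RR the requirement operator attaches only to the consequent, so given the antecedent belief, R I(x) can be detached by modus ponens; in WSRR the operator governs the conditional as a whole, so no such detachment is licensed.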
The issue can seem a minor one; it is natural to suspect that resolving a technical
dispute about the logical form of a restricted set of conditionals is unlikely to entail
substantive, interesting conclusions about the nature of rationality itself. This suspicion is
nonetheless mistaken. The competing conceptions of the form of rational requirements
lead to very different pictures of what rationality is like.
Suppose that RR is true. Suppose further that I believe on the basis of woefully
insufficient evidence that I ought to spit on Las Meninas. It follows that rationality
requires me to intend to spit on Las Meninas. In other words, I cannot be (fully) rational
unless I intend to spit on that masterpiece of Velasquez’s.
Alternatively, suppose that WSRR is true. Again I believe that I ought to spit on
Las Meninas. But no longer is it implied that I am rationally required to intend to spit on
Las Meninas. I can satisfy the requirement in another way: by revising my belief. Since
‘rationally required’ takes scope over the whole conditional, the consequent cannot be
[2] For early defenses of wide scoping, see Hare (1971), Hill (1973), Greenspan (1975),
Darwall (1983), and Gensler (1985). For contemporary defenses, which have increasingly
focused on the formulation of rational requirements, see Korsgaard (1997), Hampton
(1998), Broome (1999, 2007, 2008, ms), Dancy (2000), Wallace (2001), Brunero (2010),
and Way (2010 and forthcoming). For the objections to wide scoping that I will be
concerned with later in the chapter, see Schroeder (2004, 2009), Kolodny (2005, 2007a,
2007b, 2008a, 2008b), and, for a later presentation, Finlay (2010). For another objection
to wide scope principles that I won’t discuss, see Setiya (2007). For an interesting
discussion of Hill and Kant on wide scoping, see Schroeder (2005).
“detached”, to invoke the prevalent jargon. Thus if WSRR is true, I can be (fully) rational
without intending to spit on Las Meninas.
Many authors take similar reflections to constitute sufficient reason to favor wide
scope formulations of rational requirements. I agree with them. Narrow scope
formulations result in a highly counterintuitive picture of the normative pressure
rationality exerts on us. Nobody in her right mind would counsel me to spit on Las
Meninas merely because I believe that I ought to. A rational person would tell me to give
up my ridiculous belief. (We might also expect her to furnish me with reasons for doing
so.) On the face of it, then, the narrow scope view forces us to accept a highly revisionary
theory of what it takes to be a rational agent.
This chapter does not aim to defend wide scope conceptions of rational
requirements; that much I take to have been accomplished. The goal of the chapter is to
show that such requirements are not sufficient for an adequate theory of rationality. In
particular, these requirements cannot account for asymmetries that seem fundamental to
our judgments of when agents are rational, asymmetries that have led some philosophers
to regard wide scope principles as indefensible. In order to account for these asymmetries
we do not need to abandon wide scope requirements. Instead, we need to amplify our
conceptual resources. Fortunately, there is an extremely natural way to do so.
In section 4.2 I briefly expand upon this guiding idea, the claim that the truth of
wide scope theses does not entail anything about the sufficiency of these theses as
exhaustive theories of the norms of rationality. In section 4.3 I introduce my own version
of what has come to be known as the symmetry problem for wide scope requirements, and
I argue that a proper formulation of the problem does indeed show that wide scope theses
are insufficient as theories. In section 4.4 I consider one proposed solution to the
symmetry problem, John Broome’s account of basing permissions, and I show that this
solution is not compelling. Then in section 4.5 I offer a diagnosis of why Broome’s
suggestion fails, and the beginnings of an alternative proposal. The main insight to be
gained from the diagnosis is that the dual notions of rational requirement and permission
cannot be used to capture the asymmetries we need to capture. My own proposal appeals
to the fact that, conveniently, we already have the notion that’s appropriate for this task:
it is the intuitive notion that we employ when we speak of being committed to having
certain beliefs and intentions. In section 4.6 I offer a more detailed account of rational
commitment, outlining several essential features of this normative notion. In section 4.7 I
explain how the introduction of this notion affords us a satisfying resolution of the
symmetry problem. I also consider two important objections to my solution and reject
them.
The view that emerges in this chapter is a conciliatory one. The requirements of
rationality must be captured by wide scope principles. But an equally important set of
normative facts, the facts about rational commitment, must be captured by narrow scope
principles. The charitable way to read wide scopers is as being primarily concerned with
wide scope theses. And the charitable way to read critics of wide scoping is as being
primarily concerned with the viability of wide scope theories.
4.2 Theses vs. Theories
Most of the discussions of wide scope views in the theory of rationality have
concentrated on whether particular wide scope principles are true. Call claims to the
effect that one or more of these principles are true wide scope theses. For example, the
claim that WSRR is true is a wide scope thesis. There are some extremely general, and in
my view compelling, arguments for such theses, which we have already considered at
some length.[3]
The simple point that I’m concerned to make in this section is that we need an
interpretation of the role of a particular wide scope thesis in our conception of rationality.
It could, after all, do one of many things. A simple view is that a wide scope thesis
constitutes a theory of the relevant domain. This would amount to the idea, for instance,
that WSRR captures everything about the rational import of akrasia that is worth
capturing.
[3] See especially Broome (1999, 2007, ms).
Writers on rational requirements often seem to presuppose the truth of this simple
view. Discussions of these matters typically don’t distinguish between the question of
whether a thesis is true and the question of whether its truth suffices to give us a
satisfying theory of a given domain. So once the argument has been given for or against a
particular wide scope principle (or wide scope principles generally), or for or against a
particular narrow scope principle (or narrow scope principles generally), the writer
appears to consider the discussion settled. The issue of interpreting the putatively true
principle, of providing an analysis of its place in an overall theory of rationality, is never
broached. This can be seen most prominently in the exchange between John Broome and
Niko Kolodny.[4] But it is also related to the tendency of many writers to assume that we
have a natural, well-understood conception of rational requirement on the table.
Treatments of the so-called normativity problem for rational requirements seem to rely on
something like the simple view insofar as they suppose that the interesting question to ask
about rationality is whether rational requirements are normative. If the simple view were
false, the non-normativity of rational requirements would not entail the non-normativity
of rationality.[5]
But this is clearly an oversight. Even if the simple view were correct, its
correctness would not be trivial. From the truth of a wide scope thesis nothing necessarily
follows about the truth of a wide scope theory. It might be that the thesis is only part of
the true theory of the relevant domain—e.g., that the wide scope thesis WSRR is true but
insufficient for a satisfying theory of enkrasia. This is the view I will be suggesting in
what follows: wide scope theses are true, but wide scope theories are not.[6]
I endorse wide scope theses because they alone preserve the proper intuitive
interpretation of the notion of rational requirement, an interpretation according to which
[4] Broome (2007); Kolodny (2007).
[5] For discussions of the normativity problem see Kolodny (2005), Southwood (2008),
Hussain (ms).
[6] For a similar line of thought see Way (forthcoming).
these requirements are (as I’ve been saying) decisive. In labeling rational requirements
decisive, I have intended to indicate that they are not the kind of things that can be
outweighed by competing rational considerations.[7] If I am rationally required to intend to
spit on Las Meninas, then the rational status of my intending to spit has been decided.
Being rationally required is not merely one rational consideration among many; it is the
all things considered verdict that the rational domain provides.
But if narrow scope theses were true, then the notion of rational requirement
would have an unnaturally non-decisive meaning. I could come to be rationally required
to have an intention to x merely because I stupidly form a belief that I ought to x.
Requirements are not this cheaply acquired, and they are not this myopic. I need to do
more than form a ridiculous belief for rationality to mandate that I intend to spit on Las
Meninas. And rationality is not plausibly like a horse with blinders on; if there are
rational norms, then they presumably take all the relevant features of an agent’s situation
into account. For instance, they do not ignore the fact that my belief that I ought to spit on
Las Meninas is wildly irrational.
Nonetheless, wide scope theses do not tell us the whole story about the topics they
concern. In particular, they cannot capture the concept of being committed to believing or
intending something. I’ll contend that this commonsense notion of commitment is crucial
for the theory of rationality. First, there are strong independent grounds for
countenancing the notion of rational commitment: it is already a part of our common
sense stock of normative concepts, and we constantly invoke it when engaging in rational
evaluation. Second, appealing to rational commitment is a satisfying way of filling a gap
that cannot be filled by wide scope theories. We turn now to a discussion of this gap.
4.3 The Symmetry Problem
[7] Thus decisive notions like rational requirement are to be contrasted with pro tanto (or
contributory) notions such as the concept of a reason. The latter enter into weighing
relations, whereas the decisive notions are the ones we use to capture the result of such
processes of weighing. For similar thoughts see Dancy (2004) and Broome (1999, ms).
Wide scope principles like WSRR are intuitively symmetrical in that they draw no
normative distinctions between the different ways we might satisfy them. If you find
yourself in the state that a given wide scope principle prohibits—for example, the state of
believing that you ought to x and lacking the intention to x—then there are multiple ways
to get out of that state and into compliance with the norm. In the enkrasia case, you can
come into compliance by forming the intention to x, or by revising your belief that you
ought to x. The point is that the principles themselves do not enjoin us to adopt a
particular one of these methods. In fact, the principles are completely silent on the
comparative rationality of these methods; for all that is said they are on a par.
Narrow scope principles like RR are different. If RR
were true, then the person
who finds himself in the akratic state prohibited by the principle would only have one
rational option: forming the intention that corresponds with his ought-belief. So there
would be no symmetrical prediction, as only one process of reasoning could bring the
akratic agent out of his irrational state.[8]
This general distinction, between the symmetrical consequences of wide scope
accounts of rational requirements and the asymmetrical consequences of their narrow
scope counterparts, has fueled worries about wide scope views. Mark Schroeder and Niko
Kolodny, in particular, have objected to wide scope views because of the symmetry that
they predict.[9]
In my view, there are two reasonable interpretations of their arguments. At
times, the objection appears to involve the claim that wide scope theses predict
[8] Although this might not be a consequence of the narrow scope view if it were admitted
that requirements could conflict. In that case the prediction might be symmetrical in the
sense that more than one process of reasoning could bring the agent into conformity with
a rational requirement (different ones in the case of each alternative process). There are
problems with this view. For one thing, it has difficulty capturing the decisiveness of
requirements, since in cases of conflict they are apparently weighed against one another. I
cannot here consider this version of the view at any length.
[9] Schroeder (2004); Kolodny (2005). See also Finlay (2010) and Way (forthcoming).
symmetries.[10] At other times, the objection seems merely to be pointing out that wide
scope theses fail to predict a certain type of asymmetry.[11] The first version of the
symmetry argument is unconvincing. The second version is a successful objection to
wide scope theories.
In this section I’ll defend these claims and give my own slightly revised version
of this objection.
The first version of the argument goes something like this. The symmetry that the
wide scope enkrasia principle postulates is illegitimate. For one way of satisfying it is
rational, but the other is not. Intending to x on the basis of your belief that you ought to x
is rational; but revising your belief that you ought to x on the basis of your lack of an
intention to x is not rational. Since WSRR predicts symmetry where there is none, it
cannot be true.
This argument is not convincing. First, it’s not true that if you believe that you
ought to x, and come to revise this belief, then you must have done so on the basis of
lacking the intention to x. You might simply have reevaluated the evidence and seen that
you were mistaken in having the belief in the first place. Surely this sort of conduct is
rational; sometimes it will be maximally rational. So the fact that you can satisfy the wide
scope principle by revising your belief does not in itself constitute a worrisome type of
symmetry, since some such revisions are permissible.
Second, the theory of rationality in which particular wide scope principles are
embedded may still vindicate the relevant asymmetry when it’s present. Imagine that you
do indeed revise your belief, not in the light of appropriate evidential considerations, but
merely because you lack the corresponding intention. Then, though you will come to
satisfy the enkrasia principle, you might violate some other principle—for instance, a
principle that enjoins you to only believe that p on the basis of genuine evidence that p.
So you will not escape the charge of irrationality.
[10] See e.g. Schroeder (2004: 3): “It is a symmetry predicted by the Wide Scope Account
that is simply not sustained.”
[11] Schroeder (2004: 10, 13) could be interpreted this way.
The more plausible version of the symmetry argument is this. There is an
asymmetry with respect to the two possible ways of satisfying WSRR. The principle
cannot capture this asymmetry. This does not render it false; it just indicates that it can’t
explain everything about the topic worth explaining. Hence the thesis WSRR is not
sufficient for a theory of enkrasia.
The best way to characterize this asymmetry is as follows. If you believe that you
ought to x, and you do not intend to x, then you are irrational. When you find yourself in
this irrational state, you may proceed by doing one of a few things. You might form the
intention to x; you might give up the belief that you ought to x; or you might remain in
your irrational state. Now there is an important structural difference between these three
ways of proceeding. There is always something rational about forming the intention to x,
when you find yourself in this situation, assuming that you do so in response to the
distinctive nature of the situation—that is, in response to the fact that you believe that you
ought to x. By contrast, there is not always something rational about giving up the belief,
or remaining in the state you are in.[12]
Here’s another way to make the point. There is a natural intuition about the
asymmetry of enkrasia that we’d like to characterize. Schroeder and Kolodny say various
things about the precise nature of this asymmetry. We can capture a central strand in their
worries by appealing to the distinction between the formation of the enkratic intention,
which is always rational in an important way, and the revision of the normative belief,
which is not.
For example, recall the parallel cases I introduced near the beginning of chapter
three. Let us describe them in a more detailed way. Bob and Bill each believe that he ought
to ask Jenny on a date. Imagine that Bob believes on woefully insufficient evidence: in
several attempts at casual conversation, Jenny has greeted him with nothing but scorn;
moreover, Bob’s fragile psyche should not, at present, be forced to endure the humiliation
[12] We could resuscitate a related version of the first interpretation of the argument. It is
never rational to revise your belief on the basis of your lack of a corresponding intention.
But it is always rational, in some sense, to intend on the basis of your belief. The question
is what ‘sense’ we are appealing to here. This will be the topic of the next three sections.
of rejection. Bill, on the other hand, has good reason to believe that Jenny would like to
date him, and that the two of them will get along; in any case, he is hardened enough to
brush off spurned attempts of this sort. Observe that both Bob and Bill seem rational, in
one and the same important way, insofar as they form the intention to ask Jenny out.
(This is compatible with there being another way in which Bill is rational and Bob is not.)
Likewise, they seem irrational, in one and the same important way, if they refrain from
forming this intention. (Again, this is compatible with there being a separate sense in
which Bob is irrational and Bill is not.) On the other hand, if they were both to revise
their normative beliefs, only Bob would be doing something at all rational.
These observations lead us to a very general thought that might help to illuminate
some of the interest of the symmetry objection. If my earlier arguments were at all
convincing, then it is plausible to think that the notion of rational requirement is to be
distinguished by its decisive place in the rational domain. But symmetry considerations
give us reason to suspect that decisive concepts might not be especially well suited to
capture the whole story about rationality—or, to be more specific, the whole story about
phenomena like enkrasia. This section has tried to motivate the idea that our intuitive
understanding of the rationality of enkrasia appears to involve some kind of asymmetry.
Suppose for the sake of argument that there is something more “enkratic” about intending
to do something on the basis of your belief that you ought to than revising your belief (for
whatever reason). The thing to notice is that, if what rational requirements are supposed
to capture is rational decisiveness, then they will capture no relevant asymmetry.
Sometimes it is more rational to give up your normative belief, and sometimes it is more
rational to form a corresponding intention—this just depends on the rational justification
you have for believing, or in other words on whether your situation is more like Bob’s or
Bill’s. So as far as decisiveness goes, there is no general, interesting distinction to be
drawn between the two potential ways of getting into conformity with Enkrasia_W. That
means that if there is indeed some kind of asymmetry, we should expect to need
something besides rational requirements to capture it. And even if we conclude that there
isn’t any relevant asymmetry, this line of thought gives us an explanation of some
potential motivations for narrow scope views.
Let me make one final, related point. Apart from the fact that it is quite intuitive
to think that forming enkratic intentions always involves something rational, the
symmetry argument can be seen as drawing attention to another feature of our everyday
discourse about rationality: the fact, stressed in the Introduction and chapter one, that
there are certain forms of reasoning that are distinctive and of special interest to human
beings. This is perhaps most obvious in the case of enkrasia. We habitually counsel
others to “let conscience be your guide”, to “follow your heart”, to “do what feels right”,
etc. and we reserve a special type of condemnation (or empathy) for those who fail by
these lights.[13] It is not unreasonable to think that these practices indicate that we regard
the distinctively enkratic move—the move from all things considered normative belief to
corresponding intention—as itself constituting an important part of proper functioning.
We’ll now explore these general ideas in more detail.
4.4 A Response and its Inadequacy
John Broome is the most articulate proponent of wide scope conceptions of rational
requirements, and in recent work he has acknowledged the general point I’ve been
stressing—that the symmetry problem shows wide scope theses to be insufficient for a
theory of rationality. Broome suggests that we fill this gap in the theory of rationality
with the notion of basing permissions.[14]
Basing permissions are intended to play an important role in the theory of
reasoning; they capture the ways in which we may permissibly hold attitudes on the basis
[13] I think something similar applies in the typical cases discussed by authors interested in
rational requirements—means end coherence, intention consistency, and belief
consistency and closure. All of these are distinctive and important ways in which we
expect one another to regulate our attitudes structurally, even when we fail to regulate
them in other appropriate, substantive ways. But the case of enkrasia is perhaps special in
virtue of its practical importance.
[14] For this account see Broome (ms: 194).
of other attitudes (where ‘on the basis’ is a causal-explanatory relation). According to
Broome, basing permissions have the following general form:
Rationality permits S that
S has attitude A at t_a and
S has attitude B at t_b and
S has attitude C at t_c and …
S has attitude K at t_k and
S’s attitude K at t_k is based on S’s attitude A at t_a and B at t_b and …
Applying this schema to the case of enkrasia, the suggestion would be that
Rationality permits S that
S believes at t_1 that she ought to x and
S intends at t_2 to x and
S’s intention to x at t_2 is based on S’s belief at t_1 that she ought to x
This basing permission is then supposed to entail the relevant asymmetry that is not
entailed by WSRR.
Recall that the main piece of data that needs to be explained is the fact that there
is always something rational about responding to a belief that you ought to x by forming
the intention to x, whereas there is not always something rational about responding to
lacking the intention to x by revising the belief that you ought to x.[15] On the face of it the
enkratic basing permission appears apt for this task. It says that rationality permits a
[15] Of course the latter claim could be strengthened; plausibly it is never rational to
respond in such a way. But this is unimportant in the present context. Note that if this
stronger claim is true, then it is likely because there is a requirement of rationality
forbidding revising beliefs (or some types of beliefs) on the basis of lacking intentions.
This requirement, however, does not explain the piece of data that I’ve called the kernel
of the symmetry argument. So appealing to such a requirement cannot alone constitute a
satisfying response to the symmetry problem.
certain kind of response, namely the basing that obtains in cases of enkratic reasoning.
And of course this is precisely the response about which we have concluded that there
must be something rational.
The introduction of basing permissions has the virtue of looking for a response to
the symmetry problem in the right place. Nonetheless, the proposal cannot succeed.
Ironically, it is undermined by considerations analogous to the ones that Broome has
eloquently appealed to in rejecting narrow scope formulations of rational requirements.
In presenting this picture of basing permissions, Broome is careful to point out
that his formula “…does not imply that [S] may permissibly have any of the attitudes A,
B, C…or K individually…because permission does not necessarily distribute over a
conjunction” (ms: 194). His point is that, for example, being rationally permitted to have
attitudes [A and B] does not entail that one is rationally permitted to have attitude A. This
is crucial, since without this provision the proposal would be susceptible to the same kind
of bootstrapping worry that plagues narrow scope accounts. In other words, if we could
derive a rational permission to have A from a rational permission to have [A and B], then
the basing schema would imply that in every case in which my intention to x is based on
my belief that I ought to x, both the intention and the belief are rationally permissible.
And this would amount to granting that the basing relation between my two attitudes is
itself sufficient for making those attitudes rationally permissible. This is not a good
result; the fact that my intention to spit on Las Meninas is based on my belief that I ought
to is not enough to make the intention and belief okay by rationality’s lights.
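Broome's non-distribution point can be stated compactly. The following sketch uses P for "rationality permits"; the notation is mine, introduced only to display the logical point, not Broome's own symbolism.

```latex
% P = 'rationality permits'. Permission over a conjunction of attitudes
% does not distribute to the individual conjuncts:
P(A \wedge B) \nrightarrow \big( P(A) \wedge P(B) \big)
```

Were the entailment to hold, the basing schema would bootstrap: every intention based on an ought-belief would inherit rational permissibility from the mere existence of the basing relation, which is precisely the result Broome's provision is meant to block.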
The problem is that a similarly troubling result is entailed by Broome’s account of
basing permissions. These permissions aim to capture the normative status of the basing
relations that they treat, and they do so by appealing to the concept of rational
permission. But the concept of rational permission is not the right one to capture the data.
It is not always rationally permissible to base one’s attitudes in the ways implied by
Broome’s schema.
While a basing permission does not permit the individual attitudes (A…K)
implicated in the schema, it does permit the basing relation that holds between them.[16] It
is a natural consequence of this view that, for example, it is rationally permissible for me
to base my intention to spit on Las Meninas on my belief that I ought to spit on Las
Meninas. So though Broome avoids saying that my intention is rationally permissible, he
is committed to saying that the basis of my intention is rationally permissible. This is just
as problematic. It is not rationally permissible for me to base my intention to spit on Las
Meninas on my belief that I ought to spit on Las Meninas, because that belief is itself
rationally impermissible. Allowing that such basing is rationally permissible amounts to a
form of bootstrapping analogous to the one Broome has consistently worried about. The
mere fact that I have a belief with the proper formal properties—that is, a belief with the
content ‘I ought to x’—is not intuitively sufficient for making my basing an intention on
that belief rationally permissible. The permissibility of basing A on B must depend
essentially upon the permissibility of B.
Here is another way to put the idea. Imagine that I do base my intention to spit on
my belief that I ought to. According to Broome, this basing is rationally permissible. But
plausibly, I am rationally required to refrain from intending—this is the sort of intuition
that has motivated wide scope views from the beginning.[17] Moreover, it is highly likely
that I’m also rationally required to revise my belief; by hypothesis, I lack sufficient
evidence for it. But then I am rationally permitted to have an attitude that I’m rationally
required to revise on the basis of another attitude that I’m rationally required to revise.
[16] See Broome (ms: 191), though he states the view for prohibitions rather than
permissions: “A basing prohibition does not prohibit the attitudes themselves. Rationality
may permit you to believe q, and also believe p and believe that if q then p. It prohibits
you only from having the first of these beliefs on the basis of the second and third.” Way
(forthcoming) briefly considers a similar proposal.
[17] Admittedly, a proponent of wide scope theses could say that it is only the intuition that
I am not required to intend to spit that has been his motivation for rejecting narrow scope
alternatives. But I think it is extremely natural, and in accord with our common sense
practices of rational evaluation, to assume that in such situations I may be required to
refrain from intending to spit.
This is, I submit, an untenable result. Hence basing permissions cannot be the right way
to capture our asymmetry.
4.5 The Lesson and an Intuitive Proposal
It will be useful to provide a brief diagnosis of the problem we encountered in the last
section. This diagnosis will put us in a position to see why the solution I propose is a step
forward.
The problem with basing permissions is that they appeal to the wrong normative
concept. Our goal is to explain the sense in which there is always something rational
about certain forms of reasoning. For example, there is a sense in which it is always
rational to intend to x on the basis of your belief that you ought to x. But the claim that it
is always rationally permissible to engage in these forms of reasoning is too strong. For
example, the claim that it is rationally permissible for me to intend to spit on Las
Meninas on the basis of my belief that I ought to is excessive to the point of
implausibility. There is something rational about my intending to spit on the basis of my
believing that I ought to; but whatever this something rational amounts to it cannot
amount to rational permission.
Here is an analogy. Suppose that the fact that q is a reason for me to perform
action x. Given some assumptions about q’s non-triviality that we can ignore, this entails
that if I x for the reason that q, then there is something rational about my action.[18]
However, this doesn’t mean that I am permitted to x for the reason that q. It might be that
the reasons to avoid x-ing are compelling, in which case I am required to refrain from x-
ing for the reason that q. In general, then, the fact that there is something rational about a
particular form of conduct (or reasoning) clearly does not entail that this form of conduct
(or reasoning) is rationally permissible.
18. For suppose that I x, but my reasons for x-ing do not include the fact that q. Assume
also that I am not motivated by any facts which constitute (good) reasons for x-ing. Then
my action of x-ing is clearly more irrational than it would be if I x-ed for the reason that
q. So there must be something rational about x-ing for the reason that q in virtue of which
this is true.
Once we see that the concept of rational permission cannot do the necessary work,
a natural thought to have is that we should invoke the concept of a reason—we should
say, in other words, that the asymmetry is explained by the fact that believing that you
ought to x gives you a reason to intend to x. But this thought should be resisted. As
Broome argues convincingly in ‘Normative Requirements’, and as I argued in chapter
one, bootstrapping worries plague the view that, e.g., believing that you ought to x gives
you a reason to intend to x (and, more generally, the view that what’s correct about these
distinctive processes of reasoning is that there is always a reason to engage in them).[19] If I
don’t have any reason to intend to spit on Las Meninas, then it’s counterintuitive to think
that my believing that I ought to spit generates a reason to have this intention. Similar
worries likewise doom the view that you have a reason to base your intention to x on
your belief that you ought to x. If I don’t have any reason to intend to spit, then it’s
counterintuitive to think that I can have a reason to base this intention on my belief that I
ought to spit.
So we seem to have shown that the dual concepts of rational requirement and
rational permission cannot be the proper ones to utilize in our solution to the version of
the symmetry problem I’ve presented. Likewise, we have good reason to suspect that the
concept of a reason will not do either.
I now turn to introducing my own view about how we should respond to the symmetry
problem. We need a distinct kind of concept to fill the gap in the theory of rationality that
is left over once we’ve postulated wide scope rational requirements. Fortunately we are
already in the practice of employing the concept that fits the bill. It is the concept of
rational commitment.
In what remains of this section I’ll argue that rational commitment is an
intuitively legitimate category of normative concept. In the next section I’ll provide an
account of some of the central features of this concept. In section 4.7 I’ll briefly outline
why this account of rational commitment gives us a satisfying resolution of the symmetry
problem.

19. Broome (1999).
Let’s begin by noting that there are strong intuitive grounds for thinking that we
use the term ‘commitment’ in English to pick out an interesting normative concept that is
worthy of study and especially relevant to the theory of rationality. For our purposes, we
will be concerned with only rational commitments; we will postpone until the next
chapter the discussion of what I will call moral commitments—for example, the
commitment that you have in virtue of having promised to do something—and
psychological commitments—for example, the commitment that’s constituted by your
dedication to some person, object, or goal. But it is worth pointing out from the beginning
that we do use the term to describe a wide range of apparently normative phenomena.
Rational commitments are commitments that you have in virtue of possessing
certain types of attitudes. We talk about rational commitments all the time, most
commonly in the case of closing your belief under the relation of entailment. We say
things like:
‘Your argument commits you to believing that z.’
‘That’s the best thing for Hume to say, given his commitment to the theory of
ideas.’
‘Plato is committed to a wacky ontology.’ Etc.
Moreover, note that what we are talking about in these cases seems importantly
different from what the notion of ‘rational requirement’ should be picking out. For
instance, it’s not always the case that a philosopher is rationally required to believe the
consequences of his theory—in some cases, he may be woefully irrational in accepting
the theory in the first place. But we would nonetheless grant that, since he holds the
theory, he is thereby committed to believing its consequences.
Before I offer further reflections about the nature of rational commitment, a
general point is in order. As my examples bring out, the English word ‘commitment’,
when it is employed in an attempt to discuss rational commitments (as opposed to how
it’s employed in statements like ‘I committed to attending Lisa’s party’) is
paradigmatically used to describe cases of belief closure. Less commonly, we invoke
rational commitments to intentions, e.g. when we say that your intention commits you to
action, or that a belief that you ought to do something commits you to (intentionally)
doing it. I should stress that though this language is less widespread, it is by no means
confused. Rational commitment is an intuitively broad category, and the way we speak
about it bears out this point.
Setting aside linguistic evidence, there are some very general reasons for treating
the different cases in parallel fashion. Chief among them is the fact that the nature of the
rational connection between the implicated mental states—that is, the connection
between the states that ground the commitments, and the states that we are committed to
having in virtue of those grounds—is analogously strong in each case: intuitively, you
have got to be irrational in some major way if you fail to have an attitude that you are
committed to having. So it is reasonable to hypothesize that we can acquire rational
commitments in many ways—by having ‘all things considered’ normative beliefs about
what we ought to do, by forming sets of intentions, by having beliefs about the necessary
means to what we intend, and perhaps also as a result of having certain partial attitudes
(if there are such things).[20] In the next section I sketch an account of rational commitment
that allows for a unified treatment of these related cases. [*Readers of the entire
dissertation should skip section 4.6.]
4.6 Rational Commitment
I now turn to a more detailed examination of the nature of rational commitment. The
point of this section is to give us a better handle on the concept by articulating its
essential features: commitments are normative rather than psychological, escapable,
agent-dependent, pro tanto, and strict. In what follows I discuss these features
individually, and I show that they suffice to distinguish rational commitments from
rational requirements. (A version of this section, amplified to include explicit
comparisons to moral commitments, can be found in the next chapter.)

20. By using the vague terminology of ‘partial attitudes’ here I mean only to remain
neutral on the complex questions of whether there are partial beliefs, whether there are
partial intentions, and how these potential states are related. The point is just that if there
are such partial attitudes then they will plausibly ground rational commitments in ways
analogous to the ways in which full attitudes ground rational commitments. To mention
one sort of example, a partial belief that Spain will win the World Cup, accompanied by
certain other attitudes (like the belief that I ought to bet on Spain if it is likely to win me
money) may rationally commit me to intending to take bet B, given the odds of B.
Commitments are normative rather than psychological
Rational commitment cannot be analyzed in purely psychological terms. Adam might
believe that everything the Bible says is true, and he might believe that the Bible says that
the world was created in six days. But it clearly doesn’t follow that he believes that the
world was created in six days. What follows is that he is committed to having this belief.
So the commitment itself is not a mental state. Rational commitments are relations
between mental states (the ground of the commitment) and other mental states (the object
of the commitment). These relations obtain whether or not the agent in question actually
satisfies them, and they have the flavor of normative relations.
Commitments are escapable
Rational commitments are escapable. They can be escaped by relinquishing the
attitude(s) that constitutes the ground of the commitment. For example, Adam can escape
his commitment to believing that the world was created in six days by revising his belief
that everything the Bible says is true. So being committed to having A at t1 is compatible
with your being perfectly rational in failing to have A at t2, in so far as you’ve escaped
the commitment. If commitments could not be escaped, this would not be so.
Notice that this feature of commitments seems to distinguish them from rational
requirements; intuitively, requirements are not escapable. If you are rationally required to
be such that [if you believe that you ought to φ, then you intend to φ], then this is so
whether or not you have a belief about what you ought to do. In other words, you fall
under the scope of the requirement no matter what you are like (at least in so far as you
are a rational agent).[21] Likewise, if you are rationally required to refrain from believing
that you ought to spit on Las Meninas at t1, and no evidential facts change by t2, then you
are still rationally required to refrain from having this belief. A reasonable explanation of
this fact is that the rational requirement to believe in response to your evidence is
inescapable.
Commitments are agent-dependent
By claiming that commitments are agent-dependent I mean primarily that, in order to
come into existence, a commitment must be grounded in an activity of the agent who is to
become committed. (The sense of activity I am after is somewhat opaque, but none the
worse for that; importantly, it is not meant to imply anything ambitious about free
agency.) The idea is that since rational commitments depend, in a broad sense, on the
activities of the agent who comes to stand in the commitment relation, it makes sense that
different agents are committed to different things. Another person—or, more generally,
the world itself—does not have the power to commit you. It is fundamentally your mental
states that constitute the ground of your rational commitments. This agent-dependence is
what explains why agents’ commitments vary widely.
Notice that the agent-dependence of commitments also serves to distinguish them
from requirements, at least on one popular conception of requirements. It is often
supposed that rational requirements exist independently of us—we are not responsible for
bringing them into being in any robust sense. For example, it is not up to us that the
evidence requires us to believe hypothesis H, or that the reasons require us to intend to
x.[22] And any two individuals in the same epistemic/normative situation will be required to
believe H, and required to intend to x. But it isn’t true that any two individuals in the
same epistemic/normative situation will have the same rational commitments. You could
easily be in the same epistemic/normative situation as I am—standing in the Prado,
evaluating the same sets of considerations—and not believe that you ought to spit on Las
Meninas. In that case you would lack a commitment that I have, the commitment to
intending to spit.

21. As it is commonly conceived, the non-akrasia norm requires you at all times to avoid
akrasia. In other words, you are governed by this requirement whether or not you have
any beliefs about what you ought to do. Thus there is a clear sense in which the norm is
inescapable: you are governed by it no matter what.

22. Humeans about normative reasons might object to the latter claim, since they view
normative reasons as grounded in the desires of individual agents. I find this view
implausible but cannot argue against it here.
Commitments are pro tanto
Sometimes you have reasons to do something that you should refrain from doing. In the
terminology of Jonathan Dancy, reasons are contributory—a reason might be an
ingredient in making it the case that you ought to do something, but it needn’t play that
role.[23] As John Broome puts it, there are reasons that are inconclusive but that still play a
role in normative weighing explanations.[24] Dancy and Broome are both getting at the
simple idea that reasons are pro tanto. They are to be contrasted with “all things
considered” concepts (or “decisive” concepts, in my terminology from section one) like
‘ought’ and ‘requirement’ that do not merely weigh in favor of something, but rather
indicate what emerges after the weighing has been accomplished.
Notice that this distinction between the pro tanto and the all things considered
may be more general than the distinction between reasons and oughts/requirements.
Plausibly some obligations can weigh against each other, and are thus like reasons in that
they merely contribute to, rather than constitute, what you ought to do. Rational
commitments are the same way. Being committed does not entail that you are rationally
required to satisfy the commitment.
To see that rational commitments are pro tanto, simply imagine that you have
intentions I1 and I2 that commit you to intending necessary means M1 and M2 that you
believe to be inconsistent with each other. Suppose that acting on I1 is more important
than acting on I2. Then you are committed to intending M2, but you would be most
rational if you refrained from intending M2 and intended M1 instead. So being committed
to having attitude A can count in favor of having A, but it cannot do so decisively.

23. Dancy (2004). See also Ross (2002) on prima facie duties.

24. Broome (ms). The actual account is a bit more complicated, but not in any way that’s
essential for making the simple point I’m making in the text.
More formally, we can put the point like this. To be pro tanto is to play a non-
decisive weighing role in the determinations of some normative domain D. To be all
things considered is to encapsulate the final verdict of D, the verdict that emerges after
the weighing has been done.
Rational commitments are pro tanto because they are not like these final verdicts.
In other words, rational commitments are not rational requirements. The intuitive notion
of requirement is just one kind of decisive normative verdict. The claim that a
philosopher is required to discard his theory instead of accepting its implausible
implications should be fleshed out as the claim that rationality (‘all in’) says that he must
discard the theory. And the point is that commitments aren’t this strong. It may often
happen that you are required to escape, rather than satisfy, a commitment. Thus
commitments play a non-decisive, or pro tanto, weighing role.
Commitments are strict
Commitments are pro tanto, but they are not to be understood on the model of reasons.
There are tons of reasons that you can take or leave. I have a reason to intend to bring
flowers to the secretary of the geology department—namely, the fact that it would
brighten her day. I have a reason to intend to eat dim sum tomorrow morning—namely,
the fact that I find it delicious. Yet it is perfectly okay for me to fail to have these
intentions and countless others. But it is not perfectly okay for me to fail to satisfy my
commitments.
In the next chapter I argue at greater length that commitments have a
characteristic force that distinguishes them from our paradigmatic example of the pro
tanto. For present purposes it will suffice to make the following point. It is possible for a
normative relation to be more forceful than the relation of having a reason, but to
nonetheless not be decisive. In my terminology, rational commitments are pro tanto but
strict rather than slack.[25]
To begin, let us consider what it is about rational commitment that motivates the
introduction of the distinction between the strict and the slack. Here is a simple
categorization of the idea. If you are committed to believing p, and you fail to believe p,
then there is something bad about that. But as we’ve seen, it can’t be that this badness is
decisive. In other words, it may be that you were rationally required to refrain from
believing p. So we need to look elsewhere for a way to characterize the badness of
violating these commitments. The problem is that it is not plausible to think that the
badness of violating a commitment is just the badness of believing or intending against a
reason. That would severely underestimate the seriousness of commitments and distort
their normative character.
Imagine that the moral theory I accept commits me to intending to give money to
Doctors Without Borders. Whether or not this moral theory is ultimately correct, my
friends may legitimately take me to task if I do not have the intention to give money to
this organization. This would not necessarily be appropriate if I merely had a reason to
give away the money in this manner. After all, I presumably have a reason to
intentionally do many incompatible things with the money—give it to Oxfam or Human
Rights Watch, spend it on my friends and family, bet it on the Lakers making a
championship run, etc. But if I refrained from forming the intention to give the money to
Oxfam, my friends could not reasonably take me to task. There is something distinctive
about my rational commitment; it is not just a reason to intend to give to Doctors Without
Borders.
The full elaboration of this point is beyond the scope of the present chapter. For
now the reader should just bear in mind that ‘strictness’ is a placeholder for the property
of commitments that renders them stronger than reasons—or, to put it more neutrally, as
bearers of a distinctive and interesting kind of (pro tanto) normative force. More will be
said on this score in chapter five.

25. The distinction between strict and slack normative relations is not a distinction with
much explicit precedent. The notable exceptions are Bratman (1987), Broome (1999),
and Schroeder (2009).
This concludes my sketch of the essential features of the relation of rational commitment.
It remains to be shown how the introduction of this notion solves the symmetry problem
as I conceive of it.
4.7 The Solution and Two Objections
Recall that the kernel of the symmetry problem was the inability of wide scope
requirements, and later on basing permissions, to capture the sense in which it is always
to some degree rational to (e.g.) intend to x on the basis of your belief that you ought to x.
This fact was what stood in need of explanation.
But we are now in a position to observe that the account of commitment I’ve
sketched gives us a plausible explanation of precisely this sort. There is always
something rational about forming the intention to x in such a way because doing so
constitutes the satisfaction of a commitment. Given that being rationally committed to
having attitude A means standing in some kind of normative relation to A, it should be no
surprise that there is something rational about forming A on the basis of the attitude(s)
that grounds your commitment. In other words, it is the fact that attitude B commits you
to having attitude A that explains why, invariably, having A on the basis of B involves
some degree of rationality, or some admittedly limited form of rational success.
Since commitments cannot plausibly be considered decisive, we also avoid the
result that doomed narrow scope conceptions of rational requirements. The thought was
that it was simply crazy to contend that, for example, I could be rationally required to
intend to spit on Las Meninas merely in virtue of my believing that I ought to. That belief
was, by stipulation, ridiculous. And so it was similarly ridiculous to suppose that the
intention could be decisively rationally justified by the belief.
Our solution implies nothing of the sort. We grant that, all things considered, I
would be crazy to intend to spit on Las Meninas. What we do claim is that I am
committed to having this intention, and that there is something rational about my forming
it on the basis of my belief.
Some readers might doubt even this weaker claim; perhaps they think that there is
absolutely nothing rational about such conduct. I have two responses to this worry.
First, it’s important to appreciate that the worry is not especially troubling
dialectically speaking. Most of the contributors to these debates agree that there is
something reasonable about the symmetry objection.[26] Many of them would, I suspect,
accept my categorization of the asymmetry.
Second, and more importantly, we should note that affirming the objector’s point
would leave us with a deeply impoverished theory. Specifically, it would leave us
without the resources needed to vindicate common sense comparative judgments of
agents’ rationality. I already made a version of this point at the end of chapter three, but I
will revisit it here.
Imagine that John believes that moral theory M is true. Imagine too that John is
rationally required to revise this belief, on the grounds that he has insufficient evidence
for it. Nonetheless, John persists in believing M, and this belief shapes a significant
portion of his intentional life. For example, John frequently engages in enkratic
reasoning, or in other words reasons from the belief that he ought to x to an intention to x,
and often the content of such ought-beliefs is either wholly or partially determined by M.
Now suppose that there is nothing rational about intending to x on the basis of believing
that you ought to x. Then it follows that there is nothing rational about John’s habitually
intending in accordance with M. But intuitively John is far more rational than Jim, who
also accepts M, but whose beliefs about what he ought to do, often influenced or
determined by M, never lead him to form the corresponding enkratic intentions. The view
under consideration leaves us no natural way of vindicating this comparative judgment.
And that is a huge problem. For in our everyday lives, we constantly make judgments
akin to the judgment that John is more rational than Jim, and we take such judgments to
be obviously true.

26. Hence the responses that attempt to address the objection and still vindicate wide scope
theses and/or theories, e.g. Broome (ms) and Way (forthcoming).
The problem is magnified when we observe that the majority of us are in John’s
exact situation. We irrationally accept certain moral (or, more generally, normative)
theories, and allow these theories to structure our intentional lives. If there was nothing
rational about enkratic reasoning as such, then there would be nothing rational about us,
at least when we form intentions on the basis of our inadequately justified moral beliefs.
But this is a highly counterintuitive conclusion. The natural thing to say is that, though
we fall short of ideal rationality in virtue of accepting a moral theory on the basis of
insufficient evidence, we still exhibit a rational virtue, or exemplify a distinctive kind of
rational success, when we enkratically intend on the basis of our moral beliefs.[27]
It might be responded that, though the various moral theories that people accept
are inconsistent with one another (and, in some cases, internally inconsistent), they are all
nonetheless sufficiently evidentially supported to make belief in them rationally
permissible. Or, to make the claim a bit more attractive, perhaps it is that there is some
privileged set of moral theories—excluding, say, those that are internally inconsistent,
and those that no reasonable person could accept—for which it is rationally permissible
to believe any member of the set. This view is not persuasive. Setting aside the
substantial, and to my mind insurmountable, issues about how we would specify this
privileged set, it is independently implausible to contend that it could be rationally
permissible to believe any one of several moral theories which make flatly inconsistent
claims. Indeed, one main goal of moral philosophy is demonstrating why it would be
irrational to accept certain moral theories—namely, the theories that compete with the
one we accept. So for example, opponents of Kantian ethics who object to the claim that
we should always keep our promises presumably believe that it is irrational to believe
such a claim. Their arguments purport to show, in other words, that by rationally
reflecting we can come to know that there are instances in which we ought to break a
promise. Hence these arguments hope to establish the rational impermissibility of
believing the Kantian view about promises.

27. It might also be thought that a theory that hopes to vindicate common sense
assessments of comparative rationality needs to incorporate the thought that the degree to
which a belief about what you ought to do is irrational influences the degree to which
enkratic reasoning counts as rational success. I remain agnostic about the point here, but
mention it as further illustration of how an adequate theory of rationality will have to be
much more complicated than a theory consisting of only wide scope rational
requirements; in some form or another, the theory will have to explain our intuitions
about degrees of rationality, and I doubt this should be done by appealing to such
requirements.
We proceed now to the second objection. One might worry that my solution
doesn’t explain the fact that it is always irrational to reason in certain ways—e.g. (let’s
call it ‘rationalization’) to revise your belief that you ought to x on the basis of your lack
of an intention to x.[28] After all, what I’ve been claiming is that it is the notion of rational
commitment that explains why, when having attitude B commits you to having attitude
A, it is rational to some degree to have A on the basis of B. But I’ve done nothing to
explain why rationalization is categorically impermissible by rationality’s lights.
Moreover, the introduction of commitments cannot explain this fact. For suppose we
grant, for the sake of argument, that lacking an intention to x does not commit you to
lacking the belief that you ought to x. Even still, we cannot derive the presence of a
prohibition (or a requirement to refrain) from the absence of a commitment. So the
introduction of rational commitments cannot give us a satisfying resolution of the
symmetry problem.[29]
The objection is tempting but ultimately unpersuasive. I’ve introduced the notion
of rational commitment both because it corresponds to a common sense concept that we
all routinely employ, and because it solves the symmetry problem in its most interesting
incarnation. As I’ve argued, the dual notions of requirement and permission are simply
insufficient for a theory of rationality that hopes to address the most interesting worries
brought up by the problem. None of this entails that rational commitments should be used
to account for all possible rational asymmetries. In the present case, if it is true that it is
always irrational to rationalize, then presumably there is a rational requirement
prohibiting this kind of behavior. This is the kind of asymmetry that should be dealt with
by an independent rational requirement, since it amounts to a blanket prohibition
(analogous to the prohibition on akrasia). But this is simply irrelevant when considering
whether rational commitments are necessary for accounting for other rational
asymmetries.

28. See Schroeder (2004) for the original worry about rationalization.

29. I owe this objection to an anonymous referee.
Many rational requirements must be formulated as wide scope principles. This insight,
however, does not settle several of the most important questions facing theorists of
rationality. The symmetry problem shows this.
The concepts of rational requirement and permission cannot figure in a satisfying
solution to the most interesting version of the symmetry problem. We should respond to
the problem by introducing the notion of rational commitment. This does not constitute
any revisionary addition, since we already employ the concept ubiquitously. And besides
solving the symmetry problem and vindicating our linguistic and conceptual intuitions,
the introduction of rational commitments promises to allow us to better capture common
sense comparative judgments of rationality. These are more than sufficient reasons for
amplifying our conceptual resources as I recommend.
In the next chapter I make my best attempt at this amplification. I argue that the
concept of rational commitment is only one instance of a more general concept, and,
specifically, that it is intimately related to the moral commitments that are grounded in
agreements such as promises. This investigation of the nature of commitment is the main
original contribution of the dissertation, and I think it gives us a new perspective on a
whole host of issues in practical philosophy. The final chapter helps to defend these
claims by considering several applications of the account of commitment I propose.
CHAPTER FIVE: MORAL AND RATIONAL COMMITMENT
We often use the term ‘commitment’ to make claims of apparent normative import. For
example, we say that Plato is committed to a wacky ontology, or that President Obama
committed himself to doing something about the nation’s health care problem, or that
Jacky is committed to her wife Joan. On the face of it these claims seem very different:
Plato’s commitment concerns the relationships between his beliefs and other
propositions; Obama’s has to do with something he’s promised the electorate; and
Jacky’s involves a constellation of caring attitudes towards Joan. Moreover, we can use
the term ‘commitment’ in even more obviously different ways—as in, e.g., ‘the judge
committed him to the asylum’—that do not have clear normative readings.
In this chapter I will argue for the unorthodox conclusion that rational
commitments, like the commitment to believe what you recognize to follow from your
other beliefs, and moral commitments, like the commitment to do what you’ve promised
to do, are species of a single genus. In other words, the commitment relation invoked in
the claims about Plato and Obama above is the same relation.[1]
My claim is not that all uses of the term ‘commitment’ involve appeal to the same
relation. The account of the normative notion of commitment that I offer here will not
subsume commitments like Jacky’s or the judge’s, though there are, I think, interesting
and ultimately fundamental connections to be made. Nonetheless, the thesis of the
chapter is still ambitious, as the prima facie differences between moral and rational
commitments are substantial.[2] My contention in what follows will be that these
differences can be traced to a single, predictable source, and that this helps to show that
moral and rational commitments are two instances of one phenomenon. More
1
Note that when we claim that Plato is committed to a wacky ontology we are speaking a
bit loosely. What we mean is that Plato is committed to an ontology that we judge to be
wacky; we don’t mean that he’s committed to an ontology that he judges wacky.
[2] It is also unorthodox for the simple reason that moral and rational commitments are
seldom discussed in tandem. Part of the aim of this chapter is to draw our attention to this
similarity between the normative landscapes of these separate domains.
specifically, the genealogy of the differences will derive them from one general
distinction between the moral and rational domains, and this will demonstrate that the
dissimilarities are not evidence that moral and rational commitments differ in kind, but a
consequence of the fact that they are about different things. Even if this last step in my
argument is doubted, the account gives us reason to think that moral and rational
commitments are important and underappreciated constituents of our normative lives, and
that they are usefully investigated together.
5.1 The Nature of Commitment
A. Intuitive Reflections and Initial Similarities
In this section I motivate the idea that there is a distinct normative relation of being
committed that we commonly appeal to in ordinary language and as a criterion for moral
and rational evaluation. I also offer some preliminary reflections about the nature of this
notion of commitment.
Consider the following felicitous uses of ‘commitment’ and ‘commit’:
(1) Plato is committed to a wacky ontology.
(2) If Adam believes that everything in the Bible is true, and he believes that the Bible
says that the world was created in six days, then he’s committed to believing that the
world was created in six days.
(3) I made a commitment to my friend for Friday night, so I won’t be able to join you.
(4) President Obama committed to spending ten billion dollars on green jobs.
The first two sentences attribute what I will from here on out refer to as rational
commitments to Plato and Adam, respectively. The most familiar form of rational
commitment is a commitment to believing what you believe to follow from the things
you already believe. I will suggest in a moment that this is only one among several types
of rational commitment.
Sentences (3) and (4) introduce a different sort of commitment. The sentences say
that President Obama and I have promised to perform some action or set of actions. The
“obligation” or normative pressure that such promises engender is the paradigmatic case
of what I’ll be calling moral commitment.
There are other felicitous uses of ‘commitment’ and ‘commit’ that have different
meanings in English. For example, we can use the sentence
(5) Jacky is committed to Joan
in order to express the idea that Jacky is devoted to and values Joan. Likewise, we can
say that
(6) The judge committed him to the asylum
and thereby express that the judge legally mandated that he be placed in the asylum. As
I’ve mentioned, my view is that there are interesting relationships between all these uses;
in particular, the volitional sense of commitment invoked in (5) is importantly related to
the more run-of-the-mill (moral) sense whose paradigm instance is given by the
promissory case.[3]
But these relationships will not concern me here. In the following
investigation, I confine myself to uses akin to (1)-(4).
What can we say about the nature of commitment? Let’s begin with rational
commitment. It is uncontroversial that we make claims like (1) and (2). Indeed,
philosophers are particularly disposed to use such locutions. Here is a guiding hypothesis
about them.
To be rationally committed to having A is to be such that you must be irrational if
you fail to have A, assuming no changes in your other attitudes. For instance, Adam is
rationally committed to believing that the world was created in six days since, given (and
holding fixed) his other beliefs, he has got to be irrational if he fails to believe that the
world was created in six days.
There are two distinguishing features of rational commitments that it will be
[3] To get a quick handle on the distinction between the moral sense of commitment and
the volitional sense, consider the fact that President Obama could commit to repealing
‘Don’t Ask, Don’t Tell’—e.g. by promising to do so in the course of campaigning—and
still be appropriately accused of lacking a commitment to repealing it (after he’s been
elected, say). To put it another way, Obama might fail to be committed to living up to this
commitment. In this case, Obama would be morally, but not volitionally, committed to
repealing ‘Don’t Ask, Don’t Tell’.
helpful to observe at the outset. First, to be rationally committed to A is to stand in a
normative relation to A. Commitments cannot be analyzed in terms of the actual
attitudes one has; Adam might not believe that the world was created in six days even
though he is committed to believing it. An agent’s actual attitudes constitute the ground
of his rational commitments—that is, what makes it the case that he has the commitments
that he does—but not the commitments themselves. The commitments themselves are
normative, or at least apparently normative, in the sense that they put pressure on the
committed agent to form the attitude to which he’s committed; and this pressure obtains
independently of how he thinks about it. Thus rational commitment is, on the face of it,
an irreducibly normative relation: it cannot be analyzed in terms of the attitudes one takes
towards the object of one’s commitments; and to stand in the commitment relation is to
be under the grip of some distinctive form of normative pressure.
Second, notice how this notion of commitment relates to a commonsense notion
of rational requirement. According to the dominant view of writers on rationality, it is
simply not true that Adam is rationally required to believe that the world was created in
six days.[4]
This would amount to an illegitimate sort of bootstrapping; a wildly irrational
belief does not have the legitimizing power that this picture would grant it. Really Adam
is rationally required to either form the belief that the world was created in six days, or
revise his belief that everything the Bible says is true. But then there is an intuitive
distinction between what Adam is committed to believing and what he is required to
believe. This claim is supported by our linguistic practice, as we may appropriately say
things like
[4] For some prominent defenses of ‘wide-scope’ views about rational requirements see
Korsgaard (1997), Hampton (1998), Broome (1999, 2007, ms), Dancy (2000), Wallace
(2001), Brunero (2010), and Way (2010). For objections to wide scoping see Schroeder
(2004, 2009), Kolodny (2005, 2007a, 2007b, 2008a, 2008b), Setiya (2007), and Finlay
(forthcoming).
(7) Adam is committed to believing some crazy things; but what he ought to do is stop
believing that everything the Bible says is literally true.[5]
By contrast, it would sound awkward to claim that
(8) Adam is committed either to believing that the world was created in six days, or to
revising his beliefs about the Bible.
On the face of it, then, rational commitment is distinct from rational requirement.
Finally, it will be my contention from here on out that the commitment that Adam
is under—a commitment to having a set of beliefs that obeys some weak closure norm
(e.g. one prescribing closure under obvious, believed entailments)—is only one of several
types of rational commitment.[6]
Another example of a rational commitment that is purely
theoretical is the commitment one has to not believing that not p in virtue of one’s belief
that p. And there are rational commitments with practical objects as well. For example, a
belief of the form ‘I ought to φ’ grounds a rational commitment to intending to φ.
Likewise, an intention for an end and a belief that a particular means is necessary for that
end together ground a rational commitment to intending that means.
Though we are not as typically disposed to use the word ‘commitment’ in
describing these cases, it is quite natural to treat them as analogous to the closure case.
Imagine that we engage in a process of shared planning, and eventually come to agree
that we ought to φ. Implementing an intention to φ requires that we take some simple
intermediary steps as necessary means to the successful realization of the intention. But,
inexplicably, I balk when you suggest that we take these steps. You ask whether I have
reconsidered our agreement that we ought to φ; I insist that I have not. You may then
very well contend that I am being wildly irrational, as I am committed to taking these
[5] This is precisely the form in which we often phrase objections in philosophy: ‘Jones
believes/claims that [p, q, r …]; he is thereby committed to z; he should not believe that
z; so he should give up at least one of [p, q, r …]’.
[6] For the record, I am hesitant to endorse any particular conception of the closure
commitment. There are substantial issues about this that I cannot address here at any
length. That said, I will briefly return to the issue at the end of Section 3. For an extended
recent discussion of epistemic closure see Hawthorne (2006).
steps in virtue of our intention.[7] Alternatively, imagine that no intermediary steps are
necessary. But I simply refuse to form the intention to φ. Again, you ask whether
I’ve reconsidered our agreement, and I insist that I have not. You may once again
felicitously claim that I am being wildly irrational, as I am committed to forming this
intention in virtue of my belief that we ought to φ.
Moreover, it is natural to suppose that, if there is indeed a distinctive normative
relation of rational commitment, then it will obtain in all those cases that are relevantly
similar to the case of belief closure. It would surely be odd if there were a normative
relation obtaining only in that specific case. Writers on these topics often assume that
such cases are susceptible to a general, more or less unified treatment, and this
assumption is justified in so far as we think that a similar sort of rational error occurs in
each of these cases when the commitment is violated. We will therefore proceed under
the assumption that the notion of rational commitment has various applications.[8]
Consider now the claims about moral commitments that I introduced above.
When I say that I have committed to dinner plans on Friday, or that Obama committed to
spending money on green jobs, I am claiming that the agent in question promised to do
something. Let us note the way in which this sort of commitment relates to the rational
commitments just discussed.
There were two distinguishing features that initially struck us about rational
commitments. First, they seemed irreducibly normative. Being rationally committed to
having A could not be analyzed in terms of the actual psychological states of the
committed agent; these states served only to ground the commitment, whereas the
commitment itself seemed to amount to a kind of normative pressure. Second, rational
commitments appeared to be distinct from rational requirements. In other words, it
[7] Strictly speaking, the belief that the steps are necessary for the action intended is part of
the ground of the commitment as well. But it is natural to omit this in conversation.
[8] See for example the work of John Broome (1999, 2007, ms), who is quite clear that the
central requirements of rationality must include practical ones like enkrasia and means-
end coherence.
seemed that you could be rationally committed to having A without being rationally
required to have A.
Analogous claims hold for moral commitments. Being morally committed to φ-ing
cannot be analyzed in terms of the attitudes of the committed agent; it must be
analyzed as a fact about normative, rather than psychological, reality. Obama can be
committed to spending money on green jobs even if he does not desire or intend to do so.
For instance, pledging in his State of the Union address that he will spend ten billion
dollars on green jobs would suffice for grounding this commitment.
Likewise, moral commitments should be distinguished from moral requirements.
Imagine that Obama does commit in late January to spending the ten billion on green
jobs. However, in early February a devastating nuclear exchange wipes out much of the
Middle East. Plausibly a tragedy of this magnitude could make it so that Obama is
morally required to divert the ten billion dollars from green job creation to humanitarian
assistance. In this case he would be morally required to violate his commitment. So
there’s a strong case for thinking that moral commitments cannot be identified with moral
requirements.
We have seen some pretty fundamental initial similarities between these two
types of commitment. Let us now examine their differences. After doing so, we will
return to give a more rigorous and novel characterization of the shared nature of moral
and rational commitments (section 1C), and an explanation of the roots of their
differences (section 2).
B. Differences
The most immediately appreciable distinction between moral and rational commitments
concerns their objects. At a first gloss, it seems that when you are morally committed,
you are paradigmatically committed to performing a certain action. And when you are
rationally committed, you are committed to having or lacking a certain mental state.
In fact this first approximation is not obviously correct. I can promise to feel
certain emotions, and perhaps thereby come to be committed to feeling these emotions.
For example, I might promise to continue loving someone after they have died. Plausibly,
continuing to love someone (in this sense) is not an action, but rather the persistence of a
mental state. Nonetheless, I think it is natural to say that my promise commits me to
continuing to love the person in the same way that Obama’s promise commits him to
funding green jobs. So it seems like moral commitments need not take actions as their
objects.
In any case, we should not be surprised or worried about this difference. After all,
morality and rationality are different domains, and they engage different types of
considerations. On one common conception, morality essentially concerns our relations
with others. I would suggest that, in broad outline at least, there is a deep analogy here
between these normative realms: rationality essentially concerns our relations with
ourselves, or the connections between our mental states. This is something to bear in
mind throughout what follows. And it provides a simple explanation of why we should
expect the objects of moral and rational commitments to be different. Paradigmatically,
rational norms will govern your own mental states, while moral norms will govern your
dealings with other people.
Besides having different objects, the two types of commitment seem distinct
because they come into existence in different ways. Moral commitments must be
grounded in a specific form of interpersonal communication: the making, and plausibly
the acceptance, of a promise.[9]
So in order to acquire a moral commitment, you need to
[9] I won’t propose an account of what it takes to “accept” a promise. The key thought is
that the mere act of promising to do something can be insufficient for grounding a
commitment to do that thing if the promisee rejects the plan. So I may promise my crush
that I will take her to Bali, but not thereby become committed to taking her, since she has
no interest in spending time with me and tells me as much. Nothing that I say turns on
this intuition or on the details of an account of promissory acceptance. If you think that I
am committed in this case notwithstanding the rejection, you should still accept my claim
in the main text that the ground of moral commitments is interpersonal. In Section 2 I
will argue that moral commitments paradigmatically require acceptance, but this is
compatible with the view that in strange cases they do not.
perform a public action of communication with the promisee.[10]
By contrast, rational
commitments are grounded by your actual psychological states. You acquire them by
forming beliefs and intentions that stand in certain relations, and forming these mental
states is something you do on your own.
This difference in the mode of acquisition of moral and rational commitments is
important. Diagnosing the difference will be one of the main tasks of section 2. There I
will contend that two other central differences obtain in virtue of these facts about how
commitments are acquired.
We are already in a position to see what these further differences are. First, it
seems that you may sometimes be released from commitments, but that securing this
release requires different methods in the moral case than in the rational one. In the former
case securing release involves contacting the promisee—again, performing a public,
communicative action, and obtaining some kind of permission from someone else. In the
latter case being released from a commitment simply requires revising or discarding some
of your attitudes.
Second, the two types of commitment appear to differ significantly in what I will
call their normative natures. Recall that in giving a preliminary gloss on the nature of
rational commitment, I claimed that to be rationally committed to having A is to be such
that you must be irrational if you fail to have A, assuming no changes in your other
attitudes. But we are already in a position to see that a strictly analogous claim cannot be
true for moral commitments. The analogous claim would be: to be morally committed to
φ-ing is to be such that you must be immoral if you fail to φ, assuming no changes in the
[10] In point of fact this conception of moral commitments is intentionally oversimplified.
Though I take promises to be the paradigmatic grounds of moral commitments, I think
that the more general class of agreements is what we should appeal to in a complete
theory. (After all, I can be committed to meeting you for dinner even if my agreement to
meet you doesn’t contain explicitly promissory language.) Nonetheless, in this chapter I
treat only the promissory case, in an effort to simplify exposition.
grounds of your commitment.[11]
But a moment ago we gave a case in which this was not
true: Obama could be morally committed to spending money on green jobs even if failing
to do so would not be immoral.
We will return to these considerations in a moment. To forecast: I’ll contend that
the differences in escapability conditions and normative natures are traceable to the
difference in mode of acquisition. And the differing mode of acquisition for moral and
rational commitments is not a reason to think that they are different types of thing. It is,
like the difference between their objects, a difference that arises not from the nature of
commitment, but from the nature of morality and rationality themselves.
But before we get to this, I need to offer a more rigorous account of the
commitment relation.
C. Evidence for a Unified Theory
I now turn to a more detailed examination of the nature of commitment. The point of this
section is not to offer an analysis of the commitment relation, but to make the point that
rational and moral commitments can and should receive a unifying treatment because of
their underlying structural similarities. The central features of my account are as follows:
commitments are normative rather than psychological, escapable, agent-dependent, pro
tanto, and strict. In what follows I discuss these features individually, showing how each
of them (apart from the claim that they are normative) distinguishes commitment from
some other normative relations, especially the requirement relation.
Commitments are normative rather than psychological
I have already shown why commitments cannot be analyzed in purely psychological
terms. Adam might believe that everything the Bible says is true, and he might believe
that the Bible says that the world was created in six days. But it clearly doesn’t follow
[11] In the rational case, your actual attitudes constitute the ground of all of your rational
commitments. In the moral case, it is rather a particular type of action that grounds your
commitments. Since you cannot “revise” actions that you have already performed, the
process by which you may escape a moral commitment is more complicated.
that he believes that the world was created in six days. What follows is that he is
committed to having this belief. Likewise, Obama might promise to fund green jobs, but
it doesn’t follow from his promise that he wants to, intends to, or expects to fund green
jobs. What follows is that he is committed to funding green jobs.[12]
So the commitments
themselves are not mental states. They are relations between mental states/actions (the
ground of the commitment) and other mental states/actions (the object of the
commitment). These relations obtain whether or not the agent in question actually
satisfies them, and they have the flavor of normative relations.
Commitments are escapable
Both moral and rational commitments are escapable. Suppose that you are committed to
doing x at t₁, and suppose that t₂ comes along and you have not done x. It does not
necessarily follow that you have violated a commitment, or done anything objectionable
at all. For you can escape a commitment by giving up its ground. In the promissory case
you can escape the commitment by being released from the promise. (For Obama this
might be difficult; it is tough to secure this sort of release from millions of citizens. But
perhaps there are other ways to escape such commitments. For a simpler case, return to
my commitment to having dinner with a friend on Friday. This is escapable in an obvious
way: all I need to do is call up my friend and persuade him to release me, maybe with a
promise to reschedule.)
Likewise, rational commitments can be escaped by relinquishing the attitude(s)
that constitutes the ground of the commitment. For example, Adam can escape his
commitment to believing that the world was created in six days by revising his belief that
everything the Bible says is true. So, again, being committed to having A at t₁ is
compatible with your being perfectly rational in failing to have A at t₂.
[12] Do not be confused by the tricky ambiguity of the sentence ‘Obama is committed to
funding green jobs’, which may mean that Obama has made this commitment, or that he
is in the process of satisfying this commitment. (See footnote 3 above.) The former
reading is the one I am after in the main text.
Notice that this feature of commitments also seems to separate them from
requirements; intuitively, requirements are not escapable. If you are (as is commonly
supposed) rationally required to be such that [if you believe that you ought to φ, then you
intend to φ], then this is so whether or not you have a belief about what you ought to do.
In other words, you fall under the scope of the requirement no matter what you are like
(at least in so far as you are a rational agent).[13]
Similarly, if you are morally required to
refrain from stealing, then there is nothing you can do to escape this requirement. It isn’t
grounded in a contingent feature of you, like the fact that you live in Berlin or the fact
that you believe in ghosts. If anything, it is grounded in the fact that you are a moral
agent, and this is something that you cannot escape.[14]
Commitments are agent-dependent
By claiming that commitments are agent-dependent I mean primarily that, in order to
come into existence, a commitment must be grounded in an activity of the agent who is to
become committed. The sense of activity I am after is somewhat opaque, but none the
worse for that; importantly, it is not meant to imply anything ambitious about free
agency. The idea is that since both moral and rational commitments depend, in a broad
sense, on the activities of the agent who comes to stand in the commitment relation, it
makes sense that different agents are committed to different things. Another person (or,
more generally, the world itself) does not have the power to commit you to actions,
intentions, or beliefs. It is fundamentally your actions and your mental states that
constitute the ground of your commitments; it is, for example, your act of promising that
commits you to performing the promised action, and your formation of certain attitudes
[13] Maybe a clarificatory remark is in order here. As it is commonly conceived, the
non-akrasia norm requires you at all times to avoid akrasia. In other words, you are governed
by this requirement whether or not you have any beliefs about what you ought to do.
Thus there is a clear sense in which the norm is inescapable: you are governed by it no
matter what.
[14] You could of course kill yourself, but then it wouldn’t be that you have escaped the
requirement, since there would be nobody left.
that commits you to having other attitudes. This agent-dependence is what explains why
agents’ commitments vary widely.
Notice that the agent-dependence of commitments also serves to distinguish them
from requirements, at least on one popular conception of requirements. For it is often
supposed that rational and moral requirements exist independently of us—we are not
responsible for bringing them into being in any robust sense. A moral requirement to
refrain from stealing, if genuine, entails that I am categorically required to refrain from
stealing; it is merely in virtue of my moral personhood, and not my actions, that the
requirement applies to me. Of course, there may be conditional requirements that are
agent-dependent—for example, a requirement to return what I’ve stolen may only apply
to me in virtue of my act of theft. Nonetheless, there does seem to be an important
distinction between the paradigm cases, as my commitment to meet you for lunch exists
only in virtue of my promissory act, and the requirement I am under to refrain from
stealing exists in virtue of some fundamental normative fact about the moral wrongness
of stealing.[15]
And even if we granted that there was a parallel type of explanation in the
case of commitment—that is, a fundamental normative fact such as ‘X-ing commits one
to Y’ combined with the situational or agent-dependent fact that the agent X’s—it would
still be true that agents’ commitments are far more variable, and far more self-imposed,
than their requirements.[16]
Commitments are pro tanto
Sometimes you have reasons to do something that you should refrain from doing. In the
terminology of Jonathan Dancy, reasons are contributory—a reason might be an
[15] The locutions of this sentence themselves indicate the distinction I am after. It is
natural to indicate an individual’s ownership of a commitment with a possessive (‘my
commitment’), but it would be awkward to refer to a genuine requirement in such a way.
The latter typically takes a categorical form, as in ‘the requirement’.
[16] It is not in any sense up to me that an armored car just drove by, but I am morally
required to refrain from hijacking it. If I were to promise the driver that I wouldn’t hijack
it, however, then there would be a natural sense in which the resulting commitment owed
its existence to me.
ingredient in making it the case that you ought to do something, but it needn’t play that
role.[17]
As John Broome puts it, there are reasons that are inconclusive but that still play a
role in normative weighing explanations.[18]
Dancy and Broome are both getting at the
simple idea that reasons are pro tanto. They are to be contrasted with “all things
considered” concepts like ‘ought’ and ‘requirement’ that do not merely weigh in favor of
something, but rather indicate what emerges after the weighing has been accomplished.
Notice that this distinction between the pro tanto and the all things considered
may be more general than the distinction between reasons and oughts/requirements.
Plausibly some obligations can weigh against each other, and are thus like reasons in that
they merely contribute to, rather than constitute, what you ought to do. Commitments are
the same way. Being committed does not entail that you ought to/are required to satisfy
the commitment.
To see that commitments are pro tanto, simply imagine that you’ve made
conflicting promises that you cannot escape. Suppose that one of the promises is more
important than the other. Then you have two commitments that weigh against each other,
with one losing out. There is something in favor of keeping this less important promise,
but you ought to refrain from keeping it.
Likewise, imagine that you have intentions that commit you to intending
necessary means that you believe to be inconsistent with each other. Suppose that acting
on one of the intentions is more important than acting on the other. Then you are
committed to intending the means to the less important intention, but you would be most
rational if you refrained from intending the means.
More formally, we can put the point like this. The distinction between a pro tanto
normative relation and an all things considered one is a distinction concerning the
relation’s decisiveness. To be pro tanto is to play a non-decisive weighing role in the
determinations of some normative domain D. To be all things considered is to
[17] Dancy (2004).
[18] Broome (ms). The actual account is a bit more complicated, but not in any way that’s
essential for making the simple point I’m making in the text.
encapsulate the final verdict of D, the verdict that emerges after the weighing has been
done.
Commitments are pro tanto because they are not like these final verdicts. I have
really been saying this all along, in distinguishing commitments from requirements. The
intuitive notion of requirement that I have been appealing to is just one kind of decisive
normative verdict. Our intuition that Obama is required to divert the funds earmarked for
green jobs to humanitarian assistance should be fleshed out as the claim that morality
(‘all in’) says that Obama must divert the funds. Likewise our intuition that a philosopher
is required to discard his theory instead of accepting its implausible implications should
be fleshed out as the claim that rationality (‘all in’) says that he must discard the theory.
Commitments aren’t this strong. It may often happen that you are required to violate a
commitment (in the moral case) or to give up/escape its grounds (in either case). Thus
commitments play a non-decisive, or pro tanto, weighing role.
Commitments are strict
Commitments are pro tanto, but they are not to be understood on the model of reasons.
There are tons of reasons that you can take or leave. I have a reason to bring flowers to
the secretary of the geology department—namely, the fact that it would brighten her day.
I have a reason to eat dim sum tomorrow morning—namely, the fact that I find it
delicious. I have a reason to learn how to play the trumpet—namely, the fact that I’ve
always wanted to. Yet it is perfectly okay for me to fail to act on these reasons, and
countless others. But it is not perfectly okay for me to fail to satisfy my commitments.
In section 3 below I’ll return to my claim that commitments have a characteristic
force that distinguishes them from our paradigmatic example of the pro tanto. For the
time being I’m concerned to make the following points. First, it is possible for a
normative relation to be more forceful than the relation of having a reason, but to
nonetheless not be decisive. Second, moral and rational commitments seem to share this
feature of being strict rather than slack.[19]
To begin, let us consider what it is about commitment that motivates the
introduction of the distinction between the strict and the slack. Here is a simple
characterization of the idea. If you are committed to meeting me for lunch, and you fail to
meet me for lunch, then there is something bad about that. Likewise, if you are
committed to believing p, and you fail to believe p, then there is something bad about
that. But as we’ve seen, it can’t be that this badness is decisive. In other words, it may be
that you were morally required to refrain from meeting me for lunch, or that you were
rationally required to refrain from believing p. So we need to look elsewhere for a way to
characterize the badness of violating these commitments. The problem is that it is not
plausible to think that the badness of violating a commitment is just the badness of acting,
or believing or intending, against a reason. That would severely underestimate the
seriousness of commitments and distort their normative character.
Imagine that Obama commits to repealing ‘Don’t Ask, Don’t Tell’ and then
refuses to satisfy this commitment. Whether or not he has ultimately done the right thing,
it seems likely that his critics will have something to legitimately complain about. But
merely having a reason to repeal would not necessarily ground the legitimacy of
complaint. Obama has a reason to do all sorts of things. For instance, he has a reason to
take today off work and spend it playing basketball. Yet if he goes to work today, it
would not be reasonable for us to complain that he didn’t spend the day playing
basketball. Merely having a reason to do something doesn’t ground the intelligibility of
someone else’s complaint when you fail to do that thing. Moral commitments like
Obama’s are not simply reasons. They have some distinct kind of force.
Similarly, imagine that the moral theory I accept commits me to intending to give
money to Doctors Without Borders. Whether or not this moral theory is ultimately
correct, my friends may legitimately take me to task if I do not have the intention to give
[19] The distinction between strict and slack normative relations is not a distinction with
much explicit precedent. The notable exceptions are Bratman (1987), Broome (1999),
and Schroeder (2009).
money to this organization. This would not necessarily be appropriate if I merely had a
reason to give away the money in this manner. After all, I presumably have a reason to
intentionally do many incompatible things with the money—give it to Oxfam or Human
Rights Watch, spend it on my friends and family, bet it on the Lakers making a
championship run, etc. But if I refrained from forming the intention to give the money to
Oxfam, my friends could not reasonably take me to task. There is something distinctive
about my rational commitment; it is not just a reason to intend to give to Doctors Without
Borders.
The full elaboration of my related argument for the strictness of commitments will
have to wait until section 3.[20]
For now the reader should just bear in mind that ‘strictness’
is a placeholder for the property of commitments that renders them stronger than
reasons—or, to put it more neutrally, that marks them as bearers of a distinctive and interesting kind of
(pro tanto) normative force. Since the main point of this section has been to offer the
essentials of a theory of commitment that (a) unifies the moral and rational cases, and (b)
distinguishes commitments from requirements, the discussion of the distinction between
commitments and reasons will be profitably postponed until after we have considered the
most important challenge to the account that I’ve articulated.
5.2 Acquisition, Escape, and Normative Nature
A. Unilateral vs. Multilateral Modes of Acquisition
As I said at the outset, there are some apparently substantial differences between moral
and rational commitments that reveal themselves almost immediately and that may
reasonably be thought to threaten a unifying analysis. The challenge of this section is to
provide an account of the nature of these differences that preserves the unified picture.
My strategy is to explain the differences as results of domain-specific differences
[20] Indeed, the reader should note that this initial sketch of the argument, which appeals to
the intelligibility of complaint, is really only illustrative. As she will see, the argument as
I present it later involves the deliberative centrality of commitments, and their relation to
regret.
between morality and rationality, rather than as differences in the nature of the normative
relation that’s invoked.
21
This subsection will offer the outline of an argument for this
view, and the following one will provide further elaboration.
Let us recall first that though both moral and rational commitments depend upon
the activities of the agent who becomes committed, the mode of acquisition for these
types of commitment typically differs. Consider the fact that rational commitments are,
paradigmatically, unilaterally acquired. By this I mean that the only agency involved in
the acquisition of a rational commitment is that of the agent who is to become
committed.[22]
Simply forming certain attitudes is sufficient to ground rational
commitments. Other agents’ actions do not enter into the picture.
By contrast, moral commitments are not, paradigmatically, acquired unilaterally.
In order for me to be committed to meeting Jane for dinner, it is typically necessary that
there be some sort of agreement between us. This is natural, since the moral commitment
that promises engender presumably gets its force from the relations of social dependence
that are brought about by genuine promising.[23]
And an exercise of one’s own agency is
not typically sufficient for grounding these relations of social dependence. There needs to
[21] Analogy: there are normative reasons for belief as well as for action, and there are
major differences between these types of reasons (e.g. the former are reasons for
believing something true, whereas the latter are reasons for performing some action).
However, these differences do not mean that there is no unified normative phenomenon
that is equally captured by both types of reasons claims. Since the observed difference
can be satisfyingly explained by appealing to the nature and aim of belief and action
themselves, this difference should not incline us to think that there are two unrelated
concepts in play.
[22] The claim is not that the ground of the commitment must be an attitude or set of
attitudes that is voluntarily formed, though there may be a broad sense of voluntary
activity relative to which this claim is true. Many philosophers are uncomfortable with
the notion of a doxastic will, and I do not mean to offend their scruples here. I am only
arguing that it takes nothing beyond the exercise of one’s own mental agency to acquire
rational commitments.
[23] For important accounts of promissory obligation that have influenced my treatment
here, see Ross (2002), Raz (1971), Scanlon (1998), and Owens (ms).
be in addition a related exercise of another agency that issues in a shared plan, or
something of this sort. Let’s call this the
Acquisition Fact: Rational commitments are unilaterally acquired; moral commitments
are not.
This difference in the way we acquire commitments corresponds to, and explains,
a substantial difference in the possibility of escaping them. Since rational commitments
are unilaterally acquired they are also unilaterally escaped. For to relinquish the ground
of a rational commitment, and thus to escape the commitment itself, is merely to give up
an attitude or set of attitudes that was at one point unilaterally acquired. And the process
of revising one’s own attitudes does not depend upon anyone else, any more than the
process of forming them does.
So for example, I have claimed that if you believe that you ought to φ, you are
thereby committed to intending to φ. But it should be clear that, if you reconsider and
discard your belief that you ought to φ, you are no longer committed to intending to φ.
And it should be equally clear that discarding this belief is something you can do without
anyone else’s assistance.
The same does not obtain, however, in the case of moral commitments. Since
moral commitments are acquired by way of an agreement with someone else, they can
only be escaped if you are released from the agreement. And your being released is
obviously not a matter of your own unilateral decision. First, the party to the agreement
may refuse to release you. For example, your mother may refuse to release you from the
promise you made to come home for Christmas. Second, you might be unable to escape a
commitment merely because you are incapable of securing your release. This might
happen if you have no way of contacting the person to whom you’ve made a promise.
Thus the unilateral mode by which we acquire rational commitments explains the
unilateral ability we have to escape them. And the non-unilateral mode of acquisition of
moral commitments explains the typical insufficiency of unilateral means of escaping
them. In other words, the Acquisition Fact directly explains the
Escape Fact: Rational commitments are unilaterally escaped; moral commitments are
not.
Further, the Escape Fact accounts for the
Violation and Conflict Fact: Violation and conflict of rational commitments are always
avoidable, whereas violation and conflict of moral commitments are not always avoidable.
In other words, because you can always unilaterally escape rational commitments, you
can always avoid violating them, and avoid being committed to conflicting attitudes.
Finally, we appeal to a general principle, the
Permissibility Principle: Violation and conflict of commitments are impermissible if not
properly excused.
And we arrive at an interesting conclusion about what I’ll call the divergent normative
natures of the two types of commitment:
The Normative Difference: We can always rationally fulfill, or escape, all of our
rational commitments; but it is not always the case that we can morally fulfill, or escape,
all of our moral commitments.
In other words, the nature of rational commitment guarantees that we are decidedly
irrational when we violate a rational commitment. But the nature of moral commitment
does not allow for such a guarantee. Sometimes the violation of a moral commitment
entails no immorality—as we have already observed above, in the case of Obama’s
diverting the green jobs funds to humanitarian assistance.
B. Violation and Conflict
As I’ve just argued, it is the distinction between unilateral and multilateral modes of
acquisition that explains why rational and moral commitments have divergent
escapability conditions. The point to emphasize is that the mode of escape for rational
commitments entails that violations (and conflicts) of rational commitments are
necessarily avoidable. This observation constitutes the key stage in my argument. It
permits us to derive a tidy explanation of the most important difference between the two
types of commitment—the difference in their normative natures—from the original
Acquisition Fact. And this seems to be a clear case in which it is the natures of the
rational and moral domains themselves that give rise to the difference.
For the sake of clarity I’ll state the general form of this argument in more detail.
Then I’ll consider some examples as illustration.
The fact that only violations of rational commitments are necessarily avoidable
leads to the difference in normative natures if we endorse the Permissibility Principle,
which says that violated commitments are irrational/immoral provided they are not
properly excused.[24]
This entails that all violations of rational commitments, and conflicts
between them, are irrational, since there is no such thing as a proper excuse in the rational
case. (A proper excuse would need to appeal to something like the agent’s inability to
avoid the violation or conflict; by hypothesis, we never lack this ability.) And it explains
why some violations of moral commitments, and conflicts between them, are not
immoral: there are proper excuses in the moral case, since violations and conflicts of
[24] Here we do not need a fancy story about what it takes to get excused. The point is just
that since commitments are normative, they should guide us insofar as they do not come
into conflict with a competing normative consideration that’s weightier. But though such
competition between two legitimately normative considerations is possible in the moral
case, it is impossible in the rational case: the necessary escapability of rational
commitments entails that any conflict is avoidable and thus irrational.
moral commitments are not necessarily avoidable. So the Acquisition Fact directly
explains the Escape Fact; we derive the Violation and Conflict Fact directly from the
Escape Fact; and we appeal to the Permissibility Principle in order to explain the
Normative Difference.
Now consider a case of conflicting rational commitments. You believe that you
ought to x, which commits you to intending to x. You also intend to z, and believe that y
is necessary for z, which commits you to intending to y. In addition, you believe that x-ing
is incompatible with y-ing, which commits you to not intending to both x and y.[25]
You then have commitments to intending to x, intending to y, and not intending to both x
and y. This is a paradigm case of conflicting commitments.
Notice, first, that though it is possible to have conflicting commitments in this
sense, it is never rational. In our example you are definitely irrational, though without
more detail it is impossible for us to say precisely which attitude(s) is the culprit. You are
certainly irrational because if you are rational in believing that you ought to x, and
rational in believing that x-ing is incompatible with y-ing, it follows that you must be
irrational for having an intention that commits you to y-ing.[26]
Now the problem may in
[25] There are complications about the precise nature of the intention consistency
commitment that I set aside here in order to preserve a clear exposition of the material.
Briefly, the problem is that it does seem rational in some cases to intend both to y and to
z, where y-ing is incompatible with z-ing. (Bratman’s (1984) ‘Video Game’ is the classic
example. You cannot successfully shoot both targets; yet it is perfectly rational to aim at
them and perhaps to “intend” to hit them, though in Bratman’s view you can only
“endeavor” to hit them.) Perhaps the commitment to not intending both to y and to z only
gets generated when they are not just incompatible but also mutually destructive—that is,
when intending to z lowers your chances of fulfilling your intention to y, and vice versa.
Or perhaps we don’t really have intentions in Video Game. The minimal point in the text
could accommodate either of these strategies by refining the nature of the commitment.
And many more examples of conflicting commitments could be provided.
[26] Another way to put this is that you are committed to intending something that will
prevent you from doing what you rationally believe you ought to do. But if you rationally
believe that you ought to do something, then you are rationally required to intend to do it.
As a result, you are rationally required to refrain from doing anything that will prevent
this intention from being realized.
fact lie somewhere else. The point is just that all of the relevant attitudes (and sets of
attitudes), the attitudes that ground the commitments at issue, cannot be rational. If they
were, then a conflict of commitments would not arise.[27]
Our explanation of the necessary irrationality of violating rational commitments
also accounts for the necessary irrationality of conflict states. For if you have conflicting
commitments, you are in a state that guarantees that you will violate at least one of them
on account of satisfying the other, so long as you don't escape one.[28]
So while rational commitments can conflict, the presence of such a conflict is a
sure sign of some kind of irrationality, because the conflict is never unavoidable. Most
generally, it is always possible to escape a conflict of rational commitments by
relinquishing all the grounds of your commitments—in other words, by having no beliefs
and intentions whatsoever. But there are less trivial ways to escape such conflicts as well.
Typically we can eliminate a conflict of commitments by revising a single attitude. In the
example given above, you can escape the conflict by ceasing to intend to z; by revising
your belief that you ought to x; by revising your belief that y is necessary for z; and so
on. Furthermore, there is always at least one rational way of eliminating the conflict:
revising the irrational attitude or set of attitudes that gave rise to it. (Indeed, on my view
there is typically, though not always, a uniquely rational way of eliminating the conflict.)
Before explicitly contrasting the rational case with the moral one, let me
summarize our main conclusions about the normative nature of rational commitment.
First, rational commitments can be violated, and they can conflict. Second, violation and
conflict states are necessarily irrational. Third, it is always possible to escape a conflict of
rational commitments unilaterally and in a manner that is rationally permissible—
[27] This point can be borne out by many other examples. The simplest are cases of
inconsistency. If you believe that p, that q, and that q implies not p, then you are
committed to believing a contradiction. And surely at least one of your beliefs is
independently irrational.
[28] If the commitments are synchronic, then conflict entails that you are presently in a
violation state.
namely, by revising the irrational attitude or set of attitudes grounding one of the
conflicting commitments.[29]
Now imagine a parallel case in the moral domain. Suppose that you have
promised to have a dinner date with Jane on Friday night, and also promised to have a
dinner date with Sally on Friday night. It is impossible, we stipulate, to go on both dates.
Here we have a straightforward case of conflicting moral commitments.
A few observations are in order. First, note that you may potentially escape the
conflict by being released from one of the two promises. However, this is a matter that is
not entirely up to you. So, secondly, whether you do indeed escape the conflict depends
on matters extrinsic to your psychology. Most commonly, it depends on whether one of
your promisees will release you; but it may also depend on whether you have available
means of attempting to secure such a release (e.g. means of contacting Jane or Sally
sufficiently prior to the time of the dates). Third, the existence of such a conflict does not
necessarily entail that you have acted immorally in any way. The present example lends
itself to an interpretation on which you have immorally made at least one plan that you
had no intention of fulfilling. But it may just as well be that you made the plan with Jane
in good faith, and only later learned of a personal crisis of Sally’s that only you had the
ability to alleviate. Alternatively, you may have made the plans independently, and simply
not noticed the conflict. If such cases were suitably specified, it would be highly plausible
[29] I have been intentionally vague about the sense of possibility to be employed in the
claim that it is always possible to escape rational commitments. There is a sense of
possibility—call it psychological possibility—that might seem to falsify my claim that we
can always escape a conflict of rational commitments. For it might be that for certain
agents in certain circumstances there are no psychological means whereby they can revise
the attitudes in question. To get a grip on this, consider the sorts of cases given in
Frankfurt (1971): a nefarious neurosurgeon will intervene if I decide to revise my
attitudes and block my attempted revision. It is reasonable to wonder in what sense I can
really revise my belief that I ought to x if a neurosurgeon will intervene when I try to
revise it. I cannot deal with these complex issues at any length here. However, note that
my claim is primarily that the possibility of escaping from conflicts of rational
commitments is unilateral—it depends only on the agent who has the conflicting
commitments. It is compatible with this claim that external or internal impairments may
prevent the agent from exercising this unilateral capacity on some occasions.
to contend that your promise to take Sally on the date was a morally permissible one,
even though you had previously committed to Jane. So unlike conflicts of rational
commitments, which arise because of at least one irrational attitude held by the agent,
conflicts of moral commitments may arise without the agent’s having acted immorally at
all.
Some care is required if we are to adequately grasp this last point about the moral
import of conflicting moral commitments. Let us assume that to be immoral, an act must
be such that you are morally required to refrain from performing it. What I am claiming
is that there are cases in which an agent S comes to have conflicting moral commitments
even though none of the grounds of these commitments were immoral actions—that is,
actions that S was morally required to refrain from performing. For example, the ground
of S’s commitment to have dinner with Jane is his promise to Jane, and his making of this
promise was morally permissible. Likewise, the ground of S’s commitment to have
dinner with Sally was his promise to have dinner with her, and his making of this promise
was morally permissible—provided his excuse is something like a pressing existential
dilemma of Sally's, or simply sloppy calendar-keeping.
This does not necessarily mean that there is nothing wrong with making the latter
promise. In my first scenario, S knows that by doing so he places himself under the scope
of conflicting commitments, and that this means he will inevitably disappoint one of the
two women. But the kind of wrongness that we would ascribe to S’s action, if we would
call it wrong at all, is not sufficient to make him morally required to refrain from
performing that action. All things considered, it is preferable that S make (and keep) his
promise to Sally. Put differently, the morally best state of affairs accessible to S is the one
in which he escapes his commitment to Jane, and makes and lives up to the commitment
to Sally. And the second best state of affairs accessible to S is the one in which he
violates his commitment to Jane and lives up to his commitment to Sally.
I said that care is required here because the vagueness of expressions such as “is
immoral” can generate unnecessary confusion. I use this expression in a sense that is
analogous with the sense in which I employ “is irrational” above and throughout this
paper. These expressions refer to the decisive normative relations of their respective
domains. (For this reason I have taken care to mark such relations with the term
‘requirement’, which I regard as our paradigmatic bearer of decisiveness.) So while I am
happy to agree that there is a respect in which the violation of a moral commitment is
morally lamentable, and will indeed offer an account that attempts to make this idea more
precise, this is perfectly compatible with the claim that such a violation is not immoral.
Moreover, it is sometimes the case that something even more counterintuitive obtains:
morality requires you to violate a moral commitment.
The following sketches will clarify the various classes of situations that concern
us:
• Conflict of moral commitments grounded in an immoral act. S promises to dine with
Sally merely to get her out of his hair. He has no intention of breaking off the date with
Jane. This conflict is analogous to all cases in which rational commitments conflict—the
conflict arises because an impermissible ground is adopted.
• Conflict of moral commitments without any immoral act. The original Jane-Sally case. S
promises to dine with Jane, then later promises to dine with Sally, correctly judging the
latter to be more important. Surely there are some situations in which Sally’s predicament
could warrant S’s making and keeping such a promise, even absent prior permission from
Jane.
• Conflict of a moral commitment with another, weightier moral consideration. S has
promised to dine with Jane. On the way to his car he sees a man bleeding on the ground.
He takes him to the hospital and cannot get to a phone until Jane has already left the
restaurant, distraught and angry. Here S has fulfilled a moral requirement, but doing so
has necessitated the violation of a moral commitment. The violation is regrettable, but it
would be unfair to accuse S of immorality. Even the perturbed Jane should come to
dismiss any resentment she feels towards S once he apologizes and explains the situation.
This range of cases demonstrates that the relation of moral commitment is in
some ways more complicated than the relation of rational commitment. One way to
conceive of the main difference is by observing that moral commitments do not provide
as simple a guide to the moral nature of a situation as rational commitments provide to
the rational nature of a situation. The latter are a self-standing diagnostic tool: when there
is a conflict of rational commitments, there is also irrationality; when there is a violation
of a rational commitment, there is also irrationality. Not so for the former. A conflict of
moral commitments can mean that one of several things has occurred. Such a conflict can
be due to immorality, but it can also be due to a change in circumstances, or to simple
carelessness. And the violation of a moral commitment does not necessarily entail any
immorality at all. Sometimes it is even immoral to refrain from violating a moral
commitment.
Thus the central normative difference between the two types of commitment can
be summed up as a difference with respect to a kind of guarantee of the possibility of
success. Necessarily, you can rationally refrain from violating your rational
commitments. Stated differently: there is always an accessible and rationally permissible
world in which you refrain from violating your rational commitments. However, there is
not always an accessible and morally permissible world in which you refrain from
violating your moral commitments. Sometimes the only morally permissible worlds are
ones that involve the violation of a commitment.[30]
In this section I have granted that there are some key differences between the two types
of commitment we are investigating. But I have also shown how these differences can be
traced to a fundamental and very simple Acquisition Fact. Moreover, it should be clear
[30] This paragraph brings out the sense in which a skeptic about my claims about the
possibility of avoiding conflicts, and violations, of rational commitments can still endorse
the crucial distinction I have made between the moral and rational cases. For it remains
true that, necessarily, it is rationally impermissible to have conflicting, or violated,
rational commitments; whereas it is not true that, necessarily, it is morally impermissible
to have conflicting, or violated, moral commitments.
that the different modes of acquisition correspond to, and are predictable on the basis of,
central differences between the moral and rational domains. In other words, it is precisely
the reflexivity of rationality—the fact that the important set of rational norms under
consideration concerns the relations we stand in to ourselves—that explains the fact that
rational commitments are unilaterally acquired. And it is precisely the non-reflexivity of
morality—the fact that the important set of moral norms under consideration concerns
our relations with others—that explains the fact that moral commitments cannot be unilaterally acquired. This
principled explanation defuses the worry that we are really talking about two unrelated
things. The chief difference between moral and rational commitments is not a difference
in the kind of thing they are. It is a difference in the way we come to possess them, and
this difference is a natural consequence of the fact that morality and rationality are about
different things.
5.3 Commitments and the Strict Pro Tanto
I’ve made several claims about the essential features of commitment, but the most
controversial is the claim that commitments are pro tanto and strict—to be distinguished,
that is, from paradigmatic exemplars of the pro tanto. It is natural to assume that reasons
set the standard for our understanding of the pro tanto, and the idea that something can be
pro tanto but still forceful in a way that reasons aren’t requires defense. In this section I
attempt to give an account of this puzzling feature. Addressing it is worthwhile because the
puzzle helps explain why philosophers have yet to appreciate the importance of
commitment.
Above I argued that commitments are the bearers of a characteristic normative
force, one that reasons lack. There are, I suspect, two main reasons that readers might be
skeptical about this claim. First, it might be worried that commitments are merely
particularly strong reasons. And while there is surely an intelligible distinction to be
drawn between strong and weak reasons, it is hardly something to get worked up about,
and does not merit the introduction of potentially confusing distinctions. Second, even
granting my claim that commitments do possess some kind of interesting and
characteristic normative force that deserves attention, one might remain doubtful about
whether this force is at all similar in the moral and rational cases. If it were not, then this
might call my attempt to unify the cases into question.
I’ll respond to both of these worries in what follows. Before doing so, I briefly
consider John Broome’s discussion of the notion of strictness. I do so because I have
lifted the term from him, and because my views represent a significant departure from his
thinking about these issues.
A. Broome’s suggestion and its inadequacy
Though I borrow the term ‘strict’ from Broome (1999), his characterization—that a strict
relation is one whose violation entails that the agent is not as he ought to be, or is failing
in some respect—cannot generalize into an account that would give the strict-slack
distinction a clear and useful role in our normative theorizing. Nonetheless, I think that
Broome was on to something. When we violate a commitment, we fail in some way. But
it is not the same failure as the failure to act on a reason, or the failure to do what’s
required or what ought to be done.
Broome’s suggestion is inadequate because it is either vague (and likely too
permissive) or redundant. If strictness is defined as being such that violation amounts to
failing in some respect, then arguably all normative relations will count as strict. For
when you act against a reason, you fail in some respect—you fail to act in accordance
with the reason. This cannot be right, since Broome’s paradigm of the slack was the
reason relation. Perhaps another interpretation of ‘failing in some respect’, one which
excludes acting against a reason, is available. But any more complicated account would
face the worry that it is completely ad hoc.
If strictness is defined as being such that violation entails that the agent is not as
he ought to be, then the concept of strictness does no work for us. On this interpretation it
merely stands in for the decisive ‘ought’. Maybe this is what Broome had in mind, and
only hoped to introduce the terminology as a way of restating his claims about normative
requirement. In any case this view is unattractive, since there is a kind of normative
relation that should be distinguished both from decisive relations and from paradigmatic
pro tanto relations. We turn now to a more detailed consideration of my reasons for
holding this position.
B. Two Worries
I have claimed that commitments have a distinctive kind of forcefulness. One reasonable
objection to this view is that commitments are just especially strong reasons. So while
there is a distinction between commitments and reasons simpliciter, the distinction is not
an interesting one. A second important objection is that, even if I were right that
commitments do have some kind of distinctive force, the force is not at all the same in the
moral and rational cases. This calls into question the unifying picture that the chapter
tries to motivate.
That the first objection cannot succeed is demonstrated by the following
examples. Suppose I believe that I ought to jump to the moon. By hypothesis, I am
thereby committed to intending to jump to the moon. But it would be absurd to suggest
that this commitment consists in my having an especially strong reason to intend to jump
to the moon. Plausibly I cannot have reasons for intending to do things that are
impossible. In any event, I certainly cannot bootstrap a strong reason for jumping to the
moon into existence merely by forming an irrational belief.[31]
Nonetheless, there does
seem to be some normative pressure on me to intend to jump to the moon. The
normativity of rational commitment cannot, then, be conceived of as reducing to the
normativity of reasons.[32]
[31] Since commitments are on my view normative, it seems that beliefs like this must lead to some form of bootstrapping. Why is this not equally objectionable? I postpone answering this question until it is clear what I take to be bootstrapped, but I acknowledge that the question is a good one.
[32] Some philosophers (e.g. Schroeder (2009)) think that we should countenance a category of subjective reasons, and it might be argued that rational commitments can be thought of as strong subjective reasons. But it is unclear what role we should want subjective reasons to play; if they ground justification, in the way that epistemic reasons are typically assumed to ground the rationality of belief, then they will not line up with commitments. I will not have sufficient subjective reason to intend to jump to the moon (insofar as this requires that my intention be justified), but I am certainly committed to having this intention.

Now suppose I promise Billy that I’ll drive the getaway car after his heist. By hypothesis, I am thereby committed to driving the getaway car. But it would not be reasonable to suggest that this commitment consists in my having an especially strong reason to drive the getaway car. I might have some reason to drive it, since Billy is counting on me, but it strains credulity to suggest that this is a particularly strong reason. Nonetheless, there does seem to be some kind of normative pressure on me to drive the getaway car. The normativity of moral commitment cannot, then, be reduced to the normativity of reasons.[33]

[33] The case of patently immoral promises raises issues that I cannot adequately discuss in this chapter. I mention them below where they emerge as a potential problem for my characterization of the strictness of commitments. Note that the point in the main text could be made with examples of banality rather than immorality. For instance, if I promise to give John a penny, then it will be hard to defend the view that my commitment to giving him a penny should be analyzed as my having an especially strong reason to give him a penny. In normal cases there is simply no plausible candidate for such a reason.

Notice also that the thesis that commitments are merely strong reasons cannot be reconciled with my claims in section 1C about the essential features of commitment. At best, commitments would have to be a particular type of strong reason, individuated from other strong reasons by, for example, their agent-dependence. Of course, I take the preceding examples to illustrate that this amended view is still unsatisfactory.

The normative force of commitments is simply a different kind of normative force than that of reasons. Neither is reducible to the other. This relates to the second objection. Assume that my reply to the first objection is adequate. We might still wonder whether the normative force of rational commitment is at all related to the normative force of moral commitment. After all, this is by no means an obvious conclusion to draw concerning the cases just introduced. So even if it’s clear that commitments are not just strong reasons, it isn’t clear how we should explain their distinctive normative force. And
it seems as if we need such an explanation in order to vindicate the unifying hypothesis
driving the chapter.
Actually this is too quick. If it were granted that rational commitments have a
distinctive normative force—distinct, that is, from the force of both reasons and rational
requirements—and it were granted that moral commitments have a distinctive normative
force—distinct, that is, from the force of both reasons and moral requirements—then I
think this would be evidence for and not against the unifying hypothesis. Nowhere have I
said that moral and rational commitments share every important feature; on the contrary,
I’ve been explicit about their various differences. The general argument has been that
they have enough in common to justify thinking of commitment itself as a normative
phenomenon with important applications in both the moral and rational domains. And
sharing the feature in question—having a normative force that’s not reducible to the
normative force of our central concepts of reason and requirement—constitutes quite a
robust similarity.
That said, I think I have a more ambitious response to this second worry. There
are constraints on the committed agent, and we can formulate these constraints in a way
that helps to capture the essential normative force of commitments while also applying to
both the moral and rational cases. I turn now to this project.
C. Some Constraints on the Committed Agent
What I’ll claim in this section is that being committed involves being under the
jurisdiction of certain general agential constraints. I do not intend to show that these
constraints constitute what it is to be strict. All I mean to suggest is that we can
understand the distinctive normative force of commitments by thinking about the
constraints they place on your deliberation and your reactive attitudes. These constraints
demonstrate that there’s a clear contrast between commitments and reasons.
First, consider the fact that being committed structures your deliberation in some
important way, insofar as you are moral and rational. If you’ve promised to x, then, even
if you ultimately break the promise, it seems callous to completely disregard your
commitment to x. Likewise, if you’re committed to intending to x, then it is appropriate
for this commitment to structure your deliberation in the sense that, even if you
ultimately end up intentionally doing something that’s incompatible with x, you don’t
completely disregard your commitment to intending to x. Thus I propose the following
deliberative constraint on commitments:
DC: If you are committed to x-ing (or to having attitude A), and you are deliberating
about whether to do something (or form an attitude) that you know to be incompatible
with x-ing (or having A), then, in normal circumstances, it is appropriate to at least take
your commitment to x-ing (or having A) into account in these deliberations—and
inappropriate to fail to do this.
I want to offer a few clarificatory remarks on the content of this principle, but first let me
forestall one obvious objection. Many philosophers think that we cannot deliberate about
what to believe. So they will object to DC on the grounds that it cannot capture a whole
swath of the cases I want to capture—namely all those cases in which what’s at issue is a
rational commitment to believing some proposition or set of propositions. There’s an
easy response to this objection. If you think that we don’t deliberate about what to
believe, simply substitute ‘reasoning’ for ‘deliberation’ in DC.
The guiding idea behind the principle is that being committed puts you under
normative pressure to bear this commitment in mind, in particular when you are
considering performing actions, or forming attitudes, that are incompatible with your
commitment. This is the minimal concern that a moral and rational agent will have with
his commitments; hence the ‘at least’ qualification. DC needs to be weak—in the sense of
capturing this minimal condition rather than something more—because there are cases in
which you are required to violate a commitment, as we have discussed. Even in these
cases, though, it is wrong to violate DC. And this vindicates the claim that commitments
play a distinctive role in our mental lives, since an analogue of DC does not hold for the reason relation.[34]

[34] Argument: I have a reason to eat pizza for dinner—I like it. But tonight pizza isn’t my top choice. I deliberate about dinner, and consider cooking pasta or going to a Thai restaurant, but not having pizza. I have done nothing inappropriate. Generalizing, an analogue of DC does not hold for the reason relation.

It is also important to be clear on the meaning of ‘taking into account.’ This certainly does not mean that you must, when you have a commitment, make all subsequent practical deliberation conditional on the intention to perform the act you are committed to performing (in the moral case), or all reasoning such that it holds fixed the attitude you are committed to having (in the rational case). Such a requirement would be far too strong, since sometimes you ought to violate or escape your commitments. Rather, the point is that the moral and rational agent is one who avoids being indifferent or callous with respect to his commitments. Even when he encounters circumstances in which it is morally appropriate to violate a commitment, he takes time to reflect on the commitment itself in the course of his deliberations. A refusal to do this demonstrates an inappropriate stance towards the commitment.

I will add a note on the caveat ‘in normal circumstances.’ There are exceptional contexts in which DC, without this caveat, would be overly demanding. If a thief demands your wallet at gunpoint, then, depending on the exact details of the situation, it may be perfectly appropriate for you to conclude your deliberation with an intention to surrender your wallet, without having considered a commitment that you will violate on account of this action. (It isn’t entirely clear, though, that you would be engaging in genuine deliberation here.) The possibility of such extenuating circumstances restricts the scope of DC, but I do not think it obscures the main insight—that commitments occupy a distinctive place in the deliberative consciousness of the moral and rational agent.

Finally, let me say something brief in response to a potential objection. Why, the objector asks, is it appropriate for me to take a commitment into account in my deliberations if the commitment is one it’s inappropriate for me to have in the first place? Imagine that I irrationally believe that I ought to jump to the moon; or imagine that I’ve promised Billy that I’ll drive the getaway car. How could my commitments in these cases themselves make it appropriate for me to pay attention to them in deliberation? Isn’t this an objectionable form of bootstrapping?
I think not. I have not said that these commitments make it the case that you ought
to satisfy them, or that you have reason to satisfy them. All I’ve said is that they make it
the case that it is appropriate for your deliberation to be structured around them in a
minimal way. And this seems to me correct. If I were to genuinely believe that I ought to
jump to the moon, then there would be something inappropriate about my not
incorporating my commitment to intending to jump to the moon in my deliberations. That
commitment would not be playing its proper functional role in my rational agency.
Likewise, my neglecting the fact that I’m committed to driving the getaway car in the
course of deliberations would be inappropriate; for one thing, it would indicate that I
don’t take the institution of promising sufficiently seriously. Again, it would be to deny
commitment its proper functional role in my moral agency.[35]
Next, consider the fact that being committed structures your emotional reactions
in an important way, insofar as you are moral and rational. If you’ve promised to x, and
you break the promise, it seems inappropriate to be completely devoid of ambivalence,
sadness, or some regret-like attitude. Likewise, if you’re committed to intending to x, and
you do not intend to x, then it seems inappropriate to be completely devoid of rational
analogues of the above-mentioned moral emotions. I acknowledge that we aren’t
especially used to thinking of rational analogues of ambivalence, sadness, or regret. But I
think there are such analogues. Focusing for the sake of simplicity on regret, it seems to
me that there is a mental state of something like cognitive embarrassment that we
characteristically feel when we realize one of our own rational shortcomings in the form
of a violated commitment. For example, consider the feeling of realizing that there is an important and obvious consequence of your theory that’s potentially disastrous and that you’ve stupidly overlooked. If you are rational, then there is a distinctive psychological state that accompanies such a realization. I think it’s plausible to regard this state as akin to regret or shame or embarrassment in moral cases.

[35] The case of coerced promises is a different story. It need not be inappropriate to fail to consider a commitment that arises in such a fashion, since it need not indicate any failure to take the institution of promising seriously. This makes me think that there are some background conditions, among them coercion, which undermine promissory commitments—some situations, in other words, in which promising does not give rise to a moral commitment. Unfortunately I cannot explore this issue in any detail here.
Thus I propose the following reactive attitude constraint on commitments:
RAC: If you are committed to x-ing (or to having A), and you fail to x (or to have A),
then, in normal circumstances, it is appropriate to feel regret, or some regret-like
emotion—and inappropriate to fail to feel this.[36]
The claim, then, is that another way to see that commitments play a distinctive role in the
psychology of the moral and rational agent is to observe their intimate connection with
reactive attitudes that are typical indicators of the seriousness of certain considerations.
Philosophers sometimes argue for the existence of moral dilemmas by appealing to the
fact that in some cases it will be appropriate for the agent to feel regret no matter what
she does.[37]
The influence of this argument illustrates that many of us often take the
appropriateness of regret to indicate that what is regrettable must be a serious violation—
in the present case, it is supposed to suffice to indicate that the agent has failed to do what
she ought to have done. But the account I’m sketching also suggests that we should reject
such an argument. It is appropriate to regret violating a commitment even though
commitments are pro tanto. So the appropriateness of regret when a given consideration
is violated cannot alone support the claim that this consideration is decisive.
[36] By regret-like emotion I mean to include cognitive embarrassment, shame and ambivalence, and agent-regret (at minimum). For the concept of agent-regret see Williams (1982). The idea here is that you might not regret having acted as you did, but you should at least regret that you had to violate a commitment, or something to that effect.

[37] E.g. Williams (1966) and Marcus (1980).
RAC applies even when you ought to violate a commitment. It would be callous
of S to feel no regret about the fact that he has inconvenienced or disappointed Jane, even
if we assume that going to dinner with Sally was the right thing to do. Likewise, it would
be appropriate of me to feel cognitive embarrassment on account of not intending to jump
to the moon when I’m committed to having this intention. And again, no analogue of
RAC is plausible for the relation of having a reason.[38]
One further point deserves mention here. The ‘in normal circumstances’
qualification in RAC may encompass a bit more than it did in DC. For there are some
special ways in which agents may be able to unilaterally “escape” promissory
commitments, and which I mean to exclude with this qualification. For example, I can
presumably get out of a commitment to taking a girlfriend to dinner by instead taking her
to Barcelona.[39]
Call these types of cases welcome surprises. Welcome surprises do not
call DC into question, because in deliberating about whether to take my girlfriend to
Barcelona it is surely appropriate to bear my commitment to taking her to dinner in mind.
But they do constitute counterexamples to RAC, since I should not regret violating my
promise when I do it one better.[40]
[38] Argument: I have a reason to eat pizza for dinner—I like it. But tonight I don’t especially feel like pizza. I eat pasta instead, and don’t regret not eating pizza. I have done nothing inappropriate. Generalizing, an analogue of RAC does not hold for the reason relation.

[39] I’m assuming that a robust theory will give us more precise conditions concerning what needs to obtain in such a case. For example, something like the following is necessary: I must rationally believe that my girlfriend would strongly prefer a Barcelona vacation to dinner. I pass over these complications in the text.

[40] It is also important to recall here my earlier note about coerced promises. Plausibly coercion, and other forms of immoral conduct, can render promises insufficient for grounding moral commitments. A fully worked out theory of moral commitment would include an account of these kinds of undermining conditions. At this juncture I am happy to allow the qualification in RAC to stand in for such an account.

Once again it might be thought that RAC sanctions an implausible form of bootstrapping. If being committed cannot ground the existence of a reason, how can it ground the appropriateness of regret when the commitment is violated? But I think the
objection misinterprets the intuitive phenomena. Even in the difficult case of promises to
do immoral things—like my promise to Billy that I’ll drive the getaway car—it is
appropriate for the agent in question to feel some kind of regret-like emotion when he
violates his commitment. At the very least, he would be behaving callously towards the
institution of promising were he not to feel bad about having made and broken this
promise. This is of course compatible with his feeling that breaking the promise was, all
things considered, what he ought to have done. Similarly, it can be appropriate for agents
to feel regret or agent-regret even when they make a promise that they ought to have
made and violate it only because of some pressing moral necessity.[41]
DC and RAC capture some interesting ways in which commitments play a unique
role in the mental life of the moral and rational agent. To coin a slogan, commitments
cannot be permissibly ignored, though they can be permissibly violated. This is one
intelligible sense we can give to the initially puzzling claim that commitments are strict
and pro tanto.
Conclusion
We routinely use the term ‘commitment’ in ordinary conversation when talking about
normative phenomena like promising. The term is employed ubiquitously in
philosophical discourse to capture normative claims about rationality. These uses are not
merely accidentally related. They are instances in which we invoke a general normative
kind, the relation of being committed. If the main ideas of this chapter are correct, then
this commitment relation plays a central role in some of the most important issues in
practical philosophy. It informs our understanding of promissory obligation, and more
generally the normativity of agreements; it gives us new insights into the complexity of
the demands of rationality; and it helps us draw crucial distinctions in the theory of normative concepts. We have good reason to desire an account of the nature of commitment. Indeed, our interest in normative questions commits us to providing one.

[41] For example, Mark promises Maria that he’ll be at the chapel at nine a.m. On his way he observes a terrible accident, and feels morally compelled to rush the victims to the hospital. (For completeness, we add that he has no reasonable way of contacting Maria to secure release from the promise.) Consequently he arrives late to the chapel, violating his commitment. It would be callous of Mark to feel no regret (or regret-like emotion), though he has not performed any morally impermissible action.
CHAPTER SIX: APPLICATIONS
One main contribution to emerge from chapter five was a theoretical framework for
motivating the postulation of a distinction between categories of pro tanto normative
concepts. Commitments, I argued, are essentially different from reasons, our paradigm of
the pro tanto. Further, I suggested that attention to the difference between the strict and
the slack pro tanto was likely to result in a better understanding of various normative
phenomena. This chapter is concerned with lending credence to the latter view by
demonstrating some helpful instances of the theory’s application. In other words, the
proof is in the pudding. More precisely, the idea is that substantial support for the account
of commitment I’ve provided, and for the strict-slack pro tanto distinction that is the most
interesting and controversial element of that account, will come from an investigation of
what it allows us to say about otherwise confusing debates, examples, or theories.
The general thrust of my view is that, whether or not the constraints I articulated
near the end of chapter five constitute an analysis of strictness, there is such a property
(or constellation of properties), and it can play a useful role in normative theorizing.[1] In
order to illustrate this point, I offer several examples of influential contemporary
philosophical views that can be helpfully elucidated by appealing to the resources of
chapter five. First, I argue that Michael Bratman’s view that there are both internal and
external norms on intention may be illuminated with the account of commitment I’ve
given.[2]
Next, I show how this account can help us to articulate an attractively moderate
position about the existence of moral dilemmas. Finally, I present Joseph Raz’s original
theory of exclusionary reasons, and argue that my account gives us the materials to
clarify and improve upon it.[3]
[1] For the record, I do not take myself to have given an analysis, and I don’t think this matters.

[2] Bratman (1987).

[3] Raz (1970).
I conclude the chapter by returning once more to the main thread of the
dissertation and exploring one prominent argument for narrow scope conceptions of
rational requirements, Niko Kolodny’s process argument. I show that this argument is
more usefully interpreted as an insight into the nature of rational commitments than as an
objection to wide scope conceptions of rational requirements.[4]
Once again, this analysis
underscores the sense in which my take on the debate about the formulation of rational
requirements is a conciliatory one. And it likewise reinforces the guiding idea of the
present chapter: that the account of commitment gives us the materials to resolve, or at
least better understand, several fundamental debates about diverse issues in practical
philosophy.
6.1 A brief review of the position
Here are the bare essentials of the account I offered in chapter five.
First, I distinguished between pro tanto and decisive normative concepts. Pro
tanto concepts weigh in favor of actions or attitudes individually. Decisive concepts take
everything relevant into consideration and deliver a final judgment about some domain.
So a decisive concept is like a function that takes as input all relevant considerations,
performs a weighing operation, and delivers as output a verdict about what action to
perform or what attitude to have. Commitments are not like this; they are clearly pro
tanto. To be committed is not to be required. Indeed, you can be required to violate a
commitment.
Next, I distinguished between strictness and slackness. In a sense this distinction
was meant to track a dimension of strength, but we observed that it is hard to precisely
cash out this thought, since we naturally think about strength in terms of the weight of
reasons. (As we saw, something can be strict without giving you reasons in the standard
sense; you can be rationally committed to having an attitude without having any reasons
to have that attitude, even though the commitment is strict.) Strictness is a term of art
whose role is to mark off an interesting and underappreciated distinction between types
of pro tanto normativity. My account of strictness in chapter five appealed to the special role that certain kinds of consideration play in the deliberation and reactive attitudes of a well functioning agent. The claim was that reasons do not necessarily assume this role.

[4] Kolodny (2005) and (2009).
My account of commitment also distinguished commitments from reasons by other
features—e.g. their agent-dependence and escapability. We need not rehearse these claims here. The point of
this chapter is to see the category of the strict pro tanto applied. None of the applications
will require an especially detailed familiarity with other particulars of the account of
commitment I’ve given.
6.2 What the theory can do
Understanding Bratman on internal versus external points of view
One extremely interesting and perhaps underappreciated section of Michael Bratman’s
Intention, Plans, and Practical Reason is the discussion of internal and external
perspectives on the rationality of a given intention.[5]
My aim here is to articulate
Bratman’s stated view and argue that we are in a position to make this view more precise.
Bratman has two goals for his theory of intention that are relevant here, and they
both relate to his rejection of, and desire to improve upon, the view that intentions
provide us with reasons for the intended action. First, he thinks that intentions ground
rational constraints that have a “special nature”; they do not merely contribute one reason
for action among many (42). For example, among his central contentions throughout the
book is the claim that intentions have the function of guiding and focusing deliberation
by filtering the admissibility of new options. (They do this by excluding options that are
inconsistent with the agent’s current set of intentions and beliefs). Second, Bratman
wants to avoid the bootstrapping problem that saddles the view that intentions provide
reasons. We have already discussed similar problems in chapter one; the idea is just that
merely forming an intention seems insufficient for creating a reason to perform the
intended action.
[5] Bratman (1987: 42-49).
The motivation for positing a distinction between internal and external
perspectives on the rationality of an intention comes from a tension between these two
commitments. The problem is that the idea that intentions ground a special set of rational
constraints on further deliberation seems itself susceptible to a version of the
bootstrapping worry. Consider Bratman’s example. Mondale is set to debate Reagan, and
he has already formed the intention to attack Star Wars. This filters his admissible
options, since some possible options are incompatible with it (e.g. asking a question
about Middle East policy). Assume that Mondale correctly judges that, of the admissible
options, question q is the most rational one to ask.[6]
Then it seems like Mondale ought to
both believe that he ought to ask q, and form the intention to ask q. This intention is, by
hypothesis, his best admissible option. But suppose that the original intention to attack
Star Wars was irrational. Then we seem to have a case in which an irrational intention
bootstraps into rationality the derivative intention to ask question q.
Bratman’s response to this worry is to draw a distinction between two points of
view from which the rationality of intentional action can be legitimately assessed. From
the internal point of view, an agent’s prior intentions play their distinctive role of
providing standards of relevance and admissibility for new options. Thus the internal
perspective is plan-constrained. However, we may also assess the rationality of an
intention from a more external perspective, one that is not similarly plan-constrained.
From this perspective we attempt to determine what course of action is best for the agent
independently of his prior intentions. The crucial point is that from this latter perspective
we are evaluating the choice-worthiness of an agent’s options at a particularly
fundamental level. We are not limited by the filter of prior intentions, intentions which
may themselves be irrational.
[6] Bratman puts this point by saying that, of the admissible options, q is best supported by Mondale’s desire-belief reasons (43). I will stick to formulating things in terms of the option or intention that is most rational, since, as we have seen (e.g. in the three envelope problem), there are substantial complications about reducing a claim about the rationality of an intention to a claim about the reasons favoring the intention.
Now this is not to say that the limitations imposed by prior intentions are
irrelevant or even unfortunate. On the contrary, Bratman is very concerned to elucidate
the sense in which these constraints are pragmatically essential—without such
constraints, we would be incapable of securing the massive advantages that accrue to
rational planners.[7]
The claim is that intentions do not provide reasons in the paradigmatic
sense (44). They may provide “framework reasons”, like a reason to refrain from
reconsidering the intention. But an intention to attack Star Wars does not give Mondale a
reason to ask question q, even though it makes it internally rational for Mondale to ask q.
This is how Bratman avoids the bootstrapping problem.
Importantly, Bratman acknowledges a “clear potential for divergence” between
the assessments of the two perspectives (45). In Mondale’s case, the asking of question q
was internally rational, given the deliberative perspective at the time of the debate; but
asking q was irrational from the external, non-plan-constrained perspective.
It is natural to wonder how these divergent assessments are to be reconciled or
integrated into our overall theory of normativity. Bratman addresses this question, but his
suggestion is perplexing. He claims that
If we take the internal, plan-constrained perspective of the deliberating agent we may find it [the
intention to ask q] rational, all considered; and yet we may still determine that it is, all considered,
not rational from the external perspective…We may put the point in terms of a distinction
between two kinds of ought judgments. In his deliberation Mondale aims at reaching a judgment
of what he ought, on balance, to do—where this ought judgment is to be made from the internal,
plan-constrained perspective of his deliberation. Call this an internal-ought judgment. From the
external point of view of non-plan-constrained rationality we seek an, on balance, external-ought
judgment (46).
[7] To cite just one example of such an advantage, it is surely true that by disposing us to refrain from reconsidering the practical question ‘shall I φ?’, an intention to φ allows us to act more efficiently—e.g. by allowing us to proceed to other pressing practical questions—than creatures who were not so disposed.
There are a few curious things about this passage. First, Bratman admits that on his view
there are distinct, and potentially competing, types of all things considered rational
judgments. This is at least a highly controversial commitment of the view he proposes.
All things considered concepts—decisive concepts, in my preferred terminology—do not
seem to admit of conflicts. It is attractive to think that there can be only one type of all
things considered concept for any given normative domain.
We can set aside this assumption about conflicts. The problem with Bratman’s
way of presenting the idea is more general: it raises more questions than it answers. Why
should we think that there are two irreconcilably opposed “rational perspectives”? Or, to
put the point slightly differently, why should we grant that there are two equally
legitimate and forceful sets of rational norms that can and often do conflict with one
another?[8]
If this is true of the rational domain, does something similar apply in other
normative domains? This suggestion leaves us with a host of residual questions, but the
most curious thing about it is that it prevents us from giving a straightforward account of
what it would be most rational of Mondale to do.[9]
Finally, it is perplexing that Bratman claims that Mondale, in the course of his
deliberations, attempts to reach an internal ought judgment. This makes it seem as if the
external perspective is wholly disconnected from the agent’s actual processes of
reasoning. But surely a rational agent will, at the very least, attempt from time to time to
‘externalize’ his deliberation as a means to self-correction. For example, it would
presumably be completely rational of Mondale to spend some time before the debate
thinking about whether his strategy of attacking Star Wars is really the way to go. Of
course, this does not at all mitigate the point that Bratman is chiefly interested in: the
[8] Bratman does think that these sets of judgments are on a normative par: “We should not suppose that either ought judgment is more objective than the other…Nor should we suppose that one generally takes precedence over the other” (46).
[9] In my framework, this is tantamount to saying that there is no such thing as ‘rational requirement’, since that concept is just the concept of what you must satisfy in order to be fully rational. Neither of Bratman’s ought judgments could be construed in this manner because neither outweighs the other. In Mondale’s case, there is no way for him to satisfy both; so he must inevitably have an intention that he ought rationally not to have.
intention to attack Star Wars still places strong pressure on Mondale to refrain from
reconsidering his strategy, and it likewise filters the admissibility of new practical options
in some important way. But if these pressures and constraints were completely binding—
if it were always all things considered irrational to reconsider an intention—then we could
never rationally change our plans. So the natural thing to say is that in deliberation
rational agents aim at reaching judgments about what they externally ought to do, though
they often do this by reasoning through the filter of internal ought judgments. Bratman’s
external ought is the proper end of practical deliberation; his internal ought is perhaps the
pragmatically necessary vehicle for reliably achieving this end.
My suggestion in this connection is extremely simple. Bratman is right that there
are two legitimate perspectives from which we assess the rationality of intentions. (In
fact, the point is more general—an analogous distinction holds for our assessment of the
rationality of beliefs, and probably other attitudes as well.) But he is wrong to cash out
this idea by appealing to competing all things considered concepts.
The error likely arises in the following way. Intuitively, there is definitely
something rational about Mondale’s intending to ask q, given that he intends to attack
Star Wars. Indeed, asking q is the most rational way of executing the intention to attack Star Wars. But
since prior intentions legitimately constrain deliberation, and intending to ask q is the most rational intention to have in this constrained deliberative context, it is tempting to think that it must be all things considered rational to intend to ask q. Once this is
granted, Bratman needs to worry about bootstrapping. And this motivates him to adopt
the dual-ought conception we have just rehearsed.
But though it is natural to engage in this pattern of reasoning, it should
nonetheless be resisted. We can accept that there is definitely something rational about
Mondale’s intention to ask q, but reject the idea that this something grounds any sort of
all things considered rational assessment.[10]
The intention is rational because it is a
necessary means to an intention that Mondale already has, namely the intention to do his
[10] To put the point the way it was stated in chapter four, the dual concepts of rational requirement and permission cannot be the ones we appeal to in saying what goes right in the Mondale case.
best to attack Star Wars. As such, Mondale is committed to intending to ask q in virtue of
this prior intention. And being committed to having this intention certainly speaks in
favor of having it in something like the way Bratman articulates. We are cognitively
limited creatures, and the only way we can perform the complex tasks that we engage in
is to have a standing disposition to stick to our commitments. This is a rational
disposition, and its manifestations are rational. On this Bratmanian style of explanation,
the strictness of rational commitments plausibly derives from their being embodiments of
good practical dispositions. It is not just that we have a reason to have, and exemplify,
such dispositions. Something much stronger is true: our rationality is (partially)
constituted by the possession and exercise of these dispositions. So there is substantial
normative pressure to exercise these dispositions in any given case.
None of this shows, however, that every instance of commitment satisfaction is
maximally rational. Rational commitments have attitudes as their grounds, and these
attitudes may themselves be irrational. When they are, the rationality of satisfying the
commitment is outweighed by the rationality of escaping it. Rationality favors Mondale’s
revising the intention to attack Star Wars; this is rationally better than forming the
intention to ask q. That’s why we should say that though Mondale is rationally committed
to asking q, he is rationally required to revise his prior intention. And that’s why we can
criticize Mondale after the debate for asking q, even if we agree that this was the best
way of attacking Star Wars.
One might object to this treatment as follows:
“The fix is unnecessary. There is nothing wrong with the dual-ought conception.
There are simply two legitimate modes or perspectives of assessment, and they comprise
separate domains. So it’s not as if we violate the no conflicts principle for decisive
concepts. Internal rationality and external rationality are just different beasts. The fact
that Bratman allows for irreconcilable conflict between them is an advantage of his
theory, not a cost.”
This argument is not persuasive. Recall the intuitive objections to the narrow
scope rational requirement version of the enkrasia requirement from chapter one. Those
objections claimed that it was simply implausible to allow that e.g. the suicide bomber’s
irrational belief that he ought to kill children could (decisively) rationally legitimize his
intention. The wide scope formulation was deemed obviously preferable, since in some
cases it would clearly be more rational for the agent in question to revise his normative
belief than to follow through on it and form the corresponding intention. This dialectical
move depends upon our intuition that the ‘internal’ and ‘external’ perspectives can be
compared and weighed against each other. The suicide bomber is in a situation analogous to Mondale’s; he has an attitude that is irrational, and this attitude commits him to
another intention.[11]
It would be rational in some sense to form the intention, but it would
be more rational, it would get these agents closer to full rationality, if they revised the
original attitudes. In sum, this response would require us to give up on the very intuitions
that motivate wide scope principles in the first place—intuitions about when ‘internal
rationality’ simply does not suffice for rational agency.
The theory of commitment gives us a clean way of drawing the distinctions
Bratman was trying to motivate. The explanation does not require a revisionist reading of
Intention, Plans, and Practical Reason. All it requires is a minimal departure from
Bratman’s terminology, and a rejection of the (non-essential) view that rationality can
issue in competing, decisive directives.
Moral dilemmas
In this section I’ll suggest that the theory of commitment may help to explain some of the
special features of some so-called moral dilemmas without resorting to implausible, or at
least extremely tendentious, theses about the concept of ‘ought’. The reader should note
that I don’t mean my remarks to apply to every case of a putative moral dilemma. I’m
restricting my attention to one kind of scenario, though to be frank I think this is one of
the most commonly deployed sorts of cases.
[11] This is not to say that their degrees of irrationality are on a par. As I’ve noted, one of the jobs of a theory of rationality must be to provide an account of comparative claims like this one.
In my view, the structure of many dilemma cases is as follows. S has most reason
to perform action r; but he has special reason to perform action q; only one of these
actions can be performed; thus there seems to be a real, open question about whether he
ought to perform r or q; and the defender of moral dilemmas claims that S ought to
perform r, and that he ought to perform q.[12]
Often the “special” reason is precisely what
we might now expect: a commitment of the agent flowing from a role he occupies
voluntarily or an important promise he has made.[13]
Again, I don’t mean to claim that this
is what happens in all cases of putative moral dilemmas. Note, though, that my theory
gives us the ability to distinguish some subtly different scenarios. For example, it might
be that commitments are only one example of the strict pro tanto. Perhaps there is a more
general category of moral obligations—like certain minimal obligations to family
members—that aren’t escapable, but that can be overridden or outweighed. If so, then a
case in which one of these familial obligations conflicted with an action that is better
supported by reasons could seem like a moral dilemma. In addition, conflicts of
commitments, or conflicts of commitments with other examples of the strict pro tanto,
could seem like dilemma cases.[14]
[12] Note that he may deny the principle of agglomeration [O(A) and O(B) implies O(A and B)], and thereby avoid the consequence that Agent ought to do something impossible when A and B are incompatible. I think that this is a poorly motivated move but will not defend that claim here.
[13] As in Sartre’s (1957) famous case of the young man deciding whether to join the French resistance or stay with his aged mother. Though the man might not have promised to stay with his mother, it is reasonable to think that his adopted role as a loving son commits him to taking care of her. Note that whether or not we regard these as commitments is really irrelevant to the point I’m making in this section. If they aren’t commitments, then they are still plausibly examples of “obligations” that are strict but pro tanto. I go on to make this point in other words in the text. In fact, I am highly dubious of the idea that all strict pro tanto considerations are commitments.
[14] Foot’s (2001) case of the unforeseeable conflict between weddings at which you have promised to be best man is a case in point. Owens (2008: 26) claims that this is indeed a dilemma case. I return to this below in my critique of Owens.
The theory I have articulated gives us the resources to better understand the
inclination to countenance the view that S ought to perform both actions, a view which
has the decidedly unattractive feature of entailing that he will necessarily fail to do
everything he ought to do. This position is likely motivated by thoughts about the
“impermissibility” of refraining from performing q, where these thoughts are driven by
reflection on the likely reactions that we (and S) would have were r to be performed
instead. However, we have provided a framework that explains how these kinds of
reactions can be appropriate even when the normative relation that grounds them is not
decisive. To be a bit more explicit: I have argued that it is appropriate to feel regret when
you violate a moral commitment even when you have acted exactly as you ought to have
acted. But this allows us to see that since ‘ought’ is a decisive relation, using these
intuitions about reactive attitudes to motivate a view about ‘ought’ may be misguided.
Thus the theory of commitment I have offered allows us to make a motivated conjecture
about what is behind certain attempts to establish the viability of genuine moral
dilemmas, and to diagnose what is right about these attempts, while also permitting us to
maintain a plausible theory of ‘ought’.[15]
In fact the point is more general. The account offered here gives us the resources
to diagnose similar features in a wider range of cases. Call a situation in which Agent is
rationally committed to having attitude A, but rationally required to refrain from having
attitude A, a rational dilemma. For example, Stan believes that he ought to spend the day
reading comic books, so he is committed to intending to do so; but really he is rationally
[15] I will note here a slightly different type of case that one might suspect will escape my treatment. In such a case, S has as much reason to perform r as he does to perform q. In my view this is not the typical structure of moral dilemma cases, though it might be the structure of Sophie’s choice (Styron 1979). I have nothing to say about these cases besides that I do not regard them as moral dilemmas. They are analogous to Buridan’s ass cases, and I propose to treat them that way: as cases in which S ought to (either r or q), and in which S does not fail to do anything she ought to do as long as she r’s or q’s. Once again, the fact that S will feel regret and other negative emotions whatever she does is simply poor justification for the claim that she necessarily fails to do everything she ought to do.
required to believe that he ought to spend the day studying for finals, and as a result
rationally required to intend to spend the day studying. What we should note is that a
whole swath of legitimate reactive attitudes and judgments—on our part as well as
Stan’s—are grounded by the non-decisive commitment he has to intend to spend the day
reading comic books. So, for example, we might properly take him to task for watching
television instead of reading comics, even though we know that he should really, all
(rational) things considered, be studying for finals. Likewise, we might praise an
especially diligent execution of his intention, knowing full well that the intention itself
was foolish. But the appropriateness of such reactions is not sufficient for establishing
that Stan intended as he was rationally required to intend.
Raz on exclusionary reasons
Background
One of the most influential discussions in Joseph Raz’s Practical Reason and Norms
comes when Raz gives his account of exclusionary reasons.[16]
In this section I argue that
though the cases Raz uses to motivate the idea of exclusionary reasons are important to
analyze, his account of these cases is inadequate. My main contention is that the
resulting theory of exclusionary reasons should be rejected in favor of a theory that is
closer to the account of commitment I have offered. We should countenance the strict pro
tanto before we countenance the notion of an exclusionary reason.[17]
In particular, we
should agree that some actions (like promising) give us special kinds of reasons without
agreeing that these reasons have lexical priority or excluding power over ordinary
reasons. As we will see, rejecting the latter idea allows us to preserve an intuitive view
about how reasons contribute to what we ought to do, a view that Raz cannot accept.
[16] Raz (1971, chapter one).
[17] As will emerge in the following discussion, I do not mean to argue against the exclusionary phenomenon generally. My view is that there might be cases in which something akin to exclusion does occur. The main claim of this section is just that promissory obligations and the commands of legitimate authorities are better explained on my model than on the model that takes them to constitute exclusionary reasons.
Exclusionary reasons are, according to Raz, second-order reasons. They are
reasons to discount other (first-order) reasons. Raz motivates the idea by giving some
examples that have become well known. I will quickly walk through two of these
examples by way of introduction.
While serving in the military Jeremy is ordered by his commanding officer to
appropriate a civilian’s van. His friend urges him to disobey this order. The friend says
that the reasons to refrain from appropriating the van without permission are weightier
than the reasons to appropriate it. Jeremy grants that his friend may well be right. But it
does not matter to him whether his friend is right. It is not his place to decide which
orders to follow. The role of a subordinate is, he argues, precisely to regard the order of a
superior officer as binding, even when following the order involves acting against the
balance of reasons.
Here is another case. Colin has promised his wife that in all decisions bearing on
the education of his son he will act only for his son’s interests, disregarding all other
considerations. Suppose Colin is now considering whether or not to send his son to a well
regarded private school. If he does, he will be unable to quit his job in order to devote
himself to writing the book that he deeply cares about completing. Additionally, Colin
knows that if he decides to send his son to private school, this will influence other
members of the community to do the same, including some who cannot easily afford the
expense. However, Colin believes that because of his promise to his wife he should
disregard these reasons altogether, assuming that they do not have a direct impact on his
son’s welfare.
On Raz’s view it is not particularly important that we agree with Jeremy and
Colin’s assessments. We might think that Jeremy is wrong to regard a superior officer’s
orders as categorically binding, and we might think that Colin is wrong to regard his
promise as excluding the weight of other considerations. But the crucial point of these
examples is that it is intelligible for the agents to deliberate as they do. Raz wants to
understand and account for these types of reasoning processes. And to do so he thinks we
need an analysis of the special type of consideration that they seem to employ. It is
important to note this tendency in Raz’s thinking, to which I will return shortly. I will call
his emphasis on capturing the way people actually reason his ‘psychologism’, and I’ll
show how it leads to some important oversights.
Raz introduces the idea of an exclusionary reason to explain the reasoning that we
see exemplified by Jeremy and Colin. An exclusionary reason is a reason for refraining
from performing some action A for a certain reason R. It is not a first order reason to
refrain from performing A. The command issued by Jeremy’s superior is a reason for
Jeremy to ignore certain first order reasons that, absent the command, might on balance
make it the case that he ought to refrain from appropriating the van. This exclusionary
reason makes it permissible (and, according to Raz, obligatory) for Jeremy to fail to act
on the balance of his first order reasons. Likewise, Colin’s promise to his wife is not
(primarily) a reason to send his son to private school. Rather, it is a reason to disregard
otherwise relevant considerations, like his being able to quit his job to work on
completing his book. The promise constitutes an exclusionary reason because it makes
Colin obligated to ignore what a calculation of the balance of first order reasons would
tell him to do.
Raz stresses that these cases cannot be interpreted as mere conflicts of first order
reasons. He says this chiefly because the situations have the troubling feature of eliciting
in us contrary assessments.[18]
Raz imagines that Jeremy, acting on his convictions,
instructs Dick to appropriate the van. Dick, however, becomes convinced that this should
not be done, even though a superior officer has mandated it, and he disobeys. Jeremy
finds himself
[18] The reader should bear this feature in mind throughout the next few sections. In my view, a similar point about competing legitimate assessments is part of the important data motivating Bratman and the proponent of moral dilemmas, and this data is particularly amenable to the treatment I suggest.
…torn between conflicting feelings. On the one hand he is convinced that Dick did the right
thing. On the other hand he thinks he acted wrongly. He wants to praise and blame him at
the same time.[19]
Raz thinks that we are not, in these situations, unsure about which assessment should
prevail. Exclusionary reasons, as reasons of a higher order, have the function of excluding
and thus prevailing over first order reasons. The point is just that the difficulty we find in
making simple assessments of agents’ conduct in such cases is evidence that the cases
involve a conflict between two fundamentally different types of consideration. Thus Raz
has a quasi-phenomenological test for when an exclusionary reason is in play. Such a
reason gives rise to a “peculiar feeling of unease” (41) that is generated by the exclusion
operating between two levels of “autonomous practical assessment” (45).[20]
Now I think Raz is right to draw our attention to this feature of his cases. There is
something intuitively powerful about this peculiar feeling of unease, and an analysis of it
does reveal a fundamental insight into the nature of practical reasoning. But Raz’s account
has unpalatable consequences, and it commits him to a relatively extravagant normative
ontology. In what follows I’ll argue that the phenomenon of exclusion does not do justice
to the cases under consideration, and, more generally, is not well suited to explain the
distinctive nature of promissory obligation. (The same argument will apply to the
commands of a legitimate authority, but I will focus on the case of promises, as that is my
central example of moral commitment in this dissertation.) Since exclusionary reasons are
an extra piece of ontological baggage, we would do better to reject them.
The gist of my argument against Raz’s account is this. Exclusionary reasons are
supposed to have a certain scope, a well circumscribed class of reasons that they exclude.
So we need a way of determining what gets included in this class. But intuitively the class
[19] Raz (1971: 43).
[20] See also Edmundson (1993: 333-335).
only includes reasons that aren’t especially great. Moreover, for the theory to be made
plausible, it has to entail that all really good reasons fall outside of the scope: really good
reasons do not get excluded. But this raises a puzzle. Why don’t good reasons get inside
the scope of what's excluded? My hypothesis is that they aren’t excluded because they
are too weighty to ignore. If this is shown to be plausible, then the idea of exclusionary
reasons is in trouble. For the purpose of exclusionary reasons is to exclude reasons
without engaging with their weight.
Before motivating this argument in more detail, I’ll say a bit more about the
theoretical apparatus Raz is working with, and outline some cases that will give us some
useful background for my objection.
The Essence of Exclusion
As we have seen, exclusionary reasons are second order reasons, reasons not to act on
certain first order reasons. Any exclusionary reason R has a scope, which is a set of
reasons that it excludes. Call that set S. The function of R is to make it the case that the
members of S are simply irrelevant to the determination of what the agent ought to do.
Quite literally, then, an exclusionary reason has the power to make an agent required to
act contrary to the balance of first order reasons.
It is worth emphasizing two key theoretical assumptions embodied in this
framework. First, Raz supposes that there is something like the phenomenon of exclusion
going on in the cases at issue. Second, Raz supposes as a result that there are cases in
which what an agent ought to do does not align with what he has most reason to do. For
excluded reasons do not disappear; they are still reasons, and they retain whatever weight
it is that they have independently of the exclusionary reason. They merely lose their
relevance when they are excluded. So in some cases an exclusionary reason makes it so
that the agent ought to do something that is less supported by (first order) reasons than
available alternatives.[21]
This is why Raz has to reject the initially intuitive view that one
ought always to act on the balance of first order reasons (37).
In the sections that follow I will challenge this picture. My main charges will be
that it does not accurately account for the intuitive data, and that it postulates strange
normative entities that we would need very strong reason to accept into our ontology.
The case for thinking that the theory does not properly account for the data will emerge
in due course. But we are already in a position to see why exclusionary reasons are
strange. For exclusionary reasons to exist there needs to be a structural relationship
between them and first order reasons that’s of a distinct kind, a relationship that’s not to
be found elsewhere in the theory of reasons.
As I’ve tried to indicate, excluding has a different function than the more
traditional roles reasons play. We are used to the idea that reasons can override, as when
my reason to hop on the train outweighs my reason to keep staring at the pretty station
attendant. And we are used to the idea that reasons can defeat, as when the fact that there
is a red light shining on the glass defeats (or “undercuts”) my reason to believe that the
water is red. But when reasons override other reasons, they do so in virtue of their
greater weight. And when reasons defeat other reasons, they do so in virtue of the fact
that the defeated reason was not really a good reason after all. By contrast, exclusionary
reasons are supposed to simply bypass the issue of the weight or goodness of the reasons
they exclude. The excluded reasons retain their weight; they are not overridden; they
merely get left out of the calculus that determines what the agent ought to do.
[21] On this point, see Edmundson (1993: 330, 332), who emphasizes the distinction between excluding and the more common phenomenon of overriding; and Owens (2008: 14-15). In his later postscript (1999: 190), Raz notes that “the very point of exclusionary reasons is to bypass issues of weight.”
Some Examples
In this section I present a few examples of my own that are meant to shed light on the
claim that exclusionary reasons have a strange nature. I conceive of them as test cases,
cases that will help us evaluate some potential motivations for accepting the viability of
the exclusionary phenomenon. The cases will also bring the contrasts between exclusion
and overriding, and exclusion and defeating, into clearer focus. Ultimately, I’ll suggest that
these cases do not succeed in pointing to a distinctive phenomenon of exclusion. Better
accounts of them are available, accounts that appeal only to the orthodox phenomena of
overriding and defeating.
The point of this section is not to give an exhaustive analysis of examples that
might receive a Razian treatment. The idea is just to give us some independent cases to
test our intuitions, and to suggest that exclusionary reasons would indeed be strange
beasts if they existed, before I provide a more direct argument against exclusion in the next
section.
Consider first a case of unknown hallucination. I seem to see a lizard crawling on
my ceiling. Though this is a startling sight, I have no reason to suspect my perceptual
faculties, and I consequently form the belief that there is a lizard on my ceiling. A minute
later, however, my roommate begins laughing mischievously, and, when I ask him for an
explanation, he admits that he has slipped a potent hallucinogen into my dinner. The fact
that I have been drugged is a very good reason to distrust my senses, and hence a very
good reason to discard my belief that there is a lizard on the ceiling. Might it be that this
fact excludes my reason for believing that the lizard is on the ceiling?
Not plausibly. Recall that exclusionary reasons make the reasons they exclude
irrelevant without engaging with their weight. This is not what happens in the present
case. The fact that I have been drugged is a reason for reappraising what I took to be a
good reason to believe in the existence of the lizard—namely, my visual experience. This
fact persuades me that my visual experience was never a very good reason to believe in
the existence of the lizard after all. (Clearly this does not entail that I was irrational to
regard it as such before I knew I had been drugged.) So it’s not as if the reason to believe
in the existence of the lizard retains its original weight while being excluded from the
determination of what I ought to believe. On the contrary, this reason seems to be
defeated, undercut, or undermined by my new information: it is shown to be a
significantly worse reason than it was initially thought to be, or perhaps no reason at all.
Next let’s think about a common philosophical example, the game Diplomacy.
The particulars of the game are unimportant for our purposes; the relevant feature is that
success in the game requires telling strategic lies. There is nothing wrong with lying in
Diplomacy. The practice of lying is essential to the game. But let’s assume that in general,
people have a reason to avoid lying.[22]
Then we might contend that the fact that you are
playing Diplomacy is an exclusionary reason: it excludes your reason to refrain from
lying.
But again, this is not a good account of what’s going on. If the fact that you are playing Diplomacy were an exclusionary reason, then the reason(s) it excludes—the reason not to lie—would retain its weight even though it didn’t contribute to the determination of
what you ought to do. However, your reason not to lie does not retain its weight when
you are playing Diplomacy. You have absolutely no reason not to lie. After all, lying is
permitted and encouraged by the game. So it’s not as if the regular norms prohibiting lying
are still in play but are somehow temporarily neutralized. This would imply that you do
have a reason, indeed a relatively weighty one, to refrain from lying, even though this
reason is excluded. It’s rather that these norms are not in play at all.
To make this clear, consider the following comparison. Imagine that Colin is fully
in the grip of something like Raz’s theory. If we ask him whether the fact that sending his
son to private school will jeopardize the happiness of several families is a reason for him
[22] I take this to be a weak and relatively uncontroversial assumption. But even if you don’t grant it, you can suppose that it’s true for the sake of argument in order to see whether the phenomenon of exclusion can do justice to the structure of the case.
not to do it, he must surely say that it is. Of course, he thinks that this reason is excluded
from relevance by his promise to his wife; nonetheless, he could not in good faith
maintain that it fails to be a reason. For the fact that he would thereby jeopardize the
happiness of others is an important strike against sending his son to private school.
Colin’s promise doesn’t change the fact that other people’s happiness is important. It
just (putatively) makes this importance irrelevant to Colin’s determination of what he
ought to do.
By contrast, imagine that we ask Kenny, who is in the midst of a game of
Diplomacy, whether the fact that he is going to need to engage in lying in order to enact
his strategy is a reason for him to rethink his strategy. Kenny would have to be bonkers
to say that it is such a reason. It’s not as if Kenny has a reason not to lie when he’s
playing Diplomacy that gets excluded by the fact that he’s playing Diplomacy. There
simply is no reason for Kenny to refrain from lying in the context of this game. Again, an
accurate diagnosis of the situation renders implausible the hypothesis that we are dealing
with an exclusionary reason.
Let us consider one further example. Imagine that the prosecution in a murder trial
presents the court with the murder weapon. A fingerprint analysis has confirmed that the
defendant’s prints, and only his prints, are on the weapon. Moreover, it is established
that the gun never left his possession. Assume that these considerations make it clear
beyond a reasonable doubt that the defendant is guilty of murder. However, it is later
determined that the police investigators obtained the weapon unlawfully. As a
consequence, the evidence is ruled inadmissible.
Now consider the situation of Eliza, a juror on this trial. Eliza has good reason to
believe that the defendant is guilty. Nonetheless, the judge has told her to completely
disregard this reason. Indeed, he has specifically instructed the jurors to ignore it and give
it no weight in their deliberations. So it is tempting to think that this is just the kind of
case we are looking for, a case in which one reason excludes another from relevance.
However, the appeal to exclusionary reasons is not a satisfying diagnosis even in
this more plausible sounding scenario. Eliza’s reasons for believing that the defendant is
guilty are by no means excluded. She would be extremely irrational if, given the evidence
she has been confronted with, she somehow failed to believe in his guilt. The fact that the
evidence has been deemed inadmissible instead excludes a certain class of Eliza’s reasons
for action. She is not permitted to deliberate as a juror on the basis of certain facts, for
example the fact that the defendant’s fingerprints were found on the murder weapon. But
this is not even a restricted vindication of the notion of exclusion. Exclusion would require
the presence of a genuine reason that gets excluded. It turns out, though, that the fact that
the defendant’s fingerprints were on the murder weapon was never a genuine reason to
find him guilty (where ‘find guilty’ means ‘vote guilty in the jury’s deliberations’). For a
genuine reason to find someone guilty, in this sense of finding guilty, must be something
that is admissible in a court of law. So the intuitive thing to say about Eliza’s case is that
she has a great reason to believe the defendant guilty, which seemed to also be a good
reason to find him guilty, but which turned out not to be this kind of reason at all.
So it is not attractive to appeal to exclusion even in this initially plausible type of
case. Since I have not attempted anything like an exhaustive survey of possible cases, I
cannot hope to have shown that the notion of exclusion is incoherent. My aim is not so
grandiose. I am only trying to motivate rejecting Raz’s appeal to exclusion in his theory
of the distinctive normative nature of promises. The purpose of this section was to make
the initial case that exclusionary reasons are somewhat perplexing and not widely
applicable. This raises the bar for accepting them into our ontology. If the structural
relationships between reasons can be adequately captured without appealing to exclusion,
then exclusionary reasons are an unnecessary piece of baggage that we would be better off
without.
The Puzzle
Exclusionary reasons are supposed to play a distinctive role in practical reasoning. An
important way of explaining the nature of this role is by appealing to the sense in which
exclusionary reasons, unlike ordinary reasons, do not engage with the weight of the
reasons they exclude. As Raz notes in his Postscript to Practical Reason and Norms:
The very point of exclusionary reasons is to bypass issues of weight by excluding
consideration of the excluded reasons regardless of weight. If they have to compete in weight
with the excluded reasons, they will only exclude reasons which they outweigh, and thus lose
distinctiveness.[23]
This property of bypassing the weights of competing reasons is the unorthodox and
controversial heart of the matter. It leads to a puzzle that I don’t think Raz can solve in a
satisfactory way.
Here’s the puzzle. An exclusionary reason only excludes a certain class of reasons.
Raz identifies this class as the ‘scope’ of the exclusionary reason. But this class will not
include compelling reasons. Raz admits that very strong reasons will not get into the
scope, for example when he admits that “if he were ordered to commit an atrocity
[Jeremy] should refuse” (38). In this case, the superior officer’s order is an exclusionary
reason, but it cannot exclude all competing reasons. But then we have a puzzle: why
don’t the compelling reasons, like Jeremy’s reason to refrain from committing an
atrocity, get excluded? Why do only non-compelling reasons make it into the scope?
Raz needs some principled way of generating a distinction between reasons that
get excluded and reasons that don’t. If the view is not to seem ad hoc, there must be a
procedure that allows us to determine the scope of a given exclusionary reason. As I
mentioned earlier, Raz does offer a quasi-phenomenological test for the presence of an
exclusionary reason. But this test will not get us the needed particulars about its scope.
[23] Raz (1999: 190).
Jeremy might, after all, experience the particular feeling of unease even when he is
ordered to commit an atrocity; Colin might experience this unease even when he knows
that sending his son to private school would lead to the downfall of civilization. So the
presence of this feeling cannot determine the scope of the exclusionary reason.
Imagine a pictorial representation of Jeremy’s reasons as dots on a page. Imagine
a circle C which represents the exclusionary reason. All the dots are either inside or
outside of C—inside or outside the scope of exclusion. Here is a prediction: the boundary
of C will be determined by nothing other than weight. Every reason that is weaker than the
exclusionary reason will be inside the circle. Every reason that is stronger than it will be
outside the circle.
For instance, the fact that obeying the order will disrupt his afternoon plans is a
reason for Jeremy to disobey the order, but this reason will be excluded by the order,
since it’s not especially weighty. The reasons that won’t get excluded are the ones we
intuitively suppose to override the order—reasons not to commit atrocities, not to violate
people’s basic human rights, etc.[24]
Raz seems to want to vindicate precisely this picture, but to do so without
accepting that exclusionary reasons engage with the weight of competing first order
reasons. This is simply implausible. It is the weight of the competing reasons that
underpins our intuitions about how the scope must be determined. For example, it would
be insane if his superior’s order did not exclude Jeremy’s reason to murder two innocent
people but did exclude his reason to murder one innocent person. It would be insane
because Jeremy’s reason to avoid murdering two innocent people has got to be at least as
weighty as his reason to avoid murdering one. But this appears to indicate that any reason
of sufficient weight will avoid exclusion.
Let me note one response that might be made to my arguments.[25] Perhaps it’s true
that the scope of exclusionary reasons must be determined by weighing them against first
order reasons. But this doesn’t show that Jeremy and Colin’s attitudes (about authority
[24] For a similar objection see Gans (1986: 389).
[25] William Edmundson suggests something like this in defending Raz from Chaim Gans’
objection. See Edmundson (1993: 336).
and promises, respectively) are incoherent. Such attitudes are commonplace.
Exclusionary reasons thus give us a useful way of characterizing widely held standards of
conduct and reasoning. As such, they serve as a useful addition to normative theory.
However, even if we were to grant this idea, it would not undermine my main
point in this part of the chapter. My goal has been to argue that there are sufficient
problems with the theory of exclusionary reasons to motivate accepting the theory of
commitment as a better account of the distinctive normative nature of promises. Claiming
that the theory is flawed, but that it nonetheless captures an important element of the way
people think, does not jeopardize this view. For I have been presenting the theory of
commitment as a true theory about normativity, and not as descriptive psychology. So the
fact that people may commonly reason on the basis of something like an exclusionary
reason does not vindicate Raz’s position qua theory of promissory obligation. Moreover,
it is unclear what philosophically central insight can be claimed for merely doing justice
to this one (perhaps irrational) bit of folk psychology. Raz himself is surely engaged in a
more ambitious program, though he does often write as though a chief goal is to capture
our actual processes of reasoning. So an appeal to the psychological accuracy of the
theory does not do much to recommend it.
The Suggestion
My suggestion is that the theory of commitment I have offered gives us a satisfying way
to capture at least some of Raz’s motivating intuitions, and the perplexing features of his
cases, without forcing us into the problems that I have outlined for his account. Jeremy
has adopted a role—that of a subordinate in the army—that commits him to certain
actions. Central among these are the actions commanded by his superiors.
This does not mean that he has most reason to perform each of these actions. Nor
does it mean that the commands of his superiors exclude other reasons from being
relevant to the determination of what Jeremy ought to do. But the role he has adopted
does place him under what I have been calling a moral commitment to perform these
actions, where this commitment is a consideration whose force is distinct from the force
of a regular first order reason. It is not merely that the fact that an action has been
commanded by a superior officer is a reason to perform it. The command has a special
justificatory force for Jeremy, which is what may lead him to believe (mistakenly, I have
argued) that it has exclusionary force.
It is worth noting that the theory of commitment could also be construed as
appealing to second order reasons. Jeremy’s being committed to appropriating the van
means that he has (perhaps conclusive) reasons to include in his deliberations a
consideration of the reasons to appropriate it that are generated by his role in the army,
and (perhaps conclusive) reasons to have certain reactive attitudes like regret about any
failure to act on those reasons. This ‘second order reason’ interpretation of DC and RAC
does not lead to any unnecessary complications in our theory of first order reasons and
oughts. There are no complications because the additional, distinctive reasons engendered
by a commitment are not reasons to perform the action to which the agent is committed,
but rather reasons to engage in certain forms of deliberation and mental reaction. The
interpretation just helps to develop the idea that commitments have a unique character
that separates them from more run of the mill first order reasons.
Extending the Critique
In chapter five, I was especially concerned to show that commitments are not simply
reasons. I argued that commitments occupy a central role in the mental life of a rational
and moral agent, and that this role is sufficient to distinguish them from the slack pro
tanto concept of a reason. Many authors who write on promissory obligation take this
sort of position for granted at least implicitly. In attempting to give theories that capture
the special normative nature of promises, they assume that promises do not merely give
us a reason to fulfill them. Raz, as we have seen, is among such authors, since his project
invokes a new type of consideration in order to explain this special normative nature. (To
be sure, Raz thinks that exclusionary reasons are necessary for explaining many other
things as well, but that is not our concern here.) In this section, I briefly consider two
other prominent attempts to account for the nature of promissory obligation, those of
T.M. Scanlon and David Owens.[26] I claim that both attempts fail to account for the
intuitive data because they overstate the importance of promises.
The purpose of this section is to show that the theory of commitment is an
attractively conservative approach to the problem of promissory obligation. Prominent
accounts of promising capture its special normative nature only by giving up intuitions
about how promises contribute to what agents ought to do and how they ought to be. The
theory of commitment does not require us to abandon these intuitions.
Scanlon’s view about promissory obligation is in an important sense even stronger
than that of Raz.[27] Scanlon thinks that “being moral involves seeing reasons to exclude
some considerations from the realm of relevant reasons…”[28] He means not just that the
moral agent does not act on these reasons, but also that she regards them as
incapable of justifying anything, including various attitudes she might have. He is thereby
committed to the view that a promise deprives reasons favoring breach of the promise of
their justificatory force.
This may sound like Raz all over again, but the radical element of Scanlon’s view
is that he thinks these excluded reasons cannot even justify attitudes like wishing that you
hadn’t promised, reluctance to take the means to fulfilling the promise, etc. Just as he
thinks that your need of money is no reason to hope that your rich uncle will kick the
bucket, Scanlon apparently thinks that the fact that keeping a promise will be
inconvenient is no reason to feel reluctant to keep it. Raz did not go this far; recall that an
important part of his story about exclusionary reasons appealed to the legitimacy of
[26] Scanlon (1998); Owens (2008).
[27] I should note up front that the part of Scanlon’s account that interests me is only briefly
mentioned, and that I am not completely confident that I’m capturing his considered
view. If I am interpreting Scanlon incorrectly, the view that I attribute to him will
nonetheless be dialectically instructive.
[28] Scanlon (1998: 157).
conflicted feelings (recall his 'quasi-phenomenological' test). But Scanlon must claim that,
for example, Colin’s feeling of unease about the possibly disastrous implications of
keeping his promise is to some degree immoral. Suffice it to say that I take this to be a
rejection of our commonsense intuitions about the case. Raz was right to draw attention
to the appropriateness of the conflicted feelings that accompany a choice between an
obligation or commitment and other weighty reasons.
Owens has recently defended an account of promissory obligation that sits firmly
in the Razian tradition. On his view a distinction “between excluded and non-excluded
reasons is clearly present in everyday thinking about promises” (21). Crucial to this
distinction is the notion of respecting, or being conscientious with respect to, a promise.
It is whether or not a course of deliberation and conduct would properly respect a
promise, and thereby refrain from wronging the promisee, that seems to correspond to
whether or not the reason I’ve deliberated and acted upon was excluded. (Owens denies
that the weight of competing reasons determines the scope of an exclusionary reason
(what he calls the ‘zone of exclusion’), but his sketch of an alternative need not concern
us here.[29] My purpose is just to point out that by going in for Raz’s framework, Owens
commits himself to some counterintuitive claims about promising.)
[29] The sketch is very programmatic, but the idea is supposed to be that the scope of the
exclusionary reason is determined by the “communicated intentions of the parties”, and
perhaps also by “elements of the social background” (36). There are several prima facie
difficulties with this view. First, making a promise need not involve any robust
communication of intentions. I can promise to meet you for lunch without indicating how
much the lunch means to me, and thus without even gesturing at the scope of competing
considerations that I think you ought to exclude. So it seems like this criterion will
severely underdetermine the scope of the exclusionary reason. Second, the account seems
perniciously ‘psychologistic’ in the sense I have discussed. It might be that
communicated intentions have something to do with what I take to be the zone of
exclusion, but I might be unreasonable. Even if I ask you to meet me come what may,
you are perfectly justified in bailing on lunch if your child needs to be taken to the
hospital.
In a case in which a promise is outweighed by non-excluded reasons, the agent
who breaks the promise can nonetheless respect it by giving it the appropriate weight in
his deliberations. (Owens thinks that it is not even appropriate to feel guilty about
breaking your promise in such a case. If this is compatible with regretting the situation,
then I am fine with it; but if regret is also deemed inappropriate, then it does not accord
with a lot of our intuitions about the relevant cases.) By contrast, in paradigmatic cases
in which the force of a promise outweighs all the competing non-excluded reasons, it
would be unreasonable to break the promise, even if an excluded reason would tip the
scales in favor of breaking it. Just considering this breach would constitute a failure to
respect the promise.
The problem for Owens is that he admits that there are non-paradigmatic cases—
cases, that is, in which it would be reasonable to breach a promise for an excluded reason,
since this reason outweighs the reasons to keep the promise.[30] Such a breach, though
reasonable, must involve a failure to “respect” the promise (23-4). After all, it is precisely
the idea that you can only respect a promise by excluding the considerations it excludes
that motivates Owens to think of promises as exclusionary reasons. But this leads Owens
to conclude that in these cases keeping the promise and breaking it are both reasonable
(24).
This is not a satisfactory solution to the problem of promissory obligation. First,
in the interesting cases it bottoms out in a claim that is uninformative: that the two
competing options are both reasonable. This would not be good advice to a person in
distress, and it is not a very helpful normative theory. Since Owens grants that the
troublesome cases are ones in which the excluded reason outweighs the reason to keep
the promise, we need a further argument for thinking that acting against the balance of
[30] An initial worry about this is that it leaves us in need of a method for determining
whether a case is paradigmatic or not; since in both possible scenarios the excluded reason
outweighs the reasons to keep the promise, it isn't at all clear what this method could
appeal to in separating them.
reasons is reasonable, rather than just conscientious vis-à-vis the promise.[31] Second, the
idea that seriously considering breaking a promise for an excluded reason constitutes a
failure to respect the promise, and that a fully conscientious person would not do this, is
simply implausible. If a competing reason is weighty enough, any rational agent will at
least consider acting on it. Far from signaling a lack of conscientiousness, this
deliberative tendency—to consider all reasons of sufficient weight—seems to be a
requirement of rationality. As I’ve argued, the deliberative requirement on promises must
be much weaker, since there are all sorts of obvious cases in which you ought to break
your promises.
Owens himself references a nice case of Philippa Foot’s that seems to me a
reductio of his view.[32] Imagine that Steve makes a promise to be best man at two
different weddings. Though he could not have anticipated it, the weddings turn out to be
on the same day (in different cities, if you like), so he is put in the unfortunate position of
having to decide on one. According to Owens, if Steve is a good person, he must fail to
respect both promises in this case (25). He will fail to respect the promises because he
will inevitably attempt to decide what to do on the basis of weighing competing
considerations—how essential he is to the planning of each wedding, how many close
friends each groom has, etc.—that are meant to be excluded. But this is a highly
counterintuitive analysis of the case. Most of us would hesitate to call Steve disrespectful
in any sense, assuming he makes his decision on the basis of good reasons. Even if his
case is one in which both options are permissible, which it might well be, that does not
[31] Consider one of Owens’ cases. I have promised to meet my friend Jim for lunch the
day before he leaves for a vacation, but I find out that a longtime romantic interest is in
town, and the only time we can get together is over lunch. According to Owens, the
promise to meet Jim is meant to exclude things like lunch with someone else.
Nonetheless, the balance of reasons may favor breaking the promise. Strangely, Owens
admits that doing so could be the “right thing” (24). We can agree that doing the right
thing may, in this sort of situation, be compatible with or even mandate a feeling of
regret; but it is much harder to see how it could be reasonable to do what’s
acknowledged to be wrong.
[32] Foot (2002: 41).
mean that Steve must necessarily display a lack of conscientiousness, or necessarily
wrong one of his friends. The most we can say is that it would be inappropriately callous
for Steve to feel no regret about the fact that he must break one of these promises.
My main claim in this section has been that prominent accounts of promissory
obligation do not err on the side of caution. Scanlon’s general view about obligation
seems to lead to some extremely strong and unpalatable claims about the moral
impermissibility of attitudes that are related to promising. For example, he seems
committed to the claim that Colin would be to some degree immoral if he even
momentarily lamented the fact that he made a promise to his wife about their son’s
education (at least if the lament had its source in excluded reasons). Likewise, Scanlon
seems committed to the claim that I am to some degree immoral when I feel reluctant to
board an annoyingly crowded train en route to a lunch date.[33]
Owens does not endorse such a radical view, but he still gives up on some
strikingly intuitive claims. First, he rejects the general idea that it is impermissible (or
unreasonable) to fail to act on the balance of reasons, even though he clearly endorses a
picture on which the weight of reasons is extremely important. Besides being
counterintuitive, and lacking a method for determining when an excluded reason is
weighty enough to make acting on it reasonable, this analysis of the really interesting
cases has the unattractive consequence of failing to issue in substantive advice. In
addition, Owens rejects the natural view that someone like Steve can adequately respect
his promises even when he can’t fulfill them. These conclusions would be unpalatable
even if the framework of exclusionary reasons were not in itself problematic. The theory
of commitment allows us to avoid the unpalatable conclusions while doing justice to the
special nature of promissory obligation.
[33] Owens (2008: 11) gives this case as an objection.
6.3 Vindicating Some Narrow Scope Intuitions
In this final section I canvass an influential argument offered in defense of narrow scope
versions of rational requirements. I show that the theory of commitment allows us to
make substantial sense of this argument while rejecting its unattractive conclusion.
Kolodny’s Process Argument
Niko Kolodny has prominently argued for the truth of narrow scope rational requirements
on the basis of a distinction between requirements on states and requirements on
processes.[34] State requirements require you to be a certain way at a given time; process
requirements require you to do a certain thing at a given time. His view is that, conceived
as requirements on processes, rational requirements must be narrow scope. Conceived as
requirements on states, rational requirements may be wide scope—but in that case the
notion of rational requirement is uninteresting.
I should note at the outset that I find it hard to know exactly what Kolodny is after
with this argument. My goal here is to evaluate some of his explicit claims about why we
should be talking about process requirements, and why process requirements must be
narrow scope. I’ll be suggesting that these claims can be vindicated if what he has in
mind is rational commitment, rather than rational requirement.
One of Kolodny’s chief reasons for supposing that rational requirements must be
requirements on processes, and not on states, seems to be that only the former can be
normative in the sense of guiding deliberation.[35] He thinks that while process
requirements tell you to do something, state requirements do not. At most, state
requirements function as evaluative tools, or necessary conditions for meriting a certain
kind of appraisal. For example, the wide scope enkrasia requirement merely tells you to
avoid being in the conflicting state of the akratic; it does not tell you what to do if you
find yourself in this state.
[34] Kolodny (2005, 2007).
[35] Kolodny (2007: 372).
Kolodny is keen to emphasize that such a wide scope requirement could not
genuinely be a requirement on processes. Whereas a narrow scope requirement tells you
what to do if you find yourself in a state of akrasia—namely, you are to form the
corresponding intention on the basis of your normative belief—a wide scope requirement
does no such thing. A wide scope requirement just tells you that you must be in error. For
imagine, says Kolodny, what a wide scope process requirement would have to claim. It
would have to say that if you find yourself in a state of akrasia, then you are to either
form the corresponding intention on the basis of your normative belief, or revise your
belief on the basis of your lacking the corresponding intention. But the latter would be
absurd.
Now the wide scoper may refine his account, so that it allows you to revise your
normative belief on the basis of other relevant attitudes. Kolodny thinks that this view is
still open to an important objection. Imagine that Smith believes at t that he ought to x,
but doesn’t intend to x. Going forward from t, he revises this belief on the basis of some
relevant attitudes. Kolodny thinks that nonetheless Smith has been irrational, since at t
(before the revision) he was guilty of resisting his judgment. But Smith has satisfied the
wide scope requirement, when conceived of as a requirement on processes. For that
requirement allows that he is rational when he revises his belief going forward from t
(379).
Here is what I take to be the main thrust of Kolodny’s claims about process
requirements. In order to legitimately guide deliberation, rational requirements must
counsel us to engage in certain processes of reasoning. But wide scope principles could
not issue in this sort of guidance. First, they may, depending on how they are formulated,
permit irrational processes, like revising beliefs on the basis of a lack of intentions.
(However, it seems clear that the wide scoper can avoid this result.) Second, given their
concern with irrational states, wide scope principles cannot be formulated as process
requirements without failing to convict irrational agents of error. For instance, the wide
scope enkrasia principle will convict Smith of irrationality only if it isn’t conceived as a
process requirement. When formulated as a requirement on processes, it will imply that
Smith has not been guilty of any irrationality, which is both false and anathema to the
proponent of the wide scope view.[36] So if we want to preserve the apparent normativity
of rationality, we must opt for narrow scope formulations of rational requirements.
The main difficulty with Kolodny’s line of argument has to do, I suspect, with its
target, the Broomean theory of rationality. Broome seems to think that all requirements of
rationality are wide scope principles of coherence or consistency.[37] Thus Broome does
have a real problem vindicating any sort of normative guidance. His picture of rationality
looks far more evaluative than normative, in Kolodny’s terms; so long as you come to be
coherent, it doesn’t matter what particular attitudes you have. For example, it doesn’t
make any difference whether the suicide bomber forms the intention to kill children or
revises his belief that he ought to. Either way he moves out of a state of rational conflict.
So it can seem as if the function of rational requirements, for Broome, is merely to signal
when and where you are guilty of inconsistency or incoherence.
But this conception is not forced on the proponent of particular wide scope
theses. He may accept that other requirements take narrow scope and interact with the
wide scope requirements to determine the sort of normative guidance Kolodny is after. So
for example, one might accept a principle of belief formation that says that if you have
conclusive evidence that p, then you are rationally required to believe that p. It might be
that the suicide bomber has conclusive evidence to believe that he should refrain from
killing children. If that were true, then the fact that the enkrasia requirement itself only
says that he is rationally required to either revise his belief that he ought to kill children,
[36] The idea is that the process requirement would have to say something like ‘if you find
yourself in a state of akrasia, then get out of it by revising your belief or by forming the
corresponding intention.’ Smith complies with this requirement.
[37] Broome (2007, 2008).
or intend to kill children, would not be problematic. The belief requirement would
determine that he is in fact rationally required to revise his belief.[38]
Though Kolodny’s argument that only narrow scope requirements can vindicate
normative guidance seems to me unsuccessful, there is another element in his critique
that deserves attention. Kolodny is very much concerned with the actual situations of
deliberating agents, with all their temporal and cognitive limitations. These limitations
are importantly related, I suspect, to his emphasis on processes. Consider the suicide
bomber. I have claimed throughout this dissertation that we should preserve the intuition
that it would be more rational for this person to revise his absurd belief than it would be
for him to form the corresponding intention.[39] An irrational belief should not have the
power to, on its own, completely (rationally) legitimize another attitude; this simply flies
in the face of common sense. Nonetheless, Kolodny draws our attention to an important
feature of his case. Imagine the bomber deliberating at the fateful moment at which he
must form the intention if he is to act upon it. He does in fact believe that he ought to kill
children. How might he rationally discard this belief? Are the evidential considerations
that militate against it supposed to suddenly appear forceful to him? This might happen—
he might, for example, see the smiling, innocent face of one of his potential victims in a
new light. But it might not happen. And if it doesn’t, there is no rational process by which
he will come to revise his (irrational) belief.
Notice that I didn’t say that there is no rational process by which he can come to
revise his belief. There is such a process: he could properly evaluate the evidence, which
he has yet to do. The point is just that the most rational thing for him to do is something
that he has made exceedingly difficult by engaging in prior irrational processes of
reasoning. So barring some sort of revelation, the bomber, in believing up to the time of
action that he ought to kill children, has rendered it all but impossible to get himself, by a
rational process, into a rationally optimal state.

38. Note that the same argument was deployed in response to the symmetry objection in
chapter two. As I will suggest, the process objection and the symmetry objection are best
conceived as two ways of getting at the same complaint.

39. Recall that we are just stipulating that the belief is irrational. The reader should feel
free to substitute his own preferred examples.
This indicates that the distinction between rational states and rational processes is
quite important. Often we form an irrational attitude, and this puts pressure on us to form
other attitudes that are themselves intrinsically irrational. Furthermore, we rarely know
which attitudes should be revised. This ignorance leads us to depend on the rationality of
certain processes as a defeasible but pragmatically essential guide to good reasoning.
So it’s true that these processes, which are undoubtedly better captured by narrow
scope formulations, occupy a central role in the nature of rational agency. If I am correct
in interpreting him as I do, Kolodny is attempting to explain the deliberative centrality of
rational commitments, which I have tried to stress in giving my account of their
strictness. From a first person deliberative perspective, our commitments are some of our
best (and only) guides to what is rationally required of us.
None of this shows, however, that the rational processes captured by
commitments should be accorded decisive status in the theory of rationality. It might
have been thought to show this, if wide scope principles couldn’t be normative. But they
can, as I’ve demonstrated; they need only be integrated into a theory that also
incorporates some narrow scope principles. Moreover, it is wide scope principles that
best capture the normative force of good advice. Though the bomber might be unable or
hard pressed to see his evidence in a different and more reasonable light, it is likely that a
rational advisor would do just this. Indeed, advising would be a pretty silly practice if it
functioned only to enjoin the advisee to draw the coherent consequences of what he
already believed and intended. Good advising is of course a far cry from this sort of
thing.[40] It crucially involves getting the advisee to reevaluate his evidence when his
interpretation of it is flawed.
40. Here we are concerned with advisory contexts in which both individuals have the
same body of evidence. The wisdom of an advisor consists in her capacity to more
reasonably interpret this evidence.

On my interpretation, Kolodny is motivated by an understanding of the deliberative
centrality of unidirectional processes of attitude formation. These processes are central to
the theory of rationality because, from a first person perspective, they often represent our
best way of moving forward, given our substantial temporal and epistemic limitations.
Since I am mostly ignorant about which of my normative beliefs are true, and
I do not have the time to exhaustively reevaluate them all, I move forward by forming
intentions on the basis of those beliefs about what I ought to do that I regard as relatively
secure. I proceed, in other words, by honoring my commitments. But this does not mean,
contra Kolodny, that in each of these cases I do what rationality requires of me. When
my normative beliefs are irrational, I am rationally required to revise them.
Commitments are good guides to rationality. But sometimes the content of a
commitment is irrational because its ground is irrational. In order to vindicate the natural
view that one instance of irrationality does not definitively license other such instances,
we must reject the view that rational requirements can be formulated as narrow scope
principles.
Conclusion
This chapter has aimed to apply the account of commitment that emerged in chapter five
to a wide range of contemporary philosophical issues. I have argued that it illuminates
our understanding of promissory obligation, putative cases of moral dilemmas, and the
nature of rational norms, and especially that it permits us to do without other theories of
these phenomena that require ambitious, and in my view unattractive, theoretical
commitments.
There is far more to be done on this score. In particular, I have not had the chance
to extend my treatment to some issues in the philosophy of action which I believe can be
better understood once we observe the intimate connections between moral and rational
commitments and psychological ones.[41] And I have not provided any extended
discussion of how I take moral commitments to figure in an overall theory of morality.[42]
I look forward to these future extensions of the present work.

41. I briefly mentioned these in chapters four and five. A psychological commitment is a
kind of dedication or structure of will. For example, it is true that Don is committed to his
wife Betty just in case Don cares for her, regularly does nice things for her, refrains from
cheating on her, etc.

42. Though see Appendix C for a brief précis of my thoughts on this difficult topic.
REFERENCES
Aristotle (2000). Nicomachean Ethics. Ed. by Roger Crisp. Cambridge: Cambridge
University Press.
Arpaly, Nomy (2003). Unprincipled Virtue. Oxford: Oxford University Press.
Audi, Robert (1990). “Weakness of Will and Rational Action.” Australasian Journal of
Philosophy 68: 271-281.
Bennett, Jonathan (1974). “The Conscience of Huckleberry Finn.” Philosophy 49: 123-
134.
Bratman, Michael (1984). “Two Faces of Intention.” Philosophical Review 93 (3): 375-
405.
------- (1987). Intention, Plans, and Practical Reason. Cambridge: Harvard University
Press.
Broome, John (1999). “Normative Requirements.” Ratio 12(4): 398-419.
------- (2004). “Reasons.” In Reason and Value: Themes from the Moral Philosophy of
Joseph Raz. Jay Wallace, Michael Smith, Samuel Scheffler and Philip Pettit
(eds.). Oxford University Press.
------- (2007). “Wide and Narrow Scope.” Mind 116(462): 359-70.
------- (2008). “Is Rationality Normative?” Disputatio 23(2): 161-178.
------- (ms). Reasoning. Manuscript of 2008.
Brunero, John (2010). “The Scope of Rational Requirements.” Philosophical Quarterly
60(238): 28-49.
Dancy, Jonathan (2000). Practical Reality. Oxford: Oxford University Press.
------- (2004). Ethics Without Principles. Oxford: Oxford University Press.
Davidson, Donald (1969). “How is Weakness of the Will Possible?” In Ernie Lepore and
Kirk Ludwig (eds.) The Essential Davidson. Clarendon Press: Oxford.
D'Arms, Justin, and Daniel Jacobson (2000). “Sentiment and Value.” Ethics 110: 722-
748.
Darwall, Stephen (1983). Impartial Reason. Ithaca: Cornell University Press.
Edmundson, William (1993). “Rethinking Exclusionary Reasons: A Second Edition of
Joseph Raz’s Practical Reason and Norms.” Law and Philosophy 12 (3).
Ewing, A.C. (1953). Ethics. English Universities Press: London.
Finlay, Steve (2008). “Motivation to the Means.” In David Chan (ed.) Moral Psychology
Today: Values, Rational Choice, and the Will. Springer: 173-191.
-------- (2009). “Against All Reason? Skepticism About the Instrumental Norm.” In
Charles Pigden (ed.) Hume on Motivation and Virtue. Palgrave MacMillan.
-------- (2010). “What Ought Probably Means, and Why You Can’t Detach It.” Synthese
177 (1): 67-89.
Foot, Philippa (2002). Moral Dilemmas and Other Topics in Moral Philosophy. Oxford:
Clarendon Press.
Frankfurt, Harry (1971). “Freedom of the Will and the Concept of a Person.” Journal of
Philosophy 68 (January): 5-20.
------- (1976). “Identification and Externality.” In Amelie Rorty (ed.) The Identities of
Persons. The Regents of the University of California.
Gans, Chaim (1986). “Mandatory Rules and Exclusionary Reasons.” Philosophia 15 (4):
373-394.
Gensler, Harry (1985). “Ethical Consistency Principles.” Philosophical Quarterly
35(139): 156-170.
Greenspan, Patricia (1975). “Conditional Oughts and Hypothetical Imperatives.” The
Journal of Philosophy. 72(10): 259-276.
Hampton, Jean (1998). The Authority of Reason. Cambridge: Cambridge University
Press.
Hare, R.M. (1952). The Language of Morals. New York: Oxford University Press.
------- (1971). “Wanting: Some Pitfalls.” In Binkley, Bronaugh, and Marras (eds.) Agent,
Action, and Reason. Toronto: University of Toronto Press, 81-127.
Hawthorne, John (2004). Knowledge and Lotteries. Oxford: Oxford University Press.
Hieronymi, Pamela (2005). “The Wrong Kind of Reason.” The Journal of Philosophy
102: 437-457.
Hill, Thomas (1973). “The Hypothetical Imperative.” The Philosophical Review. 82(4):
429-450.
Howard-Snyder, Frances (2006). ‘“Cannot” Implies “Not Ought”’. Philosophical Studies
130 (2).
Hubin, Donald (2001). “The Groundless Normativity of Instrumental Rationality.” The
Journal of Philosophy 98 (9): 445–468.
Hume, David (1978). A Treatise of Human Nature. Ed. by P.H. Nidditch. Clarendon
Press: Oxford.
Hussain, Nadeem (ms). “The Requirements of Rationality.” Manuscript of August 2007.
Jackson, Frank and Robert Pargetter (1986). “Oughts, Options, and Actualism.” The
Philosophical Review 95 (2): 233-255.
Kant, Immanuel (1997). Groundwork of the Metaphysics of Morals. Ed. by Mary Gregor.
Cambridge University Press: Cambridge.
Kolodny, Niko (2005). “Why Be Rational?” Mind 114(445): 509-63.
------- (2007a). “State or Process Requirements?” Mind 116(462): 371-85.
------- (2007b). “How Does Coherence Matter?” Proceedings of the Aristotelian Society.
107(1): 229-263.
------- (2008a). “Why Be Disposed to Be Coherent?” Ethics 118: 3 (2008): 437–463.
------- (2008b). “The Myth of Practical Consistency.” European Journal of Philosophy
16:3 (2008): 366–402.
------- (ms). “Ought: Between Subjective and Objective.” With John MacFarlane.
Korsgaard, Christine (1996). The Sources of Normativity. Cambridge University Press:
Cambridge.
------- (1997). “The Normativity of Instrumental Reason.” In Garrett Cullity & Berys
Gaut (eds.), Ethics and Practical Reason. Oxford: Clarendon Press.
Marcus, Ruth Barcan (1980). “Moral Dilemmas and Consistency.” The Journal of
Philosophy 77: 121–136.
McIntyre, Alison (1993). “Is Akratic Action Always Irrational?” In Owen Flanagan and
Amelie Rorty (eds.) Identity, Character, and Morality. Cambridge, Mass: MIT
Press.
Millsap, Ryan (ms). “Transmission and Sufficient Reasons.” Manuscript of 2011.
Owens, David (2008). “Rationalism about Obligation.” European Journal of Philosophy
16 (3): 403-31.
Parfit, Derek (ms). Climbing the Mountain. Manuscript of 2010.
Pascal, Blaise (1908). Pascal's Pensées. Translated by W. F. Trotter. London: J.M. Dent.
Piller, Christian (2007). “Ewing's Problem.” European Journal of Analytic Philosophy,
3.1: 43-65.
Plato (1997). Complete Works. Ed. by John M. Cooper. Hackett: Indianapolis.
Raz, Joseph (1971). Practical Reason and Norms. Oxford: Oxford University Press.
------- (1999). Postscript to Practical Reason and Norms. Oxford: Oxford University
Press.
------- (2005). “The Myth of Instrumental Rationality.” Journal of Ethics and Social
Philosophy, Volume 1, Number 1.
Regan, Donald (1980). Utilitarianism and Cooperation. Oxford: Clarendon Press.
Ross, Jacob (ms). Acceptance and Practical Reason. Ph.D. dissertation. Rutgers
University.
------- (2011 ms). “Actualism, Possibilism, and Beyond.” Manuscript of 2011.
Ross, W.D. (2002). The Right and the Good. Phillip Stratton-Lake (ed.), Oxford: Oxford
University Press.
Sartre, Jean-Paul (1957). “Existentialism is a Humanism.” Translated by Philip Mairet, in
Walter Kaufmann (ed.), Existentialism from Dostoevsky to Sartre. New York:
Meridian, 287-311.
Scanlon, T.M. (1998). What We Owe to Each Other. Cambridge: Harvard University
Press.
------- (2007). “Structural Irrationality.” In Reason and Value: Themes from the Moral
Philosophy of Joseph Raz. Jay Wallace, Michael Smith, Samuel Scheffler and
Philip Pettit (eds.). Oxford: Oxford University Press.
Schroeder, Mark (2004). “The Scope of Instrumental Reason.” Philosophical
Perspectives (Ethics) 18: 337-62.
------- (2005). “The Hypothetical Imperative?” Australasian Journal of Philosophy 83(3):
357-372. September 2005.
------- (2008). Slaves of the Passions. Oxford: Oxford University Press.
------- (2009). “Means End Coherence, Stringency, and Subjective Reasons.”
Philosophical Studies 143(2): 223-248.
------- (2010) “Value and the Right Kind of Reason.” Oxford Studies in Metaethics 5: 25-
55.
Setiya, Kieran (2007). “Cognitivism about Instrumental Reason.” Ethics 117(4): 649-673.
Smith, Michael (1994). The Moral Problem. Blackwell: Malden.
Southwood, Nic (2008). “Vindicating the Normativity of Rationality.” Ethics 119(1): 9-
30.
Styron, William (1979). Sophie’s Choice. New York: Random House.
Wallace, R. Jay (2001). “Normativity, Commitment, and Instrumental Reason.”
Philosophers' Imprint 1(4): 1-26.
Watson, Gary (1977). “Skepticism about Weakness of Will.” Philosophical Review 86:
316-339.
Way, Jonathan (2010a). “Defending the Wide Scope Approach to Instrumental Reason.”
Philosophical Studies 147(2): 213-33.
------- (2010b). “The Symmetry of Rational Requirements.” Forthcoming in
Philosophical Studies.
Williams, Bernard (1966). “Consistency and Realism.” Proceedings of the Aristotelian
Society (Supplement), 40: 1–22.
------- (1981). Moral Luck. Cambridge: Cambridge University Press.
APPENDIX A: SOME THOUGHTS ABOUT THE INSTRUMENTAL
REQUIREMENT
Most extant versions of the instrumental norm face a problem, which can be illustrated
with the following example:
Showboat: Lionel is an expert soccer player. His penalty shot is so good that he can
consistently score with his eyes closed. Lionel is playing in a penalty shootout, and he
needs to score in order to win the game. He very much wants to win the game; it is an
adopted end of his. The odds are 99% that he’ll score if he looks, and 90% that he’ll score
if he closes his eyes.
The question is whether instrumental rationality requires Lionel to keep his eyes open.
Keeping his eyes open is clearly the best means to Lionel’s end of winning, but it
isn’t necessary; he will likely win even if he shuts his eyes for the shot. Nonetheless, it
seems irrational for Lionel to close his eyes. And it seems that this irrationality is
instrumental irrationality—it involves the relationship between Lionel’s ends and the
means to those ends. So this appears to indicate that the instrumental principle requires us
to take the best means to our ends, where goodness of means is determined by something
like the likelihood of producing the end. This is a far more exacting principle than one
that merely enjoins us to take necessary means.
There is a complication that the title of our example suggests. Imagine that
besides having the end of winning, Lionel also has the end of showing up his
competition. In this case, assuming this latter goal is valuable enough to him, Lionel
would be irrational if he failed to close his eyes for the final penalty shot. So though
keeping his eyes open is the best means to his end of winning, it would be positively
irrational for him to keep his eyes open.
Have we shown that there is no categorically true instrumental principle? Of
course not. What we’ve shown is that the relevant end must be specified in all its rich
glory. The end that Lionel has adopted, and that exerts rational pressure in the form of
enjoining a best means, is the end of winning in a way that embarrasses the competition.
He is irrational insofar as he fails to take the best means to that end.
More generally, it is plausible to think that much practical deliberation consists in
precisely this: determining the best means to our rich, complicated ends. I suspect that the
resistance to ‘best means’ formulations involves misunderstanding that point. It is not
that I am rationally permitted to take the scenic route because it is good enough, though
not best. It’s rather that I am required to take it, given that I care about both distance and
the beauty of the drive.
This isn’t to say that there are no mere rational permissions. The scenic route and
the freeway might balance out equally, each promoting my complex end differently but to
the same degree. In such a case I may permissibly take either.
APPENDIX B: RATIONAL NORMS AND EXPLANATORY PRIORITY
At the end of chapter two I suggested that rational commitments might be used to explain
the existence of rational requirements. I remained agnostic about the merits of this
strategy. But I noted that it might be seen to have explanatory virtues, given that
commitments are not (unlike requirements) the kinds of things that can only be explained
by appealing to perfectly general features of human agency. In this appendix I outline the
way in which I think such an explanation would go.
Here is the general idea. Defenders of wide scope theses suppose that many
fundamental rational requirements are requirements to avoid having some set of
incoherent attitudes. We are taking the truth of this claim for granted. The point is that we
can show that in any case in which an agent violates one of these requirements, he will
necessarily violate a commitment. And then we have a question about the order of
explanation. Is it the commitment fact that explains the requirement fact? Is it the
requirement fact that explains the commitment fact? Or is it that neither explains the
other?
I don’t feel inclined to take a definitive stand on this issue apart from saying that I
do not think that the requirement fact explains the commitment fact. (I gave my reasons
for thinking this in chapter four. Briefly, the requirement fact does not suffice to explain
an important asymmetry, and the commitment fact explains this asymmetry. If claims
about commitments were just “shadows” of claims about requirements, then the
requirement fact would suffice to explain this asymmetry.) What I’ll do here is argue for
the claim that raises the question about the order of explanation, namely the claim that the
violation of a rational requirement necessarily entails the violation of a rational
commitment, and show how this could be used in arguing for the explanatory priority of
rational commitments.
Take the requirement of belief consistency as a preliminary case:
You are rationally required to be such that (if you believe that p, then you do not believe
that not p).
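For comparison, the scope distinction that this requirement trades on can be displayed schematically. The notation, with O for ‘rationally required’ and B for belief, is introduced here for illustration only:

```latex
% Wide scope: the requirement governs the whole conditional, so one can
% comply either by not believing p or by not believing not-p.
O\bigl(Bp \rightarrow \neg B\neg p\bigr)

% Narrow scope: the requirement attaches only to the consequent, so
% believing p (however irrationally) settles what is required.
Bp \rightarrow O\bigl(\neg B\neg p\bigr)
```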
Consider the fact that believing p commits you to not believing ¬p. This means that if
you are in the inconsistent state of believing p and ¬p, you necessarily violate a
commitment. Since it is irrational to violate a commitment, we have an explanation of
why belief inconsistency is necessarily irrational. And this is just to say that we have an
explanation of why belief consistency is rationally required.
A similar form of explanation will work for all of the central coherence and
consistency requirements of rationality:
Belief Closure
Assume that being in the state of [believing p, believing p implies q, and considering
whether q] commits you to believing q. Then, if you are in this state, and you do not
believe q, you violate a commitment, and are necessarily irrational. Thus belief closure of
this form is rationally required.
Enkrasia
Assume that believing that you ought to φ commits you to intending to φ. Then, if you
believe that you ought to φ without intending to φ, you violate a commitment, and are
necessarily irrational. Thus enkrasia is rationally required.
Means End Coherence
Assume that intending to E and believing M is necessary for E commits you to intending
to M.[1] Then, if you intend to E, believe that M is necessary for E, and do not intend to M,
you violate a commitment, and are necessarily irrational. Thus means-end coherence is
rationally required.
1. In order to be more precise we should add the often overlooked condition that intending
to M now is necessary for E (Setiya 2007). You are not rationally required to form the
intention now if you can successfully E by forming it later.

Intention Consistency
Assume that intending to X commits you to not intending to Y, for any Y such that you
believe that Y-ing is incompatible with X-ing. Then, if you intend to X, believe that Y-
ing is incompatible with X-ing, and still intend to Y, you violate a commitment, and are
necessarily irrational. Thus intention consistency is rationally required.
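The four cases above instantiate a single explanatory schema, which can be sketched as follows (writing C(G, A) for ‘being in ground state G commits you to attitude A’; the notation is mine, introduced for convenience):

```latex
% 1. Assume the ground state carries a commitment:
C(G, A)

% 2. Violating a commitment is irrational:
\bigl(C(G, A) \wedge G \wedge \neg A\bigr) \;\Rightarrow\; \text{irrational}

% 3. So the corresponding wide scope requirement holds:
O\bigl(G \rightarrow A\bigr)
```

Substituting, for instance, G = believing that you ought to φ and A = intending to φ yields the enkrasia requirement.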
The argument would proceed in something like the following manner:
Rational commitments have a clear explanatory advantage over wide scope
rational requirements. For wide scope requirements are in at least one sense ontologically
indulgent: they apply to all rational agents, no matter what they are like. If we accept the
wide scope theory, we get no explanation of why these requirements exist, why they
apply to all rational agents, and why they are normative, if they are. If we do get an
explanation, it cannot be anything less programmatic than the claim that these
requirements are constitutive of rationality. But the postulation of a primitive notion of
commitment is not nearly as ontologically indulgent. Commitments do not apply to all
agents, no matter what they are like; they apply to only those agents who have adopted
the grounds of the commitments. This is a more satisfying explanation than the one that
posits an eternally true, universally applicable wide scope principle, and explains my
irrationality merely by appealing to the fact that I have violated the principle.
By analogy, consider Kant’s classic distinction between hypothetical and categorical
imperatives.[2] Kant assumes, quite naturally,[3] that hypothetical imperatives are
easier to explain than categorical ones, because in explaining the former we can appeal to
the nature of the condition that makes it hypothetical. For example, the sentence ‘If you
want to get to Harlem, you ought to take the A train’ seems like a genuine (true)
imperative so long as taking the A train is the best way of getting to Harlem. And the
truth of this imperative seems metaphysically unproblematic precisely because it is
grounded in a condition. If you don’t want to get to Harlem, the imperative simply does
not apply to you. However, it is far more difficult to see how the sentence ‘You ought to
take the A train to Harlem’, which expresses a categorical imperative, could be true. No
condition can explain it; the imperative must apply generally. But then we need an
explanation of what makes it the case that you ought to take the A train to Harlem that
appeals only to facts about the way the world is independently of anything special about
you. So just as there is a natural sense in which explaining the truth of hypothetical
imperatives is easier than explaining the truth of categorical ones, there is a natural sense
in which explaining the existence of rational commitments is easier than explaining the
existence of rational requirements.

2. Kant (1997: 4:419). See discussion in Schroeder (2004) and (2005).

3. Though for skepticism on this front see Korsgaard (1997) and Hampton (1998).
There is something to this argument, but I am not persuaded by it. For one thing,
I do not think moral commitments can explain the existence of moral requirements. So
the analogous claim about rationality would strain my analogy between the types of
commitments and, more importantly, lose some of its force—for if moral requirements
must be conceived of as eternally true, applying to all agents independently of what they
are like, etc., then it is unclear why rational requirements should not be conceived of in
the same way.
I suspect that facts about rational requirements and rational commitments are both
fundamental, and that neither can be explained away as a relic of the other. However, if
there were strong reasons for thinking that commitments explain the existence of
requirements then this would be grist for my mill, since one main project of this
dissertation is to argue for the independent importance of commitment as a normative
concept. For now this leaves me in a position of interested agnosticism.
APPENDIX C: MORAL COMMITMENTS AND “SUCCESS ENOUGH”
At the end of chapter three I began to suggest that, given the cognitive limitations of
human beings, it is often the case that satisfying one’s rational commitments is “success
enough.” I returned to this point in chapter six when I considered the process argument
given by Niko Kolodny, which I interpreted along similar lines. Since I’ve argued
(chapter five) that rational commitments and moral commitments are ultimately instances
of the same commitment relation, it is natural to wonder whether an analogous claim
holds in the moral case. Is it the case that, given human limitations, satisfying our moral
commitments is sometimes success enough?
I think it is indeed the case, but the claim requires some explanation. I will not
even come close to adequately motivating it here. But I’ll give a sketch of the general
idea, and will attempt to spell it out more convincingly in the future.
In claiming that sometimes the best we can do is to satisfy our rational
commitments, and that this can be success enough, I did not endorse the view that cases
in which an agent satisfies a rational commitment and thereby violates a rational
requirement are ones in which the agent is perfectly rational. On the contrary, I merely
contended that it is often the case that the degree of rationality we attain in virtue of this
sort of commitment satisfaction is the best that can be expected of us.
For example, we simply do not have the time, energy, or cognitive power to
consistently evaluate the rationality of each of our normative beliefs.[4] But we all have
irrational normative beliefs. Instead of engaging in a futile attempt to locate each and
every one, we typically preserve a stock of basic beliefs that are relatively immune to
revision, and we allow these beliefs to guide our conduct. Undoubtedly this eventuates in
our forming some intentions that a perfectly rational agent would never form. What,
though, is the alternative for agents like us?
4. Even the best philosophers don’t have the time, energy, or cognitive power to do this;
normal people certainly lack the time, energy, and cognitive power, and probably also the
inclination. It is probably rational to lack this inclination.
My thought is that the same is true of moral commitments. Satisfying them can
sometimes suffice to make us as moral as can be expected, even though this may involve
the violation of moral requirements.
Suppose that morality requires considering the interests of everyone, and is thus
in some substantial way impartial.[5] Suppose further that, as a consequence, I am morally
required to produce a benefit of +10 for S when the next best alternative is producing a
benefit of +5 for W (where S and W are arbitrary people). Now add another wrinkle to
the story. Imagine that some forms of moral commitment—for instance, the
commitments involved in friendship and love—are constituted by a dedication to another
person that essentially involves partiality. And suppose that, in at least some cases, living
up to these commitments, or being a good friend or a good lover, requires producing
benefits of e.g. +5 for the person to whom one is committed instead of producing benefits
of +10 for a stranger. Then we would have cases in which the commitments of friendship
and love conflict with the requirements of morality.
This may or may not be an accurate description of normative reality. But it
undoubtedly appeals to a theory of friendship and love that is at least implicitly accepted
by a huge number of human beings. Many people think that a good life is one in which
precisely these sorts of commitments are satisfied. I am inclined to think that such a life
can be a good human life even if the moral theory outlined in the previous paragraph is
true. And the explanation is very similar to the one I offered in the rational case. Most of
us are just incapable of living up to all the demands of morality. (I do not think it would
be hard to tell a story according to which the natural bonds of human sympathy and
affection, operating from infancy, instill in most of us an inescapable desire to love and
be loved.[6]) We are not cut out for moral sainthood, and the reasons are just as
insurmountable as the ones that make us inevitably fall short of rational perfection.

5. This is admittedly a contentious assumption, but it is one that I find very attractive. The
point here is not to defend it, or to defend the other contentious assumptions I go on to
make. My goal is just to sketch a view that I find interesting, not crazy, and worthy of
further scrutiny.

6. It is also probably reasonable to think that many of our most biologically entrenched
desires are both inescapable and irredeemably egoistic. This is part of what makes
moderate moral success “enough.” It is very hard for some of us to accomplish even this,
and so we are worthy of praise when we do.