Getting Our Act(s) Together:
The Normativity of Diachronic Choice
by
Vishnu Sridharan
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PHILOSOPHY)
August 2022
Copyright 2022
Table of Contents:
LIST OF FIGURES
ABSTRACT
CHAPTER 1
CHAPTER 2
CHAPTER 3
CHAPTER 4
CHAPTER 5
BIBLIOGRAPHY
List of Figures:
Figure 1. Decision Tree for EXTENDED TROLLEY
Figure 2. Decision Tree for SIMULTANEOUS TROLLEY
Figure 3. Decision Tree for SEQUENTIAL TROLLEY
Figure 4. Decision Tree for REVERSE SEQUENTIAL TROLLEY
Figure 5. Decision Tree for GREAT BEGINNING
Figure 6. Decision Tree for MESSY EXIT
Figure 7. Decision Tree for DUAL-FRONT WAR
Abstract:
Broadly speaking, there are two non-consequentialist ways to determine whether a
particular action is morally wrong or impermissible. First, one can examine the principle or
maxim under which that action falls. For instance, according to the Kantian framework, an action is
wrong if the maxim it falls under cannot be properly universalized. Second, one can examine the
properties of the action itself. For instance, according to at least some contractualists, an action is
wrong when that action cannot be properly justified to others.
In my dissertation, I argue that each of these approaches is flawed, especially when
applied to cases involving diachronic or dynamic choice. In such cases, I argue, we cannot
determine whether an action is morally wrong or impermissible without examining the
diachronic action of which it is a part. For instance, according to the approach I favor, to
determine whether a certain war maneuver is permissible, we need to examine the larger war of
which it is a part. Similarly, to determine whether a particular stage in a medical treatment is
permissible, we need to examine the complete medical treatment of which it is a part.
Unless our non-consequentialist theory is sensitive to the peculiarities of diachronic
choice, it will miss something of fundamental importance about actions that are intuitively
grouped together as components of a plan or steps in the pursuit of a larger objective. Perhaps the
most significant revision that is necessary, I argue, is that the permissibility of an agent’s action
may depend on the actions that she has already performed and the evidence she had available
when she performed them.
My dissertation proceeds in five chapters:
Chapter 1: When Right Actions are Wrong in Principle
In Chapter 1, I argue against those who determine the wrongness of an action based on
the principle under which it falls. My primary target in this chapter is Scanlon’s contractualism,
though I also show how the basic critique might apply to contractarian and Kantian positions
as well. According to the contractualist, an action is morally wrong if any set of principles that
disallowed it could not be reasonably rejected by suitably informed and sufficiently rational
individuals. At least as developed by Scanlon, an individual needn’t be either fully informed or
ideally rational for her rejection of such principles to be relevant; instead, the individuals who
evaluate candidate principles simply need to be ‘reasonable,’ having decent access to
information, their reasons, and the strength of their reasons. Chapter 1 argues that the potential
cognitive limitations of reasonable people undermine the claim that an action is wrong if any set
of principles that disallowed it could not be reasonably rejected. More specifically, the fact that
reasonable individuals may have false empirical beliefs, alongside the fact that such principles
need to control for reasonable mistakes in their application, ought to lead us to doubt that actions
prohibited by contractualist principles are invariably wrong.
Chapter 2: Procedural Chances and the Equality of Claims
In Chapter 2, I begin my critique of the view that we can determine whether an action is
impermissible in a purely forward-looking manner. In particular, Chapter 2 critiques views
according to which an action is wrong if it cannot be properly justified to others on the basis of
the agent’s evidence or certain objective chances at the time of her action. My two primary
targets in this chapter are Johann Frick’s ex ante contractualism and Caspar Hare’s analysis of
what it means to “wish well” to others. Broadly speaking, according to these views, we can
determine whether an action is wrong by examining the likelihood that others will be harmed by
it, conditional on the agent’s evidence, or by examining certain objective chances that obtain at
the time of the action. Chapter 2 both points out the shortcomings of these views and introduces
the notion of procedural chances. In the context of my dissertation, the notion of procedural
chances is important because it suggests that, at least in certain contexts, determining whether an
action is wrong requires examining the procedure of which the action is a part.
Chapter 3: The Dynamics of Moral Thresholds
In Chapter 3, I continue my critique of views that determine the permissibility of an
action in a purely forward-looking manner. The specific topic of Chapter 3 is what many have
called “Threshold views,” according to which it is permissible to violate moral constraints when
the expected value of the outcome is high enough. I first show that, if our version of the
Threshold View fails to take into account actions that the agent has already performed, it will
result in judgments of permissibility that are sensitive to seemingly arbitrary differences between
synchronic and diachronic actions. In order to avoid this unappealing result, we require an
interpretation of a Threshold view according to which certain sequences of actions are viewed as
part of a morally relevant group. This broader perspective generates compelling results not only
in cases that involve moral thresholds, but also with respect to much larger questions dealing
with the proportionality of war.
Chapter 4: The Composition of Risk
In Chapter 4, I present and defend a version of ex ante contractualism, which I call
Composite Contractualism, according to which the justification of an action is not purely
forward-looking. The primary hurdle to adopting Composite Contractualism is that, if we were to
do so, it would be difficult to accommodate the intuition that medical experimentation is almost
always morally impermissible. My first task in this chapter is to demonstrate that salient attempts
by the ex ante contractualist to accommodate our intuitive discomfort with medical
experimentation are unsuccessful. In light of this failure, we have two basic choices: first, we can
embrace a view such as Composite Contractualism, which would allow us to maintain our
intuitions with respect to the permissibility of risky medical treatments; or second, if we are truly
convinced that medical experimentation is almost always morally impermissible, we may need to
abandon the ex ante contractualist’s moral framework altogether.
Chapter 5: The Culpability of Criminal Attempts
In Chapters 2-4, I discuss several scenarios in which what we ought to do might depend
on what we’ve already done. In Chapter 5, I argue that, these results notwithstanding, it is not the
case that our moral or legal evaluation of actions ought to be parasitic on the diachronic actions
of which they are a part. In particular, at least as a general matter, agents who fail to complete a
diachronic action merit a different legal and moral evaluation than those who do complete it. One
legal context in which this issue arises is with respect to the treatment of incomplete and
complete criminal attempts. In most (if not all) jurisdictions in the US and UK, individuals who
perform incomplete attempts can be punished in the same manner as those who commit complete
attempts. For instance, someone who aims a gun at someone else (with the intent to kill) but
never fires can be charged with the same crime as someone who aims, fires, and misses. In this
chapter, I put forward three arguments as to why we should maintain a legal distinction between
incomplete and complete attempts, arguments that draw on action theory, normative ethics, and
legal doctrine.
Taken together, the five chapters of my dissertation make clear that, at least in certain
contexts, determining whether an action is permissible requires us to examine the diachronic
action of which it is a part. This discussion gives us strong reason to believe that justification of
our actions to others, from a non-consequentialist perspective, will not always be purely forward-
looking. If we favor a non-consequentialist theory, then, we will be forced to pay closer attention
to those contexts in which neither a general maxim nor a synchronic snapshot will suffice in
determining the permissibility of our actions.
Chapter 1
When Right Actions are Wrong in Principle
According to what we might call ‘regulatory’ views of moral wrongness, wrong actions
are those that fall under principles or maxims that are, for one reason or another, inappropriate to
regulate our own behavior or that of others. For instance, according to the rule utilitarian, actions
are wrong if and only if they are prohibited by rules that, were they to be widely accepted, would
lead to the best expected consequences. In addition, according to a variety of deontological
views, actions are wrong if and only if the maxims they fall under cannot be properly
universalized.
While each regulatory view is fleshed out differently, one of the most recent and cogent
defenses of its basic premises can be found in Scanlon’s contractualism. According to
contractualism, an action is morally wrong if any set of principles that disallowed it could not be
reasonably rejected by suitably informed and properly motivated individuals. At least as
developed by Scanlon, perhaps contractualism’s most influential proponent, an individual
needn’t be either fully informed or ideally rational for her rejection of such principles to be
relevant: instead, the individuals who evaluate candidate principles simply need to be
‘reasonable,’ having decent access to information, their reasons, and the strength of their reasons.
Using a straightforward interpretation of contractualism as my muse, in this chapter I
argue that the potential cognitive limitations of reasonable people undermine regulatory views of
moral wrongness. In particular, the potential cognitive limitations of reasonable people lead to
two types of errors that undermine the purported normative force of moral principles and
maxims, as conceived of by regulatory views in general and a natural interpretation of
contractualism in particular. First, since reasonable people have limited access to information,
their construction of principles and maxims may be rooted in factual error. If the construction of
certain sets of principles is rooted in factual error, it’s hard to believe that actions disallowed by
such principles are inevitably morally wrong. Second, in constructing principles or maxims for
the general regulation of behavior, those who reasonably reject principles must account for the
errors that people will make in the principles’ application. If principles are crafted to minimize
the harm of reasonable mistakes in application, however, it is unclear that those who are immune
from such mistakes necessarily do wrong by violating them.
While I make my point using the machinery of contractualism as a reference point, the
more general takeaway with respect to regulatory views is that, without the sufficient idealization
of both the rule makers and rule followers, the moral principles that emerge may lack normative
force because they disallow actions that are not morally wrong. If we attempt to remedy this
problem by simply idealizing both the rule makers and rule followers, as we will see, the principles
or maxims that emerge might not be well-suited for the regulation of actual (or even reasonable)
people’s behavior.
This chapter is organized into five sections. First, in Section 1, I outline the contractualist’s
notion of wrongness as developed by Scanlon. To follow, in Sections 2-4, I discuss the two types
of errors that undermine the purported normative force of contractualist principles. To close,
Section 5 responds to objections and makes clear that it is not only the contractualist who is
vulnerable to this critique.
1. Contractualist Wrongness
In this section, I present what I take to be a straightforward interpretation of Scanlon’s
contractualism. As outlined above, what is most interesting about this interpretation is not
necessarily its fidelity to Scanlon’s initial vision, though I’d be willing to defend that as well, but
instead its embodiment of a more broadly shared commitment to a regulatory view of moral
wrongness.
Let’s begin with Scanlon’s (1998, 12) account “of the property of moral wrongness
itself.” Scanlon (1998, 153) says that “an act is wrong if its performance under the circumstances
would be prohibited by any system of rules for the general regulation of behavior which no one
could reasonably reject as a basis for informed, unforced, general agreement.” At other times,
Scanlon (1998, 4) claims that an action is wrong “if any principle that permitted it would be one
that could reasonably be rejected by people” who were motivated “to find principles for the
general regulation of behavior that others, similarly motivated, could not reasonably reject.”¹ I
will work with the first formulation for two reasons. First, whether an action is disallowed by a
set of principles will often depend on the other principles in the set. As Robert M. Adams (2001,
564) puts it:
There are doubtless particular principles, and small sets of principles, that could not
reasonably be rejected—for instance, the principle that one ought in general to keep one’s
promises. But no such principle, and no small set of uncontroversial principles, will be
enough to disallow an action; for the reasons warranting and excluding exceptions
to it are indefinitely various. Hence, it
seems that our complete set of moral principles is at least implicitly involved in allowing
or disallowing any action.
¹ Scanlon actually frames this second articulation in terms of necessary and sufficient conditions (if and only if);
however, he also explicitly acknowledges that his analysis does not encompass the whole of morality, so presumably
he will accept that his account is simply providing a sufficient condition for wrongness as opposed to necessary and
sufficient conditions. Scanlon (1998, 173) for instance writes that “most of us commonly use the terms ‘moral’ and
‘morality’ to refer to a diverse set of values, and that while contractualism characterizes a central part of the territory
called morality, it does not include everything to which that term is properly applied.” I discuss this point in more
detail below.
To take a simple example of Adams’ point, many murder statutes prohibit the ‘unlawful’ killing
of a human being. In order to determine whether a particular type of killing is disallowed by such
a principle, we would first need to examine principles specifying, among other things, what
constitutes an unlawful killing. While there may be some cases in which an action is clearly
disallowed by one principle alone, we can accommodate a greater variety of relevant cases by
speaking in terms of sets of principles.
Second, just as importantly, it makes more sense to think of wrong actions as ones that
violate certain principles, with actions that do not violate those principles not having the property
of moral wrongness (and hence having the property of being morally permissible). At a
minimum, this parallels our thinking in, for instance, the criminal law, in which a sufficient
condition for an action being illegal is that it violates a law and a sufficient condition for an
action being legal is that it does not violate any law.² In this context at least, the laws specify
actions that are in violation of agreed upon principles, not those that are permitted by them—as
the latter list would be too long to be helpful in the regulation of human behavior. This aligns
with the broadly appealing (non-consequentialist) view that an individual’s everyday actions are
presumptively morally permissible unless they interfere, or at least threaten to interfere, with the
rights and liberties of others. It is only when there is some risk that my action may adversely
impact the rights or interests of others that I need consult some sort of moral list, and that will be
a moral list of actions that are impermissible. If my contemplated action in this special domain is
on the impermissible list, then I should refrain from performing it; if not, I’m in luck.
² For Scanlon on the similarity between contractualist principles and the law, see (1998, 153). This is not to say that
there are no domains in which it is more natural to think in terms of what is explicitly permitted. For instance, in
professional baseball in the U.S., a permissible bat is one piece of solid wood that is a smooth, round stick not more
than 2.61 inches in diameter at the thickest part and not more than 42 inches in length.

With this in mind, a first pass at the notion of contractualist wrongness I’ll be working with is
the following:
Contractualist Wrongness (first pass): An action is wrong if its performance would be
disallowed by any set of principles for the general regulation of behavior that no
reasonable person could reject.
Before getting to my critique, we must get a bit clearer about what Scanlon means by a
‘reasonable’ person. (Again, this is not only important to note for making sense of Scanlon’s
view but because, as discussed below, it is a common feature of regulatory views more
generally.) Scanlon (1998, 32) distinguishes between the individual who is merely reasonable
and the individual he calls the ‘ideally rational.’ According to Scanlon (1998, 32), the ideally
rational individual is idealized in at least three dimensions: possession of relevant information,
awareness of relevant reasons, and a grasp of what her reasons support. In contrast, an individual
who is merely reasonable—the individual who we care about in thinking about the acceptability
of principles for the regulation of human behavior—may according to Scanlon (1998, 32) only
have access to “a specified body of information and a specified range of reasons, both of which
may be less than complete.” With this specified body of information and specified range of
reasons, merely reasonable individuals then decide which principles to reject and accept based on
what Scanlon (1998, 194) refers to as their judgments of “the suitability of [those] principles to
serve as the basis of mutual recognition and accommodation.”³

³ While Scanlon speaks of the ideally rational agent as one who possesses full information, it’s perhaps more
common to think of an agent’s rationality as distinct from how informed she is. In particular, we might think of an
ideally rational agent as one who, with whatever knowledge she possesses, is able to reason to the conclusions that
are properly supported by this knowledge. This point won’t make much of a difference in what follows, however.

While Scanlon says that merely reasonable people, unlike the ideally rational, may only
have access to a specified body of information and set of reasons, he doesn’t provide much
guidance as to the outer bounds of this body of information or set of reasons. Given the extent to
which Scanlon is comfortable relying on the notion of reasonableness in articulating his view,
however, it doesn’t seem radical to think he might employ the notion in this context as well. In
particular, it seems fair to think that a merely reasonable person who is rejecting sets of
principles for the general regulation of behavior might do so with access to both a reasonable
amount of information and a reasonable amount of her relevant reasons. In addition, such an
individual might only have a reasonable grasp of the strength of her reasons.⁴ With this in mind,
Contractualist Wrongness becomes:
Contractualist Wrongness (final): An action is wrong if its performance would be
disallowed by any set of principles for the general regulation of behavior that no person
with merely reasonable access to information, her reasons, and the strength of her reasons
could reject.
Given the potential cognitive limitations of the community of people who are judging the
acceptability of contractualist principles, it should be of no surprise that errors might creep into
the process. More specifically, as explored in more detail below, such individuals may commit
errors in their rejection of principles, and accepted principles must take into account the extent to
which individuals will err in their application. The extent to which such principles are influenced
by error, however, severely undermines their purported normative significance.
2. Factual Error in the Construction of Principles
The first, and perhaps simplest, type of error that can creep into contractualist principles
in particular (and those of regulatory views more generally) is factual error. If the reasonable
rejection of sets of principles for the general regulation of behavior is rooted in factual error, it is
hard to believe that all actions disallowed by such principles are inevitably wrong.

⁴ For related discussion, see Frick (2015, 190).
Let’s start with an example. Assume that any set of principles that no reasonable person
could reject would disallow the importation of toys made with carcinogenic substances. Further,
assume that, given the range of scientific studies that have been conducted on Cadmium,
everyone reasonably believes that Cadmium is carcinogenic, although in fact it is not. Given
these assumptions, a principle disallowing the importation of Cadmium-containing toys could
not, it seems, be reasonably rejected. This is because a principle disallowing the importation of
Cadmium-containing toys would be viewed as a natural application of the more general principle
disallowing the importation of toys made with carcinogenic substances. However, since
Cadmium is not, in fact, carcinogenic, it’s far from clear that the importation of Cadmium-
containing toys is morally wrong. If the importation of Cadmium-containing toys is not, in fact,
wrong, then this may be a counterexample to Contractualist Wrongness.
One way the contractualist might respond to this example is to claim that there is a set of
principles that does not disallow the importation of Cadmium-containing toys that could not be
reasonably rejected by this community. More specifically, according to this response, this
community would necessarily find acceptable a set of principles that simply contained a general
principle disallowing the importation of toys made with carcinogenic substances but did not
contain a particular principle disallowing the importation of toys made with Cadmium. If this
community could not reject a set of principles that did not include one disallowing the
importation of toys made with Cadmium, then the importation of such toys would not be wrong.
It is certainly interesting to think about how fine-grained principles should be to best
regulate interpersonal relations, and this is a point to which I return below.⁵ For now, however,
all we need to establish is that a community of merely reasonable people may be such that it
reasonably rejects sets of principles that lack a more specific principle dealing with the
importation of Cadmium-containing toys. For instance, we might fill in the details to include the
fact that, in the past, the merely reasonable community only had a more general principle
disallowing the importation of toys made with carcinogenic substances, but it found that people
made too many mistakes in determining to which toys this applied. They then dedicated
communal resources toward investigating the toxicity of substances in toys and compiled a
specific list of those whose importation was not allowed. More simply, we can assume
that all the information that this merely reasonable community possesses suggests that a more
specific principle disallowing the importation of Cadmium-containing toys is a necessary part of
any set of adequate principles for the regulation of community behavior. As long as such a
merely reasonable community might exist, then such a community might insist on a more
specific principle disallowing the importation of Cadmium-containing toys and reject any set of
principles that lacked one.⁶
A more likely contractualist response to this case would be that the importation of
Cadmium-containing toys does have a wrong-making feature, namely, that it is disallowed by
principles that no one could reasonably reject. Even though this merely reasonable community’s
judgment is rooted in a factual error, according to this response, this error does not deprive the
community’s judgment of its unique normative force.
⁵ For relevant discussion, see Scanlon (1998, 198-206).

⁶ In his discussion of moral relativism, Scanlon (1998, 348-9) acknowledges that the reasonable rejection of
principles may depend on both “social conditions” and “the appeal of values we may share.” Presumably, such
social conditions might include the level of knowledge that is generally accessible to members of that society.
In considering this response, it is important for us to distinguish between the action of
importing a Cadmium-containing toy and the action of knowingly violating community
standards. Although we will question this assumption in what follows, for now let’s assume that
knowingly violating community standards is morally wrong. Even if knowingly violating
community standards is wrong, however, this does not mean that importing Cadmium-containing
toys is morally wrong. This is clearest in cases in which merely reasonable community members
do not recognize some action as an instance of importing Cadmium-containing toys. For
instance, suppose that unbeknownst to reasonable people, the substance Berilium is identical to
Cadmium. When community members import Berilium-containing toys, they neither knowingly
violate community standards nor do they import a toy made with carcinogenic substances. When
community members perform actions that lack either of these features, it seems clear that they
are not doing anything wrong. However, if we accept Contractualist Wrongness, the importation
of Berilium-containing toys is wrong. This is because, according to Contractualist Wrongness,
the importation of Cadmium-containing toys is wrong, and the importation of a Berilium-
containing toy is an importation of a Cadmium-containing toy.⁷

⁷ To this point, the contractualist might respond that what we learn from the case of Cadmium-containing toys is that
the relevant principles for the regulation of human behavior should not disallow the performance of actions
simpliciter; instead, the principles should disallow the knowing performance of actions. If a set of principles
disallowed the knowing importation of Cadmium-containing toys while remaining silent on the importation of
Cadmium-containing toys themselves, then neither importing Berilium-containing toys nor accidentally
importing Cadmium-containing toys would be wrong. The first problem with such a proposal is that we would
surely want to disallow more than the knowing performance of certain actions, as even risking certain outcomes
seems morally wrong. The second, more fundamental problem is similar to the one raised above. That is, while it is
possible that sets of principles that stay silent on the performance of particular actions (such as the importation of
Cadmium-containing toys) might not be reasonably rejected, certain merely reasonable communities might insist on
principles without this epistemic component. Such a community might insist, based on its own experience, that the
best principles for the regulation of behavior lack such epistemic conditions, and that such epistemic conditions are
only important for determining how much blame individuals deserve for violating such principles. (The fact that
Scanlon adopts this sort of approach is strong evidence that such communities might be reasonable to insist upon it.)
As long as a community might reasonably insist on principles that lack these epistemic components, our problem
with Berilium-containing toys will remain unaddressed.
One last contractualist response to this case, which would be much more radical, is worth
considering. According to this response, what the contractualist ought to claim is that an action is
wrong if it is disallowed by sets of principles that no fully informed person could reject. Since a
fully informed person could reject a set of principles prohibiting the importation of Cadmium-
containing toys, this response would avoid the problem discussed above. One problem with this
approach is that, if we adopt it, then contractualist principles may be epistemically inaccessible
to those with only limited information. If such principles are epistemically inaccessible to those
who are less than fully-informed, which may be the vast majority of individuals in any given
community, then even the fully-informed will not, as Rahul Kumar (2003, 11) puts it, be
“entitled to demand” of such people, “as a matter of respect for her value as a person, that they
incorporate such principles into the broader set of principles that structure their deliberations.”
Second, and perhaps even more importantly for our purposes, even if we idealize those who are
rejecting principles for the regulation of behavior, many of those regulated will presumably be
non-idealized. As will become clearer below, insofar as the contractualist principles to regulate
behavior reflect the cognitive limitations of those governed, we have ample reason to doubt that
the principles successfully track moral wrongness.
Given the limited amount of information that merely reasonable people possess, it seems
inevitable that the reasonable rejection of some sets of principles will be rooted in factual error.
When this happens, it is difficult to maintain that the actions disallowed by such sets of
principles are, without exception, morally wrong. This provides us with initial reason to question
the strength of the link between the reasonable rejection of sets of principles and the moral
wrongness of actions that those sets of principles disallow.
8
It’s worth noting that factual errors can also result in sets of principles that no one could reasonably reject that are
incoherent. For instance, a set of principles that no one in a particular community could reasonably reject might both
prohibit the importation of goods made in a sweatshop and require the importation of a particular brand of lumber
(perhaps that brand is the only one that has agreed to certain environmental standards). If, unbeknownst to the
community, the production of that brand of lumber utilizes sweatshop labor, then the importation of that lumber will
both have the property of moral wrongness and the property of being morally required.

3. Controlling for the Erroneous Application of Principles
In the previous section, I showed that, when there is factual error in the construction of
sets of principles, the actions disallowed by such sets might not be morally wrong. In this and the
following section, I argue that we obtain a similar result when we examine the extent to which
the construction of principles must account for reasonable mistakes in application. This is a
serious problem not only for a particular interpretation of contractualism but also for regulatory
views more generally.
According to Scanlon, when individuals decide whether sets of principles are reasonably
rejectable, they must take into account the impact of such principles on the community at large. In
particular, reasonable people look not only at the individuals impacted by particular actions
allowed by the principles but also, according to Scanlon (1998, 203), “the consequences of
general performance or nonperformance of such actions and of the other implications (for both
agents and others) of having agents be licensed and directed to think in the way that that
principle requires.” For instance, Scanlon (1998, 203) points out, "if we know that we must stand
ready to perform actions of a certain kind should they be required, or that we cannot count on
being able to perform acts of another kind should we want to…these things have important
effects on our planning and on the organization of our lives.”
Given the role that contractualist principles play in the general regulation of behavior,
when merely reasonable people evaluate such principles, they will need to control for the factual
errors that they expect will occur in the application of such principles. However, insofar as
contractualist principles are designed with reasonable mistakes in mind, they may be a poor fit
for the error-free. In fact, as we will see, there will be a range of situations in which those who
are error-free may be morally required to violate such principles.
As a starting point, it will be helpful to work with a model of a somewhat stylized case.
Let’s assume that public bus drivers have two levels of skill: Average and Expert, with 98% of
bus drivers being Average and 2% of bus drivers being Expert. 90% of bus drivers are Average
bus drivers who know they are Average, 8% of bus drivers are Average bus drivers who falsely
believe they are Experts, and all Expert drivers know they are Experts. We can represent this in
the following table:
Proportion of Bus Drivers | Skill Level | Perceived Skill Level
90% | Average | Average
8% | Average | Expert
2% | Expert | Expert
Bus drivers often confront emergencies in which they have the choice between two
actions, one we can call ‘Safe’ and the other we can call ‘Risky.’ If any driver chooses the
Safe action in an emergency, she guarantees the second best (and second worst) outcome for her
passengers. If an Expert driver chooses the Risky action, she guarantees the best outcome for her
passengers. If an Average driver chooses the Risky action, she guarantees the worst outcome for
her passengers. Again, a table will help clarify, with some particular values for the outcomes:
Driver | Outcome of Safe action | Outcome of Risky action
Average Driver | All passengers lose both arms | All passengers die (Worst)
Expert Driver | All passengers lose both arms | All passengers unharmed (Best)
If we adopt the contractualist outlook, the Expert driver would be wrong to choose the
Risky option if any set of principles for the general regulation of behavior that could not be
reasonably rejected would disallow that choice. That is, even though the Expert driver knows
that choosing the Risky option is best for her passengers, if a principle disallowing such an
action could not be reasonably rejected, then the Expert driver would be wrong to do so. If this
were the case, the contractualist would be committed to the claim that the Expert ought to refrain
from performing an action that she knows will be best for those she owes a duty of care simply
because of a property possessed by sets of principles disallowing the action.
Before exploring this model case in a bit more detail, it’s worth putting some more
realistic cases on the table in which this phenomenon seems to be present:
Amateur Help: Lilith never finished medical school, but she knows how to perform a
tracheotomy. Lilith comes across a man in distress and is told that the paramedics are on
the way. She knows that, if she doesn’t begin the tracheotomy immediately, the man
would die; however, with immediate, skilled intervention, the man’s life would be saved.
Despite her lack of formal credentials, Lilith intervenes and saves the man’s life.
Paternalistic Intervention: Marc has bought an expensive herbal treatment for his prostate
cancer. Professionals strongly disagree as to whether the treatment provides any health
benefit, but Marc is convinced that it is the one thing he needs to get better. After
reviewing some recent studies, Marc’s friend Mary realizes that, if Marc ingests the
treatment, he would kill himself. Mary rushes to Marc’s house and sees that Marc is
about to take the supplement. Mary knocks the supplement out of Marc’s hand,
destroying it.
Defense of Others: Sujata has known Sara since the two were kids. When Sara gets a
certain look in her eye, Sujata knows that Sara is about to commit an act of horrific
violence. Sara is walking around a public park with a gun. Guns are allowed to be carried
in this park as long as the owner has the proper permit, which Sara does. Sara has the
look in her eye that only Sujata can properly decipher. Sujata tackles Sara and wrestles
away Sara’s gun, saving the lives of Sara’s potential victims.
In all the above cases, I would submit, the agent is right to take bold action even though there’s a
good case to be made that any set of principles that disallowed such an action could not be
reasonably rejected. For instance, it might very well be the case that a principle prohibiting
unlicensed medical professionals from conducting tracheotomies could not be reasonably
rejected, a principle prohibiting interference by non-professionals in other’s chosen medical
treatment could not be reasonably rejected, and a principle prohibiting tackling others and taking
away their guns just because they have a certain look in their eyes could not be reasonably
rejected. At the same time, in these cases, the agents involved know that their own intervention,
in their own specific situation, is absolutely justified. In particular, these individuals know that, if
they do not intervene, a much greater harm and injustice will occur.
In working through our model case of the Expert and Average bus driver, the underlying
structure of these more realistic cases will also become clearer. In order to evaluate and compare
candidate sets of principles, we must, as Scanlon (1998, 195) puts it, “form an idea of the
burdens that would be imposed on some people in such a situation if others were permitted to do
X…We then need, in order to decide whether these objections provide grounds for reasonably
rejecting the proposed principle, to consider the ways in which others would be burdened by a
principle forbidding one to do X in these circumstances.” Since we’ll be considering which of a
number of principles to include in our set, we will examine how the burdens of each principle
compare with one another, as opposed to with the absence of any principle.
I would argue that, in our model case, any set of principles for the general regulation of
behavior that could not be reasonably rejected will include or entail something like the
following:
Principle 1: Upon confronting an emergency for the first time, no bus driver is allowed to
choose the Risky action.
If we adopt Principle 1—the principle that disallows all bus drivers from choosing the Risky
action—those who bear the greatest burden, and who will have the strongest complaint against
the principle, will be the passengers who lose both arms. To see Principle 1’s relative appeal, at
least according to the contractualist, let’s consider whether sets of principles that could not be
reasonably rejected might include (or entail) a principle that would allow Expert bus drivers to
choose the Risky action. One obvious candidate would be:
Principle 2: Upon confronting an emergency for the first time, no bus driver is allowed to
choose the Risky action unless that bus driver is an Expert.
If Principle 2 is included in a set, 10% of bus drivers would choose the Risky action when faced
with emergencies and 90% would choose the Safe action. More specifically, the 2% of drivers
who are Experts would choose the Risky action because they know they are Experts, the 8% of
drivers who are Average but falsely believe they are Experts would choose the Risky action, and
the remaining 90% would choose the Safe action because they know they are Average. This will result in 90% of
passengers involved in emergencies losing both arms, 8% of passengers dying, and 2% emerging
unharmed.
In table form:
Percentage of Emergencies | Principle 2 Outcome
90% | Passengers lose both arms
8% | Passengers die
2% | Passengers unharmed
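The 90/8/2 distribution in this table can be checked mechanically. The following sketch (mine, not the dissertation's; Python is used purely for illustration) encodes the population table and the outcome rules, under the author's assumption that all bus drivers are equally likely to confront emergencies:

```python
# Outcome distribution under Principle 2 ("Risky allowed only for Experts").
# Drivers act on their *perceived* skill; outcomes depend on their *actual* skill.
# Assumes all drivers are equally likely to confront an emergency.

population = [
    # (share of drivers, actual skill, perceived skill)
    (0.90, "Average", "Average"),
    (0.08, "Average", "Expert"),
    (0.02, "Expert", "Expert"),
]

def outcome(actual_skill: str, action: str) -> str:
    if action == "Safe":
        return "passengers lose both arms"  # guaranteed middle outcome
    return "passengers unharmed" if actual_skill == "Expert" else "passengers die"

distribution: dict[str, float] = {}
for share, actual, perceived in population:
    # Under Principle 2, a driver chooses Risky iff she takes herself to be an Expert.
    action = "Risky" if perceived == "Expert" else "Safe"
    result = outcome(actual, action)
    distribution[result] = distribution.get(result, 0.0) + share

print(distribution)
# {'passengers lose both arms': 0.9, 'passengers die': 0.08, 'passengers unharmed': 0.02}
```

The gap between perceived and actual skill among the 8% is what generates the deaths, which is exactly the complaint pressed against Principle 2 below.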
The strongest complaint against including Principle 2 instead of Principle 1 will be the complaint
of the 8% of passengers involved in emergencies who will die as a result (as opposed to simply
losing both arms), as they bear the greatest burden. This is presumably a stronger complaint than
the complaint of the 2% of passengers who will lose both arms (as opposed to emerging
unharmed) as a result of including Principle 1 as opposed to Principle 2, so a set of principles
that included Principle 2 could be reasonably rejected.
While I explore more candidate principles below, what should be clear is that, if we adopt
the contractualist outlook, a set of principles may be reasonably rejected in virtue of the
application errors of the 8% of drivers who falsely believe that they are Experts. However, even
if any set of principles disallowing an Expert bus driver from choosing the Risky action could
not be reasonably rejected, it seems not only permissible but morally required for the Expert to
choose the Risky action. After all, the Expert driver owes her passengers some duty of care, and
if she chooses the Safe action, she is choosing an action that she knows will result in
significantly more harm to them than her other available action.

9
Assuming that all bus drivers are equally likely to confront emergencies.
A similar analysis can be put forward with respect to the more realistic cases listed above.
For instance, if we adopt a contractualist outlook on Amateur Help, it would be wrong for
unlicensed Lilith to intervene to save the man’s life if any set of principles disallowing such an
action could not be reasonably rejected. The reasonable rejectability of sets of principles, in such
a case, would depend on how many people without medical licenses would falsely believe that,
without their immediate intervention, someone will die, and the overall consequences of their
misguided interventions. If enough individuals were to have this false belief and the costs of
misguided interventions were high enough, then any set of principles that disallowed Lilith’s
intervention might not be reasonably rejected. Even if Lilith’s intervention might be disallowed
by principles that could not be reasonably rejected, however, it is still the case that the action is
one that she knows will save the man’s life while exposing him to absolutely no risk of harm.
Just like the Expert bus driver, if Lilith could easily perform an action that saves someone else’s
life and prevents great harm, it seems at least permissible (if not obligatory) for her to do so.
An initial response to the model case that is worth addressing is that we might remedy the
problem through administering some sort of driving test that could distinguish the Expert drivers
from the Average. The first problem with this remedy is that, as is suggested by the breadth of
the realistic cases mentioned above, we shouldn’t expect such a remedy to always be available.
In Defense of Others, for instance, we are unlikely to think it possible for Sujata to obtain a
certificate for being able to know when Sara is about to kill. More fundamentally, even if certain
certifications were available, this wouldn’t completely remedy the problem. For instance,
someone who knew she was an Expert driver might have a cold the day of the certification exam.
If, because of the cold, she fails the exam, or perhaps doesn’t even show up, she still should
choose the Risky option if she knows that doing so will prevent grave harm to her passengers.
After all, what’s important in such a case is (presumably) the well-being of the passengers, and
the presence of the certification exam is merely a means to this end.

10
While my primary goal in this paper is to show that an action may have the contractualist property of wrongness
without being morally wrong, these cases also suggest that some actions that the contractualist considers to be
permissible will, in fact, be morally wrong. More specifically, while the contractualist will consider it permissible
for the Expert to choose the Safe action, it seems wrong for her to choose the action that she knows will result in
gratuitous harm to those to whom she owes a duty of care.
In the next Section, I consider two ways in which the contractualist might respond to this
model case (and its real-life counterparts): first, to claim that the Expert driver choosing the
Risky action would not be disallowed by sets of principles that no one could reasonably reject,
and second, to claim that the Expert driver acts wrongly in choosing the Risky action. To follow,
in Section 5, I respond to objections and suggest that the problems discussed above plague more
than the contractualist.
4. Two Contractualist Responses
4.1 More Permissive Principles
One way the contractualist might respond to the model case discussed in Section 3 is to
argue that an Expert driver choosing the Risky action is not disallowed by any set of principles
that no one could reasonably reject. If a set of principles that allowed for this choice could not be
reasonably rejected, then, according to the contractualist, the action would not be morally wrong.
One factor that we should keep in mind as we think through these sets of principles is that
principles should not be so fine-grained or complex that it will be difficult for reasonable people
to apply them. Scanlon (1998, 205) voices this concern when he writes:
There is an obvious pressure toward making principles more fine-grained, to take account
of more and more specific variations in needs and circumstances. But there is also
counterpressure arising from the fact that finer-grained principles will create more
uncertainty and require those in other positions to gather more information in order to
know what a principle gives to and requires of them.
While Scanlon recognizes that principles that are too fine-grained will make it difficult for
individuals to know what such principles require of them, there does not seem to be a recognition
that a similar difficulty may plague individuals even with respect to coarse-grained principles.
For instance, in our model case, the 8% of bus drivers who falsely believe they are Experts may
have great difficulty knowing what relevant principles require of them, and this is a difficulty
that must be taken into account when principles for the general regulation of behavior are
evaluated.
With this in mind, we can think through more candidate sets of principles for the
regulation of bus drivers’ behavior. In Section 3, we considered the following two principles:
Principle 1: Upon confronting an emergency for the first time, no bus driver is allowed to
choose the Risky action.
Principle 2: Upon confronting an emergency for the first time, no bus driver is allowed to
choose the Risky action unless that bus driver is an Expert.
If Principle 2 were included in our set instead of Principle 1, it seemed clear that the complaints
of the 8% of passengers who would die in emergencies (as opposed to losing both arms) would be
much stronger than the complaints of the 2% who would lose both arms (as opposed to emerging
unharmed) as a result of the inclusion of Principle 1. However, this does not mean that there is
no principle that might be preferable to Principle 1. For instance:
Principle 3: Upon confronting an emergency for the first time, no bus driver is allowed to
choose the Risky action unless she knows she is an Expert driver.
Principle 3 puts more stringent conditions on when it is permissible for the Risky action to be
chosen than Principle 2. If we include Principle 3 in our set, then those who falsely believe they
are Experts (and those who truly believe this but do not know it) will not be allowed to choose
the Risky action. If we adopt Principle 3, then, any Average driver who chooses the Risky action
will do so in violation of our set of principles. However, just because the language of Principle 3
differs from the language of Principle 2, this does not mean that its real-world impact will be
much different. In particular, if most of the 8% of Average drivers who believe they’re Experts
take themselves to know they’re Experts, Principle 3 will still have disastrous effects as
compared to Principle 1.
At this point, we might think it better to focus on the credence it is rational for an agent to
have in a particular proposition instead of her knowledge. In doing so, we would align ourselves
with many who have discussed the contractualist approach to risk imposition in terms of
epistemic probability, or the probability that a proposition is true conditional on an agent’s
evidence.
In this spirit, consider:
Principle 4: Upon confronting an emergency for the first time, no bus driver is allowed to
choose the Risky action unless her rational credence that she is an Expert driver is
sufficiently high.
The hope of Principle 4, I take it, is that, while an agent’s knowledge might not be transparent to
her, her rational credence might be.
That is, while the Average driver might falsely believe that
she knows that she is an Expert driver, perhaps she is less likely to falsely believe that the
credence it is rational for her to have, given her evidence, is sufficiently high.

11
See especially Frick (2015, 182), as well as Kumar (2015, 27, fn. 1) and (2015, 30, fn. 3).
12
For discussion of the non-transparency of knowledge, see Williamson (2000).
There are several reasons why we should be skeptical that framing principles in terms of
rational credence or epistemic probabilities will be helpful here. First, just as an agent’s
knowledge is not transparent, the mental states that constitute an agent’s evidence are
likely not transparent. If the mental states that constitute an agent’s evidence are not transparent,
then the probability of a given proposition, given the agent’s evidence, will not be transparent
either. Second, and just as importantly, even if an agent’s evidence were transparent to her, it
would still be possible for her to reasonably be mistaken about the probability of some proposition conditional on that
evidence. In fact, as discussed above, Scanlon (1998, 32) endorses a very similar claim when he
writes that only an “ideally rational” agent reasons “flawlessly…about what one’s reasons
support.” With this in mind, just as 8% of drivers falsely believe themselves to be Experts, 8% of
drivers might falsely believe that their rational credence is sufficiently high to justify the choice
of the Risky option. For these reasons, shifting our terminology from knowledge to rational
credence is not a viable solution to this problem.
In discussing cases involving risk, many contractualists assume that the magnitude of the
relevant risk is epistemically accessible to all reasonable agents. As Kumar (2015, 46) puts it,
“[s]ince the magnitude of the risk an activity imposes on others is pertinent to the permissibility
of its pursuit, the magnitude of that risk ought to be reasonably epistemically accessible to a
person deliberating about the permissibility of its pursuit.” What our model bus driver case
illustrates is that, at least in general, we cannot expect individuals to have epistemic access to the
risks of their available actions. At a minimum, if we work with Scanlon’s notion of reasonable
people, we must make allowances for cases in which—due to cognitive limitations—individuals
are mistaken about these risks. If we can expect some number of individuals to be mistaken
about these risks, then our principles for the general regulation of behavior cannot ignore the
consequences of such mistakes. This leaves open the possibility that, from a contractualist
perspective, any set of principles for the general regulation of behavior that disallows the Expert
driver from choosing the Risky action could not be reasonably rejected.

13
At a minimum, this sort of assumption is made by Frick (2015, 182).
4.2 Impermissible Expert
In light of the discussion in Section 4.1, it seems that the contractualist is committed to
the claim that it would be wrong for the Expert driver to choose the Risky action. In this section, I
examine how the contractualist might defend this commitment.
At least at first glance, what makes the claim that the Expert driver acts wrongly by
choosing the Risky action hard to defend is that, in doing so, the Expert driver acts in the manner
that she knows will be best for all of those to whom she owes a duty of care. In fact, it’s hard to
imagine that the Expert driver has any compelling reason to choose the Safe action, let alone that
it would be wrong for her to choose the Risky action. While this is clear enough in our model
case, it’s even clearer when we look at the more realistic cases put forward in Section 3. For
instance, if an individual without a medical license knew she could save someone’s life with
immediate action, she should do so even if a general principle prohibiting her action could not be
reasonably rejected. Similarly, if someone knew that only the destruction of a friend’s medicine
would save his life, she should destroy the medicine even though a principle prohibiting her
action could not be reasonably rejected. Lastly, if someone knew just by looking that only a
forceful intervention could prevent an unjustified murder, she should prevent the murder even if
a principle prohibiting such a forceful intervention could not be reasonably rejected.
To this point, the best argument the contractualist might offer is to claim that the fact that
an action is disallowed by principles that no one could reasonably reject makes the action wrong.
For instance, Scanlon (2003, 437) writes that “an action’s being wrong in [the contractualist’s]
sense makes it morally wrong in the perfectly general sense of that term, since being forbidden
by the standards contractualism supports is one way of being forbidden by standards that there is
reason to treat as authoritative.” (To be clear, at other times, Scanlon denies this claim).
If, as
Scanlon (2003, 435-436) argues, what makes an action morally wrong is violating “standards of
conduct that everyone has good reason to regard as authoritative,” then the contractualist can
simply cite this fact in attempting to explain why the Expert driver is wrong to choose the Risky
action. What seems implausible about this argument is that, in certain cases such as this one (and
its more realistic counterparts), it’s not clear that everyone has good reason to consider
community standards as authoritative. In particular, while it is clear that Average drivers have
good reason to regard Principle 1 as authoritative, it’s far from clear that the same can be said
about Expert drivers. In fact, given the duty of care the Expert driver owes her passengers, as
well as her knowledge that the action that will allow them to keep both arms, it seems like a case
where, all things considered, violating community standards is the morally right, if not the
morally required, thing to do.
14
More specifically, in response to objections from Judith Thomson and Frances Kamm, Scanlon writes that what
makes an action wrong “are the properties that would make any principle that allows it one that it would be
reasonable to reject.” Scanlon (1998, 391, fn. 21), in response to Thomson (1990, 30, fn. 19) and Kamm (2002, 324,
331-332). See also Wallace (2002, 461-462).

Before exploring the challenge that my argument poses for other views of moral
wrongness, it is worth briefly examining whether a weaker version of contractualism is immune
from my critique. Specifically, according to Johann Frick, being disallowed by a set of principles
that no one could reasonably reject might not make actions wrong, but instead simply be a
‘wrong-making property’ of actions. As Frick (2015, 220) puts it:
[W]hile the contractualist formula captures an important class of pro tanto moral reasons
that contribute to making actions right or wrong in the domain of interpersonal morality,
reasonable rejectability in the contractualist sense is not the only relevant consideration in
determining whether an action is right or wrong, all things considered.
Frick (2015, 221) then claims that, when an action is disallowed by principles that no one could
reasonably reject, that action fails to be “equitable,” or fails in terms of how it “treat[s] each
person individually.” While Frick’s analysis may help the contractualist deal with cases of
interpersonal aggregation—the cases in which he has the most interest—it does not seem to help
with the cases considered in Section 2 and Section 3 above. Let’s think of ‘inequitable’ actions
as ones that fail to give proper weight to the claims of some individual or group of individuals. In the
example discussed in Section 2, the importation of Cadmium-containing toys was disallowed by
principles that no one could reasonably reject in light of merely reasonable individuals’
erroneous beliefs. Although this action is disallowed by contractualist principles, it is not at all
clear that the action is inequitable or fails in how it treats each person individually. Along similar
lines, were the Expert driver to choose the Risky action, this does not seem at all inequitable
because, by doing so, she treats each passenger as an individual with unique interests that are
distinct from any group of which she or he is a part. More generally, what my argument suggests
is that, because of the cognitive limitations of reasonable people, the actions that are disallowed
by contractualist principles will sometimes lack wrong-making properties; if this is the case, then
Frick’s retreat will be unsuccessful.
5. Against a Regulatory Approach to Moral Wrongness
Up to this point, I have argued that we should reject the contractualist claim that actions
that are disallowed by any set of principles that no one could reasonably reject are morally
wrong. In this final Section, I aim to accomplish two tasks: first, I argue that the contractualist’s
problems cannot be solved through further idealization; and second, I suggest that the problems
highlighted above apply to many views of moral wrongness.
The basic problem with the contractualist view of moral wrongness is that, without
sufficient idealization, the principles that govern the behavior of reasonable people may lack
normative force. As argued in Section 2, this is most obvious insofar as merely reasonable
individuals may have false beliefs. If a community’s regulative principles are rooted in factual
error, it’s hard to believe that the actions such principles disallow are necessarily wrong. This
phenomenon is also present insofar as a community’s principles must take into account its
expectations regarding the erroneous application of principles. While controlling for the
erroneous application of principles is necessary for the general regulation of behavior, the
violation of such principles, at least by those who are not prone to such errors, does not seem
inevitably wrong.
Given that it is the lack of sufficient idealization that leads us to question the normative
force of the contractualist’s principles, one might wonder if the easiest remedy is simply to
idealize people even further.
In particular, perhaps Scanlon should theorize about ideally
rational, fully informed agents instead of simply reasonable ones. If we idealize in this manner,
however, the contractualist principles will simply run into a different normative problem. More
specifically, if contractualist principles are ones that govern the behavior of ideally rational
individuals, it might not be the case that non-ideal individuals ought to attempt to follow them.
This will be particularly salient in cases in which, were the non-ideal to attempt to follow
principles that govern the behavior of the ideally rational, the results would be catastrophic. In
this way, while failing to sufficiently idealize gives us contractualist principles that lack
normative force because the actions they prohibit might not be wrong, excessive idealization
gives us contractualist principles that are not suitable for the general regulation of behavior,
at least of non-ideal or merely reasonable individuals.

15
For an excellent discussion of the perils of idealization, see Enoch (2005).
While my argument above takes contractualism as its muse, a number of views claim that
the moral wrongness of actions depends on whether the principles or maxims under which such
actions fall are suited to govern or guide the behavior of others. As noted above, I call such
views ‘regulatory’ because they define wrong actions as those that fall under principles or
maxims that are inappropriate to regulate our own behavior or that of others. In addition to
contractualism, there are at least three families of views that adopt a regulatory approach to
moral wrongness and thus might be challenged by my critique. First, and perhaps most
obviously, is the rule consequentialist. According to the rule consequentialist, an act is wrong if
it is prohibited by a set of rules whose internalization would have the best (or the best expected)
consequences. For instance, Brad Hooker (2000, 32) argues that:
An act is wrong if and only if it is forbidden by the code of rules whose internalization by
the overwhelming majority of everyone everywhere in each new generation has
maximum expected value in terms of well-being.
Notably, Hooker (2000, 113) acknowledges that, in evaluating the expected benefits or value of
different codes, we must “pay attention to the affective and cognitive limitations of human
agents,” which is exactly what leads to the problem discussed above. Since the codes that would
have the highest expected value for cognitively limited agents might reflect the same errors
discussed above, the view would face the same sort of normative challenge.
The second family of views that is threatened by my critique is that of the so-called
‘contractarian,’ inspired by Hobbes, who argues that morality consists of principles that
individuals voluntarily agree to in order to strategically pursue their self-interests. A society in
which individuals do not agree to such principles may be worse than one in which they do,
people might reason, and this provides sufficient reason for all to abide by such principles. David
Gauthier (1986, 9) provides a modern defense of this view, arguing that:
Moral principles are introduced as the objects of fully voluntary ex ante agreement
among rational persons. Such agreement is hypothetical, in supposing a pre‐moral
context for the adoption of moral rules and practices. But the parties to agreement are
real, determinate individuals, distinguished by their capacities, situations, and concerns.
In so far as they would agree to constraints on their choices, restraining their pursuit of
their own interests, they acknowledge a distinction between what they may and may not
do.
Gauthier, like Hooker, acknowledges that the parties to the agreement are “real” and have
particular, determinate “capacities.” Even though Gauthier (1986, 61) assumes such individuals
have “perfect information about possible actions, possible outcomes, and all preferences over
these outcomes, and that it is common knowledge that everyone has such perfect information,”
he does not assume that such individuals have either perfect information about all empirical
matters or are perfect reasoners. Insofar as Gauthier acknowledges the variety of cognitive and
practical limitations of parties to his moral agreement, we should question the normative force of
the agreed upon principles or the extent to which such principles will track moral wrongness.
The third and last family of views that might be threatened by my critique is Kantian.
Though there are several ways we might develop a Kantian view, there is clearly some similarity
between the contractualist claim and the claim that any action that violates the first formulation
of the Categorical Imperative, the Formula of Universal Law, is wrong. According to Kant’s
(1998, 421) Formula of Universal Law (FUL): “Act only on that maxim through which you
can at the same time will that it should be a universal law.” As Onora O’Neill (1990, 128-9)
convincingly argues, FUL asks agents “to shun principles that could not be adopted by others,
that is, that could not be universal laws.” In other words, O’Neill (1990, 84) tells us, FUL
proposes “a standard against which the principles agents propose to act on, of whatever sort, may
be tested.” According to Kant (1998, 424), one context in which we cannot will that a maxim be
“raised to the universal law of nature” is when “such a will would contradict itself.” This is the
sort of contradiction that is relevant to the above discussion, as O’Neill (1990, 134) describes it,
because the contradiction is “between the thought experiment of universalizing a maxim and the
background conditions of the lives of specifically finite rational agents.” In examining whether
this sort of contradiction is present, we consider whether our maxim could be universalized
among beings that are not fully ideal, according to O’Neill (1990, 134), “not only because their
rationality is limited, but because they are finite in many ways. They have limited capacities to
act that can be destroyed or undercut in many ways.” Insofar as we determine whether our
maxims can be universalized among limited agents, however, the maxims that are
universalizable may fail to have the desired normative force.
Like the contractualist, the rule consequentialist, the contractarian, and the Kantian
focused on the Formula of Universal Law all define wrong actions in terms of the failure of
some principle or maxim under which the action falls to regulate communal behavior.[16] The
challenge all these regulatory views face is that the principles that govern communal behavior
often reflect the cognitive limitations of community members. Insofar as these principles reflect
individuals’ cognitive limitations, it’s hard to believe that the actions that they prohibit are
inevitably wrong. In response, if we simply idealize the community of rule makers and rule
followers, then it might not be the case that we, as cognitively limited beings, should even
attempt to follow the moral maxims or principles that result. If cognitively limited beings such as
ourselves ought not even attempt to be guided by such principles, then the principles will not
meet the regulatory views’ own criteria for being well-suited for interpersonal or communal
governance.
6. Conclusion
In this chapter, I discussed two problems for regulatory views of moral wrongness. First,
I argued that, insofar as principles and maxims may be rooted in factual error, it is implausible to
think that the actions disallowed by such principles are invariably wrong. Second, I showed that,
insofar as such principles must control for reasonable errors in application, it is implausible to
think that, whenever the error-free violate such principles, their actions are invariably wrong.
These problems illustrate a fundamental dilemma for proponents of all such views—including
rule consequentialists, contractarians, and Kantians: while the normative force of their
principles and maxims depends on the idealization of agents and their peers, the suitability of
[16] Given these three families, it’s worth noting that Parfit’s Triple View is likely vulnerable to my critique as well.
Parfit incorporates what he sees as the best versions of rule consequentialism, Scanlon’s contractualism and Kantian
contractualism into his theory, which claims that:
[E]veryone ought to follow Rule Consequentialist principles that [would be optimific if universally
accepted] because these are the only principles with universal acceptance which everyone could rationally
choose, and that no one could reasonably reject (2011, 244).
their principles and maxims for interpersonal and communal governance is severely undermined
by it, at least for the merely reasonable among us.
Sources:
Adams, Robert Merrihew. (2001). Scanlon’s Contractualism: Critical Notice of T.M. Scanlon, What We
Owe to Each Other. The Philosophical Review, 110(4), 563-586.
Brown, Jessica. (2008). Subject-Sensitive Invariantism and the Knowledge Norm for Practical Reasoning.
Noûs, 42(2), 167-89.
Frick, Johann. (2015). Contractualism and Social Risk. Philosophy and Public Affairs, 43(3), 175-223.
Gauthier, David. (1986). Morals by Agreement. Oxford: Oxford University Press.
Hooker, Brad. (2000). Ideal Code, Real World: A Rule-Consequentialist Theory of Morality. Oxford:
Oxford University Press.
Kamm, Frances. (2002). Owing, Justifying, and Rejecting. Mind, 111(442), 323-354.
Kant, Immanuel. (1785, repr. 1998). Groundwork of the Metaphysics of Morals. Trans. Mary Gregor.
Cambridge, UK: Cambridge University Press.
Kumar, Rahul. (2003). Reasonable Reasons in Contractualist Moral Argument. Ethics, 114, 6-37.
Kumar, Rahul. (2015). Risking and Wronging. Philosophy and Public Affairs, 43(1), 27-51.
O’Neill, Onora. (1990). Constructions of Reason: Explorations of Kant’s Practical Philosophy.
Cambridge: Cambridge University Press.
Parfit, Derek. (2011). On What Matters, Vol. 2. Oxford: Oxford University Press.
Radford, Colin. (1966). Knowledge—By Examples. Analysis, 27(1), 1-11.
Reed, Baron. (2010). A Defense of Stable Invariantism. Noûs, 44(2), 224-44.
Roeber, Blake. (2018). The Pragmatic Encroachment Debate. Noûs, 52(1), 171-95.
Scanlon, T.M. (1998). What We Owe to Each Other. Cambridge, Mass.: Harvard University Press.
Scanlon, T.M. (2003). Replies. Ratio, 16, 424-39.
Thomson, Judith. (1990). The Realm of Rights. Cambridge, Mass.: Harvard University Press.
Wallace, R. Jay. (2002). Scanlon’s Contractualism. Ethics, 112, 429-470.
Williamson, Timothy. (2000). Knowledge and its Limits. Oxford: Oxford University Press.
Chapter 2
Procedural Chances and the Equality of Claims
In certain cases, it is natural to think that two or more individuals have an equal claim to
some indivisible good. To take an example put forward by John Broome (1990, 91), before a
game both competitors may have an equal claim to the first move or the more advantageous
position on the field.[17] Along similar lines, there are many cases in which individuals have an
equal claim to avoid some indivisible harm. To use another example from Broome (1990, 91),
each member of a platoon of soldiers might have an equal claim to avoid a dangerous mission
that is necessary to save the entire group or to advance the war effort more generally. In cases
such as these, we face significant pressure to allocate the good or harm using a procedure that
gives each individual an equal chance of obtaining that to which she possesses an equal claim.
In this chapter, I aim to accomplish three tasks. First, I introduce the notion of procedural
chances. This notion of procedural chances is important, I argue, because when individuals have
an equal claim to some good (or to avoid some harm), each ought to have an equal procedural
chance to obtain that good (or avoid the harm). Second, I show that this notion of procedural
chances provides us with a fresh and interesting perspective on debates about statistical lives and
the ex ante pareto principle. Lastly, I make clear that, in contrast to my systematic and unified
treatment of these two topics, the views of other prominent thinkers are at war with themselves.
1. Objective Chances and Procedural Chances
In this section, I lay the groundwork for the exploration to follow. First, I briefly discuss
the notion of objective chance. Second, I discuss what I mean by procedural chances and suggest
that, when individuals have an equal claim to some good, they deserve an equal procedural
chance at obtaining it.

[17] For relevant discussion, see Diamond (1967) as well as the sources listed in fn 49 below.
1.1 Objective Chances
Before getting to my main argument, some preliminary remarks will be helpful to
distinguish objective chances from subjective probabilities. Let’s take the example of a coin that
you know to be fair. Before the coin is tossed, the objective chance of the coin landing heads (or
tails) is .5, which is the same as your subjective probability of the coin landing heads (or tails).
After the coin is flipped and lands heads, the objective chance of the coin landing heads is 1.
However, if you have not yet seen how the coin landed (or received additional evidence about
the outcome), your subjective probability that it did so will still be .5. Once you see that the coin
landed heads, the objective chance that the coin landed heads remains 1, and your subjective
probability that the coin landed heads is also 1.[18]
With this simple example in mind, we might distinguish subjective probabilities from
objective chances in the following (admittedly rough) manner: while the subjective probability of
an event at a particular time is equal to its likelihood, given an agent’s evidence at that time, the
objective chance of an event at a given time is equal to its likelihood given the history of the
world up until that time (plus the given laws of nature).[19] This rough gloss captures the intuition
that, while the objective chance of a coin landing heads changes after the coin lands, an agent’s
subjective probability of it landing heads does not change until she sees (or gets some other clues
as to) how it landed.
[18] Or perhaps only very close to 1, if you have some rational credence that your vision may be faulty.
[19] For a bit more technical discussion of objective chances, see List and Pivato (2015).
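The rough gloss above can be put schematically. The notation here is mine, introduced only for illustration: Ev stands for an agent’s total evidence, H for the history of the world, and L for the laws of nature.

```latex
% Subjective probability: the likelihood of event E given agent a's evidence at time t.
\[
P_{a,t}(E) \;=\; \Pr\big(E \mid \mathrm{Ev}_{a,t}\big)
\]
% Objective chance: the likelihood of E given the world's history up to t plus the laws of nature.
\[
\mathrm{Ch}_{t}(E) \;=\; \Pr\big(E \mid H_{t} \wedge L\big)
\]
% For the fair coin: before the toss, both quantities are .5; after it lands heads,
% Ch jumps to 1 while P stays at .5 until the agent sees (or otherwise learns) the outcome.
```

On this schematic rendering, the two come apart precisely when an agent’s evidence lags behind the world’s history, as with the unseen toss.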
While this is a very intuitive way of thinking about subjective probabilities and objective
chances, it is not entirely uncontroversial. This is because some would argue that, if the world is
deterministic, then, before we flip a fair coin, the objective chance of it landing heads is either 1
or 0.[20] In other words, while we are tempted to imagine that fair coins might either come up
heads or tails on any given toss, according to some, the history of the world prior to any
particular toss entails that there is only one possible way for the coin to land.
While a full discussion of this topic would go far beyond the scope of this paper, I believe
that the truth of determinism is compatible with the notion of objective chance that I utilize
below.[21] That said, if the reader doesn’t find those particular arguments compelling, there are at
least three reasons why the discussion below will still be of philosophical interest. First,
determinism may be false. If determinism is false, then there is no challenge to the notion of
objective chance put to use below. Second, even if determinism is true and incompatible with
objective chances other than 1 or 0, it might be rational for agents to give some (and perhaps
substantial) credence to indeterminism. If it is rational to assign credences in this way, then
intermediate objective chances might still be relevant to our normative deliberation. Lastly, even
if it is not rational to believe in intermediate objective chances, it might be the case that certain
moral intuitions depend on them. In other words, if certain patterns of moral reasoning depend
on the existence of intermediate objective chances, then the rejection of such chances might
force us to abandon certain intuitive patterns of reasoning.
[20] Of course, if we adopted the deterministic outlook, we might even challenge the very idea of a ‘fair’ coin. If, on any particular occasion, the coin will either definitely land heads or definitely land tails, then we might begin to doubt that the coin is, on any particular instance, “fair.”
[21] This is a view adopted by a number of philosophers of science and ethicists, and I find their arguments quite compelling. Ethicists who follow such an approach include Frick (2015, 197-200); philosophers of science include List and Pivato (2015), who build on List (2014), Butterfield (2012), and von Plato (1982).
1.2 Procedural Chances
Now that I’ve discussed the distinction between objective chances and subjective
probabilities, I can discuss what I call procedural chances and suggest that, when individuals
have an equal claim to some good, they deserve an equal procedural chance of obtaining it. As a
first pass, the procedural chance of an agent obtaining some benefit is the objective chance of her
obtaining it, conditional on the procedure used to determine who obtains it, just prior to the
initiation of the procedure. Let’s begin with an example:
CAKE SPLIT: You see Tessa and Janelle arguing about who gets to eat the last piece of
cake. To be helpful, you offer them a way of resolving their dispute. You tell them that,
about five minutes earlier, you flipped a coin. If they are amenable, Janelle could guess
the outcome of the toss, and if she guesses correctly, she gets the cake. Tessa and Janelle
agree to your proposal, and Janelle correctly guesses that the coin landed heads.
In a case like CAKE SPLIT, both the objective chances and your subjective probability
that Janelle will eat the cake will vary over time. For instance, when you first see the pair, your
subjective probability that Janelle will eat the cake might be .8, because you know Janelle to be
relentless in her pursuit of pastries. When Janelle and Tessa agree to the coin flip, your
subjective probability that Janelle will eat the cake might be .7 because you know the coin
landed heads and that Janelle usually guesses heads. After Janelle guesses correctly, your
subjective probability that she’ll eat the cake might go up to 1 (or at least very close to it). The
relevant objective chances may follow a similar trajectory: for instance, at the time when you
first see Janelle, the objective chance of her eating the cake might be .8, because the chance of her
agreeing to a coin flip is small and there’s a serious chance of her just grabbing the cake and
eating it; the objective chance of her eating the cake at the time the coin flip is agreed upon might
be .7, because she usually guesses heads; and the objective chance of her getting the cake after she
guesses correctly is presumably close to 1.
In contrast to both subjective probabilities and objective chances, what I’m calling the
procedural chance of either Tessa or Janelle getting the cake is the objective chance of these
events conditional on the procedure used to determine who gets the cake just prior to the
initiation of that procedure. In CAKE SPLIT, the selection procedure includes both the coin flip
and Janelle making her guess, so the procedural chance of either her or Tessa winning is
calculated just prior to the coin flip. Since the coin has an equal objective chance of landing
heads or tails immediately prior to the flip, the procedural chance of either getting the cake,
conditional on the coin flip determining the winner, is .5.
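The first-pass definition above can be stated schematically. The notation is again my own, for illustration only: let π be the procedure used to determine who obtains the good G, and let t_π be the moment just prior to π’s initiation.

```latex
% Procedural chance of individual i obtaining G, per the first-pass definition in the text:
\[
\mathrm{PC}(i \text{ obtains } G) \;=\;
\mathrm{Ch}_{t_{\pi}}\!\big(i \text{ obtains } G \,\big|\, \pi \text{ determines who obtains } G\big)
\]
% In CAKE SPLIT, \pi comprises the earlier coin flip plus Janelle's guess, so t_\pi falls
% just before the flip. Since the coin is fair:
\[
\mathrm{PC}(\text{Janelle gets the cake}) \;=\;
\mathrm{Ch}_{t_{\pi}}\!\big(\text{guess matches toss} \mid \pi\big) \;=\; .5
\;=\; \mathrm{PC}(\text{Tessa gets the cake})
\]
```

The conditionalization on π doing the determining is what screens off, for example, the chance that Janelle simply grabs the cake instead.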
Before turning to those cases, however, it is worth saying more about the difference
between objective chances and procedural chances, as the two are closely related. In particular, it
is easy (though mistaken) to think that Tessa and Janelle’s procedural chances of winning are
just equal to each of their objective chances of winning just prior to the coin toss. The distinction
between the two is clearest when, for instance, the objective chance of Janelle abiding by the
agreed upon procedure, immediately before that procedure is initiated, is low. If there is a serious
chance that, were Janelle to lose the coin toss, she would just take the cake and run, then the
objective chance of her getting the cake, immediately prior to the coin toss, would be much
higher than the chance of her getting the cake conditional on the coin toss determining the
winner.
In order for the allocation of a good to be procedurally fair, each individual must have an
equal procedural chance of obtaining the good. For instance, in CAKE SPLIT, the procedure for
allocating the cake will be fair just as long as both Tessa and Janelle have an equal (.5)
procedural chance of getting the cake. At the same time, in light of the above discussion, it is
also worth noting that, in order for a procedure to determine the allocation of the good, it must be
true of all of the agents involved that they would have accepted the outcome of the procedure if
they had lost. In CAKE SPLIT, then, in order for it to be the case that the coin toss determined
the allocation of the cake, it must be true of each of Tessa and Janelle that she would have
accepted the outcome in which she lost the coin toss and the other got the cake.
In the cases discussed below, it will be clear how subjective probabilities, objective
chances and procedural chances can come apart. In addition, in these cases, it will become clear
that it is procedural chances, as opposed to subjective probabilities or objective chances at the
time of the agent’s decision, that we want equalized when individuals have an equal claim to a
particular good. This is an important result, we will see, for debates about identified and
statistical lives and the ex ante pareto principle.
2. Procedural Chances and Statistical Lives
One set of cases in which procedural chances are relevant to our moral decision-
making concerns identified and statistical lives. It is a well-known
psychological phenomenon that people display a preference for saving so-called ‘identified’ lives
over so-called ‘statistical’ lives.[22] To take an oft-cited example, people are willing to donate
significantly more money to save a child who is trapped in a well than they are to efforts to
prevent childhood diseases. This phenomenon is puzzling, one might think, especially in cases
that one can expect that donating money to prevent childhood diseases will save more children
than donating to save the one child trapped in the well.
23
While this psychological phenomenon is empirically robust, its normative support is
hotly contested. For our purposes, what’s interesting to examine is how a view focused on
procedural chances gives us a clear diagnosis of what we ought to do in these and nearby cases
while avoiding the serious shortcomings of competing views. To begin, it will be helpful to
home in on one articulation of the preference for identified lives, which we can call ID
Preference:
ID Preference: At a given time t1, it is morally better to contribute to endeavors that will
save individuals that you can identify at t1—‘identified lives’— than to contribute to
endeavors that will save individuals that you cannot identify at t1—‘statistical lives.’
Conversely, if forced to choose, it is morally better to contribute to endeavors that will
harm statistical lives than those that will harm identified individuals.[24]
A case in which ID Preference might come into play is the following, as suggested above:
[22] For an early and influential discussion of this phenomenon, see Schelling (1968).
[23] The exact factors that lead to this sort of preference are unclear. For discussion of relevant experimental findings, see Lee and Feeley (2016).
[24] There are a number of tricky questions with respect to when exactly individuals are to be considered identified. For instance, if two people’s names are written on pieces of paper in a basket, then those two individuals can be plurally identified as the individuals named on the pieces of paper even though neither can be individually identified. According to Hare (2012), a statistical person is one whose identity is counterfactually open at a particular time. For instance, if one person out of one hundred will be selected via a lottery to receive some good, then the person to be selected may be a statistical person immediately prior to the lottery. For this to be the case, we must assume (alongside Hare) that neither of the following counterfactuals is true: 1) if the lottery had occurred, Bill (one of the hundred) would have been selected; and 2) if the lottery had occurred, then Bill would not have been selected (2012, p. 382). Hare’s view thus requires violations of the conditional excluded middle (for more on this topic, see Stalnaker (1980) and Williams (2010)).
SAVE FIRST: It is morally better to save a child trapped in a well than to donate to large
multi-national charities, even if we expect that our donation will save as many children
over time.[25]
In SAVE FIRST, the identity of the child in the well is known at the time of your contribution. In
contrast, those who you can expect to save by donating to charity cannot be identified at that
time. If such an identification is not possible at the time of your donation, then one might say
that, at least for you, it is an open question as to who will be saved by the donation, while it is
not an open question as to who will be saved in the well.
One thing to note about ID Preference is that it leaves open how much morally better it is
to save identified lives than statistical lives. For instance, some argue that, while it is morally
better to save identified lives than statistical lives, this should basically only be relevant in cases
in which we are comparing the same number of each.[26] Others counter that the moral value of
identified lives is so much greater than the moral value of statistical lives that we should favor
the former even when, by doing so, we sacrifice a greater number of the latter.[27] I will engage
with (and critique) both of these views below.
While several scholars have argued in favor of some form of ID Preference, the two
arguments that are of most interest to us are those presented by Caspar Hare (2012) and Johann
Frick (2015a).[28] While Hare argues that, in light of the relevant objective chances—in particular,
counterfactual objective chances (the chance that event E would occur if action X were
performed)—we ought to adopt ID Preference, Frick argues that, in light of our relevant
subjective probabilities, we ought to do so. In the rest of this Section, I argue that, in these cases,
[25] This story is based loosely on that of ‘Baby Jessica,’ which is discussed in some detail in Jenni and Loewenstein (1997).
[26] See, for instance, Hare (2012, 388).
[27] See, for instance, Frick (2015a, 219).
[28] A different sort of argument for ID Preference is put forward in Slote (2015).
procedural chances trump both counterfactual objective chances and subjective probabilities in
terms of moral relevance. In addition, when we focus on procedural chances in these cases, there
is very little to be said in favor of ID Preference.
To begin, let’s discuss a case in which Hare’s view, Frick’s view, and my own all reach the same
conclusion:
HOSTAGE FATE: You receive a call from Alice, a psychologically troubled individual
who has 10 hostages: Abe, Beth, Cal, Devon...and Jin. Alice tells you that she is either
going to 1) kill Abe, the hostage she hated the most that morning, or 2) kill a hostage that
she will choose via a procedure that gives an equal objective chance of survival (9/10) to
each prior to the initiation of the procedure. Alice is allowing you to make this choice for
her.
If we accept ID Preference then, in HOSTAGE FATE, it is morally better for you to tell Alice to
kill the hostage using a procedure that gives each hostage an equal objective chance of survival
at the time at which the procedure is initiated. This is because, if Alice makes use of such a
process, she will be killing a statistical person—a person you cannot identify at the time of your
decision—as opposed to an identified one (Abe).[29]
Although Hare, Frick, and I agree about what you ought to do in HOSTAGE FATE, my
view diverges from theirs with respect to the following:
HOSTAGE FLIP: You receive a call from Alice, a psychologically troubled individual
who has 10 hostages: Abe, Beth, Cal, Devon...and Jin. Alice tells you that she is either
[29] Note that, in the case in which you are choosing between saving someone trapped in a well and donating to international charities, there are two senses in which the person in the well is ‘identified.’ First, and less importantly for our purposes, you might have access to personal information about the person in the well, such as her date of birth, hair color, and hobbies. Second, and more importantly for our purposes, you can identify the person in the well as the unique person that will be saved if you contribute to the rescue efforts. This distinction will be helpful to keep in mind as we move through the cases below.
going to 1) kill all hostages except Beth, the hostage she liked the most that morning, or
2) kill all hostages except one that is chosen via a procedure that gives an equal objective
chance of survival (9/10) to each prior to the initiation of the procedure. Alice is allowing
you to make this choice for her.
In HOSTAGE FLIP, ID Preference would have us select option 1). This is because, just as we
prefer preventing harm to identified (vs statistical) lives, we prefer benefitting identified (vs
statistical) lives. While Hare and Frick embrace this result, for reasons to be discussed below, a
view focused on procedural chances would not. This is because a view that considers procedural
fairness important does not distinguish between procedures that distribute harms and procedures
that distribute benefits. As such, the view considers HOSTAGE FATE to be relevantly similar to
HOSTAGE FLIP in that, just as Abe shouldn’t receive harm in a manner that is not procedurally
fair, Beth should not receive a benefit in a manner that is not procedurally fair.
Now that we have a basic idea of how ID Preference and a view focused on procedural
chances diverge, we can discuss in more detail Hare and Frick’s defense of ID Preference. Hare,
who focuses on counterfactual objective chances at the time of the agent’s decision, supports ID
Preference by arguing for the following general principle:
Distribute Bad Effects: Given a choice between doing something very good for one
person but very bad for another person, and doing something very good for one person
but quite bad for nine others, you ought, other things being equal, to do the latter.[30]
In HOSTAGE FATE, if you tell Alice to kill the hostage she selects via an objectively chancy
process, according to Hare’s way of thinking, you do something very good for the identified
person Alice had initially selected (because, had you acted otherwise, she would have certainly
died), while doing something quite bad for the others (because, had you acted otherwise, they
would only have had a slightly greater chance of surviving). This is better, according to Hare,
than doing something very good for the person selected by the objectively chancy procedure
(since that person would have been killed) and very bad for Abe. Similarly, in HOSTAGE FLIP,
if you allow Beth to be saved, you do something very good for her and only quite bad for the
others, which is better than doing something very good for the person selected by the objectively
chancy procedure and very bad for Beth. We ought to prefer these options because, as Hare
(2012, 387) puts it,
Morally speaking, many small bad effects on many people do not add up to one big bad
effect on one person. It is better to hurt many people a little than one person a lot.

[30] Modified from Hare (2012, 387). Note that Hare would not think his principle would justify allowing two people to die as opposed to one. In this way, as noted above, he disagrees with Frick.
In contrast to Hare, Frick puts forward a complaint-based argument focused on an agent’s
subjective probabilities at the time of her decision to generate the conclusion that, in HOSTAGE
FATE, we ought to have Alice conduct an objectively chancy procedure to select the hostage to
kill, while in HOSTAGE FLIP, we ought to allow Alice to save Beth. More specifically, Frick
(2015a, 215) writes:
[I]t seems hard to deny that a single individual has a stronger claim to be rescued from a
certain (or very likely) harm of size H than a different individual has to have her risk of
suffering a harm of equal size slightly reduced.[31]
[31] To be clear, this quote ends with the following: “especially if her risk of suffering this harm is quite low to begin with.” Along these lines, Frick then speaks of “how much we can reduce” a statistical person’s “risk of death by” (2015a, 215). However, in accordance with Frick’s earlier (and I think more perspicuous) articulation of his view, it would be better if, instead of speaking of reducing some individual’s already low risk of death, he spoke of what this individual’s risk of death would be conditional on the agent performing one action as opposed to another. This is because, while Frick provides us with the machinery to calculate an individual’s risk of death conditional on particular actions, this same machinery won’t necessarily generate the numbers he wants with respect to that individual’s risk of death prior to those particular actions. At a minimum, to calculate this prior risk, we need to take into account the agent’s subjective probability of performing each action, which Frick does not appear to do in this passage.
According to Frick’s argument, in HOSTAGE FATE, the strength of Abe’s complaint if you
guarantee his death is significantly stronger than the strength of Beth, Cal, Devon…Jin’s
complaints for exposing them to a small chance of death, conditional on your evidence. Along
similar lines, in HOSTAGE FLIP, the strength of Beth’s complaint if you opted for the
objectively chancy procedure is significantly stronger than the strength of the other nine for
denying them a small chance of survival. Since Abe would have the strongest complaint in
HOSTAGE FATE and Beth would have the strongest complaint in HOSTAGE FLIP, you should
opt for a chancy procedure in HOSTAGE FATE and against it in HOSTAGE FLIP.
Before presenting my view of this case, as well as objections to both Hare and Frick’s
views, it’s worth noting that their views do not provide general support for ID Preference. More
specifically, while their views explain and justify our preference for identified over statistical
lives in HOSTAGE FATE and HOSTAGE FLIP, there are nearby cases in which, if we adopt their
views, we will seemingly favor statistical lives over identified lives. For instance, according to
both Hare and Frick’s views, we will favor statistical lives over identified lives in the following case:
SHARE THE WEALTH: You are deciding between two manners of distributing your
wealth: 1) giving $100 to each of the 40,000 people in your hometown; or 2) giving $1
million to 4 out of your 5 best friends, with the winners to be determined by lottery. Both
the townspeople and your friends have roughly the same net wealth.[32]

If we adopt Hare or Frick’s view, we will opt to give the $1 million to four of our 5 best friends, with the winners to be determined by lottery. By giving to our friends, we are distributing bad effects (40,000 people each losing out on $100), and the inhabitants of your hometown have weaker complaints than your friends would have (since the inhabitants are only losing out on $100, as opposed to your friends, who would each be losing out on an expected $800,000). However, by giving money to our friends, we seem to be favoring statistical lives (the 4 out of 5 friends who win the million, whose identities we cannot ascertain when we make the decision) as opposed to the 40,000 inhabitants of our hometown.[33]

32. If we are concerned about the fact that utilities aren’t linear with dollars, we can reframe this example in terms of distributing utiles instead of money.
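The expected-value arithmetic driving this comparison can be made explicit (a sketch, setting aside the nonlinearity of utility in dollars, which the surrounding discussion flags):

```latex
\begin{aligned}
% Each townsperson's complaint if we give to the friends: a certain loss of $100.
\text{Townsperson's expected loss} &= \$100\\
% Each friend's complaint if we instead give to the town:
% a 4/5 chance at $1{,}000{,}000 forgone.
\text{Friend's expected loss} &= \tfrac{4}{5}\times\$1{,}000{,}000 = \$800{,}000
\end{aligned}
```

Each friend’s expected loss is thus 8,000 times the size of each townsperson’s, which is what drives the verdict discussed above.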
Even though neither Hare’s nor Frick’s view consistently generates verdicts that align with
ID Preference, it is still worth closely examining the cases in which they do. This is because, by
examining their views, we can more clearly see the intuitive role that procedural chances, as
opposed to subjective probabilities or objective chances at the time of the agent’s decision, play
in these and nearby scenarios.
In both HOSTAGE FATE and HOSTAGE FLIP, according to a view focused on
procedural chances, we should ask Alice to choose which hostage to kill and save using a
procedure that, prior to its initiation, gives each hostage an equal objective chance of being
selected. If we do so, then all the hostages’ procedural chances of survival, or objective chances
of survival, conditional on the selection procedure utilized, will be equal just prior to that
procedure being initiated.[34][35]

33. For similar reasons, I submit, Hare and Frick’s views would commit us to harming identified instead of statistical individuals in the following sort of case:

Furious Flames: A wildfire is heading toward a small neighborhood. Left unchecked, the fire will destroy all the property of 9 out of the 10 people who live in that small neighborhood, with each of the 10 being equally likely to have her property completely destroyed. If you redirect the blaze, the property of the 10 people will be safe, but the blaze will destroy 10% of the property of each of the 90 people in the adjacent, larger neighborhood. The properties in the two neighborhoods are of roughly the same economic value.

34. To be slightly more precise, each of the hostages’ procedural chances of being selected will be equal as long as your decision to opt for the objectively chancy procedure is not influenced by any partiality toward or prejudice against any particular hostage. This is because (at least arguably) the relevant selection procedure includes not only the objectively chancy procedure but also your choice as to whether to utilize that procedure. If you are prejudiced against, for instance, Cal, and would not have opted for the objectively chancy procedure if Cal were the one selected by Alice to kill, then, even if you opt for the lottery when Abe is selected, Cal’s procedural chance of being killed will be higher than the other hostages’. One way of thinking about such a case is that the relevant selection procedure would be something like, “if Alice selects Cal, allow her to kill Cal, and if not, have her run the objectively chancy procedure.”

35. I acknowledge that there is ultimately some vagueness with respect to which things count as falling under the relevant procedure. In the examples under discussion, however, I think this question has a relatively straightforward answer.

Let’s assume for the sake of argument that, immediately prior to
Alice choosing Abe to kill in HOSTAGE FATE, and immediately prior to Alice choosing Beth
to save in HOSTAGE FLIP, there was no objective chance of her choosing anyone else. In both
cases, then, if you allow Alice to kill or save the hostage she selected, then either one or nine
hostages will be procedurally doomed. Intuitively, I would submit, we value procedural fairness
in both cases, and the only way we can achieve this is to have Alice make both choices via an
objectively chancy procedure.
In addition to generating the right verdict in HOSTAGE FLIP, there are many reasons to
think that procedural chances, as opposed to either objective chances or subjective probabilities
at the time of the agent’s decision, are morally relevant in these cases. Here are three:
Reason 1: Hare and Frick’s views ask us to be partial when we ought to be indifferent.
There are a number of simple cases in which, according to Hare and Frick’s view, we
have a moral reason to perform one action as opposed to another, even though indifference
seems to be the most obvious response. For instance:
HOSTAGE FATE-A: Similar set-up to HOSTAGE FATE except Alice rolls a 10-sided
die to choose which hostage to kill. She then allows you to decide whether she kills the
hostage selected on the first die toss or rolls the die again and kills the one selected next.
According to Hare, we should ask Alice to roll the die again because this will distribute bad
effects among the hostages. This is because, if we go with the first roll, then the identity of the
person to be killed will not be counterfactually open. Prima facie, this is absurd. From a moral
perspective, there seems to be absolutely no reason to prefer that Alice kill the hostage chosen
based on the second roll of the die as opposed to the first.
Frick puts us in a similar position with respect to the following:
HOSTAGE FATE-B: Similar set-up to HOSTAGE FATE except the hostages’ given
names, at birth, were One, Two, Three, Four…Ten. Alice rolls a 10-sided die to choose
which hostage to kill. She tells you that the die came up 7. She then allows you to decide
whether she kills the hostage named Seven or rolls the die again and kills the one selected
next.
According to Frick, we should ask Alice to roll the die again because the hostage named Seven
will have a strong complaint against us failing to do so. Again, this seems absurd. From a moral
perspective, there seems to be absolutely no reason to prefer that Alice kill the hostage chosen
based on the second roll of the die as opposed to the first, even though we know that the hostage
chosen by the first is named Seven. After all, even in HOSTAGE FATE-A, we could’ve given
names to all the hostages, and doing so wouldn’t have altered our relevant moral obligations.[36]
Reason 2: Hare and Frick’s views lack the resources to explain cases in which we ought
to be partial instead of indifferent.
If we adopt either Hare or Frick’s views, we will have nothing to say about the
importance of procedural fairness. This is a mistake. Let’s start with an example that shows this
aspect of Hare’s view:
HOSTAGE FATE-C: Similar set-up to HOSTAGE FATE, except Alice tells you that she
is either going to 1) kill Abe, the hostage she hated the most that morning, or 2) kill the
hostage that she chose five minutes ago by rolling a 10-sided die.
Hare offers us no reason to prefer 2) to 1). This is because, in HOSTAGE FATE-C, no matter
which option we choose, we will be doing something very good for one hostage and something
very bad for another. At least intuitively, however, I would submit, there is a value to choosing 2) instead of 1). This is because, if we opt for 2), then all of the hostages will have had an equal procedural chance of survival.[37] This is unlike if you ask Alice to kill Abe, the hostage she hates the most, because, conditional on that procedure, Abe’s chance of being killed is much higher than the rest of the hostages’.

36. For discussion, see Setiya (2019). I return to this point in both Sections 3 and 4 below.
Frick’s view also asks us to be indifferent when we ought to be partial. For instance:
HOSTAGE FATE-D: Similar set-up to HOSTAGE FATE, except Alice tells you that she
is either going to 1) kill the hostage among the ten that she hates the most; or 2) kill the
hostage she selects by rolling a ten-sided die. Conditional on your evidence, each of the
hostages is equally likely to be most hated by Alice.
Similar to Hare, Frick gives us no reason to prefer 2) to 1). Since each hostage is equally likely
to be most hated, conditional on our evidence, each hostage’s complaint against us allowing
Alice to kill the most hated hostage will be the same as each of their complaints against us opting
for the procedure that gives them an equal objective chance of being selected immediately prior
to the initiation of that procedure. For the same reasons outlined above, however, it seems
preferable to have Alice kill a hostage that she selects via a procedure that gives each an equal
objective chance of being selected. I would explain this result by pointing out that, if we opt for
2), then each hostage will have had an equal procedural chance of survival.
One way that Hare or Frick might respond to these cases is to say that procedural chances
might play a secondary role in our analysis of these cases. That is, they can point out that they do
not deny the importance of procedural fairness, they simply claim that what is most relevant to
these cases are certain objective chances or subjective probabilities at the time of the agent’s decision. While this response is certainly on the table, its inadequacy becomes clear when we consider my third and last reason.

37. As noted above, this assumes that your opting for 2) does not reflect partiality toward or prejudice against any particular hostage.
Reason 3: Hare and Frick’s views have serious problems with series of decisions.
One of the most serious flaws of Hare and Frick’s views is that they are vulnerable to
what I call ‘unfair cycling.’ Consider the following:
HOSTAGE SERIES: Similar set-up to HOSTAGE FATE, except Alice is allowing you
to make a series of decisions. She has chosen Abe via a procedure that gave each hostage
an equal objective chance of being selected immediately prior to the initiation of the
procedure. More specifically, Alice rolled a ten-sided die that has a hostage’s name on
each side.
If you’d like, instead of killing Abe, Alice will roll a different die that is slightly weighted
so that Beth’s name is .15 likely to come up while each of the other hostages’ names are
about .094 likely to. If you opt for Alice to roll the die that is weighted against Beth, she
will then give you a similar choice. Namely, at that point, you can choose whether she
kills the person selected by this second, somewhat skewed procedure or opt for her to roll
a third die that is even more weighted against Cal. (The third die is such that Cal’s name
is .2 likely to come up while each of the other hostages’ names are about .089 likely to.)
If Alice runs this third, even more skewed objectively chancy process, she will then allow
you to choose whether she kills the person selected through this third process or roll a die
that is heavily weighted against Dan. (The fourth die is such that Dan’s name is .25 likely
to come up and the other hostages’ names are about .083 likely to.)
All in all, Alice is willing to run 9 increasingly biased procedures, in addition to her
original procedurally fair one, at which point she will kill the person chosen by the last
one. For the tenth procedure, she will roll a die that gives Jin a .55 chance of being killed and each of the other hostages a .05 chance of being killed.[38]
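As a consistency check on the figures in the case (the quoted values .094, .089, and .083 are .85/9, .8/9, and .75/9, rounded), each die’s probabilities sum to one:

```latex
\begin{aligned}
\text{Second die: } & 0.15 + 9\times\tfrac{0.85}{9} = 1, & \tfrac{0.85}{9} &\approx 0.094\\
\text{Third die: } & 0.20 + 9\times\tfrac{0.80}{9} = 1, & \tfrac{0.80}{9} &\approx 0.089\\
\text{Fourth die: } & 0.25 + 9\times\tfrac{0.75}{9} = 1, & \tfrac{0.75}{9} &\approx 0.083\\
\text{Tenth die: } & 0.55 + 9\times 0.05 = 1
\end{aligned}
```

Each successive die shifts probability from the other nine hostages onto one of them, which is the sense in which the procedures become increasingly biased.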
If we adopt either Hare or Frick’s view, we will have a difficult time saying no to Alice’s offers
to conduct increasingly biased selection procedures. This is because, according to both of their
views, in a case like HOSTAGE SERIES we ought to prefer heavily biased procedures in the
future as opposed to completely fair procedures in the past.
If we adopt Hare’s view, we will basically have two choices: we can either allow Alice to
kill Abe, the person selected in a procedurally fair manner, or we can accept all ten of Alice’s
offers so that the person who is eventually killed is chosen via a procedure that is heavily skewed
against Jin. If we adopt Hare’s view, Alice’s second through ninth offers will basically be equivalent to her first, because refusing any of them would leave one of the hostages with no objective chance of survival, and Hare thinks it is better to hurt many people a little than one person (the person with no objective chance of survival) a lot. In other words, even though the tenth die roll is extremely
biased, it still allows us to distribute bad effects. After all, if we allow Alice to kill Abe, Abe is
doomed, while if we accept all of Alice’s offers, we still give Jin a .45 chance of surviving.
Hare’s core claim is that it is better to do something very good for one person (Abe) and quite
bad for several others (the rest of the hostages, but especially Jin) than it is to do something very
good for one person (the person selected via the tenth procedure) and very bad for another (Abe).
If we believe in this sort of distribution of bad effects, then we will be on board for all of Alice’s
increasingly biased procedures.
For slightly different reasons, if we adopt Frick’s view, we are also committed to Alice
choosing increasingly biased procedures. For every roll apart from the last, we can only make a
decision after we learn the outcome of the previous roll. At that point, it will be the case that the person selected by that roll has the strongest complaint against us allowing them to die. This will be true starting with Abe and culminating with the tenth roll, since the person selected by the ninth roll will definitely die if we don’t opt for another one, while Jin will only have a .55 chance of dying if we do. According to Frick, those who will certainly suffer a harm have a stronger claim to be rescued than those who are exposed to a smaller risk of that harm. Following this logic, it is hard to see how he can account for our strong intuition that we should reject each and every one of Alice’s offers.

38. I thank Simon Blessenohl for discussion of this example.
If we adopt a view focused on procedural chances, it is easy to explain why we should
reject Alice’s offers. Since, according to my view, what matters is that individuals have equal
procedural chances to avoid some harm, it is clear that each subsequent toss of the die not only
achieves nothing of value but makes the final outcome morally worse. In this way, it is only a
view like mine that can generate the intuitively correct result that we should ask Alice to kill the
hostage initially selected in HOSTAGE SERIES, Abe. In order to accommodate this strong
intuition, however, we must reject the value Hare places on distributing bad effects and the
disproportionate weight that Frick gives to the complaints of those who will certainly suffer harm
if we fail to act.
Given the counterintuitive implications of Hare and Frick’s accounts, as well as the
phenomenon of unfair cycling, I think that procedural chances ought to be kept in mind
when thinking about the cases in which we prefer identified over statistical lives. At a minimum,
a focus on procedural chances avoids the most obvious pitfalls of views focused on
counterfactual objective chances and subjective probabilities at the time of the agent’s decision.
3. Procedural Chances and Reasonable Beneficence
In this section, I’ll argue that procedural chances are also relevant to the plausibility of
what we might call a principle of Reasonable Beneficence (or the ex ante Pareto principle).[39] I’ll
frame this principle as follows:
Reasonable Beneficence: Given the choice between two actions X and Y, if, given your
evidence, X-ing instead of Y-ing is in the expected interests of each person impacted by
your decision, then you should X.
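Stated schematically (my own rendering, not Hare’s or Frick’s notation: $u_j$ is person $j$’s utility function and $E$ the agent’s evidence at the time of decision):

```latex
\text{If } \mathbb{E}[u_j \mid X, E] > \mathbb{E}[u_j \mid Y, E]
\text{ for every person } j \text{ impacted by the decision, then you should } X.
```

The formalization makes vivid that the principle quantifies over persons but conditions every expectation on the agent’s own evidence, a feature the discussion below presses on.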
It’s worth noting that, as I’ve formulated the principle, Reasonable Beneficence focuses
exclusively on an agent’s subjective probabilities at the time of her decision. Since both Hare and
Frick’s defense of Reasonable Beneficence focuses exclusively on an agent’s subjective
probabilities, I adopt this focus in what follows.
Hare (2016, 451) claims we should adopt a principle like Reasonable Beneficence
because you ought always “act as you would if you were reasonable and moved solely by
individual concern for each one of the people affected by your actions.” Instead of being
motivated by “platitudes” like “be fair” and “don’t use some to benefit others,” Hare (2016, 464)
urges us to be motivated by platitudes like “be guided by concern for the person’s well-being, act
as he or she would reasonably want you to act if he or she knew what you know.” Frick (2015a,
192) motivates the principle by pointing out that, when we act in accordance with it, we can
justify our action “in the single-person case,” in which furthering the interests of each person
impacted by our action is “our sole concern.” This accords with the version of contractualism put
forward by Frick (2015a, 187, 197), according to which “[a]n act is wrong if and only if there is
someone who can complain that we failed to treat her in a way that was justifiable to her, not because the consequences were impersonally bad,” where the impersonal badness of an act’s consequences might include inequities in the distribution of benefits and harms.[40]

39. For a list of those who invoke ex ante Pareto superiority, see Adler (2003, fn. 1).
To see this principle in action, consider the following case, a variation of which is
discussed by Johann Frick (2015a, 181):
MASS VACCINATION: 100,000 children have contracted a fatal virus. We have given
all the children Vaccine I, which will take effect the following day. Immediately prior to
Vaccine I taking effect, each of the children will have a .98 objective chance of survival.
The only downside of Vaccine I is that, after the vaccine takes effect, the children who
survive will have a minor skin irritation.
We have the opportunity of switching all the children from Vaccine I to Vaccine II. If we
switch to Vaccine II, each child will have a .98 chance of survival, conditional on our
evidence, and, as a bonus, it will not result in minor skin irritation when it takes effect.
The downside of Vaccine II is that, instead of each child having a .98 objective chance of survival immediately prior to it taking effect, the 2% of children with genetic marker G will have no objective chance of survival and the 98% of children who lack G will definitely survive. We are unable to detect which children have genetic marker G.
Before more in-depth discussion, two features about MASS VACCINATION are worth noting.
First, if our primary concern is with the expected interests of each child, given our evidence, we
will switch the children to Vaccine II. This is because, if we switch to Vaccine II, our subjective
probability that each child will survive will remain constant but none of the children will suffer a
skin irritation. On the other hand, while our subjective probability that each child will be cured
will be the same regardless of whether we switch to Vaccine II, the relevant procedural chances will differ. This is because, if we stick with Vaccine I, each child’s procedural chance of survival will be equal (.98), while if we switch to Vaccine II, the procedural chance of survival of the children with genetic marker G will be 0 and the procedural chance of survival of the children without genetic marker G will be 1.

40. It’s worth noting that the ‘impersonal’ moral principles rejected by Frick include views such as “telic egalitarianism or the priority view,” as these moral principles are supported by “impersonal” concerns such as concern with the “overall shape of the outcome” (2015a, 197). In this way, Frick parallels Hare’s rejection of fairness in favor of Reasonable Beneficence.
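The contrast between the two kinds of chances can be put in symbols (my notation: $S_i$ is the event that child $i$ survives, $E$ our evidence, and $Ch$ the relevant procedural chance):

```latex
\begin{aligned}
\text{Vaccine I: } & P(S_i \mid E) = 0.98, & Ch(S_i) &= 0.98 \text{ for every child } i\\
\text{Vaccine II: } & P(S_i \mid E) = 0.98\times 1 + 0.02\times 0 = 0.98, & Ch(S_i) &=
  \begin{cases} 0 & \text{if } i \text{ has } G\\ 1 & \text{otherwise} \end{cases}
\end{aligned}
```

Reasonable Beneficence sees only the left-hand column, which is identical across the two vaccines; a view focused on procedural chances registers the right-hand column, which is not.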
There are four related reasons why I think that we ought to stick with Vaccine I instead of
switching to Vaccine II, even though doing so will violate Reasonable Beneficence.
Reason 1: Reasonable Beneficence is highly sensitive to our mode of individual
designation.
For Reasonable Beneficence to lead us to favor Vaccine II over Vaccine I, it must be the
case that switching to Vaccine II is in the expected interests of all the children who receive the
vaccine, given our evidence. Yet the difference between Vaccine I and Vaccine II brings out
something that is missing when we focus simply on expected interests of the children relative to
our own evidence. That is, while administering Vaccine I to the children (as opposed to leaving
them unvaccinated) is in all of their expected interests, regardless of their genetic markers,
switching to Vaccine II from Vaccine I is only in the expected interests of the children in virtue
of our ignorance of their genetic markers.[41]
With this in mind, some might argue that switching
to Vaccine II is not, in fact, in the expected interests of all the children, since it will kill those
with genetic marker G (and some of these children would have presumably survived if we stuck
with Vaccine I).[42]

41. Along similar lines, administering Vaccine II to the children as opposed to leaving them unvaccinated is only in their expected interests in virtue of our ignorance of their genetic markers.

42. For an excellent (and I think convincing) articulation of this point, see Mahtani (2017). For additional discussion of this problem, see Kumar (2015, 43) and Hare (2016, 467).
One notable feature of a view focused on procedural chances is that it is not sensitive to
modes of individual designation. For instance, a particular child’s procedural chance of being
helped by Vaccine I or Vaccine II does not depend on how we designate that child; it simply
depends on whether that child possesses genetic marker G. This reflects the underlying fact that
procedural chances are not relativized to our evidence; they are relativized to all the relevant facts about the procedure used to distribute benefits and those who may receive them (and the given laws of nature).
Reason 2: Reasonable Beneficence is insensitive to the knowledge of those impacted by
our decisions.
If we adopt Reasonable Beneficence, then, in deciding what to do in MASS
VACCINATION, we will consider only the evidence we possess (and that we might easily
gather) with respect to the merits of each vaccine.[43]
What will have no relevance to our
deliberation is the evidence of those who will be impacted by our decision. For instance, if all the
children who possess genetic marker G know this about themselves, but we cannot access that
knowledge (at least not before we are forced to make our decision), then, according to
Reasonable Beneficence, we still ought to switch all of the children to Vaccine II. However, in
this modified scenario, by switching to Vaccine II we would be switching the children with G
from a treatment that has some chance of curing them to a treatment that they know is useless.
The complaints of these children, at least presumably, would be much stronger than those that the children who lack G would have against our failing to switch to Vaccine II, even if those who lack G also
know their genetic makeup. This is because, by sticking with Vaccine I, the chance of survival of children who lack G remains at .98 (in addition to their suffering a minor skin irritation).

43. For more on our potential obligation to gather more information, see Frick (2015a, 193).
A view focused on procedural chances is not vulnerable to this same critique because
individuals’ procedural chances are in no way dependent upon anyone’s subjective probabilities
at the time of their decisions; instead, such chances are dependent upon facts about the procedure
used to distribute benefits or harm (as well as the laws of nature). When procedures make use of
objectively chancy mechanisms such as coin tosses and lotteries, it will often be the case that, for
all anyone knows, each individual will be the one to obtain the benefit or avoid the harm.[44]
Applied to MASS VACCINATION, this would mean that, for all anyone knows, each child might be
cured, and no child could know that, in sticking with Vaccine I, we have guaranteed her or his
demise.
Reason 3: Reasonable Beneficence is entirely insensitive to the interests of groups as
such.
If we adopt Reasonable Beneficence, we will only be concerned with the extent to which
our actions further the expected interests of the individuals impacted by our actions. What will
not register, at least without modification, is the impact our actions might have on relevant
groups of people. However, insofar as we may be rightly concerned with how our actions impact
these groups, we will find Reasonable Beneficence unsatisfying and incomplete.
Consider an example:
TRIBAL FATE: You grew up in a small town of 1,000 in which membership in
indigenous tribes was quite common. You knew everyone in your town by their first
name, though you never could quite remember who belonged to which tribe. One day you
get a call from Alice. Alice has taken the 1,000 inhabitants of your small town hostage.
After telling you that 500 townsfolk are members of Tribe A and 500 are members of
Tribe B, she gives you the choice between two options: 1) she cuts off one hand of each of the 500 members of Tribe A and gives each of them a donut, or 2) she cuts off one hand of each of 500 hostages selected in a manner that is procedurally fair.

44. For further discussion of knowledge and objective chances, see Hawthorne and Lasonen-Aarnio (2009), Williamson (2011), Dorr et al. (2014), and Goodman and Salow (2017).
If we accept Reasonable Beneficence, we will have to opt for 1), since it is better, in terms of each hostage’s expected interests, to have a .5 probability, conditional on our evidence, of losing one hand and receiving a free donut than simply to have a .5 probability of losing one hand. However, if we think there is
something uniquely disvaluable about subjecting the 500 members of Tribe A to the loss of one
hand, then we will be quite tempted by option 2). As Hare might put it, there seems to be
something uniquely bad about concentrating bad effects on a particular group as opposed to
distributing them fairly amongst individuals.[45] By being procedurally fair in a case like TRIBAL FATE, we avoid the uniquely disturbing disvalue that accompanies concentrating bad effects on particular groups.
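The expected-interest comparison that Reasonable Beneficence sees runs as follows (given our evidence, each hostage is a member of Tribe A with probability .5):

```latex
\begin{aligned}
\text{Option 1: } & P(\text{lose a hand}) = 0.5, & P(\text{receive a donut}) &= 0.5\\
\text{Option 2: } & P(\text{lose a hand}) = 0.5, & P(\text{receive a donut}) &= 0
\end{aligned}
```

From each hostage’s evidence-relative standpoint, option 1 weakly dominates option 2; what this comparison cannot register is that under option 1 every lost hand falls on Tribe A.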
To be clear, it need not be the case that either Tribe A or Tribe B has been historically
disadvantaged—though, of course, if that is the case, then a principle like Reasonable
Beneficence would be in even more trouble. In addition, it might not be the case that we would
hesitate to opt for 1) regardless of the characteristic or group selected against: for instance,
subjecting the right-handed to harm as opposed to those selected by lottery may not strike us as
quite as repugnant. That said, a principle like Reasonable Beneficence will only allow us to account for the disvalue of concentrating harm on particular groups insofar as such concentration sets back the expected interests of particular individuals.[46]
Since, as discussed above, such
individual interests do not include considerations of impartial fairness or distributive justice for
either Hare or Frick, they will be quite vulnerable to the charge of being entirely insensitive to the interests of groups as such.

45. I accept that, if we accept this line of reasoning, it might be even better to ensure that an equal number of hostages from each tribe loses a hand.

46. For more on the rights of minority groups, see Kymlicka (1996).
Reason 4: Reasonable Beneficence is largely insensitive to what we typically consider to
be individual moral claims.
One of the most pressing challenges for the principle of Reasonable Beneficence is that,
if we adopt it, we can give little weight to other moral claims that individuals seem to possess.
This is most easily illustrated through example:
PROPERTY GRAB: You know that Abe, Ben, or Cal has lost a valuable piece of
property. The police officer who retrieved the piece of property gives you two choices: 1)
give the property back to the person to whom it belongs, or 2) give the property, in
addition to a donut, to the tallest of the three. Given your evidence, it is equally likely that
the property belongs to Abe, Ben, or Cal and that each of the three is the tallest.
If we act in accordance with Reasonable Beneficence, we will seemingly prefer option 2), since
that is in the expected interests of all three, given your evidence. After all, if you opt for 2), each
person not only has an equal chance of receiving the valuable property but also has a 1/3 chance
of getting a donut.[47]
At the same time, if we think the rightful owner of the property has a unique
claim to it—a claim that far outweighs a 1/3 chance of a free donut—it seems clear that you
should opt for 1).
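The evidence-relative arithmetic behind this verdict is straightforward (assuming being tallest is probabilistically independent of being the owner):

```latex
\begin{aligned}
\text{Option 1: } & P(\text{receive the property}) = \tfrac{1}{3}, & P(\text{receive a donut}) &= 0\\
\text{Option 2: } & P(\text{receive the property}) = \tfrac{1}{3}, & P(\text{receive a donut}) &= \tfrac{1}{3}
\end{aligned}
```

So option 2 dominates in expectation for each of Abe, Ben, and Cal, even though only option 1 respects the owner’s claim.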
What a case like PROPERTY GRAB illustrates is that, if we adopt Reasonable
Beneficence, then, if we can’t identify who has a particular moral claim, then that moral claim
will receive very little moral weight.[48] As one might expect, in addition to having problems handling cases in which one individual has a clear claim to a particular good, Reasonable Beneficence has difficulty handling cases in which multiple individuals have an equal claim to some good. When multiple individuals have equal claim to some good, it is natural to think that each should have an equal procedural chance at obtaining that good.[49]

47. This assumes that being tallest is probabilistically independent of being the property owner. If you know that the tallest person is the property owner, then option 2) would amount to giving the property, as well as a donut, to that person.
In MASS VACCINATION, for
instance, it’s plausible that each child has an equal claim to being cured of her fatal disease,
perhaps especially if the vaccine was developed with public funds. At the very least, children
who lack genetic marker G have no greater claim to being cured than those who have it. If we
accept that all children have an equal claim to being cured, then we should give at least some
normative weight to this equality of claims. If we give claims to fair treatment some amount of
weight, we will stick with Vaccine I instead of switching to Vaccine II even though doing so
violates Reasonable Beneficence.[50]
One way we might modify Reasonable Beneficence to account for cases like
PROPERTY GRAB is to claim that it only applies after we have taken into account individuals’
other moral claims. For instance, we might say that we should always act in accordance with
Reasonable Beneficence as long as doing so would not violate our other moral obligations. If we
adopt this approach, however, then we will have to reject Hare and Frick’s arguments for the
48
It’s worth noting that, in PROPERTY GRAB, it might be the case that Abe, Ben, and Cal don’t know who the
property belongs to and might never find out. In such a case, it will be true that, by opting for 1), you opt for an
expected outcome that is suboptimal. Even if we think that extreme suboptimalities ought to be avoided, even if that
involves violating individual claims to property, it is implausible that even the most insignificant of suboptimalities
should be treated in a similar manner.
49
As discussed by Alex Voorhoeve and Marc Fleurbaey, there is “a strand of contemporary egalitarianism” that is
concerned with the “unfairness of unchosen, undeserved outcome inequality” Voorhoeve and Fleurbaey (2012), p.
393. To me, it is natural to think about MASS VACCINATION in these terms. Those that Voorhoeve and Fleurbaey
see adopting this sort of view are Cohen (1989), Arneson (1997, 1999), and Temkin and Temkin (2001, 2002). To
this list I would also add Arneson (1989). For a somewhat contrary view to mine, see Henning (2015).
50
For relevant discussion of the distinction between an individual’s interests and her rights, see Setiya (2019, 11-
13).
59
principle. In particular, if we ought to violate Reasonable Beneficence in order to honor claims to
personal property, then it will not be the case, as Hare argues, that we should always act as others
would reasonably want us to if they knew what we knew, or that, as Frick argues, acting in
accordance with Reasonable Beneficence is always justifiable to each person impacted by our
actions. In addition, and perhaps more importantly for our purposes, once we give individual
moral claims priority over Reasonable Beneficence, we must contend with the possibility that
one such claim is the claim to fair treatment, where that claim is understood in terms of
procedural fairness.
4. Identified Lives vs. Reasonable Beneficence
In the previous two sections, I’ve given a unified and systematic treatment of both the
question of identified lives and the ex ante Pareto principle. In this final section, I show that the
views put forward by other authors seem to be at war with themselves. More specifically, the
arguments often put forward in favor of preferring identified lives over statistical ones seem to be
in open conflict with the arguments for adopting Reasonable Beneficence.
The tension between these two claims is most easily seen in the following sort of case:
WISHING WELL: Marc runs a daycare center in which he takes care of exactly one
hundred babies; the babies are named One, Two, Three, Four…Hundred.
One day, Marc learns (to his dismay) that one of the babies is trapped in Well A, though
he doesn’t know which one, while the other ninety-nine are trapped in Well B. Marc has
the choice of 1) saving the baby in Well A or 2) saving one of the ninety-nine babies in Well B via a
lottery. If Marc opts for the lottery, the baby saved from Well B will also receive a
lollipop.
If we adopt Hare’s view—in which we attempt to Distribute Bad Effects in terms of
objective chances at the time of the agent’s decision but satisfy Reasonable Beneficence with
respect to the agent’s subjective probabilities at that time—a case like WISHING WELL will
present Marc with a dilemma. If Marc wants to Distribute Bad Effects, he will save the baby in
Well A, because in doing so he does something quite bad for the other 99 babies and something
awesome for the one in Well A. However, if Marc acts in accordance with Reasonable
Beneficence, he will opt for the lottery and the lollipop because doing so is in the expected
interests of each baby, given Marc’s evidence. After all, opting for 2) is in the expected interests
of each of One, Two, Three, Four…Hundred, and Marc knows and deeply cares about each of
those babies. If it is impossible for Marc to both Distribute Bad Effects and act out of reasonable
concern for each person impacted by his decision, however, then something has to give.
One way in which we might think that Hare can address a case like WISHING WELL is
to claim that Marc should calculate the expected interests of the babies not by using the names
by which he knows each one, but instead by using guises such as “the baby in Well A.” If we use a
guise scheme that includes the guise “the baby in Well A,” then opting for 2) will not be in the expected
interests of all the babies, given Marc’s evidence. While the issue of guises requires a great deal
of care and nuance, the suggested remedy for Hare’s account is one he explicitly rejects. As Hare
puts it, with slight modification to be applicable to WISHING WELL, “when you know nothing
about the person you are in a position to kill, beyond that he or she is one of the [one hundred],
you ought to kill. When you learn that the person you are in a position to kill is [in a well], still
you ought to kill. When you learn that the person you are in a position to kill is [named
such-and-such], and you know a very great deal about [such-and-such], it is not the case that you
ought to kill.”[51]
The problem for Hare is that saving the baby in Well A is extremely good for that baby
even if we do not know that baby’s identity. In other words, saving the baby in Well A
distributes bad effects, even if we do not know who we are doing something great for—and, by
extension, who we are doing something quite bad for. This issue also arises for Frick, who
claims that we should favor identified over statistical lives because the former have a stronger
complaint if we act contrary to their interests. We should adopt a principle like Reasonable
Beneficence, according to Frick, because any action done in accordance with the principle can be
justified to each of the individuals impacted.
Applying Frick’s view to WISHING WELL, the baby in Well A seems to have a much
stronger complaint against Marc opting for the lottery and the lollipop than any of the babies in
Well B have against him saving the baby in Well A. After all, if Marc opts for the lottery and the
lollipop, the baby in Well A will certainly die instead of certainly being saved, whereas if Marc
opts to save the baby in Well A, the 99 in Well B go from being almost certainly doomed to
being certainly doomed. At the same time, Marc opting for the lottery and the lollipop satisfies
Reasonable Beneficence. That is, if Marc opts for the lottery and the lollipop, he can say to each
baby (by their names: One, Two, Three, Four…)—each baby that he knows quite well,
remember—that his decision was in that baby’s expected interests, given his evidence. This is
because, given his evidence, each of the babies has a 1/100 chance of surviving if he saves the
one in Well A but a 1/100 chance of both surviving and getting the lollipop if he saves one from
Well B.

[51] Hare (2016, p. 468). Hare’s full quote is as follows:
    [W]hen you know nothing about the person you are in a position to kill, beyond that he or she is one of the
    six, you ought to kill. When you learn that the person you are in a position to kill is on a bridge, still you
    ought to kill. When you learn that the person you are in a position to kill is Alexia, and you know a very
    great deal about Alexia, it is not the case that you ought to kill.
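The 1/100 figures just given can be checked directly. The following sketch is purely illustrative (the variable names and encoding are mine, not the author’s); it computes, from Marc’s evidential probabilities, a given named baby’s chance of being saved under each option:

```python
from fractions import Fraction

# Given Marc's evidence, each named baby is equally likely to be the one
# trapped in Well A. Illustrative sketch only.

# Option 1): save the baby in Well A. A given named baby is saved just in
# case it is the one in Well A: probability 1/100.
p_saved_save_well_a = Fraction(1, 100)

# Option 2): a fair lottery over the ninety-nine babies in Well B. A given
# named baby is in Well B with probability 99/100, and then wins the
# lottery with probability 1/99.
p_saved_lottery = Fraction(99, 100) * Fraction(1, 99)

print(p_saved_save_well_a, p_saved_lottery)  # 1/100 1/100
```

Since the lottery option adds a lollipop at the same 1/100 survival probability for each named baby, it comes out ahead in each baby’s expected interests, which is just the point the text makes.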
Perhaps the most tempting way for Frick to address WISHING WELL is to claim that,
despite its superficial appearance, WISHING WELL is not a case in which we must weigh the
interests of an identified person vs. a group of statistical people. More specifically, if we think of
the babies under the guises of their given names (One, Two, Three, Four…Hundred), then the
baby in Well A will not be identified because we don’t know his or her name. If the baby in Well
A is not identified, then our choice will simply be between saving one of the hundred babies that
we know from Well A or saving one of the hundred babies that we know from Well B.
Again, the role that guises play in these cases is subtle and complex. That said, Frick’s
own analysis of the cases in which the interests of identified lives are pitted against those of
statistical lives, an analysis that I think correctly identifies the phenomenon under consideration,
seems to rule out this sort of response. In particular, Frick (2015b, 187) writes that
Saving an identified person from death means saving someone who, but for our
intervention (or that of someone else), was certain (or at least very likely) to die. By
contrast, the way in which we prevent the loss of statistical lives is, typically, by reducing
or eliminating the risk of death faced by each member of some larger group…This
distinction, between preventing loss of life by saving people from almost certain death
versus preventing loss of life by slightly reducing the risk of death for many people,
whose antecedent chance was already significantly smaller, is mirrored in the choice
between treatment and prevention.
It’s incredibly hard to deny that, if Marc opts for 1) in WISHING WELL, he saves someone
who, “but for his intervention, was certain to die.” This is in contrast with him opting for 2), in
which case he would eliminate “the risk of death faced by each member of some larger group.”
With this in mind, it would require radical revision to Frick’s thinking to generate the result that
WISHING WELL is not, in fact, a case in which the interests of identified and statistical lives
are pitted against each other.[52]
The deeper problem for both Frick and Hare is that Reasonable Beneficence severely
undermines the role that other moral principles might play in their accounts. As we saw in the
case of PROPERTY GRAB in Section 3, Reasonable Beneficence can easily conflict with
straightforward claims that individuals have to their own property. This is in addition to its
conflict with claims that individuals have to an equal chance at obtaining some good. Even if we
are skeptical of these particular moral claims, which would be surprising, we should remember
that Reasonable Beneficence also conflicts with the claims that both Frick and Hare insist that
identified people have and statistical people lack. In this way, even Frick and Hare’s own
reasoning does not sit well with the principle of Reasonable Beneficence. If there is little hope of
reconciling a preference for identified lives with a principle like Reasonable Beneficence,
perhaps a view focused on procedural chances is our best hope for a systematic and coherent
approach to these questions.
[52] Another way we might think Frick could address WISHING WELL is to claim that, although it is a case in which
the interests of identified and statistical lives conflict, it is not a case in which Reasonable Beneficence can be
satisfied. Similar to the proposed response on behalf of Hare, Frick might claim that Reasonable Beneficence cannot
be satisfied because, if we use the guise of, for instance, “the baby in Well A,” then opting for 2) will not be in each
baby’s expected interests, given our evidence.
    Perhaps unsurprisingly, Frick rejects this approach for the same reason as Hare does, namely, that, while
Marc knows all of the babies by name, and knows a fair amount about each one, he does not know (and could not
easily find out) who the baby is that is in Well A. This is the same line of reasoning that Frick deploys in support of
the claim that we are justified in vaccinating children even when we know that, for some of the children, the vaccine
will be useless (2015a, pp. 181-184). Just as the vaccine is justified because we do not know which children
will not be helped by it, opting for 2) in WISHING WELL is justified because Marc does not know who, of all the
babies he knows quite well, is in Well A.
5. Conclusion
In this chapter, I aimed to accomplish three tasks. First, I outlined a notion of procedural
chances that I suggested captured what we ought to equalize when multiple agents have an equal
claim to a particular good. Next, I showed that this notion of procedural chances provided
us with a fresh and interesting perspective on debates about statistical lives and the ex ante Pareto
principle. To close, I made clear that, in contrast to my systematic and unified treatment of these
topics, the views of other authors are at war with themselves.
Sources:
Adler, Matthew D. (2003). The Puzzle of ‘Ex Ante Efficiency’: Does Rational Approvability Have Moral
Weight? University of Pennsylvania Law Review, 151(3), 1255–90.
Arneson, Richard J. (1989). Equality and Equal Opportunity for Welfare. Philosophical Studies, 56(1),
77-93.
Arneson, Richard J. (1999). Equality of Opportunity for Welfare Defended and Recanted. The Journal of
Political Philosophy, 7, 488–97.
Arneson, Richard J. (1997). Postscript to “Equality and Equal Opportunity for Welfare.” In L. Pojman and
R. Westmoreland (Eds.), Equality: Selected Readings (pp. 238-241). Oxford: Oxford University Press.
Bradley, Seamus. (2017). Are objective chances compatible with determinism? Philosophy Compass.
Broome, John. (1991). Fairness. Proceedings of the Aristotelian Society, 91(1), 87-102.
Butterfield, Jeremy. (2012). Laws, Causation and Dynamics at Different Levels. Interface Focus, 2(1).
Cohen, G. A. (1989). On the Currency of Egalitarian Justice. Ethics, 99, 906–944.
Daniels, Norman. (2012). Reasonable Disagreement about Identified vs. Statistical Victims. Hastings
Center Report, 42(1), 35-45.
Diamond, Peter. (1967). Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparison of
Utility: Comment. Journal of Political Economy, 75(5), 765-766.
Fleurbaey, Marc and Voorhoeve, Alex. (2013). Decide as you would with full information! An argument
against ex ante Pareto. In N. Eyal, S. A. Hurst, O. F. Norheim, and D. Wikler (Eds.), Inequalities in
health: Concepts, measures, and ethics (pp. 113-128). New York: Oxford University Press.
Frick, Johann. (2013). Uncertainty and Justifiability to Each Person: Response to Fleurbaey and
Voorhoeve. In N. Eyal, S. A. Hurst, O. F. Norheim, and D. Wikler (Eds.), Inequalities in health:
Concepts, measures, and ethics (pp. 129-146). New York: Oxford University Press.
Frick, Johann. (2015a). Contractualism and Social Risk. Philosophy and Public Affairs, 43(3), 175-223.
Frick, Johann. (2015b). Treatment versus Prevention in the Fight against HIV/AIDS and the Problem of
Identified versus Statistical Lives. In I. G. Cohen, N. Daniels, and N. Eyal (Eds.), Identified versus Statistical
Lives: An Interdisciplinary Perspective (pp. 182-202). Oxford: Oxford University Press.
Gallow, J. Dmitri. (2019). A subjectivist’s guide to deterministic chance. Synthese.
Hare, Caspar. (2012). Obligations to Merely Statistical People. Journal of Philosophy, 109(5/6), 378-390.
Hare, Caspar. (2016). Should We Wish Well to All? Philosophical Review, 125(4), 451-472.
Henning, Tim. (2015). From Choice to Chance? Saving People, Fairness, and Lotteries. Philosophical
Review, 124(2), 169-206.
Jenni, Karen E. and Loewenstein, George. (1997). Explaining the “Identifiable Victim Effect.” Journal of
Risk and Uncertainty, 14, 235-257.
Kumar, Rahul. (2015). Risking and Wronging. Philosophy and Public Affairs, 43(1), 27-51.
Kymlicka, Will. (1996). Multicultural Citizenship: A Liberal Theory of Minority Rights. Oxford: Oxford
University Press.
Lee, Seyoung and Feeley, Thomas H. (2016). The identifiable victim effect: a meta-analytic review.
Social Influence, 11(3), 199-215.
List, Christian and Pivato, Marcus. (2015). Emergent Chance. Philosophical Review, 124(1), 119-152.
List, Christian. (2014). Free Will, Determinism, and the Possibility of Doing Otherwise. Nous, 48(1), 156-
178.
Mahtani, Anna. (2017). The Ex Ante Pareto Principle. Journal of Philosophy, 114(6), 303-323.
Parfit, Derek. (2011). On What Matters. Oxford: Oxford University Press.
Ruger, Korbinian. (2018). On Ex Ante Contractualism. Journal of Ethics and Social Philosophy, 13(3).
Scanlon, T. M. (1996). The Status of Well-Being. The Tanner Lectures on Human Values.
Schaffer, Jonathan. (2007). Deterministic Chance? British Journal for the Philosophy of Science, 58, 113-140.
Schelling, Thomas. (1968). The Life You Save May Be Your Own. In S. B. Chase (Ed.), Problems in
Public Expenditure Analysis. Washington, DC: The Brookings Institution.
Setiya, Kieran. (2019). Ignorance, Beneficence, and Rights. Journal of Moral Philosophy, 17(1), 56-74.
Sher, George. (1980). What Makes a Lottery Fair? Nous, 14, 203-216.
Slote, Michael. (2015). Why Not Empathy? In I. G. Cohen, N. Daniels, and N. Eyal (Eds.), Identified versus
Statistical Lives: An Interdisciplinary Perspective (pp. 150-158). Oxford: Oxford University Press.
Stalnaker, Robert C. (1980). A Defense of the Conditional Excluded Middle. In W. L. Harper, G. A.
Pearce, and R. C. Stalnaker (Eds.), Ifs. Dordrecht: D. Reidel.
Temkin, Larry. (2002). Equality, Priority, and the Levelling Down Objection. In M. Clayton and A.
Williams (Eds.), The Ideal of Equality (pp. 126-161). Basingstoke: Palgrave Macmillan.
Temkin, Larry. (2001). Inequality: A Complex, Individualistic, and Comparative Notion. Philosophical
Issues, 11, 327–53.
Thomson, Judith Jarvis. (1990). The Realm of Rights. Cambridge: Harvard University Press.
Von Plato, J. (1982) Probability and Determinism. Philosophy of Science, 49(1), 51-66.
Voorhoeve, Alex and Fleurbaey, Marc. (2012). Egalitarianism and the Separateness of Persons. Utilitas,
24(3), 381-398.
Wasserman, David. (1996). Let Them Eat Chances: Probability and Distributive Justice. Economics and
Philosophy, 12, 29-49.
Williams, J. Robert G. (2010). Defending Conditional Excluded Middle. Nous, 44(4), 650-668.
Chapter 3
The Dynamics of Moral Thresholds
According to some views, moral constraints are not absolute. Instead, according to these
views—which many have called “Threshold” views—it is permissible to violate moral
constraints when the expected value of the outcome is high enough.[53] In this chapter, I explore
some challenges that arise when applying Threshold views to cases involving diachronic or
dynamic choice. In light of these challenges, I argue that, at least in certain cases, proponents of
Threshold views ought to broaden their analysis in such a way that the permissibility of
particular actions is parasitic on the diachronic actions of which they are a part.
While I will be focusing on a particular class of Threshold views in what follows, it’s
worth noting that this discussion has more general relevance. As Seth Lazar puts it in his
discussion of proportionality:
Questions of proportionality also arise in other contexts: for example, in risky rescue
attempts, emergency medicine or the management of bushfires. And the problem arises in
more mundane scenarios too—environmental regulation and public health policy, for
example, even taxation. In all these areas, you want to achieve some moral goal, doing so
is ‘worth’ a particular degree of moral cost and no more.[54]
Before starting, a word about methodology is in order. The primary context in which the
permissibility of diachronic choices has been discussed is with respect to the proportionality of
the continuation of just wars.[55] In particular, the views discussed below have been offered as
ways to determine how past casualties and losses ought (or ought not) to be factored into
decisions about further attempts to achieve a war’s objective. As we will see below, my goal is to
put forward an account to answer exactly this sort of inquiry. However, before getting to
questions related to the continuation of just wars, it is helpful to lay a foundation using simpler
and philosophically cleaner examples. With this in mind, in what follows, I start with a
discussion of trolley-type cases before applying the view that emerges to the context of war.

[53] Perhaps the most salient Threshold view is moderate deontology. See, for instance, Zamir and Medina (2010),
Johnson (2020), and Smilansky (2003); for critique, see Alexander (2000). This view is often put forward alongside
views in which the infringement of rights is distinguished from the violation of rights; see, for instance, Thomson
(1990) and Brennan (1995), and, for critique, Oberdiek (2004).
[54] Lazar (2018), p. 842.
This chapter will proceed in four parts. In Section 1, I discuss Threshold views in more
detail and outline two ways in which such theories have been applied to cases involving
diachronic choices. In Section 2, I argue that each of these ways generates an arbitrary
distinction between synchronic and diachronic choices and offer a third way that avoids this
result. The main challenge for my proposal is that, if we adopt it, then whether the expected
value of a particular action meets the relevant threshold can depend on the diachronic action of
which it is a part. In Section 3, I argue that proponents of a Threshold view should be willing to
accept this conclusion. Section 4 concludes.
1. Threshold Views and Diachronic Choices
According to Threshold views, it may be permissible to violate moral constraints when
the expected value of doing so is high enough.[56] To take a simple example, according to
proponents of this sort of view, an action that is expected to kill two innocent people—a
violation of a rather serious moral constraint—is permissible if doing so is a side effect of saving
five, at least if the agent is faced with a simple choice between killing two as a side effect of saving
five and doing nothing at all.[57]

[55] See, for instance, Lazar (2018), Tadros (2018), McMahan (2015), Fabre (2015), Kamm (2001), and Rodin (2008).
[56] Note that proponents of the Threshold view can be either partially or fully aggregative. For a defense of the
former, see Kamm (2013) and Voorhoeve (2014); for a defense of the latter, see Horton (2018) and Horton (2020).
Before discussing how Threshold views have been applied to cases involving diachronic
choice, some preliminaries are in order. First, in what follows, I will discuss the permissibility of
an agent’s available actions in terms of the expected outcomes of those actions, conditional on
the agent’s evidence.[58] As such, in evaluating whether particular actions or series of actions are
permissible, I will examine whether, given the agent’s evidence, the number of lives expected to
be saved and the number expected to be killed as a side effect meet the threshold required by the
view. To continue with the threshold discussed above, then, an action that the agent expects to
have the consequence of killing two innocent people will be permissible if, conditional on the
agent’s evidence, the killing of the two would be a side effect of saving at least five others.
In discussing permissibility in terms of outcomes conditional on the agent’s evidence, it’s
important to keep in mind the difference between two very different readings of “the number of
people expected to be saved” by a particular action. On one reading, the number of people
expected to be saved is the number of people the agent believes (or has high credence) will be
saved. This is not what I intend. For instance, if I have .5 credence that a particular action would
save 2 people and .5 credence that it would save 4, I’m certain that the action would not save 3
even though (in the intended sense) the number of people expected to be saved by the action is 3.
In what follows, then, it is important to read “the number of people expected to be saved” by an
action as “the number of people that would be saved, in expectation” by that action, where this is
arrived at by summing, across the possible states, the probability-weighted number of people saved.[59]

[57] Proponents of Threshold views often also require that the relevant killings be “unintentional,” though it’s not
widely agreed upon exactly what this means. For an excellent discussion of this and related issues, see Quinn (1989)
and Ramakrishnan (2016).
[58] While I do not prefer the terminology, for a variety of reasons, the reader could, if so inclined, think of
permissibility in this sense as “evidence-relative”; see Parfit (2011), pp. 150-151.
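In other words, the intended reading is the standard probability-weighted sum. Restating the example from this paragraph (the notation is mine, not the author’s):

```latex
% Expected number saved: sum over states s of the credence p(s) in state s
% times the number n(s) saved in that state. With credence .5 in saving 2
% and .5 in saving 4:
\mathbb{E}[\text{saved}] \;=\; \sum_{s} p(s)\, n(s) \;=\; (0.5 \times 2) + (0.5 \times 4) \;=\; 3
```

So the action “saves 3 people in expectation” even though no state in which exactly 3 are saved receives any credence at all.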
In order to operationalize Threshold views, I will make a number of assumptions. The
argument that follows can be translated to suit different assumptions; at the same time, I think
that the following assumptions are both plausible and easy to work with. First, I will assume that,
at least in a range of cases, the relevant threshold can be reduced to a ratio of individuals
expected to be saved to individuals expected to be killed as a side effect such that, when this
ratio is met or exceeded, the action is permissible (at least if no other available actions that meet the
relevant threshold are even better). In other words, if the expectation of saving 5 people makes
an action that is expected to kill 2 as a side effect permissible, then it is the case that, as long as
other available actions don’t have an even higher ratio, actions that are expected to save at least
2.5 times the number that are expected to be killed as a side effect are permissible.
For the cases that will concern us below, three additional assumptions will be helpful.
The first addresses when it is permissible for an agent to violate a moral constraint when she has
an action available to her that does not violate that constraint. While this is a complex question, in
what follows all we need assume about Threshold views is that it is permissible to perform an
action that is expected to violate the moral constraint against killing when two conditions are
met: 1) the expected value of the action meets the relevant threshold, and 2) the action(s)
available that does not violate this moral constraint is not expected to save anyone. While there
are a number of tricky questions this does not resolve—such as how to choose between an action
that violates a constraint but saves many in expectation and an action that does not violate a
constraint but saves fewer in expectation—I needn’t take a stand on such questions here.[60]

[59] This is similar to how expected utilities are calculated in decision theory. For relevant applications of such an
approach to ethical questions within a consequentialist framework, see Jackson (1991), and for a recent attempt to
apply such an approach within a deontological framework, see Bjorndahl, London, and Zollman (2017).
Two other assumptions that will be helpful below address how an agent ought to respond
to situations in which all of her available actions violate a moral constraint but fail to meet the
relevant threshold and situations in which more than one such action meets this threshold. When
all of an agent’s available actions violate a moral constraint but fail to meet the relevant
threshold, I will assume that the agent ought to perform the action that kills the fewest innocent
people in expectation. The basic motivation for this assumption is that, according to Threshold
views, unless certain very special conditions are met, other people’s claim rights against being
killed remain intact. With this in mind, it seems plausible for a proponent of a Threshold view to
maintain that, in cases in which these special conditions do not obtain and all innocent people’s
claim rights remain intact, an agent ought to kill as few innocent people as possible.
The last assumption we’ll need below addresses how an agent ought to respond to
situations in which all of her available actions violate a moral constraint and meet the relevant
threshold. In such cases, I will assume that the agent ought to perform the action with the highest
ratio of individuals saved in expectation to those killed as a side effect in expectation. For
instance, if an agent has the choice between killing 2 to save 5 and killing 2 to save 10, the agent
ought to do the latter because the ratio of those saved to killed is twice as high. Since, according
to the Threshold view, it is the value of this ratio that triggers a change in our moral landscape, it
seems plausible that higher values of this ratio are preferable to lower ones.
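As a rough sketch (in my own notation, not the author’s formal apparatus), this tie-breaking assumption amounts to taking the action with the maximal ratio:

```python
# Among constraint-violating actions that each meet the threshold, the
# assumption says to pick the highest ratio of expected saved to expected
# killed as a side effect. The action labels here are illustrative.

actions = {
    "kill 2 to save 5": (5, 2),    # ratio 2.5
    "kill 2 to save 10": (10, 2),  # ratio 5.0
}

best = max(actions, key=lambda a: actions[a][0] / actions[a][1])
print(best)  # kill 2 to save 10
```

As in the example from the text, the agent ought to kill 2 to save 10 rather than kill 2 to save 5, since the former’s ratio is twice as high.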
With these three assumptions in place, we can examine the question of how Threshold
views can be applied to cases involving diachronic choices. In the rest of this section, I will first
discuss an example of such a case and then outline two ways in which Threshold views might be
applied.

[60] For an argument that performing a lesser evil may be morally obligatory, see Frowe (2018).
The simplest cases that involve diachronic choices are ones in which an agent knows at
the outset that she might face a series of choices that are relevant to the achievement of a
particular goal.[61] Consider the following:
EXTENDED TROLLEY: A runaway trolley is careening down a main track and will kill
five innocent people (A, B, C, D, and E) unless Tessa diverts the trolley onto a branch
track. There are two innocent people (F and G) trapped on the branch track who will be
killed if the trolley is successfully diverted. Tessa knows that pulling a particular switch
at t1 is .8 likely to divert the trolley. However, Tessa is aware that there is a .2 chance
that the switch will malfunction and kill two innocent people who are neither on the
branch nor main track (H and I). If the switch malfunctions, the trolley will continue
unabated toward the five on the main track. Tessa knows that she will have the
opportunity to then fix the switch and pull it again at t2. Were Tessa to pull the switch a
second time at t2, she would definitely divert the trolley, which would kill the two people
on the branch track (F and G) and save the five on the main track (A-E).[62]
EXTENDED TROLLEY involves a diachronic choice in virtue of the fact that, when Tessa is
deliberating with respect to whether to pull the switch the first time, she knows that there is a .2
chance that she’ll have to make another decision that is relevant to the goal of saving the five
innocent people on the main track. That is, if things go wrong and the first pull fails to divert the
trolley, she’ll have to decide whether to fix it and pull it again.

[61] While this is a helpful simplification, I accept that further qualifications might be necessary to deal with cases in
which an agent ought to have known that she would face a series of relevant choices, and perhaps even those in
which she ought to have had a non-negligible credence that she would. For general discussion of cases involving
diachronic choice, see McClennen (1990) and Buchak (2013). For discussion of the implications of such cases for
our ethical reasoning, see Wu (2022).
[62] Based on McMahan (2015, pp. 703-4).
Figure 1. Decision tree for EXTENDED TROLLEY (with rectangles representing choices and
ovals outcomes). At t1, Tessa can refrain from pulling (A, B, C, D, and E die) or pull the switch.
Pulling at t1 diverts the trolley with probability .8 (F and G killed; A-E saved) and malfunctions
with probability .2 (H and I killed; no one saved), in which case Tessa faces a second choice at
t2: refrain from pulling (H and I killed; no one saved) or pull again (F, G, H, and I killed; A-E
saved).

Perhaps the simplest way of applying a Threshold view to EXTENDED TROLLEY is to
calculate, at any given time, whether the ratio of the number of people expected to be saved by a
particular action to the number of people expected to be killed as a side effect is high enough for
the action to be permissible. Using the ratio we’ve been working with, this means that, at any
particular time, if performing a particular action is expected to save at least 2.5 times more
people than it would kill as a side effect, then the action may be allowed by the Threshold view.
Let’s call this view a Prospective view. Applied to the choice Tessa faces at t2, the Prospective
view would generate the judgment that it is permissible for her to pull the switch, because the
number of people expected to be saved (5) is at least 2.5 times greater than the number expected
to be killed as a side effect (2).
If we apply a similar analysis to Tessa’s decision at t1, we will see that, according to a
Prospective view, pulling the switch is impermissible. Pulling the switch at t1 has a .8 chance of
saving 5, thus saving an expected 4, and doing so will definitely kill 2 as a side effect. Since 4 is
not at least 2.5 times greater than 2, it will be impermissible to pull the switch at t1.
                  Pull at t1?    Pull at t2?
Prospective view  Impermissible  Permissible
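The threshold arithmetic above can be checked with a short sketch. The following Python snippet is illustrative only; the function name and structure are my own, not part of the view itself:

```python
# A minimal sketch of the Prospective view's threshold test, applied to
# EXTENDED TROLLEY; the names here are illustrative assumptions.

THRESHOLD = 2.5  # expected saved must be at least 2.5x expected killed

def meets_threshold(expected_saved, expected_killed):
    # Condition (1): forward-looking comparison of this action's outcomes.
    return expected_saved >= THRESHOLD * expected_killed

# At t2: pulling again saves 5 (A-E) while killing 2 (F and G).
print(meets_threshold(5, 2))        # True -> permissible

# At t1: a .8 chance of saving 5 (4 in expectation), definitely killing 2.
print(meets_threshold(0.8 * 5, 2))  # False -> impermissible
```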
The key feature of a Prospective view is that the determination of whether the expected
value of an action meets the relevant threshold is purely forward-looking.[63] The core intuition in
support of a Prospective view is that both the actions an agent has chosen in the past and the
outcomes of those actions do not have any impact on the permissibility of the actions that are
currently available to the agent. Otherwise put, regardless of the permissible or impermissible
things an agent may have done in the past, in determining whether the relevant threshold of
permissibility is met, we only need look to the outcomes of the actions available to the agent
now.
Before moving on, it will be helpful to state sufficient conditions of the class of
Prospective views with which we will be working. As discussed above, the following
formulation will only handle a limited range of cases, but these are the only cases that are
relevant to the analysis that follows:
[63] This class of Prospective views, at least in its broad contours, is defended in McMahan (2015), as well as Tadros (2018).
Prospective view: if one (but not all) of an agent’s available actions violates the moral
constraint against exposing others to a substantial risk of death,[64] that action is
permissible if (1) the number of people who the action is expected to save is at least 2.5
times greater than the number of people who the action is expected to kill as a side effect,
and (2) the agent’s available actions that do not violate this constraint save no one in
expectation.[65] If all of the agent’s available actions violate this moral constraint and meet
the relevant threshold, the agent must perform the one with the highest ratio of people
saved in expectation to people killed in expectation. If all of the agent’s available actions
violate this constraint but fail to meet this threshold, then, if there is an action that exposes
others to the smallest risk of death in expectation, the agent must perform that action.[66]
While a Prospective view is a natural way of applying a moral threshold to EXTENDED
TROLLEY, some are bothered by the thought that those we have killed as a side effect in the
pursuit of some goal are not relevant to the permissibility of continuing to pursue it. To these
objectors, when determining whether it is permissible to continue to pursue some goal, we need
to keep in mind the costs (but not the benefits) incurred along the way. (I’ll consider a view that
keeps in mind both past costs and benefits below). Applied to the choice faced by Tessa at t2 in
EXTENDED TROLLEY, the defender of this view—which I’ll call the Quota view—would
insist that we take into account the two people, H and I, that were killed as a side effect of
pulling the switch at t1. With these people in mind, if Tessa pulls the switch at t2, she will expect
[64] Note that I stay neutral on the permissibility of exposing others to minimal risks of death. For more on this question, and in particular the permissibility of exposing others to very small risks of death, see Isaacs (2014) and Isaacs (2016), as well as Smith (2014) and Monton (2019).
[65] Again, this does not cover cases in which the agent’s available actions that do not violate the constraint do save people in expectation.
[66] Notably, this does not cover cases in which all of an agent’s available actions violate the moral constraint and fail to meet the relevant threshold, and there is no unique action that kills the fewest innocent people.
to kill a total of four innocent people as a side effect of saving an expected five. Since the five
expected to be saved by pulling at t2 is not at least 2.5 times greater than the four people who
would be killed as a side effect of her efforts, the Quota interpretation of the Threshold view
would consider pulling the switch at t2 impermissible. In addition, the Quota view would agree
with the Prospective view that pulling at t1 is impermissible since, as calculated above, the
number of lives expected to be saved is not at least 2.5 times greater than the number expected to
be killed as a side effect.
                  Pull at t1?    Pull at t2?
Prospective view  Impermissible  Permissible
Quota view        Impermissible  Impermissible
In contrast to a Prospective view, according to the Quota view, the determination of whether
the expected value of an action exceeds the relevant threshold is forward-looking at the first
choice point and conscious of past costs at every choice point thereafter. The core intuition in
support of the Quota view is that the bad outcomes of the choices we’ve made in pursuit of some
goal have at least some bearing on whether current efforts to achieve that goal meet the relevant
threshold. In particular, the proponent of the Quota view might point out, there must be some
point at which we’ve accrued sufficient moral costs to make the further pursuit of the same goal
fail to meet the threshold imposed by the moral constraints that are expected to be violated. We
can state sufficient conditions of the class of Quota views as follows:
Quota view: At the first choice point, identical to the Prospective view. At all subsequent
choice points, if one (but not all) of an agent’s available actions exposes others to a
substantial risk of death, where that includes risk exposure at previous choice points, that
action is permissible if (1) the number of people who the action is expected to save is at
least 2.5 times greater than the number of people who the action is expected to kill as a
side effect, where that calculation includes those killed (but not saved) at previous choice
points, and (2) the agent’s available actions that do not violate this constraint save no one
in expectation. If all of the agent’s available actions violate this moral constraint and
meet the relevant threshold, the agent must perform the one with the highest ratio of
people saved in expectation to people killed as a side effect in expectation, taking into
account those killed (but not saved) at previous choice points. If all of the agent’s
available actions that expose others to a substantial risk of death fail to meet this threshold
when past costs are included, then, if there is an action that exposes others to the smallest
risk of death in expectation, the agent must perform that action.[67]
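The difference between the two views at a later choice point comes down to whether past kills enter the cost side of the test. A minimal Python sketch (with illustrative names of my own) makes the contrast at Tessa’s t2 choice explicit:

```python
# A sketch contrasting the Prospective and Quota tests at Tessa's t2
# choice in EXTENDED TROLLEY (function names are illustrative).

THRESHOLD = 2.5

def prospective_ok(saved, killed):
    # Purely forward-looking: only this action's expected outcomes count.
    return saved >= THRESHOLD * killed

def quota_ok(saved, killed, past_killed):
    # Past kills (but not past saves) are added to the cost side.
    return saved >= THRESHOLD * (killed + past_killed)

# At t2, pulling again saves 5 (A-E) and kills 2 (F and G); H and I
# were already killed as a side effect of the failed pull at t1.
print(prospective_ok(5, 2))  # True:  5 >= 2.5 * 2
print(quota_ok(5, 2, 2))     # False: 5 <  2.5 * (2 + 2)
```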
At this point, my hope is simply to be relatively clear about key aspects of two ways we
might apply a Threshold view to cases involving diachronic choice. In the following Section, I
raise an objection to the Prospective and Quota views and propose an alternative to them both.
2. The Problem with the Prospective and Quota Views
In this Section, I argue that the Prospective and Quota views are committed to an
arbitrary distinction between synchronic and diachronic choices.[68] In order to avoid this arbitrary
distinction, we ought to adopt what I call an Aggregative Threshold view.
[67] This view is based loosely on the one presented in Moellendorf (2015); a similar idea is found in Fabre (2015). To be clear, however, the Aggregative view explored below is also consistent with Moellendorf’s analysis. For discussion of other possible views, see Rodin (2015).
[68] This argument would apply with equal force to the Discount view put forward in Lazar (2018, 843), a view he characterizes as an “intermediate approach.”
To get a better idea of the problem with the Prospective and Quota views, it will be
helpful to begin by examining how Threshold views apply to synchronic choices, or choices at
one point in time. Consider the following:
SIMULTANEOUS TROLLEY: A two-car runaway trolley is careering down the main
track and will kill five innocent people (A, B, C, D, E) unless both cars are diverted onto
a branch track. If both cars of the trolley are diverted, the two innocent people on the
branch track (F and G) would be killed. In order to divert both trolley cars, Janelle must
pull both Master Switch 1 and Master Switch 2 at the same time. Pulling Master Switch 1
results in the first trolley car being diverted and pulling Master Switch 2 results in the
second trolley car being diverted. The diversion of the first trolley car results in one
person (F) on the branch track being killed, the diversion of the second trolley results in
the other person (G) being killed. If Janelle pulls Master Switch 1 but not Master Switch
2, diverting the first but not the second trolley car, four out of the five people on the main
track would be saved (A, B, C, and D) but a resulting short-circuit would expose an
innocent bystander (H) to a .8 chance of death.
Figure 2. Decision Tree for SIMULTANEOUS TROLLEY:
  Do Nothing -> A, B, C, D, and E die.
  Only Pull Master Switch 1 -> F killed, H exposed to .8 chance of death; A, B, C, and D saved.
  Pull Master Switch 1 and Master Switch 2 -> F and G killed; A, B, C, D, E saved.
Regardless of whether Janelle adopts a Prospective or Quota view, pulling both switches in
SIMULTANEOUS TROLLEY will be permissible. This is because pulling both switches is
expected to save five people, which is at least 2.5 times the number of people expected to be
killed as a side effect, and the only other action available to Janelle that does not violate this
constraint, doing nothing, fails to save anyone (in expectation). In addition, regardless of which
view Janelle adopts, only pulling Master Switch 1 will not be permissible because doing so will
kill 1.8 (in expectation) as a side effect of saving 4, and 4 is not at least 2.5 times greater than
1.8.
                       Prospective view                     Quota view
SIMULTANEOUS TROLLEY   Permissible to pull both switches    Permissible to pull both switches
The challenge for the Prospective and Quota views is that their judgments of
SIMULTANEOUS TROLLEY are hard to square with their judgments of the following,
diachronic variation:
SEQUENTIAL TROLLEY: A two-car runaway trolley is careering down the main track
and will kill five innocent people (A, B, C, D, E) unless both cars are diverted onto a
branch track. If both cars of the trolley are diverted, the two innocent people on the
branch track (F, G) would be killed. In order to divert both trolley cars, Janelle must pull
Master Switch 1 (MS1) at t1 and Master Switch 2 (MS2) ten seconds later at t2. Pulling
Master Switch 1 at t1 results in the first trolley car being diverted and pulling Master
Switch 2 at t2 results in the second trolley car being diverted. The diversion of the first
trolley car results in one person (F) on the branch track being killed, the diversion of the
second trolley car results in the other person (G) being killed. If Janelle pulls MS1 at t1
but not MS2 at t2, diverting the first but not the second trolley car, four out of the five
people on the main track (A, B, C, and D) would be saved but a resulting short-circuit
would expose an innocent bystander (H) to a .8 chance of death.
Janelle has the same options available to her in SIMULTANEOUS TROLLEY as she does in
SEQUENTIAL TROLLEY, except, in the latter, some of her options are diachronic.
Figure 3. Decision Tree for SEQUENTIAL TROLLEY:
  Do Nothing at t1 -> A, B, C, D, E die.
  Pull MS1 at t1 -> F killed; A, B, C, D saved. Then, at t2:
      Don't Pull MS2 at t2 -> F killed, H exposed to .8 chance of death; A, B, C, D saved.
      Pull MS2 at t2 -> F and G killed; A, B, C, D, E saved.
At least intuitively, I would submit that what’s permissible in SIMULTANEOUS TROLLEY
ought to also be permissible in SEQUENTIAL TROLLEY. In particular, just as it is permissible
to pull both master switches in SIMULTANEOUS TROLLEY, it ought to be permissible to pull
both master switches in SEQUENTIAL TROLLEY. After all, in both cases, pulling both
switches results in the deaths of two (F and G) in order to save five (A, B, C, D, E).
Let’s think through how Janelle would act in SEQUENTIAL TROLLEY if she were to
adopt a Prospective view. In this analysis (and those that follow), it will be helpful to assume that
Janelle is what I call a “self-aware constraint-follower.” By calling her a constraint-follower, I
mean that, given the choice between an action that she considers morally permissible and one
that she considers morally impermissible, she would perform the action she considers morally
permissible. By calling her self-aware, I mean that she knows both that she is a constraint-follower and which interpretation of the Threshold view she has adopted.
In SEQUENTIAL TROLLEY, at t2, Janelle would see that both of her available actions—
pulling MS2 and refraining from pulling MS2—fail to save a number of lives in expectation that
is at least 2.5 times the number that is expected to be killed as a side effect. If Janelle refrains
from pulling MS2 at t2, this would result in a .8 chance of one person (H) dying and save no one;
If Janelle pulls MS2 at t2, she would kill one person (G) as a side effect of saving one (E). Since
both of her available actions fail to meet the relevant threshold at t2, Janelle would be obligated
to perform the action that kills the fewest people in expectation, which is refraining from pulling
MS2 at t2.
With this knowledge of how she would behave at t2 in mind, pulling MS1 at t1 also fails
to meet the relevant threshold. If Janelle pulls MS1 at t1 but refrains from pulling MS2 at t2, this
would result in four people being saved (A, B, C, and D), one person being killed (F) and one
person being exposed to a .8 chance of death (H) as a side effect. Since 4 is not at least 2.5 times
greater than 1.8, pulling at t1 would be impermissible for the adopter of the Prospective view.
This is an odd result, I would submit, because the Prospective view considers it permissible for
Janelle to pull MS1 and MS2 at the same time in SIMULTANEOUS TROLLEY. It’s not clear
why there would be a moral difference between pulling both switches at the same time and
pulling one switch right after the other, especially because each has the same effect in the world,
namely, the diversion of both the trolley cars from the main to the branch track, killing two (F
and G) as a side effect of saving five (A, B, C, D, E).
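Janelle’s reasoning here is, in effect, backward induction: settle what she would do at t2, then fold that answer into the evaluation of t1. A small Python sketch of that reasoning follows; the structure and names are my own illustration, not the author’s formalism:

```python
# A sketch of how a self-aware constraint-follower reasons by backward
# induction in SEQUENTIAL TROLLEY under the Prospective view
# (names and structure are illustrative assumptions).

THRESHOLD = 2.5

def ok(saved, killed):
    return saved >= THRESHOLD * killed

# Her options at t2, given that MS1 was pulled at t1:
pull_ms2 = {"saved": 1, "killed": 1}   # saves E, kills G
refrain = {"saved": 0, "killed": 0.8}  # H faces a .8 chance of death

# Neither option meets the threshold at t2...
assert not ok(**pull_ms2) and not ok(**refrain)
# ...so Janelle would perform the option that kills fewest in expectation.
t2_choice = min([pull_ms2, refrain], key=lambda o: o["killed"])

# Folding that back into t1: pulling MS1 then refraining at t2 saves
# 4 (A-D) while killing 1.8 in expectation (F, plus the .8 risk to H).
print(ok(4, 1 + t2_choice["killed"]))  # False -> pulling MS1 is impermissible
```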
Before considering how we might avoid this result, it’s worth discussing why Janelle
would be in the same position if she were to adopt the Quota view. If Janelle adopts a Quota
view, then she would also refrain from pulling MS2 at t2. This is because both pulling MS2 and
refraining from pulling MS2 will fail to save a number of people that is 2.5 times the number that
are expected to be killed as a side effect, as calculated by the Quota view, and refraining from
pulling will kill the fewest innocent people. More specifically, refraining from pulling MS2 at t2
would expose one person (H) to a .8 chance of death with no additional lives saved. Once we
take into consideration the one person who Janelle would have killed as a side effect of pulling
MS1 at t1 (F), as the Quota view requires, the number of people saved by refraining from pulling
MS2 will clearly be below the relevant threshold. On the other hand, pulling MS2 would kill one
person (G) as a side effect of saving one person on the main track (E). After we take into
consideration the one person who Janelle kills as a side effect of pulling MS1 (F), the number of
people expected to be saved by pulling MS2 would not be at least 2.5 times greater than the
number of people expected to be killed as a side effect.
Just as with a Prospective view, if Janelle adopts a Quota view, then she would
refrain from pulling MS1 in the first place because the number of people expected to be saved by
pulling MS1 is not at least 2.5 times the number of people expected to be killed as a side effect.
As suggested above, this is a peculiar result since pulling both MS1 and MS2 at the same time
has the same effect in the world as pulling MS1 and then pulling MS2 ten seconds later, namely,
the diversion of the two-car trolley from the main track onto the branch track.
                       Prospective view                     Quota view
SIMULTANEOUS TROLLEY   Permissible to pull both switches    Permissible to pull both switches
SEQUENTIAL TROLLEY     Impermissible to pull either switch  Impermissible to pull either switch
One objection to this analysis of SEQUENTIAL TROLLEY that is worth considering is
that, at t2, Janelle should consider the death of 1.8 people (in expectation) to be guaranteed. If
she considered the death of 1.8 people at t2 to be guaranteed, she would then take her choice to
be between killing no one to save no one (if she refrains from pulling MS2 at t2) and killing an
additional .2 people in expectation to save 1 person, E (if she pulls MS2 at t2). If we adopt this
approach, then pulling MS2 at t2 would meet the relevant threshold because the ratio of people
saved to killed as a side effect is greater than 2.5 (1 saved to .2 killed).
While this sort of approach might make sense for certain purely consequentialist views, it
is a poor fit for the Threshold views under consideration. According to Threshold views, killing a
person is permissible when it is a side effect of saving a sufficient number of others. When we
evaluate whether a particular action meets the relevant threshold, then, we must both determine
the number of individuals who will be saved in expectation and the number who will be killed as
a side effect of saving those individuals. Applied to SEQUENTIAL TROLLEY, to determine
whether pulling MS2 at t2 meets the relevant threshold, we compare the one person who will be
saved in expectation (E) with the one person who will be killed as a side effect of saving this
individual (G).[69] When we determine whether refraining from pulling MS2 meets the relevant
threshold, we compare the zero people who will be saved as a result with the one person who
[69] At least if we adopt a Prospective view or Quota view. As discussed below, we’d approach this question differently if we adopt what I call an Aggregative view.
will be exposed to a .8 chance of death (H) as a side effect. Since the death of G, but not the .8
risk to H, is a side effect of saving E, the adopter of the Threshold view will be loath to simply
subtract one from the other in determining whether pulling or refraining from pulling is
permissible.[70]
While delivering differing verdicts in SIMULTANEOUS TROLLEY and SEQUENTIAL
TROLLEY is odd, an even more unappealing feature of both the Prospective and Quota views is
that their verdicts depend on the order of the actions in a sequence. More specifically, both views
deliver differing verdicts in SEQUENTIAL TROLLEY and the following case:
REVERSE SEQUENTIAL TROLLEY: Same set-up as SEQUENTIAL TROLLEY
except that, in order to divert both cars of the trolley from the main track onto the branch,
killing two (F and G) in order to save five (A, B, C, D, E), the agent must pull Master
Switch 2 at t1 and Master Switch 1 ten seconds later at t2. Pulling MS2 at t1 results in the
second trolley car being diverted and pulling MS1 at t2 results in the first trolley car being
diverted. The diversion of the second trolley car results in one person (G) on the branch
track being killed, the diversion of the first trolley results in the other person (F) being
killed. If Janelle pulls MS2 at t1 but not MS1 at t2, diverting the second but not the first
trolley car, one out of the five people on the main track (E) would be saved but a
resulting short-circuit would expose an innocent bystander (H) to a .8 chance of death.
[70] As will be discussed in more detail below, we might make a similar point with respect to the proportionality of wars. If a nation launches a war at t1, then what will be relevant to the evaluation of the proportionality of the war is the objective to be achieved and the number of casualties expected. If the war objective was to save a certain number of innocent lives, then the analogy with SEQUENTIAL TROLLEY is clearest. At t2 in SEQUENTIAL TROLLEY, Janelle can be thought of as determining whether or not to launch a war to save E. In making this determination, she would compare the number of casualties expected as a result of that war to the number of people who will be saved. At least arguably, in determining whether launching a war at t2 to save E would be proportional, the casualties from prior wars (including the war launched at t1 to save A, B, C, and D) are not relevant. Instead, such casualties are only relevant to the determination of whether the war launched at t1 was itself proportionate.
The only difference between SEQUENTIAL TROLLEY and REVERSE SEQUENTIAL
TROLLEY is that, in order to divert both trolley cars, Janelle has to pull Master Switch 2 before
Master Switch 1. What does not change is the effect of pulling each master switch: in both
SEQUENTIAL TROLLEY and REVERSE SEQUENTIAL TROLLEY, pulling MS1 results in
the diversion of the first trolley car, and pulling MS2 results in the diversion of the second trolley
car.
Figure 4. Decision Tree for REVERSE SEQUENTIAL TROLLEY:
  Do Nothing at t1 -> A, B, C, D, E die.
  Pull MS2 at t1 -> G killed; E saved. Then, at t2:
      Don't Pull MS1 at t2 -> G killed, H exposed to .8 chance of death; E saved.
      Pull MS1 at t2 -> F and G killed; A, B, C, D, E saved.
If Janelle adopts either the Prospective view or the Quota view, it will be permissible for
her to pull MS2 then MS1 in REVERSE SEQUENTIAL TROLLEY. In short, this is because,
regardless of which view she adopts, pulling MS1 at t2 will be permissible, as the number of
people expected to be saved by pulling MS1 at t2 (4) is more than 2.5 times the number of
people expected to be killed as a side effect (1), and refraining from pulling MS1 at t2 fails to
meet this threshold. As a self-aware constraint-follower, Janelle knows she would pull MS1 at t2
if given the chance; with this knowledge in mind, the expected number of lives saved by pulling
MS2 at t1 is 5, while the number of people expected to be killed is 2. Otherwise put, since
Janelle knows that she would pull MS1 at t2, she expects that, by pulling MS2 at t1, she will save
five (A, B, C, D, E) while killing two (F and G) as a side effect.
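The order-dependence can be seen by running the same threshold test on both orderings. The following Python sketch (with illustrative names of my own) reproduces the two calculations:

```python
# A sketch showing the order-dependence of the Prospective view's
# verdicts (names and structure are illustrative assumptions).

THRESHOLD = 2.5

def ok(saved, killed):
    return saved >= THRESHOLD * killed

# SEQUENTIAL (MS1 then MS2): at t2, pulling MS2 saves 1 (E) and kills 1
# (G), which fails, as does refraining; Janelle would refrain, so pulling
# MS1 at t1 is expected to save 4 (A-D) while killing 1.8 (F, plus the
# .8 risk to H).
print(ok(4, 1.8))  # False -> impermissible to begin the sequence

# REVERSE SEQUENTIAL (MS2 then MS1): at t2, pulling MS1 saves 4 (A-D)
# and kills 1 (F), which passes; knowing she would pull at t2, pulling
# MS2 at t1 is expected to save 5 while killing 2.
print(ok(4, 1))    # True -> she would pull MS1 at t2
print(ok(5, 2))    # True -> permissible to begin the sequence
```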
This leaves us with the following verdicts:
                            Prospective View                     Quota View
SIMULTANEOUS TROLLEY        Permissible to pull both switches    Permissible to pull both switches
SEQUENTIAL TROLLEY          Impermissible to pull either switch  Impermissible to pull either switch
REVERSE SEQUENTIAL TROLLEY  Permissible to pull both switches    Permissible to pull both switches
It is hard to fathom why the order in which Janelle pulls the master switches would
matter to the permissibility of pulling both. Simply put, when considering whether it’s
permissible to perform two actions, each of which is necessary to bring about a certain effect in
the world, it does not seem morally relevant which action comes first. Regardless of the order in
which the master switches are pulled in SEQUENTIAL TROLLEY and REVERSE
SEQUENTIAL TROLLEY, both pulls are necessary to bring about the exact same result in the
world. That is, whether Janelle pulls MS1 and then MS2 or MS2 and then MS1, both pulls are
necessary to divert the two-car trolley onto the branch track, killing two (F and G) as a side
effect of saving five (A, B, C, D, E).
One objection that is worth considering at this point is that the order of actions matters
because, if we adopt a Threshold view, agents are allowed to perform actions that would
otherwise be impermissible in order to be in a position to perform actions that have a high
enough expected value. For instance, according to the Threshold view, an individual may be
allowed to kill 2 as a side effect of saving 5, but she would not be allowed to save 5 and then
proceed to kill 2. In this relatively straightforward way, we might think, the order of the actions
that an agent performs is of crucial importance to their permissibility according to a Threshold
view.
As discussed above, I agree that an essential feature of Threshold views is the causal
relationship between those killed as a side effect and those saved by one’s actions. At the same
time, what makes this objection less forceful when applied to SIMULTANEOUS TROLLEY,
SEQUENTIAL TROLLEY, and REVERSE SEQUENTIAL TROLLEY is that, regardless of the
order in which the switches are pulled (or whether they are pulled simultaneously), this has the
same effect in the world. To make this vivid, we might imagine Janelle in a scenario in which
she has three choices: pull both switches at the same time, pull MS1 and then MS2, or pull MS2
and then MS1. Since each of these choices brings about the same effect in the world, diverting
two trolley cars and killing two (F, G) to save five (A, B, C, D, E), it’s hard to imagine that
anything of moral significance hangs on how exactly Janelle proceeds. If we think that our
Threshold view ought to reach the same verdict regardless of the order in which Janelle pulls the
switches (or whether she does so simultaneously), we will find the Prospective view and Quota view
unsatisfactory.
In my view, the best way of avoiding inconsistent and arguably incoherent verdicts in
SIMULTANEOUS TROLLEY, SEQUENTIAL TROLLEY, and REVERSE SEQUENTIAL
TROLLEY is to adopt what I call an Aggregative view. According to an Aggregative view, the
relevant threshold calculation takes into account both those killed and those saved in one’s
efforts up to that choice point. Sufficient conditions for the class of Aggregative views under
consideration are as follows:
Aggregative view: At the first choice point, identical to the Prospective view. At all
subsequent choice points, if one (but not all) of an agent’s available actions exposes
others to a substantial risk of death, where that includes risk exposure at previous choice
points, that action is permissible if (1) the number of people who the action is expected to
save is at least 2.5 times greater than the number of people who the action is expected to
kill as a side effect, where that calculation includes those killed and saved at previous
choice points, and (2) the agent’s available actions that do not violate this constraint save
no one in expectation. If all of the agent’s available actions violate this moral constraint
and meet the relevant threshold, the agent must perform the one with the highest ratio of
people saved in expectation to people killed as a side effect in expectation, taking into
account those killed and saved at previous choice points. If all of the agent’s
available actions that expose others to a substantial risk of death fail to meet this threshold
when past costs are included, then, if there is an action that exposes others to the smallest
death in expectation, the agent must perform that action.
The primary virtue of the Aggregative view is that it generates the same judgment with respect to
each of the three cases discussed in this section. More specifically, when Janelle is considering
whether to pull MS1 and MS2—whether at the same time, MS1 then MS2, or MS2 then MS1—
she expects that doing so will save five people while killing two as a side effect, taking the
outcomes of these actions in the aggregate. Since the number of people that are expected to be
saved by pulling both switches is 2.5 times greater than the number that is expected to be killed
as a side effect, and the only series of actions available to Janelle that doesn’t violate a moral
constraint fails to save anyone in expectation, the Aggregative view considers pulling both
switches permissible.
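The Aggregative accounting can be sketched as follows (in Python, with illustrative names of my own); because past saves and past kills are both carried forward, the three cases share one verdict:

```python
# A sketch of the Aggregative view's threshold test: past saves and past
# kills are both carried into the calculation (names are illustrative).

THRESHOLD = 2.5

def aggregative_ok(past_saved, past_killed, saved_now, killed_now):
    total_saved = past_saved + saved_now
    total_killed = past_killed + killed_now
    return total_saved >= THRESHOLD * total_killed

# SIMULTANEOUS: no past; pulling both switches saves 5 and kills 2.
print(aggregative_ok(0, 0, 5, 2))  # True

# SEQUENTIAL, at t2: the t1 pull saved 4 (A-D) and killed 1 (F);
# pulling MS2 now adds 1 saved (E) and 1 killed (G).
print(aggregative_ok(4, 1, 1, 1))  # True: 5 >= 2.5 * 2

# REVERSE SEQUENTIAL, at t2: the t1 pull saved 1 (E) and killed 1 (G);
# pulling MS1 now adds 4 saved (A-D) and 1 killed (F).
print(aggregative_ok(1, 1, 4, 1))  # True: same aggregate, same verdict
```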
                            Prospective                  Quota                        Aggregative
SIMULTANEOUS TROLLEY        Permissible to pull both     Permissible to pull both     Permissible to pull both
SEQUENTIAL TROLLEY          Impermissible to pull at t1  Impermissible to pull at t1  Permissible to pull both
REVERSE SEQUENTIAL TROLLEY  Permissible to pull both     Permissible to pull both     Permissible to pull both
Of course, the Aggregative view has problems of its own. It is to these problems that I
now turn.
3. Two Problems with the Aggregative View
There are two central challenges to the Aggregative view outlined above. First, one might
think that, if we adopt the view, then an agent can perform actions whose costs grossly outweigh
their benefits as long as their actions at earlier choice points went well enough. Second, we might
doubt the Aggregative view’s claim that an agent’s past actions impact the permissibility of the
actions that are currently available to her. This latter objection is especially pressing if we
concede that such constraints would not apply to agents who entered the scene (so to speak) at
later choice points.
As detailed below, I agree that, if we adopt the Aggregative view, then actions that save
fewer than 2.5 times the number of people that they kill (in expectation) will be considered as
meeting the relevant threshold. Before discussing such a case, it is crucial to recognize that the
Aggregative view does not generate counterintuitive judgments in the following sort of case:
GREAT BEGINNING: A nation starts a war about which it is true that the deaths of
1,000 enemy combatants are permissible to save 2,500 innocent civilians. Near the
completion of the war, the nation has saved 2,490 innocent civilians while only killing 10
enemy combatants. The nation now has two choices: kill 990 enemy combatants to save
10 innocent people or do nothing at all.
Figure 5. Decision tree for GREAT BEGINNING:
  Do Nothing at t1 -> 2500 civilians die.
  Launch War at t1 -> 10 combatants killed; 2490 civilians saved. Then, at t2:
      End war at t2 -> 10 combatants killed; 2490 civilians saved.
      Continue war at t2 -> 1000 combatants killed; 2500 civilians saved.
One worry we might have about the Aggregative view is that, if we adopt it, then it will be
permissible to continue the war at t2. After all, if we continue the war at t2, we will save 2500
while only killing 1000, preserving the 2.5 ratio of those saved to killed in expectation. This is an
abhorrent result, since continuing the war at t2 kills 990 to save 10 while we have the option
available of killing no one else.
As a reminder, according to the Aggregative view we’ve been considering, if multiple
actions meet the relevant threshold when both the costs and benefits of prior actions are taken
into account, the agent must perform the one with the highest ratio of those saved to those killed
in expectation. Even though continuing the war at t2 has a ratio of 2.5, which is above the
relevant threshold, ending the war at t2 has a ratio of 249 (2,490:10). With this in mind, although
it is true that the Aggregative view would consider both of these options as meeting the relevant
threshold, it would still require the nation to end hostilities at t2.
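The highest-ratio requirement at work here can be sketched briefly (the names below are my own, illustrative choices):

```python
# A sketch of the highest-ratio requirement in GREAT BEGINNING: both
# options at t2 meet the 2.5 threshold in the aggregate, but the
# Aggregative view requires the one with the higher aggregate ratio.

options = {
    # aggregate (saved, killed) totals if the war ends vs. continues at t2
    "end war": (2490, 10),
    "continue war": (2500, 1000),
}

ratios = {name: saved / killed for name, (saved, killed) in options.items()}
print(ratios)                       # {'end war': 249.0, 'continue war': 2.5}

print(max(ratios, key=ratios.get))  # 'end war' -- the required action
```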
That said, there are cases in which adopting the Aggregative view allows for actions
whose expected cost-to-benefit ratio fails to meet the relevant threshold. However, in these cases
the difference between the Aggregative view on the one hand and the Prospective and Quota
views on the other is far less pronounced. For instance, consider the following:
MESSY EXIT: A nation starts a war about which it is true that the deaths of 40 enemy
combatants are permissible to save 100 innocent civilians. If the war is launched at
t1, 94 innocent civilians will be saved while only killing 1 enemy combatant. To end
hostilities, the nation has two choices: Exit 1, which will kill 38 enemy combatants and
save 3 innocent people, and Exit 2, which will kill 39 combatants to save 6.
Figure 6. Decision Tree for MESSY EXIT:
  Do Nothing at t1 -> 100 civilians die.
  Launch War at t1 -> 1 combatant killed; 94 civilians saved. Then, at t2:
      Exit 1 at t2 -> 39 combatants killed; 97 civilians saved.
      Exit 2 at t2 -> 40 combatants killed; 100 civilians saved.
In MESSY EXIT, the defender of a Prospective or Quota view will insist that both Exit 1 at t2
and Exit 2 at t2 fail to meet the relevant threshold. Exit 1 is expected to kill 38 to save 3, while
Exit 2 at t2 is expected to kill 39 to save 6. Since each of these Exits is expected to kill more
than six times the number of people that it would save, the proponent of the Prospective and
Quota view will insist that, if forced to choose between the two, an agent ought to simply kill the
fewest innocent people possible (in expectation), which in MESSY EXIT would require Exit 1.
As a self-aware constraint-follower, if the nation adopts either the Prospective or Quota view, it would then be impermissible to launch the war, as the number of people expected to be saved by doing so will not be at least 2.5 times the number of people expected to be killed.
A defender of the Aggregative view will approach MESSY EXIT quite differently.
Perhaps most controversially, the defender of the Aggregative view will claim that Exit 2 does,
in fact, meet the relevant threshold. This is because, while Exit 2 kills 39 in order to save 6, the
determination of whether the relevant threshold is met takes into account both people killed and
people saved at previous choice points. In terms of the relevant threshold, launching the war was
fantastic: it saved 94 innocent civilians while only killing 1 enemy combatant. Given how well
things turned out at t1, things can go much worse at t2 without the nation running afoul of the relevant
threshold. Since Exit 2 meets the relevant threshold (while Exit 1 does not), if the nation adopted
an Aggregative view, it would opt for Exit 2 at t2. Knowing this about itself, it would then be
permissible to launch the war at t1.
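To make this accounting concrete, here is a sketch of the arithmetic as I read the case (the 2.5 ratio comes from MESSY EXIT's stipulation that killing 40 combatants is permissible to save 100 civilians):

```latex
% Aggregative view: ratios are computed over the war as a whole
\text{Exit 1 totals: } \frac{94 + 3}{1 + 38} = \frac{97}{39} \approx 2.49 < 2.5
  \quad \text{(fails the threshold)}
\text{Exit 2 totals: } \frac{94 + 6}{1 + 39} = \frac{100}{40} = 2.5
  \quad \text{(meets the threshold)}
```

On this accounting, it is the unusually good results at t1 that allow Exit 2's otherwise disproportionate killing at t2 to count as meeting the threshold.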
If we favor a Prospective or Quota view, we might view the judgment generated by the
Aggregative view in MESSY EXIT as quite damaging. After all, if one adopts a Threshold view,
it seems hard to deny that killing 39 to save 6 fails to meet the relevant threshold. I agree that this
might not be an initially attractive feature of the Aggregative view. The first response to this
critique is that, if we accept it, then it seems like we must accept a view that has the flaws
outlined in Section 3. After all, in the variant of MESSY EXIT in which the beginning of the war
goes horribly—which we might call MESSY ENTRANCE—the Prospective and Quota views would agree with the Aggregative view that launching the war is permissible. In addition, in a
case in which the nation had the option of conducting the entire war in one stage, proponents of
the Prospective and Quota views would also agree that doing so would meet the relevant
threshold as it would kill 40 combatants to save 100 civilians. Since all three views agree on
these slight variations of MESSY EXIT, it’s not clear how cogent a case the proponent of
either the Prospective or Quota view can make against the Aggregative view’s judgment in the
original case.
I think the strongest critique of the Aggregative view’s judgment in this case—which
would also apply to the Quota view—is put forward by Jeff McMahan.[71]
As an initial matter, we
can frame McMahan’s critique as follows: in MESSY EXIT, the proponent of the Aggregative
view agrees that if a nation is forced to make a one-off choice between killing 39 combatants to
save 6 innocent civilians in expectation and killing 38 combatants to save 3 innocent civilians in
expectation, it ought to opt for the latter because it kills the fewest innocent people in
expectation. With this in mind, McMahan might continue, it is hard to make sense of why the
proponent of the Aggregative view would insist that, in MESSY EXIT, it is permissible for the
nation to opt for Exit 2. After all, by opting for Exit 2, the nation is killing 39 combatants to save
6 innocent civilians in expectation instead of killing 38 combatants to save 3 innocent civilians.
[71] See McMahan (2015, 705-706), as well as Tadros (2018, 25-27) and, to some extent, Lazar (2018, 854-55).
Simply put, the challenge for the proponent of the Aggregative view is explaining how a
nation’s action at t1 can impact what it is permissible for it to do at t2. The most compelling
response to this challenge, in my mind, is to point out a similar commitment for the Prospective
view. This commitment becomes clear when considering the following sort of case:
DUAL-FRONT WAR: A nation is deciding whether or not to initiate hostilities to save
the lives of 100 civilians. To achieve this objective, it is proportionate and thus
permissible to kill 40 enemy combatants. If the nation initiates Battle 1 and Battle 2 at t1,
40 combatants will be killed in order to save 100 civilians. If the nation initiates Battle 1
but not Battle 2, 10 combatants will be killed in order to save 10 civilians.
Figure 7. Decision Tree for DUAL-FRONT WAR
If we adopt the Prospective view, it will be permissible for the nation to launch both battles in
DUAL-FRONT WAR because doing so is expected to save 100 innocent lives while only killing
40 combatants, and the only action available to the nation that does not violate a moral constraint
saves no one in expectation. However, it would not be permissible for the nation to only launch
Battle 1, because doing so would kill 10 combatants and save 10, which does not meet the
relevant threshold, and the nation has actions available to it that do meet the threshold.
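The Prospective view’s verdicts here reduce to the same threshold arithmetic, sketched below on the assumption that the relevant ratio is again 2.5 (100 civilians per 40 combatants, as stipulated in the case):

```latex
\text{Initiate both battles: } \frac{100}{40} = 2.5 \quad \text{(meets the threshold)}
\text{Only initiate Battle 1: } \frac{10}{10} = 1 < 2.5 \quad \text{(fails the threshold)}
```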
[Decision tree: Do Nothing: 100 innocent civilians die. Only Initiate Battle 1: 10 combatants killed; 10 civilians saved. Initiate Battle 1 and Battle 2: 40 combatants killed; 100 civilians saved.]
In analyzing DUAL-FRONT WAR, the proponent of the Prospective view considers one
option to be launching both battles to save the civilians. Against this approach, one might ask
why this option ought not be broken down into two independent actions: launching Battle 1 and
launching Battle 2. If we were to analyze each of these independent actions, then it’s less clear
that launching Battle 1 would be permissible. This is because launching only Battle 1 kills 10
and saves 10, which the proponent of the Prospective view will agree violates a moral constraint while failing to meet the relevant threshold, and the nation has an action available to it that does not
violate a moral constraint, namely, doing nothing. Since the proponent of the Prospective view
agrees that launching Battle 1 fails to meet the relevant threshold, it is natural to wonder why she
nonetheless considers launching both battles to be permissible. Otherwise put, why should the
fact that the nation is launching Battle 2 have any impact on the permissibility of launching
Battle 1?
The most natural response on behalf of the Prospective view is to insist that, in DUAL-FRONT WAR, it makes sense to consider multiple actions that are at least somewhat independent,
such as launching Battle 1 and launching Battle 2, as the relevant unit of moral analysis. As
discussed in the Introduction above, the most salient discussion of the Prospective view and
Quota view has taken place within a larger literature on the permissibility of war, and a war is a
quintessential example of a collection of actions that are at least conceptually separable. Since
any given war will consist of numerous tactical decisions with respect to troop movements and
battles, at the point at which we agree to analyze the proportionality of a war, the proponent of
the Prospective view can point out, we have already agreed to analyze a unit that consists of
many seemingly independent actions.
Of course, the fact that the Prospective view is amenable to larger units of analysis such
as wars plays right into the Aggregative view’s hands. Once the Prospective view agrees with the
Aggregative view that, at least in certain contexts, entire wars are the relevant unit of analysis,
we have a readymade answer as to why a nation’s past actions may constrain its future ones,
namely, that all of those actions are part of the same war. For a war to be permissible, the
Prospective view presumably concedes, it must be proportionate. The Aggregative view
agrees and, furthermore, takes this agreement as entailing a certain concern for past actions in a
war when considering when and whether to continue it. In other words, the concern that the
Prospective view has about the proportionality and thus permissibility of wars as such translates,
for the Aggregative view, into the fact that past actions within a war are relevant to the
proportionality of continuing that war.
This response also addresses the objection that it seems unappealing for it to be
impermissible for one nation to continue hostilities if it would be permissible for another to start
them.[72]
However unappealing this conclusion may be, once we accept that we are interested in
the proportionality of wars, and that new hostilities from a previously uninvolved state constitute
a new war, it is quite difficult to avoid the result that certain actions will be permissible for the
uninvolved state that would not be permissible for the state that has already started hostilities.
After all, with these assumptions in place, the proportionality of the war initiated by the first state
will only take into account the casualties inflicted and objectives achieved by that state; if other
states choose to initiate their own wars, then the proportionality of those wars will only take into
account the casualties inflicted and objectives achieved by those states. While there is certainly a
debate to be had about when and whether the actions of multiple nations count as being part of
[72] McMahan (2015, 706-707).
the same war, the proponent of the Aggregative view need not be wedded to any particular view
on this matter. Instead, she can simply start with the unit that we take to be of moral interest, a
unit that (at least in theory) the proponent of the Prospective view can agree upon.
One option that is left on the table for the Prospective view is to abandon all groupings of
actions. According to this revised view, it is not the permissibility or proportionality of wars we
ought to be concerned with, but instead the permissibility of all of the actions we can foresee
ourselves performing into the indefinite future. If we are open to the possibility of evaluating all
of our foreseeable actions at every point in time—and replacing our analyses of smaller groups
altogether—then the Aggregative view will not be able to piggyback on the agreed-upon
importance of groups of actions such as wars and efforts to save people from runaway trolleys.
While it is open to the proponent of the Prospective view to abandon all talk of action groupings,
I don’t think it’s an accident that she has yet to do so. Since we naturally care a great deal about the permissibility of wars and our pursuit of other moral goals as such, abandoning such particular concerns would require an enormous amount of motivation.
4. Conclusion
Even if an action violates certain moral constraints, many think that it may nonetheless be
permissible if the amount of good it produces exceeds a certain threshold. As argued above,
however, it is not straightforward to apply such Threshold views to cases involving diachronic
choice. On the one hand, if we adopt a Prospective or Quota view, we have to make seemingly
arbitrary distinctions between synchronic and diachronic choices. If we opt for an Aggregative
view instead, we must accept that particular actions will be considered as meeting the relevant
threshold in virtue of the diachronic actions of which they are a part. With this in mind, if we
adopt an Aggregative view, we will be forced to think carefully about which actions we group
together into relevant units and why.
Sources:
Alexander, Larry. (2000). Deontology at the Threshold. San Diego Law Review, 37, 893-912.
Anscombe, G.E.M. (1957). Intention. Cambridge: Harvard University Press.
Bjorndahl, Adam, Alex J. London and Kevin J.S. Zollman. (2017). Kantian Decision Making Under
Uncertainty: Dignity, Price, and Consistency. Philosophers’ Imprint, 17:7, 1-22.
Bratman, Michael E. (1987). Intention, Plans, and Practical Reason. Cambridge: Harvard University
Press.
Brennan, Samantha. (1995). Thresholds for Rights. The Southern Journal of Philosophy, 33, 143-168.
Buchak, Lara. (2013). Risk and Rationality. Oxford: Oxford University Press.
Davidson, Donald. (2001). Essays on Actions and Events. Oxford: Oxford University Press.
Fabre, Cecile. (2015). War Exit. Ethics, 125, 631-652.
Frick, Johann. (2015). Contractualism and Social Risk. Philosophy and Public Affairs, 43:3, 175-223.
Frowe, Helen. (2018). Lesser-Evil Justifications for Harming: Why We’re Required to Turn the Trolley.
The Philosophical Quarterly, 68:272, 460-480.
Horton, Joe. (2020). Aggregation, Risk, and Reductio. Ethics, 130:4, 514-529.
Horton, Joe. (2018). Always Aggregate. Philosophy and Public Affairs, 46, 160-174.
Isaacs, Yoaav. (2016). Probabilities Cannot Be Rationally Neglected. Mind, 125(499), 759-762.
Isaacs, Yoaav. (2014). Duty and Knowledge. Philosophical Perspectives, 28:1, 95-110.
Jackson, Frank. (1991). Decision-theoretic Consequentialism and the Nearest and Dearest Objection.
Ethics, 101, 461-482.
Johnson, Christa M. (2020). How Deontologists Can be Moderate (and Why They Should Be). The
Journal of Value Inquiry, 54: 227-243.
Kamm, F. M. (2013). Nonconsequentialism. The Blackwell Guide to Ethical Theory, ed. Hugh LaFollette.
Oxford: Blackwell.
Kamm, F. M. (2001). Making War (and its Continuation) Unjust. European Journal of Philosophy, 9:
328-343.
Lazar, Seth. (2018). Moral Sunk Costs. The Philosophical Quarterly, 68:273, 841-861.
McClennen, Edward F. (1990). Rationality and Dynamic Choice: Foundational Explorations. New York:
Cambridge University Press.
McMahan, Jeff. (2015). Proportionality and Time. Ethics, 125, 696-719.
Moellendorf, Darrel. (2015). Two Doctrines of Jus ex Bello. Ethics, 125, 653-673.
Monton, Bradley. (2019). How to Avoid Maximizing Expected Utility. Philosophers’ Imprint, 19:18, 1-
25.
Oberdiek, John. (2003). Lost in Moral Space: On the Infringing/Violating Distinction and its Place in the
Theory of Rights. Law and Philosophy, 23:4, 325-346.
Parfit, Derek. (2011). On What Matters, Vol. 1. Oxford: Oxford University Press.
Quinn, Warren. (1989). Actions, Intentions, and Consequences: The Doctrine of Double Effect.
Philosophy and Public Affairs, 18, 334-351.
Ramakrishnan, Ketan H. (2016). Treating People as Tools. Philosophy and Public Affairs, 44:2, 133-165.
Rodin, David. (2015). The War Trap: Dilemmas of Jus Terminatio. Ethics, 125, 674-695.
Rodin, David. (2008). Two Emerging Issues of Jus Post Bellum: War Termination and the Liability of Soldiers for Crimes of Aggression, in C. Stahn and J. K. Kleffner (eds.) Jus Post Bellum: Towards a Law of Transition from Conflict to Peace, 53-76. The Hague: T.M.C. Asser Press.
Smilansky, Saul. (2003). Can Deontologists Be Moderate? Utilitas, 15(1), 71-75.
Smith, Nicholas J.J. (2014). Is Evaluative Compositionality a Requirement of Rationality? Mind,
123:490, 457-502.
Tadros, Victor (2018). Past Killings and Proportionality in War. Philosophy and Public Affairs, 46:1, 9-
35.
Thomson, Judith Jarvis. (1990). The Realm of Rights. Cambridge: Harvard University Press.
Voorhoeve, Alex (2014). How Should We Aggregate Competing Claims? Ethics 125, 64-87.
Wu, Patrick. (2022). Aggregation and Reductio. Ethics, 132:2, 508-525.
Zamir, Eyal and Medina, Barak (2010). Threshold Deontology and Its Critique, in Law, Economics, and
Morality. New York: Oxford, 41-56.
Chapter 4
The Composition of Risk
In this chapter, I present a novel contractualist analysis of risk-imposing activities.
According to most ex ante contractualists, there is a principled distinction to be made between
permissible risk-imposing activities such as medical treatments and impermissible risk-imposing
activities such as medical experimentation.[73]
After demonstrating that previous attempts to make
this distinction are implausible, I argue that the most promising version of ex ante contractualism
generates similar verdicts with respect to the permissibility of both risky medical treatments and
medical experimentation. With this in mind, if we think medical experimentation is never
permissible, we will either judge risky medical treatments to be impermissible as well or reject
ex ante contractualism altogether.
This paper proceeds in five sections. In Section 1, I discuss the ex ante contractualist
framework as articulated by Johann Frick, as well as his approach to distinguishing “good” from
“bad” risky activities. In Section 2, I discuss some fundamental problems with Frick’s approach.
In Section 3, I present what I take to be the most promising path forward for the ex ante
contractualist, which I call Composite Contractualism. Section 4 applies Composite
Contractualism to cases involving risky treatments and medical experimentation, and Section 5
concludes.
Section 1. Justifying the Imposition of Risk
[73] As James puts it, ex ante contractualists generally want to distinguish between “good aggregation” and “bad aggregation” (2012, 263). Those who question the contractualist’s ability to make this sort of distinction successfully include Fried (2012), Otsuka (2011), Ashford (2003), and Reibetanz (1998).
In this Section, I aim to accomplish two tasks. First, I discuss how our actions can be
justified to each other using the ex ante contractualist framework developed by Johann Frick. Second, I illustrate how he applies this framework to cases involving risky medical treatments
and medical experimentation.
One answer to the question of how and when we can justify our actions to others comes
from the ex ante contractualist.[74]
According to a contractualist such as Scanlon (1998, 153), “an act is wrong if its performance under the circumstances would be disallowed by any set of principles for the general regulation of behavior that no one could reasonably reject as a basis for
informed, unforced, general agreement.” According to an ex ante contractualist, rules licensing
risky actions ought to be evaluated in terms of the prospects they have for each individual. As
Johann Frick (2015, 188) puts it:
According to ex ante contractualism, the strength of someone’s personal reasons for
rejecting a principle licensing a risky action depends on the quality of the prospect that
the action gave her ex ante. A person’s harm-based complaint against a loss she suffers
must, therefore, be discounted by her ex ante unlikelihood of suffering a loss and by her
ex ante likelihood of benefiting from the risky action.
A person lacks a complaint against a principle licensing a risky action, Frick (2015, 188)
continues, if “the person’s ex ante prospect from the risky action was good enough.” For
instance, a person will lack a complaint against a risky action if, before that action’s
performance, we can say something like the following to them:
[74] Representative examples of ex ante contractualism include Frick (2015), Kumar (2015), Gordon-Solomon (2019), James (2012), and, to some extent, Lenman (2008).
To the best of our knowledge, the likelihood that this action will benefit you considerably
outweighs the chance of it turning out to your disadvantage.[75]
When we evaluate risk-licensing principles from an ex ante perspective, the evidence
available to the agent at the time of her action plays a crucial role.[76]
Otherwise put, the risks that
are relevant to the ex ante contractualist’s analysis are epistemic risks, as opposed to objective
risks, and such risks will, as Frick (2015, 182) puts it, “reflect[] our incomplete knowledge of the
state of the world and the laws of nature.” In what follows, I will assume, as Frick (2015, 182)
does, that “the decision makers’ actual degrees of credence track what the available evidence
makes it rational for them to believe.”
With this basic framework in mind, many think that it is of the utmost importance that the
ex ante contractualist be able to distinguish between the following two sorts of cases:
RISKY TREATMENT: At time t1, a doctor administers a risky treatment to 100
paraplegic children. He knows that, for each child, the treatment has a 99 percent chance
of curing her paraplegia, while foreseeing that approximately 1 child will die at t2 as a
result of receiving the treatment.[77]
AUTOMATIC EXPERIMENT: At t1, a doctor sets in motion an unstoppable device,
which first randomly selects 1 paraplegic child from among the 100, and then, at t2,
automatically conducts lethal medical experiments on the chosen child without the need
[75] Although Frick (2015, 192) talks of actions being “highly likely” to benefit the other, what is most important is not the likelihood of the benefit but the relative likelihood of the benefit to the harm (as well as the magnitude of the benefit and harm).
[76] Frick (2015, 191) discusses the importance of “easily” available evidence. In the cases that follow, this type of subtlety will not be an issue.
[77] Frick (2015, 204), with some modification.
for any further human involvement. The knowledge gained in this way is certain to
enable the doctor to cure the remaining 99 children.[78]
Frick, like many other ex ante contractualists, wants to distinguish RISKY TREATMENT from
AUTOMATIC EXPERIMENT in such a way that, while the doctor’s action in the former is
permissible, the doctor’s action in the latter is not. The basic point that the ex ante contractualist
such as Frick wants to make about AUTOMATIC EXPERIMENT is that, since it would
(presumably) be impermissible for the doctor to conduct fatal experiments on 1 paraplegic child
selected at random to save 99 others, it ought to be impermissible for the doctor to set in motion
a machine to do the selection and experimentation for him. This is unlike RISKY
TREATMENT, in which there does not seem to be any subversion of moral rules via machines;
instead, the doctor simply seems to be providing the best treatment he can for the patients under
his care, given the evidence he has available.
Before discussing in more detail how Frick distinguishes these cases, it is worth noting
the approach he does not take, which is to focus on bodily autonomy. This distinguishes Frick
from scholars such as Rahul Kumar, who claims that medical experimentation is impermissible
in virtue of the “intrinsic consideration” that licensing such experimentation “would make it the
case that the authority to make decisions concerning how an individual’s body may be used has,
at least in part, been ceded to others.”[79]
According to Kumar (2015, 38), “[e]ach individual has
good reason to want this kind of decision-making discretion to be solely her own,” and this
reason has "nothing to do with the imposition of either harm or the risk of harm.” Kumar goes so
[78] Frick (2015, 210), with some modification.
[79] See also Gordon-Solomon (2019, 282).
far as to say that even medical interventions that guaranteed benefits to all would be impermissible
if the recipients had no say in determining whether or not to receive it.[80]
I accept that an ex ante contractualist who disallows risky medical treatments when they are
not consented to—even when those treatments guarantee beneficial outcomes for all individuals
impacted—will have no problem ruling out medical experimentation (I return to the implications
of such views in more detail in Section 4). The question I am interested in exploring is whether,
for the ex ante contractualist like Frick who believes that an improvement in prospects is
sufficient for justification, at least when that improvement in prospects is significant enough,
there is a principled manner of distinguishing risky medical treatments from medical
experimentation. According to Frick (2015, 205), what distinguishes RISKY TREATMENT
from AUTOMATIC EXPERIMENT is that the former but not the latter passes the
Decomposition Test, which he states as follows:
Decomposition Test: If a rule or procedure can be decomposed into a sequence of distinct
causal stages, each of which involves the voluntary action of some agent (or of a
surrogate for human agency, such as a programmed machine), then it is permissible to
[80] Kumar (2015, 36). Kumar describes what he sees as an “intuitive” example of a medical intervention that
would be impermissible regardless of its guaranteed benefits as follows:
Say the experimentation is still involuntary, but it involves painlessly enhancing individuals in ways that
significantly benefit each of them. Further, it takes place without the experimental subjects’ knowledge, by
slipping drugs into the water supply of their homes. [There is no] element of pain or distress at the
knowledge that one is being involuntarily experimented on, and no prospect of ending up burdened as a
result.
While a fuller response to this point is beyond the scope of this paper, one worry with Kumar’s approach is how to
account for the permissibility of (for instance) an off-duty nurse administering CPR on an unconscious individual or
injecting Narcan into someone who has overdosed. Such treatments might not be voluntary, but they still seem
permissible (if not obligatory), especially if the nurse knows that her actions will save these individuals’ lives. If we
accept that these sorts of interventions are, in fact, permissible, we will need to somehow limit the reach of Kumar’s
objections to “involuntary” medical interventions that guarantee (or even are incredibly likely to result in) better
outcomes for all.
adopt and act on this rule or procedure only if the actions it requires at every stage are
justifiable to each person at that time.
In the following Section, I will subject the Decomposition Test to more scrutiny. For now, it is
simply important that we understand its basic motivation.[81]
As envisioned by Frick, RISKY
TREATMENT only involves one causal stage, the causal stage of the doctor administering the
treatment. At the time at which this takes place, the doctor can justify his action to the recipients
of the treatment in the manner specified by the ex ante contractualist. AUTOMATIC
EXPERIMENT, however, involves two causal stages: the initiation of the machine and the
conducting of the experiments. AUTOMATIC EXPERIMENT has two causal stages instead of
one because, as Frick (2015, 211) sees it, the doctor has set up a machine to “perform certain
actions in [her] stead.” At the time of this second causal stage, the machine’s functioning cannot
be justified to the paraplegic child that is selected for the lethal experiments, according to Frick
(2015, 211), because the functioning of the machine “places the uncompensated and severe
burden of death” on that child in order to cure 99 others “from the less serious burden of
paraplegia.”
I accept that, at least intuitively, we want to allow for risky medical treatments but not for
medical experimentation. In addition, I agree that it is difficult to distinguish between these cases
in a principled manner. As a first step in this exploration, in the following Section, I show that
Frick’s attempt to make this distinction falls flat.
[81] For further discussion and refinement of the Decomposition Test, see Gordon-Solomon (2019, 264-267).
2. The Flaws of the Decomposition Test
In this section, I argue that, regardless of what we think about medical experimentation,
we ought to reject the Decomposition Test. This is because, if we adopt the Decomposition Test,
the permissibility of medical experimentation will hinge on factors that seem orthogonal to its
justification to those impacted.
The most glaring problem with the Decomposition Test is that, if we adopt it, then
medical experimentation will often be permissible when it only involves one causal stage. For
instance, consider the following:
ONE-STAGE EXPERIMENT: 100 paraplegic children are trapped in a cave. The
children will be rescued the next day, but a doctor currently has the opportunity to release
a noxious gas into the cave. The doctor knows that the gas would kill one of the children,
though she has no idea which one. The doctor also knows that, from examining the dead
child’s body, she would learn enough to cure the other 99 of their paraplegia. If the
doctor does not release the noxious gas immediately, the opportunity to cure 99 of the
100 children of their paraplegia will be lost.
I assume that an ex ante contractualist who considers medical experimentation impermissible will
think it impermissible for the doctor to release the noxious gas in ONE-STAGE EXPERIMENT.
After all, in both ONE-STAGE EXPERIMENT and AUTOMATIC EXPERIMENT, the doctor
is killing one child in order to learn how to cure the other 99 of their paraplegia. If we were
hoping that the Decomposition Test would help us to rule out this sort of medical
experimentation, however, we will be sorely disappointed.
In ONE-STAGE EXPERIMENT, there seems to be only one causal stage to examine: the
doctor releasing the noxious gas at t1. The doctor releasing the gas at t1 is justified to all the
children because, conditional on the doctor’s evidence, doing so improves each of their
prospects. The doctor has no idea which child will be killed by the gas, and it’s easy to imagine
that it would be impossible for her to find out. This is essentially the same position the doctor is
in with RISKY TREATMENT. That is, as is the case in RISKY TREATMENT, the doctor has
the option of performing an action that gives each child a .99 chance of being cured of paraplegia
and a .01 chance of being killed. Insofar as we think the doctor is justified in RISKY
TREATMENT in virtue of the extent to which this improves each child’s prospects then, even if
we adopt the Decomposition Test, we seem to be forced to accept the permissibility of the
release of the noxious gas in ONE-STAGE EXPERIMENT.
One way in which the ex ante contractualist might respond to ONE-STAGE EXPERIMENT is by insisting that, in fact, there are two stages that need to be justified: the
release of the gas and the examination of the dead body. While it’s far from clear that the
examination of a dead body needs to be justified to that individual, especially if the examination
is cursory in nature, we could always modify the example to avoid this potential issue. For
instance, the doctor might need to simply watch the child die (via remote camera) in order to
learn how to cure the other 99, or sample the air in the cave to determine how the noxious gas
was metabolized. In this (and many other) ways, a doctor might only need one so-called causal
stage to perform a medical experiment that many will find intuitively objectionable. This
severely undermines the usefulness of the Decomposition Test to the ex ante contractualist.[82]
Regardless of what we think of the permissibility of medical experimentation, cases like
ONE-STAGE EXPERIMENT suggest that we ought to reject the Decomposition Test. This is
[82] If an advocate for Frick’s view were tempted to respond that the functioning of the noxious gas was itself a distinct causal stage, this would rule out risky medical treatments altogether. This is because, at some point in time, risky medical treatments (like the noxious gas) will kill certain individuals without offering them any offsetting benefit.
because we are unlikely to think that the number of causal stages involved is dispositive. In other words, if we
think medical experimentation is impermissible, we will think it impermissible in both
AUTOMATIC EXPERIMENT and ONE-STAGE EXPERIMENT, and if we think it
permissible, we will think it permissible in both cases. With this in mind, even if we are
unwilling to revise our intuitions about the permissibility of medical experimentation, we should
be willing to explore alternatives to the Decomposition Test.
According to the ex ante contractualist position under consideration, whether an action is
permissible depends on the prospects for those impacted, conditional on the agent’s evidence at
the time at which she acts. Even if the Decomposition Test ruled out the possibility of using
machines to conduct experiments on others—a claim of which I am skeptical[83]—doctors would
be able to conduct what we might think of as one-stage experiments as long as their actions
improve each child’s prospects to a sufficient degree, as is the case in ONE-STAGE
EXPERIMENT.
84
If we think the permissibility of medical experimentation ought not hinge on
the number of causal stages involved, one natural alternative is to focus on the composition of
such experiments as opposed to their decomposition.
83
I am skeptical of this claim because, according to the ex ante contractualist, the justifiability of an action is
conditional on the agent’s evidence at a particular time. With this in mind, adopting the Decomposition Test would
not seem to rule out the possibility of medical experimentation when, for whatever reason, it was nearly impossible
to ascertain who was experimented upon until after the experiments were completed. For instance, even if we adopt
the Decomposition Test, sending a machine into a dark cave to conduct experiments would presumably be
permissible as long as, given logistical constraints, the identities of those experimented upon could not be easily
ascertained even if the doctor herself was performing the procedure. Since the Decomposition Test’s problem with
one-stage experimentation is a bit more straightforward, I put this worry about epistemic access to the identities of
those experimented upon aside in what follows.
84
One potential response from Frick is that what the Decomposition Test is primarily concerned with is intention.
This is suggested when he writes that the problem with AUTOMATIC EXPERIMENT is that “the only rationale for
setting up such a device…is precisely to avoid the need to perform an action that…could not be justified to each
person at that time,” Frick (2015, 211). This response would not apply to ONE-STAGE EXPERIMENT, however,
because the functioning of the noxious gas is not a causal stage that requires justification. If the functioning of the
gas and other medications did require justification, then risky medical treatments in general would be impermissible
(see fn 16).
3. Composite Contractualism
In this Section, I accomplish three tasks: I propose a novel contractualist approach to risk
imposition, I show how it avoids the pitfalls of the Decomposition Test, and I argue that its
appeal extends beyond cases involving risky medical treatments.
In contrast to the Decomposition Test outlined in Section 2, I propose a version of ex ante
contractualism that I call “Composite Contractualism” (CC). This approach mimics that of
Johann Frick (2015, 188) and other ex ante contractualists in that an individual’s personal
reasons for rejecting a principle licensing a risky action “depends on the quality of the prospect
that the action gave her ex ante.” CC differs from other versions of ex ante contractualism in
rejecting the Decomposition Test in favor of what I call the Composition Test:
Composition Test: If a rule or procedure is justified to each person impacted at a
particular time, then the rule or procedure’s causal stages are justified to each person,
regardless of which ones involve the agent’s voluntary action, as long as 1) the total
expected value of the sequence, as well as the set of those expected to be impacted, remains unchanged and
2) no other action becomes available with a higher total expected value. The causal stages
that are relevant to the application of this test include not only the functioning of
surrogates for human agency, such as programmed machines, but also the functioning of
medications and other treatments.
The first thing to note about CC is that it generates consistent verdicts in cases involving
medical experimentation. For instance, according to CC, if medical experimentation is
permissible using a noxious gas, then it is also permissible using a programmed machine. This is
because the functioning of the gas is considered a causal stage in the same manner as the
functioning of the machine. Similarly, CC generates consistent verdicts with respect to the
permissibility of causal stages of procedures whether or not they involve the agent’s voluntary
action. In other words, if medical experimentation is permissible using a noxious gas or a
programmed machine, it is also permissible for the doctor to perform the experiments herself.
To be clear, CC does not entail the permissibility of medical experimentation—though I
will argue that it removes a significant obstacle to accepting that conclusion. Instead, CC is
simply committed to the truth of conditionals such as if medical experimentation is permissible
using a gas, it is permissible to do by hand. This alone renders it superior to the Decomposition
Test, I would argue, as the Decomposition Test makes the permissibility of medical
experimentation dependent on the number of causal stages involved and whether gases instead of
programmed machines are used. At a minimum, CC isolates itself from these seemingly
irrelevant features of medical experimentation to focus squarely on whether such an intervention
can be justified to those impacted.
An additional benefit of CC is that the permissibility of the latter causal stages of risky
procedures is not dependent upon the agent remaining ignorant as to who exactly will benefit and
who will be harmed.
85
This is not to say that CC is insensitive to all changes in an agent’s
evidence. For instance, if partway through a risky procedure an agent obtains new evidence
demonstrating that the procedure will be disastrous for everyone, or a different procedure becomes
available that has an even higher total expected value, then the continuation of the initial
85
In this way, Composite Contractualism avoids Gordon-Solomon’s critique that it is “morally arbitrary…whether
the risk to which a person is exposed is known to eventuate in her life at the moment of the policy-choice.” In
Gordon-Solomon’s view, this “adulterates the ex ante perspective…with knowledge of the outcomes for some, but
not other, persons exposed to risk” (2019, 275).
procedure may be impermissible.
86
At the same time, when an agent’s evidence does not change
in this manner, it is natural to think that the permissibility of the procedure’s latter causal stages
remains unchanged as well. While this may initially be seen as a liability of CC, I would argue
that it is one of its most appealing features. For this reason, before discussing the implications of
CC for the permissibility of medical experimentation, I take some time to illustrate how this
feature of CC provides independent reason to reject the Decomposition Test.
Consider the following:
AUTOMATIC IMPROVEMENT: While Mia is driving down a steep hill at t1, her
primary brakes go out. If Mia does nothing, there is a .9 chance that her out-of-control car
would kill both Fred and Gary. If Mia activates her car’s autopilot at t2, then both Fred
and Gary would only have a .5 chance of being killed but the death of one of them would
be guaranteed. If Mia activates her car’s autopilot at t2, she would see who is going to be
killed before he is killed at t3. Once the car’s autopilot is activated, however, there is no
way of stopping it.
Chance of Survival | No autopilot (t1) | Autopilot activated (t2) | Someone killed (t3)
Fred | .1 | .5 | 1 or 0
Gary | .1 | .5 | 0 or 1
86
While I won’t pursue the matter here, it is natural to think that the Decomposition Test should also limit its
application to a similar range of cases. For instance, let’s say a doctor initiated an unstoppable multi-stage procedure
that, conditional on the doctor’s evidence at the time of initiation, significantly improved all of her patients’
prospects. Even if, before the last stage of the procedure, the doctor learned that the procedure was actually going to
harm all of the patients, it does not seem that the ex ante contractualist would want to condemn the doctor as having
acted impermissibly. As written, however, the Decomposition Test would seem to generate this result, since the last
causal stage of the procedure would not be justified to the patient conditional on the evidence the doctor possessed at
that time. (This problem with the Decomposition Test is related to the one discussed above in fn 16.)
I think that the ex ante contractualist ought to judge it permissible for Mia to activate her car’s
autopilot. Simply put, this is because, conditional on Mia’s evidence, activating autopilot
increases both Fred and Gary’s chances of survival. More specifically, if Mia does not activate
her car’s autopilot, both Fred and Gary have a .1 chance of survival, but if she does, then their
chances of survival jump to .5. CC has no problem accommodating this result since, regardless
of the autopilot’s causal stages, its activation is justified to both Fred and Gary in virtue of the
fact that its activation significantly improves each of their prospects.
If we adopt the Decomposition Test, however, we must reject this judgment. This is
because there is a causal stage of Mia’s action—her autopilot running over either Fred or Gary at
t3—that cannot be justified to that person. According to the defender of the Decomposition Test,
Mia couldn’t justify running over Fred or Gary herself, so she can’t justify using her autopilot to
do so. Even though Mia’s only chance to influence her car’s trajectory is at t2, and her
intervention at t2 would be in the expected interests of both Fred and Gary, the Decomposition
Test considers such an intervention impermissible. This is a particularly unappealing result, I
submit, since both Fred and Gary would want Mia to intervene at t2 so as to improve their
respective chances of survival, and this is what she would do if she were only concerned with the
interests of either one of them in isolation.
Perhaps the proponent of the Decomposition Test will dig in and insist that, in
AUTOMATIC IMPROVEMENT, it is impermissible for Mia to activate her car’s autopilot. This
is a difficult stance for the proponent of the Decomposition Test to defend, however, since she
herself judges structurally similar interventions as permissible when they only involve one causal
stage. For instance:
ONE-STAGE IMPROVEMENT: While Mia is driving down a steep hill at t1, her
primary brakes go out. If Mia does nothing, there is a .9 chance that her out-of-control car
will kill both Fred and Gary. If Mia uses the emergency brake at t2, her car will skid out
of control. Mia, an expert driver, knows that her skidding car would kill either Fred or
Gary at t3 while leaving the other unscathed. Conditional on Mia’s evidence, which
includes countless hours of training behind the wheel, Fred and Gary have an equal
chance of being killed by this maneuver.
Chance of Survival | No emergency brake (t1) | Emergency brake (t2) | Car impact (t3)
Fred | .1 | .5 | 1 or 0
Gary | .1 | .5 | 0 or 1
As discussed above, if we adopt the Decomposition Test, our analysis of the permissibility of
risk imposition will crucially depend on the number of causal stages involved. ONE-STAGE
IMPROVEMENT seems to involve one causal stage, Mia using the emergency brake. Since Mia
using the emergency brake improves both Fred and Gary’s chance of survival, the proponent of
the Decomposition Test would judge it permissible for her to do so. While the proponent of CC
would agree with this verdict, she would be quick to point out that it’s hard to imagine that it would
be permissible for Mia to accomplish with her emergency brake in ONE-STAGE
IMPROVEMENT what it would be impermissible for her to accomplish with her autopilot in
AUTOMATIC IMPROVEMENT. After all, in both cases Mia’s intervention improves both Fred
and Gary’s chance of survival, is what Fred and Gary would desire for themselves, and is what
would be done if either Fred or Gary’s individual interests were paramount.
What ONE-STAGE IMPROVEMENT and AUTOMATIC IMPROVEMENT illustrate is
that it is not only cases involving medical experimentation that the Decomposition Test seems to
get wrong. I would argue that this is because the Decomposition Test gives up what, at least
according to the ex ante contractualist, is most important for the justification of our actions to
others. In AUTOMATIC IMPROVEMENT, what seems most important for the justification of
Mia’s actions is that those actions give both Fred and Gary the greatest chance of survival. If
what’s most important is Fred and Gary’s prospects, conditional on Mia’s activation of autopilot,
then the justification of the final causal stage of the activation will be parasitic on the
justification of the entire procedure. In other words, if the person to be killed by the autopilot
were to ask (before his death) why its activation was justified to him, the answer would be that
initiating the autopilot gave that person a better chance of survival than the failure to initiate it.
87
Insofar as the later causal stages of a procedure can be justified to others in this manner, this is a
significant mark in favor of CC.
88
One question we might have about CC is how exactly to specify the set of causal stages
or actions that require justification. An initial answer is that the same rules or procedures to
which we would be applying the Decomposition Test are the ones to which we would be
applying the Composition Test. For instance, Frick considers administering a vaccine to a group of
people and conducting a medical experiment the sorts of action (or series of actions) that need to
be decomposed, and the adopter of the Composition Test can wholeheartedly agree. A slightly
more substantive answer, at least in the context of risk imposition, is that the Composition Test
87
Along similar lines, if the person to be killed by Mia’s skidding car in ONE-STAGE IMPROVEMENT were to
ask what justified his death, the answer would be that using the emergency brake gave that person a better chance of
survival than failing to do so. For more in-depth discussion of this sort of case, see [source redacted].
88
A more general mark against the Decomposition Test is that, at least arguably, there are some groups of actions
that are naturally seen as important units of moral analysis. One example that has been discussed extensively is the
justification and proportionality of wars. For relevant discussion of the proportionality of war, see Kamm (2001a),
McMahan (2015), Moellendorf (2015), Fabre (2015), Rodin (2008), and Rodin (2015).
can be applied to any plan, series of actions, or sequence of causal stages that, at its initiation,
improves the prospects of those impacted conditional on the agent’s evidence.
89
According to
CC, if the series of actions is justified to those impacted at the time at which it is initiated, the
entire series is permissible to perform as long as 1) the total expected value of this series of
actions, as well as the set of those expected to be impacted, does not change, and 2) no other series becomes
available that has an even higher total expected value.
90
Perhaps the most pressing objection to CC is that, if we adopt it, it may be permissible to
treat others in a manner that most non-consequentialists would find objectionable, including
subjecting them to medical experimentation. In the following section, I make clear exactly what
CC’s commitments are in this regard and, more importantly, the price the ex ante
contractualist must pay to reject them.
4. From Risky Treatment to Medical Experimentation
In this Section, I discuss Composite Contractualism’s (CC’s) commitments on cases
involving medical experimentation. To warm up to this question, I first discuss CC’s
commitments with respect to risky medical treatments.
As discussed in Section 1, many worry that the ex ante contractualist is unable to
distinguish cases like RISKY TREATMENT from ones like AUTOMATIC EXPERIMENT.
Where I think CC is helpful is in demonstrating that, if we think doctors act permissibly in
RISKY TREATMENT, then we ought to think it permissible for doctors to administer stages of
89
In the simple case, the agent faces just two options: initiating the sequence and refraining from doing so.
90
In principle at least, this question is not so different from Scanlon’s problem of specifying how fine-grained
the principles that individuals can reasonably reject should be:
There is an obvious pressure toward making principles more fine-grained, to take account of more and
more specific variations in needs and circumstances. But there is also a counterpressure arising from the
fact that finer-grained principles will create more uncertainty and require those in other positions to gather
more information in order to know what a principle gives to and requires of them. Scanlon (1998, 205).
medical treatments even when, at the time they administer those stages, they know who will be
harmed in a way that is not offset by any benefit.
91
If the proponent of CC succeeds at
convincing us that the permissibility of risky treatments entails the permissibility of performing
stages of treatment even when doctors know who will be harmed, this will remove one
significant obstacle to thinking that, in certain contexts, medical experimentation is permissible.
Let’s start with a case that all who believe in the permissibility of risky medical
treatments ought to agree upon:
SLOW DRUG: A treatment is administered to 100 paraplegic children. The treatment
consists of a one-time injection of two drugs, Drug A and Drug B. When this injection is
administered, each patient has a .99 chance of being cured (conditional on the doctor’s
available evidence) and a .01 chance of being killed. Drug A moves through the
bloodstream much faster than Drug B. In addition to enabling Drug B to have its full
effect, Drug A has a peculiar side effect on those who the treatment would kill: it turns
their hair white. Since it is only the patients who would be killed by the treatment whose
hair is turned white, the doctor knows, before Drug B takes effect, who will be cured and
who will be killed by the treatment. By this time, however, there’s nothing that can be
done to stop the treatment.
If risky treatments are permissible, then it’s clear that the treatment administered in SLOW
DRUG is permissible as well. Even if we adopt the Decomposition Test, SLOW DRUG only
involves one causal stage, and this causal stage can be justified to the patients given the doctor’s
evidence at the time of the injection.
91
This point builds off Gordon-Solomon’s argument that it is “morally arbitrary…whether the risk to which a
person is exposed is known to eventuate in her life at the moment of the policy-choice.” In Gordon-Solomon’s view,
this “adulterates the ex ante perspective…with knowledge of the outcomes for some, but not other, persons exposed
to risk” (2019, 275).
If ex ante contractualists want to allow for the injection in SLOW DRUG, they likely
would want to allow for the following sort of injection as well:
SLOW AUTOMATIC MICROCHIP: A treatment is administered to 100 paraplegic
children that has a .99 chance of curing and a .01 chance of killing each child. The
treatment consists of a one-time injection of both Drug A and a microchip. Drug A moves
through the bloodstream quickly, but it takes a while for the microchip to reach the
temperature necessary to activate in the patient’s body. In addition to enabling the
microchip to have its full effect, Drug A turns white the hair of each patient who will be killed
by the treatment. The doctor thus knows, before the microchip reaches the temperature
necessary to be effective, who will be cured and who will be killed by the treatment. By
this time, however, there’s nothing that can be done to stop the treatment.
92
If the doctor acts permissibly in SLOW DRUG, then it’s hard to deny that she also does so in
SLOW AUTOMATIC MICROCHIP. The only difference between SLOW DRUG and SLOW
AUTOMATIC MICROCHIP is that, in the latter case, the work of Drug B is done by a
microchip. In both cases, the doctor cannot halt the treatment once it is started, and in both cases,
the administration of the treatment improves the patients’ prospects by the same amount
conditional on the doctor’s evidence when she administers it. At a minimum, from the
perspective of the patients who may be cured by each treatment, it’s hard to imagine they have a
complaint against being potentially killed by a microchip that they would not have against being
potentially killed by a drug. If we adopt CC, we would have no problem accounting for the
similarity between these cases because the Composition Test treats the functioning of drugs and
92
We can also assume, for those who might have a worry about the role intention might play in such a case, that the
microchip treatment was simply what emerged from the best research available (as opposed to some subversive ploy
for doctors to avoid dirtying their hands).
microchips in the same manner. If we favor the Decomposition Test, however, we would have
more difficulty because, at least according to Frick, only the functioning of surrogates for human
agency such as programmed machines counts as causal stages.
Although there is little reason to distinguish between SLOW DRUG and SLOW
AUTOMATIC MICROCHIP, ex ante contractualists will almost certainly object to the
following:
SLOW MANUAL MICROCHIP: Same set-up as SLOW AUTOMATIC MICROCHIP
except that, after the microchip has reached the temperature necessary to be effective in a
patient’s body, the doctor must activate the microchips manually. Since the hair of the
patient who will be killed by the treatment is white at this point, if the doctor activates the
microchips, she would know exactly whom she would be curing and killing. If the doctor
does not activate the microchips, patients will be left in the same state as they were
before the treatment began.
Most ex ante contractualists will judge it impermissible for the doctor to activate the microchips
because doing so, in their view, cannot be justified to the patient the doctor knows will be
killed.
93
While I can understand our discomfort with the doctor activating the microchips in
SLOW MANUAL MICROCHIP, it is difficult to maintain that her actions are any less
justifiable than her actions in SLOW AUTOMATIC MICROCHIP. This is because the only
difference between SLOW AUTOMATIC MICROCHIP and SLOW MANUAL MICROCHIP is
that, in the former case, the doctor has a machine available to, as Frick (2015, 211) puts it, do her
“dirty work” for her. In other words, if the doctor acts impermissibly in SLOW MANUAL
MICROCHIP, then in SLOW AUTOMATIC MICROCHIP she is simply setting in motion a
93
For further discussion of this point, see Steuwer (2018), Ruger (2018), and Holm (2018).
device that takes the relevant decision out of her hands while delivering the (by assumption
prohibited) results she desires. If we do not think that it is permissible to use machines as a
surrogate for human agency in such a way that, if we had to perform the actions ourselves, those
actions would be impermissible, then we cannot judge the doctors to act permissibly in SLOW
AUTOMATIC MICROCHIP but not in SLOW MANUAL MICROCHIP.
94
This presents a dilemma for the ex ante contractualist. On the one hand, she strongly
wishes to deny that the doctor acts permissibly in SLOW MANUAL MICROCHIP. However, if
the doctor acts impermissibly in SLOW MANUAL MICROCHIP, and having machines perform
one’s dirty work—that is, perform actions that would impermissible if done manually—is itself
impermissible, then the doctor doesn’t act permissibly in SLOW AUTOMATIC MICHROCHIP
either. If the doctor doesn’t act permissibility in SLOW AUTOMATIC MICROCHIP, however,
then it is arbitrary to judge that he does so in SLOW DRUG. In this way, and with the
assumption that there is no relevant difference between SLOW DRUG and RISKY
TREATMENT, a judgment of impermissibility in SLOW MANUAL MICROCHIP leads us to a
judgment that risky medical treatments, as a general matter, are impermissible.
On the other hand, the ex ante contractualist can maintain her judgment that risky
treatments are generally permissible. This is the approach adopted by the proponent of CC. If
risky treatments are generally permissible, then, according to CC, the doctor acts permissibly in
SLOW DRUG, SLOW AUTOMATIC MICROCHIP, and SLOW MANUAL MICROCHIP. In
particular, once we adopt CC’s Composition Test, the permissibility of the doctor’s actions in
SLOW AUTOMATIC MICROCHIP will entail the permissibility of the doctor’s actions in SLOW
MANUAL MICROCHIP since, in SLOW MANUAL MICROCHIP, the causal stages of the
94
At least the proponent of CC who wants to allow for risky medical treatments.
treatment are voluntarily performed by the doctor instead of being automated. Of course, there is
a price to be paid for this approach as well, namely, that we must give up our intuitive resistance
to the doctor’s actions in SLOW MANUAL MICROCHIP. However, for some ex ante
contractualists at least, this is a less significant cost than giving up the permissibility of risky
medical treatments as a whole.
If we accept that doctors act permissibly in SLOW MANUAL MICROCHIP, we ought to
be less resistant to the permissibility of medical experimentation. Perhaps the most significant
barrier to allowing for medical experimentation is the thought that, at the time at which the
experimentation is conducted, it cannot be justified to those who stand no chance of benefitting
from it.
95
What the argument in this section demonstrates is that, even outside of the context of
medical experimentation, the ex ante contractualist has reason to allow for such actions. In
particular, insofar as the ex ante contractualist wants to allow for risky medical treatments, this
will put significant pressure on her to allow doctors to perform stages of treatment even when
they know, at the time of these stages, who will be harmed without offsetting benefit. If these
latter stages of risky treatments are permissible, then the fact that medical experiments, at the
time at which they are conducted, will guarantee harm to some and benefit to others will not rule
out their permissibility.
To be fair, another pressing objection that many have to medical experimentation is that it
violates individuals’ bodily autonomy. For instance, according to Kumar (2015, 38), to allow for
medical experimentation when it has not been explicitly agreed to is to make it the case that “the
authority to make decisions concerning how an individual’s body may be used has, at least in
part, been ceded to others.” There are two points that are worth keeping in mind when it comes
95
Though this problem does not seem present in ONE-STAGE EXPERIMENT.
to explicit agreements. First, according to the ex ante contractualist position under consideration,
principles licensing risky activities are only justifiable to each person if such activities are the
ones we would perform if we were “concerned solely with that [person’s] interests.”
96
Since this
version of ex ante contractualism mandates that we treat individuals in the manner that they
themselves would prefer, it’s hard to make the case that it licenses violations of their individual
autonomy.
97
In fact, given this sort of account, to deny the permissibility of medical
experimentation and risky medical treatments is to constrain the range of choices that people can
make with respect to their own health and well-being.
98
The second point that is worth keeping in mind is that, even if we adopt a focus on
explicit agreements, we will still be left with the question of what people can reasonably agree
to. For instance, even if an individual agreed to it beforehand in light of the overwhelming
likelihood of future benefit, I doubt that the ex ante contractualist would consider it permissible
for her to be shocked in order for millions of others to enjoy the rest of the World Cup final.
99
If
the ex ante contractualist rules out this sort of explicit agreement, she will be tempted to rule out
agreements having to do with medical experimentation as well.
100
If the ex ante contractualist
rules out agreements having to do with medical experimentation, then it will simply not be the
case that the problem with such experimentation is a lack of explicit agreement.
101
96
For instance, Frick (2015, 187-88); for a similar line of argument, see Hare (2016).
97
A similar point might be applied to Gordon-Solomon’s argument about the importance of deep relationships
(2019, 282). That is, if we are truly treating others as they want to be treated, this does not seem to “challenge
ongoing relations” (2019, 283).
98
Gordon-Solomon argues that contractualism should be constrained in this way (2019, 281).
99
This example is discussed by Scanlon (1998, 235).
100
Frick certainly would want to rule out such agreements (2015, p. 208 fn 208).
101
In addition, even if we accept that agreements play a central role in such cases, we will be left with the question
of whether such agreements must pass something like Frick’s Decomposition Test. For instance, is it sufficient for
an individual to agree to AUTOMATIC EXPERIMENT at the point at which the machine is launched? Or is it also
necessary for the individual to agree once she has been selected to be killed? If the ex ante contractualist thinks
agreement after selection is necessary, then she will be faced with the problems of the Decomposition Test outlined
above.
One way the ex ante contractualist can accommodate some of our intuitions with
respect to medical experimentation is to point out that being experimented upon is often a
horrible way to die (or a horrible way to have one’s body treated). According to this line of
thinking, which is perfectly consistent with CC, having one’s organs harvested is likely a
gruesome and incredibly painful way for one’s life to end. Since dying in this manner is often
heinous, individuals have personal reasons to object to such deaths that they do not have with respect
to other ways of dying. This fact alone could make plausible the claim that there is a difference
between risky treatments and automatic experiments, though our ultimate judgments would
depend on the details (as one might die a gruesome death as a result of a medical treatment and
one’s death as a result of medical experimentation might be painless).
To be clear, this argument will not accommodate the intuition that medical
experimentation is almost never permissible. At the same time, I would argue, the
contractualist’s analysis of moral wrongness does not easily accommodate this intuition.
According to the contractualist, an action is only wrong if no one could reasonably reject a set of
principles prohibiting it.
102
This means that medical experimentation is only wrong if none of
those most burdened could reasonably reject such a prohibition. In a case like ONE-STAGE
EXPERIMENT, then, medical experimentation would only be wrong if none of the 100
paraplegic children (or their guardians) could reasonably reject a principle that denied them a .99
chance of being completely cured. If we think that even one of these children could reasonably
reject a principle that denies her the possibility of a significantly better life, then we will either
need to accept the permissibility of medical experimentation in this context or reject the ex ante
contractualist’s framework altogether.
102
This is the formulation initially put forward by Scanlon (1998, 153), emphasis added.
5. Conclusion
As many have noted, ex ante contractualism has problems distinguishing cases involving
risky medical treatments from those involving medical experimentation. After showing that
Frick’s attempt to make this distinction fails, I presented my own version of contractualism,
Composite Contractualism (CC). The basic upshot of CC is that the permissibility of a stage of
an action may be parasitic on the permissibility of the multi-stage action of which it is a part.
One virtue of CC is that it generates consistent verdicts in cases involving both risky medical
treatments and medical experimentation. In addition, in cases that do not involve medical
experimentation (such as AUTOMATIC IMPROVEMENT), CC allows us to preserve the ex
ante contractualist’s intuition that an act is permissible if it improves the prospects of those
impacted, conditional on the agent’s evidence at the time at which she acts. While it is true that
CC may entail that medical experimentation is not prohibited in all contexts, this should only
concern us if we think that none of those most burdened by such a prohibition could reasonably
reject it. Insofar as reasonable people might disagree on the moral complexities of medical
experimentation, we must either reject our intuition that medical experimentation is almost
always impermissible or reject the ex ante contractualist’s analysis of the property of moral
wrongness.
Sources:
Adams, Robert M. (2001). Scanlon’s Contractualism. The Philosophical Review, 110:4, 563-586.
Ashford, Elizabeth. (2003). The Demandingness of Contractualism. Ethics, 113:2, 273-302.
Buchak, Lara. (2013). Risk and Rationality. Oxford: Oxford University Press.
Enoch, David. (2017). Hypothetical Consent and the Value(s) of Autonomy. Ethics, 128:1, 6-36.
Fabre, Cecile. (2015). War Exit. Ethics, 125, 631-652.
Frick, Johann. (2015). Contractualism and Social Risk. Philosophy and Public Affairs, 43:3, 175-223.
Fried, Barbara. (2012). Can Contractualism Save Us from Aggregation? Journal of Ethics, 16, 39-66.
Gordon-Solomon, Kerah. (2019). Should Contractualists Decompose? Philosophy and Public Affairs, 47:3, 259-287.
Hare, Caspar. (2016). Should We Wish Well to All? The Philosophical Review, 125:4, 451-472.
Holm, Sune. (2018). The Luckless and the Doomed: Contractualism on Justified Risk-Imposition. Ethical Theory and Moral Practice, 21, 231-244.
James, Aaron. (2012). Contractualism’s (Not So) Slippery Slope. Legal Theory, 18, 263-292.
Kamm, F. M. (2001a). Making War (and Its Continuation) Unjust. European Journal of Philosophy, 9, 328-343.
Kamm, F. M. (2001b). Morality, Mortality Volume II. Oxford: Oxford University Press.
Kumar, Rahul. (2015). Risking and Wronging. Philosophy and Public Affairs, 43:1, 27-51.
Lazar, Seth. (2018). Moral Sunk Costs. The Philosophical Quarterly, 68:273, 841-861.
Lenman, James. (2008). Contractualism and Risk Imposition. Politics, Philosophy and Economics, 7, 99-122.
McClennen, Edward F. (1990). Rationality and Dynamic Choice: Foundational Explorations. New York: Cambridge University Press.
McMahan, Jeff. (2015). Proportionality and Time. Ethics, 125, 696-719.
Moellendorf, Darrel. (2015). Two Doctrines of Jus ex Bello. Ethics, 125, 653-673.
Otsuka, Michael. (2011). Scanlon and the Claims of the Many versus the One. Analysis, 60, 288-290.
Parfit, Derek. (2011). On What Matters, Vol. 1. Oxford: Oxford University Press.
Ramakrishnan, Ketan H. (2016). Treating People as Tools. Philosophy and Public Affairs, 44:2, 133-165.
Reibetanz, Sophia. (1998). Contractualism and Aggregation. Ethics, 108:2, 296-311.
Rodin, David. (2015). The War Trap: Dilemmas of Jus Terminatio. Ethics, 125, 674-695.
Rodin, David. (2008). Two Emerging Issues of Jus Post Bellum: War Termination and the Liability of Soldiers for Crimes of Aggression. In C. Stahn and J. K. Kleffner (eds.), Jus Post Bellum: Toward a Law of Transition from Conflict to Peace, 53-76. The Hague: T.M.C. Asser Press.
Ruger, Korbinian. (2018). On Ex Ante Contractualism. Journal of Ethics and Social Philosophy, 13:3.
Scanlon, T. M. (1998). What We Owe to Each Other. Cambridge, MA: Harvard University Press.
Steuwer, Bastian. (2021). Contractualism, Complaints, and Risk. Journal of Ethics and Social Philosophy, 19:2, 11-147.
Tadros, Victor. (2018). Past Killings and Proportionality in War. Philosophy and Public Affairs, 46:1, 9-35.
Wu, Patrick. (2022). Aggregation and Reductio. Ethics, 132:2, 508-525.
Chapter 5
The Culpability of Criminal Attempts
Everyone who commits a crime begins to commit that crime, but not everyone who
begins to commit a crime ends up committing one. Those who begin to commit a crime but don’t
end up committing one can be split into two broad categories: those that attempt to commit the
crime but are unsuccessful and those that never finish their attempt. For instance, someone who
fires a gun at her intended victim but misses would fall into the first category—a category we
can call ‘complete’ attempts—while someone who has the gun knocked out of her hand before
she has a chance to fire would fall into the second—a category we can call ‘incomplete’
attempts.103
In most (if not all) common law jurisdictions, some range of incomplete attempts is
considered to be the same offense as a complete attempt. For instance, if an agent aims a gun at
another with the intention to shoot and then has her gun knocked out of her hand, she will almost
certainly be charged with the same crime as someone who fired and missed, namely, attempted
murder. In jurisdictions in which attempts are eligible for the same punishment as the underlying
offense—such as in most of the U.S. and U.K.—this means that individuals who never complete
an attempted murder can be punished as severely as those who successfully kill their intended
victim.104
103 Usage of the terms ‘complete’ and ‘incomplete’ attempts most closely parallels that of Adams (1998). Helpful discussion can also be found in Cahill (2012).
104 Note that certain punishments, such as the death penalty in the U.S., generally cannot be given to those whose attempts are not successful.
In this chapter I present three arguments against the practice of considering some
individuals who perform incomplete attempts as equally culpable as those who perform complete
attempts. While there may be a certain prima facie plausibility to considering some who never
complete their attempted crimes as equally culpable as those who attempt crimes but fail, this
does not withstand close scrutiny. If we are unable to find a principled defense of the criminal
justice system’s treatment of some incomplete attempts in the same manner as complete attempts, then, at least as a general matter, we should not lump the two together as one.
1. Complete and Incomplete Attempts
To start, it will be helpful to get a basic sense of the difference between complete and
incomplete attempts.105
In this section, I first discuss my definition of these terms, which
borrows heavily from David M. Adams, and then contrast my definition with the one utilized by Larry Alexander and Kimberly Ferzan.
Incomplete and complete attempts occur near the end of a chain of steps that we can
imagine starting when an individual gets the idea of committing a particular crime. For our
purposes, it will be helpful to think of the commission of a crime in five rough stages:
Stage 1) Conception: Individual decides to commit a crime.
Stage 2) Preparation: Individual obtains the materials and know-how necessary for the crime and places herself in a position to commit it.
Stage 3) Agential Commencement: Individual begins to commit the crime.
Stage 4) Agential Completion: Individual finishes doing what she needs to do to commit the crime.
Stage 5) Act Completion: The crime is successfully committed.106
105 For historical discussion of attempts in the common law tradition, see Keedy (1954); for a recent discussion in the U.S. context, see Doyle (2020).
To take a simple example, we can think of making a cake through the lens of these five stages:
Stage 1) Conception: Individual decides to make a cake.
Stage 2) Preparation: Individual buys the ingredients and tin to make the cake,
clears her afternoon schedule, and preps the kitchen.
Stage 3) Agential Commencement: Individual mixes cake ingredients in a bowl
and pre-heats the oven.
Stage 4) Agential Completion: Individual places the filled cake tin in the oven.
Stage 5) Act Completion: The cake is made.
In what follows, we will be primarily concerned with Stage 3) Agential Commencement and
Stage 4) Agential Completion. This is because, as I define the terms below, individuals who
reach Stage 3 perform incomplete attempts, while those who reach Stage 4 perform complete
attempts. To continue with the cake analogy, an individual who performs an incomplete attempt
might, for instance, begin mixing the ingredients in a bowl and then get called away to another
task. An individual who performs a complete attempt, on the other hand, might mix all the
ingredients and place the tin in the oven only to see the oven cease to function, leaving her with
nothing but a tin full of cake batter.
With this basic distinction in mind, we can now examine the following sufficient
conditions for complete and incomplete attempts.107
Complete attempt: An individual performs some action A about which it is true that, if circumstances were as the individual believed them to be (and as a reasonable person would believe them to be), there was a significant possibility that A would have constituted the act element of the underlying offense. When the individual A-s, she does so with the mens rea required for the underlying offense.

Incomplete attempt: An individual performs some action A that is not a complete attempt and about which it is true that, if circumstances were as the individual believed them to be (and as a reasonable person would believe them to be), there was a significant possibility that A would have constituted the beginning of the act element of the underlying offense. When the individual A-s, she does so with the mens rea required for the underlying offense.

106 There is some similarity between these stages and the ones found in Hall (1960, 558).
107 As I make clearer below, by offering sufficient conditions, I acknowledge that there may be other actions that we may want to qualify as complete and incomplete attempts. This will not adversely impact the argument that follows.
Note that incomplete and complete attempts, as I have defined them, are not exhaustive.
That is, there are actions in the vicinity of incomplete and complete attempts that, while arguably
constituting attempts, will not fulfill either of these sufficient conditions. One such case is that of
an agent who stabs a voodoo doll thinking that doing so will kill her intended victim.108 Such an
agent will believe her action to be a complete attempt, but it will not be the case that it is
reasonable for her to think so. Another such case is that of an agent who blows up a vacant house
with an unreasonable belief that her intended victim lives there. The agent’s belief might be
unreasonable because she herself had information from which she could have deduced that the
victim no longer lived there. In what follows, I will not take a position on how these actions
should be charged and punished. The primary reason I exclude such actions from my initial
analysis is that it applies most clearly to incomplete and complete attempts as I define them. If I am successful in convincing the reader that these two categories merit differential treatment, this itself will be quite significant. (A remaining and secondary task will then be to decide how best to treat attempts that do not meet either of these sufficient conditions.)
108 For discussion, see Keedy (1954), 470; Commonwealth v. Johnson, 312 Pa. 140 Atl. 344 (1933); State v. Clarissa, 11 Ala. 57 (1847).
In the context of criminal law, there are three basic reasons why an individual who
performs an incomplete attempt might not perform a complete attempt: abandonment,
postponement, and intervention. In the first case, the agent might have a change of heart, see the
proverbial light, and decide that he couldn’t live with himself if he proceeded any further. (In this
case, the agent might not be culpable for a complete attempt, but the possibility of avoiding
culpability varies with the jurisdiction.109) In the second case, the individual might decide not to
complete his attempt because, for instance, there are a lot of cops around and he thinks he will be
caught. In this case, the agent will likely still be culpable for a complete attempt, at least if he
plans to complete the attempt on some other occasion.110 The last reason why an individual who
performs an incomplete criminal attempt might not perform a complete attempt is perhaps the
canonical one: external intervention. For instance, before the individual has a chance to fire the
gun at her intended victim, a police officer might knock the gun out of her hand. If the only thing
that keeps an individual from performing a complete attempt is this sort of external intervention,
it is easiest to think that he should be charged with the same crime as someone who performs the
complete attempt. Since this is the best case for my opponent, I will focus on it in what follows.
While practice varies from jurisdiction to jurisdiction, in most (if not all) jurisdictions
some incomplete attempts are considered to be equally culpable as complete attempts. One
illustration of this is in the United States Model Penal Code “substantial step” test, according to
which a person is guilty of an attempt to commit a crime if he does anything which “is an act or omission constituting a substantial step in a course of conduct planned to culminate in his commission of the crime,” where those substantial steps must be “strongly corroborative of the actor’s criminal purpose.”111
109 For more on the abandonment defense, see Meehan (1979) and Sanford (1854).
110 For discussion of the subtleties involved here, see Hoeber (1986), Crew (1988), and Tsen Lee (1997).
In the U.K., the Criminal Attempts Act 1981 states that “if, with
intent to commit an offence to which this section applies, a person does an act which is more
than merely preparatory to the commission of the offence, he is guilty of attempting to commit
the offence.” Examples of substantial steps provided by the MPC include lying in wait, enticing
the victim to go to the scene of the crime, investigating the potential scene of the crime,
unlawfully entering a structure or vehicle where the crime is to be committed, and possessing,
collecting, or fabricating materials to be used in the crime’s commission. If an individual
performs one of these substantial steps, then, at least according to the MPC, we might punish that
individual to the same extent as we would a complete attempt (which, as the reader will recall, is
the same as the underlying offense in many jurisdictions).
According to another widely held view, an incomplete attempt ought to be treated as a
complete attempt when it passes what we might call a ‘dangerous proximity’ test, that is, when the defendant gets close enough to threaten the imminent completion of the attempt. When we
examine this distance to completion, what is relevant is the distance between what the individual
has done and the completion of the attempt, not the distance between what the individual has
done and when the thought first came to mind. This test is reminiscent of the one put forward by
Justice Holmes (1881, 68), who thought that not only completed attempts deserved to be punished but also acts performed with the intent to bring about the crime, depending on the
“nearness of the danger, the greatness of the harm, and the degree of apprehension felt.”112
111 MPC Section 5.01 (1)(c), 5.01 (2).
112 For an excellent discussion of the evaluation of Holmes’ view, see Bloustein (1989).
In Commonwealth v. Peaslee, Justice Holmes applied his thinking to a case in which the defendant
had arranged combustibles in a building in a manner in which they were ready to be lit on fire
but ultimately failed to do so. Holmes believed that such a preparatory act (or incomplete attempt)
ought to be punished in the same manner as a complete attempt. In particular, he wrote:
That an overt act, although coupled with an intent to commit the crime commonly is not
punishable if further acts are contemplated as needful, is expressed in the familiar rule
that preparation is not an attempt. But some preparations may amount to an attempt. It is
a question of degree.113
113 177 Mass. 267, 59 N.E. 55 at 56 (1901), emphasis added.

It is exceedingly common, if not the norm, to treat certain incomplete attempts in the same manner as complete attempts. Before beginning a critique of this practice, it will be helpful to contrast my definitions of complete and incomplete attempts with those put forward by Alexander and Ferzan. There are two important differences between our views: first, they define incomplete and complete attempts solely in terms of an agent’s beliefs, without consideration of whether or not those beliefs are reasonable, and second, they define incomplete and complete attempts in terms of (believed) risk exposure. More specifically, Alexander and Ferzan (2009, 197) consider an actor as having completed an attempt “at the time the actor engages in the act that unleashes a risk of harm that he believes he can no longer control (through exercise of reason and will alone).” According to their view (2009, 216), then, an agent is not culpable for an incomplete attempt unless and until she “does something that she believes increases the risk of harm to the victim in a way that she no longer can control,” because this is the point where “what she does ceases to be guided by her reason and will.”

I don’t have a decisive argument against Alexander and Ferzan’s claim (2009, 23) that culpability is, as they put it, “an entirely subjective matter.” However, a couple of cases show
that we do not simply want to focus on what the agent believes about the risk she is unleashing.
At a minimum, these cases suggest, we should also take into account what the agent believes
about the actions she has already performed.
For instance, let’s say an agent plants a bomb under her enemy’s house and sets a timer to
go off in three hours. This agent believes that she retains complete control over whether the
bomb goes off; unfortunately, she falsely thinks that she set the timer to go off in five hours.
When the bomb she planted detonates, her victim is out in the garden and thus is not at all
injured. If we adopt Alexander and Ferzan’s account, this agent is not culpable for attempted
murder because at no point did she unleash a risk of harm that she believed that she could no
longer control. When she planted the bomb, she believed she could control it, and she continued
to (falsely) believe this until the bomb exploded. Yet to think that the agent avoids all culpability
in virtue of this false belief is absurd. We can avoid this result, at least in theory, by including in
our analysis the agent’s beliefs about what she has already done, and in particular her beliefs
about whether what she has already done constitutes the act element of the underlying offense.
It is also unclear why an agent's belief that he can neutralize the harm he has put out into the world
is relevant to his culpability if he, for instance, knows that he is not going to neutralize it and
takes active steps to prevent its neutralization. For instance, let’s say an agent administers a
slow-acting poison to his intended victim knowing that, if the antidote is administered within six
hours, the poison will do no harm at all. At the same time, the agent also knows that he will not
administer the antidote in the coming six hours and will do everything in his power to prevent
anyone from administering it. If a paramedic administers the antidote within six hours and saves
the intended victim, Alexander and Ferzan would again consider the agent blameless even if he,
for instance, attempted to hide the body from the paramedics and counteract the effects of the
antidote. While it’s true that such an agent retains control over the risk he is about to unleash, it is also true that he has done and plans to do everything in his power to ensure that the risk is unleashed. While Alexander and Ferzan would not consider such an agent as attempting a murder, I certainly would, in light of the agent’s belief that his action constitutes the act element of the underlying offense.
With this basic idea of incomplete and complete attempts in mind, we can proceed to a
consideration of three arguments against considering some incomplete attempts as equally
culpable as complete attempts.
2. No Magic Moment of Legal Culpability
One general challenge for those who consider incomplete attempts as equally
culpable as complete attempts is explaining why their analysis only applies to incomplete
attempts. More specifically, as I detail below, we have a number of reasons to think that an
individual’s culpability gradually increases as she moves from Stage 1) to Stage 4) of the
commission of a crime. My opponent, however, must explain why the agent’s culpability for a
criminal attempt begins and peaks at Stage 3) and plateaus through Stage 4). This challenge is
especially pressing in common law jurisdictions such as the U.S. and U.K. in which individuals
who fail to reach Stage 3) are often not culpable for anything.
There are two primary reasons why it is natural to think that an agent’s culpability
gradually increases as she moves from Stage 1) to Stage 4) in the commission of a crime. First,
as she moves from Stage 1) to Stage 4), it becomes more likely that she actually will commit the
crime—both from a third-person perspective, which is most relevant to the law, and (quite likely)
from a first-person perspective. From a third-person perspective, many more people think about
committing crimes than actually take steps in preparation, and many more take steps in
preparation than begin the crime’s actual commission. In addition, from a first-person
perspective, an agent may have serious doubts that she’ll be able to follow through with her plan
even as she prepares and begins to commit the crime. As such, from both a first-person and a third-person perspective, the likelihood that the crime will actually be committed gradually increases
as an individual moves from Stage 1) to Stage 4), and this seems to correspond to a gradual
increase in culpability.
Another basic reason to think that an agent’s culpability gradually increases is that, as she
moves from Stage 1) to Stage 4), it will be true that she fails to take advantage of more and more
opportunities to change her mind. As a general matter, as an individual moves from the
conception phase through preparation and completion, she will continually be faced with the
choice of abandoning the attempted commission of the crime or soldiering on. Each time she
decides to soldier on, she shows that she is more and more dedicated to making sure that the
crime is successfully completed. In this way, as she moves from Stage 1) to Stage 4), her heart
gradually hardens and she becomes gradually more criminally culpable.
To get a better idea of how my opponent views the transition from Stage 2) to Stage 3),
the moment at which an agent’s culpability leaps and plateaus, let’s work with some concrete
cases that are based on People v. Miller:114
NO AIM AMY: After threatening to kill Sierra, Amy obtains a .22 caliber rifle and goes to a public park, a place where she is allowed to have a firearm, with the loaded gun. Amy plans to approach Sierra and then aim the gun at her and fire. As Amy approaches Sierra with the loaded rifle, but before she takes aim, she is tackled by police.
114 People v. Miller, 2 Cal. 2d 527, 42 P. 2d 308 (1935). Cases that could have been used to similar effect include Woolridge v. U.S., 237 F. 775 (9th Cir. 1916) and U.S. v. Joyce, 693 F. 2d 838 (8th Cir. 1982). See also Batey (2012) for a hypothetical that raises the same fundamental issues.
AIMING ASHLEY: After threatening to kill Sierra, Ashley obtains a .22 caliber rifle and
goes to a public park, a place where she is allowed to have a firearm, with the loaded gun.
Ashley plans to approach Sierra, aim the gun at her and fire. After Ashley approaches
Sierra, and after she takes aim, she is tackled by police.
To start, I would submit that, while Ashley has begun to commit a crime, Amy has not. In other
words, it is only Ashley who has reached Stage 3) and performed an incomplete attempt. This is
because, while both Ashley and Amy are approaching their intended victim, loaded gun in hand,
it is only Ashley who has taken aim. For this reason, and following the court’s reasoning in
Miller, I would submit that, in NO AIM AMY, Amy is not likely to be charged with attempted
murder. As the court wrote (and most legal commentators agree), in order to be culpable for
attempted murder, Amy would have had to take aim at her intended victim.115
Before discussing these cases further, it’s worth noting that the point at which an agent
begins to commit a crime is incredibly difficult to pinpoint with any sort of precision.116 In other
words, one judge’s Stage 2) actions might easily be another’s Stage 3) actions, which means that
different judges, viewing the same fact pattern, can easily disagree as to whether an incomplete
attempt was performed. As the court in Miller wrote, “it is impossible to formulate a general rule or definition of what constitutes an attempt which may be applied as a test in all cases.”117
115 See, for instance, Pizzi (2012) and Cahill (2012). For a case in which a defendant was charged with attempted murder on the basis of having already aimed the gun, see R v Jones (1990) 1 W. L. R. 1057 (Eng.). For critical discussion of this case, see Williams (1991).
116 I take this to be the main argument in Adams (1998). For related discussion in the context of Scottish law, see Ferguson (2014).
While
I will not harp on this point, given the difficulty of even identifying incomplete attempts, it is
hard to imagine the transition from Stage 2) to Stage 3) in the commission of a crime bearing
some great moral weight. In fact, the court’s own reasoning in Miller itself does not successfully
make this distinction. The court cites Wharton’s Criminal Law for the claim that “[a] juridical
cause is such an act, by a moral agent, as will apparently result, in the usual course of natural
events, unless interrupted by circumstances independent of the actor, in the consequence under
investigation.”
118
At least at first glance, however, it’s hard to imagine that the simple fact that
one agent had taken aim at her intended victim while the other has not is sufficient to determine
that, left uninterrupted, only the first would have, in the usual course of events, resulted in the
‘consequence under investigation.’ After all, someone like Amy who has a murderous plan, buys
a gun, scopes out her victim and begins to approach her will, if uninterrupted, in the usual course
of natural events, at least attempt to commit murder.
To be clear, Ashley may very well be more culpable than Amy, since Ashley progressed
to Stage 3) while Amy only progressed to Stage 2). This is perfectly consistent with, and would follow the same pattern of judgments as, thinking that someone who reaches Stage 4) is even
more culpable than someone who reaches Stage 3). What seems counterintuitive is to think that,
not only is Ashley more culpable than Amy, but Ashley is as culpable as someone who fires at
her intended victim. That is, if culpability increases as one progresses in the completion of a
crime, the most natural way for this to happen would be for culpability to increase steadily from
Stage 1) until Stage 4). At a minimum, if there is an abrupt leap and plateau in culpability when one crosses from Stage 2) to Stage 3), this is something that would require additional explanation, as well as (at least arguably) a clear way of demarcating this boundary.
117 Miller, at 529.
118 Miller, at 531, citing Wharton’s Criminal Law, 12th ed., p. 287. This is similar to a test put forward by Stephen (2014).
As an illustration of the counterintuitive implications of my opponent’s view, consider
three ways in which things might transpire in AIMING ASHLEY:
1) Actual World: After Ashley aims but before she fires, she is tackled by police.
2) Renunciation World: After Ashley aims but before she fires, she has a change of heart
and renounces her criminal intent.
3) Firing World: After Ashley aims but before she fires, she considers whether to go
through with her plan. She then reaffirms her commitment to her criminal endeavor
and fires six shots at Sierra, barely missing Sierra’s head.
In my opponent’s view, Ashley is equally culpable in the Actual World and the Firing World.
Contrary to the arguments put forward above, my opponent denies that the facts that 1) Ashley
passed on an additional opportunity to renounce her criminal plans in Firing World and 2) that
Ashley didn’t actually fire any shots in Actual World are at all relevant to her culpability. At the
same time, my opponent is likely to concede that Ashley’s culpability is significantly diminished
in Renunciation World.119 If Ashley’s culpability in Renunciation World is significantly diminished, however, one would think that her decision as to whether or not to renounce is quite significant to her culpability. If this decision is quite significant, then a world in which she does not have a chance to make this choice (the Actual World) merits distinct treatment from a world in which she does (Firing World).
119 For more on abandonment, see references in fn 7 and fn 8.

At this point, it’s worth dispelling two easy explanations that my opponent might offer for drawing such a significant distinction between Stage 2) and Stage 3). One is purely practical:
we might want to authorize police intervention when individuals reach Stage 3), and we might
think that, in order to do so, we must consider such individuals as equally culpable as those who
reach Stage 4). What this explanation fails to take into account is that the justification for police
intervention is distinct from levels of culpability in the criminal law. In particular, police officers
will always be able to intervene when they reasonably believe a crime will take place, and this
ability to intervene is independent of what charges, if any, are eventually filed.120 Another
explanation of this leap is epistemic: that is, some courts follow the Model Penal Code in
claiming that only incomplete attempts are “strongly corroborative of the actor’s criminal
purpose.”121 The primary problem with this epistemic justification is that Stage 2) actions can
also be strongly corroborative of an actor’s criminal purpose. According to this line of reasoning,
then, individuals who fail to even reach Stage 2) may be equally culpable as those who perform
complete attempts.122
Putting aside practical and epistemic considerations, it’s exceedingly difficult to find
genuinely normative reasons to consider those who perform incomplete attempts, and only those
who perform incomplete attempts, as equally culpable as those who perform complete attempts.
One influential response to this challenge comes from Gideon Yaffe, who argues that instead of
considering only those who have performed incomplete attempts as equally culpable as those
who perform complete attempts, we ought to consider all rational actors who perform any action
as a ‘means’ to the crime as equally culpable as those who perform complete attempts. More
specifically, Yaffe (2010, 272) claims that an actor is as culpable as someone who performs a
complete attempt if she performs an action “that is not the ‘last act’” for the completion of the crime but “is performed as a means to” the commission of the crime, and the actor “is both practically competent and rational.” Yaffe (2010, 272-3) offers this condition as capturing “those cases in which we reach the conclusion that the defendant would have performed the act involved in completion had he had ability and opportunity and no change of mind.”
120 Among other things, the charges that are eventually filed (if any) will depend on the entirety of the evidence gathered, not simply the actions witnessed by the police officers. See Hoeber (1986, 389-390).
121 5.01 (c).
122 In addition, according to this line of reasoning, those who reach Stage 3) will presumably not be equally culpable as those who reach Stage 4) when their doing so is not strongly corroborative of their criminal purpose. This sort of case is discussed in more detail in Section 4, as it provides further ammunition against my opponent’s view.
Yaffe’s account faces three central problems, two of which are related to the claims in
this Section. First, if we adopt Yaffe’s account, we must reject the thought that an actor’s
culpability increases over time as she moves from Stage 1) to Stage 4) in the commission of a
crime. This means, first, that the probability that the actor will actually go through with the crime is not relevant to her culpability and, second, that the number of opportunities she has passed up to abandon her criminal endeavor is likewise irrelevant.
The second problem with Yaffe’s account is that, just as it is hard to draw a clear line
between Stage 2) and Stage 3), it is also hard to draw a clear line between Stage 1) and Stage 2),
and this difficulty should lead us to question the moral weight that such a line can hold. Most
fundamentally, we should wonder why it is that certain steps that are part of Stage 1) could not
be considered ‘means’ to the commission of the crime. For instance, let’s say an actor thinks
through all of the stages that would be necessary to commit the crime, what sort of tools would
be necessary, and the conditions under which she would be most likely to escape undetected. If
this actor were practically competent and rational, why would these Stage 1) actions not make
her as culpable as someone who performed a complete attempt? Although we might have
concerns about gathering sufficient evidence to convict an actor on the basis of such internal
ideation, in principle, if we adopt Yaffe’s account, the actor would be equally culpable as someone who performs a complete attempt, even though the actor has not even reached Stage 2) or performed any overt preparatory act.[123]
The third problem with Yaffe’s account is that, in order to consider an individual at Stage
2) as culpable as an individual at Stage 4), we must also radically revise the mens rea
requirement for attempts. According to the Model Penal Code, for instance, an action can only be
considered an attempt if it is performed with the mens rea required for the crime. This is a simple
enough requirement to meet when the actions in question are incomplete attempts: that is, when
Ashley aims at Sierra, she likely does so with the intention of causing Sierra’s death. However,
the further the action is removed from Stage 4), the less likely it is that it will be performed with
the required purpose. For instance, when Ashley puts on all black clothing the morning of her
planned crime, it’s absurd to think that she does so with the intention of causing Sierra’s death.
Yaffe recognizes this challenge and, in response, claims that, as long as an actor has the
intention of committing a particular crime and is guided by that intention, it is fair to hold him
accountable for the mens rea of the complete attempt. This is because, as Yaffe (2010, 126) sees
it, “if he is actually guided by his intention-based commitment to the relevant acts, results, and
circumstances, then he has intent with respect to both those conditions that are in the content of
the intention and with respect to those that are realized thanks to the intention’s playing its
proper causal role.” That is, if the actor intends to commit a crime, then, according to Yaffe
(2010, 126), she is committed to the occurrence of the mens rea required for the crime “since
[the required mental states] would be present were [his] intention to play its proper causal role.”
[123] Yaffe (2010, 281) claims that such preparatory actions would not be culpable because they provide insufficient evidence. Alongside Simester, I disagree, as the amount of evidence we might have against a particular defendant will vary significantly. In addition, as I return to in Section 4, what fails to justify state intervention at this point is not contingently epistemic but necessarily normative.
This revisionary approach to mens rea is worth highlighting because, in its own way, it is
even more radical than considering agents who reach Stage 2) as equally culpable as those who
reach Stage 4) in the commission of a crime (though perhaps it is a necessary corollary). In
claiming that those who reach Stage 2) are as culpable as those who reach Stage 4), one is at
least extending the widely held judicial view and practice of considering those who reach Stage
3) as equally culpable as those who reach Stage 4). Yaffe’s claim about the mens rea
requirement cannot be seen as a similar sort of extension. That is, while courts extend the actus reus requirements to include those at Stage 3), they maintain the same mens rea requirements as
the underlying offense.[124] If we accept Yaffe’s account, however, the mens rea requirement is, at least in some sense, simply folded into the actus reus requirement. That is, according to Yaffe, an individual is as culpable as one who performs a complete attempt with the intention of causing death even if she never performs an action, or even the beginning of an action, with that intention.[125]
Insofar as the mens rea with which agents perform actions is a crucial component of
the traditional story of criminal culpability, this aspect of Yaffe’s account, though it has received
much less scholarly attention, is perhaps the most revisionary and difficult to swallow.
3. The Parallel between Legal and Moral Evaluations
If we are tempted by the arguments presented in Section 2, we will be tempted to think
that an agent is only as culpable for her actions as others who have reached the same stage in the
commission of a crime. In this section, I argue that this close relationship between our legal
[124] This is because, as Yaffe (2010, 125) puts it, “[m]ental state…transforms wrongful behavior into the kind of thing that the criminal law is to respond to.”
[125] Nor does the individual aid someone else in performing a complete attempt or hire someone else to do so—actions which are seen as justifying imputing such a mens rea to her.
evaluation of an agent and the agent’s actual actions parallels a similar finding in the moral
domain.
Before turning to moral evaluations, it’s worth noting that my claim is not that moral
and legal culpability are the same. (In fact, as we will see in Section 4 below, one of the
problems with my opponent’s view is that it considers certain actions legally culpable when, at
most, they are morally culpable.) However, what I think highly unlikely is that, in the process of
the commission of a crime, an agent’s legal culpability will radically increase while her moral
culpability stays the same. In other words, if agents who reach Stage 3) lack the moral culpability
of agents who reach Stage 4), I take this to undermine the claim that their legal culpability spikes
to equal those who reach Stage 4).
While most discussions of criminal culpability focus on an agent’s moral
blameworthiness, an example of an agent’s praiseworthiness will be a helpful warmup for our purposes. Consider the following:
SUCCESSFUL SALLY: Sally knows that there is a child trapped inside a burning
building and thus decides to try and save the child’s life. Sally’s heroic efforts succeed in
saving the child.
SAD SAMANTHA: Samantha knows there is a child trapped inside a burning building
and thus decides to try and save the child’s life. Despite Samantha’s heroic efforts,
however, by the time she gets the child out of the building, the child has inhaled too
much smoke to survive.
It’s clear enough that Sally merits a lot of praise for saving the child. In addition, even though
Samantha does not succeed in saving the child, it seems clear enough that she merits a lot of
praise for her attempt—perhaps even as much praise as Sally merits. One might tentatively conclude from these cases that an individual like Samantha who performs a complete but unsuccessful attempt merits as much praise as someone like Sally who performs a complete and successful attempt. What’s at issue here is the agent who performs the incomplete (and unsuccessful) attempt, such as the following:[126]
STOPPED SHEILA: Sheila knows there is a child trapped inside a burning building and
thus decides to try and save the child’s life. As she enters the house and is about to run up
the stairs, her (much stronger) friend—who doesn’t want Sheila to risk her life—grabs
Sheila and removes her from danger.
While Sheila surely merits some praise for her plan and intention to save the child, few would be
tempted to consider her as praiseworthy as either Samantha or Sally. I would explain this
intuitive difference by pointing to the simple fact that, while Sheila had the same intention and
plans as both Samantha and Sally, she simply did not put her life on the line in the way that Sally
and Samantha did. In this way, the actions an agent actually performs seem to be crucial to her
praiseworthiness.
Of course, some might claim that our analysis of praise is fundamentally different from
our analysis of blame or culpability. While this may be true, it’s easy to see that, at least in a
wide range of cases, the actions that an agent actually performs are of crucial importance to her
blameworthiness as well. Consider the following:
RACIST RON: At the end of a staff meeting, Ron makes a racially derogatory remark
about a co-worker. Awkward silence, followed by a heated argument, ensues. Ron is
reprimanded and suspended without pay.
[126] I acknowledge that an individual might perform an incomplete but successful attempt; however, such a possibility will not concern us here.
RETICENT RAMY: At the end of a staff meeting, Ramy makes a racially derogatory
remark about a co-worker. Unbeknownst to Ramy, his microphone was muted, so no one
heard his remark.
As was the case with praise, it’s clearest that Ron merits blame for his remarks. That is, Ron is
clearly culpable for his hurtful remark and merits the punishment he receives. It’s also easy
enough to get on board with the claim that Ramy merits as much blame as Ron does, even
though no one hears his remark. We might support this intuition by pointing out that, if Ramy’s
boss reviewed the recording of the meeting and read Ramy’s lips, Ramy would likely merit some
punishment as well. We can take this pair of cases as tentative support for the claim that, when it
comes to blameworthiness, agents who perform complete and successful attempts (such as Ron)
merit as much blame as those who perform complete but unsuccessful attempts (such as Ramy).
What’s at issue here is the blameworthiness or culpability of an agent who performs an
incomplete and unsuccessful attempt, such as in the following:
REBUFFED RENFORD: At the end of a staff meeting, Renford begins to make one last
comment—a racially derogatory one. Before he can get a word out, however, his boss
gets a pressing call and abruptly ends the meeting.
Again, I’m open to the claim that Renford merits some amount of blame for starting to make a
racially derogatory comment. What is much harder to believe is that Renford is as blameworthy
or culpable as either Ramy or Ron. That is, if Renford’s boss found out that, had he not ended the
meeting abruptly, Renford would have made a racially derogatory comment, this is not likely to
provide the boss grounds to punish Renford (beyond perhaps issuing him a warning of some
sort). As was the case with Sheila, I would explain this intuitive difference by pointing out the
fact that, although Renford had the same intention and plans as both Ramy and Ron, he simply
did not make a racially derogatory comment. In this way, the action that an agent actually
performs seems crucial to his blameworthiness.
The primary response worth considering is that the difference
between Renford, Ramy, and Ron is simply a matter of luck. According to this line of reasoning,
Ron was unluckiest (if someone who merits blame is unlucky), as he was both able to make his
comment and his comment was heard, and Renford was the luckiest, as—despite his best
efforts—he was prevented from making his comment. According to this line of argument, we
don’t want an agent’s blameworthiness or culpability to depend entirely on this sort of luck, so
we should treat all three agents similarly, at least from a moral perspective.[127]
One problem with this response is that it conflates two different types of luck: luck in outcomes and luck in actions performed. The only thing that separates Ron and Ramy (and Sally
and Samantha for that matter) is luck in outcomes; that is, both agents do everything necessary in
order for the desired outcome to obtain, but only one of them is ultimately successful. I’m more
than willing to accept, for the sake of argument, that outcome luck ought not impact an agent’s
blameworthiness. We should not move so easily from this conclusion to one about the difference
between both Ron and Ramy on the one hand and Renford on the other, however. This is
because, unlike Ron and Ramy, Renford does not make a racially derogatory comment; he
simply begins to do so before he is cut off by his boss.
For lack of a better phrase, we might describe the case of Renford (and Sheila) as
instances of ‘action luck.’ While it’s plausible enough to think that outcome luck ought not
impact our assessments of praise, blame, and culpability, this does not apply as easily to action
[127] For classic discussions of moral luck, see Nagel (1979) and Williams (1976).
luck. For instance, Sheila does not seem as praiseworthy as Sally or Samantha even though the
only thing that prevented her from a complete attempt is bad action luck. For similar reasons, we
intuitively distinguish Renford from Ron and Ramy. Of course, my opponent may urge us to
simply reject these intuitions. The first reason to hesitate before rejecting these intuitions is that,
if we do so, the result is much more radical than simply considering agents who reach Stage 3) as
equally culpable as those who reach Stage 4). If we reject the relevance of action luck, then an individual who
does not even reach Stage 1) may be equally culpable as someone who performs a complete
attempt. For instance, it may be true of many of us that, if we saw a stranger drop something of
incredible value, we might keep it for ourselves. Yet it’s hard to think that we are all as
blameworthy as someone who actually picks up and keeps a stranger’s valuables. Along similar
lines, some of us might be quite brave if we were ever on the battlefield, but few would think we merit the
same amount of praise as those who proved their mettle under extreme duress.
The second, related reason why rejecting the relevance of action luck to culpability will
not result in much progress is that my opponent’s account is vulnerable to the same critique. In
particular, my opponent thinks that, once an individual reaches Stage 3) Agential
Commencement of the commission of a crime, her legal culpability equals that of someone who
reaches Stage 4) Agential Completion. Insofar as my opponent does not make the same claim
about individuals who simply reach Stage 2), she accepts that the point at which an individual is
interrupted in her commission of a crime is, in fact, relevant to her blameworthiness or
culpability. As such, in order to be of any help to my opponent, we must reject intuitions with
respect to action luck as applied to individuals who reach Stage 3) but accept them as applied to
individuals who reach Stage 1) and Stage 2). Without additional explanation, it’s not clear what
reason we have to think of action luck in this selective and ad hoc manner.
4. Contrasting Moral and Legal Culpability
In the last section, I argued that it would be odd if an agent’s moral culpability was
disconnected from her legal culpability in such a way that her legal culpability radically
increased while her moral culpability remained the same. In this final section, I argue that there
are clearly cases in which agents who reach Stage 3) have no legal culpability at all. If many
agents who complete Stage 3) are not at all culpable, then it simply cannot be the case that those
who perform a substantial step or come within a dangerous proximity of a complete attempt are
equally culpable as those who perform a complete attempt.
Few would argue that all actions that are morally culpable ought to be legally culpable.
First and foremost, we want to leave room in a democratic, pluralistic society for many different
conceptions of the good, and those with different conceptions of the good will likely disagree
about which actions are morally culpable.[128] Perhaps just as importantly, the criminal justice
system’s involvement in our lives is incredibly expensive and has a chilling effect on personal
liberties. As such, even if we all agreed on the moral culpability of certain actions (such as
lying), we would rightly hesitate to get the state involved in prohibiting such speech acts, as
Simester (2014, 1250) points out, “because of the intrusiveness of any such prohibition and its
implications for the lives of citizens.” John Hasnas (2012, 763) puts this point quite eloquently:
Criminal law may be necessary to provide for the security of citizens’ persons and
property, but it does so by depriving those who violate the law of their liberty. The
normative priority of liberty over security requires that criminal responsibility be
assigned so that increases in security against criminals are not purchased with decreases
[128] In particular, we might adopt the sort of ‘political liberalism’ advocated by Rawls (1999) and, more recently, Quong (2011).
in the liberty of law-abiding citizens. Accordingly, a liberal society regards the protection
of citizens against state enforcement error and abuse as relatively more important than
protection against criminal activity.
Certain incomplete attempts fall into this category of actions that, while perhaps carrying
with them some moral culpability, ought to carry no legal culpability. Let’s start with an
example:
PERMISSION PETE: Jamie has given Pete permission to take $50,000 from her vault.
Pete plans to take $100,000 instead. As Pete walks into the vault and reaches for a stack
of bills, he is tackled by the police, arrested, and charged with attempted robbery.[129]
Even though Pete had begun to steal Jamie’s money—an act that would surely pass both the dangerous proximity and substantial step tests—it does not seem that his incomplete attempt is at all legally culpable. After all, Pete was at liberty to be in Jamie’s vault, by which I mean that Jamie had no claim that Pete not be there (since Jamie had given Pete permission).[130]
If we
consider incomplete attempts as equally legally culpable as complete attempts, however, we will
be committed to the claim that Pete is legally culpable for attempted robbery for performing an
action that he is at liberty to perform. While Pete may have some moral culpability for entering
Jamie’s vault with this sinister purpose, it seems absurd for this to trigger the mechanism of state
intervention.
One way that my opponent might respond to the case of PERMISSION PETE is to point
out that there’s no way that we’d have enough evidence to convict Pete of anything, let alone
attempted robbery. However, this response fails to recognize that more evidence can be adduced
[129] This case is loosely based on one found in Stephen (1883).
[130] The basic idea here is similar to Hohfeld’s (1919) framework of rights, claims, and liberties. Of course, whether Jamie had a claim that Pete not enter the vault with sinister intention is what is at issue here.
at trial than the agent’s actions at the time of his incomplete attempt. In particular, Pete may have
written detailed notes in his diary with respect to the commission of the crime, he may have
bragged about it on social media, and he might even confess to his intentions after the fact.
Given the possibility of all this supplemental evidence, we should hesitate before thinking that,
as a general rule, someone like Pete could never be convicted of attempted robbery.
To facilitate further discussion, it’ll be helpful to get a couple of other examples on the
table.
POLICY PAM: Pam plans on committing insurance fraud. More specifically, she plans to
buy insurance for her jewelry, pretend the jewelry was stolen, and then file a fraudulent claim with the insurer. After buying the insurance, Pam obtains the paperwork necessary to
file an insurance claim. After she prints her name and policy number at the top, she is
arrested and charged with attempted insurance fraud.
POSSESSIVE PHIL: Phil plans on purchasing some cocaine from Senthil, who Phil
knows to be a trustworthy and reliable dealer. Phil goes to a public park with enough
money for a purchase. Senthil approaches Phil and sits next to him on a park bench. As
Phil puts his hand in the pocket that contains money, he is arrested and charged with
attempted possession of a controlled substance.
In both of these cases, I would argue that, although the individuals involved perform incomplete
attempts, their actions should not be legally culpable because they should have the liberty to
perform them. More specifically, just as Pete was at liberty or free to enter Jamie’s vault, Pam
should be free to print her name and insurance policy number at the top of a claim, and Phil
should be free to reach into his pocket while sitting next to Senthil in a public park.[131] These individuals should retain their liberties for the same reason that individuals retain their liberty to speak untruths: the costs of state intervention and its chilling effect on personal
freedom are simply too high otherwise. Even if these individuals, left to their own devices,
would have performed complete attempts, this alone does not give us reason to consider them
legally culpable before they do so. To hark back to our discussion in Section 2, there does not
seem to be a magic moment before an individual completes an attempt at which she
becomes culpable for doing so. When the individual does perform a complete criminal attempt,
then, this is a significant normative event, an event that justifies a unique level of state
intervention.
Of course, my claim is not that individuals are never morally culpable for performing
actions that they should be at liberty to perform. For instance, individuals are free to form non-
violent communities around hateful and prejudicial ideologies, even though the formation of
such communities is morally blameworthy in some sense. My claim is much weaker, namely,
that if an agent should be free to perform an incomplete attempt but should not be free to perform
a complete attempt, then the two actions are not equally culpable from a legal perspective, given
the immense cost of state intervention and its impact on individual liberties.
In this way, considering individuals who perform incomplete attempts as equally legally
culpable as those who perform complete attempts embodies state overreach in two distinct ways. First, such a policy will consider the incomplete attempts of individuals such as Pete, Pam, and
Phil as worthy of state intervention when in fact they are not. Second, such a policy will consider
the incomplete attempts of individuals such as Ashley, who aim but do not fire at their intended
[131] Again, where being at liberty to perform a particular action entails that no individual has a claim against their performing that action.
152
victims, as more culpable than they actually are.[132] The above cases also illustrate the gap between a practical or epistemic justification, on the one hand, and a normative justification, on the other, for charging those who perform incomplete and complete attempts with the same crime.[133]
Unlike the cases discussed in Section 2, in none of the cases above is there a practical
need for the police to immediately intervene. As such, if our support for my opponent’s claim
was based on practical considerations, it should not apply to these cases. In addition, the
individuals’ actions in the cases in this Section may not, on their own, strongly corroborate
criminal intent. As such, if our support for my opponent’s claims was based on epistemic
considerations, it should not apply to these cases. Given the fact that, in these cases, agents may
perform actions that they should be free to perform, it is implausible to think that they are
equally legally culpable as agents who perform complete criminal attempts, even if they have
performed so-called “substantial steps” or come within a “dangerous proximity” of a complete
attempt.
Of course, my opponent might point out that while many incomplete attempts are, in fact,
harmless and seemingly not legally culpable, this is not true of all incomplete offenses. For
instance, to revisit the examples discussed in Section 1, it is not harmless to aim a gun at one’s
intended victim, especially if the victim is aware of the threat. My response to this point would
be that, in such cases, the actions that constitute the incomplete attempt merit state intervention
for reasons that are independent of the possibility of a complete attempt. To continue with the
same example, someone who aims a gun at someone else may be culpable for threatening
[132] As a practical matter, we are only likely to see manifestations of the second sort of state overreach, and we can expect a number of individuals who perform otherwise culpable incomplete attempts to be arrested and charged with crimes with which they should not be charged. At the same time, the very fact that police officers in such cases may wait for individuals to perform complete attempts in order to be certain of those individuals’ intentions (and obtain additional evidence for trial) should give us reason to think that incomplete attempts are not equally culpable as complete attempts.
[133] Again, assuming the complete attempts are unsuccessful.
another with bodily injury, placing others in reasonable fear of physical injury through display of a deadly weapon, or reckless endangerment.[134] When incomplete attempts consist of actions that
themselves carry legal culpability, the question we must answer is not whether state intervention
is merited, but what level of culpability those actions ought to carry. According to the view
defended here, the level of culpability of actions that constitute an incomplete attempt should
never equal the level of culpability of actions that constitute a complete attempt. Simply put, an
individual who points a gun at another and threatens them is culpable for those actions but is not
culpable for an attempted murder unless he pulls the trigger. In other words, while the actions
that constitute an incomplete attempt may themselves be against the law, this does not mean that
the individuals who perform them are as culpable as those who perform the additional actions
that constitute a complete attempt.
To be clear, if we accept the argument of this section, we might still think that some
incomplete attempts are equally legally culpable as complete attempts. That is, my opponent can
simply concede that some incomplete attempts are not culpable at all while maintaining that this
does not apply to all incomplete attempts. If we accept the argument in this section, however,
this opens up the possibility that, when incomplete attempts are in fact culpable, their culpability
stems entirely from the actions that are actually performed. In other words, if my opponent
wishes to build her case on the basis of incomplete attempts such as pointing a gun at and
threatening others, she leaves herself vulnerable to the charge that, while these acts are certainly
culpable, this provides no particular support for the thesis that those who perform incomplete
attempts are equally culpable as those who perform complete ones. Instead, a thesis that is
equally supported by these cases is that, while only those who perform complete attempts ought
[134] A similar point is made in Pizzi (2012).
to be charged with attempted murder, those who menace and threaten others ought to be held to
account for those actions (and only those actions).
5. Conclusion
In this chapter, I presented three arguments to the conclusion that individuals who
perform incomplete attempts should never be charged with the same crime as those who perform
complete attempts. First, it is hard to see how there is some magic moment in the commission of
a crime at which an agent’s legal culpability radically increases. Instead, it seems much more
natural to think that, as an agent gets closer to the crime’s commission, he becomes more and
more culpable for what he’s done. Second, an agent’s moral culpability does not spike when she
reaches a certain stage in the commission of a crime, so a radical increase in her legal culpability
would be anomalous. Lastly, certain incomplete attempts seem to lack legal culpability altogether, which simply rules out the possibility that they are equally legally culpable as complete attempts.
Given the complete lack of normative—as opposed to practical or epistemic—reasons to treat
incomplete attempts as equally culpable as complete attempts, we must take seriously the
possibility that radical reform may be in order. This is especially true in jurisdictions in which
individuals who are found guilty of attempts face the same punishment as those who actually
perform the underlying offense.
Sources:
Adams, David M. (1998). The problem of the incomplete attempt. Social Theory and Practice, 24(3),
317-343.
Alexander, Larry and Ferzan, Kimberly K. (2009). Crime and Culpability: A Theory of Criminal Law.
Cambridge: Cambridge University Press.
Batey, Robert. (2012). Attempt and Reckless Endangerment in Saul Bellow’s Herzog. Ohio State Journal
of Criminal Law, 9, 749-750.
Bloustein, Edward J. (1989) Criminal Attempts and the Clear and Present Danger Theory of the First
Amendment. Cornell Law Review, 74(6), 1118-1150.
Cahill, Michael T. (2012). Defining Inchoate Crime: An Incomplete Attempt. Ohio State Journal of Criminal Law, 9, 751-760.
Crew, Michael H. (1988). Should voluntary abandonment be a defense to attempted crimes. American Criminal Law Review, 26(2), 441-462.
Doyle, Charles. (2020). Attempt: An Overview of Federal Criminal Law. Congressional Research Service
R42001.
Ferguson, Pamela R. (2014). Moving from Preparation to Perpetration? Attempted Crimes and Breach of
the Peace in Scots Law. Ohio State Journal of Criminal Law, 11, 687-700.
Hall, Jerome. (1960). General Principles of Criminal Law. Indianapolis: Bobbs-Merrill.
Hasnas, John. (2012). Attempt, Preparation, and Harm: The Case of the Jealous Ex-Husband. Ohio State
Journal of Criminal Law, 9, 761-770.
Hoeber, Paul R. (1986). The Abandonment Defense to Criminal Attempt and Other Problems of
Temporal Individuation. California Law Review, 74, 377-427.
Hohfeld, Wesley. (1919). Fundamental Legal Conceptions, W. Cook (ed.). New Haven: Yale University
Press.
Holmes, Oliver W. (1881). The Common Law. New York: Dover.
Keedy, Edwin R. (1954). Criminal Attempts at Common Law: General Principles. University of
Pennsylvania Law Review, 102, 464-489.
Lee, Evan Tsen. (1997). Cancelling crime. Connecticut Law Review, 30(1), 117-156.
Nagel, Thomas. (1979). Mortal Questions. New York: Cambridge University Press.
Pizzi, William T. (2012). Rethinking Attempt Under the Model Penal Code. Ohio State Journal of
Criminal Law, 9, 771-778.
Quong, Jon. (2011). Liberalism Without Perfection. Oxford: Oxford University Press.
Rawls, John. (1999). A Theory of Justice: Revised Edition. Oxford: Oxford University Press.
Simester, Alfred P. (2014). Review of Attempts: In the Philosophy of Action and the Criminal Law. Mind, 123(492), 1249-1255.
Stephen, J. F. (2014). A History of Criminal Law of England (three volumes). Cambridge: Cambridge
University Press.
Williams, Bernard. (1976). Moral Luck. Proceedings of the Aristotelian Society, 50, 115-135.
Williams, Glanville. (1991). Wrong Turning on the Law of Attempts. Criminal Law Review, 416-425.
Yaffe, Gideon. (2010). Attempts: In the Philosophy of Action and the Criminal Law. Oxford: Oxford
University Press.
157
Bibliography
Adams, David M. (1998). The problem of the incomplete attempt. Social Theory and Practice, 24(3),
317-343.
Adams, Robert M. (2001). Scanlon’s Contractualism: Critical Notice of T.M. Scanlon, What We Owe to
Each Other. The Philosophical Review, 110(4), 563-586.
Adler, Matthew D. (2003). The Puzzle of ‘Ex Ante Efficiency’: Does Rational Approvability Have Moral
Weight? University of Pennsylvania Law Review, 151(3), 1255-1290.
Alexander, Larry. (2000). Deontology at the Threshold. San Diego Law Review, 37, 893-912.
Alexander, Larry and Ferzan, Kimberly K. (2009). Crime and Culpability: A Theory of Criminal Law.
Cambridge: Cambridge University Press.
Anscombe, G.E.M. (1957). Intention. Cambridge: Harvard University Press.
Arneson, Richard J. (1989). Equality and Equal Opportunity for Welfare. Philosophical Studies, 56(1),
77-93.
Arneson, Richard J. (1999). Equality of Opportunity for Welfare Defended and Recanted. The Journal of
Political Philosophy, 7, 488–97.
Arneson, Richard J. (1997). Postscript to “Equality and Equal Opportunity for Welfare.” In L. Pojman and
R. Westmoreland (Eds.), Equality: Selected Readings (pp. 238-241). Oxford: Oxford University Press.
Ashford, Elizabeth. (2003). The Demandingness of Contractualism. Ethics, 113(2), 273-302.
Batey, Robert. (2012). Attempt and Reckless Endangerment in Saul Bellow’s Herzog. Ohio State Journal
of Criminal Law, 9, 749-750.
Bjorndahl, Adam, Alex J. London and Kevin J.S. Zollman. (2017). Kantian Decision Making Under
Uncertainty: Dignity, Price, and Consistency. Philosophers’ Imprint, 17(7), 1-22.
Bloustein, Edward J. (1989) Criminal Attempts and the Clear and Present Danger Theory of the First
Amendment. Cornell Law Review, 74(6), 1118-1150.
Bradley, Seamus. (2017). Are objective chances compatible with determinism? Philosophy Compass.
Bratman, Michael E. (1987). Intention, Plans, and Practical Reason. Cambridge: Harvard University
Press.
Brennan, Samantha. (1995). Thresholds for Rights. The Southern Journal of Philosophy, 33, 143-168.
Broome, John. (1991). Fairness. Proceedings of the Aristotelian Society, 91(1), 87-102.
Brown, Jessica. (2008). Subject-Sensitive Invariantism and the Knowledge Norm for Practical
Reasoning. Nous, 42(2), 167-189.
Buchak, Lara. (2013). Risk and Rationality. Oxford: Oxford University Press.
Butterfield, Jeremy. (2012). Laws, Causation and Dynamics at Different Levels. Interface Focus, 2(1),
101-114.
Cahill, Michael T. (2012). Defining Inchoate Crime: An Incomplete Attempt. Ohio State Journal of
Criminal Law, 9, 751-760.
Cohen, G. A. (1989). On the Currency of Egalitarian Justice. Ethics, 99, 906–944.
Crew, Michael H. (1988). Should Voluntary Abandonment Be a Defense to Attempted Crimes? American
Criminal Law Review, 26(2), 441-462.
Daniels, Norman. (2012). Reasonable Disagreement about Identified vs. Statistical Victims. Hastings
Center Report, 42(1), 35-45.
Davidson, Donald. (2001). Essays on Actions and Events. Oxford: Oxford University Press.
Diamond, Peter. (1967). Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparison of
Utility: Comment. Journal of Political Economy, 75(5), 765-766.
Doyle, Charles. (2020). Attempt: An Overview of Federal Criminal Law. Congressional Research Service
R42001.
Enoch, David. (2017). Hypothetical Consent and the Value(s) of Autonomy. Ethics, 128(1), 6-36.
Fabre, Cecile. (2015). War Exit. Ethics, 125, 631-652.
Ferguson, Pamela R. (2014). Moving from Preparation to Perpetration? Attempted Crimes and Breach of
the Peace in Scots Law. Ohio State Journal of Criminal Law, 11, 687-700.
Fleurbaey, Marc and Voorhoeve, Alex. (2013). Decide as you would with full information! An argument
against ex ante Pareto. In N. Eyal, S. A. Hurst, O. F. Norheim, and D. Wikler (Eds.), Inequalities in
health: Concepts, measures, and ethics (pp. 113-128). New York: Oxford University Press.
Frick, Johann. (2013). Uncertainty and Justifiability to Each Person: Response to Fleurbaey and
Voorhoeve, In N. Eyal, S. A. Hurst, O. F. Norheim, and D. Wikler (Eds.), Inequalities in health:
Concepts, measures, and ethics (pp. 129-146). New York: Oxford University Press.
Frick, Johann. (2015). Contractualism and Social Risk. Philosophy and Public Affairs, 43(3), 175-223.
Frick, Johann. (2015b). Treatment versus Prevention in the Fight against HIV/AIDS and the Problem of
Identified versus Statistical Lives. In G. Cohen, N. Daniels, and N. Eyal (Eds.), Identified versus Statistical
Lives: An Interdisciplinary Perspective (pp. 182-202). Oxford: Oxford University Press.
Fried, Barbara. (2012). Can Contractualism Save Us from Aggregation? Journal of Ethics, 16, 39-66.
Frowe, Helen. (2018). Lesser-Evil Justifications for Harming: Why We’re Required to Turn the Trolley.
The Philosophical Quarterly, 68(272), 460-480.
Gallow, J. Dmitri. (2019). A subjectivist’s guide to deterministic chance. Synthese.
Gauthier, David. (1986). Morals by Agreement. Oxford: Oxford University Press.
Gordon-Solomon, Kerah. (2019). Should Contractualists Decompose? Philosophy and Public Affairs,
47(3), 259-287.
Hall, Jerome. (1960). General Principles of the Criminal Law. Indianapolis, IN: Bobbs-Merrill.
Hare, Caspar. (2012). Obligations to Merely Statistical People. Journal of Philosophy, 109(5/6), 378-390.
Hare, Caspar. (2016). Should We Wish Well to All? Philosophical Review, 125(4), 451-472.
Hasnas, John. (2012). Attempt, Preparation, and Harm: The Case of the Jealous Ex-Husband. Ohio State
Journal of Criminal Law, 9, 761-770.
Henning, Tim. (2015). From Choice to Chance? Saving People, Fairness, and Lotteries. Philosophical
Review, 124(2), 169-206.
Hoeber, Paul R. (1986). The Abandonment Defense to Criminal Attempt and Other Problems of
Temporal Individuation. California Law Review, 74, 377-427.
Hohfeld, Wesley. (1919). Fundamental Legal Conceptions, W. Cook (Ed.). New Haven: Yale University
Press.
Holm, Sune. (2018). The Luckless and the Doomed: Contractualism on Justified Risk-Imposition. Ethical
Theory and Moral Practice, 21, 231-244.
Holmes, Oliver W. (1881). The Common Law. New York: Dover.
Hooker, Brad. (2001). Ideal Code, Real World: A Rule-Consequentialist Theory of Morality. Oxford:
Oxford University Press.
Horton, Joe. (2020). Aggregation, Risk, and Reductio. Ethics, 130(4), 514-529.
Horton, Joe. (2018). Always Aggregate. Philosophy and Public Affairs, 46, 160-174.
Isaacs, Yoaav. (2016). Probabilities Cannot Be Rationally Neglected. Mind, 125(499), 759-762.
Isaacs, Yoaav. (2014). Duty and Knowledge. Philosophical Perspectives. 28(1), 95-110.
Jackson, Frank. (1991). Decision-theoretic Consequentialism and the Nearest and Dearest Objection.
Ethics, 101, 461-482.
James, Aaron. (2012). Contractualism’s (not so) slippery slope. Legal Theory, 18, 263-292.
Jenni, Karen E. and Loewenstein, George. (1997). Explaining the “Identifiable Victim Effect.” Journal of
Risk and Uncertainty, 14, 235-257.
Johnson, Christa M. (2020). How Deontologists Can be Moderate (and Why They Should Be). The
Journal of Value Inquiry, 54, 227-243.
Kamm, F. M. (2013). Nonconsequentialism. In H. LaFollette (Ed.), The Blackwell Guide to Ethical Theory.
Oxford: Blackwell.
Kamm, F. M. (2001a). Making War (and its Continuation) Unjust. European Journal of Philosophy, 9,
328-343.
Kamm, F. M. (2001b). Morality, Mortality Volume II. Oxford: Oxford University Press.
Kamm, Frances. (2002). Owing, Justifying, and Rejecting. Mind, 111(442), 323-354.
Kant, Immanuel (1785, repr. 1998). Groundwork of the Metaphysics of Morals. Trans. Mary Gregor.
Cambridge, UK: Cambridge University Press.
Keedy, Edwin R. (1954). Criminal Attempts at Common Law: General Principles. University of
Pennsylvania Law Review, 102, 464-489.
Kumar, Rahul. (2015). Risking and Wronging. Philosophy and Public Affairs, 43(1), 27-51.
Kumar, Rahul. (2003). Reasonable Reasons in Contractualist Moral Argument. Ethics, 114, 6-37.
Kymlicka, Will. (1996). Multicultural Citizenship: A Liberal Theory of Minority Rights. Oxford: Oxford
University Press.
Lazar, Seth. (2018). Moral Sunk Costs. The Philosophical Quarterly, 68(273), 841-861.
Lee, Evan Tsen. (1997). Cancelling crime. Connecticut Law Review, 30(1), 117-156.
Lee, Seyoung. and Feeley, Thomas H. (2016). The identifiable victim effect: a meta-analytic review.
Social Influence, 11(3), 199-215.
Lenman, James. (2008). Contractualism and risk imposition. Politics, Philosophy and Economics, 7,
99-122.
List, Christian and Pivato, Marcus. (2015). Emergent Chance. Philosophical Review, 124(1), 119-152.
List, Christian. (2014). Free Will, Determinism, and the Possibility of Doing Otherwise. Nous, 48(1),
156-178.
Mahtani, Anna. (2017). The Ex Ante Pareto Principle. Journal of Philosophy, 114(6), 303-323.
https://doi.org/10.5840/jphil2017114622
McClennen, Edward F. (1990). Rationality and Dynamic Choice: Foundational Explorations. New York:
Cambridge University Press.
McMahan, Jeff. (2015). Proportionality and Time. Ethics, 125, 696-719.
Moellendorf, Darrel. (2015). Two Doctrines of Jus ex Bello. Ethics, 125, 653-673.
Monton, Bradley. (2019). How to Avoid Maximizing Expected Utility. Philosophers’ Imprint, 19(18), 1-
25.
Nagel, Thomas. (1979). Mortal Questions. New York: Cambridge University Press.
Oberdiek, John. (2003). Lost in Moral Space: On the Infringing/Violating Distinction and its Place in the
Theory of Rights. Law and Philosophy, 23(4), 325-346.
O’Neill, Onora. (1990). Constructions of Reason: Explorations of Kant’s Practical Philosophy.
Cambridge: Cambridge University Press.
Otsuka, Michael. (2011). Scanlon and the Claims of the Many versus the One. Analysis, 60, 288-290.
Parfit, Derek. (2011). On What Matters, Vol. 1. Oxford: Oxford University Press.
Parfit, Derek. (2011). On What Matters, Vol. 2. Oxford: Oxford University Press.
Pizzi, William T. (2012). Rethinking Attempt Under the Model Penal Code. Ohio State Journal of
Criminal Law, 9, 771-778.
Quinn, Warren. (1989). Actions, Intentions, and Consequences: The Doctrine of Double Effect.
Philosophy and Public Affairs, 18, 334-351.
Quong, Jon. (2011). Liberalism Without Perfection. Oxford: Oxford University Press.
Radford, Colin. (1966). Knowledge—By Examples. Analysis. 27(1), 1-11.
Ramakrishnan, Ketan H. (2016). Treating People as Tools. Philosophy and Public Affairs, 44(2), 133-
165.
Rawls, John. (1999). A Theory of Justice: Revised Edition. Oxford: Oxford University Press.
Reed, Baron. (2010). A Defense of Stable Invariantism. Nous, 44(2), 224-244.
Reibetanz, Sophia. (1998). Contractualism and Aggregation. Ethics, 108(2), 296-311.
Rodin, David. (2015). The War Trap: Dilemmas of jus terminatio. Ethics, 125, 674-695.
Rodin, David. (2008). Two Emerging Issues of Jus Post Bellum: War Termination and the Liability of
Soldiers for Crimes of Aggression. In C. Stahn and J. K. Kleffner (Eds.), Jus Post Bellum: Towards a Law
of Transition from Conflict to Peace (pp. 53-76). The Hague: T.M.C. Asser Press.
Roeber, Blake. (2018). The Pragmatic Encroachment Debate. Nous, 51(2), 171-195.
Ruger, Korbinian. (2018). On Ex Ante Contractualism. Journal of Ethics and Social Philosophy, 13(3).
Scanlon, T.M. (1998). What We Owe to Each Other. Cambridge, Mass.: Harvard University Press.
Scanlon, T.M. (2003). Replies. Ratio, 16, 424–39.
Scanlon, T. M. (1996). The Status of Well-Being. The Tanner Lectures on Human Values.
Schaffer, Jonathan. (2007). Deterministic Chance? British Journal for the Philosophy of Science, 58, 113-140.
Schelling, Thomas. (1968). The Life You Save May Be Your Own. In S. B. Chase (Ed.), Problems in
Public Expenditure Analysis. Washington, DC: The Brookings Institution.
Setiya, Kieran. (2019). Ignorance, Beneficence, and Rights. Journal of Moral Philosophy, 17(1), 56-74.
Sher, George. (1980). What Makes a Lottery Fair? Nous, 14, 203-216.
Simester, Alfred P. (2014). Attempts: In the Philosophy of Action and the Criminal Law. Book Review.
Mind, 123(492), 1249-1255.
Slote, Michael. (2015). Why Not Empathy? In G. Cohen, N. Daniels, and H. Eyal (Eds.), Identified versus
Statistical Lives: An Interdisciplinary Perspective (150-158). Oxford: Oxford University Press.
Smilansky, Saul. (2003). Can Deontologists Be Moderate? Utilitas, 15(1), 71-75.
Smith, Nicholas JJ. (2014). Is Evaluative Compositionality a Requirement of Rationality? Mind,
123(490), 457-502.
Stalnaker, Robert C. (1980). A Defense of the Conditional Excluded Middle. In W. L. Harper, G. A.
Pearce, and R. C. Stalnaker (Eds.), Ifs. Dordrecht: D. Reidel Publishing Company.
Stephen, J. F. (2014). A History of the Criminal Law of England (three volumes). Cambridge: Cambridge
University Press.
Steuwer, Bastian. (2021). Contractualism, Complaints, and Risk. Journal of Ethics and Social Philosophy,
19(2), 111-147.
Tadros, Victor. (2018). Past Killings and Proportionality in War. Philosophy and Public Affairs, 46(1), 9-
35.
Temkin, Larry. (2002). Equality, Priority, and the Levelling Down Objection. In M. Clayton and A.
Williams (Eds), The Ideal of Equality (pp. 126–61). Palgrave Macmillan: UK.
Temkin, Larry. (2001). Inequality: A Complex, Individualistic, and Comparative Notion. Philosophical
Issues, 11, 327–53.
Thomson, Judith (1990). The Realm of Rights. Cambridge, Mass.: Harvard University Press.
Voorhoeve, Alex (2014). How Should We Aggregate Competing Claims? Ethics, 125, 64-87.
Voorhoeve, Alex and Fleurbaey, Marc. (2012). Egalitarianism and the Separateness of Persons. Utilitas,
23(3), 381-398.
Wallace, R. Jay. (2002). Scanlon’s Contractualism. Ethics, 112, 429-470.
Wasserman, David. (1996). Let Them Eat Chances: Probability and Distributive Justice. Economics and
Philosophy, 12, 29-49.
Williams, Bernard. (1976). Moral Luck. Proceedings of the Aristotelian Society, 50, 115-135.
Williams, Glanville. (1991). Wrong Turning on the Law of Attempts. Criminal Law Review, 416-425.
Williams, J. Robert G. (2010). Defending Conditional Excluded Middle. Nous, 44(4), 650-668.
Williamson, Timothy. (2000). Knowledge and its Limits. Oxford: Oxford University Press.
Wu, Patrick. (2022). Aggregation and Reductio. Ethics, 132(2), 508-525.
Yaffe, Gideon. (2010). Attempts: In the Philosophy of Action and the Criminal Law. Oxford: Oxford
University Press.
Zamir, Eyal and Medina, Barak. (2010). Threshold Deontology and Its Critique. In Law, Economics, and
Morality (pp. 41-56). New York: Oxford University Press.
Abstract
Broadly speaking, there are two non-consequentialist ways to determine whether a particular action is morally wrong or impermissible. First, one can examine the principle or maxim under which that action falls. For instance, according to a Kantian framework, an action is wrong if the maxim it falls under cannot be properly universalized. Second, one can examine the properties of the action itself. For instance, according to at least some contractualists, an action is wrong when that action cannot be properly justified to others.
In my dissertation, I argue that each of these approaches is flawed, especially when applied to cases involving diachronic or dynamic choice. In such cases, I argue, we cannot determine whether an action is morally wrong or impermissible without examining the diachronic action of which it is a part. For instance, according to the approach I favor, to determine whether a certain war maneuver is permissible, we need to examine the larger war of which it is a part. Similarly, to determine whether a particular stage in a medical treatment is permissible, we need to examine the complete medical treatment of which it is a part.
Unless our non-consequentialist theory is sensitive to the peculiarities of diachronic choice, it will miss something of fundamental importance about actions that are intuitively grouped together as components of a plan or steps in the pursuit of a larger objective. Perhaps the most significant revision that is necessary, I argue, is that the permissibility of an agent’s action may depend on the actions that she has already performed and the evidence she had available when she performed them.
My dissertation proceeds in five chapters:
Chapter 1: When Right Actions are Wrong in Principle
In Chapter 1, I argue against those who determine the wrongness of an action based on the principle under which it falls. My primary target in this chapter is Scanlon’s contractualism, though I also show how the basic critique might apply to the contractarian and Kantian positions as well. According to the contractualist, an action is morally wrong if any set of principles that disallowed it could not be reasonably rejected by suitably informed and sufficiently rational individuals. At least as developed by Scanlon, an individual needn’t be either fully informed or ideally rational for her rejection of such principles to be relevant; instead, the individuals who evaluate candidate principles simply need to be ‘reasonable,’ having decent access to information, their reasons, and the strength of their reasons. Chapter 1 argues that the potential cognitive limitations of reasonable people undermine the claim that an action is wrong if any set of principles that disallowed it could not be reasonably rejected. More specifically, the fact that reasonable individuals may have false empirical beliefs, alongside the fact that such principles need to control for reasonable mistakes in their application, ought to lead us to doubt that actions prohibited by contractualist principles are invariably wrong.
Chapter 2: Procedural Chances and the Equality of Claims
In Chapter 2, I begin my critique of the view that we can determine whether an action is impermissible in a purely forward-looking manner. In particular, Chapter 2 critiques views according to which an action is wrong if it cannot be properly justified to others on the basis of the agent’s evidence or certain objective chances at the time of her action. My two primary targets in this chapter are Johann Frick’s ex ante contractualism and Caspar Hare’s analysis of what it means to “wish well” to others. Broadly speaking, according to these views, we can determine whether an action is wrong by examining the likelihood that others will be harmed by it, conditional on the agent’s evidence, or by examining certain objective chances that obtain at the time of the action. Chapter 2 both points out the shortcomings of these views and introduces the notion of procedural chances. In the context of my dissertation, the notion of procedural chances is important because it suggests that, at least in certain contexts, determining whether an action is wrong requires examining the procedure of which the action is a part.
Chapter 3: The Dynamics of Moral Thresholds
In Chapter 3, I continue my critique of views that determine the permissibility of an action in a purely forward-looking manner. The specific topic of Chapter 3 is what many have called “Threshold views,” according to which it is permissible to violate moral constraints when the expected value of the outcome is high enough. I first show that, if our version of the Threshold View fails to take into account actions that the agent has already performed, it will result in judgments of permissibility that are sensitive to seemingly arbitrary differences between synchronic and diachronic actions. In order to avoid this unappealing result, we require an interpretation of a Threshold view according to which certain sequences of actions are viewed as part of a morally relevant group. This broader perspective generates compelling results not only in cases that involve moral thresholds, but also with respect to much larger questions dealing with the proportionality of war.
Chapter 4: The Composition of Risk
In Chapter 4, I present and defend a version of ex ante contractualism, which I call Composite Contractualism, according to which the justification of an action is not purely forward-looking. The primary hurdle to adopting Composite Contractualism is that, if we were to do so, it would be difficult to accommodate the intuition that medical experimentation is almost always morally impermissible. My first task in this chapter is to demonstrate that salient attempts by the ex ante contractualist to accommodate our intuitive discomfort with medical experimentation are unsuccessful. In light of this failure, we have two basic choices: first, we can embrace a view such as Composite Contractualism, which would allow us to maintain our intuitions with respect to the permissibility of risky medical treatments; or second, if we are truly convinced that medical experimentation is almost always morally impermissible, we may need to abandon the ex ante contractualist’s moral framework altogether.
Chapter 5: The Culpability of Criminal Attempts
In Chapters 2-4, I discuss several scenarios in which what we ought to do might depend on what we’ve already done. In Chapter 5, I argue that, these results notwithstanding, it is not the case that our moral or legal evaluation of actions ought to be parasitic on the diachronic actions of which they are a part. In particular, at least as a general matter, agents who fail to complete a diachronic action merit a different legal and moral evaluation than those who do complete it. One legal context in which this issue arises is with respect to the treatment of incomplete and complete criminal attempts. In most (if not all) jurisdictions in the US and UK, individuals who perform incomplete attempts can be punished in the same manner as those who commit complete attempts. For instance, someone who aims a gun at someone else (with the intent to kill) but never fires can be charged with the same crime as someone who aims, fires, and misses. In this chapter, I put forward three arguments as to why we should maintain a legal distinction between incomplete and complete attempts, arguments that draw on action theory, normative ethics, and legal doctrine.
Taken together, the five chapters of my dissertation make clear that, at least in certain contexts, determining whether an action is permissible requires us to examine the diachronic action of which it is a part. This discussion gives us strong reason to believe that justification of our actions to others, from a non-consequentialist perspective, will not always be purely forward-looking. If we favor a non-consequentialist theory, then, we will be forced to pay closer attention to those contexts in which neither a general maxim nor a synchronic snapshot will suffice in determining the permissibility of our actions.
Asset Metadata
Creator: Sridharan, Vishnu (author)
Core Title: Getting our act(s) together: the normativity of diachronic choice
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Philosophy
Degree Conferral Date: 2022-08
Publication Date: 05/27/2022
Defense Date: 05/27/2022
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tag: contractualism, diachronic choice, dynamic choice, ex ante contractualism, incomplete attempts, OAI-PMH Harvest, proportionality of war, threshold deontology
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Hawthorne, John (committee chair); Nebel, Jake (committee member); Quong, Jonathan (committee member)
Creator Email: thevish@gmail.com, vishnusr@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-oUC111339150
Unique identifier: UC111339150
Document Type: Dissertation
Rights: Sridharan, Vishnu
Internet Media Type: application/pdf
Type: texts
Source: 20220608-usctheses-batch-945 (batch); University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright. The original signature page accompanying the original submission of the work to the USC Libraries is retained by the USC Libraries and a copy of it may be obtained by authorized requesters contacting the repository e-mail address given.
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email: cisadmin@lib.usc.edu