Units of Agency in Ethics

Alexander Dietz

A dissertation presented to the faculty of the USC Graduate School, University of Southern California, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Philosophy

December 2018

Contents

Acknowledgements
Introduction
Chapter 1: What Should We Do?
Chapter 2: Are My Temporal Parts Agents?
Chapter 3: When Should I Do My Part?
Chapter 4: Unit-of-Agency Dilemmas
References

Acknowledgements

To make the obvious jokes: writing this dissertation was not only a project requiring coordination among various of my temporal parts, but also a collective project, in which a number of genuinely distinct people participated. I owe a debt to lots of people whom I've known during my time at USC, but those I owe specifically for their feedback on material in this dissertation include Mike Ashfield, Stephen Bero, Simon Blessenohl, Renee Jorgensen Bolinger, Mark Bryant Budolfson, Erik Encarnacion, Maegan Fairchild, Jeremy Goodman, Frank Hong, Joe Horton, Nathan Robert Howard, Nicola Kemp, Zoë Johnson King, Nick Laskowski, Matt Leonard, Julia Nefsky, Daniel Pallies, Caleb Perl, Abelard Podgorski, Jacob Ross, Alexander Sarch, Jeff Sebo, Kenneth Silver, Shane Ward, Aness Webster, Christopher Woodard, the editors and referees at Ethics and Philosophy and Phenomenological Research, and audiences at Colorado University, Indiana University, and the University of Rennes. I also owe particular thanks to the members of my dissertation committee: Stephen Finlay, Gregory Keating, Mark Schroeder, Ralph Wedgwood, and my chair, Jonathan Quong. I am especially grateful to Mark and Jon, both of whom have been exceptionally generous to me with their time and words of encouragement.

Introduction

Everyone has decisions to make. There are many, and very different, things you could do right now, over the next year, or over the rest of your life. So what should you do? This is the kind of ethical question that we often, for good reason, focus on. But it's not the only kind of question we can, or do, ask.

This question asks what you, an individual person, should do. But, of course, you are not alone, and you are not the only one who has choices to make. Just as there are many things you could do, there are many things that you and others could together do. You and I could go for a walk, or start a business. You and your fellow citizens could agree to change your drug laws, or your healthcare system. You and the rest of humanity could decide to liberalize trade, or cut greenhouse gas emissions. It is natural, then, to ask not only what we each individually should do, but what we together should do. In other words, it is natural to expand the unit of agency that we focus on in our ethical thinking, from individual persons to groups of people. [1]

And we do, in fact, often ask ethical questions from one of these alternative standpoints in private, professional, and public life. But these questions have so far gotten relatively little systematic attention in philosophical ethics. [2] Ethical theories and principles about how we should act are typically stated in terms of what each individual person ought to do. They claim that each of us ought to do what makes the world as good as possible, or to treat others as ends in themselves, or to act in accordance with virtue. Sidgwick even goes so far as to explicitly define ethics as the study of what each individual ought to do. [3]

[1] I take the term "unit of agency" from Hurley 1989: Chap. 8.
[2] Recent discussions include Isaacs 2014, Collins and Lawford-Smith 2016, and Wringe 2016. For an earlier defense of the view that groups have obligations, see Jackson 1987. For a critique, see Parfit 1988.
[3] Sidgwick 1981: 1.
My goal in this dissertation is to make progress toward clarifying and answering the most fundamental questions about how the existence of multiple units of agency might affect our thinking about what we should do. Do we really have reason to accept that there can be things that we together should do, over and above what each of us individually should do? If the units of agency can expand, to include groups, can they also contract, to include something even "smaller" than the individual person? And if there are multiple units of agency of ethical significance, how do they relate to each other? How might what we together ought to do affect what each of us ought to do? Could there be situations where what we together ought to do and what each of us ought to do conflict?

The main ethical concept that I will focus on in this dissertation is that of a normative reason for action. When some fact gives me a normative reason to do something, this means that there is something which contributes to its being the case that I should or shouldn't do this thing. It is important to distinguish normative reasons for action from motivational reasons for action, or what does or might motivate some agent to do something. When I talk about reasons for action, I intend to refer to normative reasons. I will also be bracketing questions about what we should do in conditions of ignorance. In my examples, I will be assuming that all parties know the relevant facts, except where otherwise noted.

Another important distinction, which we will return to throughout the dissertation, is the distinction between agent-relative and agent-neutral reasons. [4] Whenever someone has a reason to act in some way, I will assume, we can speak of this reason as deriving from some feature of the "outcome" of that act, very broadly construed. For example, an agent might have a reason to act in such a way that someone's life is saved, that that very agent is benefitted, or that that act is the fulfillment of a promise. A reason is agent-relative, in my sense, when it derives from a feature that involves the agent or the action as such, and can be specified only using terms like "that agent"; otherwise, it is agent-neutral. For example, an agent's reason to act in such a way that that very agent is benefitted, or that this act is the fulfillment of a promise, would be agent-relative, whereas a reason to act in such a way that someone's life is saved would be agent-neutral.

[4] This distinction comes from Nagel 1970: 90-91. The terms "agent-relative" and "agent-neutral" are due to Parfit 1984: 24. My definitions are slightly different from those offered by Nagel and Parfit, but I take it that they correspond to the same distinction.

It is also important to distinguish questions about what we have reason to do from questions about responsibility. The notion of responsibility, as I understand it, is "backward-looking." That is, if I am responsible for some action or outcome, this means that it is connected to me in such a way that I would be the appropriate target of certain kinds of retrospective response to it, such as being blamed, praised, thanked, punished, or rewarded. In contrast, the notion of what we have reason to do, as I understand it, is "forward-looking." That is, the primary "role," in some sense, of our thoughts about whether we have most reason to do something is to help us to decide, before the fact, whether to do it.
Again, the purpose of this dissertation is to explore how we might rethink the limits of the unit of agency from which we assess what to do. That is, rather than focusing only on what each individual person ought to do, we will be exploring how we might broaden our focus to groups of people. We will also be exploring how we might narrow our focus to something "smaller" than the individual person: in particular, we will focus on the idea of persons as having temporal parts, or what Olson calls subpersons. [5] This leaves us with three possible units of agency. What exactly do I mean by a "group" of people, and what is a "temporal part" of a person?

[5] Olson 2010.

By a "group" of people, I simply mean a set or collection of persons. Literally any set of persons is a group in this sense, whether or not they know each other, are alive at the same time, or have any other interesting property in common. The only additional assumption I will make about the people in my examples is that they are capable of acting together: that is, that they are capable of performing a joint action that involves the individual actions that the examples describe. So I will not be referring only to special kinds of social groups, such as Jews or Americans. [6]

[6] See Ritchie 2013 and 2015, and Epstein 2017.

It is also important to distinguish groups considered as sets of people from entities such as clubs, states, corporations, or sports teams. These are social entities, in the sense that they exist in virtue of a certain kind of social recognition. These entities are not groups in my sense. For example, a team can exist even when its members have changed. So, for example, we need to distinguish the team from the set of the particular people who are currently on the team. In addition, some social entities, such as the governor's office, are not even groups in the sense of having multiple members. [7] So while we might also be interested in whether social entities have reasons for action, this is quite different from the question of whether sets of persons have reasons for action.

[7] For a more detailed discussion of social entities, see Epstein 2015.

The notion that persons have temporal parts, or subpersons, comes from a broader view in metaphysics that persons, and physical objects generally, are extended not only in space, but also in time. [8] Proponents of this view claim that, if I spend the morning in my apartment and the afternoon in the library, I am never strictly speaking entirely in either my apartment or the library. Rather, there is a temporal part of me, my morning-part, that is in my apartment, and another part, my afternoon-part, that is in the library.

[8] For a general introduction to temporal parts, see Sider 2001.

What is it for a group or a subperson to have a reason for action? You might think that I have already answered this question, since I have already explained what I mean by "reason for action," "group," and "subperson." However, there is still some room for disagreement. In particular, we might be deflationists about the reasons of groups or of subpersons: we might accept that these things have reasons, but claim that what it is for them to have reasons is reducible to facts about the reasons of individual persons. For example, we might think that what it is for a group to have a reason for action just is for each member of the group to have a reason to do her part.
(Relatedly, we might allow that what it is for groups or subpersons to have reasons is not reducible to facts about the reasons of individual persons, but claim that group and subpersonal reasons derive entirely from individual reasons.) The view that I will be defending, however, is not deflationary in this way. In fact, I will discuss how the reasons for action possessed by individual persons might sometimes derive from collective reasons, rather than the other way around. And I will argue that subpersons, persons, and groups might even have reason to act in incompatible ways: that it might be true that I should do something that one of my present subpersons should not, or that we together should do something but that I should not do my part.

I have said that persons, groups, and subpersons represent three possible "units of agency." This term signals the idea that the question of whether groups or subpersons might, in addition to individual persons, have reasons for action seems to be importantly different from the question of whether, say, dolphins or robots might have reasons for action. [9] This is because groups and temporal parts seem to be related to individual persons in a special way. The relevant relation, I take it, is one of metaphysical priority. It seems that collective action should be explainable, in some way, in terms of facts about individual persons. That is, we together do what we do in virtue of facts about such things as what each of us does and what mental states each of us has, and the relations between them. (I will not try to develop a more specific account here. [10]) Likewise, as I will discuss in Chapter 2, if we think that subpersons can act, then we have reason to think that the actions of individual persons are to be explained in terms of the actions of subpersons.

[9] Hurley 1989: Chap. 8.
[10] Leading accounts of collective action include those offered by Gilbert 1990 and Bratman 1999. For more recent treatments, see Gilbert 2014 and Bratman 2014. See also List and Pettit 2011.

The idea of these three related units of agency can be compared to the idea of smaller and larger possible units of concern. Many philosophers have discussed how it is appropriate to compare our own present interests, our interests over the course of our lives, and the interests of others. But, I will argue, it is not enough to rethink the limits of our concern. We should also rethink the limits of the unit of agency from which we assess what to do.

Again, it seems to me that we commonly make claims about what groups ought to do in ordinary life. So why does the idea that groups have reasons for action need to be defended? We might be skeptical about whether collective action is possible. But this is something that seems to be taken for granted by common sense, and I will not try to defend it. Instead, I take it that the most important challenges facing this idea are the following. First, we might be skeptical about why we really need this notion: what we can gain in our ethical thinking by appealing to collective reasons, in a non-deflationary sense, rather than merely appealing to individual reasons and other ethical ideas. Second, we might think that groups should not be taken to have their own reasons on the grounds that even if there is such a thing as collective action, it is metaphysically less fundamental than individual action. I will address the former challenge in Chapter 1, arguing that collective reasons can help us to explain the judgments we want to make about certain kinds of cases.
I will address the latter challenge in Chapter 2.

The way in which I will argue that subpersons have reasons for action will be importantly different from the way in which I will defend collective reasons for action, because the idea of subpersons faces different challenges. First, whereas common sense seems to take it for granted that groups can act, the idea that subpersons are agents in their own right is much less familiar, so I will argue that we do in fact have significant reasons to accept this idea. Second, it might seem that the view that subpersons are agents with their own reasons for action would come with unacceptable normative implications, so I will also offer a response to this concern.

I will proceed as follows. In Chapter 1, I will offer arguments in support of the idea that groups have reasons for action. In Chapter 2, I will offer arguments in support of the idea that subpersons have reasons for action, and will argue that subpersons can help us to indirectly support the case for collective reasons. Chapters 3 and 4 will deal with how the reasons for action bearing on different units of agency might affect one another, focusing on the relationship between groups and individuals. In Chapter 3, I will argue that we should accept a moderate view about when individuals ought to do their parts in what the group ought to do. In Chapter 4, I will argue that there can even be cases where what individuals ought to do is incompatible with what the group ought to do, and that these cases are analogous to traditional moral dilemmas in important, and troubling, ways.

Chapter 1: What Should We Do?

1. Introduction

Do groups of people, and not only their individual members, have reasons for action? Are there things that we together ought to do? In this chapter, I will argue that the answer is yes.

I will focus on three important kinds of reasons: reasons to make outcomes better, to avoid harming people in certain ways, and to benefit ourselves. I will assume, as our commonsense ethical thinking seems to suggest, that we have individual reasons to act in these ways. However, I will argue that there is also significant evidence that we have collective reasons to do so. This evidence comes from focusing on cases. In order to capture our intuitions about these cases, I will argue, it will not be enough to appeal to individual reasons. However, we can capture these intuitions in a simple and satisfying way by appealing to collective reasons of the relevant kinds. If these arguments are successful, then we will not only be in a position to conclude that collective reasons exist, but also that there are collective reasons of these three particular kinds.

After making these arguments, I will discuss what else the arguments might tell us about what collective reasons are like. First, while these arguments suggest that there are important parallels between individual and collective reasons, I will argue that we have grounds to be cautious about supposing that these reasons are parallel across the board. I will next discuss the question of which groups have reasons for action. If we accept collective reasons on the basis of my arguments, I will argue that we should think that the class of groups that possess reasons may be much larger than is sometimes assumed.
2. Reasons to make outcomes better

Start with our reasons to make outcomes better. Most people believe that each of us has some reason to do what will make things better. I claim that, in addition, we together have such reasons.

I will support this claim with the following case, introduced to the philosophical literature by Gibbard. [11] Suppose that you and I each have two options, with outcomes as follows:

              You do A         You do B
  I do A      Second-best      Bad
  I do B      Bad              Best

Suppose further that the second-best outcome would be much worse than the best outcome, and that the bad outcomes would be extremely bad. We can call this Gibbard's Case. [12]

[11] Gibbard 1965: 214-215. Regan 1980 also extensively discusses a version of this case.
[12] For a more specific version of this case, imagine that you and I are industrialists who have been asked to release our supplies of a certain chemical into the atmosphere (option B), as doing so will reverse the effects of global warming. If we both refuse (option A), global warming will proceed unchecked, causing millions of deaths. However, this chemical has certain dangerous properties which are neutralized only when there is enough of it dispersed throughout the air. The chemical will be harmless if we both release our supplies. But if only one of us does, this will cause changes to the environment far more catastrophic than the effects of global warming.

As Parfit writes, it is "in some sense obvious" that, other things equal, we should both do B and bring about the best outcome. [13] Likewise, it seems to me in some sense obvious that we should not both do A, and bring about the second-best outcome. But in what sense are these claims correct?

[13] Parfit 1988: 11.

On the account I will defend, these claims are correct in the sense that we together ought to carry out the course of action in which we each do B. We have decisive reasons as a group to carry out this course of action, in virtue of the fact that we would together be making things better. And we together have decisive reasons not to do A, in virtue of the fact that we would together be making things worse than they could be. [14]

[14] In using Gibbard's Case to support the idea of collective reasons to make outcomes better, I am following a suggestion that Parfit 1988 makes, but ultimately rejects. I discuss two of his reasons for rejecting this idea below. Killoren and Williams 2013 pursue a similar strategy.

This account, I will argue, has significant advantages over two alternative ways of accounting for Gibbard's Case. On the first such alternative, we might try to show that we each ought to do B, and not A. Second, we might make claims not about how we ought to do B and not A in the usual sense, but rather about other things, such as about what merely ought to happen, or about our motives. I will start by considering these alternatives and showing how they fall short. I will then show that the appeal to collective reasons can avoid these problems and can thus provide a more satisfying account of the sense in which we should both do B, and not A.

Let's start by considering noncollectivist accounts of the first type. On these accounts, our sense that "we should both do B, and not A" should be understood not as a claim about what we as a group should do, but rather merely as equivalent to the conjunction of the purely individualistic claims that I ought to do B, and not A, and that you ought to do B, and not A.

How could we explain why each of us should do B, and not A? Note that it would not be enough to point to the effects of what each of us would be doing. For suppose that we both do A. And suppose further that each of us is stubborn and would do A regardless of what the other does. In that event, we would each be making things as good as possible, given what the other is doing.
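The structure of this point can be made concrete with a minimal sketch (my illustration, not part of the dissertation; the numeric values are arbitrary stand-ins for the stipulated ranking of outcomes, with higher numbers better). It checks both perspectives on the case: assessed as a group, our both doing B is uniquely best; assessed by individual effects, with the other's act held fixed, each of us does best by matching whatever the other does.

# Gibbard's Case, with the outcome ranking encoded as illustrative
# numbers (higher is better): "Best" is much better than "Second-best",
# and the "Bad" outcomes are extremely bad, as stipulated above.
outcomes = {
    ("A", "A"): 5,     # Second-best
    ("A", "B"): -100,  # Bad
    ("B", "A"): -100,  # Bad
    ("B", "B"): 10,    # Best
}

# Assessed as a group: the course of action in which we each do B
# uniquely makes things best.
print(max(outcomes, key=outcomes.get))  # ('B', 'B')

# Assessed by individual effects, holding the other's act fixed: each
# of us does best by matching whatever the other is doing. So if we are
# both stubbornly doing A, each of us is making things as good as
# possible, given what the other is doing.
for your_act in ("A", "B"):
    my_best = max(("A", "B"), key=lambda my_act: outcomes[(my_act, your_act)])
    print(f"If you do {your_act}, the act with the best effects for me is {my_best}")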
14 In using Gibbard’s Case to support the idea of collective reasons to make outcomes better, I am following a suggestion that Parfit 1988 makes, but ultimately rejects. I discuss two of his reasons for rejecting this idea below. Killoren and Williams 2013 pursue a similar strategy. 11 So appealing to our individual effects would not explain why we should not do A. However, we might find some alternative explanation. For example, we might claim, with some rule consequentialists, that each of us ought to do what, if everyone did it, would have the best consequences. However, even if we can find a plausible explanation of some such kind, all accounts according to which each of us ought to do B, and not A, are open to the following objection. Suppose that you will do A. In that event, my doing B would make the outcome extremely bad. So it seems clear that I should not do B. But the accounts we are considering claim that “we should do B, and not A” in the sense that each of us should do B, and not A. These accounts therefore claim that I should do B, even though this would lead to disaster. That is unacceptable. Rather than making the general claim that each of us ought to do B and not A, we might instead merely claim that I ought to do B if you do B, but ought to do A if you do A. But again, this would not help to explain in what sense we should not both do A. Let’s turn, then, to accounts of the second kind, which make claims not about the rightness or wrongness of our doing A or B, but rather about other aspects of the case. First, in some versions of this case, it might be true that we would do B just in case one of us suggests that we do so. As Parfit points out, in these versions of the case, each of us would then have acted in the wrong way by failing to make such a suggestion, in virtue of its consequences. 15 However, this account will not succeed in other versions of the case. If we are each determined to do A regardless of what the other says, then, given the other, none of our prior choices would make things worse. Next, it might be claimed that if we are each determined to do A even if the other were willing to do B, then this reveals something bad about our motives or character. 16 And this, it might be suggested, can explain our feeling that if we both do A, something has gone wrong. However, this seems inadequate. It seems obvious, not just that we should not be the sort of people who would do A, but also simply that we should not do A. 17 15 Parfit 1988: 21-22. 16 See Parfit 1988: 22. 17 See Killoren and Williams 2013: 296. This objection also applies to cooperative utilitarianism, a theory that Regan 1980 proposes in order to address Gibbard’s Case. Roughly, this theory requires each of us to cooperate, with whoever else is cooperating, in producing the best outcome. In order to “cooperate,” we must not only act in the right ways but also have the right motivations. If in Gibbard’s Case we both do A because we are both uncooperative, then we will have failed to satisfy Regan’s theory, since we will have failed to have the right 12 Another alternative is to claim that when we say that “we ought to do B,” we are using “ought” in what is called the evaluative sense, rather than the deliberative sense. 
Whereas the deliberative "ought" is closely tied to particular agents, we use "ought" in the evaluative sense to make more impersonal claims about what would ideally be the case, such as "there ought to be less misery." [18] On this suggestion, in claiming that "we ought to do B," what we mean is that "it ought to be the case that we do B." This is roughly equivalent to claiming that it would be a good thing if our both doing B is what happens.

[18] See Schroeder 2011.

However, this too seems inadequate. It might seem sufficient to claim that it would be a good thing if our both doing B is what happens if this were out of our control. But again, I am assuming in this and other cases that we are capable of intentionally acting together. While it might be out of my control whether we do B, it is not out of our control. So it seems inadequate merely to think of our failing to do so as unfortunate. [19]

[19] As Jackson writes of different examples, "It is evident that something wrong happens" in these cases, "but more than that is evident: something wrong is done. (It would be quite wrong to think of either case as being one of a natural misfortune, like a flood)." Jackson 1987: 100.

Finally, reconsider the collective reasons account. Again, this account claims that we together ought to carry out the course of action in which we each do B in virtue of the fact that we would together be making things better. And we together have decisive reasons not to do A, in virtue of the fact that we would together be making things worse than they could be. These claims offer straightforward and what seem to me to be very plausible explanations of both in what sense we should do B, and not A, and why. In addition, they do not have counterintuitive implications. As I will argue later, the claim that we together ought to act in some way does not imply that each of us ought to do his part, no matter what. So we can accept that we together ought to do B without claiming that I ought to do B even if you do A. Finally, this account offers claims about what we have decisive reasons to do now, rather than merely about our previous acts, our motives, or what ought to happen.

I conclude, then, that Gibbard's Case provides significant evidence in favor of the idea that we have not only individual reasons to make outcomes better, but also collective reasons to do so.

3. Reasons not to harm

Consider next our reasons not to harm other people in certain ways, such as by killing them. It might seem that there is some prima facie reason to be skeptical that reasons of this sort would apply equally to groups. Unlike our reasons to bring about good outcomes, our reasons not to harm others might derive from the relations in which we would stand to other people if we acted in these ways. And there might be important differences between the ways in which I can relate to someone and the ways in which a group of people can relate to someone.

However, I will now argue that groups, and not only individuals, have reasons not to harm people. I will support this claim with the following case. Consider

Firing Squad: You and I each shoot some innocent victim, who then dies. Our bullets both contribute to this person's death. However, either shot, by itself, would have killed. [20]

Finally, suppose again that we are each stubborn and would shoot regardless of what the other does.

[20] This is based on a case described in Parfit 1984: Sec. 26.
Now, because any one of our contributions would be enough, it is true of each of us that whether our victim is injured or killed does not depend on what we do. Nevertheless, in this case, it seems obvious that, in some sense, we are doing something seriously wrong.

On the account I will defend, if we both shoot, we will together be doing something wrong, either in intentionally carrying out this course of action together, or in failing to stop ourselves from doing so individually. For we would together be killing this person. This fact gives us as a group a strong reason to avoid this course of action.

Again, it might be claimed that we can account for the wrongness of what we would be doing purely in terms of our individual duties and reasons for action. I will consider two possible accounts of this kind. I will then argue, again, that the collective reasons account enjoys significant advantages over these alternatives.

One purely individualistic account could go as follows. First, even if our victim's death does not depend on what either one of us does, it could be argued, at least one of us must still have killed this person—that is, at least one of us must in fact have caused this person's death. Other things equal, we might believe, it is wrong to kill someone, even if we do not hasten this person's death. So, we might claim, if I turn out to be the killer, then this makes what I have done wrong. And you may have acted wrongly insofar as you risked being the killer. Or we might each have caused this person's death, and so we might each be killers.

However, even if killing that does not hasten death is objectionable, it seems less serious than the other kind. Suppose that two hospital patients are dying of some disease. There is only one drug that would have any effect. It would cure this disease, but it would also kill these patients painlessly at the exact moment that the disease would have done so. However, the drug would also make their organs safe for transplantation. While the patients have no objection to donating their organs, they refuse to take this drug. But if their doctor gives them this drug anyway, she could save someone else's life. We might normally find killing unacceptable, even when it would prevent other deaths. But it at least seems open to question whether the doctor should give her patients this drug. So it seems that our reasons not to perform even two killings that do not hasten death must be weaker than our reasons not to perform a normal killing. And if this is so, then even if, in Firing Squad, we both turn out to be killers, the fact that we are committing two killings that do not hasten death does not seem to be enough to capture what wrong is being done. Again, what we are doing seems to be seriously wrong.

It might next be suggested that even if we want to explain this case solely in terms of individuals' reasons for action, we do not have to ground these reasons in facts that are only, in a narrow sense, about these individuals. Parfit uses a firing-squad case to support the following claim:

Even if an act harms no one, this act may be wrong because it is one of a set of acts that together harm other people. [21]

[21] Parfit 1984: Sec. 26.

Parfit's proposal does not explicitly make claims about what groups ought to do. This proposal thus allows us to appeal to facts about what we together do without relying on the unfamiliar notion of collective reasons for action.
In this way, we can think of this proposal as occupying a middle ground between a purely individualistic account of Firing Squad, and a collective-reasons account of the kind I will describe below. However, it is unclear whether this middle ground is tenable. For it is unclear whether this proposal, unaided by an appeal to collective reasons not to harm, can offer a convincing explanation of why it would be wrong for each of us to shoot. On this proposal, if we both shoot, then:

(1) I will be acting wrongly, because we will together be harming others.

If we accepted the claim that we have collective reasons not to harm others, we could expand this to become:

(2) I will be acting wrongly, because we will together be harming others, and so will together be acting wrongly.

But if we reject (2), then (1) seems undermotivated. It seems unclear why my action should be criticizable in virtue of its part in some larger activity, if there were nothing wrong with that larger activity itself.

It might be replied that we can criticize our larger set of acts simply by saying that it is a bad thing that these acts are performed. On this suggestion, the wrongness of my shooting derives from the fact that what I did would be a part of something bad that happens, rather than of something wrong that is done. However, this seems implausible. Suppose that I suffer from a genetic condition that causes my body to produce too much of certain chemicals. My favorite food also contains these chemicals, and if I ate this food, the chemicals from it would combine with the chemicals from my body to produce painful symptoms. But because my body already produces more than enough of this chemical to cause these symptoms, eating this food would not make me feel any worse. It seems very implausible that these facts could make it wrong for you to offer me this food. This case suggests that the harms that we produce together with other people must be criticizable in some way that does not apply to the harms we produce together with natural or inanimate causes. And it seems to me that the most natural criticism we could make is that if we together harm other people, we will together be acting wrongly.

This leads us to the collective-reasons account of Firing Squad. Again, on this account, we together have a strong reason to avoid shooting. For if we both shoot, we will together be killing our victim. (This is true whether or not this killing will count as an intentional collective action.) Moreover, our combined actions will hasten this person's death. Since our combined actions do hasten our victim's death, this account seems capable of capturing what is seriously wrong about what we are doing. In addition, unlike the Parfit proposal, this account offers a straightforward and plausible explanation of why what we are doing is wrong.

I conclude, then, that Firing Squad provides us with significant evidence in favor of the idea that we have reasons not to harm people in certain ways, not only as individuals, but also as groups.

4. Self-interested reasons

Consider finally our reasons to benefit ourselves. Here parity between individual and collective reasons might seem especially unlikely. Cases in which I benefit myself, and cases in which we together benefit ourselves, differ with respect not only to the agent but also to the beneficiary. And if groups can be said to have a welfare at all, it might be thought, this must be a very different sort of thing from the welfare of an individual.
Despite these differences, I will now argue that groups, as well as individuals, may have self-interested reasons. This claim, I will argue, is supported by the case of the Prisoner's Dilemma, where we must each choose whether to benefit the other at some lesser cost to ourselves. This case has the following structure:

                   You benefit me                You harm me
  I benefit you    Second-best for each          Best for you, worst for me
  I harm you       Best for me, worst for you    Third-best for each

Suppose finally that, unlike in Firing Squad, the relevant harms here would not involve killings or other kinds of actions thought to be specially prohibited.

Discussions of the Prisoner's Dilemma often assume that we cannot currently communicate and must each decide whether to harm the other. But suppose instead that we can communicate and agree to harm each other. This may seem, in some sense, crazy, at least to some of us. It may seem that, in some sense, we must have some reason not to harm each other, given that there is another course of action that would be better for both of us. If this is right, then how can we explain in what sense this is true, and why?

Note first that we could not explain why we have some reason not to harm each other in virtue of the reasons that we each have to do what is in our own interest. Whatever you do, benefitting you would make me worse off.
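The dominance structure behind this point can be made concrete in the same illustrative style as before (again my sketch, not the dissertation's; the numbers are each party's ranking of the four joint outcomes, 1 best to 4 worst, read off the table above):

# Prisoner's Dilemma: my ranking of the four joint outcomes
# (1 = best for me, 4 = worst for me), per the table above.
# Your ranking is the mirror image.
my_rank = {
    ("benefit", "benefit"): 2,  # second-best for each
    ("benefit", "harm"):    4,  # best for you, worst for me
    ("harm",    "benefit"): 1,  # best for me, worst for you
    ("harm",    "harm"):    3,  # third-best for each
}

# Self-interest alone: whatever you do, I rank harming you above
# benefitting you, so harming comes out better for each of us.
for your_act in ("benefit", "harm"):
    my_best = min(("benefit", "harm"), key=lambda my_act: my_rank[(my_act, your_act)])
    print(f"If you {your_act} me, self-interest favors my choosing to {my_best} you")

# Yet the course of action in which we both benefit is better for both
# of us than the one in which we both harm (rank 2 beats rank 3 for
# each): the sense in which agreeing to harm seems self-destructive.
assert my_rank[("benefit", "benefit")] < my_rank[("harm", "harm")]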
We might next claim that we each have altruistic reasons not to harm the other. If my acting in some way would make someone worse off, we might claim, this fact in itself gives me a reason not to act in this way. This suggestion is very plausible. However, it does not seem to be enough to capture what is going wrong. What we are doing seems to be not merely unfriendly, but self-destructive.

Next, as in the discussion of Gibbard's Case, we might focus on prior acts, or claim that what happens is merely unfortunate, or reveals something defective about our characters or motives. However, we might think that in choosing to harm each other rather than benefitting each other, what we are doing is itself wrong. And as before, these suggestions do not support the idea that we are making the wrong choice.

Finally, we could claim that we together have a reason of some kind not to carry out the course of action in which we harm each other, since this would be worse for both of us. Such an account might offer a straightforward and plausible way to make sense of the idea that we have some reason not to harm each other, and why. In addition, such an account might make sense of the idea that we have some reason not to harm each other not only for altruistic reasons, but also because doing so would be self-destructive.

Now, to develop this proposal, it might be suggested that the same ultimate self-interested reasons apply to groups and individuals alike. In other words, it might be suggested, our underlying principle should be that each agent has a reason to do what benefits that agent. On this view, our collective reasons to benefit ourselves would be reasons to benefit us as a group. However, philosophical theories of group welfare have not yet, to my knowledge, been developed as thoroughly as theories of individual welfare. And again, given the important differences between individuals and groups, it might turn out that the welfare of a group is a very different sort of thing from the welfare of an individual. So until theories of group welfare are more fully developed, it seems premature to claim that groups and individuals have the same fundamental self-interested reasons.

But even if groups do not have the same reason to benefit themselves that individuals have, this does not mean that they do not have self-interested reasons at all. Instead, I propose that we can take the Prisoner's Dilemma to suggest that a similar reason applies to them. In particular, I propose, we can simply claim that when acting in some way would be better for all of us, this fact gives us a collective reason to act in this way.

5. Are collective reasons like individual reasons?

In the last three sections, I have given arguments in support of the idea that groups have reasons for action. In the rest of this chapter, I will discuss what these arguments imply about what collective reasons are like.

My arguments suggest not only that groups have reasons for action, but that they have reasons falling into three particular categories: reasons to make outcomes better, reasons not to harm, and self-interested reasons. These categories parallel three important categories of individual reasons. This might lead us to suspect that, in general, there might be full parity between individual and collective reasons: that the very same reasons might apply to groups and to individuals.

To defend a full-parity thesis, it might be argued that our ethical theory should apply to all agents in the same way. Insofar as a group is an agent, it must have the same basic reasons for action as any other agent. While what groups and individuals have reasons to do might often differ in particular cases, this will merely be due to differences in the circumstances they happen to face. [22]

[22] Collins and Lawford-Smith 2016 defends a parity view.

However, there also seem to be fundamental differences between the nature of an individual person and the nature of a group, and not merely in their circumstances. And at least two such differences seem to have normative importance.

First, individuals are capable of certain mental states, including beliefs, desires, guilt, pleasure, pain, and continuous conscious experience, mental states which many people take to be closely linked to our reasons for action. However, it might be pointed out, it is controversial whether or in what ways groups have mental states of these kinds. And it is certainly doubtful whether all groups who can act together have such mental states.

Second, individuals and groups do not seem to be capable of the same kinds of relationships. For example, individuals might have reasons for action related to love, friendship, and family relationships that could not apply to groups. In addition, since groups, unlike individuals, have members, certain kinds of groups, such as states or corporations, might have obligations of justice toward their members that could not apply to individuals.

Unless it can be shown that these differences are either illusory or unimportant, I conclude that we should be skeptical of the claim that the very same reasons apply to groups as apply to individuals. Therefore, to fully understand what we together ought to do, we would have to figure out which individual reasons do not apply to groups, and which reasons groups have that are unique to them. However, I will not pursue this project in this dissertation. Instead, I will continue to focus on the three categories of reasons we have been discussing so far.

6. Which groups have reasons?
Next, if we accept that groups can, in principle, have reasons for action, a natural next question is which groups qualify. In particular, we might wonder, do groups need a certain amount of organization, or structure, in order to have reasons for action? Or might reasons for action also be possessed by unstructured groups – such as, say, the collection consisting of Queen Elizabeth and me, or humanity as a whole?

Collins has argued that only groups with certain kinds of structure could be capable of having duties or obligations, and her argument naturally extends to suggest a similar view about reasons for action. [23] Here is a reconstruction of Collins's argument, extended to cover reasons for action:

P1. Reason implies can: an entity can have a reason to perform some action only if that entity can intentionally perform that action.

P2. An entity can intentionally perform some action only if that entity has a decision-making procedure, "a process that takes in reasons and produces aims and instructions."

C1. Therefore, an entity can have a reason to perform some action only if that entity has a decision-making procedure.

P3. A group has such a procedure only if enough members have agreed to the use of a mechanism for determining collective aims and assigning individual roles for pursuing those aims.

C2. Therefore, a group can have a reason to perform some action only if enough members have agreed to the use of such a mechanism. [24]

[23] Collins 2013.
[24] Ibid.: 231-232, 236.

Similarly, Lawford-Smith writes that the "conditions for collective agency are supposed to capture what it would take for a group to be able to fulfill obligations," and that these conditions include requirements that the members of the group have accepted a set of goals and a procedure for assigning roles. [25]

[25] Lawford-Smith 2015: 231-232.

P2, however, seems dubious. I think that I am capable of performing intentional actions, but I do not think I have any kind of general standing procedure for deciding what to do. The same seems true of groups. For example, it seems perfectly reasonable to think that you and I could go for a walk, even if we do not have any kind of general procedure for how to make decisions together. It might be that when an individual or group agent intentionally does something, the agent must accept some specific aim and instructions for how to pursue it, but it does not seem that the agent needs to follow a general procedure in order to do this.

Suppose that Collins and Lawford-Smith grant that intentional action does not require a decision-making procedure. Still, they might argue:

P1. Reason implies can: an entity can have a reason to perform some action only if that entity can intentionally perform that action.

P4. Only agents can act. [26]

C3. Therefore, only agents have reasons for action.

P5. An entity is an agent only if it either (1) has a decision-making procedure, (2) has psychological states, or (3) exhibits rationally explicable behavior.

C4. Therefore, only groups which meet at least one of these conditions are agents.

C5. Therefore, only groups which meet at least one of these conditions have reasons for action.

[26] Collins 2013: 231.

List and Pettit have argued that certain structured groups are able to meet these conditions, and so are candidates for having normative properties such as reasons for action. [27] It might seem doubtful, however, that unstructured groups will be able to meet these conditions.

[27] List and Pettit 2011.
However, this argument seems to rely on an equivocation. An "agent" might be something that is capable of acting intentionally, or something that does in fact act intentionally. On the former reading, P4 is analytic, but P5 is open to question. As Killoren and Williams write, we intuitively judge that groups are able to intentionally act in certain ways even when they do not meet these conditions. [28] For example, suppose that two strangers pass each other in the park, and never see each other again. In this case, it seems intuitive that these people could have spontaneously decided to take a walk together, even though it might seem that, as it stands, we have no grounds for attributing to them any collective decision-making procedure, psychological states, or rationally explicable behavior.

[28] Killoren and Williams 2013: 303-305.

Alternatively, suppose that by "agent" we mean something that does in fact act intentionally. On this reading, P5 is plausible, but P4 is open to question. It is plausible that the strangers in the park do not in fact perform any intentional collective actions, but again, it is still plausible that they could.

I have been resisting arguments for the view that only structured groups can have reasons for action. But do we have any positive grounds for thinking that unstructured groups can in fact have reasons for action? We do. Recall that in the arguments I offered above in support of collective reasons, the only assumption I have been making about the people in my examples is that they are capable of intentionally acting together. [29] For example, in Gibbard's Case, you and I are capable of collectively acting in such a way that we both do B. Now, there are plausibly some conditions that you and I will need to meet in order to be capable of acting together in this way. For example, maybe we each have to be in a position to intend to do her part in a larger course of action, and maybe this needs to be common knowledge. But it does not seem that we as a group will need to possess any significant structure. We might be perfect strangers, who just happen to have found ourselves in this situation. If we know what is going on, and we can communicate with each other, then plausibly, we will be able to agree to both do B, and our doing B could then count as something that we intentionally did together. If, when we keep this in mind, we are still persuaded that we have a collective reason to do B, this suggests that reasons for action can be possessed by unstructured groups. [30]

[29] See Introduction.
[30] Compare Held 1970.

The above discussion also implies that the class of groups that have reasons for action may be much larger than we might have thought. The number of collections of people who could act together in ways that would make the world better, harm others, or benefit themselves is potentially huge. Plausibly, Queen Elizabeth and I are in a position to collectively act in ways that meet these conditions (if she would just return my emails). The same seems true of much of humanity as a whole, and many overlapping groups in between.

7. Conclusion

In this chapter, I have argued that there is significant evidence to support the idea that groups, like individuals, have reasons for action. In particular, I have argued, groups have reasons to make outcomes better, to avoid harming others, and to do what benefits their members.
While these reasons parallel three important categories of individual reasons, we have reason to be cautious, I have suggested, about supposing that collective reasons parallel individual reasons in general. However, I have also suggested that reasons are possessed by a much larger class of groups than has sometimes been assumed.

Chapter 2: Are My Temporal Parts Agents?

1. Introduction

In Chapter 1, I argued that groups of people, and not only individual persons, have normative reasons for action. In this chapter, we will consider another possible departure from the ordinary focus of our ethical thinking. We can think of the question of whether we should recognize groups as having reasons for action as the question of whether we should expand the limits of the ethically relevant unit of agency. The question we will be interested in here is: if these limits can expand, can they also contract? If in the group we find a "bigger" unit of agency, might there also be a "smaller" unit of agency? That is, might it be the case that, just as the actions of groups consist in the actions of their individual members, the actions of individual persons themselves consist in the even more metaphysically basic actions of some further unit of agency, which is itself an appropriate subject of ethical assessment?

What might this unit of agency be? There are several possible candidates, but I will here focus on one. [31] According to a widely accepted view in metaphysics, persons, like physical objects generally, are extended not only in space, but also in time. As a result, they have not only spatial parts, but temporal parts, or subpersons. [32] For example, one subperson is twenty-year-old-me, a temporal part of mine that comes into existence when I turn twenty, and that ceases to exist when I turn twenty-one. I will explore the idea of seeing the subperson as a further unit of agency.

[31] Other candidates include putative sub-personal agents that the person might be said to be constituted by at any given time, rather than only across time. For example, Hedden notes that in cognitive science, it is common to think of the mind as composed of semi-autonomous subsystems or "modules," which we might think of as distinct agents. Hedden 2015: 198-199.
[32] Again, I take this term from Olson 2010. See also Johnston 2017. Whereas I am focusing primarily on the idea of seeing subpersons as normative agents, Olson and Johnston focus on the idea of seeing subpersons as normative patients, beings whose interests we persons ought to take into account. For a more general introduction to temporal parts, see Sider 2001.

The notion of treating beings like this as agents is not new. As Butler wrote, on Locke's psychological theory of personal identity, "our present self is not, in reality, the same with the self of yesterday, but another like self or person coming in its room, and mistaken for it; to which another self will succeed tomorrow." [33] Now, Butler saw this as a reductio of Locke's theory. Yet the idea that our present, past, and future "selves" might be thought of as distinct agents is one that has struck a number of philosophers as compelling. [34] And the notion that persons have temporal parts, or subpersons, offers us one way of making sense of this idea.

[33] Butler 1975: 102.
[34] For example, see Jackson 1987: 103-106; Parfit 1984: 92-93; Hurley 1989: 136-148; Sebo 2015a and 2015b; and Hedden 2015: 6-7.

I have two aims in this chapter.
My first aim is to clarify the motivations and drawbacks of recognizing subpersons as a locus of ethical evaluation even more basic than the individual person. I will argue that we have good reasons for finding the view that subpersons are genuine agents with reasons for action attractive. As we will see, this view also faces serious challenges, but I will argue that they are not insurmountable. My second aim is to argue for a conditional claim: if we do recognize the subperson as an ethically relevant unit of agency, then this can strengthen the case for recognizing the group as an ethically relevant unit of agency.

I proceed as follows. In Section 2, I discuss the idea of subpersons, and how they might relate to persons. In Section 3, I discuss what reasons we have for thinking of subpersons as agents. In Section 4, I discuss whether the reasons of persons and their subpersons could come apart. In Section 5, I discuss how the view that subpersons are agents bears on the question of whether groups have reasons for action. Section 6 concludes.

2. Subpersons as agents

Again, the view that persons have temporal parts, or subpersons, is implied by the more general view that physical objects are extended not only in space, but also in time. This view is called perdurantism. Perdurantists claim that, if I spend the morning in my apartment and the afternoon in the library, I am never strictly speaking entirely in either my apartment or the library. Rather, there is a part of me, my morning-part, that is in my apartment, and another part, my afternoon-part, that is in the library. Perdurantism stands in contrast to the endurantist view that persons, and objects generally, are not extended in time, but are instead wholly present whenever they exist. While the debate between these views remains unresolved, perdurantism is widely accepted, because it is thought to help solve a variety of metaphysical puzzles. [35]

[35] Sider 2001: Chaps. 4-5.

Perdurantists typically claim that we have temporal parts corresponding to every period of time that we exist, from momentary "time-slices" to longer "segments," like my morning-part, or my October-part, or my 1990s-part. [36] This means that there are a number of possible versions of the view that subpersons can be agents, depending on which of these subpersons we have in mind. I will initially leave this matter open.

[36] Olson 2007: 101.

Next, I noted earlier that, when we are thinking about ethics, we naturally focus on individual persons like you and me as the relevant agents. But, I suggested, we might also consider the possibility that subpersons are agents with their own reasons for action. These claims now require some clarification. In making these claims, I was taking for granted certain assumptions about what individual persons like you and me are like. What we should say is that the agents we ordinarily focus on in our ethical thinking are individual persons like you and me, as we normally understand them. For example, I was assuming that individual persons like you and me can, these days, normally expect to live for around 70 or 80 years.

The reason why we need to make such seemingly obvious assumptions explicit is that the view we are interested in here may call them into question. That is, this view may call into question two things: what sorts of beings count as persons, and what sorts of beings you and I really are.
First, there is a case to be made that if subpersons were agents of the sort capable of having normative reasons for action, they would have to qualify as persons. According to Locke, a person is "a thinking intelligent Being, that has reason and reflection, and can consider it self as it self, the same thinking thing in different times and places." [37] But plausibly, a being could count as an agent, and one subject to ethical assessment, only if it had the sorts of capacities of thought and reflection that Locke mentions, and so would meet this definition of a person. [38]

[37] Locke 1975: 335.
[38] A natural question this raises is whether groups would also have to qualify as persons in order for the group to be an ethically relevant unit of agency. I will not pursue this question here.

Second, if we end up accepting that the subpersons of the long-lived beings we normally take ourselves to be are thinking, reflective agents, we might wonder whether we are actually these subpersons. In fact, Sider has argued that we are; and Olson has argued that if subpersons existed, then since a person and her present subpersons would all be in indistinguishable epistemic circumstances, it seems unclear how you could ever tell which one you were. [39]

[39] Sider 2001: 188-208; Olson 2002: 192.

For ease of exposition, I will simply stipulate that by "person," I mean to refer to beings that can normally expect to live for around 70 or 80 years, and when I talk about "subpersons," I mean to refer to the temporal parts of beings with that average lifespan. I will also continue to assume that we are persons, or the beings that can normally expect to live for around 70 or 80 years, and will, for the purposes of this paper, set aside Sider and Olson's important challenges to this assumption. [40]

[40] For critical discussion of Sider's view, see Olson 2007: 125-128. As Olson notes, the epistemic problem raised by the idea of subpersons parallels epistemic problems for certain influential views about personal identity, and there are several proposed solutions to these parallel problems. See Olson 2002. Some of these solutions, including the externalist solution defended in Brueckner and Buford 2009, and the linguistic solution defended in Kovacs 2016, may also be helpful in the present context. In addition, we may find the problem less worrisome if we think that persons and subpersons would have corresponding reasons for action, an issue that I will discuss in Section 4.

In a moment, I will discuss what reasons we might have for thinking that subpersons are agents. However, recall that our interest in this question is motivated by the more general question of whether there is a unit of agency relevant to ethics whose actions are even more metaphysically basic than those of the person. Even if we do end up accepting that subpersons are agents, why should we think their actions are more metaphysically basic than those of persons?

If we are endurantists, then we might still be happy to talk about temporally distinguished "selves" as existing and acting. We might think that, when I talk about me-as-a-child or me-as-an-adolescent, this is just a convenient shorthand for talking about the kind of person that I was at those times, rather than genuinely distinct beings. [41] Or we might be happy to accept that such beings exist in a more substantial sense, but claim that a given such being will, as long as it exists, be identical with the person.

[41] Compare Brink 1997: 111.
However, if endurantism is true, it seems there is little motivation for thinking that the actions of these beings would be metaphysically prior to the actions of persons. After all, endurantists claim that when I raise my arm, I myself am wholly present. So, at least initially, there doesn't seem to be any reason to doubt that I myself directly perform this action. 42

In contrast, if we are perdurantists, we will likely be happy to claim that the actions of subpersons are prior to the actions of persons. After all, perdurantists claim that when I raise my arm, only a tiny part of me is wholly present. So how can I take credit for raising my arm? I count as raising my arm, perdurantists can claim, only in virtue of the fact that I have a temporal part who raises his arm. In other words, the action of the subperson is metaphysically prior to the action of the person. Of course, I might have many temporal parts that are simultaneously raising their arms. The most metaphysically basic actions, presumably, would be those of whichever are the shortest-lived parts that count either as raising their arms, or as performing even finer-grained actions in which the raising of my arm consists.

42 However, matters may turn out to be more complex on closer inspection. In particular, certain versions of endurantism, including the accounts described in Hawthorne 2006: Chap. 5 and in Miller 2006, may be friendlier to priority claims of the sort in question.

3. Why think that subpersons are agents?

Suppose we accept the perdurantist view that persons have temporal parts, or subpersons. How can we decide whether these entities should qualify as agents?

Now, we might think that we could decide whether subpersons are agents by applying a more general theory of agency. For example, suppose that we think that some being is an agent just in case it is capable of performing actions, and that some being counts as performing an action just in case its behavior is caused in the right way by its beliefs and desires. And suppose we think that a being can be an agent of the sort that can be subject to ethical assessment only if it meets certain further conditions, such as the capacity to reflect or deliberate. Then, we might think, to tell whether a subperson is an agent of the right sort, we just need to tell whether it meets these conditions.

Such theories might help us to rule out some subpersons as candidates for agency. For example, reflection and deliberation, it seems, will always take at least some amount of time. So if an agent needs to exist throughout at least one such process, then time-slices will just be too short-lived to count as agents. But to assess whether longer-lived parts count as agents, we would still need some way to tell whether it makes sense to attribute things such as beliefs, desires, and behavior to a subperson, rather than only to the person.

In the rest of this section, I will describe several more promising arguments that we can use to motivate the idea that we should treat subpersons as agents. While I will not try to show that any of these arguments are conclusive, I will try to show that there are compelling reasons at least to take the idea of subpersons as agents seriously. One strategy for arguing for the agency of subpersons starts from the observation that many subpersons are intrinsically just like beings whom we would count as agents.
For example, consider a subperson of yours, S, that we define as existing for some five-year segment of your adult life. We can imagine a "mirror person," M, who comes into existence with intrinsic bodily and psychological characteristics qualitatively identical to S's, continues to have parallel such characteristics over the next five years, and at the end of five years, is annihilated. M, I take it, would clearly count as an agent. So we might try to use this observation to support the idea that S is also an agent. 43

One way of developing this strategy would be to claim that agency is an intrinsic property. In that case, if M is an agent, then since S is intrinsically just like M, S must also be an agent.

Alternatively, we might try to use this strategy not to support the conclusion that subpersons are agents strictly speaking, but to support the conclusion that it would be appropriate to focus on them, and to treat them like agents, in our ethical thinking. In particular, we might think that this strategy does not show that subpersons are agents, because we might deny that agency is an intrinsic property. We might think, following Sider, that properties relevant to agency, such as being conscious, are maximal. When a property is maximal, in Sider's sense, this entails that something can fail to be F in virtue of the fact that it is a large part of an F. 44 As a result, if being conscious is maximal, then being conscious is not intrinsic, because whether something is conscious can depend on what it is part of. However, although Sider argues that being conscious is an extrinsic property, he also claims that there is an associated intrinsic property, which we can call being conscious*, which differs from it only in being non-maximal. In other words, to be conscious just is to have the intrinsic property of being conscious* while failing to be a proper part of a larger conscious* object. 45 So if M is conscious, then S, being intrinsically just like M, must at least be conscious*. And following Merricks, we might wonder why we should care about consciousness rather than consciousness*. 46 Similarly, even if we deny that subpersons are agents, we might wonder why we should care about agency rather than agency*. 47 If we have no reason to care about agency rather than agency*, then even if subpersons are not agents, they may still be beings which it is appropriate to focus on, and to treat relevantly like agents, in our ethical thinking.

In addition to these rather abstract and theoretical considerations, we can also see how thinking of subpersons as agents can represent an attractive perspective by consulting our intuitions about three more specific kinds of cases.

First, it is intuitive to treat subpersons as distinct agents when we think about time travel examples. In particular, suppose that you travel back in time to meet your past self. 48

43 Johnston 2017: 618-623; see also van Inwagen 1981. It is important to note that a similar strategy could be used to argue that many of our spatial parts are agents. For example, it seems that there could be a person who was intrinsically just like my "nose-complement": the part of me that includes all of me except for my nose.
44 Sider 2003: 139; see also Burke 1994.
45 Ibid.: 147-148.
46 Merricks 2003: 155.
47 Compare Johnston 2017, esp. 627.
48 To get a better sense of what this situation might feel like, I recommend watching the film Looper (2012).
Again, the notion of subpersons offers us one way to make sense of this intuitive notion of past or future "selves." And in this situation, it seems, it would be natural for third parties to treat and think of your two selves, and for your two selves to treat and think of each other, as different agents. For example, if your selves were in the mood for cards, it seems they could choose a two-player game—they would not be confined to solitaire. And I suspect that it would be natural for your two selves to treat each other as different agents even if they knew that they were the past and future selves of the same person, and even if your later self could remember what it was like to be your earlier self. Examples like this can make it more natural to think of our lives as a succession of multiple agents, a fact that we ignore only because our past, present, and future selves are often not salient to each other.

Second, as Parfit has argued, it can be natural to view present and past selves of some person as distinct agents when this person has undergone significant psychological changes. 49 This can be true both when we are thinking about our own present and past selves, and when we are thinking about the present and past selves of other people. For example, Parfit imagines a nineteenth-century Russian nobleman who, in his idealistic youth, signs a document which will automatically give away the vast estates he is due to inherit, and which can only be revoked with his wife's consent. The nobleman then asks his wife to promise not to give her consent. Over time, his ideals fade, and he asks his wife to revoke the document. But, Parfit suggests, his wife could plausibly regard herself as not released from her promise, on the ground that there is an important sense in which the man who she is now married to is not the man to whom she made the promise. 50 And this remains plausible even though the psychological changes are not sufficient for us to say that there has been a break in personal identity, say, because there has been no break in the nobleman's psychological continuity. This would seem to support at least treating some subpersons—those which are separated by significant psychological differences—as distinct agents. 51

Third, it is natural to see our past, present, and future selves as distinct agents when we want to carry out a project that we would need to work on over an extended period of time. For example, in these cases, you might find yourself thinking of your future self as a different person whom you'll need to communicate with and motivate to do his part, and you might motivate your present self to do your part out of fairness to all the work that your past self put in. As Brink writes, subpersons do and must "interact and co-operate, much as distinct individuals interact and co-operate in groups, in order to plan and execute long-term projects and goals." 52

In fact, the idea that subpersons are agents in their own right should not be as surprising as it may have initially sounded. Many of us are already intuitively drawn to treat our past or future selves as distinct agents.

49 Parfit 1984: 302-306.
50 Ibid.: 326-329.
51 Shoemaker 1999 defends a version of this view.
52 Brink 1997: 114.
The idea that we can treat our past or future selves as distinct things from our present selves seems to be one that we often find natural even when we are not engaged in philosophical reflection. Indeed, Strawson claims that a number of people, including himself, systematically fail to identify with their past or future selves. For example, Strawson writes, "it seems clear to me, when I am experiencing or apprehending myself as a self, that the remoter past or future is not my past or future, although it is certainly the past or future of GS the person. … I have no significant sense that I—the I now considering this question—was there in the further past." 53 But even those of us who do normally identify with our past and future selves have often had the experience of seeing these as distinct agents.

53 Strawson 2004: 433. Similar experiences have been reported by patients suffering from brain damage, and from patients suffering from post-traumatic stress disorder. See Nichols 2014: 140-143.

4. Would subpersons and persons have conflicting reasons for action?

Suppose we are convinced that subpersons are agents in their own right. In that case, why should we think that the subperson is an ethically relevant unit of agency—that subpersons have their own reasons for action?

If we accept that subpersons are agents, it seems that this fact alone gives us prima facie reason to think that they are capable of having reasons for action. After all, it seems that the actions of a subperson could easily have morally salient features, in virtue of which we can evaluate them as reasonable or unreasonable. For example, if something a subperson could do would cause someone unnecessary pain, this would seem to provide this agent with an excellent reason against doing so. And if we accept subpersons as agents, then it seems we should accept that they would have all the same capacities as short-lived persons, including the relevant abilities to reflect and evaluate. So it seems that they will indeed have the necessary capacities to have reasons for action.

Even if you are willing to grant that subpersons have their own reasons for action, however, you might be skeptical that it really matters whether we make this claim. We already know that you shouldn't cause unnecessary pain, for example. Does it really matter if we also claim that your today-part shouldn't cause unnecessary pain?

Whether it would make a difference in practice to recognize subpersons as having reasons for action depends on what we think these reasons would be. For example, we might think that subpersons simply inherit their reasons from persons: that is, that some fact gives some subperson a reason to perform some action only if, and because, this fact gives the whole person a reason to do the corresponding (metaphysically less basic) action. 54 On this conservative view, it indeed wouldn't seem to matter whether we thought about the reasons of subpersons, or only the reasons of persons.

I will now argue, however, that we have reason to accept a more radical view, on which the reasons of subpersons can in fact come apart from the reasons of the corresponding persons.

To start, it is widely accepted that many of our reasons for action are agent-relative. That is, many of our reasons are reasons to bring about outcomes that other agents do not have the same reasons to bring about, because these outcomes are specially related to us in some way.

54 Or we might think that persons inherit their reasons from subpersons. After all, we might think, if my actions are to be explained in terms of the actions of my subpersons, then it seems natural to think that my reasons for action should likewise be explained in terms of the reasons of my subpersons. However, I will not pursue this suggestion here.
For example, I may have special reasons to do what promotes my own welfare, satisfies my own desires, keeps my own promises, and so on, as opposed to the welfare, desires, or promises of others. But if the subperson and the broader person are two different agents, then we should expect these agents to have different agent-relative reasons for action: we would expect the subperson to have special reasons to care about the subperson's own welfare, desires, or promises, and the person, by contrast, to have special reasons to care about the person's own welfare, desires, or promises.

To make things more concrete, let's focus on our reasons to promote our own welfare. Suppose that you—the whole person—have to choose whether to make some sacrifice now for the sake of some greater future reward. Other things equal, it seems, you ought to make the sacrifice, since doing so would be in your overall self-interest. However, think about things from the perspective of one of your present subpersons, who will not live long enough to enjoy the reward. Does she have the same reasons to make the sacrifice? If we think that agents have special reasons to care about their own welfare, then it seems that the answer should be no. The present subperson's sacrifice may be in the interest of the whole person, but it is not in the subperson's own interest, and this fact seems to have at least prima facie normative significance.

We can also put the point in terms of compensation. The person has a special kind of justification for making the sacrifice, because she herself will be compensated for the sacrifice: she will be not only the benefactor but also the beneficiary. But the present subperson will not be compensated. 55 So there is a justification for the sacrifice which is available to the person but not to the present subperson. So it seems plausible that the person's reasons for making the sacrifice should be stronger than the present subperson's reasons. As Nagel has written, prudential intuitions "reflect an individual's conception of himself as a temporally persistent being." 56

Now, there is an obvious reply to this argument. Even if we grant that a person and her temporal parts are distinct agents, it may be argued, they are nevertheless very closely connected. Each subperson is connected to the other parts of the person through whatever relations we think are involved in personal identity over time, such as certain relations of physical and/or psychological continuity and/or connectedness. And even when some fact of special significance to the person as a whole does not apply directly to a given subperson, but rather to the person's past or future subpersons, the subperson in question will still have a special connection to this fact through her relations to these other subpersons, and so should have special reasons to care about it. For example, it may be argued, even though the sacrifice will not benefit the present subperson herself, it will benefit a future subperson with which she is related, so she has reasons to give this benefit special weight after all.

55 See Brink 1997: 110-111, Olson 2010: 263, and Johnston 2017: 623-624.
56 Nagel 1970: 58.
However, even if this is right, we still have grounds for thinking that the reasons of the subperson can come apart from the reasons of the person. This is because, even if the physical and psychological relations that a given subperson bears to past or future subpersons give her special reasons to care about the things happening with these subpersons, it remains the case that these things are not happening to her, and this fact plausibly still makes a difference to her reasons.

The notion that the reasons of persons and subpersons can come apart in these ways—that my present subpersons might often have reason not to make sacrifices for my future interests, or keep promises that I made in the past—seems to represent a radical departure from our ordinary ethical commitments. It also seems to come with further troubling implications. Again, Olson has argued that since a person and her present subpersons seem to be in indistinguishable epistemic circumstances, it seems unclear how you could ever tell which one you were. As a result, if what persons and their subpersons ought to do can often come apart, this seems to imply that it is even harder than we thought to know what we ought to do. These challenges might strike us as compelling reasons to reject subpersons as agents. 57

On the other hand, there is one way in which the unorthodox normative implications of the view that subpersons are agents can actually give us a reason to accept this view. It is a familiar fact that many of us are often tempted not to make sacrifices for our long-term self-interest, keep promises made long ago, and so on, and we often in fact succumb to these temptations. Why is this? Maybe we just aren't being reflective enough to recognize what we ought to do, or maybe our normative convictions are just not strong enough to overcome the attractions of immediate pleasure or convenience. But if subpersons are agents, then there is a more interesting alternative explanation. Perhaps we simply do not identify with our future or past selves, and so correctly recognize that the subperson that we identify with really shouldn't make the sacrifice or keep the promise. While the person would still be failing to do what she ought to do, this would be at least a partially rationalizing explanation, since it suggests that there is at least one agent in the vicinity who is acting appropriately. 58 (Of course, it would also be less charitable to many actions than the standard view. For example, when prudent sacrifices are made, it would frequently imply that there are at least some agents in the vicinity who are acting inappropriately.)

Similarly, we might argue that the further in the future some benefit will be, the fewer present subpersons will still be around to enjoy it: however many subpersons there are who will be around tomorrow, only some subset of these will be around a year from now. 59 Thus, more of the agents currently sharing my body will have reason to make sacrifices for benefits in the nearer future than would have reason to make sacrifices for benefits in the further future. Another way to look at this is that the agents currently sharing my body will, taken together, have stronger reasons to care about what happens in the more immediate future. This can provide another interesting explanation of a sense in which time bias could be rational. 60

57 There are also at least two other important kinds of challenge which I will not discuss here. First, while I am focused here on the idea of treating subpersons as normative agents, we might think that the motivations for this idea would also support treating subpersons as normative patients, beings whose rights and interests we persons, at least, need to take into account. But treating subpersons as normative patients seems to have a variety of highly counterintuitive implications. For discussion, see Olson 2010: 264-265, and Johnston 2017: 623-624 and ff. Second, Hud Hudson has argued that if we accept a view on which there are many overlapping agents in my vicinity, such as my temporal or spatial parts, we face a challenge which he calls "many-brothers determinism." This is roughly the idea that, whenever one of the other agents in my vicinity performs an action, this seems to entail that I must perform the corresponding action, and so it might seem that we must conclude that I have not acted freely. See Hudson 2001: 39-44.
58 Is the identification with the subperson appropriate? Again, it seems that at least one agent in the vicinity—the subperson in question—will be correctly identifying herself, though many other agents in the vicinity will be incorrectly identifying themselves. However, we should also keep in mind that Olson's epistemic challenge calls into question whether even this subperson could be justified in identifying with herself.
A final option is to deny the commonsense assumption that whether something happens to me can make a difference to my reasons. This would allow us to maintain that subpersons and persons always have matching agent-relative reasons. For example, we could agree with Parfit that personal identity is "not what matters." 61 That is, when I am considering whether to benefit some future person, for example, Parfit suggests that what matters is not whether this person will be me, but only the degree to which there is continuity and connectedness between the psychological states that I have now, and the states that this person will have at that future time. On this view, it would make no difference to my agent-relative reasons whether I was a person or a subperson. It is also worth noting that, although this view would deny that subpersons have reasons for action which come apart from those of the corresponding persons, it would not commit us to the conservative view that subpersons simply inherit their reasons from persons. 62 There is no sense in which this view makes the individual person the privileged unit of agency.

Now, it might seem that, if this strategy is successful, this success would come at the cost of forfeiting our reasons for worrying about subpersons in the first place. Again, if the reasons of persons and subpersons always align, then it seems that we have no reason to pay attention to what subpersons ought to do rather than what persons ought to do. However, again, Parfit's view would still allow us to deny that there is any privileged role for the individual person as a unit of agency. And as I will argue next, there is another reason why recognizing subpersons as agents matters: doing so can help us to indirectly support the case for thinking that there can be collective reasons for action, or reasons possessed by groups of people.

59 I am assuming here that subpersons must be temporally continuous. For discussion of this issue, see McKinnon 2008.
60 I owe this suggestion to Jonathan Quong. A challenge for this proposal would arise on certain views of the nature of space and time. In particular, we might think that space and/or time are gunky, or infinitely divisible (see Russell 2008). If time is gunky, and if any of my temporal parts which has the right intrinsic properties is an agent, then it seems that there will be both infinitely many agents who will survive long enough to enjoy the benefit, and infinitely many who will not. As a result, we may not be able to say that there are more agents who will not survive. Similarly if space is gunky, and if any of my less-than-full-sized temporal parts (such as my Monday-nose-complement) with the right intrinsic properties is an agent. However, the view that time is gunky is more controversial than the view that space is gunky. So we may be able to avoid this challenge if we accept only full-sized temporal parts, and not spatial parts, as agents.
61 Parfit 1984: Chaps. 12-15. For replies, see Lewis 1983 and Brink 1997.
62 Nor would it commit us to the view, mentioned in an earlier footnote, that persons inherit their reasons from subpersons. On Parfit's view, the reasons of persons and of subpersons would be guaranteed to line up not because there is a dependence relation between them, but rather because these reasons derive from facts about their current psychological states, states which I will always share with any of my present subpersons.
5. How subpersons can support collective reasons

In the beginning of this chapter, I compared two ways in which we might rethink the limits of what we recognize as the ethically relevant unit of agency. Rather than focusing only on the individual person, we could expand outward, by claiming that groups have reasons for action, as we discussed in Chapter 1. For example, we might claim that we collectively ought to reduce our carbon emissions in order to avoid the harmful effects of climate change. Alternatively, we could focus inward, by claiming that subpersons have reasons for action.

I will now discuss another way in which subpersons can make a difference to our ethical thinking. If we focus inward, by claiming that subpersons have reasons for action, this can strengthen the case for expanding outward as well, by claiming that groups have reasons for action. Subpersons, I will argue, can help us strengthen the case for collective or group reasons in two ways. First, they give us the resources to respond to a compelling objection to collective reasons. Second, I will argue that subpersons give us the resources to offer a new positive argument for collective reasons.

On its face, using subpersons to support collective reasons might seem like an odd strategy. After all, while collective reasons may be controversial, the notion that subpersons are agents in their own right is quite radical, and may strike many people, at least initially, as much more implausible. So it might seem odd to use an implausible claim to support a more plausible one. However, there are two reasons for thinking that this strategy may hold some promise. First, even if the idea of subpersons as agents strikes us as initially implausible, I have argued that it can be supported by some compelling motivations. And, importantly, these motivations had nothing to do with the idea of collective reasons. So the idea of subpersons as agents is in fact something that we can appropriately appeal to in order to support collective reasons.
Second, while I am focused here on how subpersons might bear on collective reasons, the idea of collective reasons might also be used to support subpersons. In particular, insofar as we find collective reasons plausible, this might make us more comfortable with the idea that there may be other units of agency. It might make it more natural to expect that if in the group we can find a unit of agency "bigger" than the individual person, then we might also find a unit of agency "smaller" than the individual person. And if collective reasons can help to make the idea of subpersons more plausible, then it could become more reasonable to think that we might be able to rely on subpersons to assuage some of our remaining worries about collective reasons. In this way, the ideas of collective reasons and of subpersons could in fact be mutually supporting.

So in what ways can the idea of treating subpersons as agents support the idea of collective reasons?

First, treating subpersons as agents can help us to answer an important kind of objection against collective reasons. In particular, it might be argued that collective actions cannot really be assessed as reasonable or unreasonable, because collective actions are simply the sum of the various actions performed by the group's individual members. As Parfit writes in his critique of collective reasons,

Just as it is individuals, and not groups, who deserve blame, it is individuals, and not groups, who make decisions. (This is so even when these individuals act together as members of a group. When a group decides what to do, this is not a separate decision, over and above the decisions made by the members. We impute a decision to the group, according to certain rules or procedures, given the only actual decisions, which are those taken by the members.) 63

If a collective action is nothing over and above the various actions performed by the members of the group, it might be claimed, these seemingly metaphysically prior actions must really be the appropriate objects of ethical assessment.

However, if we accept the idea of subpersons as agents, then as I argued earlier, we have good reasons for claiming that the actions of subpersons would be metaphysically prior to the actions of individual persons. In that case, if we want to retain the core commonsense notion that individual persons have reasons for action, we will have to accept that actions which are not metaphysically fundamental can nevertheless be appropriate objects of ethical assessment.

In addition to using the idea of treating subpersons as agents to respond to this objection against collective reasons, I will now show that we can also use this idea to go even further, and construct a new positive argument for collective reasons. The basic outlines of this argument are as follows. Individual persons can have not only reasons to perform a given action at any given time, but also reasons to perform sets of actions over time. But a set of actions performed by an individual person over time can be relevantly similar to a set of actions performed by a group of people. So if an individual person can have reasons to perform a set of actions over time, then a group can likewise have reasons to perform a set of actions. 64

First, why should we think that persons have reasons to perform sets of actions over time, and not merely particular actions at particular times?

63 Parfit 1988: 10.
64 The general strategy of this argument parallels a strategy used by Rovane to defend a view about collective agency. Rovane 1998: 142-150.
Common sense and intuitions about cases suggest that we do indeed have reasons to perform sets of actions. When we are deciding what we should do, we are frequently deciding between sets of actions, rather than between individual actions. We often think not only about what we should do at any given time, but also about what we should do with our day, our week, or even our lives.

In addition to these common-sense judgments, it can also seem to us that we have reasons to perform sets of actions when we think about imaginary cases. For example, suppose that I will face two choices, one now, one later, with outcomes as follows:

                    I later do A2    I later do B2
    I now do A1     Second-best      Bad
    I now do B1     Bad              Best

In this case, it seems plausible that I should perform a certain set of actions, namely B1 and B2, since this will lead to the best outcome. And I should not do A1 and then A2, since this will lead only to the second-best outcome.

Can we make sense of these judgments purely in terms of reasons for particular actions, rather than sets of actions? We might think, for example, that we can understand the claim that I should do B1 and then B2 simply as the conjunctive claim that I should do B1, and also should do B2. But plausibly, I should do B1 only if I will then do B2, since otherwise I will produce the bad outcome. And if all we can claim is that I should do B1 if I then do B2, but should do A1 if I then do A2, this would not explain in what sense I should do the B set and not the A set. 65 So in order to accommodate our intuitions, it seems that we should claim that we have reasons to perform sets as such, and not merely the individual actions in a given set. 66
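To bring out the structure of this argument, it may help to regiment it. The notation below is mine, not the dissertation's: read O(φ) as "I ought to see to it that φ," and O(φ | ψ) as "I ought to see to it that φ, given that ψ will in fact occur."

\[
\text{Intuitive verdicts:}\qquad O(B_1 \wedge B_2), \qquad \neg\, O(A_1 \wedge A_2)
\]
\[
\text{Claims available about particular actions:}\qquad O(B_1 \mid B_2), \qquad O(A_1 \mid A_2)
\]

The conditional claims in the second line are symmetric between the A set and the B set, so they cannot by themselves recover the asymmetric verdicts in the first line. On this regimentation, that asymmetry can be registered only by an ought whose object is the set itself.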
Next, how are sets of actions by an individual person like the actions of a group of people? It may be claimed that every set of actions performed by an individual person is constituted by the actions of various subpersons, which can be seen as agents in their own right. If we see our lives as divided into the lives of subpersons, then when a person intentionally carries out some set of actions over time, we can see that set as the product of coordination between the subpersons. Again, as Brink writes, subpersons do and must "interact and co-operate, much as distinct individuals interact and co-operate in groups, in order to plan and execute long-term projects and goals." 67 Thus, the intertemporal action of an individual person, like the action of a group, consists in the coordinated actions of multiple distinct agents. 68

But if we accept that the action of a person over time is a product of the actions performed by distinct agents, then there may be less distance between individual and collective action than we may have thought. As Hurley writes, "The notions of personal action and a personal unit of agency already allow for the possibility and indeed the normality of the corresponding sense of intrapersonal 'collective' agency." 69

Now, there may also be important disanalogies between the intrapersonal case and the interpersonal case. First, there is an intimate causal connection between the reasons possessed by my subpersons: my present subpersons may often have reasons to do things which will causally influence the reasons that my future subpersons will possess. Second, there are plausibly important constitutive connections between my reasons and those of my subpersons. For example, plausibly, whenever one of my subpersons has a reason to do something that will avoid pain, I must also have a reason to perform the corresponding action.

In contrast, we might think, the reasons present in the interpersonal case are more independent. There is not the same kind of intimate causal connection, we might think, between the reasons possessed by each of the individuals in a group. And we might doubt that there could be the same kind of constitutive connection between the reasons of individual members of the group and the reasons of the group itself. For example, we might think that, even if I have a reason to do something that will avoid pain, this may not mean that the group has any reason to act in a way that will allow me to avoid pain.

However, these differences do not seem to me to jeopardize the analogy. First, there do often seem to be causal connections between the reasons of individuals in a group: I may have reason to do something that will affect what you have reason to do. Second, in the sort of constitutive connection between my reasons and those of my subpersons we saw above—that I must have a reason to avoid pain whenever one of my subpersons does—the relevant reasons are reasons of a special kind: agent-relative reasons. While it is plausible that there is a close connection between the agent-relative reasons of persons and subpersons that has no analogy in the interpersonal case, this does not indicate that there will be any disanalogy in cases where only agent-neutral reasons are at play.

Here is a more intuitive and direct way of arguing that a set of actions performed by an individual person over time can be relevantly similar to a group action. First, we can imagine a case where you seem to have strong agent-neutral reasons to perform a certain set of actions. In particular, suppose you are in a two-stage case of the sort described in the table above. More specifically, consider

Case One: You have been entrusted with a machine that can release a chemical into the atmosphere that would reverse the potentially catastrophic effects of climate change. Unfortunately, activating the machine requires a rather inconvenient two-step process: you will need to press a button now, on your twentieth birthday, and press another button in ten years, on your thirtieth birthday. And if by the end of your thirtieth birthday only one button has been pressed, the machine will release a different chemical that would even further devastate the environment. Luckily, you have an excellent memory and a strong will, so if you now intend not only to press the first button now but also to press the second button in ten years, you can be confident that you will follow through. 70

In this case, it seems that you ought to perform the set of actions in which you press both buttons. Consider next

Case Two: All is as in Case One, except that you do not have a good memory or a strong will. You know that you cannot ensure that you will press the second button in ten years just by intending now to do so.

65 This argument parallels the argument for collective reasons discussed on pp. 9-12.
66 For criticism of the similar idea that sequences are subject to rational evaluation, see Hedden 2015: Chaps. 6-7.
67 Brink 1997: 114.
68 For more comparisons, see Rovane 1998: 142-150.
69 Hurley 1989: 142.
70 This is based on an example offered on p. 10, n. 11.
However, you can ensure that you will press the second button by leaving yourself a reminder.

Again, it seems that you ought to perform the set of actions in which you press both buttons. It might be suggested, plausibly, that what you ought to do is closely connected with what you can form an effective intention to do. 71 But individual action over time often involves "external" mechanisms, like notes to oneself and commitment devices. 72 And apart from the costs involved, it seems plausible that I have no less reason to perform sets of actions that require such mechanisms. Consider next

Case Three: The machine has a different activation method: now, the two buttons simply need to be pressed simultaneously. Unfortunately, the buttons are on opposite sides of the room. Fortunately, on your thirtieth birthday, you will time-travel back to today. At least two of your temporal parts will then meet: twenty-year-old-you, and thirty-year-old-you (that is, the temporal parts that exist just as long as you are twenty and thirty years old, respectively). Twenty-year-old-you will see a time-traveler, thirty-year-old-you, appear at the other end of the room. These two subpersons will then be able to press both buttons simultaneously. Again, twenty-year-old-you will not be able to ensure that thirty-year-old-you presses the second button just by intending now to do so. But twenty-year-old-you will be able to persuade him by talking to him once he appears.

This case, it seems, is relevantly like Case Two. Again, you ought to perform the sequence in which you press both buttons. But you just are the collection of your subpersons, including twenty- and thirty-year-old-you. So it seems that the two subpersons could truly say: "We together ought to press both buttons."

It might be objected that even if we accept that individuals have reasons to perform sets of actions in general, this does not mean we have to accept that you ought to perform the button pressing sequence in Case Three. That is, it might be claimed that while we have reasons to perform sequences in normal cases, the time travel case is not a normal case of individual action over time, and so our reasons to perform sequences do not apply to it. It is true that the introduction of time travel makes this case quite different from all cases of individual action over time that have ever actually been performed. Still, it does not seem to me that this could in itself affect whether a person has a reason to perform a set of actions. For example, suppose that I need to move a couch out from the wall. One way to move the couch would be to move the right side of the couch, then move the left side. Another way to do it would be to move the right side of the couch, then travel thirty seconds into the past, then help my past self out by moving the left side of the couch. This course of action seems just as reasonable as the more standard way of moving the couch. 73 So we should accept that you ought to perform the button pressing sequence in Case Three, and so that your subpersons together ought to press both buttons. Finally, consider

Case Four: Twenty-year-old-you and an older-looking time-traveler from the future meet in the machine room. Since they share an uncanny resemblance, and since the time-traveler's memories are hazy, both initially think that they are temporal parts of the same person.

71 I owe this suggestion to Abelard Podgorski.
72 Compare Sebo 2015a, esp. 137-138.
73 I owe this example to Mark Schroeder.
They discuss the situation, and agree that they together ought to press both buttons. They then learn, however, that they are actually two different people.

Here's the question: Should our judgment change? Should the two parties abandon the claim that they together ought to press both buttons? Intuitively, it seems not. In that case, it seems we should accept that groups of people can indeed possess reasons for action.

To resist this conclusion, we might try to find some relevant difference between Cases Three and Four. But again, we cannot point to the difference that might seem particularly forceful: we cannot claim, that is, that the relevant difference is that the set of actions in Case Four is merely the product of the actions of various distinct agents, because if we recognize subpersons as agents, we will have to say the same about the set of actions in Case Three. What we seem to be left with, then, is to simply appeal to the bare fact that Case Three involves only a single person, whereas Case Four involves a group of distinct persons. But in the absence of some further rationale for why this should make a difference, it is not clear how forceful this is. It would be one thing if your reason for performing the set of actions in Case Three had been a special agent-relative reason. For example, it is at least initially plausible that your two temporal parts together have a reason to do what benefits you, whereas two unrelated agents might not have any such reason. 74 But when the reason that your temporal parts collectively have is merely an agent-neutral reason to produce a better outcome, impartially considered, it is not clear why the fact that they are both parts of the same person should be relevant.

Why have I chosen to use examples involving time travel? These examples help to make vivid the way in which our temporal parts have to cooperate in order to carry out long-term projects, analogous to the ways in which different people cooperate. This cooperation is often hidden, but time-travel examples bring it out, by forcing our temporal parts to see each other face to face.

74 However, recall that on pp. 12-17, I argued that two unrelated agents can in fact constitute a group that possesses agent-relative reasons.

6. Conclusion

In this chapter, I have discussed the idea of treating the temporal parts of persons, or subpersons, as agents with their own reasons for action. This idea, I have argued, has both deep attractions, and deep, but surmountable, challenges. I also argued that if we do accept this idea, this can help us to support the notion that we have collective reasons for action. So we should not rest content with the assumption that the individual person is the ethically relevant unit of agency: we should both broaden and narrow our horizons.

Chapter 3: When Should I Do My Part?

1. Introduction

In the previous two chapters, I have argued that we should rethink what unit of agency it is appropriate to focus on in our ethical thinking. In particular, I have argued that both groups of people, and the temporal parts of people, have normative reasons for action. Suppose that we accept that there are multiple units of agency with their own reasons for action. In the remaining two chapters, I will discuss how these reasons might affect each other. Since I expect that my suggestions about temporal parts will be especially controversial, I will focus on the interactions between collective and individual reasons.
In this chapter, I will discuss the following question: If we together have reasons to act in certain ways, how, if at all, does this affect what each of us ought to do? Do individuals have any group-based reasons for action: reasons to do our parts in what we together ought to do, deriving from the group's reasons to perform this collective act? 75

I will start by discussing why we would think that individuals have group-based reasons at all. While group-based reasons cannot be supported by a tempting theoretical argument, I will argue, they are nevertheless intuitive. However, I will argue that we should in a variety of cases be skeptical about the idea that what individuals ought to do can be affected by group-based reasons.

My argument for this skeptical conclusion will proceed in several steps. I will first argue that we should deny that individuals have group-based reasons to do their part when other members of the group will not cooperate, that we should not allow for an objectionable kind of double-counting, and that we should not allow for group-based reasons in cases where doing one's part would make no relevant difference. I will then argue that these views imply that one important class of collective reasons, those which are agent-neutral, can never affect what each of us ought to do. I will then argue for a skeptical conclusion in an important agent-relative context: in particular, I will argue that we should reject the view that individuals have reasons for action deriving from the fact that they would be participating in causing harm.

2. Why group-based reasons?

Why would we think that individuals have group-based reasons in the first place?

75 I take the phrase "group-based reasons" from Woodard 2008. However, Woodard uses this phrase to refer to reasons an individual might have to do her part in actions that a group containing her could perform, in virtue of the good or bad consequences of those actions, rather than in virtue of the group's reasons for action, which might not all derive from consequences.

It is tempting to think that claims about collective reasons could be correct or interesting only if they ultimately had implications for individual reasons. As Nefsky writes, "What does the claim that the group did wrong amount to? If it does not say that any individual ought, or even had reason, to have acted otherwise, then it doesn't seem to be a normative claim at all." 76 Similarly, Kagan writes that although we might be tempted to appeal to group duties, "it is not altogether transparent how this sort of approach is actually to be applied to specific cases, for it is not altogether clear how such collective moral duties should impinge on the decision making of a given individual." 77

It seems, however, that we would need some explanation of why we should think that claims about the reasons possessed at one unit of agency could be correct or interesting only if they had implications for the reasons possessed at another unit of agency. Why should we privilege the reasons of individuals over the reasons of groups, rather than the other way around?

Luckily, Nefsky suggests such an explanation. Nefsky is considering the claim that in certain cases we should say that the group is acting wrongly, but that no individual is acting wrongly.

76 Nefsky 2012: 7-8.
77 Kagan 2011: 113.
If this claim were correct, Nefsky writes, "then since it is individuals who can be guided by morality and not mereological sums of individuals, morality would be powerless in a wide array of cases in which it should have force." 78

Nefsky's suggestion seems to be that one role that we expect ethical claims to have is to be able, in some important sense, to guide our actions. In particular, we might think, if I believe that I ought to do something, then it seems that this belief is supposed to motivate me to actually do it (in some sense of "supposed to"). As I discussed in the Introduction, this sort of action-guiding role is theoretically important, because it can plausibly explain in what sense claims about what we ought or have reasons to do are thought to be "forward-looking," as opposed to "backward-looking" claims, such as certain kinds of claims about responsibility. But even apart from this theoretical issue, it is natural to be interested in ethical claims, including claims about collective reasons, in large part because of how they might actually influence what we do. As Lawford-Smith writes, when we are tempted to claim that we have collective reasons or obligations to fight climate change or global poverty, we should make sure that we are in a position to "give useful advice that, if acted upon, would see those problems actually being addressed." 79

So claims about collective reasons for action, we might think, should also be able to play this role: if we agree that we together ought to do something, this belief should be able to motivate us to do it. However, a group can't do anything unless its individual members do their parts. So, we might think, in order for our belief about our collective reasons to motivate the group to act, individuals must be able to infer from their belief about the collective reasons that they also individually ought, or at least have some reason, to do their parts. Once the individuals decide that they each have reason to do their parts, then these beliefs can motivate each of them to do so, and so the group will have acted.

However, the claim that our belief about our collective reasons could only motivate individuals to do their parts through first affecting their beliefs about their individual reasons is suspect. After all, it does not seem true in general that individuals are only motivated by their beliefs about what they have reasons to do. So it seems perfectly possible for me to simply decide to do my part as a result of recognizing that this is what the group has reason to do, without the intermediate step of thinking about what I as an individual have reason to do.

Nefsky might want to object that it should not merely be possible for beliefs about reasons to motivate us, but that they should be able to motivate us in the right way, and that the motivation I just described would not be of the right kind. But in order to evaluate this objection, I would like to see what the general proposal would be about in what way reasons should be able to motivate us. In the absence of such a proposal, I conclude that the guidance argument does not show that individuals have group-based reasons.

78 Nefsky 2012: 8.
79 Lawford-Smith 2015: 226-227.
Even if this argument is unsuccessful, however, it is still plausible that individuals do have group-based reasons: that if we have reasons to act in some way, this at least often means that I have reasons to do my part, and that if we have reasons not to act in some way, this at least often means that I have reasons not to do my part. As well as being plausible on their face, these claims are arguably reflected in our ordinary ethical thinking. For example, when there is something good that we could do together, we often urge each other to "do your part," and when there is something bad that we could do together, we often say things like "I don't want any part in that." 80 Finally, these claims can strike us as plausible accounts of particular cases. For example, in Firing Squad, it may strike us as plausible that I have strong reasons not to do my part, and that this is because I would be doing my part in something that we have strong reasons not to do. For these reasons, it is worth taking seriously the possibility that individuals have group-based reasons.

3. What if others won't cooperate?

Let's start, then, with an initial proposal about how collective reasons can affect individual reasons. According to

(1) If we together ought to act in some way, then each of us ought to do our part. And if we together ought not act in some way, then each of us ought not do our part.

This is a natural answer. As I will now show, however, we need to qualify this principle in significant ways.

First, note that (1) tells me to do my part even when others will not do theirs. This makes this principle vulnerable to two objections. First, if others won't cooperate, my contribution can seem pointless. This is especially apparent in cases where my own contribution is some mundane task with no special significance apart from the role that it would play in the group activity. For example, suppose that we ought to paint a house together, and my job is to mix the paints. 81 (1) implies that, even if you abandon the project, and I can't do it by myself, I still ought to mix the paints. But it seems pointless for me to spend my time doing this. Second, if others won't cooperate, my contribution could lead to disaster. Suppose again that we are in Gibbard's Case and ought to both do B, thereby producing the best outcome. (1) implies that, even if you will do A, I still ought to do B. But again, doing this would make the outcome very bad.

Woodard suggests that when others will not cooperate, while it may not always be true that I ought to do my part anyway, I may still have some reason to do so. 82 In other words, Woodard suggests

(2) Even when others will not do their parts in what we together ought or have reason to do, I may still have some group-based reason to do my part.

While Woodard claims that the uncooperativeness of others does not prevent me from having some reason to do my part, he leaves open the possibility that there may be other constraints on when we have group-based reasons. 83 However, he suggests that we do have group-based reasons in many actual noncooperative contexts.

80 Woodard 2017: 110.
81 This example is adapted from Bratman 1992.
82 Woodard 2008. Opponents of this view include Regan 1980: 124, Hurley 1989: 146, Schwenkenbecher 2014: 70-71, and Wringe 2016: 488.
83 Ibid.: Chap. 5, Sec. 5.
Woodard argues that the idea that we have such reasons even in noncooperative contexts is the best explanation of the intuitive tension we feel between being "principled" and being "pragmatic." For example, in Williams's famous Jim and the Indians case, Woodard writes, Jim seems to have both reasons to accept Pedro's offer to shoot one of his Indian captives, so that Pedro will let the other nineteen go, and reasons to decline the offer, and not shoot. 84 These latter reasons, Woodard suggests, are the reasons that Jim has to do his part in the best group course of action, in which neither Jim nor Pedro shoots anyone. And Jim still seems to have these reasons even though Pedro is unwilling to cooperate in that course of action.

However, (2) faces two objections. First, this principle implies that even if you abandon our house-painting project, I still have some reason to mix the paints, unless this situation would fail to satisfy some other constraint on when we have group-based reasons. Therefore, this principle implies that, if there is no independent constraint that applies here, and if I have nothing better to do, then this is what I ought to do. But again, it seems that I have no reason to do this. (2) seems more compelling when my part consists in not killing anyone than when it consists in mixing paints. But this, I suggest, may be because the idea that I always have some reason not to kill anyone is independently plausible and supported by other plausible theories. 85 In contrast, it is not plausible that I always have some reason to mix paints.

Woodard offers four responses to this type of objection. 86 First, he points out, even if I have some group-based reason to perform apparently pointless actions like mixing the paints, I may often have independent reasons to act in some other way. This will not help in cases where I have nothing better to do, but Woodard might suggest that there will typically be some at least mildly worthwhile alternative. Second, Woodard cites Schroeder's argument that linguistic theory gives us reason to generally be wary of claims to the effect that there is no reason to do something. Third, Woodard argues that accepting group-based reasons to perform apparently pointless actions could help explain why these actions could at least be intelligible. Fourth, he suggests that there may be other constraints that block group-based reasons in the cases at hand that do not apply to all non-cooperative contexts. For example, one such constraint might be a restriction to cases where our group action is morally required, rather than merely a useful project.

However, even if one or more of these strategies is successful, Woodard's proposal is still vulnerable to a second objection: the most natural ways to develop this proposal still imply that we should act in ways that lead to disaster, and it is not clear whether Woodard could avoid this implication in a principled way.

To start, if I can have some reason to do my part even in noncooperative contexts, how strong would this reason be? It seems that the reason would have to be proportional to the strength of our collective reason to cooperate.

84 Williams 1973.
85 Woodard offers forceful objections to several of these alternative theories. While I do not think these objections are decisive, it would take us too far afield to discuss them here.
86 Woodard 2017: 123-125.
If I do have a group-based reason to mix paints, it seems this must be weaker than Jim's group-based reason not to shoot, since the stakes are so much lower. 87 If this is right, then we might claim:
87 I should emphasize that Woodard does not himself make this claim.
(3) The strength of my reason to do my part in noncooperative contexts is some proportion of the strength of our collective reason to cooperate.
But even if this proportion is very small, this principle, together with other plausible normative views, can imply that I ought to do my part even when this would lead to disaster. For example, suppose that we face another version of Gibbard's Case. In this version, a million lives are in danger, and we have the following options:

            You do A             You do B
I do A      One hundred saved    All die
I do B      All die              All saved

It seems plausible that, if we are in a position to save lives, either together or individually, then other things equal, the strength of our reasons to do so will be proportional to the number of lives that would be saved. Let's say, then, that if we could save n lives, we have a reason of strength n to do so. If so, then we together have a reason of strength 1,000,000 to both do B. Now suppose that my reason to do my part in noncooperative contexts is only one thousandth as strong as our collective reason to act. Then if you do A, I will still have a group-based reason of strength 1,000 to do B. And since I only have a personal reason of strength 100 to do A, this means that I ought to do B, even though I will be letting one hundred people die, and saving no one.
Now, we might be skeptical that the strength of reasons can be quantified so precisely. But we can also state the problem without relying on precise numbers. There are some versions of Gibbard's Case, we can claim, in which our reason to cooperate will be extremely strong, but in which you refuse to cooperate. So if the strength of my reason to do my part is some proportion of the strength of our collective reason, however small, this reason could still be very, very strong. It could thus be strong enough even to outweigh the personal reason I have against doing my part, deriving from the fact that this would lead to disaster.
To avoid this implication, Woodard might set an upper bound on the strength of our reasons to do our part in noncooperative contexts. 88 However, this seems ad hoc. In addition, where exactly should we set this bound? Setting the bound at any particular strength seems arbitrary. Therefore, while the idea that we have some reason to do our part even when others won't cooperate does not entail that we should act in ways that lead to disaster, it is not clear whether there is any principled way of avoiding this conclusion.
88 Again, I should emphasize that Woodard does not himself make this claim.
Finally, Woodard might again suggest that there may be some other constraint on when we have group-based reasons which applies to cases like the one described above. If some such constraint applies here, Woodard might claim, then I may have no reason to do my part. For example, Woodard might plausibly claim that I have no reason to do my part if I would thereby let one hundred people die. Woodard could then adopt some nonarbitrary proposal about the strength of the group-based reasons that we do have, in cases where no such constraint applies.
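The arithmetic behind this disaster objection can be set out schematically. This is only a restatement of the example above; the labels, and the illustrative proportion of one thousandth, are mine:

strength of my group-based reason = k × strength of our collective reason, for some fixed proportion k
here: k = 1/1000 and strength of our collective reason = 1,000,000, so strength of my group-based reason = 1,000 > 100 = strength of my personal reason to do A

And for any fixed k, however small, a version of the case in which our collective reason is more than 1/k times as strong as my personal reason will deliver the same disastrous verdict.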
However, recall that Woodard's main argument in favor of the idea that we can have group-based reasons even in noncooperative contexts focused on the case of Jim and the Indians. Our intuitions about this case, Woodard argues, are best explained by the hypothesis that Jim has some reason to do his part in the course of action in which neither he nor Pedro shoots anyone. This is true, Woodard claims, despite the fact that if Jim does his part and refrains from shooting one of the captives, Pedro will kill them all. But if Jim has some reason to do his part even when he would be letting nineteen people die, it seems that the fact that I would be letting one hundred people die could not prevent me from having a reason to do my part in Gibbard's Case. Nor does it seem that the lives that we would be saving if we both did B could prevent me from having a reason to do my part. So it seems that there is no plausible constraint on group-based reasons that applies to this version of Gibbard's Case but does not apply to Jim and the Indians.
Finally, we might be attracted to Woodard's view because there might be many noncooperative contexts in which it seems intuitive that I do have reasons to do my part. For example, we might think that I ought to do my part in the fight against climate change, even though I know that many people will not do their parts. But even if we reject Woodard's view, there are two ways in which we can accommodate these intuitions. First, I might have other, non-group-based reasons to do my part. For example, I might have a reason to reduce my carbon emissions in virtue of the fact that doing so would itself make some morally significant difference to mitigating climate change, or because doing so would encourage enough others to act similarly that my action would cause a morally significant difference to be made.
Second, my action might also count as doing my part in some less salient group action in which all of the other members of the group are cooperating, in which case I might still have group-based reasons to perform my action. For example, suppose that only several million other people are going to reduce their carbon emissions. In that case, if I reduce my emissions, I will be doing my part in the group action in which those people and I reduce our emissions. Since this group action would probably significantly reduce the amount of harm that we would be doing, we might have strong reasons to perform it, and so I might have group-based reasons to do my part.
Thus, the view that we have some reason to do our parts even in noncooperative contexts can imply that we ought to act in ways that seem pointless or that would lead to disaster, and it is unclear whether there is any principled way to avoid this. So we should restrict group-based reasons to cases where others are willing to cooperate. In other words, we should accept
the Cooperation Constraint: I can have a group-based reason to do my part in some group activity only when the other members of the group will do their parts.
4. Double-counting
I have just defended the Cooperation Constraint, according to which individuals can have group-based reasons only when the other members of the group will do their parts. I will now defend a second constraint on group-based reasons. The motivation for this constraint can be illustrated with another rescue case. Suppose that two people are in imminent danger. I could help you to save one of these people or could save the other on my own. But there is not enough time to save both.
Suppose we believe that any person or group has some reason to act in such a way that a life is saved. Then the fact that a life will be saved if I help you gives me some reason to do so. But we will also be together acting in such a way that a life is saved. So if we think that individuals do have group-based reasons at least in cooperative contexts, then I will have not only a personal reason to help you, but also a group-based reason to help, because I would thereby be doing my part in our rescue mission. But it seems implausible that I could have stronger reasons to do my part in our rescue mission than to save the other person's life, merely on the grounds that our rescue mission would involve cooperation. I shouldn't have extra reason to do what will bring about some outcome merely because I would thereby be playing a part in a group act that would achieve the very same thing. Allowing these reasons to add up seems to represent an objectionable kind of double-counting. Thus, I propose
the Double-Counting Constraint: If I have both personal and group-based reasons to do my part in some group activity, and these reasons derive from the very same feature of the outcome, then these reasons do not add up.
For example, again, by helping you to rescue someone, I might have a reason to act deriving from the fact that if I do so, this person's life will be saved. And if I have a group-based reason to do my part, then this reason might also derive from the fact that if we perform the rescue, this person's life will be saved. But the Double-Counting Constraint claims that these reasons do not add up. 89
89 Schroeder has also argued that there are other kinds of reasons which should not add up, because they are not independent in the right way. Schroeder 2007: Sec. 7.1. It seems plausible that two reasons given by the very same feature of an outcome might similarly not be independent enough to add up.
In claiming that our group-based reasons should not add together with my personal reasons in such cases, I am not claiming that we do not have group-based reasons in these cases. Rather, we can think of these two kinds of reasons as overlapping. 90 To justify our actions, we could appeal either to our individual reasons, or to our group-based reasons, or to both. We just shouldn't claim that these reasons combine to make this justification extra strong.
90 I owe this image to Abelard Podgorski.
5. What if I won't make a difference?
When I am trying to figure out whether I should do my part in some worthwhile collective endeavor, it is natural to be interested in what, if any, difference I would be making. If my contribution would not make any relevant difference, it is natural to wonder whether the fact that it would be part of a worthwhile group project could still make it something I ought to do. In this section, I will argue that the answer is no. I will defend a third constraint on group-based reasons,
the Difference-Making Constraint (DMC): Suppose that some fact about our together acting in some way gives us a reason (not) to act in this way, but that whether that fact obtains does not depend on whether I do my part. In that case, I have no corresponding reason (not) to do my part.
For example, suppose that we together have a reason to carry out some rescue mission, in virtue of the fact that if we carry out the mission, someone's life will be saved, but that whether this person's life will be saved does not depend on whether I do my part. DMC implies that our collective reason to carry out the rescue mission gives me no reason to do my part.
DMC in itself seems quite plausible. If what I do would make no difference to the reason-giving features of what we would be doing, it is plausible to think that it does not matter whether I do it.
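Put schematically, DMC is a counterfactual-independence condition. The following restatement is my own, reading "depends" as counterfactual dependence (where F is the reason-giving fact about our acting together, P is my doing my part, and "□→" is the counterfactual conditional):

If (P □→ F) and (¬P □→ F) – that is, if F would obtain whether or not I did my part – then F gives me no corresponding reason for or against P.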
In addition, when we think about whether we should contribute to particular collective harms or benefits, where it often seems as if I would not make a difference to the amount of harm or benefit that we would collectively do, it is natural to think that if I do not make a difference, then it does not matter whether I do it. Even those who think we do generally have important reasons to do our parts in these cases grant that this line of thought is compelling. 91
91 See Glover and Scott-Taggart 1975; Sinnott-Armstrong 2005; Kagan 2011; and Nefsky 2012.
There are two main reasons we might have for denying DMC. First, we might be inclined to think that individuals do often have reasons for action in collective harm and benefit cases, even though we often suspect in these cases that our contribution would make no difference to how much harm or benefit we would be bringing about. Second, we might think that denying that individuals have such reasons would be troubling, because that would mean, as Nefsky writes, that "morality would be powerless in a wide array of cases in which it should have force." 92
92 Ibid.: 8.
I have already addressed the "powerlessness" worry in Section 2. Again, Nefsky argues that even if we claimed that we collectively have relevant reasons for action in these cases, these claims could not guide our actions unless we could infer that each of us had corresponding individual reasons for action. However, I argued that individuals could in fact be motivated to do their parts in what the group has reason to do even without making such an inference.
What about the idea that individuals do often seem to have reasons for action in collective harm and benefit cases? We can respond to this in several ways. First, even if we accept DMC, we can agree that individuals often have reasons for action in collective harm and benefit cases. We can do this, in turn, in three ways. One way is to argue that, at least in many collective harm and benefit cases, my contribution does, or at least might, make a significant difference to the amount of harm or benefit that we would be bringing about. 93
93 See Glover and Scott-Taggart 1975; Parfit 1984: 67-86; and Kagan 2011. For criticism, see Nefsky 2011.
Next, I may have reasons to do my part owing to other reason-giving features that I can affect. For example, it is particularly plausible that I ought to do my part in small-scale collective benefits, such as Lyons's case of helping other people to push a car up a hill, even though they will be able to push it without me. 94 But as Glover points out, in these cases, it is often true that whether I contribute can affect other features of the situation, such as how hard the others will have to push. 95 So I might have a reason to do my part in order, for example, to help relieve the burden on the other members of the group.
94 Lyons 1965.
95 Glover and Scott-Taggart 1975: 182.
Finally, it is important to recognize that when our collective reasons are given by agent-relative features of the outcome, my contribution could make a difference to these features even if it does not make a difference to the amount of harm or benefit that we would be bringing about. It is particularly plausible that these agent-relative reasons are in play in collective harm cases such as Firing Squad. For example, suppose that, in Firing Squad, we have a collective reason not to act in such a way that we would be killing someone. And although my shooting our victim does not affect whether this person will be killed, it does affect whether we would be killing someone. But note that DMC still implies that I have no group-based reason to do my part in cases where our collective reasons are agent-neutral, as they plausibly are in many collective benefit cases.
A second kind of response is to concede that if we accept DMC, we will have to deny that individuals have reasons for action in some cases where it seemed plausible that they did. However, we can argue, while you might think that you have the relevant reasons even though you worry that you will not make a difference, this may be because you still suspect that you might make a morally significant difference. When you really convince yourself that you will not make a difference, you may be less confident that you do in fact have the relevant reasons for action.
A third response is to ask for an explanation of what makes your contribution something you still have a reason to do, even if it would make no morally significant difference. For example, suppose that we together ought to each donate some amount of money to the Against Malaria Foundation, but that my donation would not itself make a difference to how many people are protected from malaria. Why do I have a reason to do my part in the course of action <Others donate, I do too>, and not a similar reason to do my part in the course of action <Others donate, I do nothing>?
One natural answer is that if I donate, I will be part of the cause of the good outcome, whereas if I do nothing, I will not be. However, this does not seem to provide a sufficient explanation. This is because, as Nefsky points out, there are cases where my contribution would be part of the cause of the good outcome that we would together be bringing about, but in which it seems that I have no reason to make it. In particular, Nefsky describes a variation on Parfit's case The Drops of Water, in which we could together fill a cart with water to be distributed to a thousand men starving in the desert. In Nefsky's variation, she imagines that the cart is already full, but that I could use a power hose to add my own pint of water, displacing one pint of the water currently in the cart. As Nefsky points out, although I would then be part of the cause of the relief to the men in the desert, it seems that I have no reason to act in this way. 96
96 Nefsky 2017: 2750-2752.
Nefsky's own proposal is that I have a reason to do my part because it would be a non-superfluous part of the cause of the outcome, or, as she puts it, would "help" to bring about the outcome. And doing my part would count as "helping" to bring about the outcome, Nefsky proposes, under the following conditions:
Suppose that acts of a certain sort – acts of X-ing – could be part of what cause outcome Y.
In such a case, your act of X-ing is non-superfluous and so could help to bring about Y if and only if, at the time at which you X, it is possible that Y will fail to come about due, at least in part, to a lack of X-ing. 97
97 Ibid.: 2753.
By requiring that it be possible that the outcome will fail to come about due to a lack of acts of the relevant kind, Nefsky is able to avoid the implication that I have a reason to use my power hose in her variation of The Drops of Water.
But I do not find this proposal attractive, for three reasons. First, recall that we are interested here in what we have reasons to do given the facts of our situation, and so are bracketing epistemic issues. If the relevant notion of "possible" is epistemic, then, Nefsky's proposal is not applicable to our project.
Second, the way in which Nefsky gets around the "power hose" objection is suspicious. It is plausible that I ought to add my pint if it is possible that the benefit will fail to come about due to a lack of actions of this kind. But it is plausible that the reason why this is true is that the benefit might then depend on my adding my pint. And this is no longer the case when we assume that the benefit will not in fact depend on my adding my pint.
Third, this proposal relies on the notion of acts belonging to certain types, in a way that does not seem morally relevant. For example, in order to get the result that I ought to add my pint of water in the original version of The Drops of Water, Nefsky needs to say that I ought to add my pint because it is an "adding of a pint," and because "addings of a pint" could together cause the relief of the men in the desert. But whether my action is an adding of a pint does not seem morally relevant.
Again, our judgments about what we ought to do in particular cases are torn: it is plausible that individuals have the relevant reasons for action in particular cases, but it is also plausible that they do not. And again, DMC provides a plausible explanation of why individuals would not have the relevant reasons. But if opponents of DMC can't provide a similarly plausible explanation of why individuals would have the relevant reasons, then defenders of DMC seem to be on a stronger footing.
Finally, it is important to note that even if we reject DMC, and think that you can have a group-based reason to do your part even when you would not make any relevant difference, there are reasons for thinking that in typical cases, it still would not be true that you ought to do your part. In particular, suppose that you are trying to decide whether to do your part in a collective benefit, but that your part would not make a difference to the amount of benefit that we would produce. In many such cases, rather than doing your part, you could instead do some alternative that would itself do some good. For example, consider the following case from Parfit:
I know all of the following. A hundred miners are trapped in a shaft with flood-waters rising. These men can be brought to the surface in a lift raised by weights on long levers. If I and three other people go to stand on some platform, this will provide just enough weight to raise the lift, and will save the lives of these hundred men. If I do not join this rescue mission, I can go elsewhere and save, single-handedly, the lives of ten other people. There is a fifth potential rescuer. If I go elsewhere, this person will join the other three, and these four will save the hundred miners. 98
98 Parfit 1984: 67-68.
In this case, if I do my part in the rescue mission, then we will together have acted in such a way that the miners are saved. We plausibly have strong reasons to act in this way, in virtue of the good that we would be doing. Suppose we think that, even though I will not affect whether the miners are saved, I still have a group-based reason to do my part. Even so, notice that if I instead save the ten other people, then we will together have acted in such a way that the miners and ten other people are saved. And since this course of action would do even more good, it is plausible that we have even stronger reasons to act in this way. In that case, it seems that I should have an even stronger group-based reason to do my part in this larger course of action. And since my action in fact makes a difference to how many lives are saved, it seems to clearly count as a non-superfluous part of the cause of this outcome.
And in fact, in many actual collective benefit cases, we seem to find ourselves in a situation like this. That is, for many collective projects in which our contribution would not make a difference, we could direct our money, time, energy, and other resources in some other way that would itself do some good, or would at least have positive expected value, and would meet the other conditions listed above. For example, there are some charities – such as GiveDirectly, which transfers donations directly to extremely poor people – where even a relatively modest donation could significantly benefit someone. If so, then whenever I am considering donating to some cause in order to help badly off people, but believe that my donation would not make a difference, even if I have a group-based reason to donate, I will likely have a stronger reason to donate instead to a charity like GiveDirectly.
6. Why agent-neutral collective reasons never affect what individuals ought to do
So far, I have defended three constraints on when individuals can have group-based reasons that affect what they ought to do. I first argued that individuals do not have group-based reasons in noncooperative contexts. I next argued that we should not allow group-based reasons to add up with the personal reasons that we already have when these reasons derive from the same feature of the outcome. Finally, I argued that individuals do not have group-based reasons to do their part when their part would not make any relevant difference.
I will now argue that if these views are correct, then they significantly limit the extent to which collective reasons can affect what individuals ought to do. In particular, I will argue that these views imply that one important category of collective reasons, those which are agent-neutral, can never affect what individuals ought to do.
Suppose that we together could act in some way, and that we have some agent-neutral reason bearing on whether we should act in this way. For example, suppose that we could together undertake some rescue mission, and that we have a reason to act in this way in virtue of the fact that, if we do so, someone's life will be saved. Now, it will either be the case that we will carry out the rescue mission if I do my part, or we won't. If we will not in fact undertake the rescue mission if I do my part, then this is a noncooperative context. In that case, as I have argued, I will have no group-based reason to do my part. Suppose that we will carry out the rescue mission if I do my part.
We can next consider whether the reason-giving feature – in this case, the fact that someone's life will be saved – depends on my doing my part. If the person's life being saved does not depend on whether I do my part, then, as I have argued, I have no corresponding reason to do my part, because doing so would make no relevant difference.
Finally, suppose that the person's life being saved does depend on whether I do my part. In that case, I could have a group-based reason to do my part. But notice that since the reason-giving feature is agent-neutral, it can be described not only as a feature of the outcome of our act, but also as a feature of the outcome of my own act. And as I have been assuming, and as many people believe, individuals also have agent-neutral reasons, such as reasons to do good. If so, then this feature will also give me an independent individual reason to act in this way. In that case, since my group-based reason to do my part and my personal reason to act in this way derive from the very same feature of the outcome, our proviso against double-counting will prevent these reasons from adding up.
Thus, we can conclude that our agent-neutral collective reasons can never affect what individuals ought to do.
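The argument of this section has the structure of a case analysis, which can be summarized procedurally. The following sketch is merely an illustration of that structure, not part of the argument itself; the function and parameter names are my own.

```python
def group_based_reason_can_affect_what_i_ought_to_do(
    others_will_cooperate: bool,
    reason_giving_fact_depends_on_my_part: bool,
    collective_reason_is_agent_neutral: bool,
) -> bool:
    """Summarize the case analysis of Section 6 for a single collective reason."""
    # Cooperation Constraint: no group-based reason in noncooperative contexts.
    if not others_will_cooperate:
        return False
    # Difference-Making Constraint (DMC): no group-based reason if the
    # reason-giving fact would obtain whether or not I did my part.
    if not reason_giving_fact_depends_on_my_part:
        return False
    # Double-Counting Constraint: an agent-neutral feature of the outcome
    # already gives me a personal reason deriving from that same feature,
    # and the two reasons do not add up, so the group-based reason cannot
    # change what I ought to do.
    if collective_reason_is_agent_neutral:
        return False
    # Only agent-relative collective reasons survive all three constraints.
    return True


# Example: a cooperative, difference-making, agent-neutral rescue mission.
print(group_based_reason_can_affect_what_i_ought_to_do(True, True, True))  # False
```

The point of the sketch is just that, for an agent-neutral collective reason, every branch terminates in "no effect on what I ought to do"; only agent-relative collective reasons reach the final branch, and whether they really do affect what I ought to do is the topic of the next section.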
7. Reasons not to participate in harm
I have argued that agent-neutral collective reasons can never make a difference to what individuals ought to do. However, this leaves open the possibility that agent-relative collective reasons could make such a difference. One important category of agent-relative reasons are reasons not to cause harm. In Chapter 1, I defended the view that groups also have reasons of this kind, using the Firing Squad example. In this section, I will discuss how collective reasons of this kind might affect our individual reasons. This could have important implications in many actual cases, because there are many actual cases in which we cause harm collectively. For example, consumers of animal products collectively cause animals to be harmed by signaling their demand for such products, and when a country goes to war, those in and around the military collectively cause harm to enemy soldiers and civilians.
It seems initially plausible that individuals would have group-based reasons not to participate in causing harm. In other words, it seems plausible that
(4) If we together have a reason not to perform some course of action because of the harm that this course of action would cause, and we will perform this course of action if I do my part, then these facts give me a reason not to do my part.
In addition to being plausible in itself, we might find this claim attractive because it seems plausible that individuals often have reasons not to participate in firing squads, in the signaling of demand for animal products, or in the military, and this claim might seem to offer a plausible explanation of why they would have such reasons.
However, there are also challenges to this claim. Consider the following variation on Firing Squad, based on a case from Jackson: X and Y jointly cause pain to Z. However, if X had not acted, Z's pain would have been much worse. 99
99 Jackson 1987: 98.
In this case, Jackson suggests that, given that Y is going to act regardless, it is obvious that X is right to act. Jackson writes:
Think of it from Z's point of view. If Z knows the situation, won't he be hoping like mad that X will act? Indeed, we can imagine that Z pleads with X to act, pointing out that if he doesn't, Y will act alone and he, Z, will suffer worse pain. Is Z pleading and hoping for something immoral to be done? Surely not! 100
100 Ibid.: 99.
I find this persuasive. If we accept (4), can we accommodate this judgment? According to (4), X has a group-based reason not to participate in causing pain to Z. A liberal conception of group actions might imply that X also has an even stronger group-based reason not to participate in the pattern where he does nothing while Y causes pain to Z; but intuitively, it seems that this should not count as an eligible pattern, so let's suppose that X has no such reason. Still, X seems to have a significant non-group-based reason to participate in causing pain to Z, since his doing so would spare Z from much worse pain. So whether X ought to participate depends on the relative strength of these reasons.
Now, in discussing Woodard's view that we can have group-based reasons even in noncooperative contexts, I noted that it was plausible that the strength of our individual group-based reasons should be proportional to the strength of the group reasons from which they derive. Similarly, it is plausible that the strength of our reasons not to cause pain should be proportional to how much pain we are causing, so that, for example, if I have a reason of strength 1 not to cause 1 unit of pain, then I have a reason of strength 2 not to cause 2 units of pain. But in that event, for reasons similar to those I offered in relation to Woodard's view, we can get a version of the case in which we seem forced to accept that X should not participate.
Suppose that we think that the strength of X's group-based reason not to participate is only a thousandth of the strength of the group's reason not to cause the pain. And suppose that the group would be causing Z one million minutes of pain, but Y alone would cause Z an additional hundred minutes of pain. Then if the group has a reason of strength 1,000,000 not to jointly cause the pain, X will still have a group-based reason of strength 1,000 not to participate, and only a personal reason of strength 100 to participate. So we are forced to conclude that X ought not to participate. But again, we might think, this is the wrong result: X ought to participate, and save Z from the additional hundred minutes of pain.
How should these cases affect our views about group-based reasons not to participate in causing harm? In these cases, it seems significant that, as Jackson points out, the victim of the harm would himself want the agent to participate, because the agent's participation would spare him from additional harm. This suggests that we can accommodate these cases by amending (4) with the following claim:
(5) If those who we would together be harming if I did my part do or would consent to my doing so, then this cancels my group-based reason not to participate in harming these people.
What about cases where my participating in causing harm to some would instead benefit others? The general lesson of cases like Jackson's is that if I have a group-based reason not to participate in causing harm, then as long as the harm we would together be causing is great enough, we will have to conclude that I should not participate.
And we will have to conclude this even if my participating would not in itself make the harm worse in any respect, and even if my participating would in fact do any arbitrarily large amount of good, or prevent an arbitrarily large amount of evil, as long as these amounts are small relative to the amount of harm that we would together be causing.
To really come to terms with what this would mean in cases where my participation in causing harm to some would instead benefit others, we should again raise the stakes, by switching from cases of causing pain back to cases of causing death. Suppose that I have injected two groups of innocent victims with a lethal but slow-acting poison. One group contains one million victims; the other contains one hundred victims. You could use a remote control to automatically administer a drug to both groups. The drug would provide a partial antidote to the poison. In the members of the hundred-victim group, this would be enough to save their lives. However, because the members of the million-victim group are allergic to the drug, they would simply instead die from the combined effects of the drug and the remaining poison. Administering this drug would not affect the timing or the painfulness of their deaths. 101
101 The structure of this case is loosely based on Williams's comparatively low-stakes Jim and the Indians case, described above. Parfit 1984: 71 considers another comparatively low-stakes version of the case.
In this case, if you administer the drug, then the combination of my act of poisoning and your act of drugging would together cause the deaths of the million victims, and plausibly, we collectively have an agent-relative reason not to act in this way. And it is plausible that the strength of our agent-relative reason not to kill people is proportional to the number of people that we would be killing. So let's suppose that we collectively have an agent-relative reason of strength 1,000,000 not to act in this way. In that case, even if we think that our individual group-based reasons are only a thousandth as strong as the corresponding collective reasons, you will have a group-based reason of strength 1,000 not to administer the drug.
It is also standardly thought that if we have special agent-relative reasons not to kill people, then these reasons are stronger than our reasons to save lives. So if you would have a reason of strength 100 not to kill a hundred people, then the strength of your personal reason to save a hundred people, by means of administering the drug, must be weaker than 100. Thus, we seem forced to conclude that you have overwhelming reasons not to administer the drug, even though this would mean letting one hundred additional people die, and saving no one.
I think that this is the wrong result. It might be objected that only those who are already skeptical about constraints against killing in general would think that you ought to administer the drug. But it seems reasonable to think that while there is a strong constraint against acting in a way that would either kill someone or help to kill someone in cases where your action would shorten this person's life, this constraint either does not apply, or is significantly weakened, when your action would not shorten the person's life.
Thus, we could consistently claim both that you ought to administer the drug in the case above, and that in more ordinary cases, we have overwhelming reasons not to perform actions that kill people even if those actions would also save other people's lives. While I think that you ought to administer the drug, however, I think that those who are attracted to a more hardline approach to constraints against killing could reasonably disagree.
Now, we might think that only those who are absolutists about our reasons not to kill – those who think that we should never kill, no matter how much good we could do – could take this position. After all, if we multiply the numbers of lives that are at stake in the above example, we can get the result that you should not participate in the killing even if you would thereby be saving any arbitrarily large number of lives. But it is important to note that we would also have to be multiplying the number of people that you would be helping to kill. And one could reasonably think that you should not help to kill a very large number of people in order to save some much smaller, but still large, number, but that you should kill or help to kill some number of people in order to save some much larger number.
If we do want to claim that you ought to administer the drug, though, how should this affect our views about our group-based reasons not to participate in killing, or in causing other kinds of harm? We might try to pursue strategies like those I suggested Woodard might pursue, such as claiming that there is some limit to the strength of our group-based reasons in the relevant contexts. But again, there are reasons to think that these strategies are unpromising. As a result, we might have to abandon the idea that individuals have group-based reasons not to participate in causing harm altogether.
8. Conclusion
In this chapter, I have discussed how collective reasons for action might affect individual reasons for action. Although the claim that what individuals ought to do can be affected by what we together ought to do is plausible, I have argued that we should, at least in a variety of cases, reject it. I first argued that individuals should have group-based reasons to do their parts only when others will cooperate and only when doing their parts would make some relevant difference, and that we need to rule out an objectionable kind of double-counting. I then argued that these claims imply that what individuals ought to do can never be affected by agent-neutral collective reasons. Finally, I argued that we should reject the view that individuals can have group-based reasons not to participate in causing harm.
Now, we might feel that these skeptical conclusions would undermine the possibility or at least the interest of collective reasons. But I suspect that this feeling rests on the notion that we encountered earlier, that collective reasons could be possible or interesting only if they were action guiding, and that they could be action guiding only insofar as they had implications for individual reasons. And as I argued earlier, this notion is mistaken. The recognition of what we together ought to do can motivate each of us to do our parts even if we do not infer that we have individual reasons to do our parts. So collective reasons can guide our actions even when they do not affect our individual reasons.
Chapter 4: Unit-of-Agency Dilemmas
1. Introduction
We often find ourselves in situations where we struggle to decide between two courses of action, each of which seems to be supported by compelling ethical considerations. Some philosophers claim that these situations can take an extreme form: that there are genuine ethical dilemmas, cases where some agent ought, all things considered, to do one action, and ought, all things considered, to do some other action, even though she cannot do both. This is a troubling prospect. As I will discuss, one of the key reasons why ethical dilemmas strike us as troubling is that it seems that they would make our struggle to decide what to do irresolvable. As Hill writes, in an apparent dilemma, conscientious people would find that "principles and values they assumed could never be compromised pull at them from opposite directions, threatening to tear apart that unity of soul long supposed to be the only indestructible reward of virtue." 102
102 Hill 1996: 168.
Now, a number of philosophers believe that there are plausible ways to deny ethical dilemmas. But in this chapter, I will argue that even if we are comfortable denying ethical dilemmas as they are traditionally understood, there is another kind of normative conflict that we must confront: cases where what ought to be done at one unit of agency is incompatible with what ought to be done at another unit of agency, or what I will call unit-of-agency dilemmas. In particular, there may be cases in which what we together ought to do is incompatible with what each of us individually ought to do. For example, there may be cases in which we should do something, but I shouldn't do my part. These cases, I will argue, threaten to be just as paralyzing as traditional "single-agent" dilemmas, but the problem cannot be avoided or resolved in the same ways.
2. Single-agent dilemmas
2.1. What are single-agent dilemmas?
I will start by considering a more traditional kind of dilemma, which I will call single-agent dilemmas. I will understand these as cases in which an agent ought, all things considered, to act in each of two or more incompatible ways. It is important to distinguish these conflicts between all-things-considered oughts from conflicts between other sorts of oughts. For example, there may be times when I have to choose between acting morally and promoting my own self-interest. In these cases, we might think, there is a conflict between oughts that are each qualified in some way: in particular, what I morally ought to do conflicts with what I ought to do in self-interested terms. But if I am facing a single-agent dilemma, as we are defining them, something further must be true: it must be true both that I ought, all things considered, to act morally, and that I ought, all things considered, to promote my self-interest, even though I cannot do both.
Similarly, we might face situations where there is a conflict in terms of just one of these qualified oughts. For example, suppose that I have made two incompatible promises. We might think both that I morally ought to keep the first promise, and that I morally ought to keep the second promise, even though I cannot keep both promises. But again, this will be a single-agent dilemma, in our terms, only if it is also true that I ought to keep each promise all things considered.
2.2. Single-agent dilemmas and paralysis
Again, the idea that we might find ourselves confronted with a single-agent dilemma seems troubling. I will now try to identify a key part of what makes these cases troubling.
Again, I have defined single-agent dilemmas in terms of what we ought to do all things considered. I will now argue that these dilemmas, unlike "qualified-ought" dilemmas, would be uniquely paralyzing for conscientious agents.
Now, single-agent dilemmas are not the only cases in which we would expect conscientious agents to struggle to decide what to do. For example, it seems plausible that conscientious agents could experience similar struggles when facing conflicts between qualified oughts. However, in those cases, it seems that ethical reflection could ultimately help to resolve this struggle. For example, suppose again that I have to choose between what I morally ought to do, and what I ought to do in self-interested terms. In this case, I may find it difficult to decide what to do. After all, the moral and self-interested considerations may each be highly compelling, and I may know that whatever I do, I will have strong reasons to regret my choice. Nevertheless, in such a situation, it seems that ethical reflection could, at least in principle, help to resolve our struggle to decide what to do, by enabling us to figure out what we ought to do all things considered. As the phrase "all things considered" suggests, this ought takes all moral, self-interested, and other considerations into account, and weighs them appropriately against one another. If we can figure out the answer to this question, this seems to resolve the question of what to do.
In contrast, if I am a conscientious person, and I find myself in a dilemma where I ought all things considered to do each of two incompatible actions, then it seems that I would be stuck in a struggle to decide what to do which ethical reflection could not resolve. Once I know what I ought to do all things considered, it seems there is no further court of appeal to which I can turn. Judgments about what we ought to do all things considered seem to be by definition the ultimate source of ethical guidance available to us. In other words, insofar as we are conscientious, we will always take our judgments about what we ought to do all things considered to be decisive in deliberation, and will never be willing to act in ways that are incompatible with these judgments. So if dilemmas occurred with respect to these judgments themselves, it seems that this would be uniquely paralyzing for conscientious agents.
We can see this point emerging in an exchange between Thomson and Horty. Thomson considers a situation in which Alice has promised both Bert and Charles that she would give them a pill, but only has one pill to give, and imagines her asking for advice from a person who believes that there are genuine ethical dilemmas:
she asks, "Which ought I do – give Bert the pill, or give Charles the pill?" The Dilemmist replies, "Well, you ought to give Bert the pill, and you ought to give Charles the pill." Alice may be expected to reply: "I can't do both! So which ought I do?" The Dilemmist can merely repeat: "No 'which' about it! You ought to give Bert the pill, and you ought to give Charles the pill." 103
103 Thomson 2008: 174-175.
Thomson seems to be suggesting here that if we accepted the existence of genuine dilemmas, then we would not be able to give agents in these dilemmas helpful advice. In other words, further ethical reflection by an advisor, like further reflection by the agent herself, would not be able to resolve the agent's struggle to decide what to do.
Horty has argued that we can resolve Thomson's worry by appealing to Williams's distinction between the moral and "deliberative" senses of ought, that is, between "the ought that occurs in statements of moral principle, and in the sorts of moral judgments about particular situations that we have been considering, [and] the ought that occurs in the deliberative question 'what ought I to do?' and in answers to this question, given by myself or another." 104
104 Horty 2003: 588-589; Williams 1973: 184. Horty is responding to an earlier statement of Thomson's worry, presented in Thomson 1990: 83.
The cases Thomson is concerned with, Horty suggests, are naturally understood as specifically moral dilemmas – that is, dilemmas between incompatible moral oughts. But when Thomson imagines an agent asking for advice, Horty claims, this question is "naturally interpreted as deliberative, taking the moral predicament as a premise and asking what she should do now that she has found herself in such a predicament." In other words, Horty is suggesting that when we find ourselves in a dilemma between two qualified oughts (in this case, two moral oughts), we can decide what to do by appeal to a further ought, which Horty identifies as the deliberative ought. Schroeder has similarly suggested that the deliberative ought "settles what to do," while Joyce and Wedgwood have ascribed this role to the ought simpliciter. 105
105 Schroeder 2001: 9, n. 11; Joyce 2001: 50; Wedgwood 2007: 25. For similar claims, see Gibbard 2003 and Enoch 2011: 72.
Now, the "deliberative ought," the "ought simpliciter," and the "all-things-considered ought" may refer to three different concepts, or they may be three different names for the same concept. But either way, whatever we want to identify as the furthest ought to which we can appeal to help us to decide what to do, we can ask: what if we faced a dilemma in terms of that ought? Then it would seem that ethical reflection would not be able to help us to decide what to do, and so conscientious agents would be in a uniquely paralyzing situation. (If you think that either the deliberative ought or the ought simpliciter is the ought which "settles what to do," and is not the same as the all-things-considered ought, then you can redefine single-agent dilemmas accordingly.)
Now, we might be skeptical that any sensible agent really would be paralyzed in the way I have described. When all else fails, why wouldn't the agent just flip a coin, or just pick one option or the other? But again, insofar as we are conscientious, we will never be willing to act in ways that are not in accordance with our judgments about what we ought to do all things considered (or whichever are our "ultimate" ought-judgments). And if we decided in the face of a genuine dilemma to act in accordance with a coin flip or to just pick one of our options, then we would be deciding not to do something we all things considered ought to do. So if we think that a conscientious agent would never be paralyzed in the way I have described, then we should simply take this as a reason to be skeptical that there are genuine all-things-considered dilemmas. My claim here is only that, if such dilemmas did exist, then they would be paralyzing for conscientious agents.
2.3. Are there single-agent dilemmas?
If there are single-agent dilemmas in terms of what we ought to do all things considered, I have argued, this would be troubling, because agents facing them could find themselves in an apparently irresolvable struggle to decide what to do. But as I will now discuss, there are some compelling reasons to be skeptical of single-agent dilemmas.
Why might we think that there are genuine single-agent dilemmas? 106 One reason is that we might be inclined to accept particularly strong versions of certain normative principles. That is, we might think that certain kinds of considerations can always determine what we ought to do, even when these considerations conflict with each other. For example, we might think that we always ought to keep our promises, even when we've made incompatible promises. Similarly, Sidgwick suggests that even if one finds it self-evident that he should always do whatever promotes the greatest overall happiness, "he may still hold that his own happiness is an end which it is irrational for him to sacrifice to any other." 107
106 Another influential argument in favor of single-agent dilemmas which I will not discuss here focuses on the emotional reactions of the agent in a putative dilemma. For discussion of this argument, see Williams 1973; van Fraassen 1973: 147-148, 151; Marcus 1980: 193, 196-197; and Brink 1994: 220-223.
107 Sidgwick 1981: 498.
However, it also seems plausible to accept more moderate versions of these principles. For example, it seems plausible to claim that we have strong reasons to keep our promises, but that when we've made incompatible promises, it is not true that we should keep each promise. Rather, if one is more important, we should keep that promise but not the other; or, if neither is more important, what is true is just that we should keep one or the other. 108 Similarly, a number of philosophers find it plausible that when self-interest and morality, or impartial beneficence, conflict, it will either be true that one overrides the other, or that it is reasonable to follow either one. 109
108 For the latter type of proposal, which is called the "disjunctive analysis," see Brink 1994: 236-242.
109 For example, see Parfit 2011: Sec. 19, and Crisp 2015: Chap. 7.
In addition to finding the normative case for single-agent dilemmas unpersuasive, some philosophers claim that we can rule out dilemmas on conceptual grounds. Some philosophers seem to find this claim intuitively compelling on its face. 110 Others offer further arguments for the claim that we can rule out dilemmas on conceptual grounds. In particular, some philosophers argue that dilemmas are incompatible with certain compelling principles of deontic logic, which arguably purport to describe conceptual truths. 111 For example, the claim that there are single-agent dilemmas can generate a contradiction when combined with the following principles (where "OX" means "X is obligatory"):
(1) OA → ¬O¬A
(2) □(A → B) → (OA → OB) 112
Roughly, these principles say that the same action cannot be both obligatory and forbidden, and that if doing A requires doing B, and A is obligatory, then B is obligatory. Another contradiction arises if we accept both that 'ought' implies 'can', and the principle that if an agent is required to do each of two actions, she is required to do both ("agglomeration"). 113
110 For example, see Thomson 1990: 83.
111 For a comprehensive survey of these arguments, see Goble 2013. For recent discussion, see Nair 2014 and 2016.
112 See Goble 2009: 451-452.
113 See Williams 1973.
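To make the first contradiction explicit, the derivation can be spelled out as follows (I set it out here for concreteness; suppose the agent faces a dilemma between incompatible actions A and B):

(i) OA and OB (the putative dilemma)
(ii) □(A → ¬B) (A and B are incompatible)
(iii) OA → O¬B (from (ii), by principle (2))
(iv) O¬B (from (i) and (iii))
(v) ¬O¬B (from OB in (i), by principle (1))

And (iv) and (v) contradict each other.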
As we’ve seen, single-agent dilemmas seem to present a significant and troubling implication: they seem to be cases where conscientious people will be stuck in a struggle to decide what to do which ethical reflection cannot resolve. Luckily, many philosophers believe, there are plausible ways to deny the existence of genuine single-agent dilemmas. However, I will now describe another possible kind of normative conflict which, I will later argue, threatens a similar implication. Whereas traditional ethical dilemmas confront a single agent, the kind of conflicts at issue here occur between two different levels or units of agency. 3.1. Defining unit-of-agency dilemmas I will define unit-of-agency dilemmas as cases in which what some group ought to do is incompatible with what one or more of its members individually ought to do. In particular, we will be focusing on cases in which the group should act in some way, but one or more members individually should not do their parts in the group activity, but should instead act in some incompatible way. It is important to distinguish unit-of-agency dilemmas as I have defined them from another kind of conflict arising from the interaction between individual and collective reasons for action. As I mentioned when discussing the climate change example, it is plausible that what we together 110 For example, see Thomson 1990: 83. 111 For a comprehensive survey of these arguments, see Goble 2013. For recent discussion, see Nair 2014 and 2016. 112 See Goble 2009: 451-452. 113 See Williams 1973. 64 ought to do can affect what we individually ought to do. That is, I may have “group-based” reasons for action: reasons to act in some way, such as to reduce my carbon emissions, because I would thereby be doing my part in what we together ought to do. So one kind of conflict we might be interested in is the conflict between what I have group-based reasons to do, and what I have personal, non-group-related reasons to do. But this is not the kind of conflict we are addressing here. We are focusing on cases where what we together ought to do is incompatible with what I ought to do all-things-considered, where this latter “ought” already takes into account any group-based reasons for action that I might have. Conflicts between what I individually ought to do and what we together ought to do may be only one kind of normative conflict between levels or units of agency. There might also be intrapersonal unit-of-agency dilemmas, such as cases where I and my temporal parts ought to act in incompatible ways. 114 We can also think about conflicts between units of agency which only partially overlap with one another, such as cases in which two groups ought to act in incompatible ways despite sharing some members between them. However, in this chapter, I will focus on individual-collective dilemmas. 3.2. Examples To get a sense of what unit-of-agency dilemmas might look like, recall Gibbard’s Case, where you and I each have two options: we could both do A, and produce the second-best outcome; we could both do B, and produce the best outcome; or one of us could do A and one of us could do B, and produce the worst outcome. In this case, I have argued, we together ought to do B, and produce the best outcome. But suppose that we both do A. In that event, we might think, each of us will have done the right thing, given what the other is doing. After all, given that you are doing A, my doing B would have led to disaster, and vice versa. 
But we together will have failed to do what we ought to have done. So in this event, there is a kind of conflict between what we individually ought to do and what we together ought to do: in doing A, we each act rightly, even though we together act wrongly.
However, while there is a conflict here between what we collectively ought to do and what each of us individually ought to do, the conflict seems relatively unproblematic. This is because there is a way to avoid the conflict. Suppose that we both do B. In that case, as I have said, we are together doing the right thing. Moreover, note that since you are doing B, my doing A would have led to disaster, and vice versa. So in this event, it seems that B was also the right thing for each of us individually to do.
For a more potentially problematic example, consider the following case, which will be our focus going forward:
Rescue Mission: You and I are about to carry out a rescue mission to save the lives of five strangers in imminent danger. But I then learn that my child's life is also in danger. If I continue with our rescue mission, there will not be enough time to save my child.
We might think that in this case, I have a special reason to rescue my child, stronger than my reasons to help save the strangers. So I ought to rescue my child. But we as a group, we might think, do not have the same kind of reason to rescue my child. After all, my reason for acting might derive from the fact that this is my child, and this fact might not apply to the group. So it might be that we as a group only have reasons to do whatever would save the most lives, and so ought to rescue the five strangers. In that case, what I ought to do will conflict with what the group ought to do. And unlike Gibbard's Case, this conflict would seem to be guaranteed: no matter what we in fact end up doing, the group ought to rescue the strangers, while I ought to save my child.
Now, you might not share my intuitions about this particular case. For present purposes, I am just using this case to show why we have at least prima facie reason to think that unit-of-agency dilemmas of the more troubling kind might exist, and to illustrate what they might look like. In the next section, I will return to the question of whether we should really accept that cases like this represent genuine unit-of-agency dilemmas.
By comparing Rescue Mission with Gibbard's Case, we can see how unit-of-agency dilemmas might arise. As Rescue Mission illustrates, dilemmas may arise because individuals and groups have what we can think of as different ethical aims. That is, it may be that an individual has stronger reasons to do what brings about one outcome (such as the outcome in which my child is saved) than what brings about another (such as the outcome in which five strangers are saved), while the group has stronger reasons to do the reverse. 115 And this, in turn, must be because at least one of the parties has agent-relative reasons to bring about some outcome, in the sense of reasons which do not apply to all agents equally, as opposed to agent-neutral reasons, which do. For example, as we have seen, it may be that I should not do my part in a reasonable collective action because I have reasons to place special weight on the interests of my own children, reasons which are not shared by the group. In principle, there could also be conflicts because groups have agent-relative reasons for action which do not apply equally to their members.
115 I am using "outcome" here to cover anything that would be true if we acted in a certain way, including facts like "that this very agent is benefitted" and "that this very act was a lie."
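For reference, the structure of Gibbard's Case just described can be displayed as a matrix (the labels are taken from the statement of the case):

            You do A        You do B
I do A      Second-best     Worst
I do B      Worst           Best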
Note that these conflicts arise in a different way from the conflict in Gibbard’s Case. In Gibbard’s Case, our individual and collective reasons diverge because although the parties would each have reasons to bring about the same outcomes if they were in a position to do so, they are in fact in different positions with regard to which outcomes they can bring about: if we both do A, then though we collectively could have done better, I individually could not. In contrast, cases like Rescue Mission indicate that our collective and individual reasons may diverge because the parties have agent-relative reasons to bring about different outcomes.

4. Unit-of-agency dilemmas and paralysis

I have suggested that there may be situations where what two units of agency ought to do – such as what we collectively ought to do, and what I individually ought to do – are incompatible, situations which I have called “unit-of-agency dilemmas.” But we might be skeptical that these situations would really be “dilemmas.” After all, it seems that there would not in fact be any dilemma at either unit of agency. There would only be a single course of action that the group ought to carry out, and only a single course of action that I ought to carry out. It would not be the case, for example, that the group both ought to rescue the strangers and ought to allow me to rescue my child, even though it cannot do both; nor would it be the case that I both ought to save my child and ought to do my part in rescuing the strangers, even though I cannot do both.

In addition, it is plausible that there would also be no dilemma in terms of what it is rational for either unit of agency to do. If we collectively recognize that we ought to save the strangers, then it seems that doing so is what it would be rational for us as a group to do. And if I recognize that I ought to save my child, then it seems that doing so is what it would be rational for me as an individual to do. 116

116 In other words, it is plausible that there is a rational requirement not to be akratic – that is, not to fail to do what one judges one ought to do – and that this requirement may apply to groups as well as to individuals, at least if we think that there are things that groups as such ought to do. For further discussion of the idea of akrasia as it applies to groups, see Pettit 2003.

I agree that in these situations, there would not be a dilemma for either unit of agency, either in terms of what the group or the individual ought to do, or in terms of what it would be rational for the group or the individual to do. And if we prefer to reserve the word “dilemma” for situations like those, I have no objection to that. However, I will now argue that the situations we are focusing on here possess an analogue to the key troubling feature of single-agent dilemmas that we focused on earlier. As I argued earlier, single-agent dilemmas are distinctively troubling because they place conscientious agents in a struggle to decide what to do that ethical reflection is apparently unable to resolve. But I will now argue that in unit-of-agency dilemmas, it will likewise be the case that if we are conscientious both as a group and as individuals, then we will struggle to decide what to do. And I will argue in Section 7 that this struggle may be one which ethical reflection is powerless to resolve.

How would this struggle arise? Being conscientious, it seems, just is or at least involves intending to do what one believes one ought to do. So if I am conscientious as an individual, and I believe that I ought to perform some action, then I will intend to do so.
And if we are conscientious as a group, and we believe (in some suitable collective sense) that we ought to perform some collective action, then we will collectively intend to do so. For example, in Rescue Mission, if we accept the judgments I suggested, then I will intend to rescue my child, and we will intend to rescue the strangers.

However, these intentions seem to be in tension. This, I suggest, is because if we collectively intend to act in some way, it must be true that each of us intends to do our part. I will defend this claim further in the next section. If collective intentions do imply corresponding individual intentions, and we as a group intend to rescue the strangers, then I will intend to do my part. But again, if I am conscientious as an individual, and know that I ought to rescue my child, I will also intend to do that. So if we are conscientious both individually and collectively, then I will find myself with two conflicting intentions. Plausibly, as long as I have intentions which I know to be in conflict, I will be paralyzed: in order to decide what to do, it seems, I will need to settle on one coherent intention. And plausibly, in order for the group to decide what to do, its members will need not only to intend to do their parts, but also to be free of conflicting intentions. So it seems that the group will also be paralyzed.

Now, we might be skeptical that it is even possible to have conflicting intentions. If conflicting intentions are impossible, and collective intentions imply corresponding individual intentions, then in Rescue Mission, it will be impossible for us to intend to do what we ought to do collectively and individually at the same time. So it will be impossible for us to be conscientious both individually and collectively at the same time. It seems that it will still be true, however, that we will struggle to decide what to do insofar as we at least approximate conscientiousness both individually and collectively. Plausibly, in order to be fully conscientious, you must not only form an intention to do what you believe you ought to do, but also maintain this intention until you carry it out. But in order to approximate conscientiousness, it seems, you at least have to form an intention to do what you believe you ought to do when you are thinking about what you ought to do. So if we approximate conscientiousness both individually and collectively, then when I think about the fact that I ought to rescue my child, I will intend to do so; and when we think about the fact that we ought to rescue the strangers, we will intend to do so, and so I will intend to do my part. But if it is impossible to have conflicting intentions, then these intentions will take turns driving each other out. So it seems that neither I nor the group will be able to reach a stable decision about what to do.
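The structure of this argument can be summarized as follows:

(P1) If an agent is conscientious and believes that it ought to act in some way, then it intends to act in that way.
(P2) In Rescue Mission, I believe that I ought to rescue my child, and we believe that we ought to rescue the strangers.
(P3) If we collectively intend to rescue the strangers, then each of us intends to do his or her part.
(C) So if we are conscientious both individually and collectively, I will intend to rescue my child and will also intend to do my part in rescuing the strangers; and since I know these intentions to conflict, both I and the group will be paralyzed.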
5. Collective intention and individual intention

I have just argued that insofar as we are conscientious both individually and collectively, unit-of-agency dilemmas will be paralyzing. A key premise of this argument was the claim that if a group collectively intends to perform some action, then it must be true that each member of this group intends to do her part. In this section, I will first defend this claim, then discuss a possible worry about why individuals would form such intentions.

5.1. Does collective intention require individual intention?

There are at least two reasons to think that in order for us to collectively intend to act in some way, we must each individually intend to do our parts.

The first is the idea that collective attitudes must somehow be built out of the attitudes of individuals. A number of theorists find this idea compelling. For example, Searle claims that since “society consists entirely of individuals, there cannot be a group mind or group consciousness.” Instead, Searle claims, our theories of collective intention should be consistent with an ontology and metaphysics based on “the existence of individual human beings as the repositories of all intentionality.” 117 In part because of ideas like this, a number of theorists claim that there is a strong link between collective intention and individual intention. 118

117 Searle 1990.
118 See Tuomela and Miller 1988, Kutz 2000, and Bratman 1999.

Second, the claim that collective intentions require that the individuals intend to do their parts can be supported by intuitions about simple examples. For example, suppose you ask me what my spouse and I are planning to do this afternoon, and I tell you that we’re going to go for a walk. But, I add, I haven’t yet decided whether or not I’ll be going. Something seems to have gone wrong here. Maybe we agreed to go for a walk, and maybe my spouse still believes we intend to go, but as long as I don’t yet have the intention to participate, it seems odd to say that we actually have the intention to go.

Now, while simple examples like this might seem to support the claim that collective intentions require that the individuals intend to do their parts, we might think that this claim does not stand up in other kinds of examples. The group consisting of my spouse and me was small, and it was natural to suppose that it did not have much in the way of formal structure, such as established procedures for deliberation or decision-making. We might think that matters change when we consider examples involving large or structured groups. But I will suggest that we may have this reaction only because in these examples we may be prone to misidentify the relevant group.

Let’s first consider a large group. Suppose that the reporters at a large newspaper all agree to pitch in a certain amount of money to buy a retirement gift for their beloved editor-in-chief. It may seem perfectly appropriate to say that the reporters together intend to buy the gift, even if one of them doesn’t actually intend to contribute. But this, I suggest, is just because it is natural to speak of “the reporters” as a generalization, which need not include all of them. If we ask whether all of the reporters together intend to buy the gift, or whether it is just that most of them do, the latter seems to be the correct answer. Similarly, if we specify the reporters by name – for example, if we ask “Do A, B, C,… Y, and Z together intend to buy the gift, or just A, B, C,… and Y?” – it seems that the latter is the correct answer.

Let’s next consider a structured group.
While the reporters may not have formal procedures for deliberation or decision-making when it comes to buying retirement gifts, let’s suppose that the newspaper does have formal procedures – say, a daily vote by the editorial board – when it comes to deciding what to print. At today’s meeting, the board votes to lead with an exposé about a powerful politician. However, the layout editor, who supports the politician’s agenda, decides to ignore the vote and bury the story at the back of the paper. In this case, it may seem appropriate to say that the board intends to lead with the story, even though one member of the board does not intend to do her part. This seems to be another counterexample to the claim that for a group to intend to do something, each member of the group must intend to do her part.

However, I suggest that this seems to be a counterexample because it is natural to assume that the editorial board is the same as the group consisting of the members of the editorial board, understood as the set containing these particular people – say, A, B, C, D, and E. But in fact, these are different things. An editorial board is a socially constructed entity, like a state, a legal code, or a currency. That is, it exists because it enjoys a certain kind of social recognition. The group consisting of A through E, however, is not a socially constructed entity. The board can also persist despite changes in its membership, whereas the group is just defined as the set of A through E, and so its members can’t change. Because the board and the group of A through E are different, the conditions, whatever they might be, that allow us to ascribe an intention to the board may not be the same as the conditions that allow us to ascribe an intention to the group. 119 And when we make it clear that we’re asking whether A through E together intend to lead with the exposé, it seems appropriate to say that they do not, if E does not intend to do her part.

119 For a more detailed discussion of this distinction, see Epstein 2015.

Finally, suppose we do deny the claim that collective intention requires that the individuals intend to do their parts. Even then, that does not mean we are safe from the paralysis threatened by unit-of-agency dilemmas. Even if collective intention does not entail that the individuals intend to do their parts, it seems plausible that group intention cannot exist when individuals single-mindedly intend not to do their parts – that is, when they have this intention without also having any conflicting intentions. And we can support this claim as follows. It is plausible, though not uncontroversial, that I cannot intend to do something if I believe that I will not do it. 120 If we accept this claim with respect to individual intention, it seems natural to assume that group intention will share the same feature: that is, we cannot together intend to do something if we believe that we will not do it (for some appropriate sense of “believe”). But if we know that one or more members of the group single-mindedly intend not to do their parts, we will be in a position to know that we will not perform our joint activity, so we will not intend to perform the joint activity. So again, my intention not to do my part will end up competing with our intention to perform the joint activity.

120 Defenses of the stronger claim that if one intends to do something, one must believe that one is going to do it can be found in Anscombe 1963, Grice 1971, Audi 1973, Harman 1997, Davis 1997, and Ross 2009. Criticisms can be found in Davidson 1980b and 1980c, and Bratman 1987.
5.2. Why would I intend to do my part?

Again, I have suggested that insofar as we are conscientious both individually and collectively, unit-of-agency dilemmas will be paralyzing, because my intention to do what I ought to do will compete with my intention to do my part in what the group ought to do. But we might be worried about how this could come about. Why would I intend to do my part, even though I know that this is not what I ought to do? Now, I might intend to do my part simply because I momentarily forget that I ought to rescue my child, and start to think that I ought to do my part instead. But otherwise, it might be claimed, my intending to do my part would simply be unintelligible.

What kind of intelligibility is at issue here? Consider the following example. People have different tastes in movies. There are some movies that we know other people are into, but that we ourselves don’t particularly care for. In some cases, we say that while some movie didn’t do it for us personally, we can understand why other people would like it. We “get it.” It “makes sense.” In other cases, we say that we don’t understand why anyone would like that movie, just as we don’t understand, in Anscombe’s famous example, why someone would desire a saucer of mud. 121 It simply seems bizarre, pathological. The kind of intelligibility at issue here is whatever distinguishes the former cases from the latter. 122

121 Anscombe 1963: 70.
122 Note that in order for someone’s attitude to be intelligible to us, it does not seem sufficient to know the causal explanation of their attitude. I might know that there is some causal explanation why someone wants a saucer of mud, but still say that I don’t understand why anyone would have that desire.

But if my intention to do my part is either unintelligible or depends on my being confused about what I ought to do, it might be argued, then the paralysis I have described no longer seems so troubling. That is, the conditional I argued for above might still be true: it might be true that if both I and the group intend to do what we ought to do, we will be paralyzed. But this paralysis could never happen as long as I was well-informed and my intentions were intelligible: I would never intend to do my part, so the collective intention would never in fact get going.

I will try to respond to this challenge by showing how, even if I am not confused about what I ought to do, my intention to do my part could still be intelligible. Suppose that I know that I have some reason to act in some way, and this leads me to intend to act in that way. When this is true, this seems sufficient to make my action intelligible. And this seems true even when I am acting against my better judgment; that is, when I am acting akratically. For example, suppose that I decide to enjoy myself by drinking some vodka tonight, even though I know I will regret it in the morning, and will be right to do so. In this case, it does not seem that my decision is bizarre or pathological. We get it. After all, plausibly, I do have at least some reason to drink – it will make my evening more enjoyable – and what I am doing can be made sense of as a response to this reason, even though I am failing to respond correctly to the countervailing reasons.
Similarly, when someone else likes a movie that I don’t like, I might understand why they would like it because I see that the movie has at least some good qualities; I get what they see in it.

But this, I claim, can be just what is happening in cases like Rescue Mission. It may be, as I have suggested, that all things considered, I should not do my part in rescuing the strangers. Nevertheless, I plausibly do still have some reason to do my part: I plausibly have an agent-neutral reason to do what saves the lives of others, and may also have a group-based reason, a reason to act in virtue of the fact that I would be doing my part in what we together ought to do. After all, it seems that while it may be that I ought to rescue my child rather than help to rescue the strangers, it would surely be better to help rescue the strangers than to do nothing. But if so, then even if I know that I ought to rescue my child, I may nevertheless be moved by the reasons I have to do my part to form the relevant intention. Thus, we can make sense of why I would intend to do my part after all.

6. Are there unit-of-agency dilemmas?

I have suggested that there may be unavoidable unit-of-agency dilemmas, which would take the form of examples like Rescue Mission. But we might think that we have good reasons to deny that unavoidable unit-of-agency dilemmas really exist. In this section, I will discuss two broad strategies available to opponents of unit-of-agency dilemmas, paralleling two of the strategies used by opponents of single-agent dilemmas.

First, we saw earlier how we might oppose single-agent dilemmas on substantive normative grounds, arguing against the stringent principles that seemed to produce dilemmas in particular cases in favor of more moderate variants. Similarly, opponents of unit-of-agency dilemmas might argue against the normative claims I made in my examples. A second strategy for resisting unit-of-agency dilemmas might rely on conceptual rather than substantive considerations. That is, just as opponents of single-agent dilemmas claimed that these cases were conceptually incoherent, we might make the same claim about unit-of-agency dilemmas. In both cases, however, I will argue that opponents of unit-of-agency dilemmas are on shakier ground than opponents of single-agent dilemmas.

6.1. Resisting unit-of-agency dilemmas on normative grounds

To see how we might oppose unit-of-agency dilemmas on substantive normative grounds, recall the example of the rescue mission where I must choose whether to participate in saving five strangers or my own child. In this case, I suggested, we might think that my relationship to my child gives me an overriding personal reason to defect, but that because the group does not itself have such a relationship to my child, it should save the strangers.

Opponents might try to resist these claims in two main ways. They could focus on resisting my claims about our individual reasons. For example, they could grant that the group ought to save the strangers, but argue that I in fact ought to do my part. Alternatively, they could focus on resisting my claims about our collective reasons. For example, they could grant that I should rescue my own child, but deny that the group in fact ought to rescue the strangers.

Let’s start by seeing whether we can try to defuse the threat coming from our individual reasons. Was I wrong to suggest that we sometimes have overriding agent-relative reasons to defect from what the group ought to do?
Now, one way to resist this suggestion would simply be to deny that individuals ever have agent-relative reasons for action. 123 But these kinds of reasons are standardly recognized by common-sense morality, and many people find them highly intuitive. So this option would come at a very high cost. (Those who are independently motivated to argue that we have only agent-neutral reasons for action, however, should take note: the fact that denying agent-relative reasons would get us out of unit-of-agency dilemmas could at least somewhat strengthen their case for doing so.)

123 Could we get into an unavoidable conflict even when only agent-neutral reasons are at play? It appears not. Suppose that we together do what we ought to do, producing the best outcome the group can in terms of its agent-neutral reasons, reasons that I share. Could I have done better by defecting? No, because then we would together have produced an even better outcome, contrary to stipulation. (A similar argument can be found in Regan 1980: Chap. 2.) It might be objected that this better outcome may not have been one that the group could have intentionally brought about. But in that case, the group will have produced the best possible outcome by failing to perform any intentional action. So it should have done nothing rather than what it did, contrary to stipulation.

Second, we could claim that while individuals do have agent-relative reasons for action, they are always overridden in the relevant group contexts. This option is also unattractive. It is highly intuitive to think that I have much stronger reasons to save my own child from danger than to save even five strangers. If we share this intuition, then we are likely to resist the idea that being part of a group which ought to save the strangers is enough to override my personal reasons.

Next, let’s see if we can defuse the problem by focusing on the collective side of the conflict. As before, there are more and less radical ways we could try to do this.

The more radical option would be to abandon the idea of collective reasons altogether. But we would then be losing out on what a number of philosophers find to be a highly plausible and attractive idea, and a promising way to make sense of the judgments we want to make about a variety of examples.

The less radical option is to claim that what we together ought to do can itself be affected by the kinds of considerations that give rise to our personal reasons. For example, in the rescue mission case, it might be claimed that the group does have a special reason to save my child, or at least to act in a way that allows me to save my child. More generally, we might claim that even if I have a reason to bring about some outcome in virtue of some fact that does not apply to the group to which I belong, this always provides the group with a reason to bring about, or at least to allow, this outcome. We can think of this as the idea that just as the group’s reasons can transmit down to me, giving me group-based reasons to do my part, my personal reasons can also transmit up to the group.

However, even if we did think that individual reasons transmit up to the group, it’s not clear that it would solve the problem. Suppose that rather than saving strangers, our rescue mission would save the lives of your two children, but that my doing my part would once again make it impossible for me to rescue my child.
In that case, the group would get reasons for action corresponding to your reasons to rescue your two children, which would presumably be stronger than whatever reasons it got corresponding to my reason to rescue my one child. So again, it seems that we together ought to undertake the rescue mission. But it is still plausible that I ought to defect, and rescue my own child.

Next, rather than claiming that the group gets more or stronger reasons to act in ways that correspond to what we have personal reasons to do, we might claim that when we have personal reasons for action, this can undercut conflicting group reasons. But this seems implausible, at least in cases like Rescue Mission. That is, while it may be that the group should place extra weight on my child’s life in virtue of my relationship to my child, it does not seem that the group should place any less weight on the strangers’ lives. And even if we do think that personal reasons for action undercut conflicting group reasons, it’s not clear that this would help in the “your children vs. my child” version of the case. That is, even if my reasons to rescue my one child have the effect of undercutting the group’s reasons to rescue your children, your reason to rescue your children would similarly undercut the group’s reason to rescue my child, so it may still end up being true that the group ought to rescue your children, while I ought to defect.

Finally, we might claim that in cases like the “your children vs. my child” example, the group does not have reason to favor one parent over the other. For example, we might claim that even if the reasons you have to rescue your two children are stronger than the reason I have to rescue my one child, our reasons nevertheless make equal contributions to the group’s reasons, and any agent-neutral reasons the group has to do whatever would save the most lives are undercut. However, if one parent has much more at stake than the other, and if this is true because there is a difference in the number of lives that would be saved in the two possible rescue missions, it seems implausible that these facts would not at least slightly affect what the group has most reason to do.

6.2. Resisting unit-of-agency dilemmas on conceptual grounds

So much for resisting unit-of-agency dilemmas on substantive normative grounds. Next, opponents might claim that we can rule out unit-of-agency dilemmas on conceptual grounds, just as a number of philosophers claim that we can rule out single-agent dilemmas on conceptual grounds. For one thing, just as some philosophers find single-agent dilemmas to be incoherent on their face, we might claim that unit-of-agency dilemmas are incoherent on their face.

Opponents of unit-of-agency dilemmas might also enlist principles of deontic logic to bolster the charge of incoherence, just as their comrades did in the fight against single-agent dilemmas. As we saw earlier, the claim that there are single-agent dilemmas can generate a contradiction when combined with the following principles:

(1) OA → ¬O¬A
(2) □(A → B) → (OA → OB)

These principles might similarly seem to rule out unit-of-agency dilemmas. That is, if our together acting in some way is obligatory, then since our acting in that way entails my doing my part, (2) seems to entail that my doing my part is obligatory. And (1) seems to entail that my doing my part could not then be forbidden. So it seems that there could not be cases where we together ought to act in some way, but I ought not do my part.
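To make the threatened contradiction explicit, let C stand for our together carrying out the rescue mission, and P for my doing my part (the lettering here is mine; the steps simply spell out the reasoning just given):

(i) OC (our carrying out the rescue mission is obligatory)
(ii) □(C → P) (our carrying out the rescue mission entails my doing my part)
(iii) OP (from (i) and (ii), by (2))
(iv) O¬P (since I ought to rescue my child, and rescuing my child entails not doing my part, again by (2))
(v) ¬O¬P (from (iii), by (1))

And (iv) and (v) contradict one another.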
To respond to this challenge, recall our earlier distinction between unavoidable dilemmas, such as the rescue mission example, and avoidable dilemmas, such as Gibbard’s Case. We have been focusing primarily on unavoidable dilemmas. However, when we consider avoidable dilemmas, it seems hard to avoid admitting that it is indeed conceptually possible for collective and individual reasons to come apart. Again, in Gibbard’s Case, it seems that we together ought to both do B, leading to the best outcome. But if you do A, then my doing B would lead to disaster. In this case, the claim that though we together ought to both do B, I should not do my part seems not only coherent, but correct. And if we do admit that it is conceptually possible for collective and individual reasons to come apart in some cases, it seems hard to believe that we could not coherently claim that they come apart in other cases. In other words, avoidable dilemmas suggest that unit-of-agency dilemmas are conceptually possible, and so it seems hard to deny that unavoidable dilemmas are also conceptually possible.

Now, even if we find this response plausible, it might seem that we have strong reasons to hesitate to give up the principles of deontic logic given above. After all, a number of philosophers are attracted to principles like these, both because of their intuitive plausibility, and because they are able to systematize what seem to be good patterns of reasoning about what we ought to do. 124 In addition, if we do give up these principles, it seems that we could no longer make the conceptual argument against single-agent dilemmas that we saw above. 125

124 For discussion of the latter point, see Nair 2014.
125 Thanks to Abelard Podgorski for raising these points.

However, even if accepting unit-of-agency dilemmas would require us to reject the simple formulations of the principles of deontic logic given above, there are independent reasons to think that we should revise these principles anyway. In particular, there are independent reasons to think that principles of deontic logic must be relativized in some way to particular agents. 126 And principles that are relativized to particular agents can allow for unit-of-agency dilemmas while still ruling out single-agent dilemmas. For example, suppose we add an agent parameter to (1) and (2), yielding the following principles, where OaX means “a is obligated to bring it about that X”:

(1a) OaA → ¬Oa¬A
(2a) □(A → B) → (OaA → OaB)

Even if our group action entails my doing my part, and our group action is obligatory for the group, these principles would not entail that my doing my part is obligatory for me. But the addition of the agent parameter would not affect the argument against single-agent dilemmas, since in that argument only one agent’s actions were at issue in the first place.

126 See Kanger 1971, Ross 2010, and Horty 2001: 49-53.
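To see how the agent parameter blocks the earlier derivation, let g stand for the group and i for me, and again let C be our carrying out the rescue mission and P my doing my part (the lettering is mine). From OgC and □(C → P), (2a) yields only OgP: the group is obligated to bring it about that I do my part. This does not entail OiP. So OgP and Oi¬P can hold together without violating (1a), since (1a) rules out conflicting obligations only when they carry the same agent index. In a single-agent dilemma, by contrast, the conflicting obligations share a single index, and the original contradiction still goes through.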
7. Could unit-of-agency dilemmas be rationally resolved?

I have argued that we have strong reasons to think that there are unit-of-agency dilemmas, and that if they do exist, they will put us in a situation where, if we are conscientious both individually and collectively, we will struggle to decide what to do. But once we find ourselves in such a situation, could ethical reflection help us to resolve this struggle? Could we rationally take both our individual and collective reasons into proper account, and find some appropriate way of deciding between them?

Recall the analogy with single-agent dilemmas. For example, consider the apparent dilemma that I face when I must choose between promoting my self-interest and acting morally. As we saw earlier, there is a sense in which this is a case where I ought to act in two incompatible ways: there is one thing which I ought to do from a self-interested point of view, and another thing which I ought to do from a moral point of view. But as we saw, this does not mean that ethical reflection can provide me with no way to choose. After all, we might say, to talk of what I “morally ought” to do is just to talk about what I have the strongest moral reasons to do, and to talk about what I “ought in self-interested terms” is just to talk about what I have the strongest self-interested reasons to do. And moral reasons and self-interested reasons are both just two species of a broader genus, reasons for action simpliciter. I can decide what to do by considering all of my reasons for action, which determine what I ought to do all things considered.

This analogy might lead us to think that we can rationally adjudicate the conflict between individual and collective reasons in the same way. Individual and collective reasons, we might think, are just another way of carving up reasons for action simpliciter. So the problem is just a problem of balancing our individual reasons against our collective reasons. It may not be obvious how to do this, but there is no reason to think it’s impossible. After all, while it may not be obvious how in general to balance moral reasons against self-interested reasons, many people think that there are at least some examples where we have stronger reasons of one type than the other. And even if we can’t figure out how to compare them, or think that they don’t in themselves admit of comparisons, it seems that it would be rational to act in either way, as we might think in more familiar cases of incomparability.

This analogy, I think, is misleading, and the problem is much more intractable than it suggests. For there does not seem to be any agent who has both individual and collective reasons to balance against one another. By definition, all of the reasons that I have are individual reasons, and all of the reasons that the group has are collective reasons. I already know what I ought to do all things considered, and the group already knows what it ought to do all things considered. For the suggestion to work, it seems, we would have to be able to specify some third unit of agency that encompasses both the individual and the collective, and claim that individual and collective reasons are both ultimately possessed by this neutral agent. But it seems difficult to make sense of what this third unit of agency would be. So it seems difficult to see how ethical reflection could help us to resolve unit-of-agency dilemmas.

8. Conclusion

In this chapter, I have discussed unit-of-agency dilemmas, cases where what we together ought to do comes apart from what we individually ought to do. While there are a number of plausible ways to resist single-agent dilemmas, I have argued, it is not easy to avoid or overcome the challenge posed by unit-of-agency dilemmas. So even if we reject single-agent dilemmas, we may nevertheless have to accept that there are cases in which our considered ethical judgments can pull us, unrelentingly, in opposite directions. In this sense, practical reason may be, as Sidgwick feared, “divided against itself.” 127

127 Sidgwick 1981: 508.

References
Anscombe, G. E. M. 1963. Intention. Second edition. Oxford: Blackwell.
Audi, Robert. 1973. “Intending.” The Journal of Philosophy, 70: 387-403.
Bratman, Michael. 1987. Intentions, Plans, and Practical Reason. Cambridge, Mass.: Harvard University Press.
Bratman, Michael. 1992. “Shared Cooperative Activity.” Philosophical Review, 101: 327-341.
Bratman, Michael. 1999. “I Intend that We J.” In Faces of Intention. Cambridge, UK: Cambridge University Press.
Bratman, Michael. 1999. Faces of Intention. New York: Cambridge University Press.
Bratman, Michael. 2014. Shared Agency. New York: Oxford University Press.
Brink, David O. 1994. “Moral Conflict and Its Structure.” The Philosophical Review, 103: 215-247.
Brink, David O. 1997. “Rational Egoism and the Separateness of Persons.” In Reading Parfit, ed. Jonathan Dancy. Oxford: Blackwell. 96-134.
Brueckner, Anthony and Christopher T. Buford. 2009. “Thinking Animals and Epistemology.” Pacific Philosophical Quarterly, 90: 310-314.
Burke, Michael B. 1994. “Dion and Theon: An Essentialist Solution to an Ancient Puzzle.” The Journal of Philosophy, 91: 129-139.
Butler, Joseph. 1975. “Of Personal Identity.” In The Analogy of Religion, reprinted in Personal Identity, ed. John Perry. Berkeley: University of California Press. 99-105.
Collins, Stephanie. 2013. “Collectives’ Duties and Collectivisation Duties.” Australasian Journal of Philosophy, 91: 231-248.
Collins, Stephanie and Holly Lawford-Smith. 2016. “Collectives’ and Individuals’ Obligations: A Parity Argument.” Canadian Journal of Philosophy, 46: 38-58.
Copp, David. 1997. “The Ring of Gyges: Overridingness and the Unity of Reason.” Social Philosophy and Policy, 14: 86-106.
Crisp, Roger. 2015. The Cosmos of Duty. Oxford: Oxford University Press.
Davidson, Donald. 1980a. “Actions, Reasons, and Causes.” Reprinted in Essays on Actions and Events. Oxford: Clarendon Press. 3-20.
Davidson, Donald. 1980b. “Agency.” Reprinted in Essays on Actions and Events. Oxford: Oxford University Press. 43-61.
Davidson, Donald. 1980c. “Intending.” Reprinted in Essays on Actions and Events. Oxford: Oxford University Press. 83-102.
Davis, Wayne. 1997. “A Causal Theory of Intending.” Reprinted in The Philosophy of Action, ed. A. Mele. Oxford: Oxford University Press. 131-148.
Enoch, David. 2011. Taking Morality Seriously: A Defense of Robust Realism. Oxford: Oxford University Press.
Epstein, Brian. 2015. The Ant Trap: Rebuilding the Foundations of the Social Sciences. Oxford: Oxford University Press.
Epstein, Brian. 2017. “What are social groups? Their metaphysics and how to classify them.” Synthese, doi:10.1007/s11229-017-1387-y.
Gibbard, Allan. 1965. “Rule-Utilitarianism: Merely an Illusory Alternative?”. Australasian Journal of Philosophy, 43: 211-220.
Gibbard, Allan. 2003. Thinking How to Live. Cambridge, Mass.: Harvard University Press.
Gilbert, Margaret. 1990. “Walking Together: A Paradigmatic Social Phenomenon.” Midwest Studies in Philosophy, 15: 1-14.
Gilbert, Margaret. 2014. Joint Commitment. New York: Oxford University Press.
Glover, Jonathan and M. J. Scott-Taggart. 1975. “It Makes No Difference Whether or Not I Do It.” Proceedings of the Aristotelian Society, 49: 171-209.
Goble, Lou. 2009. “Normative Conflicts and The Logic of ‘Ought’.” Noûs, 43: 450-489.
Goble, Lou. 2013. “Prima Facie Norms, Normative Conflicts, and Dilemmas.” In Handbook of Deontic Logic and Normative Systems, ed. D. Gabbay, J. Horty, R. van der Meyden, and L. van der Torre. London: College Publications. 241-352.
Grice, H. P. 1971. “Intention and Uncertainty.” Proceedings of the British Academy, 5: 263-279.
Harman, Gilbert. 1997. “Practical Reasoning.” Reprinted in The Philosophy of Action, ed. A. Mele. Oxford: Oxford University Press. 149-177.
Hawthorne, John. 2006. Metaphysical Essays. Oxford: Oxford University Press.
Hedden, Brian. 2015. Reasons without Persons: Rationality, Identity, and Time. Oxford: Oxford University Press.
Held, Virginia. 1970. “Can a Random Collection of Individuals Be Morally Responsible?”. The Journal of Philosophy, 67: 471-481.
Hill, Thomas E. 1996. “Moral Dilemmas, Gaps, and Residues: A Kantian Perspective.” In Moral Dilemmas and Moral Theory, ed. H. E. Mason. New York: Oxford University Press. 167-198.
Horty, John F. 2001. Agency and Deontic Logic. New York: Oxford University Press.
Horty, John F. 2003. “Reasoning with Moral Conflicts.” Noûs, 37: 557-605.
Hudson, Hud. 2001. A Materialist Metaphysics of the Human Person. Ithaca and London: Cornell University Press.
Hurley, S. L. 1989. Natural Reasons. Oxford: Oxford University Press.
Isaacs, Tracy. 2014. “Collective Responsibility and Collective Obligation.” Midwest Studies in Philosophy, 38: 40-57.
Jackson, Frank. 1987. “Group Morality.” In Metaphysics and Morality, ed. Philip Pettit, Richard Sylvan, and Jean Norman. Oxford: Blackwell. 91-110.
Johnston, Mark. 2017. “The Personite Problem: Should Practical Reason Be Tabled?”. Noûs, 51: 617-644.
Joyce, Richard. 2001. The Myth of Morality. Cambridge: Cambridge University Press.
Kagan, Shelly. 2011. “Do I Make a Difference?” Philosophy and Public Affairs, 39: 105-141.
Kanger, Stig. 1971. “New Foundations for Ethical Theory.” In Deontic Logic: Introductory and Systematic Readings, ed. Risto Hilpinen. Dordrecht: D. Reidel. 36-58.
Killoren, David and Bekka Williams. 2013. “Group Agency and Overdetermination.” Ethical Theory and Moral Practice, 16: 295-307.
Kovacs, David Mark. 2016. “Self-made People.” Mind, 125: 1071-1099.
Kutz, Christopher. 2000. “Acting Together.” Philosophy and Phenomenological Research, 61: 1-31.
Lawford-Smith, Holly. 2015. “What ‘We’?”. Journal of Social Ontology, 1: 225-249.
Lewis, David. 1983. “Survival and Identity.” In Philosophical Papers, Vol. 1. New York: Oxford University Press.
List, Christian and Philip Pettit. 2011. Group Agency. New York: Oxford University Press.
Locke, John. 1975. An Essay concerning Human Understanding, ed. Peter H. Nidditch. Oxford: Oxford University Press.
Marcus, Ruth Barcan. 1980. “Moral Dilemmas and Consistency.” The Journal of Philosophy, 77: 121-136.
McKinnon, Neil. 2008. “A New Problem of the Many.” Philosophical Quarterly, 58: 80-97.
Merricks, Trenton. 2003. “Maximality and Consciousness.” Philosophy and Phenomenological Research, 66: 150-158.
Miller, Kristie. 2006. Issues in Theoretical Diversity: Persistence, Composition, and Time. Dordrecht: Springer Netherlands.
Nagel, Thomas. 1970. The Possibility of Altruism. Oxford: Clarendon Press.
Nair, Shyam. 2014. “Consequences of Reasoning with Conflicting Obligations.” Mind, 123: 753-790.
Nair, Shyam. 2016. “Conflicting reasons, unconflicting ‘oughts’.” Philosophical Studies, 173: 629-663.
Nefsky, Julia. 2011. “Consequentialism and the Problem of Collective Harm: A Reply to Kagan.” Philosophy and Public Affairs, 39: 364-395.
Nefsky, Julia. 2012. “The Morality of Collective Harm.” Ph.D. dissertation, University of California, Berkeley.
Nefsky, Julia. 2015. “Fairness, Participation, and the Real Problem of Collective Harm.” In Oxford Studies in Normative Ethics, Vol. 5, ed. Mark Timmons. Oxford: Oxford University Press.
Nefsky, Julia. 2016. “How You Can Help, Without Making a Difference.” Philosophical Studies, doi:10.1007/s11098-016-0808-y.
Nichols, Shaun. 2014. “The Episodic Sense of Self.” In Moral Psychology and Human Agency, ed. Justin D’Arms and Daniel Jacobson. Oxford: Oxford University Press. 137-155.
Olson, Eric T. 1997. The Human Animal. New York: Oxford University Press.
Olson, Eric T. 2002. “Thinking Animals and the Reference of ‘I’.” Philosophical Topics, 30: 189-207.
Olson, Eric T. 2007. What Are We? A Study in Personal Ontology. Oxford: Oxford University Press.
Olson, Eric T. 2010. “Ethics and the generous ontology.” Theoretical Medicine and Bioethics, 31: 259-270.
Parfit, Derek. 1984. Reasons and Persons. Oxford: Clarendon Press.
Parfit, Derek. 1988. “What We Together Do” (ms.).
Parfit, Derek. 2011. On What Matters. Oxford: Oxford University Press.
Pettit, Philip. 2003. “Akrasia, Collective and Individual.” In Weakness of Will and Practical Irrationality, ed. Sarah Stroud and Christine Tappolet. Oxford: Oxford University Press.
Regan, Donald. 1980. Utilitarianism and Co-operation. New York: Oxford University Press.
Ritchie, Katharine. 2013. “What are groups?” Philosophical Studies, 166: 257-272.
Ritchie, Katharine. 2015. “The Metaphysics of Social Groups.” Philosophy Compass, 10: 310-321.
Ross, Jacob. 2009. “How to Be a Cognitivist about Practical Reasoning.” Oxford Studies in Metaethics, 4: 243-281.
Ross, Jacob. 2010. “The Irreducibility of Personal Obligation.” Journal of Philosophical Logic, 39: 307-323.
Rovane, Carol. 1998. The Bounds of Agency: An Essay in Revisionary Metaphysics. Princeton, N. J.: Princeton University Press.
Russell, Jeffrey Sanford. 2008. “The Structure of Gunk: Adventures in the Ontology of Space.” Oxford Studies in Metaphysics, 4: 248-274.
Schroeder, Mark. 2007. Slaves of the Passions. New York: Oxford University Press.
Schroeder, Mark. 2011. “Ought, Agents, and Actions.” The Philosophical Review, 120: 1-41.
Schwenkenbecher, Anne. 2014. “Joint Moral Duties.” Midwest Studies in Philosophy, 38: 59-74.
Searle, John. 1990. “Collective Intentions and Actions.” In Intentions in Communication, ed. P. Cohen, J. Morgan, and M. Pollack. Cambridge: MIT Press. 401-415.
Sebo, Jeff. 2015a. “The Just Soul.” Journal of Value Inquiry, 49: 131-143.
Sebo, Jeff. 2015b. “Multiplicity, self-narrative, and akrasia.” Philosophical Psychology, 28: 589-605.
Shoemaker, David. 1999. “Selves and Moral Units.” Pacific Philosophical Quarterly, 80: 391-419.
Sider, Theodore. 2001. Four-Dimensionalism: An Ontology of Persistence and Time. Oxford: Clarendon Press.
Sider, Theodore. 2003. “Maximality and Microphysical Supervenience.” Philosophy and Phenomenological Research, 66: 139-149.
Sidgwick, Henry. 1981. The Methods of Ethics. Indianapolis: Hackett.
Singer, Peter. 2011. The Expanding Circle. Princeton, NJ: Princeton University Press.
Sinnott-Armstrong, Walter. 2005. “It’s Not My Fault.” In Perspectives on Climate Change, ed. Walter Sinnott-Armstrong and Richard Howarth. Elsevier. 221-253.
Snedegar, Justin. 2017. Contrastive Reasons. Oxford: Oxford University Press.
Strawson, Galen. 2004. “Against Narrativity.” Ratio, 17: 428-452.
Strawson, Galen. 2009. Selves. New York: Oxford University Press.
Thomson, Judith Jarvis. 1990. The Realm of Rights. Cambridge, Mass.: Harvard University Press.
Thomson, Judith Jarvis. 2008. Normativity. Chicago: Open Court.
Tuomela, Raimo and Kaarlo Miller. 1988. “We-Intentions.” Philosophical Studies, 53: 367-389.
van Fraassen, Bas. 1973. “Values and the Heart’s Command.” The Journal of Philosophy, 70: 5-19.
van Inwagen, Peter. 1981. “The Doctrine of Arbitrary Undetached Parts.” Pacific Philosophical Quarterly, 62: 123-137.
Wedgwood, Ralph. 2007. The Nature of Normativity. Oxford: Oxford University Press.
Williams, Bernard. 1973. “A Critique of Utilitarianism.” In Utilitarianism: For and Against, ed. J. J. C. Smart and Bernard Williams. New York: Cambridge University Press.
Williams, Bernard. 1973. “Ethical Consistency.” In Problems of the Self. Cambridge, UK: Cambridge University Press. 166-186.
Woodard, Christopher. 2008. Reasons, Patterns, and Cooperation. New York: Routledge.
Woodard, Christopher. 2017. “Three Conceptions of Group-Based Reasons.” Journal of Social Ontology, 3: 107-127.
Wringe, Bill. 2016. “Collective Obligations: Their Existence, Their Explanatory Power, and Their Supervenience on the Obligations of Individuals.” European Journal of Philosophy, 24: 472-497.
Abstract
In this dissertation, I explore which units of agency should be of relevance to ethics. I argue that obligations and normative reasons for action can be possessed not only by individual persons, but also by larger units, namely groups, and by smaller units, namely the temporal parts of persons. I also defend views about how obligations and reasons for action possessed at different units of agency interact: for example, about when I have reason to do my part in what the group ought to do, and about how there could be irresolvable conflicts between individual and collective obligations.