Reasons, Obligations, and the Structure of Good Reasoning

Gopal Shyam Nair
University of Southern California

Table of Contents
Acknowledgements
Introduction
Chapter One: Conflicting Reasons, Unconflicting Obligations
Chapter Two: Consequences of Reasoning with Conflicting Obligations
Chapter Three: Must Good Reasoning Be Cumulative Transitive?
Chapter Four: Bootstrapping, Dogmatism, and the Structure of Epistemic Justification
References

Introduction

0. The Aim

This dissertation concerns a series of overlapping structural questions in normative theory. Each chapter discusses a different question and is written so that it can be read without reading any of the other chapters. Unfortunately, in order to make the chapters free-standing pieces, the ways in which the questions that they consider overlap are often left implicit. The purpose of this introduction is to give an overview of each chapter, to describe the connections between these chapters more explicitly, and to situate them in the context of a more general set of questions that are only partially answered by the work done here.

Chapters one and two focus on metaphysical, epistemological, and logical issues in ethics. Chapter two develops a theory of good reasoning about what we ought to do and what we have reason to do. It provides a bridge between the issues in ethics discussed in chapters one and two and the issues in philosophical logic and epistemology that are at center stage in chapters three and four.

In what follows, I summarize the claims of chapters one and two and explain how they are connected (§1). I then summarize chapter three and explain its connection to chapter two (§2). Finally, I summarize chapter four and explain its connection to chapter three (§3).

1. Reasons and Obligations

Some normative notions allow for conflicts. For example, consider the notion of a reason. I might have a reason to work on a paper tonight and a reason to go out with friends even though I cannot do both. A central contention of chapters one and two is that this simple fact has far-reaching implications for the relationship between reasons and obligations and for the structure of reasoning about reasons and obligations.

1.1 Chapter One

I begin exploring the implications of conflicting reasons in chapter one ("Conflicting Reasons, Unconflicting Obligations"). In this chapter I focus on the increasingly popular idea in moral philosophy that reasons explain obligations. This idea is the modern descendant of an idea first developed by W.D. Ross in The Right and the Good. And the concerns that this chapter addresses are ones that were live to Ross. Ross thought that there could be no helpful theory of how reasons come together to determine what we ought to do. Critics of Ross have, I think rightly, found this to be an unsatisfying feature of his view.

Indeed, I believe that this concern is more pressing than has been appreciated. To see this, notice that the fact that reasons can conflict can seem like an attractive feature of Ross's idea. For example, the fact that reasons conflict promises to give a tidy explanation of why agents often face the task of choosing among a variety of incompatible options each of which has something to be said for it and of how the competition of these conflicting considerations determines what such an agent ought to do.
But I argue that this fact actually poses a cluster of problems for the idea that we can explain what we ought to do in terms of reasons: there are intuitive and theoretical considerations from both ethics and deontic logic that force the advocate of this popular idea to claim that there are certain entailments among reasons. But it turns out to be no easy task to come up with a set of entailments that adequately captures the intuitive and theoretical phenomena without leading to unacceptable results.

After elaborating on this cluster of problems, I develop a unified solution to it. The solution has two important upshots. One is that the familiar distinction in moral philosophy between non-derivative (or intrinsic) normative notions and derivative (or extrinsic) normative notions is surprisingly important because solving this cluster of problems requires us to make use of the distinction between derivative and non-derivative reasons. The other is that the most natural and widely accepted idea about how reasons explain obligations is incorrect. In particular, most who accept this popular idea believe that if an agent (non-derivatively) ought to do some act, this is in part explained by the fact that there is a (non-derivative) reason for the agent to do that act. My solution entails that this is false. Instead, facts about what an agent (non-derivatively) ought to do are explained by complex logical facts about what that agent has (non-derivative) reason to do. Very roughly, the idea is that sometimes it can be that an agent ought to do a not because there is a non-derivative reason to do a itself, but instead because the agent's non-derivative reasons support a plan that can be accomplished only if the agent does a.

1.2 Chapter Two

In chapter two ("Consequences of Reasoning with Conflicting Obligations"), I turn from the question of how reasons explain obligations to the question of reasoning with normative notions that allow for conflict. In particular, the chapter considers the question of how we may reason with obligations on the assumption that obligations can conflict. I argue that while there are well-known logical problems concerning the existence of conflicting obligations, there are also distinctive problems concerning reasoning with such obligations that have received less direct attention. After showing that solutions to these logical problems do not also solve the reasoning problems, I develop my own theory of reasoning.

The theory of reasoning that I develop depends on a background picture about the nature of obligation; namely, that reasons explain obligations. I then show how this idea about the nature of obligation together with other assumptions allows us to develop a theory of reasoning with conflicting obligations. The theory solves the problems about reasoning in part by entailing that good reasoning does not have the structural property of cumulative transitivity. Roughly, for good reasoning to satisfy cumulative transitivity—for short, cut—is for it to be the case that if two pieces of reasoning are good on their own, then a larger piece of reasoning that consists of performing these two pieces of reasoning back-to-back is also good. The theory that I develop entails that in a certain limited range of cases such back-to-back reasoning is not good. And it explains why this is so and how this solves the problem of reasoning with conflicting obligations.
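To fix ideas, cut can be given a schematic statement in the notation that §2.2 below introduces, where Γ ⊩ φ says that the transition from the beliefs corresponding to the sentences in Γ to the belief that φ is a piece of good reasoning. Stated this way (as a gloss on the informal characterization above, not as an additional commitment), cut says:

   if Γ ⊩ φ for all φ ∈ Δ and Γ ∪ Δ ⊩ ψ, then Γ ⊩ ψ

That is, if reasoning from Γ to each member of Δ is good, and reasoning from Γ together with Δ to ψ is good, then the back-to-back reasoning from Γ alone to ψ is good as well. It is this principle that the theory of chapter two denies.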
1.3 Agglomeration

Now that we have an overview of each chapter in hand, let me explain how they are connected. To do this, I will look at the particular way these theories deal with the issue of so-called "agglomeration". The issue of agglomeration concerns the conditions under which we can go from an obligation or a reason to do each of two things to an obligation or a reason to do both. Chapter one discusses this issue when it describes what entailments hold among reasons. This concerns when it is a logically valid inference to conclude 'there is a reason to do φ and ψ' from some claims about the reasons to do φ and the reasons to do ψ. Chapter two discusses under what conditions it is good reasoning to conclude 'it ought to be that φ and ψ' from some claims about obligations to do φ and obligations to do ψ.

In order to compare these two treatments, we must begin by being clear on two points. First, the treatment of agglomeration in chapter one concerns reasons and the treatment in chapter two concerns obligations. In what follows, I will translate the theory of chapter two into a theory of reasoning about reasons in order to facilitate easier comparison between the views in chapter one and chapter two. Second, the treatment of agglomeration in chapter one concerns logically valid inferences while the treatment in chapter two concerns good reasoning. We can distinguish these at least in principle. And as I will explain, logic and reasoning do come apart in practice when we consider the case of agglomeration.

1.3.1 The Logic of Agglomeration

With this in mind, let me now present chapter one's theory of when agglomeration is logically valid. According to the theory of chapter one, 'there is a reason to do φ and ψ' is not a logical consequence of 'there is a reason to do φ' and 'there is a reason to do ψ' even if φ is consistent with ψ. That is, I reject the following claim:

Consistent Reasons Agglomeration (CRA): if there is a reason for S to do φ, there is a reason for S to do ψ, and φ is consistent with ψ, then there is a reason for S to do φ and ψ

As I explain in the chapter, CRA leads to unacceptable results. More precisely, the problem arises if we accept CRA together with the uncontroversial assumption that reasons can conflict as well as the following controversial but plausible claim:

Single Reasons Closure (SRC): if there is a reason for S to do φ and φ entails ψ, then there is a reason for S to do ψ

To illustrate the problem in a simple way, let's suppose there is the following particular instance of conflicting reasons:

(1) There is a reason to meet Sam for a drink
(2) There is a reason not to meet Sam for a drink

By SRC applied to (1), we may infer:

(3) There is a reason to meet Sam for a drink or torture a kitten

Since it is possible to meet Sam for a drink or torture a kitten and not meet Sam for a drink, we may apply CRA to (2) and (3) and infer:

(4) There is a reason to meet Sam for a drink or torture a kitten and not meet Sam for a drink

Finally by SRC again, we may infer:

(5) There is a reason to torture a kitten

Thus, the existence of conflicting reasons concerning meeting Sam for a drink together with SRC and CRA entails the existence of the intuitively unacceptable reason to torture a kitten. Chapter one elaborates on this problem, argues that the best solution to it is to reject CRA, and offers a replacement agglomeration principle.
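It may help to display this derivation schematically. Writing R(φ) for 'there is a reason to do φ', d for meeting Sam for a drink, and k for torturing a kitten (abbreviations adopted here purely for display), the argument runs:

   1. R(d)                premise
   2. R(¬d)               premise
   3. R(d ∨ k)            from 1 by SRC, since d entails d ∨ k
   4. R((d ∨ k) ∧ ¬d)     from 2 and 3 by CRA, since ¬d is consistent with d ∨ k
   5. R(k)                from 4 by SRC, since (d ∨ k) ∧ ¬d entails k

Nothing in the derivation depends on what k is, so whenever reasons conflict, parallel derivations generate a reason to do anything consistent whatsoever.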
As I alluded to in the overview of this chapter, an important component of the solution involves distinguishing between derivative reasons and non-derivative reasons. Derivative reasons are reasons in virtue of standing in some important relation to other reasons. Non-derivative reasons are reasons that are not derivative. This distinction allows us to provide the following replacement agglomeration principle for CRA:

Non-derivative CRA (NCRA): if there is a non-derivative reason to do φ, a non-derivative reason to do ψ, and φ is consistent with ψ, then there is a derivative reason to do φ and ψ

This replacement principle avoids the problem. Here is how it does so. We assume that non-derivative reasons are reasons but not all reasons are non-derivative. We assume that even non-derivative reasons can conflict. And we continue to assume SRC holds. However, we do not assume that a non-derivative reason to do φ entails a non-derivative reason to do ψ when φ entails ψ. Instead, the reason to do ψ is derivative in virtue of the important logical relationship between it and the non-derivative reason to do φ.

To see how this avoids the explosion problem, suppose that non-derivative reasons conflict:

(1) There is a non-derivative reason to meet Sam for a drink
(2) There is a non-derivative reason not to meet Sam for a drink

Since non-derivative reasons are a kind of reason, we may apply SRC to (1) and generate:

(3) There is a reason to meet Sam for a drink or torture a kitten

The next step in our derivation had us use CRA to derive the following claim from (2) and (3):

(4) There is a reason to meet Sam for a drink or torture a kitten and not meet Sam for a drink

Since we reject CRA, this step is blocked. Now NCRA would apply if (2) and (3) were claims about non-derivative reasons. But since (3) is not a claim about non-derivative reasons and since we reject the idea that a non-derivative reason to do φ entails a non-derivative reason to do ψ if φ entails ψ, we cannot generate some non-derivative reasons version of (3) from (1). Thus, we avoid this explosion result by blocking the move to (4).

As the discussion in chapter one makes clear, this treatment of the puzzle and of agglomeration is motivated primarily by considerations in the metaphysics of morals concerning reasons, obligations, and their connections. The situation, however, turns out to look very different when we consider the analogous puzzle that arises in reasoning about what we have reason to do.

1.3.2 The Inadequacy of NCRA for Reasoning

If we were to apply this theory of the logic of agglomeration "off the shelf" to get a theory of reasoning, the idea would be that it is good reasoning to agglomerate reasons just in case doing so corresponds to the valid inference, NCRA. While I agree that reasoning that corresponds to NCRA is good, I do not believe that this can be the whole story about agglomerative reasoning.

To see why, consider cases of agglomerative reasoning. Suppose for example that I know that I have a reason to fight in the army or perform alternative public service. And suppose that I know that I have a reason to not fight in the army. And suppose that's all the relevant information that I have bearing on these issues. It is reasonable for me to reason as follows: given these two reasons, I have a reason to fight in the army or perform alternative public service and not fight in the army. How could we use NCRA to explain this case?
Begin by noticing that at face value the case does not involve any claims about non-derivative reasons. Instead, it simply uses the ordinary notion of a reason. So if we take things at face value, NCRA does not explain this case. But we could claim that we should not take the reasoning in this case at face value but instead should think of it as enthymematic or equivocating for reasoning with claims about non-derivative reasons. More precisely, the enthymematic strategy would say that we tacitly rely on the suppressed premises 'there is a non-derivative reason to fight or serve' and 'there is a non-derivative reason to not fight' and conclude 'there is a reason to fight or serve and not fight'. The equivocating strategy on the other hand would be that the premises really are 'there is a reason to fight or serve' and 'there is a reason to not fight' and the conclusion really is 'there is a reason to fight or serve and not fight'. But our reason-talk is context sensitive, and 'reason' as it occurs in the context of the premises picks out the notion of a non-derivative reason while 'reason' as it occurs in the context of the conclusion picks out a derivative reason (or perhaps, simply the ordinary notion of a reason that does not distinguish between derivative and non-derivative reasons).

In chapter two, I give a general response to resorting to enthymematic or equivocating strategies like this. I begin by pointing out that absent independent argument, it is ad hoc to claim that reasoning that does not appear equivocating or enthymematic on its face is in fact equivocating or enthymematic. Now since we know that CRA does not correspond to a valid inference, we know that taken at face value, the reasoning in Smith's case does not correspond to a valid inference. And this may seem like just the kind of independent argument that we need. But as I point out, it is not generally true that reasoning that on its face doesn't correspond to a valid inference is really enthymematic or equivocating for reasoning that corresponds to a valid inference. Inductive and abductive reasoning testify to this fact. Moreover, if we consider what kinds of suppressed premises or equivocation we would have to attribute to inductive or abductive reasoning in order to claim that they do correspond to valid inferences, it is easy to see that agents are not generally in a position to know such things.

Comparing chapters one and two enables us to take this general idea and apply it more specifically to the case that we are considering. Chapter one tells us exactly what the suppressed premise or equivocation would have to be in order for agglomerative reasoning to correspond to a valid inference. It tells us that we would have to start out with claims about what we have non-derivative reason to do. And as with inductive and abductive reasoning, it is simply not plausible that we are in a position to know claims about what we have non-derivative reason to do.

To see why, begin by reminding ourselves of what the derivative/non-derivative distinction is and why it is so important in moral philosophy. In the first instance, the distinction is a theoretical one. Theorists distinguish reasons according to their place within an explanatory structure. The utility of this distinction is that theorists can settle questions about what reasons there are in two stages. First they decide what the non-derivative reasons are. Then they give an account of the relations to non-derivative reasons that lead to derivative reasons existing.
Together, these claims provide an account of what all the reasons are in the ordinary sense; the primary data such theories aim to explain are what the reasons are in an ordinary non-theoretical sense of that notion. So the theoretical distinction is useful because it allows us to make perspicuous the explanatory structure of the theory.

I take this observation to show at least two things. First, most ordinary people do not have the concept of a non-derivative reason. It is a theorist's concept. This is not to say the concept of a non-derivative reason is not one that is expressible in ordinary language or thought. Rather, it is just to say that most ordinary people do not in fact express it in ordinary language or thought because they are not deploying the conceptual resources needed to draw the fine-grained derivative/non-derivative distinction among reasons. Second, even those like you and I who have the concept of a non-derivative reason and deploy it on certain occasions do not employ the notion of a non-derivative reason in ordinary contexts. Often this is because in ordinary contexts we don't bother to attend to such fine-grained distinctions.

But even if we were to attend to such distinctions, we may still be reluctant to deploy the notion. This is because what exactly the non-derivative reasons are is highly controversial. For example, even if we all agree that promises are sources of non-derivative reasons (which is controversial in itself), there is still disagreement on exactly what promises give us reason to do. Does a promise count in favor of the promised act, promise non-breaking, maximizing promise fulfillment, etc.?

The point here is not that we can't or don't sometimes know what non-derivative reasons we have. There very well may be cases like that. And in such cases, I am happy to say that we do best to reason in accordance with NCRA. The point rather is that the vast majority of cases are not like this. And nonetheless, we do reason agglomeratively even in these cases and this appears to be sensible. This shows that we cannot explain the full range of cases of good agglomerative reasoning by saying that such reasoning is enthymematic or equivocating for reasoning that corresponds to NCRA. And indeed, since NCRA tells us when it deductively follows that we can agglomerate reasons, this tells us that our agglomerative reasoning is non-deductive.

1.3.3 Ampliative Agglomerative Reasoning

Reflection on these cases, then, suggests that agglomerative reasoning is ampliative in the sense that we reason to conclusions that are not guaranteed to be true by our premises. What's more, the fact that we find the reasoning in these cases acceptable suggests that this ampliative reasoning is sensible. It is reasonable to agglomerate our reasons in these cases and we could not do so if we merely reasoned deductively.

Chapter two then provides an account of such ampliative reasoning. According to the account, consistent agglomerative reasoning itself is good. And this leads to a straightforward explanation of Smith's case taken at face value. Of course, we saw that consistent agglomeration is an unacceptable logical principle because it leads to the unacceptable result that we have a reason to torture a kitten. But consistent agglomeration as a principle of reasoning is not subject to this same worry because good reasoning has a different structure from logical consequence. Whereas logical consequence satisfies cut, chapter two claims good reasoning does not satisfy cut.
Satisfying cut ensures that if we have two logically valid inferences, a larger inference that consists of doing them back-to-back is also logically valid. This means we cannot accept CRA and SRC interpreted as claims about logical validity without saying that we can do them back-to-back. But chapter two claims that good reasoning does not satisfy cut. This means that we cannot always string two independently good pieces of reasoning together to form a larger chain of reasoning that is also good. In particular, the idea is that though we may reason agglomeratively from our starting points, we may not first perform reasoning that corresponds to SRC with our starting points and then reason agglomeratively with the results of that. This blocks the explosive reasoning because explosive reasoning has us first reason in a way that corresponds to SRC and then apply agglomerative reasoning to the results. That is bad reasoning according to the approach developed in chapter two.

Thus the key idea is this: the logical and metaphysical issues concerning reasons, obligations, and their relationship are best solved in part by making use of an important theoretical notion in moral philosophy, the notion of a non-derivative reason. This solution however cannot be applied to the everyday reasoning of agents because agents do not generally know the answers to theoretical questions about which reasons are non-derivative. Instead, agents adopt an ampliative inference strategy that allows them to reason productively in the face of their ignorance about these theoretical questions. An upshot of this ampliative strategy is that good reasoning fails to satisfy cut.

1.4 Looking Forward

That is the main way in which these two theories fit together. In work in progress (Nair ms), I develop a framework that generalizes and makes formal these connections that I have just informally described. So for example, in this introduction I have been translating the theory in chapter two so that it applies to reasoning about reasons. I have not however spelled out explicitly in this dissertation how to generalize the theory in chapter two to give a theory of reasoning with reasons. This work in progress does this and more.

But even with the results of this work in progress in hand, there are further remaining issues to be worked out. In particular, the work done so far has focused on issues that arise due to the fact that reasons can conflict. In the future, I wish to explore a number of outstanding questions about how reasons explain obligations that arise due to the fact that reasons come in different strengths. There are at least three issues that arise here.

First, in my view, the most worked-out theories about how reasons determine what we ought to do either do not directly tell us when one reason is stronger than another or do not capture all of the generalizations about strength of reasons that we would like to capture. What these theories are good at doing is modelling the complicated ways in which reasons, including reasons to give priority to other reasons, interact to determine what we ought to do. Except for the reservation that I will discuss below, I find the results of these theories about how reasons determine what we ought to do satisfying. But the theories often do not give us the resources to represent when one reason is stronger than another in the object language of the theory. And even if they do, they do not enable us to prove certain high-level connections between reasons and what we ought to do.
For example, most who believe reasons explain what we ought to do believe that we ought to do some act just in case there is better reason to do that act than any alternative to it. But this is not a theorem of these views. I do not take these to be principled problems for the general kind of approach that these theories take but rather simply suggestive of ways in which we must build on and reinterpret these theories. A first step toward doing that is to provide an adequately rich representation of what reasons there are. And this is what I have begun to do in chapter one. The next step is to try to expand this approach to capture high-level generalizations about weight such as the one that I just described.

Second, even the most worked-out theories have trouble explaining when and modelling how two reasons to do the same act "add up" to further support that act. In particular, there appear to be cases where we have two reasons to do some act φ and each of those reasons is individually weaker than a reason to do ψ, but collectively the two reasons to do φ are stronger than the reason to do ψ. We would like to be able to accommodate and model this kind of case.

There are of course certain obvious ways of doing this. For example, we may associate reasons with a numerical strength and "add up" the strength of the individual reasons to get the strength they together provide. Unfortunately, this approach faces two difficulties. First, the spirit of most of the views about reasons that I like treats strength as qualitative and not quantitative and treats strength holistically in that it is partly determined by what other reasons there are. It is not obvious that this quantitative model fits with these ideas. Second, not all cases in which there are two reasons appear to "add up" in the same way or at all. It is not obvious how this can be modeled in the numerical framework. 1

1 This is meant as a challenge and not as a decisive objection to this approach. And there are promising routes to pursue to meet the challenge. First, measurement theory teaches us that if we have orderings with certain features we can construct a numerical representation of a certain sort. It is an interesting question how this theory might be applied to reasons and whether it can satisfactorily give us numerical strengths for reasons. Second, it may be possible to treat the numerical strengths not as fixed points but as default starting points and thereby allow other reasons to bear on the ultimate strength of a reason. This second point also holds promise for accommodating cases where "adding up" works differently or not at all.

Another account would take as basic the notion of the strength of a collection of reasons. And yet a third account would determine the strength of a collection by taking the strength of the conjunction of the reasons in the collection as basic. The trouble with these approaches is that they treat "adding up" as brute. It would be best to provide a systematic theory of the dynamics of "adding up" rather than take it as a brute fact. The first step in developing such a theory is getting a better sense of what the dynamics of "adding up" actually are and seeing if they suggest any constraints on the strength of a collection of reasons that is determined by the strength of an individual reason. For example, one promising but not obviously true constraint is that the strength of a collection of reasons to do φ is bounded below by the strength of the weakest reason in that collection.
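A toy numerical case (offered here only as an illustration, not as part of the theory) makes both the phenomenon and the constraint concrete. Suppose reasons r1 and r2 each support doing φ with strength 3, while a single reason r3 supports doing ψ with strength 5:

   individually: strength(r1) = strength(r2) = 3 < 5 = strength(r3)
   simple addition: strength({r1, r2}) = 3 + 3 = 6 > 5
   lower-bound constraint: strength({r1, r2}) ≥ min(3, 3) = 3

Individually each reason to do φ is weaker than the reason to do ψ, yet on the additive rule the pair collectively outweighs it. The lower-bound constraint, by contrast, is satisfied by simple addition but also by many non-additive rules, which is what we want if not all reasons "add up" in the same way or at all.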
Perhaps there will not be enough constraints to give a fully non-brutal account of "adding up", but it would be good to see how far we can get.

Finally, I also believe that there will need to be a distinctive theory of reasoning about how strong reasons are. This is because the theory of the strength of reasons to be developed will also make prominent use of the theoretical notion of a non-derivative reason. As I have said, this notion is not one that agents typically are in a position to know about. Nonetheless, agents do often reason about the strength of reasons. Thus, a similar kind of ampliative theory of reasoning about strength will have to be developed.

Thus, chapters one and two are the first steps in this larger project of providing an adequate logic, metaphysics, and epistemology of reasons, obligations, and their connection with one another.

2. Good Reasoning in General

As we have seen, chapter two says that good reasoning with reasons and obligations fails to satisfy cut. Chapter three explores this issue of reasoning that fails to satisfy cut from a more general perspective.

2.1 Chapter Three

Recall that for good reasoning to satisfy cut is for it to be the case that if two pieces of reasoning are good considered on their own, then a larger piece of reasoning that consists of performing these two pieces of reasoning back-to-back is also good. Of course, the practice of performing pieces of reasoning back-to-back is ubiquitous in both our practical and theoretical reasoning. And as I point out in chapter three ("Must Good Reasoning Be Cumulative Transitive?"), the assumption that we may perform such reasoning back-to-back is implicated in almost every puzzle about reasoning that arises in philosophy, logic, and computer science. What's more, except for a few notable outliers, there is a broad convergence among formal theories of reasoning on the idea that good reasoning satisfies cut.

This raises the natural question of whether this convergence among formal theories reveals a deeper truth about the nature of good reasoning—that it must satisfy cut. In order to evaluate this question, I extract a series of arguments from work in philosophical logic and computer science that suggest that these theories do reveal this deeper truth. I then show how each of these initially plausible arguments actually fails. Seeing this much opens the door for good reasoning to fail to satisfy cut, but it does not explain why or how it might fail to satisfy it. To answer this question, I show that a broadly foundationalist epistemology not only is compatible with good reasoning failing to satisfy cut but, combined with certain other claims, actually entails that good reasoning fails to satisfy cut.

2.2 The Foundationalist Interpretation of Reasoning about Reasons and Obligations

We can situate the theory of reasoning developed in chapter two within this foundationalist picture. To see this, I must sketch a few more of the details of the picture that emerges in chapter three. Let's begin by being more precise about reasoning and its structure. First let us write Γ ⊩s,t φ to represent the claim that for an agent s at a time t, the transition from belief in each proposition expressed by the sentences in Γ to the belief in the proposition expressed by φ is a piece of good reasoning. For simplicity, let us leave implicit reference to agents and times and just write Γ ⊩ φ.
Next we may define two properties that this good reasoning relation might have:

⊩ satisfies monotonicity just in case: if Γ ⊩ φ and Γ ⊆ Δ, then Δ ⊩ φ

⊩ satisfies cut just in case: if Γ ⊩ φ for all φ ∈ Δ and Γ ∪ Δ ⊩ ψ, then Γ ⊩ ψ

Good reasoning is known to fail to satisfy monotonicity so understood. For example, it is good reasoning to conclude 'Tweety flies' from 'Tweety is a bird' and 'Normally, birds fly'. But it is not good reasoning to conclude 'Tweety flies' from 'Tweety is a bird', 'Normally, birds fly', 'Tweety is a penguin', and 'Normally, penguins don't fly'.

We can think of failures of monotonicity roughly as follows: Claims about good reasoning are claims about defeasible permissibility. In particular, we may say that the belief that Tweety is a bird and the belief that normally, birds fly defeasibly permit you to believe Tweety flies. The permission is defeasible because it can be "defeated" by other beliefs that you have, such as the belief that Tweety is a penguin and normally penguins don't fly. So (informally at least) we can think of claims about defeasible permission as essentially saying the belief that Tweety is a bird and the belief that normally, birds fly permit you to believe that Tweety flies unless you believe that Tweety is a penguin and normally, penguins don't fly.

If we implement this picture within the foundationalist framework, it turns out good reasoning fails to satisfy cut if some 'unless'-clauses mention not just which beliefs you have but also the distinctive organizational structure of those beliefs. Here is an abstract example of how this works: Suppose (1) the beliefs corresponding to Γ defeasibly permit the belief that φ and (2) the beliefs corresponding to Γ together with the belief that φ defeasibly permit the belief that ψ. And suppose that if we unpack (2)'s 'unless'-clause it says the beliefs corresponding to Γ together with the belief that φ defeasibly permit the belief that ψ unless the belief that φ is solely based on the beliefs corresponding to Γ.

These claims make two predictions: the first does not involve a failure of cut, the second does. First, if we also assume that the belief that A defeasibly permits the belief that φ, they predict that we can have a situation like this:

   φ                     φ    ψ
   ⇑        Add A        ⇑
   Γ       ------->      Γ, A

This diagram is to be interpreted as follows: The left hand side of it depicts an agent who starts out with a permissible foundational belief in each of the propositions expressed by the sentences in Γ. And this agent permissibly non-foundationally believes that φ based on good reasoning from Γ. Then, perhaps due to perception, A is added to the agent's foundations. So on the right hand side of the picture we see that the agent now also has a permissible foundational belief that A. This results in the agent now permissibly non-foundationally believing that ψ based on good reasoning from Γ and A.

Let's look at how (1) and (2) predict this structure. (1) predicts the left hand side will have φ as permissible. And (2) says that ψ will not be permissible because (2)'s 'unless'-clause is triggered: φ is solely based on Γ. Now look at the right hand side. φ again is permissible because of (1). But now (2) also tells us ψ is permissible as well. This is because we are assuming A permits φ, and this means φ is no longer solely based on Γ, so (2)'s 'unless'-clause is no longer triggered.
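In this notation, the Tweety case is a counterexample to monotonicity. Taking Γ = {'Tweety is a bird', 'Normally, birds fly'} and Δ = {'Tweety is a penguin', 'Normally, penguins don't fly'}, we have:

   Γ ⊩ 'Tweety flies'    but    Γ ∪ Δ ⊮ 'Tweety flies'

And the failure of cut that the second prediction below describes has the analogous shape:

   Γ ⊩ φ and Γ ∪ {φ} ⊩ ψ    but    Γ ⊮ ψ

Both displays simply restate, in the ⊩ notation, claims made in the surrounding text.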
Another way the 'unless'-clause of (2) could no longer be triggered is by φ becoming part of the foundations. This is because foundational beliefs are not based on any other beliefs according to foundationalism. This leads to the second prediction:

   φ                     ψ
   ⇑        Add φ        ⇑
   Γ       ------->      Γ, φ

This illustrates a failure of cut. Here is how. The right hand side says that Γ, φ ⊩ ψ. This is because if your foundational beliefs corresponding to Γ are permissible and your foundational belief that φ is permissible, then you would be permitted to believe that ψ according to (2). The picture before adding φ shows us that Γ ⊩ φ because of (1). And it shows us that Γ ⊮ ψ because the 'unless'-clause of (2) is triggered—φ is based solely on the beliefs corresponding to Γ.

We may now apply this abstract model to reasoning about our reasons. Consider the following claims: (3) the belief that there is a reason to do φ defeasibly permits the belief that there is a reason to do ψ when φ entails ψ, and (4) the belief that there is a reason to do φ and the belief that there is a reason to do ψ defeasibly permit the belief that there is a reason to do φ and ψ when φ is consistent with ψ. And suppose that if we unpack (4)'s 'unless'-clause it says "unless the belief that there is a reason to do φ or the belief that there is a reason to do ψ is solely based on the belief that there is a reason to do χ for some χ inconsistent with 'φ and ψ'". These principles predict the same structure:

   φ                     ψ
   ⇑        Add φ        ⇑
   Γ       ------->      Γ, φ

where Γ = {'there is a reason to meet Sam for a drink', 'there is a reason not to meet Sam for a drink'}, φ is 'there is a reason to meet Sam for a drink or torture a kitten', and ψ is 'there is a reason to meet Sam for a drink or torture a kitten and not meet Sam for a drink'.

Let's look at this. Start with the left hand side. (3) tells us that the belief that there is a reason to meet Sam for a drink defeasibly permits believing that there is a reason to meet Sam for a drink or torture a kitten. The left hand side also says that we are not permitted to believe that there is a reason to meet Sam for a drink or torture a kitten and not meet Sam for a drink. This is because the 'unless'-clause of (4) is triggered: the belief that there is a reason to meet Sam for a drink or torture a kitten is solely based on the belief that there is a reason to meet Sam for a drink. And 'meeting Sam for a drink' is inconsistent with 'meeting Sam for a drink or torturing a kitten and not meeting Sam for a drink'. Now look at the right hand side. If we knew directly that there is a reason to meet Sam for a drink or torture a kitten rather than knowing it by inference, then we are permitted to believe that there is a reason to meet Sam for a drink or torture a kitten and not meet Sam for a drink because the 'unless'-clause of (4) is no longer triggered.

In this way, the theory of reasoning with reasons and obligations developed in the earlier chapter can be implemented within the more general picture of reasoning developed in this chapter.

2.3 Looking Forward

Now that we have an overview of chapter three as well as its connection with chapter two, let me mention three further projects that are suggested by our progress so far. First, I would like to generalize and formalize the broadly foundationalist epistemology that entails good reasoning does not satisfy cut. The idea would be to develop an abstract structure that allows us to input arbitrary collections of foundational beliefs and claims about defeasible permissibility and outputs a set of permissible beliefs. Having such a formal theory would enable me to more precisely compare the virtues and vices of this approach to existing formal theories.

Second, I would like to explore what the deeper philosophical foundations might be of reasoning that does not satisfy cut. As I point out in chapter three, simple identifiable features of reasoning (namely, that it is ampliative) lead it to have a non-monotonic structure. A question that I have tentatively explored and wish to explore further is whether there are some other simple identifiable features of reasoning that lead reasoning to fail to satisfy cut. As I explained, chapter three shows that good reasoning would fail to satisfy cut if there are claims about defeasible permissibility that are sensitive to the distinctive organizational structure of your beliefs. So this is one identifiable feature of reasoning that leads reasoning to fail to satisfy cut. But it would be good to understand exactly why claims about defeasible permissibility might be sensitive to the organizational structure of your beliefs. And it would be good to know whether there might be other features of reasoning apart from sensitivity to the organizational structure of your beliefs that lead to good reasoning failing to satisfy cut.

Third, it would be good to further explore the connection between my work on reasoning with reasons and obligations and this more general picture. In particular, I said earlier that we need a theory of reasoning about what we ought to do and what we have reason to do that is ampliative because agents don't generally have access to the premises that they would need to deductively reason agglomeratively. An interesting question is how to fit this idea within a more general theory of when ampliative reasoning is appropriate. This issue is connected with deep problems about the nature of ampliative reasoning in general raised by the old and new riddles of induction. Considering how different views about these problems might affect the account given here is a natural next step to take.

Thus, chapter three gives us the beginnings of a general picture of reasoning. This picture answers a number of general objections to the possibility of good reasoning that does not satisfy cut and fits nicely with the particular account of reasoning about reasons and obligations that I developed in earlier chapters. It also sets an agenda for further research that will give us a more precise general picture and a tighter fit between the general picture and the particular theory of reasoning that I have developed.

3. Dogmatism and Failures of Cut

Chapters one through three form the core of this dissertation. Chapter four, on the other hand, takes inspiration from the ideas in chapter three and uses them to explore new topics. In particular, chapter four considers the bootstrapping problem.

3.1 Chapter Four

Chapter four ("Bootstrapping, Dogmatism, and the Structure of Epistemic Justification") focuses on the bootstrapping problem as it arises for dogmatist epistemological theories. The bootstrapping problem is that dogmatist theories appear to license a certain intuitively unacceptable chain of reasoning. I argue that these theories can avoid licensing this chain of reasoning only if good reasoning fails to satisfy cut.
I then consider exactly how cut must fail for this problem to be solved. In particular, I consider two models for how cut might fail: probabilistic models and the broadly foundationalist model developed in chapter three. I first argue that probabilistic models of the failure of cut cannot help the dogmatist because the way cut fails to be satisfied on these models is incompatible with the spirit of dogmatism. I then argue that the foundationalist model holds more promise. I show exactly which claim about defeasible permissibility the dogmatist would have to adopt and show that this claim would solve the bootstrapping problem.

I close the chapter by discussing how some theorists believe that there is a problem earlier in the chain of bootstrapping reasoning than is predicted by my account. I concede that this earlier problem cannot be solved by my model, at least if we accept a certain plausible epistemological principle that often goes by the name of "single premise closure". I then make the case that if there really is an earlier problem in the bootstrapping reasoning, then there are two problems with the bootstrapping reasoning, and the dogmatist has a solution to the second one if they claim that good reasoning fails to satisfy cut in the way that I have described.

3.2 Looking Forward

There are three key remaining issues for the approach. First, the chapter shows that the broadly foundationalist framework of chapter three and probabilistic frameworks make different predictions about how cut might fail. An important question to answer is whether there is some precise general way that we can characterize the difference between these frameworks.

The other two remaining issues are brought to light by my discussion of whether there is an earlier problem in the bootstrapping chain of reasoning. In particular, the second issue concerns the connection between ampliative reasoning and single premise closure. To reason ampliatively is to reason to a conclusion not guaranteed to be true by the premises. But given single premise closure, having reached that conclusion we can often deductively close the ampliative gap that we just leapt. This is a source of some of the main problems for dogmatism, including the problem of whether there is an earlier error in the bootstrapping reasoning.

Finally, probabilistic approaches can be used to push the point that there is an earlier error in the bootstrapping reasoning. But recent work suggests that the features of probabilistic approaches that cause this problem for dogmatism may also lead probabilistic approaches to give an inadequate account of so-called "undercutting defeat". It is worth exploring whether this is true and whether the alternative framework developed here holds some promise for helping us better understand such undercutting defeat.

Thus, the picture that emerges is that though dogmatism may not be defeated by the bootstrapping problem, it continues to face certain difficulties. These difficulties in turn might be helpfully explored within the general framework developed in this dissertation.

4. Conclusion

Thus, this dissertation focuses on a number of overlapping structural questions in normative theory and establishes a number of important results about these questions. These results give us a rich picture of the structural features of and relations between reasons, obligations, and good reasoning. That said, they are far from the final word on this topic.
Instead, the results raise fresh questions and thereby bring into sharp focus just how much more there is left to say.

Chapter 1
Conflicting Reasons, Unconflicting Obligations

0. Introduction

One of the popular albeit controversial ideas in the last century of moral philosophy is that what we ought to do is explained by our reasons. I will refer to this, at certain times, as the popular idea in ethics 2 and, at other times, as the idea that reasons explain obligations. 3 One of the central features of reasons that accounts for their popularity among normative theorists is that they can conflict. For example, the fact that reasons conflict promises to give a neat explanation of (i) why in standard choice situations, an agent faces the task of choosing among a variety of incompatible options each of which has something to be said for it and (ii) how the competition of these conflicting considerations determines what such an agent ought to do.

But in this paper, I argue that the fact that reasons conflict actually poses a problem for those who accept the popular idea in ethics. In fact, I argue that there are two closely related problems (or, if you prefer, two different ways of arriving at the same problem) that illustrate how difficult it is to make sense of cases involving conflicting reasons if we accept this popular idea. The first problem is a generalization of a problem in deontic logic concerning the existence of conflicting obligations (§1). The second problem arises from a tension between three ethical principles (§2-3). Having presented each of these problems, I develop a unified solution to them that is informed by results in both ethics and deontic logic. An important implication of this solution is that we must distinguish between derivative and non-derivative reasons and reconsider what the commitments of the popular idea really are (§4-5).

1. Conflicting Obligations, Conflicting Reasons

The first problem is a generalization of a problem in deontic logic about conflicting obligations. So let's begin by looking at this problem in deontic logic. 4

2 Though he uses different terminology, I take this idea to originate at least in spirit in Ross 1930. Other classic discussions include Dancy 2004a, Nagel 1970, Parfit 2011, Raz 2002, and Scanlon 1998.

3 For simplicity, I adopt the controversial assumption that 'ought', 'should', and 'obligation' are synonymous. Fortunately, these controversies are orthogonal to the issues of this paper.

4 For general discussion of the problem of conflicting obligations, see Brink 1994, Chellas 1980, Foot 1983, Gowans 1987b, Lemmon 1962, Marcus 1980, McConnell 2010, Pietroski 1993, Sinnott-Armstrong 1988, van Fraassen 1973, and Williams 1965. And see the helpful collection Gowans 1987a. For more recent discussion that bears particularly on the issues discussed in this section see Hansen 2004, McNamara 2004, van der Torre and Tan 2000, and especially Goble 2009, 2013 and Horty 2003.

In outline, the problem is that there is a tension between the existence of conflicting obligations and a number of desirable inferences concerning obligations. To give flesh to this outline, we can look at some cases that illustrate these desirable inferences. So consider the following case involving a speeding law. The laws of the country set the speed limit at fifty miles per hour. We may suppose then that drivers ought to drive slower than fifty miles per hour.
Given that drivers ought to drive slower than fifty miles per hour, it also seems that drivers ought to drive slower than one hundred miles per hour. So this case illustrates the plausibility of the idea that if we ought to do something, then we also ought to do what follows from it. 5

Before going on, we should take a moment to be precise about which of the many senses of 'ought' and 'reason' we wish to focus on when discussing this case and the ones to follow. While I believe that the main ideas of this paper will apply to any sense of 'ought' and 'reason' for which the popular idea in ethics is plausible, I will for concreteness fix on one. I will be discussing what can be called objective normative reasons and objective all-things-considered obligations. To borrow the now standard terminology of Scanlon 1998, an objective normative reason for an agent to do some act is a fact or true proposition that "counts in favor" of the agent doing that act. What we have objective normative reason to do depends, in the first instance, on what the facts are and not what our beliefs about these facts might be. Similarly, objective obligations depend on the facts. What makes these obligations all-things-considered is that they are obligations in light of all the relevant normative considerations and not limited to considerations from some domain such as, e.g., morality or prudence.

With this clarification in mind, we can say that the speeding law case illustrates the plausibility of the following general principle:

Single Ought Closure (SOC): if S ought to do φ and φ entails ψ, then S ought to do ψ 6

And indeed, this case also illustrates an analogous principle concerning reasons:

Single Reasons Closure (SRC): if there is a reason for S to do φ and φ entails ψ, then there is a reason for S to do ψ

After all, in the speeding law case we may suppose that there is a reason for drivers to drive slower than fifty miles per hour. And given that there is a reason for drivers to drive slower than fifty miles per hour, it also seems that there is a reason for drivers to drive slower than one hundred miles per hour.

5 Cf. Cariani 2013: n. 1.

6 For most of this paper, I will assume that there are entailments among subsentential expressions that denote actions. When I turn in §5 to developing my ideas formally, I will be more careful.

Other cases illustrate further desirable inferences. Consider, for instance, the case of Smith. Smith's country requires him to fight in the army or perform alternative public service. We may suppose then that Smith ought to fight in the army or perform alternative public service. Smith is also deeply committed to a pacifist religion that requires him not to fight in the army. We may suppose then that Smith ought not to fight in the army. Given that Smith ought to fight or serve and given that Smith ought not to fight, it seems that Smith ought to serve. 7

SOC alone cannot explain this case. This is because serving does not follow from fighting or serving and it does not follow from not fighting. It does, of course, follow from the conjunction of these claims: fighting or serving and not fighting.
This, then, suggests that it is desirable to be able to infer that we ought to do a conjunction from obligations to do each conjunct:

Ought Agglomeration: if S ought to do φ and S ought to do ψ, then S ought to do φ and ψ

Or perhaps the case only suggests the following more conservative principle:

Consistent Ought Agglomeration (COA): if S ought to do φ, S ought to do ψ, and φ is consistent with ψ, then S ought to do φ and ψ

If we could help ourselves to COA as well as SOC, we would have a tidy explanation of Smith's case. From 'Smith ought to fight or serve' and 'Smith ought to not fight', we may infer 'Smith ought to fight or serve and not fight' by COA. And then by SOC, we may infer the desired result 'Smith ought to serve'.

And like the speeding law case, Smith's case also illustrates the plausibility of an analogous inference concerning reasons:

Consistent Reasons Agglomeration (CRA): if there is a reason for S to do φ, there is a reason for S to do ψ, and φ is consistent with ψ, then there is a reason for S to do φ and ψ

After all, we may suppose that there is a reason for Smith to fight or serve and a reason for Smith to not fight. And given the existence of these reasons, there also seems to be a reason for Smith to serve. SRC and CRA, then, neatly explain this case in a way that is analogous to how SOC and COA explain Smith's case.

Now that we have seen some desirable inference principles, we are in a position to see why it is hard to reconcile the existence of conflicting obligations with these principles. A case of conflicting obligations arises when an agent has a number of obligations each of which may individually be something that the agent can do, but collectively the agent cannot do all of them. To illustrate the tension between the existence of such cases and the principles that I just identified in a simple way, let's assume that there is a particular instance of conflicting obligations. For example:

(1) You ought to meet Sam for a drink
(2) You ought not to meet Sam for a drink

By SOC applied to (1), we may infer:

(3) You ought to meet Sam for a drink or torture a kitten

Since it is possible to meet Sam for a drink or torture a kitten and not meet Sam for a drink, we may apply COA to (2) and (3) and infer:

(4) You ought to meet Sam for a drink or torture a kitten and not meet Sam for a drink

Finally by SOC again, we may infer:

(5) You ought to torture a kitten

Thus, the existence of conflicting obligations concerning meeting Sam for a drink together with SOC and COA entails the existence of the intuitively unacceptable obligation to torture a kitten. And more generally, it is not hard to prove that SOC and COA entail that if S ought to do φ, S ought to do ψ, and φ is inconsistent with ψ, then S ought to do χ for any consistent χ. 8 In other words, SOC and COA entail that if obligations conflict, we have a kind of explosion of obligations.

It is important to be clear here that this problem is not solved by simply rejecting or giving an independent argument against either SOC or COA. If we wish to solve the problem we must also offer a replacement principle and show that this principle explains Smith's case and the speeding law case while avoiding explosion. So the problem is not that SOC and COA are somehow irresistible and lead to problematic results when combined with the assumption that obligations conflict.
The problem is a tension between explaining what is going on in certain cases and the assumption that obligations conflict. SOC and COA are helpful ways of making this tension vivid because they offer straightforward explanations of these cases and provably lead to unacceptable results when combined 8 For a proof see Goble 2013: n. 73. 21 with the assumption that obligations conflict. And indeed there is a whole family of problems in deontic logic with this flavor. Each member 9 of this family targets a different attempt to capture the intuitive results that we want in Smith’s case, the speeding law case, and other cases and shows that the attempt fails because it leads to some kind of explosion. One reaction to these results is that they show that obligations cannot conflict: the fact that there is a tension between obligations conflicting and these plausible inferences is a reductio of the claim that obligations conflict. In what follows, I will not criticize this reaction to the problem. Indeed, in what follows I will take it as a working assumption that obligations do not conflict. But whatever we might think of this solution to the problem of conflicting obligations, notice that SRC and CRA have the same structure as SOC and COA. So they lead to a problem with the same structure as the problem of conflicting obligations—just replace each reference to an obligation with a reference to a reason. Thus, SRC and CRA entail that if there is a reason for S to do ᵯ� , there is a reason for S to do ᵯ� , and ᵯ� is inconsistent with ᵯ� , then there is a reason for S to do ᵯ� for any consistent ᵯ� . 10 And more generally, since there is a whole family of problems involving conflicting obligations, there is a structurally analogous family of problems involving conflicting reasons. But unfortunately, while a plausible solution to the family of problems about conflicting obligations may be to deny that there can be conflicting obligations, the structurally analogous solution that claims that there cannot be conflicting reasons is not plausible at all—it is a platitude that reasons can conflict. This, then, is the first problem that arises due to the existence of conflicting reasons. And like its analog concerning conflicting obligations, the problem is not that SRC and CRA are sacrosanct and are incompatible with the existence of conflicting reasons. The problem rather is that there is a tension between explaining the relevant cases and the existence of conflicting reasons. SRC and CRA make this tension vivid because they offer a straightforward explanation of these cases and provably lead to unacceptable results when combined with the assumption that reasons can conflict. Now as I have presented it so far, this problem is connected to the popular idea in ethics because the popular idea in ethics invokes the notion of a reason. But it is useful to see that the problem 9 See Goble 2009. 10 Proof: Suppose that there is a reason to do ᵯ� , there is a reason to do ᵯ� , and { ᵯ� , ᵯ� } is inconsistent. By SRC and the reason to do ᵯ� , there is a reason to do ¬ ᵯ� . So we now have a reason to do ᵯ� and a reason to do ¬ ᵯ� . And by applying SRC to each of these reasons, we generate a reason to do ᵯ� ∨ ᵯ� and a reason to do ¬ ᵯ� ∨ ᵯ� for any consistent ᵯ� . Next note that since ᵯ� is consistent, either { ᵯ� , ᵯ� } is consistent or {¬ ᵯ� , ᵯ� } is consistent. Consider each in turn. Suppose { ᵯ� , ᵯ� } is consistent. 
Then we may apply CRA to the reason to do ¬φ ∨ χ and the reason to do φ and generate a reason to do φ ∧ (¬φ ∨ χ). Finally, applying SRC to this new reason allows us to derive a reason to do χ. Suppose instead {¬φ, χ} is consistent. Then we may apply CRA to the reason to do φ ∨ χ and the reason to do ¬φ and generate a reason to do ¬φ ∧ (φ ∨ χ). Finally, applying SRC to this new reason allows us to derive a reason to do χ. Thus in either case, we have a reason to do χ. So we have proven that SRC and CRA entail that if there is a reason to do φ, there is a reason to do ψ, and {φ, ψ} is inconsistent, then there is a reason to do χ for any consistent χ.

This, then, is the first problem that arises due to the existence of conflicting reasons. And like its analog concerning conflicting obligations, the problem is not that SRC and CRA are sacrosanct and are incompatible with the existence of conflicting reasons. The problem rather is that there is a tension between explaining the relevant cases and the existence of conflicting reasons. SRC and CRA make this tension vivid because they offer a straightforward explanation of these cases and provably lead to unacceptable results when combined with the assumption that reasons can conflict.

Now as I have presented it so far, this problem is connected to the popular idea in ethics because the popular idea in ethics invokes the notion of a reason. But it is useful to see that the problem can be generated by relying on theoretical considerations concerning the connection between reasons and obligations as well. In particular, almost everyone who accepts the popular idea in ethics accepts the following intuitively plausible connection between reasons and obligations:

Obligations Entail Reasons (OER): if S ought to do φ, then there is a reason for S to do φ [11]

[11] Of course, many philosophers who reject the popular idea in ethics also reject OER. For example, Foot 1972 famously rejects an analog of OER that concerns moral obligations (rather than all-things-considered obligations as our version of OER does; Foot's stance on our version is not transparent). Similar comments apply to REOD below.

If we accept OER, this means that there is a reason corresponding to each obligation. Now SOC and COA describe entailments among obligations. So if we were to accept OER in addition to these claims, we would need to ensure that there are reasons corresponding to the obligations involved in these entailments. A natural question to ask the advocate of the popular idea in ethics who accepts OER is what guarantees that there are reasons corresponding to the obligations involved in these entailments: what guarantees that reasons and obligations walk in lock-step in the way that OER requires? An elegant answer would be that the analogous entailments hold among reasons. That is, SRC and CRA hold. So SRC and CRA are compelling not just because of our judgments about cases but also because of this theoretical consideration.

Since this problem of conflicting reasons has the same structure as the problem of conflicting obligations, a promising conjecture is that the solution to the first problem will have the same structure as solutions to the problem of conflicting obligations that allow for the existence of conflicting obligations. My solution makes good on this conjecture. But before we consider it in detail, we should look at the second problem concerning conflicting reasons.

2. Conflicting Reasons and How Reasons Explain Obligations

The second problem arises as a tension between three plausible theoretical principles. We have already encountered the first two principles in passing:

Reasons Allow Conflicts (RAC): there can be situations in which S has a reason to do φ and a reason to do ψ but cannot do φ and ψ

Obligations Do Not Allow Conflicts (¬OAC): there cannot be situations in which S ought to do φ and ought to do ψ but cannot do φ and ψ

As I said earlier, RAC is a platitude and we will be granting ¬OAC for the sake of argument.
The third principle that is involved in this tension is a claim about how reasons explain obligations:

Reasons Explain Obligations Directly (REOD): if S ought to do φ, then this is in part explained by the fact that there is a reason for S to do φ

Philosophers who accept the idea that reasons explain obligations are almost unanimous in their acceptance of REOD.[12],[13] While philosophers often discuss why they believe obligations are explained in terms of reasons, they rarely discuss why that explanation must validate REOD. So unfortunately there is, to my knowledge, no argument in print in favor of REOD.

[12] According to my usage, the popular idea in ethics and therefore REOD assume that facts about reasons are prior to and explanatory of facts about obligations. So according to this usage, Broome 2004 does not accept the popular idea in ethics. This is because Broome explains the notion of a reason in terms of the prior notions of an obligation and a weighing explanation.

[13] For our purposes, certain variants of REOD and OER will do just as well. While it is difficult to provide a general statement of the weakest commitment that would suffice for my argument and that is not hopelessly abstract, it is important to know that those who favor revisions to simplistic formulations of the popular idea in ethics, such as Bedke 2011, Dancy 2004b, Gert 2007, and Greenspan 2007, will be committed to variants of REOD that suffice for our problem. John Horty, Lou Goble, and Douglas Portmore are the only theorists I know of who do not accept some such variant of REOD. Horty also rejects OER. §5 discusses their work.

That said, it is possible to give a certain kind of motivation for REOD. Begin by comparing REOD to OER (which, recall, says that if S ought to do φ, then there is a reason for S to do φ). The difference between OER and REOD is this: OER says that if there is an obligation to do something, there is a reason to do that very thing. But it does not make any claims about explanation. REOD, on the other hand, says that if there is an obligation to do something, this is at least in part explained by the reason to do that thing. Because REOD is logically stronger than OER, an advocate of the popular idea in ethics can explain why OER holds by citing REOD. This is some evidence in favor of REOD.

What's more, REOD can look like the only plausible explanation of OER that an advocate of the popular idea in ethics can adopt. Consider how else an advocate of the popular idea might explain the truth of OER. One explanation of OER is that anytime an agent ought to do φ, the fact that she ought to do φ is sufficient to explain why there is a reason for her to do φ. If an obligation to do φ explains a reason to do φ, then OER would follow as a corollary. Unfortunately, this explanation looks to give up on the idea that reasons explain obligations because it looks like an instance of an obligation explaining a reason. So someone who thinks that reasons explain obligations would not be able to explain OER in this way.

Another possible explanation might go like this: What explains why an agent ought to do something is that it leads to the best outcome. What explains why an agent has a reason to do something is that it leads to an outcome that is good in some respect. So if you ought to do φ, then this must be because φ-ing has the best outcome. And if φ-ing has the best outcome, then φ-ing has an outcome that is good in some respect. Thus, if you ought to do φ, there must be a reason to do φ.
So OER follows as a corollary of this view about the connection between obligations, reasons, and goodness.

This explanation of the truth of OER is problematic in two ways. First and most obviously, it is incompatible with the popular idea in ethics because it denies that reasons explain obligations and instead says both reasons and obligations are explained by a third thing, values. Second, even if we could modify this explanation of OER so that it was consistent with the popular idea in ethics and only claimed that reasons are explained in terms of something else, e.g., values, it would nonetheless be a less than ideal explanation of OER because it requires not just the resources of the popular idea in ethics but also a further theory of the nature of reasons. This is less than ideal not because such theories are false, but because I think that we have good reason to bracket them for now. The main reason for wanting to bracket them is that those who accept the popular idea in ethics have diverse commitments. Some accept that values explain reasons, but not everyone does. For example, some believe that reasons are explained in terms of rational norms on desire, and still others believe that reasons are basic in the sense that they cannot be explained in terms of any other normative notion. Since I would like to explore the prospects of the popular idea in ethics for all of these theorists, it would be best not to have to rely on further theories about the nature of reasons in solving this problem. Because of this, the strategy of explaining OER by relying on the idea that values or some other normative notion explains reasons should be set aside for now in hopes of finding a more ecumenical solution to our problem.

So far, then, we have considered two alternative explanations of OER and found them unsatisfying. And it can seem like all the other explanations must be similarly unsatisfying as well. It can, at least at first glance, seem like the only alternative to explaining OER in terms of REOD is to either claim that obligations explain reasons or rely on some further theory about the nature of obligations and reasons. This leaves us in the following situation: We can explain OER with the help of REOD. The only alternative explanations seem to involve giving up on the popular idea in ethics or relying on a further theory about the nature of reasons. Thus, REOD looks to be the most plausible and least controversial way for the advocate of the popular idea in ethics to explain OER. This, to my mind, is a powerful argument in REOD's favor. And it is one that was worth developing because the solution that I will eventually present requires us to give up on REOD and therefore face up to this argument in its favor.

We have, then, the three elements of our second problem: RAC, ¬OAC, and REOD. In the next section, I will show how there is a tension between these three theses.

3. The Tension

We can illustrate the tension between these principles by considering two cases.

3.1 Two Breakfasts

The first case to consider is the following one:

Two Breakfasts
Mary promised Jeff that she would meet him downtown for breakfast. Mary also promised Scott that she would meet him by the beach for breakfast. These promises are equally important. Even though Mary can make it to either of the breakfasts, she can't make it to both because downtown and the beach are too far apart.
In Two Breakfasts, Mary's promise to Jeff gives her a reason to meet Jeff downtown for breakfast and her promise to Scott gives her a reason to meet Scott by the beach for breakfast. Since she cannot do both, this means that her reasons conflict. Given RAC, we should expect there to be cases involving conflicting reasons like Mary's. And while RAC doesn't entail that there are cases involving equally good conflicting reasons, Two Breakfasts is plausibly such a case: since the promises that Mary made are equally important, plausibly the reasons that stem from them are equally important as well. Finally, we may also stipulate that Mary does not have any other reasons that conflict with keeping either promise.

Given that Mary's reasons are equally good, what should she do? There seem to be only two remotely plausible answers to this question. One answer is that Mary ought to meet Jeff for breakfast and Mary ought to meet Scott for breakfast. Unfortunately, this answer is ruled out by ¬OAC: ¬OAC says that obligations cannot conflict. Since Mary cannot go to both breakfasts, ¬OAC entails that she cannot be obligated to go to each breakfast.

The only other remotely plausible answer to our question about Mary's case is that she ought to meet Scott for breakfast or meet Jeff for breakfast. What Mary ought to do is this disjunctive act.[14] To support this judgment, we can notice that we do not want to say that there is nothing Mary ought to do. If, for example, Mary chose to eat breakfast at home, she would not be doing what she ought to be doing. But if Mary ought to do something and if it isn't true that Mary ought to meet Jeff and ought to meet Scott, it looks like the only other candidate for what Mary ought to do is meet Jeff or Scott.

[14] According to Horty 2003: 570-571, this so-called "disjunctive response" to cases like Mary's was first explicitly stated in Donagan 1984. It has since been endorsed by Brink 1994, Horty 2003, and Goble 2013.

Thus, we have seen that RAC and ¬OAC commit us to thinking that in cases involving equally good conflicting reasons like Two Breakfasts, what agents like Mary ought to do is a disjunctive act like either meeting Jeff or meeting Scott. Now that we know this, we can ask how the popular idea in ethics can explain this obligation to do a disjunctive act like meet Jeff or Scott.[15] And REOD tells us that in order to explain an obligation to do φ we need to at least show that there is a reason to do that very act, φ. So if we are to explain why Mary ought to meet Jeff or Scott, we need to show that there is a reason for Mary to meet Jeff or Scott.

[15] Horty 2003: 572-573 was the first to notice that this poses a difficulty for the popular idea in ethics. But, as he presents it, the difficulty is narrowly tailored to the view in Brink 1994. Horty does not isolate the problem as one that arises from accepting REOD. Horty also does not consider, as we will below, whether the view can be saved by supplementing it with principles concerning entailments among reasons.

And maybe there is a reason to meet Jeff or Scott in Two Breakfasts. Maybe the promise to Jeff is a reason to do that. Or maybe her promises taken together are the reason. It is not important for our purposes to decide which of these claims is correct. What is important is to see the general point that Two Breakfasts illustrates. It illustrates that in cases involving conflicting reasons, we need to provide some general principle that entails that there is a reason to do disjunctive acts like meeting Jeff or Scott. Two Breakfasts illustrates this general point because it was just the structural feature of this case—that it involved conflicting equally good reasons—combined with our three theses—RAC, ¬OAC, and REOD—that led us to the result that we need to have a reason to do this disjunctive act.
What this means is that unless we supplement the idea that reasons explain obligations with a general principle that entails that there are reasons to do disjunctive acts in cases involving conflicting reasons like Two Breakfasts, our three theses will be incompatible.

So what principle might we use to get the right result in cases like Mary's? We have already encountered one that looks promising, SRC (which, recall, says that if there is a reason for S to do φ and φ entails ψ, then there is a reason to do ψ). As we saw, SRC allows us to explain the speeding law case. And it also can similarly help explain Two Breakfasts. Since there is a reason to meet Jeff in this case and meeting Jeff entails meeting Jeff or Scott, SRC tells us that there is the desired reason to meet Jeff or Scott.

3.2 Lunch-Coffee-Dinner

Unfortunately, SRC alone won't suffice to explain the full range of cases involving conflicting reasons. Consider for example the case of Sally:

Lunch-Coffee-Dinner
Sally has made three promises. She promised Tom that she will meet him for lunch downtown. She promised Jack that she will meet him for coffee by the beach. She promised Ann that she will meet her for dinner in Santa Barbara. While Sally can make it to lunch and coffee, can make it to lunch and dinner, and can make it to coffee and dinner, she cannot make it to lunch, coffee, and dinner. There just isn't enough time for all that driving.

In this case, Sally has a reason to meet Tom for lunch downtown, a reason to meet Jack for coffee by the beach, and a reason to meet Ann for dinner in Santa Barbara. While Sally can do any two of these things, she cannot do all three. As before, RAC should lead us to expect that there are cases like Lunch-Coffee-Dinner. And it is independently plausible that Lunch-Coffee-Dinner is one involving equally good conflicting reasons.

Given that Sally's reasons are equally good, what should she do? One plausible answer is that Sally ought to meet Tom, ought to meet Jack, and ought to meet Ann. But ¬OAC tells us that there cannot be conflicting obligations. And presumably this means that just as pairs of obligations cannot conflict, sets of obligations cannot conflict either.

The only other plausible thing to say about this case is that Sally ought to perform the disjunctive act of meeting Tom and Jack or meeting Tom and Ann or meeting Jack and Ann. That is, Sally ought to perform this act that is a disjunction of conjunctions.[16] And, recall, according to REOD, to explain an obligation to do φ we need a reason to do φ itself. So to explain why Sally ought to do this disjunction of conjunctions we need to show that there is a reason to do this disjunction of conjunctions. This means that we need our general principles to entail that such a reason exists in cases like Lunch-Coffee-Dinner.

[16] Goble 2013: 22 concurs with this judgment about the case.

Unfortunately, SRC alone does not do this.[17] If we apply SRC to any of the individual reasons, the best we can do is generate a reason to meet Tom or meet Jack or meet Ann. But this is not the disjunction that we want. We want a disjunction of conjunctions.

[17] Goble 2013: 22 is the first to recognize this problem. Goble 2013: §4 is concerned with essentially the same issue as I am. The contribution of my development of this difficulty is that it shows that the problem arises as a tension between REOD, RAC, and ¬OAC. Goble's presentation focuses on validating so-called Standard Deontic Logic, delivering the disjunctive response, and explaining certain cases. My work is heavily indebted to Goble's fascinating discussion.
This is important because we want to be able to say that Sally ought to do at least two of the three acts, and the simple disjunctions that SRC gets us do not entail that there is a reason to do at least two of the three acts. Thus, SRC alone is not sufficient to solve our problem.

Now if we were to have a reason to do two of the acts, SRC would be enough to generate the reason to do the disjunction that we want. So the natural thing to do is adopt a principle that will allow us to generate a reason to do a pair of acts from the reasons to do each act. And we have already encountered such a principle, CRA (which, recall, says that if there is a reason for S to do φ, there is a reason for S to do ψ, and φ is consistent with ψ, then there is a reason for S to do φ and ψ). With CRA supplementing SRC, we are in a position to explain Sally's case. Since there is a reason to meet Tom and a reason to meet Jack and since Sally can meet Tom and Jack, CRA entails that there is a reason to meet Tom and Jack. We may then apply SRC to the reason to meet Tom and Jack to generate the desired reason to meet Tom and Jack or meet Tom and Ann or meet Jack and Ann. Thus, CRA and SRC look to get us what we want.
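Displayed in the style of the earlier numbered derivations (this is merely a restatement of the argument just given):

(1*) There is a reason for Sally to meet Tom
(2*) There is a reason for Sally to meet Jack
(3*) There is a reason for Sally to meet Tom and Jack [from (1*) and (2*) by CRA, since 'Sally meets Tom' is consistent with 'Sally meets Jack']
(4*) There is a reason for Sally to meet Tom and Jack, or meet Tom and Ann, or meet Jack and Ann [from (3*) by SRC]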
But of course, as we know from the first problem, we cannot accept both SRC and CRA. They entail an explosion of reasons. So this is the second problem: In order to resolve the tension between RAC, ¬OAC, and REOD, we are led into the project of developing principles concerning the entailments among reasons. And once again, SRC and CRA initially look like promising candidates. But since they lead to an explosion of reasons, they are unacceptable. In order to solve this problem we need to develop alternatives to SRC and CRA that suffice to explain Two Breakfasts and Lunch-Coffee-Dinner but do not lead to explosion.

In light of these two problems concerning conflicting reasons, I conclude that it is difficult to make sense of cases involving conflicting reasons if we accept the popular idea in ethics. Having established this, I turn to developing my own solution.

4. Toward a Solution

Obviously, there are many different principles that we could use to try to solve our two problems. But I will not pursue the project of canvassing the full range of these principles. Instead, since the problem of conflicting obligations is relatively well-known (in deontic logic at least), I begin by taking up the conjecture for solving the first problem that I mentioned earlier. The conjecture was that the solution to the first problem will have the same structure as solutions to the problem of conflicting obligations that allow for the existence of such obligations. I take up this conjecture in two stages. First I describe a certain structure shared by many solutions to the problem of conflicting obligations (§4.1). Second I turn to some familiar ideas in moral philosophy in order to adapt this structural feature to cases involving reasons (§4.2). While this only directly tells us how to solve the first problem, I observe that it also indirectly suggests that the solution to the second problem is to deny REOD (§4.3). §5 more rigorously develops the ideas that are introduced from an intuitive perspective in this section.

4.1 The Structure of Agglomeration

The conjecture that we are pursuing is this: since our problem has the same structure as the problem of conflicting obligations in deontic logic, the literature in deontic logic might hold a solution. Though I cannot discuss all of the details of this literature here, I can draw out, I believe for the first time, a certain structural similarity shared by many accounts developed in it. These accounts avoid explosion by denying COA (which, recall, says that if S ought to do φ, S ought to do ψ, and φ is consistent with ψ, then S ought to do φ and ψ).[18] But since COA is motivated by considering cases like Smith's case, these accounts owe us some alternative treatment of this case if they are genuinely to solve the problem of conflicting obligations. And each of these accounts has its own distinct treatment of Smith's case.

[18] See Brown 1999; Hansen 2004; Horty 2003 and 2012; McNamara 2004; van der Torre and Tan 2000.

For example, Paul McNamara (in his 2004) defines a special kind of obligation that he calls basic obligations. Let's use the expression 'basically ought' to discuss these kinds of obligations and continue to use 'ought' for obligations whether they are basic or not. McNamara's idea then is that though COA does not hold, the following principle does:

Basic COA (BCOA): if S basically ought to do φ, basically ought to do ψ, and φ is consistent with ψ, then S ought to do φ and ψ

This allows McNamara to explain Smith's case by claiming that in this case Smith basically ought to fight or serve and Smith basically ought to not fight. So by BCOA, Smith ought to fight or serve and not fight. And then by SOC, it follows that Smith ought to serve.

Importantly, this still allows McNamara to avoid the explosion result that we discussed earlier. This is because while McNamara accepts SOC and accepts that basic obligations entail obligations, he does not accept that a basic obligation to do φ entails a basic obligation to do ψ if φ entails ψ. To see how this avoids the explosion problem, suppose that basic obligations conflict:

(1) You basically ought to meet Sam for a drink
(2) You basically ought not to meet Sam for a drink

Since basic obligations are a kind of obligation, we may apply SOC to (1) and generate:

(3) You ought to meet Sam for a drink or torture a kitten

The next step in our derivation had us use COA to derive the following claim from (2) and (3):

(4) You ought to meet Sam for a drink or torture a kitten and not meet Sam for a drink

Since we reject COA, this step is blocked. Now BCOA would apply if (2) and (3) were claims about basic obligations. But since (3) is not a claim about basic obligations and since we said that McNamara rejects the idea that a basic obligation to do φ entails a basic obligation to do ψ if φ entails ψ, we cannot generate some basic obligation version of (3) from (1). Thus, McNamara is able to avoid this explosion result by blocking the move to (4).

It is a significant fact that McNamara's solution is just one of a family of solutions that have this structure. Other theorists solve this problem in the same way except that they do not focus on basic obligations.
Instead they discuss good reasons, phase-1 obligations, prima facie obligations, etc.[19] All of these solutions to the problem do the same thing. Instead of accepting COA they accept something like BCOA but replace basic obligations with good reasons, phase-1 obligations, prima facie obligations, etc. It is the structure of these proposals that allows them to solve the problem. They can explain Smith's case by saying that in that case Smith basically ought or has good reason or phase-1 ought or prima facie ought, etc. to fight or serve and Smith basically ought or has good reason or phase-1 ought or prima facie ought, etc. to not fight and show that it follows from this and their principles that Smith ought to serve. They can avoid explosion because in order to apply BCOA or a version of it that talks about good reasons, phase-1 obligations, prima facie obligations, etc. to get (4), we would need (3) to be a claim about basic obligations, good reasons, phase-1 obligations, prima facie obligations, etc. But each of these views denies that we can use SOC on (1) to generate such a claim. Thus, it is the shared structure of these views that allows them to explain Smith's case while avoiding explosion.[20] It is no coincidence, then, that so many deontic logicians have converged on this structure despite the fact that these logicians have approached the problem of conflicting obligations from very different theoretical perspectives.[21]

[19] Horty 2012 discusses good reasons, van der Torre and Tan 2000 discusses phase-1 obligations, and Horty 2003 discusses prima facie obligations. Brown 1999 and Hansen 2004 make similar distinctions.

[20] More formally, we must choose some operator N and claim that if Nφ, Nψ, and φ is consistent with ψ, then it ought to be that φ and ψ. We must claim that Nφ entails it ought to be that φ. We must deny that it ought to be that φ entails Nφ. And we must deny that Nφ entails Nψ when φ entails ψ.

[21] There are four kinds of views about conflicting obligations that do not have this structure. First, Lou Goble (see Goble 2009) solves the problem of conflicting obligations by giving up SOC. We should not go in for this as a solution to our problem because Goble's logic will not allow us to generate the required reasons in Two Breakfasts. Second, others accept something like SOC and COA but go on to deny the derivation of explosion (see, e.g., Beirlaen, Straßer, and Meheus Forthcoming). These solutions require giving up certain structural features of logical consequence. While I defend such a view about reasoning with conflicting obligations in chapter two, I do not believe such a view is suitable for providing a logic of conflicting obligations. Third, some views either fail to avoid explosion or fail to give any account of the cases that motivate our principles; for a comprehensive survey see Goble 2009, 2013. Thus, these three kinds of views cannot solve our problem. Finally, I have recently learned of a fourth kind of view developed in Goble ms. This view involves a radical departure from SOC, but has other features that may allow it to solve our problem. Unfortunately, I do not have the space here to discuss the prospects of this proposal.

Since numerous deontic logicians have converged on this structure as a solution to the problem of conflicting obligations and since our problem has the same structure as this problem, a promising idea is that we can get a solution to our problem by taking advantage of this structure.
What this means is that we should hold on to SRC and reject CRA. And we should develop some reasons-analog of BCOA.

4.2 Non-Derivative CRA

This insight will help us solve our problem only if we can determine what to use as an analog of basic obligations in our principle. To answer this question, I turn to a familiar distinction in ethics. The familiar distinction is the distinction between things that are intrinsically, or in my preferred but less common terminology, non-derivatively good and things that are relationally or, in my terminology, derivatively good. Things that are derivatively good are good in virtue of standing in some important relation to other things that are good. Things that are non-derivatively good, on the other hand, are good but not in virtue of standing in some important relation to other things that are good. So for example, we might think that pleasure is non-derivatively good. And since pleasure is non-derivatively good, it may be that eating ice cream is derivatively good because it stands in an important relation to the non-derivative good of pleasure; it causes it. So causal relations count as one of the important relations that something can stand in to something else that is good in order to count as being derivatively good. But we should leave it open whether there are other important relations.[22] For example, a Kantian might believe that the only thing that is non-derivatively good is the good will but claim other things are derivatively good in virtue of standing in intentional relations to the good will. More generally, we should leave it open exactly which relations are the ones that are capable of making something derivatively good.[23]

[22] Cf. Kagan 1998, Korsgaard 1983.

[23] To say that something is non-derivatively good is to say that its goodness is not explained by something else being good. This does not rule out that it can be explained in terms of some other normative or non-normative notion.

Now that we have introduced this distinction between derivative and non-derivative by talking about goodness, we can apply it to other normative notions. We can then help ourselves to a distinction between derivative and non-derivative reasons as well as a distinction between derivative and non-derivative obligations.[24] So from now on, let's explicitly write 'non-derivative reason' and 'derivative reason' to discuss non-derivative reasons and derivative reasons respectively and use 'reason' without a modifier before it to discuss reasons without making a claim about whether they are derivative or not. Similarly for obligations. And let's use this distinction in our reasons-analog of BCOA. In particular, the principle is the following:

[24] Cf. Parfit 2011: vol. 1, 39; Väyrynen 2011: 190 and n. 20.

Non-derivative CRA (NCRA): if there is a non-derivative reason to do φ, a non-derivative reason to do ψ, and φ is consistent with ψ, then there is a derivative reason to do φ and ψ [25]

NCRA says that there is a derivative reason to do φ and ψ because 'φ and ψ' stands in an important logical relation to what there is a non-derivative reason to do. Since NCRA is structurally analogous to BCOA and since this structure is one that deontic logicians have used to solve the problem, this suggests that adopting this principle will solve our problem.[26]

[25] The exact analog of BCOA would not say 'there is a derivative reason' in the consequent but would merely say 'there is a reason'. So the principle given in the text is strictly stronger than the exact analog of BCOA. But this stronger principle is plausible and also solves our problem. See n. 35 for further discussion.

[26] The derivative/non-derivative distinction also allows us to notice a more restricted version of REOD that would suffice for our second problem: Non-Derivative REOD (NREOD): if an agent non-derivatively ought to do φ, then this is in part because the agent has a non-derivative reason to do φ. We can see that the problem arises even if we assume only NREOD rather than REOD. This is because what Mary and Sally non-derivatively ought to do are those disjunctive acts. This obligation is not explained by some other obligation. Though I do not have the space to discuss the details here, the fact that our problem arises even only assuming NREOD distinguishes it from superficially similar problems concerning instrumental reasons and obligations. See Bedke 2009, Raz 2005, Schroeder 2009, and especially Kolodny ms and Millsap ms for discussion of instrumental reasons.

In short, the proposal is that we can explain why there are certain derivative reasons by showing that these derivative reasons stand in an important logical relation to non-derivative reasons.
Since this proposal makes good on our conjecture, it promises to solve the first problem. We will look at this in detail in §5. But before that, I want to make a final observation that will suggest a promising solution to the second problem as well.

4.3 Rejecting REOD

The observation concerns how we might be able to accept the popular idea in ethics without accepting REOD: just as we can explain derivative reasons by showing that what we have derivative reason to do is that which stands in an important logical relation to what we have non-derivative reason to do, we might analogously explain what we ought to do by showing that what we ought to do is that which stands in an important logical relation to non-derivative reasons. Of course, we have not yet identified the exact logical relationship involved in explaining obligations. Nonetheless, this is enough to see, abstractly at least, that REOD might not be the only way that reasons could explain obligations. This suggests that we can straightforwardly resolve the tension between RAC and ¬OAC on one hand and REOD on the other by siding with RAC and ¬OAC and rejecting REOD.

Of course, we also thought REOD was appealing because it looked like the most promising explanation of OER (which, recall, says that if you ought to do φ, then there is a reason for you to do φ). But what makes the idea that we are considering so promising is that it shows that REOD might not be the only ecumenical explanation of OER that is compatible with the idea that reasons explain obligations. Instead, if non-derivative reasons explain obligations as well as derivative reasons, we might be able to explain OER without accepting REOD. OER would be true because the obligations and reasons have a "common cause": non-derivative reasons.

To illustrate how this idea will work, return to Two Breakfasts where Mary ought to meet Jeff or Scott. Plausibly in this case, one of Mary's promises gives her a non-derivative reason to meet Jeff and the other promise gives her a non-derivative reason to meet Scott.[27]

[27] This is, I believe, in line with what many moral philosophers who accept the popular idea in ethics have thought, starting with Ross 1930. See n. 36 for further discussion.
According to the theory that I will develop in §5, the fact that Mary has a good non-derivative reason to meet Jeff, that Mary has a good non-derivative reason to meet Scott, and that meeting Jeff or Scott follows from each of these reasons—'Mary meets Jeff or Scott' follows from 'Mary meets Jeff' and it follows from 'Mary meets Scott'—explains why Mary ought to meet Jeff or Scott. So REOD is false because the obligation to meet Jeff or Scott is not explained by a reason to meet Jeff or Scott; rather, it's explained by the logical relation that holds between meeting Jeff or Scott and the reason to meet Jeff and the reason to meet Scott.

Nonetheless, we do not need to reject OER, which entails that there is reason for Mary to meet Jeff or Scott in this case. According to the theory that I will develop in the next section, the fact that Mary has non-derivative reason to meet Jeff, non-derivative reason to meet Scott, and meeting Jeff or Scott follows from at least one of these reasons—'Mary meets Jeff or Scott' follows from at least one of 'Mary meets Jeff' and 'Mary meets Scott'—explains why there is a reason for Mary to meet Jeff or Scott. So OER holds but REOD does not: while Mary ought to meet Jeff or Scott and Mary has a reason to meet Jeff or Scott, the reason to meet Jeff or Scott does not explain the obligation to meet Jeff or Scott. Instead, the non-derivative reason to meet Jeff and the non-derivative reason to meet Scott are the "common cause" of both.

Of course, all I have done so far is sketch how the solution will work and assert without proof the results that it will generate about Two Breakfasts. But the basic conjecture behind the solution is that obligations and derivative reasons are explained by the important logical relationship that they have with non-derivative reasons. The remainder of the chapter just spells out this proposal in detail and proves that it delivers the promised results.

5. The Precise Solution

I will first develop a simple formal theory of how reasons explain obligations that precisely implements our conjecture. Then I will prove that it solves the problem as promised.

5.1 The Formal Theory

The formal theory that I will develop is an adaptation of a formal system developed by John Horty (2012). I adapt the system in two ways. First, I considerably simplify the system. Though my ideas could be developed using the full resources of Horty's system, I am simplifying here in order to introduce my main ideas in the most approachable form. Second, as will become clearer as we go on, I interpret some of the formal objects in my system differently than Horty interprets the analogous formal objects in his system. As we will see, this difference in interpretation turns out to be important because it is what will allow me to do something Horty cannot do: reject REOD while accepting OER and thereby solve our problem.[28]

[28] Horty notices that his system cannot accept OER and, for this reason, says he has an "austere" theory of reasons (2012: ch. 2, §1). By my lights, such austerity is a cost, and the interpretation provided here can be thought of as an amendment to his system that does not have this cost.

So much for preliminaries; let's start to develop the system. To begin, we should be more precise about how we are using the lowercase Greek letters φ, ψ, χ, etc. So far we have been treating them as subsentential expressions that denote actions and assumed that there can be entailments among these expressions. While we could continue to do this, it will prove easier to officially use lowercase Greek letters as sentences.
This will considerably simplify things for us below because it will allow us to appeal to the traditional notion of entailment among sentences. Accordingly, we will also officially have to treat 'reason' and 'ought' as operators on sentences rather than on subsentential expressions that denote actions. Of course, when we are talking informally, we will continue to adopt the more natural way of speaking where 'reason' and 'ought' take these subsentential expressions.

Next, since we know that the notion of a non-derivative reason is going to play an important role in this system, we should introduce a formal way of representing them. So let !(φ) represent a non-derivative reason to do φ. While this formalism does not allow us to represent what the reason is to do φ, we do not need to add such details to our system because our problem does not turn on exactly what the reason is. Since we will often be interested in discussing not just individual non-derivative reasons but also collections of non-derivative reasons, we should introduce a device for representing these as well. We therefore use ℜ for a set of non-derivative reasons.

It will also turn out to be useful when we are giving some definitions below to have a function, Consequent, that takes a non-derivative reason and returns the thing that we have a non-derivative reason to do. So the function would take a reason like !(You go to the store) and return You go to the store. We will also generalize this function so that it applies to sets of reasons. So the function would take a set of reasons like {!(You go to the store), !(You go to the coffee shop), !(You go to the bar)} and return the set {You go to the store, You go to the coffee shop, You go to the bar}. If we like, we can formally define the function Consequent that I have just presented intuitively as follows:

Consequent[!(φ)] = φ
Consequent[ℜ] = {x | there is some y such that y ∈ ℜ and Consequent[y] = x}

So far, then, we can represent non-derivative reasons (!(φ)) and sets of non-derivative reasons (ℜ) and have defined a function to pick out the things that there is a non-derivative reason to do (Consequent).

We know that we are going to explain derivative reasons as well as obligations in terms of non-derivative reasons. But intuitively the non-derivative reasons that explain obligations will be different from the ones that explain derivative reasons. We want obligations to be explained by good or undefeated non-derivative reasons, but it does not seem necessary to explain derivative reasons solely in terms of good or undefeated non-derivative reasons. This means that we will want to have some way of distinguishing good or undefeated non-derivative reasons from non-derivative reasons that are not good. To do this, let's help ourselves to an ordering ≤ on non-derivative reasons. We read !(φ) ≤ !(ψ) as 'there is at least as good a non-derivative reason to do ψ as there is to do φ'.
Since, intuitively, there is at least as good a non-derivative reason to go to the store as there is to go to the store, we will define ≤ so that it is a reflexive relation in the sense that the following claim holds:

!(φ) ≤ !(φ)

And since, intuitively, if there is at least as good a non-derivative reason to go to the store as there is to stay home and at least as good a non-derivative reason to go to the movies as there is to go to the store, then there is at least as good a non-derivative reason to go to the movies as there is to stay home, we will define ≤ so that it is a transitive relation in the sense that the following claim holds:

if !(φ) ≤ !(ψ) and !(ψ) ≤ !(χ), then !(φ) ≤ !(χ)

Finally, since we have seen that Mary's case makes it plausible to think that there are equally good conflicting reasons, we know that there can be situations where φ is inconsistent with ψ, !(φ) ≤ !(ψ), and !(ψ) ≤ !(φ).

So we have now introduced a formalism that stands for the 'there is at least as good a non-derivative reason to do ... as' relation (≤). And this allows us to represent a choice situation in which there are some reasons of certain strengths. Since ℜ represents the reasons and ≤ tells us how good those reasons are, we can represent a choice situation in which there are some reasons of certain weights as an ordered pair, ⟨ℜ, ≤⟩.

We next want to define the notion of an undefeated reason. The intuitive idea of an undefeated reason is the idea of a reason that is at least as good as any reason that conflicts with it.[29] For example, suppose that I have a reason to go to the store, a reason to stay home, and cannot do both. The reason to go to the store is undefeated just in case it is not worse than the reason to stay at home. Generalizing this idea, we want to define an operator Undefeated≤ that takes a set of reasons and returns the set of reasons that are undefeated according to the ordering ≤. If we write !(φ) < !(ψ) as shorthand for '!(φ) ≤ !(ψ) and it is not the case that !(ψ) ≤ !(φ)', we will be able to write 'there is a better non-derivative reason to do ψ than there is to do φ' as !(φ) < !(ψ). We then can define the operator that we want:

Undefeated≤(ℜ) = {x ∈ ℜ | there is no y ∈ ℜ such that (i) x < y and (ii) Consequent[x] is inconsistent with Consequent[y]}

In other words, a reason to do some act is defeated if there is a better reason to do some act that conflicts with it. And an undefeated reason is just a reason that is not defeated.[30]

[29] Strictly speaking, what I will define is the notion of a reason being not worse than any other reason. A reason being not worse and a reason being at least as good come apart in cases where reasons are incomparable with one another. But in order to present my ideas as simply as possible, I will ignore incomparability among reasons. Indeed, there is a whole host of phenomena concerning the weight of reasons that I will be ignoring in order to present my system simply (e.g., undercutting defeat, attenuation, how multiple reasons can "add up" to provide more support for an act, reinstatement). Luckily, as I said before, my system can be developed using the full resources of Horty's system, and these extra resources were developed precisely to understand these phenomena (see Hansen 2008 and Horty 2012 for further discussion).

[30] We do not focus on non-derivative reasons that are better than reasons that conflict with them because the set of such non-derivative reasons is empty in cases like Two Breakfasts and Lunch-Coffee-Dinner where we have equally good non-derivative reasons (cf. Horty 2003: 572-573).
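To see the operator at work, take the store/home example just given and stipulate, purely for illustration, that the conflicting reason to go to the store is strictly better. That is, suppose ℜ = {!(I go to the store), !(I stay home)}, 'I go to the store' is inconsistent with 'I stay home', and !(I stay home) < !(I go to the store). Then:

Undefeated≤(ℜ) = {!(I go to the store)}

The reason to stay home is defeated by the strictly better conflicting reason to go to the store. Had the two reasons instead been equally good, neither would have been defeated and the operator would have returned both—as in Two Breakfasts.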
These definitions give us a formalism for discussing non-derivative reasons and how strong they are. There are many simplifications involved in this formalism (see n. 29), but it is enough for our purposes. And our purpose, recall, is to develop an account on which non-derivative reasons explain derivative reasons and obligations by showing that the latter stand in an important logical relation to non-derivative reasons. What we still need to do is describe this logical relation.

We know that this logical relation is not the one envisioned by REOD: REOD says that an obligation to do φ is explained by a reason (perhaps non-derivative, perhaps not) to do φ itself. Very roughly, my proposal is that an obligation to do φ need not be explained by a reason to do φ itself. Rather, it can be explained by the fact that φ logically follows from what there is undefeated non-derivative reason to do. And of course, φ might follow from what there is undefeated non-derivative reason to do even if there is no undefeated non-derivative reason to do φ itself.

Now it is important that this is only a rough approximation of our official view. This is because, as we know, undefeated non-derivative reasons can conflict. So there can be cases where the things that an agent has undefeated non-derivative reason to do are inconsistent. But since everything follows from an inconsistent set, we do not want to identify what an agent is obligated to do with what follows from what she has undefeated non-derivative reason to do. A more subtle treatment is needed.

A reasonable thought is that what we are obligated to do is what follows from the most inclusive consistent subsets of our undefeated non-derivative reasons. A most inclusive consistent subset is a subset of your reasons that is consistent and that you cannot add any more reasons to without making it inconsistent. Formally, we can model the notion of a most inclusive consistent subset of undefeated reasons with the notion of a maximal consistent subset:

Γ is a maximal consistent subset of Δ iff (1) Γ ⊆ Δ, (2) Γ is consistent, and (3) it is not the case that there is a Σ such that Σ is consistent and Γ ⊂ Σ ⊆ Δ.

The idea then is that what we ought to do is what follows from the maximal consistent subsets of our undefeated non-derivative reasons.

Now there can be more than one maximal consistent subset of a set of undefeated non-derivative reasons. For example, in Two Breakfasts we may suppose that there is undefeated non-derivative reason for Mary to meet Jeff and undefeated non-derivative reason for Mary to meet Scott. And we may suppose that these reasons conflict. In this case the whole set of reasons is inconsistent and its maximal consistent subsets are a set containing just the reason to meet Jeff and a set containing just the reason to meet Scott. So this set of reasons has two maximal consistent subsets.[31]

[31] I am simplifying here by treating materially inconsistent claims ('Mary meets Scott for breakfast', 'Mary meets Jeff for breakfast') as logically inconsistent. Again, a fuller account can avoid this simplification (Hansen 2008 and Horty 2012).

Having noticed that a set of reasons can have more than one maximal consistent subset, we face a choice about how to develop our view. One thing that we could say is that what we ought to do is what follows from some maximal consistent subset of our undefeated non-derivative reasons. Unfortunately, this option would contradict our working assumption that obligations do not conflict.
To see this, recall that in Two Breakfasts there are two maximal consistent subsets: one containing 'Mary meets Scott', the other 'Mary meets Jeff'. Obviously, 'Mary meets Scott' follows from 'Mary meets Scott', so, according to the current proposal, Mary ought to meet Scott. Similarly, 'Mary meets Jeff' follows from 'Mary meets Jeff', so, according to the current proposal, it also would be true that Mary ought to meet Jeff. The current proposal should be rejected, then, because it would allow for these conflicting obligations.

To avoid this result, we should instead say that what we ought to do is what follows from every maximal consistent subset. If we do this, then we get the desired result that Mary ought to meet Jeff or Scott. After all, as I pointed out earlier, 'Mary meets Jeff or Scott' follows both from 'Mary meets Jeff' and from 'Mary meets Scott'. So Mary ought to meet Jeff or Scott. But we do not have that Mary ought to meet Jeff or that Mary ought to meet Scott. This is because 'Mary meets Jeff' does not follow from 'Mary meets Scott' and 'Mary meets Scott' does not follow from 'Mary meets Jeff'.

In sum, our idea is that what we ought to do is what follows from every maximal consistent subset of undefeated non-derivative reasons. We can then define a relation ⊨HNRE that we interpret as telling us how non-derivative reasons explain obligations. This relation will hold between a collection of reasons of a certain weight and an obligation just in case the reasons explain the obligation. Recall that a collection of reasons of a certain weight is formally represented by a pair ⟨ℜ, ≤⟩, and we can represent an obligation with the operator O. So we may define this relation as follows:

⟨ℜ, ≤⟩ ⊨HNRE O(φ) iff Γ ⊢ φ for every maximal consistent subset Γ of Consequent[Undefeated≤(ℜ)]

where ⊢ is the logical consequence relation of ordinary propositional logic. This is just a formal statement of the ideas that we have been discussing informally so far. Undefeated≤(ℜ), recall, is the set of undefeated non-derivative reasons, and so, as desired, our definition tells us that what we ought to do is what follows from every maximal consistent subset of the set of undefeated non-derivative reasons.

So far the formal system that I have described is essentially the same as the one developed by Horty. The main difference is that I have interpreted the elements of ℜ as non-derivative reasons whereas Horty interpreted these as reasons. This interpretative difference is important because it opens the door for us to explain derivative reasons in terms of non-derivative reasons. And, except for two differences, we can explain reasons in terms of non-derivative reasons in the same way that we explained obligations in terms of non-derivative reasons. The first difference is, as I mentioned earlier, that we allow that non-derivative reasons, whether they are undefeated or not, can explain derivative reasons. The second difference is that we focus on what follows from some maximal consistent subset of reasons rather than on what follows from every maximal consistent subset. We do this because we chose
We do this because we chose 39 to focus on every maximal consistent subset rather than some maximal consistent subset when discussing obligations because we wanted to rule out conflicting obligations. But since we have no qualms about reasons conflicting, we may focus on what follows from some maximal consistent subset and thereby allow reasons to conflict. So we will essentially use the same definition but remove the focus on undefeated reasons and allow that what we have reason to do is what follows from some maximal consistent subset. This leads to the following definition of when ⊨ HNRE holds between a pair ⟨ ℜ, ≤ ⟩ and the claim ‘there is a reason for it to be the case that ᵯ� ’ which we write as R( ᵯ� ): ⟨ ℜ, ≤ ⟩ ⊨ HNRE R( ᵯ� ) iff ᵮ� ⊢ ᵯ� for some maximal consistent subset ᵮ� of Consequent[ ℜ] As I have defined it, R( ᵯ� ) is a claim about reasons derivative or otherwise. Having defined this 32 notion, we may define a derivative reason as a reason that is not non-derivative. Formally this ends up looking like this: ⟨ ℜ, ≤ ⟩ ⊨ HNRE DR( ᵯ� ) iff ⟨ ℜ, ≤ ⟩ ⊨ HNRE R( ᵯ� ) and !( ᵯ� ) ∉ ℜ. where DR( ᵯ� ) is read as ‘there is a derivative reason for it to be the case that ᵯ� ’. Thus, my formal framework adapts Horty’s formal framework by thinking of the elements of ℜ as non-derivative reasons. This adaptation is what opens the door for us to explain derivative reasons 33 in terms of non-derivative reasons. In short, the framework says that what you are obligated to do and what you have derivative reason to do is that which stands in an important logical relation to what you have non-derivative reason to do. We now have everything that we need to solve our problem. 5.2 How This Solves The Problems We have seen that our problem turns on the status of a number of different principles and cases. I will go through the main principles in this subsection, verify that they hold (or fail to hold), and apply them to the relevant cases. 5.2.1 RAC, ¬OAC, Explosion 32 My view of reasons and obligations entails that they obey SOC and SRC respectively. These principles are controversial because they are arguably subject to counterexamples such as a generalization of Ross 1941’s paradox concerning imperatives. Though I do not have the space here to engage with the literature on this topic, I believe that these counterexamples can be answered and that SOC and SRC play useful theoretical roles (e.g., they allow us to smoothly explain cases like the speeding law case). 33 It may be worth noting how this system is related to two other systems. First, the system that can be found in Goble 2013: §4.4 is very similar to mine. When I came up with my system, Goble’s paper did not contain the system that is now found in his §4.4. At that time, the system that was closest to mine did not validate OER. Goble since has, I think independently, developed the system now found in §4.4. Second Portmore 2013 argues on very different grounds for a special case of my view. According to Portmore, if an agent has a non-derivative reason to do ᵯ� , a non-derivative reason to do ᵯ� , and ᵯ� ≠ ᵯ� , then ᵯ� and ᵯ� are inconsistent. In effect, Portmore thinks that there are only non-derivative reasons to do maximal consistent acts. 40 RAC says that reasons can conflict. As I said when introducing our system, we allow there to be conflicting non-derivative reasons. And we said that what we have reason to do is what follows from some maximal consistent subset of non-derivative reasons. 
Since we specifically chose to focus on some maximal consistent subset because it allows reasons to conflict, we know that our system accepts RAC.

It is similarly not difficult to see that the system accepts ¬OAC. ¬OAC says that obligations cannot conflict. The system that I laid out says that you are obligated to do something if this thing follows from every maximal consistent subset of good non-derivative reasons. And, recall, we decided to focus on what follows from every maximal consistent subset of good non-derivative reasons precisely because this ruled out conflicting obligations.

Finally, our system does not lead to an explosion of reasons. To see this, suppose that there are conflicting non-derivative reasons, !(φ) and !(¬φ). Our system says that there is a reason to do φ because φ follows from the maximal consistent subset {φ} and that there is a reason to do ¬φ because ¬φ follows from {¬φ}. But it does not say that there is a reason to do any arbitrary thing that you can do, χ, because χ need not follow from {φ} or from {¬φ}.

5.2.2 REOD, OER

Next consider REOD. Our guiding idea in designing the system was that REOD should not hold. While REOD says that if you are obligated to do something, this must be in part because there is a reason to do that very thing, our guiding idea has been that to be obligated to do something, that thing need only stand in some important logical relation to what you have non-derivative reason to do—we do not require a reason to do φ in order to explain why you are obligated to do φ. Our account says that this logical relation is following from every maximal consistent subset. We have already seen how this works in Two Breakfasts, but to emphasize the point, let's consider a simpler albeit less interesting example in which you ought to either go to the store or watch TV. According to our account, this obligation might not be explained by a reason to go to the store or watch TV. Instead, there can be cases where you have a non-derivative reason to go to the store and this is the only undefeated non-derivative reason you have. Our account says this reason to go to the store suffices to explain the obligation to go to the store or watch TV because 'you go to the store or watch TV' follows from 'you go to the store'. In this way, our account rejects REOD and thereby resolves the tension between it, RAC, and ¬OAC.

Of course, we said that part of what was appealing about REOD is that it allowed us to explain the truth of OER (which, recall, says that if S ought to do φ, then there is a reason for S to do φ). What is nice about our system is that it easily explains OER. To see this, notice that according to our system, if you ought to do φ, then φ follows from every maximal consistent subset of good non-derivative reasons. Now it can be proven that if φ follows from every maximal consistent subset of good non-derivative reasons, then φ also follows from some maximal consistent subset of non-derivative reasons (good or otherwise).[34] And according to our account, there is a reason to do φ when φ follows from some maximal consistent subset of non-derivative reasons. So our account predicts that if S ought to do φ, then there is a reason for S to do φ because the non-derivative reasons that explain the obligation to do φ will also suffice to explain the reason to do φ.

[34] Proof: We need to show that for any ⟨ℜ, ≤⟩, if ⟨ℜ, ≤⟩ ⊨HNRE O(φ), then ⟨ℜ, ≤⟩ ⊨HNRE R(φ). So suppose ⟨ℜ, ≤⟩ ⊨HNRE O(φ). Consider some maximal consistent subset Γ of Consequent[Undefeated≤(ℜ)]. By our supposition, Γ ⊢ φ. By the definition of Undefeated≤, Γ ⊆ Consequent[ℜ] and Γ is consistent. So Γ is either a maximal consistent subset of Consequent[ℜ] or it isn't. If it is, then we have ⟨ℜ, ≤⟩ ⊨HNRE R(φ) because Γ ⊢ φ. If it is not, then by definition there is some maximal consistent subset Γ′ of Consequent[ℜ] such that Γ ⊆ Γ′. By the monotonicity of ⊢, it follows that Γ′ ⊢ φ. So ⟨ℜ, ≤⟩ ⊨HNRE R(φ). Thus, either way, ⟨ℜ, ≤⟩ ⊨HNRE R(φ).
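To make both halves of this prediction concrete, here is Two Breakfasts one more time, now fully in the notation of §5.1, with j and s as stipulated abbreviations for 'Mary meets Jeff' and 'Mary meets Scott', and with the material incompatibility of the two breakfasts treated as logical inconsistency (per n. 31):

ℜ = {!(j), !(s)}, with !(j) ≤ !(s) and !(s) ≤ !(j), and {j, s} inconsistent
Undefeated≤(ℜ) = {!(j), !(s)}; Consequent[Undefeated≤(ℜ)] = {j, s}
the maximal consistent subsets of {j, s} are {j} and {s}
{j} ⊢ j ∨ s and {s} ⊢ j ∨ s, so ⟨ℜ, ≤⟩ ⊨HNRE O(j ∨ s)
{s} ⊬ j and {j} ⊬ s, so neither O(j) nor O(s) holds
{j} ⊢ j, {s} ⊢ s, and {j} ⊢ j ∨ s, so R(j), R(s), and R(j ∨ s) all hold—just as OER requires

And since the definitions are finitary and extensional, they can even be prototyped mechanically. The following Python sketch is purely illustrative and no part of the official theory: it assumes a fixed finite stock of atoms, models sentences as Boolean-valued functions on truth-value assignments, collapses the ordering ≤ into numeric weights (thereby building in comparability; cf. n. 29), and builds the incompatibility of the breakfasts into the space of assignments. All names in it are hypothetical.

    # A toy model of the definitions in section 5.1 (illustrative only).
    from itertools import product, combinations

    ATOMS = ["jeff", "scott"]  # 'Mary meets Jeff', 'Mary meets Scott'

    def assignments():
        # Only assignments respecting the background constraint that
        # Mary cannot make it to both breakfasts (cf. n. 31).
        for vals in product([True, False], repeat=len(ATOMS)):
            a = dict(zip(ATOMS, vals))
            if not (a["jeff"] and a["scott"]):
                yield a

    def consistent(sentences):
        # Consistent iff some assignment makes every sentence true.
        return any(all(s(a) for s in sentences) for a in assignments())

    def entails(premises, conclusion):
        # Gamma |- phi iff every assignment satisfying Gamma satisfies phi.
        return all(conclusion(a) for a in assignments()
                   if all(p(a) for p in premises))

    def undefeated(weighted):
        # A reason is defeated iff a strictly better reason conflicts with it.
        return [(s, w) for (s, w) in weighted
                if not any(w2 > w and not consistent([s, s2])
                           for (s2, w2) in weighted)]

    def maximal_consistent_subsets(sentences):
        idx = list(range(len(sentences)))
        cons = [frozenset(c) for n in range(len(idx) + 1)
                for c in combinations(idx, n)
                if consistent([sentences[i] for i in c])]
        return [[sentences[i] for i in c] for c in cons
                if not any(c < d for d in cons)]

    def ought(weighted, phi):
        # O(phi): phi follows from EVERY maximal consistent subset of the
        # consequents of the undefeated non-derivative reasons.
        und = [s for (s, _) in undefeated(weighted)]
        return all(entails(g, phi) for g in maximal_consistent_subsets(und))

    def has_reason(weighted, phi):
        # R(phi): phi follows from SOME maximal consistent subset of the
        # consequents of all of the non-derivative reasons.
        allc = [s for (s, _) in weighted]
        return any(entails(g, phi) for g in maximal_consistent_subsets(allc))

    meet_jeff = lambda a: a["jeff"]
    meet_scott = lambda a: a["scott"]
    either = lambda a: a["jeff"] or a["scott"]
    reasons = [(meet_jeff, 1), (meet_scott, 1)]  # equally good, conflicting

    print(ought(reasons, either))      # True: O(j or s)
    print(ought(reasons, meet_jeff))   # False: no conflicting obligations
    print(has_reason(reasons, meet_jeff),
          has_reason(reasons, meet_scott))  # True True: RAC holds

Run as written, the sketch returns the verdicts just defended: the disjunctive obligation holds, neither conflicting obligation does, and both conflicting reasons remain.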
We saw how this worked in Two Breakfasts earlier, but it may be worth reiterating the point by looking at the example just given of being obligated to go to the store or watch TV because you have an undefeated non-derivative reason to go to the store. It is not hard to see that OER holds in this case for the reasons just outlined. There is a reason to go to the store or watch TV according to the account because 'you go to the store or watch TV' follows from a maximal consistent set of non-derivative reasons that contains the reason to go to the store. Thus, in this case there is an obligation to go to the store or watch TV and a reason to go to the store or watch TV, but the reason to go to the store or watch TV does not explain the obligation to go to the store or watch TV. Instead, they have a "common cause": the non-derivative reason to go to the store. This shows that our account explains OER because non-derivative reasons are the "common cause" of obligations and reasons. Thus, we straightforwardly solve the second problem by rejecting REOD.

5.2.3 SRC, NCRA

Let's now turn to the first problem. The idea from §4 was to accept SRC and replace CRA with NCRA. Begin with SRC (which, recall, says that if there is a reason for S to do φ and φ entails ψ, then there is a reason for S to do ψ). To see that this holds, recall that for there to be a reason to do φ is for φ to follow from a maximal consistent subset of your non-derivative reasons. Since 'following from' is a transitive relation and since we know that ψ follows from φ, this means that ψ follows from that maximal consistent subset of your non-derivative reasons as well. Hence, there must also be a reason to do ψ if there is a reason to do φ and φ entails ψ. So SRC holds. And as we know, this allows us to give a straightforward explanation of the speeding law case and Two Breakfasts.

Next let's turn to NCRA (which, recall, says that if there is a non-derivative reason to do φ, there is a non-derivative reason to do ψ, and φ is consistent with ψ, then there is a derivative reason to do φ and ψ). To see that this holds, recall that a collection of non-derivative reasons will explain the existence of a reason to do φ and ψ if 'φ and ψ' follows from a maximal consistent subset of the non-derivative reasons. Since φ is consistent with ψ, we know that the non-derivative reason to do φ and the non-derivative reason to do ψ will both be members of some maximal consistent subset of the non-derivative reasons. And since 'φ and ψ' follows from any set that includes both φ and ψ, this means that there will be a reason to do φ and ψ if there is a non-derivative reason to do φ, there is a non-derivative reason to do ψ, and φ is consistent with ψ. So NCRA holds.[35]

Footnote 35: This actually only proves a variant of NCRA that says 'there is a reason' rather than 'there is a derivative reason' in the consequent. In fact, cases where there is a non-derivative reason to do φ, a non-derivative reason to do ψ, and a non-derivative reason to do φ and ψ are counterexamples to NCRA as stated. This counterexample is, however, an artifact of the inessential simplifications that we have adopted to ease presentation. Our system as stated does not represent what the reason is to do an act. But plausibly, it cannot be that some fact p is both a non-derivative reason to do φ and also a derivative reason to do φ. So when we enrich our system to represent what the reason is, we can prove NCRA if we say p is a derivative reason to do φ just in case p is a reason to do φ and p is not a non-derivative reason to do φ.
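The NCRA argument can be given the same treatment. Here is a small spot-check, again reusing the sketch above, with 'l' and 'c' as stand-ins for going to lunch and going to coffee:

    l, c = ("atom", "l"), ("atom", "c")
    reasons = {l, c}                             # two consistent non-derivative reasons
    both = ("and", l, c)
    print(R(reasons, both), DR(reasons, both))   # True True: NCRA's consequent holds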
And NCRA allows us to explain Smith's case and Lunch-Coffee-Dinner if we make certain assumptions. If we assume in Smith's case that there is a non-derivative reason to fight in the army or perform alternative public service and assume that there is a non-derivative reason to not fight, then NCRA tells us that there is a reason to (fight or serve) and not fight. Applying SRC, then, gives us the desired reason to serve. If we assume in Lunch-Coffee-Dinner that there is a non-derivative reason to go to lunch, a non-derivative reason to go to coffee, and a non-derivative reason to go to dinner, NCRA allows us to conclude, for example, that there is a reason to go to lunch and go to coffee. Finally, applying SRC gives us the desired reason to go to lunch and coffee, or lunch and dinner, or coffee and dinner.[36]

Footnote 36: Since it is an open question in moral philosophy what the non-derivative reasons are, what entitles me to make these assumptions? I make these assumptions for simplicity. Many other assumptions, but not all assumptions, would work. All that can be done to answer suspicion about my solution on this score is to consider alternative proposals on a case-by-case basis. For example, the alternative view that we have non-derivative reason to keep as many promises as we can, or that we have non-derivative reason to minimize promise-breaking, would work for my purposes. But the idea that we only have non-derivative reason to keep all of our promises would not. This is the right result. If the only thing that we have non-derivative reason to do is keep all of our promises and we cannot keep all of our promises, then we really would not be obligated to keep two of three promises. Indeed, this consequence of my view shows just what is so implausible about this view of the normative significance of promises. Similar comments apply to Smith's case. Thus, though I cannot prove this, I believe my view will get the right results in these cases.

This shows that my account solves the reasons-analog of the problem of conflicting obligations in deontic logic and also resolves the tension between REOD, RAC, and ¬OAC.

6. Conclusion

In this chapter, I have argued that two related problems show that it is surprisingly difficult to make sense of cases involving conflicting reasons if we accept the popular idea in ethics that reasons explain obligations. I then developed a unified solution to these problems. An important upshot of this solution is that the distinction from moral philosophy between derivative and non-derivative reasons and the work in deontic logic on conflicting obligations are both important to the project of understanding how reasons explain obligations.
It was these ideas that allowed us to clearly appreciate the structure of our problem. And it was these ideas that led to the central thought behind my solution: what we have derivative reason to do and what we are obligated to do is that which stands in an important logical relation to what we have non-derivative reason to do. If reasons explain obligations, then this is, at least to a first approximation, how they explain obligations.[37]

Footnote 37: For helpful comments thanks to the audience of the USC speculative society, Josh Crabill, Lou Goble, Ben Lennertz, Alida Liberman, Errol Lord, Doug Portmore, Indrek Reiland, Henry Richardson, Jacob Ross, Barry Schein, Sam Shpall, Justin Snedegar, Julia Staffel, Sigrún Svavarsdóttir, Gabriel Uzquiano-Cruz, Aness Webster, and Ralph Wedgwood. Thanks most of all to Mark Schroeder for advice and criticism on every issue at every stage of this project. Finally, I thank the USC Provost's PhD Fellowship and Russell Fellowship for support.

Chapter 2
Consequences of Reasoning with Conflicting Obligations

0. Introduction

Since at least the 1960s, deontic logicians and ethicists have worried about whether there can be normative systems that allow conflicting obligations.[38] Many of these worries stem from problems that show a tension between a normative system allowing conflicting obligations and plausible logical principles for such a system. This has led to a rich literature in deontic logic concerned with developing logics that allow conflicting obligations while still validating a number of desirable inferences.[39]

Footnote 38: See Brink 1994, Chellas 1980, Gowans 1987b, Lemmon 1962, Marcus 1980, McConnell 2010, Pietroski 1993, Sinnott-Armstrong 1988, van Fraassen 1973, and Williams 1965.

Footnote 39: See Goble 2009, Hansen 2005, Horty 2003, McNamara 2004, van der Torre and Tan. Goble 2013 provides a survey.

On the other hand, issues concerning how we may reason with conflicting obligations have received little direct attention. This is somewhat surprising because cases that illustrate compelling patterns of reasoning have been an important source of evidence about what the correct principles of deontic logic are. And while these cases certainly are a good source of evidence about this, they provide even more direct evidence for what the principles of good reasoning are. For example, consider the case of Smith, a citizen of a just country. Smith's country requires him to fight in the army or perform alternative public service. So Smith ought to fight or serve. Smith is also deeply religious. Smith's religion is committed to pacifism, so it requires him not to fight. So Smith ought not to fight. Given that Smith ought to fight or serve and that Smith ought not to fight, we may conclude that Smith ought to serve.[40][41]

Footnote 40: Cf. van Fraassen 1973: 18, Horty 1993: 73, Goble 200: 80.

Footnote 41: In this case the obligation to fight or serve is based on the law and the obligation to not fight is based on religious commitments, while the resulting obligation to serve is not solely based on either of these sources. So while some cases in which obligations derive from different sources involve 'ought's with different meanings, Smith's case shows that other cases involve 'ought's with the same meaning and that there are patterns of good reasoning concerning these kinds of 'ought's (cf. Goble 2013: 112-13). My discussion in this paper only applies to this second kind of case involving 'ought's with the same meaning. Thanks to the referee at Mind who pressed me to be clear about this.

In Smith's case, it is good reasoning to conclude 'Smith ought to serve' from 'Smith ought to fight or serve' and 'Smith ought not to fight'. This means (perhaps among other things) that if we permissibly believe that Smith ought to fight or serve, permissibly believe that Smith ought not to fight, and come to believe that Smith ought to serve by competently reasoning from these beliefs, we permissibly believe that Smith ought to serve.[42]

Footnote 42: So, as I am thinking of it, the reasoning we may do in Smith's case is a mental process that starts with two beliefs and concludes in the formation of another belief. In this paper, I focus exclusively on reasoning with qualitative beliefs understood in this way.
More generally, Smith's case illustrates that the pattern of reasoning from 'it ought to be that a or b' and 'it ought to be that not a' to 'it ought to be that b' is a pattern of good reasoning.[43] This pattern of good reasoning involves performing disjunctive syllogism 'under several "ought"s'.

Footnote 43: For simplicity, I assume 'ought' and 'obligation' are roughly synonymous, and I assume 'ought' is a unary operator on sentences. While both of these assumptions are controversial, the issues involved in these controversies are orthogonal to the central issues of this paper.

We can illustrate another pattern of good reasoning by considering the following case concerning a speeding law. The law sets the speed limit on major streets at fifty miles per hour. So drivers ought to drive less than fifty miles per hour. Given that drivers ought to drive less than fifty miles per hour, we may conclude that drivers ought to drive less than one hundred miles per hour.[44]

Footnote 44: Cf. Cariani 2013: n. 1.

In this speeding law case, it is good reasoning to conclude 'Drivers ought to drive less than one hundred miles per hour' from 'Drivers ought to drive less than fifty miles per hour'. More generally, the speeding law case illustrates that the pattern of reasoning from 'it ought to be that a' to 'it ought to be that b' when a entails b is a pattern of good reasoning.

Having used these two cases to identify two principles of good reasoning, we are now in a position to see why it is worthwhile to pay attention to issues concerning reasoning about conflicting obligations. It is worth paying attention because these two principles suggest that we cannot fruitfully reason with conflicting obligations. To see this, let us begin by supposing that there are conflicting obligations. In particular, let us assume that it ought to be that a and that it ought to be that not a. According to the second principle of reasoning that we identified, we may conclude 'it ought to be that a or b' from 'it ought to be that a' because a or b follows from a. Next, according to the first principle of reasoning that we identified, we may perform disjunctive syllogism 'under several "ought"s'. So we may conclude 'it ought to be that b' from 'it ought to be that not a' and 'it ought to be that a or b'. Thus, our principles of reasoning taken together look to entail that we may reason from conflicting obligations—'it ought to be that a' and 'it ought to be that not a'—to an explosion of obligations—'it ought to be that b' for any b.

This result suggests that we cannot fruitfully reason with conflicting obligations. Reasoning that leads to explosion would be fruitless because agents who engage in such reasoning would be committed to thinking that there is no important distinction between claims of which it is true that they ought to hold and those of which it is false.
And since no interesting normative system fails to treat this distinction as important, the fact that reasoning with conflicting obligations commits us to treating this distinction as unimportant should make us reluctant to believe that there are conflicting obligations. More conservatively, even if we would like to allow for the abstract possibility that there can be conflicting obligations of some sort, the fact that we cannot fruitfully reason with them suggests that these kinds of obligations cannot play an important role in our lives.

In short, it is worth paying attention to issues concerning reasoning because advocates of conflicting obligations have a problem making sense of how we may reason with such obligations. They face the problem of telling us how to reason with such obligations without reasoning explosively.

More generally, even those who do not believe in conflicting obligations have reason to be interested in this problem. This is because there are analogs of it for other important normative notions as well. For example, consider the notion of a reason. We can use Smith's case and the speeding law case to motivate analogous principles of reasoning for reasons by simply substituting 'there is a reason to do' for 'it ought to be that' in these cases and noticing that the resulting reasoning is intuitively acceptable. Since it is a platitude that reasons can conflict, this shows that everyone faces the problem of understanding how to reason with normative notions that allow for conflicts.[45] And the solution that I will develop in this paper can be generalized so that it applies to the notion of a reason. That said, this generalization will not be obvious and I will not have the space to present it here.[46] Instead, I will focus exclusively on the issue of conflicting obligations because it, unlike the issue of conflicting reasons, has been subject to considerable discussion. In particular, I will present an account of reasoning with obligations on behalf of the advocate of conflicting obligations. This account not only explains why explosive reasoning is bad reasoning but also explains why the reasoning in Smith's case and the speeding law case are pieces of good reasoning that are neither enthymematic (that is, reliant on a suppressed premise) nor equivocating (that is, starting with an 'ought' that means one thing and ending with an 'ought' that means something else). The account is closely connected to the popular idea in ethics that reasons explain obligations; it not only solves this problem but also looks like a promising general theory of reasoning with conflicting obligations.

In order to make the case for my account, I begin in §1 by considering whether we can make use of the rich literature on conflicting obligations in deontic logic to solve our problem. I argue that no deontic logic can solve our problem, based on two structural features of the logical consequence relation (cut and monotonicity). Having used this argument to reject the most obvious potential solutions to our problem, I turn to developing my own account. I begin by presenting the popular idea in ethics that reasons explain obligations in §2.
While this idea does not directly give us an account of reasoning, I show in §3 how to develop a plausible account of reasoning with conflicting obligations that is closely connected to this popular idea. This account of reasoning solves our problem. And it does so, in part, by entailing that the 'reasoning consequence relation' does not share the structural features of the logical consequence relation. These are the main results of this paper. The remainder of the paper is dedicated to extending these ideas by looking at other problems about reasoning (§4), considering alternatives to my account (§5), and drawing an important moral about the bearing of my account of reasoning on issues in deontic logic (§6).

Footnote 45: Cf. Goble 2009: 454-8.

Footnote 46: Though the key first component of this generalization is independently motivated in chapter one, the generalization itself is presented in Nair ms.

1. Deontic Logic and The Problem

I began the paper by noting that substantial attention has been paid to the task of providing an adequate logic of conflicting obligations. I also noted that logicians have used cases that illustrate patterns of good reasoning, like Smith's case and the speeding law case, as an important source of evidence about what the correct principles of deontic logic are. It is unsurprising, then, that a problem analogous to the reasoning problem that I just presented has been discussed in this literature.[47]

Footnote 47: See Goble 2009: 459.

This suggests that a promising strategy for solving our problem about reasoning is to mine the resources of this rich literature in deontic logic. We might use one of these logics to provide an account of reasoning by claiming that we may reason in ways that correspond to valid inferences according to the logic of our choice. For this strategy to be successful, we would need to find a deontic logic according to which the reasoning in Smith's case and the speeding law case corresponds to valid inferences but explosive reasoning does not. In this section, I argue that this seemingly promising strategy is actually hopeless: no deontic logic could say that the reasoning in Smith's case and the speeding law case corresponds to valid inferences while the explosive reasoning does not.

1.1 'Logic'

In order to make this argument, I must begin by clarifying what I mean by 'logic' when I say that no deontic logic could get us the results that we want. As I will use 'logic', a logic allows us to define a logical consequence relation. More precisely, if we use upper case Greek letters like Γ, Δ, Σ, etc. for sets of arbitrary sentences and lower case Greek letters like φ, ψ, χ, etc. for arbitrary sentences, a logic tells us under what conditions φ is a logical consequence of Γ. Typically logicians symbolize the logical consequence relation with a turnstile, ⊢. So in symbols, according to my usage of 'logic', a logic tells us under what conditions Γ ⊢ φ (where this is read as 'φ is a logical consequence of Γ').[48]

Footnote 48: I will also occasionally use these symbols as names for themselves and trust that context makes it clear when I am doing this.

And as I will use 'logical consequence', a logical consequence relation is intended to capture when φ deductively follows from Γ. That is, we should say that Γ ⊢ φ only if the truth of the sentences in Γ guarantees the truth of φ.
In other words, the claim that Γ ⊢ φ tells us that an argument with the sentences in Γ as premises and φ as its conclusion is a valid argument in the sense of 'valid' that we are all taught in Logic 101.

Of course, this is not the only reasonable way to use 'logic'. John Burgess notes another important usage:

Among the more technically oriented 'logic' no longer means a theory about which forms of argument are valid, but rather means any formalism, regardless of intended application, that resembles a logic in the original sense enough to allow it to be usefully studied by similar methods. (2009: viii)

The sense of 'logic' Burgess identifies is wider than the sense that I have identified.[49] And in fact, the view I will go on to defend could be fairly called a deontic logic in this wider sense of 'logic'. While this way of discussing my view is unobjectionable to my ears, I have found that it often obscures the import of the arguments of this paper as well as the central features of the account of reasoning that I will go on to develop. For this (purely pragmatic) reason, I will adopt the narrower usage of 'logic' that I identified in the previous paragraphs.

Footnote 49: It would, for example, consider the system of default reasoning presented in Reiter 1980 to be a logic.

1.2 The Argument

Having clarified what I mean by 'logic', I am in a position to state the main claim of this section more precisely. The claim is that no deontic logic in the narrower sense of 'logic' just identified can solve our problem. In this subsection, I will argue for this claim.

1.2.1 Properties of Logical Consequence

My argument begins by isolating two claims about logical consequence that any deontic logic must accept. The first claim is that if φ is a logical consequence of Γ, then φ is a logical consequence of Γ together with any other claim, ψ. We can see why any deontic logic must accept this by noting that this claim just amounts to the obviously true claim that if Γ guarantees the truth of φ, then Γ and ψ together also guarantee the truth of φ. This feature of logical consequence is often called monotonicity:

Monotonicity: if Γ ⊢ φ, then Γ, ψ ⊢ φ

where we understand Γ, ψ as shorthand for Γ ∪ {ψ}.

The second claim is that if ψ is a logical consequence of Γ and φ is a logical consequence of Γ and ψ together, then φ is a logical consequence of Γ. While this claim is more complicated than the claim that logical consequence is monotonic, it is also a claim that any deontic logic must accept. After all, this claim just amounts to saying that if Γ guarantees the truth of ψ, and Γ and ψ together guarantee the truth of φ, then Γ guarantees the truth of φ. This feature of logical consequence is often called cut:

Cut: if Γ ⊢ ψ and Γ, ψ ⊢ φ, then Γ ⊢ φ

Thus, any deontic logic must define a logical consequence relation that is both cut and monotonic.
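Both properties can also be exhibited concretely. The following toy check (my own illustration, with formulas as nested tuples and ⊢PL computed by brute-force truth tables over three atoms) searches a small pool of formulas for counterexamples to monotonicity and cut; since classical consequence is truth preservation over all valuations, it finds none:

    from itertools import combinations, product

    def holds(f, v):
        t = f[0]
        if t == "atom": return v[f[1]]
        if t == "not":  return not holds(f[1], v)
        if t == "or":   return holds(f[1], v) or holds(f[2], v)

    NAMES = ("a", "b", "c")
    def entails(gamma, phi):
        return all(holds(phi, v)
                   for bits in product([True, False], repeat=len(NAMES))
                   for v in [dict(zip(NAMES, bits))]
                   if all(holds(g, v) for g in gamma))

    a, b, c = ("atom", "a"), ("atom", "b"), ("atom", "c")
    pool = [a, b, c, ("not", a), ("or", a, b), ("or", b, c)]

    for G in combinations(pool, 2):
        for psi in pool:
            for chi in pool:
                # Monotonicity: if G |- chi then G, psi |- chi.
                if entails(G, chi):
                    assert entails(set(G) | {psi}, chi)
                # Cut: if G |- psi and G, psi |- chi then G |- chi.
                if entails(G, psi) and entails(set(G) | {psi}, chi):
                    assert entails(G, chi)
    print("no counterexamples to monotonicity or cut in the pool")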
1.2.2 Why Deontic Logic Cannot Solve the Problem

I now want to make use of this result to show that no deontic logic can solve our problem. Our problem, recall, was to develop an account that says the reasoning in Smith's case and the speeding law case corresponds to valid inferences but explosive reasoning does not. The good reasoning in Smith's case was reasoning from 'it ought to be that a or b' and 'it ought to be that not a' to 'it ought to be that b'. And the idea of using deontic logic to solve our problem says that it can explain this reasoning by showing that it corresponds to a valid inference. In symbols, the idea that the reasoning in Smith's case corresponds to a valid inference is the idea that the following claim holds:

Logical Ought Disjunctive Syllogism (ODS⊢): O(a ∨ b), O(¬a) ⊢ O(b)

where O is read as 'it ought to be the case that', the connectives ∨ and ¬ are understood as they usually are, and lower case italicized letters such as a, b, c, etc. are for atomic sentences.

Analogously, we said that the speeding law case illustrated that it is good reasoning to conclude 'it ought to be that ψ' from 'it ought to be that φ' when φ entails ψ. For the purposes of my argument, I will only need to discuss the special case of this inference that involves concluding 'it ought to be that a or b' from 'it ought to be that a'.[50] In symbols, the idea that this reasoning corresponds to a valid inference is the idea that the following claim holds:

Logical Ought Disjunction Introduction (ODI⊢): O(a) ⊢ O(a ∨ b)

Footnote 50: Some reject ODI⊢ based, for example, on a generalization of Ross 1941's puzzle concerning imperatives. Others find it to be undeniable; see Hilpinen and Føllesdal 1971: 22 and Nute and Yu 1997: 5. While I do not have the space to consider this issue in the detail it deserves here, three points can be made. First, a popular solution to Ross's paradox is to accept ODI⊢ and explain the paradox away on pragmatic grounds (Castañeda 1981). Second, though not clearly impossible, it is not obvious that the reasoning in the speeding law case can correspond to a valid inference if ODI⊢ does not hold. In fact, this reasoning can be seen as an instance of ODI⊢, going from 'it ought to be that drivers drive forty-nine mph or forty-eight mph or…' to 'it ought to be that drivers drive ninety-nine mph or ninety-eight mph or … or forty-nine mph or forty-eight mph or…'. Third, ODI⊢ is entailed by other plausible principles, so those who reject it face the task of explaining which of these principles they reject. Two examples: ODI⊢ follows from ODS⊢ and the assumption that O(t) where t can be any tautology; ODI⊢ follows from the principle that 'it ought to be that φ' follows from 'it ought to be that φ and ψ', together with the principle that 'it ought to be that ψ' follows from 'it ought to be that φ' when φ and ψ are logically equivalent. Thanks to Thomas Baldwin for pressing me to address this issue.

Thus, ODS⊢ and ODI⊢ are formal statements of the idea that the reasoning in Smith's case and the speeding law case corresponds to valid inferences. To use deontic logic to solve our problem, we must find a deontic logic that accepts ODS⊢ and ODI⊢ but rejects explosive reasoning. Unfortunately, as I will now argue, there is no such deontic logic.

As I have already explained, every deontic logic defines a cut and monotonic consequence relation. I will now make use of this fact to show that if we assume that ODS⊢ and ODI⊢ hold, we can derive that explosive reasoning corresponds to a valid inference. So begin by assuming that ODS⊢ and ODI⊢, here numbered (1⊢) and (2⊢) respectively, hold:

(1⊢) O(¬a), O(a ∨ b) ⊢ O(b)
(2⊢) O(a) ⊢ O(a ∨ b)

Monotonicity, recall, tells us that we may add a sentence to the left and keep what is on the right. So we can get (3⊢) by adding O(a) to the left of (1⊢) and similarly get (4⊢) by adding O(¬a) to the left of (2⊢):

(3⊢) O(a), O(¬a), O(a ∨ b) ⊢ O(b)
(4⊢) O(¬a), O(a) ⊢ O(a ∨ b)

Cut then allows us to derive from (3⊢) and (4⊢) the result that explosive reasoning corresponds to a valid inference.
To see this, notice that (4⊢) is of the form Γ ⊢ ψ, with Γ = {O(¬a), O(a)} and ψ = O(a ∨ b), and that (3⊢) is of the form Γ, ψ ⊢ φ, with the same Γ and ψ and with φ = O(b). So cut tells us that Γ ⊢ φ:

(5⊢) O(a), O(¬a) ⊢ O(b)

(5⊢) says that inferring 'it ought to be that b' from 'it ought to be that a' and 'it ought to be that not a' is a valid inference. This shows why the strategy of mining the resources of deontic logic to solve our problem is hopeless. After all, for that strategy to work, it must show that the reasoning in Smith's case and the speeding law case corresponds to valid inferences even though explosive reasoning does not. But this derivation shows that it is impossible for the reasoning in Smith's case and the speeding law case to correspond to valid inferences without explosive reasoning corresponding to a valid inference.

And in fact this result tells us something more general. It tells us that if we are to solve our problem, the relation of good reasoning or the 'reasoning consequence relation' must not be both cut and monotonic. For if it were both cut and monotonic, then a parallel argument would show that it cannot be that the reasoning in Smith's case and the speeding law case is good reasoning without it also being true that explosive reasoning is good reasoning.

Of course, cut and monotonicity are non-negotiable for a logical consequence relation. For a reasoning consequence relation, however, they are negotiable. Monotonicity about reasoning says that if you may reason to φ from Γ, then you may reason to φ from Γ plus anything. In essence, this means that learning something new cannot make it so that you have to take back any of your old conclusions. Monotonicity for reasoning is negotiable simply because it is sensible to wonder whether learning something new might make it so that you have to retract your old conclusions. Similarly, cut about reasoning says that if you may reason to ψ with the premises Γ, and you may reason to φ with the premises Γ and ψ, then you may reason to φ with only the premises Γ. This basically means that you can draw the same conclusions whether you arrive at ψ as a conclusion or have it from the start as a premise. Much like monotonicity, cut is negotiable because it is at least sensible to ask whether premises really do play the same role in our reasoning as conclusions.[51] Thus, while solving our problem with deontic logic is impossible because the logical consequence relation is both cut and monotonic, it may yet be possible to solve our problem about reasoning. What we need to do to solve this problem is develop an account that entails that the reasoning consequence relation is not both cut and monotonic. So what I will do is develop such an account.

Footnote 51: Some suggest that reasoning that does not satisfy cut is unstable (see, for example, Makinson 1994: 43-4). Chapter three responds to this charge of instability from a general perspective.
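Because this derivation uses only the two structural rules, it can be replayed mechanically. The sketch below (my own encoding: the O-claims are opaque strings rather than parsed formulas) starts from ODS⊢ and ODI⊢ as sequents and closes them under monotonicity and cut over a four-formula universe; the explosive sequent (5⊢) duly appears:

    U = ["O(a)", "O(~a)", "O(b)", "O(a|b)"]              # universe of ought-claims

    seqs = {(frozenset(["O(~a)", "O(a|b)"]), "O(b)"),     # (1): ODS
            (frozenset(["O(a)"]), "O(a|b)")}              # (2): ODI

    grew = True
    while grew:
        new = set()
        for prem, con in seqs:
            for f in U:                                   # monotonicity: widen premises
                new.add((prem | {f}, con))
        for g1, p1 in seqs:                               # cut: discharge a lemma
            for g2, p2 in seqs:
                if g2 == g1 | {p1}:
                    new.add((g1, p2))
        grew = not new <= seqs
        seqs |= new

    explosion = (frozenset(["O(a)", "O(~a)"]), "O(b)")    # (5)
    print(explosion in seqs)                              # True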
1.3 Reconsidering the Problem

But before we turn to my account, we may wish to try out a different strategy that allows us to stay closer to the idea of using deontic logic to solve our problem. The strategy is to solve our problem by claiming that the reasoning in either Smith's case or the speeding law case is enthymematic or equivocating reasoning. Of course, these are simple examples where we have clear judgements that we are dealing with non-enthymematic and non-equivocating reasoning. So rejecting these judgements without some good independent reason would be an ad hoc solution to our problem.[52] But we have just seen that the reasoning in Smith's case and the speeding law case cannot correspond to valid inferences if we are to avoid saying that explosive reasoning corresponds to a valid inference. And it is tempting to think that this is a good independent reason for rejecting our judgements that our reasoning in these cases is non-enthymematic and non-equivocating.

Footnote 52: Cf. Goble 2009: 466-7.

As tempting as this line of thought might be, it is too quick. What makes the thought too quick is that it is not generally true that we should be willing to reconstrue some apparently good piece of reasoning as enthymematic or equivocating if we find that it does not correspond to a valid inference. Remember, we are using 'valid inference' to pick out an inference from Γ to φ such that the truth of the sentences in Γ guarantees the truth of φ. But there are many forms of reasoning that do not involve only inferring what is guaranteed to be true. For instance, we engage in inductive reasoning and common sense reasoning (for example, concluding 'John smokes' from 'A reliable witness told me that John smokes'). Such reasoning does not correspond to some valid inference, but this is not a good reason to reject our judgement that these types of reasoning are forms of good non-equivocating and non-enthymematic reasoning.

To give one example that displays just how implausible the suggestion that we are currently discussing is, consider the idea that the reasoning from 'A reliable witness told me that John smokes' to 'John smokes' is enthymematic. This idea would say the reasoning from 'A reliable witness told me that John smokes' to 'John smokes' is good reasoning because it relies on a suppressed premise, p, and the inference to 'John smokes' from p and 'A reliable witness told me that John smokes' is a valid inference. But consider what p would have to be in order for this inference to be valid. One thing it could be is 'Reliable witnesses always tell the truth'. But this claim is obviously false and rational agents do not generally accept it. So if the reasoning from 'A reliable witness told me that John smokes' to 'John smokes' implicitly relies on a premise like this, then, far from explaining why this is good reasoning, the strategy of treating this reasoning as enthymematic would conclusively show this reasoning is bad. Of course, we can consider other candidate premises. But all of the examples that I can think of are unacceptable for reasons similar to the one just given or end up making the explicitly given premise essentially irrelevant (for example, an implicit premise like 'The thing the reliable witness said is true').

For this reason, it is implausible to reconstrue all forms of good reasoning that fail to correspond to valid inferences as equivocating or enthymematic for reasoning that does so correspond. That said, since these forms of reasoning do not correspond to valid inferences, we may wonder what explains why they are forms of good reasoning. And in fact, one way of understanding what philosophers are up to when they are tackling the (new and old) riddle of induction is to understand them to be trying to answer a question like this.[53]
But the example of induction also teaches us that taking seriously questions about what explains why a certain form of reasoning is good reasoning, while not yet having fully worked out answers to such questions, should not undermine our confidence that this form of reasoning really is good. In light of this, I conclude that we should continue to take Smith's case and the speeding law case to provide us with strong evidence that certain patterns of reasoning with obligations are patterns of good reasoning. What I will do in the next two sections is develop an account of reasoning that entails that the reasoning in these cases is good reasoning. And while my account does not provide a fully worked out story of what explains why this reasoning is good, it will at least shed some light on this question.

Footnote 53: See Hume 1739: 1.3.6 and Goodman 1955: ch. 3.

2. The Popular Idea from Ethics

In order to develop my account of reasoning with obligations, it will help to first say something about the nature of obligation itself. And for that, I turn to the popular idea in ethics that reasons explain obligations.[54]

Footnote 54: See Dancy 2004, Nagel 1970, Parfit 2011, Raz 1975, and Scanlon 1998.

2.1 Versions of The Popular Idea

I am not going to be arguing for the popular idea in ethics in this paper. Instead, I will simply be assuming that it is true. In fact, I will be assuming that a particular version of it is true, and the primary task of this section is to explain the version of this view that I will be assuming is true.

We get different versions of the idea that reasons explain obligations by considering different accounts of how reasons explain obligations. Unfortunately, while the idea that reasons explain obligations is popular, no particular version of this idea enjoys the same celebrity. To illustrate, one account that has been discussed is that we explain why you ought to do a by showing that you have better reasons to do a than to do any alternative to a.

Unfortunately, I cannot adopt this account because I am interested in understanding reasoning with conflicting obligations on behalf of the advocates of such obligations, and this account looks to rule out conflicting obligations. According to this account, we ought to do a and ought to do b when a and b conflict only if the reasons to do a are better than the reasons to do any alternative and the reasons to do b are better than the reasons to do any alternative. But given natural assumptions about what alternatives are, a and b are alternatives to one another. And given natural assumptions about what it is for one reason to be better than another, a reason to do a cannot be better than a reason to do b while that same reason to do b is better than that reason to do a. But since this would have to be true in order for there to be conflicting obligations, this account rules out conflicting obligations.

So this version of the popular idea in ethics will not work for us because it is not compatible with the existence of conflicting obligations. Luckily, there are other versions of the popular idea in ethics and, in fact, one of the most sophisticated versions does allow for the existence of conflicting obligations. This account is due to John Horty (2012). Though one of the virtues of Horty's account is how detailed it is, I will only introduce a simplified version of it in this paper.[55] While I could develop my account of reasoning using the full details of Horty's system, such details are distracting for our purposes.

Footnote 55: In fact, the simplified version of this system is equivalent to a theory first proposed in van Fraassen 1973. I will nonetheless continue to call it 'Horty's system' because I intend my account of reasoning to ultimately embrace the much richer system developed in Horty 2012.
We are not trying to develop the best version of the popular idea in ethics in this paper. I simply want to make use of this idea from ethics to develop an account of reasoning. So all we require are those details that we will need to develop our account of reasoning.

2.2 ⊨Horty

We can start to get a feel for this simplified version of Horty's account by noticing that it is not just any reasons that explain obligations. I might, for example, have a reason to watch television all day. But it is usually not true that I ought to do this. Instead, I usually ought to do something else, like teach my students, work on a paper, or clean my house. And this is generally because my reasons to do these things are much better than my reasons to sit at home and watch television all day.

Now Horty's account can tell us how reasons of different strengths interact with one another. But since we are interested in understanding reasoning in this paper and not the strength of reasons, we will not worry about how we determine whether one reason is better than another. Instead, we will take it for granted that there is a class of good or undefeated reasons, where an undefeated reason is understood to be a reason that is not worse than any reason that is incompatible with it.

Since we will later use these ideas from ethics in developing our account of reasoning, and since I will want to be able to compare this account with the ideas from deontic logic that we discussed earlier, I will need to introduce a bit of formalism. So far all I have done is point out that the notion of a good reason plays a role in Horty's system, so let me introduce a formal device for discussing good reasons. I will write !(φ) for 'there is a good reason for it to be the case that φ'. And for simplicity once again, we will not bother with formally representing what the reason is. Since we will often be interested in what we ought to do in cases where we have more than one reason, it will also be helpful to have a formal device for a set of good reasons. I will use ℜ for such sets. Finally, it will be helpful to have a way of picking out the thing that we have a good reason to do (for example, 'I go to the store' in 'there is a good reason for it to be the case that I go to the store'). So let me define a function Consequent that allows us to do this:

Consequent[!(φ)] = φ

It is also useful to generalize this so that we can take a set of claims about what there is a good reason to do and return the set of those things that there is a good reason to do:

Consequent[ℜ] = {x | there is some y such that y ∈ ℜ and Consequent[y] = x}

So far, then, we have introduced a formal way of talking about good reasons (!(φ)) and about sets of good reasons (ℜ), along with a function to pick out the things that there are good reasons to do (Consequent).

Now that we have identified the kinds of reasons that we are going to use to explain obligations, we need to say how these reasons explain obligations. Sometimes this is clear enough. For example, suppose you promise to meet Sally for lunch and you do not have anything else pressing to do.
In this situation, you ought to meet Sally for lunch, and this is explained by the fact that there is a good reason for you to meet her—namely, the promise. So here you have a good reason to meet Sally for lunch and you also ought to meet her. But for other claims about what you ought to do in this case, things are less clear. For example, many people accept that if you ought to do φ and φ entails ψ, then you ought to do ψ. Assuming that this is correct, this means that you also ought to meet Sally for lunch or meet Bill for lunch, because this follows from meeting Sally for lunch. It is not clear, however, that you have good reason to meet Sally or Bill for lunch. After all, we usually do not think promising Sally is a reason to meet Sally or Bill. So while it is true that you ought to meet Sally or Bill, it is not obviously true that there is a good reason for you to meet Sally or Bill.[56] If this is right, then it may be that you ought to do something even though you do not have good reason to do that thing.

Footnote 56: This is not intended as a conclusive argument for this claim. Indeed, I argue against this view in chapter one. In terms of that framework, the point made here can be reframed as the claim that sometimes it can be that you ought to do something even though you do not have good non-derivative reason to do it. The point that I am making here is only intended to introduce Horty's way of thinking about the issue so that the arguments of this chapter do not presuppose the work of any other chapters. As I mention earlier, Nair ms provides a unified picture of the account of chapter one and chapter two.

This is not a counterexample to the idea that reasons explain obligations. It just tells us that the connection between reasons and obligations is not one that allows us to say that every time you ought to do something there is a reason to do that very thing. Instead of saying this, what we can do is say that what explains why you ought to meet Sally or Bill is that this follows from what you have good reason to do. That is, you ought to meet Sally or Bill because you have a good reason to meet Sally and because meeting Sally entails meeting Sally or Bill.

Horty's account essentially embraces this idea about how reasons explain obligations. Roughly, obligations are the things that follow from what you have good reasons to do.

It is important that this is only a rough idea and not the whole story. After all, everyone should accept that there can be situations in which you have two equally good reasons. So consider such a case (for example, you made equally important promises to Sally and to Bill) and suppose further that these reasons conflict (for example, you cannot fulfil both promises).[57] In this kind of case, we do not want to identify what you ought to do with what follows from what you have good reasons to do. After all, you have a good reason to do each of a pair of inconsistent things. And since everything follows from an inconsistent set of claims, and since everyone should agree that it is not true that you ought to do everything in this case, it cannot be true that you ought to do what follows from what you have good reasons to do.

Footnote 57: Cf. Marcus 1980: 125.

We avoid this result by focusing not on what follows from the whole set of reasons but rather on what follows from some most inclusive consistent subset of reasons. Since this idea can be hard to get your head around when it is stated this abstractly, it may be useful to have a heuristic that lets us think about it in less abstract terms.
So think of it like this: a rational agent treats the things she thinks she has good reasons to do as goals, and what such a rational agent ought to do are the steps in a plan that allows her to achieve those goals. In cases where she has incompatible good reasons, she has incompatible goals. So there is no single plan that can accomplish all of them. A 'most inclusive' consistent set of reasons is a collection of goals that is consistent and such that you cannot add more goals to that collection without making it inconsistent. This is the most inclusive set of goals that can be accomplished by a single plan. And so what follows from some such set can be thought of as a step in a plan that helps achieve the most inclusive set of goals that she can achieve with a single plan.

Formally, we say a set of good reasons, ℜ, includes incompatible reasons when the things we have good reason to do are inconsistent—that is, when Consequent[ℜ] is inconsistent. And we will treat the idea of a 'most inclusive' consistent subset of a set of reasons as a maximal consistent subset of Consequent[ℜ]:

Δ is a maximal consistent subset of Γ iff (1) Δ ⊆ Γ, (2) Δ is consistent, and (3) it is not the case that there is a Σ such that Σ is consistent and Δ ⊂ Σ ⊆ Γ.

What we have said is that what you ought to do is what is entailed by some maximal consistent subset. We will use the logical consequence relation of ordinary propositional logic, ⊢PL, as our formal model of entailment. This allows us to define a relation ⊨Horty between a set of good reasons, ℜ, and a claim about what we ought to do, O(φ), that holds just in case the set of good reasons explains the obligation:

ℜ ⊨Horty O(φ) iff Δ ⊢PL φ for some maximal consistent subset, Δ, of Consequent[ℜ]

This, then, is the idea of how reasons explain obligations that we will be using. And I will refer to this particular version of the popular idea in ethics by the name Reasons Explain Obligations, or REO for short. To summarize, the basic idea is that what we ought to do is what follows from what we have good reason to do. We add the complication, for cases where we have equally good reason to do two incompatible things, that what we ought to do is what follows from some maximal consistent subset of what we have good reason to do. And it is precisely in these cases that this system allows for conflicting obligations.[58]

Footnote 58: Sinnott-Armstrong 1988 and Nagel 1979 argue that these are the cases where we face conflicting obligations.

Thus, I will be assuming REO is correct in what follows.[59] And in the next section I will build my account of reasoning on this idea about the nature of obligation.

Footnote 59: This is not to say that this is the only view that I could have assumed in order to develop my account. For another view see Hansen 2007.
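Before moving on, it may help to see ⊨Horty run. The following self-contained Python sketch is my own toy formulation: formulas are nested tuples, ⊢PL is computed by brute-force truth tables, and a reason set ℜ is represented directly by Consequent[ℜ] (since we are suppressing what the reason itself is):

    from itertools import combinations, product

    # Formulas: ("atom", "a"), ("not", f), ("or", f, g)
    def atoms(f):
        return {f[1]} if f[0] == "atom" else set().union(*(atoms(g) for g in f[1:]))

    def holds(f, v):
        t = f[0]
        if t == "atom": return v[f[1]]
        if t == "not":  return not holds(f[1], v)
        if t == "or":   return holds(f[1], v) or holds(f[2], v)

    def vals(fs):
        names = sorted(set().union({"a"}, *(atoms(f) for f in fs)))
        return [dict(zip(names, bits)) for bits in product([True, False], repeat=len(names))]

    def entails_pl(gamma, phi):
        gamma = list(gamma)
        return all(holds(phi, v) for v in vals(gamma + [phi]) if all(holds(g, v) for g in gamma))

    def consistent(gamma):
        gamma = list(gamma)
        return any(all(holds(g, v) for g in gamma) for v in vals(gamma))

    def mcs(gamma):
        # Maximal consistent subsets of Consequent[R].
        gamma = list(gamma)
        subs = [frozenset(s) for r in range(len(gamma) + 1)
                for s in combinations(gamma, r) if consistent(s)]
        return [s for s in subs if not any(s < t for t in subs)]

    def horty(R, phi):
        # R |=Horty O(phi): phi follows from SOME maximal consistent subset.
        return any(entails_pl(d, phi) for d in mcs(R))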
2.3 Comparison to Deontic Logic

Before I go on to develop the account of reasoning that will solve our problem, I want to take a moment to spell out the relationship between REO and the results about deontic logic in §1. REO is a metaphysical thesis about the relationship between reasons and obligations. As such, it does not directly define a logical consequence relation between obligations in the way a deontic logic does. However, we can use the resources of REO to define a relation that tells us when a collection of obligations guarantees the truth of another obligation: one obligation will guarantee the truth of another just in case any collection of reasons that can explain why the first obligation holds suffices to explain why the second obligation holds. This leads to the following definition of a relation ⊢Truth that tells us when a collection of obligations guarantees the truth of another obligation:

Γ ⊢Truth φ iff for all ℜ such that ℜ ⊨Horty ψ for all ψ ∈ Γ: ℜ ⊨Horty φ

Thus, ⊢Truth is a relation that tells us when one collection of obligations guarantees the truth of another obligation. Given the way that we have defined ⊢Truth, it does not take much work to prove that it must be both monotonic and cut. And given our results from §1.2, this means that we cannot say that the reasoning in Smith's case and the speeding law case corresponds to truth-guaranteeing inferences without saying that the explosive reasoning corresponds to a truth-guaranteeing inference.

Let us consider, then, which of these inferences corresponds to a truth-guaranteeing inference and which does not. The reasoning in the speeding law case does correspond to a truth-guaranteeing inference. Formally, this means that the following claim is true:

O(φ) ⊢Truth O(ψ) if {φ} ⊢PL ψ

And it is not hard to see why. After all, this claim will be true if any collection of reasons that explains why O(φ) is true also explains why O(ψ) is true. For a collection of reasons to explain why O(φ) is true is just for φ to follow from a maximal consistent subset of that set of reasons. Since we are understanding 'following from' in terms of ⊢PL, and since ⊢PL is a transitive relation, we know that if φ follows from this set of reasons and {φ} ⊢PL ψ, then ψ must follow from this set of reasons as well. Thus, any set of reasons that explains why O(φ) is true must also explain why O(ψ) is true.

Now that we have seen that the reasoning in the speeding law case corresponds to a truth-guaranteeing inference, we know that the reasoning in Smith's case must not correspond to a truth-guaranteeing inference, on pain of saying that the explosive reasoning corresponds to a truth-guaranteeing inference. And in fact we can demonstrate with a single abstract example that neither the reasoning in Smith's case nor the explosive reasoning corresponds to a truth-guaranteeing inference. The example is a case where we have a good reason to do a and a good reason to do ¬a: ℜ = {!(a), !(¬a)}. Evidently, this set of reasons has two maximal consistent subsets, {a} and {¬a}. Thus, this set of reasons explains why O(a) and O(a ∨ b) are true—they follow from a maximal consistent subset, {a}—and explains why O(¬a) is true—it follows from a maximal consistent subset, {¬a}. But importantly, this set of reasons does not suffice to make O(b) true—it follows from neither {a} nor {¬a}. Thus, this set of reasons shows that it is possible for O(a), O(a ∨ b), and O(¬a) to be true while O(b) is false. So this shows that the reasoning we do in Smith's case from 'it ought to be that a or b' and 'it ought to be that not a' to 'it ought to be that b' does not correspond to a truth-guaranteeing inference. And it shows that the explosive reasoning from 'it ought to be that a' and 'it ought to be that not a' to 'it ought to be that b' doesn't either. This means that REO and ⊢Truth alone do not allow us to provide an account that solves our problem about reasoning.
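The abstract example just given can be checked directly with the horty() sketch from §2.2 above:

    a, b = ("atom", "a"), ("atom", "b")
    R = {a, ("not", a)}                          # models R = {!(a), !(not-a)}
    for phi in (a, ("not", a), ("or", a, b), b):
        print(phi, horty(R, phi))
    # O(a), O(not-a), and O(a or b) all hold, but O(b) does not: so neither
    # the Smith-style inference nor explosion is truth guaranteeing.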
But what I will show in the next section is that the ideas that I have introduced in this section, together with certain other assumptions, do allow us to develop an account of reasoning with conflicting obligations.

3. How to Reason with Conflicting Obligations

My account of reasoning is that we reason well when we reason as if we have good reasons to do what our premises say we ought to do. Let me explain what this mouthful means. Begin by noticing that there are two respects in which REO does not directly tell us anything about the reasoning that we are interested in. First, REO is a metaphysical theory about the nature of obligation—it tells us how reasons explain obligations—and not a broadly epistemological theory of reasoning. Second, REO concerns the connection between reasons and obligations, but the reasoning that we are interested in concerns the connection between obligations and other obligations.

My account bridges this gap between REO and the reasoning that we are interested in by taking on board two further claims. First, I claim that it is good reasoning to conclude 'it ought to be that a' from a set of premises about what you have good reason to do just in case REO says those reasons must make it true that it ought to be that a. Second, I claim that when 'it ought to be that a' is your premise, the conclusions you can draw about what you ought to do are the conclusions you can draw about what you ought to do from 'there is a good reason to do a'. This does not mean that when your premise is 'it ought to be that a' you reason enthymematically by relying on the suppressed premise 'there is a good reason to do a'. And it does not mean that when your premise is 'it ought to be that a' you equivocate on its meaning by taking it to mean the same thing as 'there is a good reason to do a'. My proposal instead is just that when you accept 'it ought to be that a' as a premise, the conclusions you can draw about what you ought to do are the same as the conclusions you can draw from 'there is a good reason to do a'. In short, my idea is that the premise 'it ought to be that a' plays the same functional role in reasoning about what you ought to do as 'there is a good reason to do a'.

Of course, this is just a statement of the account. So in the remainder of this section, I motivate it (§3.1) and demonstrate that it solves the problem (§3.2).

3.1 Motivating the Account

There are two reasons why this account is plausible. The first reason is that it jointly satisfies two desiderata. The second is that it provides a plausible picture of reasoning.

3.1.1 First Reason

We want to understand Smith's case and the speeding law case as pieces of non-equivocating and non-enthymematic good reasoning. This means that these are cases of reasoning directly from claims about what we ought to do to other claims about what we ought to do. Call reasoning like this deontic reasoning. What is important to notice about deontic reasoning is that it involves reasoning from obligations to obligations directly and does not involve appealing, for example, to premises about our reasons. So the first, perhaps obvious, desideratum is just this: the correct account treats the reasoning that we are interested in as deontic reasoning. And the account that I have proposed does this. It is an account of how to reason from claims about obligations to other claims about obligations. Of course, I said that when you have 'it ought to be that a' as a premise you reason as if you have a good reason to do a.
But as I insisted above, this does not mean you implicitly rely on 'there is a good reason to do a' in your reasoning, and it doesn't mean that when 'it ought to be that a' is your premise it means the same thing as 'there is a good reason to do a'. Rather, it is just that the conclusions that you can draw from 'it ought to be that a' are the same as the conclusions that you can draw from 'there is a good reason to do a'.

The second desideratum for an account of the reasoning in Smith's case and the speeding law case is that it should fit well with REO. Of course, as I have emphasized, REO is, in the first instance, a metaphysical theory about the nature of obligation and not a broadly epistemological theory about deontic reasoning. That said, it is still natural to want your metaphysics and your epistemology to fit well together. And my account of deontic reasoning does fit closely with REO because it builds on REO with the help of the two further claims that I took on board.

The fact that my account jointly satisfies these desiderata adds to its plausibility. And since my account rests on REO and two further claims, and since we are taking REO for granted, the fact that my account of deontic reasoning satisfies these desiderata lends plausibility to the two further claims that I am making.

3.1.2 Second Reason

My second reason provides a direct illustration of the plausibility of my account. The way to see this is to return to our heuristic about planning.

Consider then two ways an account of deontic reasoning might tell you to treat your premises. An account of deontic reasoning could tell you to treat the things your premises say that you ought to do as goals of a plan or as steps in a plan. In particular, if your premise in deontic reasoning is that you ought to do a, an account could tell you to treat a as a mere step in your plan or treat a as a goal. But if all you know is that you ought to do a, then it seems that you should structure your planning around a like you would structure your planning around a goal. If all you know is that you ought to do a, then a seems like it should have this kind of guiding role in your planning. This suggests that an account of deontic reasoning should tell you to treat the things your premises say that you ought to do as goals.

If we think of deontic reasoning in this way and plausibly assume, as we did in §2.2, that a rational agent treats the things she thinks she has good reason to do as goals, we can see that treating what our premises say that we ought to do as goals amounts to reasoning as if we have good reasons to do what our premises say we ought to do. This means that an account of deontic reasoning should tell us to reason as if we have good reasons to do what our premises say we ought to do.

This discussion of planning is not intended as a description of what deontic reasoning is; rather, it is intended as a heuristic to bring out what is underlyingly plausible about my account of such reasoning. The idea in more abstract terms is this: as agents, we sometimes must reason with obligations even though we do not know which reasons explain those obligations. In this environment, it is reasonable to think that we treat the things our premises say we ought to do as guiding our reasoning about what else we ought to do in the way claims about reasons guide our reasoning about what we ought to do. The idea is that this is a sensible ampliative reasoning strategy.[60][61]

Footnote 60: So the conclusions that you can draw about what you ought to do when you accept 'it ought to be that a' as a premise are the same as the conclusions you can draw from 'there is a good reason to do a' not solely because of the meaning of 'it ought to be that a' but also because of the fact that you accept it as a premise in the context of deontic reasoning. This is why deontic reasoning with 'it ought to be that a' is not enthymematic or equivocating for reasoning with 'there is a good reason to do a'—it is a feature of acceptance as a premise in deontic reasoning, not a feature of meaning, that makes the difference.

Footnote 61: By ampliative reasoning I mean reasoning that allows us to conclude more than what is guaranteed to be true by our premises.
Footnote 60: So the conclusions that you can draw about what you ought to do when you accept 'it ought to be that a' as a premise are the same as the conclusions you can draw from 'there is a good reason to do a' not solely because of the meaning of 'it ought to be that a' but also because of the fact that you accept it as a premise in the context of deontic reasoning. This is why deontic reasoning with 'it ought to be that a' is not enthymematic or equivocating for reasoning with 'there is a good reason to do a'—it is a feature of acceptance as a premise in deontic reasoning, not a feature of meaning, that makes the difference.

Footnote 61: By ampliative reasoning I mean reasoning that allows us to conclude more than what is guaranteed to be true by our premises.

For this reason, I think there is some plausibility to the idea that rational agents reason well when they treat the premise 'it ought to be that a' as playing the same role in their reasoning as 'there is a good reason to do a'. This is not, of course, a full account of what justifies this reasoning strategy. But this is analogous to the situation that we find ourselves in with regard to other forms of reasoning such as inductive reasoning and common sense reasoning. For these forms of reasoning, we do not have a fully worked out story of why they are forms of good reasoning either. That said, my account does shed at least some light on why the deontic reasoning we engage in is good reasoning by satisfying the two desiderata and by presenting a plausible picture of what such reasoning consists in. Because of this, I believe that the account of deontic reasoning that I have offered has some independent plausibility. And as I will now demonstrate, it solves our problem.

3.2 Solving the Problem

To do this, we will need to develop the account that I have just presented informally in enough formal detail (§3.2.1) to verify that it entails that the reasoning in Smith's case and the speeding law case is good reasoning but explosive reasoning is not (§3.2.2). And we will need to verify that this account avoids the argument in §1 that showed that our problem cannot be solved if the reasoning consequence relation is both cut and monotonic (§3.2.3).

3.2.1 The Formal Implementation

So let us begin our discussion by formally implementing the idea that we have been informally discussing so far. I said that this account takes REO for granted and builds on it with two further claims. Since REO is already presented in formal terms, all we need to do in order to formally implement this idea is formalize the two further claims.

The first claim I added to REO was that we can reason from some claims about what we have good reason to do to the conclusion 'it ought to be that a' just in case according to REO, any set of reasons that includes these claims about what we have good reason to do explains why 'it ought to be that a' holds. So in symbols, you can conclude, for example, 'it ought to be that b' from 'there is a good reason to do a' just in case for every ℜ such that {!(a)} ⊆ ℜ, ℜ ⊨Horty O(b).

The second claim that I added to REO is that when your premise is 'it ought to be that a', the conclusions that you can draw from it are the conclusions that you can draw from 'there is a good reason to do a'. To see how this works formally, recall that we just said that you can conclude 'it ought to be that b' from 'there is a good reason to do a' just in case for every ℜ such that {!(a)} ⊆ ℜ, ℜ ⊨Horty O(b).
Since my account says that when your premise is 'it ought to be that a', the conclusions that you can draw about what you ought to do are the conclusions that you can draw from 'there is a good reason to do a', this means that you can conclude 'it ought to be that b' from 'it ought to be that a' just in case for every ℜ such that {!(a)} ⊆ ℜ, ℜ ⊨Horty O(b).

This tells us how to formalize my account of deontic reasoning for the special case of reasoning from 'it ought to be that a' to 'it ought to be that b'. Of course, we will want to know how to reason with sets of obligations rather than just a single obligation, so we will need to generalize this idea to sets. This leads us to the following definition of a reasoning consequence relation that I will call, for reasons that will become apparent later, ⊩NonCut:

O(φ₁), O(φ₂), …, O(φₙ) ⊩NonCut O(ψ) iff, for all ℜ such that
(i) ℜ ⊨Horty O(φ₁), O(φ₂), …, O(φₙ) and
(ii) {!(φᵢ) | 1 ≤ i ≤ n} ⊆ ℜ,
ℜ ⊨Horty O(ψ).[62]

This definition tells us that we can conclude O(ψ) from a set of obligations just in case ℜ ⊨Horty O(ψ) where ℜ is a set of reasons that we have placed conditions (i) and (ii) on. Condition (i) captures the idea that you take your premises to be true in reasoning.[63] Condition (ii) captures the idea that the premise 'it ought to be that a' plays the same functional role in reasoning as 'there is a good reason to do a'. And the rest of the definition captures the idea that we may reason from some claims about what we have good reason to do to the conclusion 'it ought to be that b' just in case according to REO, any set of reasons that includes these claims about what we have good reason to do explains why 'it ought to be that b' holds. So this is just a generalization and formalization of the idea that we reason well when we reason as if we have good reasons to do what our premises say we ought to do.[64]

Footnote 62: ⊩NonCut is formally similar to a relation defined in Horty 1993 (and other systems inspired by this one). More generally, this paper owes many of its ideas to previous work by John Horty. In essence, the present account newly interprets, further develops, and connects various strands of Horty's work. Let me briefly explain. Horty 1993 and 1997 define a consequence relation that is formally similar to the one defined here. At this time, however, Horty did not distinguish between reasons and obligations. So this relation, though formally similar to mine, was not given the interpretation that I give it—that when you have 'it ought to be that a' as a premise you reason as if you have good reason to do a. For the first time in Horty 1994 and more elaborately in Horty 2003, 2012, Horty distinguished between reasons and obligations. This distinction allowed him to recharacterize the relation defined in Horty 1993 and 1997 as a relation between reasons and obligations—namely, ⊨Horty. But as we saw, this relation is one that does not directly tell us about deontic reasoning. So my account shows how we can accept the theory developed in Horty's later work and build on top of it an account of deontic reasoning that is formally similar to the one given in his earlier work.

Footnote 63: This condition is independent of our core idea that we reason well when we reason as if we have good reason to do what our premises say we ought to do. While I do find this further condition plausible, I am officially neutral about it because there are a number of interesting issues that I cannot adequately discuss here that turn on whether we reject (i): rejecting (i) would make a difference to how we treat obligations to do contradictory things and would change certain formal properties of the relation (it would make it so the relation is not reflexive and so that not all logically valid inferences are pieces of good reasoning). I will however assume (i) holds from now on because it allows me to state some of my main ideas in a more straightforward way (for example, it simplifies the proof of SOC⊩ below).

Footnote 64: While this tells us how to reason with obligations that have prejacents of arbitrary logical complexity, it does not tell us how to reason with arbitrarily complex sentences about obligation. I do not present an account of this here because it is not needed to solve the main problems about reasoning with conflicting obligations and introduces unnecessary complications. Nair ms presents a fully general account.

3.2.2 The Cases

Having formally implemented this account of deontic reasoning, I am now in a position to demonstrate in a precise way that it solves our problem. To solve our problem is to provide an account of deontic reasoning that entails that the reasoning in Smith's case and the speeding law case is good reasoning but explosive reasoning is not. Formally, this means we want to show that the following claims hold:

Reasoning Ought Disjunctive Syllogism (ODS⊩): O(a ∨ b), O(¬a) ⊩NonCut O(b)

Reasoning Single Ought Closure (SOC⊩): O(φ) ⊩NonCut O(ψ) when {φ} ⊢PL ψ
even though the following one does not:

Reasoning Ought Explosion (OE⊩): O(a), O(¬a) ⊩NonCut O(b)

Let us verify that my account gets these results. Begin with SOC⊩. According to the definition of ⊩NonCut, you may conclude O(ψ) from O(φ) when {φ} ⊢PL ψ just in case every set of reasons of a certain kind makes O(ψ) true. The relevant sets of reasons according to our definition are ones where (i) O(φ) is true and where (ii) you have a good reason to do φ. As we showed in §2.3, O(ψ) where {φ} ⊢PL ψ holds in every set of reasons where O(φ) holds. Thus, O(ψ) must also hold in sets of reasons where (i) O(φ) holds and (ii) you have a good reason to do φ. So SOC⊩ holds. This shows that our account of reasoning entails that the reasoning in the speeding law case is good reasoning.

Next let us consider ODS⊩. To show that this holds according to our definition, we need to look at sets of reasons where (i) O(¬a) and O(a ∨ b) hold and where (ii) you have a good reason to do ¬a and a good reason to do a ∨ b. Since ¬a and a ∨ b are consistent, they must be a subset of some maximal consistent subset, Δ, of your good reasons.[65] Obviously {¬a, a ∨ b} ⊢PL b. So since {¬a, a ∨ b} ⊆ Δ, it follows that Δ ⊢PL b.[66] Since we have shown that b must follow from a maximal consistent subset of reasons, O(b) holds in every such set of good reasons. This suffices to show ODS⊩ and therefore shows our account entails that the reasoning in Smith's case is good.

Footnote 65: Any consistent set, Δ, must be a subset of some maximal consistent subset, Γ. For suppose Δ is not a proper subset of any such Γ; then Δ itself is a maximal consistent subset.

Footnote 66: This follows by the monotonicity of ⊢PL.

But importantly our account does not say that OE⊩ holds. To see why, we need to show that there is a set of reasons that does not make O(b) true even though it includes a good reason to do a and a good reason to do ¬a. And this is easy to do: Consider a set of just these reasons, ℜ = {!(a), !(¬a)}. There are two maximal consistent subsets, {a} and {¬a}, neither of which entails b. So O(b) does not hold. Because of this our account of reasoning entails that reasoning explosively is not good reasoning.
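Since these verifications only involve finite sets of formulas, they can be checked mechanically. Here is a toy Python sketch—purely my own illustration, not part of the official account; the formula encoding and function names are assumptions—where a formula is a string (an atom) or a tuple, entailment is checked by truth tables, and O(ψ) is evaluated by the some-maximal-consistent-subset clause that matches the verifications just given:

from itertools import combinations, product

# Formulas: an atom is a string; compounds are tuples such as
# ('not', f), ('or', f, g), ('and', f, g).

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*(atoms(g) for g in f[1:]))

def holds(f, v):
    # Truth of formula f under a valuation v (a dict from atoms to booleans).
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not holds(f[1], v)
    if f[0] == 'or':
        return holds(f[1], v) or holds(f[2], v)
    return holds(f[1], v) and holds(f[2], v)   # 'and'

def entails(premises, conclusion):
    # Classical propositional entailment (a stand-in for ⊢PL).
    voc = sorted(set().union(atoms(conclusion), *(atoms(p) for p in premises)))
    for row in product([True, False], repeat=len(voc)):
        v = dict(zip(voc, row))
        if all(holds(p, v) for p in premises) and not holds(conclusion, v):
            return False
    return True

def consistent(fs):
    # A set is consistent iff it does not entail an arbitrary contradiction.
    return not entails(fs, ('and', 'a', ('not', 'a')))

def max_consistent_subsets(fs):
    # All maximal consistent subsets of a (duplicate-free) list of formulas.
    subs = [list(c) for k in range(len(fs) + 1)
            for c in combinations(fs, k) if consistent(list(c))]
    return [s for s in subs if not any(set(s) < set(t) for t in subs)]

def ought(reasons, psi):
    # O(psi) holds relative to a set of reasons iff psi follows from some
    # maximal consistent subset of them (the clause used in the text above).
    return any(entails(m, psi) for m in max_consistent_subsets(reasons))

a, not_a, b = 'a', ('not', 'a'), 'b'
a_or_b = ('or', 'a', 'b')
print(ought([not_a, a_or_b], b))   # ODS premises: True
print(ought([a], a_or_b))          # an instance of SOC: True
print(ought([a, not_a], b))        # OE premises: False

The three printed verdicts reproduce the results argued for informally: Smith-style disjunctive syllogism and single-premise closure come out as good reasoning, while explosion does not.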
Thus, our account of reasoning gets us exactly what we want: it entails that the reasoning in Smith's case and the speeding law case is good reasoning but the explosive reasoning is not.

3.2.3 The Reasoning Consequence Relation

I argued in §1 that we can get what we want only if the reasoning consequence relation is not both cut and monotonic. That argument appealed to the following derivation to establish this point. Replacing ⊢ with ⊩, the derivation began with (1⊩), which is ODS⊩, and (2⊩), which is a special case of the reasoning in the speeding law case that we have called ODI⊩:

(1⊩) O(¬a), O(a ∨ b) ⊩ O(b)
(2⊩) O(a) ⊩ O(a ∨ b)

By monotonicity, we got (3⊩) and (4⊩):

(3⊩) O(a), O(¬a), O(a ∨ b) ⊩ O(b)
(4⊩) O(¬a), O(a) ⊩ O(a ∨ b)

And then by cut, we are able to derive (5⊩) from (3⊩) and (4⊩):

(5⊩) O(a), O(¬a) ⊩ O(b)

This derivation taught us that if the reasoning consequence relation is both cut and monotonic then our problem cannot be solved. We know that my account solves our problem because, as we just saw, it accepts (1⊩) and (2⊩) but rejects (5⊩); but we do not yet know how it solves the problem. One way it could solve the problem is by rejecting monotonicity in a way that blocks the derivation of (3⊩) or (4⊩). Another way it could solve the problem is by rejecting cut in a way that blocks the derivation of (5⊩). Nothing in our discussion so far has suggested that one way of solving the problem is preferable to another.

The account I have developed, as the name suggests, solves the problem by rejecting cut. And I will show in §5 of this paper that we can define a variant of it that solves the problem by denying monotonicity instead. While this variant shares in many of the advantages of the account that I am developing here, it will be easiest to understand this nonmonotonic account after having appreciated the non-cut account. So for now let us focus on ⊩NonCut and see how it blocks this derivation.

Recall that intuitively cut told us that there is no important difference between premises and conclusions. Cut fails according to the current approach because premises have a special role. You reason as if you have good reason to do what your premises say that you ought to do. You are not committed to doing this for your conclusions. To see that this thought does lead to a denial of cut, let us consider (3⊩), (4⊩) and (5⊩) more carefully.[67]

Footnote 67: Cf. Horty 199: 29.

To see that (3⊩) holds, the sets of good reasons to consider are the ones where you have a good reason to do a, a good reason to do ¬a, and a good reason to do a ∨ b. Since the good reason to do ¬a and the good reason to do a ∨ b are consistent, they will be elements of a maximal consistent subset. As we just saw in showing that the reasoning in Smith's case is good reasoning, O(b) holds when we have such a maximal consistent subset. So (3⊩) holds. To see that (4⊩) holds, the sets of reasons to look at are ones where you have a good reason to do ¬a and a good reason to do a. Since a is consistent, it must be a subset of some maximal consistent subset Δ. Obviously, a ⊢PL a ∨ b and Δ ⊢PL a, so Δ ⊢PL a ∨ b. Thus, O(a ∨ b) must hold. Hence, (4⊩) holds. Notice what is happening here.
In (4⊩) we do not reason as if we have a good reason to do a ∨ b even though in (3⊩) we do reason as if we have a good reason to do a ∨ b. This is because O(a ∨ b) is a premise in (3⊩) while it is a conclusion in (4⊩). And notice that in showing why (3⊩) held—in showing why we can conclude O(b)—we crucially relied on the fact that we reason as if we have good reason to do a ∨ b.

As we have already seen, (5⊩), which is the formal statement of the claim that we may reason explosively, does not hold. Since (4⊩) and (5⊩) have the same premises and since our premises determine what we reason as if we have good reasons to do, we know that we will not reason as if we have a good reason to do a ∨ b when we have O(a) and O(¬a) as our premises. And as I just said in discussing (3⊩), reasoning as if we have a good reason to do a ∨ b is crucial to allowing us to conclude O(b). This is why we are not able to conclude O(b) from O(a) and O(¬a) alone.

This shows that the solution to our problem flows directly from the features of the account that shed light on why the deontic reasoning that we are interested in understanding is good reasoning. A feature of that account was that we reason as if we have good reasons to do what our premises say we ought to do. It is this special role played by premises that leads to the denial of cut.

3.3 Taking Stock

What we have done in this section is build an account of deontic reasoning that fits closely with REO. The basic idea is that you reason well when you reason as if you have good reasons to do what your premises say you ought to do. We showed that this idea provides an explanatory solution to our problem because it vindicates the reasoning in Smith's case and the speeding law case and condemns explosive reasoning while shedding some light on why such reasoning is good and why such reasoning does not satisfy cut.

That said, it is worth highlighting two limitations of this account and the arguments given for it. First, I do not believe that the independent motivation for my account decisively rules out every alternative to it. There are other accounts, such as ⊢Truth, that also could be used to give an account of deontic reasoning that fits with REO. However, as we have seen, ⊢Truth does not solve our problem. So ⊩NonCut has the advantage of solving our problem while still capturing this independent motivation.

A second limitation of my account is that it is built around REO, and REO was designed to allow conflicting obligations. In this respect, my solution does not add anything novel about what explains the existence of conflicting obligations. What my account contributes instead is a theory of deontic reasoning. This has been the central concern of this paper and it is to the distinctive problems about reasoning with conflicting obligations that my account provides an explanatory solution. Of course, part of why my solution is explanatory is that it is built around REO. But in my opinion, it is a virtue of my view that it allows for this kind of tight fit between our account of deontic reasoning and the popular idea in ethics.

For these reasons then, I conclude that my account of deontic reasoning provides an explanatory solution to the problem about reasoning that I introduced at the outset of this paper. We have now accomplished the main task of this paper. We began the paper with a problem about how to reason with conflicting obligations (§0).
We saw that this problem cannot be solved by deontic logic and, more generally, cannot be solved by any account that entails that the reasoning consequence relation is both cut and monotonic (§1). We then took a detour to introduce a particular version of the popular idea in ethics, REO (§2). And in this section we used REO along with two further claims to develop an independently plausible account of deontic reasoning (§3.1) that solves our problem and entails that the reasoning consequence relation is not cut (§3.2). Having established our main results, I want to close the paper by considering some other problems (§4), discussing alternatives to my account (§5), and drawing a moral about the bearing of my account of reasoning on issues in deontic logic (§6).

4. Other Problems

As we have seen from our earlier discussion, there is a close albeit imperfect connection between deontic logic and reasoning with conflicting obligations. Since there are a host of problems about conflicting obligations in deontic logic, it is natural to wonder about the corresponding problems for reasoning with conflicting obligations. While I believe that my account can solve the reasoning analogs of the problems about conflicting obligations from deontic logic that I am aware of, it would not be worthwhile for me to discuss each of these problems here. Instead, in this section I will provide a more abstract perspective on my account that will enable the interested reader to easily apply it to different problems concerning reasoning and verify for herself that it solves these problems.[68]

Footnote 68: In making these remarks, I set aside issues concerning permission and conflicting obligations (see, for example, Brink 1994: 235-6). While these issues can be addressed within a generalization of my system that I develop in Nair ms, that generalization does not follow from anything that is said below. Thanks to Errol Lord for pressing me about permissions.

To get this more abstract perspective, consider the following principle:

The General Reasoning Principle (GP⊩): O(φ₁), O(φ₂), …, O(φₙ) ⊩ O(ψ) if Δ ⊢PL ψ for some Δ such that Δ is a maximally consistent subset of {φ₁, φ₂, …, φₙ}.[69]

GP⊩ succinctly encapsulates the ideas about deontic reasoning that are central to my account. Remember my idea about deontic reasoning says that we reason as if we have good reasons to do what our premises say we ought to do (§3). And recall that good reasons explain obligations by showing that they follow from a maximal consistent subset of such reasons (§2). Putting these together, we get that you may conclude 'it ought to be that ψ' if ψ follows from a maximal consistent subset of the things your premises say that you ought to do. And this is what GP⊩ says.

Footnote 69: Cases where a premise says that a contradiction ought to be the case prevent this from being a biconditional. To strengthen this to a biconditional I would need to drop clause (i) in my definition of ⊩NonCut or add the condition that none of your premises say that it ought to be the case that a contradiction holds.

GP⊩ also makes it easy to see how my account avoids the problem that we began the paper with. To appreciate this, let us consider our original problem in a slightly different light. That problem relies on ODS⊩ and SOC⊩. A natural thought is that we can subsume both of these principles under a more general one:

General Ought Closure (GOC⊩): O(φ₁), O(φ₂), …, O(φₙ) ⊩ O(ψ) if {φ₁, φ₂, …, φₙ} ⊢PL ψ

Evidently, if we accept GOC⊩, we are committed to explosive reasoning because {a, ¬a} ⊢PL b. So the problem that we began the paper with can be thought of as a problem about how to restrict GOC⊩.[70]

Footnote 70: I owe this way of thinking of our original problem to Jeff Horty. Cf. Schotch and Jennings 1981: 160-161.
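For finite premise sets, the contrast between GOC⊩ and GP⊩'s restriction of it is easy to exhibit computationally. The following is a sketch only, reusing the toy helper functions (entails, max_consistent_subsets) and formula abbreviations from the illustration in §3.2.2 above; the function names are my own:

a, not_a, b, a_or_b = 'a', ('not', 'a'), 'b', ('or', 'a', 'b')

def follows_GOC(prejacents, psi):
    # Unrestricted closure: conclude O(psi) whenever the prejacents
    # classically entail psi.
    return entails(prejacents, psi)

def follows_GP(prejacents, psi):
    # GP's restriction: conclude O(psi) only if some maximal consistent
    # subset of the prejacents entails psi.
    return any(entails(m, psi) for m in max_consistent_subsets(prejacents))

print(follows_GOC([a, not_a], b))         # True:  GOC licenses explosion
print(follows_GP([a, not_a], b))          # False: the restriction blocks it
print(follows_GP([not_a, a_or_b], b))     # True:  Smith-style reasoning survives
print(follows_GP([a, not_a], a_or_b))     # (4): True
print(follows_GP([a, not_a, a_or_b], b))  # (3): True -- so, with the False
                                          # verdict on (5) above, cut fails

The last three verdicts replay the derivation of §3.2.3: (3⊩) and (4⊩) hold but (5⊩) does not, which is just the failure of cut.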
What is nice about GP⊩ is that it tells us what restriction my account puts on GOC⊩: it restricts GOC⊩ by only considering what follows from maximal consistent subsets of your premises. And this restriction to maximal consistent subsets seems like the most natural restriction of GOC⊩ that an advocate of conflicting obligations can adopt. What's more, with a little effort it is easy to prove that GP⊩ entails that the reasoning consequence relation has the structural properties that my account says it has (for example, it is not cut and not closed under substitution).[71] And similarly it is not hard to prove that GP⊩ is equivalent to GOC⊩ if your premise set does not contain conflicting obligations.[72]

Footnote 71: We say: R is closed under substitution (in L) just in case if (1) Γ R φ (where Γ is a set of sentences of L and φ is a sentence of L) and (2) Γ* and φ* are the result of uniformly substituting some arbitrary sentence (in L) for an atomic sentence in Γ and φ, then Γ* R φ*. To see that ⊩NonCut is not closed under substitution, we need only use GP⊩ to check that while we accept ODS⊩ we do not accept the following claim: (6⊩) O(¬a), O(a ∨ (a ∧ b)) ⊩ O(a ∧ b). This is the right result. Assuming that O(a ∨ (a ∧ b)) is logically equivalent to O(a) because a is logically equivalent to a ∨ (a ∧ b), (6⊩) is equivalent to (7⊩) O(¬a), O(a) ⊩ O(a ∧ b), which is a kind of explosion result that we should not accept (cf. McNamara 2004: 20, Goble 2009: 468).

Footnote 72: These comments require either one of the strengthenings to a biconditional mentioned in n. 69.

In this way, GP⊩ is a succinct general statement of the main idea behind my account. And since this principle is simple, it will be easy for the interested reader to apply it for herself to various problems concerning reasoning.

5. Alternatives

As we know, my account solves the problem that I began the paper with by defining a non-cut reasoning consequence relation. But our result from §1 suggested that our problem depends on both cut and monotonicity. And since nonmonotonic accounts of reasoning in other domains are familiar, this raises the question of whether there is a nonmonotonic account of deontic reasoning that can lead to an explanatory solution to our problem. So I want to show in this section how to develop a nonmonotonic account of deontic reasoning that is a slight variant of ⊩NonCut.[73]

Footnote 73: What about other formal theories of reasoning that are not logics in the narrow sense? While I cannot discuss the details of these views here, it is worth pointing out that one such view, known as adaptive deontic logic, is strikingly similar to the views developed in this paper and especially this section. Adaptive logic can be thought of as a general theory of 'reasoning as if' (cf. Batens 2008). And adaptive deontic logics are a kind of non-monotonic formalism that has been used to approach our problem (Beirlaen, Straßer, Meheus 2013; Straßer, Meheus, Beirlaen 2012). What's more, this formalism treats the problems discussed here in a similar way. Unfortunately, I must leave it to further work to compare in detail the differences and similarities between my proposal and the interesting proposals made by adaptive logicians.

5.1 ⊩NonMon

The easiest way to think of this nonmonotonic account of deontic reasoning is as a relaxation of our original account of deontic reasoning. Recall that our original account says that for every premise you reason as if you have good reasons to do what that premise says you ought to do. The alternative view says that you only reason as if you have good reasons to do what some privileged subset of your premises say that you ought to do.

To get a sense of why you might do this, consider the following abstract example. You have as premises O(a) and O(a ∨ b). Now O(a ∨ b) is not generally sufficient to ensure the truth of O(a). But we may suppose O(a) is sufficient for the truth of O(a ∨ b). If that is right, it might seem like you should only reason as if you have a good reason to do a. After all, O(a) is sufficient to ensure that O(a ∨ b) holds.
So you really only need to reason as if you have a good reason to do a for O(a) because O(a ∨ b) can "come along for the ride". More generally: when you have some premises, there will be some (perhaps improper) subset of these premises, Δ, such that (i) Δ ensures the truth of the whole premise set and (ii) no proper subset of Δ ensures the truth of the whole premise set—call this a minimal deontically sufficient subset. The idea is that you should only reason as if you have good reasons to do the things in a minimal deontically sufficient subset.

Formally, this idea just amounts to tweaking ⊩NonCut so that it only applies to minimal deontically sufficient subsets:

O(φ₁), O(φ₂), …, O(φₙ) ⊩NonMon O(ψ) iff, for some minimal deontically sufficient subset Δ of {O(φ₁), O(φ₂), …, O(φₙ)} and for every ℜ such that
(i) ℜ ⊨Horty χ for all χ ∈ Δ and
(ii) {!(x) | O(x) ∈ Δ} ⊆ ℜ,
ℜ ⊨Horty O(ψ).[74]

As is easy to see, this account is exactly the same as ⊩NonCut except that it says to reason as if you have good reasons to do what is in each minimal deontically sufficient subset. Of course, I have not presented a formal definition of a minimal deontically sufficient subset here. While this can be done, the intuitive idea of such a set is sufficient for our purposes so I will not bother to define the notion in the main text.[75]

Though I will leave the proof of this to the reader who is interested in working through all of the formal details, as promised this account solves our problem because it denies monotonicity. Specifically, while this account like ⊩NonCut accepts (1⊩), (2⊩), and (4⊩), it rejects:

(3⊩) O(a), O(¬a), O(a ∨ b) ⊩ O(b)

Footnote 74: I have chosen to write 'some minimal deontically sufficient subset' rather than writing 'every minimal …' only because it is simplest to introduce the account in this way. All of the points made in the section also apply to the account that we get by writing 'every' rather than 'some'. There are, however, two related problems that support adopting the 'every' proposal over the 'some' proposal (these remarks rely on the definition given in n. 75). First, the 'some' proposal leads to an intuitively strange account of when we may conclude O(a ∧ b) from a premise set that contains O(a) and O(b). We would have: O(a), O(b) ⊩ O(a ∧ b). And we would lack: O(¬a ∧ b), O(a ∧ ¬b), O(a), O(b) ⊩ O(a ∧ b). But nonetheless, if we add O(¬b) to the premises we would strangely get O(a ∧ b). That is, we have the following: (*) O(¬b), O(¬a ∧ b), O(a ∧ ¬b), O(a), O(b) ⊩ O(a ∧ b). This oddity is related to a second strange result. While we would not have (3⊩) O(a), O(¬a), O(a ∨ b) ⊩ O(b), we would have (**) O(¬b), O(a), O(¬a), O(a ∨ b) ⊩ O(b). That is, if we add O(¬b) to our premises, we can suddenly conclude O(b). Though it takes work to show this, we only get (*) and (**) because the conclusions follow from one but not all of the minimal deontically sufficient subsets. So the 'every' proposal avoids these problems. I thank the anonymous referee who brought these problems to my attention.

Footnote 75: Here is how to do it: Δ is a deontically sufficient subset of Γ iff (1) Δ ⊆ Γ and (2) Δ ⊢Truth φ for all φ ∈ Γ. Δ is a minimal deontically sufficient subset of Γ iff (1) Δ is a deontically sufficient subset of Γ and (2) there is no Σ such that Σ ⊂ Δ and Σ is a deontically sufficient subset of Γ.
Since (3⊩) follows from (1⊩) assuming monotonicity, this shows that ⊩NonMon is not monotonic. Thus, ⊩NonCut and ⊩NonMon alike solve the problem that we have been considering.

5.2 Two Reasons to Prefer ⊩NonCut

That said, I think that there are two reasons to prefer ⊩NonCut to ⊩NonMon. The first reason concerns (3⊩):

(3⊩) O(a), O(¬a), O(a ∨ b) ⊩ O(b)

(3⊩) is accepted by ⊩NonCut but rejected by ⊩NonMon. My first reason for preferring ⊩NonCut is that reasoning in accordance with (3⊩) is good reasoning. To see why I believe that this is good reasoning, consider the following variant of Smith's case. As before, Smith is a citizen of a just country. The laws of this country require Smith to either fight or perform alternative public service. So Smith ought to fight or serve, O(f ∨ s). Also as before, Smith is a member of a pacifist religion. So Smith ought not to fight, O(¬f). Now let us add that Smith comes from a family with a strong tradition of military service. So out of filial duty, Smith ought to fight, O(f). In this case, it seems to me that we may conclude that Smith ought to serve, O(s). If Smith is deciding what to do, one of the steps in one of his (two incompatible) plans should be serving. Since this reasoning from O(f), O(¬f), O(f ∨ s) to O(s) is an instance of (3⊩), I believe that we should accept (3⊩). This is my first reason for preferring ⊩NonCut.

My second reason to prefer ⊩NonCut is that it gives a more elegant account of deontic reasoning. To see this, consider the question of how we reason with some arbitrary premise, O(a). The answer according to ⊩NonCut is that we reason as if we have a good reason to do a. The answer according to ⊩NonMon, however, is not straightforward. According to ⊩NonMon, a premise first plays a holistic role in determining a minimal deontically sufficient subset. Then if the premise is not in a minimal deontically sufficient subset, it plays no further role. If the premise, O(a), is in a minimal deontically sufficient subset, then you reason as if you have a good reason to do a. Obviously this is more complicated than the answer provided by ⊩NonCut. And one vivid way to see the undesirable results of this complication is to consider the restriction ⊩NonMon places on GOC⊩:

O(φ₁), O(φ₂), …, O(φₙ) ⊩ O(ψ) if, for some Δ such that Δ is a minimal deontically sufficient subset of {O(φ₁), O(φ₂), …, O(φₙ)} and for some Σ such that Σ is a maximal consistent subset of {x | O(x) ∈ Δ}, Σ ⊢PL ψ.

Evidently, this restriction lacks GP⊩'s straightforward elegance.
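The two-stage character of this restriction can also be made vivid computationally. Here is a continuation of the earlier toy sketch; it again reuses ought, entails, max_consistent_subsets, and the formula abbreviations defined above, and it checks deontic sufficiency by brute force over reason sets drawn from a finite pool—only a stand-in of my own for the official quantification over every ℜ, so this is an illustration under those assumptions, not the official definition:

def truth_suff(delta, prejacents, pool):
    # Toy check that delta ensures the truth of every O(f) in the premise
    # set: every reason set drawn from the finite pool that makes each
    # member of delta obligatory must make every premise obligatory.
    for k in range(len(pool) + 1):
        for reasons in combinations(pool, k):
            if all(ought(list(reasons), d) for d in delta) and \
               not all(ought(list(reasons), f) for f in prejacents):
                return False
    return True

def min_deont_sufficient(prejacents, pool):
    suff = [list(c) for k in range(len(prejacents) + 1)
            for c in combinations(prejacents, k)
            if truth_suff(list(c), prejacents, pool)]
    return [s for s in suff if not any(set(t) < set(s) for t in suff)]

def follows_NonMon(prejacents, psi, pool):
    # Stage one: compute the minimal deontically sufficient subsets.
    # Stage two: apply the maximal-consistent-subset test to one of them.
    return any(ought(d, psi) for d in min_deont_sufficient(prejacents, pool))

pool = [a, not_a, a_or_b, b]
print(follows_NonMon([a, not_a, a_or_b], b, pool))  # (3): False under NonMon
print(follows_NonMon([a, not_a], a_or_b, pool))     # (4): True

In the first check, {a, ¬a} comes out as the sole minimal deontically sufficient subset (O(a ∨ b) "comes along for the ride"), and neither of its maximal consistent subsets entails b—which is just the rejection of (3⊩). The detour through sufficient subsets in the code is exactly the complication the main text describes.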
So this is my second reason to prefer ⊩NonCut. For these two reasons then, ⊩NonCut may be the better account of deontic reasoning.[76] Nonetheless, we should remember that both of these accounts of deontic reasoning provide explanatory solutions to our problems concerning reasoning with conflicting obligations.

Footnote 76: Another potential problem with ⊩NonMon is that it does not have the property of cautious monotonicity: Cautious Monotonicity: if Γ ⊩ φ and Γ ⊩ ψ, then Γ, φ ⊩ ψ. To see that ⊩NonMon is not cautiously monotonic, the interested reader may work through an example where Γ is {O(a), O(¬a), O(b)}, φ is O(a ∧ b), and ψ is O(¬a ∧ b). Thus both ⊩NonCut and ⊩NonMon fail to be cumulative relations (relations that are both cut and cautiously monotonic). In this respect, they are both subject to the worries discussed in n. 51. I thank Lou Goble for asking me whether ⊩NonMon is cautiously monotonic.

6. Conclusion

Interestingly, having such an account of reasoning with conflicting obligations not only solves the problems that we set out to solve in this paper but also has consequences for issues in deontic logic. As I hinted at earlier, the rich literature in deontic logic has been concerned with developing logics that validate many inferences and allow for conflicting obligations without leading to inconsistency or explosion. It is important to know that since at least Chellas 1974 there have been logics that allow conflicting obligations without leading to inconsistency and explosion. The problem that continues to occupy the attention of theorists is the problem of developing a logic that not only does this but also validates lots of inferences. In particular, deontic logicians have tried to develop logics that have enough valid inferences to explain the reasoning in cases like Smith's case and the speeding law case.

My account calls into doubt the idea that we have good reason to search for a logic that validates inferences corresponding to those cases. Because my account provides a direct explanation of the reasoning in these cases, there is little reason for us to insist on using the resources of deontic logic to explain them. This means that it is mistaken to think that we should be unsatisfied with a deontic logic that does not validate enough inferences to explain these cases.[77] The upshot is that the account of reasoning with conflicting obligations that I have provided alleviates the need to move beyond the logics of conflicting obligations that we have had since the mid-1970s. Thus, one of the consequences of the account developed here is that we can be satisfied with these relatively weak logics.

Footnote 77: So I disagree with Lou Goble (2005) who worries about using a consequence relation that is not a logical consequence relation to account for Smith's case: "It does not give an account of the ordinary validity of the original argument regarding Smith's service. Instead it offers a weaker substitute, perhaps to explain the appearance of (ordinary) validity" (407). My view is that in the first instance our firmest judgment about Smith's case is that it is a piece of good reasoning. As I see it, deontic logicians have taken this as evidence that the relevant inference is valid. But since my account directly explains why it is good reasoning, there is no reason to try to account for this in terms of validity.

Let us take stock: What we saw in this paper is that those who believe that there are conflicting
obligations face a problem about understanding reasoning that cannot be adequately resolved by appeal to deontic logic. We then developed an account of reasoning with conflicting obligations that provides an explanatory solution to this problem. And we saw that this account looks to be a promising general theory of deontic reasoning. Ultimately, it is this ability to provide a general explanatory solution to our problems concerning deontic reasoning that I think is the main piece of evidence in favor of my account. And while I admit more work remains to be done to provide a fully worked out picture of deontic reasoning, the work I have done here is enough to equip the advocate of conflicting obligations with an account that they can use to solve the main problems about reasoning for their approach. More generally, I hope to have shown that there are distinctive problems concerning reasoning with conflicting obligations and that addressing these problems requires us to think hard about issues in both ethics and deontic logic.[78]

Footnote 78: Some of the ideas in this paper have been presented at the 2011 Central APA session on deontic logic, the USC Speculative Society, Formal Ethics 2012, and the USC Deontic Modals Workshop. I thank the organizers, audience, and participants of these conferences. For helpful comments and discussion on the ideas of this paper, I thank Justin Dallman, Steve Finlay, Ben Lennertz, Errol Lord, Matt Lutz, Ryan Millsap, Kenny Pearce, Doug Portmore, Indrek Reiland, Jacob Ross, Barry Schein, Johannes Schmitt, Sam Shpall, Justin Snedegar, Scott Soames, Gabriel Uzquiano-Cruz, Aness Webster, Ralph Wedgwood, and especially Lou Goble and Jeff Horty. I also thank the anonymous referees at Mind as well as Thomas Baldwin for detailed feedback. I would most of all like to thank Mark Schroeder who has provided me with invaluable advice and criticism on every issue at every stage of this project. Finally, I thank the USC Provost's PhD Fellowship and the Russell Fellowship for support.

Chapter 3
Must Good Reasoning Be Cumulative Transitive?

0. Introduction

One way that we form new attitudes and revise and reaffirm our old attitudes is by reasoning from other attitudes that we have. And to a first approximation, a piece of such reasoning is good when the attitudes that result from it will be rational if the attitudes that it began with are rational. Good reasoning so understood has been studied from a variety of different perspectives by philosophers, logicians, and computer scientists.

Interestingly, despite their different perspectives, almost all of these theorists have converged on a particular idea about the structure of good reasoning. Roughly, the idea is that if two pieces of reasoning are good on their own, then a longer piece of reasoning that consists of performing these two pieces of reasoning back-to-back must also be good. We will see later that this idea can be made precise as the claim that good reasoning must have a structural property known as cumulative transitivity or, for short, cut. But even with this rough gloss, we are already in a position to appreciate the role that this convergence plays in theorizing.[79]

Consider the shape that puzzles about reasoning typically take.[80] Generally, puzzles about reasoning isolate a number of prima facie plausible pieces of reasoning and show that when combined in a certain way, these pieces of reasoning lead to unacceptable results.
This is then taken to be a reductio of at least one of the prima facie plausible pieces of reasoning. This illustrates how theorizing about good reasoning is standardly constrained by the idea that I identified: it is because of this convergence that these theorists approach puzzles about reasoning as though the reasoning to the problematic conclusion has to be good if each step in that reasoning is individually good.

In this chapter, I wish to argue against this orthodoxy. In doing so, I will not give a counterexample to this idea or argue for a solution to some puzzle that relies on giving it up. To do that, I would have to look at each puzzle in detail and argue that the best solution to that puzzle would involve a failure of good reasoning to satisfy cut over the puzzling inferences. While I have done this elsewhere,[81] I wish to adopt a more general perspective in this paper that does not focus on any particular puzzle.

Footnote 79: As far as I know, this convergence has only been noticed in formal work (Gabbay 1985; Kraus, Lehmann, and Magidor 1990; and Makinson 1994 are the seminal discussions). But as I explain in the next paragraph, it has been influential even in informal work despite going unnoticed.

Footnote 80: Puzzles about reasoning of the kind that I have in mind may include the bootstrapping problem (see Cohen 2002), the lottery paradox (see Douven and Williamson 2006), the surprise test paradox (see Kripke 2011), the problem of conflicting obligations in ethics (see Goble 2009), and Prior's objection to the inferential theory of logical constants (see Prior 1960).

Footnote 81: See chapters two and four.

I adopt this general perspective because to fully evaluate whether a proposed solution to a puzzle is satisfying, we must not only focus on whether the proposal adequately captures our considered judgments about the particular pieces of reasoning involved in the puzzle. We must also look at how it fits with our broader commitments on the nature and structure of good reasoning: a proposed solution that captures our judgments about particular pieces of reasoning only by doing considerable violence to our broader commitments about the nature and structure of good reasoning is not a satisfying solution. For this reason, I will focus on high-level issues that suggest that any solution that entails that good reasoning fails to satisfy cut over a puzzling inference does considerable violence to our broader commitments about the structure of good reasoning because one such commitment is (or in any case, should be) that good reasoning must satisfy cut.

Now as often happens with orthodox views, it is actually hard to find explicit arguments in favor of the idea that good reasoning must satisfy cut. Instead, the idea typically operates as an implicit background commitment that is revealed by how theorists approach puzzles about reasoning and as an emergent pattern among formal theories of reasoning. Nonetheless, we can extract two kinds of considerations that support thinking good reasoning must satisfy cut. First, theorists have articulated certain powerful intuitive thoughts behind the idea that good reasoning must satisfy cut. I will develop these thoughts into three compelling arguments for the conclusion that good reasoning must satisfy cut and then explain why these arguments are unsound (§2-4). Second, the idea that good reasoning must satisfy cut can look attractive simply because it is hard to see how exactly good reasoning might fail to satisfy cut.
To respond to this worry, I develop an informal model of reasoning that is informed by work in traditional epistemology and show that it gives us a simple picture of how good reasoning might not satisfy cut (§5). But before I dive into the details of my arguments, it will help to begin by more carefully introducing our topic and bringing out its importance (§1).

1. Our Question

Let's begin then by clarifying what good reasoning is and how to think about its structure.

1.1 Good Reasoning from a Pre-theoretical Perspective

We have a pre-theoretical grip on what it is for a piece of reasoning to be good. We can see this by considering certain ordinary cases of belief formation and certain puzzles.

Consider then three examples of good reasoning. First suppose that based on your general knowledge about rain, you believe that if it is raining, then the streets will be wet and based on checking the weather report, you believe that it is raining. And suppose that based on these two beliefs, you form the new belief that the streets will be wet. This transition from the two beliefs to the formation of a new belief that the streets are wet is good reasoning in at least one reasonable and pre-theoretical sense of 'good reasoning'. Second suppose that you believe that normally, birds fly and you believe that Tweety is a bird. And suppose that based on these two beliefs, you form the new belief that Tweety flies. This transition is also good reasoning.[82] Third suppose that you believe of each of a large sample of swans that it is white. And suppose that based on these beliefs, you form the belief that all swans are white. This too is good reasoning.

Footnote 82: This example can be found in Reiter 1980 and elsewhere.

Each of these examples illustrates our pre-theoretical grip on good reasoning. And we can also illustrate this pre-theoretical grip by considering certain puzzles. As I said, my goal is not to solve any particular puzzle, but looking at one puzzle will help to illustrate the issue that I wish to discuss. So let's consider the so-called bootstrapping problem in epistemology.[83] The basic shape of the problem is that certain epistemological theories accept each of two inferences and the problem arises when these inferences are performed back-to-back in a certain way. The problem comes up slightly differently for different theories because they accept slightly different inferences. I will illustrate the problem by focusing on the following two inferences: The first inference is the inference from 'o looks red' to 'o is red' and similarly for other colors. The second inference is the inference from a whole track record of claims of the form 'o looks red and is red', 'o* looks blue and is blue', etc. to the claim 'my color vision is reliable'.

Footnote 83: See Cohen 2002.

The puzzle has us imagine an agent who does not start out believing that her color vision is reliable. This agent has her friend set up a slide show and the friend does not tell her what color the slides are. She sits in front of the screen and the slide show begins. Based on her visual experience she comes to believe the slide looks red. By the first inference rule, she may conclude that the slide is red. The next slide comes up and she comes to believe that the slide looks blue. Based on the first inference rule she may conclude the slide is blue. The slide show goes on and by repeating this process, she comes to believe a whole track record of claims of the form 'the slide looks red and is red', 'the slide looks blue and is blue'.
So using the second inference rule, she may conclude that her color vision is reliable. But intuitively, it is not good reasoning to come to believe that your color vision is reliable in this way. So in this puzzle, there is tension between the theory's claims about what is good reasoning and our judgments that employ our pre-theoretical grip on what good reasoning is. And the idea that good reasoning must satisfy cut tells us that we can only resolve the tension by rejecting our judgments or rejecting the theory committed to each inference rule. This can look like an unpalatable choice. If however we accept that good reasoning might not satisfy cut, we have another option. We could consider the idea that each of these inferences is good on its own but that they cannot be performed back-to-back and so accept the inferences the theory is committed to as well as our judgments.

This is of course not to say that this is the correct solution to the bootstrapping problem or any other puzzle. To argue for that, one must do more than note the possibility that good reasoning might not satisfy cut. One must look at the details of the puzzle and see what the alternative solutions are, what the costs of rejecting cut are, and argue that rejecting cut is the best solution. This is not the task of this chapter. Rather, the task of the present chapter is to argue against the idea that good reasoning must satisfy cut. This will thereby open the door for rejecting cut as a possible solution to these puzzles about reasoning. Arguing that it is the actual solution to any such problem is a task that can be accomplished without embarrassment only after detailed consideration of each problem.

1.2 A Gloss on Good Reasoning

The focus of this chapter is this notion of good reasoning that we have a pre-theoretical grasp of. Though it is an interesting question what the best theory of good reasoning is, this is not a question that I will answer in this chapter. I do not wish to offer a theory here because which theory of good reasoning is correct partially depends on what the structure of good reasoning is. And since I wish to investigate this question in this chapter, providing an account of reasoning at the outset risks forcing answers to questions about structure that we would like to leave open at this stage.

That said, having at least an unofficial gloss on what good reasoning is may help us to better fix ideas. As I am thinking of it, we reason from some attitudes to the formation of new attitudes or the revision or reaffirmation of old attitudes. So I take it that some (perhaps improper) subset of your attitudes constitute the starting points of your reasoning and reasoning is a mental process that returns attitudes as outputs when given these starting points as input. In all of our examples and in the remainder of the chapter, we will restrict our discussion to reasoning with beliefs (i.e., reasoning that takes beliefs as inputs and returns beliefs as outputs).

One way to elaborate on why the reasoning that we looked at earlier is good is as follows: The beliefs that we said that the agent has before the reasoning constitute all and only her (relevant) starting points. The belief that results after the reasoning is the output of a process that takes all and only these starting points as input. And this is good reasoning in the sense that the output of the process is a rationally permissible attitude given that all of the inputs are rationally permissible.[84]
Footnote 84: So good reasoning is a process of reasoning that returns permissible beliefs when given a collection of beliefs such that (a) each member of the collection is permissible and (b) the collection consists of all and only the beliefs that constitute your starting points. The inclusion of (b) ensures our gloss is neutral on whether good reasoning satisfies cut. So one of the questions that we will explore in detail is whether (b) is a vacuous or otherwise unimportant condition on what it takes for a process of reasoning to be good. Cf. Makinson and van der Torre 2000.

As I am thinking of things, we have a pre-theoretical grip on what rational belief is as well as what some earmarks of rational belief are: Rational beliefs somehow make sense from the subject's point of view and are immune from a distinctive kind of criticism or blame. True belief is neither necessary nor sufficient for rational belief.[85]

Footnote 85: On one reading, Williamson 2001 claims (fully) rational belief is knowledge so truth is a necessary condition for rational belief. With some adjustments, the discussion in this paper may apply to even this view of rational belief. To a first approximation, the idea would be to focus on cases of reasoning that occur in environmental conditions that Williamson would call "the good case".

There are of course many different theories of what rational belief consists in (e.g., rational belief is belief that accords with the evidence, rational belief is belief formed by a reliable process). And some may believe there is not even a single notion of rational belief at play.[86] I will not endorse any particular theory of rational belief in this chapter. If it turns out that there are multiple notions of rational belief at play, I believe that the discussion that follows is neutral among these different notions.

Footnote 86: See Cohen ms.

So this is our gloss on what good reasoning is and it is only intended as an unofficial heuristic that may aid in thinking about our topic. That said, I will insist on one official theoretical assumption. The assumption is that we are to think of belief in purely qualitative terms. This means we are to think of belief as distinct from credences or partial beliefs.[87] This assumption is important because the theories that I wish to discuss—the theories that converge on the idea that good reasoning satisfies cut—only concern beliefs understood in this purely qualitative way. So in order to address this convergence directly I will accept their picture of the nature of belief in what follows. What's more, the issues that we are discussing look very different if we reject this assumption because theories that reject the assumption tend to also reject the idea that good reasoning satisfies cut. In the appendix, I provide a preliminary discussion of how cut fails according to these approaches as well as a comparison to the distinctive picture of how cut might fail developed in this chapter that embraces the assumption that these alternative pictures reject.

Footnote 87: Recent explicit arguments for this assumption include Buchak forthcoming and Ross and Schroeder 2014. Those who reject this assumption may be divided into at least four camps. First there are classic Lockean credal reductivists whose ideas I discuss in the appendix. Second there are those who understand belief in terms of credence who are not classic Lockeans such as Leitgeb 2013, Lin 2013, van Fraassen 1995, Weatherson 2005, and Wedgwood 2012. It turns out to be a complicated question whether these theories entail good reasoning satisfies cut and how they compare to the theory developed here. I hope to return to considering this issue in future work. Third there are eliminativists such as Jeffrey 1970 whose work has no obvious interaction with the ideas considered here. And fourth there are those who reduce credence to belief such as Easwaran ms. These theorists may accept the arguments of this paper but it is an interesting question that deserves further exploration whether this will have any consequences for the nature of credences.
1.3 How to Frame Questions about Structure

Now that we have an understanding of what good reasoning is, the next thing to do is introduce a more precise way of thinking about its structure. As it turns out, there are a variety of formal theories of reasoning. And these theories often model belief in very different ways. This makes it difficult to compare the different predictions and features that these theories have. That said, logicians and computer scientists have developed at least one way to compare these theories that has proven fruitful even though it abstracts from many of the details of each view. What I wish to do now is introduce this way of comparing theories as a way of making our question more precise.

To get a feel for the model, it helps to begin by taking a step back from reasoning and turning to consider logic. Logic (at least in my stipulative use of this term) is concerned with valid arguments where a valid argument is one in which the truth of the premises guarantees the truth of the conclusion (if we like, we can add in virtue of form alone).[88] Logical consequence is a relation that holds between a set of premises and a conclusion just in case the argument consisting of those premises and that conclusion is valid.

Footnote 88: There are of course other reasonable ways to use 'logic'. John Burgess notes one such usage: "Among the technically oriented 'logic' no longer means a theory about which forms of argument are valid, but rather means any formalism, regardless of intended application, that resembles a logic in the traditional sense enough to allow it to be usefully studied by similar methods." (2009: viii) The sense of 'logic' Burgess identifies is wider than my sense. Many of the formal theories of reasoning that I will discuss count as logics in Burgess's sense, but not mine. I have no principled objection to the wider usage, but adopt the narrower one in order to cleanly distinguish logic from a theory of reasoning.

In the twentieth century, logicians discovered a number of interesting structural properties of the logical consequence relation. If we use uppercase Greek letters (Γ, Δ, Σ, etc.) for sets of sentences, lowercase Greek letters (φ, ψ, χ, etc.) for sentences, and ⊢ for the logical consequence relation, we can write Γ ⊢ φ for 'φ is a logical consequence of Γ'. And we can now succinctly state three structural properties that will be important for our purposes:

⊢ satisfies reflexivity in the sense that Γ ⊢ φ for all φ ∈ Γ
⊢ satisfies monotonicity in the sense that if Γ ⊢ φ and Γ ⊆ Δ, then Δ ⊢ φ
⊢ satisfies cut in the sense that if Γ ⊢ φ for all φ ∈ Δ and Γ ∪ Δ ⊢ ψ, then Γ ⊢ ψ

Spelling out in detail what each of these claims means makes it clear that logical consequence must have these properties.[89]
Footnote 89: This is not to say that this is uncontroversial. For example, Williams 2008 and 2011 defends a view about vagueness that entails that logical consequence fails to satisfy cut and Ripley 2013 defends a view of truth that entails that logical consequence fails to satisfy cut.

Logical consequence must be reflexive because the truth of all of the sentences in Γ must guarantee the truth of each such sentence. Logical consequence must be monotonic because if the truth of the sentences in Γ guarantees the truth of φ, then the truth of the sentences in Γ together with any other sentences must also guarantee the truth of φ. Finally, logical consequence must be cumulative transitive because if the truth of the sentences in Γ guarantees the truth of each sentence in Δ and the truth of the sentences in Γ and Δ together guarantees the truth of ψ, then the truth of the sentences in Γ must also guarantee the truth of ψ.

Now one question that we can ask is to what extent good reasoning shares these structural properties. In asking this question, we must keep in mind that while logical consequence is normally thought of as a relation among sentences, good reasoning is a relation that holds between mental states. And as I said, different formal theories have had different ways of representing what the mental states of agents are. But it has proven fruitful in making comparisons between different formal theories to abstract from these differences by defining, for each theory, what we might call a "good reasoning consequence relation" on analogy to a logical consequence relation.

So let us write Γ ⊩s,t φ to represent the claim that for an agent s at a time t, the transition from belief in each proposition expressed by the sentences in Γ to the belief in the proposition expressed by φ is a piece of good reasoning. For simplicity, let us leave implicit reference to agents and times and just write Γ ⊩ φ. And let us, for simplicity, succinctly put this as the claim that it is good reasoning to conclude φ from Γ. Finally, let us call what is to the left of ⊩ the premises and what is to the right the conclusion.[90]

Footnote 90: Indeed, I will abusively use 'premises' at different times to refer to the set of sentences on the left, the set of propositions expressed by those sentences, and the set of beliefs in those propositions. I trust context will make clear my meaning. Similar comments apply to my use of 'conclusion'.

Using this technique, logicians and computer scientists have been able to draw a number of interesting comparisons between formal theories. And one result of this comparative work is that despite being designed to model very different kinds of reasoning (e.g., generic reasoning, legal reasoning, causal reasoning) all these formal theories (save the exceptions discussed in the appendix) converge on the result that the good reasoning relation satisfies cut where we say:

⊩ satisfies cut just in case if Γ ⊩ φ for all φ ∈ Δ and Γ ∪ Δ ⊩ ψ, then Γ ⊩ ψ

The precise statement of our question, then, is whether good reasoning must satisfy cut, and the answer that I will be pursuing is that good reasoning might not satisfy cut.[91]

Footnote 91: Here is a sampling of the diverse array of formal results concerning this convergence. There are results concerning the relationships between properties of proofs and structural properties of good reasoning (Gabbay 1985). There are results that categorize different theories according to which structural properties they say good reasoning has (Makinson 1987 and 1994). There are results concerning whether and how different theories of good reasoning can be represented in a certain kind of preferential semantics (see Kraus, Lehmann, and Magidor 1990; Makinson 1987: 8-10 and 1994: §3.4; and Shoham 1987 for the seminal discussions and for more recent work, see Arieli and Avron 2000; Bezzazi, Makinson, and Pérez 1997; and Schlechta 2007). And there are results concerning a correspondence between structural properties of good reasoning and different ways of choosing models where choice is analyzed with tools from social choice theory (see Rott 2001: ch. 6-8).
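For finite toy relations, questions like this can even be checked mechanically. The following Python sketch is my own illustration of the framing just introduced—the encoding of a reasoning relation as a set of (premise set, conclusion) pairs and the function names are assumptions, not anything from the formal literature cited above:

from itertools import combinations

def subsets(xs):
    xs = list(xs)
    return [frozenset(c) for k in range(len(xs) + 1) for c in combinations(xs, k)]

def satisfies_monotonicity(rel, sentences):
    # rel: a finite relation given as a set of pairs
    # (frozenset_of_premise_sentences, conclusion_sentence).
    return all((delta, c) in rel
               for (gamma, c) in rel
               for delta in subsets(sentences) if gamma <= delta)

def satisfies_cut(rel, sentences):
    # Cut: if gamma yields every member of delta, and gamma together with
    # delta yields psi, then gamma alone must yield psi.
    for gamma in subsets(sentences):
        derived = {c for (g, c) in rel if g == gamma}
        for delta in subsets(derived):
            for (_, psi) in rel:
                if (gamma | delta, psi) in rel and (gamma, psi) not in rel:
                    return False
    return True

toy = {(frozenset({'p'}), 'q'), (frozenset({'p', 'q'}), 'r')}
print(satisfies_monotonicity(toy, {'p', 'q', 'r'}))  # False
print(satisfies_cut(toy, {'p', 'q', 'r'}))           # False: the steps don't chain
print(satisfies_cut(toy | {(frozenset({'p'}), 'r')}, {'p', 'q', 'r'}))  # True

The toy relation licenses concluding q from p and concluding r from p together with q, but not concluding r from p alone—exactly the back-to-back pattern whose status is at issue in this chapter.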
So another way of thinking of our question is as asking whether this convergence is just an accident resulting from the range of cases considered by theorists or whether it reflects a deeper truth about the nature of reasoning.[92]

[92] Goble 2013: §6.3 appears to take the view that it reflects a deep truth and criticizes Horty 1993 and 1997 and my chapter two on these grounds.

1.4 How to Argue about Structure

Now that we understand our question about structure, how can we argue about structure at the level of abstraction that I am interested in? The best way to see how to do this is not to consider cut directly but instead to begin by considering monotonicity. In particular, considering monotonicity will allow us to see that good reasoning does not have the same structure as logical consequence, because good reasoning is not monotonic.

Return to the example involving Tweety. There we said that it is good reasoning to conclude that Tweety flies based on the belief that Tweety is a bird and the belief that normally, birds fly. If we let Γ = {'Tweety is a bird', 'Normally, birds fly'} and φ be 'Tweety flies', we can write this as Γ ⊩ φ. But now notice that it would not be good reasoning to conclude that Tweety flies based on the larger belief set consisting of the belief that Tweety is a bird, the belief that normally, birds fly, the belief that Tweety is a penguin, and the belief that normally, penguins don't fly. So we have it that Γ ⊆ Δ where Δ = {'Tweety is a bird', 'Normally, birds fly', 'Tweety is a penguin', 'Normally, penguins don't fly'} and Δ ⊮ φ.

More generally, good reasoning does not satisfy monotonicity because some good reasoning is ampliative in the sense that it can be good reasoning to draw conclusions that are not guaranteed to be true by our premises but are only likely to be true or in some other way reasonable given our premises.[93]

[93] Two clarifications are in order. First, in saying this, I should not be taken as implicitly assuming that good reasoning is supraclassical in the sense that if Γ ⊢ φ, then Γ ⊩ φ. I am officially neutral about this question (see Harman 1986 for arguments that good reasoning is not supraclassical). Second, good reasoning being ampliative is compatible with there being certain domains (e.g., mathematics) in which every piece of good reasoning corresponds to a valid inference.
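To make the Tweety case concrete, here is a toy implementation of a defeasible relation for this vocabulary (my own illustration, not one of the formal theories under discussion):

    BIRD, BFLY = 'Tweety is a bird', 'Normally, birds fly'
    PENG, PFLY = 'Tweety is a penguin', "Normally, penguins don't fly"
    TF = 'Tweety flies'

    def concludes(premises, conclusion):
        # The single defeasible inference: the bird default licenses 'Tweety flies'
        # unless the more specific penguin information is among the premises.
        if conclusion == TF and TF not in premises:
            return (BIRD in premises and BFLY in premises
                    and not (PENG in premises and PFLY in premises))
        return conclusion in premises  # everything else is licensed only by membership

    G = {BIRD, BFLY}
    D = G | {PENG, PFLY}
    assert concludes(G, TF)      # Gamma licenses 'Tweety flies'
    assert not concludes(D, TF)  # Delta does not, although Gamma is a subset of Delta

Enlarging the premise set destroys the conclusion, which is exactly the failure of monotonicity just described.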
Reasoning with 'normally'-claims is not the only kind of reasoning that is ampliative in this sense. In fact, most of the everyday common sense reasoning that we do, as well as scientific reasoning (e.g., induction and abduction), is ampliative.[94]

[94] Some philosophers are skeptical about ampliative inference (perhaps based on finding the riddles of induction (see Hume 1739: 1.3.6 and Goodman 1955: ch. 3) intractable). While I agree that there are interesting philosophical problems concerning how ampliative reasoning could be sensible, I adopt a firmly anti-skeptical frame of mind. Skeptics may take this as an undefended assumption of this paper.

The fact that good reasoning is ampliative provides us with the crucial premise in an argument that shows that good reasoning fails to satisfy monotonicity. For take a piece of good ampliative reasoning from Γ to φ. Since Γ does not guarantee the truth of φ, it is possible to (consistently) add new information to Γ that makes φ false, likely to be false, or otherwise unreasonable. But it would not be sensible to say that reasoning from Γ together with this new information to φ is good reasoning, because φ is not likely to be true or otherwise reasonable given this larger set of premises.[95] For this reason, good reasoning is not monotonic. The question that we will pursue in this paper is whether we can isolate some high-level property of good reasoning, like the property of ampliativity, that entails that good reasoning must be cumulative transitive.

[95] This is not a formal point. There are formal systems whose consequence relations are ampliative and monotonic (see Makinson 2005's paraclassical logics). The point is the philosophical one that good reasoning that is ampliative must be non-monotonic.

2. The Argument from Brute Intuition

Before I turn to the best arguments in favor of thinking that good reasoning must satisfy cut, I wish to consider a less sophisticated argument for that conclusion. The argument seeks to settle the issue on brute intuition alone. The idea is that it is simply intuitively obvious that good reasoning must satisfy cut. While this argument is a particularly blunt one, it is a reaction that I often get when I first tell people about the question that I am interested in, so it will be helpful to discuss it. And there is some merit to this reaction. After all, we do seemingly unproblematically engage in back-to-back reasoning, and even I find the idea that good reasoning satisfies cut intuitively plausible when I reflect on it. So the first argument that I wish to consider says that it is intuitively obvious that good reasoning must satisfy cut.

My response to this argument is that we should not rest an argument about the high-level properties of good reasoning on brute intuition like this. This response does not come from general skepticism about resting arguments on brute intuition. Rather, I will show that in the case of high-level structural properties of good reasoning in particular, our intuitions lead us astray. To see this, let me begin by reporting that many people who find cut intuitively obvious also find reflexivity and a close cousin of cut, transitivity, intuitively obvious:

⊩ is reflexive just in case Γ ⊩ φ for all φ ∈ Γ

⊩ is transitive just in case if Γ ⊩ ψ for all ψ ∈ Δ and Δ ⊩ φ, then Γ ⊩ φ

The trouble is that good reasoning cannot satisfy both reflexivity and transitivity, because these two together entail monotonicity and we know good reasoning is not monotonic.
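The derivation of this entailment is given in the text just below. As a supplementary check of my own, the entailment can also be confirmed exhaustively for a miniature language: among all 256 relations between premise sets and sentences built from a two-sentence vocabulary, every reflexive and transitive one turns out to be monotonic.

    from itertools import product

    SENTENCES = ('a', 'b')
    PREMISE_SETS = [frozenset(s) for s in ((), ('a',), ('b',), ('a', 'b'))]
    PAIRS = [(g, p) for g in PREMISE_SETS for p in SENTENCES]

    def reflexive(rel):
        return all((g, p) in rel for g in PREMISE_SETS for p in g)

    def transitive(rel):
        # if Gamma |~ psi for all psi in Delta and Delta |~ phi, then Gamma |~ phi
        return all((g, phi) in rel
                   for g in PREMISE_SETS for d in PREMISE_SETS for phi in SENTENCES
                   if all((g, psi) in rel for psi in d) and (d, phi) in rel)

    def monotonic(rel):
        return all((d, phi) in rel
                   for g in PREMISE_SETS for d in PREMISE_SETS for phi in SENTENCES
                   if (g, phi) in rel and g <= d)

    for bits in product((False, True), repeat=len(PAIRS)):
        rel = {pair for pair, keep in zip(PAIRS, bits) if keep}
        if reflexive(rel) and transitive(rel):
            assert monotonic(rel)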
To see the entailment, begin by assuming the antecedent of monotonicity:

(1) Γ ⊩ φ
(2) Γ ⊆ Γ+

By reflexivity and (2), we know:

(3) Γ+ ⊩ ψ for all ψ ∈ Γ

Then by transitivity, (3), and (1), we know:

(4) Γ+ ⊩ φ

and (4) is just the consequent of monotonicity. Thus, the fact that our intuitions favor both reflexivity and transitivity even though reflexivity and transitivity cannot both be true illustrates that our intuitions about high-level features of reasoning are unreliable. For this reason, we should not rely on such intuitions in the case of cut either.[96]

[96] Similarly, the good reasoning analog of the so-called "easy half" of the deduction theorem (if Γ ⊩ ψ→φ, then Γ ∪ {ψ} ⊩ φ) can seem intuitive but can also be shown to be unacceptable (cf. Shoham 1987: 734).

3. The Fundamental Argument

Having set aside this simple argument, I can now present the basic challenge for the idea that good reasoning might not satisfy cut. To start, consider what it would be for good reasoning to fail to satisfy cut. Cut says if Γ ⊩ ψ for all ψ ∈ Δ and Γ ∪ Δ ⊩ φ, then Γ ⊩ φ. So if good reasoning were to fail to satisfy cut, there would be some sets of sentences Γ and Δ and some sentence φ such that Γ ⊩ ψ for all ψ ∈ Δ and Γ ∪ Δ ⊩ φ, but Γ ⊮ φ. Evidently, this tells us that when Δ is on the left of ⊩ (i.e., when it is a premise), together with Γ we can conclude φ, but we cannot conclude φ when Δ is to the right of ⊩ (i.e., when it is a conclusion from Γ). So if cut fails, this means that whether Δ is on the left or right of ⊩, whether it is a premise or a conclusion, makes a difference. A natural question to ask is how being on the left or right of ⊩ could matter for what you can conclude. What is different about being on either side of this symbol?

The answer that I will be exploring in this paper is that what you can conclude depends on whether Δ is on the left or right because the beliefs that constitute your premises (the left of ⊩) play a different epistemic role from the beliefs that constitute your conclusions (the right of ⊩). If that were so, we could explain why what conclusions you can draw differs depending on whether Δ is on the left or right: the conclusions differ because premise-type beliefs in the propositions expressed by the sentences in Δ have a different role from conclusion-type beliefs in the propositions expressed by the sentences in Δ.[97]

[97] This is not to say it is the only possible answer to this question. A different answer (which has not been explicitly developed as far as I know) says that claims can mean something different depending on whether they are on the left or right of ⊩. To illustrate how this would answer the question, suppose there were a sentence φ#ψ that meant what φ ∨ ψ means when it is on the right but what φ ∧ ψ means when it is on the left. Then it would be reasonable to conclude φ#ψ from {φ}. And it would be reasonable to conclude ψ if you started out with {φ, φ#ψ}. But it would not be reasonable to conclude ψ from {φ}. This illustrates a failure of cut. (Discussion with Jeff Horty and Johannes Schmitt has also led me to see that we may locate this line of thought within a different and more general theoretical context as well. Encouraged by the inferential theory of meaning for logical constants (see Dummett 1975, MacFarlane 2009, and Prawitz 1983) and generalizing on it, we can imagine a view on which the meaning of a term is given by the pieces of good reasoning that can be performed with it. It follows from this theory that # cannot be given a single univocal meaning. It could, however, be given one meaning as a premise and a different meaning as a conclusion. Of course, # is essentially Prior 1960's 'tonk', which was used as an objection to the inferential theory to the effect that if 'tonk' is meaningful, then a contradiction follows. As Cook 2005 shows (and as was implicit in Belnap 1962), this argument can be avoided on the assumption that logical consequence fails to satisfy cut. Since our focus is reasoning and not logic, we may remain neutral about this debate. See Restall 2005: 197 and Ripley 2013: §2.2 for recent discussion of whether the inferential theory of logical constants must accept that logical consequence satisfies cut.) Despite the fact that this gives us a straightforward answer to the question of how good reasoning might fail to satisfy cut, I wish to set this approach aside. I find it unsatisfying because it appears to vindicate the idea that good reasoning fails to satisfy cut only by thinking of such reasoning as involving a kind of equivocation. That said, there are more sophisticated ways of developing this idea that may make it more palatable. For example, Stalnaker 1994: §3 discusses the idea that the implicit meaning of a claim might differ depending on whether it is a premise or conclusion while the explicit meaning does not. Update semantics can distinguish meaning as update potential from meaning as the object that is the output of an update potential applied to a conversational setting (see Veltman 1996). We can imagine a view on which a claim has the same update potential whether it is a premise or a conclusion, but its meaning in the second sense differs. I leave consideration of these interesting and underexplored approaches to one side in this paper.

Of course, this raises the question of whether the difference between premise and conclusion can bear this epistemic weight. And indeed I think this is a powerful way of making it clear how fraught the idea that good reasoning might not satisfy cut is. It requires an important difference between being a premise and a conclusion. And it is hard to see how this difference could matter so much. This idea—that there is no epistemically important difference between premises and conclusions—is the fundamental intuitive thought driving the argument that I will develop and criticize in this section.

Indeed, we can further motivate this already powerful thought by noting how we came to be familiar with the distinction between premises and conclusions. For most of us, this happened when we were taking Logic 101. There we were taught to formalize arguments so that they have premises and conclusions. And then in other philosophy classes we went on to formalize arguments that we found in various papers.
In doing this, often what we used as a premise in a certain context would be a conclusion in certain other contexts. And this did not necessarily reflect anything of epistemic importance. Rather, it often just reflected the dialectical context in which we were engaged. This suggests that even once we fix on a particular agent at a particular time, we cannot distinguish between the beliefs that constitute her premises and the beliefs that constitute her conclusions.
Instead, which beliefs constitute her premises and conclusions depend on dialectical and, perhaps, other pragmatic factors. So it seems that even for a given agent at a given time, certain beliefs constitute her premises relative to certain dialectical and pragmatic purposes, but the same beliefs may constitute her conclusions relative to other dialectical and pragmatic purposes.

These observations suggest the following argument: The premise/conclusion distinction is dialectical and pragmatic rather than purely epistemic. So it should not play an important role in determining what it is rational for us to believe. But if good reasoning does not satisfy cut, the premise/conclusion distinction does play an important role in determining what it is rational for us to believe. Thus, good reasoning must satisfy cut, because the premise/conclusion distinction cannot bear the epistemic weight that I require it to bear.

To respond to this argument, I must show that there is an epistemically important difference between premises and conclusions. I will go about this indirectly in this section (§3) by showing that everyone should think that there is an epistemically important difference between premises and conclusions whether or not they accept cut. As we will see, this tu quoque response will unfold in a number of stages. I will provide a direct account of what the premise/conclusion distinction consists in only in §5, where I turn to developing a positive picture of what reasoning that fails to satisfy cut looks like.

3.1 The Traditional Way to Eliminate the Distinction

My argument for thinking everyone needs a(n epistemically important) premise/conclusion distinction begins by investigating what it would take to eliminate this distinction (§3.1-3.3). It is only once we have chased down the answer to this question that we will be in a position to see why everyone must make a premise/conclusion distinction (§3.4).

We may begin to consider the question of what it would take to eliminate the premise/conclusion distinction by considering the traditional idea that cut is part of a particular package of principles that eliminates the distinction between premises and conclusions. The package is thought to consist of cut and cautious monotonicity, where we say:

⊩ satisfies cautious monotonicity just in case if Γ ⊩ φ and Γ ⊩ ψ, then Γ, ψ ⊩ φ

and understand Γ, ψ as shorthand for Γ ∪ {ψ}.

It is not hard to see why cut and cautious monotonicity together might be thought to eliminate the premise/conclusion distinction. Cut says that if you can conclude φ when you have ψ as a premise together with the premises Γ, then you can conclude φ when you have ψ as a conclusion from the premises Γ as well. In effect, cut says that ψ is at least as inferentially powerful as a conclusion as it is as a premise. Cautious monotonicity, on the other hand, says that if you can conclude φ when you have ψ as a conclusion from the premises Γ, then you can conclude φ when you have ψ as a premise together with the premises Γ. In effect, cautious monotonicity says that ψ is at least as inferentially powerful as a premise as it is as a conclusion.
We can summarize our discussion so far as follows:

cut + cautious ⇔ no inferential difference
non-cut + cautious ⇔ premises more inferentially powerful
cut + non-cautious ⇔ conclusions more inferentially powerful
non-cut + non-cautious ⇔ different inferential power but not easily comparable

The conjunction of cut and cautious monotonicity is often called cumulativity:

⊩ satisfies cumulativity just in case if Γ ⊩ ψ, then Γ ⊩ φ iff Γ, ψ ⊩ φ

So the idea is that if good reasoning satisfies cumulativity, premises and conclusions are exactly as inferentially powerful as one another. David Makinson in his classic paper "General Patterns in Nonmonotonic Reasoning" articulates a particularly clear expression of this idea that good reasoning satisfying cumulativity is desirable:

Why should these conditions be seen as important? Because they correspond to certain very natural and useful ways of organizing our reasoning. They tell us that when we are reasoning, we may accumulate our conclusions into our premises without loss of inferential power (cautious monotony) or amplification of it (cut). In this sense, the reasoning process is taken to be stable. (1994: 41)

Makinson's thought is that premises and conclusions having the same inferential power leads to good reasoning being organized in a natural, useful, and stable way.[98]

[98] Stalnaker 1994: 18 and Jeff Horty in conversation have expressed a similar sentiment.

Now Makinson is not fully explicit about what makes this organizational structure so desirable. But the fundamental intuitive thought that we encountered at the beginning of this section provides a natural answer to the question of why it is desirable: it is desirable for there to be no difference in inferential power between premises and conclusions because, the fundamental thought goes, quite generally there is no epistemically important difference between premises and conclusions.

And in fact it is tempting to take the short step from noticing that there is no difference in the inferential power of premises and conclusions if good reasoning satisfies cumulativity to thinking that this just means that there is no epistemically important difference at all between premises and conclusions if good reasoning satisfies cumulativity. And if we take this short step, we can see why this package of principles (cumulativity) appears to eliminate the premise/conclusion distinction.

3.2 Why It's Mistaken

The trouble is that the short step is a mistake. It assumes that the only epistemically important dimension of comparison between premises and conclusions is inferential power. Though this is a natural and tempting idea, it is false. There is another dimension of comparison between premises and conclusions that I call retractability.

To begin to see this, it helps to know that cumulativity, non-monotonicity, and reflexivity are jointly consistent.[99] Nonetheless, I will now show that if good reasoning is both non-monotonic and reflexive, there is an epistemically important difference between premises and conclusions: conclusions are retractable but premises are not.

[99] See Gabbay 1985; Kraus, Lehmann, and Magidor 1990; Makinson 1994; and Shoham 1987 for examples of systems that have all of these properties.

To get the intuitive idea of retractability, focus on the example that we used to illustrate why good reasoning fails to satisfy monotonicity: we considered an agent who believes that Tweety is a bird and who believes that normally, birds fly. We noted that it was good reasoning to conclude that Tweety flies.
And we also noted that this conclusion is retractable in the sense that if new information were added to the agent's premises, it would no longer be good reasoning to conclude that Tweety flies. That is, if the agent were to learn, for example, that Tweety is a penguin and that normally, penguins do not fly, it would no longer be good reasoning to conclude that Tweety flies.

More generally, we will say a particular claim φ is retractable when it is a conclusion just in case there is a Γ and ψ such that Γ ⊩ φ and Γ, ψ ⊮ φ. Less formally, the idea is that φ is retractable when it is a conclusion just in case φ is the conclusion of some premise set and, for at least one such premise set, you can add to that set and φ will no longer be a conclusion. To say conclusions are retractable is to say that there is at least one claim that is retractable when it is a conclusion. So non-monotonicity entails that conclusions are retractable in this sense.

Next let us say a claim φ is retractable when it is a premise just in case there is a Γ and ψ such that φ ∈ Γ and Γ, ψ ⊮ φ. Less formally, the idea is that φ is retractable when it is a premise just in case φ is a member of some premise set and, for at least one such premise set, you can add to that set and φ will no longer be a conclusion. To say premises are retractable is to say there is at least one claim that is retractable when it is a premise.[100]

[100] So we can say φ is retractable as a conclusion (as a premise) just in case φ is a conclusion of (an element of) some premise set and, for at least one such premise set, you can add to that set and φ cannot be concluded from that set together with what has been added to it.

Reflexivity (which, recall, says that for any set of sentences Γ, Γ ⊩ φ for all φ ∈ Γ) entails that premises are not retractable in this sense. For consider any set of claims Γ and any element φ of that set: it trivially follows from reflexivity that Γ, ψ ⊩ φ, because φ ∈ Γ ∪ {ψ} given that φ ∈ Γ.

What this means in less formal terms is that if good reasoning fails to satisfy monotonicity but does satisfy reflexivity, agents must distinguish the beliefs that constitute their premises from the beliefs that constitute their conclusions. This is because when such agents find out new information, they may no longer be able to conclude φ if they had a conclusion-type belief that φ, though they are guaranteed to be able to conclude φ if they had a premise-type belief that φ. Thus, there is an epistemically important difference between premises and conclusions if good reasoning is non-monotonic and reflexive: conclusions are retractable and premises are not.

We can then sum up this second dimension of comparison between premises and conclusions as follows:

monotonicity + reflexivity ⇔ not retractable
non-monotonicity + reflexivity ⇔ only conclusions retractable
monotonicity + non-reflexivity ⇔ only premises retractable
non-monotonicity + non-reflexivity ⇔ both retractable

This teaches us that cumulativity does not in fact suffice to eliminate the distinction between premises and conclusions, because good reasoning can be cumulative, non-monotonic, and reflexive; and if it were, there would still be a distinction between premises and conclusions. This teaches us that if good reasoning fails to distinguish between premises and conclusions, it must satisfy more conditions than cumulativity. In particular, further conditions are needed to ensure that there is no distinction in the retractability of premises and conclusions. Since we know good reasoning is not monotonic, this means that the correct set of conditions for eliminating the premise/conclusion distinction includes cumulativity, non-monotonicity, and non-reflexivity.
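Both the joint-consistency claim and the retractability asymmetry can be exhibited concretely in the toy Tweety relation sketched earlier (again, an illustration of mine). Brute-force search over every premise set drawn from its five-sentence vocabulary confirms that the relation is reflexive, non-monotonic, and cumulative, and that in it conclusions are retractable while premises are not:

    from itertools import combinations

    BIRD, BFLY = 'Tweety is a bird', 'Normally, birds fly'
    PENG, PFLY = 'Tweety is a penguin', "Normally, penguins don't fly"
    TF = 'Tweety flies'
    SENTENCES = (BIRD, BFLY, PENG, PFLY, TF)

    def concludes(premises, c):
        if c == TF and TF not in premises:
            return (BIRD in premises and BFLY in premises
                    and not (PENG in premises and PFLY in premises))
        return c in premises

    SETS = [frozenset(c) for r in range(6) for c in combinations(SENTENCES, r)]

    # reflexivity: premises are never retractable
    assert all(concludes(g, p) for g in SETS for p in g)
    # non-monotonicity: the conclusion 'Tweety flies' is retractable
    assert concludes({BIRD, BFLY}, TF) and not concludes({BIRD, BFLY, PENG, PFLY}, TF)
    # cumulativity: once Gamma licenses every member of Delta, adding Delta to the
    # premises neither adds conclusions (cut) nor removes them (cautious monotonicity)
    for g in SETS:
        for d in SETS:
            if all(concludes(g, psi) for psi in d):
                assert all(concludes(g | d, phi) == concludes(g, phi)
                           for phi in SENTENCES)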
3.3 The Correct Approach to Eliminating the Distinction

Unfortunately, even this package of principles does not suffice to eliminate the distinction. The way to see this is to notice that non-monotonicity and non-reflexivity tell us that both premises and conclusions are retractable, but they do not tell us that they are retractable under exactly the same conditions. For example, we could imagine a theory on which premises are retractable only when the premise set is logically inconsistent, but conclusions are retractable not just when they are logically inconsistent with the premises but also when they are unreasonable or improbable given the premises.

The question then to ask is what conditions would ensure that premises and conclusions are retracted in exactly the same conditions. We will answer this question in two stages. First we will give a condition that ensures that conclusions are retracted when premises are. Second we will give a condition that ensures that premises are retracted when conclusions are.

Let's consider, then, the first part—determining what condition ensures that if you have to retract a claim when you accept it as a premise, you also have to retract it when you accept it as a conclusion. Since we have seen that for there to be no distinction between premises and conclusions in reasoning, good reasoning must be non-reflexive, we can suppose that there is some set of sentences Γ and some sentence φ such that the following holds:

Γ, φ ⊮ φ

This tells us that if we accept φ as a premise and Γ as a set of premises simultaneously, we have to retract our acceptance of φ. If we are to say that there is no difference between premises and conclusions, we should say that we have to retract our acceptance of φ regardless of whether it is a premise or a conclusion. In other words, Γ ⊮ φ. So the following condition ensures that conclusions are retractable when premises are retractable:

Cautious Reflexivity: if Γ, φ ⊮ φ, then Γ ⊮ φ

The contrapositive of this is easier to parse:

if Γ ⊩ φ, then Γ, φ ⊩ φ

Cautious reflexivity says that if you start off with some premises that make it permissible for you to believe that φ, adding φ to your premises cannot make it impermissible for you to believe that φ. Evidently, this plausible condition is entailed by reflexivity but does not itself entail reflexivity.

And interestingly, it is redundant given the package of principles that we are considering. We are considering the package that includes cautious monotonicity, which says the following:

if Γ ⊩ φ and Γ ⊩ ψ, then Γ, ψ ⊩ φ

Cautious reflexivity is redundant given the package of principles (and has the name that it does) because it is the special case of cautious monotonicity where ψ is φ. So this shows that the first part of our task has already been achieved by accepting cautious monotonicity.
Let's turn now to the second part of our task—determining what condition ensures that if you have to retract a claim when you accept it as a conclusion, you also have to retract it when you accept it as a premise. Consider a case where a conclusion is retractable. This is a case where we can conclude φ from Γ:

Γ ⊩ φ

but we can no longer conclude φ when we add ψ to our premises:

Γ, ψ ⊮ φ

In this case, φ is retractable when it is a conclusion. For φ to be retractable as a premise in this same condition, it would have to be that if we had started out with φ and Γ as premises and then added ψ to our premises, we would similarly have to retract φ:

Γ, φ, ψ ⊮ φ

So the following condition ensures that premises are retractable when conclusions are retractable:

Extra Non-monotonicity: if Γ ⊩ φ and Γ, ψ ⊮ φ, then Γ, φ, ψ ⊮ φ

While monotonicity trivially entails extra non-monotonicity (by falsifying the antecedent), extra non-monotonicity does not entail monotonicity. What extra non-monotonicity tells us, in the presence of non-monotonicity, is that there are extra cases in which learning new information makes it so you have to retract old conclusions. It says that if you start out concluding φ from Γ and adding ψ to your premises makes it no longer permissible to conclude φ, then adding both φ and ψ also makes it no longer permissible to conclude φ. Unlike cautious reflexivity, extra non-monotonicity is independent of cumulativity and non-monotonicity.[101]

[101] See Rott 2001: §5.2 for a system whose reasoning relation satisfies cumulativity and non-monotonicity but fails to satisfy extra non-monotonicity. It is essentially the kind of system that I described in the beginning of this subsection. The reason why it is not extra non-monotonic is that reflexivity fails only if the premise set is inconsistent, but it is not generally true that if Γ, ψ ⊮ φ, then Γ ∪ {φ} ∪ {ψ} is inconsistent. Thus, the system allows that we may have Γ, φ, ψ ⊩ φ even when Γ, ψ ⊮ φ. These remarks about the properties of Rott's system follow from observation eight in Rott 2001: 138.

We have finally determined how to eliminate the distinction between premises and conclusions:

cut + cautious monotonicity (i.e., cumulativity) ⇔ same inferential power
cautious reflexivity + extra non-monotonicity ⇔ same retraction conditions
cumulativity + extra non-monotonicity ⇔ no premise/conclusion distinction

Cumulativity eliminates the distinction in inferential power. Cautious reflexivity and extra non-monotonicity eliminate the distinction in retraction conditions. So cumulativity (which entails cautious reflexivity) and extra non-monotonicity eliminate the distinction between premises and conclusions. This, then, is the package of individually necessary and jointly sufficient principles that eliminates the distinction between premises and conclusions.

3.4 Why The Package is Unacceptable

So the traditional view is correct in spirit, if not in detail, that in order to eliminate the distinction between premises and conclusions we must accept that good reasoning satisfies cut. But seeing the details of what it takes to eliminate the distinction between premises and conclusions is important because it puts us in a position to see why there must be an epistemically important difference between premises and conclusions. There must be such a distinction because the package of principles needed to eliminate the distinction between premises and conclusions is not acceptable. Good reasoning does not satisfy both cumulativity and extra non-monotonicity. In particular, no one should accept that good reasoning quite generally must satisfy extra non-monotonicity.

We can illustrate why good reasoning does not satisfy extra non-monotonicity with the help of the following example. First, suppose you believe that Tweety is a bird and believe that normally, birds fly. Here it is good reasoning to conclude that Tweety flies.
So Γ ⊩ φ where Γ = {'Tweety is a bird', 'Normally, birds fly'} and φ is 'Tweety flies'. Second, suppose you have those two permissible beliefs and then come to believe that Tweety is a bird with a defective wing and that normally, birds with defective wings do not fly. Here it is no longer good reasoning to conclude that Tweety flies. So Γ, ψ ⊮ φ where ψ is 'Tweety has a defective wing and normally, birds with defective wings do not fly', and Γ and φ are as they were before. This example then satisfies the antecedent of extra non-monotonicity. But intuitively, it does not satisfy the consequent. For suppose you have those three beliefs and then see Tweety flying and on the basis of this come to believe that Tweety flies. In this case it does seem like good reasoning to conclude that Tweety flies despite the fact that Tweety has a defective wing. This is because some abnormal defective-winged birds do fly and you saw Tweety flying. So Γ, ψ, φ ⊩ φ. This then is a counterexample to extra non-monotonicity.

More generally, we can think of extra non-monotonicity as follows. When we permissibly believe that Tweety is a bird and permissibly believe that normally, birds fly, we are defeasibly permitted to believe that Tweety flies. This permission is defeasible because we can think of, e.g., permissibly believing that Tweety is a penguin as "defeating" this permission: it is not permissible to believe that Tweety flies when you permissibly believe that Tweety is a penguin, that Tweety is a bird, and that normally, birds fly. Extra non-monotonicity, in this terminology, says that if you encounter a defeater, ψ, for your belief that φ that is based on your set of premises Γ, then that defeater functions as a defeater of any source that could give you φ as a premise. ψ must make it so that you can't trust any external input that gives you φ.

It should be no surprise that this is where we end up if we are trying to eliminate the distinction between premises and conclusions. This elimination would require that if something defeats an inferential chain to φ, it defeats an external input that gives you φ as well. While this might be a useful property of certain kinds of reasoning in certain cases, it is not one that generally holds. For this reason, good reasoning does not satisfy extra non-monotonicity. And since extra non-monotonicity is necessary for eliminating the premise/conclusion distinction, it follows that good reasoning does distinguish between premises and conclusions. Thus, the fundamental intuitive thought driving the argument that we are considering is mistaken: there is in fact an epistemically important difference between premises and conclusions.
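The counterexample can be cast as a small variant of the earlier toy relation (again my own illustration): seeing Tweety fly puts 'Tweety flies' among the premises, where it survives the defective-wing defeater.

    BIRD, BFLY = 'Tweety is a bird', 'Normally, birds fly'
    DEFECT = ('Tweety has a defective wing and normally, '
              'birds with defective wings do not fly')
    TF = 'Tweety flies'

    def concludes(premises, c):
        if c == TF:
            # As a premise (you saw Tweety flying), TF is never retracted here;
            # as a mere conclusion, it is defeated by the defective-wing belief.
            return TF in premises or (BIRD in premises and BFLY in premises
                                      and DEFECT not in premises)
        return c in premises

    G = {BIRD, BFLY}
    assert concludes(G, TF)                  # Gamma licenses phi
    assert not concludes(G | {DEFECT}, TF)   # Gamma, psi does not license phi
    assert concludes(G | {DEFECT, TF}, TF)   # but Gamma, phi, psi licenses phi,
                                             # contradicting extra non-monotonicity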
3.5 Where We've Got To

Of course, the idea that good reasoning fails to satisfy cut does not merely say that there is some distinction or other between premises and conclusions. Rather, it says there is a very specific difference between premises and conclusions—they differ in their inferential power. The argument that we were considering in this section objected to this specific difference on the general grounds that there is no epistemically important difference at all between premises and conclusions. We have seen that the general grounds that this argument is based on are mistaken. But this response leaves open the possibility that the only difference between premises and conclusions might consist in a difference in their retractability rather than a difference in their inferential power.

This is as it should be. My goal is not to argue that there is some particular difference in the inferential power of premises and conclusions but only to argue that we should not rule out this possibility. Still, this leaves it open to the objector to develop some high-level argument that is consistent with there being some general difference between premises and conclusions but claims that there must not be a difference in inferential power in particular. This is exactly what our next argument attempts to do.

4. The Argument from Suppositional Reasoning

The argument of this section seeks to show that there is no difference in inferential power between premises and conclusions by showing that if good reasoning were to involve such a distinction, we would no longer be able to engage in the epistemically important practice of suppositional reasoning.

4.1 The Fruitfulness of Suppositional Reasoning

Lou Goble nicely illustrates the fruitfulness of this practice:

In mathematics there may be interest in whether a certain proposition, φ, is true, or follows from presumptively true axioms, e.g. ZFC. Maybe it's a very hard question; many people work on it. Someone finally proves, not the result, but that φ follows from the axioms, e.g., ZFC, + proposition ψ, where ψ seems credible itself. That doesn't answer the original question, but it may well be regarded as significant progress. Now someone else, after much hard work, proves that ψ is not independent of ZFC, but is entailed by it. That is hailed as a significant result. Moreover, the original question concerning φ now seems answered, which was, of course, the original point of interest.[102]

[102] Cf. Burgess 2009: 106-7.

This case illustrates how we often reason with a supposition to some conclusion, later show that the supposition is true, and, in light of this, accept the conclusions that we arrived at under the supposition. And it shows how this is an ordinary but important practice in theoretical inquiries such as mathematics. But Goble does not intend this example to be narrowly restricted to inquiries like mathematics, where it is perhaps true that all forms of good reasoning correspond to deductive inference, inquiries where ampliative inference is not good reasoning. Instead, it is one instance of a general practice that we engage in both in mathematics and in other inquiries that allow for ampliative reasoning.

Having established that suppositional reasoning plays this fruitful role, the argument is that good reasoning must satisfy cut in order to allow for such reasoning.[103] In Goble's example, a mathematician reasons well to the conclusion φ from not just her set of premises, ZFC, but her set of premises together with ψ. In symbols, ZFC, ψ ⊩ φ. Then later we are told that a mathematician is able to reason well to the conclusion ψ from ZFC itself. So this means ZFC ⊩ ψ.

[103] This argument is inspired by discussion with Lou Goble as well as his case. It is also suggested in Kraus, Lehmann, and Magidor 1990: 177-178 and Makinson 1994: 43.
In cases like this, we have now shown φ. That is, ZFC ⊩ φ. But in order for this to be so, it seems that we must think that ZFC ⊩ φ follows from ZFC ⊩ ψ and ZFC, ψ ⊩ φ. And that is just cut. The idea, then, is that the fruitful practice of suppositional reasoning shows that there is no difference in the inferential power of premises and conclusions and thereby shows that good reasoning must satisfy cut.

4.2 Suppositional Reasoning without Cut

I should begin my response to this argument by conceding that suppositional reasoning is a fruitful practice and that if it were true that good reasoning must satisfy cut in order to allow for this practice, I would agree that this is a decisive argument that good reasoning must satisfy cut. Luckily, however, we can allow for suppositional reasoning even if good reasoning does not satisfy cut.

To see this, recall that cut says that if Γ ⊩ ψ and Γ, ψ ⊩ φ, then Γ ⊩ φ. So for cut to fail is for there to be at least one claim ψ and one claim φ such that you can reason to φ when ψ is a premise but not when it is a conclusion. This does not entail that in reasoning with any claim we must be sensitive to whether it is a premise or a conclusion. And it does not mean, even for those claims ψ which do require us to be sensitive to whether they are premises or conclusions, that we cannot reason at all with them when they are conclusions. All it says is that we cannot reason in exactly the same way whether or not ψ is a premise or a conclusion.

This opens the door for an account of suppositional reasoning. In general, the idea would be that even for those claims that are sensitive to whether they are premises or conclusions for certain kinds of reasoning, there are other kinds of reasoning that are indifferent to whether they are premises or conclusions. Since this idea is abstract, consider an example:

Logical Suppositional Reasoning: if Γ ⊩ ψ and Γ, ψ ⊢ φ, then Γ ⊩ φ

Logical Suppositional Reasoning says that if you can reason to ψ from Γ and if φ is a logical consequence of Γ and ψ together, then you can reason to φ from Γ. This tells us that insofar as we are performing logically valid inferences, we do not have to keep track of whether ψ is a premise or a conclusion. This allows for suppositional reasoning but restricts its role. It says that if suppositional reasoning is to inform our belief formation, we must restrict ourselves to only logically valid inferences under suppositions.

Importantly, Logical Suppositional Reasoning is only an illustration of one way there could be suppositional reasoning even if good reasoning fails to satisfy cut.[104] More generally, we can allow for suppositional reasoning by isolating a class of inferences that are good regardless of whether a claim is a premise or a conclusion. When I present a positive picture of how good reasoning might not satisfy cut, we will see that this picture allows us to state in simple terms which kinds of inferences are good regardless of whether a claim is a premise or a conclusion (§5.5).

[104] So though Logical Suppositional Reasoning assumes supraclassicality (see n. 93), my response to the objection does not.

Of course, it is true that if good reasoning fails to satisfy cut, we cannot reason indiscriminately under suppositions. But we have yet to see why this is a cost of the proposal. After all, examples like Goble's do not establish that our fruitful practice is indiscriminate in this way. All that they establish is that we often engage in reasoning under suppositions that informs our beliefs. The examples alone do not tell us anything about the precise scope of this practice.
Investigating this question would require us to look at the details of particular domains of reasoning and puzzles about reasoning. While this kind of investigation is needed in order to show that some particular case is one in which good reasoning fails to satisfy cut, it is not the goal of this chapter to present cases in which good reasoning fails to satisfy cut. Rather, the goal of this paper is to defuse the best arguments for thinking that good reasoning must satisfy cut and to develop an alternative picture of how good reasoning might not satisfy cut. And we have accomplished this task for the argument considered in this section.[105]

[105] The basic idea of my response also responds to another argument that good reasoning must satisfy cut (which I first considered due to a discussion with Fabrizio Cariani). The argument has two premises. The first premise is the so-called hard half of the deduction theorem (if Γ, ψ ⊩ φ, then Γ ⊩ ψ→φ). The second premise is modus ponens (if Γ ⊩ ψ→φ and Γ ⊩ ψ, then Γ ⊩ φ). These premises entail cut. To see this, assume the antecedent of cut: Γ ⊩ ψ and Γ, ψ ⊩ φ. By the first premise, we have Γ ⊩ ψ→φ. By the second premise, we can derive the consequent of cut, Γ ⊩ φ. My response to this argument is to deny the first premise. I propose as an alternative explanation of the cases that motivate it: if Γ, ψ ⊢ φ, then Γ ⊩ ψ→φ. Like Logical Suppositional Reasoning, this uses logically valid inferences as an example of the more general strategy of isolating a class of inferences that one can perform regardless of whether a claim is a premise or a conclusion.

What's more, the strategy used to address this argument promises to generalize to deal with many (though perhaps not all) other arguments in favor of thinking that there is no difference in inferential power between premises and conclusions. After all, other arguments will tend also to use cases to show that there are certain general practices that require no inferential difference between premises and conclusions. And my response will be that a restricted set of inference rules that do not distinguish between premises and conclusions can in principle explain the practice without ruling out the possibility that some inferences are sensitive to the premise/conclusion distinction. So, at least considered at the level of generality that we are working with in this paper, the prospects of developing any argument directly for the conclusion that there is no difference in inferential power between premises and conclusions—directly for the conclusion that good reasoning must satisfy cut—look dim.

5. A Foundationalist Interpretation of the Structure of Reasoning

So far our discussion has concerned arguments in favor of thinking that good reasoning must satisfy cut, and we have seen how they fail. In discussing these arguments, I have taken a very abstract perspective in order to remain neutral between different kinds of theories about the nature of reasoning. This has the virtue of making the premises of my arguments against the idea that good reasoning must satisfy cut acceptable to many different audiences. But it has the vice of being so abstract that it makes it hard to see exactly how it is that good reasoning might fail to satisfy cut. In the remainder of the paper, then, I propose to take on a more partisan perspective.
In particular, I will be assuming that a certain minimal foundationalist epistemology is correct. And I will use it to provide a more concrete interpretation of the main structural claims about good reasoning that we are discussing, and I will show how, within this picture, good reasoning might fail to satisfy cut. Though I do not claim that this is the only approach that allows us to understand how good reasoning might fail to satisfy cut, it is an approach that allows us to see relatively simply how good reasoning might fail to satisfy cut.[106],[107]

[106] There are formal results concerning the correspondence between properties of good reasoning and certain kinds of semantic structures (see n. 91). The philosophical worry that arises from the fact that most of these results require good reasoning to satisfy cut is that there is no good way to make sense of what good reasoning that does not satisfy cut could be like. This section answers this worry but leaves unanswered the interesting formal question of whether a representation result can be had. I hope to return to this formal question in other work.

[107] My approach is similar to Rott 2001: ch. 5, but the discussion starting at §5.3 sheds new light on how good reasoning might not satisfy cut.

5.1 The Framework

The broadly foundationalist framework that I will be working with consists of two components: one more psychological, the other overtly epistemological. The psychological component is that an agent's belief state is structured so that some of her beliefs count as foundational beliefs and others count as non-foundational beliefs. The non-foundational beliefs in an agent's psychology are the ones that are based on her foundational beliefs in the sense that they are the products of reasoning from the foundational beliefs.

In addition to this psychological component, there is an epistemological component of foundationalism that consists of two theses. The first thesis is that the permissibility of non-foundational beliefs is determined by their relationship to foundational beliefs. In particular, it is permissible to non-foundationally believe that φ just in case it is good reasoning to conclude φ from Γ, you permissibly believe each of the propositions in Γ, and your beliefs in the propositions expressed by the sentences in Γ are all and only your foundational beliefs. The second thesis is that the permissibility of foundational beliefs is not determined in this way. Different foundationalist views can give different accounts of how the permissibility of foundational beliefs is determined. For example, one account says a foundational belief is permissible in virtue of being caused in the right way by perception.

This minimal form of foundationalism is a credible albeit controversial theory in epistemology. But it is worth emphasizing here that it is minimal in a way that makes it agnostic about many of the most controversial assumptions that traditional forms of foundationalism endorse. For example, it is compatible with, but not committed to, any of the typical internalist assumptions associated with foundationalism.[108]

[108] See Fumerton 2009 for an introduction to more traditional and less minimal forms of foundationalism and Chisholm 1989 for a classic defense. I do not assume, as Pollock and Cruz 1999 do, that foundationalism makes the so-called doxastic assumption. Moreover, though my discussion is conducted in terms that are not congenial to Pollock and Cruz's direct realism, this is done only to simplify the discussion. See Bonjour 1978, Harman 1973, Lehrer 1974, and Sellars 1997 for criticisms of these traditional forms of foundationalism.
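The two components can be pictured with a small data structure. In this sketch the names are mine and purely illustrative: a belief state records which beliefs are foundational and, for each non-foundational belief, the basis it was reasoned from; the permissibility of a non-foundational belief is then a matter of whether some good-reasoning relation licenses it from all and only the foundations.

    from dataclasses import dataclass, field

    @dataclass
    class BeliefState:
        foundational: set = field(default_factory=set)
        basis: dict = field(default_factory=dict)  # non-foundational belief -> its basis

        def add_conclusion(self, belief, basis):
            # the psychological component: a non-foundational belief is the
            # product of reasoning from the beliefs it is based on
            self.basis[belief] = set(basis)

        def may_conclude(self, belief, good_reasoning):
            # the first epistemological thesis: permissibility of a non-foundational
            # belief is fixed by good reasoning from the foundations alone
            return good_reasoning(frozenset(self.foundational), belief)

Here good_reasoning can be any relation of the kind discussed above, such as the toy concludes functions; the framework itself deliberately leaves it as a parameter.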
5.2 The Interpretation

This model allows us to interpret the main claims that we have discussed so far.[109]

[109] This interpretation is similar to thinking of non-monotonic reasoning as a kind of belief revision (cf. Stalnaker 1994: §4 and Rott 2001). A standard formal theory of belief revision is the AGM theory (Alchourrón, Gärdenfors, and Makinson 1985), and the Makinson-Gärdenfors identity (Makinson and Gärdenfors 1991) provides a bridge between that theory and non-monotonic consequence. However, that system of reasoning must (in the finite case at least) satisfy cut (in the presence of supplementary postulate K*7; see Makinson and Gärdenfors 1991: 197 for discussion). More generally, we break with AGM in representing a belief state as a structured entity rather than a flat classical theory (cf. Rott 2001: ch. 5).

5.2.1 Premise/Conclusion Distinction

The first thing that I want to do is provide an interpretation of the distinction between premises and conclusions. More precisely, I will provide an account of what an agent's premises and conclusions are at a given time. The account is that an agent's premises at a time are her foundational beliefs at that time and her conclusions at a time are her non-foundational beliefs at that time.[110] The foundationalist framework's psychological component then ensures that there is a sharp distinction between premises and conclusions. And this suffices to show that this distinction is not artificial, because it is grounded in an independently motivated theory.

[110] This is only one example of a non-artificial interpretation. Another example: premises are foundational beliefs; conclusions are beliefs, whether they be foundational or non-foundational.

5.2.2 Learning and Monotonicity

The next thing that I want to do is provide an interpretation of the main structural claims that we have been discussing. In order to introduce this interpretation in an accessible way, it helps to focus on a different kind of structure that we have not yet discussed. It is what I call the structure of standard cases of learning. We can represent the structure as follows:

 φ  ψ                 φ  ψ  χ
  ⇑       Add ω         ⇑
  Γ                    Γ, ω

Here is how to read this figure. The left-hand side of it depicts an agent who starts out with a permissible foundational belief in each of the propositions expressed by the sentences in Γ. And this agent permissibly non-foundationally believes that φ and that ψ based on good reasoning from Γ. Then, perhaps due to perception, ω is added to the agent's foundations. So on the right-hand side of the figure we see that the agent now also has a permissible foundational belief that ω. This results, we may assume, in the agent now permissibly non-foundationally believing that χ based on good reasoning from Γ and ω.

I call this 'the structure of standard cases of learning' because often when we learn something new, we are able to draw further conclusions because of this. And this is what is represented in this figure. As we have seen, good reasoning fails to satisfy monotonicity, and this means not all cases of learning work like this. Instead, sometimes we can learn something new and have to take back some of our conclusions. We can then have this structure:

 φ  ψ                   φ
  ⇑       Add ω         ⇑
  Γ                    Γ, ω

Here learning ω made it so that we have to retract some of our old conclusions.
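Both structures can be seen in the toy Tweety relation used earlier (repeated here so the sketch stands alone; it is my illustration), reading the premise set as the agent's foundations and computing which non-foundational beliefs are licensed:

    BIRD, BFLY = 'Tweety is a bird', 'Normally, birds fly'
    PENG, PFLY = 'Tweety is a penguin', "Normally, penguins don't fly"
    TF = 'Tweety flies'

    def concludes(foundations, c):
        if c == TF and TF not in foundations:
            return (BIRD in foundations and BFLY in foundations
                    and not (PENG in foundations and PFLY in foundations))
        return c in foundations

    def conclusions(foundations):
        # the permissible non-foundational beliefs, given these foundations
        return {c for c in (TF,) if c not in foundations and concludes(foundations, c)}

    assert conclusions({BIRD}) == set()
    assert conclusions({BIRD, BFLY}) == {TF}               # standard learning: a new
                                                           # foundation adds a conclusion
    assert conclusions({BIRD, BFLY, PENG, PFLY}) == set()  # non-monotonic learning:
                                                           # new foundations retract one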
5.2.3 Cut and Cautious Monotonicity

We can similarly interpret cut and cautious monotonicity. To see what the question of whether good reasoning satisfies cut amounts to in this setting, it helps to compare it to the structure of standard learning. The structure of standard learning is this:

 φ  ψ                 φ  ψ  χ
  ⇑       Add ω         ⇑
  Γ                    Γ, ω

The natural interpretation of this is one on which the proposition expressed by ω is distinct from the proposition expressed by φ, by ψ, by χ, and each of the propositions expressed by the sentences in Γ. The question of whether good reasoning satisfies cut arises when the thing that we add to the foundations is the same as something that we already had as a conclusion. So for good reasoning to fail to satisfy cut is for the following structure to be possible:

 φ  ψ                 φ  ψ  χ
  ⇑       Add ψ         ⇑
  Γ                    Γ, ψ

Here ψ was a conclusion, but adding it to the foundations leads to us being newly permitted in believing χ. If good reasoning fails to satisfy cut, a structure like this is possible. If it satisfies cut, a structure like this is not possible.

Similarly, to introduce cautious monotonicity it helps to begin by looking at cases in which monotonicity fails:

 φ  ψ                   φ
  ⇑       Add ω         ⇑
  Γ                    Γ, ω

Here again it is natural to interpret ω as expressing a proposition that is distinct from the proposition expressed by φ, by ψ, and each of the propositions expressed by the sentences in Γ. But cautious monotonicity concerns the cases where this is not so. That is, it concerns cases like this:

 φ  ψ
  ⇑       Add ψ         ⇑
  Γ                    Γ, ψ

Here ψ was non-foundational, but adding it to the foundations leads to it no longer being permissible to believe that φ. If good reasoning fails to satisfy cautious monotonicity, a structure like this is possible. If it satisfies cautious monotonicity, a structure like this is not possible.

5.2.4 The Fruits of the Interpretation

This then gives us a simple interpretation of the main claims about structure that we have discussed. This interpretation allows us to isolate a question in foundationalist epistemology—can adding ψ to your foundational beliefs add to your permissible non-foundational beliefs if you were already permitted to non-foundationally believe that ψ?—and the issue of whether reasoning satisfies cut is settled by how we answer this question. Though some epistemologists often tacitly assume an answer to this question, as far as I know the question itself has gone unnoticed in foundationalist epistemology. It is nonetheless a substantive question.[111] And as we have seen, the core elements of foundationalism do not entail an answer to it.

[111] The seminal discussions in n. 108 do not explicitly pose or directly answer this question.

In fact, it is somewhat unsurprising that foundationalism is compatible with the idea that good reasoning fails to satisfy cut. After all, foundationalism already gives foundational beliefs a distinctive epistemic role. In particular, the permissibility of non-foundational beliefs is explained by the relation in which they stand to foundational beliefs. But the permissibility of foundational beliefs is not explained in this way. Thus, the foundationalist framework makes a sharp distinction between premises and conclusions and gives premises a distinctive epistemic role.
Seeing this makes it natural to wonder what further assumptions we would have to make in order to go from foundationalism being merely compatible with good reasoning failing to satisfy cut to actually entailing that good reasoning fails to satisfy cut. What I want to do in the remainder of this section is explain what kind of assumption we would need to add to the foundationalist framework in order for it to entail that good reasoning fails to satisfy cut.

5.3 How to Go Non-Cumulative-Transitive

To begin, recall that we have been informally thinking of non-monotonicity in terms of defeasible permissibility—the belief that Tweety is a bird and the belief that normally, birds fly defeasibly permit you to believe that Tweety flies. The permission is defeasible in the sense that you are permitted unless a certain condition, e.g., your believing that Tweety is a penguin, holds. Formal theories make this informal talk precise. But for the purposes of illustrating my main idea from the informal perspective that we have adopted in this paper, let's stick to this informal way of speaking.

In all of the examples that we have looked at, the 'unless'-clauses of these claims about defeasible permission simply mention other beliefs that you might have (e.g., the belief that Tweety is a penguin). But according to foundationalism, to fully characterize your belief state it is not enough just to say what beliefs you have; the distinctive organizational structure of your beliefs must also be mentioned. And this in turn leaves it open that these 'unless'-clauses may not just mention which other beliefs you have but also mention the distinctive organizational structure of your beliefs. I will now provide two examples of what such 'unless'-clauses might look like. The first example will not involve a failure of good reasoning to satisfy cut, but it will illustrate how organization might matter. This will pave the way for the second example, which will illustrate how good reasoning might fail to satisfy cut.

5.3.1 A Preliminary Example

So consider the following case. Suppose that you foundationally believe of some object o that o seems to have a rough surface. And suppose this belief defeasibly permits you to believe that o has a rough surface. Finally, suppose you come to believe that your sense of touch is malfunctioning. What may you conclude by reasoning now that you have learned this? One answer is that you may no longer conclude that o has a rough surface. Another answer is that you may still conclude that o has a rough surface. And a third answer is that it depends. Though I do not know of any knockdown argument against the first two answers, I wish to explore the third by way of illustrating how the organization of your beliefs may matter. In particular, I wish to explore the idea that it depends on how your belief that o seems to have a rough surface came about. The idea is that your beliefs permit you to believe that o has a rough surface unless you acquired your belief that o seems to have a rough surface through touching. So an agent who acquired the belief that o appears to have a rough surface by touching o would not be permitted to believe that o has a rough surface. But an agent who acquires the belief that o appears to have a rough surface through visual perception would be permitted to believe that o has a rough surface. This illustrates that how your belief came to be foundational might affect what other beliefs are permissible.
We can also look at this issue from a more abstract perspective: what we have observed so far is that sometimes the truth of the belief that φ might make ψ likely to be true or reasonable so long as that belief was arrived at in a certain way. Accordingly, claims about defeasible permissibility that make reference to how our beliefs were arrived at occupy a certain strategic position. Without such 'unless'-clauses, we are faced with a choice. We can either say that the belief that φ does not defeasibly permit believing that ψ or we can say that it does, but make no reference to how the belief that φ was arrived at. Both options are unsatisfying.

The first alternative, which simply rejects that the belief that φ defeasibly permits the belief that ψ, can seem too conservative because at least when the belief that φ comes about in the right way, the belief that ψ seems reasonable and likely to be true. Of course, one way of pressing this alternative is to say that the belief that φ together with the belief that this belief came about in the right way defeasibly permit the belief that ψ. But given that the foundationalist psychology includes not just beliefs but also an account of what place within a larger structure these beliefs occupy and why they occupy this place, it is hard to see why we must insist that one have explicit beliefs about how a belief comes about for them to play the kind of role that I am suggesting that they play. After all, this way of pushing the first alternative concedes that how a belief came about matters and simply tries to capture this fact by insisting one must have beliefs about how one's beliefs came about. This concedes the basic insight that how your beliefs came about matters and captures it in a less natural way. Thus, the first alternative is less plausible than the idea that I am developing.

The second alternative, which places no conditions at all on where the belief that φ comes from, seems far too liberal. It allows that the belief that ψ is permissible in cases where it seems unreasonable or unlikely to be true. While such an aggressive stance might be reasonable if there were no other way for us to become permitted to believe that ψ, the idea that I have sketched allows us to be permitted to believe that ψ without adopting this overly aggressive stance.

So the example that I sketched earlier together with the theoretical consideration that I just offered argue in favor of thinking that claims about defeasible permissibility might have 'unless'-clauses that mention the organizational structure of your beliefs. Now this example does not involve a failure of cut. But it does give us a taste of why 'unless'-clauses might mention structure and this nicely paves the way for our second example, which will illustrate a failure of cut.

5.3.2 A Failure of Cut

Here's the example: suppose (1) the beliefs corresponding to Γ defeasibly permit the belief that φ and (2) the beliefs corresponding to Γ together with the belief that φ defeasibly permit the belief that ψ. And suppose that if we unpack (2)'s 'unless'-clause, it says the beliefs corresponding to Γ together with the belief that φ defeasibly permit the belief that ψ unless the belief that φ is solely based on the beliefs corresponding to Γ. Let's look at what these claims predict.
If we also assume the belief that χ defeasibly permits the belief that φ, they predict that we can have a situation like this:

φ            φ, ψ
⇑   Add χ    ⇑
Γ   ----->   Γ, χ

Though this case is not what I called a standard case of learning, it shares its structure. In standard cases of learning, the new information plays a direct role in permitting the new belief. But in this case, it is the beliefs corresponding to Γ and the belief that φ that permit the new belief. What the belief that χ does is make it so that the belief that φ is no longer solely based on the beliefs corresponding to Γ, and this makes it so that the 'unless'-clause of (2) is no longer true. In the lingo, the belief that χ is a defeater defeater.

So in that example the 'unless'-clause of (2) is no longer true because φ comes to be partially based on the belief that χ. But another way the 'unless'-clause of (2) could no longer be true is by φ becoming part of the foundations and so not based on any other beliefs. That is, we could have a situation like this:

φ            ψ
⇑   Add φ    ⇑
Γ   ----->   Γ, φ

And now notice that this situation is one that demonstrates how good reasoning might fail to satisfy cut. Returning to the less graphic notation that we have been using for most of this paper, the picture of what happens after adding φ tells us that Γ, φ ⊩ ψ. This is because if your foundational beliefs corresponding to Γ are permissible and your foundational belief that φ is permissible, then you would be permitted to believe that ψ. The picture before adding φ shows us that Γ ⊩ φ. And it shows us that Γ ⊮ ψ because the claims about defeasible permissibility that we are considering say that the beliefs corresponding to Γ and the belief that φ permit you to believe that ψ unless the belief that φ is based solely on the beliefs corresponding to Γ.

So this is an abstract example of how reasoning might not satisfy cut. And this shows that if we add a claim about defeasible permissibility of this sort to the foundationalist framework, the package of these claims entails that good reasoning does not satisfy cut. This then gives us an informal model that allows us to understand how good reasoning might fail to satisfy cut. The fundamental idea is that good reasoning might fail to satisfy cut by being sensitive not just to which beliefs you have but also to the distinctive organizational structure of those beliefs. Let's look at how this model might be applied.

5.4 Applications

As I have said, here is not the place to develop and defend any application to a puzzle in detail. But considering how we might use this model to approach one puzzle will illustrate how the model may be helpful. So return to the bootstrapping problem. The problem arose by accepting two pieces of reasoning. First is the inference from 'o appears red' to 'o is red' and similarly for other colors. Second is the inference from a whole track record of claims of the form 'o appears red and is red', which we will call Track Record, to 'my color vision works'.

The solution to this puzzle that is suggested by the present approach is to specify the 'unless'-clauses of these claims about reasoning in a way that makes reference to the organizational structure of your beliefs. In particular, suppose we say the belief that Track Record permits believing that your color vision works unless the belief that o appears red and is red is solely based on the belief that o appears red. This predicts that {'o appears red', 'o* appears blue', …} ⊩ Track Record by the first inference rule.
It also predicts that {'o appears red', 'o* appears blue', …}, Track Record ⊩ 'my color vision works' by the second inference rule. But crucially {'o appears red', 'o* appears blue', …} ⊮ 'my color vision works' because the 'unless'-clause of the second inference rule is triggered. This shows how the model developed here may be applied to the bootstrapping problem.112

112 See chapter four and Weisberg 2010 for further discussion.

An interesting question for further exploration is which, if any, puzzles are best solved in this way. I have considered the bootstrapping puzzle and certain puzzles that arise when reasoning about normative notions that allow for conflict elsewhere.113 But since cut is implicated in any puzzle that has multiple steps, there are many potential further applications of this idea and it remains to be seen which of these applications are fruitful.

113 See chapters two and four.

5.5 Taking Stock

Let's take stock. We introduced a foundationalist framework that allowed us to interpret the main elements of our discussion. We noted that within this framework it might be that good reasoning fails to satisfy cut. We then showed how the framework together with claims about permissibility that mention the distinctive organizational structure of your beliefs actually entails that good reasoning fails to satisfy cut. Finally, we put the model to work by applying it to the bootstrapping problem.

This model also allows us to shed new light on questions that we discussed earlier. For example, we considered the question of how to do suppositional reasoning if good reasoning fails to satisfy cut. And we said that for it to be possible, there must be some class of inferences that is not sensitive to the premise/conclusion distinction. The present model allows us to isolate that class. The model says that inferences whose 'unless'-clauses do not mention the organizational structure of belief are the ones that will be guaranteed to be safe in the context of suppositional reasoning.

Thus, we have isolated a distinctive way in which reasoning with qualitative belief might not satisfy cut. Of course, nothing about the foundationalist framework forces us to say that good reasoning does not satisfy cut. But it does allow us to see how good reasoning could fail to satisfy cut. And with the help of plausible albeit contestable assumptions, it generates examples in which cut fails to be satisfied.

6. Conclusion

I conclude therefore that good reasoning might not satisfy cut. Of course, I have not argued that good reasoning must fail to satisfy cut. In fact, there is a general formal trick that will always ensure that cut holds: any time that we have a case where it seems that Γ ⊩ φ and Γ, φ ⊩ ψ, but Γ ⊮ ψ, redescribe it as a case where it is not actually Γ, φ, and ψ that are involved in the reasoning. Rather, it is Γ together with a marker that indicates that we believe Γ inferentially (for short, Γ-inferential) or Γ together with a marker that indicates that we believe Γ non-inferentially (for short, Γ-non-inferential). And similarly for φ and ψ. So we actually have that Γ-non-inferential ⊩ φ-inferential, that {Γ-non-inferential, φ-non-inferential} ⊩ ψ-inferential, and that Γ-non-inferential ⊮ ψ-inferential. But since we do not have that Γ-non-inferential ⊩ φ-non-inferential, this is no counterexample to cut. I have no qualms with this all-purpose trick and accordingly with this way of restoring cut.
But this redescription does not change the significance of our results. We still have seen how, within a certain picture of qualitative belief, the idea that premises play a special role in reasoning makes sense. This is the philosophical work that we need to do in order to see why we need to resort to redescriptions like this in order to save cut.

Perhaps what all of this really suggests is that thinking of reasoning only using a good reasoning relation of the sort that we have been talking about is not fruitful. Instead, in theorizing about reasoning we must explicitly represent the place within someone's belief system that different claims occupy. This idea is one that I am sympathetic to. As I said at the outset of this paper when I introduced the notion of a good reasoning relation, different theories of reasoning do have different ways of representing belief states. But the reason why looking at things from the point of view of a good reasoning relation was fruitful is that it allows us to make comparisons across very different kinds of theories. And one thing such comparative work showed is that despite having very different perspectives about the structure of belief, almost all theories converge on the idea that good reasoning satisfies cut. The work in this paper was aimed at investigating whether this convergence among theories is an accident or reveals a deeper truth. Now that we have argued that it does not reveal a deeper truth, we should of course return to theorizing about good reasoning that uses more than just the austere resources of the good reasoning relation. And indeed the informal picture that I provided in the last section gives us the beginnings of what such a richer representation of belief might look like.114

114 It also suggests that in developing a formal model, we should consider using more structured objects (e.g., ordered pairs of propositions and their derivational source) as the relata of our model's consequence relation, and so this more complicated relation might satisfy cut (cf. Brewka 1991's restoring cautious monotonicity to default logic).

We have then three main results: first, everyone must make a sharp distinction between the role of premises and conclusions in reasoning (§3). Second, there are no successful arguments against thinking that one of the dimensions on which premises and conclusions differ is their inferential power (§4). Third, within our minimal foundationalist model, the question of whether good reasoning fails to satisfy cut turns on the question of whether what we are permitted to believe depends not just on what beliefs we have but also on the distinctive organizational structure of those beliefs (§5). We should then be open to exploring rejecting cut as a solution to the problems in logic, computer science, ethics, and epistemology that involve multiple steps of reasoning. And more generally, I hope that by clearing certain conceptual barriers to understanding reasoning that does not satisfy cut, the discussion in this paper has brought to light fresh philosophical and formal questions about the nature and structure of good reasoning.115

115 I would like to thank audiences at USC, UCSB, Lingnan University, and the SoCal PhilMath + PhilLogic + FoM Workshop 5.
I would also like to thank Tony Anderson, Jamin Asay, Andrew Bacon, Derek Baker, Fabrizio Cariani, Justin Dallmann, Kenny Easwaran, Rohan French, Lou Goble, Jeff Horty, Michael Johnson, Sarah Lawsky, Ben Lennertz, David Makinson, Andrei Marmor, Jennifer Nado, Kenny Pearce, Indrek Reiland, Michael Rescorla, Darrell Rowbottom, Nathan Salmon, Johannes Schmitt, Kenneth Silver, Justin Snedegar, Scott Soames, Gabriel Uzquiano-Cruz, Ralph Wedgwood, Sean Walsh, Aness Webster, Tim Williamson, Jiji Zhang, and Aaron Zimmerman. Thanks most of all to Mark Schroeder for advice and criticism on every issue at every stage of this project. Finally, I thank the USC Provost's PhD Fellowship and the Russell Fellowship for support.

A. Quantitative Belief

I have focused in this paper on belief understood in purely qualitative terms. In this appendix, I will discuss how our topic looks from a quantitative perspective.

A.1 Probabilistic Theories of Reasoning

According to this approach, we not only have beliefs but credences or partial beliefs. Though there are different ways of developing the details from here, for our purposes we can think of the approach as consisting of three theses. First is the Bayesian thesis that the credal state of a fully rational agent can be represented by a probability function and the learning of this agent can be represented as updating by conditionalization.116 Second is the Lockean thesis that beliefs are reducible to credences over some high enough (but less than 1) threshold.117 Third is the conditional probability account of good reasoning that identifies when it is good reasoning to conclude φ from Γ with when the conditional probability of φ on Γ is higher than the relevant threshold set by the Lockean thesis.118 These theses entail that good reasoning does not satisfy cut.

To see this, consider a one hundred ticket fair lottery that has exactly one winner. Let Γ be a set of sentences describing the lottery. Let Δ be the set of sentences {'ticket one loses', 'ticket two loses', …, 'ticket fifty loses'} and let χ be the conjunction of these fifty sentences. Finally suppose that the threshold set by the Lockean thesis is .9. Now notice that we have Γ ⊩ φ for each φ ∈ Δ, for we may suppose that conditional on the description of the lottery the probability we assign to, e.g., 'ticket one loses' is .99 and hence over .9. And notice that we also have Γ ∪ Δ ⊩ χ because conditional on the description of the lottery and that ticket one loses, that ticket two loses, …, that ticket fifty loses, the probability of the conjunction of these fifty sentences is 1. However, Γ ⊮ χ because conditional on the description of the lottery the probability that all of tickets one through fifty lose is much lower than .9. So cut is not satisfied.

Though this kind of probabilistic approach is powerful and well understood, I have set it aside for the purposes of this paper.119

116 See Easwaran 2011.

117 See Foley 1994: ch. 4 and Sturgeon 2008.

118 The conditional probability of φ on Γ is intended to be understood as the conditional probability of φ on the conjunction of the sentences in Γ. For a detailed look at the relationship between probabilistic consequence relations and non-probabilistic consequence relations, see Hawthorne and Makinson 2007. For alternative ways of developing an account of reasoning with full belief in terms of probability, see Arló-Costa and Parikh 2005 and Makinson 2005: §5.4.
119 That said, I am not sure that this argument is convincing for two reasons. First, its assumptions are controversial. Second, the argument seems to be about a notion of good reasoning that is different than the one that I am interested in, at least if we adopt the standard interpretation of each of these assumptions. The standard interpretation of Bayesianism says that rational agents have credences in every proposition and such agents only change their credences by conditionalization upon learning new information, where, paradigmatically, one learns something new through external input such as perception. Combining this with the Lockean thesis, it follows that a rational agent's beliefs are only altered by learning new information. But the kind of reasoning that I am interested in involves an agent who currently has certain beliefs and reasons from those beliefs without external input to the formation of a new belief. This suggests that the conditional credence account of good reasoning is about a different notion of good reasoning.

Recall that I suggested that there is a convergence among formal theories of reasoning on the idea that good reasoning satisfies cut, save two notable exceptions. Probabilistic theories are the most well-known exception.120 And this is thought to mark an important difference between probabilistic models and non-probabilistic models and, more generally, between models that think of beliefs in graded terms and models that think of beliefs as all-or-nothing states.121 For this reason, I have set aside probabilistic approaches. I have argued, focusing only on beliefs understood as all-or-nothing states and not appealing to partial belief or probabilities, that good reasoning might not satisfy cut. In this way, I have directly spoken to the issue of whether this convergence among formal theories is a coincidence or reveals a deeper truth, because non-probabilistic theories are the ones that converge on the idea that good reasoning must satisfy cut.

A.2 A Brief Comparison

Finally, we can briefly look at how the model that I have developed compares to probabilistic models. According to the picture of how cut might fail developed here, the failure of cut would be a joint property of (a) the fact that premises are foundational beliefs and so not based on other beliefs and (b) permissibility claims that have 'unless'-clauses that mention the organizational structure of your beliefs. This is distinct from the way suggested by probabilistic approaches. Roughly, according to that framework cut fails as a joint property of (a) the fact that premises are certainties and (b) the fact that the conclusions you may draw from certainties differ from the conclusions that you may draw from uncertainties (e.g., you may always perform conjunction introduction with certainties).

In other work, I show that these two different frameworks give us different tools for resolving puzzles about reasoning by showing that they lead to different accounts of the bootstrapping problem.122 And the fact that probabilistic frameworks and the theory developed here yield different predictions adds some new wrinkles to the issue of whether any particular puzzle can be solved by rejecting cut over a puzzling inference. I'll close by mentioning two.

First, it would be good to have a precise account of the difference between the cases that can be modeled in a probabilistic framework and the cases that can be modeled in the framework that I have developed. A first step toward answering this question will be to formalize the informal picture developed here.
Second, it is an interesting question whether we must choose between these frameworks or whether there is room for both. If we must choose, then those who wish to solve a puzzle by denying cut must adopt a particular framework and incur the commitment of denying the other.

120 The other notable exception is the system of inheritance reasoning developed in Horty, Thomason, and Touretzky 1990 (see especially their §5.3). This theory bears some similarities to the one that I develop in my §5, but because these theories model belief very differently than I do, I am unable at this time to make any precise comparisons. I hope in future work to be able to more adequately discuss this system of inheritance reasoning.

121 Cf. Gabbay 1985: 447 and Makinson 1994: 36. Similar comments apply to theories that associate a strength of justification with all-or-nothing beliefs.

122 See chapter four.

Chapter 4
Bootstrapping, Dogmatism, and the Structure of Epistemic Justification

0. Introduction

Consider the following quixotic attempt to determine whether your color vision is reliable. You ask your friend to set up a slide show in which every few seconds a colored slide will appear on the screen. Your friend does not tell you what color the slides are or in what order they will appear. You sit down and the slide show begins. The first slide comes up and you look at it and on the basis of your visual experience, you come to believe:

(1) The slide is red.

You next notice that you are having a visual experience of the slide being red and come to believe:

(2) The slide looks red.

You then reason from these two beliefs to form the belief:

(3) The slide is red and it looks red.

Having gotten this far, you conclude:

(4) My color vision worked this time.

The next slide comes up. And you go through the same reasoning for this slide. And so on for the whole slide show. From this stock of beliefs together you conclude:

Track Record: My color vision worked n many times.

where n can be as large as we like. Based on Track Record you finally conclude:

Reliability: My color vision is reliable.

Despite your efforts, you have not made progress toward figuring out whether your color vision is reliable. If you did not know or were not justified in believing Reliability in the first place, this process of reasoning does not provide you with justification or knowledge now.

The bootstrapping problem in epistemology arises for views that seem to be committed to the implausible result that this form of reasoning does generate justified belief in or knowledge of Reliability. Though this problem has been raised for a number of different kinds of theories,123 in this paper I will concentrate on the problem as it arises for so-called dogmatist theories of epistemic justification. My argument will be that the only way for the dogmatist to avoid the bootstrapping problem is to claim that epistemic justification has what I will call, for reasons that will become clear later, a non-cut structure. This allows the dogmatist to admit that each step in this reasoning considered on its own is acceptable, but when stitched together, these pieces of reasoning are unacceptable (§2).

The fact that this is the only plausible solution to the bootstrapping problem is in one way bad news. This is because, as I will show, it adds another member to a family of recently uncovered results that show dogmatism is incompatible with certain connections between epistemic justification and probabilities (§3).
But instead of stopping here and concluding that dogmatism is false, I try to make the best of it on the dogmatist's behalf. I show that within a certain kind of foundationalist framework, we can make good on the idea that epistemic justification has the non-cut structure needed to solve the bootstrapping problem (§4-5). But let's begin by more clearly explaining what dogmatism is and what the bootstrapping problem is.

1. Dogmatism and Bootstrapping

Though there are different versions of dogmatism, there are three core theses that I will take to be definitive of this family of views. The first thesis is that a visual experience, e.g., of a table being red, provides defeasible justification for believing, e.g., that the table is red. The second thesis is that this justification does not require the agent to have any prior knowledge or justified belief that in the present situation appearances are not deceiving. And the third thesis is that this justification does not require the agent to have any prior knowledge or justified belief that her visual system is reliable. Instead, dogmatism allows that an agent is defeasibly justified simply in virtue of having the visual experience.124

123 See Cohen 2002 for a seminal discussion of bootstrapping for dogmatism as well as a closure problem. Following Vogel 2007 and Weisberg 2010, I focus only on the bootstrapping problem in this paper. I do not believe the arguments of this paper on their own are sufficient to solve the closure problem (see §6 for discussion). Weisberg 2012 is a survey of work on the bootstrapping problem. Other discussions of the bootstrapping or closure problems include Altschul 2012; Becker 2012; Black 2008; Breisen forthcoming; Brueckner 2013; Brueckner & Buford 2009; Douven and Kelp 2013; Cohen 2005, 2010; Fumerton 1995; Hawthorne 2004: 73-77; Kalstrup 2012; Kornblith 2009; Markie 2005; Neta 2005; Scheall 2011; Titelbaum 2010; Weisberg 2010; Vahid 2007; van Cleve 1979, 2003; Vogel 2000, 2008; and Zalabardo 2005.

124 See Huemer 2001, Pollock and Cruz 1999, and Pryor 2000.

It is of course important that the justification is defeasible. That is, the agent could learn some new information that makes her no longer justified in believing that the table is red. For example, the agent might learn that her perception is unreliable or that in the current situation appearances are deceiving. But when the agent simply lacks knowledge that her perception is reliable and lacks knowledge that in the current situation appearances are deceiving, she is justified in believing the table is red on the basis of her visual experience as of it being red. And this is true whether or not as a matter of fact her visual perception is reliable or as a matter of fact in the current situation appearances are deceiving.

The bootstrapping objection to dogmatism is that it entails that the reasoning that I began the paper with provides new justification for believing Reliability. In order to present this objection and my solution perspicuously, it will help to adopt a bit of formalism for representing claims about epistemic justification. Officially, the dogmatist says that having a certain visual experience, e.g., of the table being red, justifies you in believing, e.g., that the table is red. This suggests that a natural way to think of epistemic justification for an agent according to the dogmatist is as a relation that holds between mental states.
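As a rough gloss on the three theses before we introduce that formalism, consider the following sketch (my own illustration; the function and its inputs are hypothetical stand-ins, not the dogmatist's official formulation). The idea it encodes is that justification flows from the experience alone and is undermined only by defeating information, never by the mere absence of a prior belief in reliability:

def justified_by_experience(p, experiences, defeaters):
    # Dogmatism's three theses, roughly: an experience as of p
    # defeasibly justifies believing p (thesis one), with no prior
    # justified belief that appearances are not deceiving (thesis two)
    # or that one's visual system is reliable (thesis three) required.
    has_experience = ("as of: " + p) in experiences
    defeated = p in defeaters  # e.g., learning the lighting is bad
    return has_experience and not defeated

exps = {"as of: the table is red"}
print(justified_by_experience("the table is red", exps, set()))
# True: no reliability belief needed
print(justified_by_experience("the table is red", exps,
                              {"the table is red"}))
# False: the justification is defeasible

This is only a caricature, but it fixes ideas for the formalism that follows.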
While this is the natural and most direct way to think of the dogmatist view, it will ultimately prove convenient to formalize this view in a less direct way that allows us to connect our discussion to simple formal properties of relations that are familiar from philosophical logic. So to begin, we will model epistemic justification for an agent a at a time t in terms of a relation ⊩a,t and in fact, we will from now on leave reference to the agent and time implicit and just write ⊩. This relation will not be used to relate mental states, but instead will be used, like a logical consequence relation, to relate a set of sentences and a sentence. Lower case Greek letters will be sentences (φ, ψ, χ, etc.) and when context makes clear will also function as names for themselves. Upper case Greek letters will be sets of sentences (Γ, Δ, Σ, etc.). So for example, we can write:

{'the table is red', 'the table is round'} ⊩ 'the table is round and the table is red'

Or if we wish to be more succinct, we may for convenience omit the braces and write:

'the table is red', 'the table is round' ⊩ 'the table is round and the table is red'

This is intended to represent the claim that an agent who is epistemically justified in believing that the table is red and is epistemically justified in believing that the table is round is epistemically justified in believing that the table is round and the table is red. So more generally, we read this as saying an agent who is epistemically justified in believing each of the propositions expressed by the sentences on the left is epistemically justified in believing the proposition expressed by the sentence on the right.

This smoothly covers cases in which beliefs justify other beliefs. But the dogmatist treats cases of perceptual justification differently than cases of beliefs justifying other beliefs. As we saw, the dogmatist says that merely having a perceptual experience can provide epistemic justification for beliefs. In order to capture this in our formalism in a way that will still allow us below to make use of some simple formal properties of relations (rather than more complicated properties of relations), we will have to engage in a bit of harmless equivocation. So we will write:

'the table looks red' ⊩ 'the table is red'

and this will be used to represent the claim that an agent who is in the state reported by the sentence on the left is epistemically justified in believing the proposition expressed by the sentence on the right.125

With this formalism in hand, let's return to the bootstrapping reasoning and consider which claims about epistemic justification would have to be true for that reasoning to lead to you having a justified belief in Reliability. To start, notice that since dogmatism does not require agents to be justified in believing Reliability from the start, it entails that there are some agents who are not justified in believing Reliability before the bootstrapping reasoning. Imagine you are such an agent who is engaging in the bootstrapping reasoning.

It begins with you believing that the slide is red based on your visual experience of the slide being red. Since (2) from above is 'the slide looks red' and (1) from above is 'the slide is red', the claim about epistemic justification that would have to be true for this step is the following:

Step 1: (2) ⊩ (1)

The next step is noticing that the slide looks red to you.
For simplicity, we will just assume you are justified in believing that the slide looks red to you from the start and so omit this step. From (1) and (2), you then come to believe that the slide looks red and the slide is red. Because (3) is 'the slide looks red and is red', the claim about epistemic justification that corresponds to this step is the following:

Step 2: (1), (2) ⊩ (3)

125 We will also see below that the positive proposal developed in §4-5 will not require any equivocation.

From (3), you conclude that your color vision worked. Since (4) is the sentence 'my color vision worked', the claim about epistemic justification that corresponds to it is the following:

Step 3: (3) ⊩ (4)

Next, after reasoning analogously to n instances of something like (4) that we will represent as (4)₁, (4)₂, …, (4)ₙ, you are able to conclude that your color vision worked n times, which is the proposition expressed by Track Record:

Step 4: {(4)₁, (4)₂, …, (4)ₙ} ⊩ Track Record

Finally from Track Record you conclude that your color vision is reliable, which is the proposition expressed by Reliability:

Step 5: Track Record ⊩ Reliability

So steps one through five lead to the result that you are epistemically justified in believing Reliability based on the bootstrapping reasoning. The objection to dogmatism is that each of steps two through five is acceptable and yet nonetheless an agent is not justified in believing Reliability in virtue of the bootstrapping reasoning, so Step 1 must be false. But Step 1 is a commitment of dogmatism. Let's look at this.

To see that Step 1 is a commitment of dogmatism we need only note that it is an application of the dogmatist's general idea that having a visual experience as of φ epistemically justifies you in believing φ. The remaining steps are quite plausible. Step 2 and Step 4 involve performing "conjunction introduction". While there are contexts in which this is not acceptable that we will discuss in greater detail later, in the present context it is acceptable (§3.1). Step 3 involves a simple analytic entailment. Finally, Step 5 is an instance of concluding by enumerative induction that your color vision is reliable based on many instances of it working correctly.

In sum, it is unacceptable to think that the bootstrapping reasoning leads us to be newly justified in believing Reliability. So we must reject the conclusion of this reasoning. But this triggers what Jonathan Vogel (2007) calls "rollback": we must say which step in the reasoning goes wrong. But since conjunction introduction, the simple analytic entailment, and induction look overwhelmingly plausible when considered alone, it looks like we roll all the way back to saying the first step in the reasoning is faulty, and that's just a commitment of dogmatism.

2. The Structure of Epistemic Justification

The only way out of this result is to somehow claim that while each step is acceptable on its own, you cannot do the steps back-to-back.126 While our formalism may have been initially unnatural, what makes it worthwhile is that we can now state the idea that you cannot do the steps back-to-back simply as the idea that ⊩ is not a transitive relation, where we say:

⊩ satisfies transitivity just in case if Γ ⊩ φ for all φ ∈ Δ and Δ ⊩ ψ, then Γ ⊩ ψ

If the dogmatist were to deny this claim, then she could accept each step in the reasoning, but deny that they can be performed one after another.127
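To make vivid that chaining the five steps is an appeal to transitivity, here is a small sketch (my own encoding, with n set to 1 for brevity; the reflexive first pair, recording that the agent is justified in (2) from the start, is an assumption added to let the chain go through). The relation containing only Steps 1 through 5 does not itself connect (2) to Reliability; closing it under the transitivity rule just defined is what does:

STEPS = {
    (frozenset({"(2)"}), "(2)"),                 # justified in (2) from the start
    (frozenset({"(2)"}), "(1)"),                 # Step 1
    (frozenset({"(1)", "(2)"}), "(3)"),          # Step 2
    (frozenset({"(3)"}), "(4)"),                 # Step 3
    (frozenset({"(4)"}), "TrackRecord"),         # Step 4, with n = 1
    (frozenset({"TrackRecord"}), "Reliability"), # Step 5
}

def close_under_transitivity(rel):
    # Repeatedly apply the rule above: if G yields every member of D
    # and D yields psi, then add "G yields psi".
    rel = set(rel)
    changed = True
    while changed:
        changed = False
        snapshot = list(rel)
        for g, _ in snapshot:
            for d, psi in snapshot:
                if (g, psi) not in rel and all((g, x) in rel for x in d):
                    rel.add((g, psi))
                    changed = True
    return rel

print((frozenset({"(2)"}), "Reliability") in STEPS)
# False: no single step licenses this
print((frozenset({"(2)"}), "Reliability") in close_under_transitivity(STEPS))
# True: only the transitive closure does

Denying transitivity is thus denying that this closure move is always legitimate.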
As it turns out, transitivity follows from the following two properties that are commonly called monotonicity and cumulative transitivity or, for short, cut:

⊩ satisfies monotonicity just in case if Γ ⊩ φ and Γ ⊆ Σ, then Σ ⊩ φ

⊩ satisfies cut just in case if Γ ⊩ φ for all φ ∈ Δ and Γ ⋃ Δ ⊩ ψ, then Γ ⊩ ψ

So if the dogmatist wishes to solve the bootstrapping problem, she must reject at least one of these claims.128 And if this solution is to be anything but ad hoc, the dogmatist must explain why epistemic justification fails to satisfy either cut or monotonicity in a way that solves the problem.

126 Cf. Vogel 2007: n. 43, who suggests but does not develop such a solution, and Vogel 2000: n. 30, who is unimpressed by a related strategy. But Vogel 2000's pessimism is directed at an application of it that says Step 3 cannot be performed after Step 2 and, as we will see, my application of this strategy is different.

127 §5.2 gives an example that shows why epistemic justification is not transitive in general. But the interesting question is whether it is transitive over the bootstrapping reasoning.

128 To see that cut and monotonicity entail transitivity, assume the antecedent of transitivity. That is, assume (i) Γ ⊩ φ for all φ ∈ Δ and (ii) Δ ⊩ ψ. By applying monotonicity to (ii), we have (iii) Γ ⋃ Δ ⊩ ψ. Finally by applying cut to (i) and (iii), we have the consequent of transitivity, Γ ⊩ ψ. (This derivation is checked mechanically in the sketch at the end of this section.)

The remainder of the paper is dedicated to exploring whether such an explanation can be provided. In particular, I will explore whether the dogmatist can provide an explanation of why epistemic justification fails to satisfy cut in a way that solves the problem. I start with some bad news for the dogmatist. Recently, Jonathan Weisberg has attempted to use broadly Bayesian considerations to argue that epistemic justification fails to satisfy cut in a way that solves the bootstrapping problem. But I argue that in fact dogmatism is incompatible with this Bayesian picture (§3). While I am sympathetic to the reaction that this bad news is a reductio of dogmatism, I spend the rest of the paper trying to make the best of it on the dogmatist's behalf. In particular, I will develop an alternative explanation of why epistemic justification fails to satisfy cut within a certain foundationalist framework and apply it to the bootstrapping problem (§4-5). In doing this, I will be assuming that if the dogmatist can deny that the bootstrapping reasoning goes all the way through to the end of Step 5, this will suffice to solve the problem. I will however close the paper by considering some recent arguments that suggest that there is a problem much earlier in the reasoning (§6).

But before turning to these tasks, it is worth pausing for a moment to consider whether the problem can be solved by claiming that epistemic justification fails to satisfy monotonicity over the bootstrapping reasoning. After all, I said that rejecting transitivity involves rejecting either cut or monotonicity, so it is worth at least considering the idea of rejecting monotonicity before we turn to the idea of rejecting cut. This is especially important to do because it is a well-known and relatively uncontroversial commitment of dogmatism that epistemic justification fails to satisfy monotonicity in general.
To see this, we need only note that monotonicity essentially says that if the states corresponding to the sentences in Γ justify you in believing φ, then no matter what new information you may learn from some external source you will still be justified in believing φ. But dogmatism says that while something looking red epistemically justifies you in believing it is red, if you were to find out from some external source that the lighting conditions are bad for distinguishing red objects from other objects, you would no longer be justified in believing it is red. So dogmatism already entails that epistemic justification fails to satisfy monotonicity.

Despite this fact, it is nonetheless unsurprising that theorists have not thought that rejecting monotonicity could be used to solve the bootstrapping problem. As I noted, monotonicity fails because learning some new information from an external source might lead an agent to be no longer justified in believing something that she was originally justified in believing. Now consider that the bootstrapping reasoning begins with the agent learning that the slide is red and that it looks red. It is a commitment of dogmatism that the agent learns the slide is red. And it is obvious that an agent can learn that the slide looks red from introspection. For the failure of monotonicity to solve the problem at this stage in the reasoning, we would have to implausibly claim that learning that the slide looks red makes you no longer justified in believing that it is red. This way of trying to solve the bootstrapping problem is especially implausible for the dogmatist. She would be saying that noticing the perceptual state that justifies you in believing the slide is red defeats the justification for that belief.129

129 Cf. Titelbaum 2010: 121's discussion of how appeal to defeaters can't help the reliabilist.

Having seen why monotonicity failure cannot help us at the start of the bootstrapping reasoning, we can also see why it will not help in later stages in the argument. It won't help because these steps do not involve learning any new information from an external source. Instead, all of the new claims are inferred from previous claims. And this in turn shows us why rejecting cut looks to be a more initially promising candidate than rejecting monotonicity. Rejecting cut says that the fact that you inferred each of the sentences in Δ from Γ rather than having them both from the start makes a difference to whether you can conclude ψ. In other words, it says that how you inferred a claim may matter for what it can justify. So it is possible that by rejecting cut we may say that the bootstrapping way of inferring is what makes it so we cannot perform the steps back-to-back. Our question, then, is what could explain why cut fails in this way.

3. The Bad News

Jonathan Weisberg (in his 2010) has a simple answer to this question. He points out that so-called Bayesian epistemologists have a number of different probabilistic models of epistemic justification. And an interesting fact is that all of these different models of epistemic justification entail that epistemic justification does not satisfy cut.130 In this section, I will present the basic idea behind why these Bayesian approaches entail that epistemic justification fails to satisfy cut and how this is supposed to solve the bootstrapping problem (§3.1). I will then argue that the Bayesian approach is incompatible with dogmatism (§3.2).
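The entailment recorded in note 128 can also be confirmed mechanically. The following sketch (my own; it brute-forces every relation over a toy two-sentence vocabulary, an assumption made purely to keep the search finite) checks that every relation satisfying both monotonicity and cut also satisfies transitivity:

from itertools import combinations, product

SENTENCES = ("a", "b")

def subsets(xs):
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

SETS = subsets(SENTENCES)
PAIRS = [(g, s) for g in SETS for s in SENTENCES]

def satisfies(rel, prop):
    return all(prop(rel, g, d, psi)
               for g in SETS for d in SETS for psi in SENTENCES)

def monotonicity(rel, g, d, psi):
    # if g derives psi, then any superset g ∪ d derives psi
    return (g, psi) not in rel or (g | d, psi) in rel

def cut(rel, g, d, psi):
    # if g derives every member of d and g ∪ d derives psi,
    # then g derives psi
    premises_ok = all((g, phi) in rel for phi in d)
    return not (premises_ok and (g | d, psi) in rel) or (g, psi) in rel

def transitivity(rel, g, d, psi):
    premises_ok = all((g, phi) in rel for phi in d)
    return not (premises_ok and (d, psi) in rel) or (g, psi) in rel

# Enumerate all 2^8 relations over this vocabulary and check.
for bits in product((0, 1), repeat=len(PAIRS)):
    rel = {p for p, b in zip(PAIRS, bits) if b}
    if satisfies(rel, monotonicity) and satisfies(rel, cut):
        assert satisfies(rel, transitivity)
print("cut + monotonicity entail transitivity on this vocabulary")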
3.1 A Probabilistic Explanation of Failures of Cut

As I mentioned, there are a number of different ways of modeling epistemic justification using probabilities. But in order to introduce the basic idea, I will adopt one specific and simple way of understanding these issues in Bayesian terms. I will develop the idea with the help of three assumptions.

130 More precisely, Weisberg appeals to a defeater he calls No Feedback (2010: 533-534) and this defeater entails that epistemic justification fails to satisfy cut on probabilistic grounds.

First is the claim that agents have not just all-or-nothing beliefs but also have degrees of belief or credences, and the degrees of belief or credences of a fully rational agent are representable by a probability function. Second is the claim that when a (fully rational) agent learns a new claim for certain, her credences evolve in a way that can be modeled as updating by conditionalization. That is, per the first claim, the agent's credences before she learns the new claim are represented by some probability function Pr. And when she learns some new claim φ for certain, her new credences are representable by a probability function Pr* where for any claim ψ, Pr*(ψ) = Pr(ψ|φ). The third and final claim is that a (fully rational) agent is epistemically justified in believing some claim only if her credence in that claim is above some sufficiently high (but below 1) threshold t.

With these assumptions in hand, we can see how epistemic justification can fail to satisfy cut by considering the example of a lottery. The lottery has one hundred tickets and exactly one ticket is a winner. Let Γ be a set of claims describing this lottery. Next let Δ be the set of claims 'ticket one loses', 'ticket two loses', …, 'ticket fifty loses'. And let χ be the conjunction of all the claims in Δ. Finally suppose the relevant threshold is .9. Now imagine an agent who learns each of the claims in Γ for certain. This agent now would be epistemically justified in believing that ticket one loses because we may suppose, given the description of the lottery, the agent prior to learning Γ had credences representable by a probability function Pr such that Pr('ticket one loses'|⋀Γ) = .99, where ⋀Γ is a name for the conjunction of the claims in Γ. And by analogous reasoning the agent would be epistemically justified in believing each member of Δ. Nonetheless the agent would not be justified in believing χ because, conditional on the description of the lottery, all of the first fifty tickets losing is much less probable than .9. And this is true despite the fact that Pr(χ|⋀Γ ⋀ ⋀Δ) = 1, where ⋀Γ ⋀ ⋀Δ is a name for the conjunction of the sentences in Γ with the sentences in Δ.

What's happening here? χ is epistemically justified for an agent who knew ⋀Γ ⋀ ⋀Δ for certain but not justified for an agent who knew ⋀Γ for certain and was thereby justified in believing each member of Δ. This leads to epistemic justification failing to satisfy cut because the kinds of inferences you can perform with certainties differ from the kinds of inferences you can perform with uncertainties. In particular, if you are justified in believing the members of Δ but are uncertain, when you conjoin all of these claims the uncertainty of the conjunction may be as high as the sum of the uncertainties of each conjunct. Since this means the conjunction may be much more uncertain than any given conjunct, it may be that you are not epistemically justified in believing it.
This cannot happen for certainties because certainties are not uncertain at all, so the sum of their uncertainties is still zero and so the conjunction of certainties must be epistemically justified as well. So two claims explain why epistemic justification has a non-cut structure according to Bayesian epistemology. First, some of the things you are justified in believing are certain and others are uncertain. Second, what can be justified by a certainty differs from what can be justified by an uncertainty.

Despite the fact that we have used conjunction introduction to illustrate how epistemic justification fails to satisfy cut, Weisberg does not apply the Bayesian approach to the bootstrapping problem by saying we cannot perform conjunction introduction after the other steps. Instead, he denies that we can do Step 5 after all the other steps. This is because it is actually not plausible to reject the conjunction introduction steps in the context of bootstrapping.

Consider Step 2, which proceeds from a belief that the slide looks red and a belief that the slide is red to the belief that the slide looks red and is red. In general, coming to form beliefs about the external world via perception gives us highly certain (though not totally certain) beliefs. And in general coming to have beliefs about our present visual experiences via introspection gives us highly certain (though not totally certain) beliefs. These kinds of beliefs outside of bootstrapping contexts obviously admit of conjunction introduction. If, for example, I form the belief that the table is red based on visual perception and I form the belief that the chair appears blue based on introspection, I am in normal contexts justified in believing the table is red and the chair appears blue. In other words, in standard contexts, summing the uncertainties associated with our beliefs about the external world that we arrived at through perception and the uncertainties associated with beliefs about our visual experiences arrived at through introspection does not make the uncertainty so high that we are not justified in believing the conjunction. So the general fact that Bayesian epistemology predicts epistemic justification does not satisfy cut is of no help to block this step absent some explanation of why this instance of conjunction introduction is relevantly different than other acceptable instances of it.131

131 Cf. Vogel 2000: n. 24.

We can make a similar point with regard to Step 4, which proceeds from instances of color vision working to the conjunction of those instances. We want to allow in general that agents can perform enumerative induction of the form that we have in Step 5—proceeding from a track record of some claim holding to it holding generally. So for example, we want agents to be able to conclude that swans are generally white from a track record of instances of seeing white swans. But the standard way in which one amasses such a track record is by separately establishing each of the instances. That is, one comes to believe that some swan a is white, then believes that swan b is white, and so on. So if our practices of enumerative induction are to be sound, we must be able to perform conjunction introduction of the sort we have in Step 4. So again, absent some special story, the general fact that epistemic justification fails to satisfy cut is of no help for denying the instances of conjunction introduction.132

132 Cf. Cohen 2010, who writes "we can stipulate that the number of conjuncts is small enough that I remain justified in believing the entire conjunction, but long enough to justify inductive inference. This is possible on the assumption that enumerative induction is possible" (143).
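A quick numeric rendering of §3.1's lottery (my own toy model: a uniform distribution over which of the hundred tickets wins, and a threshold of .9) shows the pattern just described:

from fractions import Fraction

# Uniform lottery: 100 tickets, exactly one winner; threshold t = .9.
N = 100
t = Fraction(9, 10)

def pr_all_lose(k):
    # Probability that k specified tickets all lose, conditional on
    # the description of the lottery.
    return Fraction(N - k, N)

print(pr_all_lose(1) > t)    # True:  each 'ticket i loses' sits at .99
print(pr_all_lose(50) > t)   # False: the fifty-fold conjunction sits at .5
# Conditional on the description of the lottery plus all fifty claims,
# the conjunction is certain. So the conjunction is derivable from the
# enlarged premise set but not from the description alone: cut fails.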
For this reason, it is sensible for Weisberg to deny that we can perform Step 5 after the other steps rather than deny that we can perform the conjunction introduction steps. His idea is that while knowing for certain from the start that the slide looks red and is red justifies you in believing that your color vision is reliable, this does not mean that finding out that the slide looks red justifies you in believing that your color vision is reliable. Thus, Weisberg's theory tells us that epistemic justification fails to satisfy cut based on the independently motivated Bayesian grounds for thinking that epistemic justification fails to satisfy cut. And as can be seen by how Weisberg proposes to use this idea, his theory also tells us exactly which step cannot be performed after the other steps, Step 5.

3.2 The Incompatibility

Unfortunately, as I will now argue, Weisberg's account of why it is that we cannot perform Step 5 after the other steps is incompatible with dogmatism. To present this argument in the most approachable way that I know how, I will begin by simplifying our discussion in three respects.

First, I will ignore the fact that in the bootstrapping reasoning we look at many slides. Instead, I will just assume that we have a single instance of seeing a slide that looks red. And accordingly I will assume for simplicity that just a single instance of my color vision working is strong evidence that my color vision is reliable. Alternatively, we can think of this simplification as a case in which the agent sees the whole series of slides all at once and the agent is so constituted that she can attend to them in the same way that we can attend to a single slide.

Second, since we have not yet found any reason to be suspicious of the simple analytic entailment from 'the slide looks red and the slide is red' to 'my color vision worked' considered on its own or after the other steps, we will ignore this step in the bootstrapping reasoning, Step 3.133 Finally, since the previous subsection showed that conjunction introduction is plausible even in the context of the bootstrapping reasoning, we will also ignore the steps that feature conjunction introduction, Step 2 and Step 4.

This leaves us then with just Step 1 and Step 5. With all of our simplifications, these steps look like this: in Step 1, the agent has a visual experience of a red slide and concludes the slide is red. And as we did earlier, we assume this agent is justified from the start in believing she is having a visual experience of a red slide. Then in Step 5 the agent concludes her color vision is reliable based on her belief that the slide is red and her belief that the slide looks red.

With these simplifications in hand, we can start to think about whether the dogmatist can make use of Weisberg's solution. For the dogmatist to be able to make use of the solution, she must allow that her commitments can be usefully modeled probabilistically. So the dogmatist must allow that we can think of an agent's credences as representable by a probability function and must allow that a belief is justified only if the agent's credence in it is above some threshold t. So let us, in particular, use Pr₀ for
So let us, in particular, use Pr 0 for 133 But see §6 for discussion. 121 the probability function that represents the agent’s credences prior to the slide show. 3.2.1 A Constraint on the Prior Probability of Reliability The first step in showing Weisberg’s solution is incompatible with dogmatism is to isolate a constraint on the prior probability of Reliability (a constraint Pr 0 (Reliability)) that the solution entails. If we use r for ‘the slide is red’, lr for ‘the slide looks red’, the constraint is this: Pr 0 (Reliability) ≥ Pr 0 (r|lr)Pr 0 (Reliablity|r ∧ lr) + Pr 0 (Reliability ∧ ¬r|lr) Though this inequality can be an eyesore to look at initially, with a little work it won’t be hard to demonstrate why it must hold if Weisberg’s solution is to work. To start, notice that to solve the bootstrapping problem we really need to be able to show that we do not get any new justification for believing that our color vision is reliable. That is, intuitively, the bootstrapping reasoning not only doesn’t newly justify us in believing that our color vision is reliable it does not even give us more reason to believe this. In Bayesian terms, this means that our probability 134 that our color vision is reliable should not increase at all upon learning that the slide looks red. So if we let Pr 1 be the probability function that results after the agent learns that the slide looks red, for Weisberg’s solution to work it must be that: Pr 0 (Reliability) ≥ Pr 1 (Reliability) Now by definition of Pr 1 , we know that Pr 1 (Reliability) = Pr 0 (Reliability|lr). And it is not hard to prove that: Pr 0 (Reliability|lr) = Pr 0 (Reliability ∧ r|lr) + Pr 0 (Reliability ∧ ¬r|lr) 135 and that: Pr 0 (Reliability ∧ r|lr) = Pr 0 (r|lr)Pr 0 (Reliability|r ∧ lr) 136 So putting these together we have that: Pr 1 (Reliability) = Pr 0 (r|lr)Pr 0 (Reliability|r ∧ lr) + Pr 0 (Reliability ∧ ¬r|lr) 134 Cf. Cohen 2002: 317, Weisberg 2010: 532, White 2006: 543 135 In general, Pr( ᵯ� ) = Pr( ᵯ� ∧ ᵯ� ) + Pr( ᵯ� ∧ ¬ ᵯ� ), so Pr 1 (Reliability) = Pr 1 (Reliability ∧ r) + Pr 1 (Reliability ∧ ¬r). And so by the definition of Pr 1 , Pr 0 (Reliability|lr) = Pr 0 (Reliability ∧ r|lr) + Pr 0 (Reliability ∧ ¬r|lr). 136 This assumes that Pr( ᵯ� |ᵯ� ) = Pr( ᵯ� ∧ ᵯ� )/Pr( ᵯ� ). So Pr 0 (r|lr)Pr 0 (Reliability|r ∧ lr) = [Pr 0 (r ∧ lr)/Pr 0 (lr)][Pr 0 (Reliability ∧ r ∧ lr)/Pr 0 (r ∧ lr)]. Next [Pr 0 (r ∧ lr)/Pr 0 (lr)][Pr 0 (Reliability ∧ r ∧ lr)/Pr 0 (r ∧ lr)] = [Pr 0 (r ∧ lr)Pr 0 (Reliability ∧ r ∧ lr)]/[Pr 0 (lr)Pr 0 (r ∧ lr)]. Then we have that [Pr 0 (r ∧ lr)Pr 0 (Reliability ∧ r ∧ lr)]/[Pr 0 (lr)Pr 0 (r ∧ lr)] = Pr 0 (Reliability ∧ r ∧ lr)/Pr 0 (lr). Finally by definition, Pr 0 (Reliability ∧ r ∧ lr)/Pr 0 (lr) = Pr 0 (Reliability ∧ r|lr). So Pr 0 (Reliability ∧ r|lr) = Pr 0 (r|lr)Pr 0 (Reliability|r ∧ lr). 122 Finally recall that in order for Weisberg’s solution to work it must be that Pr 0 (Reliability) ≥ Pr 1 (Reliability). So this gets us our constraint: Pr 0 (Reliability) ≥ Pr 0 (r|lr)Pr 0 (Reliability|r ∧ lr) + Pr 0 (Reliability ∧ ¬r) This gives us a lower bound the prior probability of Reliability. The next thing to do to pave the way for my argument is to illustrate why this is a demanding constraint. I do this by showing how high this lower bound must be. 3.2.2 This Constraint Is Demanding Recall that the solution that we are considering accepts each step on its own, but rejects the claim that they can be performed one after the other. So it accepts Step 1 and Step 5 on their own. 
This tells us something about the values of Pr₀(r|lr) and Pr₀(Reliability|r ∧ lr), respectively. To say Step 1 on its own is acceptable is to say that if you learned that the slide looks red, you would be justified in believing that the slide is red. In the Bayesian model that we are working with this means that if you learn the slide looks red, your probability that it is red must be t or greater:

Pr₁(r) = Pr₀(r|lr) ≥ t

Next, to say that Step 5 on its own is acceptable is to say that if you learned from the start that the slide looks red and is red, you would be justified in believing that your color vision is reliable. In the Bayesian model we are working with this means that if you learn that the slide looks red and is red, your probability that your color vision is reliable must be t or greater:

Pr₀(Reliability|r ∧ lr) ≥ t

These facts tell us something about how high our lower bound must be. Our lower bound, recall, is this:

Pr₀(Reliability) ≥ Pr₀(r|lr)Pr₀(Reliability|r ∧ lr) + Pr₀(Reliability ∧ ¬r|lr)

The work we just did allows us to see how high this lower bound must be:

Pr₀(r|lr)Pr₀(Reliability|r ∧ lr) + Pr₀(Reliability ∧ ¬r|lr) ≥ t² + Pr₀(Reliability ∧ ¬r|lr)

This essentially teaches us that your prior probability in Reliability must in fact be quite high. After all, it is hard to deny that the threshold, t, must be at least as high as .9. So the lower bound on Reliability must be at least as high as .81 summed with Pr₀(Reliability ∧ ¬r|lr). Having to have at least .81 credence in Reliability prior to the slide show is quite a demanding constraint.

3.2.3 Possibly Inconsistent

And the fact that your prior probability must be quite high is incompatible with dogmatism. To begin, it may in fact be that the lower bound is straightforwardly inconsistent with dogmatism. For suppose the values of Pr₀(r|lr) and Pr₀(Reliability|r ∧ lr) are not right at the threshold but much higher. This has some plausibility: perception and enumerative induction are not just any old ways of forming justified beliefs but are among the most epistemically important ways of forming beliefs. Arguably, part of their importance is that they give us beliefs that are not just justified but strongly justified. If that is right, then it may be that the product of Pr₀(r|lr) and Pr₀(Reliability|r ∧ lr) summed with Pr₀(Reliability ∧ ¬r|lr) is greater than or equal to t itself. For example, if t were .96 or less and Pr₀(r|lr) and Pr₀(Reliability|r ∧ lr) were .98 or greater, the product of Pr₀(r|lr) and Pr₀(Reliability|r ∧ lr) alone would be greater than the threshold. And so the inequality would entail that your prior probability in Reliability is greater than t.

This is flatly inconsistent with dogmatism. As we said, the dogmatist says you can form justified beliefs about the color of objects even without being justified in believing that your color vision is reliable in the first place. If we are to explain why you are not justified in Bayesian terms, we would need to require that your prior probability in Reliability is less than t.137 But we just saw that if Pr₀(r|lr) and Pr₀(Reliability|r ∧ lr) are sufficiently greater than t, Weisberg's solution would require your prior probability in Reliability to be greater than t.

137 That said, since we have been assuming having a high probability is only a necessary condition for justification, we cannot say that the dogmatist must explain why you are not justified in Bayesian terms. So the argument given here would not apply to a dogmatist who wishes to give a non-probabilistic explanation of why you are not justified. But as I explain below, the argument to follow will apply to any view that involves even a modest connection between probabilities and justified belief.
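Both halves of this argument can be checked numerically. The sketch below (my own; the randomly weighted joint distribution over the three claims is an arbitrary assumption used only to test the algebra) confirms the identity behind the constraint and then computes the bound for the sample values just discussed:

import random
from itertools import product

random.seed(0)
worlds = list(product((True, False), repeat=3))  # (Reliability, r, lr)
weights = [random.random() for _ in worlds]
pr = {w: wt / sum(weights) for w, wt in zip(worlds, weights)}

def p(event):
    # Probability of an event, i.e., a predicate over (R, r, lr) worlds.
    return sum(pr[w] for w in worlds if event(*w))

p_lr = p(lambda R, r, lr: lr)
lhs = p(lambda R, r, lr: R and lr) / p_lr
rhs = (p(lambda R, r, lr: r and lr) / p_lr
       * p(lambda R, r, lr: R and r and lr) / p(lambda R, r, lr: r and lr)
       + p(lambda R, r, lr: R and (not r) and lr) / p_lr)
print(abs(lhs - rhs) < 1e-12)   # True: Pr0(Rel|lr) decomposes as claimed

# The demanding bound: with t = .9 the product term alone is .81, and
# with the stronger values from 3.2.3 it already exceeds a threshold of .96.
print(0.9 * 0.9)     # 0.81
print(0.98 * 0.98)   # 0.9604 > 0.96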
3.2.4 Incompatible with the Spirit of Dogmatism

But even if Pr_0(r | lr) and Pr_0(Reliability | r ∧ lr) do not have values that are sufficiently greater than t to lead to this simple inconsistency, the fact that the inequality requires us to have a high, though perhaps less than belief-level, credence in Reliability is alone incompatible with the spirit of dogmatism. This is because the reasons that tell against needing to be justified in believing that our color vision is reliable in order to form justified beliefs about the color of objects also tell against needing to be justified in having a high credence in our color vision being reliable. Let me explain.

The question of whether we need a prior justified belief that our color vision is reliable in order to form justified beliefs about the color of objects is a pressing one in epistemology because of the problem of the criterion. Here is Stewart Cohen's tight presentation of that problem:

A natural intuition [...] is that [...] sense perception can not deliver knowledge unless we know [....it] is reliable. But surely our knowledge that sense perception is reliable will be based on knowledge we have about the working of the world. And surely that knowledge will be acquired, in part, by sense perception. So it looks as if we are in the impossible situation of needing sensory knowledge prior to acquiring it. [...] Skepticism threatens. (2002: 309)

Though Cohen puts the issue in terms of knowledge, it is clear that there is a parallel problem concerning justification. The problem can be resolved in one of two ways: reject the assumption that justification for believing that our color vision is reliable requires prior justified beliefs about the color of objects, or reject the assumption that we need to be justified in believing our color vision is reliable prior to forming justified beliefs about the color of objects. The dogmatist adopts the second solution.

What supports going in for the second option over the first? The first option would require that we can have a justified belief in our color vision being reliable even without first having any justified beliefs about the color of objects. This would mean that we are first justified in believing that our color vision is reliable.[138] But since it is contingent whether our color vision is reliable, this, it seems, would amount to countenancing a kind of deeply contingent a priori knowledge or justified belief. That's puzzling.[139]

There is, of course, much more to say about deeply contingent a priori justification before rejecting it. But my goal here is not to evaluate the dogmatist's grounds for rejecting it. My goal is only to sketch the intuition that supports its rejection: the intuitive problem is that since it is contingent whether our color vision is reliable, it is not plausible that we are justified in believing that it is reliable on a priori grounds—only experience can rule out genuine possibilities.

[138] Here I put aside the coherentist solution, which the dogmatist must also reject, that says our justification comes “at the same time”.

[139] While Kripke may have introduced convincing cases of contingent a priori knowledge, this example does not fit that model (e.g., it does not involve any reference-fixing descriptions). I, following Hawthorne 2002 (who takes the term from Gareth Evans), flag this difference by calling the proposition deeply contingent.

But insofar as we accept this intuition, we should also be uncomfortable with our epistemic space being strongly biased toward our color vision being reliable, given that it is contingent whether our color vision is reliable. Just as only experience can justify us in ruling out genuine possibilities, only experience can justify us in being strongly biased against certain possibilities. This point is especially plausible in the present setting, where we are assuming that a belief that p is rational only if it is rational to have an extremely strong bias in favor of p. But if only experience can justify such an extremely strong bias, it is hard to see why experience would not be needed to justify a strong bias.

This is what makes Weisberg's solution incompatible with the spirit of dogmatism. Even if it allows that we can form justified beliefs about the color of objects without first having a justified belief that our color vision is reliable, it still requires that our epistemic space be strongly biased toward our color vision being reliable in order to form justified beliefs about the color of objects. Though this bias is not so strong as to amount to belief, it is still puzzling (from the dogmatist's perspective) why we are justified in being biased toward this contingent fact being true on a priori grounds.[140]

[140] Cohen 2005: n. 1 conjectured that the bootstrapping problem would still arise for views that allowed that one had strong a priori evidence that falls short of knowledge or justified full belief. Our discussion of Weisberg suggests that this conjecture is incorrect but, nonetheless, that this should be of no comfort to the dogmatist.

Finally, we can bring this same argument into still sharper focus if we look at it from a slightly different perspective. Consider what would happen if we tried to avoid being strongly biased and only exhibited a small bias toward our color vision being reliable, say a .65 credence in Reliability. Our lower bound entails that in order for this to hold it must be that Pr_0(r | lr) < .81 or Pr_0(Reliability | r ∧ lr) < .81. That is, either enumerative induction or perception leads us to have less than .81 credence in its conclusion. This is problematic not just because it is unintuitive but also because it severely limits the role beliefs formed in this way can play. In particular, it would be difficult to conjoin any such beliefs. For example, if we had just two independent such beliefs, our credence in their conjunction would be .66 or less (the short computation at the end of this subsection spells out the arithmetic). So if we were only slightly biased, this would entail that we cannot usefully combine the information we gain through induction or perception. But perception and induction are epistemically important belief-forming processes. It is unacceptable to relegate them to this impotence with respect to conjunction introduction.

Thus, even if Weisberg's solution is not flatly inconsistent with dogmatism, it is incompatible with the spirit of dogmatism. As dogmatists, we want to allow that we can form justified beliefs about the color of objects without having our epistemic space be strongly biased toward our color vision being reliable. This is not something that Weisberg's Bayesian solution can allow.
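Here is that computation (my illustration; independence of the two beliefs is assumed purely to fix ideas):

```python
# If the prior in Reliability is capped at .65, the product of the two
# conditional probabilities can be at most .65, so at least one of them
# falls below .81 (since .81 * .81 = .6561 > .65). Two independent beliefs
# each held at credence .81 then conjoin to roughly .66.
prior_cap = 0.65
print(0.81 * 0.81 > prior_cap)  # True: both factors cannot reach .81
credence_each = 0.81
print(credence_each ** 2)       # ≈ 0.6561, i.e., about .66 for the conjunction
```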
3.2.5 Bayesianism Is Incompatible with Dogmatism

This is not a criticism of Weisberg. His idea may be correct as a diagnosis of what is wrong with the bootstrapping reasoning. It is just of no help to the dogmatist. Our result is essentially a new member of a family of results that have been uncovered in recent years showing that dogmatism is incompatible with even relatively modest Bayesian assumptions about justification.[141] Though we started this section with some fairly strong assumptions to fix ideas, let's look at what we actually relied on in making the argument.

[141] See Cohen 2005: 424-425, Hawthorne 2004: 73-77, Wedgwood 2013: §2, and White 2006.

The dogmatist says that having a visual experience of the slide being red newly justifies us in believing that the slide is red. We have assumed that this means that upon learning the slide looks red, your credence in the slide being red will be quite high (e.g., .9 or higher).[142] Enumerative induction is a way of being newly justified in believing that your color vision is reliable based on the slide looking red and being red. We have assumed that this means that upon learning the slide looks red and is red, your credence that your color vision is reliable will be quite high (e.g., .9 or higher). Finally, the spirit of dogmatism says that we not only fail to be a priori justified in believing our color vision is reliable but also fail to be justified in being strongly a priori biased toward thinking that our color vision is reliable. We have assumed that this means your credence in your color vision being reliable before the bootstrapping reasoning is moderate and not strongly biased (e.g., not above .8).

[142] One way to resist this is to claim that updating by (standard) conditionalization is not an adequate model of perceptual justification. I believe this is the most plausible way for the dogmatist to resist; see Christensen 1992, Pryor 2013, and Weisberg 2009 and forthcoming for discussion. Nonetheless, I will not consider this issue here because I do not have the space to do so and because these problems also afflict the argument mentioned in n. 141.

Insofar as there is some connection between epistemic justification and credences understood probabilistically, these assumptions about how that connection works out in this particular case are plausible. They are compatible with many different views about the relationship between full belief and partial belief and take no stand on what the correct probabilistic measure of evidential support is. But as we have seen, these assumptions are incompatible with one another. For this reason, I conclude that our argument shows that dogmatism is incompatible with even relatively modest forms of Bayesianism.
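A one-line check makes the incompatibility vivid (the figures are the ones just given):

```python
# The perceptual assumption, the inductive assumption, and the 'spirit of
# dogmatism' cap on the prior clash with the no-boost requirement
# Pr_0(Reliability) >= Pr_0(r|lr) * Pr_0(Reliability|r ∧ lr) + c, where
# c = Pr_0(Reliability ∧ ¬r | lr) >= 0.
p_r_given_lr = 0.9          # credence in r upon learning lr
p_rel_given_r_and_lr = 0.9  # credence in Reliability upon learning r and lr
prior_cap = 0.8             # prior in Reliability not strongly biased
required_prior = p_r_given_lr * p_rel_given_r_and_lr  # 0.81, even with c = 0
print(required_prior > prior_cap)  # True: no prior can satisfy all three
```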
4. The Non-Cut Structure of Justification in Foundationalist Epistemology

One reaction to this result is that it is a reductio of dogmatism: Bayesianism is a well-understood, powerful, formal theory of rational belief, and dogmatism either suffers from the bootstrapping problem or is incompatible with it. So much the worse for dogmatism. While I'm sympathetic to this reaction, I wish to make the best of it on the dogmatist's behalf and explore an alternative perspective. Indeed, I believe that this alternative perspective is one that is in any case a more natural fit with dogmatism. This is because prominent dogmatists such as John Pollock and James Pryor are also critics of Bayesianism.[143]

Now since we are rejecting Bayesianism, and since Bayesianism is the only model that we have looked at so far that can explain why epistemic justification fails to satisfy cut, the first thing we must do is develop an alternative picture of how it could be so much as possible that epistemic justification fails to satisfy cut. While I cannot develop a theory as powerful and mathematically precise as Bayesianism here,[144] what I will do is show how a broadly foundationalist epistemology has the resources to explain why epistemic justification fails to satisfy cut.[145]

[143] See Pollock and Cruz 1999 and Pryor 2013.

[144] In fact, as chapter three points out, all formal theories other than Bayesianism and certain systems of inheritance reasoning entail that epistemic justification satisfies cut. Chapter three considers from an abstract perspective whether this convergence among formal theories reflects a deeper truth about justification, argues that it does not, and then presents ideas very similar to the ones discussed in this section.

[145] Though Black 2008 does not note this, his view also requires that epistemic justification fails to satisfy cut. But Black rejects Step 3 being performed after the other steps. While the general framework I develop could in principle implement this solution, I prefer to accept Step 3 and reject Step 5. This allows me to maintain highly plausible closure principles.

I begin by developing a minimal account of what foundationalism is (§4.1) and then frame the question of whether epistemic justification satisfies cut within this perspective (§4.2). After this, I will show how it could be that epistemic justification fails to satisfy cut (§4.3). The task of §5 will be to explain how this possibility can be applied to the bootstrapping problem.

4.1 Bare Bones Foundationalism

There are two components to the broadly foundationalist framework that I will be employing: one psychological, the other epistemological. The psychological component is that an agent's epistemic states are structured so that at a given time, some of her attitudes (beliefs, perceptual states, etc.) count as foundational and others count as non-foundational. And the non-foundational states are based on the foundational states. Now different theories will have different accounts of what the foundational states are. Dogmatism, for example, is committed to the idea that some perceptual states are foundational states. A Cartesian might say it is indubitable beliefs that are foundational. But the general foundationalist psychological claim is neutral about this.

The epistemological component consists of two theses. The first thesis is that the justification of a non-foundational state is determined by the relations that state stands in to foundational states. The second thesis is that the justification of a foundational state is not determined in this way. On traditional approaches, foundational beliefs will somehow be immediately justified, and non-foundational beliefs that can be inferred from them by a process of good reasoning will be justified in virtue of this relation. According to dogmatism, a perceptual state is not the kind of thing that can be justified and is a foundational state. And the non-foundational belief that the world is the way it is represented as being by that perceptual state is justified in virtue of your being in that perceptual state.

So foundationalism consists of a claim about the structure of an agent's psychology together with a claim about the epistemological significance of this structure. Specific theories will tell us which states are foundational states and what epistemological role these states play. And similarly, they will say which states are non-foundational states and how the epistemic status of these states is dependent on the epistemic status of foundational states. This, then, is the minimal foundationalist framework that I will be working with.

4.2 Interpreting Structural Properties of Justification

I now want to explain how to think about the different claims about the structure of justification that we have been discussing. In order to do this, it helps to start by looking at the structure of what we might think of as standard cases of learning. Consider, then, the following:

ψ   χ                  ψ   χ   ω
 ⇑        Add φ         ⇑
 Γ                     Γ, φ

Begin by focusing on what is on the left. As before, Γ is a set of sentences, and we are meant to understand this diagram as saying that the set of mental states corresponding to the sentences in Γ are the foundational states that the agent has. The ⇑ represents epistemic justification. So it says that the mental states corresponding to Γ justify the non-foundational mental states corresponding to ψ and χ respectively. As we move to the right, we have ‘Add φ’, and this tells us that the agent comes to be in the state corresponding to φ and that this state is a foundational mental state for that agent. We then see that this new collection of foundational states justifies not just the non-foundational states corresponding to ψ and χ but also a new non-foundational state corresponding to ω. This is the structure of standard cases of learning in the sense that typically when we learn some new information we acquire additional new justified non-foundational states.

With this structure in mind, let's consider the two properties that I mentioned earlier, monotonicity and cut. As I said earlier, it is relatively uncontroversial that epistemic justification does not satisfy monotonicity. In foundationalist terms, this means that we can have a situation like this:

ψ   χ                  χ
 ⇑        Add φ        ⇑
 Γ                     Γ, φ

That is, adding a new foundational state can make it so you lose justification for being in the non-foundational state corresponding to ψ.

The easiest way to understand what cut is about is to compare it to standard cases of learning. The natural way to interpret the diagram of standard cases of learning is so that the mental state corresponding to φ is different from any of the mental states corresponding to the sentences in Γ as well as the mental states corresponding to ψ and χ. But cut concerns cases where the added mental state is the same as one of these, say ψ:

ψ   χ                  ψ   χ   ω
 ⇑        Add ψ         ⇑?
 Γ                     Γ, ψ

If epistemic justification satisfies cut, then the situation depicted by this diagram cannot arise. If it fails to satisfy cut, then it can arise. So far, then, we have an interpretation of each of the structural claims about epistemic justification that we are interested in. With this in hand, I want to turn to showing how it might be that epistemic justification fails to satisfy cut.
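Since these diagrams will do real work below, it may help to have a toy model of them. The following sketch is my own illustration, not anything the foundationalist is committed to: a function maps a set of foundational states to the non-foundational states they justify, and cut fails when adding an already-justified state to the foundations yields a new justified state.

```python
# Toy model of the diagrams above (illustrative only). Foundations are sets
# of labels; justify() returns the non-foundational states they justify.

def justify(foundations):
    justified = set()
    if "Γ" in foundations:
        justified |= {"ψ", "χ"}
    if "ψ" in foundations:   # ψ justifies ω only from within the foundations
        justified |= {"ω"}
    return justified

print(justify({"Γ"}))        # {'ψ', 'χ'}: Γ justifies ψ and χ
print(justify({"Γ", "ψ"}))   # {'ψ', 'χ', 'ω'}: adding ψ, though it was
                             # already justified by Γ, yields a new state,
                             # so cut fails
```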
4.3 How Justification Might Not Satisfy Cut

The first thing to notice is that the foundationalist framework is compatible with epistemic justification satisfying cut as well as with epistemic justification failing to satisfy cut. Whether epistemic justification satisfies cut turns on the following question: can adding the state corresponding to ψ to the foundations lead you to have at least one new justified non-foundational state if you were already justified in having the state corresponding to ψ among your non-foundational states? As far as I know, no major foundationalist has posed or answered this question. Nonetheless, it is a substantive question that is not answered by the bare bones foundationalist framework.

And in fact, it is somewhat unsurprising that foundationalism is compatible with epistemic justification failing to satisfy cut. After all, according to foundationalism the foundational states are epistemically special in that they get their epistemic status in a different way than non-foundational states. Epistemic justification failing to satisfy cut suggests that they are special not just in how they get their epistemic status but also in what they are capable of justifying.

In light of this, it is natural to wonder what we would have to add to the bare bones foundationalist theory in order for it to entail that epistemic justification fails to satisfy cut. I will answer this question in an abstract way in this subsection. And then in §5 I apply this answer to the bootstrapping problem.

To begin, let's consider an informal way of thinking about defeasible justification. One way to think about the idea that having a visual experience as of something being red defeasibly justifies you in believing that it is red is as saying that the visual experience justifies you in believing that it is red unless you believe that the lighting conditions are bad, believe that your visual system is malfunctioning, etc. In general, foundationalism can be informally thought of as making claims like: φ justifies you in believing that ψ unless such-and-such conditions hold. And according to foundationalism, what goes in these ‘unless’-clauses are some claims about what other epistemic states you are in.

With this informal picture in mind, we can notice that the foundationalist psychology does not just claim that agents have epistemic states. It also claims that the epistemic states of an agent are organized in a certain way. And this means that the foundationalist could mention these facts about the organization of our beliefs in the ‘unless’-clauses as well.

To explain why this might be a promising idea, I want to set dogmatism aside for a moment and work through an example in terms that will be most congenial to more traditional forms of foundationalism. In particular, one form of foundationalism says that the foundational states are beliefs that are the direct result of perception (rather than the perceptual states themselves). According to this theory, we have foundational beliefs like the belief that o appears to have a rough surface. And this belief counts as defeasibly justified because it is the direct result of a perceptual experience. This belief in turn defeasibly justifies the non-foundational belief that o has a rough surface.

Now suppose also that you are justified in believing that your sense of touch is malfunctioning. Are you justified in believing that o has a rough surface or not? One natural (albeit not inevitable) answer is that it depends on which perceptual experience your belief that o appears to have a rough surface was the direct result of. If that belief was the direct result of visual perception, then it is plausible to think that you are still justified in believing that o has a rough surface. But if it was the result of touching, you are not justified in believing this. And one plausible (albeit not inevitable) way of getting this result is to allow ‘unless’-clauses that make reference to what your belief was the direct result of. In particular, we might say that the belief that o appears to have a rough surface and the belief that your sense of touch is malfunctioning justify you in believing that o has a rough surface unless your belief that o appears to have a rough surface is based on touching. This case illustrates that how your belief came about might affect what it can justify. There are, of course, alternative ways of treating this case. But my only goal here is to sketch one plausible way of treating this case that involves an ‘unless’-clause that mentions not just other epistemic states but how those states came about as well.

What's more, I believe that there is some plausibility to the idea that there are true claims about defeasible justification that have ‘unless’-clauses of this sort. This is because these kinds of claims occupy a strategic position between two alternative kinds of rules that we could use to handle these cases. In these cases, I have been suggesting (and here I am simplifying a bit) that an epistemic state that φ justifies an epistemic state that ψ unless the epistemic state that φ came about in a certain way. The two main alternatives to this suggestion would be to simply deny that the epistemic state that φ justifies the epistemic state that ψ, or to claim that the epistemic state that φ justifies the epistemic state that ψ no matter how it came about. The proposal I am sketching strikes a strategic balance between these by not being as conservative as the first option or as liberal as the second.

The first alternative, which simply rejects that the epistemic state that φ justifies the epistemic state that ψ, can seem too conservative because at least when the epistemic state that φ comes about in the right way, the epistemic state that ψ seems reasonable and likely to be true. Of course, one way of pressing this alternative is to say that the epistemic state that φ together with the belief that this state came about in the right way justify the epistemic state that ψ. But given that the foundationalist psychology includes not just epistemic states but also an account of what place within a larger structure these states occupy and why they occupy this place, it is hard to see why we must insist that one have explicit beliefs about how a belief comes about for such facts to play the kind of role that I am suggesting they play. After all, this way of pushing the first alternative concedes that how an epistemic state came about matters and simply tries to capture this fact by insisting that one must have beliefs about how one's epistemic states came about. This, it seems to me, concedes the basic insight that how your epistemic states came about matters and captures it in a less natural way. Thus, the first alternative seems less plausible than the idea that I am developing.

The second alternative, which places no conditions at all on where the epistemic state corresponding to φ comes from, seems far too liberal. It allows that the state that ψ is justified in cases where it seems unreasonable or unlikely to be true. While such an aggressive stance might be reasonable if there were no other way for us to become justified in being in the state that ψ, the idea that I have sketched allows us to be justified in being in the state that ψ without adopting this overly aggressive stance.

So the example that I sketched earlier together with the theoretical consideration that I just offered argue in favor of thinking that claims about defeasible justification might have ‘unless’-clauses that mention how an epistemic state comes about. Now the example I gave does not involve a failure of cut, but it does illustrate the significance of how a belief came about by considering the significance of two different ways the belief that o's surface appears to be rough can come to be a foundational state. So we might also consider different ways in which a state can come about. For example, we can consider the difference between a state coming about in the way a foundational state comes about and a state coming about in the way a non-foundational state comes about.

Here is how that might go. Suppose (1) the epistemic states corresponding to Γ defeasibly justify the epistemic state that φ and (2) the epistemic states corresponding to Γ together with the epistemic state that φ defeasibly justify the epistemic state that ψ. And suppose that if we unpack (2)'s ‘unless’-clause, it would say: the epistemic states corresponding to Γ together with the epistemic state that φ justify the epistemic state that ψ unless the epistemic state that φ is solely based on the epistemic states corresponding to Γ. Let's look at what these claims predict. If we add the supposition that the epistemic state that χ defeasibly justifies the epistemic state that φ, they predict that we can have a situation like this:

φ                      φ   ψ
⇑         Add χ         ⇑
Γ                      Γ, χ

Though this case is not what I called a standard case of learning, it shares its structure. In standard cases of learning the new information plays a more or less direct role in justifying the new epistemic state. But in this case, it is the states corresponding to Γ and the state corresponding to φ that justify the new state. What the state that χ does is make it so the state corresponding to φ is no longer solely based on the states corresponding to Γ, and this makes it so the ‘unless’-clause of (2) is no longer true.

So in that example φ's justificatory role changes by its becoming justified in part by the state that χ. But φ's justificatory role could also change if φ came to be part of the foundations. That is, we could have a situation like this:

φ                      φ   ψ
⇑         Add φ         ⇑
Γ                      Γ, φ

And now notice that this situation is one that demonstrates how epistemic justification might fail to satisfy cut. Thus, once we admit that claims about epistemic justification might have ‘unless’-clauses that mention how beliefs come about, it is easy to come up with abstract claims about epistemic justification that would lead to epistemic justification failing to satisfy cut. And since I have argued that such rules are plausible, this shows that within a purely qualitative foundationalist epistemology we have the resources to explain why epistemic justification does not generally satisfy cut.[146]

[146] Cf. Weisberg 2010: 536's discussion of evidentialism.
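To see that rules (1) and (2) really do generate the situations just diagrammed, here is a small sketch; it is my own formalization under the stated assumptions, with the ‘unless’-clause of (2) checking what φ is based on:

```python
# Sketch of rules (1) and (2): 'basis' records what each non-foundational
# state is based on; rule (2) is defeated when φ is based solely on Γ.

def justified_states(foundations, basis):
    states = set(foundations)
    if "Γ" in foundations:
        states.add("φ")              # rule (1): Γ justifies φ
    if "φ" in states and "Γ" in foundations:
        if basis.get("φ") != {"Γ"}:  # rule (2)'s 'unless'-clause
            states.add("ψ")
    return states - foundations      # the justified non-foundational states

# φ inferred from Γ alone: (2)'s 'unless'-clause fires, so ψ is not justified.
print(justified_states({"Γ"}, {"φ": {"Γ"}}))  # {'φ'}
# φ added to the foundations instead: ψ is newly justified, a failure of cut.
print(justified_states({"Γ", "φ"}, {}))       # {'ψ'}
```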
5. The Solution

Of course, the fact that epistemic justification fails to satisfy cut in general does not by itself show that Step 5 cannot be performed after the other steps.[147] But it is not hard to see how it can be applied in a way that secures this result. Consider the claim that Track Record defeasibly justifies an agent in believing Reliability. There are many well-known defeaters for this inductive justification. For example, there are defeaters concerning the existence of competing evidence and concerning certain defective ways of gathering data. So the ‘unless’-clause associated with the relevant claim about justification will mention these defeaters. I wish to add another defeater to this ‘unless’-clause: I posit that the ‘unless’-clause says ‘unless Track Record is solely based on the perceptual state of the slide being red and introspective awareness of this state’. This solves the bootstrapping problem.[148]

[147] Indeed, the general idea of cut failing can be applied in many different ways to the bootstrapping problem, including, for example, claiming that we cannot perform Step 3 after the other steps (cf. Black 2008, Weisberg 2010: §4.3). I do not myself favor this view because it would require giving up on some plausible closure principles; see §6 for further discussion.

[148] Though there are a number of (to my mind, fussy) counterexamples to this formulation of the ‘unless’-clause, I will work with it in what follows because it is the simplest way to capture the spirit of my proposal. A formulation designed to avoid these counterexamples would read something like ‘unless the belief that the slide is red that is involved in supporting Track Record is partially essentially based on the slide looking red’.

5.1 How the Solution Works

Let's look at this in detail. As we know from our work in §4, this kind of ‘unless’-clause predicts two things. One thing it predicts is that in cases where an agent's belief that Track Record is based on something other than just the perceptual state of the slide being red and introspective awareness of this state, the belief that Reliability may be justified. And this is the right result. For example, if we learn the slide is red from testimony from the person who set up the slide show, this would allow us to gain some justification for Reliability. Of course, we use perception in learning by testimony. But my solution does not say that this is problematic. The account says justification for Reliability is defeated if the belief that the slide is red is solely based on the perceptual experience of it being red. But the account does not say justification for Reliability is defeated if the belief that the slide is red is based solely on other perceptual experiences, such as the experience of the person who set up the slide show saying that the slide is red.[149] In this way, the solution allows that we can learn Reliability in the appropriate circumstances.[150]

[149] This shows how the present account would treat the case of Eliza in Weisberg 2010: 530 (cf. White 2006: 530) and thereby avoid a problem that plagues Vogel 2007's views.

[150] Cf. Weisberg 2010: 538-539. It may also be worth noting that this response assumes (to my mind, harmlessly) that the dogmatist will distinguish the reasons that we have from testimony to believe that the slide is red from the reasons we have from the experience of the slide being red to believe the slide is red. See Black 2008, especially n. 9, and Vogel 2008: n. 17 and 527-528 for closely related issues concerning distinguishing these reasons.

Another thing it predicts is that in the bootstrapping reasoning, we are not justified in believing Reliability. This is because in this case, our belief that Track Record originates just in the perceptual experience of the slide being red and introspective awareness of this state. And the ‘unless’-clause says that this is a case in which Track Record does not justify Reliability.
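A minimal sketch of how these two predictions fall out of the posited ‘unless’-clause (my own illustration; the string labels are hypothetical):

```python
# The posited defeater: Track Record justifies Reliability unless Track
# Record is solely based on the perceptual state of the slide looking red
# plus introspective awareness of that state.

BOOTSTRAP_BASIS = {"perceptual state: slide looks red",
                   "introspection of that state"}

def track_record_justifies_reliability(basis):
    return basis != BOOTSTRAP_BASIS

# Bootstrapping case: the defeater fires, so Reliability is not justified.
print(track_record_justifies_reliability(BOOTSTRAP_BASIS))  # False
# Testimony case: the basis differs, so Reliability can be justified.
print(track_record_justifies_reliability(
    {"testimony: the operator says the slide is red"}))     # True
```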
To appreciate what is going on here, let's contrast this solution with merely claiming that the bootstrapping reasoning is bad reasoning.[151] Recall that the bootstrapping problem relies on a number of claims. It requires the claim that enumerative induction, two instances of conjunction introduction, and a simple analytic entailment are justified inferences considered alone. And it also requires the claim that over the bootstrapping reasoning, epistemic justification satisfies transitivity, so that each of these inferences can be performed one after another. If all of these claims are true and Reliability is not justified, then dogmatism is false. To say only that the bootstrapping reasoning is bad would raise the question of which of the assumptions that the reasoning is based on is false. And this, in turn, would raise the specter of dogmatism being false because the reasoning must “roll back” all the way to the dogmatist's claim.

My solution by contrast says exactly where the reasoning fails and does so in a way that inoculates dogmatism from the bootstrapping objection. My solution says that enumerative induction, two instances of conjunction introduction, and the simple analytic entailment are all justified inferences considered alone but cannot be performed one after another because epistemic justification fails to satisfy transitivity over the bootstrapping reasoning. More precisely, it fails to satisfy cut over the bootstrapping reasoning. And §4 developed a picture of why epistemic justification fails to satisfy cut in general: it showed how, within a broadly foundationalist epistemology, ‘unless’-clauses that reference how epistemic states come about could lead to epistemic justification failing to satisfy cut. What I have claimed in this section is that the enumerative induction involved in Step 5 has an ‘unless’-clause of this type that says ‘unless Track Record is solely based on a visual experience of the slide looking red and introspective awareness of this experience’.

This claim about defeasible justification occupies exactly the strategic position that I identified earlier. The alternatives to it would be to reject Step 5 even on its own or to accept it even in the context of bootstrapping reasoning. And we have already looked in detail at why both of these options are undesirable. Of course, I have not given an independent argument for this defeater. What I have done instead is argue that the only solution that the dogmatist can give to the bootstrapping problem is one that says epistemic justification fails to satisfy cut. And after showing that this is problematic on probabilistic grounds, I tried to make the best of this strategy on the dogmatist's behalf. And the result is that this defeater is the dogmatist's way out of the problem. Insofar as we are attracted to dogmatism and find the bootstrapping reasoning unacceptable, my hypothesized defeater is the best explanation of how both of these views could be true.[152]

[151] Cf. Weisberg 2010: 537-538's response to a similar worry and Vogel 2000: 615-619's objection to the claim that bootstrapping is unreliable.

[152] This, in a way, underestimates the resources available to independently motivate this defeater. First, it may be that this defeater is a specific instance of a more well-known defeater, such as a biased-sample defeater. Second, the defeater that I have posited is a distinctive way of capturing the intuitions that support a number of principles that have been thought to be independently plausible in the literature on bootstrapping, such as the No Self-Support Principle (Bergmann 2000: 168, Cohen 2002: 319, Fumerton 1995: 180), the Independence Principle (Black 2008: 606-609, Cohen 2005: 428, Markie 2005: 414), and the No Rule Circularity Principle (Vogel 2007: 531). I do not myself wish to endorse either of these claims as motivations for the defeater. Instead, my hunch is that this defeater can be motivated by appeal to a certain general feature of reasoning that can be thought of as a kind of defeasible self-trust. I hope to return to this question in future work.

This, then, is my solution to the bootstrapping problem on behalf of the dogmatist.

5.2 How Epistemic Justification Spends

Let me close my presentation of my solution by considering an objection to it that can be gleaned from the work of Peter Markie (2005: 415, cf. Cohen 2005: 428). Markie criticizes another attempt to solve the bootstrapping problem for being incompatible with the following intuitive idea about epistemic justification:

If it is reasonable for us to believe p and the truth of p increases the likelihood that another proposition, q, is true, then p is a reason (perhaps defeasible) for us to believe q. Just as money, however gained, still spends the same, so too reasonable beliefs, however gained, still epistemically support the same beliefs. (2005: 415)

Now we have already discussed in some detail how probabilistic considerations are in fact deeply at odds with the dogmatist's theory and decided to put them aside and try to make the best of it. So we will not concern ourselves with the aspects of Markie's remarks that rely on probabilities. However, Markie's more general idea that “reasonable beliefs, however gained, still epistemically support the same beliefs” is plausible in its own right and is incompatible with my solution. This is because my solution says that the belief that the slide is red, when gained by visual perception of the slide being red, does not support the same beliefs as the belief that the slide is red gained by testimony from the person who set up the slide show.

Nonetheless, I wish now to argue that epistemic justification is not in fact like money in the way Markie suggests. To see this, recall that we are pursuing the idea that epistemic justification does not satisfy transitivity because it fails to satisfy cut. But we also noted that epistemic justification may fail to satisfy transitivity because it fails to satisfy monotonicity. To illustrate, consider a case where a certain body of evidence Γ justifies concluding ψ by induction, but Γ together with, for example, the claim ‘Γ is a biased sample’ does not. This leads to a failure of epistemic justification to be transitive as well, for it is plausible that ‘Γ and Γ is a biased sample’ justifies Γ. So we have it that ‘Γ and Γ is a biased sample’ justifies Γ and Γ justifies ψ, but ‘Γ and Γ is a biased sample’ does not justify ψ. This shows why Markie's principle is wrong. Γ, however gained, does not support ψ. Γ gained by deduction from ‘Γ and Γ is a biased sample’, for example, does not support ψ.

Now Markie may respond to this by pointing out that he is talking about defeasible justification and so Γ, however gained, does defeasibly support ψ. It is just that the support is defeated in this case. In particular, it is undercut so that there is no actual support for ψ from Γ in this case. I am happy with this response. But it entails that my solution is in fact compatible with Markie's principle. For I may say that Track Record defeasibly supports Reliability however gained. It is just that in the bootstrapping reasoning the support is defeated. It is undercut so that there is no actual support for Reliability from Track Record. Thus, everyone must reject or qualify Markie's principle. And so qualified, the principle is compatible with my idea.

Of course, what defeats the justification in the example that I just gave is another belief, whereas in the bootstrapping reasoning the organization of your beliefs defeats the justification. But we have developed a general theory on which we can make sense of exactly how and why the organization of your beliefs matters. I conclude therefore that my idea solves the bootstrapping problem for dogmatism. It solves the problem in the only way the dogmatist can, by entailing that epistemic justification fails to satisfy cut. And it explains exactly how it fails to satisfy cut in a way that solves the problem.

6. The Problem Comes Earlier

The bootstrapping problem as we have presented it arises from the fact that the dogmatist appears committed to saying that the bootstrapping reasoning provides justification for Reliability. We have solved that problem. But some philosophers believe that in fact there is an earlier problem in the bootstrapping reasoning. They believe that we should not be able to get as far as Step 3, that we should not be able to become newly justified in believing that our color vision worked this time.[153] In this section, I consider whether getting justification for believing our color vision worked this time is problematic and, if so, what that might say about my solution.

[153] See Cohen 2010: 145, 149; Kallestrup 2012: §4; Titelbaum 2010: 120-121, 128-129; and White 2006: §7. Though it is actually not clear exactly which steps each of these philosophers finds problematic, I focus on Step 3 to fix ideas.

6.1 The Earlier Problem

Let me begin by conceding that there is something implausible about the idea that we are epistemically justified in believing our color vision worked this time based on the bootstrapping reasoning. And my solution actually does allow us to give an explanation of what is unintuitive about it. After all, it is natural to think that if we are justified in believing that our color vision worked this time in a given instance, this is some (perhaps weak) evidence that our color vision is reliable. And it seems clear that the bootstrapping reasoning is no evidence at all that our color vision is reliable. Thus, perhaps the source of intuitive discomfort with the claim that our color vision worked this time is that we believe this claim provides evidence for the claim that our color vision is reliable.[154]

[154] This idea (together with other considerations) may be at play in Vogel 2000: 616-617 and White 2006: 530 and §7.

If this is right, my solution is well placed to explain this intuition away. After all, my solution allows that if we had the claim that our color vision just worked from the start, this would be evidence that our color vision is reliable. But crucially, the solution denies that, having inferred it in the way that we do in bootstrapping reasoning, the claim provides any evidence that our color vision is reliable.

While I do think this explanation has some plausibility, I will not rely on it in what follows. This is because there are other ways of bringing out what is implausible about the bootstrapping reasoning justifying us in believing that our color vision worked that are not explained by my solution.[155] Let's look at two such ways.[156]

[155] White 2006: 538 (cf. Cohen 2010: 144) suggests that the best explanation of our color vision working is our color vision being reliable, so getting this far is a problem. But my approach would simply posit the same defeater for inference to the best explanation that we posited for enumerative induction and avoid this problem.

[156] In addition to the two arguments that I present below, there are also arguments that show a tension between certain plausible principles and the idea that we can become newly justified in believing that our color vision worked (Titelbaum 2010: 128-129, White 2006: 548-549). I do not have the space here to discuss these principles. But as I explain below, the two arguments discussed in the main text show a tension between dogmatism and single premise closure, and this is also true of these arguments.

First, there are cases where instead of continuing with the bootstrapping reasoning after concluding that our color vision just worked, we are to imagine that an oracle says something like: ‘if your color vision works n times, then your color vision is reliable’. We may imagine that the oracle's testimony justifies you in being fully confident in this claim. So after performing n instances of bootstrapping to the conclusion that our color vision worked, you may safely conjoin this with the oracle's claim to get: ‘my color vision worked n times and if my color vision worked n times, then my color vision is reliable’. Reliability then would follow from a simple deduction from this claim. Nonetheless, it seems implausible to think that this reasoning can newly justify you in believing your color vision is reliable.[157] This case cannot be explained by my solution because it does not involve a step of enumerative induction. And my solution is directed at that step.[158]

[157] See Cohen 2010: 145, 149 and Titelbaum 2010: 120-121.

[158] Though my official response to this objection is the somewhat concessive one given below, the dogmatist may be able to respond to this objection less concessively. I can only sketch the response here. Begin by considering the case where the oracle tells you that if your color vision works once, then it is reliable. This amounts to telling you that your color vision will either never work or it is reliable. If you were fully justified in being confident in this disjunction and spreading your confidence equally over each disjunct, the dogmatist should say this defeats your perceptual justification for believing that the slide is red. This is just an undercutting defeater, like learning that in the present circumstances it is as likely as not that the slide is red if it looks red. This covers the case where n is one. But not all values of n will look like cases of undercutting as this one does. However, we may be able to generalize the strategy to cover these cases. The idea would be that all of them adjust how much justification you get for believing that the slide is red based on it looking red. The amount of justification you get for the slide being red constrains whether you are justified in believing this and whether this belief can be conjoined with other ones. If we hypothesize that the level of justification will never be high enough to admit of conjunction introduction of n conjuncts, for n high enough to allow enumerative induction, unless the oracle's claim is sufficient on its own to justify us in believing our color vision is reliable, we avoid the objection (cf. Cohen 2010: 146-148's response to bootstrapping to super reliability). Though the details need to be spelled out, the basic idea is that the oracle's testimony in effect gives you information about your reliability, and this constrains the bootstrapping reasoning in a way that allows us to avoid these problems.

The second consideration is probabilistic. Though we won't go through the details here, the basic idea is this. We can deduce ‘it is not the case that the slide looks red and is not red’ from ‘your color vision worked’. So presumably we are newly justified in believing this as well. But it can be shown with the help of some minimal assumptions that your credence in ‘it is not the case that the slide looks red and is not red’ in fact decreases upon seeing that the slide looks red.[159]

[159] See Cohen 2005: 424-425, Hawthorne 2004: 73-77, and White 2006: §1-6.
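Here, for concreteness, is a sketch of the standard derivation behind that claim (following the sources cited in the footnote); the only assumptions are that Pr_0(lr) < 1 and Pr_0(¬r | lr) > 0:

```latex
% After learning lr:
\Pr_1(\neg(lr \wedge \neg r)) = \Pr_0(\neg(lr \wedge \neg r) \mid lr)
  = 1 - \Pr_0(\neg r \mid lr) = \Pr_0(r \mid lr)
% Before learning lr:
\Pr_0(\neg(lr \wedge \neg r)) = 1 - \Pr_0(lr \wedge \neg r)
  = 1 - \Pr_0(lr)\,\Pr_0(\neg r \mid lr)
  > 1 - \Pr_0(\neg r \mid lr) = \Pr_0(r \mid lr)
```

So the prior credence strictly exceeds the posterior credence: learning that the slide looks red lowers your credence in the very claim at issue.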
So it is hard to see how you could become newly justified in believing this claim.

My solution does not tell us anything that would allow us to resolve this tension between the dogmatist claim about justification and probabilities. So it does not explain this implausible feature of the claim that we can become newly justified in believing that our color vision worked through bootstrapping reasoning. Indeed, this second consideration is one of the members of the recent family of results, alluded to earlier, that show dogmatism is incompatible with Bayesianism.

These two considerations make vivid that there is a tension between dogmatism and the principle of single premise closure. Though there are different formulations of this principle, we can roughly put it as follows: if S is justified in believing p and competently deduces q from p, then S is justified in believing q. We appealed to this principle when discussing the first consideration when we assumed that we were justified in believing Reliability because it is deducible from ‘my color vision worked n times and if my color vision worked n times, then my color vision is reliable’. We appealed to this principle when discussing the second consideration when we assumed that we were justified in believing that it is not the case that the slide looks red but is not red because it is deducible from ‘my color vision worked’. Since the closure principle is very plausible and it leads to the undesirable results that I just described if dogmatism is true, this is a problem for the dogmatist. And as we have seen, it is a problem that my solution does not solve.

6.2 Two Problems

That said, I believe that this is a different problem for dogmatists.[160] To illustrate why, it is helpful to know that those who have raised these objections think that they show that dogmatism is wrong because we are in fact a priori justified in believing that if the slide looks red, it is red.

[160] It was originally presented as a different problem in Cohen 2003. More recent discussions such as Vogel 2007 and Weisberg 2010 also treat it as a different problem.

If this were so, then we could rule out from the start the possibility that the slide looks red while it is not red and thereby avoid the implausible results mentioned by the second argument. And if this were so, then it would be plausible that learning that the slide looks red allows us to learn our color vision worked, and combining this with the oracle's testimony allows us to conclude that our color vision is reliable.

Now we know that dogmatists are not comfortable with claims about a priori justification like this. They do not believe that we are a priori justified in believing that if the slide looks red, it is red, and they do not believe that we are a priori justified in believing that our color vision is reliable. To the dogmatist, this is a suspicious kind of deeply contingent a priori justification. But notice that the arguments that we have just considered only force us to admit that we are a priori justified in believing that if the slide looks red, it is red. They do not force us to admit that we are a priori justified in believing that our color vision is reliable. Perhaps, in order to solve these problems, we really do need to be a priori justified in believing that certain specific skeptical scenarios (such as it appearing you have hands when you don't and it appearing the slide is red when it isn't) do not obtain. But this fact alone is not a reason to admit that we are also a priori justified in believing our color vision is reliable, any more than it would be a reason to admit, e.g., that contingent scientific knowledge is a priori. Even if we must come to grips with particular pieces of deeply contingent a priori justification, this fact alone is no reason to admit other deeply contingent claims are a priori justified unless we can construct similar arguments to show that they are.

Now one of the results of this paper is just such a probabilistic argument for the claim that we are at least a priori justified in being strongly biased toward our color vision being reliable. And as I have said, this is bad news for the dogmatist. But we have tried to make the best of it for the dogmatist by putting probabilistic considerations aside. We saw that if we do this, we can solve the bootstrapping problem. What we have not yet seen is why on non-probabilistic grounds we should admit that we are a priori justified in believing that our color vision is reliable. For this reason, I admit that there is a problem that arises once we get to the claim that our color vision works, and my solution does not solve this problem. But the problem that we have focused on in this paper is a different problem. And it is one that is solved by the arguments of this paper.

6.3 Backstories

Now this response is bound to be unsatisfying to some, so I want to consider one objection to it that will help to bring out its plausibility. The idea of the objection is this: the case of a priori contingent justified scientific beliefs is not on all fours with the case of the a priori contingent justified belief in your color vision being reliable. This is because whatever “backstory” explains why we have a priori contingent justified belief in if the slide looks red, then it is red will also explain why we have a priori contingent justified belief in Reliability.[161]

Of course, the success of this objection turns on what exactly the backstory is, and I know of no systematic way to evaluate all possible backstories that there could be. So to respond to this objection, I will focus on a specific backstory that is due to Stewart Cohen.[162]

[161] I only discuss the possibility of generalizing the backstory about why the claim that if the slide looks red, then it is red is a priori justified to explain why the claim that your color vision is reliable is a priori justified. I leave to one side direct attempts to explain why the claim that your color vision is reliable is a priori justified, such as the one in Wright 2004.

[162] See Cohen 2010: 150-156 (cf. Hawthorne 2002: §1.2). Wedgwood 2013: §3 (especially n. 14) offers a similar backstory but does not claim that we are a priori justified in believing that our color vision is reliable (see n. 163 below for further discussion).

Before I do this, let me be clear that what follows is not a criticism of Cohen. Cohen uses this backstory as part of explaining how to give a non-dogmatist diagnosis of what's wrong with the bootstrapping reasoning. But our purpose is to consider whether this backstory forces the dogmatist who does admit a priori justified belief in if it looks red, then it's red to also admit a priori justified belief in Reliability. As we will see below, we will part ways with a step in the argument because the dogmatist is not forced to accept it. This does nothing to show that the non-dogmatist shouldn't accept this step.

Cohen's backstory begins by observing that we can engage in a certain practice of suppositional reasoning. We suppose φ and reason to some conclusion ψ on the basis of φ, and assuming that this reasoning is good, it seems that we are epistemically justified in believing if φ, then ψ. So, for example, consider the modus ponens rule, which says that from φ and if φ, then ψ, you may conclude ψ. Now I can engage in a bit of suppositional reasoning. I start by supposing φ and if φ, then ψ and then reason by modus ponens to the conclusion ψ. So I am now epistemically justified in believing if φ and if φ, then ψ, then ψ, which is a conditional corresponding to the modus ponens rule. And modus ponens is, of course, just one example. The same kind of suppositional reasoning can be applied to any basic rule of reasoning to show that we are justified in believing a conditional corresponding to this rule. Suppose that it is a basic rule of reasoning that we can conclude that the slide is red from the slide appearing to be red. This would entail that we are a priori justified in believing that if the slide looks red, then it is red.

Now notice that just saying this much does not get us a priori justification in Reliability. The claim that our color vision is reliable is not a conditional corresponding to any basic rule of justification. Instead, it is a conclusion inferred by enumerative induction. And the rule of enumerative induction that we need for scientific inference involves a track record proposition, not the proposition that if the slide looks red, then it is red. Cohen's argument so far does not say that the track record proposition is a priori justified, so this does not explain why Reliability would be a priori justified.

But we can get an argument going for a priori justified belief in Reliability if we engage in the following longer chain of reasoning: First, suppose that the slide looks red and infer by the perceptual inference rule that it is red. Next, conclude by conjunction introduction from these two claims that the slide looks red and is red. Then perform the simple analytic inference that allows you to conclude that your color vision worked. Repeat this process for many different colors and afterward infer a track record proposition. Finally, conclude Reliability by enumerative induction. In other words, Cohen's idea is to suppose each of the perceptual experiences and then show Reliability can be concluded using the steps that we identified at the beginning of the paper. So it appears that we are a priori justified in believing that whatever perceptual experience of the color of the slide we have, our color vision is reliable. And that more or less amounts to simply saying that we are a priori justified in believing our color vision is reliable.

But the dogmatist is not in fact forced to accept this argument. This is because it assumes more than just that suppositional reasoning allows us to be a priori justified in believing a conditional corresponding to each basic rule. It assumes that epistemic justification satisfies transitivity. And we have already seen that the dogmatist must deny this claim and seen how they should do so. We said that the inductive inference to Reliability fails because our basis for Track Record is just that the slide looks red.[163]

[163] Though Wedgwood 2013's ideas are similar to Cohen's, they differ in important ways. In particular, Wedgwood only claims that we are a priori justified in believing that if the slide looks red and my experiences contain no defeaters, then it is red, and similarly for all of the conditionals corresponding to rules. This difference leaves us unable to perform the suppositional reasoning that I sketched on Cohen's behalf. This is because we cannot coherently add the supposition that no defeaters to enumerative induction are present after inferring the slide is red from the slide looks red, because this itself is a defeater of that inductive inference according to the present account. To better understand this point as well as the point that I make about Cohen's solution, it may be useful to compare how Cohen and Wedgwood would treat the failure of transitivity discussed in §5.2.

Thus, Cohen's backstory would force the dogmatist to say that we are a priori justified in believing our color vision is reliable only if epistemic justification is transitive in the way that I have argued the dogmatist must deny. But since dogmatists are not eager to add to the stock of deeply contingent a priori justified beliefs that they must accept, this is corroborating evidence that the view I have been developing is the best solution for the dogmatist to adopt and that the bootstrapping problem is a separate problem over and above the problem related to single premise closure.
7. Conclusion

Overall, then, the results of this paper are mixed for the dogmatist. We have seen that the only way for the dogmatist to solve the bootstrapping problem is for her to claim that epistemic justification does not satisfy cut. And this is bad news for the dogmatist, because probabilistic considerations suggest that this solution is not compatible with dogmatism. But we then tried to make the best of it on the dogmatist's behalf by developing an alternative non-probabilistic framework in which to implement our solution. The framework we used was a broadly foundationalist epistemology. We saw that in this framework we can explain why epistemic justification fails to satisfy cut in a way that solves the bootstrapping problem. Though this does not solve all of the problems for dogmatism, it, I submit, is the best the dogmatist can do by way of solving the bootstrapping problem. Whether the dogmatist's best is good enough depends on whether sufficiently powerful alternatives to probabilistic frameworks for understanding justification can be developed, whether the tension between dogmatism and single premise closure can be resolved, and whether deeply contingent a priori justification is really problematic. For these reasons, it is a question best left for another day. 164

164 Thanks to participants in the USC dissertation seminar, Andrew Bacon, Kenny Easwaran, John Hawthorne, Ben Lennertz, Jacob Ross, Mark Schroeder, Scott Soames, Gabriel Uzquiano-Cruz, Ryan Walsh, and Ralph Wedgwood for comments on this paper or ideas related to it.

References

Alchourrón, Carlos, Peter Gärdenfors, and David Makinson. 1985. “On the Logic of Theory Change” in Journal of Symbolic Logic 50: 510-530.
Altschul, Jon. 2012. “Entitlement, Justification, and the Bootstrapping Problem” in Acta Analytica 27: 345-366.
Arieli, Ofer and Arnon Avron. 2000. “General Patterns for Nonmonotonic Reasoning” in Logic Journal of the IGPL 8: 119-148.
Arló-Costa, Horacio and Rohit Parikh. 2005. “Conditional Probability and Defeasible Inference” in Journal of Philosophical Logic 34: 97-119.
Batens, Diderik. 2007. “A Universal Logic Approach to Adaptive Logics” in Logica Universalis 1: 221-242.
Becker, Kelly. 2012. “Basic Knowledge and Easy Understanding” in Acta Analytica 27: 145-161.
Bedke, Matthew. 2009. “The Iffiest Oughts” in Ethics 119: 672-698.
—. 2011. “Passing the Deontic Buck” in Shafer-Landau 2011: 128-153.
Bergmann, Michael. 2000. “Externalism and Skepticism” in The Philosophical Review 109: 159-194.
—. 2004. “Epistemic Circularity” in Philosophy and Phenomenological Research 69: 709-727.
Beirlaen, Mathieu, Christian Straßer, and Joke Meheus. 2013. “An Inconsistency-Adaptive Deontic Logic for Normative Conflicts” in Journal of Philosophical Logic 41: 285-315.
Belnap, Nuel. 1962. “Tonk, Plonk, and Plink” in Analysis 22: 130-134.
Bezzazi, Hasan, David Makinson, and Ramón Pino Pérez. 1997. “Beyond Rational Monotony” in Journal of Logic and Computation 7: 605-631.
Black, Tim. 2008. “Solving the Problem of Easy Knowledge” in The Philosophical Quarterly 58: 597-617.
BonJour, Laurence. 1978. “Can Empirical Knowledge Have a Foundation?” in American Philosophical Quarterly 15: 1-13.
Brady, Michael ed. 2011. New Waves in Metaethics. New York: Palgrave.
Brewka, Gerhard. 1991. “Cumulative Default Logic” in Artificial Intelligence 50: 183-205.
Briesen, Jochen. forthcoming. “Reliabilism, Bootstrapping, and Epistemic Circularity” in Synthese.
Brink, David. 1994. “Moral Conflict and Its Structure” in Philosophical Review 103: 215-247.
Broome, John. 2004. “Reasons” in Wallace, Pettit, Scheffler, and Smith 2004: 28-55.
Brown, Mark. 1999. “Agents with Changing and Conflicting Commitments” in McNamara and Prakken 1999: 109-126.
Brueckner, Anthony. 2013. “Bootstrapping, Evidentialist Internalism, and Rule Circularity” in Philosophical Studies 164: 591-597.
— and Christopher Buford. 2009. “Bootstrapping and Knowledge of Reliability” in Philosophical Studies 145: 407-412.
Buchak, Lara. forthcoming. “Belief, Credence, and Norms” in Philosophical Studies.
Burgess, John. 2009. Philosophical Logic. Princeton: Princeton University Press.
Cariani, Fabrizio. 2013. “Ought and Resolution Semantics” in Nous 47: 534-558.
Castañeda, Hector-Neri. 1981. “The Paradoxes of Deontic Logic” in Hilpinen 1981: 37-86.
Chellas, Brian. 1974. “Conditional Obligation” in Stenlund 1974: 23-33.
—. 1980. Modal Logic. Cambridge: Cambridge University Press.
Chisholm, Roderick. 1989. Theory of Knowledge 3rd ed. Englewood Cliffs: Prentice-Hall.
Christensen, David. 1992. “Confirmation Holism and Bayesian Epistemology” in Philosophy of Science 59: 504-557.
Cohen, Stewart. 2002. “Basic Knowledge and the Problem of Easy Knowledge” in Philosophy and Phenomenological Research 65: 309-329.
—. 2005. “Why Basic Knowledge Is Easy Knowledge” in Philosophy and Phenomenological Research 70: 417-430.
—. 2010. “Bootstrapping, Defeasible Reasoning, and A Priori Justification” in Philosophical Perspectives 24: 141-159.
—. ms. “Theorizing about the Epistemic”. <http://www.stew-cohen.com/storage/Theorizing20about20the20Epistemic.pdf>.
Cook, Roy. 2005. “What’s Wrong with Tonk?” in Journal of Philosophical Logic 34: 217-226.
Dancy, Jonathan. 2004a. Ethics without Principles. Oxford: Oxford University Press.
—. 2004b. “Enticing Reasons” in Wallace, Pettit, Scheffler, and Smith 2004: 91-118.
Donagan, Alan. 1984. “Consistency in Rationalist Moral Systems” in Journal of Philosophy 81: 291-309.
Douven, Igor and Christoph Kelp. 2013. “Proper Bootstrapping” in Synthese 190: 171-185.
Douven, Igor and Timothy Williamson. 2006. “Generalizing the Lottery Paradox” in British Journal for the Philosophy of Science 57: 755-779.
Dummett, Michael. 1973. “The Justification of Deduction”, reprinted in his 1978. Truth and Other Enigmas. Cambridge: Harvard University Press. 290-318.
Easwaran, Kenny. 2011. “Bayesianism I” in Philosophy Compass 6: 312-320.
—. ms. “Dr. Truthlove or How I Learned to Stop Worrying and Love Bayesian Probabilities”. <http://dl.dropboxusercontent.com/u/10561191/Unpublished/Truthlove.pdf>.
Foley, Richard. 1993. Working without the Net. Oxford: Oxford University Press.
Foot, Philippa. 1972. “Morality as a System of Hypothetical Imperatives” in Philosophical Review 81: 305-316.
—. 1983. “Moral Realism and Moral Dilemma” reprinted in Gowans 1987a: 250-270.
Fumerton, Richard. 1995. Metaepistemology and Skepticism. Lanham: Rowman & Littlefield.
—. 2009. “Foundationalist Theories of Epistemic Justification” in Stanford Encyclopedia of Philosophy Winter 2009 Edition. <http://plato.stanford.edu/archives/win2009/entries/justep-foundational/>.
Gabbay, Dov. 1985. “Theoretical Foundations for Non-monotonic Reasoning in Expert Systems” in K. R. Apt ed. Proceedings NATO Advanced Study Institute on Logics and Models of Concurrent Systems. Berlin: Springer. 439-457.
—, C.J. Hogger, and J.A. Robinson eds. 1994. Handbook of Logic in Artificial Intelligence and Logic Programming, volume 3. Oxford: Oxford University Press.
—, John Horty, Xavier Parent, Ron van der Meyden, and Leendert van der Torre eds. 2013. Handbook of Deontic Logic and Normative Systems. London: College Publications.
Gert, Joshua. 2007. “Normative Strength and the Balance of Reasons” in Philosophical Review 116: 533-562.
Goble, Lou. 2005. “A Logic for Deontic Dilemmas” in Journal of Applied Logic 3: 461-483.
—. 2009. “Normative Conflicts and the Logic of ‘Ought’” in Nous 43: 450-489.
—. 2013. “Prima Facie Norms, Normative Conflicts, and Dilemmas” in Gabbay, Horty, Parent, van der Meyden, and van der Torre 2013: 241-352.
—. ms. “A Basic Deontic Logic for Normative Conflicts”.
Goodman, Nelson. 1955. Fact, Fiction, and Forecast. Cambridge: Harvard University Press.
Gowans, Christopher ed. 1987a. Moral Dilemmas. Oxford: Oxford University Press.
—. 1987b. “Introduction” in Gowans 1987a: 3-33.
Greenspan, Patricia. 2007. “Practical Reasons and Moral ‘Ought’” in Shafer-Landau 2007: 172-194.
Hansen, Jörg. 2004. “Problems and Results for Logics about Imperatives” in Journal of Applied Logic 2: 39-61.
—. 2005. “Conflicting Imperatives and Dyadic Deontic Logic” in Journal of Applied Logic 3: 484-511.
—. 2008. “Prioritized Conditional Imperatives” in Autonomous Agents and Multi-Agent Systems 17: 11-35.
Harman, Gilbert. 1973. Thought. Princeton: Princeton University Press.
—. 1986. Change in View. Cambridge: The MIT Press.
Hawthorne, James and David Makinson. 2007. “The Quantitative/Qualitative Watershed for Rules of Uncertain Inference” in Studia Logica 86: 247-297.
Hawthorne, John. 2002. “Deeply Contingent A Priori Knowledge” in Philosophy and Phenomenological Research 65: 247-269.
—. 2004. Knowledge and Lotteries. Oxford: Oxford University Press.
Hilpinen, Risto ed. 1971. Deontic Logic: Introductory and Systematic Readings. Dordrecht: D. Reidel.
— ed. 1981. New Studies in Deontic Logic: Norms, Actions, and the Foundations of Ethics. Dordrecht: D. Reidel.
— and Dagfinn Føllesdal. 1971. “Deontic Logic: An Introduction” in Hilpinen 1971: 1-35.
Horty, John. 1993. “Deontic Logic as Founded on Nonmonotonic Logic” in Annals of Mathematics and Artificial Intelligence 9: 69-91.
—. 1994. “Moral Dilemmas and Nonmonotonic Logic” in Journal of Philosophical Logic 23: 35-65.
—. 1997. “Nonmonotonic Foundations for Deontic Logic” in Nute 1997: 17-44.
—. 2003. “Reasoning with Moral Conflicts” in Nous 37: 557-605.
—. 2012. Reasons as Defaults. Oxford: Oxford University Press.
—, Richmond Thomason, and David Touretzky. 1990. “A Skeptical Theory of Inheritance in Nonmonotonic Semantic Networks” in Artificial Intelligence 42: 311-348.
Huemer, Michael. 2001. Skepticism and the Veil of Perception. Lanham: Rowman & Littlefield.
Hume, David. 1978. A Treatise of Human Nature. P.H. Nidditch ed. 2nd edition. Oxford: Oxford University Press. Originally published in 1739.
Kallestrup, Jesper. 2012. “Bootstrap and Rollback” in Synthese 189: 395-413.
Kagan, Shelly. 1998. “Rethinking Intrinsic Value” in Journal of Ethics 2: 277-297.
Kolodny, Niko. ms. “Instrumental Reasons”. <http://sophos.berkeley.edu/kolodny/ITShortVersion5.pdf>.
Kornblith, Hilary. 2009. “A Reliabilist Solution to the Problem of Promiscuous Bootstrapping” in Analysis 69: 263-267.
Korsgaard, Christine. 1983. “Two Distinctions in Goodness” in Philosophical Review 92: 169-195.
Kraus, Sarit, Daniel Lehmann, and Menachem Magidor. 1990. “Non-monotonic Reasoning, Preferential Models and Cumulative Logics” in Artificial Intelligence 44: 167-207.
Kripke, Saul. 2011. “On Two Paradoxes of Knowledge” in Philosophical Troubles. Oxford: Oxford University Press. 27-51.
Lehrer, Keith. 1974. Knowledge. New York: Oxford University Press.
Leitgeb, Hannes. 2013. “Reducing Belief Simpliciter to Degrees of Belief” in Annals of Pure and Applied Logic 164: 1338-1389.
Lemmon, E.J. 1962. “Moral Dilemmas” in The Philosophical Review 71: 139-158.
Lin, Hanti. 2013. “Foundations of Everyday Practical Reasoning” in Journal of Philosophical Logic 42: 831-862.
MacFarlane, John. 2009. “Logical Constants” in Stanford Encyclopedia of Philosophy Fall 2009 Edition. <http://plato.stanford.edu/archives/fall2009/entries/logical-constants/>.
Makinson, David. 1988. “General Theory of Cumulative Inference” in M. Reinfrank et al. eds. Lecture Notes in Artificial Intelligence 346: 1-18.
—. 1994. “General Patterns in Non-monotonic Reasoning” in Gabbay, Hogger, and Robinson 1994: 35-110.
—. 2005. Bridges from Classical to Non-monotonic Logic. London: College Publications.
— and Peter Gärdenfors. 1991. “Relations between the Logic of Theory Change and Nonmonotonic Logics” in André Fuhrmann and Michael Morreau eds. The Logic of Theory Change. Berlin: Springer. 185-205.
— and Leendert van der Torre. 2000. “Input/Output Logics” in Journal of Philosophical Logic 29: 383-408.
Marcus, Ruth Barcan. 1980. “Moral Dilemmas and Consistency” in Journal of Philosophy 77: 121-136.
Markie, Peter. 2005. “Easy Knowledge” in Philosophy and Phenomenological Research 70: 406-416.
Millsap, Ryan. ms. “The Balancing Theory of Ought and Reasons Transmission”.
McConnell, Terrance. 2010. “Moral Dilemmas” in Stanford Encyclopedia of Philosophy Summer 2010 Edition. <http://plato.stanford.edu/archives/sum2010/entries/moral-dilemmas/>.
McNamara, Paul and Henry Prakken eds. 1999. Norms, Logics, and Information Systems. Amsterdam: IOS Press.
McNamara, Paul. 2004. “Agential Obligation as Non-Agential Personal Obligation Plus Agency” in Journal of Applied Logic 2: 117-152.
Nagel, Thomas. 1970. The Possibility of Altruism. Princeton: Princeton University Press.
—. 1979. Mortal Questions. Cambridge: Cambridge University Press.
—. 1979. “The Fragmentation of Value” reprinted in Gowans 1987a: 174-187. Originally published in Nagel 1979.
Nair, Shyam. ms. “A Formal Framework for Deontic Logic and Deontic Reasoning”.
Neta, Ram. 2005. “A Contextualist Solution to the Problem of Easy Knowledge” in Grazer Philosophische Studien 69: 183-205.
Nute, Donald ed. 1997. Defeasible Deontic Logic. Dordrecht: Kluwer.
— and Xiaochang Yu. 1997. “Introduction” in Nute 1997: 1-18.
Parfit, Derek. 2011. On What Matters. Oxford: Oxford University Press.
Pollock, John and Joseph Cruz. 1999. Contemporary Theories of Knowledge 2nd ed. Lanham: Rowman and Littlefield.
Portmore, Douglas. 2013. “Perform Your Best Option” in Journal of Philosophy 110: 436-459.
Pietroski, Paul. 1993. “Prima Facie Obligations, Ceteris Paribus Laws in Moral Theory” in Ethics 103: 489-515.
Prawitz, Dag. 1985. “Remarks on Some Approaches to the Concept of Logical Consequence” in Synthese 62: 153-171.
Prior, A.N. 1960. “The Runabout Inference Ticket” in Analysis 21: 38-39.
Pryor, James. 2000. “The Skeptic and the Dogmatist” in Nous 34: 517-549.
—. 2004. “What’s Wrong with Moore’s Argument?” in Philosophical Issues 14: 349-378.
—. 2013. “Problems for Credulism” in Christopher Tucker ed. Seemings and Justification. Oxford: Oxford University Press. 89-133.
Raz, Joseph. 2002. Practical Reason and Norms. Oxford: Oxford University Press. Originally published in 1975 by Hutchinson & Co.
—. 2005. “The Myth of Instrumental Rationality” in Journal of Ethics and Social Philosophy 1.
Reiter, Raymond. 1980. “A Logic for Default Reasoning” in Artificial Intelligence 13: 81-132.
Restall, Greg. 2005. “Multiple Conclusions” in Logic, Methodology and the Philosophy of Science 12: 189-205.
Ripley, David. forthcoming. “Paradoxes and Failures of Cut” in Australasian Journal of Philosophy.
Ross, Alf. 1941. “Imperatives and Logic” in Theoria 7: 53-71.
Ross, Jacob and Mark Schroeder. 2014. “Belief, Credence, and Pragmatic Encroachment” in Philosophy and Phenomenological Research 88: 259-288.
Ross, W.D. 1930. The Right and the Good. Oxford: Oxford University Press.
Rott, Hans. 2001. Change, Choice, and Inference. Oxford: Clarendon Press.
Scanlon, T.M. 1998. What We Owe to Each Other. Cambridge: Harvard University Press.
Scheall, Scott. 2011. “Later Wittgenstein and the Problem of Easy Knowledge” in Philosophical Investigations 34: 268-286.
Schlechta, Karl. 2007. “Nonmonotonic Logics: A Preferential Approach” in Dov Gabbay and John Woods eds. Handbook of the History of Logic vol. 8. Dordrecht: Elsevier. 451-516.
Schotch, Peter and Raymond Jennings. 1981. “Non-Kripkean Deontic Logic” in Hilpinen 1981: 149-162.
Schroeder, Mark. 2009. “Means-End Coherence, Stringency, and Subjective Reasons” in Philosophical Studies 143: 223-248.
Sellars, Wilfrid. 1997. Empiricism and the Philosophy of Mind. Cambridge: Harvard University Press.
Shafer-Landau, Russ ed. 2007. Oxford Studies in Metaethics 2. Oxford: Oxford University Press.
— ed. 2011. Oxford Studies in Metaethics 6. Oxford: Oxford University Press.
Shoham, Yoav. 1987. “A Semantic Approach to Nonmonotonic Logics” reprinted in Matthew Ginsberg ed. Readings in Non-Monotonic Logic. San Francisco: Morgan Kaufmann. 227-250.
Sinnott-Armstrong, Walter. 1988. Moral Dilemmas. Oxford: Basil Blackwell.
Stalnaker, Robert. 1994. “What Is a Non-monotonic Consequence Relation?” in Fundamenta Informaticae 21: 7-21.
Stenlund, Sören ed. 1974. Logical Theory and Semantic Analysis: Essays Dedicated to Stig Kanger on His Fiftieth Birthday. Boston: D. Reidel.
Straßer, Christian, Joke Meheus, and Matthieu Beirlaen. 2012. “Tolerating Deontic Conflict by Adaptively Restricting Inheritance” in Logique et Analyse 219: 477-506.
Sturgeon, Scott. 2008. “Reason and the Grain of Belief” in Nous 42: 139-165.
Titelbaum, Michael. 2010. “Tell Me You Love Me” in Philosophical Studies 149: 119-134.
Vahid, Hamid. 2007. “Varieties of Easy Knowledge” in Acta Analytica 22: 223-237.
van Cleve, James. 1979. “Foundationalism, Epistemic Principles, and the Cartesian Circle” in The Philosophical Review 88: 55-91.
—. 2003. “Is Knowing Easy – Or Impossible?” in Stephen Luper ed. The Skeptics. Burlington: Ashgate. 45-59.
van der Torre, Leendert and Yao Hua Tan. 2000. “Two Phase Deontic Logic” in Logique et Analyse 171-172: 411-456.
van Fraassen, Bas. 1973. “Values and the Heart’s Command” in Journal of Philosophy 70: 5-19.
—. 1995. “Fine-Grained Opinion, Probability, and the Logic of Full Belief” in Journal of Philosophical Logic 24: 349-377.
Väyrynen, Pekka. 2011. “A Wrong Turn to Reasons?” in Brady 2011: 185-207.
Veltman, Frank. 1996. “Defaults in Update Semantics” in Journal of Philosophical Logic 25: 221-261.
Vogel, Jonathan. 2000. “Reliabilism Leveled” in The Journal of Philosophy 97: 602-623.
—. 2008. “Epistemic Bootstrapping” in The Journal of Philosophy 105: 518-539.
Wallace, R. Jay, Philip Pettit, Samuel Scheffler, and Michael Smith eds. 2004. Reason and Value. Oxford: Oxford University Press.
Weatherson, Brian. 2005. “Can We Do Without Pragmatic Encroachment?” in Philosophical Perspectives 19: 417-443.
Wedgwood, Ralph. 2012. “Outright Belief” in Dialectica 66: 309-329.
—. 2013. “A Priori Bootstrapping” in Albert Casullo and Joshua Thurow eds. The A Priori in Philosophy. Oxford: Oxford University Press. 226-246.
Weisberg, Jonathan. 2009. “Commutativity or Holism” in British Journal for the Philosophy of Science 60: 793-812.
—. 2010. “Bootstrapping in General” in Philosophy and Phenomenological Research 81: 525-548.
—. 2012. “The Bootstrapping Problem” in Philosophy Compass 7: 597-610.
—. forthcoming. “Updating, Undermining, and Independence” in British Journal for the Philosophy of Science.
White, Roger. 2006. “Problems for Dogmatism” in Philosophical Studies 131: 525-557.
Williams, Bernard. 1965. “Ethical Consistency” reprinted in Geoffrey Sayre-McCord ed. Essays on Moral Realism. 1988. Ithaca: Cornell University Press.
Williams, J. Robert. 2008. “Supervaluationism and Logical Revisionism” in Journal of Philosophy 105: 192-212.
—. 2011. “Degree Supervaluational Logic” in The Review of Symbolic Logic 4: 130-149.
Williamson, Timothy. 2001. Knowledge and Its Limits. Oxford: Oxford University Press.
Wright, Crispin. 2004. “Warrant for Nothing (and Foundations for Free)?” in Aristotelian Society Supplementary Volume 78: 167-212.
Zalabardo, José. 2005. “Externalism, Skepticism, and the Problem of Easy Knowledge” in The Philosophical Review 114: 33-61.