BELIEF AS CREDAL PLAN

by

Justin M. Dallmann

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
in partial fulfillment of the requirements of the degree of
DOCTOR OF PHILOSOPHY
(PHILOSOPHY)

December 2015

Copyright 2015 Justin M. Dallmann

The mind, if it will proceed rationally, ought to examine all the grounds of probability, and see how they make more or less, for or against any probable proposition, before it assents to or dissents from it, and upon a due balancing the whole, reject or receive it, with a more or less firm assent, proportionally to the preponderance of the greater grounds of probability on one side or the other.

John Locke, An Essay Concerning Human Understanding. Book IV, Chapter 15, 5.

[…] an opinion or belief is nothing but an idea, that is different from a fiction, not in the nature or the order of its parts, but in the manner of its being conceived. […] An idea assented to feels different from a fictitious idea, that the fancy alone presents to us: And this different feeling I endeavour to explain by calling it a superior force, or vivacity, or solidity, or firmness, or steadiness. […] its true and proper name is belief, which is a term that every one sufficiently understands in common life. […] It gives them [the ideas of the judgment] more force and influence; makes them appear of greater importance; infixes them in the mind; and renders them the governing principles of all our actions. […] the mind has a firmer hold, or more steady conception of what it takes to be matter of fact, than of fictions.

David Hume, A Treatise of Human Nature. Section VII, Part III, Book I and Appendix.

Acknowledgments

This dissertation project got its start (a little late) after taking Mark Schroeder's 2012 Spring semester epistemology course. Mark's encouraging comments on the final paper, 'A Normatively Adequate Credal Reductivism', would eventually lead to my first serious academic publication. Julia Staffel and Ben Lennertz have to be thanked for important comments on that work. It additionally benefited greatly from John Hawthorne's comments presented at the Arché-CSMN graduate conference. Most centrally, as with all of my dissertation work, it was also greatly improved by the comments of the ineffable Kenny Easwaran, my advisor, who read many drafts.

This dissertation is largely the result of the painful process of revising that paper. Though they now share no sentences, the chapter titled 'Plans, persistent possibilities, and probabilistic belief states' is a descendant of that paper. Important additional thanks for getting me to my current view are due to Ralph Wedgwood, who very patiently read and gave detailed comments on several versions of the chapter. Jacob Ross played an important role in destroying several inadequate versions of the paper, and the mistaken views that were endorsed therein, over the course of multiple phone calls. Jonathan Weisberg should also be thanked for reading, and commenting on, an early version of this paper. Conversations with John Hawthorne, after his arrival at USC, also greatly helped to clarify my thoughts on the subject and shape the current view.

The chapter, 'When obstinacy is a (better) cognitive policy', and paper of the same name, have benefited greatly from discussions with Matthew Lutz, Jacob Ross, Ralph Wedgwood, and especially Kenny Easwaran—who again read several drafts and talked through many of the arguments that appear there.
Every one of the grad students at USC has given me important feedback on my work at some stage or another, in the graduate student reading group, the grad lounge, or elsewhere. Conversations with the students at USC have been at least as valuable as conversations with the university’s distinguished faculty . Though they may not have realized it, conversations with Greg Ackerman, Matthew Babb, Rima Basu, Michael Hatcher, Abelard Podgorski, Sam Shpall, and Jonathan Wright have all resulted in significant revisions to my work or thought. USC has a great graduate student community . For this, I would like to especially thank Lewis Pow- ell. In addition to possessing a constant willingness to discuss philosophical ideas, Lewis helped build up a sense of philosophical community at USC for which my wife and I are grateful. Most of my work at USC has benefited significantly from interactions with “the team”: Aness Webster, Kenny Pearce, Matt Lutz, and Shyam Nair. In addition to providing me with much of the early feedback I received at USC, they were a source of inspiration and philosophical motivation—asking ‘What would my (nearly fully rational) teammates do in my actual circumstances?’ is a good way to guide philo- sophical inquiry . These individuals regularly instantiated impressive ability , and motivation, and have been a source of philosophical inspiration throughout my time at USC. I am honored to have been a part of this cohort. I thank Scott Soames and Mark Schroeder for their continued effort to make the department an exciting place to do philosophy . In addition to being research powerhouses, they have had the desire and resolve to build an outstanding depart- ment that I have had the pleasure of benefiting from. I didn’t believe Scott at the admit open house when he said—in typical Scott fashion—that USC would be at the top of the philosophy department rankings by the time I graduated. That, it turns out, was a mistake. Other faculty at the University of Southern California that I am indebted to for taking the time to contribute to my philosophical de- velopment outside of the area of my dissertation include Andrew Bacon, Shieva ii Kleinschmidt, Scott Soames, Gabriel Uzquiano, and Gary Watson. Stephen Fin- lay is owed many thanks for his efforts reviewing my material for the job market. I also owe a debt to Brad Johnson, Rhonda Martens, Carl Matheson, and Chris Tillman from the University of Manitoba for philosophical discussion and support both prior to, and throughout, my time at USC. I owe a special philosophical and personal debt to the Ashfields and the Snedegar- Feamster pair. Amanda and Mike Ashfield have been great friends while at USC and compatriots in owning-a-new-person-while-in-graduate-school-in-a-foreign- country . Friday dinners with Emmy Feamster and Justin Snedegar were consis- tently one of the best parts of the week—even after they irrationally abandoned East Hollywood for the suburb life of Long Beach. The chapter titled ‘ A puzzle concerning evidence, belief, and credence’ benefited from conversation with Justin, as did my views on a broad range of topics. At a personal level, I would like to thank my parents. My mother: for imbuing me with a robust contrarianism, my father: for insisting that a job worth doing is worth doing well, and my step-father: for encouraging me early on to write creatively . 
Most of all I would like to thank my wife, Amanda Dallmann, and my son, Reid Dallmann, who made a contribution not mainly to the content of this dissertation but to things that are much more important: helping me be a better teacher to my students, pulling me away from a back-lit display and into real sunlight, and making my life go well.

Lastly, I would like to thank the Flewellings and the Social Sciences and Humanities Research Council of Canada for financial support during the writing of this dissertation in the form of a doctoral award, number 752-2010-0298.

Contents

1 Introduction
1.1 Unifying themes
1.2 The dialectical background
2 Plans, persistent possibilities, and probabilistic belief states
2.1 Problems for credence-level centric views
2.2 Belief as epistemic plan
2.3 Concluding remarks
3 When obstinacy is a better (cognitive) policy
3.1 Introduction
3.2 Two information response policies
3.3 A queue-theoretic model
3.4 Comparing the policies
3.5 Applications
3.6 Summing up
3.7 Technical Appendix
4 A puzzle concerning evidence, belief, and credence
4.1 The puzzle
4.2 Motivations for separation
4.3 Evaluating the evidence principles
4.4 Where now?
Bibliography

Chapter 1
Introduction

This dissertation is about the relationship between the coarse-grained attitude of believing that a proposition obtains and the fine-grained attitude of assigning a degree of confidence, or credence,1 to a proposition. One of its guiding ideas is that theorizing about these world-directed attitudes should take our cognitive limitations seriously. This marks a departure from most contemporary research, which is more often guided and shaped by formal models of ideal epistemic subjects.

The dissertation is written as a series of standalone pieces. A benefit of this approach is that the reader can pick up any chapter, read it, and understand it without having to read any of the others. A cost of this approach is that the ties between the chapters are less obvious than they would be if the dissertation were written as a continuous piece. One task of this introduction will be to briefly summarize the results from each chapter and show how those results are related. Another will be to sketch the larger dialectical picture into which the chapters are placed and highlight some of the assumptions that are operative in the chapters.

1 I use the terms 'credence' and 'confidence' interchangeably throughout the dissertation.

1.1 Unifying themes

One theme that unifies this dissertation is the idea that credences, or confidences, are one of the more fundamental world-directed mental states.
In particular, this dissertation constitutes a modest defense of the idea that full belief depends on credence, and is best understood as one kind of high credence. Call an account of belief with this feature a 'credal priorism'. Chapter 2 engages with this theme directly. In it, I present a series of underappreciated objections to credal priorisms, diagnose how they put pressure on those views, and motivate a credal priorist view that avoids those objections. In particular, it is argued that the objections pose a problem for views that try to account for outright belief entirely in terms of levels of confidence. The view of belief developed avoids the worry by appealing to resources that outstrip a subject's levels of confidence alone.

The substance of the view developed touches on a second important theme: the importance of our cognitive limitations for understanding our mental states. According to the view advanced in chapter 2, belief is best thought of as a kind of credal plan whose primary role is to mitigate the effects of our cognitive limitations. The basic insight is that it is cognitively costly to remain responsive to every epistemic contingency. After weighty enough evidence for a proposition is taken into account, there is little to gain by responding to further pieces of information regarding that proposition—especially if they are expected to be largely epistemically inconsequential. In these circumstances, the flexibility of being able to constantly fine-tune our epistemic stance towards that proposition can be traded off for a greater chance of gaining more substantive information bearing on other propositions. The belief-as-credal-plan view proposes that belief consists in a policy to make a trade-off of that kind. Belief is an epistemic coordination point that makes up overall gains in a subject's representational accuracy by resisting reconsideration on well-established issues.

This theme is developed further in chapter 3. There I apply the mathematics of queues to quantify the cognitive costs of responding to information and illuminate the mechanism of resisting reconsideration by which epistemic plans secure gains in representational accuracy. According to the queue model, the working memory that we use when processing information is like a line to get a coffee. The pieces of information that present themselves are analogous to potential patrons arriving at the coffee shop. Information processing is analogous to those patrons being served and leaving the queue. And, what is essential, the size of the queue is limited just like our working memories—using the same analogy, if the queue stretches to the coffee shop's doors then any further possible patrons will be put off and pass the coffee shop by. The framework that I develop accounts for the fact that properly updating our confidences by some available information requires an investment in processing power (the rate at which patrons are served) and working memory (the length of the queue), which can come at the cost of not responding to other available information. This makes it possible to assess policies for responding to information under realistic assumptions about our cognitive abilities.
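Chapter 3 works the model out formally in its technical appendix. Purely as an illustration of the bounded-queue picture sketched here, the following is a minimal simulation of my own devising; the arrival rate, service rate, and queue capacity are invented placeholders rather than parameters drawn from the dissertation.

```python
import random

def simulate_bounded_queue(arrival_rate, service_rate, capacity, horizon, seed=0):
    """Toy finite-capacity queue: arrivals that find the queue full are lost,
    mirroring information that passes a full working memory by."""
    rng = random.Random(seed)
    t = 0.0
    queue = 0                      # items currently held in "working memory"
    next_arrival = rng.expovariate(arrival_rate)
    next_service = float("inf")    # nothing is being processed yet
    processed = lost = 0
    while t < horizon:
        if next_arrival <= next_service:          # a new piece of information shows up
            t = next_arrival
            if queue < capacity:
                queue += 1
                if queue == 1:                    # processor was idle; start serving
                    next_service = t + rng.expovariate(service_rate)
            else:
                lost += 1                         # queue full: the "patron" walks past
            next_arrival = t + rng.expovariate(arrival_rate)
        else:                                     # finish processing the current item
            t = next_service
            queue -= 1
            processed += 1
            next_service = t + rng.expovariate(service_rate) if queue else float("inf")
    return processed, lost

# Illustrative numbers only: information arrives a little faster than it can be processed.
print(simulate_bounded_queue(arrival_rate=1.0, service_rate=0.8, capacity=5, horizon=10_000))
```

On these made-up numbers a sizeable share of arrivals is simply never processed, which is the cost that a policy of selective reconsideration is meant to manage.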
I use these tools to show that it is better, in a range of normal circumstances and from the perspective of expected credal accuracy, for epistemic subjects like us to have a policy to resist reconsideration on issues for which we have substantial evidence, rather than to update on information indiscriminately as it presents itself. In terms of our analogy, it is better to turn away certain customers—the "screenplay writers"—before they enter the line.2

2 I.e. to avoid turning away more profitable customers.

These results explain the value of certain learning patterns, treated as "biases" by cognitive scientists and social psychologists. For example, in well-known studies, Bruner and Potter, Peterson and Ducharme, and Ross et al. take our tendency to weigh evidence received early in inquiry more heavily than evidence received later to be a shortcoming. By contrast, my results suggest that this effect might actually be a cognitively valuable, if perhaps not epistemically ideal, practice.

I also leverage these results to provide a partial response to certain "demandingness" objections to the very possibility of credal reasoning advanced by Gilbert Harman (1986), Richard Holton (2013), and others. According to these objections, updating confidences is too hard for subjects like us, since the standard Bayesian model of ideal updating is too computationally demanding. The result shows that some of the demands can reasonably be offloaded at little cost. Combined with other ways of conserving cognitive resources—like using heuristic updating techniques instead of full-fledged Bayesian updating (2012)—the demandingness worry has little weight.

Chapter 4 presents a puzzle about "all things considered" evidence in light of the relationship between credence and outright belief. Once it is realized that outright belief requires more than high confidence, we are faced with a difficult choice whenever there are principles that look equally plausible when spelled out in terms of belief or in terms of high confidence. The paper explores the following two principles:3

The belief-evidence condition. If, when rationally not believing p, learning only proposition e makes it rational for you to believe the proposition p, and it is irrational for you to believe p as it stands before gaining this information, then e must be (all things considered) evidence for p for you; and if, when rationally believing p, learning only proposition e makes it rational for you to cease believing p, and it would not be rational for you to not believe p as it stands before gaining this information, then e must be (all things considered) evidence against p for you.

The credence-evidence condition. If learning only proposition e makes it rational for you to raise your rational confidence in p by conditionalizing on e (i.e. updating your confidence in p to the confidence in p given e that you had prior to learning e), then e is (all things considered) evidence for p for you; and if learning only proposition e makes it rational to lower your rational confidence in p by conditionalizing on e, then e is (all things considered) evidence against p for you.

3 Each of which finds endorsement in the literature.

On the face of it, these principles are both plausible.
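For readers who want the second principle in symbols: the following is a sketch of my own, assuming that the relevant updating is Bayesian conditionalization and writing C for the rational credence function held before learning e (with C[e] > 0).

```latex
\begin{align*}
  C_{\mathrm{new}}[p] &= C[p \mid e] = \frac{C[p \wedge e]}{C[e]}
      && \text{(updating by conditionalization)}\\
  e \text{ is evidence for } p \text{ for you}
      &\iff C[p \mid e] > C[p]\\
  e \text{ is evidence against } p \text{ for you}
      &\iff C[p \mid e] < C[p]
\end{align*}
```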
But, it is argued in chapter 4 that once it is admitted that belief and high confidence come apart, then it becomes possible to rationally go from failing to believe a proposition, learn something that makes it rational to lower one's rational confidence in that proposition, and rationally come to believe that proposition. Likewise, it is possible to go from rationally believing a proposition, learn a proposition that makes it rational to raise one's confidence in that proposition, and come to rationally cease believing that proposition. Since a proposition cannot be both all things considered evidence for and against the same proposition, one of the belief-evidence condition or the credence-evidence condition has to go.

The result is initially surprising. However, once we realize that belief and high credence come apart, we should expect a conflict. And, once we accept that rationally believing a proposition requires rationally putting oneself into an epistemic stance that resists reconsideration, we should resolve the conflict by rejecting the belief-evidence condition. All things considered evidence for a proposition counts in favor of the truth of that proposition and credences are estimates of truth value, so it makes sense that changes in credence-levels track changes in all things considered evidence. Rational belief, on the other hand, must also be sensitive to the robustness of the reasons for the believed proposition—reasons that make it safe to resist reconsideration on the issue at hand. As a consequence, learning a proposition might all things considered weigh in favor of believing a proposition, by making one's total evidence for that proposition much more robust, while at the same time making that proposition slightly less plausible. In this way, considerations from chapter 2 and chapter 3 yield a solution to the puzzle raised in chapter 4.

1.2 The dialectical background

At this time, there is already an established literature that examines the relationship between credence, or confidence, and outright, or full, belief. That makes it worth spelling out how this background shapes the implicit assumptions made, and methods used, in this dissertation.

1.2.1 Credence and belief are mental states

First of all, this dissertation takes as a starting point that it makes sense to talk about belief and confidence, and to ascribe these mental states to subjects like us. In doing so, it implicitly assumes that eliminativisms, views that eliminate one or both of belief and credence from our ontology entirely, are false. This assumption has been defended at length elsewhere, but a few words can be said to motivate it here.

We regularly invoke the mental states of believing and being more or less confident in a given proposition in everyday discourse. These mental states are not free floating and easily excised. They play an important explanatory role and are frequently invoked in order to explain the acts of those that have them.4 The proponent of an eliminativism needs to provide substantive grounds for thinking that we could all be massively mistaken about our everyday assertions in addition to having to furnish us with the tools to satisfactorily explain our acts.5 Defending the view becomes even harder if one wishes to be an eliminativist about one kind of mental state without being an eliminativist about the other, since then the proponent must also show that no reduction of the state being eliminated into the accepted state is likely to succeed.
Showing that a state reduces to another 4 For a defense of the importance of mental states like belief in explaining action see (Williamson, 2000, pp. 75-83). Of course, Williamson would add that the mental state of knowing, rather than believing or assigning a credence is explanatorily primary , but we need not follow him on that point to make use of this insight. 5 Similar reasons to reject eliminativism can also be found in (Sturgeon, 2008) and (Ross and Schroeder, 2012). 6 does not show that the state that admits of reduction does not exist. Legoland’s life-sized Lego zebra is composed of, and thus in some sense reduces to, the Lego bricks that make it up. This fact does not show that the Lego zebra does not exist. At best, what it shows is that the Lego zebra is further removed from the “building blocks” of reality . All of this does not quite amount to a refutation of eliminativisms. 6 But, I will be happy here if the strong initial reasons to accept that we have beliefs and credences motivates my theorizing about them in this dissertation. 1.2.2 Initial considerations favor a credence-first picture As I mentioned above, one aim of my dissertation is to give a modest defense of credal priorism. My method for achieving that aim is to advance and develop a new view of that type that avoids some of the pressing objections that existing credal priorist views face. One of the reasons that the defense is merely modest is that I offer no definitive, knockdown, argument for that position. Offering a definitive argument for credal priorism would require an argument that singles it out as the only plausible view in the logical space of possible relation- ships between confidence in belief. Doing so would involve ruling out (i) the above eliminativisms in a more thorough way , (ii) non-dependence accounts—views that takes neither of the two mental states to be best understood in terms of the other, and (iii) belief-first accounts—views that attempt to understand credal facts in terms of belief facts. That would be a pretty tall order. That said, we have already seen initial reason to reject eliminativisms. We will now briefly examine some reasons to think that credal priorism is the most initially plausible hypothesis in the space of alternatives. With these on the table, the modest defense of the view in this dissertation goes a long way to cementing the preferability of credal priorism over its competitors. 6 For a sophisticated defense of an eliminativism about belief see (Churchland, 1981). 7 Considerations weighing against non-dependence A prima facie consideration that favors accounts according to which either belief facts ground confidence facts or vice versa over non-dependence views is that the latter multiply our mental states. Of course, there is nothing wrong with postu- lating additional mental states when doing so is required to adequately characterize our mental lives. But, if there are other acceptable views that don’t require pos- tulating an independent mental state, that is a point in their favor. Consequently , the case for non-dependence will be strong just to the extent that credence cannot be understood in terms of outright belief, or belief cannot be understood in terms of credence. The failure of credal priorist views proposed in the literature perhaps provides an inductive basis favoring this position. However, chapter 2 presents a credal priorist account that, I argue, is defensible. 
If that claim holds up, then the above considerations favor rejecting non-dependence views. The account developed in chapter 2 aside, general considerations also provide reason to resist non-dependence views. Belief and high credence are both world- directed attitudes that guide action and attitude revision in similar ways. What is more, credence and belief share a valence, or “march in step”. 7 It is difficult to come up with a case in which an agent believes p but assigns low credence to p. 8 These facts are in need of an explanation on a view in which high credence and belief can come apart. A compelling non-dependence account will provide an explanation of these coincidences. But, the difficulty of such a task in the context of the dialectic is noteworthy . It will be difficult for a non-dependence theorist to provide an explanation of the coincidences without also providing evidence for a dependency view. This is be- cause the obvious way to defend the view is to point out reasons for thinking that high credence and belief paradigmatically overlap. The line that the non-dependence 7 This point is emphasized in (Sturgeon, 2008, p. 146-8). 8 Though, in chapter 2 and chapter 4, I will present reasons for thinking that there are coun- terexamples to the converse claim, that if an agent has a high degree of credence in p, then that agent believes p. 8 theorist must tread is a fine one that becomes even finer as reductivist views be- come more plausible. Presumably , in addition to an explanation of the substantial overlap, a non-dependence theorist will also have to provide a counterexample to the total overlap of the two mental states. To see the difficulty , consider a natural evolutionary attempt to explain the datum from a non-dependence perspective: Say it could be shown that it is ad- vantageous to survival to consistently coordinate one’s actions across time. With this plausible premise, the non-dependence theorist could argue that we should expect credences and beliefs to “march in step” in most cases. For, if one can act on either one’s credences or one’s beliefs then, other things being equal, they had better “march in step” in order for the agent acting on them to arrive at the best outcome. Note though that the evolutionary premise also provides evidence against the non-dependence view. If consistently coordinating one’s acts across time makes one more fit, then the reduction hypothesis that we are not epistemically double book-keeping would appear to be an exemplary explanation of that fact—assuming that we are fairly fit in this way . The non-dependence theorist is in a bind since competing dependency views need to be refuted for the evidence to favor it. This loops us back to the central point: non-dependency views will only be as compelling as the case against understanding either belief as a kind of credence or credence as a kind of belief. Considerations weighing against belief-first views An important prima facie difficulty facing belief-first views that take every credal state to either depend on belief states, or be a kind of belief state, is that of capturing a fine-grained notion in terms of a coarse-grained one. Some of the most natural ways of doing so are difficult to defend. I will present a few to strengthen the worry for belief-first views—though, again, I do not pretend to have any in principle reason for ruling out belief-first views. 
One way to try to capture the granularity of credence in terms of belief is to appeal to beliefs whose contents are of the right grain. The difficulty with this approach is finding a content of the right kind. Being the kind of creature that has world-directed mental states like belief and credence requires subjects with enough conceptual sophistication to grasp some propositions; the trouble is finding a concept (i) that all credence-havers share, (ii) that plausibly factors into a belief involving the proposition that a credence-haver assigns a credence to whenever that credence-haver assigns a credence to a proposition, and (iii) that is such that the credence-haver with that belief assigns the right credence to the proposition being assigned a credence in each case.

So, for example, these constraints make it unlikely that we can understand a credence assignment of n to p as being an outright belief that the probability of p is n. If we try to understand 'probability' as subjective probability, or credence, the account fails the first desideratum of only appealing to concepts that every credence-haver must have.9 Just like a subject can believe a proposition without believing that she believes it or any other proposition, a subject can have a credence in a proposition without believing that she has a credence in it or any other proposition. Presumably, young children who fail to have the concept of credence, but who have credences, will be counterexamples here.

9 It is perhaps worth noting that this proposal is not trivially circular. Compare: x is morally good if and only if S believes that x is morally good. Such an analysis is implausible, but not circular. Thanks to Shyam Nair for suggesting the comparison.

Similarly, a credence assignment of n to p cannot be understood as an outright belief that the objective chance of p is n. Contemporary concepts of objective chance are a fairly recent development, whereas we arguably have always been the kinds of things that have confidences.10

10 See (Hacking, 2006) for a discussion of the history of notions of probability.

Beliefs about the evidential probability of a proposition, or how much evidential support a given proposition possesses, might do better on the first desideratum. It is at least a live possibility that in order to be a subject with world-directed states, like credences or beliefs, one needs to have some grasp of what counts in favor of a proposition. The second and third desiderata are more of a problem. In order for the second desideratum to be satisfied, it must be the case that whenever a subject assigns a confidence to a proposition, that subject also has a belief about the strength of the evidence about that proposition. It is doubtful whether this is a necessary condition on assigning a confidence to a proposition. Considering the parallel case of outright belief, it seems like one can believe a proposition while having forgotten about any evidence underpinning that proposition.11 The same sort of phenomenon seems operative in cases of assigning credences to a proposition too. Satisfying the third desideratum is further complicated by requiring that a credence-haver's notion of evidential strength track her degrees of confidence. One natural idea is to use traditional Bayesian accounts of evidential impact, which are specified in terms of confidences, to provide the account of evidential strength.12
However, those analyses are many-one rather than one-to-one, in that they allow for very different levels of confidence to underpin the same levels of evidential support. While I think it is useful to try to appeal to evidential considerations to pin down a notion of confidence, more (highly non-trivial) work needs to be done in order to show that it is viable.13

The constraints also make it unlikely that confidence might be a matter of bearing another affective attitude towards a believed proposition. As Frank Ramsey notes, "the beliefs which we hold most strongly are often accompanied by practically no feeling at all; no one feels strongly about things he takes for granted" (1926, p. 169).

11 For other problems, see Frank Ramsey's (1926) paper, especially his criticism of Keynes at pp. 160–7.
12 For surveys of accounts of this kind see (Crupi et al., 2007), (Earman, 1992), (Fitelson, 1999), (Zalabardo, 2009).
13 For some preliminary work on this issue, see my 'Taking confirmation first' (2011). What should make us believe that it is "highly non-trivial" is the history of failed attempts to give an account of the weighting of evidential considerations. Bayesians get something close with their analyses of evidential impact, but there is good reason to think that this cannot be the whole story.

Given these difficulties, I think that there is initial pressure to resist belief-first views. That said, these observations are not decisive since there are lots of ways to underwrite a notion of confidence in terms of outright belief—many of which may be plausible, and most of which have yet to be explored. I think that one promising and under-explored way of making good on a belief-first view is to find some property of a system of beliefs, rather than individual beliefs, that might have the requisite structure and tie to reasoning.14 However, spelling out an account of this kind in sufficient detail would be a substantive project and, having not been done, definitely falls outside of the "dialectical background" for this dissertation.

14 For example, Kenny Easwaran's sufficient representation theorem shows how assigning values to believing truly and disvalue to believing falsely might underpin something like a notion of confidence (2014). Though there is a worry that his particular account fails to meet desideratum (iii) of specifying the right credences, since it is not clear that the credences picked out by his representation theorems will be able to play their usual role in guiding practical reasoning. Alternately, if the details of Gilbert Harman's (1986) or Richard Holton's (2013) suggestion that confidence is a matter of belief entrenchment could be clarified, that might yield another plausible belief-first proposal.

So, having shown that there is strong initial reason to favor a credal priorist view over eliminativisms about belief or credence on the one hand, and belief-first views on the other, we can conclude that the modest case for credal priorism in this dissertation presently goes a significant way towards establishing the plausibility of credal priorism. At least, it does so to the extent that the account it presents fulfills its primary objective of avoiding existing counterexamples to credal priorism.

1.2.3 Ideal reasoner assumptions

As I have already noted, one of the primary ways that this dissertation diverges from current discussions of the relationship between belief and credence is by focusing on how our limitations might inform our best theory of that relationship. On the traditional view, these limitations are largely ignored. The subjects of interest in traditional discussions are assumed to be ideally rational in that they lack these limitations. They are not fettered by processing constraints and can suss out the logical consequences of their cognitive commitments.

It is against this background that several theorems establish that ideally rational subjects' confidences should obey the probability calculus, in the sense that a
subject's credences can be modeled by a function C[·] that maps propositions into the unit interval, [0, 1], and is such that for any p:

1. C[p ∨ ¬p] = 1,
2. C[p ∨ q] = C[p] + C[q] when p and q are mutually exclusive, and
3. Logically equivalent propositions are assigned the same confidence.15

15 For important proofs of results like these, see (Earman, 1992), (Christensen, 1996), and (Joyce, 1998). In chapter 2, I use a slightly different axiomatization, more common in mathematics but still standard, that assumes that credal probability functions range over sets of (epistemic) possibilities.

Another result that follows under these assumptions is that a rational subject whose only goal is to maximize cognitive value always updates on any information she comes across.

These idealizing assumptions and their consequences have, in no small way, shaped the discussion of the relationship between confidences and outright belief. Results like these have made possible, and encouraged, a thorough investigation of how the logical properties of ideally rational beliefs interact with the probabilistic constraints of ideally rational confidences. But, it has also had the consequence of focusing attention on a rather restricted set of puzzles concerning the relationship between belief and confidence.

One example of the interaction between ideal belief and credence that comes up periodically in this dissertation, and so is worth describing here, is the so-called 'lottery paradox'. The lottery paradox consists of four plausible but jointly unacceptable propositions:

1. Agglomeration. If it is rational to believe p and rational to believe q, then it is rational to believe p and q.
2. Coherence. A rational subject's confidences obey the axioms of probability.
3. Consistency. It is never rational to believe a contradiction.
4. Threshold. For any proposition p, if it is rational to be very confident of p, then it is rational to believe p.16

To see how the puzzle arises, consider a lottery that is known to be fair and that contains some large number n of tickets. In that case, by stipulation, a rational subject should be very confident that some ticket will win and be very confident in every proposition that claims of some ticket, ti, that it will not win. But, then by applying agglomeration to each of these propositions it looks like a rational subject must believe that t1 loses, and t2 loses, and …, and tn loses, and that one of tickets t1–tn wins—a contradiction violating consistency!17

Work on ideally rational subjects has taught us a lot about the structure of rational belief, rational confidence, and the relationship between those two states. However, an exclusive focus on ideally rational subjects has also caused researchers to overlook how less-than-ideally-rational subjects, subjects with limitations like our own, might do better or worse according to their cognitive goals.
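To see numerically how quickly the four claims collide, here is a small check of my own; the 1000-ticket lottery and the 0.99 stand-in for 'very confident' are arbitrary illustrative choices, not figures from the text.

```python
from fractions import Fraction

def lottery_check(n_tickets, threshold=Fraction(99, 100)):
    """In a fair n-ticket lottery, each 'ticket i loses' gets credence (n-1)/n.
    Once that clears the threshold, Threshold licenses believing each one,
    Agglomeration licenses believing their conjunction, and that conjunction
    contradicts the (certain) proposition that some ticket wins."""
    each_loses = Fraction(n_tickets - 1, n_tickets)
    return {
        "credence that a given ticket loses": float(each_loses),
        "clears the 'very confident' threshold": each_loses >= threshold,
        "credence that some ticket wins": 1.0,
        "credence in the agglomerated conjunction (all tickets lose)": 0.0,
    }

print(lottery_check(1000))
```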
It is against this backdrop that I develop my credal priorist account of belief and pursue my inquiry into how our cognitive limitations might inform the discussion. In so doing, this dissertation pushes into new terrain. The hope is that by approaching the study of belief, confidence, and their relationship from a different perspective, we will gain a richer understanding of all three and thereby gain traction on the traditional questions too.

16 More carefully, the above are in tension if we grant the following caveats: fineness: For some large but finite set of incompatible propositions, it is rational to assign similar confidences to each member; and non-triviality: The threshold is low enough to guarantee that we rationally believe a proposition that is more specific than the disjunction of all of the propositions that we are considering in a circumstance (see (Leitgeb, 2013) for further details). Also, strictly speaking, coherence isn't required to generate the version of the paradox presented below. However, it is usually assumed in these discussions and plays an important role in generalizing the puzzle.
17 Lottery-like considerations come up in both chapter 2 and chapter 4. The first discussion of the lottery paradox can be found in (Kyburg, 1961). For an important recent discussion of the generality of the lottery paradox, see (Douven and Williamson, 2006). Another important and related puzzle is Makinson's so-called 'preface paradox', which first appeared in (1965).

Chapter 2
Plans, persistent possibilities, and probabilistic belief states

Contemporary epistemologists have come to identify two important kinds of world-directed cognitive state: confidence and belief. These states differ in granularity. Confidences, or credences,1 are fine-grained states that come in a variety of degrees. One can lack confidence entirely or be perfectly confident in a proposition, but there is also a range of intermediary confidences that one might assign to a proposition. One might have some confidence, be mildly confident, very confident, or extremely confident that a proposition obtains. On the other hand, the notion of belief at issue in this dissertation, sometimes called 'outright' or 'full' belief, is coarse-grained—one either believes a proposition or one does not believe that proposition.

1 In what follows, I will use the labels 'confidence' and 'credence' interchangeably.

Yet, these states are importantly similar. By being at least somewhat confident that a proposition obtains, or by believing that proposition, a subject takes an epistemic stance towards the content of the proposition. Because of this, both beliefs and confidences can guide those that have them by structuring their practical and theoretical deliberation. Given that we are correctly described as having beliefs and being confident in propositions, it would be desirable to have an explanation of the notable similarity between the epistemic roles of belief and confidence respectively.

One prevalent, and initially attractive, strategy for providing such an explanation is to give an account on which every belief state is a credal state. If every belief state is a credal state, then it should not be surprising that beliefs and credal states play similar roles. Call a general strategy of this kind a 'credal priorism'. This chapter defends credal priorism against a powerful class of objections to its descriptive and normative viability.
I argue that these objections are fatal for a wide range of credal priorist views, including most accounts on the market. A common feature of the affected views is that they are credence-level centric, in that they try to construct a complete ac- count of a subject’s belief in a proposition by appealing only to that subject’s levels of confidence, possibly involving levels of confidence in propositions other than the one believed, and possibly with respect to a contextual parameter. The objections all point to the role that (appropriate) belief plays in shaping epistemic activity—a role that cannot be determined by looking at a subject’s levels of confidence alone. The rest of this chapter develops an original credal priorism about belief that ap- peals to resources outside of the standard model in order to avoid the presented hard problems for credence-level centric views. According to this view, belief is properly understood by analogy to intention as being a kind of epistemic plan—a key feature of such plans being their resistance to reconsideration. 2.1 Problems for credence-level centric views Call any view that takes outright belief to be specifiable entirely in terms of a sub- ject’s levels of confidence, possibly in addition to levels of confidence in propositions other than the one believed and possibly with respect to a contextual parameter, a ‘credence-level centric view’. One subclass of the credence-level centric views, no- table for their simplicity and the amount of literature devoted to their exploration, 16 are the “lockean” or “threshold” accounts of belief. According to these views, be- lief is nothing apart from a sufficiently high credence. So, for example, one lockean view is that a subject believes a proposition just in case that subject is at least very confident of that proposition. Credence-level centric views count for almost the entirety of credal priorist views on the market. Consequently , problems for these views might be advanced as reasons to doubt the viability of credal priorism more generally , and so constitute problems that credal priorists have to address. This section develops and clarifies three recalcitrant, but often misunderstood, challenges for credence-level centric views, like lockeanism, that restrict their re- sources to features of the standard credal model. The first is that having any ratio- nal credence level in a proposition fails to encode adequate justification for appro- priate belief. The second is that credence-level centric views cannot accommodate the truism that “belief rules out live possibilities”. The third is that credence-level centric views count some states as outright beliefs even though they lack the world- implicatingness of belief, the thought being that credence-level centrism cannot explain why believing p constitutes a way of being right about whether p, if p, or being wrong about whether p, if not-p. 2.1.1 Substantivity and tethering Appropriate belief and appropriate high credence differ normatively . This consti- tutes a serious problem for lockeanism and recent credal priorist views that make much of belief ’s role in guiding action, while also putting constraints on which credence-level centric views are acceptable. Whether belief or high credence is appropriate depends on the evidence one has. However, the bodies of evidence that sanction high credence come apart from the bodies of evidence that sanction outright belief. 
In order for it to be appropriate to outright believe a proposition, that proposition has to be "tethered by the evidence" in a way that it doesn't have to be in order to underpin an appropriate high confidence.2 One path to this conclusion follows from a principle about justified belief:

Belief-justification substantivity. Indifference between members of a set of mutually exclusive propositions is insufficient evidence for justifying belief.

2 This distinction gives rise to interesting questions about the nature of evidence. Further discussion of this issue is taken up in chapter 4.

Mere indifference—like a mere hunch, or suspicion favoring p above alternatives to p—is not the right kind of thing to ground rational belief. By contrast, the standards for assigning a high credence are much lower than those required by belief-justification substantivity. Appropriate credence assignments only need to track the balance of plausibility in light of the evidence. And while sometimes a rational high credence can be the result of substantial evidence, that needn't always be the case. If upon drawing 1000 balls from an urn only one hundred have turned up green, it might be reasonable to assign a credence of .9 to the proposition that the next ball drawn will not be green. However, if all we know about the distribution of balls in an urn is that it contains balls of ten different colors including green, then it is also appropriate to assign a credence of around .9 to the proposition that the next randomly drawn ball will not be green. The point similarly applies to propositions about everyday affairs. If all I know is that one of twenty consultants from the philosophy firm will be stopping by the factory today and that Jane is one of their consultants, I might rationally assign a credence of .95 to the proposition that Jane will not be at the factory today.3

3 Other cases in which credence is appropriately based on ignorance have been noted to cause problems for analyses of evidence in terms of confidence by both Peter Kung (2010) and Jim Pryor (2013).

In cases where credence assignments are the result of indifference considerations, withholding belief is permissible. We aren't required to believe that the ball won't be green in the case where we have only observed a handful of draws, or that Jane won't be at the factory. Belief is sensitive to more than the balance of evidence, so rational high confidence is insufficient for rational belief.4

4 It is worth wondering why belief-justification substantivity obtains. After all, it is not just that indifference considerations are not weighty enough, since it is plausible that some of our central epistemic beliefs might be rational even if we cannot produce weighty evidence for them. For example, as Ralph Wedgwood has pointed out to me, it might be that certain epistemic "hinge" propositions, like that I am a reliable processor of information, can be justifiably believed without requiring that the believer possess weighty evidence for the belief—rational inquiry couldn't get off the ground otherwise. My theory of belief in §2.2 will provide one explanation of the principle. Roughly, what I would like to suggest is that the thing that indifference considerations lack is a secure tie to the truth of a given alternative.

The objection to credence-level centrism from belief-justification substantivity extends to views with resources that outstrip credence levels, including recent accounts that add a condition requiring an agent to "act as if p" to an underlying credence-level centric framework. Many ways of rationally "acting as if p" are compatible with insubstantial justification for p.5 For example, S might lack substantial justification for p even if S's credence in p is high enough to justify acting as if p in the circumstance by performing the act that best satisfies S's preferences if p—like on Dorit Ganson's view (2008, p. 453). Likewise, if S's updating on p doesn't change any conditional preferences over things that matter in the circumstance for S—like on Brian Weatherson's view (2005, p. 422). Either of these ways of acting as if p is compatible with having insubstantial justification for p. The urn case above where your only relevant information is that the urns contain balls of ten distinct colors including green provides an example.

5 To use a distinction from decision theory, it is sometimes permissible to act when the outcomes are uncertain in a way similar to how you would act when the risk of the outcomes is known.
Recall that in that case it was permissible to form a credence of .1 in the proposition that the next ball drawn would be green. If we suppose that you are now offered the chance to buy a bet that pays out $1.25 just in case the next ball drawn is not green at the cost of $1, you are justified in acting as if the next ball drawn will not be green in the circumstances, and updating on the proposition that the next ball drawn will not be green in the circumstances will not change any of your conditional preferences in the circumstances—you should buy the bet either way, since its expected payoff of .9 × $1.25 = $1.125 exceeds its $1 price. But, given the paucity of your evidence in the circumstance, it is permissible not to believe that the next ball drawn will not be green.6 Consequently, incorporating either way of acting as if the would-be-believed proposition obtains into an account of belief is insufficient to avoid the problem posed by belief-justification substantivity.7

Belief-justification substantivity puts an important constraint on the shape of an adequate credence-level centric account. It tells against lockean views that take belief to be mere high credence, and action-centered threshold views like those of Weatherson and Ganson. The descriptive differences between how belief and confidence respond to evidence presented in what follows make it unlikely that any credence-level centrism is up to the task.

2.1.2 Ruling out

The truism that belief "rules out" live possibilities has been noted to cause problems for lockeanism.8 Believing p, it is said, "rules out" live possibilities that conflict with p, whereas merely assigning a high credence to p does not. Call this phenomenon 'ruling out'. This section unpacks this idea to see how it poses a problem for credence-level centric accounts of belief. The truism, properly understood, constitutes our first hard descriptive challenge for these views.

6 Though you might want to believe that it probably won't be green.
7 It is worth noting that some credence-level centric views might be able to avoid the worry. In particular, if my §2.2 diagnosis of the principle in terms of belief requiring reasons that are not easily overturned is right, then it looks like an appeal to a subject's other levels of confidence could get at something like substantivity. For example, if the rational way to revise our beliefs when we learn a proposition e can be specified in terms of our levels of confidence, and rational robustness is understood as a claim about the conditions under which we revise our beliefs, then we might try to understand substantivity in terms of those levels of confidence. The idea that credal robustness is important for outright belief is explored in detail by Hannes Leitgeb (2013; 2014). His particular credence-level centrist proposal helps with the current normative problem but faces other difficulties, especially when its sphere of application is extended (beyond its intended normative domain) to cover the descriptive. See §2.2.
8 Most recently, (Buchak, 2013) has developed this point as an objection to lockean credal priorisms.

On the simple lockean picture, having a high credence is sufficient for belief. Because of this, the lockean is forced to hold that having a high credence is sufficient to "rule out live possibilities". Consequently, she must appeal to a connection between high credence in a proposition and ruling out the relevant set of possibilities for that proposition. Similarly, a credence-level centric view must capture ruling out by appealing to a connection between levels of credence and live possibilities. In what follows, I illustrate the difficulty by showing how two salient lockean proposals fail. I then clarify the phenomenon being characterized in light of these failures, and show how the difficulty extends to credence-level centric accounts more generally.

"Live possibilities" as sets of subjective possibilities

While the standard model of credences as probabilities can be distorting in this context, it does offer a glimmer of hope of specifying a connection between possibilities and credence levels since it takes credences to be measures over subjective possibilities. Since the majority of lockeans adopt the standard model, and doing so facilitates a clear exposition of the options available to the lockean, I will also place the discussion into the standard framework in what follows. In doing so, it should be noted that the central arguments presented do not require the assumption that credal states have the rich structure had by probability spaces and, aside from special cases that will be flagged, can generally be made in non-probabilistic terms.9

9 In order to formulate the arguments more generally, first, talk of 'subjective possibilities for a subject' on the standard model should be understood as propositions that are assigned some confidence by that subject but regarding which that subject is not certain of its negation. Second, it must be assumed that if a subject assigns some confidence to a proposition without being certain of it, then that subject assigns some confidence to a proposition that conflicts with that proposition. See (Hawthorne, 2009) for a plausible weakening of the assumption that rational credences are representable by a probability function.

For our purposes, the standard model can be thought of as representing a rational credence-haver's credal state as a function P over sets of that credence-haver's subjective possibilities. For each proposition the subject assigns some confidence, P maps the set of subjective possibilities for which that proposition obtains to a value in the unit interval that represents that subject's level of confidence in that proposition.
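A toy instance of this picture may help fix ideas; the four-possibility space and the weights below are invented purely for illustration and are not drawn from the text.

```python
# A miniature credal state on the standard model: weights over a (made-up) set of
# subjective possibilities induce P, and a proposition is identified with the set
# of possibilities at which it obtains.
weights = {"w1": 0.60, "w2": 0.25, "w3": 0.10, "w4": 0.05}

def P(proposition):
    """Credence in a proposition = total weight of the possibilities comprising it."""
    return sum(weights[w] for w in proposition)

p = {"w1", "w2", "w3"}                 # a proposition true at three possibilities
not_p = set(weights) - p

print(P(p), P(not_p))                  # 0.95 and 0.05: high credence in p...
print(len(not_p) > 0)                  # ...yet the conflicting possibilities remain in the space
```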
Unfortunately, it is difficult to straightforwardly make use of the connection between credence levels and the subjective possibilities of the standard model to elucidate ruling out. On the standard model, to have even slight credence in a proposition is enough to guarantee that the set of subjective possibilities corresponding to that proposition is non-empty. Consequently, the propositions to which a subject assigns a non-zero credence do not rule out subjective possibilities in which that proposition obtains.

This poses a problem for most lockean views since one way of believing, according to those views, is by having a confidence above a threshold but below 1.10 This position is made plausible by considering cases where one believes a proposition but is more confident of some other proposition. But, having a credence of this kind is compatible with assigning a positive credence of up to one minus the threshold to the negation of that proposition—indeed, the standard coherence constraints rationally require one to have such a credence—and this guarantees that subjective possibilities that conflict with the proposition are not ruled out.

Worse, the standard model admits cases in which a rational credence-haver can assign a credence of 1 to a proposition while still leaving open the subjective possibility that that proposition's negation obtains.11 Some actual cases seem to be best modeled by probability spaces of this kind. Consider any case, common in scientific investigation, in which one knows that the value of a parameter θ should epistemically be treated as a continuous random quantity, for example, as being normally distributed.12 In such cases, the probability function modeling one's rational credence assignments will assign zero to propositions of the form θ = x, for x falling in a suitable interval. Yet, there will be (uncountably many) propositions of this kind that are subjectively possible.

10 Most, but not all. The views in (Wedgwood, 2012), (Clarke, 2013), and (Dallmann, 2014) are notable exceptions in that they require believers to be committed to assigning credence 1 to believed propositions.
11 This point is distinct from the one made in the previous paragraph only when couched in terms of the standard model. It relies on the fact that though credence 1 is (in some sense) maximal according to the standard model, it is not plausibly thought of as corresponding to the ordinary notion of being certain.
12 That is, θ is such that P[a < θ < b] = ∫_a^b (1/√(2π)) e^(−y²/2) dy.

The position is not made much more plausible by insisting that the credence function P be "regular", in the sense that P only assign a credence of 0 to subjective impossibilities, or by insisting that belief is regular credence 1 in a context. Either proposal would allow there to be some credence level that ruled out possibilities, since a rational subject would have credence of 1 in p if and only if all subjective ¬p possibilities are ruled out. However, the position is untenably strong in that it requires that rational subjects be subjectively certain of everything that they believe. Phenomenologically, it is implausible that belief requires certainty—even within a single context. The surface phenomenology is reinforced by noting that it is possible to believe a proposition while being more confident of some other proposition in a given context. But, this wouldn't be possible if belief required certainty. One might even felicitously ascribe such a mental state to oneself within a given context.13

13 (Clarke, 2013) is subject to this objection.
Another problem with non-contextualist credence 1 proposals is that they risk mispredicting agent behavior and misdescribing practical norms. Even granting that operationalisms about credence—i.e. views that take credences to merely be representations of an agent's dispositions to act—are false, it is still the case that there is a close relationship between having a credence that rationalizes acting in a given manner and acting in that manner. Credence havers, on any reasonable view, generally act in the ways sanctioned by their credal states. Now, if an agent assigns credence 1 to p, then that agent should be willing to accept a bet on p at any odds. Each such bet will be rationally sanctioned by that subject's credal state. But, believers need not and, I take it, generally would not accept bets on propositions they believe at life-or-limb odds. Worse, it is rational to believe some propositions without being willing to stake your life on them. So, neither belief nor rational belief entails subjective certainty. 14

14 The material inadequacy of the strong credence 1 proposal has been noted by many. See, for example, (Roorda, 2013), (Maher, 1993), (Kaplan, 1996), (Leitgeb, 2013, p. 1344) and (Williamson, 2000, p. 213) for similar arguments.

Taken together, the preceding criticisms are good reason to think that no level of credence in a proposition will provide a plausible account of ruling out understood as the elimination of all subjective possibilities provided by the standard model of credences. Any level of credence below 1 fails to rule out some possibilities in the standard model, while credence 1 is only guaranteed to rule out all subjective possibilities in the model if we accept regularity, and that view is too demanding to be plausible. This provides motivation to try to relate credence level to some other notion of live possibilities that belief can be said to rule out.

"Live possibilities" as substantially probable possibilities

Since a lockean who wants to capture ruling out is committed to high credence alone ruling out conflicting live possibilities, a natural suggestion on behalf of the lockean is to understand the "live possibilities" to be ruled out as conflicting propositions or events for which the subject has substantial credence. The thought here is that when a subject S comes to form a high credence in p, in most cases—and whenever S is behaving in an epistemically rational manner—S will come to have a low credence in propositions that are in tension with p.

More formally, let the lockean threshold for belief be τ and call a proposition a 'live possibility' for a subject just in case that subject assigns a credence greater than some threshold π 15 to that proposition. Then cases in which a subject comes to have a credence of greater than τ in that proposition—a belief, on the lockean picture—will often be cases in which that subject (rationally) comes to have a credence less than or equal to π—the threshold for live possibility—in conflicting propositions. In that way, having a lockean belief will often rule out conflicting live possibilities according to the suggested definition.

15 'π', as in: possibility.

Natural considerations also help to bound the imprecision of the proposal. A minimal necessary condition on q being a conflicting live possibility for p for a subject is that the subject's credence assignment P satisfies P[p | q] < P[p]. 16

16 And, thus, that P[q | p] < P[q], if the subject's credences are coherent.
Moreover, if S believes p, a sufficient condition for q's being a live conflicting possibility for p when P[p] > τ is that P[p | q] < τ, since the class of propositions that conflict with a believed proposition should at least include the propositions that overturn that belief when conditionalized upon. Conversely, a reasonable necessary condition on q's being a live possibility that conflicts with p on this view is that P[q | p] ≤ π, at least if conditionalization is a common way to update. This is so since, according to this view, a belief p "rules out" q by its being the case that when P[p] > τ, P[q] ≤ π. But, if conditionalization is obeyed, then when p is learned one sets one's new credence function P_new to the value of one's old credence function conditional on p, P_old[· | p], in which case P_new[p] = 1 > τ. So, by the characterization of ruling out, it had better also be that P_new[q] ≤ π—and P_new[q] is just P_old[q | p]. Thus, if we suppose (the empirical claim) that subjects revise their confidences by a process that approximates conditionalization a substantial amount of the time, a defender of this account of ruling out will want the necessary condition to obtain in at least those cases.

Despite its virtues, several considerations make it unlikely that the revised proposal will bear fruit. First, there is the problem of narrowing in on a value for π. At first glance, it might have been thought that the problem of specifying a non-arbitrary value for π is no more difficult than specifying a non-arbitrary value for τ—a problem that the lockean must face in any case. However, specifying a value for π faces additional problems.

On the one hand, π must be vanishingly small in order to include all of the live possibilities that actually prevent us from believing. Infamously, some subjects point to the possibility that a given lottery ticket will win as what prevents them from believing that the ticket will lose. After all, they claim, it is possible that it wins! 17,18 The phenomenon persists in cases where the known size of the fair lottery is increased, and thus one's credence in the corresponding possibility is arbitrarily decreased. At the limit, it looks like π must be pushed all the way to 0.

On the other hand, there is pressure to think that π must be substantially greater than 0. π must be at least 1 − τ, since otherwise no subject with a credence in a proposition falling between τ and 1 who also has a rational credence in the negation of that proposition will rule out the live possibilities incompatible with a believed proposition. So, the lower π is, the higher τ must be. Relatedly, if π is too small then the account risks predicting that we believe too little in virtue of it being too easy for there to be live conflicting possibilities. Thus, in order to do the work required of live possibilities in the theory of belief, it looks like π must be fairly high.

In sum, π must be substantially high in order to accommodate many of our ordinary beliefs, but must be arbitrarily low in order to accommodate others. Together, the above considerations represent a substantial difficulty for views that wish to pursue the strategy of defining a notion of 'live possibility' in terms of a subject's credence levels and the subjective possibilities of the standard model—in particular the view that understands a subjective possibility's being live as that possibility being assigned a credence above a fixed threshold π.
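To make the tension concrete with some toy numbers (the particular values here are mine, chosen only for illustration): suppose the lockean sets τ = .95. Coherence then permits a believer's credence in the negation of a believed proposition to be anything up to .05, so ruling out such negations requires π ≥ 1 − τ = .05. Yet a subject who knows she holds one ticket in a fair million-ticket lottery assigns a credence of only 1/1,000,000 = .000001 to the proposition that her ticket will win—far below .05—even though, in the familiar cases just described, that is precisely the possibility she cites as preventing belief that the ticket will lose. A value of π high enough to do the first job is already too high to count the lottery possibility as live.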
Since credence level is the only resource available to the simple lockean picture, there is good reason to reject the simple lockean picture. 19 I will now present a novel account of 'live possibility' that explains the descriptive inadequacy of lockeanism and which puts pressure on credence-level centric accounts more generally.

17 Thus, I take it that lottery cases put pressure on the descriptive adequacy of lockeanism, which is our principal interest here, even setting aside the normative worries that such cases raise for rational agglomeration—the principle that if it is rational to believe p and rational to believe q it is rational to believe their conjunction (p and q).

18 It is worth noting that other subjects report having no problem forming a belief in the proposition that the specified ticket will lose while assigning the same credences as the non-believer to both that proposition and the possibility that the ticket wins. It would be a point in favor of a theory if it could explain how both kinds of subject could be right about their respective internal states. I take it that the final account proposed here has this feature.

19 It is worth noting that, if the credence-level centrist's resource pool is expanded to include several credence functions—and, consequently, if we reject the simple lockean and credence-level centrist views—then credence levels alone might suffice to accommodate this general difficulty. If a believing agent sometimes possesses multiple credence functions, only some of which are maximal when an agent believes, a subject might rule out live possibilities with respect to his or her maximal credence function while still being able to accommodate judgments that require believers to sometimes assign believed propositions a credence of less than 1 by appealing to another credence function. I take (Wedgwood, 2012) and (Dallmann, 2014) to be instances of this type of view. I set these types of view aside for the purposes of this paper. That said, such views face the very real challenge of explaining how it is that a subject might (i) be committed enough to one of his or her individual confidence assignments in order to capture ruling out, while (ii) also being committed enough to other confidence assignments to stave off objections to the credence 1 view, without thereby (iii) either being committed to irrationally assigning incompatible confidences to believed propositions, or otherwise being surprisingly "two minded". See Wedgwood's (2012) for one way to address this problem.

A new proposal: "live possibilities" as persistent possibilities

Call a subjective possibility q for a subject S and proposition p a 'persistent possibility' just in case (i) q conflicts with p and (ii) S is invariably disposed to consider q as an epistemic option, or as a candidate for the way things are, in situations where it is relevant whether p. It is these possibilities, I would like to suggest, that belief is naturally thought to "rule out". 20

20 A review of the psychology literature lends support to this view. It is generally difficult for subjects to generate alternate hypotheses to ones that are initially favored when prompted, and alternatives to held theories are very rarely arrived at spontaneously (Wason and Johnson-Laird, 1972), (Koriat et al., 1980), (Tetlock and Kim, 1987), (Baron et al., 1988), (Kuhn, 1989), (Baron, 2007).

Cases of high credence in a proposition for which persistent possibilities are not ruled out help to make the case that such possibilities are not compatible with belief. For example, say that Amanda owns both a Chevette and a Fiat, and that driving either is qualitatively comparable from her perspective—they are both as comfortable, drive as well, are roughly as reliable, and so on.
However, Amanda has a preference to use less, rather than more, fuel when possible and this preference is the most weighty consideration in her decision of which vehicle to drive. She also has a high credence in the proposition that her Chevette is more fuel efficient than her Fiat. Now, as is compatible with the story up to now, suppose that whenever Amanda decides to drive somewhere she recalculates the relative efficiencies of each vehicle on the basis of their salient properties. She reasons: 'My Fiat is French, and thus a luxury vehicle. My Chevette is fairly old.' …and so on. After each bout of deliberation, her credence in the proposition that her Chevette is more fuel efficient than her Fiat remains high and she drives to her destination in the Chevette. Is it plausible to hold that the epistemic subject believes that her Chevette is more fuel efficient than her Fiat? No, in the described case, Amanda does not treat the proposition that her Chevette is more fuel efficient than her Fiat as believed. The proposition is too open to debate to serve as a plan in epistemic deliberation, in the way that beliefs paradigmatically serve as plans in epistemic deliberation.

Conversely, no matter how low a subject's credence is in the proposition that, say, she will have to go to work on New Year's Day, as long as she is disposed to seriously consider that proposition as an epistemic option whenever it is relevant, it will be hard to make the case that she believes that her office is closed on New Year's Day. Believers generally act on the propositions that they believe, and reason as though the propositions that they believe obtain without being disposed to second guess, or reevaluate, what is believed. 21

21 One psychologically plausible way in which such a low credence proposition might become "live" for our subject in such a case is if she takes a lot to turn on whether that possibility obtains—for example, if she cares deeply about breaking her office attendance record. The literature on skepticism, knowledge, and "pragmatic encroachment" is suggestive in this regard: see (DeRose, 1992, 1995; Lewis, 1996; Cohen, 1999; Hawthorne, 2004; Stanley, 2005; Fantl and McGrath, 2009). I will not speculate on the generality of this phenomenon, though empirical work gives reason to think that the stakes sensitivity phenomenon described is not completely general (Tetlock and Kim, 1987, p. 707).

The preceding examples make a good case for the conclusion that persistent possibilities, in the described sense, are the possibilities ruled out by belief. They also support the idea that persistent possibilities need not be the result of considerations at play in lottery cases—"ordinary" or "everyday" possibilities can also be persistent in the specified sense. Consequently, it should not be expected that standard diagnoses of why we often refrain from forming outright beliefs in lottery propositions will yield a complete explanation of ruling out by themselves. 22

22 The above cases did not seem to depend in any obvious way on the epistemic subjects in question only having access to "purely statistical information" (Harman, 1967; Nelkin, 2000), or particularly symmetric reasons for the propositions in question (Hawthorne, 2004, pp. 15-20), and so on.
The account of live possibilities as persistent possibilities explains why ruling out poses a serious problem for lockean views, as follows: It is a necessary condition on believing a proposition that doing so rules out conflicting live possibilities—in particular, persistent possibilities. And since belief does not entail absolute certainty on the part of the believer, plausible lockean credal priorisms are committed to there being some level of confidence that does not encode absolute certainty such that assigning that confidence to a proposition constitutes believing, and thus is sufficient to count as believing, that proposition. But, for any level of confidence that does not encode absolute certainty there is an "ordinary" proposition p and possible epistemic subject S, such that S has a confidence of that level in p while there is a proposition q that is a persistent possibility for p for S by being brought to mind as conflicting with p whenever the subject engages in reasoning where whether p is relevant. Consequently, there are no plausible lockean views.

In fact, it looks like the argument puts significant pressure on credence-level centric views more generally. The argument did not turn on any peculiarity regarding the subject's other confidences, conditional confidences, the level of the threshold, or any way that those features might vary with the context. As long as the view specified allows for one to be less than absolutely certain of a believed proposition, it leaves it open whether there is some persistent possibility for that proposition for that subject. So, ruling out constitutes a serious challenge to the descriptive adequacy of lockeanism and credence-level centric views.

2.1.3 Belief is world-implicating

A 'world-implicative' constraint on belief has also been thought to cause problems for lockean credal priorisms. Roughly, the idea is that believing is a way of being right or wrong about the content of one's belief. The constraint can be formulated more explicitly, if still roughly, as follows:

World implication. If a subject S believes p when p is true, then S is right about whether p, and if S believes p when p is false, then S is wrong about whether p. 23

On the face of it, world implication is an undeniable truism, so it would be a real cost for lockeanism if its critics were right about its inability to accommodate the principle. This section examines the ways in which the world-implicative constraint on belief has been, and might be, levied against the lockean and other credence-level centrists. It will turn out that special care needs to be exercised when assessing how world implication interacts with those views. A standard case against lockeanism from world implication found in the literature, and some natural ways of making it more precise, miss the mark. However, the observations made in this section fall short of providing those who endorse a credence-level centric view with a response to the objection from world implication. After examining and rejecting world implication inspired objections to lockeanism, a telling problem for lockeanism and related credence-level centric views based on the intuitive core of world implication is developed.
Understood in this way, the worry from world implication is closely related to the worry for credence-level centric views from ruling out and patterns in the same way. Credence-level centric views can accommodate the parallel objection from world implication just in case they can accommodate ruling out. Since they have difficulty accommodating the latter phenomenon, they have difficulty accommodating the former too.

23 This principle finds many defenders in the literature. For example, it is explicitly endorsed at (Fantl and McGrath, 2009, ch. 5) and (Ross and Schroeder, 2012). I here refrain from using the more common 'correctness' label for this constraint. That label suggests that the rightness and wrongness at issue is normatively substantive. But, that view is controversial, so it is best to avoid the suggestion here.

The case against lockeanism from world implication is often motivated by considering specific cases in which one has a high credence but intuitively fails to satisfy the principle. Consider, for instance, the following example due to Fantl and McGrath:

Consider a standard lockean view according to which belief is a matter of having a credence greater than some τ < 1. Suppose τ is .98. If you have a .99 credence for p, and p turns out to be false, it does not follow that you were wrong about whether p. If you were told 'Ha, so you were wrong about whether p, weren't you?' you could reasonably say in your defense: 'Look, I took no stand about whether p is true or false; I just assigned it a high probability; I assigned its negation a probability, too'. (Fantl and McGrath, 2009, p. 141) 24

24 The passage is also echoed approvingly at (Ross and Schroeder, 2012, p. 17). It is worth pointing out that assigning a credence to a proposition is a way of taking some stand on a proposition, so this formulation is somewhat misleading. The point is that there does seem to be a distinctive stand that one takes in virtue of believing that differs from the stand one takes in virtue of assigning a high confidence. Further details are filled in below.

In this case, the reader is supposed to judge that a subject who assigns a high credence to p fails to be wrong about whether p if it turns out p is false. The reason given for why the subject is not wrong about whether p is that the subject failed to take a stand on whether p as evidenced by her having assigned positive probability to the negation of p. But, if the subject is not wrong about whether p when she assigns a high credence to the false proposition, then having a high credence is not sufficient to secure world implication. Since world implication is a truism, the argument continues, the case motivates rejecting lockeanism.

The case has some intuitive pull, as does the argument that it is supposed to support. However, without saying more, the extent to which it counts against lockeanism and other credence-level centric views is unclear. In particular, as it is currently stated, the argument seems to make room for the lockean to respond by showing that assigning a high credence to a proposition is a way of taking a stand on that proposition's obtaining. But, a lockean can plausibly claim that assigning a high credence to a proposition p is taking some stand on p's obtaining. When you are highly confident of p you take p to be a better estimate of how the world is than not-p.
That estimate plausibly makes one accountable to the way the world is since that estimate is more accurate if p obtains and less accurate if not-p obtains. In light of the possibility of a lockean response along these lines, it will be important to get a handle on the kind of stand that believers take towards the propositions that they believe and the sort of rightness and wrongness to which taking such a stand gives rise. A natural first attempt to shore up the case against lockeanism from world implication focuses on a possible difference in the kind of accuracy conditions that apply to credence and belief respectively . The kind of rightness and wrongness involved in the lockean response above was a gradable notion. A credence in p is the kind of thing that can be more or less accurate depending on how confident one is of p. The more confident one is of a truth, the more accurate is one’s assignment. Conversely , the more confident one is of a falsehood, the less accurate is one’s assignment. The accuracy conditions that apply to credences, it might be argued, stand in stark contrast to the accuracy conditions that apply to belief. The kind of ac- curacy that accompanies belief is not gradable. Belief is either perfectly accurate, in cases in which one believes truly , or perfectly inaccurate in cases in which one believes falsely . 25 If this is the distinction that world implication draws between mere 25 Joyce puts the point this way: The difference between these two sorts of attitudes [credence and belief], I claim, has to do with the appropriate standard of accuracy relative to which they are evaluated. While both ‘aim at the truth,’ they do so in quite different ways. Full beliefs answer to a categorical, ‘miss is as good as a mile,’ standard of accuracy that recognizes only two ways of ‘fitting the facts’: getting them exactly right or having them wrong, where no distinctions are made among different ways of being wrong. (1998, p. 578). 32 high credence and belief, then the following is a clearer way of stating the part of world implication that is at odds with lockeanism: Rightness as perfect (in)accuracy. If a subject S believes p, then that attitude is perfectly accurate with respect to p if p, and perfectly inac- curate with respect to p if not-p. This interpretation is at odds with any credence-level centric view that allows cre- dence less than 1 to count as belief. A credence less than 1 in a true proposition could always be more accurate by being closer to 1. Conversely , a credence less than 1 in a false proposition could always be more inaccurate by being closer to 1. So, high but non-1 credences are never perfectly accurate, nor perfectly in- accurate. If we accept this reading of world implication, no high but non-1 credence in a proposition can be a belief, contra credence-level centric views like standard lockeanism. While this objection from world implication appears better situated to pose a problem for lockeanism and related views than the original, I will now show that the appearances are misleading. Remember that in §2.1.2 above we distinguished believing, from believing with certainty. The distinction can be leveraged to show that rightness as perfect accuracy fails to cause a problem for credence-level centric views. The response can be put in the form of a dilemma. Either believing requires certainty , or it doesn’t. If belief requires certainty , then credence-level centrism ac- commodates rightness as perfect accuracy. 
The way to understand certainty within a credence-level centric framework is as a certain kind of maximal credence. So, if world implication is true on this reading, then the relevant credence-level centric view of belief is one according to which only maximal credences count as beliefs. But, an assignment of the kind of maximal credences associated with certainty to a truth is perfectly accurate while a maximal assignment of that kind to a falsehood is perfectly inaccurate. So, credence-level centrism accommodates world implication when belief is taken to require certainty.

On the other hand, suppose that one way of believing is believing without being certain of what is believed. On this understanding of belief, the credence-level centrist could ask: 'Can some believer do better from the point of view of accuracy while believing the truth?'. If the answer is 'yes', then rightness as perfect accuracy is false since believing p would not entail being perfectly accurate with respect to whether p. But, a reasonable case can be made for answering that question in the affirmative if belief does not require certainty. A believer who is not certain of a believed proposition could be doing better from the point of view of accuracy by believing the truth with certainty. That attitude leaves no room for the falsity of what is believed—unlike merely believing while not being certain. Believing (without being certain) is a way of leaving open subjective possibilities in which the proposition believed fails to obtain. Since those possibilities inaccurately represent the world with respect to the proposition believed, believing (without being certain) is not an attitude that is perfectly accurate if the proposition believed is true. Consequently, if belief can be non-certain, then rightness as perfect accuracy is false. On either horn of the dilemma, the rightness as perfect accuracy reading of world implication fails to pose a problem for credence-level centrism.

Connections between high credence and attitudes that are, in some sense, less committal than belief help to illustrate why focusing on differences in accuracy, or granularity more generally, is not a particularly promising way of coming to terms with the challenge that world implication poses for credence-level centrism. Consider the following argument, which trades on a connection between high credence and suspicion:

Argument 1 (High credence and suspicion). For all subjects S and propositions p:

1. One way of suspecting p is to assign a high credence to p.

2. If S suspects p, then S is right about whether p if p and wrong about whether p if not-p.

3. Thus, assigning a high credence to p is one way to be right about whether p if p and wrong about whether p if not-p.
This is the case whether or not that subject assigns a positive confidence to the negation of p. Thus, it cannot be in virtue of such an assignment that confidences fail to satisfy world implication. Second, it shows that concerns about differences in credence and belief accu- racy are misguided—a suspicion that p obtains is accurate just in case p and in a similar way that a belief that p obtains is accurate when p obtains. Third, the connection heads off the worry that high credence is too “fine- grained” to count as belief. Coarse-grainedness is not particularly difficult to come by in this context. All that is required is a non-ad hoc broad distinction between credence assignments, like whether or not a fine-grained confidence in p consti- tutes suspecting whether p. Given such a distinction, it will be possible to make coarse-grained evaluations on the basis of one’s fine-grained credence assignments. The right way to leverage world implication against credence-level centric views should not focus on a distinction in grainedness between belief and confidence, but rather on a feature of world implication that precludes there being the kind of broad distinction between credence assignments that divide cases of belief from cases of non-belief. 35 Despite these observations, it would be hasty to conclude that world impli- cation is compatible with credence-level centric views like lockeanism. After all, the kind of stand a subject takes in virtue of suspecting p is very different from the kind of stand that a subject takes when she believes that p. This makes room for the possibility that the kind of rightness or wrongness associated with taking a stand of the latter kind differs from the kind of rightness or wrongness associated with taking a stand of the former kind. We will now argue that this possibility is actual. Belief-level commitment is not guaranteed by a credence assignment of any level. Plausibly , in order for subjects to take an epistemic stand on a proposition p of the kind associated with belief, it must not be the case that they readily acknowledge the possibility that p is false. That is not to say that a believer must be certain that the believed proposition obtains, only that the possibility that p is false is not one that they consistently accord epistemic consideration or take to be a candidate for the way things are in situations where whether p is relevant. In other words, a subject who is right or wrong about whether p in the same sense that a believer is right or wrong about whether p to some extent “ignores” the possibility of not-p. But, as we saw in §2.1.2, there being a persistent possibility in which p fails to obtain for a subject is perfectly compatible with that subject’s assigning a high credence to p. In fact, having a high credence in p is perfectly compatible with having a regularly triggered disposition to consider a possibility in which not-p as an epistemic option. Regularly considering a possibility in which not-p as an epistemic option is not a way of ignoring the possibility of not-p that is involved in the stand that one takes when one believes p. So, high credence is not sufficient to secure this reading of world implication, contra lockeanism. 26 26 Ross and Schroeder give a superficially similar diagnosis of the problem that world impli- cation poses for lockeanism at (Ross and Schroeder, 2012, p. 18). 
According to their objection, merely assigning a high credence to p cannot constitute believing p since, unlike a credence in p less than 1, believing is not an attitude that commits its haver to the possibility of not-p. On the present view, that conclusion is hasty since some ways of believing, like believing without certainty, are committal in this way. What is incompatible with believing p, I take it, is consistently considering the negation of p when whether p is relevant to a piece of reasoning at hand.

And, again, it does not look like an appeal to other credence-level centric resources can solve the problem that world implication poses. All that the problem requires to get off the ground is that the purported believer of p readily acknowledge the possibility of not-p as an epistemic option or consider not-p a candidate for the way the world could be when whether p is relevant. But, as long as the believer isn't required to assign credence 1 to p, this is compatible with having any other confidences related to p, any threshold for belief in (.5, 1), or any way that these features might vary with the context. Thus, there is pressure to think that no credence-level centric view will suffice as an account of belief.

In sum, lockeans and other credence-level centric views have the resources to respond to several challenges that world implication has naturally been thought to raise. However, we have seen that there is some reading of world implication that weighs against credence-level centric views, constitutes a decisive objection to lockean credence-level centric views, and that that objection patterns with the objection from ruling out.

2.1.4 Credence level and priorism

Lockeanism is a natural credal priorist view. Despite its initial attractiveness, it failed to satisfy the normative adequacy condition imposed by belief-justification substantivity. World implication and ruling out also constitute hard problems for descriptive lockean accounts and their credence-level centric cousins, underpinning the following adequacy condition on an acceptable credal priorism:

No acknowledged conflicting possibilities (NACP). If a credal state constitutes believing p, then that credal state does not readily acknowledge possibilities that conflict with p in the sense that a subject in that state is not disposed to seriously entertain possibilities in tension with p whenever it is relevant whether p.

This principle places a necessary condition on belief and, because structural features of confidences do not seem to bear on NACP, it constitutes a hard problem for lockeanism and credence-level centric views more broadly, providing a compelling reason to reject those views.

The rest of the paper will demonstrate that these results do not preclude the viability of credal priorism. Credal priorists have more resources at their disposal to capture what it is to believe p than just the level of confidence assigned to p, the subjective possibilities over which confidences range, and other structural features of credences. Credal priorism, remember, is the general view that confidence is the primary mental state characterized by a mind-to-world direction of fit, and thus that every belief state is grounded in a credal state. But, it is compatible with credal priorism that the correct account of belief involves considerations that outstrip a subject's confidence in a believed proposition p.
For instance, it is compatible with credal priorism that other dispositions and credal considerations might bear on whether a subject believes p. Other things that might be relevant to whether one believes p include facts about how learning various propositions would impact one's credence assignment to p, as well as facts about the characteristics of one's credal states—such as their relative stability and their relationship to experience. In what follows, it is argued that resources from this pool are capable of explaining the presented data. I suggest an alternative account, and show how it satisfies belief-justification substantivity and NACP.

2.2 Belief as epistemic plan

In some interesting recent work, it has been suggested that outright belief is best thought of by analogy to Michael Bratman's notion of intention. 27 According to this view, belief, like intention, is a kind of plan. But, whereas intentions are practical plans, beliefs are epistemic plans. On this view, belief is a kind of fixed point in reasoning, a standing epistemic plan to reason as though the believed proposition obtains. In this way, believing a proposition involves relying on—or even taking for granted—that proposition while reasoning, and reasoning as though one can continue to rely on it. This facilitates the central function of belief: simplifying deliberation.

27 See for example (Ross and Schroeder, 2012) and (Holton, 2013), whose accounts explicitly build on the model of (Bratman, 1985) and (Bratman, 1987).

Believing, according to this account, allows us to mitigate our cognitive shortcomings and secure further gains in the process. Some of those gains are thoroughly practical. For instance, having beliefs of this sort facilitates planning: relying on a premise allows us to plan in light of it, while failing to rely on the sustained truth of any premise would multiply the deliberative burdens of planning beyond our capabilities. However, having beliefs is also cognitively valuable. Evidence for a proposition is often subject to diminishing returns—observing that there is broccoli in the fridge once is enough to settle whether I need to buy more; I don't have to double-check by tasting it or looking in better light, and in the special case where I do have to take these steps either of these further actions will suffice to close the question. Moreover, since we are cognitively limited, there is only so much information that we can attend to and, at least sometimes, we will have more available than we can take into account—think of what it is like to attend a rich philosophy talk. In light of these facts, resisting reconsideration on these established propositions allows us to focus our cognitive powers on other issues where we stand to make a greater cognitive gain. 28

Another benefit of forming doxastic plans is that they guard against misleading evidence. Richard Holton points out that this might be useful in cases where one expects to engage with an interlocutor who is known to be rhetorically effective without being in a better epistemic position than oneself (2013, p. 18). But, the effect also presents itself in the case where one forms one's doxastic state on the basis of weighty information but then forgets that information.
In a case like this, if one were to reconsider one's doxastic state in the face of misleading evidence after forgetting, one is likely to greatly alter one's doxastic state for the worse in response. Since cases of forgetting are presumably common, so are the corresponding gains in forming doxastic plans that resist reconsideration. 29

28 In chapter 3, I show that in a wide range of circumstances one can expect to do better from the point of view of having an accurate doxastic state along these lines.

29 I thank Abelard Podgorski for suggesting this further benefit of forming epistemic plans.

For our current purposes, the most important feature of the belief-as-plan idea is that it can be incorporated into a credal priorism to explain how belief rules out conflicting live possibilities and why appropriate belief requires substantial justification, or so I will argue. In what follows, I begin by examining how one might incorporate the view that belief is a kind of epistemic plan into a credal priorist framework. Some salient credence-level centric accounts that have plan-like features—including important recent accounts that emphasize the role that credal stability plays in belief—are examined and rejected on the basis of considerations raised in the preceding discussion. I then develop a credal priorist account that takes the idea that belief is an epistemic plan seriously while respecting the constraints on an adequate credal priorism about belief imposed by the above discussion. I argue that this view underwrites an informative diagnosis of how belief rules out conflicting live possibilities and why it is permissible to withhold belief when one's reasons are constituted by indifference considerations, thereby avoiding the presented hard problems that credence-level centric views face.

2.2.1 Problems for recent accounts with plan-like features

How might the idea that belief is an epistemic plan be subsumed by a credal priorism? The constraints that have come up in the discussion so far eliminate some obvious options. For instance, an analysis of outright belief as a kind of credence 1 ought to be resisted in light of the discussion in §2.1.2. This despite the fact that assigning credence 1 to a proposition is one way of treating that proposition as a fixed point in reasoning, in the sense that a subject who assigns credence 1 to a proposition and updates by conditionalization will continue to assign credence 1 to that proposition upon learning any proposition that is not assigned zero credence.

Some promising recent credal priorisms that incorporate plan-like aspects also fail to satisfy the established constraints. For example, several views which recognize stability as being an important feature of belief fail to capture NACP. 30 According to these accounts, a subject rationally believes p just in case there is a proposition q such that p is true in all possibilities where q is true and the subject assigns a credence to q that falls at or above a (possibly contextually determined) threshold r and is such that her credence in q, conditional on any proposition consistent with q, is at least r. The state that they take to be (rational) outright belief can rightfully be said to resist reconsideration in a wide variety of circumstances and, as a result, is a quasi-fixed-point and a state that could be relied upon in reasoning.

30 The views I have in mind here are those of (Arló-Costa and Pedersen, 2012), (Leitgeb, 2013), and (Dallmann, 2014). Though, it should be pointed out that the aim of these views is to give a normatively adequate account of belief and, consequently, they do not purport to be descriptively adequate.
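Schematically, the stability requirement just described amounts to the following (this is only a restatement of the prose above, writing P for the subject's credence function and r for the threshold): there is some proposition q that entails p such that

P[q] ≥ r and P[q | s] ≥ r for every subjective possibility s consistent with q,

where r falls in [.5, 1] and may be fixed by the context.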
Unfortunately, a little reflection shows that the states described by these accounts do not generally rule out live possibilities in the required sense. For any subject S, threshold r in [.5, 1], and proposition p: that S satisfies the requirement with respect to p leaves open that there is a subjective possibility q for S that is such that p entails not-q. In fact, if the subject is rational and r is less than 1 then the standard account guarantees that there is such a q for S. Now, the fact that q is a conflicting subjective possibility for p for S does not yet entail that S fails to believe p—again, subjects need not be certain of what they believe. But, it is also compatible with the account that q is a persistent possibility with respect to p for S. Nothing in the account prevents it from being the case that S is disposed to consider q as an epistemic option in situations where whether p is relevant.

Take as an example the case put forward by Hannes Leitgeb to illustrate the phenomenon of rationally maintaining a believed hypothesis in the face of evidence that counts against the conjunction of the hypothesis and an auxiliary hypothesis. 31 The example consists in a plausible model of mid-nineteenth century physicists' confidences regarding the secular acceleration of the Moon. On this model, prior to Adams's calculation of the secular acceleration of the Moon, physicists assigned the relevant fragment of Newtonian mechanics, t, a confidence of approximately .9. They assigned a confidence of approximately .6 to the auxiliary hypothesis h that the effects of tidal friction on the secular acceleration of the Moon are negligible—for definiteness, and to mirror Leitgeb's presentation, let's suppose that it is .598. Together, these propositions entail e1: that the secular acceleration of the Moon is a. But, Adams's calculation of the secular acceleration of the Moon yielded e@: that the secular acceleration of the Moon is some particular incompatible value. Suppose further that researchers' confidences, prior to learning e@, were as follows: P[e@ | t ∧ ¬h] = .018, P[e@ | ¬t ∧ h] = .00006. 32 Under these conditions, after learning e@ the physicists will have a confidence in t of (at least) .89 conditional on any possibility in the model consistent with t. Thus, after learning e@, for any threshold r ≤ .89, the physicists should believe t.

31 The example was first explored in (Dorling, 1979) and is endorsed as a case of belief by Hannes Leitgeb in his (2013, pp. 1353, 1385-6) exposition of his view.

Yet, whether or not the physicists believed Newtonian mechanics cannot be read off of these assignments. That is because the possibility that Newtonian mechanics is false, ¬t, might still be live for an individual with the specified confidences. The possibility is still in the model of their mental state and, in this case, is even assigned positive credence. Nothing in the formal description of the model prevents those individuals from consistently taking the failure of Newtonian mechanics to be an epistemic option in their reasoning and, consequently, having this fact interfere with relying on Newtonian mechanics in reasoning in the way that we rely on our beliefs in reasoning.
Thus, nothing in the account guarantees that a subject who satisfies the re- quirement imposed by these sorts of account will satisfy NACP. Satisfying the re- quirement with respect to a proposition is not sufficient to count as believing that proposition. 33 These accounts capture important features that the lockean leaves 32 Again, here we follow (Dorling, 1979) and (Leitgeb, 2013). 33 Note that for all I have said here the requirement may still be a necessary condition on (rational) belief. 42 out. Nevertheless, as stated they cannot be the complete credal priorist story with respect to belief. The lesson to be drawn from the failure of such accounts to capture NACP is a general one. As long as a credal priorist account of outright belief in a proposition allows an incompatible proposition—possibly of measure 0—to be a subjective pos- sibility for the believer, then the account will have to explain why those possibilities are not persistent possibilities. Assigning a credence of a given level, or credence of a given level conditional on other (non-empty , or possible) propositions, to a proposition cannot explain why possibilities are not persistent possibilities. So, an adequate credal priorist account of belief must appeal to facts that outstrip a be- liever’s credence level in a believed proposition and the believer’s credence level conditional on other (non-empty , or possible) propositions. 2.2.2 Plans, persistent possibilities, and solutions I now wish to advance a credal priorism about belief that takes seriously the idea that belief is a kind of epistemic plan while meeting the constraints noted in our dis- cussion above. Recall that a guiding thought behind the belief-as-plan view is that the central way that belief derives its value is by being an epistemic coordination point. Beliefs are beneficial because, once formed, they free up cognitive resources by taking the good epistemic status of believed propositions for granted. 34 On the present view, beliefs accomplish this task by being a policy to disregard propositions that, if the policy is appropriate, should not matter. Richard Holton has recently sketched one way that belief disregards propositions in the context of a non-priorist account of belief. Though not completely unproblematic, his idea serves as a useful point of departure. According to Holton, the way in which belief is a standing epistemic plan, or fixed point in reasoning, is best understood in terms of the believer exhibiting a disposition to resist reconsideration on the matter of whether or not the believed proposition obtains. He goes on to unpack what it is 34 See chapter 3 for a quantitative analysis of those benefits. 43 for a proposition to ‘resist reconsideration’ in terms of “curtail[ing] our search” for further evidence regarding that proposition and “simply ignor[ing] new evidence” insofar as it regards that proposition, at least as long as that evidence is not too weighty (Holton, 2013, pp. 16-7). This way of characterizing one of the mechanisms that allow belief to act as an epistemic plan is insightful but, at the same time, requires clarification. The par- ticular use of ‘evidence’ in the characterization is potentially misleading, and risks making the account unnecessarily polemical. 
As stated, it seems like it is in tension with the popular internalist conception of evidence and rationality according to which "the evidence that a person has at a time [consists in] the things the person is thinking of or aware of at that time" (Feldman, 1988, p. 219) while a doxastic attitude towards a proposition is rational just in case it "fits the evidence" (Feldman and Conee, 1985, p. 15). Certainly, if ignoring a proposition requires both deliberately dismissing it, so that one is aware of it, and failing to adjust one's doxastic attitude in light of it, then ignoring evidence violates the internalist's epistemic constraints.

But, Holton's characterization of the disposition to resist reconsideration makes it clear that he has a different notion of evidence in mind. Holton points out that in order for beliefs to provide a cognitively economic coordination point, the disposition at issue must operate below the level of deliberate conscious reflection on the part of the subject. If the resistance to reconsideration were not a matter of brute disposition in this way, any benefit of the coordination point would be forfeit: "One cannot assess the new evidence each time to determine whether it should prompt a reconsideration, for that would already be to embark on a reconsideration" (Holton, 2013, pp. 16-7). So, 'evidence', as Holton uses the term, need not be what one is thinking about or aware of at a time. This is good since it is far from clear that it is ever permissible to disregard evidence of this type. Continuing to believe that it will not rain today while being aware of, or thinking about, the fact that you are getting wet in the rain does not seem epistemically ideal.

A great deal of the unclarity and polemical difficulty surrounding the proposal can be avoided by characterizing 'resistance to reconsideration' without appealing to the contested notion of evidence. The idea that belief in a proposition involves its resisting reconsideration can helpfully be understood in terms of the believer not taking notice of, or being oblivious to, considerations that evidentially impact the proposition believed. Evidential impact, in turn, is naturally understood in terms of a subject's conditional confidences: q evidentially impacts p for S just in case S's confidence in p given q differs from S's unconditional confidence in p. This is just the notion of qualitative confirmation familiar from the Bayesian philosophy of science, which is obviously related to, though may not precisely track, our pre-theoretical uses of the term 'evidence'. 35 Glossed in this way, we can avoid appealing to the unqualified notion of evidence that gave rise to the internalist worry concerning actively or deliberately dismissing evidence that conflicts with one's beliefs.

A second way that belief disregards propositions that should not epistemically matter is by treating believed propositions as true in reasoning. 36 Often the tendency to treat believed propositions as true in reasoning will manifest itself as a willingness to use the believed proposition as a premise in reasoning. But, this requires at least that the believer not be disposed to constantly bring to mind propositions that are in tension with the believed proposition in reasoning as being relevant to that proposition.
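In the probabilistic notation used earlier, the gloss on evidential impact just given can be set down explicitly (this is only a restatement, with P standing for S's credence function):

q evidentially impacts p for S if and only if P[p | q] ≠ P[p],

and the special case in which q evidentially counts against p is naturally read as the case in which P[p | q] < P[p].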
While the above characterization of outright belief did not make essential use of credence, it can be carried over to a credal priorist account of belief while at the same time opening up new ways to make the characterization precise. The propositions one is confident of might, like beliefs, resist reconsideration as a matter of brute disposition and be treated as true in reasoning. For this reason, I suggest, then, that outright belief in a proposition is best understood as follows:

Belief-as-credal-plan. S outright believes p just in case S has a high confidence in p which disregards propositions as not mattering to the truth of p, in the sense that

1. S's confidence resists reconsideration by S's being (defeasibly) disposed to overlook propositions that evidentially impact p for S as bearing on p, and

2. S treats p as true by being (defeasibly) disposed to not seriously entertain any q in reasoning as evidentially counting against p for S.

35 See chapter 4 for further discussion.

36 This view is endorsed, among other places, at (Williamson, 2000, p. 99), (Ganson, 2008), (Hawthorne and Stanley, 2008), (Ross and Schroeder, 2012) and (Holton, 2013).

The account could be made more precise along various dimensions. Fleshing out the account entails describing the circumstances in which the dispositions are triggered, what sorts of considerations mask them, as well as specifying the sorts of things that typically undermine them. 37 Presumably, if belief is to play the role required of a plan, the dispositions must be ones that manifest themselves fairly constantly. 38

37 A substantial portion of these details are the kinds of thing that must be informed by empirical investigation. For this reason, it is worth pointing out that empirical results do suggest that belief resists reconsideration in something like the way the current picture says it does. For a particularly extreme example, see (Ross et al., 1975), where the subjects in the study retained their opinions in the face of observations that should have completely undercut the initial evidence on which their opinion was based. See chapter 3 for further discussion of outright belief's resistance to reconsideration.

38 It is worth pointing out that this specification is consistent with it being the case that whenever a piece of evidence is consciously brought before one's mind and consciously recognized as being evidence that bears on a given proposition, one then fully updates one's beliefs by that proposition.

It is plausible that the disposition to resist reconsideration tends to be undermined in situations in which a believer comes across particularly weighty evidence against the believed proposition. 39 However, it is worth pointing out that the disposition to resist reconsideration may be compatible with knowing that one will in fact change one's mind with respect to a proposition and, as a consequence, will not resist reconsideration in one's actual circumstance. In one possible sub-case of this kind, belief may fail to be undermined when one knows that the evidence for one's belief will be undercut—removing the belief along with it.

39 Psychological research suggests that we are quite good at evaluating the extent to which a piece of evidence bears on a given proposition—even if we are not particularly good at rationally revising our confidences in light of that evidence (Tentori, 2013).
To see this, consider a variation on Frank Arntzenius's Shangri-La case: You can travel to Shangri-La via the mountain trail or by the ocean path, but you know that, upon arrival, if you take the ocean path then the guardians of Shangri-La will erase your memory of taking that path and replace it with the memory of taking the mountain path. In this case, while taking the ocean path you will believe that you have taken the ocean path, even though you know that upon arrival you will fail to believe that you have taken either path. 40 The case highlights the dispositional character of belief's resistance to reconsideration. If belief merely resisted reconsideration at the actual world, then cases like the Shangri-La case would be impossible.

40 I have to thank Ralph Wedgwood for the case, and pressing me to clarify this aspect of the kind of resistance to reconsideration that accompanies belief. See (Arntzenius, 2003, p. 356) for the original case.

But, something can have a disposition to φ under condition c even if when c actually obtains it doesn't φ. For instance, compare this to a case in which salt (which has a disposition to dissolve when dropped in a glass of water) fails to actually dissolve when dropped in a glass of water due to, by freak coincidence, becoming surrounded by a thin layer of air. Cases like the salt case are possible. Moreover, a quantity of salt would be soluble even if we knew—perhaps by the pronouncements of an oracle that was known to be reliable—that the quantity of salt would not dissolve because of a freak coincidence like the one described.

With these clarifications of what "resisting reconsideration" amounts to, we can see how the belief-as-credal-plan account avoids the issues which were seen to pose a serious problem for lockeanism and other credence-level centric accounts.

The account can explain belief-justification substantivity, the claim that justified belief in a proposition cannot be grounded in mere indifference considerations. It is a part of the belief-as-credal-plan view that belief resists reconsideration and that believers treat the believed proposition as true by not reasoning with propositions as evidentially counting against their beliefs. Both of these features of belief make it a paradigmatically stable state. But, the relative stability of a state dictates how robust one's reasons must be in order to appropriately come to be in that state. In general, less robust reasons are sufficient to appropriately get into a less stable state, while more robust reasons are required to appropriately put oneself into a more stable state. For example, while you might not need much of a reason to desire to pursue some activity—like climbing Mount Everest—it takes much more robust reasons to make it rational to plan or resolve (or, if either the plan or resolution theories of intention are right, to intend) to pursue that activity. 41 This makes sense since, if it is irrational to be in a state, then stably being in that state will, other things being equal, decrease the likelihood that one rectifies the error. Moreover, the more a state plays a role in guiding future activity, the more irrationally being in that state risks compounding that error. The risks in the case of belief are those of maintaining an inaccurate picture of the world in the face of otherwise compelling evidence against the proposition believed, and compounding those errors by relying on the belief in reasoning.
Since, the belief-as-credal-plan view requires that the stability of belief outstrip that of mere high credence, it requires that the reasons for belief be more secure than reasons to assign a high credence to a proposition. 42 What I would like to suggest is that indifference considerations are not robust in the right kind of way . Doxastic states supported by mere indifference considerations are appropriate to revise in light of any evidence. The reasons supporting a belief, 41 Thanks here are due to Sam Shpall for discussion regarding this parallel between belief and intention. See (Bratman, 1985) for a discussion of the plan theory of intention. See (Schroeder, 2011) for discussion of the type of reasons needed to make it appropriate to be in a stable, activity structuring, state. 42 The epistemic commitments that one takes on in virtue of believing or assigning a credence are an interesting object of study independently of the considerations here, and would be worth examining further. The norms of epistemic commitments in general are under-explored and it is usually commitment to being in some epistemic state rather than the propositional commitments that come along with being in a state that have been studied (as, for example, in (Shpall, 2012) and (Ross, 2012)). 48 understood as credal plan, must make it permissible not to revise one’s confidences. So, belief-justification substantivity obtains. 43 This feature of outright belief also helps to explain why knowing a particu- larly good way of believing. When you know a proposition, that proposition is true. Drawing on Plato’s Meno (1985, 97A-98A), Timothy Williamson correctly points out that this fact will make it more difficult in general for that knowledge to be undermined—we are less likely to come across a defeater for a truth than for a falsehood (2000, pp. 78-80). Thus, when one knows, one’s reasons for be- lief are generally more objectively robust. So, in addition to being doxastically well-tethered to a truth in virtue of knowing, one also has objective reasons that are particularly well-suited to underpin a stable state like belief, which might otherwise be stable without having objective reasons that are well-suited in this way . The belief-as-credal-plan view also avoids the problems posed by persistent possibilities. Having a consistently triggered disposition to consider some proposi- tionq as an epistemic option in situations where whether p is relevant is not a way of overlooking q. Moreover, if q conflicts with p then q evidentially counts against p and being disposed to seriously entertainq in reasoning is not a way of treating p as true. Thus, if a subject satisfies belief-as-credal-plan for a proposition p there is no propositionq that is a persistent possibility for p forS . Belief-as-credal-plan rules out persistent possibilities. These advantages aside, it is important to point out that the presented view marks a departure from credence-level centric accounts of full belief. Most evi- 43 It would be good to have a story about why doxastic states based on indifference considerations are extra-sensitive to revision. I do not have a complete story , but the phenomenon Lara Buchack is tracking in her (2013) paper on belief and credence provides a partial one. There she suggests that causal information, above considerations that tilt the balance of evidence one way or the other, is often required to ground appropriate belief about matters that are causally efficacious. 
This information is epistemically modally more robust, in the sense that knowledge of causes comes along with knowledge about modal co-variations under possible interventions in the system. It would be an interesting further project to further unpack this notion and see how it extends to beliefs about non-causal matters such as beliefs in mathematical truths. Accounts of mathematical explanation are promising in this regard. See (Steiner, 1978), and my manuscript ‘Mathematical Explanation and Methods’ for possible ways to develop the suggestion. 49 dently , it makes use of resources that outstrip putative believers’ credence levels. However, what is important from the present perspective is that the view is con- tiguous with the idea that credences are the primary mental states characterized by a mind-to-world direction of fit. The view supports the core credal priorist idea that all world-directed epistemic states are just special cases of credal states. It also makes irreducible appeal to confidences when circumscribing belief, in its charac- terization of both ‘evidential impact’ and ‘treating as true’. Though transparently not a credence-level centric account, the account is in the same spirit as credence- level centric accounts of outright belief. Moreover, the departure from orthodoxy might not be as great as appearances suggest when we consider what a complete credal priorist account must look like. Insofar as belief acts in predictable ways across time and in response to available information, a complete account of belief will explain these diachronic features of its nature. The belief-as-credal-plan view might not be complete, but it goes some way to explaining the diachronic nature of belief by specifying a disposi- tional account of the responsiveness of belief to available information across time. Credence-level centric views must also say something more about the diachronic features of belief and it is unlikely that these facts can be captured only in terms of synchronic facts about a subject’s credence levels. It is true that if credences are dispositional states then, since dispositions gen- erally persist through time, this synchronic fact will have diachronic implications. But, even if credal states are dispositional states, this does not tell us anything about how they respond to propositions that epistemically bear on those states—which they surely do in a more or less predictable way , nor how those states are used in reasoning. Credence-level centric views must be extended to accommodate these facts. Though not quite a forced march, this suggests that a complete credence- level centric view will have to appeal to some further diachronic constraint on confidences too—perhaps, a disposition to update believed propositions by con- 50 ditionalization or something similar, 44 and an account of reasoning. But then the Belief-as-credal-plan view marks less of a departure from credence-level centric views than may have initially been suspected. Both views have to say something about responsiveness to propositions that evidentially impact one’s beliefs and how they are used in reasoning, the view on offer just makes some of the features of a complete account explicit. Appealing to resources from outside of the standard model also facilitates a certain kind of desirable structural move in debates concerning the distinction be- tween credence and belief. 
Understanding every aspect of outright belief in terms of general features of the standard credal model makes it unclear why we have a separate notion of outright belief at all—it looks like a distinction without a dif- ference. If, however, outright belief is best understood as a credence exhibiting features that are not generally had by credences and that cannot be directly read off of aspects of the standard model, then that feature might explain why we have the distinction. This, I suggest, is another reason for preferring the sort of credal priorism on offer to traditional priorisms which appeal only to resources from the standard credal model. 2.3 Concluding remarks Lockean credal priorism, and credence-level centric views of belief more generally , leave a lot to be desired. Given the simplicity and naturalness of such accounts, their shortcomings might be thought to give reason to doubt whether any credal priorism might be viable. This chapter argued that it was incorrect to think that some of the most telling reasons to reject lockeanism and credence-level centric 44 This “problem” is especially acute for views that lean heavily on context to specify what counts as belief at a given time, since the additional flexibility granted by the appeal to context is usually accompanied by greater diachronic flexibility . I will not press this objection to such accounts further here, but the priorist views I have in mind here are those of (Weatherson, 2005), (Ganson, 2008), (Wedgwood, 2012), (Clarke, 2013), (Leitgeb, 2013), (Leitgeb, 2014), and perhaps also that of (Arló- Costa and Pedersen, 2012) on plausible ways of extending the view to offer a descriptive account. 51 views extend to credal priorism about belief more generally . A normative problem for lockeanism, along with novel diagnosis of known— but often mischaracterized—objections to lockeanism were advanced. These ob- jections were found to put pressure on a feature of lockeanism and credal-level centric views that is not shared by every credal priorism. In particular, it was ar- gued that these worries were, at their core, a consequence of the fact that belief structures deliberation in a way that goes beyond tracking the balance of evidence that credence levels report. A credal priorist account of outright belief that avoids these worries—the belief-as-credal-plan view—was then developed. These results give good reason for a cautious optimism regarding the prosepects of credal priorism. This paper has argued that credal priorism is compatible with important insights concerning outright belief. This optimism must be tempered by empirical research and further investigation. However, it does not leave us without guidance on how to proceed in this regard. The account on offer fixes some of the descriptive features of belief, and suggests that the normative features of plans can be expected to play an important role in spelling out a normatively satisfactory credal priorist account of belief. 45 45 Research into the stability of belief, and how it might be captured in terms of credence, looks promising in this regard. See, for example, (Leitgeb, 2014), (Lawlor, 2014), and chapter 3 ‘When obstinacy is a better (cognitive) policy’. 52 Chapter 3 When obstinacy is a better (cognitive) policy 3.1 Introduction For epistemic subjects like us, updating our credences incurs epistemic costs. 
Ex- pending our limited processing power and working memory to properly update our credences by some information can come at the cost of not responding to other available information. It is thus desirable to flesh out and compare alter- native ways of taking information into account in light of cognitive shortcomings like our own. This paper is a preliminary attempt to do so. I argue that it is bet- ter, in a range of “normal” circumstances and from the point of view of expected credal accuracy , for epistemic subjects like us not to update on available information that bears on propositions for which substantial evidence has been gathered than it is to update on information as it presents itself. In order to clarify the argument, and enable comparisons between information-response policies more generally , I develop a queue-theoretic model of learning for subjects with cognitive limitations. The model characterizes how policies for responding to information interact with a subject’s limitations to yield confidences. Finally , I discuss implications of the discussion for work on confidence, outright belief, and the relationship between 53 those two states. The comparison of information-response policies helps to (i) ex- plain how some of the “biases” recorded in the social psychology literature might be cognitively valuable, (ii) clarify views that take outright belief to be a kind of epis- temic plan that resists reconsideration, and (iii) assuage certain “demandingness” worries for the hypothesis that we are credal reasoners. Demonstrations to the effect that epistemic subjects should always update on any information they come across assume that updating is epistemically cost free for the subjects of interest. 1 But this is not true for epistemic subjects like us, whose capacities are far from being epistemically ideal. For creatures like us, taking avail- able information into account taxes our processing resources and available working memory . Consequently , taking information into account incurs epistemic oppor- tunity costs—when there is enough available information, resources spent taking some subset of our available information into account is time not spent processing other available information. Given this shortcoming, it is unclear whether the pol- icy of responding to any relevant information as it becomes available is best for us. Our epistemic ends might be better served by adopting a distinct strategy . This paper explores that possibility . 3.2 Two information response policies Reasoners who are not subject to cognitive limitations should process any available information relevant to propositions of interest. A subject with cognitive limitations like our own might reason like an ideal reasoner, and to the same effect when not cognitively overburdened, by instantiating the following policy: The naïve policy. Take information into account for further processing on 1 See for example (Oddie, 1997), where it is argued that one does better from the point of view of epistemic value if one gathers all and any information that would make a difference to one’s cognitive state when one updates by conditionalization—and under the assumption that updating is cost free. (Good, 1967) also makes the assumption explicitly as a part of a similar argument for the practical value of gathering information. 
a "first-come, first-served" basis whenever sufficient cognitive resources are currently available—no matter how much other information one is thinking through, or how much information is expected to arrive in the future, and whether or not it is expected to be evidentially weighty.

Is this policy a good one? Plausible assumptions are sufficient to establish that better policies are possible when it comes to subjects like us. Since we can only hold a limited amount of information before our minds and we have limited processing power, it is very likely that there will be circumstances in which we can only properly process a proper subset of our available information. From this, and the fact that our information is not usually misleading, we can conclude that when we cannot process all of the available information it is better from the point of view of accuracy to process the information that will have a greater impact on our confidences rather than a lesser one. Since the naïve policy doesn't prioritize processing information by its impact, a subject will generally do better by acting in accordance with another policy that processes high-impact information at the expense of low-impact information. Moreover, as the amount of available information increases, the situations where we can only process a subset of our available information will become more common, amplifying the effect and justifying a more extreme prioritization of high-impact information over low-impact information.

One way to favor responding to high-impact information over low-impact information is to prioritize responding to pieces of information that bear on issues for which one has processed less weighty information rather than more. This is because inquiry is often subject to diminishing returns. Once substantial information has been gathered on a question, future information tends to make less of an impact—our attitude becomes more robust.

Some paradigmatic cases of this phenomenon, which I will single out as 'the "normal" cases', are those of inquiry focused on ordinary mid-sized dry-goods where preliminary observation provides substantial evidence. An example of a "normal" case, in this sense, is one where a subject is interested in whether there is a peanut butter sandwich in the fridge, looks in the fridge, and observes that there is a peanut butter sandwich in it. After this preliminary observation, she can be near certain that there is a peanut butter sandwich in the fridge. In this case, the evidence gained by picking up the sandwich, looking at it from another angle, smelling it, or tasting it won't usually make much more progress on the question. But, even in the rare case that there is also an almond butter sandwich in the fridge, so that the first impression is not decisive, any of these additional pieces of information will usually be weighty enough. Once a substantial body of information on a question has been taken into account, paying further attention to information relevant to it does not greatly improve the accuracy of our judgment on that issue.

One simple policy—or policy type, since it might be developed in several ways—that makes use of these observations is the following:

The obstinate policy. Disregard any available information as bearing on a proposition once substantial information for that proposition has been processed; otherwise proceed naïvely.

This policy prioritizes high-impact evidence over low-impact evidence in cases where inquiry is subject to diminishing returns.
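The contrast between the two policies can be put as a pair of simple admission rules for arriving information. The sketch below is only illustrative: the count threshold standing in for "substantial" information and the other parameter names are my own placeholders, not part of the formal model developed in the next section.

```python
# Illustrative sketch only: 'threshold' is a stand-in for the vaguer notion of
# "substantial information" in the text, not part of the official model.

def naive_policy(prop_index, queue, capacity):
    """Admit any arriving piece of information whenever working memory has room."""
    return len(queue) < capacity

def obstinate_policy(prop_index, queue, capacity, processed_counts, threshold):
    """Admit an arriving piece only if working memory has room and the
    proposition it bears on has not already received substantial processing."""
    has_room = len(queue) < capacity
    still_open = processed_counts.get(prop_index, 0) < threshold
    return has_room and still_open
```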
Assuming that more information presents itself than can be processed on average, it is reasonable to expect that subjects like us who adhere to the obstinate policy will be in a more accurate credal state than those who adhere to the naïve policy. The considerations offered in support of the obstinate policy over the naïve one are general, but their imprecision makes it difficult to properly assess the argument. I now turn my attention to developing a precise framework for comparing policies like those under discussion.

3.3 A queue-theoretic model

In this section, I present a model of credal updating for subjects for whom processing information incurs a cost in time spent and who have "working memories" of a limited capacity with which to store evidence as it is being processed, or awaiting processing. I begin with a simple statement of the idea and a general statement of the framework. Then, in §3.4, I model the two policies for responding to information over a range of "normal cases" using the framework and see how they compare from the point of view of expected accuracy.

The rough idea is straightforward. Assume there are n propositions whose truth values are of interest for a reasoner. Information relevant to those propositions will come in over time in a random way, with some probability of the next piece of information being relevant to one or another of these propositions. Our reasoners will have a "working memory" that can store a small number of pieces of information during processing, and the amount of time it takes to process each piece of information stored in working memory will also be subject to random variation. 2 As a piece of information comes in, our reasoner can begin updating on that information as long as she has space in her working memory. Otherwise, the reasoner is assumed to have too much on her epistemic plate and the information must go unnoticed.

More formally, for each of those n propositions of interest, let each of p_1, …, p_n pick out the truth on that matter, so that if whether q obtains is of interest and q is in fact false, then one of the p_i will be not-q. We treat the event that the next piece of information relevant to some p_i arrives by future time t as a random variable X_i for i ∈ {1, 2, …, n} with distribution D_i and λ_i its mean rate of arrivals for a specified period. Our subjects are assumed to have a working memory capable of storing up to m pieces of information for processing. Then we treat how information is processed in a similar way.

2 I am thus assuming a multiple-tiered model of memory, or a "duplex model of memory", of the kind that is standardly assumed in the verbal and visual learning and memory traditions. See Christopher Cherniak's (1983) 'Rationality and the Structure of Human Memory' for a philosophical defense of this assumption. It is plausible that how much one can store in working memory depends not only on the count of pieces of information but also the relative informativeness of that information (Awh et al., 2007). The model can track total informativeness by letting the slots in the queue correspond to minimal units of information—analogous to bits in computer memory, but the correct picture might instead turn out to be a hybrid of the "slot based" and "total informativeness" models (Brady et al., 2011). These models are currently underdeveloped, making it difficult to differentiate and adjudicate between them. Consequently, I note the issue as a relevant complication only to set it aside.

The event that a piece of information stored in working memory will be processed by our reasoner by future time t is a random variable Y with distribution D_μ, where μ is the mean rate of pieces of information processed in a period of a specified duration.

The system can thus be depicted as in Figure 3.1.

Figure 3.1: The general model of epistemic inquiry.

Here each X_i feeds its information into the working memory queue of length m. If the queue is full, any arriving piece of information is discarded. Otherwise, another spot in the queue becomes filled, reducing the available spots for further information by one. Finally, the number of empty spots in the queue increases by 1 according to the distribution of Y—i.e., on average, μ pieces of information are processed per period of interest.

This queue-theoretic framework provides a model of responding to information with memory and processing limitations. How exactly our reasoners update their credal states upon processing the information can then be specified externally, along with a way of scoring accuracy, to calculate the expected accuracy of adopting a policy for responding to information.

3.4 Comparing the policies

We can use this framework to compare the obstinate and naïve policies in a more nuanced way. Setting the parameters of the model to specific values, and clarifying the policies within the framework, yields definite predictions about the expected accuracy of the cognitive state of a subject who adopts one of the precisified policies over the other. By repeating this for a range of parameter values we develop an understanding of the circumstances under which one policy is to be preferred to the other.

Let us begin by formally specifying the impact of information on the subject's confidences. At a minimum, the non-skeptical premise of the preliminary argument for obstinacy requires that our subjects' experiences are expected to be truth-tracking in that they expect their respective confidences to become more accurate as they update on more information. The initial argument for obstinacy also assumed that the epistemic returns of information for a given proposition are expected to be diminishing, in the sense that as the information a subject has processed on a question becomes substantial, further evidence is expected to rationally make less and less of an impact on that subject's confidences.

Here we will restrict the scope of the argument to the "normal cases", in which initial observations that bear on whether p are expected to provide substantial evidence and have a greater impact on a subject's confidence in p than observations made later in inquiry. Insofar as we are often preoccupied by "normal" cases, the simplification will not prevent us from drawing a general conclusion. However, we have to keep in mind the restriction when thinking about cases of theoretical, or scientific, inquiry where data is scarce and substantial data scarcer still. For all the more precise argument says, obstinacy may not be a better policy over this domain.

We will capture the impact of information on a subject's confidences with these qualifications in the general model by assuming that the confidences that we expect our subjects to have in each of the true propositions p_1, …, p_n are increasing functions of the number of observations that they have taken into account as bearing on those propositions, modulo some local deviation. Of course, it is not assumed that the subjects will know that the respective p_i are whichever of p or ¬p are true for each i among the n propositions of interest prior to inquiry.

For specificity, we make the conservative assumption that a subject's expected credence in p_i after taking into account i observations bearing on p_i follows the logistic function:

E[P[p_i | i pieces of information relevant to p_i]] = \frac{1}{1 + e^{-i}}.

The function ignores local deviation in the quality of the information on the grounds that, as long as it is as likely to push one's confidences towards the truth as away from the truth, that deviation will be washed out from the point of view of expected accuracy. Thus, a logistic function that incorporates deviation, such as that in Figure 3.2, will produce the same results. 3

Figure 3.2: Possible credence progressions in the truth on successive updating.

This logistic function is a modest assumption in this context. It encodes the typical characteristics of the impact of observations on credence, like being in general increasing and being subject to diminishing returns. For this particular function, the diminishing returns it encodes become pronounced after three or four observations. This seems plausible for hypotheses about everyday objects—the peanut butter sandwich example motivated convergence after one or two observations. By expanding the range of cases covered by the original argument, this function errs, if at all, on the side of modesty for "normal cases" by underestimating the rational impact of initial observation on confidence assignments to the propositions that make up the argument's focus. In any case, not much hangs on this choice since the effects of interest in the model are robust under plausible choices of impact functions for everyday inquiry. 4

3 The depicted progressions increase on average according to a logistic growth rate function centering the reasoner's initial confidence on .5—i.e., \frac{1}{1 + e^{-i}}, where i is the number of pieces of information that have been taken into account by the subject as bearing on the specified proposition. The fact that the data can be misleading, especially initially, is then accommodated by including some stochastic variation as follows: \frac{1}{1 + e^{-i + N}}, where N is a normally distributed random quantity chosen independently at each time step so that the credence generally goes up, but sometimes regresses. The model of changes to a reasoner's credal state given successive observations can consequently be viewed as a kind of inverse "geometric Brownian motion". In the specific example, the three progressions were generated pseudo-randomly by the same function with N having mean 0 and variance 1.

4 What particular choices of impact functions do tend to affect is the required number of observations relevant to a proposition needed until the effects of further evidence become negligible. The logistic function is modest in this regard.

A second choice point concerns the time at which information in working memory should affect one's confidences in the model. One possibility would be to adjust a subject's confidences only after that information has been fully processed as it is removed from the queue. However, since the pieces of information placed in working memory need not be processed serially for ordinary reasoners, or even on a piece by piece basis, this stipulation would underestimate the information processed by a subject at a time by a small amount. Another option is to treat information as fully processed once it is placed in working memory. This stipulation would overestimate the information processed by a subject at a time, again by a small amount. The correct confidence distribution for a subject will lie somewhere between these options at a time.

Since it is questionable whether any precise trade-off between these two possible modeling choices would be meaningful, since the results will be close under either choice, and since the latter has the benefit of at least corresponding to the subject's state once all information obtained at the time has been fully processed, I use the latter in what follows. Our reasoner's credences will be adjusted by an observation in the model at the time that it gets put into the queue instead of when that specific piece of information is removed from the queue. An added benefit of this approach is that this allows us to simplify the bookkeeping in what follows by disregarding the order in which the pieces of information have been put into the working memory queue.

Our reasoners' cognitive states will be assessed from the point of view of their accuracy. Letting 1 represent the truth value of a true proposition, 0 the truth value of a false proposition, and credences range over the unit interval [0, 1], we follow the prevalent tradition in formal epistemology of judging a credence in p as being more accurate as the distance between that credence and the truth value of p decreases. 5 Again to fix discussion, we will assume a version of the popular quadratic measure of inaccuracy for a total credal state S with respect to each proposition p_i in {p_1, …, p_n}:

D(S) = \sum_i (1 - P_t[p_i])^2,

with P_t the subject's credence function at t. 6

5 See (Joyce, 1998; Greaves and Wallace, 2006; Easwaran, 2013) for a representative sample of works in this tradition.

6 Here the rule is appropriate since the truth value of the p_i is known to be 1 and it will be assumed that our subject's credences obey the complementation principle for any proposition p_i among the n—i.e., P[p_i] = 1 - P[¬p_i]. A perfectly general account would allow an epistemic subject to choose any reasonable (in)accuracy measure. This isn't feasible in the present case, because of the numerical nature of the results—though they are also robust under other popular inaccuracy measures.

Finally, in order to assess the obstinate and naïve policies, the distributions of the waiting times between observations D_i for i in {1, …, n}, the distribution of the time it takes to process an observation D_μ, and the amount of working memory m possessed by a given subject must be specified. Here we restrict D_i and D_μ to a class of distributions commonly used to model wait times for natural events. We will focus on situations in which any relevant event is as likely to occur within any time interval of equal length, whether or not the interval in question was recently preceded by another incident. This amounts to assuming that the following are pairwise independent: (i) the waiting times for previous pieces of information relevant to a proposition, (ii) the variable waiting time for the next piece of information relevant to a proposition at a time, (iii) the variable waiting times between pieces of information pertaining to different propositions, and (iv) the variable processing times of pieces of information past, present, and future. 7

7 In this context, this is equivalent to the assumption that the number of pieces of information presented which are relevant to a proposition of interest and the number of pieces of information processed are independently Poisson distributed. Equivalently, that the waiting times are exponentially distributed. This entailment is well known, see (Billingsley, 1995, p. 190). Another effect of this choice is that our system becomes, using Kendall's queuing-theory notation, an E/M/1/(m-1) queue. The transient—as opposed to long-run—behavior of this type of queue is not particularly well studied, thus the discussion might also be of some statistical interest.

Though these assumptions are simplifications, they are not unnatural or unmotivated at this level of generality. Constraints like the above make sense for processes like the number of people arriving at a bus stop during a work day or the wait times between rider arrivals, but not for either the number of buses which arrive throughout that interval or the wait times between buses—since they are scheduled. We are thus restricting our attention to cases in which information arrives more like people arrive at a bus stop than buses to that stop. In practice, these assumptions are used in modeling a wide range of systems like the number of photons that reach a telescope, the number of mutations in a given segment of DNA, and the number of phone calls arriving at a call center in a specified period. That said, the model can be applied to other (less tractable) choices of variables for the X_i and Y too.

Under these assumptions, we can assess the two policies for responding to presented information for a wide range of values of the working memory m we possess, the number of propositions of interest n, the average rate λ at which evidence for a given proposition of interest presents itself, and the average rate μ at which our observers fully process a given piece of information. What range of values makes sense? Setting m aside for the moment, it is not so much the values of n, λ, and μ that are important from the point of view of the model, but rather the relationship between the average rate of information arrival for any question in the designated time period, nλ, and the average rate at which information is processed in that period, μ. Consequently, I have fixed n at a manageable 3 and manipulated the relationship between the parameters by varying the choices of λ and μ given that n = 3. A robust range of conditions from cases in which information is on average processed much quicker than on average it arrives—μ = 10nλ—to cases in which much more information presents itself on average than can on average be processed—10μ = nλ—was examined.

The appropriate value for the total amount of working memory m available to a subject at a time will depend on the limitations of the subject, or subjects, of interest. The psychological literature and introspection suggest that we can consciously assess the evidential impact of very few pieces of information at a time, and probably only 3 or 4. 8 However, since a clear conception of what values of m are appropriate depends on subtle and substantive philosophical and psychological theorizing, I examined choices of m ranging from 1 up to a value of 5. Call any event in which an observation is processed or a piece of information becomes available to our reasoner an 'epistemic event'.
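Before reporting the results, it may help to see how a comparison of this kind can be carried out. The following Monte Carlo sketch estimates expected inaccuracy after a fixed number of epistemic events under each policy. It is an informal illustration rather than the calculation behind the results reported below (those are computed analytically in §3.7); the parameter values, the threshold standing in for "substantial" information, and the treatment of processing events that find an empty working memory are all assumptions of the sketch.

```python
import math
import random

def estimate_inaccuracy(policy, n=3, m=4, lam=1.0, mu=2.0, events=30,
                        omega=4, trials=20000):
    """Monte Carlo estimate of expected quadratic inaccuracy after a fixed
    number of epistemic events. 'policy' is "naive" or "obstinate"; lam is the
    per-proposition arrival rate, mu the processing rate, m the working-memory
    capacity, and omega a stand-in threshold for "substantial" information.
    All parameter values are illustrative assumptions.
    """
    total = 0.0
    for _ in range(trials):
        counts = [0] * n      # observations taken into account, per proposition
        q = 0                 # occupied working-memory slots
        for _ in range(events):
            # The next epistemic event is decided by competing exponential
            # clocks: each arrival stream wins with probability lam/(n*lam + mu),
            # a processing event with probability mu/(n*lam + mu).
            winner = random.choices(range(n + 1), weights=[lam] * n + [mu])[0]
            if winner == n:                       # processing event
                q = max(q - 1, 0)                 # an empty queue just idles
            elif q < m:                           # arrival with room in memory
                if policy == "naive" or counts[winner] < omega:
                    counts[winner] += 1           # credence adjusts on enqueueing
                    q += 1
                # otherwise the obstinate reasoner ignores the item
            # arrival with a full queue: the information goes unnoticed
        credences = [1.0 / (1.0 + math.exp(-c)) for c in counts]
        total += sum((1.0 - cr) ** 2 for cr in credences)
    return total / trials

# e.g. compare estimate_inaccuracy("naive") with estimate_inaccuracy("obstinate")
# across different settings of lam, mu, m, and events.
```

The sketch is only meant to make the comparison concrete; the exact computation is given in the technical appendix (§3.7).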
For this range of values, and over the short to medium term of thirty epistemic events, the obstinate policy of ignoring information does provably better from the point of view of expected credal accuracy than the naïve policy of taking information into account as it ar- rives whenever the average rate of information processing is less than two times the average rate at which it arrives. In many cases, the effect is even more pronounced. For instance, with smaller working memories ofm = 1 orm = 2, the obstinate pol- icy is provably preferable up to, and beyond, the boundary case of thirty epistemic events when information is processed on average ten times faster than it arrives. For a working memory of m = 4, the obstinate policy is preferable over the first thirty epistemic events up to and including the case where information is processed four 8 See (Cowan, 2001, 2005) for a summary of the research on the limits of visual working memory . (Brady et al., 2011, pp. 1-5) provide a summary of recent results on the limits of visual working memory . 64 times faster than it, on average, arrives. 9,10 These results give us a good idea of when the obstinate policy will be preferable to the naïve one. Whenever the amount of available information exceeds our pro- cessing power, and the distributional assumptions are approximately correct, the obstinate policy will be preferable. One interesting effect that the model illustrates is that even if on average we process information more quickly (and sometimes much more quickly) than it arrives the obstinate policy will be preferable to the naïve one. Sometimes a lot of information just happens by chance to come in all at once or by chance we process less information than average. Whenever either or both of these occur to a sufficient extent, it results in an information bottleneck. This effect makes prioritizing high-impact information important even in cases where infor- mation is fairly scarce and a subject processes incoming information with relative ease. As a consequence, the model suggests that the obstinate policy might be bene- ficial even if we implement certain other strategies for overcoming our limitations. In particular, it might be beneficial if in response to overabundant information we implement a strategy that involves increasing the speed at which we process in- formation at the cost of a higher variance in the results, or even at a slight cost in accuracy . Heuristic, or otherwise quick-and-dirty , reasoning of this kind may increase the rate at which information is processed relative to the amount arriving, but the previous observation shows that obstinacy is still a better policy than the naïve one under modest increases in processing speed. Obstinacy is compatible with, and may be a good supplement to, an array of strategies to overcome our limitations. 9 Of course, adopting the obstinate policy is guaranteed to do worse in the long term since, in the longterm, the policy leads to indiscriminately ignoring all information with probability 1. From the current perspective, it is important that our reasoner, like us, epistemically operates in the present and near future rather than in the long run. 10 Details of the proofs can be found in the technical appendix §3.7. 65 3.5 Applications Some authors (correctly) hold that facts about our cognitive limitations should structure theorizing about our world-directed cognitive states like outright belief and credence. 
In this section, I argue that the above framework and results help to clarify some of this theorizing and explain why some of our cognitive behavior which seems deleterious might be beneficial. §3.5.1 presents some influential work in psychology documenting obstinacy effects in our reasoning processes. While these are often presented as “cognitive biases” to be overcome, the results above explain how they are useful. §3.5.2 shows how the results fill in some of the much needed details for accounts of belief as a kind of epistemic plan. In §3.5.3, I argue that the results concerning the obstinate policy undercut certain “demandingness” objections to the possibility that we are credal reasoners. 3.5.1 Obstinacy effects in psychology Work in social psychology has revealed a few ways in which we resemble obstinate reasoners. So-called ‘primacy effects’ provide one example. In Cameron Peterson and Wesley DuCharme’s now classic (1967) study on the phenomenon, subjects were told the distribution of colored chips in two urns. They were then presented with a series of data that they were told corresponded to draws from exactly one of the urns, with replacement. After each datum was presented, the subjects recorded their confidence in the hypothesis that the draws were from the first urn. The information presented to subjects favored the hypothesis that the draws were from the first urn rather than the second for the first 30 observations then, symmetrically , the next 30 “draws” favored the converse. The last forty “draws” also favored the hypothesis that the draws were from the second urn. If they were acting as perfect Bayesian reasoners, setting their confidence that the draws were from the first urn upon receiving information to their prior confi- dence in that proposition conditional on that information, the subjects would have had a confidence of .5 or less in the hypothesis that the data resulted from draws 66 of the first urn after their initial sixty observations. In fact, for most subjects, high confidence in the first hypothesis persisted well beyond sixty observations—over half of the subjects failed to reduce their confidence in the first urn hypothesis be- low .5 over the total 100 observations, by which point a Bayesian updater would have had a confidence below .05. Information provided later in the experiment had much less of an impact on their confidences than it would have if the reason- ers took it into account by updating in an ideal Bayesian way . Other experiments reveal a similar tendency . 11 Documented belief “persistence effects” constitute another way in which we resemble obstinate reasoners. The experiments that best illustrate belief persistence are one’s in which subjects retain an elevated confidence in a proposition after any additional support for that proposition is undercut (as opposed to merely being outweighed as in the Peterson and DuCharme experiment just discussed). In one experiment, Lee Ross et al. presented subjects with pairs of purported suicide notes, told them that each pair contained one real and one fabricated note, and asked them to say which one they thought it was (1975). After each response was elicited, subjects were told whether or not they were correct—the response was in fact predetermined by the researchers and independent of their performance. One test group received mostly positive responses, and another negative. 
After the test, the subjects were debriefed and told that the information that they had received was predetermined and independent of their actual choices. Nevertheless, those that received mostly positive responses still thought that they did much better at the task and would be better at it in the future then those who were told that their choices were mostly wrong. The effect was also present when an outside party , after watching the experiment and similarly debriefed, was asked to rate the participant on how well they performed and how they might be expected to perform in the future. In their discussion of the result, Ross et al. conclude that “the relevance, reliability , and validity of dubiously relevant, reliable, or valid 11 (Baron, 2007) contains a good summary of “primacy” results like the one described. 67 information is resolved as a function of its consistency with the attributor’s [sic.] initial impression” (1975, p. 889). Impressions, they take it, can be sustained by the evidence filtering effects that accompany them. Both primacy and persistence effects are usually presented as epistemically deleterious cognitive biases. Those conclusions are warranted when the evidence is misleading over an extended period, as was the case in the experiments. However, courses of evidence that are systematically misleading like those of the experiment are atypical. In cases where evidence is not misleading over an extended period, the above argument and model suggest that subjects who exhibit these “biases” might have a more accurate picture of the world as a result. 3.5.2 Clarifying the belief-as-plan view According to the belief-as-plan view, belief is best thought of by analogy to Michael Bratman’s notion of intention (Bratman, 1985). 12 Just as intention is, on this view, a kind of practical coordination point or a stable point that constrains future action, belief is a kind of epistemic coordination point or a stable point that shapes our rep- resentation of the world, guiding theoretical deliberation and inquiry . By forming a belief in a proposition, we become disposed to treat that proposition as true in reasoning. One of the central mechanisms by which belief plays this role, on this view, is by being disposed to resist reconsideration. The thought parallels the rough argument for obstinacy presented above. It is cognitively costly to remain responsive to every epistemic contingency . After weighty enough evidence for a proposition is taken into account, proponents of the belief-as-plan view argue, we usually expect that further information for that proposition will be largely epistemically inconsequen- tial. There is little to gain by responding to further information regarding that proposition. Thus, instead of being disposed to respond to such information, it 12 A defense of the belief-as-plan view can be found at (Holton, 2013). We follow the details of his presentation here. Similar positions are suggested by Ross and Schroeder (2012) and Weisberg (ms). 68 would be better to be disposed to disregard it in order to make up gains on other issues—to resist reconsideration on well-established propositions by being blind to information that bears on it. 13 According to the belief-as-plan view, that is what believing does. Believing a proposition firms up one’s epistemic stance towards the believed proposition. 
It trades off the epistemic flexibility of being able to constantly fine-tune one’s epis- temic stance towards a proposition in order make up epistemic gains on other issues. This should sound familiar. The obstinate policy , remember, recommends disregarding available information as bearing on a proposition when a substantial information for that proposition has been gathered. The motivation for the belief- as-plan view parallels the rough defense of the obstinate policy over the naïve one and stands to be clarified along the same lines as the model clarified the rough argument for obstinacy . The model and case study can be used to clarify the belief-as-plan view by filling in its details if either (i) credence and outright belief “march in step”, 14 in the sense that belief is always accompanied by high confidence, or (ii) if outright belief reduces to credence. If outright belief reduces to credence, then the application is direct. Belief ’s re- sistance to reconsideration can be understood as credal obstinacy and the obstinate policy is one way of filling in the details of the mechanism underling the formation of belief-as-plan states. A benefit of filling in the details this way is that it explains why forming beliefs in accordance with the credal analogue of the belief-as-plan view, the obstinate policy , is cognitively valuable since in normal circumstances 13 It is important that the stance towards future evidence more closely resembles blindness than deliberate ignorance. Deliberately ignoring information is irrational. On the other hand, if one is blind to information then, arguably , that is not (or at least less obviously) epistemically perni- cious. In addition, if the resistance to reconsideration were a deliberate or conscious matter the proposal would be self-undermining, since consciously considering how further information bears on a proposition already incurs the cost of reconsideration. 14 This locution, and observation, is due to Scott Sturgeon (2008). The thought is implicit in any account of belief that reduces belief to a variety of substantial confidence. 69 credences formed in this way are expected to be more accurate than those formed using a naïve policy . The model can also be used to unpack the details of our mental lives on the belief-as-plan picture if credence and belief merely “march in step”. If beliefs resist reconsideration according to the belief-as-plan view, then the high credences that accompany those beliefs will have to likewise “march in step” and resist reconsid- eration in the same cases. But, then whatever one’s account of how one gets into a belief-as-plan state, one will need an account of the mechanism by which credence gets into a state of resisting reconsideration too. The obstinate policy is a good way of spelling out that mechanism. If one updates in accordance with the obstinate policy while outright belief and credence “march in step”, then the obstinate pol- icy can provide an explanation of why forming plan-like beliefs promotes overall representational accuracy at the level of outright belief. By following the obstinate policy at the credal level, a subject increases the chances that the available information will propel more of her credences towards the truth than if she were to update naïvely—that is the effect driving the results in the case study . 
But, given the “march in step” phenomenon, one consequence of this is that following the obstinate policy will make it possible to form more outright beliefs in truths than if one updated naïvely . So, the obstinate policy promotes a credal state that is more fertile for forming accurate outright beliefs than the naïve policy . The hypothesis that credences adhere to the obstinate policy , together with the “march in step” claim, supports the idea that forming belief-as-plan states is good for overall representational accuracy . In sum, many find the view that outright belief reduces to confidence plausible. But, even if it is false, that they “march in step” is hard to deny . It is difficult to imagine what it could be like to believe a proposition without being more confident than not that that proposition obtains. In either case, the queue-theoretic model and case study provide a plausible way to unpack the cognitive mechanisms at the heart of the belief-as-plan view, and how entering into such states should be cognitively beneficial. 70 3.5.3 Demandingness worries The possibility of information response policies like the obstinate policy helps to alleviate, if not eliminate, certain “demandingness” objections to views that take us to be credal reasoners. In this vein, Gilbert Harman (1986, ch. 3), and more re- cently Richard Holton (2013, pp. 2-3, 10-2), have argued that we are not the kinds of creatures that explicitly reason with credences because doing so would outstrip our mental capacities—reasoning with credences is too cognitively costly for creatures like us. 15 According to one line of thought these authors advance, in order to be good credal reasoners we would have to be willing to take unrealistically many things into epistemic account. They both begin by assuming that credences are the kinds of states that are responsive to any non-trivial evidence. But, we are not the kinds of creatures that are capable of readily responding to all of the available information. 16 Consequently , the line of thought continues, the amount of cog- nitive processing that being a credal reasoner would require makes it implausible that we are credal reasoners. 17 15 Both Harman and Holton have additional arguments for the conclusion that we are not credal reasoners. It is less clear that the above discussion can help with those arguments so, while I don’t find them compelling, I do not take them up in what follows. 16 Holton and Harman give different reasons for why we cannot readily respond to all of the available information. Harman worries that doing so would require credal reasoners to have im- plausibly many conditional confidences waiting in the wing. Holton worries that doing so would make our mental lives unmanageably unstable, in that we would have to be constantly recalculating our credences in established propositions. 17 I am here focusing on the general features shared by each author’s objection rather than the specifics over which they differ. There are further worries for the specifics—many of which are clearly articulated in Julia Staffel’s (2012) criticisms of Harman’s (1986) version of the objection. It should also be noted that this objection should not be confused with the kind of “demanding- ness” objection which tries to establish that we cannot be credal reasoners on the grounds that the probabilistic computation that it entails is too taxing. 
I have nothing new to say about this objection, and largely agree with Staffel’s (2012) criticism of the objection—which turns, in part, on the possibility that we employ credal heuristics rather than engage in explicit probabilistic calculations when we reason credally . The objection I am focused on here concerns our ability to sensitively respond to information even if the computations involved in updating are of some simpler, non-probabilistic, variety . 71 However, the assumption upon which this general argument relies—that cre- dences are the kind of states that are responsive to any non-trivial information—is implausible. It marks another place that a theory of ideal reasoning, in this case Bayesian updating, is being misinterpreted as description of conscious delibera- tion. If we take our cognitive limitations seriously in the way that both Harman and Holton suppose we should then, contrary to what Bayesian models of ideal credal reasoning might suggest, credal states should not be expected to be respon- sive to absolutely any non-trival information. Some credences might resist recon- sideration in the same way that Holton takes belief states to fail to respond to some pieces of information as a matter of brute disposition. The obstinate policy serves as a simple proof of concept of how this might be so while illustrating one way that we stand to profit by instantiating a policy of disregarding information relevant to the propositions to which we assign some credence. The objection is no more per- suasive when leveled against accounts of reasoning with credences than it is when leveled against accounts of reasoning with belief. 3.6 Summing up For limited creatures like us, properly responding to information comes at a cog- nitive cost. In this paper, I laid out a queue-theoretic framework for precisely assessing how different policies for responding to information interact with some of our limitations to influence the cognitive value of our total credal state. Two simple policies for responding to information, ‘the naïve policy’ and ‘the obstinate policy’, were assessed within this framework under modest assumptions about our cognitive limitations, in a range of common or “normal” cases, and under defen- sible simplifying assumptions. Under these conditions it is provable that, from the point of view of expected credal accuracy , it is better for epistemic subjects like us not to update on available information that bears on propositions for which substan- tial evidence has been gathered than it is to update on information as it presents itself. 72 The conditions assumed in the model are most appropriate when the propo- sitions under investigation concern the familiar properties of ordinary mid-sized dry-goods. Given the central role that hypotheses of this kind play in everyday inquiry , the model helps to explain why some of our non-ideal techniques for re- sponding to evidence are nonetheless useful. By pointing out that credences might be formed in accordance with the obstinate policy , the above picture undercuts a “demandingness” objection to the possibility that we are credal reasoners and helps to give substance to the view that takes belief to be a kind of plan. 3.7 Technical Appendix This section provides the details of the result documented in §3.4. 
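As a computational counterpart to the derivation below, the expected inaccuracy after a given number of epistemic events can be evaluated by propagating a distribution over the states defined in this appendix, using the event-by-event transition probabilities of Corollaries 2 and 3 together with the logistic impact function of §3.4. The sketch is illustrative only: it is not the Mathematica notebook mentioned in footnote 19, its parameter values are arbitrary, and it treats a processing event that finds an empty working memory as leaving the state unchanged, a reading the corollaries leave implicit.

```python
import math
from collections import defaultdict

def expected_inaccuracy(policy, n=3, m=4, lam=1.0, mu=2.0, events=30, omega=4):
    """Expected quadratic inaccuracy after 'events' epistemic events, computed
    by propagating a distribution over states (observation counts per
    proposition, occupied memory slots). Parameter values are illustrative."""
    p_arrival = lam / (n * lam + mu)   # next event is an arrival on a given stream
    p_process = mu / (n * lam + mu)    # next event is a processing event
    dist = {((0,) * n, 0): 1.0}        # start: no observations, empty memory
    for _ in range(events):
        new_dist = defaultdict(float)
        for (counts, q), pr in dist.items():
            # Processing event; an empty queue is assumed to stay unchanged.
            new_dist[(counts, max(q - 1, 0))] += pr * p_process
            # One arrival stream per proposition.
            for i in range(n):
                admit = q < m and (policy == "naive" or counts[i] < omega)
                if admit:
                    bumped = list(counts)
                    bumped[i] += 1
                    new_dist[(tuple(bumped), q + 1)] += pr * p_arrival
                else:
                    # Full memory, or ignored under the obstinate policy.
                    new_dist[(counts, q)] += pr * p_arrival
        dist = new_dist
    def inaccuracy(counts):
        return sum((1.0 - 1.0 / (1.0 + math.exp(-c))) ** 2 for c in counts)
    return sum(pr * inaccuracy(counts) for (counts, _), pr in dist.items())
```

Comparing the two policies' values across settings of λ, μ, m, and the number of events is one way to reproduce the kind of comparison reported in §3.4.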
In order to derive the results, it was assumed that the waiting times between observations relevant to the truth of a proposition p i , X i for i2f1;:::;ng are identically but independently exponentially distributed with mean rate of arrival . Likewise, the waiting time for a proposition in the queue to be processed, Y , is independently exponentially distributed at a (possibly distinct) mean rate of pieces of information per time period. That is, the X i and Y have the following probability density function: f(x) = 8 > > > < > > > : e x if x 0 0 if x < 0 where is the mean rate at which pieces of information arrive for the given p i or are processed, respectively . Now, lett 1 ;t 2 ;::: be the sequence of times at which an epistemic event occurs, that is, either a piece of information relevant to a proposition is presented according to an X i or a piece of information is processed from the queue according to Y . Let S p 1 ;p 2 ;p 3 ;:::;q , with p k ;q2N andq m be the state in which our reasoner has made p 1 observations relevant concerning the truth of the first proposition of interest, p 2 observations relevant to the second proposition of interest, …, and for which q of 73 m states of working memory are currently being expended to process observations. A proposition and two corollaries follow from these definitions and observations: Proposition 1. The probability , at timet, that a givenV chosen fromfX 1 ;:::;X n ;Y g will occur next,P t [V = minfX 1 ;:::;X n ;Y g], is v +n , where 1 v is the mean ofV. By the no-memory property of X 1 ;:::;X n ;Y , this is so no matter which events have occurred before t. 18 This, in turn, yields the following probabilities for transitioning between states when the naïve policy for responding to presented information is operative: Corollary 2 (Naïve transition probabilities). Where ‘S i ! P S j ’ is the probability of transitioning from state S i to S j , and p k ;q2N: 1. S i ! P S j = +n if q < m and S i = S :::;p k ; ;q while S j = S :::;p k +1; ;q+1 ; 2. S i ! P S j = +n if S i = S :::;q while S j = S :::;q1 ; 3. S i ! P S j = n +n if S i = S j = S :::;q with q = m; 4. S i ! P S j = 0, otherwise. Moreover, the state-transition probabilities under the obstinate policy are given by the following: Corollary 3 (Obstinate transition probabilities). Where ‘S i ! P S j ’ is the proba- bility of transitioning from state S i to S j , information for a proposition of interest is ignored after! observations, and p k ;q2N: 1. S i ! P S j = +n if either (a) q < m, p k < ! and S i = S :::;p k ; ;q while S j = S :::;p k +1; ;q+1 ; or 18 See (Ross, 2007, p. 294) for a standard proof of the result. It is worth noting that the assump- tions support the more general proposition in which the means of the distributions of observations are not assumed to be equal. In that case, P t [V = minfX 1 ;:::;X n ;Y g] = v ∑ , where ranges over the mean rates of arrival for X 1 ;:::;X n and Y . 74 (b) q < m, p k = !, and S i = S j . 2. S i ! P S j = +n if S i = S :::;q while S j = S :::;q1 ; 3. S i ! P S j = n +n if S i = S j = S :::;q with q = m; 4. Otherwise, S i ! P S j = 0. The above observations allow us to compute the probability that an individual is in a given state after a specified number E of epistemic events for either of the discussed information-response policies. The result follows by application of a ver- sion of the Chapman-Kolmogorov Equations. In particular, letting S i ! 
The above observations allow us to compute the probability that an individual is in a given state after a specified number $E$ of epistemic events for either of the discussed information-response policies. The result follows by application of a version of the Chapman-Kolmogorov Equations. In particular, letting $S_i \rightarrow_{P,E} S_j$ be the probability that beginning in state $S_i$ one arrives at $S_j$ after $E$ epistemic events, the probabilities follow by looking at each possible way that one can end up in a state $S_j$ from $S_i$:

$$S_i \rightarrow_{P,\,a+b=E} S_j \;=\; \sum_{k=0}^{\infty} \big(S_i \rightarrow_{P,\,a} S_k\big)\big(S_k \rightarrow_{P,\,b} S_j\big).$$

Specifying the starting state yields a distribution of probabilities for the possible states one might be in after $E$ epistemic events according to the relevant method of responding to available information. With these details in place, fixing $m$, $n$, $\lambda$, $\mu$, and the epistemic subject's starting state allows the subject to calculate the expected (in)accuracy of their credal state $S$ at a time $t$ after a given number of epistemic events $E$. Recall that for a total credal state $S$ in propositions $p_1, \ldots, p_n$, the total inaccuracy $D(S)$ of that state given by a version of the quadratic scoring rule is $\sum_i (1 - P_t[p_i])^2$, with $P_t$ the subject's credence function at $t$.

Now, letting $S_1, S_2, \ldots$ be the possible states describing the pieces of information taken into account as above and the length of the queue at $t$, $o_i(S)$ be the number of propositions with $i$ observations specified in the subject's total credal state $S$, $t_0$ be the current time, $P$—without index—be our reasoner's credence function at $t_0$, and $t \geq t_0$ be the time at which some given number of observations has occurred, the expected inaccuracy of our reasoner's credal state at $t$ is:

$$\mathbb{E}[D(S)] = \mathbb{E}\big[\,\mathbb{E}[D(S) \mid S_i]\,\big], \quad \text{by the Law of Total Expectation,}$$
$$= \mathbb{E}\left[\sum_j o_j(S_i)\left(1 - \frac{1}{1+e^{-j}}\right)\right] = \sum_i P[S_i] \sum_j o_j(S_i)\left(1 - \frac{1}{1+e^{-j}}\right)$$

Filling in the state-transition probabilities for either policy yields the expected accuracy of that policy at a time at which any specified number of epistemic events has occurred, as desired. 19

19 A Mathematica notebook for calculating the expected value of epistemic states in the test case for any number of epistemic events is available on request.

Chapter 4

A puzzle concerning evidence, belief, and credence

Three very general, and widely accepted, principles relating evidence, belief and credence are incompatible. In this paper, I present the tense triad of principles and show how they come into conflict. I then sketch the motivation for each of the principles and weigh the costs of rejecting them. Towards the end, I weigh in on which principle to reject and develop one way out of the puzzle.

4.1 The puzzle

The puzzle is generated on combining a widely accepted principle relating belief and evidence, a principle relating credence and evidence, and a weak view about how belief in a proposition and a high credence in that proposition come apart. In particular, the following form a tense triad:

The belief-evidence condition. If, when rationally not believing p, learning only proposition e makes it rational for a subject S to believe (or accept) the proposition p, and it is irrational for S to believe p as it stands before gaining this information, then e must be (all things considered) evidence for p for S; and if, when rationally believing p, learning only proposition e makes it rational for S to cease believing p, and it would not be rational for S to not believe p as it stands before gaining this information, then e must be (all things considered) evidence against p for S.

The credence-evidence condition. If learning only proposition e makes it rational for a subject S to raise her rational confidence in p by conditionalizing on e, (i.e.
by updating her confidence in p to her confidence in p given e prior to learning e) then e is (all things considered) evidence for p for S; and if learning only proposition e makes it rational for S to lower her rational confidence in p by conditionalizing on e, then e is (all things considered) evidence against p for S.

Separation. Failure to rationally believe p is compatible with any non-maximal level of confidence in p, though it is possible to rationally believe p without being maximally confident of p.

The first step towards trouble starts by noticing the kinds of states that are possible under separation. According to that principle, high but non-maximal confidence is not sufficient for belief. Something beyond high confidence is needed.

Separation entails that it is possible for some rational subject to assign a less than maximal credence to some proposition and believe that proposition, while it is also possible that some rational subject assigns greater credence to that very proposition while failing to believe it. But, as long as there could be an epistemically permissible transition from the first state to the second upon learning some proposition e, then separation makes possible scenarios where a rational subject goes from believing a proposition p with some level of confidence, learns only e and rationally raises her confidence in p, but rationally ceases believing p as a result of the transition. Conversely, if there is an epistemically permissible transition from the latter state to the former upon learning some proposition e, then separation makes possible scenarios where a subject goes from not believing a proposition p with some (presumably high) level of confidence, learns only e and rationally lowers her confidence in p, but rationally comes to believe p as a result of the transition.

The possibility of either of these scenarios poses a problem for the co-tenability of the credence-evidence condition and the belief-evidence condition. On the one hand, if a subject can rationally go from believing a proposition p to not believing p upon learning proposition e and raising her credence in p, then the belief-evidence condition says that e is (all things considered) evidence against p for that subject while the credence-evidence condition says that e is (all things considered) evidence for p for that subject. On the other hand, if a subject can rationally come to believe p while rationally lowering her credence in p upon learning e, then the belief-evidence condition says that e is (all things considered) evidence for p for that subject while the credence-evidence condition says that e is (all things considered) evidence against p for that subject.

In either scenario, e ends up being both all things considered evidence for p and all things considered evidence against p. But, since propositions can be evidentially relevant in at most one of those ways, these scenarios are incoherent. So we have it that if there can be epistemically permissible transitions between states of the kind that are possible assuming separation, then the credence-evidence condition and the belief-evidence condition are incoherent. But, both particular motivations for separation, as well as general considerations concerning epistemically permissible attitude change, support the idea that epistemically permissible transitions between the relevant mental states are possible.

4.2 Motivations for separation

4.2.1 General considerations

Taking the general reasons first, assume that separation obtains.
Then it is possible that some subject S rationally believes p and assigns p credence r. Similarly, it will also be possible that a subject S′ rationally fails to believe p and assigns p a credence r′ less than 1 but greater than r. But, since subjects can rationally come to different epistemic states regarding a proposition, especially if their information differs, some of these possibilities will overlap: it is possible that some subject S rationally believes p and assigns p credence r while a subject S′ rationally fails to believe p and assigns p a credence r′ > r. But, if it is possible that two subjects bear different, but non-extreme, rational credences to a proposition then that is good reason to think that a single subject could be in each of those states at different times. This line of reasoning may not apply in every case. So, for example, if sufficiently different initial epistemic attitudes towards some propositions are epistemically permissible but there could not be enough, or the right kind of, information to ground a permissible transition from one to the other, then the line of reasoning will not apply. But, if there are cases like this, they are exceptional, and there is no special reason to think that the transitions between the types of states described will be of this kind. The inference is safe in the present context.

Similarly, if a subject can rationally be in a non-extreme credo-belief state at one time and rationally be in another non-extreme credo-belief state at another time, then that gives a good reason for thinking that a subject could have an experience (or series of experiences) that would permissibly lead her from one state to the other. Again, the reason is defeasible. Some transitions from one state to another are permissible in one direction but not the other. For example, when a transition involves opening up new epistemic possibilities there may not be any straightforward permissible epistemic transition that results in closing them off or, conversely, if a subject closes off a possibility definitively, there may not be any straightforward permissible epistemic transition that results in reopening it. But, again, even if the kinds of possible transitions required to make the argument are undermined for some (or even most) propositions or sorts of transition, they are not for others.

These general considerations provide substantial pressure for a proponent of separation to accept that a subject can rationally (i) go from assigning a credence state to a believed proposition to a higher credence state in that proposition and cease to believe it, and (ii) go from assigning some higher credence state to a proposition that is not believed to a lower credence state and come to believe that proposition. Either is sufficient to generate the puzzle.
But, rather than restating those arguments, in what follows I will simply assume that one can believe without being maximally confident of what is believed.

Motivation 1: Purely statistical evidence and belief

As commentators have noted, in some cases where subjects possess only statistical evidence for a proposition—for example, when a subject's evidence is only that the proposition has an objective probability of obtaining that is near but less than 1—it is epistemically permissible to assign a high credence to that proposition without believing it.

1 Briefly, one reason is that it seems phenomenologically obvious that we can believe a proposition while being more confident of some other proposition. Another reason is that belief in a proposition is compatible with not being willing to bet on believed propositions as though one was maximally confident. For more arguments, and further details, see §2, (Leitgeb, 2013) and (Maher, 1993).

2 For a few notable recent exceptions, see (Clarke, 2013), (Dallmann, 2014), and (Wedgwood, 2012).

Concretely:

Lottery evidence. The government is allotting 10 parcels of land in the West to interested parties, yourself included. You've been informed that the allotment will be made on the basis of a fair lottery between any interested parties, that at most one parcel per person will be allocated, and that there are n > 10 interested parties.

In lottery evidence it makes sense to assign a credence of 10/n to the proposition that you will be allotted land and 1 − 10/n to the proposition that you won't. For arbitrarily large n, a common (though not universal) judgment is that it is epistemically permissible for you not to believe that you will not be allocated land—after all, it is possible that you win. Endorsing this judgment commits one to separation.

Geological survey evidence. You are interested in mining precious metals from the land you were allocated. You have just read a reputable geological magazine that says that the fraction of land containing precious metals in the region that your allotment occupies is f.

In geological survey evidence, it makes sense to assign a credence of f to the proposition that your land contains precious metals. For arbitrarily large f, a common (though not universal) judgment is that it is epistemically permissible for you not to believe that your land contains precious metals—after all, it is possible that it does not. Endorsing this judgment commits one to separation.

If the judgments about epistemically permissible credence and belief in these cases—or in any similar case—are plausible then, again assuming that belief is compatible with non-maximal credence, separation is vindicated. However, these cases also motivate the idea that belief and high credence are responsive to different features of evidence. 3 If that is right, then we can manipulate whether or not a subject's evidence is "purely statistical" to create cases containing the doxastic transitions that gave rise to the puzzle.

3 Lara Buchak draws this inference on the basis of similar cases at (2013, p. 11).

Concretely:

Gaining lottery evidence. The government is allotting 10 parcels of land in the West to interested parties, yourself included. Your initial information includes the fact that the government allots resources in purely nepotistic fashion, you have a rational credence of .95 that you will not be allocated land and rationally believe that you will not be allocated land.
You then come to learn that the government has re- cently undergone a serious reform and that the allotment will be made on the basis of a fair lottery between any of the 500 interested parties. You rationally raise your credence in the proposition that you will not be allocated land to .98 but rationally cease believing it. Undercutting geological survey evidence. You are interested in mining precious metals from the land you were allocated. You have just read a reputable geological magazine that says that the fraction of land containing precious metals to land without in the region that your allotment occupies is 49/50. You assign a credence of .98 to the proposition that your land contains no precious metals but believe that it could nonetheless contain them. You then learn that the govern- ment who allocated the land took steps to ensure that no one who received an allotment received land containing precious metals, and that in 95% of cases where they allocate land they avoid giving out a parcel of land containing precious metals. Upon learning this in- formation, you rationally lower your credence in the proposition that your land contains no precious metals to .95 but come to believe that your land contains no precious metals. If either of these cases—or any like them—are possible, then separation obtains while one of the credence-evidence condition and the belief-evidence con- 83 dition must be given up. 4 Motivation 2: The belief-action connection There is a tight connection between belief and action. Believing p involves a dis- position to treat p as true by acting as though it obtained. Taking this idea to be part of the core of our notion of belief makes separation plausible and motivates the puzzle. In particular, if rationally believing p in a circumstance requires that it be rational to act as if p in that circumstance, and rationally acting as if p entails having credences that maximize the expected benefits of acting as if p, then the level of confidence required to rationally believe will be sensitive to what is at stake in acting as if p and changes in those stakes. 5 In that case, one might go from ratio- nally believing a proposition to failing to believe it while raising one’s credence in the proposition if the relevant costs and benefits change. Concretely: Gaining stakes information. You are waiting for a train to your allotment to pick up some ore that you have mined. The schedule painted onto the side of the station says that the next train will arrive an hour from now, at noon. On this basis, you rationally assign a credence of .95 to the proposition that the train will arrive in an hour, form a (rational) outright belief that it will and embark on a fifteen minute walk to town for an ice cream cone acting as if the train will arrive at noon. On the way , you overhear that the painted schedule 4 Some formal accounts of the relationship between outright belief and credence that recognize that purely statistical evidence can ground high credence without belief include that of Hannes Leitgeb (2014) and that of Hanti Lin and Kevin Kelly (2012). Both of these accounts make belief relative to a question or way of dividing up the logical space to which credences are assigned. On these views, changes in information or context that prompt different ways of cutting up epistemic possibilities can lead to transitions of the kind described. 
5 Several authors have thought that the belief-action connection is central to the notion of belief in roughly this way—for example, see Fantl and McGrath (2002), Ganson (2008, pp. 452-3), Stal- naker (1987, p. 82), and Weatherson (2005, pp. 421-2). The connection between rational action and belief is standard in decision theory , or near enough for our purposes here. 84 is the most recent update and that outlaws are ransacking claims near your allotment looking for precious metals. You raise your credence in the proposition that the train will arrive at noon to .98, but return to the station to ensure that you catch the next train. You now rationally fail to act as if the train will arrive at noon, and thus, given the tight connection between belief and action, fail to rationally believe that it will arrive at noon. 6 Again, if this case—or any like it—are possible, then separation obtains while one of the credence-evidence condition and the belief-evidence condition must be given up. 7 Motivation 3: Logic and rationality A common thought is that logic informs what is rationally permissible: the laws of logic constrain rational outright belief while the laws of probability constrain rational confidence. With respect to rational outright belief, one plausible principle of this kind is: Agglomeration. For any propositions p andq, if a subject rationally believes p and rationally believesq, then it is epistemically permissible for that subject to believe (p and q). With respect to rational credence, logicality is usually understood as rationally requiring that a subject’s credence assignments be probabilistically coherent: Coherence. A rational subject’s credence assignments can (at a min- imum) be filled out as a probability function. 6 Note the parallel between this case and cases from the “pragmatic encroachment” literature, where they are taken to motivate the idea that your practical circumstances can influence whether you know a proposition obtains. 7 Two accounts of the relationship between outright belief and credence that respect a belief- action connection with these consequences are those of Dorit Ganson (2008) and Brian Weatherson (2005). 85 Together, these constraints defeasibly support separation. The probability of a conjunction of propositions that are not assigned maximal credence is always less than or equal to the probability of either conjunct, and the probability of the conjunction will be strictly less than that of its conjuncts whenever there is some positive probability that one of the conjuncts obtains while the other does not. Adding conjuncts to a proposition in this way rapidly plunges the probability of the resulting conjunction. Consequently , if agglomeration holds, then increasing the number of such propositions involves decreasing the level of confidence required to permissibly believe a proposition. For example, in the case of independent propositions, the probability of p andq is equal to the probability of p times the probability ofq. So, assuming agglomer- ation, having ten probabilistically independent beliefs at 95 percent confidence, requires that the believer’s confidence in the conjunction be less than .6, while having fourteen probabilistically independent beliefs at that level of confidence re- quires assigning the conjunction of those beliefs a credence less than .5. But, as- signing a low confidence to a proposition, and in particular confidence below .5, seems incompatible with outright believing. 
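The figures in the previous paragraph can be checked directly. For probabilistically independent propositions each held at credence .95, coherence puts the credence in their conjunction at the product of the conjuncts' credences:

$$0.95^{10} \approx 0.599 < 0.6, \qquad 0.95^{14} \approx 0.488 < 0.5.$$

So, assuming agglomeration and coherence, a believer with ten (respectively fourteen) such beliefs may permissibly believe their conjunction only if she may permissibly believe a proposition to which she assigns credence below .6 (respectively below .5).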
Now, pre-theoretically , it is very plausible that a subject could rationally believe ten independent propositions at a time. So, if agglomeration is to be maintained, then when the evidence rationally requires a subject to form a new belief in a proposition that is independent of a few of that subject’s other beliefs, the subject must give up a subset of those beliefs even though nothing credally relevant to them has been learned. Since those beliefs were all assumed to be credally independent and assigned credence .95, this further requires that the subject either (i) believes some propositions assigned credence .95 while failing to believe others, or (ii) gives up all of her previously held beliefs in those independent propositions upon rationally forming the new belief. Giving up multiple beliefs is quite drastic. And since the .95 confidence was 86 largely arbitrary , 8 if the subject can permissibly opt to give up a few of her in- dependent beliefs instead, then both (iii) separation holds, and (iv) the rational transitions needed to generate the puzzle are possible too. Again, this line of reasoning is defeasible. In the long history of discussions of agglomeration in probabilistic contexts each of the argument’s key assumptions have been reasonably resisted at some point or other. 9 But, despite resistance to individual premises here and there, overall each premise is plausible. If you find agglomeration and coherence compelling, then separation should be too. Once again we are faced with the puzzle. Together, the above arguments make it difficult to avoid the puzzle by rejecting separation—the thesis that failure to rationally believe p is compatible with any non-maximal level of confidence in p. If even one succeeds, we must confront the tense triad. In what follows, I look at the prospects for rejecting or revising the remaining evidence principles. 4.3 Evaluating the evidence principles 4.3.1 The credence-evidence condition According to the credence-evidence condition, if learning only proposition e makes it rational to raise your rational confidence in p by conditionalizing on e, then e is (all things considered) evidence for p; and if learning only proposition e makes it rational to lower your rational confidence in p by conditionalizing on e, then e is (all things considered) evidence against p. 8 In general, the effect is made more drastic by lowering one’s credence in each independent proposition and less drastic, but equally real, when one’s credence is higher (but non-maximal). 9 Notable examples include Kyburg’s (1961) rejection of agglomeration, as well as Leitgeb (2014), Lin and Kelly (2012), and Weatherson’s (2005) recent belief frameworks that severely restrict the number of independent propositions that a believer can have at a time. It is worth pointing out that the restriction of the number of believed propositions is largely implicit rather than being directly argued for in those frameworks. 87 As formulated, this condition is a fairly weak sufficient condition for when a proposition is evidence all things considered. Because of this, the principle avoids some of the major challenges that have been raised for understanding Bayesian measures of confirmation as measures of evidence. First, by being a merely sufficient condition for when a proposition is evidence, it avoids “old evidence problems”. 
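Since the condition is stated in terms of conditionalizing on e, it may help to have the standard update rule on the page; in the usual notation (mine, not a formulation drawn from the text), learning e and nothing stronger sends a prior credence function to:

$$P_{\text{new}}(p) = P_{\text{old}}(p \mid e) = \frac{P_{\text{old}}(p \wedge e)}{P_{\text{old}}(e)}, \quad \text{provided } P_{\text{old}}(e) > 0.$$

So, by the credence-evidence condition, e counts as all things considered evidence for p whenever this update rationally raises the subject's credence in p.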
10 These problems arise when one endorses the converse of the conditionals in the credence-evidence condition, thereby tak- ing confidence raising to be a necessary condition for a proposition to be evidence, along with accepting conditionalization as a rational way to update one’s confi- dences, where conditionalization is the view that a subject should set her new credence in each proposition p to her old credence in p given e whenever the strongest proposition she learns is e. This package of ideas—i.e. the converse of the credence-evidence condi- tion and conditionalization—conflicts with judgments about when a proposi- tion is all things considered evidence. One consequence of conditionalization is that once a proposition p is learned, we set our confidence in p to our credence of p given p, i.e. its maximum value. By doing so, that proposition will not raise the confidence of any proposition via conditionalization from that point on— conditionalizing on it again will not alter our credences. However, it seems like things we have learned in the past can be evidence for new hypotheses. So, for example, even if people had noticed that the lower portion of a ship disappears first as the ship gets farther away as soon as they learned how to make ships, it still might be the case that later that same observation might serve as evidence for hitherto unconsidered hypothesis that the Earth is round. The current proposal bypasses this problem. The credence-evidence con- dition is not a full definition of evidence, it is a pair of sufficient conditions, one 10 Of course, given that a proposition cannot be both all things considered evidence for and against a proposition at the same time, the condition also guarantees the following necessary condition one‘s being evidence for p’: that one’s rational credence ine given p is not lower than one’s credence in p simpliciter. However, this necessary condition does not give rise to the puzzle. The original formulation of the “old evidence problem” is due to Clark Glymour (1980). 88 that tells us when something counts as all things considered evidence for a proposi- tion, and one that tells us when something counts as all things considered evidence against a proposition. If a proposition isn’t credence altering—like the “old evi- dence” propositions, the condition just doesn’t apply . The credence-evidence condition is also qualified so that it only applies to “all things considered” evidence. This sidesteps the “problem of prima facie evidence”. This problem gets off the ground by noticing that we are often interested in a defeasible, or prima facie, notion of evidence according to which a proposition by itself intuitively seems to provide reason for another but, via other factors that it bears on, ends up not providing reason for that proposition “all things considered”. As an example: Thirsty traveler. You are on trip into Silver City in search of the town emporium or a saloon to buy a sarsaparilla. Nearing town you see a building and form a .8 credence that it is a saloon a .1 cre- dence that it is the emporium, and a .2 credence that you can get a sarsaparilla there—self-respecting bar-keeps don’t serve sarsaparilla. As you approach, you see a well-dressed gentleman drinking a tall milk outside the building. 
You respond by rationally lowering your confidence in the proposition that it is a saloon to .6 but raising your confidence in the proposition that it is the town emporium to .3 and raising your confidence in the proposition that the establishment sells sarsaparilla to .4—self-respecting bar-keeps don't serve milk either and well-dressed gentlemen tend not to frequent saloons. 11

11 Jim Pryor makes this point with a structurally similar example at (2013, section 5). Peter Kung (2010) raises the point from a slightly different angle.

In thirsty traveler there is some initial pull to think that the observation of the well-dressed gentleman drinking milk outside of the building provides some reason to think that the establishment is a saloon—some saloons do serve sarsaparilla and milk after all, and your observation does make it extremely unlikely that the building is anything other than the town emporium or a saloon. But it's just that, all things considered, the evidence lends much more support to its being the emporium. If that is right, then judgments about when a proposition is intuitively prima facie evidence for another won't always be accompanied by the confidence in the one being higher conditional on the other. While not everyone shares the evidential judgment, the credence-evidence condition sidesteps the problem by only entailing results about what is "all things considered" evidence.

The credence-evidence condition also requires that one's initial confidence in p is epistemically acceptable. This preempts a few related worries. First, some have been tempted to think that some cases where evidence has the effect of lowering an irrationally high confidence count as cases of acquiring all things considered evidence for that proposition. 12 So, for example, suppose our thirsty traveler had an initial credence of .99 in the proposition that the first building she sees on entering town will be a saloon (call this proposition 'SALOON') and that that credence was based only on wishful thinking (or nothing at all). Some have thought that if the traveler, upon seeing masses of townspeople on the road with drinks in their hands, realizes that the first building she sees may well be a townsperson's domicile or a saloon, and adjusts her confidence in SALOON to an appropriate .6, then that observation provided all things considered evidence for SALOON while bringing the traveler to lower her confidence in that proposition. While I'm not convinced that this is a case where the observation is an all things considered reason to believe SALOON—after all, from the internal perspective of the traveler, the observation made her rationally lower her estimate of SALOON's truth—the current proposal avoids the worry by only applying when a subject has a rational initial confidence assignment. 13

12 This point is pressed by (Pynn, 2013).

While the credence-evidence condition avoids notable issues for Bayesian theories of evidence, it is also entailed by all standard Bayesian confirmation measures and is consequently supported by many of the considerations that speak in favor of those theories. 14 Among the virtues of many of these theories is their ability to capture and explain many instances of scientific inference, including how testing the consequences of a theory supports that theory and when it is acceptable to reject auxiliary hypotheses instead of a "central" theory. 15
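As an illustration of why the entailment holds, here is a small sketch of my own (not from the dissertation) checking that, for an arbitrary coherent probability assignment, several of the standard measures listed in footnote 14 below come out positive exactly when conditionalizing on e would raise the credence in h—which is all the credence-evidence condition requires.

```python
import math

def measures(p_h, p_e, p_he):
    """A few standard Bayesian confirmation measures, written in terms of
    P[h], P[e], and P[h & e]. Names follow footnote 14 below."""
    p_h_given_e = p_he / p_e                      # P[h | e]
    p_e_given_h = p_he / p_h                      # P[e | h]
    p_e_given_not_h = (p_e - p_he) / (1 - p_h)    # P[e | not-h]
    return {
        "difference d": p_h_given_e - p_h,
        "log-ratio r": math.log(p_h_given_e / p_h),
        "log-likelihood l": math.log(p_e_given_h / p_e_given_not_h),
        "Nozick n": p_e_given_h - p_e_given_not_h,
        "Mortimer m": p_e_given_h - p_e,
    }

# e raises the credence in h (P[h|e] = 0.5 > 0.3): every measure is positive.
print(measures(p_h=0.3, p_e=0.4, p_he=0.2))
# e lowers the credence in h (P[h|e] = 0.2 < 0.3): every measure is negative.
print(measures(p_h=0.3, p_e=0.4, p_he=0.08))
```

The toy numbers are arbitrary; any coherent assignment exhibits the same pattern, since each of these measures is positive, zero, or negative precisely when P[h | e] is greater than, equal to, or less than P[h].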
Largely because of these applications, Bayesian theories of confirmation are widely (though not universally) endorsed.

13 The case where one bases one's credence on nothing at all is developed by Peter Kung (2010, pp. 5-6). His case against the credence-evidence condition faces the additional difficulty of motivating the rationality of the initial credence assignment, since it is not clear that one can or should assign a credence at all to propositions for which there is really absolutely no evidence, not even that afforded by symmetry or similarity considerations.

14 For example, it is entailed by each of the following quantitative measures of how e evidentially impacts h:

• the difference measure $d(h,e) = P[h \mid e] - P[h]$,
• the log-ratio measure $r(h,e) = \ln\big(P[h \mid e]/P[h]\big)$,
• the log-likelihood ratio $l(h,e) = \ln\big(P[e \mid h]/P[e \mid \neg h]\big)$,
• Carnap's $\mathfrak{c}(h,e) = P[h \wedge e \wedge k]\,P[k] - P[h \wedge k]\,P[e \wedge k]$ for background knowledge k,
• the covariance of h and e,
• Christensen's $s(h,e) = \big[P[h \mid e] - P[h]\big]/P[\neg e]$,
• Mortimer's $m(h,e) = P[e \mid h] - P[e]$,
• Nozick's $n(h,e) = P[e \mid h] - P[e \mid \neg h]$,
• Gaifman's $g(h,e) = P[\neg h]/P[\neg h \mid e]$, and
• Crupi's $z(h,e) = \big[P[h \mid e] - P[h]\big]/P[\neg h]$ if $d(h,e) \geq 0$ and $\big[P[h \mid e] - P[h]\big]/P[h]$ otherwise.

It also holds of the numerous measures which are ordinally equivalent to one of the above. The list of authors who advance or endorse measures of this kind is too long to reasonably list. See (Fitelson, 1999) and (Crupi et al., 2007) for discussions of a broad swath of Bayesian confirmation measures and their properties.

15 The Bayesian confirmation literature is vast. For a good overview of what it can and cannot do, see (Earman, 1992), (Easwaran, 2011), and (Howson and Urbach, 2006).

The truth-directedness of confidence also provides reasons to favor the credence-evidence condition. In particular, the transparency of credence lends support to the condition. Credence transparency is the observation that it is a conceptual truth about credence that the question of which credence to assign p collapses into the question of what our estimate of the truth value of p should be. 16 Combine with this the thought that we should assign our credence on the basis of whatever our all things considered evidence supports and we have it that all things considered evidence for p just is whatever tells in favor of the truth of p. 17 But, that is just a version of the credence-evidence condition.

In sum, the credence-evidence condition has a lot going for it. Anyone committed to a standard Bayesian measure of confirmation is committed to it, the condition is relatively immune to common lines of objection to Bayesian views of confirmation, and it benefits from the powerful results of Bayesian confirmation theory. Connections between truth and evidence also lend it support. It will be costly to avoid the puzzle by rejecting the credence-evidence condition.

4.3.2 The belief-evidence condition

The belief-evidence condition is also carefully qualified and correspondingly plausible. It too is only a sufficient condition on one proposition being all things considered evidence for (or against) another. Hence, the belief-evidence condition avoids the same (under-appreciated) problems that affect a necessary and sufficient version of the belief-evidence condition. The condition, or something close to it, is also widely endorsed. Elliott Sober, for instance, puts it this way:

16 See (Joyce, 1998) for discussion of credence as estimate of truth value.
17 This line of thought thus supports the common idea that gaining the doxastic attitudes that “fit” one’s evidence is the epistemically best way to use one’s evidence in trying to have the most accurate doxastic state. Compare Richard Feldman on outright belief: “gaining the doxastic attitudes that fit one’s evidence is the epistemically best way to use one’s evidence in trying to believe all and only the truths one considers” (1985, p. 20). 92 “If learning that e is true justifies you in rejecting (i.e., disbelieving) the proposition p, and you were not justified in rejecting p before you gained this information, then e must be evidence against p” and “[i]f learning thate is true justifies you in accepting (i.e., believing) the propo- sition p, and you were not justified in accepting p before you gained this information, then e must be evidence for p.” (Sober, 2008, p. 5) While this formulation fails to mention that the evidence picked out is of the all things considered variety , and so faces that objection squarely , it is otherwise very close to the belief-evidence condition. Other considerations appear to support the belief-evidence condition. To see one cost of denying it, consider the strangeness of situations in which the an- tecedent of one of the condition’s conditionals hold while the corresponding con- sequent fails to hold. For example, imagine someone asserting that learning e is what prompted them to believe p, but denying thate provided all things considered evidence for p. That would be strange. “Evidentialist” considerations also support the belief-evidence condition. 18 Consider Richard Feldman and Earl Conee’s classic formulation of the position: Evidentialism. Doxastic attitude D toward proposition p is epis- temically justified for S at t if and only if having D toward p fits the evidence S has at t. (Feldman and Conee, 1985, p. 15) The notion of “fit” at play in evidentialism suggests the following conceptual truth about what it takes for evidence to “fit” belief: Conceptual fit. For any propositions p and e, and subject S : 18 It is worth wondering whether these evidentialist considerations are compatible with a tight link between belief in action. Some authors have used cases emphasizing that link, like those used above in motivating separation, to argue against some forms of evidentialism (Ganson, 2008, and Weatherson (2005)). Consequently , those who would endorse the belief-evidence condition would do well to deny this motivation for separation. The other motivations for separation remain plausible under evidentialism. 93 (i) if S goes from rationally not believing p at time t, learns only proposition e from t to t ′ , and this makes it rational for S to be- lieve the proposition p att ′ , then addinge toS ’s stock of evidence made believing a better “fit” with S ’s evidence than not believ- ing, while (ii) ifS goes from rationally believing p at timet, learns only propo- sition e from t to t ′ , and this makes it rational for S to cease believing p at t ′ , then adding e to S ’s stock of evidence made believing a worse “fit” with S ’s evidence than not believing. If we add the following principle relating fit-facts to all things considered evi- dence, we can derive the belief-evidence condition: Fit-evidence valence identity. For any propositions p and e, and subject S : if adding only e to S ’s stock of evidence made believing a better (worse) “fit” with S ’s evidence than not believing, then e is all things considered evidence for (against) p for S . 
Putting these two together via hypothetical syllogism gives us the desired results that (i) if S goes from rationally not believing p, learns only proposition e, and doing so makes it rational for S to believe the proposition p, then e is all things considered evidence for p, and (ii) ifS goes from rationally believing p, learns only proposition e, and doing so makes it rational for S to cease believing p, then e is all things considered evidence against p. Like the credence-evidence condition, the belief-evidence condition has a lot going for it. It is initially appealing and supported by plausible princi- ples about the relationship between evidence and epistemically appropriate belief. Rejecting either condition on all things considered evidence comes at a cost. 94 4.4 Where now? On the face of it, the puzzle presents epistemologists with a difficult choice. Both the credence-evidence condition and the belief-evidence condition are mo- tivated epistemic principles that find support in the literature. But, in the face of separation, one of them has to go. For the remainder of this paper, I will argue that it is the belief-evidence condition that should be jettisoned. However, I will suggest a natural place to resist the arguments for the condition that allows us to keep much of what was attractive about the initial motivation for the condition. Exploring the implications of separation helps to get a handle on the puzzle. Recall that, according to separation, failure to believe is compatible with any non-maximal level of confidence. As a consequence, rationally believing outstrips rationally assigning a high confidence to a proposition. While it would be good to know what, in addition to rationally being highly confident of a proposition, is needed in order to count as rationally believing that proposition, the fact that something else is needed already favors giving up the belief-evidence condition. If rationally believing requires rational-high-credence-having+, then it will be sensitive to whatever the ‘+’ is in addition to being sensitive to whatever credence is sensitive to. 19 In light of credence transparency, this makes trouble for the belief-evidence condition. To see this, recall that credence transparency is the fact that the level of confidence one assigns to a proposition corresponds to how likely one thinks it is that that proposition is true. Now, it is plausible that in order for a proposition e to count as all things considered evidence—as opposed to prima facie evidence 20 —for some proposition p for a subject, e has to make 19 Some possibilities are (i) that the doxastic attitude be sufficiently stable—see (Ross and Schroeder, 2012), (Holton, 2013), (Leitgeb, 2013) and (Dallmann, 2015), or (ii) that assuming the believed proposition does not change any of the believer’s relevant preferences over a class of choice scenarios—see (Weatherson, 2005) and (Ganson, 2008), or (iii) commitment to treating the propo- sition believed “as true” in another way . 20 Which is, in all likelihood, the notion that is better tracked by our unqualified use of the term ‘evidence’. 95 p more likely to be true by S ’s lights. Together these observations entail that all things considered evidence will be sensitive to changes in a subject’s credence, but not necessarily to changes in whether one believes. 
Another consideration in favor of rejecting the belief-evidence condition is the epistemic primacy of credence over belief, as I hope that the modest defense of credal priorism in this dissertation has made plausible. If credence is epistemically prior to belief, then we should expect that central epistemic notions like evidence are best understood in terms of credence rather than belief whenever we have to choose between them.

These observations also point to the fault in our argument for the belief-evidence condition: the fit-evidence valence identity. It is not the case that any increase in how tightly a subject S's evidence fits believing p due to learning some proposition e corresponds to an increase in S's all things considered evidence for p. An increase in the all things considered evidence for p for a subject requires that p be made more likely to be true from that subject's perspective. However, the fit between the evidence and the doxastic attitude of believing p can be increased by virtue of promoting some feature other than how likely to be true p is by the believer's lights—it might promote the "+". If that is right, the relationship between all things considered evidence for a subject and truth gives us good reason to reject fit-evidence valence identity.

However, the suggested resolution of the puzzle allows us to retain the premises in the argument motivating the belief-evidence condition. Appropriate belief can still be a matter of "fitting the evidence", as required by evidentialism understood as involving a notion of "fit" consistent with conceptual fit. The details of how appropriate belief fits the evidence will be a matter of what, beyond high confidence, is required for appropriate belief.

To see how it might go, suppose that one of the additional features that rational belief has and rational high credence lacks is doxastic stability. 21 Several philosophers have thought that believing requires having a stable disposition to treat the believed proposition as true. 22 This view is motivated by the thought that it makes sense for cognitively limited subjects like us to have cognitive fixed-points, or epistemic anchors, that we can rely on and take for granted in reasoning. It is also motivated by the thought that appropriate belief aims at knowledge, a state that is "securely anchored to the truth". Given these motivations, the suggested example is not merely a recherché theoretical possibility. But, if the "+" is something like this, then it might still be reasonable to think that appropriate belief is a matter of fitting the evidence. It's just that appropriately believing p, on this view, amounts to the evidence both making p likely to be true by the believer's lights and making the reasons for thinking p likely to be true sufficiently robust. 23 Even though these features of appropriate belief can pull in different directions, it is still plausible that they depend only on features of one's evidence. Evidentialism can be retained if desired. Consequently, the proposed solution to the puzzle in principle allows us to retain a plausible underpinning of the belief-evidence condition.

Each of separation, the belief-evidence condition and the credence-evidence condition has its share of intuitive pull and independent motivation. Yet, the tension between these plausible principles has gone largely unnoticed. I have presented reasons to think that the tension cannot be easily resolved—one of the principles will have to go.
While any solution is bound to have some intuitive cost, I also hope to have shed light on which should be rejected: belief-evidence condition. The all things considered reasons to believe a proposition p outstrip the 21 It is worth noting that the present point doesn’t hang on our choice of doxastic stability as a “+”— though I think it is a plausible one. Even if belief is “prime”—so that there is no unified property that when added to high credence yields belief short of being-a-belief itself—and thus, in particular, that stable high credence is not sufficient for belief, still the above diagnosis can be applied. 22 For examples, see (Williamson, 2000), (Ross and Schroeder, 2012), (Holton, 2013), (Leitgeb, 2013) and (Dallmann, 2015). 23 See §2 for further discussion of the profile of reasons for belief under a view of this kind. 97 considerations that speak in favor how subjectively likely p is to be true. But, all things considered evidence for a proposition needs to speak in favor of the truth of that proposition. 98 Bibliography Arló-Costa, H. and Pedersen, A. P . (2012). Belief and probability: A general theory of probability cores. International Journal of Approximate Reasoning, 53(3):293–315. Arntzenius, F . (2003). Some problems for conditionalization and reflection. The Journal of Philosophy, 100(7):pp. 356–370. Awh, E., Barton, B., and Vogel, E. K. (2007). Visual working memory represents a fixed number of items regardless of complexity . Psychological Science, 18(7):622– 628. Baron, J. (2007). Thinking and Deciding. Cambridge University Press, Cambridge. Baron, J., Beattie, J., and Hershey , J. C. (1988). Heuristics and biases in diagnostic reasoning: Ii. congruence, information, and certainty . Organizational Behavior and Human Decision Processes, 42(1):88–110. Billingsley , P . (1995). Probability and Measure. John Wiley & Sons, New York, 3rd edition. Brady , T . F ., Konkle, T ., and Alvarez, G. A. (2011). A review of visual memory ca- pacity: Beyond individual items and toward structured representations. Journal of Vision, 11(5):1–34. Bratman, M. (1985). Intention, Plans, and Practical Reason. University of Chicago Press, Chicago. 99 Bratman, M. E. (1987). Davidson’s theory of intention. In Lepore, E. and McGlauchlin, B., editors, Actions and Events: Perspectives on the Philosophy of Don- ald Davidson, pages 14–28. Basil Blackwell, Oxford. Buchak, L. (2013). Belief, credence, and norms. Philosophical Studies. doi: 10.1007/s11098-013-0182-y. Cherniak, C. (1983). Rationality and the structure of human memory . Synthese, 57(2):163–86. Christensen, D. (1996). Dutch-book arguments depragmatized: Epistemic consis- tency for partial believers. Journal of Philosophy, XCIII(9):450–479. Churchland, P . (1981). Eliminative materialism and the propositional attitudes. The Journal of Philosophy, 78(2):67–90. Clarke, R. (2013). Belief is credence 1 (in context). Philosophers’ Imprint, 13(11):1–18. Cohen, S. (1999). Contextualism, skepticism, and the structure of reasons. Noûs, Philosophical Perspectives supplement, 33(s13):57–89. Cowan, N. (2001). The magical number 4 in short-term memory: A reconsidera- tion of mental storage capacity . Behavioral and Brain Sciences, 24(01):87–114. Cowan, N. (2005). Working Memory Capacity. Psychology Press, Hove, East Sussex, UK. Crupi, V ., Tenori, K., and Gonzalez, M. (2007). On bayesian measures of eviden- tial support: Theoretical and empirical issues. Philosophy of Science, 74(2):229– 252. Dallmann, J. (2014). 
A normatively adequate credal reductivism. Synthese, 191(10):2301–2313. doi: 10.1007/s11229-014-0402-9. Dallmann, J. (2015). Belief as Credal Plan. Ph.D. dissertation, University of Southern California. 100 Dallmann, J. M. (2011). Taking confirmation first: Towards a naive conception of confirmation theory . DeRose, K. (1992). Contextualism and knowledge attributions. Philosophy and Phe- nomenological Research, 52(4):913–29. DeRose, K. (1995). Solving the skeptical problem. The Philosophical Review, 104(1):1–52. Dorling, J. (1979). Bayesian personalism, the methodology of scientific research programmes, and duhem’s problem. Studies in History and Philosophy of Science Part A, 10(3):177–187. Douven, I. and Williamson, T . (2006). Generalizing the lottery paradox. British Journal for the Philosophy of Science, 57(4):755–779. Earman, J. (1992). Bayes or Bust: A Critical Examination of Bayesian Confirmation Theory. The MIT Press, London. Easwaran, K. (2011). Bayesianism ii: Applications and criticisms. Philosophy Com- pass, 6(5):321–332. Easwaran, K. (2013). Expected accuracy supports conditionalization—and con- glomerability and reflection. Philosophy of Science, 80(1):119–142. Easwaran, K. (2014). Dr. truthlove, or how i learned to stop worrying and love dr. truthlove, or how i learned to stop worrying and love bayesian probabilities. Fantl, J. and McGrath, M. (2002). Evidence, pragmatics, and justification. The Philosophical Review, III(1):67–94. Fantl, J. and McGrath, M. (2009). Knowledge in an Uncertain World. Oxford University Press, Oxford. Feldman, R. (2004 (1988)). Having evidence. In Evidentialism, pages 219–241. Oxford University Press, Oxford. 101 Feldman, R. and Conee, E. (1985). Evidentialism. Philosophical Studies, 48(1):15–34. Fitelson, B. (1999). The plurality of bayesian measures of confirmation and the problem of measure sensitivity . Philosophy of Science, 66(Proceedings Supplement):S362–S378. Ganson, D. (2008). Evidentialism and pragmatic constraints on outright belief. Philosophical Studies, 139(3):441–458. Glymour, C. (1980). Why I am not a Bayesian. In Theory and Evidence, pages 63–93. Princeton University Press, Princeton. As reprinted in (1998) Philosophy of Science, Curd and Cover Eds. Good, I. J. (1967). On the principle of total evidence. British Journal for the Philosophy of Science, 17(4):319–321. Greaves, H. and Wallace, D. (2006). Justifying conditionalization: Conditionaliza- tion maximizes expected epistemic utility . Mind, 115(459):607–632. Hacking, I. (2006). The Emergence of Probability. Cambridge University Press, second edition. Harman, G. (1967). Detachment, probability , and maximum likelihood. Noûs, 1(4):400–411. Harman, G. (1986). Change in View. The MIT Press, London. Hawthorne, J. (2004). Knowledge and Lotteries. Oxford University Press, Oxford. Hawthorne, J. (2009). The lockean thesis and the logic of belief. In Huber, F . and Schmidt-Petri, C., editors, Degrees of Belief, Synthese Library (Book 342), pages 49–75. Springer. Hawthorne, J. and Stanley , J. (2008). Knowledge and action. The Journal of Philos- ophy, 105(10):571–590. 102 Holton, R. (2013). Intention as a model for belief. In Vargas, M. and Yaffe, G., editors, Rational and Social Agency: Essays on the Philosophy of Michael Bratman, pages 1–20. Oxford University Press, Oxford. pre-print. Howson, C. and Urbach, P . (2006). Scientific Reasoning: The Bayesian Approach. Open Court, Peru, Illinois. Hume, D. (1740). A Treatise of Human Nature. The Floating Press. Joyce, J. M. (1998). 
A nonpragmatic vindication of probabilism. Philosophy of Science, 65(4):575–603. Kaplan, M. (1996). Decision Theory as Philosophy. Cambridge University Press, Cam- bridge. Koriat, A., Lichtenstein, S., and Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6(2):107–118. Kuhn, D. (1989). Children and adults as intuitive scientists. Psychological Review, 96(4):674–698. Kung, P . (2010). On having no reason: dogmatism and bayesian confirmation. Synthese, 177(1):1–17. Kyburg, H. E. (1961). Probability and the Logic of Rational Belief. Wesleyan University Press, Middletown. Lawlor, K. (2014). Exploring the stability of belief: Resiliency and temptation. Inquiry, 57(1):1–27. DOI: 10.1080/0020174X.2014.858414. Leitgeb, H. (2013). Reducing belief simpliciter to degrees of belief. Annals of Pure and Applied Logic, 164(12):1338–1389. Leitgeb, H. (2014). The stability theory of belief. Philosophical Review, 123(2):131– 171. 103 Lewis, D. K. (1996). Elusive knowledge. Australasian Journal of Philosophy, 47(4):549– 567. Lin, H. and Kelly , K. T . (2012). A geo-logical solution to the lottery paradox, with applications to conditional logic. Synthese, 186(2):531–575. Maher, P . (1993). Betting on Theories. Cambridge University Press, Cambridge. Makinson, D. C. (1965). The paradox of the preface. Analysis, 25(6):205–207. Nelkin, D. K. (2000). The lottery paradox, knowledge, and rationality . The Philo- sophical Review, 109(3):373–409. Oddie, G. (1997). Conditionalization, cogency , and cognitive value. British Journal for the Philosophy of Science, 48(4):533–541. Peterson, C. R. and Ducharme, W . M. (1967). A primacy effect in subjective probability revision. Journal of experimental psychology, 73:61. Plato (1985). Plato: Meno. Aris and Phillips. Pryor, J. (2013). Problems for credulism. In Tucker, C., editor, Seemings and Jus- tification: New Essays on Dogmatism and Phenomenal Conservatism. Oxford University Press. Pynn, G. (2013). The bayesian explanation of transmission failure. Synthese, 190(9):1519–1531. Ramsey , F . P . (1931 (Originaly 1926)). Truth and probability . In Braithwaite, R. B., editor, The Foundations of Mathematics and other Logical Essays, chapter VII, pages 156–198. Harcourt, Brace and Company , London. 1999 electronic edition. Roorda, J. (2013). Revenge of the wolfman: A probabilistic explication of full belief. Unpublished manuscript dated 1995. 104 Ross, J. (2012). Rationality , normativity , and commitment. In Shafer-Landau, R., editor, Oxford Studies in Metaethics, volume 7. Oxford University Press, Oxford. Ross, J. and Schroeder, M. (2012). Belief, credence, and pragmatic en- croachment. Philosophy and Phenomenological Research, pages 1–30. doi: 10.1111/j.1933-1592.2011.00552.x. Ross, L., Lepper, M. R., and Hubbard, M. (1975). Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm. Journal of personality and social psychology, 32(5):880–892. Ross, S. (2007). Introduction to Probability Models. Elsevier Academic Press, San Fran- cisco, 9th edition. Schroeder, M. (2011). Representational entities and representational acts. In Reis- ner, A. and Steglich-Petersen, A., editors, Reasons for Belief, chapter 10, pages 201–222. Cambridge University Press, Cambridge. Shpall, S. (2012). Moral and rational commitment. Philosophy and Phenomenlogical Research, pages 1–27. doi: 10.1111/.1933-1592.2012.00618.x. Sober, E. (2008). 
Evidence and Evolution: The Logic Behind the Science. Cambridge University Press. Staffel, J. (2012). Can there be reasoning with degrees of belief ? Synthese, pages 1–17. DOI: 10.1007/s11229-012-0209-5. Stalnaker, R. (1987). Inquiry. MIT , Massachusetts. Stanley , J. (2005). Knowledge and Practical Interests. Oxford University Press, Oxford. Steiner, M. (1978). Mathematical explanation. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 34(2):135–151. Sturgeon, S. (2008). Reason and the grain of belief. Noûs, 42(1):139–165. 105 Tentori, K. (2013). What kind of inductive reasoning? how evidence assessment shapes inference. Unpublished manuscript presented at the 2013 Formal Epis- temology Workshop (May , 2013). Tetlock, P . E. and Kim, J. I. (1987). Accountability and judgment processes in a personality prediction task. Journal of personality and social psychology, 52(4):700– 709. Wason, P . C. and Johnson-Laird, P . N. (1972). Psychology of Reasoning: Structure and Content. Batsford psychology series. Batsford. Weatherson, B. (2005). Can we do without pragmatic encroachment? Philosophical Perspectives, 19(1):417–443. Wedgwood, R. (2012). Outright belief. Dialectica, 66(3):309–329. doi: 10.1111/j.1746-8361.2012.01305.x. Williamson, T . (2000). Knowledge and its Limits. Oxford University Press, Oxford. Zalabardo, J. (2009). An argument for the likelihood-ratio measure of confirma- tion. Analysis, 69(4):630–635. 106
Abstract
This dissertation is about the relationship between the coarse-grained attitude of believing that a proposition obtains and the fine-grained attitude of assigning a degree of confidence, or credence, to a proposition. One of its guiding ideas is that theorizing about these world-directed attitudes should take our cognitive limitations seriously. This marks a departure from most contemporary research, which is more often guided and shaped by formal models of ideal epistemic subjects. It is argued that outright belief in a proposition is best thought of as a subspecies of high confidence in that proposition that exhibits plan-like features, including resistance to reconsidering the question. Thinking of belief in this way avoids worries about the palpable differences between mere high confidence and outright belief, and it provides new insight into how we should understand evidential relationships between propositions.
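To make the plan-like picture concrete, here is a minimal toy sketch in Python. It is my own illustration, not a model taken from the dissertation: the class name CredalPlanner, the 0.95 belief threshold, and the likelihood-ratio trigger for reopening a settled question are all illustrative assumptions. The sketch shows a bounded Bayesian agent that settles a question once its credence clears a threshold and then skips routine reconsideration unless incoming evidence is strong.

# Toy sketch (not the dissertation's own model): a bounded agent that
# "plans" on a proposition once its credence is high enough, and then
# skips routine reconsideration unless incoming evidence is strong.
# Threshold and reopening ratio below are illustrative assumptions.

class CredalPlanner:
    def __init__(self, prior, belief_threshold=0.95, reopen_ratio=10.0):
        self.credence = prior             # credence in proposition p
        self.belief_threshold = belief_threshold
        self.reopen_ratio = reopen_ratio  # evidence strength that forces reconsideration
        self.settled = False              # whether p is treated as a closed question

    def believes(self):
        """Outright belief: high credence plus plan-like resistance to reopening."""
        return self.settled and self.credence >= self.belief_threshold

    def update(self, likelihood_if_p, likelihood_if_not_p):
        """Bayesian update on evidence, skipped for settled questions
        unless the evidence is strong enough to reopen them."""
        ratio = likelihood_if_p / likelihood_if_not_p
        if self.settled and max(ratio, 1 / ratio) < self.reopen_ratio:
            return self.credence  # plan in force: weak evidence is not reconsidered
        numerator = self.credence * likelihood_if_p
        denominator = numerator + (1 - self.credence) * likelihood_if_not_p
        self.credence = numerator / denominator
        self.settled = self.credence >= self.belief_threshold
        return self.credence


agent = CredalPlanner(prior=0.5)
agent.update(0.9, 0.02)   # strong evidence for p: credence jumps, question settles
print(agent.credence, agent.believes())
agent.update(0.4, 0.6)    # weak counter-evidence: ignored, belief persists
print(agent.credence, agent.believes())

One design choice worth flagging: in this sketch the difference between mere high confidence and outright belief is carried by the settled flag, which is exactly the plan-like, reconsideration-resisting feature the abstract describes; a pure threshold view would drop that flag and update on every piece of evidence.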
Conceptually similar
Reasoning with degrees of belief
Process-oriented rationality
A perceptual model of evaluative knowledge
Beliefs that wrong
A deontological explanation of accessibilism
Aggregating complaints
Iffy confidence
Rethinking reductive realism in ethics
Hepatitis C in the post-interferon era: selected essays in health economics
Rationality and the primacy of the occurrent
Representation, truth, and the metaphysics of propositions
The case for moral skepticism
Contrastive reasons
Political decision-making in an uncertain world
Assessing the psychological correlates of belief strength: contributing factors and role in behavior
Units of agency in ethics
Essays on fair scheduling, blockchain technology and information design
Ancestral inference and cancer stem cell dynamics in colorectal tumors
Suicide talk as a vital sign: a theory-informed examination of individual and relational factors that influence suicidal disclosure
A review of the effects of juvenile delinquency on entrance into post-secondary institutions of higher education
Asset Metadata
Creator: Dallmann, Justin M. (author)
Core Title: Belief as credal plan
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Philosophy
Publication Date: 08/28/2015
Defense Date: 05/11/2015
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tag: Bayesianism, Belief, bounded rationality, confirmation theory, credence, epistemology, evidence, OAI-PMH Harvest, plans, queuing theory
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Wedgwood, Ralph (committee chair); Arratia, Richard (committee member); Easwaran, Kenny (committee member); Ross, Jacob M. (committee member)
Creator Email: justin.dallmann@usc.edu, justin@jdallmann.org
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c40-172290
Unique Identifier: UC11275421
Identifier: etd-DallmannJu-3858.pdf (filename), usctheses-c40-172290 (legacy record id)
Legacy Identifier: etd-DallmannJu-3858.pdf
Dmrecord: 172290
Document Type: Dissertation
Rights: Dallmann, Justin M.
Type: texts
Source: University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA