1 Doctoral Dissertation Meaningfulness, Rules, and Use-Conditional Semantics Indrek Reiland University of Southern California 2 “#43. For a large class of cases – though not for all – in which we employ the word ‘meaning’, it can be explained thus: the meaning of a word is its use in the language.” Ludwig Wittgenstein, Philosophical Investigations “Now it is not quite appropriate to say that the meaning of a linguistic element is its use. But it is correct to say that what is called the ‘meaning’ of a word or other linguistic element is indicated by the rules for its use according to the prevailing usage.” Erik Stenius, “Mood and Language-Game”, p. 258 3 Table of Contents Preface 4 Chapter 1 Meaningfulness and Rules 7 Chapter 2 Rules, Not Conventions or Dispositions 28 Chapter 3 Use-Conditional Semantics and the Representational Core of Language I 46 Chapter 4 Use-Conditional Semantics and the Representational Core of Language II 61 Chapter 5 Use-Conditional Semantics, Indexicals, and Demonstratives 78 Chapter 6 Use-Conditional Semantics, Mood, and Conditionals 92 Conclusion 127 References 128 4 Preface In the summer of 2010 I attended a summer school in Budapest on semantics and pragmatics. During one of the sessions I was passing notes with a friend when I noticed that I was ending almost every message with an emoticon of some sort: probably either ‘:)’ or ‘;)’. Text messaging and talking to people over the internet had had its effect. Being in a playful mood, I thought of writing up a semantics for emoticons. And having been philosophically brought up in the tradition of philosophy of language going from Gottlob Frege and Bertrand Russell to Saul Kripke and David Kaplan to Scott Soames and Nathan Salmon, I was used to thinking of meanings of expressions in terms of their semantic contents. You know, names get objects, predicates get properties, sentences get structured propositions and so on. The most advanced tools I had in my toolbox were Kaplan’s characters, functions from contexts conceived of as n- tuples of different sorts of parameters to semantic contents, immortalized in his “Demonstratives” (Kaplan 1989a). So, I thought of writing a semantics for emoticons by assigning them characters. The idea was that the character of ‘:)’ could be a function from a context to the proposition that the speaker of the context is happy. Of course, as my friend Ben Lennertz immediately pointed out, this idea was hopeless. If emoticons had characters then one would expect them to compose so that ‘:) & ;)’ would make sense. But they don’t and ‘:) & ;)’ doesn’t make sense. Something was amiss. I only had a hammer and thus had misguidedly tried to use it on a something that wasn’t a (philosophical) nail. In the fall I returned to the USC and after telling him about this, my advisor Mark Schroeder directed me to David Kaplan’s unpublished paper “The Meaning of Ouch and Oops”. Kaplan argued that the meanings of at least some expressions like the interjection ‘Ouch!’ might be best described not by assigning them characters, but rather by describing the rules governing their use. This led me to the idea that the rule governing ‘:)’ could be, roughly, that one may use it iff one is happy. And this got me thinking that perhaps we can think of the meanings of all expressions in terms of rules of use or use-conditions. Working this thought out took most of the following four years and culminates in this dissertation. 
My intellectual and professional debt to Mark who has been working with me on this project since the beginning can’t be overstated. I started working with him as soon as I got to USC, and he’s been a constant source of professional advice and encouragement. He has also 5 influenced this dissertation immensely, both in its methodology and its content. Furthermore, he has continually and patiently taught me how to write in an effective manner. I always used to think that I was a good writer, but he has showed me time and time again how much I really need to improve. Finally, his own work provided a model of how to do philosophy at its best and he has served as an example of how productive a philosopher can be. Although we used to joke with other graduate students that the right question to ask oneself is “What would Mark want me to do?”, sometimes one can’t help but ambitiously asking “What would Mark do?”. In sum, I couldn’t have asked for a better person to work with. Thank you, Mark! My debt to Scott Soames is no smaller. I came to USC not knowing much about philosophy of language. Scott taught me most of what I know now. I was also used to primarily thinking about the big picture and how things hang together. He showed me that you can’t be sure that you know what’s really going on unless you’ve constructively and patiently worked out all the details. It is a constant struggle to heed to his advice to “put more flesh on the bone”, but I’m sure that my work is a lot meatier than it would’ve been without him. Robin Jeshion joined the project a bit later, bringing a very different and fresh perspective to bear on it and forcing me to think about lots of different things I hadn’t thought about. I’ve learned a lot from her and hope to continue to do so in the future. Barry Schein took a genuine interest in my work and was being kind enough to serve as an outside member on my committee. Thank you, Scott, Robin, and Barry! My fellow graduate students at USC have contributed immensely both to my development as a philosopher and to this dissertation. Ben Lennertz and Shyam Nair read almost everything I write, even if it’s not related to their primary interests. Both have a very different philosophical temperament from my own and I have to thank them both for helping me iron out many of the details while encouraging me to continue doing “the big work”. Hopefully we can talk philosophy for years to come! Others who have given me invaluable feedback and made my time here fun and productive include Justin Snedegar, Julia Staffel, Lewis Powell, Sam Shpall, Johannes Schmitt, Marina Folescu, Alida Liberman, Joshua Crabill, Brian Blackwell, John Kwak, Kenny Pearce, Aness Webster, Justin Dallman, Matt Lutz, Matt Babb, and Jen Liderth. I want to also thank Kenny Easwaran, Steve Finlay, Janet Levin, Karen Lewis, Ralph Wedgwood, George Wilson, and Gabriel Uzquiano at USC for reading parts of this dissertation and giving me comments. Steve and Janet were further very important to me as people I could 6 always go to for a pick-me-up chat. In the wider philosophical community I want to thank Alex Davies, Wayne Davis, Peter Hanks, Uriah Kriegel, Kathrin-Glüer Pagin, Francois Recanati, and Jeff Speaks for feedback and encouragement. This has been a long journey. I want to thank my parents and grandparents for making it possible for me to get where I am by inciting intellectual interests in me as early as was possible and continually fostering them. 
I want to also thank them for supporting my decision, apparently made around when I was 7 or 8, to become “a professor” – even if it eventually meant moving halfway across the world to pursue a PhD. A different debt is owed to my teachers at University of Tartu. As a freshman in political science I took a class in social and political philosophy. After the first seminar in which we discussed Hobbes’s Leviathan Margit Sutrop told me that I should become a philosopher. I want to thank her for that nudge. Somehow I was able to graduate with a BASc in Political Science without gathering any empirical data and writing a thesis in political philosophy instead. I have my advisor Alar Kilp to thank for that. I want to also thank Daniel Cohnitz and Paul McLaughlin for teaching me most of what I knew about philosophy before coming to USC during my MA studies and both of them and Alan Baker for writing me recommendation letters that got me here. Finally, I want to thank Bruno Mölder, Jaan Kangilaski, Gea Kangilaski, Riin Sirkel, Toomas Lott, Mats Volberg, and Külli Keerus for numerous useful conversations through the years and making the Estonian philosophical community a lively and fun one to engage with. Last, some more personal debts. Thanks to my friends in Los Angeles who have become a second family to me. Thanks to everyone who has helped keep me sane and surprisingly stress free through graduate school by either rolling around with me on the mat, or strolling around on the dancefloor. Finally, thanks to Pia for the love and support, for making my life amazing at the good times and helping me get through the bad ones. ☺ 7 Chapter 1 Meaningfulness and Rules “#432. Every sign by itself seems dead. What gives it life?” Ludwig Wittgenstein, Philosophical Investigations Introduction Some strings of linguistic symbols are meaningful or have a meaning in a language whereas others are not. For example, the expression ‘Bertrand is British’ has a meaning in English while the mere string of symbols ‘*#&’ does not. Furthermore, some expressions have multiple meanings or are ambiguous and some expressions have the same meaning as others or are synonymous with them. For example, ‘Bertrand went to the bank’ has multiple meanings in English and ‘Bertrand is a doctor’ has the same meaning as ‘Bertrand is a physician’. This raises the following natural question: (Q1, Nature of Meaningfulness) What is it for an expression to be meaningful or have a meaning in a language? 1, 2 Let’s say a bit more about what we have in mind when we’re talking about expression’s meaning in a language. An expression’s meaning in a language is what competent speakers have a grasp of. For example, the meaning of ‘Bertrand is British’ in English is what competent speakers of 1 This question is comparable, but importantly different from the question what is it for a mental state to have content. For a linguistic expression to have a meaning is not the same as for a mental state to have content. First, linguistic expressions have their meanings contingently and in a particular language unlike mental states that are individuated by their contents and thus have them necessarily (a sentence can have several meanings and perhaps even different meanings in different languages whereas a judgment with a different content is a different judgment). 
Second, it’s plausible that some linguistic expressions have meanings, but have “semantic contents” only relative to contexts (‘I’), and some have meanings, but no contents at all (interjections like ‘Ouch!’). 2 What is a language? David Lewis famously proposed that we can think of languages as sets of strings of symbols with their meanings (Lewis 1975). Of course, since intuitively most natural languages don’t cease to exist when one expression changes its meaning, it’s perhaps more natural to think of natural languages as evolving entities that are at each moment fully constituted by such “Lewisian” languages or language-stages, but that could be at the next moment be fully constituted by a different one (What relation has to hold between the different language-stages for them to be stages of that evolving entity? Obviously, there has to be a certain overlap in expressions and their meanings. However, equally important seems the fact that the practice of speaking one of those language-stages is somehow continuous with the practice of playing or speaking another. Compare Williamson 1996: 490, 2000: 239). 8 English have a grasp of. Furthermore, an expression’s meaning is what makes it possible for competent speakers to use that expression to speak that language and perform locutionary speech acts like saying something or telling someone to do something. 3 For example, the meaning of ‘Bertrand is British’ is what makes it possible for competent speakers to use that expression to speak English and say that Bertrand is British. Thus, the above question about the nature of meaningfulness is a question about what it is for expressions to have the features that competent speakers have a grasp of and what makes it possible for competent speakers to use the expression to speak a language and perform locutionary speech acts. My aim in this dissertation is to try to answer this question by developing and defending a promising view that I call the Rules view. In this introductory chapter I will set things up. I’ll start by saying more about the question about the nature of meaningfulness by distinguishing it from other related questions in philosophy of language, explaining how to best approach it, showing why it’s important, and setting out three constraints on acceptable answers (Sections 1- 3). I will then introduce the Rules view, show that it is well placed to be able to meet one of the constraints and specify what needs to be done to show that it can meet the other two (Section 4). Next, I will introduce two alternative views, the Conventions view and Dispositions view, and show that they’re equally as well placed to be able to meet the constraints. Finally, I’ll explain what we need to do to develop and defend the Rules view and provide an overview of the resultant structure of the dissertation (Sections 5-6). 1. Meaningfulness, Descriptive Semantics and Foundational Semantics Let me start by distinguishing the question about the nature of meaningfulness from other related questions in philosophy of language. It is useful to begin by reflecting on a common distinction between the enterprises of descriptive semantics and foundational semantics. I will show that our question is different from the questions that stand in the center of both of these enterprises. 
Here are three quotes from David Lewis, Robert Stalnaker, and David Kaplan, who all draw the aforementioned distinction roughly in the same way (my boldface): 3 I’m relying here on Austin’s distinction between locutionary speech acts like saying something or telling someone to do something vs. illocutionary speech acts like claiming, predicting, requesting, ordering etc. (Austin 1962, for discussion and defense see Forguson 1973, Recanati 1987). 9 I distinguish two topics: first, the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world; and second, the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population. Only confusion comes of mixing these two topics. (Lewis, 1970: 19) First there are questions of what I will call descriptive semantics. A descriptive semantic theory is a theory that says what the semantics for the language is without saying what it is about the practice of using that language that explains why that semantics is the right semantics. A descriptive semantic theory assigns semantic values to the expressions of the language, and explains how the semantic values of the complex expressions are a function of the semantic values of their parts. … Second, there are questions, which I will call questions of foundational semantics, about what the facts are that give expressions their semantic values, or more generally, about what makes it the case that the language spoken by a particular individual or community is a language with a particular descriptive semantics. (Stalnaker 1997: 535). 4 There are several interesting issues concerning what belongs to semantics. The fact that a word or phrase has a certain meaning clearly belongs to semantics. On the other hand, a claim about the basis for ascribing a certain meaning to a word or phrase does not belong to semantics. "Ohsnay" means snow in Pig-Latin. That's a semantic fact about Pig-Latin. The reason why "ohsnay" means snow is not a semantic fact; it is some kind of historical or sociological fact about Pig-Latin. Perhaps, because it relates to how the language is used, it should be categorized as part of the pragmatics of Pig-Latin (though I am not really comfortable with this nomenclature), or perhaps, because it is a fact about semantics, as part of the Metasemantics of Pig-Latin (or perhaps, for those who prefer working from below to working from above, as part of the Foundations of semantics of Pig-Latin). (Kaplan 1989b: 573) Let’s start with Descriptive Semantics. Lewis, Stalnaker, and Kaplan all seem to think of it as the enterprise of developing a framework for describing or formally representing facts about the meanings of expressions of natural languages and how they compositionally interact with each other to generate the meanings of more complex expressions. Accordingly, the two main types of questions important to descriptive semanticists are the general question what is the best way to 4 See also Speaks 2010, Stanley & Szabo 2000: 223-224. 10 describe or formally represent such facts in general and specific questions about what the facts concerning about particular types of expressions are. To answer the general question is thus to come up with what I’ll call a semantic framework, a framework for describing or formally representing semantic facts. 
For example, the people who argue that we should prefer a semantic framework with structured propositions over one that involves sets of truth-supporting circumstances are involved in answering this question. To answer the specific questions is just to say what the semantic facts involving particular types of expressions are. For example, the people who argue for a Russellian theory of definite descriptions over some different theory are involved in answering such a specific question about definite descriptions. To be able to refer back to these questions, let’s number, name, and list them: (Q2, Description of Semantic Facts) What is it the best way to describe or formally represent facts about the meanings of expressions? (Q3, Description of Semantic Facts About Particular Types of Expressions) What are the facts about the meanings of particular types of expressions? It should be evident that these questions are not identical to the question about the nature of meaningfulness. The question about the nature of meaningfulness is a metaphysical question about the essence of meaningfulness, about what it is for something to be meaningful. The former of the above two questions is a methodological question about how to best capture the information we need to capture, whereas the latter is a question about what the information is like in specific cases. Of course, this doesn’t mean that these two sets of questions are unrelated. On the contrary, in the next few sections I’ll show that the reason why the question about the nature of meaningfulness is important is precisely because finding an answer to it is very useful for answering the question about the description of semantic facts (and thereby could in at least some cases also help with the questions about description of the semantic facts involving particular types of expressions). Let’s now move on to Foundational Semantics. Lewis, Stalnaker, and Kaplan all seem to agree that it is the enterprise of articulating the causal, historical, psychological, or social facts 11 that make it the case that a particular expression has the meaning that it does and not another one that it could possibly have had. Perhaps a little confusingly, Lewis asks which facts about a particular population P make it the case that they use a particular language L, understood as a set of symbols with their meanings. A little more clearly, Stalnaker asks what about a particular population P makes it the case that the “language” used by them, L*, understood just as a set of symbols, has the set of meanings that it does. Given the differing notions of language used by Lewis and Stalnaker it is clear that they are asking the same question. Perhaps most clearly and least committally, Kaplan just asks what makes it the case that something has the meaning that it does and not another one that it could possibly have had. Again, it’s clear that he’s asking the same question. To be able to refer back to it, let’s number, name, and list this question as well: (Q4, Grounding of Semantic Facts) What makes it the case that an expression has the meaning that it does and not another one? Stated in this way, it should also be evident that this question is not equivalent to the previous question about the nature of meaningfulness. Again, the question about the nature of meaningfulness is a metaphysical question about the essence of meaningfulness. 
However, this question is a different sort of metaphysical question, one about what sorts of facts ground facts about the actual meanings of expressions.

2. Unity

In the previous section I distinguished the question about the nature of meaningfulness from other related questions in philosophy of language. Let me now explain how our question is best approached and propose a strong constraint on acceptable answers.

For expressions to be meaningful is for them to have the property of being meaningful. And for anything to have any property is for it to have something in common with other things that have that property. Thus, for expressions to be meaningful is for each of them to have something in common with other expressions that are meaningful. This is the thing in virtue of the possession of which they count as being meaningful. Let’s call this common feature X. However, for different expressions to have different meanings is for each of them to have something that it doesn’t have in common with other expressions. This is the thing in virtue of the possession of which they count as having different meanings. Let’s call this differing feature Y. Based on this we can see that the question about the nature of meaningfulness is best approached by breaking it down into the question what the nature of the common element X is, and the question what the nature of the differing elements, the Y’s, is.

Different views about meaningfulness can be thought of as giving different answers to the question about the nature of X and the nature of the Y’s. For a really simple example, take the toy view that for any expression to have a meaning is just for it to have truth-conditions. 5 On this view what it is for ‘Bertrand is British’ and ‘Gottlob is German’ to have a meaning can be captured with the following statements:

• ‘Bertrand is British’ is true iff Bertrand is British
• ‘Gottlob is German’ is true iff Gottlob is German

And we can see that the common element X is embodied in the schema ‘‘_’ is true iff _’, whereas the differing element Y is a different truth-condition.

5 I don’t think anybody has ever actually held this toy view. Rather, people like Davidson thought that one can describe the meanings of certain expressions satisfactorily by stating their truth-conditions (Davidson 1967).

This way of approaching the question also enables us to articulate a strong constraint on acceptable answers which I will call the Unity Constraint. Namely, a view of meaningfulness has to tell us what the nature of the common element X is and what the nature of the differing elements, the Y’s, is, in a way that is adequate for all the different types of expressions of natural language. In other words, it has to find an X which is the same not only in the case of names, predicates, and declarative sentences, but also in the case of indexicals and demonstratives, interrogative sentences, imperative sentences, and interjections and other similar phenomena:

‘Bertrand’              X    Y1
‘is British’            X    Y2
‘Bertrand is British’   X    Y3
‘I’                     X    Y4
‘this’                  X    Y5
‘What time is it?’      X    Y7
‘Read!’                 X    Y8
‘Ouch!’                 X    Y9

To repeat, a view of meaningfulness has to find an X which is the same in the case of all the different kinds of words, phrases, and sentences in natural language. If a view fails to do this then it makes meaningfulness a disjunctive property. And surely, such a view is unacceptable or at least in need of very serious justification. It is comparable to a view on which being red, being blue etc.
are not ways of being colored, but rather on which to be colored just is to be red or blue etc., where there is no underlying unity to what it is to be red and what it is to be blue. Surely, we’d need very strong reasons to take such a disjunctive view of being colored seriously. The same goes for a disjunctive view of meaningfulness.

Let me conclude by showing how the Unity Constraint works. For example, consider for a moment the aforementioned toy view on which to have a meaning is just to have truth-conditions. On this view the common element X is embodied in the schema ‘‘_’ is true iff _’. It’s clear that this can’t be the common element in the case of sub-sentential expressions like the name ‘Bertrand’ and the predicate ‘is British’ because they don’t have truth-conditions. And this means that this toy view fails to satisfy the Unity Constraint. Now, one could try to supplement the toy view with the view that what it is for sub-sentential expressions like ‘Bertrand’ and predicates like ‘is British’ to have meaning is for them to contribute something to the truth-conditions of the sentences in which they occur. 6 However, even this supplemented toy view doesn’t give us a common element that is adequate for indexicals like ‘I’, demonstratives like ‘this’, interrogative and imperative sentences, and interjections like ‘Ouch!’. This means that this supplemented toy view also fails to satisfy the Unity Constraint. Of course, there are perhaps ways of supplementing it further, but this should be enough for us to be able to see how the Unity Constraint works.

6 It seems to me that taken as views about the nature of meaningfulness such views are problematic. Although we can describe the meanings of subsentential expressions in terms of what they contribute to the meanings of sentences in which they occur, surely they have meanings of their own, some properties in virtue of which they do the contributing. And it is the nature of these properties that we’re interested in when we’re asking about the nature of meaningfulness.

3. Conservativeness and Explanation

In the previous section I explained how the question about the nature of meaningfulness is best approached and proposed a strong constraint on acceptable answers. Let me now explain why I think it is one of the most important questions for philosophers of language to answer, which leads to two further constraints on acceptable answers.

Here’s a simple argument as to why the question about the nature of meaningfulness is an important question. Knowing what it is for an expression to have a meaning would straightforwardly lead to an answer to the question many philosophers of language care a lot about, namely, how to best describe facts about the meanings of expressions (Q2). For example, if for an expression to have a meaning is for it to have truth-conditions, then that makes it clear that we can best describe facts about the meanings of expressions by stating their truth-conditions. However, if for an expression to have a meaning is for it to stand in some relation to structured propositions, then that makes it clear that we can best describe facts about the meanings of expressions by describing the relation and the propositions, by saying what their constituents are and what their structure is like. Therefore, the question what it is for an expression to have a meaning is an important question.

Here’s how to make the same point more forcefully. Take any answer to the question how to best describe facts about the meanings of expressions (Q2).
It seems that we can’t really be sure whether it works and whether the relevant descriptions provide us with any information about the meanings of expressions at all, unless we know how they’re related to an answer to the question what it is for an expression to be meaningful. More mildly, even if it is objected that we can be relatively confident that the descriptions provide us with some information, it is still true that unless we know the above we don’t know why they do this.

In order to see this consider the currently dominant Kaplanian semantic framework articulated by David Kaplan in his classic “Demonstratives” (Kaplan 1989a). On this framework the meanings of expressions are formally represented by characters, functions from contexts thought of as n-tuples of parameters to things called “semantic contents”. 7 Alternatively, one can also just talk about the expression semantically expressing its semantic content relative to a context. 8 For example, take the declarative sentences ‘Bertrand is British’, ‘I am a philosopher’, ‘Gottlob is here’, and ‘Ludwig is leaving now’. On the standard Kaplanian view their meanings can be formally represented by characters which take as their inputs contexts consisting of at least a possible speaker of the context c_a, a location of the context c_l, a time of the context c_t, and the world-state of the context c_w, and yield as their outputs the propositions that Bertrand is British, that c_a is a philosopher, that Gottlob is at c_l, and that Ludwig is leaving at c_t (Kaplan 1989a). 9 Using [[ ]] for an expression’s character and <,> for a context we can represent this formally as follows:

(i) [[‘Bertrand is British’]] <c_a, c_l, c_t, c_w> = <Bertrand, being British>
(ii) [[‘I am a philosopher’]] <c_a, c_l, c_t, c_w> = <c_a, being a philosopher>
(iii) [[‘Gottlob is here’]] <c_a, c_l, c_t, c_w> = <Gottlob, being at c_l>
(iv) [[‘Ludwig is leaving now’]] <c_a, c_l, c_t, c_w> = <Ludwig, leaving at c_t>

7 It needs emphasizing that Kaplan himself used ‘linguistic meaning’ and ‘character’ synonymously and never claimed that linguistic meanings are functions from contexts to “semantic contents”, but only that they can be formally represented by them. He thus didn’t use ‘character’ to talk about the functions. This should be evident from the quote below:

Let us call the second kind of meaning, character. The character of an expression is set by linguistic conventions and, in turn, determines the content of the expression in every context. Because character is what is set by linguistic conventions, it is natural to think of it as meaning in the sense of what is known by the competent language user. Just as it was convenient to represent contents by functions from possible circumstances to extensions (Carnap’s intensions), so it is convenient to represent characters by functions from possible contexts to contents. (As before we have the drawback that equivalent characters are identified.) (Kaplan 1989a: 505)

In not using ‘linguistic meaning’ and ‘character’ synonymously and using ‘character’ to talk about the functions, I’m following the relatively standard post-Kaplanian usage.
8 I am assuming that all previous frameworks, for example, Davidsonian frameworks that associate sentences with their truth-in-L conditions and Circumstantialist frameworks which associate sentences with their truth-in-L-at-Cr conditions, where Cr is some set of circumstances like possible world-states or situations, can be understood in terms of the Kaplanian one and are subsumed by it. For example, to say that ‘Bertrand is British’ is true-in-L iff Bertrand is British is just to say that ‘Bertrand is British’ semantically expresses the proposition that Bertrand is British relative to all contexts and that that proposition is true simpliciter iff Bertrand is British. Or, to say that ‘Bertrand is British’ is true-in-L-at-w iff Bertrand is British at w is just to say that ‘Bertrand is British’ semantically expresses the proposition that Bertrand is British relative to all contexts and that that proposition is true at w iff Bertrand is British at w.

9 I’ll henceforth assume that propositions are structured, consist of objects and properties, and can be represented by formal structures like <Bertrand, being British>. Nothing substantial hangs on this in the sense that if some other view of propositions turns out to be correct, then it can just be substituted in. For defense and discussion of views consistent with these assumptions see Hanks 2011, MS; King 2007, 2009; Soames 2010, 2012, 2014, MS.

To fully understand these formal representations, we also need to know what it is for something to be the speaker of the context, the location of the context, the time of the context, and the world-state of the context. On the standard Kaplanian view the speaker of the context is the user of the expression, the location of the context is a location that contains wherever the speaker of the context is, the time of the context is a stretch of time that contains the stretch over which the speaker of the context uses the sentence, and the world-state of the context is the world-state the speaker is in.

Now, the formal features of the Kaplanian semantic framework should be clear enough. However, how can we be sure that descriptions in terms of characters provide us with any information about the meanings of expressions at all unless we know how they’re related to an answer to the question what it is for something to be meaningful? After all, characters are just functions that take n-tuples as inputs and yield things called “semantic contents” as their outputs. And even if it is objected that we can be relatively confident that descriptions in terms of characters do provide us with some information about the meanings of at least some types of expressions, without an interpretation of what that function represents, what a context is, etc. that relates these things to some answer about the nature of meaningfulness, we don’t know how they do it. Thus, there’s a sort of mystery at the heart of descriptive semantics. We use a semantic framework the tools of which seem to provide us with some information about the meanings of at least some types of expressions. However, without an interpretation of them, we can’t really be sure whether this is in fact so and why exactly it is so.
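To keep this formal machinery in view, here is a minimal illustrative sketch, written in Python rather than in Kaplan’s own notation, of the idea that a character can be modeled as a function from a context tuple <c_a, c_l, c_t, c_w> to a structured content. Everything in it, from the name Context to the particular characters defined, is my own stipulation for purposes of illustration, not part of the Kaplanian framework itself.

from collections import namedtuple

# A context modeled as the tuple <c_a, c_l, c_t, c_w>: agent, location, time, world-state.
Context = namedtuple("Context", ["agent", "location", "time", "world"])

# Structured contents modeled, very crudely, as tuples of objects and properties.
def character_bertrand_is_british(c):
    # A context-insensitive sentence: the same content relative to every context.
    return ("Bertrand", "being British")

def character_i_am_a_philosopher(c):
    # An indexical sentence: the content varies with the agent of the context.
    return (c.agent, "being a philosopher")

def character_gottlob_is_here(c):
    # 'here' picks up the location of the context.
    return ("Gottlob", ("being at", c.location))

# Evaluating the characters at a sample context.
c1 = Context(agent="Ludwig", location="Vienna", time="t1", world="w1")
print(character_i_am_a_philosopher(c1))    # ('Ludwig', 'being a philosopher')
print(character_bertrand_is_british(c1))   # ('Bertrand', 'being British')

The sketch also makes the worry just raised vivid: the functions by themselves are merely mappings from tuples to further tuples, and nothing in them says what it is for an expression to have such a function as its meaning.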
On the one hand, the above argument shows forcefully that the question about the nature of meaningfulness is an important question. If we could find an answer to it and see how descriptions in terms of characters are related to it, we could gain some certainty that they do provide us with information about the meanings of at least some types of expressions, and we would also be able to explain why they do. On the other hand, the fact that descriptions in terms of characters seem to provide us with information about the meanings of expressions also sets two constraints on any acceptable answer to our question about the nature of meaningfulness. First, any view of the nature of meaningfulness must be consistent with well-established frameworks in descriptive semantics. More precisely, it has to be consistent with us being able to formally represent the meanings of at least some types of expressions with characters. Let’s call this the
11 Nevertheless, nobody has really worked out this idea out to any length and it hasn’t been clear what exactly meaningfulness is supposed to have to do with rules. I think that the idea that meaningfulness is a matter of being governed by a rule of use is best viewed as an embryonic answer to our question about the nature of meaningfulness. To understand how meaningfulness could be a matter of being governed by a rule of use we need to focus on the fact that games like chess and languages like English are similar in the following respect. Intuitively, both are constituted by a set of intrinsically inert symbols that have somehow acquired some “significance”. For example, chess is at least partly constituted by the inert symbols – the pieces of the game – that have somehow acquired what we could call their “roles”. Similarly, English is at least partly constituted by inert symbols – the expressions of the language – that have somehow acquired meanings. However, now notice that in the case of games like chess it’s commonplace to think that they wouldn’t exist and playing them wouldn’t be possible if their rules weren’t in place. Thus, it is commonly said that they are at least partly constituted by their rules. 12 And the way the rules are thought to constitute the games is by giving the inert symbols, the pieces, their roles. The basic thought behind our idea as an answer to the question about the nature of meaningfulness is that, similarly to games like chess, languages like English wouldn’t exist and speaking them wouldn’t be possible if their rules weren’t in place. It can 11 See also William Alston’s, David Kaplan’s and Mark Schroeder’s discussions of it in Alston 1999, Kaplan MS, and Schroeder 2008a, 2008b. 12 It’s plausible that rules governing the movings of chess pieces don’t fully constitute the game of chess, but just chess (Bierman 1972, Marmor 2009: 40, Schwyzer 1969). This is because the same rules could also constitute other practices involving chess. For example, consider the imaginary rite of chess which is performed not with the aim of checkmating your opponent to win, but rather to determine the will of gods. Since this is not important for our purposes, I will mostly disregard this complication. For further discussion see Marmor 2009. 19 therefore be said that they are also at least partly constituted by their rules. And the way the rules can be thought to do this is by giving the inert symbols, the expressions, their meanings. How do rules give pieces their roles and expressions their meanings? Well, rules of chess tell us when it is permissible to move pieces: they specify the conditions in which it is permissible, according to those rules, for a player to move pieces. Similarly, rules of use can be taken to tell us when it is permissible to use expressions: they specify the conditions in which it is permissible, according to those rules, for a speaker to use the expressions. This means that we can take the rules of games like chess and rules of languages like English as having something like the following form (where ‘s’ ranges over speakers and players, ‘e’ over pieces and expressions, and ‘Y’ over conditions): ∀s (e is permissibly movable/usable by s iff Y) ∀s (s may move/use e iff Y) Thus, what the rules give pieces and expressions are their moving- and use-conditions. And on this view, roles and meanings can be identified simply with the moving- and use-conditions. 13 Let’s call this the Rules view. 
14 13 There are at least two different implicitly accepted models of thinking about meaningfulness or having meaning, models which are not clearly distinguished and kept apart and which bear on the question of what meanings are. On one of them having meaning is thought of as comparable to having a child, on another it’s thought of as comparable to having color: Object Model: For one to have a child is for one to stand in the being a parent of relation to something else which is the child. Property Model: For something to have a color is for it to have a property, the property of being colored, which is the color. In one case what you have is a separate entity and you have that thing by standing in a certain relation to it or perhaps in some other way. In the other what something has is a property, its property. It seems to me that most philosophers of language have thought of having meaning on the Object Model. However, sometimes people have also thought of it on the Property Model. Here are two quotes by Jason Stanley, where he seems to vacillate back and forth between these two models: I take it for granted that the standing linguistic meanings of non-context sensitive words are often objects, properties, or events in the world. For example, I take it for granted that the standing linguistic meaning of ‘‘is taller than six feet’’ is the property of being taller than six feet, which some people have and others lack. (Stanley 2007a: 2) Since referring to an object is not a property of a linguistic expression, and linguistic meanings are properties of linguistic expressions, ordinary language philosophers sought an alternative account of 20 In order to see why the identification of meanings with use-conditions is plausible let’s try to find an example of what a rule governing the use of a particular expression might look like. In doing this I propose we start with the simplest cases. These might not be the cases most people are interested in or have talked about. Most people in contemporary philosophy of language have been interested in the representational side of language and therefore the paradigmatic cases people have been interested in have been atomic declarative sentences like ‘Bertrand is British’ and ‘Gottlob is German’ (and even then the question whether the declarative mood makes a special contribution as compared to the other moods has been largely neglected). However, atomic declarative sentences like ‘Bertrand is British’ aren’t really all that simple because they’re composed out of other meaningful expressions and they have a mood. Rather, following Paul Grice’s lead, the simplest cases seem to be interjections or interjective sentences like ‘Ouch!’ (Grice 1989b: 124). They are meaningful, but aren’t composed out of other meaningful expressions (for discussion see Recanati 1987: 241). And, unlike names and other sub-sentential expressions, they’re paradigmatically used to perform speech acts on their own which means that we can easily get intuitive evidence about what they’re semantically for – what linguistic meaning. According to it, the linguistic meaning of an expression is a rule for its proper use. (Stanley 2007a: 3) It is clear from the first quote that Stanley thinks of having a meaning on the Object Model. If the meaning of ‘is taller than six feet’ is the property of being taller than six feet then it isn’t a property of the expression. Rather, it’s something the expression stands in a certain relation to. 
However, in the second quote Stanley explicitly adopts the Property Model and says that meanings are properties of expressions. The Property Model is also explicitly embraced by John Perry, by inferentialists like Robert Brandom who treat meanings as relational properties of standing in inferential relations to other expressions and by Chomskyans like Paul Pietroski who at least seem to treat them as monadic, non-relational properties of expressions (Brandom 1994, 2000; Perry 1997, Pietroski 2003, 2005, 2006). For our purposes it doesn’t matter which model one adopts because it only bears on the question what meanings are and we can here identify them with use-conditions or properties of having use-conditions. 14 In the case of games like chess it’s entirely possible that its rules tell us that performing some action A is permissible in conditions Y whereas performing A in Y is intuitively impermissible all things considered. For example, consider a game with the aim of killing the most people and with the rule that one can shoot a person if he crosses an intersection from left to right and is male or crosses from right to left and is female. Although the rules say that shooting people in these conditions is permissible, surely shooting people on these conditions is impermissible all things considered. Similar situations arise with rules of language. For example, a rule of use of a language could tell us that some expression is permissibly usable in Y whereas given considerations of etiquette, morality etc. its use is impermissible in any conditions. How to account for this? I see two options. First, one could maintain that the rules invoke permissibility in an unrestricted sense, and that the verdicts delivered by the rules about particular cases are therefore true or false depending on permissibility all things considered. There is nothing especially puzzling about this because such cases will arise as soon as we think that there are any conflicting rules (Johnston 2013). Second, one could claim that the rules invoke permissibility in a restricted sense. For example, the rules of chess could tell us that moving a piece is permissible in some chess-related sense and rules of use could tell us that using a sentence is permissible in a semantic sense. However, then one must explain what this sense is. Since nothing here hangs on how this is to be resolved, I will leave discussion of this for a further occasion. 21 their meanings enable us to do with them such that we couldn’t do those things if they weren’t meaningful. 15 Thus, I propose we start with them. Let’s look then at interjections like ‘Ouch!’. Suppose, as has been suggested, that interjections like ‘Ouch!’ are semantically for expressing mental events like pain, somehow indicating or providing evidence of the fact that one is undergoing them without saying that one is (Kaplan MS). Then, on this view, we should take interjections like ‘Ouch!’ to be permissibly usable just in case the speaker is undergoing those mental events. Thus, on this view, something like the following rule can be taken to be the rule governing the use of ‘Ouch!’ in English: (1) ∀s (‘Ouch!’ is permissibly usable by s iff s is in pain) Now we can see why the identification of meanings with properties of having use-conditions or use-conditions is plausible. Suppose that (1) is the rule governing the use of ‘Ouch!’ in English. 
Then, if you grasp this rule, for example, either by explicitly knowing it or just knowing how to act in accordance with it, and if you know that your audience grasps it, then you can express your pain to that audience by using ‘Ouch!’ with its meaning. This is because if you do this your audience can assume that you’re in pain and thereby get the required information. This shows how grasp of such rules can do what grasp of meaning intuitively does: facilitate communication. And this makes the identification of meanings with use-conditions plausible. Now, the Rules view is a view about the nature of meaningfulness. However, it can also help us answer certain other questions in philosophy of language. For example, take the question how to best describe facts about the meanings of expressions (Q2). The Rules view entails a straightforward answer to this question: if for an expression to be meaningful is for it to be governed by a rule of use and meanings are use-conditions then the simplest and most direct way to describe or formally represent facts about the meanings of expressions is just to state their use- conditions. That is, the simplest and most intimate way to describe the meanings of expressions is to give a use-conditional semantics for these expressions. 16 15 Notice that I’m talking about interjections like ‘Ouch!’ which are a type of sentence, and not about expressive words like ‘ouch’ which could perhaps also used as a part of larger constructions. For example, consider ‘damn’ which occurs in an interjection in ‘Damn!’, but can also occur as part of phrases like in ‘that damn philosopher’. 16 As I’m using these terms, the Rules view is a view about the nature of meaningfulness whereas a use-conditional semantics for a language is, like any semantics, a description of the meanings of its expressions, in this case, by 22 Consider now the question what makes it the case that an expression has the meaning that it does and not another one (Q4). The Rules view helps us further with this question as well: if for an expression to be meaningful is for there to be a rule governing its use, then the question what makes it the case that it has the meaning that it does amounts to the question what makes it the case that there’s a particular rule governing it, rather than another one. However, this is just an instance of the more general question what makes it the case that some particular rule exists rather than another one? And this more general question isn’t special to language, but one that everybody who thinks that there are rules has to face. Finally, consider the question what it is for a speaker to understand an expression or be semantically competent with it. To be able to refer back to this question let’s number, name, and list it as well: (Q5, Nature of Semantic Competence) What is it for a speaker to understand an expression or be semantically competent with it? In other words, what is it for a speaker to grasp its meaning or to know how to use it in accordance with its meaning? Again, the Rules view gives us a straightforward answer to this question as well: if for an expression to be meaningful is for there to be a rule governing its use, then for a speaker to understand that expression or be semantically competent with it is for him to grasp the rule or to know how to act in accordance with it. Of course, this leads to the further question what is it for someone to grasp a rule or know how to act in accordance with it. 
But again, this is a more general question that isn’t special to language. We should now have a basic grip on the Rules view. Let me conclude by showing that it is well placed to be able to meet one of our three constraints and specify what needs to be done to show that it can meet the other two. Let’s start with the Unity Constraint. Recall that to meet this constraint a view has to find an X which is the same not only in the case of names, predicates, and declarative sentences, but stating their use-conditions. It is important to understand that the Rules view is by itself consistent with lots of different type of semantic theories as long as we understand the descriptions they give as giving us some information about use-conditions. We’ll come back to this in Ch. 3. 23 also in the case of indexicals and demonstratives, interrogative sentences, imperative sentences, and interjections and other similar recalcitrant phenomena. Recall also why the toy view on which to have a meaning is to have truth-conditions and even its supplemented versions fail the Unity Constraint. This is because not all expressions have or contribute something to truth- conditions. In contrast, consider the Rules view. On this view the common element X is embodied in the schema‘‘_’ is permissibly usable by s iff _’. And there’s no reason why this couldn’t be the common element in the case of sub-sentential expressions, indexicals and demonstratives, interrogative and imperative sentences, and interjections. This is because all of these expressions could have use-conditions. Thus, the Rules view is well placed to be able to meet the Unity Constraint. 17 Let’s now look briefly also at the Conservativeness and Explanatory Constraints. Recall that to meet these constraints a view must be consistent with us being able to formally represent the meanings of at least some types of expressions with characters and yield an explanation of why we can do this and what happens when we do so. Thus, in order to show that the Rules view satisfies the first constraint we need to make sure that all the different proposed descriptions that can be stated in terms of characters can also be stated in terms of use-conditions. And in order to show that the Rules view satisfies the second constraint we need to make sure that we are able to explain what we’re doing when we give descriptions in terms of characters by appealing to use- conditions. 5. The Conventions View and the Dispositions View I’ve now introduced the essentials of a promising answer – the Rules view – and argued that it is well placed to be able to meet one of the constraints. Let me now introduce two other views which are substantively different from the Rules view, but promise to meet the constraints in the same way. In the next section I’ll then explain what we need to do in the light of this. On the Rules view expressions have a meaning primarily in a public language thought of as an abstract object not tied to any particular community. And as we have seen, what it is for an 17 In fact, it looks like a lot of the people who have been attracted to the Rules view have been attracted to it because of this. For example, Strawson, Kaplan, Perry, and Soames were all worried that the standard ideas about meaningfulness were falsified by indexicals and demonstratives and suggested the Rules view as a replacement. Similarly, Stenius was worried about the fact that standard ideas seem to be falsified by sentences in different moods. 
24 expression to have a meaning in such a language is for it to be governed by a rule that entails that it is permissible to use it in certain conditions (e.g. its permissible use-conditions). However, the Rules view is not without alternatives. As a first alternative, consider the Conventions view which should be familiar from David Lewis’s Convention and “Languages and Language” and has been later defended by Jonathan Bennett, Stephen Schiffer, and Wayne Davis (Bennett 1976, Davis 2003, Lewis 1969, 1975, Schiffer 1972). According to it expressions have a meaning primarily not in a public language, but in a particular community’s language or a communal language. And what it is for an expression to have a meaning in a community’s language is for there to be a conventional regularity in the community to use it in certain conditions (e.g. its conventional use-conditions). Such communal languages can then be used to do the work of languages in the ordinary sense, languages like English or German. As a second alternative, consider the Dispositions view which should be familiar from Saul Kripke’s Wittgenstein on Rules and Private Language and Donald Davidson’s later work (Davidson 1984, 1986, Kripke 1982). 18 According to it expressions have a meaning in the first place not in a public or communal language, but in a speaker’s idiolect. What it is for an expression to have a meaning in an idiolect is for the speaker to be disposed to use it in certain conditions (e.g. its dispositional use-conditions). Furthermore, if two speakers are disposed to use an expression in the same conditions and have a grip on each other’s dispositions then they can be said to have a shared language. Such shared languages can then be used to do the work of languages in the ordinary sense. At a certain level of generality these three views have a lot more in common than might be thought. Namely, they all take meaningfulness to consist in having some sort of use- conditions. In fact, they could even agree on what the use-conditions consist in and on a descriptive semantics for a particular language: an assignment of use-conditions to its expressions. What they disagree about is two things. First, whether for an expression to have a meaning in a language is for it to have a meaning primarily in a public language, communal language, or an idiolect. Second, whether use-conditions are conditions in which it is permissible 18 What I call the Dispositions view is a view about what it is for linguistic expressions to have meaning and not as a view about what it is for mental states to have content (for discussion of dispositionalist views of mental content see Boghossian 1989, Greenberg 2005). For discussion of Davidson’s later work see Lepore & Ludwig 2005: Ch. 17, Lepore & Ludwig 2007. 25 to use expressions, conditions in which they are regularly and conventionally used, or conditions in which one is disposed to use them. The commonality in terms of use-conditions means that like the Rules view, the Conventions and Dispositions view are well placed to be able to meet the Unity Constraint in the exact same way. Furthermore, they need to do the exact same things in order to meet the Conservativeness and Explanatory constraints. This means that it looks like we can’t prefer the Rules view based on the fact that it promises to meet the three constraints. And this leaves us with the question whether there are any good reasons to prefer the Rules view over the two alternatives. 6. 
What Is To Be Done My aim in this dissertation is to try to answer the question about the nature of meaningfulness by developing and defending the Rules view. The first thing we need to do is to see whether there are any good reasons to prefer the Rules view over the two alternatives that I introduced in the last section. This will be the task in the next chapter. My strategy will be to focus on the connection between an expression’s meaningfulness in a language and using it with its meaning in that language. I will first argue that uses in the sense of tokenings of expressions divide into mere uses and uses with meaning, where only the latter result in speaking a particular language and performance of locutionary speech acts. I will then develop the Rules view further, showing that it can give a plausible account of what it is to use an expression with its meaning, while arguing that neither the Conventions nor Dispositions view can give such an account. This gives us a conclusive reason to prefer the Rules view. (Ch. 2) The second thing we need to do in order to develop and defend the Rules view is to show that not only is it well placed to meet our three constraints, but that it can actually do so. In order to do this we need to develop a use-conditional semantics for natural language by assigning all the different types of meaningful expressions a plausible use-condition. This will be the task in the rest of the chapters. I’ll first develop a use-conditional semantics for the representational core of natural language and showing that the view also satisfies the other constraints in its case (Ch. 3-4). This shows that the use-conditional framework is conservative in being consistent with standard frameworks and enabling us to do what they do. 26 I’ll then continue by developing a use-conditional semantics for indexicals and demonstratives. I’ll also argue there for the auxiliary point that the use-conditional framework is not only conservative, but also more flexible than standard frameworks in enabling us to do more than they do – in this case by being able to capture an emerging and promising theory of demonstratives that can adequately account for their “discretionarity”, but can’t be stated in terms of characters. (Ch. 5) Finally, I’ll develop a use-conditional semantics for mood, discussing declarative, interrogative, and imperative sentences and conditionalizing devices. I’ll also argue here again for the auxiliary point that the use-conditional framework is more flexible than standard frameworks in enabling us to do more than they do – in this case by being able to represent two radically different theories of the semantics of mood in the same framework and to state an interesting and plausible theory of ‘if’ in clear and concrete terms. (Ch. 6) 27 Appendix: Five Questions in Philosophy of Language (Q1, Nature of Meaningfulness) What is it for an expression to be meaningful or have a meaning in a language? (Q2, Description of Semantic Facts) What is the best way to describe or formally represent facts about the meanings of expressions? (Q3, Description of Semantic Facts About Particular Types of Expressions) What are the facts about the meanings of particular types of expressions? (Q4, Grounding of Semantic Facts) What makes it the case that some expression has the meaning that it does in a language and not another one? (Q5, Nature of Semantic Competence) What is it for a speaker to understand an expression or be semantically competent with it? 
In other words, what is it for a speaker to grasp its meaning or to know how to use it in accordance with its meaning? 28 Chapter 2 Rules, Not Conventions or Dispositions Introduction My aim in this dissertation is to try to answer the question about the nature of meaningfulness by developing and defending the Rules view. As I argued in the previous chapter, the first thing we need to do in order to develop and defend the Rules view is to see whether there are any good reasons to prefer it over the Dispositions and Conventions view. This is the task in this chapter. My strategy will be to focus on the connection between an expression’s meaningfulness in a language and using it with its meaning in that language. I will first argue that uses in the sense of tokenings of expressions divide into mere uses and uses with meaning, where only the latter result in speaking a particular language and performance of locutionary speech acts (Section 1). And I will then argue that whereas the Rules view can give a satisfying account of use with meaning, neither the Conventions nor the Dispositions view has enough resources to provide anything of the sort (Sections 2-5). This gives us a conclusive reason to prefer it over the others. 1. Use With Meaning My strategy in seeing whether we have any reasons to prefer one of the three views of meaningfulness over others will be to focus on the connection between an expression’s meaningfulness in a language and using it with its meaning in that language. I’ll therefore argue next that uses in the sense of tokenings of expressions divide into mere uses and uses with meaning and set out initial constraints on an intuitive conception of what it is to use an expression with its meaning in a language. 19 19 Our distinction between mere uses and uses with meaning is connected to J. L. Austin’s distinction between phonetic, phatic, and rhetic acts. In Austin’s terminology, a phonetic act is an act of making certain noises (or scribbling down marks) whereas a phatic act is an act of making noises where those noises belong to a language with a grammar and where the noises are made as belonging to the language. A parrot can perform phonemic, but not phatic acts. Furthermore, a rhetic act is an act of making certain noises where those noises belong to a language and where the noises are made as belonging to the language and with their meaning and while fixing their reference. As we will see, a semantically incompetent speaker can perform phonetic, but not rhetic acts. (Austin 1962: 92-98, for discussion see Forguson 1973, Recanati 1987: 240-241). Mere uses result in either phonetic or phatic acts, whereas uses with meaning normally result in rhetic acts. 29 Let’s start by looking at uses by incompetent speakers. Consider a speaker who doesn’t know how to speak Estonian and uses one of its sentences. You can try it yourself by uttering or inscribing a sentence of Estonian, say ‘Lumi on valge’. It should be clear that even though you’ve used the sentence, you haven’t spoken any Estonian or said anything (compare Austin 1962: 97). In our terminology, you’ve merely used the sentence, but you haven’t used it with its meaning. However, when a semantically competent speaker utters the same sentence using it with its meaning she will have spoken Estonian and will have said that snow is white. Importantly, the above cases involving uses by incompetent speakers are not the only cases that show that we need to distinguish between mere uses and uses with meaning. 
Semantic competence doesn’t guarantee that one’s use will be a use with meaning. Even competent speakers can merely use expressions. Consider, for example, a competent speaker who utters some meaningful expression with the purpose of hearing the sound of her voice to check her vocal chords, test a microphone, soothe a baby, practice pronunciation etc. You can try it yourself by repeating a sentence of English mindlessly ten times like meditators sometimes do. Or consider a competent speaker who inscribes a meaningful sentence with the purpose of practicing one’s handwriting, doing calligraphy, or practicing typing speed. Again, you can try it yourself. It should be clear that even though in both cases you’ve used a sentence of English, you haven’t spoken any English or said anything. Rather, you’ve merely used the sentence. Again, the above cases of form-based or phonetic/orthographic uses are not the only cases where competent speakers merely use expressions. Even a communicative setting doesn’t guarantee that one’s use is wholly a use with meaning. Thus, consider a standard use of the English quote name ‘‘Bertrand is British’’. It’s clear that such a standard use results in tokening and thus using both the quote name ‘‘Bertrand is British’’ and the quoted sentence ‘Bertrand is British’. First, the speaker uses the quote name with its meaning which results in her mentioning the quoted sentence. However, one isn’t at the same using the quoted sentence with its meaning because the use doesn’t result in speaking English or saying that Bertrand is British. Rather, one is merely using it. We can also see the need to distinguish between mere uses and uses with meaning by reflecting on the fact that different languages share some of their expressions, although these expressions have different meanings. For example, in Estonian ‘Koristame ruumit!’ means ‘Let’s clean the rooms!’ whereas in Finnish it means ‘Let’s decorate the corpses!’. When a bilingual 30 Estonian-Finnish speaker uses such a sentence in a communicative setting she doesn’t normally speak both languages at once and doesn’t tell someone to do two things. Rather, she uses it with just one of its meanings and speaks only Estonian or Finnish. The same point can be made by reflecting on ambiguity. Suppose, as is natural, that languages contain ambiguous expressions: expressions which have more than one meaning. Consider then an ambiguous sentence like ‘Bertrand went to the bank’ which means ‘Bertrand went to the financial bank’ and ‘Bertrand went to the riverbank’. When a competent English speaker normally uses such a sentence in a communicative setting she doesn’t say two things. Rather, she uses the sentence with just one of its meanings and says a single thing. We’ve now seen five different cases which demonstrate the need to distinguish between mere uses and uses with meaning. Here they are, again: A. Uses by Incompetent Speakers B. Phonological/Orthographic Uses C. Quotation D. Languages Sharing Expressions E. Ambiguity However, these cases don’t only show that there’s an obvious need to distinguish between mere uses and uses with meaning. They also help us to articulate initial constraints on an intuitive conception of what it is to use an expression with its meaning in a language. For starters, Uses by Incompetent Speakers show that semantic competence is necessary for use with meaning. 
This gives us our first constraint: 1) Competence: To use an expression e with its meaning m in a language L, a speaker must be semantically competent with e in L. Next, Phonological/Orthographical uses show that semantic competence is not sufficient. Rather, to use an expression with its meaning even a competent speaker has to voluntarily do something. This gives us our second constraint: 31 2) Voluntariness: To use an expression e with its meaning m in a language L, a speaker must voluntarily do something. Finally, Languages Sharing Expressions and Ambiguity show that what the speaker has to voluntarily do can’t just be a matter of something general like trying to seriously communicate with someone. Rather, the speaker has to do something that makes her latch on to the particular expression’s particular meaning in that particular language. This gives us our third constraint: 3) Latch: To use an expression e with its meaning m in a language L, a speaker must voluntarily do something that puts her in some relation R to m (rather than to any other of its meanings either in L or in a different language). In sum, we can say that to use an expression with (one of) its meaning(s) in a language, a speaker has to be semantically competent with the expression in that language and voluntarily do something that puts her in some relation R to that meaning. This concludes my argument that we need to distinguish between mere uses and uses with meaning. In the next section I’ll discuss the Rules view and start looking at how it could provide an account of what it is to use an expression with its meaning. 2. Rules and a Simple Account The Rules view starts from rules governing use. On this view expressions have a meaning primarily in a public language not tied to any particular community. And what it is for an expression to have a meaning in such a language is for it to be governed by a rule that entails that it is permissible to use it in certain conditions (e. g. its permissible use-conditions). In order to get a better grip on this view let’s see how it can account for the fact that meaningfulness facilitates communication. Suppose that A wants to indicate to B that he believes that p. Suppose further that there’s a public language PL which is partly constituted by a rule which permits using ‘p’ iff one believes that p. Thus, one could say that in PL ‘p’ has a meaning that makes it suitable for indicating that one believes that p. Suppose further that both A and B grasp this rule and they know that the other does as well. Then A can use ‘p’ with its meaning 32 and thereby indicate or provide evidence that he believes that p and B can infer from the production of ‘p’ with its meaning that A believes that p and thereby get the intended information. Let’s now see how the Rules view could give an account of moving a piece with its role in a game and using an expression with its meaning in a language. 20 On our intuitive conception, to use an expression with (one of) its meaning(s) in a language a speaker has to be semantically competent with the expression in that language and use the expression while voluntarily doing something that puts her in some relation R to that meaning. On the Rules view an expression’s meaning in a public language is a permissible use- condition. Thus, a Rulist account of use with meaning has to consist of a specification of a voluntary action that puts a speaker in some relation R to the permissible use-condition. Which action and which relation? 
To get us started, consider John Searle’s suggestion that to play a game or speak a language one has to act in accordance with the role- or meaning-determining rules (Searle 1969: 33-34, 37). Taking this as a basis, here’s a simple Searlean account of use with meaning:

(Simple Account) To use an expression with its meaning m in a language L is to use it while acting in accordance with the m-determining rule of L.

In other words, the idea is that to use an expression with its meaning a speaker has to use it while the particular permissible use-condition obtains.

Unfortunately, the simple Searlean account doesn’t work. First, it satisfies neither Competence, nor Voluntariness, nor Latch. Even incompetent speakers can use expressions while the use-conditions obtain and that doesn’t result in them using the expressions with their meanings. 20 Not surprisingly, in the case of games like chess we also need a distinction between mere movings of pieces and movings of pieces with their roles in a game. Consider a “player” who doesn’t know how to play chess and moves one of its pieces. It should be clear that even though she’s moved the piece, she hasn’t played chess or made a move in the game. In our terminology, she’s merely moved the piece, but hasn’t moved it with its role. However, again even competent players can merely move pieces. Consider, for example, a competent player who moves pieces on a large board to represent a game that’s going on elsewhere. It should be clear that even though she’s moved a chess piece she hasn’t played any chess or made a move in the game. Rather, she’s merely moved the piece in representing a game. Finally, we can also see the need to distinguish between mere movings and movings with roles by reflecting on the fact that different games can share some of their pieces, although these pieces have different roles. When a player competent with both games moves a piece she doesn’t play both games at once. Rather, she moves it with just one of its roles and plays only one of the games.

Second, consider the fact that it seems possible to play many games while breaking their rules. For example, suppose you’re playing basketball, jump up for a rebound and accidentally elbow another player. This still counts as a move in the game and has consequences – it results in your getting penalized. 21 Similarly, one might think, it is possible to speak a language and use an expression with its meaning while breaking the meaning-fixing rule. In other words, it is possible to use an expression with its meaning while misusing it – using it while the permissible use-conditions don’t obtain. For example, suppose for a moment that Alston, Dummett, and Stenius were right that the permissible use-conditions of declaratives involve a proposition’s being true. Surely it’s possible to use an expression with its meaning even if the proposition is false. 22 This gives us a fourth constraint on an account of use with meaning:

4) Misuse: To use an expression e with its meaning m in a language L, a speaker must voluntarily do something that doesn’t rule out the possibility of misuses.

However, if to use an expression with its meaning m is to act in accordance with the m-determining rule then, necessarily, all uses of an expression with its meaning are also correct or permissible uses – uses performed while the permissible use-conditions obtain. And this means that the simple account can’t satisfy Misuse.
Let’s go back to Searle’s suggestion that to play a game or speak a language one has to act in accordance with the rules. We’ve seen that to play or speak one doesn’t have to actually act in accordance with the rules – that is neither sufficient nor necessary. However, perhaps what Searle had in mind was rather that one has to follow the rules in the sense of trying to act in accordance with them. 21 You can’t play chess while breaking its rules because given the rules of chess a violation immediately terminates the game. However, you can play most sports like basketball or tennis while breaking some of their rules because violations generally don’t terminate the game but merely get you censured in the form of a penalty. 22 Compare Tim Williamson: “Constitutive rules do not lay down necessary conditions for performing the constituted act. When one breaks a rule of a game, one does not thereby cease to be playing that game. When one breaks the rule of a language, one does not thereby cease to be speaking that language; speaking ungrammatically is speaking English. Likewise, presumably, for a speech act: when one breaks a rule of assertion one does not thereby fail to make an assertion. One is subject to criticism precisely because one has performed an act for which the rule is constitutive.” (Williamson 1996: 451, 2000: 240, see also Glüer & Pagin 1999: 221)

Thus, here’s a modified Searlean account of use with meaning:

(Modified Account) To use an expression with its meaning m in a language L is to use it while trying to act in accordance with the m-determining rule of L.

In other words, we could say that to use an expression with its meaning a speaker has to use it while trying to use it iff the permissible use-conditions obtain. This satisfies Competence because only semantically competent speakers know what the rules require. Furthermore, it also satisfies Voluntariness and Latch because it requires speakers to do something that puts them in a relation to a particular permissible use-condition. However, it also satisfies Misuse because trying to act in accordance with a rule is not inconsistent with failing to do so.

Unfortunately, it seems that the modified account doesn’t work either. Above I asked you to consider the fact that it seems possible to play many games while breaking their rules. However, even more interestingly, it seems possible to play many games while doing so intentionally. This is what we call cheating. For example, suppose you’re playing basketball, jump up for a rebound and intentionally elbow another player because the game has reached a point where good strategy requires intentional fouls. This still counts as a move in the game and has consequences – it results in your getting penalized. Similarly, one might think, it should be possible to speak a language and use an expression with its meaning by intentionally breaking the meaning-determining rule. In other words, it should be possible to use an expression with its meaning while intentionally misusing it – using it while knowing that the permissible use-conditions don’t obtain.

Here the best cases involve intentional misuses for the purposes of deception. It’s perhaps simplest to see this if we assume that the permissible use-conditions of declaratives involve a proposition’s being true or being believed.
However, since these views of what the use- conditions of declaratives are like are controversial, I’ll focus on what I take to be the theoretically least committal case of complex demonstratives like ‘she’ or ‘that woman’. On the standard view of complex demonstratives they’re directly referential subject to certain restrictions. The idea is that a complex demonstrative like ‘that woman’ has a part that enables one to use it to refer to something (the ‘that’-part), and a part that restricts to what one can use it 35 to refer to (the ‘woman’-part) (Braun 2008, Recanati 2001, Salmon 2002, Soames 2010a). Thus, both ‘she’ and ‘that woman’ are plausibly for talking about persons who are female. This means that their use-conditions are plausibly that you are thinking of a person who is female (or thinking of a person who you believe is female). Suppose now that we’re at a party and we see our mutual friend Jake dressed as a woman. I know that it’s Jake but I also realize that you don’t and take the person you see to be a woman. In an attempt to play a joke on you I point to Jake and utter ‘She’s tall’. In this case I use the sentence with its meaning and say of x that x is tall where x = Jake. However, since Jake isn’t female I’ve also clearly misused the sentence. Furthermore, I’ve intentionally misused it for the purposes of deceiving you into thinking that x is female. All of this shows that we should acknowledge a fifth constraint on an account of use with meaning: 5) Insincerity: To use an expression e with its meaning m in a language L, a speaker must voluntarily do something that doesn’t rule out the possibility of intentional misuses. However, if to use an expression with its meaning m is to try to act in accordance with the m- determining rule then no uses of an expression with its meaning can be intentional misuses. This means that the modified account can’t satisfy Insincerity. We seem to have established that Searle’s idea that to play or speak one has to act in accordance with the rules is misguided. Both are possible even if one is deliberately breaking the rules. Thus, we need to try something different. However, before we go on to do so, let’s look at whether the Conventions and Dispositions view can do any better. 3. Conventions, Dispositions, and a Simple Account The Conventions view starts from a community’s conventional regularities in use in certain conditions. On this view expressions have a meaning primarily in a particular community’s language or a communal language. And what it is for an expression to have a meaning in such a 36 language is for there to be a conventional regularity in the community to use it in certain conditions (e. g. its regular use-conditions). 
On Lewis’s view for there to be a conventional regularity in a community to use an expression e in certain conditions Y the following conditions have to hold:

1) they normally use e only in conditions Y (Regularity)
2) they believe that they normally use e only in conditions Y (Belief)
3) they expect that it will continue to be true that they normally use e only in conditions Y and this gives them good reason to use e only in conditions Y (Reason)
4) they have a general preference for others to continue to use e only in conditions Y (Preference)
5) it could be the case that they normally use e only in conditions Z (where Y and Z are mutually exclusive) and they believe this and they expect that it will continue to be true that they do this which gives them a good reason to do this, and they would have a general preference for others to continue to do this (Arbitrariness)
6) they have mutual knowledge of 1-5 (Mutual Knowledge) 23

23 There are arguments against some of these conditions and there are alternative views of conventionality. For example, Seumas Miller has argued against Arbitrariness and Tyler Burge and many others have argued against Mutual Knowledge (Burge 1975, Miller 2001). Alternative views can be divided into those that are sympathetic to Lewis’s view and those that are not. For an example of the sympathetic kind, consider Andrei Marmor’s view. Marmor thinks that for a regularity in following a rule R to be conventional something like Regularity, Reason, and Arbitrariness must hold. For another example, Nicholas Southwood and Lina Eriksson think that for a regularity to be conventional Regularity, Belief, Reason, and Preference must hold but perhaps neither Arbitrariness nor Mutual Knowledge is necessary (Southwood & Eriksson 2011). In contrast, Margaret Gilbert’s view is of the unsympathetic kind because she thinks that conventions do not require regularities in behavior at all (Gilbert 1989). (However, Gilbert now seems to acknowledge that she and Lewis might simply be talking about different things (Gilbert 2007); see also Southwood & Eriksson, who argue that Gilbert seems to rather talk about what are more naturally called norms). For another example of an alternative that’s of the unsympathetic kind, consider Ruth Millikan’s view that a regularity is conventional if it’s reproduced by precedent (Millikan 1998). For an overview see Rescorla 2011.

In order to get a better grip on this view let’s see how it is supposed to account for the fact that meaningfulness facilitates communication. Suppose again that A wants to indicate to B that he believes that p. Suppose further that both belong to a community where it is conventionally regular to use ‘p’ iff one believes that p and that they thus share a communal language CL. Thus, one could say that in CL ‘p’ has a meaning that makes it suitable for indicating that one believes that p. Then A should be able to use ‘p’ with its meaning and thereby indicate or provide evidence that he believes that p and B can infer from the production of ‘p’ with its meaning that A believes that p and thereby get the intended information.

Since the argument to the effect that the Conventions view doesn’t have enough resources to give a satisfying account of use with meaning proceeds hand in hand with the argument to the effect that the Dispositions view doesn’t either, let’s also look briefly at the latter before setting out the argument.
The Dispositions view is individualistic and starts from a very sparse basis: individual speakers’ dispositions to use expressions in certain conditions. On this view expressions have a meaning primarily in a speaker’s idiolect. And what it is for an expression to have a meaning in an idiolect is for the speaker to be disposed to use it in certain conditions (e. g. its dispositional use-conditions). Furthermore, if two speakers are disposed to use an expression in the same conditions and have a grip on each other’s dispositions then they can be said to have a shared language. In order to get a better grip on this view let’s see how it can account for the fact that meaningfulness facilitates communication? Suppose again that A wants to indicate to B that he believes that p. Suppose further that they are both disposed to use ‘p’ iff they believe that p and that they have a grip on each other’s dispositions making it the case that they can be said to share a language SL. Thus, one could say that in SL ‘p’ has a meaning that makes it suitable for indicating that one believes that p. Then A should be able use ‘p’ with its meaning and indicate or provide evidence that he believes that p and B should be able to infer from the production of ‘p’ with its meaning that A believes that p and thereby get the intended information. So far, perhaps so good. 24 , 25 However, let’s now see how the Conventions or Dispositions view could give an account of using an expression with its meaning in a language. 24 Here’s a serious worry about this view. What A needs to share a language with B is to have a grip on what her dispositions are. But how is such a grip to be gained? It seems that the only way we ever actually get a grip on how our interlocutors are disposed to use expressions is by figuring out which communal or public language they speak. As Ernie Lepore and Kirk Ludwig put it in the context of arguing that Davidson was mistaken to think that prior knowledge of conventions or rules is not necessary for communication for creatures like us: We clearly cannot know what someone’s dispositions to use words are without either having observed him over a period of time, or locating him in a linguistic community whose regularities in word use we have antecedently learned. Even with his complete physical description and a correct theory of physics, the computational problem would be intractable.” (Lepore & Ludwig 2005: 279). 38 On our intuitive conception, to use an expression with (one of) its meaning(s) in a language a speaker has to be semantically competent with the expression in that language and use the expression while voluntarily doing something that puts her in some relation R to that meaning. On the Conventions view an expression’s meaning in a public language is a regular use-condition whereas on the Dispositions view it’s a dispositional use-condition. Thus, Conventionalist and Dispositionalist accounts of use with meaning have to consist of a specification of a voluntary action that puts a speaker in some relation either to the conventional or the dispositional use-condition. Which action and which relation? To get us started, consider the natural Conventionalist suggestion that to speak a language one has to use an expression in a conventionally regular way. Similarly, consider the natural Dispositionalist suggestion that it is to use an expression while realizing the disposition to use in the sense that the disposition manifests in one’s use. 
Taking these as a basis, here are simple accounts of use with meaning:

(Simple Account) To use an expression with its meaning m in a language L is to use it in the m-determining conventionally regular way.

To use an expression with its meaning m in a language L is to use it while realizing the m-determining disposition.

In other words, the idea is that to use an expression with its meaning a speaker has to use it while the particular regular or dispositional use-condition obtains.

However, it should be clear from our discussion of the simple Searlean account in terms of acting in accordance with rules that these simple accounts don’t work. First, they satisfy neither Competence, nor Voluntariness, nor Latch. Even incompetent speakers can use expressions while the regular or dispositional use-conditions obtain and that doesn’t result in them using the expressions with their meanings. Furthermore, the accounts have the consequence that, necessarily, all uses of an expression with its meaning are also correct uses – uses performed while the regular or dispositional use-conditions obtain. This means further that they can’t satisfy Misuse.

24 (cont.) Thus, it seems that we normally get a grip on how our interlocutors are disposed to use expressions by figuring out which language they speak. However, this seems to show that public or communal languages are prior to idiolects and that we should prefer the Rules or Conventions view over the Dispositions view.

25 Here’s another worry about the view. There are plenty of sentences which one is disposed to use primarily in conditions which don’t seem to have anything to do with their meaning. For example, consider the English “typing sentence” ‘the quick brown fox jumps over the lazy dog’ which contains all the letters of the English alphabet and was therefore mostly used to test typewriters. Or consider pronunciation sentences like My Fair Lady’s ‘the rain in Spain stays mainly in the plain’. Almost everybody is primarily disposed to use these in conditions which don’t have anything to do with the expression’s meaning. However, such dispositions are clearly irrelevant as far as the sentence’s meaningfulness is concerned. But it’s not immediately obvious on what basis we can separate out such dispositions as irrelevant to meaningfulness.

Let’s go back to the natural suggestions. We’ve seen that to speak one doesn’t have to actually act in a conventionally regular way or realize the disposition – that is neither sufficient nor necessary. However, perhaps the idea was rather that one has to try to act in a conventionally regular way or realize the disposition. Thus, here are modified accounts of use with meaning:

(Modified Account) To use an expression with its meaning m in a language L is to use it while trying to use it in the m-determining conventionally regular way.

To use an expression with its meaning m in a language L is to use it while trying to realize the m-determining disposition.

In other words, we could say that to use an expression with its meaning a speaker has to use it while trying to use it iff the regular or dispositional use-conditions obtain. We know that such an account satisfies Competence because only semantically competent speakers know what the regularities are or what the dispositions are like. Furthermore, it also satisfies Voluntariness and Latch because it requires speakers to do something that puts them in a relation to a particular regular or dispositional use-condition.
However, it also satisfies Misuse because trying to act in a conventionally regular way and trying to realize the disposition are not inconsistent with failing to do so.

However, again, it should be clear from our discussion of the modified Searlean account in terms of trying to act in accordance with a rule that these modified accounts don’t work either. I argued at length above that it is possible to speak a language and use an expression with its meaning while intentionally misusing it – using it while knowing that the use-conditions don’t obtain. This gave us the constraint Insincerity. However, if to use an expression with its meaning m is to try to act in an m-determining conventionally regular way or to try to realize the m-determining disposition then no uses of an expression with its meaning can be intentional misuses. This means that the modified accounts can’t satisfy Insincerity.

We seem to have established that the natural suggestions that to speak is to use an expression in a conventionally regular way or while realizing the disposition to use are misguided. Speaking is possible even if one is deliberately not doing this. Thus, like in the case of the Rules view, we need to try something different.

4. Rules and a New Account

We concluded above that Searle’s idea that to play or speak one has to act in accordance with the rules is misguided. Both are possible even if one is deliberately breaking the rules. Thus, we need to try something different. To get us started afresh, notice that it’s very natural to think that mere movings and uses are not governed by the relevant rules. After all, if you merely move a knight diagonally on the board while enacting a play for your kid, you’re not intuitively breaking the rules of chess. In contrast, movings with a role and uses with meaning are governed by the relevant rules. If you move a knight diagonally on the board while playing chess then you are breaking the rules of chess. Since to play and speak one has to voluntarily do something that puts one in some relation to a rule, this suggests that what one has to do is to make the rule govern one’s use or moving. Thus, here’s a first pass at a new account of use with meaning:

(New Account*) To use an expression with its meaning m in a language L is to use it while making the m-determining rule of L govern one’s use. 26

26 This is not a completely novel idea. Bernard Suits suggested in his marvelous analysis of playing games that to play a game a player has to accept the rules as governing one’s actions (Suits 1978: 45-47). Kathrin Glüer and Peter Pagin suggest similarly that to play a game one has to decide that the rules apply to one’s actions (Glüer & Pagin 1999: 221). Finally, William Alston has suggested that to play a game or speak a language one has to make one’s actions subject to the rules (Alston 1999: 62-63).

In other words, we could say that to use an expression with its meaning a speaker has to use it while making it the case that the use is permissible iff certain use-conditions obtain. However, this first pass at a new account is likely to strike us as somewhat puzzling. I said above that games like chess and languages like English are constituted by rules which determine pieces’ and expressions’ moving- and use-conditions. But if rules already do this, how could it then be further up to one to make it the case that one’s particular moving of a piece or use of an expression is permissible iff certain use-conditions obtain?
In order to get rid of the puzzlement it’s useful to think about what it is for rules to exist and an intuitive distinction between a rule and its conditions of being in force (Pagin 1987: 22). 27 Take the traffic rule that it’s impermissible to make a right turn on red in New York. On one picture, this rule is a conditional content to the effect that it’s impermissible to make a right turn on red, if you’re in New York. However, it’s in force for all agents at all times and thus doesn’t really have substantive conditions of being in force. On this picture, for a rule to exist is for it to be in unconditionally in force. And to create a rule is to take a conditional content of the right sort and put it unconditionally in force for all agents in all times. On another picture, the rule that it’s impermissible to make a right turn on red in New York is an unconditional content to the effect that it’s impermissible to make a right turn on red. However, it has conditions of being in force to the effect that it’s in force for a person at a time iff the person is in New York at the time. On this picture for a rule to exist is for it to be conditionally in force. And to create a rule is to take an unconditional content of the right sort and put it in force for those who meet certain conditions. The first picture is implausible because it commits us to a wild amount of rules that are in force for each of us and leads to an unintuitive picture of authority and jurisdiction. For example, suppose I give myself the rule that I am not to eat any carbs this week. On this picture what I do is take the conditional content expressed by ‘If you’re me, you’re not permitted to eat carbs this week’ and put it in force for everybody. But this is crazy because I simply don’t have the authority to put rules in force for everybody! In contrast, the second picture delivers the intuitive verdict that what I do is take the unconditional content expressed by ‘You’re not permitted to eat carbs this week’ and put it in force just for myself. (Pagin 1987: 26) 28 Let’s therefore adopt the second picture. 27 What are rules anyway? On one common view, they’re imperative contents of the form ‘Do A!’ or ‘If C, do A!’. On another view, they’re propositions involving normative properties of the form ‘S may/ought/must do A’ or ‘S may/ought/must do A only if/if/iff C’. For the sake of convenience, I’m assuming the second view, although nothing hangs on this. For discussion of the two views in the context of epistemic rules see Boghossian 2008. 28 For further discussion of the two pictures, a parallel with unrestricted and restricted quantification and defense of the latter because the former leads to something akin to Russell’s paradox see Pagin 1987: 22-28. 42 Next, notice that rules like the aforementioned traffic rule and so-called constitutive rules like rules of games and languages seem to be very different insofar as their conditions of being in force. Take the traffic rule that it is impermissible to make a right turn on red that is in force for one iff one is in New York. Whether this rule governs your driving maneuver when you’re in New York is in no way further up to you. Thus, traffic rules are in force dependent on “external” conditions like whether you’re in New York. However, now take a rule of chess or rule of English. Whether these rules govern your particular movings or uses does seem further up to you. 
Thus, it seems that whether rules of chess or English are only in force for you when you move a piece or use an expression when you make it the case that they are or regard them as being in force. In short, the so-called constitutive rules seem to be in force on “internal” conditions like whether you put them in force or regard them as being in force (Pagin 1987: 186, Glüer & Pagin 1999: 221-222). Let’s now get back to our first pass at a novel account in terms of making one’s use governed by a rule. I said that it was likely to strike us as somewhat puzzling because if rules determine pieces and expressions’ moving- and use-conditions, how could it then be further up to one to make it the case that one’s particular moving of a piece or use of an expression is permissible iff certain use-conditions obtain? Now we can see that there’s no mystery. Rules of games and languages are unconditional contents to the effect that movings and uses are permissible in certain conditions. However, they’re in force only on the conditions that one puts them in force for oneself at the time. Given this, we can restate our new account as follows: (New Account) To use an expression with its meaning m in a language L is to use it while putting the m-determining rule of L in force for oneself at the time of the use. This satisfies Competence because only semantically competent speakers know what the rules require. And it satisfies Voluntariness and Latch because it requires speakers to do something that puts them in a relation to a particular permissible use-condition. However, it also further satisfies both Misuse and Insincerity because making a rule govern one’s use by putting the rule in force for oneself at the time of use is incompatible with neither breaking nor even intentionally breaking it. 43 A brief digression. I’ve presented an analysis of what it is to play and speak in terms of putting a rule in force for oneself at a time. It is a further question what is it for rules to be in force in general, and what it is for us to put a rule in force for oneself at a time in particular. I won’t have anything further to say about this here because my analysis is deliberately neutral between different further answers one could give. 29 To sum up, on the Rules view the picture that emerges is the following. Like games such as chess, languages like English are public, abstract objects not tied to any particular community. They’re constituted by rules that give expressions their permissible use-conditions, but that are in force only on the conditions that the speaker puts them in force for oneself at the time. Speakers are competent to speak a language like English if they grasp these rules. And for a speaker to speak a language like English or to use an expression with its meaning is for her to use it while putting the meaning-determining rule in force. I claim that this new account of using an expression with its meaning is a satisfying one. First, it satisfies all of our constraints. Second, it helps to fill in our example of how on the Rules view meaningfulness facilitates communication. Suppose again that A wants to indicate to B that he believes that p, that in PL ‘p’ is governed by a rule that permits its use iff one believes that p, that both A and B grasp this rule and they know that the other does as well. Then A can use ‘p’ with its meaning, that is use ‘p’ while putting the rule governing p in force for himself at the time. 
This amounts to representing oneself as doing something permissible iff the use-conditions obtain which amounts to providing evidence that the use-conditions obtain. And it should be clear how B can infer from this that A believes that p and thereby get the intended information. This concludes my argument that the Rules view can give a satisfying account of use with meaning. I the next and final section I will argue that neither the Conventions nor the Dispositions view has enough resources to provide anything comparable. 29 It’s clear that any account of putting a rule in force for oneself must at least involve the willingness to regard situations where one breaks the rule as situations in which one is justifiably subject to censure. Perhaps what it is for rules to be in force in general depends similarly on subjects attitudes towards them (for one account along these lines see Brandom 1994). 44 5. Conventions, Dispositions, and a New Account? We concluded above that the natural suggestions that to speak is to use an expression in a conventionally regular way or while realizing the disposition to use are misguided. Speaking is possible even if one is deliberately not doing this. Thus, we need to try something different. In the case of the Rules view were able to arrive at a satisfying account by appealing to the idea that rules give expressions their permissible use-conditions, but are in force only on the conditions that the speaker puts them in force for oneself at the time. This led to an account on which to use an expression with its meaning is to put the rule in force for oneself at the time. I will now argue that neither the Conventions nor the Dispositions view has enough resources to provide anything comparable. Let’s start with the Conventions view. On this view what it is for an expression to have a meaning in a communal language is for there to be a conventional regularity in the community to use it in certain conditions. However, a regularity in use by itself is just a series of uses with a frequency over a period of time. Such a thing couldn’t govern anything or be in force and thus couldn’t be something we can put in force for oneself at a time. Neither does it have any other comparable features. And appealing to the fact that the regularity is a conventional one doesn’t help either. A regularity in use doesn’t become something we can put in force just because people have beliefs, expectations, and preferences related to it. Let’s now extend the same criticism to the Dispositions view. On this view what it is for an expression to have a meaning in an idiolect is for the speaker to be disposed to use it in certain conditions. However, a disposition to use is just something that manifests in a use in its triggering-conditions. Such a thing couldn’t govern anything or be in force either, and thus couldn’t be something we can put in force for oneself at a time. Neither does it have any other comparable features. And appealing to the fact that another speaker has to have a grip on one’s dispositions doesn’t help either. A disposition to use doesn’t become something we can put in force just because somebody else has a grip on it. This concludes my argument that neither the Conventions nor the Dispositions view has enough resources to provide us with a satisfying account of use with meaning. Obviously, I haven’t considered every possible move on the behalf of these views. This might make you worried that I haven’t considered the best version. 
But remember, here’s the challenge: we need 45 to specify a voluntary action that puts the speaker in some relation to a conventional regularity in use in certain conditions or disposition to use it in certain conditions that is compatible with deliberately not being in those conditions while amounting to providing evidence that one is. And it’s not easy to see how this could be done. Conclusion My aim in this chapter was to see whether we have any good reasons to prefer the Rules over the Dispositions and Conventions view. My strategy was focus on the connection between an expression’s meaningfulness in a language and using it with its meaning in that language. I first argued that uses in the sense of tokenings of expressions divide into mere uses and uses with meaning, where only the latter result in speaking a particular language and performance of locutionary speech acts. And I then argued that whereas the Rules view can give a satisfying account of use with meaning, neither the Conventions nor the Dispositions view has enough resources to provide anything of the sort. Let me conclude by pointing out that this argument is more powerful and goes deeper than Saul Kripke’s argument in Wittgenstein on Rules and Private Language to the effect that meaningfulness is normative and the Dispositions view can’t account for that (Kripke 1982). Roughly, his argument goes as follows. First, one claims that it’s a commonplace that some uses of expressions are semantically correct while others are misuses. Second, one argues that this entails that these uses are semantically permissible while others are impermissible. Finally, one shows that the Dispositions view can’t account for semantic permissibility. In contrast, the argument I’ve presented shows that the Conventions and Dispositions view get ruled out before any such considerations arise because they can’t even explain how certain uses of expressions get to be semantically correct or misuses in the first place. 46 Chapter 3 Use-Conditional Semantics and the Representational Core of Language I Introduction In the end of the first chapter I pointed out that the second thing we have to do in order to develop and defend the Rules view is to show that not only is it well placed to satisfy the Unity, Conservativeness, and Explanatory Constraints, but that it can actually do so. In the present chapter and the following one I’ll lay the groundwork for the demonstration and get it started by developing a use-conditional semantics for the representational core of natural language. In this chapter I’ll proceed as follows. I’ll start by arguing that in order to show that the view satisfies the Unity Constraint we need to develop a use-conditional semantics for a natural language by assigning all the different types of meaningful expressions a plausible use-condition. I’ll also clarify what we need to do to show that the view satisfies further the Conservativeness, and the Explanatory Constraints. (Section 1) I’ll then continue by explaining how the standard requirement that a semantics be compositional bears on the task of developing a use-conditional semantics, distinguish between several different ways of doing this, and choose one (Sections 2- 3). Finally, I’ll get started on developing a use-conditional semantics for the representational core of natural language. 
I’ll do this by providing an overview of Irene Heim’s and Angelika Kratzer’s truth-conditional semantics which has become a baseline in linguistic semantics, and show how to build a use-conditional semantics on its basis (Sections 4-6). 1. The Three Constraints Our task is to show that the Rules view can satisfy the Unity, Conservativeness, and Explanatory Constraints. Let’s start by getting a better grip on what we need and what we don’t need to do in order to show this. Let’s start with the Unity Constraint. This is the requirement that a view of meaningfulness tell us what the nature of the common element X is and the nature of the differing elements Y’s are so that this works for all the different types of strings of symbols in 47 natural language. Thus, in order to show that the Rules view satisfies this constraint we need to develop a use-conditional semantics for a toy language sufficiently similar to natural language by assigning all of its different types of expressions plausible use-conditions. 30 However, there might be several plausible candidates for such a use-condition. What I want to stress here is that to show that the view satisfies the Unity Constraint we don’t have to choose between the different candidates. That is, we don’t have to resolve the relevant controversies in descriptive semantics. For example, we don’t have to settle the question whether names are individual constants or general terms that sometimes function as predicates or decide between different theories of definite descriptions etc. All we have to do is to find at least one plausible candidate, and to show that all of the plausible candidates can be stated in terms of use-conditions. Let’s look next at the Conservativeness Constraint. This is the requirement that a view of the nature of meaningfulness be consistent with us being able to formally represent the meanings of at least some types of expressions with characters (and because everything else can be understood in terms of characters, also with us being able to describe the meanings in terms of semantic contents relative to contexts, truth-in-L conditions etc.). Thus, in order to show that the Rules view satisfies this constraint we need to make sure that all the different proposed descriptions that can be stated in terms of characters etc. can also be stated in terms of use- conditions. If there’s a description in terms of characters that can’t be stated in terms of use- conditions then the view will fail this constraint. What I want to stress again is that to show that the view satisfies the Conservativeness Constraint we don’t have to choose between the different proposed descriptions. All we have to do is to show that they can be stated in terms of use- conditions. Finally, just to remind ourselves, let’s look also at the Explanatory Constraint. This is the requirement that a view of meaningfulness yield an explanation of why we can formally represent the meanings of at least some types of expressions with characters and exactly what happens when we do so. Thus, in order to show that the Rules view satisfies this constraint we need to make sure not only that all the proposed descriptions that can be given in terms of 30 As I’m using these terms, the Rules view is a view about the nature of meaningfulness whereas a use-conditional semantics for a language is, like any semantics, a description of the meanings of its expressions, in this case, by stating their use-conditions. 
It is important to understand that the Rules view is by itself consistent with lots of different type of semantic theories as long as we understand the descriptions they give as giving us some information about use-conditions. 48 characters can also be given in terms of use-conditions, but that we are able to explain what we’re doing when we give the former descriptions in terms of use-conditions. 2. Compositionality In the previous section I argued that to show that the Rules view satisfies the Unity Constraint we need to develop a use-conditional semantics for a toy language sufficiently similar to natural language. Let me next explain how the standard requirement that a semantics be compositional bears on this task. Now, to develop a use-conditional semantics for any language L we must assign all of the expressions of L use-conditions. The requirement that a semantics for L be compositional is the requirement that it satisfy the following principle: Compositionality: For every complex expression e in a particular language L, all (non- idiomatic) meanings of e assigned by the semantics are determined by its structure in L and the meanings assigned by the semantics to e’s parts. 31 Thus, to develop a compositional use-conditional semantics for L we must assign all of the complex expressions of L use-conditions that are somehow determined by their structure and the use-conditions we’ve assigned to their parts. Interestingly, given the nature of the Rules view, there’s only one way to develop a compositional use-conditional semantics. Recall that on the Rules view, to use any expression with its meaning and permissibly one must be in its permissible use-condition. Now, notice that to use a complex expression with its non-idiomatic meaning requires using all of its parts with their meanings. For example, to use ‘Bertrand is British’ with its standard meaning is to use ‘Bertrand’, ‘is’, and ‘British’ with their meanings. It follows from these two things that to use a complex expression with its non-idiomatic meaning and permissibly one must be in its use- conditions and the use-conditions of each of its parts. For example, to use ‘Bertrand is British’ 31 For discussion and refinements of the principle as well as arguments for and against the requirement that semantics must be compositional see Szabo 2007. 49 with its non-idiomatic meaning and permissibly one must be in its use-conditions and the use- conditions of ‘Bertrand’, ‘is’, and ‘British’. 32 Thus, the only way to develop a compositional use-conditional semantics for L is to assign all the complex expressions of L use-conditions that are somehow determined by their structure and that contain the use-conditions we’ve assigned to their parts. 3. Mentalism In the first section I argued that to show that the Rules view satisfies the Unity Constraint we need to develop a use-conditional semantics for a toy language sufficiently similar to natural language. Let me now distinguish between several different ways of doing this and choose one. Now, to develop a use-conditional semantics for any language L we must assign all of the expressions of L use-conditions. However, there are different ways of doing this depending on what we take use-conditions in general to consist in. For example, consider again the fact that in the case of interjective sentences like ‘Ouch’, the rules determine use-conditions that plausibly consist in the speaker’s undergoing a mental event like a pain. 
One might therefore think that the use-conditions for declarative sentences like ‘Bertrand is British’ consist in the speaker’s performing cognitive mental acts like entertaining the proposition that Bertrand is British or judging it to be the case (or the speaker’s being in mental states like believing that Bertrand is British or knowing it to be the case). Thus, one way to develop a use-conditional semantics for L is by trying to assign all of the expressions of L use-conditions that feature mental items. Let’s call this the Mentalist way. Alternatively, one might think that the use-conditions for declarative sentences like ‘Bertrand is British’ consist in just truth-conditions, like Bertrand’s being British. 33 Thus, another way to develop a develop a use-conditional semantics is by trying to assign all the different types of meaningful expressions rules that determine use-conditions that don’t necessarily have to do with the speaker’s mind. 32 Of course, to use a complex expression that has an idiomatic meaning with this idiomatic meaning is not to use all of its parts with their meanings. If ‘Bertrand is British’ had an idiomatic meaning that would make it possible to use it to say that Gottlob is German, then to use it with this idiomatic meaning would not be to use ‘Bertrand’, ‘is’, and ‘British’ with their meanings. 33 Compare Alston 1963, 1999, Dummett 1993, Stenius 1967: 267. 50 Now, to show that the Rules view satisfies the Unity Constraint we need to develop a use-conditional semantics for a toy language sufficiently similar to natural language. And in order to start developing such a use-conditional semantics we need to choose a particular way of doing this. I will choose the Mentalist way because it seems to me most promising. What I want to stress is that to show that the Rules view satisfies the Unity Constraint we don’t have to argue that the Mentalist way is preferable to other ways. All we need to do is to show that it’s one of the possibly several ways that enables us to assign all the different types of meaningful expressions plausible use-conditions. If other ways enable us to do this as well then this is all for the better. Let’s take stock. Our task is to show that the Rules view can satisfy the Unity, Conservativeness, and Explanatory Constraints. The groundwork for the demonstration is now laid. I’ve argued that in order to show that the view satisfies the Unity Constraint we need to develop a use-conditional semantics for a toy language sufficiently similar to natural language by assigning all the different types of meaningful expressions plausible use-conditions. I’ve also clarified what we need to do to show that the view further satisfies the Conservativeness, and the Explanatory Constraint. And I’ve explained how the standard requirement that a semantics be compositional bears on the task of developing a use-conditional semantics, distinguished between several different ways of doing this and chosen the Mentalist way. Let’s now get started on developing a use-conditional semantics for the representational core of natural language 4. Heim & Kratzer’s Truth-Conditional Semantics In this section and the next I’ll provide an overview of Irene Heim’s and Angelika Kratzer’s truth-conditional semantics which has become a baseline in linguistic semantics, and show how to build a use-conditional semantics on its basis. (Heim & Kratzer 1998) Let’s begin with an overview of Heim & Kratzer’s truth-conditional semantics. 
H&K take sentences to be phrase structure trees and assume that phrase structures are at most binary branching (Heim & Kratzer 1998: 45). Their truth-conditional semantics consists of three different parts. First, they propose an inventory of denotations: 51 A. Inventory of denotations Let D be the set of all individuals in the actual world. Possible denotations are individuals, “truth-values” (“true” – 1, “false” – 0), or functions built out of these like functions from individuals to truth-values, functions from individuals to such functions and so on: - Elements of D (type e) - Elements of {1, 0} (type t) - Functions from D to {0, 1} (type <e, t>) - Functions from D to functions from D to {0, 1} (type <e, <e, t>>) Second, they provide a lexicon which assigns for each item that may occupy a terminal node of a tree, its denotation. This is done by using a function [[ ]] that assigns denotations to expressions α. Thus, we can write [[α]] for the denotation of α. For example, here’s a part of their lexicon: B. Lexicon [[‘Bertrand’]] = Bertrand [[‘Gottlob’]] = Gottlob etc. for other proper names [[‘smokes’]] = λx : x . x smokes (This is to be read as: Let [[‘smokes’]] be the function that maps every x in D to 1 if x smokes, and to 0 otherwise) 34 [[‘likes’]] = λx . [λy : y . y likes x] (This is to be read as: Let [[‘likes’]] be the function that maps every x in D to a function that maps every y in D to 1 if y likes x, and to 0 otherwise) 34 H&K use the lambda notation in two ways. [λα : φ. γ] is to be read, depending on which way makes sense, either as “the function which maps every α such that φ to γ” or as “the function which maps every α such that φ to 1 if γ and to 0 otherwise” (Heim & Kratzer 1998: 37). 52 Third, they posit three general principles about how syntax affects semantic composition. C. Principles of Composition (I*) Terminal Nodes (TN): If α is a terminal node, [[α]] is specified in the lexicon. (II*) Non-Branching Nodes (NN): If α is a non-branching node, and β is its daughter node, then [[α]] = [[β]] (III*) Functional Application (FA): If α is a branching node, {β, γ} is the set of α’s daughters, and [[β]] is a function whose domain contains [[γ]], then [[α]] = [[β]]([[γ]]) (Heim & Kratzer 1998: 43-44) In order to see how this truth-conditional semantics works, let’s walk through the examples they begin with; that is, examples of sentences that consist of a proper name plus an intransitive verb like ‘smokes’ and two proper names plus an transitive verb like ‘loves’. They assume that the syntax associates such sentences with phrase structures like the following: a) [ S [ NP [ N Bertrand]] [ VP [ V smokes]]] b) [ S [ NP [ N Bertrand]] [ VP [ V likes] [ NP [ N Gottlob]]]] Let’s start with a). By TN we can get the denotations for the terminal nodes from the lexicon: [[‘Bertrand’]] = Bertrand and [[‘smokes’]] = λx : x . x smokes. By NN we know that the denotations of [ NP [ N Bertrand]] and [ VP [ V smokes]] are the same as that of ‘Bertrand’ and ‘smokes’. By FA we know that since [[[ VP [ V smokes]]]] is a function whose domain contains [[‘Bertrand’]], [[a)]] = what we get when we apply λx : x . x smokes to Bertrand. Thus, we know that we get 1 if Bertrand smokes and 0 otherwise. And this is what we need. Let’s now walk through b). Again, by TN we can get the denotations for the terminal nodes from the lexicon: [[‘Bertrand’]] = Bertrand, [[‘likes’]] = λx . [λy : y . y likes x], [[‘Gottlob’]] = Gottlob. 
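Before finishing the walkthrough of b), it may help to see the type-driven machinery in an executable form. The Python sketch below is only an illustration of how the lexicon and the three composition principles interact, not anything Heim & Kratzer themselves give: the helper names (lex, denote), the encoding of trees as nested lists, and the stipulated facts about who smokes and who likes whom are all mine, True and False stand in for 1 and 0, and the domain condition on Functional Application is simplified to "apply whichever daughter denotes a function".

# A minimal sketch of type-driven interpretation (illustrative only).
SMOKERS = {"Bertrand"}                     # stipulated facts for the example
LIKES = {("Bertrand", "Gottlob")}          # pairs (liker, liked)

lex = {
    "Bertrand": "Bertrand",                            # type e: an individual
    "Gottlob": "Gottlob",                              # type e
    "smokes": lambda x: x in SMOKERS,                  # type <e, t>
    "likes": lambda x: (lambda y: (y, x) in LIKES),    # type <e, <e, t>>
}

def denote(tree):
    if isinstance(tree, str):              # Terminal Nodes: look up the lexicon
        return lex[tree]
    if len(tree) == 1:                     # Non-Branching Nodes: pass the value up
        return denote(tree[0])
    left, right = denote(tree[0]), denote(tree[1])
    # Functional Application: apply the daughter that denotes a function
    # to the other daughter's denotation.
    return left(right) if callable(left) else right(left)

# a) 'Bertrand smokes' and b) 'Bertrand likes Gottlob' as binary-branching trees:
print(denote([["Bertrand"], ["smokes"]]))                  # True, i.e. 1, iff Bertrand smokes
print(denote([["Bertrand"], [["likes"], ["Gottlob"]]]))    # True, i.e. 1, iff Bertrand likes Gottlob

Nothing in the sketch goes beyond the lexicon and the three principles; the truth-values fall out of functional application alone.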
By NN we know that the denotations of [ NP [ N Bertrand]], [ VP [ V likes]], and [ NP [ N Gottlob]] are the same as that of ‘Bertrand’, ‘likes’, and ‘Gottlob’. Then, by FA we 53 know that since [ VP [ V likes] [ NP [ N Gottlob]]] has as its daughters [ VP [ V likes]] and [ NP [ N Gottlob]], and since [[[ VP [ V likes]]]] is a function whose domain contains [[‘Gottlob’]], [[[ VP [ V likes] [ NP [ N Gottlob]]]]] = what we get by applying λx . [λy : y . y likes x] to Gottlob. Thus, we get λy : y . y likes Gottlob, a function that maps every y in D to 1 if y likes Gottlob and 0 otherwise. Finally, by FA we also know that since b) has its daughters [ NP [ N Bertrand]] and [ VP [ V likes] [ NP [ N Gottlob]]], and since [[[ VP [ V likes] [ NP [ N Gottlob]]]]] is a function which has as its domain [[‘Bertrand’]], [[b)]] = what we get when we apply λx : x . x likes Gottlob to Bertrand. Thus, we know that we get 1 if Bertrand likes Gottlob and 0 otherwise. And this is what we need. Looking at the two examples we can see why H&K take their semantics to be truth- conditional although the denotations of sentences are truth-values. This is because the denotations of predicates are given by specifying a condition like “maps every x in D to 1 if y smokes”. H&K take this to amount to specifying the denotations of predicates in a way that shows their meaning (Heim & Kratzer 1998: 20). Consequently, they also think that this yields a specification of the denotations of sentences in a way that shows their truth-conditions. Although they don’t say too much about how they think about this, one way to think about it is by seeing them as using ‘application’ in a non-standard sense. In the standard sense, to apply a function like ( ) 2 to an argument is to calculate its output. In order to do this you have to know which function it is you’re applying beyond knowing it just via a specification of a condition. If you know the function just via a specification of a condition, then “applying” it does not consist in calculating its output, but rather coming to know what its output is given a certain input and the further conditions. For example, to “apply” λx : x . x smokes to Bertrand is not to calculate its value, but rather to come to know that it yields 1 given Bertrand as input and one the condition that Bertrand smokes. Looking at the above examples we can also see why it’s irrelevant that III) doesn’t specify the linear order of β and γ. It applies in a unique way to each given binary branching tree because of the condition about [[β]] being a function whose domain contains [[γ]]. Thus, as we have seen, if α is of the form [ S NP VP] then it applies so that the right-hand daughter corresponds to β and the left-hand daughter corresponds to γ because it’s the only way to satisfy the condition. And, if α is of the form [ S VP NP] then it applies so that the right-hand daughter corresponds to γ and the left-hand daughter to β for the same reason. Thus, H&K’s semantics is an example of what is known as “type-driven interpretation” in that the procedure for calculating 54 the denotation of the mother node is determined by the semantic types (e, t, <e, t> etc.) of the daughter nodes (Heim & Kratzer 1998: 44) 5. 
Heim & Kratzer in Use-Conditional Semantics H&K offer us a truth-conditional semantics which specifies denotations that are either individuals, truth-values, or functions built out of these like functions from individuals to truth- values, and which purports to take all semantic composition to be functional “application”. Any such semantics can be interpreted in terms of a use-conditional semantics. In order to do that we need to explain what sorts of use-conditions we assign to expressions based on what Heim & Kratzer have taken to be their denotations and we need to explain how to think of semantic composition analogously to how they’ve done it. Here the main idea is that instead of individuals, functions, and intensions we will appeal to mental acts having to do with these things. The central mental act will be thinking of something, where this is neutral between different ways one can think of individuals (de re vs. descriptively). For proper names we will take the use-conditions to consist of thinking of the individuals that Heim & Kratzer take to be their denotations in some de re way. Here, then, are the use-conditions for proper names (using [[ ]] UC for a function that assigns the expression to its permissible use-condition): (2) [[‘Bertrand’]] UC = s thinks of Bertrand (3) [[‘Gottlob’]] UC = s thinks of Gottlob For intransitive and transitive verbs we will take the use-conditions to consist of thinking of the functions that Heim & Kratzer take to be their denotations. Here, then, are the use-conditions for intransitive and transitive verbs: (4) [[‘smokes’]] UC = s thinks of λx : x . x smokes (5) [[‘likes’]] UC = s thinks of λx . [λy : y . y likes x] 55 When it comes to FA we will say that its real import is to require us to “apply” the function and come to know what H&K think we come to know. Thus, we’ll rewrite the three principles as follows (to be able to represent syntactic relations with brackets I won’t here use [[ ]] UC for a function that assigns use-conditions to expressions, but #_# instead) (I*) Terminal Nodes (TN*): If α is a terminal node, #α# is specified in the lexicon. (II*) Non-Branching Nodes (NN*): If α is a non-branching node, and β is its daughter node, then #α# = #β# (III*) Functional Application (FA*): If α is a branching node, {β, γ} is the set of α’s daughters, and #β# consists of thinking of a function whose domain contains what #γ# consists in thinking of, then #α# = requires “applying” the function #β# consists in thinking of to what #γ# consists in thinking of. In order to see how this use-conditional semantics works, let’s walk through the previous examples: a) [ S [ NP [ N Bertrand]] [ VP [ V smokes]]] b) [ S [ NP [ N Bertrand]] [ VP [ V likes] [ NP [ N Gottlob]]]] Let’s start with a). By TN* we can get the use-conditions for the terminal nodes from the lexicon: #‘Bertrand’# = thinking of Bertrand and #‘smokes’# = thinking of λx : x . x smokes. By NN* we know that the use-conditions of [ NP [ N Bertrand]] and [ VP [ V smokes]] are the same as that of ‘Bertrand’ and ‘smokes’. By FA* we know that since #[ VP [ V smokes]]# requires us to think of a function whose domain contains what #‘Bertrand’# requires thinking of, #a)# = the sequence of acts consisting of the act of thinking of Bertrand, the act of thinking of λx : x . x smokes, and the act of “applying” λx : x . x smokes to Bertrand. As a result of performing this sequence of acts the speaker will come to know that ‘Bertrand smokes’ is 1 if Bertrand smokes and 0 otherwise. 
And this is what we need. Let's now walk through b). Again, by TN* we can get the use-conditions for the terminal nodes from the lexicon: #'Bertrand'# = thinking of Bertrand, #'likes'# = thinking of λx . [λy : y . y likes x], #'Gottlob'# = thinking of Gottlob. By NN* we know that the use-conditions of [ NP [ N Bertrand]], [ VP [ V likes]], and [ NP [ N Gottlob]] are the same as those of 'Bertrand', 'likes', and 'Gottlob'. Then, by FA* we know that since [ VP [ V likes] [ NP [ N Gottlob]]] has as its daughters [ VP [ V likes]] and [ NP [ N Gottlob]], and since #[ VP [ V likes]]# requires thinking of a function whose domain contains what #'Gottlob'# requires thinking of, #[ VP [ V likes] [ NP [ N Gottlob]]]# = the sequence of acts consisting of the act of thinking of λx . [λy : y . y likes x], the act of thinking of Gottlob, and the act of "applying" λx . [λy : y . y likes x] to Gottlob. As a result, the speaker comes to think of λy : y . y likes Gottlob, a function that maps every y in D to 1 if y likes Gottlob and 0 otherwise. Finally, by FA* we also know that since b) has as its daughters [ NP [ N Bertrand]] and [ VP [ V likes] [ NP [ N Gottlob]]], and since #[ VP [ V likes] [ NP [ N Gottlob]]]# requires us to think of a function whose domain contains what #'Bertrand'# requires us to think of, #b)# = the sequence of acts consisting of all the preceding acts + the act of "applying" λy : y . y likes Gottlob to Bertrand. As a result of performing this sequence of acts the speaker will come to know that 'Bertrand likes Gottlob' is 1 if Bertrand likes Gottlob and 0 otherwise. And this is what we need.
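Before moving on to quantification, here is a minimal Python sketch of how I am thinking of TN*, NN*, and FA*. Use-conditions are modelled as lists of informal act-descriptions, and branching nodes simply concatenate their daughters' lists and add an "application" act. The encoding, the names lex_uc and use_condition, and the wording of the act-descriptions are my own illustrative choices; in particular, unlike FA*, the sketch does not check which daughter supplies the function.

# A minimal sketch of the use-conditional reinterpretation (illustrative only).
lex_uc = {
    "Bertrand": ["think of Bertrand"],
    "Gottlob": ["think of Gottlob"],
    "smokes": ["think of the function mapping every x in D to 1 iff x smokes"],
    "likes": ["think of the function mapping every x in D to the function mapping every y in D to 1 iff y likes x"],
}

def use_condition(tree):
    if isinstance(tree, str):        # TN*: look the terminal node up in the lexicon
        return lex_uc[tree]
    if len(tree) == 1:               # NN*: inherit the daughter's use-condition
        return use_condition(tree[0])
    left, right = use_condition(tree[0]), use_condition(tree[1])
    # FA*: perform the daughters' acts, then "apply" the function one is
    # thinking of to the item the other act is about.
    return left + right + ['"apply" the function being thought of to the other item']

for act in use_condition([["Bertrand"], [["likes"], ["Gottlob"]]]):
    print(act)

For b) this prints, in order, a simplified version of the sequence described above: thinking of Bertrand, thinking of the 'likes' function, thinking of Gottlob, and then the two "applications".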
6. More of Heim & Kratzer in Use-Conditional Semantics
We've now seen examples of how H&K's truth-conditional semantics works based on the examples they begin with. We've also seen how to interpret it in terms of a use-conditional semantics. In order to get a better grip on both H&K's truth-conditional semantics and how to interpret it in terms of use-conditions let's next look at some further examples involving quantification. In handling quantifiers, Heim and Kratzer add to the inventory of denotations because they conclude that quantificational determiner phrases like 'something' must be of the type <<e, t>, t>:
A. Inventory of Denotations
- Functions from functions from D to {0, 1} to {0, 1} (type <<e, t>, t>)
Then, they add to the lexicon:
B. Lexicon
[['something']] = λf ∈ D <e, t> . there is some x ∈ D e such that f(x) = 1
[['everything']] = λf ∈ D <e, t> . for all x ∈ D e , f(x) = 1
Finally, they continue applying the three general principles about semantic composition. Let's now look at their examples consisting of a quantifier and an intransitive verb. They assume that the syntax associates such sentences with phrase structures like the following:
c) [ S [ DP [ N Something]] [ VP [ V vanished]]]
d) [ S [ DP [ N Everything]] [ VP [ V vanished]]]
Let's start with c). By TN we can get the denotations for the terminal nodes from the lexicon: [['something']] = λf ∈ D <e, t> . there is some x ∈ D e such that f(x) = 1, and [['vanished']] = λx : x . x vanished. By NN we know that the denotations of [ DP [ N Something]] and [ VP [ V vanished]] are the same as those of 'something' and 'vanished'. And by FA we know that since [['something']] is a function whose domain contains [['vanished']], [[c)]] = what we get by "applying" λf ∈ D <e, t> . there is some x ∈ D e such that f(x) = 1 to λx : x . x vanished. Thus, we know that we get 1 if something vanished and 0 otherwise. And this is what we need.
Let's now walk through d). By TN we can get the denotations for the terminal nodes from the lexicon: [['everything']] = λf ∈ D <e, t> . for all x ∈ D e , f(x) = 1, and [['vanished']] = λx : x . x vanished. By NN we know that the denotations of [ DP [ N Everything]] and [ VP [ V vanished]] are the same as those of 'everything' and 'vanished'. And by FA we know that since [['everything']] is a function whose domain contains [['vanished']], [[d)]] = what we get by "applying" λf ∈ D <e, t> . for all x ∈ D e , f(x) = 1 to λx : x . x vanished. Thus, we know that we get 1 if everything vanished and 0 otherwise. And this is what we need.
The only thing we need to do to show how the above examples can be interpreted in terms of rules is to explain what sorts of use-conditions we assign to the added expressions based on what Heim & Kratzer have taken to be their denotations. Of course, since these added expressions have functions as their denotations, we already know that their use-conditions will require thinking of those functions:
(6) [['something']] UC = s thinks of λf ∈ D <e, t> . there is some x ∈ D e such that f(x) = 1
(7) [['everything']] UC = s thinks of λf ∈ D <e, t> . for all x ∈ D e , f(x) = 1
And since those are the only additions, that's all we need to add. Everything else follows from the three rewritten principles. In order to see how this use-conditional semantics works, let's walk again through the examples.
Let's start with c). By TN* we can get the use-conditions for the terminal nodes from the lexicon: #'something'# = thinking of λf ∈ D <e, t> . there is some x ∈ D e such that f(x) = 1, and #'vanished'# = thinking of λx : x . x vanished. By NN* we know that the use-conditions of [ DP [ N Something]] and [ VP [ V vanished]] are the same as those of 'something' and 'vanished'. And by FA* we know that since #'something'# requires us to think of a function whose domain contains what #'vanished'# requires us to think of, #c)# = the sequence of acts consisting of the act of thinking of λf ∈ D <e, t> . there is some x ∈ D e such that f(x) = 1, the act of thinking of λx : x . x vanished, and the act of "applying" the former to the latter. As a result the speaker will come to know that c) is 1 if something vanished and 0 otherwise. And this is what we need.
Let's now walk through d). By TN* we can get the use-conditions for the terminal nodes from the lexicon: #'everything'# = thinking of λf ∈ D <e, t> . for all x ∈ D e , f(x) = 1, and #'vanished'# = thinking of λx : x . x vanished. By NN* we know that the use-conditions of [ DP [ N Everything]] and [ VP [ V vanished]] are the same as those of 'everything' and 'vanished'. By FA* we know that since #'everything'# requires us to think of a function whose domain contains what #'vanished'# requires us to think of, #d)# = the sequence of acts consisting of the act of thinking of λf ∈ D <e, t> . for all x ∈ D e , f(x) = 1, the act of thinking of λx : x . x vanished, and the act of "applying" the former to the latter. As a result the speaker will come to know that d) is 1 if everything vanished and 0 otherwise. And this is what we need.
I've by now shown how the Rules view can satisfy the Unity Constraint in the case of the constructions Heim & Kratzer's semantics covers by developing a use-conditional semantics on its basis. Let me next show that it also satisfies the Conservativeness and Explanatory Constraints in its case by showing that the use-conditional semantics is consistent with Heim & Kratzer's way of describing the meanings of its expressions, and, moreover, that the view can explain what we're doing when we give such descriptions. On Heim & Kratzer's way of describing the meanings of expressions every expression is associated with something as its denotation or extension.
Names are associated with objects, predicates and quantifiers with functions, and sentences with truth-values in a way that is supposed to yield knowledge of their truth-conditions. In order to see that the use-conditional semantics we’ve given is consistent with this, we need to do no more to than to notice that on our semantics all expressions are associated with mental acts involving these same things: names are associated with thinking of objects, predicates and quantifiers with thinking of functions, and sentences with sequences of mental acts involving functional “application”, yielding knowledge of truth-conditions. Thus our descriptions associate expressions with the same things as H&K’s descriptions. However, the Rules view can also satisfy the Explanatory Constraint in being able to explain what we’re doing when we associate expressions with something as their denotation or extension. To do that, all it needs to do is to claim that to associate an expression with something as its denotation is just a technical way of specifying what its use-condition requires performing some mental act with. Thus, to say that names have as their denotations objects is just to say that their use-conditions consists in performing some mental act with objects. To say that predicates have as their semantic contents functions is just to say that their use-conditions consists in performing some mental act with functions. And to say that sentences have as their denotations truth-values yielding knowledge of truth-conditions is just to say that their use- conditions require performing a sequence of mental acts which yield knowledge of truth- conditions. Conclusion Let’s take stock. In the last three sections we’ve taken Heim & Kratzer’s truth-conditional semantics and shown how to build a use-conditional semantics on its basis. We’ve done this in order to show that the Rules view can satisfy the Unity, Conservativeness, and Explanatory constraints in the case of whatever constructions Heim & Kratzer’s semantics covers. Of course, Heim & Kratzer’s semantics, even if made intensional, has certain well-known limitations when it comes to hyperintensionality. It would be nice to also have a semantics for the representational 60 core of language that doesn’t have such limitations. In the next chapter I’ll therefore provide an alternative use-conditional semantics for the representational core of language based on Scott Soames’s compositional theory of what it is to entertain different propositions. 61 Chapter 4 Use-Conditional Semantics and the Representational Core of Language II Introduction In the end of the first chapter I pointed out that the second thing we have to do in order to develop and defend the Rules view is to show that not only is it well placed to satisfy the Unity, Conservativeness, and Explanatory Constraints, but that it can actually do so. In order to do this we need to develop a use-conditional semantics for natural language by assigning all the different types of meaningful expressions a plausible use-condition. In the previous chapter I started developing a use-conditional semantics for the representational core of language by providing an overview of Irene Heim’s and Angelika Kratzer’s truth-conditional semantics which has become a baseline in linguistic semantics, and showing how to build a use-conditional semantics on its basis. Of course, Heim & Kratzer’s semantics, even if made intensional, has certain well-known limitations when it comes to hyperintensionality. 
Since it would be nice to also have a semantics for the representational core of language that doesn’t have such limitations, in this chapter I’ll therefore provide an alternative use-conditional semantics for the representational core of language based on Scott Soames’s theory of what it is to entertain different propositions. I’ll proceed as follows. I will start by providing an overview of Soames’s theory of what it is to entertain different propositions (Section 1). I will then show how to build a use- conditional semantics for propositional logic on its basis (Section 2). Finally, I’ll do the same for a small, but central fragment of English (Sections 3). 1. Soames’s Theory of Entertaining There is a new research program emerging in philosophy of mind on which our capacity for representing the world, our cognitive and conative lives in general, and the nature of the attitudes of believing and desiring in particular, are to be thought in terms of our capacity and disposition to perform different sorts of mental acts (Hanks 2007, 2011, 2013, MS, Kriegel 2014, Soames 62 2010b, 2014, MS). 35 For example, on Scott Soames’s view, our capacity to represent the world depends at bottom on our capacity to perform different sub-propositional acts like thinking of something, predicating, negating, conjoining, disjoining etc., the performing of which results in performing the basic propositional act of entertaining a particular proposition. The further propositional act of judging is then to be analyzed in terms of entertaining a particular proposition + performing the mental act of affirming it, and believing is to be analyzed in terms of being disposed to judge. Finally, propositions are identified with act-types of entertaining those propositions. (Soames 2010b, 2014, MS). 36 I will take Soames’s theory and show how to build a use-conditional semantics on its basis. For our purposes it is important to see what, according to Soames, it is to entertain different propositions. Let’s start with the acts composing entertainings of atomic subject- predicate propositions like the proposition that Bertrand is British and the proposition that Gottlob is German. Soames thinks that to entertain such propositions one must first think of a particular object, then think of a property and then predicate the property of the object. Thus, to entertain the proposition that Bertrand is British is to think of Bertrand, think of the property of being British and predicate it of Bertrand. Similarly, to entertain the proposition that Gottlob is German is to think of Gottlob, think of the property of being German and predicate it of Gottlob. We can represent this with the following notation (where ‘│’ represents entertaining, … SUBJ what is the subject of predication, and … PRED what is predicated): entertaining the proposition that Bertrand is British =│<Bertrand SUBJ , being British PRED > entertaining the proposition that Gottlob is German =│<Gottlob SUBJ , being German PRED > 35 For an extensive discussion of the relation of something like this research program to functionalist orthodoxy see Kriegel 2014. It might be more correct to say that it is re-emerging, given its roots in or at least similarities to the work of Franz Brentano, Edmund Husserl, Bertrand Russell, Herbert Price and others. 36 On Peter Hanks’s view our capacity to represent the world depends similarly at bottom on our capacity to perform different sub-propositional acts. 
However, in contrast to Soames, he doesn’t think of predicating as a neutral or forceless act resulting in the performance of an entertaining, but rather as forceful or committal, resulting directly in the performance of the basic propositional act of judging. Nevertheless, he agrees that believing is to be analyzed in terms of being disposed to judge and that propositions are to be identified with act-types, in his case of judging those propositions to be the case. For discussion of differences between Soames’s and Hanks’s views and criticism of the latter see Reiland 2012. 63 Thus, entertainings of atomic subject-predicate propositions are composed of acts of thinking of objects and properties and predicating properties of objects. Let’s now move on to the acts composing entertainings of quantified propositions like the proposition that someone is British and the proposition that everyone is British. Soames thinks that to entertain such propositions one must first think of a property, and then predicate another property of it. Thus, he thinks that to entertain the proposition that someone is British is to think of the property of being British and predicate the property of being instantiated of it. Similarly, to entertain the proposition that everyone is British is to think of the property of being British, and predicate the property of being universally instantiated of it. We can represent this with the following notation: entertaining the proposition that someone is British =│<being British SUBJ , being instantiated PRED > entertaining the proposition that everyone is British =│<being British SUBJ , being universally instantiated PRED > Thus, entertainings of atomic quantified propositions are composed of acts of thinking of properties and predicating properties of properties. Let’s now look at acts composing entertainings of atomic propositions which contain sub- sentential negation, conjunction, and disjunction like the proposition that Bertrand is not British, the proposition that Bertrand is British and German, and the proposition that Bertrand is British or German. Soames thinks that to entertain such propositions one must think of a property or a pair of properties and then negate it or conjoin or disjoin them. This results in thinking of a complex property. Then one must think of something and predicate the complex property of it. Thus, he thinks that to entertain the proposition that Bertrand is not British is to first think of the property of being British and negate it. This results in thinking of the property of being not British. Then one must think of Bertrand and predicate the property of being not British of him. Similarly, to entertain the proposition that Bertrand is British and German and the proposition that Bertrand is British or German is to first think of the properties of being British and being German and conjoin or disjoin them. This results in thinking of the property of being British and 64 German or the property of being British or German. Then one must think of Bertrand and predicate the relevant property of him. 
We can represent this with the following notation:
entertaining the proposition that Bertrand is not British =│<Bertrand SUBJ , <NEG, being British> PRED >
entertaining the proposition that Bertrand is British and German =│<Bertrand SUBJ , <CONJ, being British, being German> PRED >
entertaining the proposition that Bertrand is British or German =│<Bertrand SUBJ , <DISJ, being British, being German> PRED >
Thus, acts of entertaining atomic propositions which contain sub-sentential negation, conjunction, and disjunction are composed of acts of thinking of properties, acts of negating, conjoining, and disjoining properties, and acts of predicating the resulting complex properties of objects or properties.
Finally, let's look at the acts composing entertainings of complex propositions like the proposition that it is not the case that p, the proposition that p and q, and the proposition that p or q. Soames thinks that to entertain the proposition that it is not the case that p one must think of the proposition that p, which amounts to entertaining it, think of the property of being not the case, and predicate the latter of the former. In the same spirit, we might think that to entertain the proposition that p and q and the proposition that p or q is to think of the two propositions, think of the properties of being jointly the case or being disjointly the case, and then predicate the relevant property of the two propositions. 37
37 In fact, Soames prefers to analyze these things in terms of conjoining and disjoining the two propositions. However, this won't make a difference for our purposes in this chapter and for simplicity's sake, I'll diverge from him here.
We can represent this with the following notation:
entertaining the proposition that it is not the case that Bertrand is British =│<<Bertrand SUBJ , being British PRED > SUBJ , being not the case PRED >
entertaining the proposition that Bertrand is British and Gottlob is German =│<<Bertrand SUBJ , being British PRED > SUBJ , <Gottlob SUBJ , being German PRED > SUBJ , being jointly the case PRED >
entertaining the proposition that Bertrand is British or Gottlob is German =│<<Bertrand SUBJ , being British PRED > SUBJ , <Gottlob SUBJ , being German PRED > SUBJ , being disjointly the case PRED >
Thus, acts of entertaining complex propositions are composed of acts of thinking of propositions and acts of predicating properties of propositions.
2. Propositional Logic
Our aim in the next two sections is to provide a use-conditional semantics for propositional logic and a small fragment of English on the basis of Soames's theory, and show that the view also satisfies the other constraints. Let's start with propositional logic. Here's a description of its syntax. Let's use the following symbols:
• Sentence letters: A, B, C…
• Sentential connectives: ~, &, V
• Brackets: (, )
Here's a specification of sentences (here we'll use 'α' and 'β' as metalinguistic variables over sentences):
i) Sentence letters are sentences.
ii) If α is a sentence then ~α is a sentence.
iii) If α and β are sentences then (α&β) is a sentence.
iv) If α and β are sentences then (αVβ) is a sentence.
v) Nothing else is a sentence.
Now, how could the expressions of propositional logic be meaningful in virtue of rules governing their use? In order to answer this question and to satisfy the Unity Constraint we must develop a use-conditional semantics for propositional logic by assigning each expression a use-condition.
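As a fixed point for what follows, it may help to have the grammar just specified in an executable form. The Python sketch below is only an illustration: the encoding of sentences as strings and nested tuples and the function name is_sentence are mine, not part of the official syntax. The same encoding is reused in the sketch of the Truth-Value semantics later in this section.

# A minimal sketch of the grammar just specified (illustrative only).
LETTERS = {"A", "B", "C"}

def is_sentence(phi):
    if phi in LETTERS:                                       # clause i)
        return True
    if isinstance(phi, tuple) and len(phi) == 2 and phi[0] == "~":
        return is_sentence(phi[1])                           # clause ii)
    if isinstance(phi, tuple) and len(phi) == 3 and phi[0] in ("&", "V"):
        return is_sentence(phi[1]) and is_sentence(phi[2])   # clauses iii) and iv)
    return False                                             # clause v)

print(is_sentence(("&", "A", ("~", "B"))))   # True: encodes (A&~B)
print(is_sentence(("&", "A")))               # False: not generated by the clauses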
And in order to satisfy the Conservativeness and Explanatory Constraints we must show that this semantics is consistent with the standard way of describing the meanings of the expressions of propositional logic, and, moreover, explain what we do when we give such descriptions. Let’s start with the sentence letters. Since these are used to model atomic sentences we will associate them with use-conditions consisting of speakers entertaining atomic propositions. Thus, let’s define a propositional logic interpretation function as follows: a PL-interpretation function is a function that assigns to every sentence letter a use-condition consisting of a speaker’s entertaining an atomic proposition. Here, then is the form use-condition for a sentence letter like A in propositional logic will take: (8) [[A]] UC = s entertains the proposition that A Thus, the idea is that we associate each sentence with the act of entertaining an atomic proposition. Which proposition? For example, which proposition is the proposition that A? In the spirit of Soames’s view of entertaining, we take the act of entertaining a proposition as primary. Which proposition is entertained is derivable from this. For example, on Soames’s view of propositions the proposition entertained is just the act-type of entertaining the proposition. 38 Let’s now proceed to the sentential connectives ‘~’, ‘&’, and ‘V’. Since these are used to model ‘it is not the case that’, ‘and’ and ‘or’ (in their uses as sentential connectives), we will associate them with use-conditions consisting of speaker’s predicating the properties of being not the case, being jointly the case, and being disjointly the case. 39 Thus, here are use-conditions for ‘~’, ‘&’, and ‘V’ in propositional logic: 38 Soames’s view of mental acts can also be combined with other views of propositions. For discussion see Moltmann 2014. 39 Given Soames’s view of mental acts other choices are possible. Instead of associating ‘&’ and ‘V’ with use- conditions consisting of speaker’s predicating the properties of being jointly the case and being disjointly the case we could associate them with ones consisting of speaker’s conjoining and disjoining. However, for present purposes this doesn’t make any difference so for the sake of unity I decided to go with the act of predicating a distinct property in each case. 67 (9) [[~α]] UC = s entertains the proposition that α and predicates being not the case of the proposition that α (10) [[(α&β)]] UC = s entertains the proposition that α, entertains the proposition that β, and predicates being jointly the case of the proposition that α and the proposition that β (11) [[(αVβ)]] UC = s entertains the proposition that α, entertains the proposition that β, and predicates being disjointly the case of the proposition that α and the proposition that β Thus, the idea is that we associate ‘~’ with speaker’s predicating the property of being not the case of the proposition one must entertain in order to permissibly use the embedded sentence. Similarly, we associate ‘&’ and ‘V’ with speaker’s predicating the properties of being jointly the case and being disjointly the case of the propositions one must entertain in order to permissibly use the embedded sentences. Which propositions? If an embedded sentence is a sentence letter like A then it will be associated with an act of entertaining a proposition. And, as explained above, which proposition is entertained, is then derivable from this. 
However, what if the sentence is a complex sentence like ~A, A&B, or AVB etc.? Here we have to appeal to a feature of Soames's theory of entertaining. We know that on it to entertain the proposition that p and to predicate being not the case of it amounts to entertaining the proposition that it is not the case that p. Similarly, we know that to entertain the proposition that p and the proposition that q, and to predicate being jointly the case of them amounts to entertaining the proposition that p and q. Finally, we know that to entertain the proposition that p and the proposition that q, and to predicate being disjointly the case of them amounts to entertaining the proposition that p or q. Thus, (9), (10), and (11) are equivalent to the following:
[[~α]] UC = s entertains the proposition that ~α
[[(α&β)]] UC = s entertains the proposition that (α&β)
[[(αVβ)]] UC = s entertains the proposition that (αVβ)
This means that complex sentences like ~A, A&B, or AVB etc. are also associated with acts of entertaining propositions. And again, as explained above, which proposition is entertained is derivable from this.
This completes our development of a use-conditional semantics for propositional logic. The rules that we've assigned to its expressions suffice to assign a compositional use-condition to any expression in the language. Rules like (8) are rules for sentence letters like A, covering sentences of step i) of our syntax. Rules (9), (10), and (11) assign rules to sentences with the form ~α, (α&β), and (αVβ), covering sentences of steps ii), iii), and iv) in our syntax. And these are all the sentences in the language there are. Hence, the rules that we've assigned to the expressions of propositional logic suffice to assign a compositional use-condition to any expression in the language.
I've by now shown how the Rules view can satisfy the Unity Constraint in the case of propositional logic by developing a use-conditional semantics for it. Let me next show that it also satisfies the Conservativeness and Explanatory Constraints in its case by showing that the use-conditional semantics is consistent with the standard way of describing the meanings of the connectives of propositional logic, and, moreover, that the view can explain what we're doing when we give such descriptions.
To get us started, let me first give an overview of the standard way of describing the meanings of the connectives of propositional logic. The standard semantics for the connectives of propositional logic is one we might call a Truth-Value semantics. It is given by using an interpretation function to assign truth-values to all sentence letters and then using truth-tables or, more rigorously, a valuation function to describe the effect connectives have on the truth-in-L-values of sentences that contain them (Sider 2010: 22). Thus, let's define another kind of PL-interpretation function as follows: a PL-interpretation function is a function I PL that assigns to every sentence letter either true-in-L (T) or false-in-L (F). Then, let's define a PL-valuation function as follows: for any PL-interpretation function I PL , a PL-valuation function is a function V I that assigns to any sentence either T or F and which is such that for any sentences α and β:
a. If α is a sentence letter then V I (α) = I PL (α);
b. V I (~α) = T iff V I (α) = F;
c. V I (α&β) = T iff V I (α) = T and V I (β) = T;
d. V I (αVβ) = T iff V I (α) = T or V I (β) = T.
This suffices to assign a truth-in-L-value to any expression in the language. The PL-interpretation function assigns truth-in-L-values to sentence letters, covering sentences of step i) of our syntax. The PL-valuation function assigns truth-in-L-values to sentences with the form ~α, (α&β), and (αVβ), covering sentences of steps ii), iii), and iv) in our syntax. And these are all the sentences in the language there are. Hence, the semantics enables us to assign a truth-in-L-value to any expression in the language.
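Before comparing the two semantics, here is a minimal Python sketch of the valuation function just defined, reusing the encoding of sentences from the earlier sketch. The names V and I_PL, and the particular interpretation chosen, are illustrative only; True and False stand in for T and F. What the sketch brings out is that the clauses for the connectives call the valuation recursively on the embedded sentences, just as rules (9), (10), and (11) build the use-condition of a complex sentence out of the use-conditions of the embedded sentences.

# A minimal sketch of the Truth-Value semantics (illustrative only).
I_PL = {"A": True, "B": False, "C": True}   # one PL-interpretation function

def V(phi, I=I_PL):
    if isinstance(phi, str):                # clause a: letters get their value from I_PL
        return I[phi]
    op = phi[0]
    if op == "~":                           # clause b, applied recursively to the subsentence
        return not V(phi[1], I)
    if op == "&":                           # clause c
        return V(phi[1], I) and V(phi[2], I)
    if op == "V":                           # clause d
        return V(phi[1], I) or V(phi[2], I)
    raise ValueError("not a sentence")

print(V(("&", "A", ("~", "B"))))            # True on this interpretation: A is T and B is F
print(V(("V", ("~", "A"), "B")))            # False on this interpretation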
Let's now show that the use-conditional semantics can also satisfy the Conservativeness Constraint in being consistent with the Truth-Value semantics. The use-conditional semantics' PL-interpretation function (call it F) assigns each sentence letter a use-condition consisting of a speaker's entertaining an atomic proposition. And the rules assigned to the connectives suffice to assign each sentence a use-condition consisting of a speaker's entertaining a proposition. Similarly, the Truth-Value semantics' PL-interpretation function I PL assigns each sentence letter a truth-in-L-value. And the PL-valuation function V I assigns each sentence a truth-in-L-value. However, propositions are things that have truth-conditions in the sense that they are true and false depending on how the world is. Thus, using a PL-interpretation function that assigns use-conditions consisting of a speaker's entertaining atomic propositions is, ipso facto, assigning each sentence a truth-in-L-condition, and, with the help of how the world is, a truth-in-L-value. This means that F + the world do what I PL does. And the rules assigned to the connectives do what the PL-valuation function does. 40
40 It is easy to see why this is. After all, what is it for a sentence to have a truth-in-L-value? It is for it to have a truth-in-L-condition + the world to be a certain way. But what is it for a sentence to have a truth-in-L-condition? It is for it to "express-in-L" a proposition, that is, for it to somehow be associated in L with a proposition which has a truth-condition. And according to the Rules view for a sentence to be so associated in L with a proposition simply is for it to have a rule of use constitutive of its meaning in L that determines a use-condition which involves a proposition.
Finally, let's show that the Rules view can also satisfy the Explanatory Constraint in being able to explain what we're doing when we use a valuation function to describe the effect connectives have on the truth-in-L-values of sentences that contain them. On the Rules view, for any expression to be meaningful is for it to have a rule of use. Thus, for connectives to be meaningful is for them to have rules of use. These rules have an effect on the rules of use for complex expressions containing these connectives. And using a valuation function to describe the effect connectives have on the truth-in-L-values of sentences that contain them is just a simple way to describe this effect.
Let's take stock. In this section I've developed a use-conditional semantics for propositional logic by assigning each of its expressions a rule of use. It's important to understand why I've done this. The reason doesn't really have to do with propositional logic per se. For example, I'm not claiming that we should start giving a use-conditional semantics for propositional logic in our logic textbooks. For many reasons, giving the Truth-Value semantics is much better.
Rather, I’ve developed a use-conditional semantics for propositional logic in order to show that the Rules view can satisfy the Unity, Conservativeness, and Explanatory constraints in the case of a simple language that relates in an interesting way to the representational parts of natural language. This is a possibility proof that there’s no reason why the Rules view couldn’t satisfy these constraints in the case of natural language and can at the same time serve as a model for similar demonstrations in the case of more complicated languages. 3. A Fragment of English In the previous section we developed a use-conditional semantics for propositional logic by assigning each of its expressions a use-condition. Although this suffices to show that the Rules view can satisfy the three constraints for at least one simple language, this is just the first step in our demonstration. After all, propositional logic is a very limited language that is definitely not similar enough to natural language for our purposes. Let’s therefore take the next step and proceed to a small fragment of English that can be thought to form the core of the representational part of natural language. Here’s a description of its syntax. • Names: ‘Bertrand’, ‘Gottlob’ • Predicates: ‘is British’, ‘is German’ • Quantifiers: ‘Something’, ‘Everything’ • Sentential connectives: ~, &, V • Brackets: (,) 71 Here’s a specification of well-formed formulas: i) A name followed by a predicate is a wff. ii) A quantifier followed by a predicate is a wff. iii) If φ is a wff then ~φ is a wff. iv) If φ and ψ are wffs then (φ&ψ) is a wff. v) If φ and ψ are wffs then (φVψ) is a wff. vi) Nothing else is a wff. Now, how could the expressions of this fragment be meaningful in virtue of rules governing their use? In order to answer this question and to satisfy the Unity Constraint we must develop a use- conditional semantics for it by assigning each expression a use-condition. And in order to satisfy the Conservativeness and Explanatory Constraints we must show that this semantics is consistent with the standard way of describing the meanings of its expressions, and, moreover, explain what we do when we give such descriptions. Let’s start with names. Applying Soames’s view we’ll associate them with use-conditions consisting of speakers thinking of particular objects. Thus, here are the rules constitutive of the meanings of ‘Bertrand’ and ‘Gottlob’: (12) [[‘Bertrand’]] UC = s thinks of Bertrand (13) [[‘Gottlob’]] UC = s thinks of Gottlob Now, let’s move on to predicates. Applying Soames’s view, we’ll associate them with use- conditions consisting of speakers thinking of particular properties, and predicating them. 
Thus, here’s the use-condition for the sentence ‘Bertrand is British’: (14) [[‘Bertrand is British’]] UC = s thinks of Bertrand, thinks of the property of being British and predicates being British of Bertrand 72 More generally, if we use f( ) for an interpretation function that assigns each name to an object in and each predicate to a property, then, for any sentence composed of a name a followed by a predicate ‘is F’, its use-condition will be the following: (15) [[‘a is F’]] UC = s thinks of f(a), thinks of f(is F) and predicates f(is F) of f(a) However, since on Soames’s view to think of an object, think of a property and predicate the property of an object results in entertaining a proposition, these amount to the following: [[‘Bertrand is British]] UC = s entertains <Bertrand, being British> [[‘a is F]] UC = s entertains <a, being F> Thus, the idea is that we associate each name with the act of thinking of an object and each predicate with an act of thinking of a particular property and predicating it. Let’s proceed to quantifiers. Again, let’s try applying Soames’s view and associate them with use-conditions consisting of speaker’s predicating the properties of being instantiated and being universally instantiated. Thus, here are the use-conditions for ‘Something is British’ and ‘Everything is German’: (16) [[‘Something is British’]] UC = s thinks of the property of being British, and predicates being instantiated of it (17) [[‘Everything is German’]] UC = s thinks of the property of being German, and predicates being universally instantiated of it More generally, if we again use f( ) for an interpretation function that assigns each predicate to a property, then, for any sentence composed of ‘Something’ and ‘Everything’ followed by a predicate ‘is F’, its use-condition will be the following: (18) [[‘Something is F]] UC = s thinks of f(is F), and predicates being instantiated of f(is F)) 73 (19) [[ ‘Everything is F’]] UC = s thinks of f(is F), and predicates being universally instantiated of f(is F)) Since on Soames’s view to think of a property and predicate the property of being instantiated or being universally instantiated of it results in entertaining a quantified proposition, these amount to the following: [[‘Something is F]] UC = s entertains the proposition that something is F [[‘Everything is F’]] UC = s entertains the proposition that everything is F Thus, the idea is that we associate quantifiers with acts of thinking of a particular property. We’ve by now assigned rules to names, predicates, and quantifiers of our fragment of English. However, it seems that the rules we’ve assigned to predicates and the rules we’ve assigned to quantifiers are in tension. Look at rule (15) versus (18) and (19) from above (renumbered as (20) versus (21) to (22)): (20) [[‘a is F’]] UC = s thinks of f(a), thinks of f(is F) and predicates f(is F) of f(a) (21) [[‘Something is F]] UC = s thinks of f(is F), and predicates being instantiated of f(is F)) (22) [[ ‘Everything is F’]] UC = s thinks of f(is F), and predicates being universally instantiated of f(is F)) On our way of handling it thus far the rule for F seems to make a different contribution to these rules. In the case of (20) it contributes the acts of thinking of a property and predicating it. However, in the case of (21) and (22) it contributes the act of thinking of a property of which something is predicated of. This is problematic, because it violates compositionality. 
Remember that the only way to develop a compositional use-conditional semantics for L is to assign all the complex expressions of L use-conditions that are somehow determined by their structure and that 74 contain the use-conditions we’ve assigned to their parts. However, the rules (20) and (21)-(22) can’t both contain the use-conditions we’ve assigned for predicates like F. One solution I see is to follow Richard Montague’s who treated names as quantifiers (Montague 1973). Adapted to Soames’s framework, the idea is that we should treat predication as not being of objects that they have properties, but rather of properties that they are something- instantiated. Thus, instead of saying that to entertain the proposition that Bertrand is British is to think of Bertrand, think of the property of being British and predicate being British of Bertrand, we should say that it is to think of these things and predicate being Bertrand-instantiated of the property of being British. This leads to the following reinterpretation of (20): (23) [[‘a is F’]] UC = s thinks of f(a), thinks of f(F), and predicates being f(a)- instantiated of f(F)) Modifying Soames’s view we’ll then claim that to think of an object o, think of a property and predicate being o-instantiated of the property results in entertaining a singular proposition, and thus that this amounts to the following: [[‘a is F’]] UC = s entertains <a, being F>) Thus, now the idea is that we associate each name with the act of thinking of an object, each predicate with the acts of thinking of a particular property and predicating being something- instantiated of it, and each quantifier with the act of thinking of a property. Let’s now proceed to ‘~’, ‘&’, and ‘V’. Since these are used to model ‘it is not the case that’, ‘and’ and ‘or’ (in their uses as sentential connectives), we will associate them again with use-conditions consisting of speaker’s predicating the properties of being not the case, being jointly the case, and being disjointly the case. Thus, here are the use-conditions for ‘~’, ‘&’, and ‘V’ in our fragment: (24) [[~φ]] UC = s entertains the proposition that φ and predicates being not the case of the proposition that φ 75 (25) [[(φ&ψ)]] UC = s entertains the proposition that φ, entertains the proposition that ψ, and predicates being jointly the case of the proposition that φ and the proposition that ψ (26) [[(φVψ)]] UC = s entertains the proposition that φ, entertains the proposition that ψ, and predicates being disjointly the case of the proposition that φ and the proposition that ψ Thus, the idea is again that we associate ‘~’ with speaker’s predicating the property of being not the case of the proposition one must entertain in order to permissibly use the embedded sentence. Similarly, we associate ‘&’ and ‘V’ with speaker’s predicating the properties of being jointly the case and being disjointly the case of the propositions one must entertain in order to permissibly use the embedded sentences. Which propositions? Again, if an embedded sentence is a predicate followed by a number of terms then it will be associated with an act of entertaining a proposition. And, as explained above, which proposition is entertained, is then derivable from this. However, what if the sentence is a complex sentence like ~φ, (φ&ψ), or (φVψ) etc.? Here we have to appeal again to a feature of Soames’s theory of entertaining. 
We know that on it to entertain the proposition that p and to predicate being not the case of it amounts to entertaining the proposition that it is not the case that p. Similarly, we know that to entertain the proposition that p and the proposition that q, and to predicate being jointly the case of them amounts to entertaining the proposition that p and q. Finally, we know that to entertain the proposition that p and the proposition that q, and to predicate being disjointly the case of them amounts to entertaining the proposition that p or q. Thus, (24), (25), and (26) are equivalent to the following: [[~φ]] UC = s entertains the proposition that ~φ [[(φ&ψ)]] UC = s entertains the proposition that (φ&ψ) [[(φVψ)]] UC = s entertains the proposition that (φVψ) 76 This means that complex sentences like ~φ, φ&ψ, or φVψ etc. are also associated with acts of entertaining propositions. And again, as explained above, which proposition is entertained is derivable from this. This completes the development of use-conditional semantics for our fragment of English. The rules that we’ve assigned to its expressions suffice to assign a compositional rule for any expression in the language. Rule (23) makes clear what the rules for atomic subject- predicate sentences are like, covering sentences of step i) in our syntax. Rules (21) and (22) make clear what the rules for quantified sentences are like covering sentences of step ii) in our syntax. Finally, rules number (24), (25), and (26) assign rules to sentences with the form ~φ, (φ&ψ), or (φVψ) covering sentences of step iv), v), and vi) in our syntax. And these are all the sentences in the language there are. Hence, the rules that we’ve assigned to the expressions of our fragment of English suffice to assign a compositional rule for any expression in the fragment. I’ve by now shown how the Rules view can satisfy the Unity Constraint in the case of our small fragment of English by developing a use-conditional semantics for it. Let me next show that it also satisfies the Conservativeness and Explanatory Constraints in its case by showing that the use-conditional semantics is consistent with the standard way of describing the meanings of its expressions, and, moreover, that the view can explain what we’re doing when we give such descriptions. On one standard way of describing the meanings of expressions of our fragment of English, every expression is associated with something as their semantic content. Names are associated with objects, predicates and quantifiers with properties, and sentences with structured propositions made up of those things. In order to see that the use-conditional semantics we’ve given is consistent with this, we need to do no more to than to notice that in our semantics all expressions are associated with mental acts involving these same things: names are associated with thinking of objects, predicated and quantifiers with thinking of properties, and sentences with entertaining structured propositions made up of those things. Thus our descriptions associate expressions with the same things as the standard descriptions. However, the Rules view can also satisfy the Explanatory Constraint in being able to explain what we’re doing when we associate expressions with something as their semantic content. To do that, all it needs to do is to claim that to associate an expression with something as its semantic content is just a technical way specifying what its use-condition requires performing some mental act with. 
Thus, to say 77 that names have as their semantic contents objects is just to say that their use-conditions require thinking of objects. To say that predicates have as their semantic contents properties is just to say that their use-conditions require thinking of properties. And to say that sentences have as their semantic contents propositions is just to say that their use-conditions require entertaining propositions. Conclusion In the end of the first chapter I pointed out that the second thing we have to do in order to develop and defend the Rules view is to show that not only is it well placed to satisfy the Unity, Conservativeness, and Explanatory Constraints, but that it can actually do so. In the previous chapter and this one I laid the groundwork for the demonstration and got it started by developing a use-conditional semantics for the representational core of natural language and showing that the view also satisfies the other constraints in its case. This shows that the use-conditional framework is conservative in being consistent with standard frameworks and enabling us to do what they do. In the next chapter I’ll continue the demonstration by providing a use-conditional semantics for indexicals and demonstratives. 78 Chapter 5 Use-Conditional Semantics, Indexicals, and Demonstratives Introduction In the previous chapter I started the demonstration that the Rules view can satisfy the Unity, Conservativeness, and Explanatory Constraints by providing a use-conditional semantics for the representational core of natural language. In this chapter I’ll continue by providing a use- conditional semantics for indexicals and demonstratives. 41 However, whereas in the previous chapter I proceeded by first providing a use-conditionals for predicate and propositional logic and then showing that it is consistent with the standard ways of providing a semantics, here I will proceed by first giving an overview of the different views of the semantics of indexicals and demonstratives and then showing how to state them in terms of use-conditions. More precisely, I’ll proceed as follows. I’ll start by looking at David Kaplan’s classic view of the semantics of indexicals ‘I’, ‘here’, and ‘now’ stated in terms of characters. I’ll show how to state it in terms of use-conditions and explain why we can formally represent the meanings of at least some types of expressions with characters and exactly what happens when we do so. (Section 1) I’ll then look at some alternative views of the semantics of indexicals and show how to state these in terms of use-conditions (Section 2). Next, I’ll look at some standard theories of the semantics of demonstratives also stated in terms of characters and show how to state these in terms of use-conditions. However, I’ll also argue that any such theory has a problem with multiple occurrences of demonstratives in the same sentence (Section 3). Finally, I’ll look at an emerging and promising theory of demonstratives which is motivated by the desire to avoid the problem with multiple occurrences, but can’t be stated in terms of characters. I’ll argue that it’s most naturally stated in terms of use-conditions. This shows that the use- conditional framework is not only conservative, but also more flexible than standard frameworks in enabling us to do more than they do – in this case by being able to capture an emerging and 41 Following relatively standard usage, I will use ‘indexical’ for ‘I’, ‘here’, and ‘now’, and ‘demonstrative’ for ‘this’, ‘that’, ‘this F’ etc.. 
However, I’m using these terms without assuming any underlying unity in their descriptive semantics. Thus, my usage doesn’t rule out that, as I actually happen to think, ‘I’ is unique and ‘here’ and ‘now’ belong with ‘this’ and ‘that’ as far as their descriptive semantics. 79 promising theory of demonstratives that can adequately account for their “discretionarity”, but can’t be stated in terms of characters. (Section 4) 1. Kaplan’s Semantics for Indexicals On Kaplan’s semantic framework the meanings of expressions can be formally represented by characters, functions from contexts thought of as n-tuples of parameters to things called “semantic contents”. Alternatively, one can also just talk about the expression semantically expressing its semantic content relative to a context. For example, take the name ‘Bertrand’. Assuming Millianism, it can be used only about a single person. On this view its meaning can therefore be formally represented by a character which takes as its input contexts consisting of at least a possible speaker of the context c a , location of the context c l , a time of the context c t , and the world-state of the context c w , and yields as its output Bertrand. Using [[ ]] for an expression’s character and <,> for a context we can represent this formally as follows: (i) [[‘Bertrand’]] <c a , c l , c t , c w > = Bertrand Notice that, assuming Millianism, the character of ‘Bertrand’ is a constant function in yielding as its output Bertrand no matter which context it takes as its input. However, take now the indexicals ‘I’, ‘here’, and ‘now’. A lot of people have thought that they can be used to talk about different things by different users, at different locations and times, although they can’t be used to talk about different things at the speaker’s discretion, or about different things by using the expression more than once in a sentence (Braun 1996: 145- 146, Kaplan 1989a: 490-491, Shoemaker 1968: 558-559). On the classic Kaplanian view their meanings can therefore be formally represented by characters which take as their inputs contexts consisting of at least a possible speaker of the context c a , location of the context c l , a time of the context c t , and the world-state of the context c w , and yield as their outputs c a , c l , and c t (Kaplan 1989a). Formally: (ii) [[‘I’]] <c a , c l , c t , c w > = c a (iii) [[‘here’]] <c a , c l , c t , c w > = c l 80 (iv) [[‘now’]] <c a , c l , c t , c w > = c t Notice that the characters of ‘I’, ‘here’, and ‘now’ are non-constant functions in yielding as their outputs different things depending on which context they take as inputs. Of course, to fully understand the above formal representations, we also need an interpretation of what it is for something to be the speaker of the context, the location of the context, the time of the context, and the world-state of the context. On the classic Kaplanian view the speaker of the context is the user of the expression. The location of the context is a location that contains wherever the speaker of the context is. The time of the context is a stretch of time that contains the stretch over which the speaker of the context uses the expression. 42 And the world-state of the context is the world-state the speaker is in. 
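Since characters are just functions from contexts to contents, the formal framework in (i)-(iv) can be made vivid with a small sketch. The following Python snippet is only an illustration under my own simplifying assumptions (in particular, the way contexts and contents are represented here is not Kaplan’s own formalism); it shows that the character of a name is a constant function while the characters of the indexicals are non-constant functions of the context parameters.

from collections import namedtuple

# A context modeled as an n-tuple of parameters: agent (speaker),
# location, time, and world-state, as in <c_a, c_l, c_t, c_w>.
Context = namedtuple("Context", ["agent", "location", "time", "world"])

# The character of a name like 'Bertrand' (assuming Millianism) is a
# constant function: it yields Bertrand no matter which context it takes.
def character_bertrand(c):
    return "Bertrand"

# The characters of 'I', 'here', and 'now' are non-constant functions:
# their output depends on which context they take as input.
def character_I(c):
    return c.agent

def character_here(c):
    return c.location

def character_now(c):
    return c.time

# An illustrative context (the example discussed below in the main text).
c = Context(agent="David Kaplan", location="Los Angeles", time=1977, world="w")

print(character_bertrand(c))  # 'Bertrand' -- the same for every context
print(character_I(c))         # 'David Kaplan'
print(character_here(c))      # 'Los Angeles'
print(character_now(c))       # 1977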
Thus, if we put the formal framework and the interpretation together we arrive at a view on which ‘I’ always refers to whoever is the user of the sentence, ‘here’ always denotes some location that contains whoever the user of the sentence is, and ‘now’ always denotes some time that includes the stretch of time over which the user uses the expression. Let us now see how to state this view in terms of use-conditions. All we need to do to do this is to give these characters in terms of use-conditions by using variables in our use-conditions and making what the variable stands for explicit. Thus, take any character of the form: [[‘e’]] <c a , c l , c t , c w > = c x . Translate it like this: [[‘e’]] UC = s thinks of c x . Finally, take the favoured interpretation of c x and make what it stands for explicit. For example, instead of ‘c a ’ we can use a variable ranging over speakers, ‘s’, and state the use-conditions of ‘I’ as follows: (27) [[‘I’]] UC = s thinks of s 43 Similarly, instead of ‘c l ’ and ‘c t ’ we can use variables ranging over locations and stretches of time ‘l’ and ‘t’, and state the rules governing ‘here’ and ‘now’ as follows: 42 Although Kaplan himself wasn’t explicit about this, something like this is required for the view to be plausible at all. For discussion, see Burge 1974: 216- 217, Recanati 2001. In Recanati’s opinion this should already push us towards alternative accounts of ‘here’ and ‘now’ on which they are synonymous to ‘at this place’ and ‘at this time’ and thus are like demonstratives in their descriptive semantics. I agree. 43 Plausibly, s has to not only think of him- or herself, but has to do it in a special first-person way. Similar points might apply in other cases. Since this doesn’t matter for us here, I won’t mention it further. For discussion see Hanks 2013, MS, Soames MS. 81 (28) [[‘here’]] UC = there is some location l such that s is at some part of l and s thinks of l (29) [[‘now’]] UC = there is some stretch of time t such that s uses ‘now’ over the course of some part of t and s thinks of t It should be obvious that these state the same view about the semantics of indexicals that Kaplan stated in terms of characters. I’ve now taken the first step in showing how the Rules view can satisfy the Unity Constraint in the case of indexicals by showing how Kaplan’s classic view can be stated in terms of use-conditions. By doing this I’ve also taken the first step in showing that the Rules view satisfies the Conservativeness Constraint by having shown that it is consistent with one standard way of describing the meanings of indexicals. Let me therefore next also show that it satisfies the Explanatory Constraint by explaining why we can formally represent the meanings of at least some types of expressions with characters and exactly what happens when we do so. Now, before I can do that, it’s important to be clear that I’m not claiming that every rule of use or use-condition can be represented as a character. As I’ll show below, some can’t. However, as was Kaplan’s original idea, some use-conditions can be represented as characters. Thus, on the hypothesis that for an expression to be meaningful is for there to be a rule that governs its use, we can explain why we can formally represent the meanings of at least some type of expressions with characters by showing how to give those use-conditions which can be represented as characters in terms of characters. Here then is what we’re doing when we formally represent meanings with characters. 
First, we bring all the different sorts of variables that appear in use-conditions together into an n-tuple which is a context-schema like <s, l, t> or, in standard notation, <c a , c l , c t >. Second, we assign particular values to the variables to get a particular context like <David Kaplan, Los Angeles, 1977>. Finally, we take the use-condition of a particular expression, combine it with a particular context, and look at what we get. For example, the use-condition for ‘I’ combined with the above particular context will give us David Kaplan. Similarly, the use-condition of ‘here’ combined with that particular context will give us Los Angeles. And the use-condition of ‘now’ combined with that particular context will give us 1977. This is what happens when we formally represent the meanings of expressions with characters. And this shows that the Rules view also satisfies the Explanatory Constraint in the case of characters.

2. Alternative Semantics for Indexicals

Although Kaplan’s classic view of the semantics of indexicals is still widely accepted, there are several alternative views which claim to be better able to deal with certain counterexamples and problematic cases. And remember that in showing that the Rules view can satisfy the three constraints we not only can, but should remain neutral between different proposed descriptions of the meanings of the relevant expressions. Let me therefore give an overview of the main alternative views, showing along the way how to state them in terms of use-conditions.

The main problem with Kaplan’s view is taken to be the so-called Answering Machine Paradox. On Kaplan’s view meaningful uses of ‘I am here now’ will always result in the speaker’s saying that they are at the location of the context at the time of the context. Hence, meaningful uses of it will always result in the saying of something true. Thus, on his view meaningful uses of ‘I am not here now’ will always result in the saying of something false. However, as many have pointed out, uses of this sentence played back on the answering machine communicate something true and informative (Kaplan 1989a: 491, Sidelle 1991, Smith 1989, Predelli 1998). And this needs explanation.

There are roughly two ways one can go about giving such an explanation. On one way we preserve Kaplan’s view of the semantics of indexicals and try to explain the phenomenon pragmatically by distinguishing between saying and meaning (Grice 1989, Kripke 1977). For example, one might think that while meaningful uses of ‘I am not here now’ always result in the saying of something false, we frequently use this sentence to mean what one can say with the sentence ‘I am not here when you’re listening to this’. This explains why the sentence played back on the answering machine communicates something true and informative. 44 On the other way we discard Kaplan’s view of the semantics of indexicals and try to provide an alternative one which explains the phenomenon semantically. There’s a variety of different proposals to this effect.

44 For an attempt at explaining the phenomenon pragmatically see Stevens 2009. For criticism of it and such explanations in general see Cohen 2013, Cohen & Michaelson 2013, Michaelson 2014.

All of these proposals are grounded in the same basic idea, namely that the time of use can come apart from the time when somebody “takes it in” by reading the inscription on a note or hearing the utterance played back on the answering machine.
Initially, Alan Sidelle suggested that this means that if one uses a sentence one can at the same time prepare for or set up what intuitively count as distinct uses in the future (Cohen 2013, Sidelle 1991). Thus, when one writes a sentence down on a note or utters it into an answering machine, one prepares for or sets up a multitude of future uses which take place when somebody “takes in” the use. 45 However, it doesn’t seem important to claim that there are actually future uses which take place. What is essential is just that we distinguish between the time of use and the time when somebody “takes it in”. And all the alternative views which explain semantically how uses of ‘I am not here now’ can communicate something true and informative exploit this fact in one way or another.

45 This is not how Sidelle and Cohen put their view. Sidelle says that when you use it at first no use takes place. However, this is obviously false. A better way to put the view is in the above terms of multiple uses. Cohen now agrees (p. c.).

First, some people have suggested that what matters is just the time that the use is “taken in”. The view then is that the time of the context c t is not a stretch of time including the time during which the speaker uses the sentence, but rather a stretch of time including the time at which the use is “taken in” (Cohen 2013, Sidelle 1991). In cases of face-to-face communication this is supposed not to make a difference because the stretch of time of use and the time of “taking it in” are the same. However, in cases of notes and answering machines it does matter in allowing for meaningful uses of ‘I am not here now’ to result in the saying of something true, roughly that one is not at the location of the context at the time of the “taking in”. 46 It should be obvious how to state this view in terms of rules of use:

(30) [[‘now’]] UC = there is some stretch of time t such that ‘now’ is “taken in” over the course of some part of t and s thinks of t

46 For criticism of this view see Michaelson 2014, Sidelle 1991.

The fact that this can be done is entirely unsurprising since this view essentially operates with the same character as Kaplan’s view, just changing the interpretation of what the time of the context is. And we already know that anything that can be put in terms of characters can be put in terms of rules of use. 47

Second, other people have suggested that what matters is not the time of use, nor the time the use is “taken in”, but the time the speaker intends the use to be “taken in”. Alternatively, it has been suggested that what matters is an idealized audience’s expectations of when the use will be “taken in”. The first view then is that the time of the context c t is not a stretch of time including the time during which the speaker uses the sentence, nor one when it’s actually “taken in”, but rather a stretch of time at which the speaker intends the use to be “taken in” (Predelli 1998a, 1998b, 2002). The second view is that it’s rather a stretch of time at which the idealized audience expects the use to be “taken in” (Romdenh-Romluc 2002, 2006). In cases of face-to-face communication none of this is supposed to make a difference because the stretch of time of use and the time at which the speaker intends the use to be “taken in” or at which the idealized audience expects the use to be “taken in” are normally the same.
However, in cases of notes and answering machines it does matter in again allowing for meaningful uses of ‘I am not here now’ to result in the saying of something true. 48 It should be again obvious how to state these views in terms of rules of use: (31) [[‘now’]] UC = there is some stretch of time t such that s intends the use of ‘now’ to be “taken in” at t and s thinks of t (32) [[‘now’]] UC = there is some stretch of time t such that an idealized audience expects the use of ‘now’ to be “taken in” at t and s thinks of t 47 This rule might sound somewhat paradoxical. How can you think of a time that is not yet to pass? In response, it is natural to claim that you think of the future moment descriptively as the time your use is “taken in”. However, one might still be worried that this commits the proponents of the view to more than they would like. I’m not sure this is the case. Consider the fact that even they would presumably want to say that by using ‘now’ you’re talking about a particular time and they wouldn’t presumably want to say that it’s completely mysterious to you what time it is. Rather, they’d say that you know full well that it’s the time your use is “taken in”. 48 For criticism of these views see Cohen 2013, Cohen & Michaelson 2013, Corazza & Fish, & Gorvett 2002, Gorvett 2005, Michaelson 2014. 85 And again, the fact that this can be done is entirely unsurprising for the above reasons. The discussion of the above three alternative views of the semantics of indexicals should nicely exemplify that if a view can be given in terms of characters, it can also be given in terms of use-conditions. Thus, for any other available or possible alternative view of the semantics of indexicals that can be given in terms of characters it should be clear how to give it in terms of use-conditions. However, there’s at least one view of the semantics of indexicals which has been offered which, as I will argue in the next section, is best not stated in terms of characters. That’s the view that ‘here’ and ‘now’ are complex demonstratives synonymous with ‘at this place’ and ‘at this time’ (Krasner 2006, Mount 2008, Recanati 2001). 49 Thus, ‘I am not here now’ is synonymous with ‘I am not at this place at this time’. This allows meaningful uses of ‘I am not here now’ to result in the saying of something true given that the speaker is thinking of the time of the playback. The reason why this view is best not stated in terms of characters is because any theory of demonstratives that can be stated in terms of just characters has a problem with multiple occurrences of demonstratives in the same sentence. I will try to show this in the next section. However, since I will also show in the following section that an emerging and promising view of demonstratives that doesn’t have this problem can be stated in terms of use- condition, we also get a demonstration that this view can be stated in terms of use-conditions as well. 3. Unsatisfactory Theories of Demonstratives Kaplan thought that demonstratives differ from indexicals in that they can be used to talk about different things by different users at the speaker’s discretion, or about different things by using the expression more than once in a sentence (Braun 1996: 145-146, Kaplan 1989a: 490-491, Shoemaker 1968: 558-559). Now, most theories of demonstratives on the table are stated in terms of characters. I will go through these theories and show since they’re stated in terms of characters they can be stated in terms of use-conditions. 
49 Let me register my preference for this view for the reasons raised in Recanati 2001.

One might think that this isn’t really necessary since I’ve already shown that anything stated in terms of characters can be stated in terms of rules of use. However, I want to discuss them in order to be able to show that any theory of demonstratives that can be stated in terms of just characters without extra machinery has a problem with multiple occurrences of demonstratives in the same sentence. This sets up the discussion of an emerging and promising account of demonstratives which is motivated by the desire to avoid the problem with multiple occurrences. I’ll argue that it’s most naturally stated in terms of use-conditions. And this discussion enables me to make the auxiliary point that the use-conditional framework is not only conservative, but also more flexible than standard frameworks in enabling us to do more than they do – in this case by being able to capture an emerging and promising theory of demonstratives that can adequately account for their “discretionarity”, but can’t be stated in terms of characters.

Let’s start from Kaplan’s theory of demonstratives. Kaplan treats demonstratives on the model of his ‘dthat’ operator and thinks that demonstratives are only meaningful together with associated “demonstrations” (Kaplan 1989a). He therefore adds “demonstrations” to the expressions. Here ‘demonstration’ is a technical term for whatever fits the bill, for example, pointing gestures, which early Kaplan took to be similar to definite descriptions, or referential intentions, or states of thinking of. On his view demonstratives supplemented by “demonstrations” have as their semantic contents relative to a context whatever the “demonstration” picks out in the context. Thus, according to Kaplan the meanings of the complexes made up of demonstratives like ‘this’ and a “demonstration” dd can be described by adding the object of the “demonstration”, c dd , to the context, which then also gets into the semantic content:

(v) [[‘this’ + dd]] <c a , c l , c t , c w , c dd > = c dd

Kaplan’s theory has a fatal flaw. It treats demonstratives as syncategorematic, as meaningless expressions that are meaningful only together with “demonstrations”. However, they’re not: they’re obviously meaningful on their own (Salmon 2002). People have therefore suggested two modifications. First, there’s the suggestion that we shouldn’t add “demonstrations” to the expressions, but to contexts (Salmon 2002). Second, there’s the suggestion that we shouldn’t add “demonstrations” to anything, but should instead add demonstrata or the demonstrated objects to contexts (Braun 1996, Caplan 2003). If we take these suggestions and stick to the view that the meanings of demonstratives can be stated in terms of characters, we arrive at views on which demonstratives have as their semantic contents relative to a context whatever the “demonstration” gives us in the context or just the demonstratum of the context. 50 Thus, on these views the meanings of demonstratives like ‘this’ can be described by adding the demonstrations c dd to the context, which then delivers an object o, or by just adding the demonstratum c d to the context:

(vi) [[‘this’]] <c a , c l , c t , c w , c dd > = c dd

(vii) [[‘this’]] <c a , c l , c t , c w , c d > = c d

It should be obvious by now how to give these theories in terms of use-conditions.
51 Now, both of the above theories and any theory of demonstratives that can be stated in terms of just characters has a problem with multiple occurrences of demonstratives. Here’s the problem as presented by David Braun:

Consider (1).
(1) That is bigger than that.
The two occurrences of 'that' in (1) have the same linguistic meaning. So if linguistic meaning is the same as character, then the two occurrences of 'that' in (1) have the same character. Now if two expressions (or two occurrences of an expression) have the same character, then they have the same content in every context. So if linguistic meaning is character, then the two occurrences of 'that' in (1) have the same content in every context. But the content of (1) in a context is determined by the contents of its constituents in that same context. So in every context, (1) expresses the proposition that x is bigger than x, where x is the content of 'that' in that context. But no object is bigger than itself. So if linguistic meaning is the same as character, then (1) expresses a false proposition in every context. Thus it should be impossible to utter (1) truly. But this is clearly wrong: there can be true utterances of (1) in which the two utterances of 'that' refer to different objects. (Braun 1996: 147-148)

The basic problem is that characters are functions from contexts to something else such that for each context they give you a single thing. However, if you consider a sentence like ‘This is bigger than this’, then it’s clear that the meaning of the sentence allows us to use it to refer to two different things. This means that the meaning of this sentence can’t be formally represented as just a character because for some contexts we need two different things, but characters can only give you one.

One reaction to the problem with multiple occurrences is to say that the expression ‘this’ itself doesn’t have a character, but its occurrences have characters. However, this gets close to conceding that the meanings of demonstratives can’t be formally represented in terms of just characters without extra machinery and doesn’t really amount to a full theory of demonstratives by itself. 52 Thus, any theory of demonstratives that can be stated in terms of just characters has a problem with multiple occurrences of demonstratives. Let’s therefore look next at an emerging and promising theory of demonstratives which is motivated by the desire to avoid the problem with multiple occurrences. I’ll argue that it’s most naturally stated in terms of use-conditions.

50 These are toy theories in that nobody has really held them. Rather, people like Braun, Caplan, and Salmon have held the sorts of views on which the meanings of demonstratives can’t be stated in terms of just characters (see below and fn. 11).

51 Here’s how. Since (v) and (vi) are identical characters they can both be put as (33), whereas (vii) can be put as (34):

(33) [[‘This’]] UC = there is some d such that d is a demonstration and s engages in d and s thinks of the object of d

(34) [[‘This’]] UC = there is some y such that s demonstrates y and s thinks of y

If we think that to “demonstrate” something just is to think of it then these use-conditions can be simplified by erasing the parts about thinking of.

52 There are two views which have been offered to supplement the above thought.
On one view, proposed, but not endorsed by David Braun, and later endorsed by Ben Caplan, we have to think of contexts as including not a single demonstratum (or demonstration), but sequences of them with a focal demonstratum <o 1 , o 2 , …o n > so that a character from such a context yields the focal demonstratum. To deal with multiple occurrences we then claim that the context shifts after each occurrence of the demonstrative from the original one to one where the focal demonstratum is the next demonstratum in the sequence. (Braun 1996: 152-154, Caplan 2003: 209, fn.7). This view allows for the meaning of the demonstrative itself to be formally represented by a character. However, it has some really implausible commitments. For example, it seems to entail that the meaning of a sentence containing multiple occurrences like ‘This is bigger than this’ can’t be formally represented by a character because it can’t be a function from any particular context, but must be evaluated relative to different contexts. In the light of this it seems unclear whether this is anything more than a formal trick and for us to take it seriously at all we would need to see how to think of this in terms of an actual view about the nature of meaningfulness. On another view, proposed and endorsed by David Braun, we have to think of contexts as including a sequence of demonstrata with a focal demonstratum, however, we don’t formally represent the linguistic meanings of demonstratives as characters, but as functions from a sequence of characters with a focal character < c 1 , c 2 , …c n > to the focal character. On this view the meaning of a demonstrative can be represented as a function that takes a sequence of characters with a focal character and yields the focal character which in turn takes the context’s sequence of demonstrata, and gives us the focal demonstratum. To deal with multiple occurrences we then claim that the sequence of characters shifts after each occurrence of the demonstrative from the original one to one where the focal character is the next character in the sequence (Braun 1996: 155-164). Again, it’s unclear that this is anything more than a formal trick and for us to take it seriously at all we again need to see how to think of this in terms of an actual view about the nature of meaningfulness. 89 4. An Emerging Theory of Demonstratives In the light of the fact that theories of demonstratives which can be stated in terms of just characters have a problem with multiple occurrences, Francois Recanati, Jim Higginbotham, Scott Soames and others have proposed the outlines of a new promising theory (Higginbotham 2002, Recanati 2001, Soames 2010a). On this view the idea is roughly that demonstratives like ‘this’ are semantically for referring or, better, indicating that you are thinking of something. Complex demonstratives like ‘this F’ are just for referring to something that meets the restrictions set by the restrictor ‘F’. For example, ‘this’ is perhaps for referring to things that are proximal relative to some speaker-related perspective, ‘that’ is for referring to things that are distal relative to the same perspective, ‘this F’ is for referring to things that are proximal and are F or perhaps are believed by the speaker to be F, ‘you’ is for referring to things that are addressed, ‘he’ is for referring to male persons, ‘she’ is for referring to female persons etc. This theory doesn’t have a problem with multiple occurrences, but it also can’t be stated in terms of characters. 
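The difference this makes can be put in miniature. Since a character is a function, it returns a single content per context, so both occurrences of ‘that’ in ‘That is bigger than that’ get that same content; a use-condition of the sort just sketched is instead a condition on what the speaker is doing, and distinct occurrences can satisfy it with distinct objects. The following Python sketch is only illustrative and rests on my own toy modeling of contexts and of “proximal” objects; it is not part of Braun’s, Recanati’s, Higginbotham’s, or Soames’s own formulations.

from collections import namedtuple

# A context enriched with a single demonstratum parameter, as in (vii).
Context = namedtuple("Context", ["agent", "location", "time", "world", "demonstratum"])

def character_this(c):
    # A character is a function: one context in, one content out.
    return c.demonstratum

def content_this_is_bigger_than_this(c):
    # Both occurrences of 'this' receive the same content in a given context,
    # so the sentence can only be assigned the content <x, bigger than, x>.
    x = character_this(c)
    y = character_this(c)
    return (x, "is bigger than", y)

def permissible_use(referents, proximal_objects):
    # A use-conditional treatment: a use with one referent per occurrence is
    # permissible as long as each referent is something proximal the speaker
    # is thinking of -- nothing forces the referents to coincide.
    return all(y in proximal_objects for y in referents)

c = Context("s", "l", "t", "w", demonstratum="the cup")
print(content_this_is_bigger_than_this(c))   # ('the cup', 'is bigger than', 'the cup')
print(permissible_use(["the cup", "the bowl"],                    # two occurrences,
                      proximal_objects={"the cup", "the bowl"}))  # two referents: True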
Whereas Recanati and Soames have therefore just given the outlines in informal terms by telling us what these expressions are semantically for, Higginbotham has gone a step further and actually stated it in terms of rules of use of sorts. Here’s a poignant quote:

I shall suppose that rules of use for words, like rules of use for tools and home appliances, are stated in imperatival form, as for example in (9)-(10):
(9) ‘this N’ is to be used to refer to proximate salient objects satisfying N
(10) The periphrastic future ‘will’ is to be used to restrict times to those of some interval following the time of u.
There is of course a distinction between expressions that are truly demonstrative as in (9) and those that are merely indexical, as in (10): the speaker has some latitude about what to refer to with ‘this N’, but none whatever about the periphrastic future, apart from limiting its extent. (Higginbotham 2002: 5)

In terms of our framework we can state the theory as follows:

(35) [[‘this’]] UC = there is some y such that it is proximal relative to perspective z that is R-related to s and s thinks of y 53

(36) [[‘that’]] UC = there is some y such that it is distal relative to perspective z that is R-related to s and s thinks of y

(37) [[‘this F’]] UC = there is some y such that y is an F and it is proximal relative to perspective z that is R-related to s and s thinks of y 54

(38) [[‘you’]] UC = there is some y such that s addresses y and s thinks of y

53 z is some perspective that is related in some way R to the speaker. Specific versions of this view can be arrived at by setting restrictions or specifying which way it is. For example, a natural view is that it’s the perspective occupied by the speaker.

54 An alternative view is that y doesn’t have to be an F, but the speaker has to believe that it is. I actually find it more plausible, but I won’t argue for it here.

This captures perfectly the idea that demonstratives are semantically for referring or signifying the fact that you’re thinking of something that meets certain restrictions. Of course, having seen how this view of demonstratives can be stated in terms of use-conditions, we can also see in rough outline how the view of the semantics of indexicals that ‘here’ and ‘now’ are complex demonstratives synonymous with ‘at this place’ and ‘at this time’ can be stated in terms of use-conditions.

Conclusion

Let’s sum up. In this chapter I’ve shown how the Rules view can satisfy the Unity, Conservativeness and Explanatory Constraints in the case of indexicals and demonstratives by showing how all views that can be stated in terms of characters can be stated in terms of use-conditions and by explaining why we can formally represent the meanings of at least some types of expressions with characters and exactly what happens when we do so. I’ve also argued for the auxiliary point that the use-conditional framework is not only conservative, but also more flexible than standard frameworks in enabling us to do more than they do – in this case by being able to capture an emerging and promising theory of demonstratives that can adequately account for their “discretionarity”, but can’t be stated in terms of characters. It’s time to leave indexicals and demonstratives behind and move on to discussing the semantics of mood and conditionals.
92 Chapter 6 Use-Conditional Semantics, Mood, and Conditionals Introduction In the previous chapter I continued the demonstration that the Rules view can satisfy the Unity, Conservativeness, and Explanatory Constraints by providing a use-conditional semantics for indexicals and demonstratives. In this chapter I’ll continue the demonstration by discussing the semantics of mood, and providing a use-conditional semantics for declarative, interrogative and imperative sentences and conditionals. However, whereas in the previous chapter I proceeded by first giving an overview of the different well-developed views of the semantics of indexicals and demonstratives and then showing how to state them in terms of use-conditions, here I will have to proceed by developing best versions of the different views at the same time as discussing how to state them in terms of rules of use. More precisely, I’ll proceed as follows. I’ll start by discussing mood in general and by posing two conditions of adequacy on any view of the semantics of mood. First, I’ll pose the Possibility Constraint which is the constraint that an adequate view of the semantics of mood must explain why we can use sentences to perform speech acts while we can’t use their accompanying clauses to perform speech acts. Second, I’ll pose the Difference Constraint which is the constraint that an adequate view of the semantics of mood must explain why sentences of different moods are especially suited to perform different speech acts, and just those that they are suited to perform (Section 1) I’ll then discuss two different views of the semantics of mood, the Content View and the Force View. I’ll start by discussing how these views handle declaratives, show how they purport to meet the constraints, develop them in the best way they can be in my opinion developed, and state them in terms of use-conditions. (Sections 2-3) I’ll then also show how these two views handle interrogatives and imperatives (Sections 4-5). This shows that the Rules view can satisfy the Unity, Conservativeness, and Explanatory Constraints in the case of declarative, interrogative, and imperative sentences. However, it also enables us to make the auxiliary point that the use-conditional framework is more flexible than standard frameworks in enabling us to do more than they do – in this case by being able to represent two radically different theories of the semantics of mood in the same framework. Next, I’ll provide a use- 93 conditional semantics for the words ‘not’, ‘and’, and ‘or’ in order to show that we can retain a single meaning for them in their occurrences in sentences with different moods (Section 6). Finally, I will provide an overview of two radically different sorts of views of ‘if’, Propositional and Suppositional, show how they interact with the two views of the semantics of mood, and show that we can state both in terms of use-conditions. This shows that the Rules view can satisfy the three constraints in the case of conditionals. However, it also shows again that the Rules view provides us with a semantic framework that enables us to do more than standard frameworks – in this case by enabling us to state the interesting and plausible Suppositional theory of ‘if’ in clear and concrete terms. (Section 7) 1. Two Constraints There are three main moods and accompanying clausal systems in natural language: the declarative, interrogative, and imperative. For simplicity’s sake I will restrict my attention to these. 
Here are some examples of sentences in these different moods:

Declarative: ‘Bertrand is British’
Interrogative: ‘Is Bertrand British?’, ‘What time is it?’ 55
Imperative: ‘Read!’

55 Following Karttunen and others I distinguish between choice interrogatives like ‘Are you reading or are you not reading?’ and search interrogatives like ‘Who is reading?’ (Karttunen 1977). These are also commonly known as ‘alternative questions’ and ‘wh-questions’, but I don’t like this terminology because some “wh-questions” feature words like ‘how’, and ‘question’ is anyway best reserved for the contents of interrogatives and not for the interrogatives themselves.

And here are examples of their accompanying clauses:

Declarative: ‘that Bertrand is British’
Interrogative: ‘whether Bertrand is British’, ‘what time it is’
Imperative: ‘to read’

The main difference between the sentences and the clauses is that only the sentences can be used to perform speech acts, while the clauses by themselves cannot; and only the clauses can be used in reports of those speech acts and certain attitudes, while the sentences by themselves cannot. Thus, ‘Bertrand is British’ can be used to say that Bertrand is British whereas ‘that Bertrand is British’ can’t. In contrast, ‘A said/believes that Bertrand is British’ is fine whereas ‘A said/believes Bertrand is British’ is not. Similarly, ‘What time is it?’ can be used to ask what time it is whereas ‘what time it is’ can’t. In contrast, ‘A asked/wonders what time it is’ is fine whereas ‘A asked/wonders what time is it’ is not. Finally, ‘Read!’ can be used to tell someone to read whereas ‘to read’ can’t. In contrast, ‘A told B to read’ is fine whereas ‘A told B read!’ is not.

Erik Stenius made a big deal out of the above difference in his paper “Mood and Language-Game” (Stenius 1967). 56

56 Frege was of course one of the first to come up with the idea of separating the content from the mood (Dummett 1973: 315-316). Stenius credits early Wittgenstein with the same insight. Wittgenstein wrote in the Tractatus:
4.022 A sentence shows how things stand if it is true. And it says that they do so stand.
Stenius thinks of the showing as the contentful aspect and the saying as the contribution of the mood and comments as follows:
…The picture theory of sentence meaning can explain in what way a sentence shows how things stand if it is true. But it cannot explain how it manages to say that they do so stand. For from the fact that a sentence shows how things stand if it is true, it does not follow that it says that they stand so. A that-clause also shows how things stand if it is true – and nevertheless it does not say that they stand so. (Stenius 1967: 259)

In this paper he asks us to consider the following three sentences:

‘You live here now’
‘Do you live here now?’
‘Live (you) here now!’

He then provides a Wittgenstein-inspired picture on which every one of those sentences consists of a sentence radical or a neutral sentence which provides it with its content + a modal element which provides it with its mood. The idea is that the sentence radical and the clause ‘that you live here now’ are similar in sharing their content with these sentences, but the sentences themselves differ in including the modal element. And it’s the presence of the modal element which ultimately explains why the sentences can be used to make speech acts, but can’t be used to make reports.
57 Subsequent work has put a lot of pressure on the view that all of these sentences consist of a sentence radical and thus have the same type of content. It’s much more common these days to think that interrogatives have as their content questions which can be thought of in terms of sets of answers, whereas imperatives have as their content properties or something alike (Belnap 1990, Hausser 1980, Karttunen 1977, Lewis 1972, Portner 2004). However, the desire for an explanation of why a sentence can be used to make speech acts and why its clausal counterpart can’t is to be taken seriously. Let us therefore pose what we can call the: Possibility Constraint: An adequate view of the semantics of mood must explain why we can use sentences to perform speech acts while we can’t use their clausal counterparts to perform speech acts (and why we can use the clauses in reports while we can’t use their sentential counterparts in reports). This constraint captures the first really interesting thing about the semantics of mood. Besides this first constraint, we can also pose another one. Let’s look again at our examples of sentences in the different moods: Declarative: ‘Bertrand is British’ Interrogative: ‘Is Bertrand British?’, ‘What time is it?’ Imperative: ‘Read!’ 57 Frege might have held a similar view (Dummett 1993: 204). Michael Pendlebury offers a related, but interestingly different view (Pendlebury 1986: 370-371). On his view every one of those sentences consists of a sentence radical which provides it with its initial content + a modal element which provides it with its mood which consists in a function from the initial content to a supplemented content. Thus, the sentence radical has as its content a state of affairs, and the modal element of is a function from that state of affairs to that state of affairs represented as true (declarative), as problematic (interrogative), or as required (imperative). The idea then is that the sentence radical and the that-clause are similar in sharing a common part with these sentences, but the sentences themselves differ in having a mood which results in it having the other part of its content. And again, it’s the presence of the mood which ultimately explains why the sentences can be used to make speech acts, but can’t be used to make reports. 96 The main difference between the sentences of different moods themselves is that they are somehow especially suited for performing different speech acts. Thus, ‘Bertrand is British’ is suited for saying that Bertrand is British. Similarly, ‘What time is it?’ is suited for asking what time it is. Finally, ‘Read!’ is suited for telling someone to read. Erik Stenius made a big deal out of this too in the aforementioned paper and others have followed suit (Boisvert & Ludwig 2005, Hare 1970, Schiffer 2003: 110-113, Segal 1990/1991, Stenius 1967). As we saw above, on his Wittgenstein-inspired picture every one of those sentences consists of a sentence radical or a neutral sentence which provides it with its content + a modal element which provides it with its mood. The idea then is that it’s the different modal element that ultimately explains why sentences of different moods are suited for performing different speech acts. As I noted above, subsequent work has put a lot of pressure on the view that all of these sentences consist of a sentence radical and thus have the same type of content. 
However, again, the desire for an explanation of why sentences of different moods are suited for making different speech acts is to be taken seriously. Let us therefore pose as our second constraint what we can call the:

Difference Constraint: An adequate view of the semantics of mood must explain why sentences of different moods are especially suited for performing different speech acts, and just those they are suited to perform.

This constraint captures the second really interesting thing about the semantics of mood. Armed with some understanding of mood and these two constraints, let’s go on to discuss and develop the different views of the semantics of mood that have been proposed.

2. The Content View

One of the views about the semantics of mood that is widely adopted or at least assumed is that everything that needs to be explained is explained by content. Let’s call this the Content View. This view comes in two versions. On the Reductionist version the view is that everything is going to be explained ultimately by the same kind of content, usually propositional content, although by differing particular contents. For example, as David Lewis thought, declaratives are fundamental and have propositional contents whereas interrogatives and imperatives are synonymous with declaratives containing explicit performatives and thus have differing propositional contents (Lewis 1972). Thus, on Lewis’s view the imperative ‘Read!’ is synonymous with the declarative ‘I command that you read’. 58 For another example, as Donald Davidson thought, declaratives are fundamental and sentences in the other moods can be analyzed by assuming that utterances of sentences in the other moods consist as if of utterances of two declarative sentences, one of which is the mood-setter which indicates how the other utterance is to be taken (Davidson 2001). Thus, on Davidson’s view, an utterance of the imperative ‘Read!’ is as if it were the utterance of the two declaratives ‘This is an order’ and ‘You will read’. 59 In contrast to these views, on the Pluralist version of the Content View everything is going to be explained by different kinds of content, either propositional content, question content, or imperative content (Belnap 1990, Boisvert & Ludwig 2005, Hausser 1980).

Now, before moving on, let me make a clarification. The clarification is that in claiming that there is a Reductionist version of the Content View, I’m not claiming that this is how those who serve as examples of reductionists like Lewis and Davidson always viewed their own views. Rather, they might have just put forward the Reductionist Thesis that declaratives are fundamental and that interrogatives and imperatives can somehow be reduced to declaratives. The problem with doing just that is that this by itself doesn’t include any view of the semantics of the declarative mood and thus doesn’t offer us an explanation of why sentences in that mood can be used to perform speech acts whereas their clausal counterparts can’t, nor why they are suited to perform just those speech acts they are suited to perform. Thus, the most obvious way of being a reductionist is to adopt a Reductionist version of the Content View on which everything that needs to be explained is explained by the same kind of content.

I will proceed by focusing on the Pluralist version of the Content View. In handling declaratives this placement of focus won’t matter because on both versions of the view the story about declaratives can be the same.
However, in handling interrogatives and imperatives it does. 58 For discussion and criticism see Boisvert & Ludwig 2005, Davidson 2001, Hornsby 1986, McGinn 1977, Segal 1990, Starr 2011, MS. 59 For discussion and criticism see Boisvert & Ludwig 2005, Dummett 1993, Hornsby 1986, Lepore & Leslie MS, Segal 1990, Starr 2011, MS. 98 I focus on the Pluralist version for following reason. Remember that my main aim is to show how to state this view in terms of rules of use. And if we can show how to state its story about declaratives in terms of rules of use and if the Reductionist version of the view is true then it will also be clear how to state the rules for interrogatives and imperatives. However, the same is not the case if the Pluralist version is true. And this is why I focus on it. 60 Let’s see how the Content View purports to satisfy the Possibility and the Difference Constraint. The basic idea as regards the first is that sentences can be used to make speech acts because it is in the nature of the speech acts that they can be done with a particular sort of content. For example, it’s in the nature of the speech act of saying that it can be done with propositions. The challenge is then to explain why the clausal counterparts can’t be used to make speech acts. I’ll come back to this below. The basic idea as regards the Difference constraint on the Pluralist version is that sentences in different moods have different kinds of content, something which everybody should accept anyway. Then the claim is that it’s in the nature of the speech acts of asking and telling that they can be done with those sorts of contents. And thus the sentences are especially suited to perform those speech acts that they are because they encode the relevant sorts of contents. Let’s now see how to state this view in terms of rules of use in the case of declaratives, leaving interrogatives and imperatives for later. The idea here is that the use-conditions for declaratives just involve the entertaining of a proposition. This captures the idea that there is just content. Thus, we can just take the rules that we ascribed to sentences of propositional and predicate logic in Ch. 3 to be the rules for declaratives: (39) [[A]] UC = s entertains the proposition that A This, then, is what the Content View takes the rules for declaratives to look like. Let me next draw two connections between the Content View and something else. First, let’s look at its connection to the recently popular Austinian view on which any use of a sentence with its meaning results in a the performing of a non-committal speech act of locuting or “expressing” a proposition (Austin 1962, Braun 2011, Camp 2007, Cappelen 2011). Here’s how, 60 I actually also happen to think that the Pluralist version is the only viable version of the Content View, but this doesn’t play a role here. For extensive criticism of the Reductionist version see Boisvert&Ludwig 2005, Segal 1990, Starr 2011, MS. 99 given the Rules of Use view, we can explain why this is. The basic idea is that to locute or “express” a proposition just is to express, in the ordinary sense of the word, one’s entertaining of a proposition by using a sentence with its meaning. This is because expression of a mental act or state is roughly analyzable in terms of provision of observable evidence that one is performing the act or is in the state (Davis 2003, 2005). 
This, together with the Rules of Use view yields the following analysis of locuting or “expressing”: “Expressing”: To “express” the proposition that p is to express one’s entertaining the proposition that p by using a sentence with its meaning. This amounts to nothing more than using some declarative sentence with its meaning such that it’s a sentence the use-conditions of which feature the speaker’s entertaining the proposition that p. After all, if one uses a declarative sentence with its meaning one is providing feasible evidence that one is in the use-condition. 61 Second, let’s look at its connection to the view that asserting is not a matter of convention, but rather a matter of some sort of communicative intentions like, perhaps, the intention to get your audience to believe something (Davis 2003, 2005, Grice 1957). On this view then, although any use of sentence with its meaning results in a locuting or “expressing” of a proposition, it doesn’t result in an assertion. Rather, this requires the relevant communicative intentions. The biggest challenge for the Content View is to explain why clauses can’t be used to make speech acts. If the explanation of why sentences can be used to make speech acts is just that sentences have a particular type of content and it’s in the nature of the relevant speech act that it can be performed with this type of content then this doesn’t explain why the clauses which 61 Similarly, “expression” of an object or a property can be analyzed in terms of using some expression with its meaning such that it’s an expression the use-conditions of which feature the speaker’s thinking of the object or a property. Talk of semantic expression as a relation not between a user, but expression itself and what it expresses could then be thought of as shorthand for talk about what one expresses when one uses the sentence with its meaning in cases where it’s always a single thing. Indexical expressions can then be said to semantically express things relative to Kaplanian contexts. However, demonstrative expressions, at least on the promising view discussed in the end of Ch. 5, can’t really be said to semantically express anything, even relative to Kaplanian contexts. 100 have the same type of content can’t be used to perform speech acts. 62 And if this can’t be explained then the Content View is hopeless because it doesn’t explain what needs explaining. Let me therefore also outline the most promising way to meet the challenge. 63 The natural thing to do is to claim that it is not just the content, but the content plus the way of encoding that explains what needs to be explained. Restricting ourselves again to declaratives for the moment, it’s clear that although both a sentence and the corresponding that- clause encode a proposition, the way of encoding is different since the sentence doesn’t but the that-clause involves the word ‘that’ which can be thought of as a singular term forming operator on declaratives. As a result sentences “express” their contents, whereas clauses “refer to” or “denote” them. The idea then is that it’s not the fact that sentences have a particular type of content which explains why they can be used to perform speech acts, but rather it’s the fact that they express that type of content. And what explains why clauses can’t be used to make speech acts is the fact that they don’t express the relevant type of contents, but rather refer to or denote them. 
Of course, for this to amount to a proper explanation, one needs to provide a further story why the difference in encoding really should lead to this difference in whether we can use the relevant expression to perform speech acts. And whether this can be done remains to be seen. Now, whether the challenge can be met or not, it’s not our purpose here to assess the Content View, but just develop it in the best way it can and show how it can be stated in terms of rules of use. Having done this, let’s move on to the other view. 3. The Force View Another view about the semantics of mood that is very natural and explicit in the work of early Ludwig Wittgenstein, Erik Stenius, John Searle, Michael Dummett, William Alston, Gabriel Segal and others is that mood semantically encodes force and that everything that needs to be 62 A similar point is made by William Starr in Starr 2011: 163-164. 63 One might try to explain this by appealing to the Gricean maxim of manner of being brief or not using more than needs to, to perform the speech act one wants to perform. The idea is that since sentences include less linguistic material than the corresponding clauses, the maxim of manner makes them seem better for performing speech acts than the clauses. However, this is unsatisfactory because it’s not as if sentences are just better for performing speech acts than the clauses. It’s that the clauses just can’t be used to perform speech acts. And it’s unclear whether manner can get us something so robust. 101 explained is explained by this (Alston 1999, Dummett 1973, 1993, Segal 1990/1991, Stenius 1967, Wittgenstein 1921). 64 Let’s call this the Force View. 65 Now, like the Content View, the Force View comes also in two versions. We already encountered the purest and most distinctive version that has been held by most who have held this view by discussing Stenius’s picture. Let’s look again at the following three sentences: ‘You live here now’ ‘Do you live here now?’ ‘Live (you) here now!’ On the Reductionist version of the view every one of those sentences consists of a sentence radical or a neutral sentence which provides it with its content + a modal element which provides it with its mood. The idea is that the sentence radical and the clause ‘that you live here now’ are similar in sharing their content with these sentences, but the sentences themselves differ in including a modal element. It’s the presence of the modal element which ultimately explains why the sentences can be used to make speech acts, but can’t be used to make reports. And it’s the presence of a different modal element in each case that explains why sentences of different moods are suited for performing different speech acts. As I noted above, subsequent work has put a lot of pressure on the view that all of these sentences consist of a sentence radical and thus have the same type of content. On the Pluralist version of the view, every one of the above sentences has a different content. Yet, the idea is still that the declarative sentence and the clause ‘that you live here now’ are similar in sharing their content, but the sentence differs in including a modal element. However, it’s still the presence of the modal element which ultimately explains why the sentences can be used to make speech acts, but can’t be used to make reports. 
And it’s still the presence of a different modal element in each case that explains why sentences of different moods are suited for performing different speech 64 This view has not only been explicitly defended, but has also been frequently assumed. For example, as I will show in the final section, everybody who has offered a “conditional assertion” view of indicative conditionals presupposes this view, whether they know it or not. For another example, Mark Schroeder assumes in his development of expressivism that it has hope only if this view is true (Schroeder 2008a, 2008b). 65 Sometimes the above people have been construed to hold the view that mood doesn’t have a semantics and that its encoding force is, though conventional, a pragmatic matter (Lepore & Leslie MS, Pendlebury 1986: 362). I think this is just a misunderstanding of the view. Rather, the view is that mood semantically encodes force. 102 acts. Thus, this version of the view takes seriously the need to acknowledge the existence of different types of content, but doesn’t think that everything that needs to be explained can be explained in terms of content. I will proceed by focusing on the Pluralist version of the Force View. In handling the declaratives this placement of focus won’t matter because on both versions of the view the story about declaratives can be the same. However, in handling the interrogatives and the imperatives it does. I focus on the Pluralist version for following reason. Remember that my main aim is to show how to state this view in terms of rules of use. And if we can show how to state its story about declaratives in terms of rules of use and if the Reductionist version of the view is true then it will also be clear how to state the rules for interrogatives and imperatives. However, the same is not the case if the Pluralist version is true. And this is why I focus on it. 66 Let’s now see how the Force View purports to satisfy the Possibility and the Difference Constraint. The basic idea as regards the first is that sentences can be used to make speech acts because they have a mood that semantically encodes force. The clausal counterparts can’t be used to make speech acts because they have their mood stripped away and thus lack force. I’ll provide one way of filling in the details of this below. The basic idea as regards the Difference constraint is that sentences in different moods encode different kinds of force, and that they are especially suited for performing different speech acts because of this. Let’s see how to state the Force View in terms of rules of use in the case of declaratives, leaving interrogatives and imperatives for later. The idea is that the use-conditions for declarative sentences involve the entertained proposition’s being believed or known. This captures the idea that there is not just content, but also force. Thus, if we take the rules we ascribed to sentences of propositional logic then all we have to add to get to rules of declaratives is to add the relevant part: (40) [[A]] UC = s entertains the proposition that A and believes/knows that A 66 Again, I actually also happen to think that the Pluralist version is the only viable version of the Force View, but this doesn’t play a role here. 103 Of course the details matter and there’s a big difference whether we take the addition to be belief or knowledge. However, what is important is that the Rules view is compatible with each of these choices. 
Now, before going on to fill in the details of the explanation of how this view purports to satisfy the constraints, let's see how the view thus developed avoids the sort of problem Peter Geach thought falsified performative analyses of ‘true’, ‘good’ etc. and which has become known as the Frege-Geach problem for expressivism (Geach 1960, 1965, Searle 1962, Schroeder 2008b). It avoids the problem because thus developed it can accept what Frege insisted on, namely, that all semantic composition happens before the mood gets added. Thus, it is easy to see how to take the rules for two arbitrary declarative sentences α and β and arrive at the intuitively right rule for the declarative sentence ‘α or β’. Here it is, for the moment modeled on the rule we ascribed to ‘V’ in propositional and predicate logic (I will revisit this in section 5):

(41) [[‘α or β’]] UC = s entertains the proposition that α, entertains the proposition that β, and predicates being disjointly the case of them (= entertains the proposition that α or β) and s believes/knows that α or β

As you can see, all composition has happened before the mood gets added. Thus, the Force View thus developed doesn't face the Frege-Geach problem.

Let's now provide one Dummett-inspired way of filling in the details of the explanation of how the view satisfies the constraints. Alternative ways might be possible, but this seems to me to be the most straightforward and promising one. First, notice that the Force View is entirely compatible with the recently popular view on which any use of a sentence with its meaning results in the performing of a non-committal speech act of locuting or "expressing" a proposition (Braun 2011, Camp 2007, Cappelen 2011). Here's how, given the Rules view, we can explain why this is. The basic idea again is that to locute or "express" a proposition just is to express, in the ordinary sense of the word, one's entertaining of a proposition by using a sentence with its meaning. This is again because expression of a mental act or state is analyzable in terms of provision of observable evidence that one is performing the act or is in the state (Davis 2003, 2005). This, together with the Rules view allows for the following analysis of locuting or "expressing":

"Expressing": To "express" the proposition that p is to express one's entertaining the proposition that p by using a sentence with its meaning.

This amounts to nothing more than using some declarative sentence with its meaning such that it's a sentence the use-conditions of which feature the speaker's entertaining the proposition that p. After all, if one uses a declarative sentence with its meaning one is providing feasible evidence that one is in the use-condition.

Moving on, remember that the explanation that the Force View offers of why sentences can be used to make speech acts is that they have a mood that semantically encodes force. The Dummettian idea then is that as in the case of locuting or "expressing", to say that p just is to express a mental act or state again by using a sentence with its meaning. However, in this case the act or state is not an entertaining, but rather belief or knowledge. Thus, to say that p is just to express one's belief or knowledge by using a sentence with its meaning. This is again because expression of a state is analyzable in terms of provision of observable evidence that one is in such a state (Davis 2003, 2005).
This, together with the Rules view yields the following analysis of saying: Saying: To say that p is to express one’s belief / knowledge that p by using a sentence with its meaning. This amounts to nothing more than using some declarative sentence with its meaning such that it’s a sentence the use-conditions of which feature the speaker’s believing/knowing that p. After all, if one uses a declarative sentence with its meaning one is providing feasible evidence that one is in the use-condition. 67 Now it should be clear how on this way of filling in the details the Force View purports to satisfy both constraints. On the one hand, sentences can be used to make speech acts because 67 Although Dummett would probably be uneasy with the reference to mental items, the analysis is very closely related to his (Dummett 1973, see also Stainton 1997). It’s also in the spirit of the views of Austin and Searle (Austin 1962, Searle 1969). 105 they have a mood that semantically encodes force which figures into the analysis of the speech acts. On the other hand, clauses can’t be used to say something because they have their mood and therefore the force stripped away. This shows how the Force View satisfies the Possibility Constraint. All it needs to do to satisfy the Difference Constraint is to claim that sentences in other moods encode different kinds of force and then appeal to that in the analysis of the speech acts of asking and telling. How to do that will become clear in the next two sections. And of course, if all these sentences can be used to perform is exactly those speech acts then this explains the weaker and uncontroversial data point that they’re especially suited for performing those speech acts. (Of course, this also invites a multitude of objections to be looked at below). Before looking at objections to this view let’s clarify how it relates to the view that asserting is not a matter of convention, but rather a matter of some sort of communicative intentions like, perhaps, the intention to get your audience to believe something (Davis 2003, 2005, Grice 1957). On this view, although any use of sentence with its meaning results in a locuting or “expressing” of a proposition and a saying of it, it doesn’t result in claiming it. And, on this view, claiming is a matter of some sort of communicative intentions. This is a picture that I think is much closer to Grice’s original analysis which, after all, wasn’t an analysis of what it is to say something, but rather what it is to mean something. Thus, the Force View can easily accommodate both the insights of Austin, Dummett, and Searle as to the conventional nature of some speech acts, and those of Grice as to the intention-dependent nature of speaker meaning (Kölbel 2010). In my opinion this counts in favor of it. Since we also discussed the biggest challenge to the Content View, let’s next look at the most forceful objections to the Force View. 1st Objection The most common objection to the Force View targets the basic idea that mood semantically encodes force. The idea is that not every use of a declarative with its meaning seems to be impermissible if one doesn’t believe or know the relevant proposition. These counterexamples were already discussed by Frege and Davidson who discussed actors uttering sentences on stage, jokes, fiction, pretense etc., and have consequently been discussed by countless others (Davidson 1984, 2001, Dummett 1973, Green 1997, 2001, Kölbel 2010, McGinn 1977, Starr 2011, MS). 
106 For example, consider cases having to do with acting as a translator from one language to another. In cases of translating one is clearly using the sentences with their meanings and speaking the language. However, intuitively one isn’t doing anything impermissible if one doesn’t believe or know the relevant proposition. Furthermore, there seems to be absolutely no pressure to believe or know the propositions. After all, one is simply translating. Reply The best reply to this objection on behalf of the Force View is that it proves too much. One way to see it is by noticing that all of these problems arise not only with declaratives, interrogatives, and imperatives, but with also interjections like ‘Ouch!’. One might think that if you utter ‘Ouch!’ on the stage, while joking, in fiction etc. or translate it you aren’t doing anything impermissible if you’re not in pain. However, it’s very hard to see what else the use-conditions of ‘Ouch!’ could consist in if not in the speaker’s being in pain. Another way of seeing it is by noticing, as Mitchell Green has extensively argued, that there are expressions in English which do conventionally encode force. For example, consider parenthetical constructions like ‘Bertrand, I claim, is British’ or ‘Gottlob, I suppose, is German’. One might similarly think that if you utter these sentences on the stage, while joking, in fiction etc. or translate it you aren’t doing anything impermissible if you’re not doing what’s required for claiming or supposing. However, as Green has argued, the only viable account of these expressions sees them as encoding force (Green 1997, 2001). This indicates that the counterexamples taken at face value prove too much and are to be somehow explained away across the board. One way to explain away the counterexamples is to follow Dummett in claiming that in all cases like these one is in fact saying something, but one is also doing something more which serves to lessen the sense of impermissibility (Dummett 1973: 310-311, 1993: 211-212). Thus, although one is saying something when acting as a translator, the conventions governing the practice of translating make it clear that one is doing something more, namely translating, and this somehow makes it seem as if one hasn’t done anything semantically impermissible if one isn’t in the use-conditions. However, the idea is, one has done something that is strictly speaking semantically impermissible. It’s just that it’s permissible all things considered once we factor in 107 the norms of translating, acting etc., which makes it seem as if it’s not impermissible (for further discussion, see Kölbel 2010). I’m aware that those who raised these objections will probably left unsatisfied by this Dummettian explanation. But unless they present further arguments all we have here is a clash of intuitions. They would say that it’s intuitively obvious that there are lots of semantically permissible uses of sentences where one isn’t in the use-conditions that are attributed to those sentences by the proponents of the Force View. The proponents of the Force View would maintain with Dummett that this is anything but obvious. Rather, the intuition to the contrary picks up on something else, namely that these uses are permissible all things considered. Thus, this objection is at best inconclusive. 2 nd Objection Another very common objection to the Force View coupled with the Dummett-inspired way of filling in the details targets the analysis of speech acts. 
The idea is that it’s just not true that a sentence in a given mood can only be used to make a particular speech act, nor that a given speech act can only be made by a sentence in a particular mood. Rather, one can use a declarative with its meaning and ask a question or tell someone to do something and one can use interrogatives and imperatives to say something. For example, Davidson claims that one can use the declarative ‘I’d like to know your telephone number’ to ask what someone’s telephone number is and that one can use the interrogative ‘Did you notice that Joan is wearing her purple hat again’ to say that Joan is wearing her purple hat again (Davidson 2001). Thus, using a declarative with its meaning is neither sufficient nor necessary for saying something and any explanation based on this claim is untenable. (Boisvert & Ludwig 2005, Davidson 2001, Lepore & Leslie MS, Segal 1990/1991). A slightly different, but very closely related objection is that it’s just not true that a given speech act can only be made by a sentence in a particular mood because it can also be made by phrases. For example, Robert Stainton has extensively argued that one can use the phrase ‘moving pretty fast’ to say of a boat that it is moving pretty fast. Thus, using a declarative with its meaning is not necessary for saying something and any explanation based on this claim is untenable. (Stainton 1995, 1997, 2006) 108 Reply The best reply to this objection on behalf of the Force View coupled with the Dummett-inspired way of filling in the details is by developing Dummett’s point that it conflates the force of the speech act and the point with which it was made (Dummett 1993: 209). I think this is best developed by again distinguishing between convention-dependent and locutionary speech acts like saying, asking, and telling versus intention-dependent and illocutionary speech acts like claiming, conjecturing, inquiring, ordering, requesting etc. 68 The idea is that the intuitions based on which people claim the above are intuitions for something else. Thus, for example, when one uses the declarative ‘I’d like to know your telephone number’ then one says that they’d like to know someone’s telephone number and thereby inquires into what their telephone number is. The former is a convention-dependent and locutionary speech-act associated with the declarative mood, whereas the latter is an intention-dependent and illocutionary speech act. For another example, if one uses the interrogative ‘Did you notice that Joan is wearing her purple hat again’ then one asks whether one’s addressee noticed that Joan is wearing her purple hat again and thereby claims that Joan is wearing her purple hat again. The former is a convention-dependent and locutionary speech-act associated with the interrogative mood, whereas the latter is an intention-dependent and illocutionary speech act. Similarly, the claim is that in the cases where we intuitively use a word or a phrase to make a speech act and this can’t be analyzed away in terms of ellipsis, we really are not saying anything, but rather performing some other speech act. For example, if using ‘moving pretty fast’ can’t be analyzed away in terms of semantic or syntactic ellipsis then the thing to say is that by using it we do not say of something that it is moving pretty fast nor anything else but rather claim that it is. 69 68 Of course, proponents of the Force View are not innocent of this confusion. 
Although Dummett realizes that we need some such distinction, Searle clearly doesn’t and thinks that mood semantically encodes illocutionary force. The present proposal is rather that mood encodes locutionary force. Thus, both “expressing” a proposition and saying it count as Austin’s locutionary and not illocutionary acts (Austin 1962). 69 In Stainton’s book-length development of the point that words and phrases can be used to perform speech acts, he relies on the quasi-technical verb ‘to assert’ (Stainton 2006). However, people have used this verb very differently. Some people have used it synonymously with the verb ‘to say’, whereas others have used it synonymously with the verb ‘to claim’. If Stainton uses it in the former way then I think his main claim is false, although he is right about something very closely related, namely that words and phrases can be used to claim things. If Stainton uses it in the 109 Again, I’m aware that those who raised these objections will probably left unsatisfied. But unless they present further arguments all we have here is again just a clash of intuitions. They would say that it’s intuitively obvious that interrogative and imperative sentences and perhaps words and phrases can be used to say things. The proponents of the Force View would maintain with Dummett that it’s anything but obvious. In fact, they’d maintain that it’s straightforwardly unintuitive and the intuition to the contrary picks up on something else, namely that we can use them to claim etc. the relevant things. Thus, this objection is at best inconclusive as well. Now, whether the objections are any good or not, it’s not our purpose here to assess the Force View, but just develop it in the best way we can and show how it can be stated in terms of rules of use. Having done this for declaratives, let us move on and see how the two views of mood handle interrogatives and imperatives and how to state their views of these expressions in terms of rules of use. This is the task in the next two sections. 4. The Two Views and Interrogatives Let’s start by looking at how the Content View handles interrogatives. We are focusing on the Pluralist version of it, on which everything is going to be explained by different kinds of content. Thus, in the case of interrogatives, everything is going to be explained by question content. This means that the view assumes that there are such things as questions or that there is question content. However, this by itself leaves it open what questions are and as such is compatible with all the different views on the table. 70 The Content View purports to satisfy the Possibility and the Difference Constraint in the case of interrogatives exactly as before. The basic idea as regards the first is again that sentences can be used to make speech acts because it is in the nature of the speech acts that they can be done with a particular sort of content. For example, it’s in the nature of the speech act of asking that it can be done with questions. The challenge is then again to explain why the clausal latter way, then his main claim is true, although this can’t then be construed as an objection to Dummett’s analysis who used it in the former way. 70 For example, on one relatively common view questions are or at least can be modeled as sets of possible answers (Hamblin 1973). On another view they can be modeled as sets of true answers (Karttunen 1977). 
On yet another view they can be modeled as functions from world-states to true and complete answers (Groenendijk & Stokhof 1982). For a relatively general overview of different accounts see Stanley 2011: Ch. 2. 110 counterparts can’t be used to make speech acts. The basic idea as regards the Difference constraint on the Pluralist version is again that sentences in different moods have different kinds of content. And thus the sentences are especially suited to perform those speech acts that they are because they encode the relevant sorts of contents. Let’s see how to state this view of interrogatives in terms of rules of use. The basic thought is that if there is such an act as entertaining a proposition, there must be also such an act as entertaining a question, no matter what questions are. And the idea here is that the use- conditions for interrogatives just involve the entertaining of a question. This captures the idea that there is just content. Thus, we can take the following to be the rules for interrogatives (INT = interrogative sentence, Q INT = a question related to INT): (42) [[INT]] UC = s iff s entertains Q INT Here are a few concrete examples: (43) [[‘Is Bertrand British?’]] UC = s entertains the question whether Bertrand is British (44) [[‘What time is it?’]] UC = s iff s entertains the question what time it is This, then, is what the Content View takes the rules for interrogatives to look like. As before, we can give an analysis of expressing a question in terms of using an interrogative sentence with its meaning: “Expressing”: To “express” the question Q is to express one’s entertaining of Q by using a sentence with its meaning. This amounts to nothing more than using some interrogative sentence with its meaning such that it’s a sentence the use-conditions of which feature the speaker’s entertaining the question Q. After all, if one uses an interrogative sentence with its meaning one is providing feasible evidence that one is in the use-condition. 111 We’ve seen that the Content View, extended to interrogatives, can easily be stated in terms of rules of use. Let’s therefore proceed to the Force View. We’re focusing again on the Pluralist version of it, on which sentences of different moods have different types of content, but everything is still explained by the presence of the modal element. This means that the view assumes that there are such things as questions or that there is question content. However, as before, this by itself leaves it open what questions are and as such is compatible with all the different views on the table. Let’s see how to state the Force View in terms of rules of use in the case of interrogatives. The idea is that the use-conditions for interrogative sentences involve not only the entertaining of a question, but also wondering what the answer is to it. This captures the idea that there is not just content, but also force. Thus, we can take the following to be the rules for interrogatives (INT = interrogative sentence, Q INT = a question related to INT): (45) [[INT]] UC = s entertains Q INT and wonders Q INT To take a few examples: (46) [[‘Is Bertrand British?’]] UC = s entertains the question whether Bertrand is British and wonders whether Bertrand is British (47) [[‘What time is it?’]] UC = s entertains the question what time it is and wonders what time it is As before, we can keep the analysis of “expressing” a question and add an analysis of asking. 
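Before turning to that analysis, the interrogative rules just given can be put in the same kind of illustrative toy format as before (the dictionaries CONTENT_VIEW and FORCE_VIEW and the helper permissible are my own stand-ins, not anything either view is committed to). The sketch only displays the point that one and the same use-conditional format represents both theories, which differ merely in whether wondering is part of the condition, as in (42)-(47).

# Use-conditions as sets of required mental acts; the two views differ only
# in which acts a given interrogative requires.
CONTENT_VIEW = {
    "Is Bertrand British?": {"entertain the question whether Bertrand is British"},
    "What time is it?": {"entertain the question what time it is"},
}

FORCE_VIEW = {
    "Is Bertrand British?": {"entertain the question whether Bertrand is British",
                             "wonder whether Bertrand is British"},
    "What time is it?": {"entertain the question what time it is",
                         "wonder what time it is"},
}

def permissible(sentence, speaker_acts, theory):
    # A use is permissible iff the speaker performs every act the theory's
    # use-condition for that sentence requires.
    return theory[sentence] <= set(speaker_acts)

acts = ["entertain the question what time it is"]
print(permissible("What time is it?", acts, CONTENT_VIEW))  # True
print(permissible("What time is it?", acts, FORCE_VIEW))    # False: the speaker is not wondering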
Remember that the explanation that the Force View offers of why sentences can be used to make speech acts is that they have a mood that semantically encodes force. The Dummettian idea then is that as in the case of saying, to ask a question, to ask for an answer to it just is to express a mental act or state again by using a sentence with its meaning. However, in this case the act or state is wondering what the answer is. Thus, to ask a question, to ask for an answer to it, is just to express one's wonder about what the answer to it is by using a sentence with its meaning. This is again because expression of a state is analyzable in terms of provision of observable evidence that one is in such a state (Davis 2003, 2005). This, together with the Rules of Use view yields the following analysis of asking:

Asking: To ask Q is to express one's wonder about what the answer to Q is by using a sentence with its meaning.

This amounts to nothing more than using some interrogative sentence with its meaning such that it's a sentence the use-conditions of which feature the speaker's wondering about what the answer to Q is. After all, if one uses an interrogative sentence with its meaning one is providing feasible evidence that one is in the use-condition.

The Force View purports to satisfy both constraints in the case of interrogatives exactly as before. On the one hand, sentences can be used to make speech acts because they have a mood that semantically encodes force which figures into the analysis of the speech acts. On the other hand, clauses can't be used to ask anything because they have their mood and therefore the force stripped away. This shows how the Force View satisfies the Possibility Constraint. All it needs to do to further satisfy the Difference Constraint is to claim that sentences in different moods encode different kinds of force and then appeal to that in the analysis of the relevant speech act, in this case, of asking. And of course, if all these sentences can be used to perform is exactly those speech acts then this explains the weaker and uncontroversial data point that they're especially suited for performing those speech acts. We've now seen how the two views handle interrogatives and how to state their views of these expressions in terms of rules of use. Let's therefore proceed to looking at how they handle imperatives.

5. The Two Views and Imperatives

Let's start again with the Content View. We are focusing on the Pluralist version of it, on which everything is going to be explained by different kinds of content. Thus, on this view, in the case of imperatives, everything is going to be explained by imperative content. This means that the view assumes that there's imperative content. However, this by itself leaves it open whether imperative contents are properties or special type of propositions (perhaps better, proposition-schemas). 71

71 What I called above "imperative clauses" like ‘to read’ are infinitival phrases. For an overview of standard accounts of such phrases see Stanley 2011: Ch. 3.

The Content View purports to satisfy the Possibility and the Difference Constraint in the case of imperatives exactly as before. The basic idea as regards the first is again that sentences can be used to make speech acts because it is in the nature of the speech acts that they can be done with a particular sort of content. For example, it's in the nature of the speech act of telling that it can be done with properties or with special type of propositions. The challenge is then again to explain why the clausal counterparts can't be used to make speech acts.
The basic idea as regards the Difference constraint on the Pluralist version is again that sentences in different moods have different kinds of content. And thus the sentences are especially suited to perform those speech acts that they are because they encode the relevant sorts of contents.

Let's see how to state such views of imperatives in terms of rules of use. Here's one form this view could take. On this version it is thought that imperative contents are properties. Then we can appeal to the act of thinking of a property. And the idea here is that the use-conditions for imperatives just involve thinking of a property. This captures the idea that there is just content. Thus, we can take the following to be the rules for imperatives (IMP = imperative sentence, P IMP = a property related to IMP):

(48) [[IMP]] UC = s thinks of P IMP

Here's an example:

(49) [[‘Read!’]] UC = s thinks of the property of reading

If one thinks that imperative contents are properties, then this is what the Content View takes the rules for imperatives to look like.

From a certain perspective the above view seems impoverished. To see this, let's look at the following quotes by Roland Hausser and Paul Portner giving their intuitive take on imperatives (my boldface):

I take it that an imperative denotes a property (roughly that property which the speaker wants the hearer to acquire). (Hausser 1980: 4)

[We] suggest that imperatives denote properties, and so a To-Do List is a set of properties. For example, the imperative (3) denotes something like the property of leaving: (3) Leave! The conventional force of imperatives, what we can call Requiring, is to add the property denoted by the imperative to the addressee's To-Do List. (Portner 2004)

In the light of these claims, the above view seems impoverished in two respects. First, it doesn't take into account that there is reference to an addressee who is to acquire the property. Second, it doesn't take into account that the speaker desires that the addressee acquire this property.

Now, there is another form that the Content View of imperatives can take that can take into account the reference to the addressee. On this version it is natural to think that imperative contents are special types of propositions. 72

72 Nevertheless, it seems to me that one could still claim that imperative content, the thing common between imperatives and "imperative clauses", is just properties.

The idea here is that the use-conditions for imperatives also involve addressing someone who will become the subject in the relevant proposition. Then we can again appeal to the act of entertaining a proposition. But which proposition it is depends on who is addressed. Nevertheless, this captures the idea that there is just content. Thus, we can take the following to be the rules for imperatives (IMP = imperative sentence, P IMP = a property related to IMP):

(50) [[IMP]] UC = there is some y such that s addresses y, and entertains the proposition that y is P IMP

Here's an example:

(51) [[‘Read!’]] UC = there is some y such that s addresses y, and entertains the proposition that y reads

If one thinks that imperative contents are this type of propositions, then this is what the Content View takes the rules for imperatives to look like.
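The distinctive feature of this second form of the view is the addressee-dependence of rules (50) and (51): which proposition must be entertained depends on who is addressed. Here is a minimal sketch of that dependence (the representation of the speaker and the helper imperative_uc are illustrative inventions of mine, not part of the view):

def imperative_uc(property_name):
    # Rule (50)/(51), roughly: the use is permissible iff there is someone the
    # speaker addresses such that the speaker entertains the proposition that
    # that person has the property the imperative is built from.
    def condition(speaker):
        return any(f"{y} {property_name}" in speaker["entertained"]
                   for y in speaker["addressing"])
    return condition

speaker = {"addressing": ["Maria"], "entertained": {"Maria reads"}}
print(imperative_uc("reads")(speaker))   # True: 'Read!' is usable, addressed to Maria
print(imperative_uc("leaves")(speaker))  # False: the corresponding proposition is not entertained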
Although, it’s not overwhelmingly important for our purposes, let’s briefly discuss the relative merits of these two forms the Content View of imperatives can take. The former view, as we have seen, seems impoverished. However, it is unified in the sense that it takes imperatives to be like declaratives and interrogatives in that in each case everything that needs to be explained is explained by something that arises through composition of lexical material. The latter view gets rid of at least one form of impoverishment. However, it’s non-unified in that it takes imperatives to be special. The difference between the imperative ‘Read!’ and the word ‘read’ can’t be accounted for by assuming that the former includes extra lexical material. It doesn’t. Rather, on this view, the former contains a mood morpheme that makes a special contribution in introducing the demonstrative element. Be that as it may, what matters for us here is that both the Content View and Force View can easily be stated in terms of rules of use. Let’s therefore proceed to the Force View. We’re focusing again on the Pluralist version of it, on which sentences of different moods have different types of content, but everything is still explained by the presence of the modal element. This means that the view assumes that there is imperative content. However, this by itself leaves it open whether imperative contents are properties or special type of propositions. We saw above that the first version of the Content View of imperatives seems impoverished in two respects. First, it doesn’t take into account that there is reference to an addressee who is to acquire the property. Second, it doesn’t take into account that the speaker desires that the addressee acquire this property. The second version of the Content View was able to solve the first problem, but as a result became non-unified in that it took imperatives to be special. On the Force View we can solve the first problem the same way while maintaining a unified view. This is because on this view in the case of each of declaratives, interrogatives, and imperatives everything that needs to be explained is not explained solely by something that arises through composition of lexical material, but also by their containing a mood morpheme. 116 Furthermore, the Force View can also solve the second problem by putting it in the use- conditions that the speaker desires that the addressee acquire the property. Let’s then see how to state the Force View in terms of rules of use in the case of imperatives. The idea here is again that the use-conditions for imperatives involve addressing someone who will become the subject in the relevant proposition. Then we can again appeal to the act of entertaining a proposition. But which proposition it is depends on who is addressed. And furthermore, the use-conditions involve not only entertaining this proposition, but also desiring it to be the case. Thus, we can take the following to be the rules for interrogatives (IMP = interrogative sentence, P IMP = a property related to IMP): (52) [[IMP]] UC = there is some y such that s addresses y, entertains the proposition that y is P IMP , and desires that y is P IMP Here’s a concrete example: (53) [[‘Read!]] UC = there is some y such that s addresses y, entertains the proposition that y reads, and desires that y reads This, then, is what the Force View takes the rules for imperatives to look like. 
Moving on, remember that the explanation that the Force View offers of why sentences can be used to make speech acts is that they have a mood that semantically encodes force. The Dummettian idea then is that as in the case of saying, to tell someone to do something just is to express a mental act or state again by using a sentence with its meaning. However, in this case the act or state is a desire that they do it. Thus, to tell someone to do something is just to express one’s desire that they do it by using a sentence with its meaning because expression of a state is analyzable in terms of provision of observable evidence that one is in such a state (Davis 2003, 2005). This, together with the Rules of Use view yields the following analysis of telling: Telling: To tell someone to do something is to express one’s desire that they do it by using a sentence with its meaning. This amounts to nothing more than using some imperative sentence with its meaning such that it’s a sentence the use-conditions 117 of which feature the speaker’s desiring that some person that the speaker is addressing do it. After all, if one uses an imperative sentence with its meaning one is providing feasible evidence that one is in the use-condition. This completes our discussion of the two views of mood. We’ve seen that both the Content View and the Force View can be stated in terms of rules of use in case of all of declaratives, interrogatives, and imperatives. This shows that the Rules view can satisfy the three constraints in the case of declarative, interrogative, and imperative sentences. However, it also enables us to make the auxiliary point that the use-conditional framework is more flexible than standard frameworks in enabling us to do more than they do – in this case by being able to represent two radically different theories of the semantics of mood in the same framework. 6. ‘Not’, ‘And’, and ‘Or’ The task in this section is to provide a use-conditional semantics for the words ‘not’, ‘and’, and ‘or’ in order to show that we can retain a single meaning for them in their occurrences in sentences with different moods. In Ch. 3 I gave a use-conditional semantics for propositional and predicate logic. This involved providing rules of use for the connectives ‘~’, ‘&’, and ‘V’. There we did it by assuming that their use-conditions featured the mental acts of predicating the property of being false, the property of being jointly true, and the property of being disjointly true. However, these rules do not naturally extend to rules for the natural language words ‘not’, ‘and’, and ‘or’ which can not only combine with sentences, but also with sub-sentential expressions and therefore can and do occur not only in declaratives, but also in interrogatives and imperatives. Thus, we need to provide a use-conditional semantics for these words in order to show that we can retain a single meaning for them in their occurrences in sub-sentential phrases and sentences in different moods. Let’s start with the word ‘not’. The most important thing to realize is that unlike ‘~’ or the ‘it is not the case’ operator, ‘not’ doesn’t combine with sentences, but rather combines with other expressions at the sub-sentential level. For example, it can combine with the predicate ‘is British’ to form the complex predicate ‘is not British’. Here we can again appeal to an idea of 118 Soames’s from earlier. 
Namely, that the act of thinking of a complex property like the property of being not British can be analyzed into thinking of the property of being British and negating it. The idea then is that the rule for ‘not’ is simply that one may use it when one is negating something. For example, here’s at least a part of the rule for an arbitrary declarative sentence containing such a complex predicate (where f( ) is an interpretation function that assigns each constant to an object and predicate to a property): (54) [[‘a is not F’]] UC = s thinks of f(a), thinks of f(F), negates it (= thinks of f(not F)), and predicates being a-instantiated of the result (= entertains the proposition that a is not F) (+ on the Force View whatever needs to be added by mood) Thus, the idea is that we associate ‘not’ with the act of negating something. Let’s proceed to the words ‘and’ and ‘or’. Again, unlike ‘&’ and ‘V’ they are not just sentential connectives, but also combine with other expressions at the sub-sentential level. For example, not only can they combine with the predicates ‘is British’ and ‘is German’ to form the complex predicates ‘is British and German’ and ‘is British or German’, but also with the names ‘Bertrand’ and ‘Gottlob’ to form the phrases ‘Bertrand and Gottlob’ and ‘Bertrand or Gottlob’. Again, we can appeal to an idea of Soames’s. Namely, that the acts of thinking of complex properties like the property of being British and German or complexes of objects like Bertrand and Gottlob can be analyzed into thinking of the constituents and conjoining or disjoining them. The idea then is that the rules for ‘and’ and ‘or’ are simply that one may use them when one is conjoining or disjoining something. For example here are rules for some arbitrary declarative sentences containing such complex predicates, and for some phrases containing two names: (55) [[‘a is F and G’]] UC = s thinks of f(a), thinks of f(F), thinks of f(G), conjoins the latter two (= thinks of f(F and G)), and predicates being a-instantiated of the result (= entertains the proposition that a is F and G) (+ …) (56) [[‘a is F or G]] UC = s thinks of f(a), thinks of f(F), thinks of f(G), disjoins the latter two (= thinks of f(F or G)), and predicates being a-instantiated of the result (= entertains the proposition that a is F or G) (+ …) 119 (57) [[‘a and b’]] UC = s thinks of f(a), thinks of f(b), and conjoins them (= thinks of f(a and b)) (58) [[‘a or b’]] UC = s thinks of f(a), thinks of f(b), and disjoins them (= thinks of f(a or b)) Of course, to keep a single meaning, we also claim that the rules for their occurrences as sentential connectives are that one may use them when one is conjoining or disjoining two propositions. For example, here are the rules for arbitrary such declarative sentences: (59) [[(α and β)]] UC = s entertains the proposition that α, entertains the proposition that β, and conjoins them = (entertains that (α and β))) (60) [[(α or β)]] UC = s entertains the proposition that α, entertains the proposition that β, and disjoins them = (entertains that (α or β))) Thus, the idea is that we associate ‘and’ and ‘or’ with the acts of conjoining and disjoining something. This is enough for us to in order to show that we can retain a single meaning for them in their occurrences in sub-sentential phrases and sentences in different moods. For example, let’s look at imperatives. 
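Before turning to that example, the compositional pattern behind (54)-(60) can be displayed in a toy sketch (the helper functions below are illustrative only and carry no theoretical weight): ‘not’, ‘and’, and ‘or’ contribute the very same acts of negating, conjoining, and disjoining wherever they occur, whether sub-sententially or at the sentence level.

# Each expression contributes a sequence of mental acts; the connective-like
# words contribute negating/conjoining/disjoining whatever is embedded under them.
def name(n): return [f"think of {n}"]
def predicate(p): return [f"think of the property of being {p}"]
def neg(acts): return acts + ["negate it"]
def conj(a, b): return a + b + ["conjoin them"]
def disj(a, b): return a + b + ["disjoin them"]
def predication(subj, pred):
    # 'a is F': think of the subject, build the (possibly complex) property,
    # and predicate the result of the subject -- cf. (54)-(56).
    return subj + pred + ["predicate the result of the subject"]

# 'Bertrand is not British' -- 'not' just adds negating, cf. (54)
print(predication(name("Bertrand"), neg(predicate("British"))))
# 'Bertrand and Gottlob' -- the same 'and' contribution sub-sententially, cf. (57)
print(conj(name("Bertrand"), name("Gottlob")))
# 'is British or German' -- cf. (56)
print(disj(predicate("British"), predicate("German")))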
Intuitively, and by our previous rules the rule we need for ‘Don’t read!’ is either (61) or (62): (61) [[‘Don’t read!’]] UC = s thinks of the property of not reading (62) [[‘Don’t read!’]] UC is = there is some y such that s addresses y, thinks of y, thinks of the property of not reading, and predicates being y-instantiated the property of not reading (= s entertains the proposition that y doesn’t read)) (+ on the Force View that s desires that y doesn’t read) 120 Thus, in both cases one has to think of the property of not reading. However, to think of the property of not reading involves thinking of the property of reading and negating it. Thus, (61) and (62) can be analyzed into (63) and (64) which make it clear what ‘not’ contributes: (63) [[‘Don’t read!’]] UC = s thinks of the property of reading and negates it (= thinks of the property of not reading)) (64) [[‘Do not read’]] UC = there is some y such that s addresses y, thinks of y, thinks of the property of reading, negates it, and predicates being y-instantiated of the result (= s entertains the proposition that y doesn’t read)) (+ on the Force View that s desires that y doesn’t read)) Here we see again that ‘not’ just adds the part about negating. And ‘and’ and ‘or’ would just add the part about conjoining and disjoining. 7. ‘If’ In the previous section we discussed the words ‘not’, ‘and’, and ‘or’. Let’s now discuss the conditionalizing device ‘if’. To do this I will provide an overview of two types of radically different views of the semantics of ‘if’, show how they interact with the two views of the semantics of mood, and show that we can state both in terms of use-conditions. On the standard type of views ‘if’ is just another connective which relates two sentences and thus establishes a connection between two propositions which means that using the resulting sentence results in a speech act of the same type that would result from using one of the two sentences. For example, in using ‘p’ you say that p and in using ‘If p then q’ you say that if p then q. The substantive debate is over what the nature of the connection between the two propositions is: for example, whether it’s a truth-functional or non-truth-functional connection. Let’s call this type of views Propositional. 73 Before seeing how to state Propositional views in terms of rules of use let’s see how they interact with the two views in the semantics of mood. Here the important thing to realize is that 73 For an overview of different Propositional views see Edgington 2006. 121 since on this view ‘if’ is thought to be just a connective, it is compatible with both the Content View and the Force View. Furthermore, it doesn’t give us any reasons to endorse either over the other one. This relatively obvious point is worth stressing because, as we will see, this is not the case with the alternative type of views. Let’s now see how to state Propositional views in terms of rules of use. The idea here is that the use-conditions for conditionals feature performing the mental act of conditionalizing one proposition on another. Then, the idea goes, to entertain a conditional proposition is to entertain two propositions and conditionalize one on the other. 
So for ‘if p then q’ we get: (65) [[‘If p then q’]] UC = s entertains the proposition that p, entertains the proposition that q, and conditionalizes the proposition that q on the proposition that p (= entertains the proposition that if p then q) + …) All the aforementioned substantive debate over what the nature of the connection between the two propositions is boils down to a debate over what it is to conditionalize one proposition on the other. Of course the details matter and there’s a big difference what one takes the nature of conditionalization to be. However, what is important is that the Rules view is compatible with each of these different choices. Now, to set the stage for discussion of the alternative type of views, let’s note some important differences between connectives like ‘and’ and ‘or’ and conditionalizing devices like ‘if’. One way in which the connectives and ‘if’ differ is in that the former can take both sentences and subsentential expressions while the latter can only take sentences. For example, while Bertrand and Gottlob’ is acceptable, there’s no phrase involving ‘if’ which doesn’t involve a sentence that is acceptable. Another way in which they differ is that while the connectives take only two sentences of the same mood, ‘if’ can also take declarative and interrogative and declarative and imperative sentences. For example, while ‘Bertrand is British and who is German’ and ‘Bertrand is British or read this paper’ are not acceptable, ‘If Bertrand is British, then who is German’ and ‘If Bertrand is British, read this paper’ are acceptable. Both of these differences require explanation. And Propositional views at least seem not to yield such an explanation. Many people have therefore proposed very different type of views which aspire to do so. 122 On these alternative type of views ‘if’ is not just another connective, but rather operates directly on the mood of the sentence which means that using the sentence results in a different speech act that would result from using one of the two sentences. For example, in using ‘p’ you say that p, but in using ‘If p then q’ you don’t say that if p then q, rather, you say that q, conditional on that p being the case or, in other terms, you conditionally say that q, on p. And if it is not the case that p, then you haven’t said anything. Let’s call this type of views Suppositional (Belnap 1970, 1973, Barnett 2006, Bennett 2003, DeRose & Grandy 1999, Edgington 1995, Mackie 1973). Suppositional views are worth taking seriously because they yield an explanation of the above two differences. Unlike the connectives, ‘if’ takes only sentences because it operates on mood and only sentences have mood. And unlike the connectives ‘if’ can take declarative and interrogative and declarative and imperative sentences because the way it operates on mood enables it to operate on sentences with different moods: after all, not only can we conditionally say something, but we can also conditionally ask something or conditionally tell someone to do something. 74 Before seeing how to state Suppositional views in terms of rules of use let’s see how they interact with the two views in the semantics of mood. Here the important thing to realize is that since on this view ‘if’ is thought to operate on mood it seems incompatible with the Content View and requires as a background assumption the Force View. Thus, it gives us a reason to endorse the latter over the former. 
This will become clear in a moment, after we’ve seen how to state this view in terms of rules of use. Let’s therefore see how to state Suppositional views in terms of rules of use. To this point people have talked about this view in very general terms or using machinery which is not well suited to deal with it. And some have suggested that it has been neglected precisely because it’s unclear how to state it in concrete terms (De Rose & Grandy 1999: 408). 75 In contrast, the Rules view enables us to state it in concrete terms using machinery which is very well suited to deal 74 For further discussion of these and other types of reasons to take this view seriously see Barnett 2008, DeRose & Grandy 1999: 408-411, Edgington 1995: 287-288. Another reason to take this view seriously is because it can accommodate so-called biscuit conditionals like ‘If you want some, there are biscuits in the cupboard’ (DeRose & Grandy 1999). 75 DeRose & Grandy themselves try to state it by invoking the machinery of conversational implicatures, but are forced to say so many non-standard things about the nature of conversational implicature to get it to come out right that it’s clear that they’re really invoking something else (DeRose & Grandy 1999: 417-418 fn. 212). 123 with it. The idea is that since ‘if’ operates on mood what it does is take two sentences, strip away their moods, and then create a new sentence with a more complex mood. For example, it takes two declarative sentences, strips away the part dealing with belief/knowledge from both and then adds conditional belief/knowledge in one, conditional on the other’s being the case. More concretely, the rules for ‘If p then q’, ‘If p, then INT’, ‘If p then IMP’ are the following (INT = an interrogative, IMP = an imperative, Q INT = the question related to INT, P IMP = the proposition related to IMP): (66) [[‘If p then q’]] UC = s entertains the proposition that p, entertains the proposition that q, and conditionally believes/knows the proposition that q, on the condition that p is the case (67) [[‘If p then INT’]] UC = s entertains the proposition that p, entertains the question Q INT and conditionally wonders about the answer to Q INT on the condition that p is the case (68) [[‘If p, then IMP’]] UC = s entertains the proposition that p, entertains P IMP and conditionally desires that P IMP on the condition that p is the case Again, of course the details matter and there’s a big difference whether we take it to be conditional belief or knowledge in the case of declaratives. However, what is important is that the Rules view is compatible with each of these choices. Now, if we couple the above rules with the Force View’s analyses of saying, asking, and telling from before then we arrive at the following analyses of conditional saying, asking, and telling: Conditional Saying: To conditionally say that p is to express one’s conditional belief / knowledge that p by using a sentence with its meaning. This amounts to nothing more than using some declarative sentence with its meaning such that it’s a sentence the use-conditions of which feature the speaker’s conditionally believing/knowing that p. 124 Conditional Asking: To conditionally ask Q is to express one’s conditional wonder about the answer to Q by using a sentence with its meaning. This amounts to nothing more than using some interrogative sentence with its meaning such that it’s a sentence the use-conditions of which feature the speaker’s conditionally wondering about the answer to Q. 
Conditional Telling: To conditionally tell someone to do something is to express one's conditional desire that they do it by using a sentence with its meaning. This amounts to nothing more than using some imperative sentence with its meaning such that it's a sentence the use-conditions of which feature the speaker's conditionally desiring that some person that the speaker is addressing do it.

These are nothing but natural extensions of the Force View's analyses of saying, asking, and telling.

Let's come back to the claim that Suppositional views are incompatible with the Content View and require as a background assumption the Force View. The best way to see this is to focus on the fact that on the Suppositional views we need to make sense of the idea of conditional saying, whereas on the Content View all we have is entertaining. And we can't make sense of conditional saying in terms of conditional entertaining because the idea of entertaining something on some condition just doesn't make sense. We can perhaps be in a state such that if some condition obtains, we're committed to entertaining the proposition, but this doesn't amount to actually conditionally entertaining it. To make sense of conditional saying we really need to appeal to something forceful like conditional belief or knowledge. And this means that Suppositional views, if they are to be views of the semantics of ‘if’, presuppose that mood encodes force.

Let me conclude this section by making again another auxiliary point of the sort I made in the previous chapter and already once in this chapter as well. We saw that people have talked about the Suppositional view in very general terms or using machinery which is not well suited to deal with it. In contrast, the use-conditional framework enables us to state it in concrete terms using machinery which is very well suited to deal with it. This means again that the use-conditional framework is more flexible than standard frameworks in enabling us to do more than they do – in this case by enabling us to state the interesting and plausible Suppositional theory of ‘if’ in clear and concrete terms.

Conclusion

Let's sum up. In this chapter I've shown how the Rules view can satisfy the Unity, Conservativeness and Explanatory Constraints in the case of declarative, interrogative, and imperative sentences and conditionals by showing how the two different views about the semantics of mood can be stated in terms of use-conditions. However, I've also argued that the use-conditional framework is more flexible than standard frameworks in enabling us to do more than they do – in this case by being able to represent two radically different theories of the semantics of mood in the same framework and to state an interesting and plausible theory of ‘if’ in clear and concrete terms.

Appendix: A Classification of Speech Acts

In this appendix I present an overview of the classification of speech act types that I find plausible and that I invoked above. Although not all of it is part of the Rules view in itself since the view is compatible with other classifications, it is a part of the package of the Rules view + the Force View as I've developed it.

Uses = all tokenings of expressions

Mere uses = tokenings of expressions performed without putting (one of) its rule(s) in force
Meaningful uses = tokenings of expressions performed while putting (one of) its rule(s) in force

Meaningful uses…
of a subsentential expression result in expressing an object, property etc.
of a declarative result in expressing a proposition and saying that it is the case
of an interrogative result in expressing a question and asking for an answer to it
of an imperative result in expressing a property or a proposition and telling someone to do something

Uses…
with communicative intentions of some type A: claiming etc. that a proposition is the case
with communicative intentions of some type B: inquiring etc. for an answer to a question
with communicative intentions of some type C: requesting etc. someone to do something

On this classification uses divide into mere uses and meaningful uses. Only meaningful uses of sentences of a particular mood can result in the convention-dependent speech acts of saying, asking, and telling. However, all uses and not only meaningful uses of all sorts of expressions can, given the presence of the right sorts of communicative intentions of a particular type, result in the intention-dependent speech acts like claiming, inquiring, requesting etc.

Conclusion

My aim in this dissertation was to develop and defend a promising view of linguistic meaningfulness that I call the Rules view. On this view expressions have a meaning primarily in a public language thought of as an abstract object not tied to any particular community. And what it is for an expression to have a meaning in such a language is for it to be governed by a rule that entails that it is permissible to use it in certain conditions. I first argued in Chapter 2 that we have a conclusive reason to prefer the Rules view over alternative Conventions and Dispositions views. I then showed in Chapters 3-6 that it can allow the meaning of any expression to be the same kind of thing by developing a use-conditional semantics for the representational core of natural language, for indexicals and demonstratives, and for different moods and conditionals. I also demonstrated throughout that the use-conditional framework is not only conservative in being consistent with established frameworks, but is at the same time considerably more flexible in allowing us to describe the meanings of expressions they have trouble with. I think that it is fair to conclude that the Rules view is the most plausible view of linguistic meaningfulness on the table and that the use-conditional framework is potentially incredibly useful for doing descriptive semantics.

References

Alston, W. 1963. "Meaning and Use". Philosophical Quarterly, 13, pp. 107-124 Alston, W. 1999. Illocutionary Acts and Sentence Meaning. Ithaca: Cornell University Press Austin, J. L. 1962. How To Do Things With Words. Cambridge: Harvard University Press Bach, K. 1994. "Conversational Impliciture". Mind & Language, 9, pp. 124-162 Bach, K. 2001. "You Don't Say?". Synthese, 128, pp. 15-44 Bach, K. 2005. "Context Ex Machina". Semantics versus Pragmatics. Ed. Z. G. Szabo. New York: Oxford University Press, pp. 15-43 Barnett, D. 2008. "Zif is If". Mind, 115, pp. 519-565 Belnap, N. 1970. "Conditional Assertion and Restricted Quantification". Nous, 1, pp. 1-12 Belnap, N. 1973. "Restricted Quantification and Conditional Assertion". Truth, Modality, and Syntax. Ed. H. Leblanc. Amsterdam: North-Holland, pp. 48-75 Belnap, N. 1990. "Declaratives Are Not Enough". Philosophical Studies, 59, pp. 1-30 Bennett, J. 1976. Linguistic Behavior. Cambridge: Cambridge University Press Bennett, J. 2003. A Philosophical Guide to Conditionals. Oxford: Oxford University Press Bierman, A. K. 1972. "Chessing Around". Philosophical Studies, 23, pp. 141-142 Boisvert, D.
Conclusion

My aim in this dissertation was to develop and defend a promising view of linguistic meaningfulness that I call the Rules view. On this view expressions have a meaning primarily in a public language thought of as an abstract object not tied to any particular community. And what it is for an expression to have a meaning in such a language is for it to be governed by a rule that entails that it is permissible to use it in certain conditions. I first argued in Chapter 2 that we have a conclusive reason to prefer the Rules view over alternative Conventions and Dispositions views. I then showed in Chapters 3-6 that it can allow the meaning of any expression to be the same kind of thing by developing a use-conditional semantics for the representational core of natural language, for indexicals and demonstratives, and for different moods and conditionals. I also demonstrated throughout that the use-conditional framework is not only conservative in being consistent with established frameworks, but is at the same time considerably more flexible in allowing us to describe the meanings of expressions they have trouble with. I think that it is fair to conclude that the Rules view is the most plausible view of linguistic meaningfulness on the table and that the use-conditional framework is potentially incredibly useful for doing descriptive semantics.
Abstract
My aim in this dissertation is to develop and defend a promising view of linguistic meaningfulness that I call the Rules view. On this view expressions have a meaning primarily in a public language thought of as an abstract object not tied to any particular community. And what it is for an expression to have a meaning in such a language is for it to be governed by a rule that entails that it is permissible to use it in certain conditions.

I first argue in Chapter 2 that we have a conclusive reason to prefer the Rules view over alternative Conventions and Dispositions views. I then show in Chapters 3-6 that it can allow the meaning of any expression to be the same kind of thing by developing a use-conditional semantics for the representational core of natural language, for indexicals and demonstratives, and for different moods and conditionals. I also demonstrate throughout that the use-conditional framework is not only conservative in being consistent with established frameworks, but is at the same time considerably more flexible in allowing us to describe the meanings of expressions they have trouble with.