INSTANTIAL TERMS, DONKEY ANAPHORA, AND INDIVIDUAL CONCEPTS by JOHN KEITH HALL III A dissertation submitted to the Graduate School of the University of Southern California in partial fulfillment of the degree of Doctor of Philosophy (Philosophy) written under the direction of Scott Soames and approved by ___________________________________ ___________________________________ ___________________________________ Los Angeles, California Degree Conferral: December 2017 ACKNOWLEDGMENTS I would like to thank a few individuals without whom this dissertation would have been immeasurably less successful, pleasant, timely, or existent, beginning with my parents John and Susan Hall, whose endless love and support are the greatest gifts I have received to date. It was in Daniel Garber’s course on the British empiricists in my sophomore year at Princeton that I first was introduced to philosophy, and I was immediately enthralled. I first explored the byways of philosophy of language in weekly office discussions with my undergraduate mentor and advisor Gilbert Harman, and I profited in those years not only from the consistently quick and insightful feedback I received from him on paper drafts and in our weekly meetings, but also from the example he set of what good philosophy looks like. I am extraordinarily fortunate to have been at the University of Southern California for my graduate school career. I am grateful to University for the many sources of funding I have received, for our world-class facilities (that philosophy library!), and for the excellent administrative support from the graduate school and the philosophy department, especially from our former department administrator Cynthia Lugo and the philosophy librarian Ross Scimeca. The department at USC greatly benefits from the stewardship of Scott Soames and Mark Schroeder, who set an example of hard work, professionalism, and personal excellence that inspires and motivates the graduate students. Their dedication to growing the program at USC and to helping prepare graduate students for later stages of their academic careers benefits us all immeasurably. I have received sound guidance from both of them on nearly every aspect and stage of the program at USC, and on numerous occasions in the doldrums of graduate school when I was full of self-doubt, I came away from meetings with them feeling encouraged, !ii energized, and even confident. I am deeply grateful for their efforts on my behalf and for their sound advice, but also for their seemingly boundless patience when I failed to meet their expectations. As the chair of my committee, Scott has additionally provided extensive feedback on my dissertation, and his insightful comments and questions have forced me to clarify or abandon certain lines of thought. But equally important, on numerous occasions I would come to Scott with a half-baked idea about which I was uncertain or less than confident, and Scott would assure me that the idea was not only promising but insightful and genuinely good. I am grateful to have had an advisor with such a keen sense of what lines of inquiry are fruitful or problematic and for expressing confidence in me and in my ideas. Many of the ideas in this dissertation germinated in a weekly directed research seminar I took with Robin Jeshion in my second year. Robin had the idea of inviting other graduate students to join our weekly meetings. 
This was one of the most beneficial and rewarding academic experiences in my graduate career, and I am grateful for the many stimulating discussions I had in this group with Marina Folescu, Brian Blackwell, Caleb Perl, Rima Basu, and Indrek Reiland. Robin continued to be central to the rest of my academic progression at USC, and on numerous occasions I received opportunities to write papers, give talks, go to conferences, present in seminars, or network with colleagues because she had recommended me. I cannot thank her enough for all of the opportunities that I was afforded because of her efforts. Throughout this dissertation writing process, I have benefitted greatly from her insightful and detailed feedback on my work. I would also like to thank the other members of my committee, Barry Schein and Andrew Bacon. At various stages, Andrew’s sharp observations and comments have required !iii me to make important corrections and clarifications that saved me from numerous technical and argumentative pitfalls. Throughout this writing process, I have struggled to stay abreast of the vast landscape of linguistic data on unbound anaphoric pronouns, a literature that extends beyond what I had originally foreseen when I took on this project. Bertrand Russell once recommended that philosophers “stock the mind with as many puzzles as possible, since these serve much the same purpose as is served by experiments in physical science” (Russell 1905, pp. 484–85). Russell’s point is equally apt with respect to linguistic data, and I am grateful to have had Barry’s deep stock of linguistic data as I attempted to make sense of them. Barry has been the most critical member of my committee, and my dissertation has greatly benefited from his encouragement to consider more kinds of problem sentences. My interest in the problem of unbound anaphoric pronouns emerged in a seminar I took with Karen Lewis and Scott Soames in my third year. Karen’s half of the seminar covered the semantics of definite descriptions, and it was here that I first became puzzled enough about donkey pronouns that I felt compelled to think and write about the topic for several more years; I thank her for introducing me to this topic and for her feedback on some of my very early thoughts on the subject. Various components of this dissertation were presented at conferences and workshops. Although I cannot hope to remember or thank everyone, I am most grateful for the feedback I have received from the PhilMiLaCog graduate student conference at the University of Western Ontario; USC’s own Pegasus Society, including Matt Babb, Rima Basu, Justin Dallman, Eric Encarnacion, Maegan Fairchild, Ben Lennertz, Shyam Nair, Caleb Perl, Abelard Podgorski, Indrek Reiland, Kenneth Silver, Justin Snedegar, and Aness Webster (among many others); François Recanati and the attendees at his author-meets-the-critics session on Mental Files at the 2014 Pacific APA; Hans Kamp; attendees of the workshop on mental files at !iv Boğaziçi University, including Paolo Bonardi, Aidan Gray, Ilhan Inan, Robin Jeshion, David Papineau, Ángel Pinillos, François Recanati, Lucas Thorpe, and Jack Woods; and attendees at the New York Philosophy of Language Workshop, including Daniel Harris and Matt Moss. 
I would like to thank USC faculty outside of my committee who have provided feedback on my writing, teaching, and ideas over the years, including and especially Janet Levin, Kadri Vihvelin, Gary Watson, Ed McCann, Gabriel Uzquiano, Andrei Marmor, Karen Lewis, Stephen Finlay, Jim Van Cleve, George Wilson, and Jim Higginbotham. At one workshop where I presented this work, a fellow graduate student (here unnamed), upon being asked what year he was in the program, answered, "It's about time they take me around to the back of the shed." My own graduate career has extended farther across time and space than I would have ever anticipated, including in its spatial dimension such cities as Los Angeles, Austin, San Jose, and San Francisco. In each of these places, I have been fortunate to have relied on good friends and loved ones for encouragement and support, many of whom I met in the philosophy department at USC. I feel privileged to have spent not a small segment of my life engaged with some of the most interesting questions humans can ask alongside some of the most interesting people who ask them, and my life is infinitely richer for the many friendships I have made along the way.

Finally, I would like to thank my co-explorer of the universe James, who generously offered his uncorrupted linguistic intuitions on every variety of donkey sentence I could concoct. He was my dependable source of support and encouragement even after hearing for the umpteenth time "it's nearly done"; without him, it might have never been. This dissertation is dedicated to him.

TABLE OF CONTENTS

Acknowledgments
Table of Contents
Introduction
Chapter 1: Instantial Terms, Quantifiers, and Individual Concepts
  1. Introduction
  2. Previous Accounts of Instantial Terms
    2.1 Implicitly Bound Variables
    2.2 Arbitrary Objects
    2.3 Arbitrary Reference
  3. Instantial Terms and Individual Concepts
    3.1 Instantial Reasoning
    3.2 Instantial Discourse
    3.3 Advantages over Previous Views
Chapter 2: The Problem of Unbound Anaphoric Pronouns: A Critical Overview
  1. Introduction to the Problem
  2. Dynamic Views
    2.1 Discourse Representation Theory
    2.2 Discursus: Quantification and Conditionals
    2.3 Sentential Truth and Discourse Truth
    2.4 Dynamic Predicate Logic
  3. The E-Type View: UAPs as Referring Terms
  4. The D-Type View: UAPs as Definite Descriptions
  5. NP Deletion: [[it]] = [[the]]
  6. UAPs as Context-Dependent Quantifiers
  7. UAPs and Instantial Terms: A Second Look
Chapter 3: Toward a Positive Account
  1. Quantificational Force
    1.1 Quantified Background Conditions
    1.2 Truth Theory
    1.3 Quantified Background Conditions and Mediate Predication
  2. Assigning Sentences Two-Component Propositions
    2.1 Atomic Formulas
    2.2 Nested Two-Component Propositions
    2.3 Talking about Propositions
    2.4 Modal and Intensional Subordination
    2.5 Weak versus Strong Readings and Conditional Donkey Sentences
    2.6 Attitude Reports
Literature Cited

INTRODUCTION

This dissertation explores two closely related problems in the philosophy of logic and language. The first problem concerns the meaning of so-called instantial terms, as when one stipulates "let p be a prime number" in a mathematical proof for the purposes of proving some conclusion about all primes. The second problem concerns certain uses of pronouns in natural language known as donkey pronouns, or as I call them, unbound anaphoric pronouns.
Although the semantics of these two kinds of terms may at first seem to be highly insular or specialized topics, the issues they raise touch on a variety of fundamental questions about the nature of reference, linguistic meaning, and mental representation. Indeed, a number of philosophers and linguists have argued that solving the second problem calls for a radical break with orthodox semantic theory as it has been practiced since Frege. Hence, these problems are of interest not just because we want to know how these terms operate in language, but because they raise fundamental questions about the nature of reference and linguistic meaning and their relation to cognition.

Instantial terms do not just occur in formal contexts; they are ubiquitous in ordinary reasoning. For example, in arguing for general claims in epistemology or ethics, it is common to make stipulations like "Consider some person (call her 'Mary')." In such cases, we are not referring to anyone in particular; this is mere pretense. Rather, such terms are devices for establishing general conclusions. I take the popularity of the narrative thought experiment in academic philosophy to attest to the ubiquity of instantial terms in ordinary reasoning, and to our cognitive predilection to reason about general matters in what we might call the "particular mode." Hence the problem of providing an account of instantial terms' meaning promises to shed light on our reasoning practices as used in virtually every area of general inquiry.

Despite their ubiquity, it is not clear what such terms mean. On the one hand, they seem to make general claims: sentences containing "p" purport to concern all prime numbers. However, instantial terms bear important cognitive and inferential kinships with referring terms. In reasoning about "p" it is as if one is reasoning about a particular prime number. This tension between instantial terms' putative singularity and instantial formulas' truth-conditional generality has led philosophers to different accounts of what such terms mean. According to some, instantial terms are quantifiers. According to others, they are referring terms.

In the first chapter of my dissertation, I argue against previous referentialist and quantificationalist approaches. I then articulate and defend a novel alternative. The guiding idea of my approach starts with a popular view in the representational theory of mind according to which agents refer to an object o in their perceptual environment by tokening mental representations whose semantic function is to refer to o. When engaging in instantial reasoning, I argue, agents token mental representation-types I call individual concepts that play a similar role in cognition to these more fundamental mental representations that refer. However, unlike mental representations that refer, the individual concepts in instantial reasoning have taken on new, derived functions. In particular, their function is to represent a range of values, rather than any one object in particular. An individual concept is said to "encode" a background condition, as determined by the stipulative procedure by which the term that expresses it was introduced (for example, the term 'p' above would express an individual concept encoding the condition being a prime number). An individual concept's value range consists of those objects which satisfy the background condition it encodes.
I then develop a framework that takes instantial formulas to express structured propositions which contain two components: one is like a Russellian structured proposition which may contain not only objects, properties, functions, and so on, but also individual concepts; the other component is the set of "background conditions" encoded by the individual concepts in the first component. Together, these two components determine general truth-conditions for the propositions expressed by instantial formulas. In this way we can capture the truth-conditional generality of instantial formulas, while capturing the cognitive, syntactic, and inferential kinships between instantial terms and referring terms.

In the rest of my dissertation, I explore whether this framework can be extended to the second problem, donkey pronouns. In brief, the problem is that whereas some pronouns function as referring terms or as bound variables, other cases do not fit easily into these two camps. For example, the pronoun 'it' in (1) needn't refer to any particular donkey, but orthodox syntactic constraints prevent us from analyzing the pronoun as a variable bound by the antecedent quantifier 'a donkey':

(1) Someone who owns a donkey vaccinates it.

In the second chapter of my dissertation, I present an overview of the major linguistic data that any semantic theory of donkey pronouns needs to address. Along the way, I argue against the leading accounts of donkey pronouns in the literature. Of particular interest is the interaction of donkey pronouns with other operators in sentences in which they occur (modal operators, negation, etc.). As a few previous theorists have also observed, donkey pronouns are like instantial terms in that they appear to have quantificational force: they are devices of generality. However, the quantificational force of donkey pronouns appears to take scope over any operators in sentences in which they occur, no matter how deeply or explicitly they are syntactically embedded or whether they exist inside scope islands. This behavior is mysterious if donkey pronouns are quantifiers or definite descriptions, which easily admit narrow scope readings and do not exhibit such exceptional scope behavior. However, I argue that the wide-scoping behavior is easily explained if the propositions expressed by sentences containing donkey pronouns have two components of information on which their truth modally depends, one a structured complex containing individual concepts and the other the background conditions encoded by those concepts. Since background conditions are derived from the context of utterance rather than encoded in the semantically significant parts of the sentence, it is no wonder that they do not enter into scope relations with the other semantically significant parts of the sentence: that is, they scope over other operators. Hence, if modal and other operators take only the structured proposition component as argument, the truth theory I supplied in chapter 1 for two-component propositions treats the background conditions component like a wide-scope quantifier over the interpretation of the structured complex component, thus explaining pronouns' wide-scoping behavior. In brief, I argue that the propositionalist framework I presented in chapter 1 for instantial terms constitutes a promising account for donkey pronouns. No radical revision of semantic theory is needed.
However, since natural language is significantly more complex than the formal languages for which my account of instantial terms was developed, a number of modifications are required in order to account for the linguistic data presented in chapter 2. In the final chapter 3, I modify the framework presented in chapter 1 to do just that. The resulting view is a unified semantic analysis of instantial terms and donkey pronouns that accounts for a broad range of linguistic data. The view takes important concepts from discourse representation theory like "discourse referents" and "discourse conditions" (analogues of my individual concepts and background conditions), but situates these ideas in a propositional framework. One advantage of this view is that we can retain a concept of sentential truth as fundamental, whereas for discourse representation theory truth for a discourse is fundamental.

CHAPTER 1: INSTANTIAL TERMS, QUANTIFIERS, AND INDIVIDUAL CONCEPTS

1. INTRODUCTION

Philosophy aims to describe the world in general terms. Yet general inquiry often proceeds as if it were concerned with particulars. Having shown that there are Fs, I may stipulate that n is an arbitrary F, and proceed to reason about n. And having shown that n is G, I may conclude that all Fs are G. These informal patterns of reasoning are modeled in deduction systems by Existential Instantiation and Universal Generalization, which govern the introduction and elimination of so-called instantial terms. Existential Instantiation (EI) allows one to derive the instantial formula φ(a) from ∃xφ(x) under suitable constraints. And by Universal Generalization (UG), we may derive ∀xφ(x) from φ(b), again under suitable constraints.[1] (A toy illustration of why such constraints are needed appears below.)

[Footnote 1: Throughout this chapter, "Universal Generalization" (UG) and "Existential Instantiation" (EI) will be used as umbrella terms for various rules in natural deduction systems which are meant to model the corresponding informal patterns of reasoning described above. The term Existential Instantiation is often reserved for rules in which the instantial formula may be directly inferred from the existential premise, whereas the term Existential Elimination is used for rules which indirectly license the inference of a formula Q from (i) the existential premise ∃xφ(x) and (ii) Q's derivation from the instantial formula φ(a).]

As a technological innovation, instantial terms are incredibly useful, for they allow one to engage in general reasoning with more ease and concision than is possible with quantifiers. But the ubiquity of instantial terms attests as much to their utility as it does to our cognitive predilection to think and reason about particulars. As Quine observed in Word and Object (1960), the most basic elements of our conceptual repertoire represent the individual particulars of our external environment: "Entification begins at arm's length; the points of condensation in the primordial conceptual scheme are things glimpsed, not glimpses" (p. 3). Although Quine's point is directed against views that would analyze talk of particulars in terms of a more conceptually primary stock of sense data, the point is relevant in that it also suggests the cognitive primacy of thoughts about particulars over thoughts of a more general or abstract nature. It is the mental analogues of referring terms, not quantifiers, that function to represent such particulars.
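The following is a small, hedged sketch (an editorial illustration, not part of the dissertation) of why EI and UG need their "suitable constraints": a term introduced by EI records only that some F exists, so generalizing on it without restriction would license the invalid move from ∃xφ(x) to ∀xφ(x). A brute-force search over a two-element domain exhibits the obvious countermodel.

```python
# Hedged illustration (not from the dissertation): search a two-element domain for
# an interpretation of a one-place predicate F under which "something is F" is true
# but "everything is F" is false.  The existence of such a countermodel is why a
# term introduced by EI may not later be generalized on by UG without restriction.
from itertools import product

domain = [0, 1]
for values in product([False, True], repeat=len(domain)):
    F = dict(zip(domain, values))            # one interpretation of F over the domain
    exists_F = any(F[x] for x in domain)     # ∃xF(x)
    forall_F = all(F[x] for x in domain)     # ∀xF(x)
    if exists_F and not forall_F:
        print("countermodel:", F)            # e.g. {0: False, 1: True}
        break
```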
If mental representations that refer to particulars are cognitively primary, then it is not surprising that, even in our quest to describe the world at increasing levels of generality and abstraction, we fall back on representational devices of a more familiar design.

Despite their ubiquity, it is not clear what instantial terms mean. In reasoning about n for arbitrary number n, it is as if I am referring to and reasoning about a particular number. However, this kinship with referring terms makes instantial terms puzzling from a semantic point of view. If instantial terms refer, then we should expect instantial formulas to make singular claims about those terms' referents. But instantial formulas seem truth-conditionally general. Whereas (1) concerns some particular numbers, (2) makes a generalization about all of them, which is to say none in particular:

(1) (1 + 3)² = 1² + 2⋅1⋅3 + 3²
(2) (a + b)² = a² + 2ab + b²

Philosophers have responded to this tension between instantial terms' putative singularity and instantial formulas' truth-conditional generality in different ways. On some views, instantial terms are referring terms. On other views, instantial terms are variables bound by implicit quantifiers (or are quantifiers themselves). In this chapter, I explore why one might be attracted to, and how one might develop, an alternative to these views.

In the next section, I raise objections to prior accounts of instantial terms. I then present an alternative view according to which instantial terms semantically express concepts which are of a cognitive kind with those whose semantic function in cognition is to refer. But rather than identify instantial terms' contents with their referents, I take them to be representational intermediaries, which, like Fregean senses, mediate between language and the objects in the world that language is about. As I conceive of them, individual concepts are mental representation-types which different agents token in cognition and whose semantic function is to designate the objects in an associated value range. In the framework I develop, the truth-conditional contribution of an individual concept to propositions of which it is a part is to quantify over the objects in its associated value range. In this way, instantial terms are like implicitly bound variables in their contribution to instantial formulas' general truth conditions. But since the concepts expressed by instantial terms are of a cognitive kind with referring concepts, the special syntactic, cognitive, and inferential kinships between instantial terms and referring terms may be preserved.

2. PREVIOUS ACCOUNTS OF INSTANTIAL TERMS

Although there are a number of views in the literature about the meaning of instantial terms, my discussion will be limited to that of Fine (1983; 1985a,b), King (1991), and Magidor & Breckenridge (2012).[2] But before we explore these alternative accounts, there is a prior question: Why take instantial terms to be meaningful at all? According to what has been called the instrumentalist view, instantial terms have an important role in helping us perform derivations, but they are themselves meaningless. By following the derivation rules which govern instantial terms in formal derivation systems, we derive true and meaningful quantified conclusions from true and meaningful quantified premises. But the instantial formulas between these bookends have no semantic significance per se.

[Footnote 2: See Rescher (1958), Price (1962), Tennant (1983), Mackie (1985), Martino (2001), and Shapiro (2004).]

As previous authors have pointed out, an immediate problem with the instrumentalist view is that instantial formulas like (a + b)² = a² + 2ab + b² seem not only meaningful, but also true. Taking instantial terms to be meaningless not only misses this fact, but it also gets backward the intuitive explanation of the validity of instantial inferences. On the instrumentalist view, instantial inferences are justified by the fact that they conform to derivation rules which have been shown to be adequate by some complicated proof-theoretical result—that is, their only justification is that they work. But this inverts the order of explanation: Intuitively, derivation rules for instantial terms are justified by the fact that the instantial inferences they license are truth preserving, rather than the inferences themselves being justified by their being licensed by suitably chosen rules. The local truth-preserving character of instantial inferences in a given deduction system explains the classical soundness of the overall system, not the other way around.

Moreover, in practice, it is often difficult to see whether some quantified conclusion follows from a quantified premise. To check whether such an inference is valid, we may employ instantial reasoning to derive the conclusion from the premise via a chain of instantial formulas each step of which is intuitively valid. It is striking that reasoners are often more sure of the validity of instantial inferences than they are of the quantified inference whose validity instantial reasoning is used to verify. Moreover, reasoners can reliably make valid instantial inferences without knowing the complicated proof-theoretical results that guarantee that the derivation rules which govern them are classically sound (Magidor & Breckenridge 2012). The instrumentalist view is hard pressed to make sense of these facts.

Finally, as Fine (1985b) has emphasized, taking instantial terms to be meaningful helps to explain and justify restrictions placed on EI and UG in various deduction systems that otherwise might seem puzzling or arbitrary. Once one has supplied a semantics for instantial formulas in a given system, it is easier to see why the restrictions work when they do and how they might be corrected when they do not. Furthermore, explaining and justifying the restrictions in various deduction systems provides a means by which one might adjudicate between them. Since on the instrumentalist view, the only justification for the restrictions on EI and UG in various systems is that they work, systems may not be compared on this basis. And as Fine's work shows, providing a semantics for instantial terms allows for straightforward proofs of the classical soundness of natural deduction systems that would otherwise be cumbersome and indirect. Taking instantial terms to be intrinsically meaningful thus also comes with a number of practical advantages over the instrumentalist alternative.

2.1 Implicitly Bound Variables[3]

[Footnote 3: This section closely follows the discussion found in Magidor & Breckenridge (2012).]

According to variablist views, instantial terms are variables implicitly bound by quantifiers (or are implicit quantifiers themselves). For example, in the following argument, line 4 is analyzed as ∀xFx, line 5 as ∀xGx, and line 6 as ∀x(Fx & Gx):

Argument 1
1. Everything is F.
2. Everything is G.
3. Let a be an arbitrary object.
4. a is F.
5. a is G.
6. a is F and a is G.
7. So, for all x, x is F and x is G.

As other authors have pointed out, an immediate problem with this suggestion is that it makes lines 4, 5, and 6 mere repetitions of lines 1, 2, and 7, respectively. This makes it unclear what purpose those lines serve in the argument, since they merely make implicit what the premises and conclusion state explicitly. Moreover, whereas 4, 5 / 6 seems to be a trivial instance of conjunction introduction, the variablist maintains that it is instead the nontrivial inference 1, 2 / 7—the very inference whose validity 4, 5 / 6 is used to verify! The variablist approach thus fails to capture the intuitive logical form of instantial inferences and fails to explain the purpose of instantial formulas in derivations. Another difficulty for variablists concerns auxiliary suppositions, as when we stipulate that n is an arbitrary number and then further suppose that n is even for the purposes of showing that some conclusion follows. Clearly, in supposing that n is even, we are not supposing that all numbers are even, as the variablist would have it.

King (1991) presents a more sophisticated version of this view according to which instantial terms are context-dependent quantifiers (CDQs) whose quantificational forces, restrictions, and relative scopes are determined by the linguistic or derivational contexts in which they occur. Although the formal truth theory King provides is provably sound for the derivation system he uses, it still faces the above-mentioned problems: instantial formulas merely render implicit the content made explicit elsewhere in the derivation, and seemingly simple instantial inferences are construed as involving rather complex inferential patterns. Moreover, on his semantics, all formulas subordinate to an auxiliary supposition are analyzed as material conditionals whose antecedent is the supposition itself. But this does not adequately capture the role of auxiliary suppositions in instantial reasoning. As Fine argues (against a similar suggestion), "In making the supposition φ I am not asserting the trivial conditional φ → φ and in making an inference (say φ ∨ ψ) from a supposition (say φ) I am not inferring one conditional (φ → (φ ∨ ψ)) from another (φ → φ)" (1985b, p. 134).

An additional problem for King's semantics is that it assigns instantial formulas truth conditions that are too complex to be plausible candidates for those formulas' meanings. For example, line 5 in Argument 2 is assigned the surprisingly complex truth conditions given by (3) below (where 'L' is 'loves'):

Argument 2
1. Somebody loves everyone.
2. Let John be such a person.
3. John loves everyone.
4. Let Mary be an arbitrary person.
5. John loves Mary.
6. Someone loves Mary.
7. Since Mary was an arbitrary person, everyone is loved by someone.

(3) ∃a((∃x∀yLxy → ∀yLay) ∧ ∀bLab) ∧ ∀a((∃x∀yLxy → ∀yLay) → ∀bLab)

All else being equal, we prefer a semantics that does not translate formulas with a relatively simple surface form into highly complex paraphrases in the regimented formal language. A perfect correspondence between surface structure and logical form is neither plausible nor desirable, but the highly complex truth conditions King assigns to simple instantial formulas are undesirable.

A final problem for King's semantics is that it makes the meaning of instantial formulas sensitive to seemingly unimportant features of the derivation structure. In the above argument, 'John' was introduced before 'Mary'.
Hence in the truth conditions King assigns line 5, the existential quantifier contributed by 'John' takes scope over the universal quantifier contributed by 'Mary'. However, we could just as well have introduced 'Mary' before 'John'.[4] Intuitively, this change should make no difference to the structure of the argument and interpretation of its sentences. However, on King's semantics, this change inverts the scope order of the quantifiers in line 5 and so affects not only the interpretation of that line, but also the overall structure of the argument.

[Footnote 4: King informally describes his view as taking instantial terms to be quantifiers. But the truth conditions he assigns instantial formulas often do not allow a one-to-one identification of the instantial terms in the formula with the quantifier expressions in the truth conditions, as (3) illustrates.]

In summary, King's quantificational view and variablist views more generally have trouble capturing the intuitive logical form of instantial formulas and instantial inferences. By taking instantial terms to be variables bound by implicit quantifiers (or implicit quantifiers themselves), variablist views obscure the distinctive syntactic, inferential, and cognitive kinships between instantial terms and referring terms.

2.2 Arbitrary Objects

Kit Fine (1983; 1985a,b) proposes that instantial terms refer to arbitrary objects, which are a special kind of object distinct from familiar (individual) objects. Each arbitrary object is uniquely associated with a set of individual objects that comprise its value range. The value range of the arbitrary triangle consists of all the individual triangles. The value range of the arbitrary prime number consists of all the primes. And so on. Arbitrary objects are subject to the principle of generic attribution (PGA), which says that an arbitrary object satisfies all and only those predicates satisfied by all the individual objects in its value range. Since all triangles have three sides, the arbitrary triangle has three sides. Since not all prime numbers are odd, the arbitrary prime is not odd (nor is it even). And so on.

A moment's thought reveals that PGA requires qualification. Applying PGA to the predicate 'is an arbitrary object' yields that an arbitrary object is an arbitrary object iff the individual objects in its value range are. But individual objects and arbitrary objects are supposed to be distinct. To overcome such difficulties, Fine makes a distinction between classical and generic conditions (predicates). A generic condition φ(x) is one that is subject to PGA, such as being a prime number and being even. A classical condition is one that does not work this way. Being an arbitrary object and being an individual number are paradigmatic classical conditions and may be satisfied by an arbitrary object even if not satisfied by the objects in its value range. This distinction corresponds to two functions Fine takes instantial terms to play. In generic claims, instantial terms function to represent the objects in an associated value range, whereas classical claims correspond to the purely referential role of an instantial term to talk about its referent in its own right.

Further complications are required to extend this picture beyond monadic predicates. n-tuples of arbitrary objects may have joint value ranges, consisting of n-tuples of admissible values. For example, let m be a number, and let n be the square of m.
Then the joint value range for <m, n> consists of the pairs of numbers <i, j> such that j = i². PGA is then reformulated as follows:

PGA4: If φ(x₁, x₂, …, xₙ) is a generic condition containing no names for arbitrary objects, then φ(a₁, a₂, …, aₙ) is true iff φ(i₁, i₂, …, iₙ) is true for all admissible assignments of individuals i₁, i₂, …, iₙ to the objects a₁, a₂, …, aₙ.

A second complication is that some arbitrary objects are dependent, where an arbitrary object b is dependent on arbitrary objects a₁, a₂, … iff the values assigned to b must be determinable from the values assigned to a₁, a₂, …. An arbitrary object is independent if it is not dependent on any other objects. In our example above, the term 'm' refers to an independent arbitrary object, while 'n' refers to an arbitrary object dependent on m.

These complications raise problems. Consider two integers, m and n. Nothing about this stipulation determines whether m and n are distinct, so we may neither conclude that m = n, nor that m ≠ n. Moreover, since neither m nor n is dependent on the other, both terms must refer to independent arbitrary objects each with the integers as its value range. But the one-to-one correspondence between independent arbitrary objects and associated value ranges requires that the terms 'm' and 'n' co-refer and hence that m = n. This is a bad result.

Fine's preferred solution is to take m and n to be differently dependent on the arbitrary pair of integers p whose value range consists of ordered pairs of integers <i, j>. The value range of m is determined by the first member of each pair in p's value range; conversely the value range of n is determined by the second member of each pair in p's value range. Hence m ≠ n after all. However, this will not do: just as we cannot conclude m = n from our stipulation, we also cannot conclude m ≠ n.[5]

[Footnote 5: Additionally, as Magidor & Breckenridge (2012) point out, which objects 'm' and 'n' refer to according to Fine's solution is a matter of which order they are introduced. Indeed, which object 'm' refers to is a matter of what happens later on in the derivation. If we make further stipulations of the form Let 'α' be an arbitrary integer, then 'm' refers to something else: an arbitrary object dependent on the arbitrary triple of integers, or quadruple, and so on, as the case may be. Magidor & Breckenridge cite Fine himself in complaint: "It is a natural requirement on a derivation containing [instantial terms] that we know what those [terms] denote as soon as they are introduced; their interpretation should not depend upon what subsequently happens in the derivation" (Fine 1985b, p. 101).]

A simpler solution would be to take the identity symbol to be ambiguous between generic and classical readings. Taken classically, 'm = n' is true. Taken generically, 'm = n' is not true by PGA4 since not all pairs <i, j> of admissible values for <m, n> are such that i = j. However, 'm ≠ n' is not true either, since not all pairs <i, j> of admissible values for <m, n> are such that i ≠ j. As Fine makes clear (p. 11), where 'a' is an instantial term and 'φ(x)' a generic condition, both 'φ(a)' and its negation '~φ(a)' may be not true. For example, if 'a' names the arbitrary number, neither 'a is odd' nor 'a is not odd' is true. (If a sentence is false only if its negation is true, then bivalence fails.) At least on their generic readings, neither 'm = n' nor 'm ≠ n' is true, as wanted. (A toy illustration of this generic evaluation appears below.)

Although the generic/classical distinction resolves the foregoing difficulties, it lacks independent motivation. Moreover, it comes with a cost: We have to accept classical claims like 'm = n' which may not be derived in any deduction system and may even contradict derivable generic claims. The problem is inherent to any referential approach. For if an instantial term[6] refers, a theory of its meaning will necessitate saying something about its referent. (This is because the kind of explanation a referentialist offers for instantial terms' logical behavior has to do with metaphysical facts about the referents themselves rather than semantic facts about the terms.) But this theoretical claim about the referent, as stated in the material mode, will not be part of what the instantial term is used to say as part of our ordinary linguistic practice. It would be better if our theorizing about instantial terms did not require us to alter how we use them, or admit new linguistic roles for them to play.

[Footnote 6: If an arbitrary object satisfies a predicate 'φ(x)' on its classical reading but not its generic reading, does the predicate express the same property on both readings? If satisfying a predicate is just a matter of instantiating the property it expresses, "yes" leads to contradiction. On the other hand, taking generic and classical readings of a predicate to express different properties is also unattractive: accounting for the logical behavior of instantial terms should not require us to countenance a new class of "generic" properties.]

Another cost of the generic/classical distinction is that classical negation fails: where n is an arbitrary integer, neither 'n is even' nor 'n is not even' is true. So does classical disjunction: 'n is even or n is odd' is true even though neither disjunct is. Either the semantical rule for disjunction fails, or we must provide some non-standard means of evaluating instantial disjunctions. But then other logical principles will have to be given up in its stead.[7] Whatever route we choose, Fine admits that it is ultimately impossible to achieve "complete logical parity between individual and arbitrary objects; the difference in their logical, or rather meta-logical behavior must show up somewhere" (1985b, p. 12). Classical logic must be abandoned.

[Footnote 7: See Fine (1985b, pp. 11–12) for discussion.]

Fine defends this consequence of his view by noting that a rejection of classical logic may already be required to account for phenomena like vagueness. But even if vagueness requires a global rejection of classical logic, it does not explain its local failure for instantial formulas. Plausibly, the predicate 'is even' is not vague. Why is it that '2 is even' obeys classical negation, but 'n is even' does not? From my perspective, it is not the rejection of classical logic per se that is worrisome. Rather, it is the fact that the arbitrary object view attributes instantial terms' distinctive logical behavior not to their semantical features (they simply refer), but rather to metaphysical facts about their referents—a deficiency in arbitrary objects themselves, not in how we talk about them. On the arbitrary object view, instantial terms' logical behavior is a brute metaphysical fact.

A final issue is that, all else being equal, it would be desirable to account for instantial terms' syntactic and logical behavior without inflating our ontology in this way. Fine agrees. He admits that ultimately there are no arbitrary objects: Arbitrary object talk is ultimately to be reduced to a theory that "trades in more respectable entities".
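Before returning to Fine's reduction claim, the generic evaluation over a joint value range described above (PGA4 and the m/n example) can be made concrete. The following is a hedged toy illustration of my own, not Fine's formalism; it uses a finite sample of admissible pairs in place of all pairs of integers, which suffices for the 'm = n' and 'm ≠ n' verdicts.

```python
# Hedged toy (in the spirit of PGA4 above; the function names are mine, not Fine's).
# A condition on the pair <m, n> counts as generically true iff it holds under every
# admissible assignment of values drawn from the joint value range.
def generically_true(joint_value_range, condition):
    return all(condition(i, j) for (i, j) in joint_value_range)

# Finite stand-in for "all pairs of integers": contains both identical and distinct pairs.
sample_pairs = [(i, j) for i in range(-3, 4) for j in range(-3, 4)]

print(generically_true(sample_pairs, lambda i, j: i == j))  # False: 'm = n' is not generically true
print(generically_true(sample_pairs, lambda i, j: i != j))  # False: 'm ≠ n' is not generically true either
print(generically_true(sample_pairs, lambda i, j: (i + j) ** 2 == i**2 + 2*i*j + j**2))  # True, cf. (2)
```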
He likens himself to the nominalist about numbers who does not object to number theory per se, but only to its realistic construal—the Carnapian internal and external existence questions come apart. But if arbitrary object talk is to be reduced away, we are not told how this is to be done. Fine promises that "the final formulation of [his] theory will make it clear just how such a reduction might go". A reduction to set theory seems like the most natural option, but nothing in Fine's formal model theory requires this.[8] If one reduction is ultimately to be preferred over another, then why insist on an "intermediate level of theorizing" that posits arbitrary objects in the first place?

[Footnote 8: Fine's A-models append to classical models (i) a set A of objects (the arbitrary objects) disjoint from the classical domain, (ii) a dependency relation < on A, and (iii) a nonempty set V of value assignments of individuals in the classical domain to objects in A (the values arbitrary objects might simultaneously take). See Fine (1985b, pp. 24–26) for the restrictions possible A-models must satisfy in order to be models of actual instantial discourse.]

2.3 Arbitrary Reference

Magidor & Breckenridge (2012) take instantial terms to refer to individual objects of the familiar sort. In stipulating "Let n be an integer", the term 'n' randomly (arbitrarily) refers to one of the (individual) integers such as 2, 17, or 511. Which one? According to Magidor and Breckenridge, facts about which objects instantial terms refer to are not determined by any other facts: They are brute semantic facts. Moreover, we cannot know which object an instantial term refers to. Hence, their account of instantial terms is akin to epistemicist accounts of vagueness, which take the extensions of vague predicates like 'heap' and 'bald' to have precise boundaries whose locations we cannot know.

EI is rather straightforward to explain on this view: If there are Fs, we may introduce 'a' to arbitrarily refer to one of them. We may then derive 'a is F,' since we know that 'a' refers to an F even though we do not know which F 'a' refers to. But what about UG? From the fact that 2 is F it does not follow that all integers are F, and similarly for any other integer. How then can we conclude from the fact that n is F that all integers are F?

Magidor and Breckenridge's answer is that UG is an improper rule of inference. With a proper rule of inference, the truth of the premises directly guarantees the truth of the conclusion. With an improper rule of inference, the truth of the conclusion is guaranteed in some other way. For example, in conditional introduction, the truth of the conditional φ → ψ is guaranteed by the fact that one can derive ψ from φ in an auxiliary derivation, rather than by the truth of any premise. Similarly, Magidor and Breckenridge claim, in deriving 'n is F', one does not know which integer 'n' refers to. All one knows about the referent of 'n' are facts shared by all possible referents for 'n' (i.e. all integers). Hence, one knows that a valid derivation of 'n is F' does not turn on any facts about the referent of 'n' that are not shared by all possible referents (all integers). One is thereby secure in deriving 'All integers are F' from 'n is F'. The conclusion follows not from the truth of the premise, but from the fact that the premise was derived under epistemic impoverishment.

The arbitrary reference view comes with a number of advantages.
Unlike variablism, it accurately reflects the intuitive structure of instantial reasoning. Unlike the arbitrary object view, the theory is rather simple and does not require an inflated ontology of arbitrary objects or positing a generic/classical ambiguity. Moreover, its epistemicist explanation of instantial terms' distinctive logical behavior does not attribute that behavior to brute metaphysical differences between individual and arbitrary objects. Finally, classical logic is preserved.

On the other hand, the mechanism of brute arbitrary reference invites its own incredulous stare. Many find epistemicism about vague terms like 'heap' to be incredible because for any given number of grains n, there does not seem to be any reason why the threshold between heaps and nonheaps should be n rather than n + 1 or n – 1. The arbitrary reference theorist is in a similar position: For any given member x of the instantiating class of an instantial term 'a', why should 'a' refer to x rather than some other member?

Williamson (1994, 1996) attempts to defuse what skeptics find outrageous about epistemicism by endorsing supervenience theses that might underlie their incredulity. In contrast, Magidor and Breckenridge do not take facts about instantial terms' referents to supervene on any other facts, including facts about the use of instantial terms. Although one might try to develop a version of the view that allows for arbitrary reference facts to supervene on facts about use, Magidor and Breckenridge rightly regard such a view as dialectically inferior to their own: Such a view owes us an explanation of how the supervenience basis determines arbitrary reference facts in a way that both makes the latter unknowable and that serves as a technically adequate account of instantial terms in natural deduction. One might reject the possibility that we could know or offer an explanation of how the supervenience basis determines the arbitrary reference facts. But if one is going to allow that supervenience facts are brute, then why not remain content with the arbitrary reference facts being brute?

A different tack might be to argue against the supervenience of semantic facts on nonsemantic facts like facts about use. In a recent paper, Kearns & Magidor (2012) argue against the view that semantic facts supervene on nonsemantic facts. Unfortunately, addressing their many examples and arguments would take us too far afield. If semantic supervenience fails (I should register that I do not find their arguments compelling), that may undermine some of the resistance to arbitrary reference. Rather than address arguments for or against semantic supervenience, I reserve the issue for another time.

Another problem with arbitrary reference is the use of instantial terms in proof by contradiction, as when I stipulate "Let p be the largest prime number" for the purposes of showing that a contradiction follows. Since there is no largest prime, there is nothing to which 'p' may refer. But then why are instantial formulas in proof by contradiction cases meaningful? The arguments marshaled against instrumentalism about instantial terms seem no less powerful in proof by contradiction cases than in cases where the existential premise is true. We are as much in need of a positive semantic account of instantial terms in "bad" cases as in "good" ones, and yet the arbitrary reference view does not provide one. Magidor and Breckenridge acknowledge this difficulty for their view, but do not attempt to answer it.
Instead, they argue that this seems to be the same problem as the problem of empty names, which is a problem for everyone. Furthermore, they see no reason why a solution to that problem could not be applicable to cases where arbitrary reference is involved.

Let's grant that empty names are a problem for everyone. It does not follow that the arbitrary reference theorist is free to apply any solution to that problem she likes to proof by contradiction cases. For there is no guarantee that all solutions are equally compatible with the arbitrary reference view. We cannot hope to do justice here to the full range of possible solutions to the problem of empty names. Nor is this the place to assess which solutions would be most compatible with arbitrary reference. But one might reasonably worry that certain solutions to the problem of empty names may take on commitments which are better defended by, or give more credence to, arbitrary reference's competitors.

For example, one approach to empty names takes them to refer to non-existent objects (e.g., the non-existent largest prime). But such views tend to be structurally similar to Fine's arbitrary object view in their commitments and in their responses to potential challenges.[9] An arbitrary reference theorist who appealed to non-existent objects in proof by contradiction cases might find herself in the uncomfortable dialectical position of defending many of the same moves used by Fine in defense of the arbitrary object view, undermining her own objections to that view. Similarly, another approach to empty names takes them to be disguised descriptions (quantifiers). Such views tend to run into some of the same problems as King's CDQ view. An arbitrary reference theorist who applied a descriptivist solution to proof by contradiction cases may find it harder to sustain her objections to variablism.

Without a positive proposal on the table, all of this remains speculative. But I find it telling that the solution to the problem Magidor and Breckenridge tentatively propose exemplifies my worry. Magidor and Breckenridge suggest that "a promising technical solution" is to take an instantial term 'a' introduced by a false existential premise '∃xFx' to refer to an arbitrary object which is a non-F (that way, '~Fa' would entail '~∃xFx', which seems desirable).[10] But this solution appeals to arbitrary objects, despite Magidor and Breckenridge's own objections to that view! In summary, it is premature for Magidor and Breckenridge to write off proof by contradiction, as if the fact that empty reference is a problem for everyone entails that they are no worse off for it. I conclude that proof by contradiction cases remain a serious challenge for arbitrary reference.

[Footnote 9: Compare the "nuclear"/"extranuclear" properties distinction non-existent object theorists sometimes make with Fine's generic/classical distinction.]

[Footnote 10: It seems relatively clear in the context (p. 396) that Magidor and Breckenridge mean that 'a' refers to an arbitrary object that is a non-F rather than that 'a' arbitrarily refers to some ordinary, individual object that is a non-F. Moreover, the latter interpretation is absurd: in stipulating "Let p be the largest prime" it is incredible that 'p' should arbitrarily refer to some walrus, carpenter, or other non-prime.]

It will be the task of the next section to articulate an alternative view that can account for instantial terms regardless of whether the formulas that license their introductions are true, and that does not require the mechanism of brute arbitrary reference.

3. INSTANTIAL TERMS AND INDIVIDUAL CONCEPTS

We began with this dilemma: instantial terms purport to be about particulars, and yet instantial formulas seem to be truth-conditionally general. But as we have seen, taking instantial terms to be quantifiers misconstrues the intuitive structure of instantial reasoning and fails to capture the syntactic, inferential, and cognitive kinships between instantial and referring terms. Moreover, by construing instantial formulas as repetitions of the quantified formulas that license them, the variablist is at pains to explain why we engage in instantial reasoning in the first place. The arbitrary object view resolves this dilemma by taking instantial terms to refer to arbitrary objects. Since arbitrary objects are subject to the principle of generic attribution, formulas which predicate properties of those objects enjoy the requisite truth-conditional generality.

However, there is another way of bypassing the referential and quantificational horns of the present dilemma. Rather than identify instantial terms' semantic contents with those terms' referents, we may take them to be representational intermediaries—mental entities that exist between language and the objects and properties in the world that language is about. If these representational intermediaries are of a cognitive or psychological kind with those representations by means of which we mentally refer to and reason about particulars, that would help to explain why instantial reasoning purports to involve reference to particulars. In stipulating "Let p be a prime number" and then proceeding to reason about "p", it is as if I am thinking and reasoning about a particular prime. But this is mere pretense: there isn't any number in particular that I am referring to. In this way we seek to capture the kinships between instantial terms and referring terms without maintaining that instantial terms refer.

In this section I articulate an account along these lines. I then show how the resulting account of instantial reasoning can be used to provide a semantic account of instantial terms and formulas. At the end of the section I show how the resulting view avoids the difficulties confronting previous views.

3.1 Instantial Reasoning

Reasoning about particulars is cognitively easier than reasoning in generality. Our representational capacities are attuned, first and foremost, to the task of referring to particulars in our external environment. Perception of particular material objects constitutes our inaugural cognitive contact with the world, and it is to these that the infant's words and concepts first apply. General thoughts—thoughts paradigmatically expressed by grammatically and hence conceptually complex sentences either containing quantifiers, or stated in the generic mood—are arrived at only discursively by extrapolating from particular instances and so are neither epistemologically nor conceptually basic. As reasoning about general matters becomes increasingly complex, the task becomes increasingly cognitively difficult. Small wonder then that we find reasoning about general matters more cognitively taxing than reasoning about any particular instance.
However, if we could reappropriate these more fundamental representational capacities for general reasoning tasks, it stands to reason that those tasks would be greatly facilitated. The reason we engage in instantial reasoning, I propose, is that it involves the deployment of representational devices which are of a cognitive or psychological kind with those involved in genuine reference. And so it is much easier to arrive at general conclusions by deploying these devices than it is to think via quantificational concepts, or other devices of generality. The popularity of the narrative thought experiment in academic philosophy attests as much to our cognitive predilection to reason about general matters in the “particular mode.” One plausible account of how referential capacities manifest in cognition posits the existence of mental representations whose semantic function is to refer. Although philosophers and cognitive scientists who posit entities of this kind have given them different names (mental names, files, symbols, notions, and the like) and different theoretical roles to play, they are united by the core idea that an agent mentally refers to an object by tokening a mental representation which bears an appropriate semantic relation to that object. Agents predicate properties of objects by performing some analogue cognitive operation on their mental representations of those objects (although they needn’t be aware of this, or have any introspective access into their own cognitive architecture). In this paper, I propose to call such entities “individual concepts.” Although this term is often used for functions from Carnapian (1947) state descriptions to individual constants (from possible worlds to individuals), my use of the term is more in line with what some call non- descriptive senses or modes of presentation. Like Fregean senses, individual concepts serve to !20 individuate our cognitive perspectives on objects of thought. Furthermore, they are public rather than private mental entities: as I conceive of them, individual concepts are mental representation-types which different subjects token in cognition. As non-descriptive modes of presentation, reference for individual concepts is not secured by objects satisfying some set of descriptive conditions. Instead, individual concepts are, as mental analogues of proper names, devices of direct reference. However, the existence of individual concepts is independent of the existence of their referents: we can token individual concepts whose function is to refer, but which do not. Since individual concepts, but not their referents, serve to individuate our cognitive perspectives, tokening an individual concept which does not refer has the same cognitive significance for the subject as if it had—it is still as if one is thinking about, and predicating properties of, an object. Instantial reasoning, I propose, also involves individual concepts which do not refer. But unlike the individual concepts whose function is to refer but which do not (cases of malfunction), the individual concepts in instantial reasoning do not refer because they have taken on new, derived functions. This is a third kind of case, to be distinguished from uses of individual concepts which successfully refer and from those involved in reference failure. The notion of mental representations with “derived” functions comes from Recanati (2012). On his view, mental files are cognitive vehicles by means of which we refer to objects and store information about them. 
However, there are also cases where we keep track of information that purports to be about the “same” individual, but where there is no individual that we are actually thinking and talking about: (3) I don’t believe that Mary had a baby and named her ‘Sue.’ (4) Bill saw a unicorn. It had a golden mane. !21 In example (3), the scenario I am reported not to believe is one in which the individual Mary named ‘Sue’ is the “same” as the baby Mary had. Similarly, in example (4), the individual that is said to have a golden mane is the “same” unicorn Bill is reported to see. However, there is not any genuine co-reference here. Rather, these sentences exploit our representational capacities 11 to keep track of information about objects that we refer to, by fashioning them to new cognitive tasks in which we merely purport to. For Recanati, these are cases where mental files have taken on new, derived functions from their primary, referential one. 12 My suggestion for instantial reasoning can be understood in a similar fashion. When we stipulate Let p be a prime number, we engage in a kind of simulated reference. We imagine a scenario like one in which we refer to a particular prime, and then proceed to reason and draw conclusions as we would in such a scenario. But we do not actually refer to any prime; this is mere pretense. The individual concepts deployed in instantial reasoning are not genuinely 13 Karttunen (1976) used these examples to argue that indefinite noun phrases denote novel “discourse 11 referents,” and that definite noun phrases denote familiar discourse referents introduced by their indefinite antecedents. Since “discourse referents” are distinguished from “genuine referents,” Karttunen hoped to capture the sense in which the noun phrases in examples (3) and (4) purport to be about the same respective individuals, while avoiding the claim that these terms involve genuine reference. My own individual concepts may be understood as playing the role of discourse referents. However, I avoid that terminology because it suggests that discourse referents are the objects of discourse (reference targets). But my individual concepts are neither the objects of instantial reasoning nor the referents of instantial terms. Rather, they are types of mental representations. Just as we might expect there to be some evolutionary story which explains how mental files evolved 12 from their basic use in perception to their use in ordinary referential communication, Recanati anticipates that there is some evolutionary story which explains how mental files have taken on new, derived functions in examples (3) and (4). I do not mean to suggest that thinkers pretend to refer with instantial terms in the way that one might 13 pretend to refer to some object in a bag one knows is empty. All I mean by this metaphor is (i) that, from the first person perspective, to introduce a term ‘p’ for an arbitrary prime and then proceed to reason about ‘p,’ it feels much like it would if one were actually referring to a particular prime, and (ii) that instantial reasoners needn’t think that they are actually successfully referring to a particular prime. !22 referential. But by virtue of being ontogenically rooted in individual concepts that are, they have referential purport: for the thinker, it is still as if one is referring to, and thinking about, a particular individual. 
On my view, the semantic function of the individual concepts tokened in instantial reasoning is not to refer, but to designate the objects in the relevant instantiating class. Individual concepts tokened in instantial reasoning encode descriptive information I call a background condition, as determined by the stipulative procedure by which that concept is introduced, and we shall say that an object is in the value range of an individual concept iff it satisfies the background condition encoded by that concept. In our previous example, the term 'p' was introduced via the stipulation Let p be a prime number. Hence the individual concept deployed when we think about p encodes the background condition being a prime number, and its value range (extension) is the set of objects satisfying this condition: that is, all the primes. More generally, tokens of the individual concept associated with an instantial term 'a,' as introduced by a descriptive condition φ(x), have as their value range the set of objects that satisfy φ(x).

To understand what it is to think p is even on the present view, we need to take a step back. According to the present framework, an agent predicates a property F of an object o by performing some analogue operation on F and an individual concept m which refers to o. This operation is distinct from predication itself, since predicating a property of an object is different from predicating that property of one's individual concept of that object. Hence, the operation has to be given some other name; following Soames (2015), I propose to call it mediate predication, since it is an operation on the cognitive structures or representational intermediaries by means of which one predicates properties of objects. [Footnote 14: One could equally well speak of operating on individual concepts and predicate concepts (rather than properties), where the relation between a predicate concept and the property it is about is analogous to the relation between an individual concept and the individual it is about. Nothing turns on this.] To mediately predicate a property F of an individual concept m is thereby to (directly) predicate F of the object or objects m designates. Hence, when one mediately predicates a property F of an individual concept m which successfully refers to an object o, one represents the world as it actually is (i.e. veridically) iff o is indeed F.

However, mediate predication can also occur with individual concepts whose function is to refer but which do not. In mediately predicating F of an individual concept which does not refer, I do not thereby predicate F of anything, and so my thought does not represent any object as being F. Since the relevant individual concept does not refer, my thought does not have veridicality or truth conditions. However, phenomenologically, mediate predication is all that matters: it is still as if I have predicated F of some object.

One can also mediately predicate properties of individual concepts which have taken on derived functions. To mediately predicate a property of an individual concept in instantial reasoning is to predicate that property of the objects in its value range. If m is an individual concept with value range R, to mediately predicate the property F of m is to thereby predicate F of all of the objects (if any) in R.
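The two cases just distinguished can be displayed side by side. The notation MPRED below is only my compact shorthand for the operation of mediate predication, not part of the official apparatus, and the non-emptiness requirement in the second clause anticipates the truth conditions stated in the next paragraph.

\[
\begin{aligned}
&\text{referential concept } m \text{ referring to } o: && \mathrm{MPRED}(F, m)\ \text{is veridical iff } o \text{ is } F;\\
&\text{derived-function concept } m \text{ with value range } R: && \mathrm{MPRED}(F, m)\ \text{is true iff } R \neq \varnothing \text{ and every } o \in R \text{ is } F.
\end{aligned}
\]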
To think p is even, to reuse our previous example, is to mediately predicate the property being even of the individual concept associated with ‘p,’ and thereby (directly) predicate that property of all of the primes. Hence the thought is true iff (i) the concept’s value range is non-empty, and (ii) all of the objects in the value range (all of the prime numbers) are even. Soames (2015) uses the notion of mediate predication in his account of the propositional contents of 15 formulas containing function- and argument-terms. !24 Comparison with Frege may be helpful: the Fregean sense of a definite description ‘the φ’ determines an object o as its Bedeutung iff o uniquely φ’s. The Fregean thought expressed by the sentence ‘the φ is ψ’ determines the truth-value The True for the sentence iff the sense of ‘the φ’ determines a unique object o as its Bedeutung, and o is ψ. If nothing uniquely φ’s, then the Fregean thought does not have a Bedeutung, and the thought fails to determine a truth-value. Still, at the level of cognitive significance, it is still as if one has referred to, and predicated a property, of some object. The primary difference between this Fregean picture and the individual concepts tokened in instantial reasoning consists in their quantificational force: for the latter the quantificational force is universal. Individual concepts designate a range of values, rather than one uniquely. Hence mediately predicating some property of one of these concepts is to predicate that property of all the objects designated. In summary, the individual concepts tokened in instantial reasoning have a descriptive/ general semantical function in designating the set of objects which satisfy the descriptive background information they encode. But by virtue of being ontogenically rooted in more basic kinds of referential concepts involved in perception, these concepts retain a singular cognitive function, which is to say that they have a similar role to play in cognition to those concepts whose function is to refer. In particular, in deploying individual concepts in instantial reasoning, it still feels, from the first person perspective, as if one is thinking about a particular object. 3.2 Instantial Discourse In the previous section I have been talking about instantial reasoning. But we already have the ingredients for a semantic account of instantial discourse. The account is this. Individual concepts are the semantic contents of instantial terms. (We shall say that instantial !25 terms express individual concepts in order to distinguish individual concepts from instantial terms’ referents.) In introducing an instantial term ‘a’ via a descriptive condition ‘φ(x),’ ‘a’ semantically expresses an individual concept which encodes ‘φ(x)’ as its background condition. Individual concepts are the contribution instantial terms make to the contents of formulas of which they are a part. Since individual concepts are not the objects of instantial reasoning, but representational intermediaries, the contents of instantial formulas do not predicate properties of individual concepts. Instead, the content of the instantial formula ‘ψ(a)’ mediately predicates the property expressed by ‘ψ’ of the individual concept expressed by ‘a.’ Hence the content 16 expressed by ‘ψ(a)’ is true iff the value range of the individual concept expressed by ‘a’ is non- empty, and all of the objects in this value range are in the extension of ‘ψ(x)’. 
If a formula is true iff the content it expresses is, then this entails that the formula 'ψ(a)' is true iff (i) there is an x such that φ(x), and (ii) all x such that φ(x) are ψ(x), as wanted. [Footnote 16: If you like, we may take agents' acts of mediate predication to be fundamental, and understand talk of contents mediately predicating a property of an individual concept to mean that agents who entertain that content mediately predicate that property of that individual concept.]

Let us make these ideas more precise. I reserve the symbols a, b, c, … for instantial terms, m, n, p, … for individual constants, and x, y, z, … for variables. (Instantial terms function syntactically just as individual constants do.) Suppose, to fix ideas, we are working with a Gentzen-style deduction system equipped with the following rule-schemas, where 'a' is an instantial term, 't' is any term, and 'P(a/x)' the formula that results from replacing all occurrences of 'x' in 'P' with 'a':

EI:
    (∃x)P
    a | P(a/x)
      | …
      | Q
    Q

UG:
    a | …
      | P(a/x)
    (∀x)P

EG:
    P(t/x)
    (∃x)P

UI:
    (∀x)P
    P(t/x)

EI is subject to the following restrictions: 'a' may not occur in '(∃x)P,' in 'Q,' or in any open suppositions. Similarly, UG requires that 'a' not occur in '(∀x)P' or in any open suppositions. Scope lines are used to indicate which suppositions are in force at a given line in the derivation. Formulas to the right of a given scope line are said to be "subordinate" to that scope line. A scope line announcing the introduction of an instantial term is "flagged" by that term to its left. Every occurrence of an instantial formula must be subordinate, for each instantial term it contains, to a scope line flagged by that term, and a scope line flagged by an instantial term may be introduced only concurrently with that term (at the same line). [Footnote 17: We may think of EG and UI as each embodying two distinct rule-schemas: one in which 't' is an individual constant, another in which it is an instantial term. In the latter case, 'P(t/x)' must be subordinate to a 't'-flagged scope line.]

Let the instantiating formula of an instantial term be the formula where the term is introduced. We distinguish between (i) an instantial term itself, as individuated by its instantiating formula, (ii) its occurrences in the derivation, and (iii) the alphabetic letter used to symbolize the instantial term in that application (but which in a different derivation could be put to a different use). On the present semantics, the individual concepts expressed by instantial terms are individuated by that term's instantiating formula: two instantial term-occurrences express the same individual concept iff they have the same instantiating formula.

In such a derivation system there are three ways of introducing instantial terms: via EI, via UI, or in an auxiliary supposition (to be used in an application of UG). These correspond to different background conditions encoded by the individual concepts those terms express. In applications of EI or UI where 'a' is introduced from the quantified formulas '(∃x)P' or '(∀x)P,' respectively, the background condition of the individual concept expressed by 'a' is the formula 'P,' and its value range is the extension of 'P' (the set of all the P's). In contrast, the individual concepts expressed by terms introduced in an auxiliary supposition for use in UG have the completely general value range. The background condition for such concepts is null, and so is vacuously satisfied by everything. However, this is not yet right.
We have been playing fast and loose with the distinction between individual concepts qua mental representation types and their tokens. But strictly speaking, it is not individual concepts themselves that encode background conditions, but their tokens. This is because tokens of individual concepts can encode different background conditions at different times. As the derivation progresses, new background conditions may be added in auxiliary suppositions which change the range of values that a given instantial term represents. For example, an instantial term ‘a’ might initially represent the integers, and then later in the derivation we make the auxiliary supposition that a is even. Occurrences of ‘a’ at lines subordinate to this supposition express the same individual concept as before, but the derivational context supplies the added background condition being even. In order for subjects to count as having understood a given line in a derivation, they must token in thought the relevant individual concepts expressed by any instantial terms at that line, and those concepts must !28 encode the appropriate background conditions as determined by the foregoing derivational structure. Formally, auxiliary suppositions temporarily update background conditions by conjunction introduction: where ‘a’ is an instantial term whose occurrences prior to an auxiliary assumption ‘ψ(a)’ express an individual concept tokens of which encode the background condition φ(x), occurrences of ‘a’ at lines subordinate to this supposition express the same individual concept tokens of which now encode φ(x) & ψ(x). In order to keep track of which individual concepts go with which background conditions, it will be convenient to attach the symbol designating an individual concept as a subscript to the corresponding variables in the symbolic representation of the background conditions its tokens encode. For example, the background conditions encoded by tokens of an individual concept c representing the prime numbers would be written as prime(x c ). We may suppose that the contents of instantial formulas are, like Russellian propositions, structured complexes that serve as the primary bearers of monadic truth and falsity and whose constituents are the individual semantic contributions of the parts of the instantial formulas that express them. We further suppose for the purposes of this dissertation that other terms and predicates in an instantial formula contribute the objects, functions, and properties in the world that they are about to the structured proposition expressed by the formula, as on standard neo-Russellian views. Although a commitment to structured 18 propositions is controversial, a full defense of this framework will not be pursued here. Although allowing for structured propositions which contain both objects (referents) and individual 18 concepts (representational intermediaries) is potentially problematic, this is not the place to enter into debates between neo-Fregeans and neo-Russellians about the semantic contents of referring terms. For purposes of simplification and streamlining presentation, I sidestep this issue by presenting my view as a modification of the Russellian view, but one could presumably also go in for a fully Fregean version. !29 Together, the individual concepts, objects, and properties expressed by the parts of an instantial formula form a structured proposition which may be represented in the usual fashion by ordered n-tuples of symbols inside angular brackets and separated by commas. 
However, since the propositions expressed by instantial formulas mediately predicate properties of individual concepts, we need to distinguish these propositions symbolically from those which involve (directly) predicating properties of individual concepts. We shall use the convention of using boldface to symbolize individual concepts in propositions expressed by instantial formulas, in order to indicate that they are objects of mediate predication and to avoid Gray's Elegy-type problems.

A further divergence from Russellian propositions is required. This is because individual concepts help determine the truth value for propositions of which they are a part only in conjunction with the background conditions they encode, and as we have seen, a given individual concept (or rather, its tokens) may encode different background conditions at different times. Hence, the structured complex of individual concepts, objects, and properties expressed by an instantial formula does not yet determine a unique truth value. In order to retain the conception of propositions as primary bearers of monadic truth and falsity, the proposition expressed by an instantial formula is to be individuated not just by the structured complex whose constituents are the contents of the formula's terms, but also by the background conditions those individual concepts' tokens encode.

Background conditions then serve to individuate the propositions expressed by instantial formulas, and we shall say that the propositions expressed by instantial formulas have two distinct "components": one a structured complex of objects, properties, and individual concepts contributed by the parts of the instantial formula, called the structured complex; the other the proposition's background conditions (how these are determined will be addressed momentarily). The two components will be represented by two expressions enclosed in angular brackets and separated by a dividing colon < : >, where to the left of the colon is the structured complex, symbolized by a nested structure of ordered n-tuples inside angular brackets, and to the right of the colon are the background conditions for the proposition. This is illustrated below:

    Instantial formula                 Two-component proposition
    'φ(a)'        expresses            <   <φ*, <c>>   :   ∆   >
                                           structured      background
                                           complex         conditions

where c is the individual concept expressed by instantial term 'a' and which encodes background conditions ∆, and φ* is the property expressed by 'φ.' The letter c is boldface in order to indicate that it is the target of mediate rather than immediate (direct) predication by φ*.

We shall say that an individual concept a is dependent on an individual concept b, written a < b, if the background condition encoded by a is a function of that of b. An individual concept a is independent iff it is not dependent on any other individual concepts. The dependency relation between two individual concepts is determined by their introduction. If a previously introduced instantial term 'b' occurs in the instantiating formula of a new instantial term 'a,' then the concept expressed by 'a' is dependent on that of 'b.' For example, if b is an integer and a the square of b, then the individual concept expressed by 'a' is dependent on that of 'b.' It is a trivial exercise to show that the deduction system defined above entails that the dependency relation is a strict partial order: it is transitive, asymmetric, and irreflexive.
Moreover, the converse relation is well-founded: there is no infinite chain of individual concepts a_1, a_2, a_3, … for which a_1 < a_2 < a_3 < ….

For any proposition p, let p's closure [p] be the set of all individual concepts in p, plus any other individual concepts that those concepts are dependent on. Since the converse of the dependency relation is well-founded, and since every proposition has a finite number of constituents, every proposition has a finite closure. The background conditions of a proposition p are simply the set of the background conditions encoded by the individual concepts in [p].

We can finally assign truth conditions to these propositions. We first assume a standard truth-theory for ordinary propositions containing no individual concepts. For example, <φ*, <o>> is true iff o instantiates property φ*; <NEG, <φ*, <o>>> is true iff <φ*, <o>> is not true, and so on. We then trivially extend this truth-theory to two-component propositions with no individual concepts: where p is a two-component proposition < SC_p : ∅ > with structured complex SC_p containing no individual concepts and null background conditions, p is true iff SC_p is true. Next, where p is a two-component proposition with structured complex SC_p and c some constituent of SC_p, let the substitution SC_p(x/c) of x for c be the result of substituting x for all occurrences of c in SC_p. Let us say that where ∆ is a background condition (open formula) containing variables x_1, x_2, x_3, …, an n-tuple of objects <o_1, o_2, o_3, …> satisfies ∆ whenever <o_1, o_2, o_3, …> is in the extension of ∆. We can now state the truth conditions of a two-component proposition p = < SC_p : ∆ > whose structured complex SC_p contains individual concepts a, b, c, …, and where ∆ is an open formula containing free variables x_a, x_b, x_c, …: p is true iff (i) some n-tuple satisfies the background conditions ∆(x_a, x_b, x_c, …) in p, and (ii) for all n-tuples <o_1, o_2, o_3, …> satisfying the background conditions ∆(x_a, x_b, x_c, …), the substitution SC_p(o_1/a, o_2/b, o_3/c, …) of <o_1, o_2, o_3, …> for <a, b, c, …> is true. (If a sentence is false only if its negation is true, and if a sentence is true (false) iff the proposition it expresses is, then we get that p is false only if (i) its background conditions are satisfied, and (ii) for every n-tuple <o_1, o_2, o_3, …> satisfying its background conditions, <NEG, <SC_p(o_1/a, o_2/b, o_3/c, …)>> is true. As with the arbitrary object view, classical negation is violated. Unlike the arbitrary object view, the violation has a semantic rather than brute metaphysical explanation.)

To illustrate, take Argument 2 from section 2.1. At line 5, 'John loves Mary' expresses the proposition (5), where loves is the loving relation, and j and m are the individual concepts expressed by 'John' and 'Mary,' respectively:

(5) < <loves, <j, m>> : x_j is a person who loves everyone & y_m is a person >

(5) is true iff (i) there are persons x and y such that x loves everyone and (ii) for all persons x and y such that x loves everyone, x loves y. These are the intuitively correct truth conditions for this formula. Moreover, note that since 'John' did not occur in line 4 where 'Mary' was introduced, the individual concepts expressed by those terms are independent. Hence their order of introduction makes no difference to the meaning of line 5, which was a problem for variablist views.
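To see how clauses (i) and (ii) play out on a small finite model, here is a minimal sketch in Python. It is not part of the dissertation's formal apparatus; the function names (two_comp_true) and the toy domain are my own invention for illustration. A structured complex is represented as a relation checker plus a tuple of slots, where each slot is either an ordinary object or the name of an individual concept, and the background conditions are given as a test on assignments of objects to concepts.

from itertools import product

# A toy model for illustration: a domain and extensions for the predicates used.
DOMAIN = {"alice", "bob", "carol", 7}
PERSON = {"alice", "bob", "carol"}
LOVES = {("alice", "alice"), ("alice", "bob"), ("alice", "carol"), ("bob", "alice")}

def loves_everyone(x):
    return all((x, y) in LOVES for y in PERSON)

def two_comp_true(structured_complex, concepts, background):
    """Truth for a two-component proposition < SC_p : Delta >.

    structured_complex: (relation_checker, slots); each slot is either an
        object of the domain or the name of an individual concept.
    concepts: the individual-concept names occurring in SC_p.
    background: function from an assignment (dict concept -> object) to bool,
        playing the role of the open formula Delta(x_a, x_b, ...).
    """
    relation, slots = structured_complex
    satisfiers = [dict(zip(concepts, combo))
                  for combo in product(DOMAIN, repeat=len(concepts))
                  if background(dict(zip(concepts, combo)))]
    if not satisfiers:              # clause (i): the background conditions must be satisfied
        return False
    for assignment in satisfiers:   # clause (ii): every satisfier verifies the substitution
        filled = [assignment.get(s, s) for s in slots]
        if not relation(*filled):
            return False
    return True

# Proposition (5): < <loves, <j, m>> : x_j is a person who loves everyone & y_m is a person >
prop5 = ((lambda x, y: (x, y) in LOVES), ["j", "m"])
background5 = lambda g: (g["j"] in PERSON and loves_everyone(g["j"])
                         and g["m"] in PERSON)

print(two_comp_true(prop5, ["j", "m"], background5))
# True on this toy model: the only person who loves everyone is alice,
# and alice loves every person.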
Having provided truth conditions for propositions expressed by instantial formulas, the account of standard inference rules is rather straightforward. Given that there are Fs, we may introduce via EI an instantial term ‘a’ which expresses an individual concept encoding the background condition being an F. But then we are secure in inferring the formula ‘F(a),’ since by the above truth conditions, that formula will be true iff all Fs are F. Similarly, if a term ‘a’ expresses the completely general individual concept (one that encodes the null background condition), then an instantial formula ‘G(a)’ is true iff everything Gs. Hence we are secure in inferring the universal generalization ‘∀xGx.’ However, under the !33 context of the auxiliary supposition ‘H(a),’ the inference from ‘G(a)’ to its universal generalization does not go through. This is because the value range of ‘a’ has been restricted under the auxiliary supposition to the set of objects that satisfy ‘H(x).’ Under the auxiliary supposition, the formula ‘G(a)’ is true iff all Hs are Gs. It does not follow from this that everything Gs. It is an easy exercise to prove that these results fully generalize. 3.3 Advantages over Previous Views An advantage of the present view is that it does not require a commitment to the mysterious metaphysics of arbitrary objects or arbitrary reference. Moreover, recall that referential views’ theoretical claims about instantial terms’ referents, as stated in the material mode, are not part of what instantial terms are used to say in ordinary reasoning or deduction systems. An additional advantage of taking the semantic contents of instantial terms to be representational intermediaries is that since they are not the referents of instantial terms, theoretical claims about individual concepts may not be expressed by instantial formulas in the material mode. Hence unlike referential views, the present view does not require us to admit new “classical” or “purely referential” roles for instantial terms to play in order to state its theoretical claims. One advantage of the arbitrary object view was that it can explain the purpose of various restrictions on EI and UG in different deduction systems by taking instantial terms to refer to different kinds of arbitrary objects. The present view shares in this benefit: different restrictions correspond to different different background conditions. For example, if we take the instantial term ‘a’ in applications ∃xφ(x)/φ(a) of EI to be as above, but instead take the term ‘b’ in applications φ(b)/∀xφ(x) of UG to encode the background condition being a non-φ, then the !34 restrictions on EI and UG as formulated by Quine (1950) would result. On the other hand, if we interpret EI as the same, but interpret the instantial term in UG as having a null background condition, then the Copi-Kalish restrictions result. 19 It remains to show that we now have the resources to address difficulties confronting variablist views. We have already seen how the present view handles auxiliary suppositions: ‘Now suppose n is even’ is not semantically equivalent to ‘Now suppose all numbers are even.’ Rather, the effect of auxiliary suppositions is to restrict the value range of the relevant instantial term (say, from integers to even integers). Fine objects to this proposal: “in supposing that an arbitrary number n is even, I do not seem intuitively, to be restricting its values to those numbers that are even. 
Rather I seem to be following through the fate of particular arbitrary number, even one that ‘might’ be odd, and supposing that it is ‘even’’’ (1985b, p. 75). But the present view has the resources to account for this intuition that different occurrences of the same instantial term co-refer. Since individual concepts in instantial reasoning are of a cognitive kind with those involved in genuine reference, instantial terms purport to refer to particulars. Moreover, different occurrences of the same instantial term express the same individual concept from line to line in a derivation. If we maintain the neo-Fregean line that what it is for two terms to purport to co-refer (if they refer at all) just is for them to express the same individual concept (mental file, symbol, whatever), then different occurrences of the same instantial term will purport to co-refer for agents who grasp the meaning of those terms and their derivational context. In instantial reasoning it is not only 20 King (1991, p. 248) shows how his view also shares in this advantage. 19 Fine (2007) provides a number of objections to this kind of neo-Fregean approach to de jure co-reference 20 (what he calls “representing as the same”). I do not find these objections compelling, but space does not permit me to address them here. !35 as if we are reasoning about a particular object, but it is also as if we are reasoning about the same object from line to line. Our intuitions that instantial terms co-refer are artifacts of the cognitive system’s re-deployment of individual concepts fit for reference for new cognitive tasks in which reference is merely putative. Another worry for variablism was whether it can capture the intuitive structure of instantial reasoning. Like variablism, the present view takes the truth-conditional import of 4, 5 / 6 in Argument 1, which appears to be an instance of conjunction introduction, to be the same as 1, 2 / 7, which is not. The present view can explain why this is the case. In thinking a is F, we mediately predicate the property expressed by ‘F’ of the individual concept expressed by ‘a,’ and similarly for ‘a is G.’ This occurs regardless of whether the individual concept expressed by ‘a’ refers. Hence the inference ‘a is F’, ‘a is G’ / ‘a is F and a is G’ will involve the same cognitive operation on the same kinds of cognitive structures, regardless of whether ‘a’ is a name or an instantial term. To engage in instantial reasoning is not to think a singular thought in the sense that the thought is directly or immediately about some particular individual. But nor is to think a general or descriptive thought of the ordinary sort, as when I think Every integer is F. Rather, the thoughts expressed by instantial formulas are of a cognitive kind with paradigmatically singular thoughts, and so may appear to exemplify basic inference patterns like conjunction introduction even when their truth-conditional import diverges from the surface structure of instantial inferences. But if instantial formulas are truth-conditionally equivalent to quantified formulas, why do we engage in instantial reasoning in the first place? What is the point of 4, 5 / 6 in Argument 1 if those lines are truth-conditionally equivalent to 1, 2 / 7, respectively? The reason we engage in instantial reasoning on the present view is simply that reasoning with individual concepts is !36 easier than reasoning via quantificational locutions. 
And this is because individual concepts are of a cognitive or psychological kind with those engaged in reference. Instantial reasoning “repackages” explicitly quantificational thought-contents into quasi-singular contents for which valid inferential patterns are more familiar and easily grasped. Now, attentive readers may have noticed that, whereas Fine and Magidor and Breckenridge also take instantial terms to have universal quantificational force, on King’s 21 view, EI-licensed terms are existential rather than universal quantifiers. The source of this discrepancy can be traced to the fact that on King’s view, unbound anaphoric pronouns (e.g., donkey anaphora) are also context-dependent quantifiers (CDQs). Moreover, the data suggest that anaphoric pronouns with existentially quantified antecedents have existential force: (5) Ralph owns a donkey. He vaccinates it. (6) Every man who has a quarter will put it in the parking meter. (5) does not require that Ralph vaccinate every donkey he owns, and (6) does not require that every man with a quarter put all of his quarters in the meter. In these sentences, ‘it’ appears to function like an existential quantifier. Since King is especially impressed by the “felt connection” between instantial terms and donkey anaphora, he concludes that EI-licensed instantial terms should also be existential quantifiers. Although I think King is right that instantial terms and pronominal anaphora are cut from the same semantic cloth, and that unbound anaphoric pronouns with existential antecedents have (default) existential force, his interpretation of EI-licensed instantial terms embodies a subtle mistake. To see this, compare the following: For Fine, the instantial formula ‘φ(a)’ is true iff all of the objects in a’s value range satisfy ‘φ(x).’ 21 Similarly, Magidor and Breckenridge take ‘φ(a)’ to be assertible iff all of the possible referents of ‘a’ satisfy ‘φ(x).’ !37 (7) f(x) has an integer solution n. Furthermore, n is even. (8) Let ‘n’ be an (arbitrary) integer solution of f(x). We now show that n is even… (7) says that f(x) has an integer solution, and that f(x) has an integer solution that is even. It does not follow that all integer solutions of f(x) are even: the relevant quantificational force of ‘n’ is existential. The second sentence of (7) is not an inference from the first, but an elaboration of it. In contrast, (8) sets up an argument from the premise that f(x) has an integer solution to the conclusion that all integer solutions of f(x) are even. The stated target is not to show that some integer solution of f(x) is even, but that all are. The relevant quantificational force is universal. These observations generalize. Although both instantial terms and (non-deictic) anaphoric pronouns with existential antecedents purport to refer to a particular member of the instantiating class, instantial terms purport to refer to an arbitrary particular member of the instantiating class—one representative of the class as a whole. Pace King, the terms used in instantial reasoning—the kind of reasoning invoked by stipulations of the form ‘Consider some integer’, ‘Let ‘a’ be an arbitrary…’, and so on, and which natural deduction systems are meant to model—carry universal, rather than existential, force. This is appropriately reflected in the foregoing semantics. Whether donkey pronouns are of a semantic kind with instantial terms is an interesting and plausible hypothesis, but one that must recognize subtle differences in how such terms are used. 
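The contrast between (7) and (8) can be made explicit with rough first-order paraphrases. These renderings are only approximations offered for illustration; in particular, reading 'n is an integer solution of f(x)' as the condition f(n) = 0 is an assumption on my part.

\[
\begin{aligned}
\text{(7)}\quad & \exists n\,\bigl(\mathrm{integer}(n) \wedge f(n) = 0 \wedge \mathrm{even}(n)\bigr) && \text{existential force: some integer solution is even}\\
\text{(8)}\quad & \forall n\,\bigl(\mathrm{integer}(n) \wedge f(n) = 0 \rightarrow \mathrm{even}(n)\bigr) && \text{universal force: the stated target of the proof}
\end{aligned}
\]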
In the remainder of this dissertation, we explore how the current program may be extended to unbound pronominal anaphora. !38 CHAPTER 2: THE PROBLEM OF UNBOUND ANAPHORIC PRONOUNS: A CRITICAL OVERVIEW At the end of the previous chapter, we noted that there appears to be a close connection between instantial terms in natural deduction and pronouns in natural language. In this chapter and the next, I investigate to what extent this connection holds up to closer scrutiny. We begin by first considering the problem posed by so-called unbound anaphoric pronouns in more detail and by critically examining various proposed solutions. The point of this survey will be threefold. First, we establish the major data that a theory of unbound anaphoric pronouns should explain. Second, we note various problems that confront previous approaches to the problem. Finally, we gain important insights from these previous approaches. The task of the next chapter will be to incorporate these insights in the framework of structured two- component propositions developed in Chapter 1. That framework will require certain modification to handle the much richer and more complex behavior of pronouns in natural language. But I hope to show that the connection between instantial terms and at least a broad class of unbound anaphoric pronouns can be captured in this framework. 1. INTRODUCTION TO THE PROBLEM An occurrence of a pronoun is anaphoric if its interpretation depends on that of another expression. Many anaphoric pronouns function as referring terms. In ‘Mark swims but he can’t ski,’ the pronoun ‘he’ refers to Mark. Other anaphoric pronouns function as variables bound by a quantifier antecedent. In ‘Every man loves his mother,’ ‘his’ functions as a variable !39 bound by ‘every man.’ However, some anaphoric pronouns do not fit comfortably in either of these two camps: (1) Ralph owns a donkey. Harry vaccinates it. A speaker who uttered (1) need not have any particular donkey in mind that she intends to be talking about. The speaker might have deduced (1) on purely general grounds from her beliefs that Ralph is a good donkey owner, that good donkey owners generally vaccinate, and that Harry is the local vet. Harry might even vaccinate many of Ralph’s donkeys. Still, a speaker who knew this could appropriately assert (1) in response to the question, “Do you know if there are any donkeys in the area that has been vaccinated?” If ‘it’ functions as a referring term in such contexts, then what does it refer to? Either it refers to an arbitrary one of Ralph’s donkeys (despite the fact that the speaker doesn’t have any particular donkey in mind ), or it refers to 1 something else (an “arbitrary” donkey, say), or it is a referring term without a referent. Each of these proposals is problematic for the same reasons that they were problematic for instantial terms. If the pronoun in (1) is not a referring term, then perhaps it is a bound variable. But if the pronoun is a bound variable, what quantifier binds it? The existential antecedent ‘a donkey’ is a plausible candidate, but this suggestion meets an immediate problem: the pronoun lies outside of that indefinite’s syntactic scope, in this case in a separate sentence altogether. How can a variable in one sentence be bound by a quantifier in another? One option might be to Even if the speaker had a particular donkey in mind—call it Eeyore—it is still not obvious that the 1 second sentence is true iff Harry vaccinates Eeyore. 
Suppose that Harry does vaccinate one of Ralph’s donkeys, but not Eeyore. Those whose considered judgments comport with Kripke’s (1977) assessment of Donnellan’s (1966) referential/attributive distinction are likely to think (1) is still true in such cases. !40 allow variable binding across the sentence barrier. This was Geach’s (1962) approach, who took 2 (1) to be semantically equivalent to (1a), whose logical form is given by (1b): (1a) Ralph owns a donkey that is vaccinated by Harry. (1b) [An x: Donkey x](Ralph owns x & Harry vaccinates x) Geach’s proposal takes the primary unit of semantic interpretation to be the entire discourse, rather than the individual sentences that compose it. But discourses change over time. Were the sentences in (1) to be followed by further pronoun-occurrences anaphoric on ‘a donkey’ as in (1 + ) below, Geach’s proposal would presumably require that the quantifier’s scope be continually extended as the discourse progressed in order to bind variables in later sentences. On this analysis, (1 + ) is semantically equivalent to (1 + a), which is assigned the logical form (1 + b): (1 + ) Ralph owns a donkey. Harry vaccinates it. It is brown spotted. (1 + a) Ralph owns a donkey that is vaccinated by Harry and is brown spotted. (1 + b) [An x: Donkey x](Ralph owns x & Harry vaccinates x & x is brown spotted) Geach’s suggestion then is most naturally interpreted as assigning logical forms to sequences of sentences or to discourses-at-a-time. So in Example (1 + ), we have a logical form for the first sentence by itself, the logical form (1b) for the first two sentences, and the logical form (1 + b) for all three sentences together. Continuations of this discourse would be treated analogously. The problem with this suggestion is that violates the highly natural idea that each of the sentences in (1 + ) is used to make a distinct claim, each of which can be assigned its own truth conditions. It is an unfortunate consequence of this conception of linguistic meaning that, were the discourse to include any false information, as if say, Harry does not vaccinate any of Ralph’s donkeys, then all continuations of the discourse will come out false. We want a notion Geach’s example involves a conjunction, but it is not importantly different from (1). 2 !41 of truth that is fine-grained enough to distinguish the case where the second sentence of (1 + ) is false and the third sentence is true, from the case where both the second and third sentences are false. Insofar as it makes sense to speak of entire discourses as being true or false, this is because discourses come to represent the world by virtue of being composed by individual claims which are representationally more basic. By taking discourses rather than individual sentences to be truth conditionally primary, this proposal gets things backwards. A different possibility then, still in the Geachean spirit, would be to take (1 + b) as an analysis of the second sentence in (1) rather than of the discourse as a whole, and similarly to assign (1 + b) to the final sentence of (1 + ). Although this would avoid the fine grainedness problem mentioned above, both versions of the Geachean proposal are insufficiently general: (2) Ralph owns few donkeys and Harry vaccinates them. (3) *Ralph owns no donkeys and Harry vaccinates them. According to Geach’s own proposal, ‘few donkeys’ takes wide scope over the conjunction so that (2) is semantically equivalent to the claim that few donkeys are both owned by Ralph and vaccinated by Harry. 
According to our imagined Geachean alternative, only the second conjunct in (2) is equivalent to this claim. But a little reflection reveals that this is neither what (2) nor its second conjunct means. As Evans (1977) notes, (2)’s most natural reading requires that Harry vaccinate all the donkeys Ralph owns, but this is not required if ‘few donkeys’ takes wide scope over the conjunction. Neither version of the Geachean view gets this reading. Moreover, (2) intuitively says that there are few donkeys owned by Ralph, not the weaker claim given by the present proposals that there are few donkeys both owned by Ralph and vaccinated by Harry. Indeed, these approaches will get the wrong result whenever the determiner of the pronoun’s antecedent quantifier is not right monotone increasing (MON↑), where a type <1, 1> determiner !42 Q is MON↑ on a domain M iff Q(A, B) entails Q(A, C) whenever B ⊆ C for all A,B,C ⊆ M. 3 Finally, by parity with prior cases, both versions should predict that either (3) or its second conjunct is equivalent to the perfectly acceptable ‘No donkeys are owned by Ralph and vaccinated by Harry.’ Since Example (3) is marked, neither (3) nor its second conjunct is equivalent to the interpretation predicted by the bound variable approach. Both Geach’s approach and its imagined analogue fail. (1) is an example of discourse anaphora, but it is part of a broader class of problem cases. In conditional and quantified donkey sentences like (4) and (5) below, we see a similar problem: (4) If Ralph owns a donkey, he vaccinates it. (5) Every farmer who owns a donkey vaccinates it. In contexts where the pronoun-occurrences in (4) and (5) are not used to talk about a specific donkey, they do not refer. But the pronouns also do not occur inside the syntactic scope of their quantifier antecedents and so cannot be bound by them either. More precisely, let’s say that an element A of a syntactic structure S c-commands another element B just in case B either is or is contained by the sister node to A in S. Syntactic theory suggests that a quantified NP can only bind variables that are c-commanded by it. In the syntactic structures of (4) and (5), the antecedents are too deeply embedded to c-command their respective pronouns. This means that if the pronouns are bound by their quantifier antecedents, those QPs must move from their Consider a discourse of the form ‘Q(A, B) & Cp’ where Q is a type <1,1> determiner, C is a 1-place 3 predicate, and p is a pronoun anaphoric on the quantifier Q(A, B). By conjunction introduction, the discourse Q(A, B) & Cp entails its first conjunct Q(A, B). On Geach’s proposal, the logical form of the discourse is Q(A, B ∩ C). So we have Q(A, B ∩ C) entails Q(A, B). Since B ∩ C ⊆ B, and A,B,C were arbitrary, Q must be right monotone increasing. Conversely, if Q is not right monotone increasing then the truth of the discourse does not entail the truth of the first conjunct as it should. So Geach’s analysis fails whenever Q is not right monotone increasing. !43 positions in the surface structures to c-command positions at LF (quantifier raising). However, the if-clause in (4) and relative clause in (5) form syntactic “scope islands” which do not allow internal elements to “scope out,” as the following pairs illustrate: (4a) *If Ralph owns every donkey he vaccinates it. (4b) Every donkey is an x such that if Ralph owns x, he vaccinates x. (5a) *Every farmer who owns every donkey vaccinates it. (5b) Every donkey is an x such that every farmer who owns x vaccinates x. 
If wide-scoping (quantifier raising) were allowed, (4a) and (5a) should be equivalent to (4b) and (5b), respectively. But they are not equivalent. Moreover, even if ‘a donkey’ could somehow scope out in (4) and (5), that would still not yield the correct truth conditions. (4) intuitively requires that Ralph vaccinate all of his donkeys, whereas wide-scoping only requires that there be some donkey of his that he vaccinates. Similarly, (5) is false if there are donkey-owning farmers in the domain of quantification that never vaccinate; but on the wide-scope analysis, (5) is true as long as some donkey is vaccinated by his owner(s). As with (1), the pronoun- occurrences in (4) and (5) cannot function as variables bound by their quantifier antecedents because they lie outside of the syntactic scopes. Let’s call occurrences of pronouns in which the speaker of the context of utterance does not have any particular individual in mind as the individual she intends to be talking about with her use of the pronoun no speaker reference uses. I will use the name “unbound anaphoric pronouns” (UAPs) or “unbound anaphora” to refer to no speaker reference uses of anaphoric pronouns which are not c-commanded by their antecedent quantifiers. As illustrated 4 Note that it is compatible with this definition that UAPs function as bound variables after all (perhaps 4 by quantifiers not phonetically realized in the syntax, as some maintain). !44 by these examples, the basic problem of UAPs is that they do not appear to be analyzable as either free variables or as referring terms. In the past several decades, there has been a flurry of semantic research surrounding UAPs but little consensus about their semantics. Among the most prominent approaches to the problem are the following: 1. Dynamic approaches (Kamp 1981, Heim 1982, Groenendijk and Stockhof 1991) 2. E-type views (Evans 1977, Heim 1990) 3. D-type views (Parsons 1978, Cooper 1979, Davies 1981, Neale 1990, Heim and Kratzer 1998) 5 4. NP-deletion (Elbourne 2005) 5. CDQ views (Wilson 1984, King 1987, 1993, 1994, 2004) In the following sections, I provide a critical survey of these five major approaches to the problem of unbound anaphora. The point of this exercise is not merely to identify problems with previous approaches to the problem, but also to identify what I take to be their major insights. In the next chapter, I propose a framework for dealing with unbound anaphora which I take to incorporate the insights, while avoiding the drawbacks, of previous approaches to the problem. Views that assimilate UAPs to definite descriptions are often called “E-type” in the literature, a term 5 coined by Evans (1977). This usage is unfortunate, because Evans took UAPs to be (rigid) referring terms, not definite descriptions. It is an unfortunate consequence of the current usage that many people attribute to Evans the latter view. Part of the blame rests with Evans himself: as Soames points out (1989, 2006), in conflict with how he informally describes his view, Evans’ formal proposal treats UAPs as definite descriptions which take wide scope over the largest clause containing the pronoun but not its antecedent. In this chapter I follow Neale (1990) in distinguishing “D-type” views which assimilate UAPs to definite descriptions from “E-type” views which take UAPs to be rigid referring terms. !45 2. 
DYNAMIC VIEWS 2.1 Discourse Representation Theory In the early 1980s, Hans Kamp (1981) and Irene Heim (1982) independently developed two similar theories that were motivated in part by the desire to provide a satisfactory account of pronominal anaphora. According to views in this tradition, discourses are assigned abstract structures that contain information about both the objects under discussion and the predicative material that has been attributed to those objects. As the discourse progresses, new objects and predicative material are added to this information structure. The meaning of a sentence is identified not with a proposition, as on traditional views, but with a context-change potential, i.e., the effect that an assertion of the sentence has on updating the information structure. How we construe these structures is not important for our immediate purposes, but it is natural to think of them as constituting a level of linguistic representation that mediates between language and the objects in the world—an abstract model of the mental representations conversational participants deploy when engaging in conversation. Indeed, Heim likens the cognitive task of a conversational participant to that of a file-clerk: “to understand an utterance is to keep a file which, at every time in the course of the utterance, contains the information that has so far been conveyed by the utterance” (1983, p. 167). Accordingly, the information structures in Heim’s file change semantics are called “files,” suggesting that they may be regarded as corresponding to something like a mental file. And in Kamp’s discourse representation theory (DRT), the information structures are called discourse representation structures (DRSs). Although there are important differences between DRT and file-change semantics, those differences do not concern us in here. Hence in what follows, I will present a simplified version of DRT which abstracts !46 away from some of the important differences between the two views. Interested readers should consult Kamp (1981), Kamp and Reyle (1993), and Heim (1982, 1983) for details. A DRS consists of two components: a set of discourse referents representing the objects under discussion, and a set of DRS-conditions representing the predicative information that has been attributed to those discourse referents in the course of the conversation. Whereas traditional semantic theorizing takes indefinite NPs to be existential quantifiers, in DRT indefinite NPs are likened to the variables of formal logic. In particular, the semantic function of indefinites is to introduce novel discourse referents into the conversation’s DRS and to record on the DRS any descriptive information predicated of the indefinite. To illustrate, consider the following: (6) A woman was bitten by a dog. She hit it. The effect of ‘a woman’ and ‘a dog’ in (6) is to introduce into the conversation’s DRS two novel discourse referents x and y, and to introduce the predicative information x is a woman and y is a dog. The verb contributes the information that x was bitten by y. We write this DRS as follows: (6a) [ x, y : woman(x), dog(y), bit(y, x)] In contrast, definite NPs serve to update the DRS-conditions by adding predicative information that is attributed to their antecedents’ discourse referents. The second sentence in (6) thus updates (6a) to yield (6b): (6b) [ x, y : woman(x), dog(y), bit(y, x), hit(x, y)] DRSs are then given a model-theoretic interpretation in the following way. 
An embedding function f for a DRS m in a model M is a partial variable assignment of elements in the domain of M to the discourse referents in m. Let's say that an embedding function f verifies a DRS m in a model M iff, for any discourse referent x in m, f(x) satisfies the DRS-conditions in m for x on the model-theoretic interpretation of those conditions given by M. (Alternatively, we say that m has a proper embedding in M iff there is an embedding function that verifies it in M.) A discourse representation structure m is true in a model M iff there is an embedding function f that verifies m in M. So (6b) is true in a model M iff there is some function f mapping x and y to objects in the domain such that according to M, f(x) is a woman, f(y) is a dog, and f(x) both hit and was bitten by f(y). In a word, (6b) is true on the intended model iff some woman hits and is bitten by some dog, as wanted. A whole discourse such as (6) is true in a model M iff its associated DRS is true in M. Since earlier stages of the discourse will correspond to different DRSs, those discourse-stages will have different truth conditions, as on Geach's proposal.

Turning now to donkey sentences, DRT treats conditionals and quantification by nesting DRSs inside each other. Here is the DRS for the conditional donkey sentence (4):

(4c) [ [ x, y : Ralph(x), donkey(y), owns(x, y)] ⇒ [ : vaccinates(x, y)] ]

(4c) is itself a pair of DRSs linked by the conditional operator ⇒. This is to be understood as follows: (4c) is true in a model M iff some embedding function f verifies (4c) on M, where f verifies (4c) on M iff every extension of f that verifies [ x, y : Ralph(x), donkey(y), owns(x, y)] on M can itself be extended to verify [ : vaccinates(x, y)] on M. In a word, (4c) is true on the intended model iff Ralph vaccinates every donkey he owns, as wanted. In this way, DRT can easily explain how the existential 'a donkey' in (4) seems to get universal force once embedded inside the conditional. The explanation comes not from any semantic contribution made by the indefinite itself, since indefinites in DRT are not quantifiers, but rather devices for introducing novel discourse referents and DRS-conditions on those referents. Instead, the apparent universal force of 'a donkey' in (4) comes from the model-theoretic interpretation of the conditional operator ⇒. Other explicit or covert adverbs of quantification ('sometimes,' 'usually,' 'never,' 'rarely,' etc.), we may suppose, contribute conditional operators with different quantificational forces.

Quantified sentences work much like conditionals, so that the relative clause donkey sentence (5) gets assigned the following DRS:

(5c) [ [ x, y : farmer(x), donkey(y), owns(x, y)] ∀ [ : vaccinates(x, y)] ]

(5c) is much like (4c) except now we have the ∀ operator, which may be understood as follows. As always, (5c) is true in a model M iff some embedding function f verifies (5c) on M. An embedding function f verifies (5c) in a model M iff every extension of f that verifies [ x, y : farmer(x), donkey(y), owns(x, y)] on M can itself be extended to verify [ : vaccinates(x, y)] on M. In a word, (5c) is true on the intended model iff every farmer who owns a donkey vaccinates every donkey he owns, as wanted. Other quantificational forces are similarly captured by substituting for 'every extension of f...' in the previous clause the appropriate quantificational determiner ('some,' 'most,' 'few,' etc.).
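As an illustration of how proper embeddings work, here is a small Python sketch of the simplified DRT just described. It is my own gloss, not code from any existing DRT implementation, and the model, predicate extensions, and function names are invented for the example. It checks a duplex condition of the form (4c)/(5c) by testing that every embedding verifying the antecedent DRS can be extended to verify the consequent DRS.

from itertools import product

# Toy model: Ralph owns two donkeys and vaccinates only one of them.
DOMAIN = ["ralph", "d1", "d2"]
EXT = {
    "ralph":      {("ralph",)},
    "donkey":     {("d1",), ("d2",)},
    "owns":       {("ralph", "d1"), ("ralph", "d2")},
    "vaccinates": {("ralph", "d1")},
}

def satisfies(cond, f):
    """A DRS-condition is a (predicate, referents) pair, e.g. ('owns', ('x', 'y'))."""
    pred, refs = cond
    return tuple(f[r] for r in refs) in EXT[pred]

def extensions(f, new_refs):
    """All embeddings extending f with values for the new discourse referents."""
    for values in product(DOMAIN, repeat=len(new_refs)):
        yield {**f, **dict(zip(new_refs, values))}

def verifies_simple(drs, f):
    refs, conds = drs
    return any(all(satisfies(c, g) for c in conds) for g in extensions(f, refs))

def true_duplex(antecedent, consequent):
    """Truth clause for [ K1 => K2 ] (and for the ALL operator): every embedding
    verifying the antecedent DRS can be extended to verify the consequent DRS."""
    ant_refs, ant_conds = antecedent
    return all(verifies_simple(consequent, g)
               for g in extensions({}, ant_refs)
               if all(satisfies(c, g) for c in ant_conds))

# (4c): [ [x, y : Ralph(x), donkey(y), owns(x, y)] => [ : vaccinates(x, y)] ]
antecedent = (("x", "y"), [("ralph", ("x",)), ("donkey", ("y",)), ("owns", ("x", "y"))])
consequent = ((), [("vaccinates", ("x", "y"))])

print(true_duplex(antecedent, consequent))
# False on this model: Ralph owns d2 but does not vaccinate it.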
DRT provides a powerful framework for dealing with the semantics and pragmatics of unbound anaphora, and has been further developed to shed light on other linguistic phenomena including tense, presupposition, and attitude reports. However, the framework also comes with a number of challenges. In the following sections §2.2-2.3 I address a select set of problems confronting DRT: one set of problems involves the interpretation of quantification and conditionals, and another set of problems having to do with the limitations of the notion of discourse truth. In §2.4 I argue that dynamic predicate logic is not in a better position to address these problems for DRT. !49 2.2 Discursus: Quantification and Conditionals One problem concerns so-called weak readings of donkey sentences, as briefly touched on at the end of the previous chapter. Compare (4) and (5), which require that every donkey-owning farmer vaccinate all of his or her donkeys, with (7)–(9): (7) Everyone who has a credit card used it to pay his bill. (8) Most men that have a nice suit will wear it to work. (9) No student who borrowed a book from Peter returned it. The pronouns in these sentences have existential rather than universal force: in (7), there is no requirement that everyone pay his bill with all of his credit cards, and mutatis mutandis for (8) and (9). (7)–(9) are naturally interpreted as follows: (7’) Everyone who has a credit card used a credit card he has to pay his bill. (8’) Most men that have a nice suit will wear a suit he has to work. (9’) No student who borrowed a book from Peter returned a book he borrowed. The existence of “weak” (existential) readings in relative clause donkey sentences is 6 widespread, and in many cases our intuitions about their truth conditions are just as robust as in sentences that exhibit “strong” (universal) readings. Moreover, the availability of weak versus strong readings in relative clause donkey sentences obeys a certain degree of systematicity. Kanazawa (1994) observes that which readings of quantified donkey sentences are available is predicted in part by the monotonicity properties of the head quantificational determiner. In particular, where the head determiner is left and right monotone increasing (↑MON↑: a, some, several, at least n), or left and right monotone decreasing (↓MON↓: no, at most n), only the weak reading is available. But where the head determiner is left monotone decreasing and right The terms “weak”/”strong” come from Chierchia (1995). They are somewhat misleading in that the 6 weak reading of (9), for example, entails the strong reading of (9). !50 monotone increasing (↓MON↑: every, all, free-choice any), or left monotone increasing and right monotone decreasing (↑MON↓: not every, not all), the strong reading is preferred. (However 7 pragmatic factors also play a large role in determining whether the strong reading is available when the determiner is ↓MON↑ or ↑MON↓. For example, in (7) ‘every’ is ↓MON↑ but the implausibility that anyone would pay their bill with more than one credit card makes the strong reading unavailable. ) 8 Kanazawa hypothesizes that monotonicity properties “select” those interpretations which preserve familiar, valid inferential patterns in non-anaphoric sentences. 
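Kanazawa's classification rests on the standard generalized-quantifier notions of left and right monotonicity. The following Python sketch, which is mine and purely illustrative, gives set-theoretic denotations for a few type <1,1> determiners and checks two of the relevant properties by brute force on a small domain, using the definition given earlier: Q is right monotone increasing iff Q(A, B) and B ⊆ C imply Q(A, C), and left monotone decreasing iff Q(A, B) and A′ ⊆ A imply Q(A′, B).

from itertools import chain, combinations

DOMAIN = frozenset(range(4))

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Type <1,1> determiner denotations as relations between restrictor A and scope B.
DETERMINERS = {
    "every":     lambda A, B: A <= B,
    "some":      lambda A, B: bool(A & B),
    "no":        lambda A, B: not (A & B),
    "at most 2": lambda A, B: len(A & B) <= 2,
}

def right_mon_increasing(Q):
    # Q(A, B) and B <= C imply Q(A, C)
    return all(not (Q(A, B) and B <= C) or Q(A, C)
               for A in subsets(DOMAIN) for B in subsets(DOMAIN) for C in subsets(DOMAIN))

def left_mon_decreasing(Q):
    # Q(A, B) and A2 <= A imply Q(A2, B)
    return all(not (Q(A, B) and A2 <= A) or Q(A2, B)
               for A in subsets(DOMAIN) for B in subsets(DOMAIN) for A2 in subsets(DOMAIN))

for name, Q in DETERMINERS.items():
    print(name, "right MON up:", right_mon_increasing(Q),
          "| left MON down:", left_mon_decreasing(Q))
# Expected on this domain: every -> (True, True); some -> (True, False);
# no -> (False, True); at most 2 -> (False, True). That is, 'every' is downward on
# its left argument and upward on its right, 'some' fails left-downward monotonicity,
# and 'no' and 'at most 2' fail right-upward monotonicity, matching the classification above.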
For example, the determiner 'no' licenses the inference from (10a) to (10b), given that 'no' is ↓MON and the set of farmers who own a donkey includes (is a superset of) the set of farmers who own a female donkey, and mutatis mutandis for the inference (10a) / (10c):

(10a) No farmer who owns a donkey is poor.
(10b) No farmer who owns a female donkey is poor.
(10c) No farmer who owns and beats a donkey is poor.

These "left monotonicity inferences" are also preserved in donkey sentences, as (11a) / (11b) and (11a) / (11c) illustrate:

(11a) No farmer who owns a donkey vaccinates it.
(11b) No farmer who owns a female donkey vaccinates it.
(11c) No farmer who owns and beats a donkey vaccinates it.

However, note that these inferences are valid if and only if (11a)–(11c) are given a weak interpretation, in accordance with Kanazawa's monotonicity principles.[9] More generally, left-monotonicity inferences are preserved in quantified donkey sentences (when the inference affects the relative clause and hence the interpretation of the pronoun[10]) iff those sentences are interpreted in accordance with the monotonicity principles given above.[11]

[9] The following is a countermodel for the inferences on a strong interpretation: there are two farmers f1 and f2 and four donkeys d1–d4. f1 owns d1 and d2, and f2 owns d3 and d4. f1 vaccinates d1 but not d2, and f2 vaccinates d3 but not d4. f1 beats d1 but not d2, and f2 beats d4 but not d3. d3 is a female donkey. Then neither f1 nor f2 vaccinates every donkey he owns (11a is true on the strong reading), but f1 vaccinates every donkey he owns and beats, and f2 vaccinates every female donkey he owns (11b and 11c are false on the strong reading).

[10] Note that the left-monotonicity inference from (11a) to 'No young farmer who owns a donkey vaccinates it' goes through on either interpretation, because 'young' does not modify the relative clause and so does not affect the interpretation of the pronoun.

[11] A qualification is required in order to exclude inferences like that from 'No man who has a house waters it' to 'No man who has a garden waters it', which is arguably invalid even in cases where anyone who owns a garden owns a house. The reason that this inference is invalid on either interpretation, despite the fact that 'no' is ↓MON↓, is that 'man who owns a house' and 'man who owns a garden' provide different possible antecedents for subsequent pronominal anaphora, and so their truth-conditional contribution is not exhausted by their extensions. Hence, the relevant notion of "inclusion" at issue in left-monotonicity inferences in donkey sentences is not simply the subset/superset relation between the extensions of the relevant set-terms (N' constituents). Kanazawa introduces a notion of inclusion that compares not just the extensions of the relevant set-terms but also, for each member of the extension, the range of possible values of subsequent donkey pronouns (for example: for each donkey-owning farmer, his donkeys). These complications are not important for our purposes. Interested readers should consult Kanazawa (1994) §3.1 for further details.
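The countermodel in footnote 9 can be checked mechanically. The following sketch is an illustrative rendering in Python, with the extensions hard-coded from the footnote; the strong reading of 'No farmer who owns an F donkey vaccinates it' is implemented as 'no farmer who owns an F donkey vaccinates every F donkey he owns'.

farmers    = {'f1', 'f2'}
owns       = {('f1', 'd1'), ('f1', 'd2'), ('f2', 'd3'), ('f2', 'd4')}
vaccinates = {('f1', 'd1'), ('f2', 'd3')}
beats      = {('f1', 'd1'), ('f2', 'd4')}
female     = {'d3'}

def strong_no(restrictor):
    """Strong reading of 'No farmer who owns a donkey meeting `restrictor` vaccinates it':
    no farmer who owns a relevant donkey vaccinates every relevant donkey he owns."""
    for f in farmers:
        relevant = {d for (x, d) in owns if x == f and restrictor(f, d)}
        if relevant and all((f, d) in vaccinates for d in relevant):
            return False
    return True

print(strong_no(lambda f, d: True))             # (11a): True on the strong reading
print(strong_no(lambda f, d: d in female))      # (11b): False -- f2 vaccinates his only female donkey
print(strong_no(lambda f, d: (f, d) in beats))  # (11c): False -- f1 vaccinates the donkey he owns and beats

On the weak reading ('...vaccinates some F donkey he owns'), by contrast, (11a) is already false in this model, so the model is no threat to the inferences from (11a) to (11b) and (11c).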
Another familiar inference pattern is the traditional square of opposition: 'Some A is not B' and 'Not every A is B' are truth-conditionally equivalent, as are 'Every A is not B' and 'No A is B'. These familiar inference patterns are preserved in donkey sentences if interpreted in accordance with Kanazawa's monotonicity principles, as (12) and (13) illustrate:

(12) 'Some farmer who owns a donkey does not vaccinate it' ↔ 'Not every farmer who owns a donkey vaccinates it'
(13) 'Every farmer who owns a donkey does not vaccinate it' ↔ 'No farmer who owns a donkey vaccinates it'

Conversely, if in (12) 'Some farmer…' were given a strong interpretation and 'Not every farmer…' a weak one, in conflict with Kanazawa's monotonicity principles, then those sentences would no longer be truth-conditionally equivalent, and mutatis mutandis for the sentences in (13).[12]

[12] The countermodel in footnote 9 also works here.

Kanazawa goes on to show that other properties of determiners (symmetry, for example) that license familiar inference patterns also select for those interpretations of donkey sentences that preserve the inference patterns. What is striking about all of these examples is that intuitions about the validity of donkey inferences are often much more robust than intuitions about the truth conditions of the individual donkey sentences involved, as (12) and (13) illustrate. This suggests that intuitions about the truth conditions of donkey sentences may be implicitly tracking intuitions about what would validate certain donkey inferences, rather than the other way around. I find this explanation compelling.

The point of this brief digression is to emphasize that weak interpretations of quantified donkey sentences are robust and widespread, and that the distribution of readings in donkey sentences has a plausible and systematic explanation in the preservation of basic inference patterns. Any theory of unbound anaphoric pronouns should accommodate these data. DRT's interpretations of quantified donkey sentences are materially equivalent to the default interpretations predicted by Kanazawa's monotonicity principles when the head determiner is 'every', 'some', or any Boolean combination thereof (i.e. no, not every, some but not every, etc.; up to isomorphism, there are 16). However, DRT does not make the right predictions for quantified donkey sentences headed by other determiners like 'at most 3', or when pragmatic factors "override" the default strong interpretations, as in (7) and (8).

This predictive failure is related to another problem DRT faces: the so-called proportion problem. Suppose there are exactly 10 donkey-owning farmers. 9 of the farmers own just one donkey each, and they never vaccinate.
The last farmer owns 50 donkeys and vaccinates all of his donkeys. In this case (14) is intuitively false, since only 1 of the 10 farmers vaccinates, while (15) is true, since 50 of the 59 donkeys are vaccinated:

(14) Most farmers who own a donkey vaccinate it.
(15) Most donkeys owned by a farmer are vaccinated by him.

However, DRT predicts that (14) and (15) are both true, since most farmer-donkey pairs x and y such that x owns y are such that x vaccinates y. The problem, in a word, is that DRT takes quantification to be pairwise quantification and so is symmetric (unselective) with respect to the head noun and any other nouns in the relative clause. As a result, (14) and (15) come out as materially equivalent. This gets the wrong result.

Analogues of the weak interpretation of unbound anaphora and the proportion problem also arise for conditionals. Whereas (4), repeated below, is naturally interpreted as requiring that Ralph vaccinate all of his donkeys, (16) does not require that anyone pay his bill with all of his credit cards:

(4) If Ralph owns a donkey, he vaccinates it.
(16) If a person has a credit card, he uses it to pay his bill.

Since basic DRT treats conditionals with explicit or covert adverbs of quantification as quantifying over embedding functions which validate the antecedent's DRS, it essentially understands donkey conditionals in terms of pairwise quantification. This fails to capture the intuitive truth conditions of (16) and of many other cases. Similarly, the most salient reading of (17) takes it to be materially equivalent to (14),

(17) If a farmer owns a donkey, he usually vaccinates it.

and so is false in the same case described above even though most farmer-donkey pairs vaccinate. Hence basic DRT also fails to account for conditionals whose adverb of quantification is not semantically equivalent to some Boolean combination of 'always' and 'sometimes.'

However, conditional donkey sentences are sensitive to contextual features in a way not exhibited by their relative clause counterparts. For relative clause donkey sentences, the domain of quantification is always instances of the head NP (i.e. farmers who own a donkey rather than donkeys owned by a farmer). In donkey conditionals, by contrast, the relevant domain of quantification needn't be given by the head of the conditional antecedent, and may be determined by pragmatic or contextual factors. For example, in (18), due to Chierchia (1995), the topicalization of 'dolphin' makes the salient interpretation of (18) one in which we are quantifying over dolphins rather than trainers or trainer-dolphin pairs; the existence of a dumb dolphin trained by many trainers to no avail is not enough to make (18) false:

(18) Dolphins are truly remarkable. If a trainer trains a dolphin, she usually makes it do incredible things.

In contrast, if we were to make trainers the topic of the discourse by prefacing the conditional with "The trainers here are really incredible", (18) would be taken to quantify over trainers instead of dolphins. In that linguistic context, if many trainers were unable to get the same dumb dolphin to perform tricks, (18) would be intuitively false. In further cases still, a pairwise interpretation seems to accord with our intuitions, as (19) illustrates (Chierchia 1995):

(19) When a student gives a paper to a professor, she expects her to comment on it promptly.

In this case, the relevant quantification seems to be over student-paper-professor triples.
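Before considering how DRT might respond to these problems, the proportion problem can be made vivid with a quick computation over the ten-farmer scenario described above. The sketch below is an illustrative toy in Python; the names and the majority threshold are my own assumptions.

# 9 farmers own one unvaccinated donkey each; the 10th owns 50 and vaccinates them all.
owns = {f'farmer{i}': [f'donkey{i}'] for i in range(1, 10)}
owns['farmer10'] = [f'donkey{i}' for i in range(10, 60)]
vaccinates = {('farmer10', d) for d in owns['farmer10']}

pairs = [(f, d) for f, ds in owns.items() for d in ds]

# (14) quantifies over farmers: most donkey-owning farmers vaccinate.
farmers_ok = sum(1 for f, ds in owns.items() if all((f, d) in vaccinates for d in ds))
print(farmers_ok, '/', len(owns))        # 1 / 10  -> (14) is intuitively false

# (15) quantifies over donkeys: most farmer-owned donkeys are vaccinated.
donkeys_ok = sum(1 for f, d in pairs if (f, d) in vaccinates)
print(donkeys_ok, '/', len(pairs))       # 50 / 59 -> (15) is intuitively true

# DRT's unselective reading quantifies over farmer-donkey pairs.
print(donkeys_ok / len(pairs) > 0.5)     # True -- the wrong verdict for (14)

Quantifying over farmers and quantifying over farmer-donkey pairs give opposite verdicts here, which is precisely the distinction DRT's unselective pairwise quantification cannot draw.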
Although these problems facing DRT are difficult, they are not insurmountable. To start with quantified donkey sentences, one easy fix is to replace the pairwise universal operator ∀ in (5c) with a different universal operator ∀x, which selectively binds the values that the discourse referent contributed by the head noun can take. An embedding function f verifies the DRS [[ x, y : farmer(x), donkey(y), owns(x, y)] ∀x [ : vaccinates(x, y)]] in a model M iff for every individual o in the domain of M and for every extension g of f such that g(x) = o and g verifies [ x, y : farmer(x), donkey(y), owns(x, y)] on M, every extension h of g verifies [ : vaccinates(x, y)] on M. This takes care of the proportion problem, since the operator ∀x is selective with respect to the discourse referent corresponding to 'farmer', rather than that of 'donkey'.

However, note that the above semantics only provides a "strong" reading of (5). To capture weak readings, we would presumably have to alter the semantics of the universal operator accordingly: an embedding function f verifies [[ x, y : person(x), credit-card(y), has(x, y)] ∀x [ : uses-to-pay(x, y)]] in a model M iff for every individual o in the domain of M for which there is an extension g of f such that g(x) = o and g verifies [ x, y : person(x), credit-card(y), has(x, y)] on M, there is an extension h of f such that h(x) = o and h verifies both [ x, y : person(x), credit-card(y), has(x, y)] and [ : uses-to-pay(x, y)] on M.

However, this proposal is problematic, since it locates the source of weak readings in the interpretation of the universal quantifier operator itself. But intuitively, it seems that weak interpretations are due not to the interpretation of the universal quantifier, but to the interpretation of the donkey pronouns themselves. This is why the glosses given for (7)-(9) were (7')-(9'), which substitute indefinites for the corresponding pronouns in (7)-(9), rather than replacing 'every' with some other determiner I know not what. Indeed, this intuition is supported by the existence of "mixed" cases like (20), whose gloss is given by (20'):

(20) Everyone with a1 Metro card who took a2 bus used it1 when boarding it2.
(20') Everyone with a Metro card who took a bus used a Metro card he has when boarding every bus he took.

In (20), there is no expectation that anyone used all of his Metro cards at a time. Still, the natural reading requires every cardholder to use a Metro card on every bus he boarded. Interpreting the universal quantifier strongly would require cardholders to use all of their Metro cards when boarding, and interpreting the universal quantifier weakly would allow cardholders to use cash on some bus trips. I have the intuition that (20) has neither of these truth conditions on its most natural reading. These mixed cases strongly argue against accounting for weak interpretations in the semantic interpretation of quantification, rather than in the pronouns themselves.

A DR-theorist may try the same strategy as before, replacing the universal operator ∀x, which selectively binds the values of the discourse referent x, with a new universal operator ∀xz, which selectively binds the values of two discourse referents x and z, where x is contributed by 'everyone' and z by 'a bus'. But such a proposal would also need to heed the lessons of the proportion problem by quantifying over the values of x and z asymmetrically (non-pairwise).
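To see the difference between the two selective clauses just given, here is a sketch in Python of a strong and a weak version of ∀x, evaluated over a toy model in which one person holds two credit cards but pays with only one of them. The model, the names, and the helper functions are my own illustrative assumptions rather than part of any official DRT formulation.

from itertools import product

DOMAIN = {'al', 'bo', 'c1', 'c2', 'c3'}
INTERP = {
    'person':      {('al',), ('bo',)},
    'credit_card': {('c1',), ('c2',), ('c3',)},
    'has':         {('al', 'c1'), ('al', 'c2'), ('bo', 'c3')},
    'uses_to_pay': {('al', 'c1'), ('bo', 'c3')},   # each pays with just one of his cards
}

def satisfies(g, conds):
    return all(tuple(g[v] for v in args) in INTERP[p] for p, *args in conds)

def extensions(g, refs):
    new = [v for v in refs if v not in g]
    for vals in product(DOMAIN, repeat=len(new)):
        yield {**g, **dict(zip(new, vals))}

def forall_x_strong(f, x, left, right):
    """[[LEFT] forall-x [RIGHT]], strong: for every value o of x, every extension
    verifying LEFT also extends to verify RIGHT."""
    l_refs, l_conds = left
    r_refs, r_conds = right
    return all(any(satisfies(h, r_conds) for h in extensions(g, r_refs))
               for o in DOMAIN
               for g in extensions({**f, x: o}, l_refs)
               if satisfies(g, l_conds))

def forall_x_weak(f, x, left, right):
    """[[LEFT] forall-x [RIGHT]], weak: for every value o of x admitting some
    extension verifying LEFT, some such extension also verifies RIGHT."""
    l_refs, l_conds = left
    r_refs, r_conds = right
    def verifying(o):
        return [g for g in extensions({**f, x: o}, l_refs) if satisfies(g, l_conds)]
    return all(any(any(satisfies(h, r_conds) for h in extensions(g, r_refs))
                   for g in verifying(o))
               for o in DOMAIN if verifying(o))

left  = (['x', 'y'], [('person', 'x'), ('credit_card', 'y'), ('has', 'x', 'y')])
right = ([], [('uses_to_pay', 'x', 'y')])
print(forall_x_strong({}, 'x', left, right))  # False: al does not pay with both of his cards
print(forall_x_weak({}, 'x', left, right))    # True: everyone pays with a card he has, the weak reading of (7)

Both clauses are perfectly definable; the worry raised above is that multiplying operators of this kind puts the explanatory burden in the wrong place.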
The doubly selective operator may be implemented by the following laborious construction: an embedding function f verifies on a model M the DRS [[ x, y, z : person(x), Metro-card(y), bus(z), has(x, y), took(x, z)] ∀xz [ : uses-when-boarding(x, y, z)]] (abbreviated as [[LEFT] ∀xz [RIGHT]]) iff for every individual o in the domain of M for which there is an extension g of f such that g(x) = o and g verifies [LEFT] on M, and for every individual o' in the domain of M for which there is an extension h of f such that h(x) = o, h(z) = o', and h verifies [LEFT] on M, there is an extension j of f such that j(x) = o, j(z) = o', and j verifies both [LEFT] and [RIGHT] on M.

Although this strategy is clearly productive, it is amazing how many different things the determiner 'every' can be made to mean! From case to case, contextual factors, like the fact that no one uses more than one Metro card to pay for the bus, affect the interpretation of 'every'. It is therefore unclear to me whether the resulting view could be fleshed out in a way that would be plausible from the perspective of language acquisition and production. All else being equal, it is preferable that we generate the range of interpretations from a finite and hence learnable set of DRS-construction algorithms; attributing to speakers the ability to generate a large range of distinct selective universal quantifiers is something that itself calls for explanation. In any case, deriving the different readings of donkey pronouns from the interpretation of quantification is intuitively the wrong place to look, for reasons already discussed.

Turning now to conditionals, it seems to me that, despite the foregoing problems, the basic idea due to Lewis (1975) of analyzing conditional constructions with adverbs of quantification in terms of quantification over cases (or situations or embedding functions, what have you) whose restriction is given by the conditional antecedent is essentially correct. The problem is not with this basic idea, but with its implementation. As with the quantified donkey sentences, one solution to the foregoing problems is to alter the semantics for the conditional operator ⇒ so that it is selective with respect to the relevant discourse referent (dolphin vs. trainer, e.g.) whose values are asymmetrically bound by the conditional operator. However, similar concerns apply as before: intuitively, the conditional construction with a given adverb of quantification does not change its meaning from case to case. Identifying the source of the varying interpretations of donkey conditionals in the conditional itself seems like the wrong approach.

A more plausible solution, it seems to me, is to generate the different readings of donkey conditionals from pragmatic or contextual mechanisms that determine the "coarseness" of the situations or embedding functions in the domain of quantification. For example, in evaluating (16) we seem to have in mind situations which contain a single man and all of his credit cards, rather than counting each man-card pair separately. (16) then says: all of those coarse-grained situations s can be extended to situations where a/the man in s pays his bill with a card he has in s.
Taking the domain of quantification to contain these coarser-grained situations would allow us to derive the apparent "universal" force of the donkey pronoun 'he' from the covert universal adverb of quantification, and the apparent "existential" force of 'it' from the default existential interpretation of the background conditions encoded by the individual concept expressed by the pronoun itself, by analogy with the universal interpretation of the background conditions encoded by the individual concepts expressed by instantial terms in natural deduction, as defended in the previous chapter. As I shall show, this view resolves both the proportion problem and the analogous problem of the "weak" interpretation of donkey conditionals. Since context is already recognized to play an important role in delimiting the domain of quantification, and since proponents of the selective operator approach already have to recognize the sensitivity of donkey conditionals to features of context (like the background knowledge that people rarely pay with more than one credit card), the present proposal is both simpler and, it seems to me, better in accord with intuitions about what the sentences mean. One of the questions in deciding between these two approaches will be which view can best explain which readings are available in which cases, and why. In the next chapter I will propose and defend an approach to donkey conditionals along these lines. I reserve explaining the details of the account for that chapter.

2.3 Sentential Truth and Discourse Truth

As on Geach's proposal, DRT takes the primary unit of semantic interpretation to be the entire discourse. Individual sentences are meaningful only in the derivative sense that they update the DRSs of the discourses in which they occur.[13] Discourses are then the primary bearers of truth and falsity. But if truth for a discourse is just a matter of having a proper embedding, then, were a discourse to include any false information, all continuations of the discourse would come out false. But we want a notion of truth that is fine-grained enough to distinguish discourses where a true sentence follows a false one from discourses where a false sentence follows a false one. And we want a notion of truth that is able to make sense of the possibility that when two speakers in a conversation contradict one another, one asserting 'P' and the other '~P', one of the two speakers has spoken truly, and the other falsely. On DRT, such discourses fail to have a proper embedding on any model, and so are automatically false. Again, insofar as it makes sense to speak of entire discourses, or strings of sentences, as being true or false, this is because discourses come to represent the world by virtue of being composed of individual claims which are representationally more basic. DRT gets this backwards.

[13] Here Heim's and Kamp's theories diverge somewhat. On Heim's view, the meanings of sentences are ultimately identified with context change potentials (functions from contexts to contexts), where contexts are conceived of as sets of possible worlds. The intermediate representational level of "files" can thus ultimately be dispensed with, whereas Kamp takes his DRSs to be fundamental to his approach (see §2.4 below).

Note that we could try to define a derivative notion of truth for a sentence in terms of the notion of truth for a discourse.
Heim does exactly this, by defining truth for a sentence (relative to a true discourse) to be a matter of preservation of the truth of the discourse. Her idea is that for any true discourse D and for any sentence S as uttered in context C with intended logical form p,[14] S is true with respect to D if the update of D by p is true, false if the update is false, and truth-valueless if the update is undefined. (I omit relativization to models.) But this notion of truth for a sentence relative to a true discourse will not get us what we want, since it is only defined with respect to true discourses, whereas true sentences can also occur in false discourses. Since Heim's derivative notion of truth of a sentence relative to a discourse is not applicable to sentences that occur in false discourses, it is unable to do justice to our pre-theoretic notion of sentence truth, and so is inadequate.

[14] In Heim's picture, the grammar of the language assigns to each sentence S an LF representation; it is these LF representations which are assigned "file change potentials", i.e. updates on an arbitrary DRS or "file". A sentence is said to encode a given file change potential only in the indirect sense that its (intended) LF does. I have omitted discussion of DRT's construction rules for assigning DRSs to natural language sentences.

We might instead try to define truth of a sentence S simpliciter as truth of S with respect to a true discourse (in Heim's sense), for all true discourses D. But this will not work either, because a sentence often cannot be evaluated for truth or falsity independently of the context in which it occurs, and in any case whether a sentence is truth-preserving for other discourse contexts is irrelevant to its truth value in its own context of utterance. For example, suppose Ralph has exactly one donkey, and it is not brown. Then the second sentence of 'Ralph owns a donkey. It is brown' is false. But on DRT the logical form of the second sentence is something like brown(x). Given an arbitrary true discourse D, either the update of D by brown(x) will be defined or it won't. If not, then the sentence does not come out false with respect to every true discourse D, as it intuitively should. If the update is defined, then it will be true whenever there is an object in the domain that is brown (and that satisfies any other predicative information associated with the discourse referent x in D, should there be any). Again, the sentence does not come out false with respect to every true discourse D, as it intuitively should.

The general lessons from the foregoing are that a derivative notion of truth for a sentence S must be defined with respect to S's actual discourse context, and that the notion must be defined even when S's actual discourse context is false. What we would need, then, is something like the notion of whether a sentence S, as it occurs in a discourse D, would preserve the truth of D were D true. But it is hard to see how such a notion could be fleshed out: how are we to change a false discourse into a true one in order to evaluate whether a given sentence would preserve its would-be truth? I am not confident that pursuing this idea would produce an adequate solution. I conclude that DRT fails to have an adequate notion of sentence truth.
But even if we restrict our attention to discourses containing only true sentences, DRT fails to distinguish the predicative material that determines what an anaphoric pronoun denotes from the predicative material that is merely predicated of the pronoun later on in the conversation. In DRT, all information attributed to a given pronoun is relevant for determining whether there is an object in the domain that answers to that pronoun (whether the DRS has a proper embedding in the model). But this prediction does not appear to be borne out. For example, in the discourse 'All tigers have stripes. They live in the jungle. They are carnivores', only the descriptive material in the first sentence is relevant for determining the denotation of the pronouns in the latter sentences. Whether or not tigers live in the jungle, as the second sentence claims, is simply irrelevant to what the pronoun 'they' in the third sentence denotes, or more generally to the truth or falsity of the third sentence. DRT fails to make any such distinctions.

My point in this section should not be confused with the claim that we need to be able to interpret sentences containing anaphoric elements independently of their linguistic contexts. Since the interpretation of anaphors is a function of, among other things, the interpretation of their antecedents, sentences often cannot be assigned truth values independently of the interpretation of the overall discourses in which they occur. Sometimes interpreting an anaphoric element may even require that we wait for its postcedent in a subsequent sentence (cataphora) before we can evaluate the sentence in which it occurs. And in certain marked cases, anaphoric resolution may be impossible precisely because of some aberrance in the discourse context. The dependence of anaphora resolution on discourse interpretation is not to be denied. My point, rather, is that once the discourse has supplied the information necessary to resolve all of the anaphoric elements contained in a given sentence, we should be able to assign truth conditions to that sentence independently of whether the overall discourse is true or false. The notion of discourse truth and falsity should be derived from the notion of sentential truth, since it is not clear that we can derive the latter from the former.

2.4 Dynamic Predicate Logic

The basic idea behind dynamic semantics is that the semantic function of sentences is to affect the interpretation of the discourse by updating the body of mutually recognized facts that conversational participants keep track of and contribute to when engaging in conversation. In DRT this is achieved in two steps. First, sentences contribute updates to discourse representation structures. Second, these structures are given a model-theoretic interpretation. The combined effect of both steps is that sentences function to update the model-theoretic interpretations of discourses. In his (1981) paper, Kamp took the intermediate level of representation constituted by DRSs to be indispensable. But since on this two-step model all the model-theoretic interpretations do their work at the second step, after the relevant DRS has been updated, subsentential expressions do not contribute anything to DRSs that itself receives a model-theoretic interpretation. The resulting view is thus non-compositional in the sense that the meaning of a sentence is not a function of the meanings of its parts and its structure.
The primary contribution of Dynamic Predicate Logic (DPL) was to show that one could retain the dynamic spirit of DRT while dispensing with the first step involving intermediate representations, thus preserving compositionality. Although there are a number of versions of DPL, I will focus my attention on the version given in Groenendijk and Stokhof (1991). Unlike DRT, DPL treats indefinites as existential quantifiers. The first sentence in (1) is thus analyzed in the same way that it would be in standard predicate logic:

(1c) (∃x)(donkey x & Ralph owns x)

DPL departs from standard predicate logic in analyzing the two sentences of (1) as a single unit (a conjunction), and in treating UAPs as free variables. The UAP in the second sentence is assigned the same variable as that of its quantifier antecedent in the previous sentence, representing their anaphoric connection:

(1d) (∃x)(donkey x & Ralph owns x) & Harry vaccinates x

Since the x in 'Harry vaccinates x' is not bound, we do not violate c-command constraints.

Recall that in standard predicate logic, a quantified formula '∃vQ' is true on an assignment A iff there is an object o such that Q is true on an assignment A' that differs at most from A in assigning o as the value of v (and similarly for the universal quantifier). Taking A as "input", the quantifier '∃v' delivers A' as "output". The interpretation of a quantifier effectively "shifts" the interpretation of the formula in its scope. This idea is implemented in DPL by identifying the meaning of a formula with how it shifts assignment functions. Given a range of input assignment functions which validate the discourse D to its left, the effect of updating D by a formula Q is to restrict the input assignment functions to just those that still validate the discourse after the update. To illustrate, suppose we have a discourse D which is validated by all assignments in some set A. The effect of an atomic formula 'Rt1...tn' is to retain only those assignments f ∈ A that validate the formula, i.e. those such that <f(t1),...,f(tn)> is in the extension of R.

In general, DPL takes the interpretation of a formula to be a set of ordered pairs of assignment functions <f, g>, considered as input and output assignments, such that a given pair <f, g> is in the interpretation of a formula Q iff the effect of Q is to shift f to g. In the case of an existentially quantified formula, a pair <f, g> is in the interpretation of a formula '∃vQ' iff there is an assignment h differing at most from f in what it assigns to v such that <h, g> is in the interpretation of Q. Similarly, a pair <f, g> is in the interpretation of a conjunction 'P & Q' iff there is an assignment h such that <f, h> is in the interpretation of P and <h, g> is in the interpretation of Q. It is easy to show that, given these semantic clauses, the formulas '∃vQ & P' and '∃v(Q & P)' are semantically equivalent when P contains a free occurrence of 'v'. This allows us to treat (1) as semantically equivalent to Geach's analysis (1b) without violating the syntactic constraints that made Geach's suggestion problematic. Furthermore, this approach has the advantage over DRT that it is fully compositional and avoids a potentially unwanted commitment to an intermediate representational level of DRSs. The notion of sentential truth is defined relative to input assignments: a formula P is true relative to an input assignment f iff there is an output assignment g such that <f, g> is in the interpretation of P.
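The clauses just given can be rendered as a small relational interpreter. The sketch below is an illustrative toy in Python (finite domain, assignments as dictionaries, constants written as themselves), not Groenendijk and Stokhof's official definition; it computes, for a formula and an input assignment, the output assignments the formula relates it to.

DOMAIN = {'ralph', 'harry', 'd1', 'sheep1'}
INTERP = {
    'donkey':     {('d1',)},
    'owns':       {('ralph', 'd1')},
    'vaccinates': {('harry', 'd1'), ('harry', 'sheep1')},
}

def outputs(phi, f):
    """The output assignments g such that <f, g> is in the DPL interpretation of phi.
    Assignments are dicts from variables to individuals; constants stand for themselves."""
    op = phi[0]
    if op == 'exists':                       # ('exists', v, body)
        _, v, body = phi
        return [g for o in DOMAIN for g in outputs(body, {**f, v: o})]
    if op == 'and':                          # ('and', left, right)
        _, left, right = phi
        return [g for h in outputs(left, f) for g in outputs(right, h)]
    pred, *terms = phi                       # atomic: ('pred', t1, ..., tn)
    vals = tuple(f.get(t, t) for t in terms)
    return [f] if vals in INTERP[pred] else []

def true_at(phi, f):
    """A formula is true relative to an input assignment iff it has some output."""
    return bool(outputs(phi, f))

# (1d): (Ex)(donkey x & Ralph owns x) & Harry vaccinates x
phi = ('and',
       ('exists', 'x', ('and', ('donkey', 'x'), ('owns', 'ralph', 'x'))),
       ('vaccinates', 'harry', 'x'))
print(true_at(phi, {}))       # True
print(outputs(phi, {}))       # [{'x': 'd1'}]: the only output for the empty input

Because the conjunction clause threads the outputs of its first conjunct into its second, the existential in (1d) ends up constraining the later occurrence of 'x' even though that occurrence is syntactically free, which is just the equivalence of '∃vQ & P' and '∃v(Q & P)' noted above.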
From here we can define the notion of a satisfaction set for P, consisting of all the input assignments relative to which P is true, as well as the notion of a production set for P, consisting of all the assignments g which are outputs of some input in P's satisfaction set. Similar notions can be defined for a discourse. It is easy to show, although we will not do it here, that the notion of discourse truth operative in DRT is equivalent to the notion of a discourse having a non-empty satisfaction set. In general, then, DPL generates the same truth conditions for discourses as DRT. However, it also lacks an adequate notion of sentential truth, for the same reasons.

Consider the second sentence of (1), which is intuitively true iff Harry vaccinates a donkey of Ralph's. DPL understands this sentence as the open formula 'Harry vaccinates x', which is true relative to an input assignment f iff there is an output assignment g such that <f, g> is in the interpretation of that formula. But of course, we are not interested in whether that open formula is true relative to assignments that map 'x' to one of Barbara's sheep, even if Harry vaccinates them too. In particular, we are interested only in those assignments that are in the production set of the preceding discourse: those that map 'x' to some donkey of Ralph's. So perhaps we can say that the second sentence of (1) is true iff it is true relative to one of the assignments in the preceding discourse's production set. Although this will get us the desired truth conditions for (1), it reveals a more general problem: if the preceding discourse is false, then the production set will be empty. As with DRT, we have no way to assess the truth of a sentence relative to its actual discourse context independently of whether that preceding discourse is true. Hence, we still have no way of defining an adequate general notion of sentential truth in DPL. Moreover, since DPL generates the same truth conditions as DRT, we are also saddled with the problem of strong/weak interpretations and the proportion problem, both for quantified and for conditional donkey sentences. So while DPL is an important contribution in that it shows that we can implement the core ideas of DRT in a non-representational framework without giving up compositionality, this is of no help when it comes to the problems previously presented.

3. THE E-TYPE VIEW: UAPS AS REFERRING TERMS

An entirely different approach to the problem of unbound anaphoric pronouns defends the idea that UAPs are in fact referring terms, contrary to what I have argued at the beginning of this chapter. On Evans' "E-type" analysis (1977), UAPs function as referring terms whose referents get fixed by description. In particular, a UAP refers to the object or plurality (if any) that uniquely verifies the smallest well-formed formula containing its antecedent.[15] So in (1), 'it' refers to the unique donkey Ralph owns. In (2) 'them' refers to the unique plurality of donkeys that Ralph owns. And in (3) no object verifies the clause containing the pronoun's antecedent, so 'them' fails to refer.

[15] Readers should consult Evans (1977) for a more precise characterization of this view. Note that, as Soames points out (1989, 2006), Evans' formal proposal is inconsistent with how he informally describes his view. On Evans' formal proposal, unbound anaphora are treated as definite descriptions which take wide scope over the largest clause containing the pronoun but not its antecedent, not as rigid referring terms. Note also that there is widespread confusion about Evans' view in the literature: many mistakenly think that Evans' view takes UAPs to be semantically equivalent to definite descriptions ('Every farmer who owns a donkey vaccinates the donkey he owns'). This is no doubt due to the widespread tendency to use "E-type" as a name for views that take UAPs to be definite descriptions, in conflict with Evans' original intended meaning of the term.
Before getting to more complicated cases, an immediate problem with Evans' view is that UAPs whose antecedents fail to have a unique verifier fail to secure a unique referent, and so sentences that contain them fail to have a truth value. But the second sentence of (1) is intuitively true in a case where Ralph owns more than one donkey. Indeed, there seems to be no problem with the following dialogue:

(21) Ralph owns a donkey. Harry vaccinates it. Ralph owns another donkey too, but Harry only vaccinates one of them.

Similarly, (1) is intuitively false, not truth-valueless, in a case where Ralph owns many donkeys but Harry vaccinates none of them.

In response to these worries, Evans notes that (1) and (1a) are not interchangeable: we are reluctant to use (1) to express the general thought that 'Some donkey is owned by Ralph and vaccinated by Harry'. Evans claims that it is inappropriate to use a pronoun in place of an indefinite in cases where a speaker is merely aware of the existence of potentially several Fs but doesn't have any discriminating knowledge that would identify a particular F, that is, when the speaker fails to have a particular individual "in mind" or when the speaker is unable to answer "Who?" or "Which?" in response to her use of the pronoun. This oddness would be unexplained, he claims, if the pronouns in such cases had the truth conditions Geach assigns to them. But his argument applies equally well to any view that takes UAPs with an indefinite antecedent to be truth-conditionally equivalent to an existential quantifier.

This argument is uncompelling, for several reasons. First, it may be granted that uttering (1) in lieu of (1a) sometimes indicates that the speaker has a particular donkey in mind. But I deny that this presupposition (in the broadest sense of the term) is cross-contextually robust, or indefeasible. In uttering (21) the speaker may not have any particular donkey in mind, nor any "discriminating knowledge". She may simply know the general fact that Harry vaccinates one of Ralph's donkeys. Still, (21) seems perfectly acceptable. Or I might utter (22),

(22) Some farmer who owns a donkey beats it.

not because I am aware of any donkey beatings, but merely because my knowledge of human irascibility makes (22)'s truth statistically likely. Here, 'it' seems to have the force of an existential quantifier, not a referring term. Second, the defeasible oddness of uttering (1) in certain no-speaker-reference contexts does not entail that it is not true in such contexts. Suppose an utterance of (1) misleads someone into thinking that Ralph owns only one donkey or that the speaker is talking about one of Ralph's donkeys in particular. I submit the intuition that the second sentence of (1) is nevertheless true in this case. The pronoun 'it' in (1) and the indefinite 'a donkey of Ralph's' may differ in use, presuppositional profile, and cognitive significance, but still be materially equivalent.
However, Evans also considers a liberalized version of his stated view which he considers "a perfectly adequate fall-back position" (p. 516), according to which a UAP refers to whatever object or plurality uniquely verifies not just the clause containing its antecedent, but also whatever extra descriptive information the subject might "supply on demand". So in (1) and (21) perhaps the speaker has some way of identifying just one of Ralph's donkeys as the donkey she has "in mind", and this information, in conjunction with the explicit material in the pronoun's antecedent, serves to identify a unique donkey as the referent of 'it'.

It is unclear how "thin" this extra descriptive information is allowed to be. For example, should it be required that the speaker produce descriptive information "on demand" which has a unique satisfier? Or is it merely required that the speaker have a particular individual "in mind"? These two possibilities might come apart if one can have a particular individual in mind without being able to supply a description which uniquely denotes it, and conversely if having a particular individual in mind requires something more than having a uniquely-satisfied description (consider: 'the tallest spy'). It is unclear how high Evans takes the epistemic bar on reference to be. Suppose someone reads in the newspaper that Harry vaccinates just one of Ralph's two donkeys. A speaker who utters (21) solely on the basis of this information has no discriminating knowledge about the donkeys except for the trivial knowledge that the vaccinated donkey is the one that is vaccinated. Is this enough for the speaker to have the vaccinated donkey in mind, to have secured a referent for 'it'?[16] Answering 'yes' would allow Evans to explain why (21) seems perfectly acceptable in such circumstances. But liberalizing the epistemic bar for reference to a degree that makes it trivial to satisfy would deprive Evans' epistemic constraint of its explanatory value in accounting for the putative oddness of using (1) in lieu of (1a).

[16] Part of the problem here is that "knowledge who/which", "discriminating knowledge", and "having in mind" are notoriously context-sensitive. As Quine noted (1977), sometimes when we are asked "Who?" or "Which?", we are given a face but are being asked for a name; other times we have a name, but are being asked for a face or some other identifying information. The notions of discriminating knowledge and having an individual in mind fare no better. To adapt an example from Manley and Hawthorne (2012), a detective investigating the mayor's murder might not count as having any suspect in mind if he cannot form a list of names of potential suspects. On the other hand, he might count as having a suspect in mind if he believes that the murderer was the same individual who robbed the bank the day before. See Manley and Hawthorne (2012) chapter 3 for arguments against Evans' epistemic acquaintance condition on reference.

Matters are worse for relative clause donkey sentences. Evans understands quantified formulas like 'Every F is G' in terms of the truth of their substitution instances, so that 'Every F is G' is true iff for every true substitution instance 'A is F', where 'A' is some (new) referring term, 'A is G' is true as well. UAPs whose antecedent occurs in the scope of a higher operator are to be evaluated by first applying the rule for interpreting the operator, and then applying the rule for interpreting unbound pronouns.
The quantified donkey sentence (5) is true iff for every true substitution instance 'A is a farmer who owns a donkey', where 'A' is some (new) referring term, 'A vaccinates it' is true as well, where 'it' refers to the donkey A owns. An unintuitive consequence of this view is that the quantified donkey sentence (5) is not true whenever any farmer owns more than one donkey, since in that case the pronoun 'it' will fail to secure a unique referent with respect to each farmer. This is surely wrong. Note that it won't do to suppose that these sentences involve an accommodated domain restriction to farmers that have just one donkey. As Heim reminds us (1990), we judge (5) to be false, not truth-valueless, in cases where a multiple-donkey-owning farmer never vaccinates. And in sentences like (7) there is no temptation to suppose that we are only talking about individuals who have only one credit card.

Evans does not explicitly discuss donkey conditionals, but presumably he would approve of the Lewisian quantificational approach. Evans understands quantified donkey sentences by first applying the interpretation of the quantifier and then applying the interpretation of E-type pronouns to the substitution instances. An analogous semantic procedure for conditionals would presumably also have to interpret the conditional operator quantificationally, so that (4) is true iff for every true substitution instance 'Ralph owns D', where D names the unique donkey Ralph owns, 'Ralph vaccinates D' is true as well. But if Ralph owns more than one donkey, D fails to refer and the sentence is vacuously true no matter what Ralph does. And in other cases with two indistinguishable indefinites, the uniqueness condition built into the E-type approach would also seem to result in problematic reference failure:

(23) If a bishop meets a bishop, he always blesses him.
(24) Everyone who bought a sage plant bought nine others with it.

It is a consequence of (23) that whenever two bishops meet each other, they bless each other. There is no way, relative to each such meeting, of identifying one of the two bishops as the referent of 'he' and the other as 'him'. And (24) requires every sage-plant buyer to be a 10-sage-plant buyer. The speaker needn't have, relative to each buyer, a particular sage plant in mind as the referent of 'it'. Yet these speeches are perfectly acceptable, contrary to what the E-type view would seem to predict.

A better strategy for avoiding the problematic uniqueness implications of the E-type view might be to understand quantification and conditionals not in terms of their substitution instances, but rather in terms of quantifying over situations. Situations are like partial possible worlds in that they are possible ways some things might be (objects instantiating properties and bearing relations to one another), but are more finely grained than possible worlds in that they needn't include anything more than a single individual. A minimal situation where p is true is a situation s in which p is true but for which there is no situation that is a proper part of s where p is true. For example, if Ralph owns n donkeys, there are n minimal situations where Ralph owns a donkey: one for each Ralph-donkey pair. An extension of a situation s is a situation s' that has s as a part.
Following Heim's (1990) E-type approach, we might take the conditional donkey sentence (4) to have an unpronounced universal quantifier over minimal situations, so that it means something like: every minimal situation s where Ralph owns a donkey can be extended to a situation s' where Ralph vaccinates f(s), where f is a partial function that for each argument s has as its value the unique donkey Ralph owns in s. For each such minimal situation s, there is a unique donkey in s that Ralph owns, so f(s) will be well-defined over its relevant domain. Similarly, the quantified donkey sentence (5) is true iff every farmer x is such that for every minimal situation s where x owns a donkey, s can be extended to a situation s' where x vaccinates f(x, s), where f(x, s) is defined over pairs <x, s> such that s is a situation in which x is a farmer who owns a donkey, and f(x, s) = the unique donkey x owns in s. In this way we can secure the uniqueness of the pronoun in (5) without requiring that every farmer own at most one donkey.

However, this approach runs into several problems. First, although quantifying over situations seems well motivated for certain conditionals with adverbs of quantification, it lacks independent motivation for quantified donkey sentences. Second, this approach runs into the same problems previously encountered for DRT, which analogously quantified over embedding functions. In particular, we still have the problem of weak readings and the proportion problem. As before, accounting for these problems by modifying the interpretation of the quantification or conditional operator on a case-by-case basis comes with a significant motivational burden.

The E-type view adds to these problems by building uniqueness implications into the pronouns, which was not a feature of DRT. So whereas the bishop sentence (23) presents no special difficulty for DRT, that sentence poses an especially difficult problem for the E-type view. Since (23) involves a universal adverb of quantification, 'always', it makes no truth-conditional difference whether it requires an asymmetric or a symmetric reading. On a symmetric reading, the interpretation of (23) will be of the form: for all minimal situations s where a bishop meets another bishop, s can be extended to a situation s' where f1(s) blesses f2(s), where f1(s) = the unique bishop in s, and f2(s) = the unique bishop in s. Hence on a symmetric reading, f1 and f2 will not be well-defined, because there are two bishops in s. And on an asymmetric reading, the interpretation of (23) will be of the form: for all minimal situations s where there is a bishop and which can be extended to a situation s' where f1(s) meets another bishop, s can be extended to a situation s'' where f1(s) blesses f2(s), where f1(s) = the unique bishop in s, and f2(s) = the unique bishop that f1(s) meets. But this semantics entails that every bishop meets only one other bishop, which (23) clearly does not require. Hence Heim's appeal to situations to solve the uniqueness problems of the E-type view fails.

The problem of uniqueness implications and "indistinguishable participants" has generated its own sub-literature. In the following two sections I will discuss just two prominent approaches to these problems: Neale's "D-type" view (1991), and Elbourne's NP-deletion view (2005). I will argue that both views encounter a number of problems.[17] One of the conclusions of these sections will be that taking UAPs to carry uniqueness implications carries a number of important costs.

[17] Readers are referred to Ludlow (1994), who provides a solution that distinguishes the two bishops in the meeting situation by their thematic roles. See Schein (1993, p. 95-96) and Elbourne (2005, p. 142-45) for discussion.
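The way minimal situations are supposed to help with (4), and the way they break down for (23), can be pictured in a toy model in which situations are just sets of atomic facts and the part-of relation is the subset relation. This is my own simplified illustration in Python, not Heim's formal framework; the fact encoding and the helper functions are assumptions for the example.

from itertools import combinations

def minimal_situations(facts, holds_in):
    """All minimal sub-situations of `facts` (as frozensets of atomic facts) in which
    the condition `holds_in` is true; s <= s' is the subset relation."""
    minimal = []
    for n in range(1, len(facts) + 1):
        for combo in combinations(sorted(facts), n):
            s = frozenset(combo)
            if holds_in(s) and not any(m < s for m in minimal):
                minimal.append(s)
    return minimal

def the_unique(s, kind):
    """The unique individual of the given kind in s, if there is one; otherwise None."""
    xs = {args[0] for (p, *args) in s if p == kind}
    return next(iter(xs)) if len(xs) == 1 else None

def ralph_owns_a_donkey(s):
    return any(p == 'owns' and x == 'ralph' and ('donkey', y) in s
               for (p, x, y) in [f for f in s if len(f) == 3])

def a_bishop_meets_a_bishop(s):
    return any(p == 'meets' and ('bishop', x) in s and ('bishop', y) in s
               for (p, x, y) in [f for f in s if len(f) == 3])

# (4): Ralph owns two donkeys, so there are two minimal 'Ralph owns a donkey' situations.
donkey_facts = {('donkey', 'd1'), ('donkey', 'd2'),
                ('owns', 'ralph', 'd1'), ('owns', 'ralph', 'd2')}
for s in minimal_situations(donkey_facts, ralph_owns_a_donkey):
    print(sorted(s), '->', the_unique(s, 'donkey'))

# (23): every minimal 'a bishop meets a bishop' situation contains two bishops.
bishop_facts = {('bishop', 'b1'), ('bishop', 'b2'), ('meets', 'b1', 'b2')}
for s in minimal_situations(bishop_facts, a_bishop_meets_a_bishop):
    print(sorted(s), '->', the_unique(s, 'bishop'))   # -> None: reference failure

Each minimal 'Ralph owns a donkey' situation contains exactly one donkey, so the description function is well defined there; every minimal 'a bishop meets a bishop' situation contains two bishops, so 'the unique bishop in s' has no value, which is the failure just described.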
Were Evans or Heim to appeal to similar solutions to solve the E-type view's problems with uniqueness implications, they would share in these costs. Although not conclusive across the board, these problems motivate the search for an alternative approach.

4. THE D-TYPE VIEW: UAPS AS DEFINITE DESCRIPTIONS

The basic idea of the D-type approach is that UAPs "go proxy for" definite descriptions. This idea has its origins in Parsons (1978), Cooper (1979), and Davies (1981); however, in this section I will focus on the most well-known version, given by Neale (1991). On Neale's view, the pronoun in (1) is to be interpreted as the definite description 'the donkey Ralph owns', whereas the pronoun in (5) is to be interpreted as 'the donkey x owns', where 'x' is bound by the quantifier 'Every farmer'. In general, if p is a pronoun anaphoric on, but not c-commanded by, a quantifier '[Dx: Fx]' that occurs in an antecedent clause '[Dx: Fx](Gx)', then p is interpreted as '[the x: Fx & Gx]'. Neale defends a standard Russellian semantics for definite descriptions formulated in terms of generalized quantifier theory: when 'F' is singular, '[the x: Fx & Gx](Hx)' is true iff |(F ∩ G) \ H| = 0 and |(F ∩ G)| = 1, where the set-term F denotes the extension of 'F', and similarly for G and H. When 'F' is plural, '[the x: Fx & Gx](Hx)' is true iff |(F ∩ G) \ H| = 0 and |(F ∩ G)| > 1.

Neale's view inherits the same problems with uniqueness implications as E-type views. To address this problem Neale adopts an idea from Parsons (1978) and Davies (1981) according to which some D-type pronouns are to be interpreted as "numberless" descriptions that are neither singular nor plural. The rough idea is that (5) says something like: every farmer who owns a donkey vaccinates whatever donkey or donkeys he owns. More formally, the proposal is that a pronoun anaphoric on, but not c-commanded by, a quantifier '[Dx: Fx]' that occurs in an antecedent clause '[Dx: Fx](Gx)' is interpreted as '[whe x: Fx & Gx]', where '[whe x: Fx & Gx](Hx)' is true iff |(F ∩ G) \ H| = 0 and |(F ∩ G)| ≥ 1.

The semantics for numberless descriptions avoids the problem of uniqueness implications without appealing to situation semantics. However, it trades the uniqueness implications of Russellian descriptions for the maximality implications of numberless descriptions. On the numberless reading, (1) no longer requires that Ralph own just one donkey. Instead, it requires that Harry vaccinate all of Ralph's donkeys. But no reading of (1) has this implication. Hence the numberless approach will not work for all cases. Neale owes us an explanation of when a UAP receives a numberless reading, and when a Russellian one.

Neale suggests (1990, p. 237) that whether a UAP gets a numberless reading may turn on whether the speaker has a particular individual "in mind": if the speaker has a particular donkey in mind, then the pronoun in (1) gets a Russellian interpretation; otherwise it is numberless. However, we have already seen why this suggestion is inadequate: there are cases where the speaker doesn't have any donkey in mind, but can still felicitously assert (1), (21), and (22) on the basis of some general or statistical knowledge.
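Neale's two clauses are simple cardinality conditions on the sets involved, and the sketch below encodes them directly. It is an illustrative rendering in Python; the example extensions, in which Ralph owns two donkeys and Harry vaccinates one of them, are made up for the purpose.

def russellian_the(F, G, H):
    """'[the x: Fx & Gx](Hx)', singular: exactly one F-and-G, and none of them fails to be H."""
    FG = F & G
    return len(FG) == 1 and len(FG - H) == 0

def numberless_whe(F, G, H):
    """'[whe x: Fx & Gx](Hx)': at least one F-and-G, and none of them fails to be H."""
    FG = F & G
    return len(FG) >= 1 and len(FG - H) == 0

# (1) with 'it' read as a description: F = donkeys, G = owned by Ralph, H = vaccinated by Harry.
donkeys        = {'d1', 'd2'}
owned_by_ralph = {'d1', 'd2'}
vaccinated     = {'d1'}

print(russellian_the(donkeys, owned_by_ralph, vaccinated))  # False: uniqueness fails (two donkeys)
print(numberless_whe(donkeys, owned_by_ralph, vaccinated))  # False: maximality fails (d2 unvaccinated)

Neither clause comes out true in this model, even though the second sentence of (1) is intuitively true in it.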
If the pronouns in these sentences are quantifiers, they seem to have the quantificational force of an existential quantifier, not of a Russellian or numberless description. Matters are worse in embedded cases. In (7) we clearly do not have in mind, relative to each person, a unique credit card that he uses to pay his bill. So it would seem that the numberless interpretation is in play. But (7) does not require that anyone with a credit card pay with all of their cards, as the numberless interpretation would require. So neither the Russellian nor the numberless interpretation can account for weak readings of donkey sentences. For this reason I would not recommend that the E-type theorist take on Neale's numberless proposal: it seems to cause more problems than it solves.

5. NP DELETION: [[IT]] = [[THE]]

Elbourne (2005) defends an interesting view according to which pronouns are occurrences of the definite determiner 'the' whose sister NP has undergone NP deletion (is unpronounced), as when I say 'My shirt is the same as his [shirt]'. On this view, (5) looks like (5d) at LF, where the bracketed material is unpronounced, and is truth-conditionally equivalent to (5e):

(5d) Every farmer who owns a donkey beats it [donkey].
(5e) Every farmer who owns a donkey beats the donkey.

Elbourne hypothesizes that 'it' and 'the' are in fact different phonetic realizations of the same lexical entry, whose conditioning environment is determined by the pronunciation of its NP sister. That is, just as 'no' and 'none', 'a' and 'one', 'your' and 'yours' are plausibly different phonetic realizations of the same lexical entry (respectively), as demonstrated by (25)-(27), perhaps the ungrammaticality of 'it donkey' can be explained along similar lines, as in (28):

(25) Two heads are better than none (no*) [head].
(26) Two heads are better than one (a*) [head].
(27) My donkey is better than yours (your*) [donkey].
(28) Ralph bought a donkey, while Harry vaccinated it (the*) [donkey].[18]

[18] However, the alleged contrast is not cross-linguistically robust: both Hans sieht den Mann ("Hans sees the man") and Hans sieht den (literally: "Hans sees the", i.e. "Hans sees the man") are acceptable in German. One might still identify [[it]] and [[the]] in English while giving distinct analyses for definite articles and personal pronouns in German, but that would undermine the aspiration for a univocal semantics which motivates Elbourne.

An immediate worry for this view is that NP deletion, like other kinds of ellipsis, is often possible only if the elided material has some linguistic antecedent at LF, if not an exact copy, as in (25)–(27). Hankamer and Sag (1976) illustrate this point for VP deletion:

(29) [A tries to stuff a 9-inch ball through a 6-inch hoop.] B: It's not clear that you'll be able to do it.
(30) [Same context.] B: *It's not clear that you'll be able to.
(31) A: I'm going to stuff this ball through this hoop. B: It's not clear that you'll be able to.

B's utterance in (29) contains the pronoun 'it', which refers to A's getting the ball through the hoop, which was made salient by the conversational context. B's utterance in (30), in contrast, involves VP-deletion, but is marked even though the contextual set-up is the same, suggesting that the elided VP anaphor is unable to secure its referent via pragmatic mechanisms. (31) demonstrates that the deleted VP requires a linguistic antecedent, and so is subject to syntactic rather than pragmatic control processes.
Moreover, Grinder and Postal (1971) have argued on the basis of sentences similar to (32) that certain anaphoric pronouns require a deleted antecedent, because, as (33) demonstrates, the indefinite 'a bishop' in the first clause cannot serve as the antecedent for 'he':

(32) I've never met a bishop, but Mary has met one, and she says he blessed her.
(33) *I've never met a bishop, but Mary says he blessed her.

Elbourne notes that there are exceptions to this generalization: when playing with a friend's dog, I can say "Mine does the same thing". However, these are exceptions that prove the rule. This means that by appealing to NP deletion as part of his explanation of UAPs, Elbourne is mostly limited to deleted material that can be found in a linguistic antecedent, although Elbourne is happy to admit as many exceptions as are convenient to account for the data.[19]

[19] One kind of case that we will not be discussing concerns UAPs with disjunctive antecedents: 'If Mary sees John or Bill, she waves to him'. Elbourne faults dynamic views for being unable to account for such cases, and argues that it is a point in favor of NP-deletion that it can, by taking 'him' to mean 'the man'. But this suggestion seems especially ad hoc; moreover, by what process did 'man' get there if it has no linguistic antecedent? Elbourne is happy to admit puzzlement, but notes that disjunctive cases also arise for VP-deletion. But I'm inclined simply to reject the data: I hear disjunctive antecedent cases as marked, although their communicative intent is obviously capable of being divined by post-hoc rationalization processes.

Unfortunately, Elbourne never gives us a theory of NP-deletion: he never tells us how to determine what the deleted material is, given the linguistic antecedent, or how and to what extent the conversational context can supply its own material. But this seems like a critical component of his view, without which the view does not provide specific predictions that may be empirically tested. Clearly more work needs to be done in understanding the scope and limits of NP-deletion before Elbourne's theory can be fully evaluated.

Another worry of this flavor concerns pronominal contradiction cases, as first discussed by Strawson (1952). Consider the following dialogue:

(34) A: I saw a crocodile in the swamp today. B: It wasn't a crocodile. This is Florida. It was an alligator.

B's correction may be licensed even when made on the basis of general knowledge about native Floridian fauna. In such a case, 'it' does not refer to the alligator A saw (moreover, A may have seen many alligators that day). But 'it' also cannot mean 'the crocodile', as NP-deletion would seem to predict, on pain of making B's assertion contradictory. Plausibly, what's going on in cases like this is that B is understood to mean something like: 'The "crocodile" you speak of wasn't a crocodile.'[20] Pronominal contradiction cases seem inexplicable on the NP-deletion view.

[20] See Ludlow and Neale (1991, p. 198-99) for a suggestion of this sort.

These worries are important because one of the primary motivations for NP deletion is an aspiration for semantic uniformity: Elbourne argues that, all else being equal, it would be desirable if all uses of pronouns could be explained by a unified semantic theory. Many theorists take UAPs to function differently from their referential and bound-variable uses. Elbourne argues that this pronominal ambiguity is neither desirable nor necessary.
More ambitiously, he defends the view that all definite NPs—names, pronouns, definite descriptions, demonstratives, etc.—are syntactically and semantically of a kind: they take as arguments an index and an NP. From the perspective of universal grammar, this unification of the syntax and semantics of definites is supposed to facilitate language acquisition and production, since a language learner who has learned to use one definite has thereby learned them all (Elbourne 2005, p. 1). However, if there are cases that are not explainable by NP deletion, or which require positing otherwise unmotivated pragmatic control processes governing anaphora resolution, that would undermine the semantic uniformity claim, and hence one of the primary motivations for the view.

Turning now to truth conditions, Elbourne adopts a Fregean interpretation of the definite determiner, according to which ‘the F’ denotes the unique F if there is one, and otherwise is undefined. Hence ‘The F is G’ is true (false) iff there is a unique F and it is (not) G, and undefined iff there is not a unique F. The second important feature of Elbourne’s semantics is that all semantic values are relativized to situations: singular terms denote functions from situations to individuals; predicates denote functions from individuals to functions from situations to truth-values; sentences denote functions from situations to truth-values; and so on. The resulting view is one in which sentences effectively denote sets of situations whose mereological structure is roughly isomorphic to the syntactic structure of the sentence itself.

[Footnote 21: Both of these commitments are highly controversial and potentially problematic; however, I will mention only two problems for situation semantics here. One familiar worry is the problem of de re attitude reports. Soames (1987) shows why any semantic theory which satisfies certain minimal assumptions cannot identify the semantic contents of sentences with truth-supporting circumstances, no matter how finely grained. The problem is that on such views, reports of the de re beliefs that ‘Hesperus’ refers to Hesperus and that ‘Phosphorus’ refers to Phosphorus jointly entail that the subject believes that ‘Hesperus’ and ‘Phosphorus’ co-refer. Elbourne is aware of this difficulty, but rejects one of the assumptions that generates Soames’ result: namely, that names are (always) directly referential (see Elbourne 2005, ch. 6). Non-persistent (non-right monotone increasing) quantifiers also present a difficulty for situation semantics: ‘Every farmer who owns no donkeys is poor’ does not entail that every farmer is poor, contrary to what Elbourne’s view would predict. One standard recourse is to reinterpret all quantifiers so that they are persistent, as in Kratzer (1989). But this creates problems as well: ‘Every tree is laden with wonderful apples’ doesn’t entail that all trees in the world bear fruit. Elbourne points to two standard responses: Barwise and Perry (1983) suggest that we take the assertion to represent only a limited part of the world; a different option, suggested by Kratzer (1989), is to build an implicit restrictor into the quantifier, though this option also meets a number of problems. See Zweig (2013) for further discussion.]
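To fix ideas, these two commitments can be rendered schematically as follows. This is a simplified reconstruction in my own notation of the kind of lexical entries Elbourne employs, not a quotation of his official entries:

⟦donkey⟧ = λx . λs . 1 iff x is a donkey in s
⟦the⟧ = λf . λs . the unique x such that f(x)(s) = 1 (defined only if there is exactly one such x in s)
⟦The donkey is vaccinated⟧ = λs . 1 iff the unique donkey in s is vaccinated in s (defined only if there is exactly one donkey in s)

The relativization to situations is what will do the work in the donkey cases below: the uniqueness demanded by the Fregean article need only hold within the minimal situations quantified over, not in the world at large.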
For example, the truth conditions of (4), whose syntactic structure is given by (4d), are given by (4e):

(4d) [[always [if [[a farmer] [λ₆ [[a donkey] [λ₂ [t₆ owns t₂]]]]]]] [[he farmer] beats [it donkey]]]

(4e) λs . every minimal situation s₁ ≤ s such that:
  there is an individual x and a minimal situation s₂ ≤ s₁ such that x is a farmer in s₂,
  such that there is a minimal situation s₂ ≤ s₃ ≤ s₁ such that:
    there is an individual y and a minimal situation s₄ ≤ s₃ such that y is a donkey in s₄,
    such that there is a minimal situation s₂ ≤ s₅ ≤ s₃ such that x owns y in s₅,
  can be extended to a minimal situation s₁ ≤ s₆ ≤ s such that:
    the unique farmer in s₆ beats in s₆ the unique donkey in s₆.

[Figure: a graphical representation of the nested situations described in (4e).]

A similar analysis is given for the relative-clause donkey sentence (5). From my perspective, the most impressive application of this idea is Elbourne’s solution to the problem of indistinguishable participants. Elbourne shows that the lexical entries and composition rules he provides assign the bishop sentence (23) the following truth conditions:

(23a) λs . every minimal situation s₁ ≤ s such that:
  there is an individual x and a minimal situation s₂ ≤ s₁ such that x is a bishop in s₂,
  such that there is a minimal situation s₂ ≤ s₃ ≤ s₁ such that:
    there is an individual y and a minimal situation s₄ ≤ s₃ such that y is a bishop in s₄,
    such that there is a minimal situation s₄ ≤ s₅ ≤ s₃ such that x meets y in s₅,
  can be extended to a minimal situation s₁ ≤ s₆ ≤ s such that:
    the unique bishop in s₆ blesses in s₆ the unique bishop in s₆.

Note that since there are two bishops in s₆, we have not yet solved the uniqueness problem. However, the two bishops are distinguished by their positions in the structured situation described, so by changing the descriptive contents (deleted NPs) of the pronouns to something like the distinguished bishop (i.e., the unique bishop in s₂) and the nondistinguished bishop (i.e., the unique bishop in s₅), we can individuate them and solve the uniqueness problem. The only remaining issue, then, is to explain how this supplementary descriptive material gets into the deleted NPs. Elbourne suggests that pragmatic mechanisms supply the missing description, as has been suggested as a solution to the problem of incomplete descriptions: perhaps ‘The table is covered with books’ does not entail that there is only one table in the world because pragmatic mechanisms have supplied unpronounced descriptive material that individuates the table to which the speaker intends to refer.

However, this suggestion meets two worries. The first is the obvious fact that no speaker possesses a theory of situation semantics that would allow her to identify the relevant sub-situations in which the two bishops respectively occur. Elbourne needs to explain how pragmatic mechanisms complete the descriptions in the bishop sentence without attributing to speakers some implausible grasp of complex syntactic and semantic theory. The second worry is the one mentioned before: NP deletion, like other kinds of ellipsis, is largely subject to syntactic rather than pragmatic control. This worry is exemplified by (35):

(35) Two bishops are better than one bishop. If one bishop meets another one bishop, he blesses him.

In (35), we already have to appeal to syntactic processes to copy ‘bishop’ at three deletion sites.
If ‘he’ and ‘him’ also involve NP deletion of the same sort, we should expect the deleted material to be the same as their antecedents. Positing a second, pragmatic process of NP-deletion completion for ‘he’ and ‘him’ would carry a significant motivational burden and require empirical support. But in any case, Elbourne’s solution to the problem of indistinguishable participants is impressive, and it derives further support from (23’):

(23’) *If a bishop and a bishop meet, he blesses him.

Elbourne argues that (23’) is marked because the first ‘bishop’ is not syntactically distinguished, and so anaphora resolution is blocked. The two bishops could be distinguished if the second ‘a bishop’ could move out of the conjunction via quantifier raising (QR) to occupy a syntactically distinguished position at LF; however, this is not allowed by Ross’s (1967) Coordinate Structure Constraint (CSC): ‘Every bishop and one nun carried the piano’ does not have a reading where for every bishop x there is a nun y (possibly different nuns for different bishops) such that x and y carried the piano. Hence in evaluating (23’) the two bishops do not occupy different positions in the structured situation denoted by the conditional antecedent, and so the pronouns in the consequent cannot be resolved.

Another problem for Elbourne’s semantics is its inability to account for weak readings of donkey sentences, as well as the contextual sensitivity of the asymmetric readings of donkey conditionals. Elbourne recognizes these difficulties, but argues that they are not a problem for the NP-deletion view per se, since the problematic sentences retain their weak readings even when the relevant pronoun is replaced by an overt definite description:

(7’’) Everyone who has a credit card used the credit card to pay his bill.

Rather than address this problem, Elbourne sets it aside. Indeed, one common strategy that Elbourne and D-type theorists appeal to time and again when confronted with difficulties for their views is the interchangeability of UAPs with definite descriptions. (In the next section we will see this in action when we look at the scope interaction between UAPs and other operators.) If some sentence is difficult to account for, one simply notes that the UAP is replaceable by an appropriate definite description. Since everyone needs to account for the interpretation of the substituted sentence, acknowledging that the interpretive difficulty remains once the substitution has been made is a way of arguing that it is a problem for everyone, and not for descriptive views per se. Since pronouns are almost always interchangeable with definite descriptions, this makes arguing against descriptive views more challenging.

However, it seems to me that the interchangeability of definite descriptions and pronouns itself has a quasi-pragmatic explanation. To illustrate, first note that in (36a) the pronoun and the definite description are licensed by the indefinite in the previous sentence, without which they are infelicitous (36b):

(36a) There is a Chinese restaurant across the street from my house. It/The restaurant shut down today.
(36b) *It/*The restaurant shut down today.
(36c) The restaurant across the street from my house shut down today.
These data are fodder for the novelty/familiarity analysis of definiteness, according to which indefinites function to introduce novel discourse referents into the conversation, while definites function to pick up on familiar discourse referents that have already been introduced either by a prior indefinite or by some salient, mutually recognized feature of the conversational context (Stalnakerian common ground). However, this is not the only possible explanation. Note that when a definite is descriptively rich enough to allow the speaker to uniquely identify the intended object of discussion without first introducing that object into the conversation via a prior indefinite, its use becomes felicitous, as in (36c). Fans of novelty/familiarity argue that this is because the relevant familiarity presupposition has been accommodated in the sense of Lewis (1979). [Footnote 22: Manley and Hawthorne (2012, chapter 5) present a number of objections to the familiarity analysis of definites, which I endorse.] Although I do not have room to engage in a full debate of this issue here, I would suggest an alternative explanation: in cases where the referent is something whose existence is not already presupposed as part of the common ground, the (in)felicity of a definite description is explained by whether hearers should be able to identify the speaker’s intended referent. In cases where the context does not supply an obvious discourse referent for a definite, its felicity requires that the speaker supply the hearer with the necessary means to identify one, either by providing a definite which is sufficiently descriptive, or by some other means (e.g., pointing). Conversely, when the intended referent is first introduced via a prior indefinite, or made salient by the conversational context, the subsequent definite can afford to be as descriptively minimal as grammatically possible: i.e., to have no overt content at all.

Pronouns have no overt content, and in English third-person pronouns are marked only for gender and number. This has a number of consequences. First, hearers must rely exclusively on the conversational context, along with number and gender features, to identify pronouns’ intended referents. Second, pronouns are eminently replaceable by suitably minimal descriptive analogues (‘the woman’), since these too provide hearers little help in identifying speakers’ intended referents. In contexts where the speaker knows that the hearer will be able to use contextual information rather than overt descriptive information to identify the speaker’s intended referent, the use of a minimal definite description often has the same communicative effect as a corresponding pronoun. Hence it is no surprise that donkey pronouns are often replaceable by a relevant minimal description. Far from showing that pronouns are equivalent to definite descriptions (with unpronounced NP sisters), this correlation has a simpler, quasi-pragmatic explanation.

If this is right, then since pronouns have no overt descriptive content whereas descriptions necessarily have overt content, we should expect that pronouns and minimal definite descriptions are not perfectly interchangeable. Consider the following (Roberts 2005):

(37) A woman entered stage-left. Another woman entered stage-right. She (*the woman) was carrying a basket of flowers.

Here we cannot felicitously use ‘the woman’ because it does not uniquely pick out either of the women as its referent.
However, ‘she’ is acceptable, and it refers to the woman who was mentioned last (the one who entered stage-right). What explains this difference? The answer, I suggest, following Roberts (2003), is that pronouns presuppose that their discourse referents are maximally salient. This makes good sense: by virtue of having no overt descriptive content other than gender and number, pronouns give the hearer virtually no information to go on in determining the pronoun’s intended referent other than the relative salience of the objects under discussion, which we may suppose is given by some (possibly non-total) order <. But since conversation is a cooperative process, this means that in hearing a pronoun, an interpreter may assume that she already possesses all the information necessary to determine the pronoun’s intended referent. To do otherwise would be to make one’s assertion uninterpretable, and hence non-cooperative. Since the hearer’s goal is to pick out a unique object as the pronoun’s discourse referent, and since the only information she has is the relative salience of the objects under discussion, and since the only principled way to single out one object in the salience ordering among others is to pick the maximal element, the hearer may conclude from a speaker’s use of a pronoun that the object with maximal salience is the speaker’s intended referent. In other words, it only makes sense that the division of labor among the definites we use in cooperative communication mirrors differences in their surface structure, and in particular whether they contain any overt descriptive material: pronouns are those definite devices we use when contextual salience is the information by which our utterances are to be interpreted. In contrast, the reason the use of the definite description ‘the woman’ in (37) is marked, despite its contributing no more reference-identifying information than ‘she’, is that unlike pronouns, definite descriptions do not carry the presupposition that their referents are the maximally salient objects under discussion.

‘Salience’ here is not merely situational. It can also be syntactic. In (37) both women are in some sense equal participants in the scene described. But the second woman enjoys a privileged relation with respect to the syntactic structure: she was mentioned last. But salience is also not simply a matter of being mentioned last. In (38) the use of the pronoun is marked, and does not refer to the tall woman even though she was mentioned last:

(38) A short and a tall woman entered from opposite sides of the stage. *She picked up some flowers.

Plausibly, anaphora resolution is not possible in this case because neither woman may be syntactically distinguished as the most salient object of discussion without violating the coordinate structure constraint. In contrast, in (39) ‘she’ clearly refers to the first woman, whose topicalization has made her the most salient woman in the conversation, while ‘her’ clearly refers to the second-mentioned, less salient woman by process of elimination:

(39) A woman meets another woman at the center of the stage. She picks up some flowers, and then hands them to her.

These data support Elbourne’s observation that syntactic structure is of critical importance in resolving pronominal anaphora, and in understanding the bishop sentence. However, at no point in explaining the foregoing data have we appealed to situation semantics.
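To make the proposal explicit, the resolution rule I have in mind can be stated roughly as follows, where D_c is the set of objects under discussion in context c and <_c is the context’s salience ordering; this is a first-pass sketch, not a full theory of salience:

For a pronoun π with gender and number features φ uttered in c:
  the discourse referent of π in c = the <_c-maximal member of {x ∈ D_c : x is compatible with φ},
  defined only if this set has a unique <_c-maximal member.

On this rule, the pronoun in (37) is resolved to the last-mentioned woman because order of mention contributes to <_c, while (38) is unresolvable because the coordinate structure leaves neither woman <_c-maximal.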
Hence, it seems likely that an explanation of the bishop sentence may be possible without taking on board the highly controversial commitment to situation semantics that Elbourne uses in his explanation. In the next chapter, I will attempt to do just that.

Before ending this section, it is worth noting one final problem with NP deletion. Consider again (22), repeated below, which I argued is felicitous even if the speaker does not have singular grounds for her assertion:

(22) Some farmer who owns a donkey beats it.

Intuitively, (22) is true as long as there’s a farmer who beats one of his donkeys. On a flat-footed Fregean analysis, the pronoun in (22), which Elbourne takes to be the description ‘the donkey’, fails to secure a referent, since there is more than one donkey in the world. According to Elbourne’s favored approach to the problem of incomplete descriptions (see especially Elbourne 2013), uniqueness is secured for incomplete descriptions by positing an unpronounced situation variable in the syntax relative to which the uniqueness of the description can be secured. But what could the relevant situation variable possibly be for (22)? Since the speaker has derived (22) on purely general, statistical grounds, and does not have any particular farmer, donkey, or farmer-donkey beating situation in mind, there is no situation in particular that we are talking about. Hence if there is a covert situation variable in the syntax, it is not clear what it should refer to. In general, I struggle to see how views that take UAPs to carry truth-conditional uniqueness implications are able to account for such cases, as well as for non-speaker-reference uses of (1) and (21).

6. UAPS AS CONTEXT-DEPENDENT QUANTIFIERS

A final solution to the problem of unbound anaphora takes UAPs to be context-dependent quantifiers (CDQs), as introduced in the previous chapter. Wilson (1984) was the first to emphasize the close connection between instantial terms in natural deduction and pronouns in natural language, and the first to argue that instantial terms and UAPs are both context-dependent quantifiers. However, in what follows I will focus on the most well worked-out version, given by King (1994).

Recall from the previous chapter that the quantificational forces, domain restrictions, and scope permutations of CDQs are determined by their linguistic contexts. Recall also that a type <1, 1> determiner Q is right monotone increasing on a domain M iff Q(A, B) entails Q(A, C) whenever B ⊆ C, for all A, B, C ⊆ M, and Q is symmetric iff Q(A, B) entails Q(B, A) for all A, B ⊆ M. On King’s view, the quantificational force and domain restriction (“bound”) of a UAP are a function of (i) the monotonicity and symmetry properties of its quantified antecedent, (ii) its antecedent’s bound (restrictor), and (iii) any predicative material that has been attributed to its antecedent and to any other preceding pronouns in the same anaphoric chain. In particular, when a UAP’s antecedent is a universal quantifier, the UAP is a repetition of its antecedent. Hence in ‘Every farmer who has a donkey vaccinates. They spay and neuter, too’, the pronoun ‘they’ simply means ‘every farmer who has a donkey’. Second, when a UAP’s quantifier antecedent is not a universal quantifier and not symmetric monotone increasing, the UAP is a universal quantifier whose restriction is given by that of its antecedent plus any material predicated of it. So in (2), the pronoun ‘they’ means ‘all of Ralph’s donkeys’, because ‘few’ is not symmetric monotone increasing.
Finally, when a UAP’s quantifier antecedent is symmetric monotone increasing, the UAP is a quantifier with the same quantificational force as its antecedent but whose restrictor is the conjunction of its antecedent’s restrictor and any material predicated of its antecedent and of any other pronouns in the same anaphoric chain. So in (1), the pronoun ‘it’ means ‘a donkey of Ralph’s’, but the pronoun in (1+) means ‘a donkey of Ralph’s that is vaccinated by Harry’. In this way, UAPs with symmetric monotone increasing antecedents accumulate predicative information attributed to pronouns as the discourse progresses. As previously remarked, one of the primary failings of DRT is that it wrongly predicts that all pronouns accumulate predicative information in this way, regardless of the quantificational forces of their antecedents. This information may be summarized as follows:

Antecedent: [∀x : Fx]Gx
CDQ interpretation: [∀x : Fx]

Antecedent: [Qx : Fx]Gx, where ‘Q’ is neither ‘∀’ nor symmetric MON↑
CDQ interpretation: [∀x : Fx & Gx]

Antecedent: [Qx : Fx]Gx ... Hp ..., where ‘Q’ is symmetric MON↑ and ‘p’ is a prior UAP also anaphoric on [Qx : Fx]
CDQ interpretation: [Qx : Fx & Gx & Hx & ...]

The final way in which UAPs are context dependent is in the various scope permutations they permit with other operators in the sentences in which they occur. First note that since CDQs are quantifiers, they can take wide and narrow scope with respect to each other:

(40) At most four golfers₁ bogeyed several holes₂. They₁ played them₂ conservatively.
(41) Several holes₂ were bogeyed by at most four golfers₁. They₁ played them₂ conservatively.

To fix ideas, suppose that in (40) ‘at most four golfers’ takes wide scope over ‘several holes’, whereas in (41) these scopes are reversed. On King’s view, CDQs always take the same relative scope positions with respect to one another as their antecedent quantifiers do. In accordance with the above interpretations, this view predicts that the second sentence in (40) means that all the golfers who bogeyed several holes played several holes they bogeyed conservatively. In contrast, (41) says that several holes that were bogeyed by at most four golfers were played conservatively by all the golfers that bogeyed them. These predictions seem correct.

UAPs also appear able to enter into scope relations with attitude verbs:

(42) A man murdered Smith, but John does not believe that he murdered Smith.
(43) A man murdered Smith. The police suspect he escaped through the window.
(44) A man murdered Smith. John thinks Bill knows where he is staying.

Evans argued that since E-type pronouns are rigid referring terms, they do not permit de dicto readings in attitude contexts. He used (42) as supporting evidence for his view, since the de dicto reading is in fact unavailable there, whereas replacing the pronoun with the description ‘the man who murdered Smith’ allows for both de re and de dicto readings. Unfortunately, Evans was wrong to think that UAPs do not permit de dicto readings: the natural reading of (43) is one in which the police’s suspicion is not de re, in conflict with what Evans’ view predicts. Similarly, (44) admits an intermediate-scope interpretation whereby John thinks de dicto that Bill has de re knowledge of Smith’s murderer. On King’s view, the permissible scope permutations CDQs may take with other operators are also a function of their linguistic context.
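Before stating the constraint that governs these permutations, it is worth checking the determiner classifications on which the interpretation rules above rely. The following quick verification works directly from the definitions; it takes ‘a’ to have the force of ‘some’ and gives ‘few’ a proportional rendering, both of which are simplifying assumptions of mine rather than King’s official entries:

some(A, B) is true iff A ∩ B ≠ ∅. If A ∩ B ≠ ∅ and B ⊆ C, then A ∩ C ⊇ A ∩ B ≠ ∅, so some(A, C) is true: ‘some’ is right MON↑. And since A ∩ B = B ∩ A, some(A, B) entails some(B, A): ‘some’ is symmetric. Hence the third rule applies in (1), and ‘it’ has the force of ‘a donkey of Ralph’s’.

few(A, B) is true (on the proportional rendering) iff |A ∩ B| is a small fraction of |A|. Let A = {a₁, ..., a₁₀₀} and B = {a₁}. Then few(A, B) holds (1 of 100) but few(B, A) fails (1 of 1), so ‘few’ is not symmetric, and a fortiori not symmetric monotone increasing. Hence the second rule applies in (2), and ‘they’ has universal force over Ralph’s donkeys.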
Let’s say that a type <1, 1> determiner Q is existence entailing iff Q(A, B) entails A ∩ B ≠ ∅, nonexistence entailing iff Q(A ,B) entails that A ∩ B = ∅, and existence indeterminate otherwise. Finally, let’s say that an occurrence of a quantifier is !91 existentially positive iff (i) its determiner is existence entailing or existence indeterminate and (ii) it does not take narrow scope with respect to a nonexistence entailing quantifier or a non-factive operator (where an operator O is factive iff for any sentence S, O(S) entails S). King provides the following scope constraint on CDQs: Scope Constraint (SC): If a CDQ is existentially positive, then so is its antecedent. SC provides a nice explanation of why (3) is marked. There, the quantifier ‘no donkeys’ is not existentially positive, so by SC the pronoun ‘them’ cannot be existentially positive either. Since there are no other quantifiers or operators for ‘them’ to take narrow scope under, this means that the determiner of ‘them’ must be nonexistence entailing. But by the rules given previously for specifying the interpretations of CDQs, no CDQs are nonexistence entailing. Contradiction. The real power of SC however lies in its ability to predict the behavior of anaphoric pronouns in a wide variety of intensional contexts, as the following examples attest: (45) Mary believes that some student flunked the exam. He is sitting right there. (46) It ought to be the case that some friend of Ann’s apologize to Suzi. He is an accountant. The first sentences of (45) and (46), taken in isolation, permit two relevant scope possibilities. However, the CDQs in the second sentences are existentially positive, which means by SC that their antecedents are required to be existentially positive as well. Since the ‘belief’ and ‘ought’ operators are not factive, SC requires that the antecedents in these sentences take wide scope over their respective operators in the first sentences: Mary’s belief concerns a specific student, and the relevant deontic requirement concerns a particular friend of Ann’s. !92 SC thus promises to explain a wide variety of data. However there are problems. One issue is that SC is too weak. For example, consider the de re reading of the first sentence of (47), which says that there is a certain computer such that Lydia knows John bought it: (47) Lydia knows that John bought a computer. Lydia hopes it is an IBM. SC allows for the second sentence to be interpreted de dicto. But it does not seem that a de dicto reading is available when the first sentence is de re. Here King suggests that one could either complicate SC in some way in order to block this reading, or provide some pragmatic explanation as to why it is unavailable. But since King views SC as itself having a pragmatic explanation, he does not view these options as importantly different. Much more serious is that in many cases UAPs cannot take narrow scope with respect to non-attitudinal operators. To see this, return for a moment to (1). In this chapter I have been arguing that the second sentence of (1) is true iff Ralph has a donkey that is vaccinated by Harry; UAPs with existential antecedents have the quantificational force of an existential quantifier. What are the modal truth conditions of (1)? Intuitively, they are as follows: (1) is true at a world w iff at w, there is a donkey Ralph owns that is vaccinated by Harry. 
It is a consequence of these truth conditions that the pronoun ‘it’ is not a rigid designator in the following sense: there is no individual x such that necessarily, the proposition that it is vaccinated by Harry is true iff x is vaccinated by Harry. The pronoun’s contribution to the truth conditions of the second sentence is not that of a proper name (an individual), as Evans would have it, but rather that of an existential quantifier whose restriction is given by that of its antecedent (being a donkey), plus any descriptive material attached to its antecedent (being owned by Ralph). This suggests that UAPs with existential antecedents are properly analyzed as existential quantifiers, as CDQ predicts.

However, if that were all there were to the story, we should expect sentences containing such pronouns to exhibit the same interpretations as sentences containing the corresponding existential quantifier. But they do not. Whereas existential quantifiers exhibit scope ambiguities with other operators in sentences in which they occur, the existential force of anaphoric pronouns always takes wide scope over any modal or temporal operators, quantifiers, or negations. Consider the sentence (48) and its continuations (48a–d) and (48a’–d’):

(48) Ralph owns a donkey.
(48a) ...It’s not the case that it is vaccinated by Harry.
(48a’) ...It’s not the case that a donkey Ralph owns is vaccinated by Harry.
(48b) ...Everyone in town vaccinates it.
(48b’) ...Everyone in town vaccinates a donkey Ralph owns.
(48c) ...It could be that it’s vaccinated by Harry.
(48c’) ...It could be that a donkey Ralph owns is vaccinated by Harry.
(48d) ...It will be the case that it’s vaccinated by Harry.
(48d’) ...It will be the case that a donkey Ralph owns is vaccinated by Harry.

Whereas the truth of (48a) is consistent with the possibility that another of Ralph’s donkeys is vaccinated by Harry (no contradiction would ensue if we continued (48a) with ‘But Ralph owns another donkey and Harry vaccinates that one’), (48a’) admits a reading that prohibits any of Ralph’s donkeys from being vaccinated by Harry. [Footnote 23: Moreover, we are inclined to view (48a) as involving a kind of presupposition failure in a case where Ralph is not a donkey owner, since in that case ‘it’ does not denote. But on a narrow-scope construal, no presupposition failure is predicted: ‘It’s not the case that Harry vaccinates one of Ralph’s donkeys’ seems fine when Ralph is not a donkey owner.] Similarly, (48b) describes a town in which everyone vaccinates the same donkey, whereas (48b’) admits a weaker reading that does not require this. Finally, whereas (48c) requires that there be some possible world where one of Ralph’s actual donkeys is vaccinated by Harry, (48c’) admits a reading that is more permissive: it merely requires that there be a world where Harry vaccinates a donkey of Ralph’s (not necessarily a donkey that he owns in the world of evaluation of the matrix clause). Similar remarks apply to the temporal case in (48d)/(48d’).

This behavior is unexpected if pronouns are ordinary quantifiers. Nor is it predicted by SC. What we would like, and what the CDQ analysis does not provide, is a principled explanation of why the existential force of UAPs with existential antecedents always takes widest scope over any operators in the sentences in which they occur. How can it be that the pronoun in (1) is not a rigid designator in the sense given by RD, and yet the pronoun in (48c) is rigid in the sense given by that sentence’s truth conditions TC?
RD There’s a donkey of Ralph’s x such that for all possible worlds w, the proposition that Harry vaccinates it is true at w iff x is vaccinated by Harry at w, and so on for all other propositions expressed by sentences containing ‘it’ (holding fixed the pronoun’s anaphoric background context).

TC For all possible worlds w, the proposition that it’s possible that Harry vaccinates it is true at w iff at w, there’s a donkey of Ralph’s x such that it’s possible that x is vaccinated by Harry.

What makes the falsity of RD and the truth of TC so mysterious is the plausibility of the following bridge principle BR:

BR For all propositions p: if for all possible worlds w, the proposition that p is true at w iff at w, q, then for all possible worlds w, the proposition that it’s possible that p is true at w iff at w, it’s possible that q.

Given the following truth conditions TC1 for (1),

TC1 For all possible worlds w, the proposition that it is vaccinated by Harry is true at w iff, at w, there is a donkey of Ralph’s that is vaccinated by Harry.

BR entails that the truth conditions of (48c) are as follows (we require that the anaphoric backgrounds of the pronouns in (1) and in (48c) are the same, which they are):

For all possible worlds w, the proposition that it’s possible that Harry vaccinates it is true at w iff it’s possible at w that there is a donkey of Ralph’s that Harry vaccinates.

But these truth conditions imply the necessary material equivalence of (48c) and (48c’), which we reject.

In sum, UAPs do not permit the range of scope permutations with non-attitudinal operators that quantifiers exhibit. This wide-scoping behavior is entirely unexplained if UAPs are quantifiers, as CDQ maintains. The CDQ theorist might try to account for the data by complicating SC to stipulate that UAPs must take widest scope over any operators in the sentences in which they occur. On this view, the logical relationship between the second sentence of (1) and (48c) is not that of a sentence ‘S’ and its modalization ‘◇S’, and so BR does not apply. That is, the logical relationship between (1) and (48c) is that of ‘[∃x: Fx](Gx)’ and ‘[∃x: Fx]◇(Gx)’: the existential force of the pronoun is required to magically “scope out” when we modalize. However, requiring that UAPs-as-quantifiers take widest scope would lack any independent motivation: it simply stipulates the behavior to be explained. Moreover, the requirement would fail to explain why the quantifiers that are UAPs’ CDQ interpretations are not also required to take widest scope. Why do the indefinites in the primed sentences (48a’)–(48d’) permit narrow-scope interpretations while the pronouns in the unprimed sentences (48a)–(48d) do not?

In any case, other data show the proposal is a non-starter. Recall that when a pronoun’s antecedent quantifier lies inside a syntactic scope island, there is no way for its antecedent to “scope out” from its position in the surface structure to a c-commanding position at LF, which would be required for the pronoun to be bound by its antecedent. Using this fact, we can forestall the wide-scope response by throwing in some scope islands for good measure:

Ralph owns a donkey…
(49a) Everyone who sees it waves to it.
(49b) Everyone who sees a donkey of Ralph’s waves to a donkey of Ralph’s.
(50a) If anyone sees it, they wave to it.
(50b) If anyone sees a donkey of Ralph’s, they wave to a donkey of Ralph’s.

In (49a) and (50a), there is a requirement that, for each individual, the donkey waved at is the donkey seen.
Again, intuitively, the existential force of the pronoun scopes over the relative clause and the conditional operator, respectively. This coordination between the two pronouns is missing when we substitute existential quantifiers for the pronouns in (49b) and (50b). But unlike before, no wide-scoping mechanism can explain the interpretations of (49a) and (50a): indeed, unlike in the previous cases, in (49b) and (50b) no wide-scope interpretation is even available! In sum, the CDQ theorist cannot simply stipulate that UAPs-as-quantifiers are required to take wide scope over any operators in sentences in which they occur. The CDQ theorist cannot assimilate UAPs to quantifiers without abandoning otherwise well-motivated c-command constraints on quantifier movement.

7. UAPS AND INSTANTIAL TERMS: A SECOND LOOK

The fact that UAPs appear to scope out of scope islands suggests that the wide-scoping behavior of UAPs is not due to some element in the syntax which can move at LF to wider scope positions (quantifier raising), but comes from some parameter of the interpretation of the entire sentence. In the previous chapter we saw similar behavior with instantial terms. An instantial formula is true iff the open formula that results after replacing its instantial terms with appropriate variables is satisfied by all of the objects in the terms’ (joint) value range. Hence the universal quantificational force of instantial terms appears to take wide scope over any operators in formulas in which they occur. But this is not because instantial terms are themselves quantifiers which raise to wide-scope positions at LF. Rather, the wide-scope interpretations are a consequence of the truth-theory for instantial formulas.

The apparent wide-scoping behavior out of syntactic scope islands of both pronouns and instantial terms suggests that King’s “felt connection” between instantial terms and UAPs may be quite strong. If UAPs also contribute individual concepts which encode background conditions as determined by their antecedents, and if those background conditions feature in the propositions expressed by sentences containing UAPs, then perhaps we can explain how pronouns appear to scope out of scope islands without violating c-command. For example, in the discourse ‘Every farmer owns a donkey. He feeds it every day’, ‘he’ would express an independent individual concept f and ‘it’ would express an individual concept d dependent on f, such that the joint value range of <f, d> consists of pairs of donkey-owning farmers x and donkeys y owned by x. The second sentence of the discourse would express the two-component proposition

<<f feeds d every day> : x_f is a donkey-owning farmer & y_d is a donkey owned by x_f>

(abstracting away from the internal structure of the structured complex on the left). According to the truth-theory for two-component propositions given in the previous chapter, this proposition is true iff (i) there is a donkey-owning farmer x and a donkey y owned by x, and (ii) for all donkey-owning farmers x and donkeys y owned by x, x feeds y every day. These are intuitively the correct truth conditions for this sentence. Hence the universal quantificational force of ‘he’ and ‘it’ automatically scopes over the quantifier ‘every day’, but this is not due to any quantifier movement. Rather, it is a consequence of the truth-theory for two-component propositions.
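Stated schematically, the clause being relied on here can be put as follows; this is a compressed restatement of the truth-theory from the previous chapter for the case in which the individual concepts carry universal force, not a new proposal:

<C(c₁, ..., cₙ) : B(c₁, ..., cₙ)> is true at w iff
(i) at w, some sequence of objects <o₁, ..., oₙ> satisfies the background conditions B, and
(ii) at w, every sequence <o₁, ..., oₙ> that satisfies B also satisfies the structured complex C.

Because any operators inside C (here, the quantifier ‘every day’) are evaluated within clause (ii), the quantification contributed by the background conditions automatically outscopes them without any movement at LF.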
Extending this basic picture to account for all the data presented in this chapter will require a number of changes to our previous framework of two-component propositions. For starters, we need to allow that individual concepts can be associated with different quantificational forces. As touched on at the end of the previous chapter and as explored in §2.2, UAPs can have existential quantificational force in addition to universal quantificational force (weak v. strong readings). This amendment, and the further changes that it will require, will be introduced and explained in the next chapter. However, before we get to that, the idea of marrying unbound pronouns to instantial terms meets a number of initial worries. To start, there are contexts in which UAPs can take narrow scope. For example, we have already seen with (43) and (44) that UAPs permit de dicto readings with attitude verbs. Another well-known example of this is Geach’s “intentional identity” sentence (1967): !99 (51) Hob thinks a witch blighted Bob’s mare. Nob wonders whether she (the same witch) killed Cob’s sow. The relevant interpretation here is one where (51) is consistent with the fact that there are no witches (existent or non-existent). On a de dicto reading, both the pronoun and its indefinite antecedent are under the scope of (different) opacifying predicates. If two-component propositions are conceived of as the cognitive event-types involving individual concepts (mental representation-types) agents token when they understand the sentences that expresses them, then a natural and plausible account of attitude reports involving UAPs takes them to express relations between the subject and a two-component proposition. Hence (43) would report the police as bearing an appropriate psychological relation to a two- component proposition tokens of which involve mediately predicating escaping through the window of an individual concept m tokens of which encode the background condition man who murdered Smith (with associated existential force). In the truth-theory for this proposition, the background conditions act like an existential quantifier which scopes over the interpretation of the structured complex component. But since the whole two-component proposition is nested inside the proposition expressed by (43), these background conditions do not take scope over the semantic contributions made by the subject and by the attitude verb, allowing for a de dicto reading of the sentence. In the next chapter I will show how this basic picture can be extended to provide a powerful framework which can account for a wide variety of pronominal anaphora in attitude constructions. Modal subordination cases (cf. Roberts 1989, 1996) provide another kind of case where UAPs must take narrow scope. In these cases, the fact that a pronoun’s antecedent scopes under an operator requires that the pronoun itself be in the scope of another modal operator: !100 (52) A wolf might walk in. #It eats you. (52’) A wolf might walk in. It would eat you. Suppose ‘a wolf’ is read as taking narrow scope under the modal operator ‘might’. On this interpretation, the second sentence of (52) is infelicitous because there is no actual wolf under discussion for the pronoun to refer to; the second sentence can only be read as an independent assertion. However, by adding a modal as in (52’), we can read the second sentence as a continuation of the first: it says something like If a wolf were to walk in, it would eat you. 
The pronoun in this sentence cannot be interpreted as taking wide-scope over the modal for the same reason as before. Instead the first sentence makes salient a set of worlds in which a wolf walks in. And the second sentence says it eats you, where ‘it’ is a/the wolf that enters in those worlds made salient by the first sentence. Modal subordination is part of a broader phenomenon of intensional subordination whereby a pronoun whose antecedent is in the scope of some intentional operator, quantifier, opacifying predicate, etc., takes scope under its own intensional operator, quantificational determiner, adverb of quantification, etc.: (53) Usually Fred buys a muffin in the morning. It’s always oat-bran. (54) If John bought a book, he’ll be home reading it by now. It’ll be a murder mystery. (55) Harvey courts a girl at every convention. She always comes to the banquet. (56) You should eat a bagel. It would fill you up. (57) John doesn’t have a car. It would be in the garage. In (53), the pronoun ‘it’ is subordinate to the adverb of quantification ‘always’. The second sentence says that in all mornings s where Fred buys a muffin, m is oat-bran, where m is a/the muffin Fred buys in s. The quantificational force of the pronoun cannot take scope over the !101 adverb because the background conditions for the pronoun include a variable which is bound by ‘always’. Similar remarks apply to (54)–(57). I view these intensional subordination cases as more of an added complication for the two-component propositional framework than as a problem for the idea that unbound pronouns and instantial terms are of a semantic kind. One license for this optimism comes from the fact that, even in intensional subordination cases, the quantificational force of UAPs scopes over any operators within the subordinated context. To illustrate, consider the following: (58) There is no inter-Mercurial planet but there might have been. It would be called ‘Vulcan’. It would be the case that, if its orbit had been larger than Mercury’s, it wouldn’t have been the inter-Mercurial planet. The pronoun in the second sentence of (58) is subordinate to the modal operator ‘would’: we are talking about a would-be rather than an actual inter-Mercurial planet. In the third sentence we have a counterfactual operator which is also modally subordinate to the possibility under consideration. Intuitively, the quantificational force of the subordinated pronoun takes scope over this counterfactual operator. That is, the third sentence means something like, in all possible worlds w where there is an inter-Mercurial planet, if V had had a larger orbit than Mercury’s, then V would not be the inter-Mercurial planet, where V is the inter-Mercurial planet in w. Assuming a Lewisian possible-worlds analysis of counterfactuals, the sentence is true iff for all possible worlds w where there is an inter-Mercurial planet, there’s an inter-Mercurial planet x at w such that in all the closest possible worlds w’ to w such that x has a larger orbit than Mercury at w’, x isn’t the inter-Mercurial planet at w’. If the subordinated pronoun were simply a covert definite description, we should expect there to be a reading where the quantificational force of the subordinated pronoun takes scope !102 under the counterfactual. Indeed, this reading does seem available (although highly unlikely) if we replace the pronoun in (58) with the relevant description: (58’) There is no inter-mercurial planet but there might have been. 
The inter-Mercurial planet would be called ‘Vulcan’. Of course, it would be the case that, if the orbit of the inter-Mercurial planet had been larger than Mercury’s, the inter-Mercurial planet wouldn’t have been the inter-Mercurial planet.

The counterfactual in (58’) has a reading where both the antecedent and the consequent of the counterfactual are necessarily false. On this reading, the third sentence of (58’) is true iff for all possible worlds w where there is an inter-Mercurial planet, in all the closest possible worlds w’ to w such that the inter-Mercurial planet at w’ has a larger orbit than Mercury at w’, the inter-Mercurial planet at w’ isn’t the inter-Mercurial planet at w’. This “counter-possible” reading is obviously highly improbable, and so the most salient reading of (58’) is the one that accords with the reading given to (58). Still, it does seem that this counter-possible reading is available for (58’) whereas no such reading is available for (58) itself.

Admittedly, (58) is highly contrived. But its validity as a data point is bolstered by similar wide-scoping behavior in other subordination cases. Consider the following, where again we are interested in readings of the first sentences on which the antecedent takes narrow scope:

(59) You should plant some hydrangeas in your garden. They would grow beautiful pink flowers. They would grow blue flowers if I had planted them at my house (which has acidic soil).
(59’) You should plant some hydrangeas in your garden. The hydrangeas you plant in your garden would grow beautiful pink flowers. The hydrangeas you plant in your garden would grow blue flowers if I had planted the hydrangeas you plant in your garden at my house (which has acidic soil).
(60) At every Olympics, there’s an amateur swimmer. Usually, she comes in last in every event.
(60’) At every Olympics, there’s an amateur swimmer. Usually, the amateur swimmer comes in last in every event.
(61) In every city, the mayor is always the one that gives final approval to a proposed public transportation project. Usually, it takes over a decade before he attends the ribbon-cutting ceremony at its completion.
(61’) In every city, the mayor is always the one that gives final approval to a proposed public transportation project. Usually, it takes over a decade before the mayor attends the ribbon-cutting ceremony at its completion.
(62) Sara hopes the next president is from the Bay Area. ??Sara thinks it won’t be too long before he is Asian.
(62’) Sara hopes the next president is from the Bay Area. Sara thinks it won’t be too long before the next president is Asian.

(59) and (59’) are just like (58) and (58’), but perhaps a little less contrived. Intuitively, the quantificational force of the pronouns ‘they’ and ‘them’ scopes out of the counterfactual operator even though the conditional is a scope island. (59’) permits a counter-possible reading which, although not salient, seems nevertheless available, while no such reading exists for (59). (60) says that at most Olympics where there’s an amateur swimmer, it’s the same amateur swimmer that comes in last in every event (though not necessarily the same person at every Olympics). In contrast, (60’) admits a narrow-scope reading where it’s different swimmers who come in last. Similarly, (61) requires that in most cities, the mayor that approved a project is the same individual who sees its completion (even if she has since left office and is no longer mayor).
In contrast, (61’) admits a narrow-scope reading on which it is required that the individual who sees a project’s completion is the mayor, but not required that she be the same mayor who originally approved the project. Finally, the second sentence of (62) sounds odd because it reports Sara as thinking that some Bay Area native she hopes will become president will soon be Asian. But this is an unusual belief to have: we do not ordinarily think that people can change their ethnicity. In contrast, (62’) is not odd if read with ‘the next president’ under the scope of the temporal operator in the second sentence. It does not attribute to Sara the belief that anyone will change their ethnicity. [Footnote 24: Admittedly, sometimes I can hear (62) as reporting the same thing as the unproblematic, narrow-scope reading of (62’). This will be addressed below.]

These and similar data suggest that although pronouns are not rigid designators in the same sense that proper names are, they play a similar role in thought and in language. As rigid designators, proper names allow us to think about their referents even in counterfactual circumstances where those referents do not satisfy descriptive conditions we take them to actually satisfy. Linguistically, this is reflected in the fact that, even when a proper name is embedded under a modal operator, as in the sentence ‘□(...a…)’, the resulting sentence is true iff the open formula ‘...x…’ that results from replacing the name in the modal’s prejacent with a variable is satisfied, in all possible worlds w’ accessible from the world of evaluation, by the actual referent of the name. (In contrast, definite descriptions in modal contexts like ‘□(...[the x: Fx]…)’ permit readings where those sentences are true iff the open formula ‘...x…’ that results from replacing the description in the modal’s prejacent with a variable is satisfied, in all possible worlds w’ accessible from the world of evaluation, by the unique F in w’.) This was essentially Kripke’s observation in Naming and Necessity about the modal profile of proper names.

Similarly, when the antecedent of an unbound pronoun is unembedded, the pronoun purports to refer to a particular individual that satisfies the descriptive conditions in its antecedent: in (1) it is as if I am referring to an actual donkey of Ralph’s. But, as I have argued in this chapter, UAPs are not referring terms. Since they do not genuinely refer, UAPs are not rigid designators in the sense that they refer to the same object in all possible worlds. Still, as we have seen in (48)–(50), when a pronoun with an unembedded antecedent is itself embedded in the scope of a modal operator, quantifier, conditional, scope island, and so on, the pronoun’s apparent quantificational force scopes over any operators in the sentences in which it occurs (attitude contexts being an important exception). If UAPs function like instantial terms in semantically expressing individual concepts which encode background conditions, then this wide-scoping behavior is readily explained: the pronoun’s background conditions, as determined by its antecedent and as encoded by the individual concept it expresses, are not part of the syntax, but feature in the second component of any two-component proposition expressed by sentences containing the pronoun. These background conditions act like a quantifier which scopes over the interpretation of the structured-complex component of the proposition.
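For concreteness, and anticipating the existential-force amendment to be introduced in the next chapter, the same mechanism applied to (48c) can be sketched as follows:

(48c) expresses <◇(Harry vaccinates d) : y_d is a donkey owned by Ralph>, with existential force attached to d, and this proposition is true at w iff, at w, there is a donkey y owned by Ralph such that it is possible that Harry vaccinates y.

The modal operator applies only within the structured complex, while the background condition contributed by ‘it’ is evaluated at w itself; this is why the pronoun appears to scope out of the modal, which is just the TC reading defended above.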
In a word, pronouns with unembedded antecedents purport to refer to a particular individual that satisfies the pronoun’s background conditions in the actual world, allowing us to (purport to) think about what that individual would be like even in counterfactual circumstances where it does not satisfy those conditions. Modal subordination cases simply shift the world of evaluation for a pronoun’s background conditions from the actual world to some set of counterfactual worlds made salient by the clause containing the pronoun’s antecedent. Still, within these counterfactual worlds, a pronoun in a subordination case purports to refer to an individual that satisfies its background conditions in these worlds, allowing us to consider what that would-be individual would be like in yet further counterfactual circumstances where it does not satisfy those background conditions. But this too can be explained within the two-component propositional framework if, !106 like attitude verbs, subordinating operators can themselves take as argument two-component propositions. Although I reserve the details of such an account for the next chapter, for now it is easy enough to see how such an approach would provide a unified explanation of the foregoing data: the apparent quantificational force of a pronoun always takes the same scope as that of its background conditions, as determined by its antecedent. When the antecedent is unembedded, the pronoun’s apparent quantificational force takes widest-scope because its background conditions act as a quantifier which takes scope over the structured complex component of propositions expressed by sentences in which it occurs. When the antecedent takes narrow scope with respect to some operator, the pronoun’s apparent quantificational force scopes over all operators except for the relevant subordinating operator, because its background conditions act as a wide-scope quantifier over the structured complex component as before, but the whole two-component proposition is nested within a larger proposition as an argument of the subordinating operator. Although previous authors, including Evans (1977), Berger (2002), and Salmon (2006a,b), have also noted that unbound anaphora “are like definite descriptions which insist upon widest scope” (Evans 1977, fn. 67) they have not all agreed on how to capture this fact. Evans of course takes UAPs to rigidly refer to the unique satisfier of their antecedents, but we have previously seen data that suggest that UAPs (i) are not rigid designators in the sense of RD (repeated below) in the case of (1), and (ii) do not have a uniqueness requirement, but may have existential or universal quantificational force (strong v. weak readings). (1) Ralph owns a donkey. Harry vaccinates it. RD There’s a donkey of Ralph’s x such that for all possible worlds w, the proposition that Harry vaccinates it is true at w iff x is vaccinated by Harry at w, !107 and so on for all other propositions expressed by sentences containing ‘it’ (holding fixed the pronoun’s anaphoric background context). Berger (2002), Salmon (2006a,b), McKinsey (1986), Soames (1989), and Neale (1990) agree with these intuitions about the non-rigidity of UAPs. Berger and Salmon make the further observation that UAPs’ apparent quantificational force seems to take widest scope, and so although UAPs are not rigid in the sense of RD in the case of (1), they are rigid in the sense of TC in the case of (48c). (48c) Ralph owns a donkey. It’s possible that Harry vaccinates it. 
TC For all possible worlds w, the proposition that it’s possible that Harry vaccinates it is true at w iff at w, there’s a donkey of Ralph’s x such that it’s possible that x is vaccinated by Harry.

But Berger and Salmon do not attempt to extend these observations beyond the simplest of cases, and in particular do not explore how one might account for the interaction of unbound anaphora with modals, conditionals, attitude reports, or intensional subordination cases, or for the strong v. weak readings of UAPs. Moreover, since they do not state their observations about the truth conditions of modal sentences containing UAPs in terms of propositions, they do not recognize that, for the propositionalist, the non-rigidity and wide-scoping behavior of pronouns requires the denial of the plausible bridge principle BR:

BR For all propositions p: if for all possible worlds w, the proposition that p is true at w iff at w, q, then for all possible worlds w, the proposition that it’s possible that p is true at w iff at w, it’s possible that q.

How could BR be false? One possibility is that propositions contain two components of information on which the truth of the proposition modally depends, but modal operators take as argument only one component of the proposition expressed by their prejacents, namely the structured complex. As I will explain in the next chapter, rejecting BR in this way will require that we give up certain equivalences between modal sentences and certain theoretical claims about the modal truth conditions of the propositions expressed by those modal sentences’ prejacents, but it seems to me that this theoretical cost for our theory of propositions is small, and in any case, accommodating both the RD and TC intuitions would require some theoretical commitments to be abandoned.

It should be noted that although I have situated my view of unbound anaphora within a theory of structured propositions, it is not part of my view that all of the machinery and theoretical commitments that come with this framework are strictly necessary for an account of unbound anaphora. But a commitment to structured propositions is well motivated independently of these issues: I maintain that there are strong reasons to posit such entities as the contents of sentences, bearers of truth-values and modal properties, objects (in some sense or other) of the attitudes, and so on. By taking unbound pronouns to semantically express individual concepts which encode various descriptive information, we may accommodate certain insights DRT offers about the cognitive task of keeping track of the entities mentioned and the information conveyed in conversation. Like Heim’s “file-cards”, my individual concepts are conceived of as mental representation-types which contain information that has been attributed to their denotations in the course of the conversation. By marrying these insights from DRT about the information structure subjects maintain in cognition when engaging in conversation with a theory of propositions, my view has the chief advantage that we may retain our pre-theoretic commitment to sentential truth as primary. In any case, I think the question of how to account for unbound anaphora within a theory of propositions is itself interesting and worthwhile.

Now, before we see all of this in action, it’s worth noting that there is another set of unbound anaphoric pronouns that the present view is not designed to account for.
Geach coined the term “pronouns of laziness” for pronouns that are used in lieu of more prolix expressions that are repetitions of their antecedents:

(63) John gave his paycheck to his mistress. Everyone else deposited it in the bank.
(64) In the kitchen, the windows overlook the mountains. In the bedroom, they face the ocean.
(65) Boston has a Republican Mayor. Soon he’ll be a Democrat.

In (63), ‘it’ does not refer to John’s paycheck, but is simply shorthand for the longer ‘his paycheck’. Similarly, in (64) we are not talking about the windows in the kitchen, but the windows in the bedroom. Finally, whereas Evans had argued, as support for his view, that (65) only has a reading on which someone changes political parties, not everyone has agreed that this is the only reading of the sentence. Although the ‘wide-scope’ reading does seem to me to be much easier to hear, I agree with descriptivists that (65) also seems to permit a narrow-scope reading on which it does not require anyone to change parties, but simply predicts that a Democrat will win the next mayoral election. In certain moods I can also hear a narrow-scope reading for (62) along similar lines, although again I find the wide-scope reading much more prominent. Do examples like (65) show, as some descriptivists have assumed, that pronouns are definite descriptions after all, since they can apparently enter into (narrow) scope relations with operators in sentences in which they occur, contrary to what I have been arguing? I think such conclusions would be premature. The problem for this suggestion is that it makes incorrect predictions for the broad range of data where pronouns appear to take only the same (wide) scope as their background conditions, even in intensional subordination cases. To account for these data, the descriptivist would have to explain why the narrow-scope readings in these cases are not heard even though they are technically available. Moreover, the descriptivist would have to explain why in many such cases narrow-scope readings are not nearly as salient as wide-scope readings: the two readings seem asymmetric in their availability. A more parsimonious explanation of these narrow-scope cases, it seems to me, is that the two readings of (65) and similar cases are not due to different scope permutations of the pronoun and the temporal operator, but are instead due to two separate semantic mechanisms. In particular, the narrow-scope reading is generated by some mechanism of copying descriptive material from the pronoun’s antecedent. Since everyone seems to agree that some such copying mechanism is already needed to explain pronouns of laziness in paycheck sentences like (63), this explanation of the narrow-scope reading of (65) has independent motivation. Unfortunately, for reasons of time and space, a more thorough investigation into the syntactic mechanisms that govern the generation of pronouns of laziness will not be pursued here.

CHAPTER 3: TOWARD A POSITIVE ACCOUNT

In this chapter, I modify the framework of two-component propositions presented in chapter 1 in order to account for the more complex behavior of unbound anaphoric pronouns in natural language.

1. QUANTIFICATIONAL FORCE

In chapter 1, I argued that instantial terms purport to refer to particular members of their instantiating class (value range), and hence bear important syntactic, inferential, and cognitive kinships with referring terms. As we have seen, unbound anaphoric pronouns (UAPs) exhibit similar phenomenology.
In (1) the pronoun ‘it’ need not refer to any donkey in particular. Nevertheless, we can continue the discourse in (1a) as if we were referring to a particular donkey-owning farmer and a particular donkey he owns: (1) Someone who owns a donkey beats it. (1a) I bet he vaccinates it too. One difference between instantial terms and pronouns, however, is that while instantial terms and the pronouns in (1) both purport to refer to individuals that satisfy their antecedents, the individuals to which instantial terms purport to refer are taken to be representative of the value range as a whole: conclusions drawn using an instantial term ‘p’ denoting the prime numbers have the truth-conditional import of general claims about all primes. In contrast, the pronouns in (1) and (1a) have existential rather than universal force: we cannot conclude from (1)/(1a) that every donkey-owning farmer beats/vaccinates all of his donkeys. This was a fundamental difference between instantial terms and certain unbound pronouns noted in chapter 1. !112 However, there are also cases where the individuals to which pronouns purport to refer are taken to be representative of all the objects that satisfy their antecedents: (2) Each degree candidate walked to the stage. He took his diploma from the Dean. (due to Partee, in Roberts 1989) (3) Every chess set comes with a spare pawn. It is taped to the top of the box. (Sells 1985) (4) Every farmer who owns a donkey beats it. He vaccinates it too. Craige Roberts dubbed these “telescoping” cases, since they are cases where “we begin a narrative with a statement about a class of individuals, then we zoom in on one instantiation of that class to continue the narrative” (1989, p. 718). But this is a pretty good description of the putative function of instantial terms. This suggests that UAPs can also play the role of instantial terms in purporting to refer to arbitrary or representative members of the relevant instantiating class, and hence have universal rather than just existential force. As we have seen, a related dichotomy occurs for UAPs embedded in relative clause and conditional donkey sentences: the so-called strong versus weak readings. Recall that whereas (5) intuitively requires that every donkey-owning farmer vaccinate every donkey he owns, (6) only requires that every suit-owning man wear some suit that he owns: (5) Every farmer who owns a donkey vaccinates it. (6) Every man who owns a suit wears it to work. In the previous chapter I argued that trying to deliver these different readings by modifying the interpretation of the head quantifier or conditional operator was undesirable. Different readings of quantified donkey sentences are not due to different interpretations of the universal quantifier: ‘every’ means the same thing in both (5) and (6). This is why in the intuitive glosses we gave for (5) and (6), we replaced the pronouns with the universal quantifier every donkey he !113 owns and the indefinite (existential quantifier) some suit he owns, respectively. Moreover, because of “mixed” cases containing multiple UAPs in the same quantified donkey sentence, some of which are “strong”, some of which are “weak”, trying to account for the full range of possible interpretations of quantified donkey sentences by positing different interpretations for the head quantifier would require an endless variety of possible readings for quantifiers. From the perspective of explaining language acquisition and production, this is undesirable. 
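Since the claim that ‘every’ is univocal while the pronoun’s force varies is doing real work here, a toy implementation may help fix ideas. The following Python sketch is purely illustrative: the miniature models and helper names (FARMERS, donkey_sentence, and so on) are mine, and the restrictor ‘who owns an N’ is simplified to non-emptiness of the relevant set.

```python
# A rough sketch of the intended truth conditions of (5) and (6): the head
# quantifier is the ordinary 'every' in both cases; only the pronoun's
# quantificational force differs.  Models and names are illustrative only.

FARMERS = {"Al": {"donkeys": {"d1", "d2"}, "vaccinates": {"d1"}},
           "Bo": {"donkeys": {"d3"}, "vaccinates": {"d3"}}}
MEN = {"Cy": {"suits": {"s1", "s2"}, "wears": {"s1"}}}

def every(restrictor, scope):
    """The single, univocal universal determiner."""
    return all(scope(x) for x in restrictor)

def donkey_sentence(model, owns, verb, force):
    """'Every F who owns an N <verbs> it', with the pronoun's force set to
    'every' (strong reading) or 'some' (weak reading)."""
    q = all if force == "every" else any
    return every((x for x, v in model.items() if v[owns]),
                 lambda x: q(y in model[x][verb] for y in model[x][owns]))

# (5) strong: every donkey-owning farmer vaccinates *every* donkey he owns
print(donkey_sentence(FARMERS, "donkeys", "vaccinates", force="every"))  # False (Al skips d2)
# (6) weak: every suit-owning man wears *some* suit he owns
print(donkey_sentence(MEN, "suits", "wears", force="some"))              # True
```

The head determiner `every` is literally the same function in both cases; only the `force` parameter associated with the pronoun changes, which is the division of labor argued for above.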
A more plausible approach is to generate the different readings of quantified donkey sentences by taking the UAPs themselves to carry different quantificational forces (for conditionals, see below). In addition to being more theoretically attractive for the reasons just given, this suggestion receives independent support from the fact that we already have reasons to posit a dichotomy between UAPs with existential and universal quantificational force in unembedded cases. Capturing UAPs with existential force in our framework of two-component propositions will require technical modifications and will raise important philosophical issues about what it is to mediately predicate properties of the individual concepts expressed by UAPs with existential force. These issues are addressed below. But before we address these issues, there is a prior question: are these the only possible quantificational forces UAPs can take? Recall that according to King’s CDQ view, UAPs whose antecedents are symmetric monotone increasing have the same quantificational force as their antecedents. However, it’s not obvious to me that this prediction is correct. Consider determiners of the form ‘at least n’, which are symmetric monotone increasing: (7) Ralph owns at least four donkeys. He vaccinates them. Is the quantificational force of ‘them’ in the second sentence at least four or is it all? Consider a case where Ralph has 10 donkeys but vaccinates just 4 of them. It is not obvious to me that the !114 second sentence of (7) is true in this case. Conversely, if Ralph has exactly three donkeys and vaccinates them all, it is not obvious to me that the second sentence is false. I hear the second sentence as true iff Ralph vaccinates all of his donkeys, in conflict with what the CDQ view predicts. However, a full exploration of this issue is beyond the scope of this dissertation since the issues it raises concern the interpretation of quantifiers more generally. 1 Kanazawa makes the following empirical observation regarding the strong v. weak interpretation of relative clause donkey sentences of the form ‘Q-many farmers who own a donkey beat it’ (1994, p. 113): People’s intuitions about donkey sentences with respect to consistent donkey- beating situations accord with the truth conditions given by the weak reading and the strong reading. By “consistent donkey-beating” situations, Kanazawa means situations where either (i) all the farmers beat all of their donkeys, or (ii) all the farmers beat none of their donkeys. In contrast, the E-type and pairwise interpretations of relative clause donkey sentences can only account for the intuitive truth-values of donkey sentences in cases where those truth-values accord with what the strong and weak readings already predict. The fact that when intuitions are clear about the truth conditions of donkey sentences, they always accord with either the weak or the strong reading lends credence to the hypothesis that the only available interpretations of donkey sentences are given by the weak and the strong readings. But if the different readings of relative clause donkey sentences are due to the pronouns in those sentences having universal or existential quantificational force, then this suggests that the only available quantificational forces for pronouns are universal or existential. Cf. Szabolcski (2010) and the references therein. 
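For what it is worth, the divergence just described for (7) can be made vivid with a two-line computation. The model below is mine and is only meant to show that the CDQ-style ‘at least four’ reading and the universal reading reported above come apart in exactly the scenario considered (Ralph owns ten donkeys and vaccinates four of them).

```python
# Hypothetical model for (7): Ralph owns ten donkeys and vaccinates four.
donkeys = {f"d{i}" for i in range(1, 11)}
vaccinated = {"d1", "d2", "d3", "d4"}

at_least_four_reading = len(donkeys & vaccinated) >= 4   # CDQ-style prediction: True
universal_reading = donkeys <= vaccinated                # 'all' reading: False

print(at_least_four_reading, universal_reading)          # True False
```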
1 !115 For these reasons, in the rest of this section, I focus on how to capture existential and universal quantificational forces in my present framework of two-component propositions. Should it be shown that UAPs carry other quantificational forces, or should it turn out that we need to posit a truth-conditional uniqueness constraint on UAPs, I take it to be an advantage of the present framework that it can easily accommodate these other quantificational forces without any damage to the overall view, as will become clear. 1.1 Quantified Background Conditions In chapter 1, the background conditions encoded by (tokens of) individual concepts were represented by open formulas. Hence in applications of EI or UI where an instantial term ‘a’ is introduced by the quantified formulas ‘∃xFx’ or ‘∀xFx’, respectively, the background conditions of the individual concept expressed by ‘a’ are represented by the open formula ‘Fx’. The subsequent formula ‘G(a)’ was taken to express the two-component proposition (8) (8) < <G, <c>> : Fx c > where G is the property expressed by ‘G’, c is the individual concept expressed by ‘a’ (the bold- face indicates that c is a target of what I called mediate predication by the property G), and Fx c are background conditions for the proposition (where the subscript is used to pair the background condition with the corresponding individual concept c). In the truth-theory for two-component propositions, (8) is true iff (i) something satisfies the background conditions Fx c , and (ii) everything satisfying the background conditions Fx c is G. In order to capture the existential quantificational force of UAPs within the framework of two-component propositions, background conditions encoded by individual concepts will now not only include descriptive information, but also a quantificational force. In effect, they will no longer be represented by open formulas, but by quantifiers. More specifically, !116 background conditions will be represented by type <1, 1> generalized quantifiers of the form ‘[Qx: Fx]’ where Q ∈ {∀,∃}. The descriptive information ‘Fx’ still functions to pick out the range of objects which the individual concept represents, and we shall still say an individual concept has a value range, which consists of all and only those objects which satisfy the descriptive information it encodes. How background conditions are determined is highly sensitive to both the linguistic context and speakers’ worldly knowledge. However, certain regularities can be ascertained which constitute at minimum statistical likelihoods, if not default rules, which govern what background conditions are encoded. These generalizations may be stated as follows: BC. Let p be an occurrence of a pronoun anaphoric on, but not c-commanded by, a quantifier ‘[Qx: Fx]’ which occurs in an antecedent clause ‘[Qx: Fx](Gx)’. If Q is existential, the background condition encoded by (tokens of) the individual concept c expressed by p is [∃x c : Fx c & Gx c ]; otherwise, the background condition is [∀x c : Fx c & Gx c ]. For example, in the discourse (9), (9) Ralph owns a donkey. He vaccinates it. the pronoun ‘it’ expresses an individual concept d which encodes the quantified background condition [∃xd: xd is a donkey owned by Ralph]. In contrast, the antecedent of the pronoun ‘he’ in (2) is the universal quantifier ‘each degree candidate’. 
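As a first pass, the default rule BC can be prototyped as a simple mapping from an antecedent’s determiner and descriptive material to a quantified background condition. The sketch below is a deliberately naive Python rendering (the class and function names are mine), and it ignores the heavy defeasibility of BC discussed later in this section.

```python
from dataclasses import dataclass

@dataclass
class BackgroundCondition:
    force: str        # 'some' or 'every'
    restrictor: str   # the F-material from the antecedent's restrictor
    scope: str        # the G-material predicated in the antecedent clause

    def __str__(self):
        q = "∃" if self.force == "some" else "∀"
        return f"[{q}x_c: {self.restrictor} & {self.scope}]"

def bc(antecedent_determiner, restrictor, scope):
    """Naive rendering of BC: an existential antecedent yields an existential
    background condition; any other antecedent yields a universal one."""
    force = "some" if antecedent_determiner in ("a", "an", "some") else "every"
    return BackgroundCondition(force, restrictor, scope)

# (9)  'Ralph owns a donkey. He vaccinates it.'
print(bc("a", "x_c is a donkey", "Ralph owns x_c"))
#   [∃x_c: x_c is a donkey & Ralph owns x_c]
# (2)  'Each degree candidate walked to the stage. He took his diploma ...'
print(bc("each", "x_c is a degree candidate", "x_c walked to the stage"))
#   [∀x_c: x_c is a degree candidate & x_c walked to the stage]
```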
Hence by BC the pronoun ‘he’ will express an individual concept p which encodes the universal background condition [∀xp: xp is a degree candidate who walked to the stage]. Recall that an individual concept a is dependent on another individual concept b (written a < b) iff the value range of a is determinable from that of b, and an individual concept is !117 independent iff it’s not dependent on any other individual concept. As before, we require that the dependency relation between individual concepts is a strict partial order: it is transitive, asymmetric, and irreflexive. Moreover, the converse of the dependency relation is well founded: there can be no infinite set of individual concepts a 1 , a 2 , a 3 , … such that a 1 < a 2 < a 3 < …. Now, if the antecedent clause ‘[Qx: Fx](Gx)’ of a UAP p contains a variable bound by the antecedent quantifier of another UAP q, and if q’s antecedent takes scope over that of p, then the individual concept expressed by p will be dependent on that expressed by q. In such a case, we will use whatever variable subscripted by q that is used to represent q’s background conditions in the background conditions for p. For example, in (10), suppose the scope order of the quantifiers ‘a Cotswold farmer’ and ‘a Cotswold sheep’ is given by the surface form. (10) A Cotswold farmer owns a Cotswold sheep. He vaccinated it. Then the antecedent clause of ‘he’ is [∃x: farmer(x)]([∃y: sheep(y)](x owns y)) and the antecedent of ‘it’ is [∃y: sheep(y)](x owns y), where ‘x’ is bound by the antecedent of ‘he’. Hence ‘he’ will express an individual concept f that encodes the background condition [∃x f : farmer(x) & [∃y: sheep(y)](x f owns y)], while ‘it’ will express an individual concept s dependent on f which encodes the background condition [∃y s : sheep(y s ) & x f owns y s ]. The dependency of background conditions is important in order to avoid problems with simple-minded analyses of (10) according to which ‘he’ has the force of a Cotswold farmer who owns a Cotswold sheep and ‘it’ has the force of a Cotswold sheep owned by a Cotswold farmer. Sommers (1982) pointed out that such an analysis fails for (10) whenever no Cotswolder vaccinates his own sheep. The present approach avoids such shortcomings. If we like, background conditions may also include constraints related to the pronoun’s person, gender, and number. In this chapter I will not discuss the semantic constraints imposed !118 by a pronoun’s phi-features, which are governed by a rather complicated set of rules . Also if we like, BC is easily modified to allow plural UAPs to encode quantified background conditions involving plural (first-order) quantifiers ∃xx and ∀xx. If the antecedent of an occurrence of plural UAP is a plural existential quantifier ‘[∃xx : Fxx]’ that occurs in an antecedent clause ‘[∃xx : Fxx](Gxx)’ then the background condition encoded by (tokens of) the individual concept c expressed by p may be the plural quantifier [∃xx c : Fxx c & Gxx c ]. For example, in ‘Ralph has some donkeys. He gathers them in the yard’, ‘them’ would express an individual concept d which encodes the background conditions [∃xx d : xx d are donkeys owned by Ralph]. Similar addenda to BC can be made when the antecedent is a non-existential plural quantifier. However, plural pronominal anaphora raise a host of difficult issues which would require us to take a large detour from the present project. 2 Another issue raised by BC concerns uniqueness constraints. 
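The constraints just placed on the dependency relation can also be checked mechanically on any finite stock of individual concepts. Here is a small sketch; the encoding of (10)’s concepts f and s is entirely my own, and on a finite domain the well-foundedness of the converse relation comes for free.

```python
from itertools import product

# Illustrative encoding of the two concepts in (10): s (the sheep concept)
# depends on f (the farmer concept), i.e. s < f.
concepts = {"f", "s"}
depends_on = {("s", "f")}

def strict_partial_order(domain, rel):
    """Check irreflexivity, asymmetry, and transitivity of rel on domain."""
    irreflexive = all((a, a) not in rel for a in domain)
    asymmetric = all((b, a) not in rel for (a, b) in rel)
    transitive = all((a, d) in rel
                     for (a, b), (c, d) in product(rel, rel) if b == c)
    return irreflexive and asymmetric and transitive

assert strict_partial_order(concepts, depends_on)

# Background conditions, with the dependence reflected by the free 'x_f'
# occurring inside the condition encoded by s:
bc_f = "[∃x_f: farmer(x_f) & [∃y: sheep(y)](x_f owns y)]"
bc_s = "[∃y_s: sheep(y_s) & x_f owns y_s]"
```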
Evans had argued that in examples like (11), pronouns do have uniqueness constraints, which is what differentiates them from their Geachean existential paraphrases, as in (12): (11) Ralph has some donkeys. He vaccinates them. (12) Ralph has some donkeys that he vaccinates. The analogue of the uniqueness constraint for plurals is the maximality constraint, which requires that the property expressed by the predicate in the sentence applies to all the objects denoted by the NP (however this is to be construed). In (11), this amounts to the requirement Schein (1993, ch. 4; 2005, f.n. 58) argues that examples like (i) describe a complex situation that can be 2 captured by no logical form involving some combination, linear or branching, of the singular or plural quantifiers ‘three ATMs’, ‘two new clients’, ‘exactly two new passwords’, and ‘two slips of paper’: (i) Three ATMs gave two new clients each exactly two passwords each on two slips of paper. Cf. McKay (2006) and Schein (2005, f.n. 8) for further discussion. !119 that Ralph vaccinate all his donkeys. But if, as we assume, ‘some donkeys’ is represented at LF by an existential quantifier, BC predicts pace Evans that (11) merely requires that Ralph vaccinate some of his donkeys. However, we can address Evans’ objection in the same way that we did for the alleged uniqueness constraint in singular UAPs. Note that (11) can be followed felicitously by (13), explicitly denies any maximality constraint: (13) Ralph has some donkeys. He vaccinates them. He has some others at the farm, but he doesn’t vaccinate those. Moreover, the heard difference between (11) and (12) admits a pragmatic explanation. In (12) the use of ‘some’ generates a scalar implicature that Ralph doesn’t vaccinate all of his donkeys. In contrast, the first sentence of (11) has no such implicature. (If any scalar implicature is generated at all, it is the trivial one that Ralph does not own all of the world’s donkeys—but we already knew that.) Unless some further material is provided which explicitly mentions Ralph’s unvaccinated donkeys, as in (13), the hearer is invited to assume by the speaker’s omission that the donkeys mentioned in (11) (the vaccinated ones) constitute all of Ralph’s donkeys. Hence (11) invites the speaker to infer that Ralph vaccinates all of his donkeys. But this inference isn’t part of the pronoun’s background conditions, as determined by BC. It is important to reiterate that the BC is a highly defeasible rule. One way in which it is defeasible concerns the quantificational force it assigns to background conditions. In the previous chapter we saw that in quantified donkey sentences of the form ‘Q-many farmers who own a donkey vaccinate it’, whether the pronoun is universal or existential depends in part on the monotonicity properties of the head determiner Q (universal quantificational force defeasibly correlates with the head determiner Q being ↓MON↑ or ↑MON↓, while existential quantificational force more reliably correlates with the head determiner Q being ↓MON↓ or !120 ↑MON↑). However this correlation with monotonicity properties is also highly defeasible: worldly knowledge also plays a role in determining the background conditions of UAPs. The fact that practically no one wears multiple suits at a time militates in favor of a weak reading of the pronoun in ‘Every man who has a suit wears it to work’. 
But the fact that no analogous convention prevents one from vaccinating all one’s donkeys allows for a strong reading of ‘Every farmer who has a donkey vaccinates it’. I refer the reader to the discussion in the previous chapter, and the references made therein, for explanations of these contextual variations. Another way that BC is defeasible concerns the amount of descriptive information encoded in a pronoun’s background conditions. This is also highly context sensitive. Consider the following: (14) Few MPs came to the party. They had a good time. (15) Few MPs attend parties. They prefer to stay at home. (16) All presidential candidates registered with the FEC. They have started campaigning in earnest. In (14) the intuitive value range for the pronoun ‘they’ consists of MPs who came to the party; the background conditions encoded are [∀x p : x p is an MP who came to the party]. In contrast, in (15) the value range for ‘they’ is intuitively not just MPs that attend parties as BC predicts, but all MPs. Similarly, in (16) ‘they’ seems to pick out all of the presidential candidates rather than just those that registered with the FEC. This can be brought out by supposing that the first sentence is false: some presidential candidates have not registered. In such a case, the second sentence still seems to require of all the presidential candidates that they have started campaigning in earnest (it’s not enough for just those who registered to have begun !121 campaigning). But this is not what BC predicts, since it asks us to include the predicate ‘registered with the FEC’ in the pronoun’s background conditions. But then on the truth-theory we will give below (16) can be true in a case where not all candidates have started campaigning, as long as those who have registered have. These intuitions are subtle, and it is not entirely clear to me what explains these differences. Plausibly, the monotonicity properties of the antecedent’s quantificational determiner, the lexical semantics of the antecedent’s N-bar constituent and of the predicate it attaches to, as well as background knowledge about the subject matter all interact to determine the background conditions of a UAP-occurrence. But it is not clear to me at the present time what the relevant generalizations are, and how the calculus of these defeasible rules should be structured in order to make the right predictions. In the literature, we have seen a good deal of variation on this issue, but very little argumentation for these variations. Evans takes a UAP to rigidly refer to the unique satisfier of the smallest well-formed clause containing its antecedent (quantificational restrictor and predicate information included). Neale takes a pronoun whose antecedent is in a clause of the form ‘[Qx: Fx]Gx’ to go proxy for a definite description of the form ‘[the x: Fx]Gx’. However, King assumes that the descriptive material ‘Gx’ in the predicate predicated of a pronoun’s antecedent is incorporated into the pronoun’s descriptive content only when the antecedent quantificational determiner is not universal (when the determiner is universal only the material Fx in the antecedent’s restrictor is retained). And Elbourne takes the easy route and simply gives no theory which predicts what NP material is deleted, although sometimes he indicates that it is a very minimal (incomplete) description. Clearly more work needs to be done in this area. 
!122 A closely related difficulty for BC concerns whether the background conditions for UAPs with existential antecedents is cumulative. Recall that in DRT, as in King’s CDQ view, UAPs with descriptive antecedents accumulate any descriptive material predicated of prior UAPs in the same anaphoric chain (i.e. that are anaphoric on the same antecedent), so that as the discourse progresses, more and more descriptive material must be jointly satisfied in order for a sentence or the discourse containing a pronoun in the chain to be true. In the present framework, such a view amounts to the suggestion that UAPs with an existential antecedent accumulate predicative information predicated of prior pronouns in the same anaphoric chain in their background conditions. For example, in the following extension of (9), (9+) Ralph owns a donkey. He vaccinates it. He beats it. the idea is that the pronoun ‘it’ in the second sentence encodes the background condition [∃y d : y d is a donkey owned by Ralph] while the pronoun in the third sentence encodes the background condition [∃y d : y d is a donkey owned and vaccinated by Ralph]. This is not what BC says to do. Although we have not yet shown how to construct the two-component propositions expressed by sentences containing UAPs, or specified how to assign truth conditions to such propositions, for now we can foretell that it is a consequence of the cumulative proposal that the second sentence of (9+) comes out true iff Ralph vaccinates a donkey he owns. And the third sentence comes out true iff Ralph beats a donkey he owns and vaccinates. If Ralph has two donkeys, one which he vaccinates but does not beat and another which he beats but does not vaccinate, then this cumulative proposal predicts that the second sentence ‘He vaccinates it’ is true, while the third sentence ‘He beats it’ is false. Conversely, if the second and third sentences had been uttered in the reverse order, ‘He beats it’ (as uttered second) would be true in that !123 same scenario, while ‘He vaccinates it’ (as uttered last) would be false. If the descriptive information were not cumulative in (9+), then it could be that there is a donkey Ralph beats, and a different donkey that Ralph vaccinates (assuming the pronouns’ quantificational force is existential). But this is most likely not what is intended: the speaker purports to be referring to the same donkey from one sentence to the next. But if the quantificational force of the pronoun is existential, the only way to ensure that the third sentence of (9+) entails that there is a vaccinated, beaten donkey is to incorporate vaccination into the last pronoun’s background condition. Although I find these considerations to be compelling in certain cases, they also raise problems in others. If in the course of conversation, one uses an anaphoric pronoun with an existential antecedent to make a false claim, or one that is otherwise not jointly satisfiable with the pronoun’s background conditions, this cumulative proposal predicts that no future utterances using that pronoun can be true. Although this prediction may be desirable in certain cases, in many discourses truth is more resilient than this. For starters, we can often deny claims that our interlocutor has made without the conversation crashing. If I know that Ralph never beats his donkeys, I might assert ‘No, Ralph doesn’t beat it’ in response to (9+). This should not be enough to prevent my interlocutor from going on to make further true claims with ‘it’. 
Note that in many such “pronominal contradiction” cases, of which Strawson’s (1952) case (17) is classic, the denier has singular grounds for her denial, allowing for the possibility that the use of her pronoun is simply deictic: (17) A: A man fell in front of the train. B: He didn’t fall, he was pushed. In contrast, in the cases we are interested in, the denier does not have singular grounds for her assertion and her use of the pronoun does not refer. Suppose the denier of (9+) makes her denial !124 simply on general grounds: Ralph isn’t that kind of donkey-owner. In such a case, I maintain, the denier is merely making as if she refers, but her use of the unbound anaphoric pronoun is not genuinely referential. There are a number of different responses one could make to such cases that have been explored in the literature. This is not the place to venture a positive account of what is going on in such non-referring pronominal contradiction cases (the multitude of kinds of case makes it likely that no single semantic account will cover all of them). But for our purposes all that matters is that if pronouns with existential antecedents accumulate descriptive information in their background conditions, pronominal contradiction cases will end up crashing the conversation in the sense of preventing all future assertions with the pronoun from being true. Since many conversations can survive pronominal contradiction both in the intelligibility and truth of its sentences, this is an undesirable prediction for the cumulative proposal. In response, the cumulativist about background conditions may suggest that background conditions are merely defeasibly cumulative, and that conversational participants are also generous accommodaters. If a speaker’s assertion or denial in a pronominal contradiction case would be obviously unacceptable, uninterpretable, or untrue on the default interpretation predicted by the cumulative proposal, and if that assertion or denial would be acceptable, interpretable, and true if the offending information were removed, then ceteris paribus and within certain limits, the offending information is thereby removed from the relevant pronoun’s background conditions. Even if a version of this accommodationist response can be shown to work for explicit denial cases, it is not obvious to me that a similar accommodationist story will save the cumulative proposal in subtler cases. In some discourses containing chains of UAPs with !125 existential antecedents there will be many claims made. Often we may make mistakes and assert falsehoods or otherwise predicate information of a pronoun which is not jointly satisfiable with the other descriptive information predicated of prior pronouns in the chain. Many of these errors will go unnoticed. But as it is typically described, accommodation involves some recognition on the part of the hearer that something is required of her in order to interpret the speaker’s utterance in a way that it would not otherwise be interpreted. For example, Thomason (1990) characterizes accommodation as the principle: “Adjust the conversational record to eliminate obstacles to the detected plans of your interlocutor” (p. 344, my emphasis). But this suggests that accommodators have the capacity not only to recognize the speaker’s plans, goals, and intentions with respect to the conversation, but also to adjust the conversational record when necessary to meet those goals. 
In contrast, when the speaker casually and unknowingly predicates information not jointly satisfiable with the other descriptive information predicated of prior pronouns in the chain, there is often no inkling that any conversational goal, intention, or principle has been flouted. In contrast, in pronominal contradiction cases, the inability of the pronoun to satisfy the descriptive material initially predicated of it does register in the phenomenology of the exchange. One gets the sense that the speaker has flouted some conversational principle, and that the hearer is expected to recognize and adjust her interpretation of the utterance accordingly. Indeed, I take it that the prosodic focus that often accompanies pronominal contradiction cases is meant to help the hearer recognize that something is required of her in order to properly (re)interpret the speaker’s assertion. In sum, if an accommodationist story is to be given which would explain how we may still assert truths with UAPs after our conversation is littered with unnoticed errors and !126 inconsistencies, it will need to appeal to an account of accommodation that differs significantly from how that process is typically described. These considerations are by no means conclusive, and the matter is made difficult by the fact that intuitions about truth conditions in these cases are quite delicate. But rather than build into BC some cumulative procedure for UAPs with existential antecedents, I prefer a more minimalist approach. BC is default, but it is very defeasible. As a discourse progresses, we may add or subtract information encoded in background conditions as various contextual and lexical parameters require. 1.2 Truth Theory In this section, we define truth for two-component propositions containing quantified background conditions. We begin by first familiarizing ourselves with a standard truth theory for ordinary Russellian structured propositions of the following sort. In the following we will be primarily interested in the definition of truth for structured propositions, since the part that assigns propositions to formulas will be revised once those formulas include UAPs. But the latter part is included to help contextualize the truth theory. Where ‘R’ is an n-place relation and ‘t 1 ’, ... , ‘t n ’ are terms, the proposition expressed by an atomic formula of the form ‘Rt 1 ...t n ’ relative to an assignment A is (18), where R* is the n- place property expressed by R relative to A and for all i, o i is the referent of t i relative to A: (18) <<o 1 , ... , o n >, R*> (18) is true at a world w iff <o 1 , ... , o n > is in the extension of R*, at w. Where ‘P’ and ‘Q’ are formulas, the proposition expressed by a negation ‘~P’ relative to an assignment A is given by (19), where NEG is the property of being not true and P* is the proposition expressed by P on A. The proposition expressed by a conjunction ‘P & Q’ relative to !127 A is given by (20), where CONJ is the relation of being jointly true and P* and Q* are the propositions expressed by P and Q on A, respectively: (19) <NEG, <P*>> (20) <CONJ, <P*, Q*>> (19) is true at a world w iff P* is in the extension of NEG (is not true) at w, and (20) is true at w iff <P*, Q*> are in the extension of CONJ (are jointly true) at w. Other truth-functional connectives are treated analogously. 
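The clauses for atomic, negated, and conjunctive propositions translate almost verbatim into code. The following Python sketch is only meant to mirror (18) through (20) and their truth conditions; the tuple encoding and the toy LOVES property are my own choices, not part of the official theory.

```python
# Minimal sketch: propositions as nested tuples, properties as functions from
# worlds to extensions.  The model facts ('w1', LOVES) are illustrative only.

LOVES = lambda w: {("Ralph", "Opal")} if w == "w1" else set()

def atomic(objects, relation):          # (18)  <<o1, ..., on>, R*>
    return ("ATOM", objects, relation)

def neg(p):                             # (19)  <NEG, <P*>>
    return ("NEG", p)

def conj(p, q):                         # (20)  <CONJ, <P*, Q*>>
    return ("CONJ", p, q)

def true_at(prop, w):
    tag = prop[0]
    if tag == "ATOM":
        _, objects, relation = prop
        return objects in relation(w)   # <o1,...,on> in the extension of R* at w
    if tag == "NEG":
        return not true_at(prop[1], w)
    if tag == "CONJ":
        return true_at(prop[1], w) and true_at(prop[2], w)
    raise ValueError(tag)

p = atomic(("Ralph", "Opal"), LOVES)
print(true_at(p, "w1"), true_at(neg(p), "w2"), true_at(conj(p, p), "w1"))
# True True True
```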
For any propositional function f, let the extension |f| of f be the set of individuals that f maps to true propositions, and let EVERY be the function from propositional functions f to the set of all propositional functions g such that |f| ⊆ |g|. Where ‘F’ and ‘G’ are formulas that contain free variable ‘v’, the proposition expressed by the quantified formula ‘[∀v: F]G’ relative to an assignment A is (21), where F* is the propositional function from objects x to propositions expressed by ‘F’ relative to an assignment A’ that differs at most from A in assigning x as the value of ‘v’, and G* is defined analogously: (21) <<EVERY, F*>, G*> (21) is true at w iff G* is in EVERY(F*) at w, i.e., iff |F*| ⊆ |G*|. The existential determiner ∃ is treated analogously: it expresses a proposition containing SOME, the function from propositional functions f to sets of propositional functions g such that |f| ∩ |g| ≠ ∅. Other type <1,1> determiners can be defined analogously using the familiar set-theoretic machinery from generalized quantifier theory. The modal formula ‘ ◻P’ expresses the proposition (22) relative to an assignment A, where P* is the proposition expressed by ‘P’ relative to A and NEC is the property of being !128 necessarily true. The attitude report ‘t believes that P’ expresses the proposition (23) relative to A, where o and P* are as before and Bel is the believes relation: (22) <NEC, <P*>> (23) <Bel, <o, P*>> (22) is true at a world w iff P* instantiates NEC (is true at all accessible worlds) at w. (23) is true at a world w iff <o, P*> instantiate Bel at w, that is, if o believes P* at w. Other modal operators and attitude verbs are treated analogously. We can now extend this basic truth theory for ordinary Russellian structured propositions to include two-component propositions. For any two component proposition p, let p’s closure [p] be the set of individual concepts in the structured complex SC p of p plus any other individual concepts that those individual concepts in SC p are dependent on. The background conditions component ∆ p of a two-component proposition p will no longer be a conjunction of open formulas, but an ordered set of the quantified background conditions encoded by the individual concepts in p’s closure. The order on ∆ p is suggestively called “scope”, and if one quantified background condition Q 1 is ordered before another Q 2 then Q 1 is said to “take scope over” Q 2 . This ordered set of quantified background conditions may be represented by a quantifier prefix according to the familiar convention that a quantifier Q 1 appears to the left of a quantifier Q 2 in the prefix iff Q 1 takes scope over Q 2 . Now, if the scope order of the quantified background conditions in ∆ p were isomorphic to the dependency relation between the corresponding individual concepts in [p], then the background conditions would also be a partial order, which is to say that the resulting quantifier prefix would be possibly non-linear. This constitutes a departure from classical logic, where quantifier prefixes always have a linear (total) order. Henkin (1959) discovered that one !129 could define partially-ordered (branching) quantifier structures such as (24) in terms of second- order quantification over Skolem functions. 
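For readers unfamiliar with Henkin’s result, the second-order paraphrase it turns on can be displayed explicitly. The rendering below is mine, with f and g ranging over Skolem functions.

```latex
\begin{pmatrix} \forall x_1\, \exists y_1 \\ \forall x_2\, \exists y_2 \end{pmatrix}
\varphi(x_1, x_2, y_1, y_2)
\;\;\Longleftrightarrow\;\;
\exists f\, \exists g\, \forall x_1\, \forall x_2\,
\varphi\bigl(x_1, x_2, f(x_1), g(x_2)\bigr)
```

The crucial feature is that f depends only on x1 and g only on x2; any linearization of the four first-order quantifiers would introduce a dependency of one existential on both universals.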
And Hintikka (1973) argued that branching quantificational structures were necessary to correctly analyze certain natural language sentences like (25) and (26), which if analyzed using only linearly-ordered quantifiers would create extra dependencies unintended by their English counterparts: (24) ∀x 1 ∃y 1 φ(x 1 , x 2 , y 1 , y 2 ) ∀x 2 ∃y 2 (25) Some relative of each villager and some relative of each townsman hate each other. (26) Some book by every author is referred to in some essay by every critic. These and similar examples give credence to the idea that some natural language quantified expressions may have a branching rather than linear structure. Hence branching quantification has a legitimate interest and use independent of the issues discussed in this dissertation. Although the present framework could allow for branching structures in its quantified background conditions, I will not be presenting such a view in this dissertation. First, branching quantification is a complex and difficult issue which is largely orthogonal to the issues discussed in this dissertation. Presenting a framework of two-component structured propositions with quantified background conditions is complicated enough; adding branching quantification would mostly distract. Secondly, although it is clear what it means syntactically for quantifiers to be non-linearly ordered, there is presently no consensus about how to devise a semantics for branching structures involving generalized quantifiers of arbitrary structural complexity. Adding branching quantification to the present framework would unnecessarily !130 embroil the present framework in unsettled issues of active research. More importantly, it would impede our ability to give a general truth theory for two-component propositions. 3 In this chapter, I sidestep branching quantification in the following way. Let a linearization of a partially ordered set (S, <) be a total ordering of the same set (S, <*) such that if a and b are elements of S such that a < b, then a <* b. The background conditions ∆ p of a two- component proposition p will now be given by the linearization of the quantified background conditions encoded by the individual concepts in p’s closure, where the scope order between quantified background conditions is isomorphic to the dependency relation between corresponding individual concepts. A two-component proposition p whose closure contains 4 individual concepts {c 1 ,...,c n } will hence be of the form: (27) < p* : [Q 1 x c1 : φ 1 x c1 ]...[Q n x cn : φ n x cn ] > where for all i, Q i ∈ {∀,∃}. As we shall see in §2.2, the structured complex p* can itself be a two- component proposition (i.e., two-component propositions can be nested inside one another). Roughly put, (27) is true iff: (i) the restrictor conditions φ 1 x c1 & … & φ n x cn are jointly satisfied (the joint value range is non-empty), and (ii) the formula that results from treating the background conditions as taking wide scope over the interpretation of the structured complex component SC p is true. Cf. Barwise (1979), Sher (1990), Sher (1997) for attempts to provide semantical definitions of branching 3 structures involving generalized quantifiers. Of course, there will not always be a unique linearization of the background conditions. Since 4 propositions are individuated by background conditions, this has the consequence that there will not be a unique proposition expressed. 
It is immaterial to me if we retain a branching quantifier structure in the proposition expressed, and instead interpret it in the truth theory as if it had already been linearized. However, in the interest of a simpler presentation I will write out linearized background conditions in two-component propositions, as if there were always a unique one. !131 More precisely, although more tediously, we can state the truth conditions as follows. First, let’s say that if {c 1 ,...,c n } are individual concepts with associated subscripted variables x c1 , …, x c2 , and if f is a propositional function, then f[o 1 /c 1 , …, o n /c n ] is the propositional function differing at most from f in replacing, for each i and for any argument x of f, (i) any occurrences of c i in the structured complex of f(x) with o i , (ii) any occurrences of x c1 in the background conditions of f(x) with o i , and (iii) any propositional functions g in f(x) with g[o 1 /c 1 , …, o n /c n ]. 5 Second, if p is a proposition, two-component or otherwise, then the substitution p(o 1 /c 1 , …, o n / c n ) is the result of substituting, for each i, (i) any occurrences of c i in p’s structured complex with o i , (ii) any occurrences of x ci in p’s background conditions with o i , and (iii) any propositional functions f in p with f[o 1 /c 1 , …, o cn /c n ]. Now, we can say that (27) is true iff: (i) the extension of φ 1 x c1 & … & φ n x cn is non-empty, and (ii) for q 1 -many o 1 ’s that satisfy φ 1 (x c1 ), for each of which for q 2 -many o 2 ’s that satisfy φ 2 (ox c2 ), ...etc…., for each of which for q n -many o n ’s that satisfy φ n (x cn ), p*(o 1 /c 1 , …, o n /c n ) is true, where q 1 , …, q n are the quantifier conditions (some or every) associated with the quantifiers Q 1 , ..., Q n . Similarly, (27) is false iff: (i) the extension of φ 1 x c1 & … & φ n x cn is non-empty, and (ii) for q 1 -many o 1 ’s that satisfy φ 1 (x c1 ) ...etc.... <NEG, p*(o 1 /c 1 , …, o n / c n )>> is true. As in chapter 1, there is a truth-value gap when the background conditions are not satisfied. This machinery seems quite complicated, but all that it really says is that all background conditions act like wide scope quantifiers over the interpretation of the structured complex component. To illustrate, consider (9) again: (9) Ralph owns a donkey. He vaccinates it. This last recursive step will be required once we allow that two-component propositions may be nested 5 inside the structured complex of another structured proposition (see §2.2 below). !132 Let us stipulate that the second sentence of (9) expresses the proposition (28) (abstracting away from details): (28) < Ralph vaccinates d : [∃x d : x d is a donkey owned by Ralph] > (28) is true iff there is a donkey owned by Ralph that he vaccinates (the first condition (i) becomes redundant); false iff there is a donkey owned by Ralph that he doesn’t vaccinate; and truth-valueless iff Ralph doesn’t have a donkey. Before we look at how to assign two-component propositions to sentences containing UAPs it will be worth taking a step back to say a little more about where these truth conditions come from, and how this picture fits in with the notion of individual concepts introduced in chapter 1. 1.3 Quantified Background Conditions and Mediate Predication In chapter 1, it was argued that individual concepts are the cognitive vehicles by means of which agents refer to individuals. 
The existence of individual concepts is independent of the existence of their referents: we can token individual concepts which function to refer, but which do not. Since individual concepts, but not their referents, serve to individuate our cognitive perspectives, tokening an individual concept which does not refer has the same cognitive significance for the subject as if it had. For the subject, it is as if he is referring to, and predicating properties of, a particular individual. Individual concepts are also the cognitive means by which we predicate properties of individuals. But of course, in predicating a property of an individual concept’s referent, we don’t predicate the property of the individual concept itself. Instead, we need some new name for the cognitive act performed on an individual concept when we predicate a property of its referent. In chapter 1 I called this cognitive act mediate predication, since it is an operation on the !133 cognitive structures or representational intermediaries by means of which one predicates properties of objects. To mediately predicate a property F of an individual concept m is thereby to (directly) predicate F of the object or objects m designates. Hence, when one mediately predicates a property F of an individual concept m which successfully refers to an object o, one thereby predicates F of o, and so represents the world veridically iff o is indeed F. If one mediately predicates F of an individual concept m which fails to refer, one’s thought fails to represent something as being a certain way in the sense that one has failed to predicate F of something. Hence the thought is neither true nor false. Unlike the individual concepts which function to refer but do not (cases of malfunction), the individual concepts in instantial reasoning have taken on new “derived” functions. Our cognitive system has reappropriated these cognitive vehicles for new semantic uses. The semantic function of individual concepts in instantial reasoning is to designate the objects which satisfy the background conditions they encode. To mediately predicate a property F of an individual concept m which designates a value range of objects is to predicate F of (all of) the objects in m’s value range. Hence, when one mediately predicates F of m, one thereby predicates F of all of the objects in m’s value range, and so represents the world veridically iff all of those objects are F. It should be clear that this picture can no longer apply as-is once individual concepts encode quantified background conditions. If an individual concept p encodes the universal background conditions [∀x p : x p is a degree candidate who walked to the stage], as in (2), the value range of p will be the set of degree candidates who walked to the stage. This makes sense as the objects ‘he’ in (2) intuitively represents. To mediately predicate the property expressed by ‘took his diploma from the Dean’ of the individuals in this value range would be to predicate !134 that property of all the degree candidates who walked to the stage. If there are no such degree candidates, one has predicated the property of nothing. Hence the thought expressed is true iff there are such degree candidates (clause (i) above), and all of them took their diplomas from the dean (clause (ii) above). But these are the truth conditions the formal semantics above already predicts. So far, so good. 
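The truth conditions just rehearsed for the degree-candidate case, and those given earlier for (28), can be prototyped for the special case of a single, independent background condition. The sketch below is an executable gloss on clauses (i) and (ii), not the official theory: the model and names are mine, and a return value of False here means only “not true” (the separate falsity clause is not modeled).

```python
# Rough prototype of clauses (i) and (ii) for propositions of the form (27),
# restricted to one independent background condition.  Model facts are mine.

DOMAIN = {"c1", "c2", "d1", "d2"}
WALKED = {"c1", "c2"}                    # degree candidates who walked to the stage
TOOK_DIPLOMA = {"c1", "c2"}
DONKEYS_OF_RALPH = {"d1", "d2"}
VACCINATED_BY_RALPH = {"d1"}

def true_2c(force, restrictor, structured_complex):
    """<SC : [Qx_c: restrictor]>, Q in {'every', 'some'}.
    (i) gap if nothing satisfies the restrictor;
    (ii) otherwise the background condition scopes over the structured complex."""
    value_range = {o for o in DOMAIN if restrictor(o)}
    if not value_range:
        return None                      # truth-value gap
    q = all if force == "every" else any
    return q(structured_complex(o) for o in value_range)

# (2)  'Each degree candidate walked to the stage. He took his diploma ...'
print(true_2c("every", lambda o: o in WALKED, lambda o: o in TOOK_DIPLOMA))   # True
# (9)/(28)  'Ralph owns a donkey. He vaccinates it.'
print(true_2c("some", lambda o: o in DONKEYS_OF_RALPH,
              lambda o: o in VACCINATED_BY_RALPH))                            # True
# Gap case: nothing satisfies the background condition
print(true_2c("some", lambda o: False, lambda o: o in VACCINATED_BY_RALPH))   # None
```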
The trouble comes in extending this picture to individual concepts with existential background conditions: (9) Ralph owns a donkey. He vaccinates it. (28) < Ralph vaccinates d : [∃x d : x d is a donkey owned by Ralph]> The pronoun ‘it’, I’ve argued, is acting like a kind of instantial term which purports to refer to one of Ralph’s donkeys. But the pronoun needn’t refer to any donkey of Ralph’s. On the other hand there is a sense in which ‘it’ may be thought of representing all of the objects of which it purports to refer to one, that is, all of the objects in its value range. In this sense, ‘it’ may be thought of representing all of Ralph’s donkeys. In this way we can retain the view that to mediately predicate the property being vaccinated by Ralph of the individual concept d would be to predicate that property of d’s value range. This would help make sense of the value range condition (i) in the above semantics. For if Ralph has no donkeys, in mediately predicating a property of d, one would fail to predicate that property of anything. Hence our thought would be neither true nor false. But what if Ralph does have donkeys? In mediately predicating the property being vaccinated by Ralph of the individual concept expressed by ‘it’, I do not predicate that property of any of Ralph’s donkeys, let alone all of them. But then what property do we predicate, and of what do we predicate it, when we mediately predicate vaccination of an individual concept !135 encoding the existential background conditions [∃x d : x d is a donkey owned by Ralph]? And how does this deliver the results of the truth-theory explained above? Scott Soames has recommended the following solution to the foregoing problem (personal communication). Suppose we think of the act involved in thinking He vaccinates it as an act of indiscriminate predication of the property being vaccinated by Harry of “any and all” objects in the extension of the individual concept. To do so is not to think about each such object and predicate the property of it. Rather, one is “indifferent about the predication target, so long as it is [in the extension of the concept]” (p.c.). When we perform such an act of indiscriminate predication, the property that we thereby directly predicate is determined by the quantified background conditions and the property that we indiscriminately predicate. So in indiscriminately predicating vaccination of Ralph and “it”, we thereby predicate the property being such that one of Ralph’s donkeys is vaccinated. The property is parasitic on the truth conditions of the quantified truth conditions, but we get the truth conditions we want and plausibly we need to posit such properties anyway. I welcome this suggestion, but would construe it in the following way. Suppose rather than taking the predication target to be some member of the value range we care not which, we instead take the objects in value range to be the predication targets (plural). To predicate a (monadic) property of some predication targets is not to predicate that property of a singular object which has those targets as its constituents or parts (a plural object, set, mereological sum, what have you); rather, it is to predicate a property which is jointly satisfied by the targets. Plausibly, we need such a conception of predication in order to make sense of non-distributive predications of plurals. The property expressed by ‘x carried a table’ is a monadic property 6 Cf. McKay (2006) for a detailed defense of (first-order) plural predication. 
6 !136 which can be satisfied either by a single individual as in ‘Opal carried a table’, or by several individuals jointly as in ‘Opal, Ralph, and Jose carried a table’. In the latter case, we can predicate this property of Opal, Ralph, and Jose distributively so that by doing so we thereby also predicate the property of Opal, and of Ralph, and of Jose. But we can also predicate the property of those three individuals non-distributively in which case our doing so does not thereby predicate the property of each individual among them: what we are claiming is that they individuals carried the table together. What property is (directly) predicated of the objects in the value range? I suggest it is the property expressed by the lambda abstract λxx.[∃x : x is among xx](ψx), where the plural variable xx ranges over pluralities. For example, in (9) we mediately predicate the property being vaccinated of Ralph of an individual concept d which encodes the background conditions [∃x d : x d is a donkey owned by Ralph]. In doing so, we thereby (directly) predicate of Ralph’s donkeys the property being such that one of them is vaccinated by Ralph. On this way of thinking of things, the agent who entertains the proposition expressed by the second sentence of (9) performs a cognitive act not unlike that performed when one asserts of Ralph’s donkeys “one of them is vaccinated by Ralph” (but of course, he does this by performing a different cognitive operation on the individual concept whose value range is Ralph’s donkeys). We still get clause (i) in the above truth conditions since if Ralph doesn’t have any donkeys we will not predicate the property of anything. And if Ralph has donkeys, the property guarantees the truth conditions desired for clause (ii). Extending this picture to predications involving dependent individual concepts requires further complications. Consider a sentence ‘R(t 1 ,...t n )’ which expresses a two-component proposition p of the form, !137 < <<c 1 , ..., c n >, R*> : [Q 1 x b1 : φ 1 x b1 ]...[Q m x bm : φ m x bm ] > where R* is the property expressed by the n-place predicate ‘R’, c 1 is the concept expressed by ‘t 1 ’, and so on, and [Q 1 x b1 : φ 1 x b1 ]...[Q m x bm : φ m x bm ] are the background conditions encoded by the linearization of p’s closure <b 1 ,...,b m > (for some m ≥ n). Let ‘R(t i /x bi )’ be the formula that results from replacing, for each i, every occurrence of t i in ‘R(t 1 ,...t n )’ with its corresponding variable among {x b1 , …, x bm } (the numberings needn’t match up). In mediately predicating R* of <a 1 , ..., a n >, the predication targets are the ordered-pairs xx satisfying is the extension of the conjunction of the background conditions’ restrictors φ 1 x b1 & … & φ n x bm , and the property predicated of them is the property expressed by the following lambda abstract: λxx.[Q 1 x b1 : ∃y 2 …∃y bm <x b1 , y 2 , …, y bm > among xx] [Q 1 x b2 : ∃y 3 …∃y bm <x b1 , x b2 , …, y bm > among xx] … [Q 1 x bm : <x b1 , x b2 , …, x bm > among xx](R(t i /x bi )) For example, consider the discourse (29) the second sentence of which, we stipulate, expresses the proposition (29’): 7 (29) Every farmer has a suit. He wears it to church on Sundays. 
(29’) << f wears s> : [∀x f : x f is a farmer who has a suit][∃y s : y s is a suit of x f ] > In mediately predicating wears of the concepts f and s, we thereby predicate of the pairs <x, y> of farmers x and suits y owned by x the property being such that for all x for which there is a y such that <x, y> is among them, there is a z such that <x, z> is among them, such that x wears z. Hence (29’) and the second sentence of (29) are true iff every farmer who has a suit is such that he wears some suit of his to church on Sundays. Here and elsewhere I abstract away from the internal structure of the structured complex component of 7 the proposition. !138 Note however that the representational significance of mediately predicating properties of individual concepts is a function not only of the property and the background condition encoded by the concept, but also depends on the relative scope of that background condition with respect to other operators. This will be addressed in section 2.2. 2. ASSIGNING SENTENCES TWO-COMPONENT PROPOSITIONS We are now ready to assign two-component propositions to sentences containing UAPs. It will be helpful to use Greek letters α, β, γ, etc. for UAPs in order to distinguish them from other terms. In what follows it will also be instructive to first present a simplified picture and then explain what modifications of this picture are required in order for it to fully generalize. 2.1 Atomic Formulas To start, let ‘R(t 1 , …, t n )’ be an atomic formula that expresses a (Russellian) proposition SC p relative to an assignment A, and for all i, let o i be the value of term t i relative to A. Furthermore, let ‘R(α 1 /t 1 , …, α n /t n )’ be the formula that results from replacing, for all i, all occurrences of t i with a UAP α i in ‘R(t 1 , …, t n )’, and suppose the UAPs α 1 , ..., α n express concepts c 1 , …, c n (respectively). The structured complex component of the two-component proposition p expressed by ‘R(α 1 /t 1 , …, α n /t n )’ relative to an assignment A is given by (30): (30) SC p (o 1 /c 1 , …, o n /c n ) In a word, the structured complex component of the two-component proposition expressed by a an atomic formula containing a UAP is isomorphic to that of the proposition expressed by a sentence in which the UAP has been replaced by a term, as given by the foregoing Russellian theory. !139 The more difficult part of the theory is to specify the proposition’s background conditions. In easy cases, none of the UAP’s antecedents are in the scope of a higher operator. In such cases the proposition’s background conditions are simply the linearization of the background conditions encoded by the individual concepts in the structured complex’s closure, as determined by BC. As we have seen, (9) expresses the proposition (28), while (29) expresses the proposition (29’): (9) Ralph owns a donkey. He vaccinates it. (28) < Ralph vaccinates d : [∃x d : x d is a donkey owned by Ralph]> (29) Every farmer has a suit. He wears it to church on Sundays. (29’) << f wears s> : [∀x f : x f is a farmer who has a suit][∃y s : y s is a suit of x f ] > In these cases d and f are independent individual concepts, and s is an individual concept dependent on f. In the structured complexes in (28) and (29’), none of the individual concepts therein are dependent on individual concepts not in the structured complex itself. But this needn’t always be the case. Consider (31): (31) Ralph 1 has a neighbor who has a donkey. He 1 helps vaccinate it. 
Suppose ‘a donkey’ takes scope under ‘a neighbor’. In that case, the background conditions for ‘it’ will contain a variable bound by the existential quantifier ‘a neighbor’. Hence, if the background conditions for the proposition expressed by second sentence contain only the background conditions for ‘it’, there will be a free variable in the background conditions, and our truth theory will not deliver a truth-value. In such cases, the individual concept for the donkey is dependent on a covert (unexpressed) individual concept for the neighbor. The individual concept n for the neighbor !140 encodes the quantified background condition it would encode if it had been expressed, as determined by BC: it is [∃x n : x n is a neighbor of Ralph’s who has a donkey]. Hence ‘it’ can now express an individual concept d dependent on n which encodes the background condition [∃y d : y d is a donkey of x n ]. The second sentence of (31) expresses the proposition (31’) (abbreviating), (31’) << Ralph vaccinates d> : [∃x n : neighbor(x n )][∃y d : donkey-of(y d , x n )] > which is true iff there is a donkey-owning neighbor of Ralph’s x and a donkey y owned by x such that Ralph helps vaccinate y. More generally, let ‘R(t 1 , …, t n )’ be an atomic formula that expresses a (Russellian) proposition SC p relative to an assignment A, and for all i, let o i be the value of term t i relative to A. If α 1 , ..., α n are UAPs that express concepts c 1 , …, c n (respectively) which are not subordinate to any higher operator (i.e. their antecedents are not in the scope of a higher operator), then the two-component proposition p expressed by ‘R(α 1 /t 1 , …, α n /t n )’ relative to an assignment A is given by (32): (32) < SC p (o 1 /c 1 , …, o n /c n ) : ∆ p > where ∆ p is the linearization of the background conditions encoded by p’s closure [p]. To generalize (30) and (32) further still, we need to allow the possibility that the UAPs’ antecedents may occur in the scope of a higher operator, and so those UAPs’ background conditions may contain a variable bound by that operator. This is also a kind of concept dependence since the concepts’ background conditions will be determinable from the values of the variable bound by the higher operator. For example, in the quantified donkey sentence (33), the instantial term α is assigned by BC the background condition [∃y c : card(y) & x has y] which contains the variable x. !141 (33) Everyone who has a credit card uses it to pay his bill. The proposition expressed by the atomic formula ‘x uses α to pay x’s bill’ relative to an assignment A is undefined because for all we’ve said according to (32), there is nothing to bind the “free” variable x in α’s background conditions [∃y c : card(y) & x has y]. Where ‘x’ was bound by ‘everyone’ in (33), it now occurs free in the background conditions. Moreover, there is no covert individual concept for the individual concept expressed by α to be “dependent” on: the values for the “free” variable x are explicitly quantified over by ‘everyone’. Hence what we need is to allow quantifiers to bind variables in the background conditions encoded by individual concepts in their scope: ‘everyone’ must bind the ‘x’ in the background conditions [∃y c : card(y) & x has y]. This is achieved as follows. As before, let ‘R(t 1 , …, t n )’ be an atomic formula that expresses a (Russellian) proposition SC p relative to an assignment A, and for all i, let o i be the value of term t i relative to A. 
Furthermore, let α 1 , ..., α n be occurrences of UAPs in a formula ‘R(α 1 /t 1 , …, α n /t n )’ that express concepts c 1 , …, c n (respectively), and let ∆ be the linearization of the background conditions encoded by the closure of {c 1 , …, c n } (as determined by BC) which contain “free” variables x 1 , …, x m . The two-component proposition p expressed by the atomic formula ‘R(α 1 /t 1 , …, α n /t n )’ relative to an assignment A is given by (34): (34) < SC p (o 1 /c 1 , …, o n /c n ) : A(∆) > where A(∆) is the result of substituting for each i, the value of A(x i) for every occurrence of x i in ∆. To see how (34) helps us account for (33), recall the clause previously given for quantified sentences. Earlier we said that where ‘F’ and ‘G’ are formulas that contain free !142 variable ‘v’, the proposition expressed by the quantified formula ‘[∀v: F]G’ relative to an assignment A is (21) (21) <<EVERY, F*>, G*> where F* is the propositional function from objects x to propositions expressed by ‘F’ relative to an assignment A’ that differs at most from A in assigning x as the value of ‘v’, G* is defined analogously, and EVERY is the function from propositional functions f to the set of propositional functions g such that |f| ⊆ |g|, where |f| is the extension of the propositional function f. Although we will modify (21) momentarily, for now it is enough to see that (34) and (21) together predict that the proposition expressed by (33) relative to an assignment A is (33’), (33’) < <<EVERY, F*>, G*> : > where F* is the propositional function from objects o to the proposition expressed by ‘[∃y : card(y)](x has y)’ relative to an assignment A’ that differs at most from A in assigning o as the value of x, and G* is the propositional function from objects o to the proposition expressed by ‘x uses α to pay x’s bill’ relative to an assignment A’ that differs at most from A in assigning o as the value of x—i.e. the proposition <o uses y to pay o’s bill : [∃y c : card(y) & o has y] >. Hence the variable ‘x’ in the background conditions [∃y c : card(y) & x has y] encoded by α is bound by the head determiner ‘every’. 2.2 Nested Two-Component Propositions The next step to generalizing this picture is to allow that operators may take entire two- component propositions as arguments, allowing two-component propositions to be nested inside the structured complex component of another two-component proposition. This will !143 require a few modifications to the truth-conditional system just given, and will require us to say something more about what it is to mediately predicate properties of individual concepts. We begin with the truth conditions. Now, in some formulas which occur in the scope of an operator, there may be some UAPs whose antecedents lie outside of (i.e., prior to, or subsequent to in the case of cataphora) the operator, and there may be other UAPs whose antecedents lie inside of the operator. Let’s say that where ∆ is some background condition and 1 is some propositional operator in a two- component proposition p, ∆ takes scope over 1 in p iff ∆ are in the background conditions component of the proposition in which 1 is in the structured complex component (i.e. p is of the form < ...1… : ...∆... >); otherwise, 1 takes scope over ∆ in p iff ∆ is in the background conditions component of some proposition which 1 has as its argument (1 takes as its argument some two-component proposition in which is nested some two-component proposition of the form <... : ...∆...>). 
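Since these definitions are stated somewhat abstractly, it may help to display the intended shape of the representation. The following is only an illustrative Python sketch, under obvious simplifications: background conditions are modeled as strings, substitution is naive string replacement, and the names TwoComp, apply_assignment, and takes_scope_over are mine rather than part of the official theory.

```python
# A toy representation of two-component propositions, offered only for
# illustration. Background conditions are modeled as strings and substitution
# is naive string replacement; a real implementation would need genuine
# variable-binding machinery.
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class TwoComp:
    """A two-component proposition: a structured complex plus background conditions."""
    structured_complex: Any      # e.g. ("VACCINATES", "Ralph", "d") or ("NEG", TwoComp(...))
    background: List[str] = field(default_factory=list)

def apply_assignment(conditions: List[str], assignment: Dict[str, str]) -> List[str]:
    """The A(∆) step of (34): substitute the assigned values for free variables."""
    out = []
    for cond in conditions:
        for var, val in assignment.items():
            cond = cond.replace(var, val)   # crude; ignores variable boundaries
        out.append(cond)
    return out

def operator_occurs_in(complex_: Any, operator: str) -> bool:
    """Does the operator occur anywhere in a structured complex?"""
    if isinstance(complex_, TwoComp):
        return operator_occurs_in(complex_.structured_complex, operator)
    if isinstance(complex_, tuple):
        return any(operator_occurs_in(part, operator) for part in complex_)
    return complex_ == operator

def takes_scope_over(p: TwoComp, condition: str, operator: str) -> bool:
    """∆ takes scope over an operator in p iff ∆ sits in p's background component
    while the operator sits somewhere in p's structured complex component."""
    return condition in p.background and operator_occurs_in(p.structured_complex, operator)

# Binding a "free" variable in a background condition, as in (33); "Maria" is a toy value:
print(apply_assignment(["∃y_c : card(y_c) & x has y_c"], {"x": "Maria"}))

# A negated structured complex whose background condition still takes widest scope:
p = TwoComp(("NEG", TwoComp(("VACCINATES", "Ralph", "d"), [])),
            ["∃x_d : x_d is a donkey owned by Ralph"])
print(takes_scope_over(p, "∃x_d : x_d is a donkey owned by Ralph", "NEG"))  # True
```

The point of the sketch is simply that background conditions live alongside, rather than inside, the structured complex, which is why they do not compose with sentential operators in the ordinary way.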
In general, we have the following rule for background conditions’ scope: BCS. The background conditions encoded by a UAP take highest scope over all operators in the proposition expressed by the sentence in which it occurs except for those which take scope over its antecedent. BCS has a simple and intuitive explanation. Background conditions are a kind of semantic information—something that individuates the proposition expressed—that are introduced by the pronoun’s antecedent into the conversational background (Stalnakerian common ground or Lewisian conversational score). Because this information is introduced by the pronoun’s antecedent, its semantic significance operates at the level of the antecedent’s scope. That is, in ordinary cases where the antecedent is unembedded and stated in a factual !144 mood, the antecedent clause serves to say something about the actual world, and so updates the conversational background in which subsequent sentences occur. Since subsequent sentences are interpreted against the information the antecedent has already contributed, pronouns in these subsequent sentences anaphoric on the antecedent have an apparent quantificational force which takes widest scope over any operators those sentences may contain (even if the pronoun is itself embedded inside a scope island). Like instantial terms, which purport to refer to a member of the instantiating class, UAPs purport to refer to one of the objects satisfying the antecedent’s clause, even though they are not in fact rigid designators (see §6–7 of the previous chapter). In contrast, when the antecedent is embedded inside some other operator, or is used in some sentence with a nonfactual mood (i.e., with a subjunctive grammatical mood, with modal auxiliaries like would or might, in the scope of with expressions like suppose that, etc.), the semantic information it supplies does not apply to the overall conversational context (how the interlocutors take the actual world to be), but to some hypothetical or temporary conversational background. Hence the apparent quantificational force of pronouns anaphoric on such an antecedent takes scope under the subordinating operator. Inside the hypothetical context, the pronoun still purports to refer to some object which satisfies the antecedent’s clause, and so its background condition takes wide scope over any other operators inside the subordinated context. To illustrate how this works in the present framework, first consider a simple kind of subordination involving negation: (35) Ralph owns a donkey and it’s not the case that he vaccinates it. (36) It’s not the case that Ralph owns a donkey and he vaccinates it. !145 In (36) but not (35) ‘a donkey’ takes scope under the negation. Hence in the proposition expressed by (35) the background condition [∃y d : donkey(y) & Ralph owns y] will take scope over the negation operator, while in the proposition expressed by (36) the background condition will take scope under the negation operator. (The conjunction operator will take scope over the background condition in both propositions expressed because otherwise the background condition would take scope over the antecedent in the earlier conjunct): (35’) < <CONJ, <Ralph owns a donkey, <<NEG, <Ralph vaccinates d>> : [∃y d : donkey(y) & Ralph owns y] > >> : > (36’) < <NEG, <CONJ, <Ralph owns a donkey, <Ralph vaccinates d : [∃y d : donkey(y) & Ralph owns y]>> >> : > By way of generalizing on these predictions, note that by definition a UAP cannot be in the scope of its antecedent. 
Hence if a sentential operator takes as syntactic argument a formula which contains both a UAP and its antecedent, it follows that the antecedent must occur in a scope island which prevents it from raising to a c-command position at LF. But in that case, the background conditions of the UAP will not take scope higher than the island-making operator, since by BCS, the background conditions encoded by a UAP cannot take scope over operators which take scope over its antecedent. If a sentential operator 1 takes as syntactic argument a formula S which contains both a UAP and its antecedent, the UAP’s background conditions will not occur at the “highest” level of the two-component proposition expressed by S, but will occur in some two-component proposition nested within the two-component proposition expressed by S. Conversely, the background conditions at the “highest” level of the proposition expressed by a formula S cannot !146 be encoded by a UAP in S whose antecedent is also in S: any background conditions at the “highest” level must be encoded by UAPs in S whose antecedents are outside of S itself (say, in an earlier part of the discourse). It follows that if the two-component proposition p expressed by a formula S is < SC p : ∆ >, then taking the proposition expressed by ‘1(S)’ to be < <1*, SC p : ∆ >, where 1* is the contribution of ‘1’ to the proposition expressed by ‘1(S)’, is perfectly consistent with BCS. In other words, sentential operators may operate on the structured complex component of the proposition expressed by their syntactic arguments. That is, in lieu of (19), (20), and (21), which specified the propositions expressed by negations, conjunctions, and quantified formulas, we now have the following. If ‘P’ is a formula that expresses relative to an assignment A the proposition < SC P : ∆ P >, the proposition expressed by ‘~P’ relative to A is (37): (37) < <NEG, <SC P >> : A(∆ P ) > Similarly, suppose ‘Q’ is a formula that expresses relative to an assignment A the proposition < < SC Q : ∆ Q >. Let ∆ QinP be the background conditions in ∆ Q which correspond to antecedents which occur in ‘P’. Then the proposition expressed by ‘P & Q’ relative to A is (38): (38) < <CONJ, <SC P , <SC Q : ∆ QinP >> : A(∆ (P&Q)/QinP ) > where ∆ P&Q/QinP is the linearization of the background conditions encoded by the individual concepts in the closure of SC P and SC Q —except for those in ∆ QinP . Together, (37) and (38) predict (35’) and (36’) as the propositions expresse d by (35) and (36), respectively. Before we move to quantified formulas, it should be remarked that a consequence of BCS is that mediately predicating a property F of an individual concept a and then negating the result is not the same as negating the “representational import” of mediately predicating F of a. !147 By “representational import” of mediate predication, I mean the cognitive event-type involving direct (immediate) predication one thereby performs in in performing an act of mediate predication (see section 1.3). This is because, as we said above, the negation “operates on” the structured complex component, not on the entire two-component proposition. More concretely, consider the second sentence of “Ralph owns a donkey. He vaccinates it”. 
On the present account, to entertain the proposition expressed by the second sentence is to mediately predicate of some individual concept d the property of being vaccinated by Ralph, where to do that is thereby to predicate of Ralph’s donkeys the property of being such that one of them is vaccinated by him (this is its representational import). But now consider (35) where “he vaccinates it” has been negated. In entertaining the negated conjunct, one negates the result of mediately predicating the property of being vaccinated by Ralph of d, but in doing so one thereby says of Ralph’s donkeys that some one of them is not vaccinated by him. And this is importantly different from saying of Ralph’s donkeys that some one of them is vaccinated by him, and then saying that isn’t true! Since background conditions do not semantically compose like other propositional constituents, the “representational import” of negating formulas is not straightforwardly a matter of predicating the property not being true of the representational import of the formula negated. (However, we can also negate an entire two component proposition, background conditions in all: see section 2.3 below.) To handle quantified formulas, we first need the notion of a two-component proposition where the background conditions encoded by the individual concepts in some set S have been removed. For example, if p is a two-component proposition of the form < SC p : [∀x c1 : ….x c1 ...][∀y c2 : … c1 , c2 ...][∃z c3 : … c1 , c2 , c3 ...][∃w c4 : … c1 , c2 , c3 , c4 ...]> which contains individual concepts c 1, c 2, c 3, and c 4, then the proposition !148 < SC p : [∃z c3 : … c1 , c2 , c3 ...][∃w c4 : … c1 , c2 , c3 , c4 …]> is the result of removing, as an arbitrary example, the background conditions encoded by {c 1 , c 2 }. Let ‘F’ and ‘G’ be formulas that contain a free variable ‘v’ and which occur in the quantified formula ‘[∀v: F]G’. Let α 1 , ..., α n be UAPs that occur in ‘F’ or in ‘G’ whose antecedents occur outside of the quantified formula ‘[∀v: F]G’, while β 1 , ..., β n are the remaining UAPs in ‘F’ and γ 1 , ..., γ n are the remaining UAPs in ‘G’ whose antecedents occur inside ‘[∀v: F]G’. The proposition expressed by the quantified formula ‘[∀v: F]G’ relative to an assignment A is (39), where ∆ α is the linearization of the background conditions encoded by the closure of the individual concepts expressed by the α’s, F* is the propositional function from objects o to the proposition (or propositional function) expressed by ‘F’ relative to an assignment A’ differing at most from A in assigning o to ‘v’ with the background conditions encoded by the closure of the individual concepts expressed by the α’s removed, and G* is defined similarly: (39) < <<EVERY, F*>, G*> : A(∆ α ) > To illustrate, consider the following discourse (40): (40) Ralph has a female donkey. Every child who saw one of her foals fed it a carrot. The N-bar constituent of ‘every’ is child who saw one of her foals. Since the antecedent of ‘her’, the indefinite ‘a female donkey’, occurs in the prior sentence, by BCS the background condition for ‘her’ will take scope over the operator EVERY in the proposition (40’) expressed by (40): (40’) < <<EVERY, F*>, G*> : [∃y d : y d is one of Ralph’s female donkeys] > Thus, we don’t want the background condition [∃y d : y d is one of Ralph’s female donkeys] to recur in the the values of the propositional functions F* and G*—it’s already taken scope over !149 those propositional constituents. 
That is, F* will not be the function from objects o to (41), but rather the function from o to (42), where the background conditions in (40’) have been removed:

*(41) <<o is a child who saw a foal of d> : [∃y d : y d is one of Ralph’s female donkeys]>
(42) <<o is a child who saw a foal of d> : >

Similarly, ‘fed it a carrot’ will contribute to the proposition expressed (relative to A) the function G* from objects o to the propositional function expressed by ‘x fed β a carrot’ (relative to an assignment A’ differing at most...) where β encodes the background conditions [∃z f : z f is a foal of y d ]:

(43) < <o fed f a carrot> : [∃z f : z f is a foal of y d ]>

Hence (40’) is true iff there is a female donkey owned by Ralph y such that <<EVERY, F*[y/d]>, G*[y/d]> is true, which in turn is true iff the extension of F*[y/d] is in the extension of G*[y/d]. But since F*[y/d] is the propositional function from objects o to the proposition < <o is a child who saw a foal of y> : > and G*[y/d] is the propositional function from objects o to the proposition < <o fed f a carrot> : [∃z f : z f is a foal of y]>, we get that the proposition is true iff there is a female donkey owned by Ralph y such that every child x who saw a foal of y is such that there is a foal z of y such that x fed z a carrot.

Note that (42) and (43) are not propositions per se because they do not determine unique truth-values, given that there are no background conditions associated with d and f there. Rather, we may think of them as kinds of propositional functions, ones which determine a truth-value if supplied the relevant background conditions.

Note that when a UAP α’s antecedent, or the predicative material α’s antecedent attaches to, itself contains another UAP β, the individual concept expressed by α will be dependent on that of β. The asymmetry of dependence between individual concepts entails that two UAPs cannot both occur in each other’s background conditions, by which we mean that if one UAP α contains in its background conditions a variable subscripted by a distinct UAP β, then the background conditions for β cannot also contain a variable subscripted by α. Together with BC, this means that two UAPs cannot both occur in each other’s antecedents or the predicative material to which those antecedents are attached. It is a consequence of these restrictions that whenever we have cases of “crossing reference” where two putative anaphoric pronouns “reference” each other, as in the Bach–Peters sentence (44), one of the pronouns is in fact a bound variable, and not a UAP:

(44) The pilot who shot at it hit the MiG which chased him.

Our current view is able to handle this complex example both in the case where ‘the pilot…’ takes scope over ‘the MiG…’ as in (44a) and in the case where the scope order is reversed, as in (44b):

(44a) [the x: pilot(x) & x shot at α][the y: MiG(y) & y chased x](x hit y)
(44b) [the y: MiG(y) & y chased β][the x: pilot(x) & x shot at y](x hit y)

In (44a), ‘it’ is a UAP whose antecedent is ‘the MiG which chased x’ while ‘him’ is a bound variable; in (44b), ‘him’ is a UAP whose antecedent is ‘the pilot who shot at y’ while ‘it’ is a bound variable. Without slogging through the details, we claim that the present theory is capable of correctly interpreting both (44a) and (44b).
Hence our theory already provides a very powerful framework within which we can account for a wide variety of scope combinations !151 among quantifiers, negations, other sentential operators, and UAPs and their antecedent quantifiers. 2.3 Talking about Propositions It is easy to see how the foregoing theory is able to account for the apparent wide- scoping behavior of UAPs that we observed at the end of chapter 2. While the second sentence of (9) expresses the proposition (28), which is true at a world w iff at w, there’s a donkey of Ralph’s that he vaccinates, the second sentence of (45), which is the negation of the second sentence of (9), expresses the proposition (46), which is true at a world w iff at w, there’s a donkey of Ralph’s that he doesn’t vaccinate: (9) Ralph owns a donkey. He vaccinates it. (28) < Ralph vaccinates d : [∃x d : x d is a donkey owned by Ralph]> (45) Ralph owns a donkey. It’s not the case that he vaccinates it. (46) < <NEG, <Ralph vaccinates d>> : [∃x d : x d is a donkey owned by Ralph]> Similarly, the modalization of the second sentence of (9), ‘It might be the case that Ralph vaccinates it’ expresses the proposition (47) which is true at a world w iff at w, there a donkey y of Ralph’s such that it’s possible relative to w that Ralph vaccinates y: (47) < <POS, <Ralph vaccinates d>> : [∃x d : x d is a donkey owned by Ralph]> where POS is the modal property being possibly true. Although our theory of propositions gets the right results, it raises an important issue for the standard conception of propositions and how we talk about them. If propositions serve as the semantic contents of sentences (relative to contexts), the primary bearers of truth, falsity, and modal properties, and the referents of that-clauses, then we should expect that when a !152 sentence ‘S’ has as its semantic content the proposition that p, then the sentence ‘It’s not the case that S’ should have as its semantic content the proposition that it’s not the case that p, i.e. the proposition that results from applying the negation operator to the referent of the that-clause ‘that S’ (i.e., p). And we should expect that if the content of ‘S’, i.e. the proposition that p, is true at a world w iff at w, q, then the content of ‘It’s not the case that S’, i.e. the proposition that it’s not the case that p, is true at a world w iff at w, it’s not the case that q. But this is ostensibly challenged by the foregoing data. The second sentence of (45) is used to express the two- component proposition (46), which is the result of mediately predicating NEG not of the entire two-component proposition (28), but of the structured complex alone. Hence it’s not the case that (46) is true at a world w iff at w, it’s not the case that (28) is true at w. However, we clearly are able to talk, as we have been doing throughout this chapter, about the conditions under which a given two-component proposition p is true or false, possible or necessary, believed or asserted, without our predicates being construed as applying to the structured complex component of p rather than to p itself. If in a situation s where Ralph doesn’t have any donkeys I assert that (28) is not true, what I assert is true. But in asserting that (28) is not true, what I assert is clearly not (46), which would fail to be true in s. The wide- scoping behavior of background conditions does not prevent me from making claims which have two-component propositions, background conditions and all, as their proper targets. 
Hence sentential operators like ‘it’s not the case that’, ‘it is necessarily true that’, ‘possibly’ etc., as well as predicates like ‘is not true’, ‘is a necessary truth’, ‘is possible’, etc., express modal and alethic properties which can be either directly predicated of truth-bearing two-component propositions, or mediately predicated of structured complexes, which are not truth-bearers per !153 se but may be thought of as proto-propositions which, when supplied with the right 9 background conditions, determine a complete, truth-evaluable proposition. That is, just as we may directly (immediately) predicate the property G of an object o or mediately predicate G of an individual concept a, where these two cognitive acts are related but distinct, we can directly predicate the property NEG, the property being not true, of a bearer of truth values (a two- component proposition), or we can mediately predicate NEG of the structured complex of that proposition (i.e., a proto-proposition). To mediately predicate NEG of <Ralph vaccinates d> where d encodes the background condition [∃x d : x d is a donkey owned by Ralph] is to thereby directly predicate of Ralph’s donkeys the property of being xx’s such that one among the xx’s is not vaccinated by Ralph. In contrast, in saying that (28) is not true, I directly predicate the property NEG of that proposition. In some cases it may be ambiguous whether one’s intended predication target (direct or mediate) is the structured-complex or the whole proposition. Consider: (48) A: Ralph has a donkey. He vaccinates it. B: That’s not true. If B clarifies “Ralph doesn’t have any donkeys”, we know that she meant to predicate the property being not true of the entire proposition A asserted. If B says instead “Ralph hasn’t vaccinated his donkey yet” we know that the negation is instead internal to (presupposes) the background conditions: being not true is mediately predicated of the structured complex, rather than being directly of the entire two-component proposition. Thanks to Scott Soames for suggesting this term. 9 !154 2.3 Modal and Intensional Subordination Since we already allow for operators to take two-component propositions as argument, we have all that we need to account for modal and intensional subordination cases, of which quantified donkey sentences are but a special case. However, it’s worth spending a little time working through some examples to show how the present framework is able to handle the relevant data. In what follows, we assume a standard Kratzerian analysis of modals (Kratzer 1977, 1981) which, although not uncontroversial, constitutes what I take to be the orthodox theory of modals in the literature. According to the Kratzer semantics, modals take two contextual parameters as argument in addition to the proposition expressed by the prejacent (its “scope”): a set of propositions (the “modal base”) which determines a function f from a world of evaluation w to the set of worlds accessible from w, and another set of propositions (the “ordering source”) g which generates a partial ordering ≤ g of the worlds in the modal base. 10 The “best” worlds relative to a world of evaluation w will therefore be the worlds in the modal base f(w) that the ordering source g ranks the highest: Best f,g (w) =def. {w’ ∈ f(w) : [~∃w’’ : w’’ ∈ f(w)](w’’ < g w’)} 11 (In plain English: the best worlds are the ones that are not ranked strictly lower than any other worlds in the modal base.) Modals then quantify over the best worlds. 
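Nothing below turns on actually implementing this semantics, but a small computational sketch may make the definition of Best f,g (w) more vivid. The sketch assumes the Limit Assumption, models worlds simply as sets of the atomic propositions true at them, and uses illustrative names of my own choosing; it is not a serious model of the modal machinery.

```python
# An illustrative sketch of the Kratzerian machinery just described, under the
# Limit Assumption. Worlds are modeled as frozensets of the atomic propositions
# true at them; f is the modal base and g the ordering source.

def as_good_as(w1, w2, g):
    """w1 ≤_g w2 iff every ordering-source proposition true at w2 is true at w1."""
    return {p for p in g if p in w2} <= {p for p in g if p in w1}

def strictly_better(w1, w2, g):
    """w1 <_g w2 iff w1 ≤_g w2 and not w2 ≤_g w1."""
    return as_good_as(w1, w2, g) and not as_good_as(w2, w1, g)

def best(w, f, g):
    """Best_{f,g}(w): the f-accessible worlds not strictly outranked by any other."""
    accessible = f(w)
    return {w1 for w1 in accessible
            if not any(strictly_better(w2, w1, g) for w2 in accessible)}

def must(p, w, f, g):
    """'must p' is true at w iff p holds at every g-best f-accessible world."""
    return all(p(w1) for w1 in best(w, f, g))

def may(p, w, f, g):
    """'may p' is true at w iff p holds at some g-best f-accessible world."""
    return any(p(w1) for w1 in best(w, f, g))

# Toy example: three accessible worlds, an ordering source preferring 'law_obeyed'.
w0 = frozenset()
worlds = [frozenset({"law_obeyed", "tax_paid"}),
          frozenset({"law_obeyed"}),
          frozenset({"tax_paid"})]
f = lambda w: worlds
g = {"law_obeyed"}
print(best(w0, f, g))                                # the two law-obeying worlds
print(must(lambda w: "law_obeyed" in w, w0, f, g))   # True
print(may(lambda w: "tax_paid" in w, w0, f, g))      # True
```

The toy example at the end merely confirms that the two law-obeying worlds are the best ones relative to the given ordering source, and that the universal and existential modal forces then quantify over exactly those worlds.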
Roughly, ‘must p’ is true w ≤ g w’=def. {p ∈ g : p is true at w’} ⊆ {p ∈ g : p is true at w}. (In plain English: w is at least as good as 10 w’ iff all the propositions in the ordering source that are true at w’ are also true at w. We define < g in terms of ≤ g in the standard way: w < g w’ =def. w ≤ g w’ and ~(w’ ≤ g w).) Notice that by convention w ≤ g w’ means that w is ranked as least as high as w’ (lesser is better). Here I assume what is known as the “Limit Assumption” for the sake of simplicity. 11 !155 at a world of evaluation w iff p is true at all the g-best worlds f-accessible from w; ‘may p’ is true at w iff p is true at some of the g-best worlds f-accessible from w: [[must p]] is true at <f, g, w> iff [∀w’ : w’ ∈ Best f,g (w)](p is true at w’) [[may p]] is true at <f, g, w> iff [∃w’ : w’ ∈ Best f,g (w)](p is true at w’) In this way, Kratzer’s semantics captures different flavors of modality by assigning different values to the modal base and ordering source. Following Roberts (1989), we suppose that the effect of uttering a discourse like (49) is to add the material in the scope of ‘might’ (a wolf walks in) into the modal base of the subsequent modal ‘would’ (which has the modal force of must), (49) A wolf might walk in. It would eat you. Hence the second sentence means something like: in all the worlds where a wolf walks in, it eats you, where “it” is a wolf that walks in in one of those contextually-salient worlds. Since we are not here concerned with analyzing modals per se, we can simplify the foregoing presentation somewhat by simply taking modals to take a single contextually-determined parameter B which is a function from a world of evaluation w to the contextually-determined “best” worlds at w, as defined above in terms of the modal base f and ordering source g. Let’s say that a UAP α which occurs in a modal subordination formula ‘1(P)’ is subordinate to ‘1’ in ‘1(P)’ iff its antecedent occurs in the scope of the modal or intensional operator which licenses ‘1’. Stated in our framework of two-component propositions, we have it that the proposition expressed by ‘must(P)’ relative to A is given by (50): (50) < <NEC, <P*, B>> : A(∆ α ) > !156 where ∆ α is the linearization of the background conditions encoded by the closure of the individual concepts expressed by the UAPs not subordinate to ‘must’ in ‘must(P)’ (i.e. whose antecedents are not in the scope of the modal which licenses ‘must’), B is the contextually- determined function from worlds w to best-worlds B(w), and P* is the two-component proposition expressed by ‘P’ relative to A with the background conditions ∆ α removed. NEC is the relation x is true at all of the worlds in y, a relation which holds at a world w of a two-component proposition P* and a contextually determined parameter B iff P*is true at all of the worlds w’ in B(w). Hence a Russellian proposition of the form <NEC, <P*, B>> is true at a world w iff the property NEC applies to <P*, B>. Note that (50) embodies a modification to BCS. Recall that BCS said a UAP’s background conditions take highest scope over all operators in the proposition expressed by the sentence in which it occurs except for those which take scope over its antecedent. As applied to (49), BCS incorrectly predicts that the background conditions for ‘it’ should take scope over the modal operator ‘would’ since ‘would’ does not itself take scope over ‘a wolf walks in’ in the prior sentence. 
But this is not right: in a modal subordination case, the UAP’s background conditions are under the scope of the subordinating operator. Hence, what we need is the following: BCS*. The background conditions encoded by a UAP take highest scope over all operators in the proposition expressed by the sentence in which it occurs except for any subordinating operators or for those which take scope over its antecedent. Equipped with BCS* and (50), the proposition expressed by the second sentence of (49) relative to an assignment A is (49’): !157 (49’) < <NEC, <<w eats you : [∃x w : x w is a wolf that walks in]>, B>> : > Where B is the contextually determined function from worlds w to sets of (nearby) worlds where a wolf walks in. (49’) is true at a world w iff <w eats you : [∃x w : x w is a wolf that walks in]> is true at at all worlds in B(w), that is, in all worlds where a wolf walks in. Modal subordination cases involving other modal forces (‘might’, ‘may’, etc.), as well as adverbial quantification (‘usually’, ‘always’) can be handled analogously. 2.4 Weak versus Strong Readings and Conditional Donkey Sentences We have already shown how the present framework is able to handle so-called weak readings of donkey sentences like (33), repeated below: (33) Everyone who has a credit card used it to pay his bill. To account for strong readings of donkey pronouns, we simply take the pronoun’s background condition to have universal rather than existential force. Hence (51) expresses the proposition (51’) where G* is now the function from objects o to propositions expressed by ‘x vaccinates α’, where ‘α’ encodes the universal background conditions [∀y d : y d is a donkey owned by x]: (51) Every farmer who owns a donkey vaccinates it. (51’) < <<EVERY, F*>, G*> : > We can also account for “mixed” cases like (48) where some of the pronouns have strong readings and others weak: (52) Everyone with a Metrocard 1 who took a bus 2 used it 1 when boarding it 2. Again, this is captured by taking the first pronoun representing Metrocards to have existential background conditions while the second pronoun representing buses has universal background conditions. As stated previously, taking UAPs with existential antecedents to encode universal !158 background conditions constitutes an exception to the defeasible rule BC for determining the background conditions encoded by UAPs. As discussed in the previous chapter and at the beginning of the present one, there seem to be plausible explanations for the landscape of various readings of quantified donkey sentences. Conditional donkey sentences also express weak versus strong readings. However, as we saw, conditionals exhibit more contextual variation in their interpretations. Whereas quantified donkey sentences always quantify over objects which satisfy the head determiner’s N-bar constituent (e.g., farmers who own donkeys rather than farmer-donkey pairs in (51)), with conditionals what set of situations or events being quantified over or described may change from context to context. For example, Chierchia (1995) argues that the topicalization of dolphins in (52) makes that sentence true if most trained dolphins do incredible things for (one of) their trainers, even if most trainers don’t get dolphins to perform, and most dolphin-trainer pairs aren’t successful at getting the dolphin to perform feats for the trainer. The existence of a dumb dolphin trained by very many trainers to no avail is not enough to make (52) false. (52) Dolphins are truly remarkable. 
Usually, if a trainer trains a dolphin she makes it do incredible things. On the other hand, the topicalization of trainers in (53) makes the second sentence true if most dolphin trainers get dolphins to do incredible things: (53) The trainers here are so talented. Usually, if a trainer trains a dolphin he gets it to do incredible tricks. In yet other cases, a pairwise interpretation seems called for, as in (54) (again due to Chierchia 1995) where we seem to be quantifying over or describing situations each of which involves a student, paper, professor triple. !159 (54) When a student gives a paper to a professor, she expects her to comment on it promptly. These intuitions are somewhat delicate, and it is not obvious to me that the truth conditions Chierchia assigns these sentences are entirely correct. Suppose nine out of ten of the dolphins always do tricks for a certain talented trainer at SeaWorld, but those nine dolphins rarely do tricks for any of the many other trainers that train them. Then it is true that most trained dolphins do tricks for a trainer they are trained by (i.e. the talented one), but it isn’t obvious that the second sentence of (52) is true in such a case. Nevertheless, Chierchia’s larger point is still valid: the different topicalization in (52)/(53) changes the truth conditions of the second sentence, by changing the domain of situations they quantify (trained dolphins v. dolphin trainers), even if it is difficult to say, or perhaps even indeterminate, what condition (52) and (53) place on members of these respective domain. Hence donkey conditionals do exhibit more contextual variation in their interpretations than quantified donkey sentences. What explains these contextual variations and how do we generate these readings in our present framework? In this section I propose two mechanisms of explanation. I will assume that some version of the view that donkey conditionals quantify or describe situations or events is !160 correct. This view corresponds to what I take to be the pre-theoretical, intuitive understanding 12 of these sentences’ meanings. Moreover, for many donkey conditionals, we have a pretty good intuitive grasp of the kinds of situations or events they quantify or describe. For example, in processing (55), we do not seem to be counting each of Ralph’s credit cards separately, imagining a separate situation for each different card. Only theoreticians of language, not ordinary speakers, would ever think to divide up the cases with that fineness of grain: (55) If Ralph has a credit card, he usually uses it to pay his bill. Rather, we seem to imagining typical scenarios where Ralph has a credit card. In some such situations Ralph may have just one credit card in his wallet, but more likely he has several. Since we aren’t really thinking about whether or not he has multiple cards, we don’t bother to consider each one as a separate case. Rather, all we are interested in are situations where Ralph has some credit card or other (one or more). If he has multiple credit cards in one of the situations imagined, we include them as well in that situation. Intuitively, in (55) we are saying that in most such situations s where Ralph has a credit card (s includes all of his cards, and is Schein (2003) argues that if-clauses are plural, definite descriptions of events. One advantage of this 12 approach is that it provides an elegant solution to a problem Barker noted (1997) for the situation- quantificational analysis of nested donkey conditionals. 
“If a theory is classical, then if it is inconsistent, it is usually trivial” clearly does not predicate of each classical theory the property of being trivial in most situations where it is inconsistent—theories are not sometimes consistent, sometimes not. Rather than rescuing the situation-quantificational approach by opting for a non-compositional analysis of nested donkey conditionals, Schein’s analysis of if-clauses as plural descriptions of events avoids Barker’s problem because the condition supplied by the matrix clause applies to the situations or events the if- clause plurally describes, rather than to each individual situation/event therein (i.e., Barker’s sentence is true iff situations involving classical theories are such that the situations among them involving inconsistent theories are such that most theories among them are trivial). This issue is beyond the scope of this dissertation; however, the approach to donkey conditionals ventured here could equally well apply to a Schein-type analysis. !161 otherwise as much like the actual discourse context as possible), thereafter and all else being equal, there is a situation s’ where he uses one of his cards in s to pay his bill. 13 In contrast, in (56) we are counting credit cards separately, exactly one for each situation: (56) If Ralph has a credit card he maxes it out on frivolous purchases. Here each of the situations quantified over involves just one credit card. Intuitively, in evaluating (56) we are thinking of each credit card as a separate case. Hence (56) is true iff in all situations s where Ralph as a credit card in s, he maxes out a credit card he owns in s. As previously discussed, (52) and (53)’s truth conditions are less certain, but Chierchia’s truth conditions can also be captured along the lines of (55). That is, in (52) we quantify over situations each of which involves a trained dolphin and its trainers. We say of most such situations s that a dolphin in s is made to do tricks by a trainer of the dolphin in s. Hence (52) is true iff most trained dolphins are made by a trainer to do tricks. Conversely, in (53) we quantify over situations involving a trainer and all the dolphins he trains. We say of most such situations s that a trainer in s gets a dolphin he trains in s to do tricks. Hence (53) is true iff most trainers get a dolphin they train to do tricks. Schein (2003) argues that donkey conditionals have both a temporal direction and a ceteris paribus 13 clause. He uses the ceteris paribus clause in particular to account for the “weak readings” of donkey conditionals. Even better, if the parking meter fare is one dollar, then “If Ralph has a quarter in his pocket, he puts it in the meter” requires Ralph to put not one (weak), and not all (strong), but just four of his quarters in the meter. This reading is captured if each quarter satisfies the condition of being placed in the meter or tripping the ceteris paribus clause. Although I think Schein is right that both these features are needed, we cannot rely only on the ceteris paribus clause alone to account for the contextual sensitivity of donkey conditionals: we still need to vary the situations being quantified in order to capture the intuitive situations donkey conditionals quantify, as well as Chierchia’s (1995) data. We can include both of these features into a situation-quantificational analysis of conditionals as we have done above, or as Schein does, into an account that takes indicative conditionals to be a plural description of events. 
A full discussion of these features will not be explored here. !162 More generally, we address the contextual sensitivity of donkey conditionals with respect by varying the domain of quantification rather than the interpretation of the conditional operator. Rather than capturing asymmetric readings of conditionals by having the conditional operator select for certain distinguished noun phrases in the antecedent, we partition the “distinguished" individuals denoted by those noun phrases into distinct situations. Any other “non-distinguished” individuals mentioned in the antecedent may be included altogether in the situation of the relevant “distinguished” individual they are associated with. So for example, 14 we separate out the dolphins in (52) into different situations but include in each dolphin’s situation all of its trainers; whereas in (53) we separate out the trainers in separate situations but include in each trainer’s situation all of the dolphins he trains. The proposal helps not only with the proportion problem but also with weak and strong readings. The “distinguished” individuals separated into distinct situations will effectively be quantified over with the force of the conditional operator’s adverbial modifier (always, sometimes, usually, etc.). This gets the strong readings for those UAPs when the adverbial quantifier is always. For other, “non-distinguished” individuals in the situations quantified over, we can capture their weak readings by taking them to have existential background conditions and their strong readings with universal background conditions. Hence in the “mixed” case (57), we quantify over situations which contain just one person taking just one bus with all his Metrocards, and we say of all such situations s that α uses β to board γ, where α is a UAP that encodes [∃x p : x p is a person that has a Metrocard and takes a bus], β encodes [∃y c : y c is a Metrocard of x p ] and γ encodes [∃z b : z b is a bus boarded by x p ]: (57) If someone has a Metro card 1 and takes a bus 2 , he uses it 1 to board it 2 . Although see the discussion of the bishop sentence below. 14 !163 Note that the notion of a minimal situation is not being directly invoked here. That notion was introduced because if we quantify over all situations that validate the antecedent simpliciter, then we end up including, say, multiple donkeys within the same situation and so uniqueness will not be secured (and even if we don’t want a uniqueness constraint we need to separate out the donkeys in different situations in order to capture asymmetric readings involving usually, as the proportion problem shows). So we don’t want the situations to be too big, which is why the situations need to be minimal. But the minimality device also has difficulties of its own, for which there are epicycles of solutions. In this section we sidestep 15 these difficulties by going straight for the desired end result: to capture the intuitive situations the conditional asks us to quantify or describe. We have the contextual flexibility to quantify over minimal situations that validate the antecedent when that’s called for, but we also have the contextual flexibility of broadening the grain (e.g. including multiple dolphins, or trainers, in the same situation) to allow for weak readings (e.g. including all of the credit cards in the same situation so we don’t count them separately) when that’s needed as well. 
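To make the intended truth conditions for (57) concrete, here is a toy illustration. The situations, the individuals, and the use relation are all invented for the example and carry no theoretical weight; each situation contains one person, one bus boarding, and all of that person’s Metrocards, as described above.

```python
# A toy illustration of the situation-based truth conditions sketched for (57).
from dataclasses import dataclass
from typing import FrozenSet, Set, Tuple

@dataclass(frozen=True)
class Situation:
    person: str
    bus: str                    # the single bus boarding in the situation
    metrocards: FrozenSet[str]  # all of the person's Metrocards

def true_57(situations: Set[Situation],
            uses_to_board: Set[Tuple[str, str, str]]) -> bool:
    """(57) is true iff in every situation s, the person in s uses SOME Metrocard
    of theirs in s to board the bus in s (the weak reading of 'it 1')."""
    return all(
        any((s.person, card, s.bus) in uses_to_board for card in s.metrocards)
        for s in situations
    )

# Ann has two cards and boards one bus; she only ever uses one of the two cards.
situations = {Situation("Ann", "bus_12", frozenset({"card_A", "card_B"}))}
uses = {("Ann", "card_A", "bus_12")}
print(true_57(situations, uses))   # True: one card suffices (weak reading)
```

The strong reading of ‘it 1’ would be obtained by replacing the existential check over the person’s Metrocards with a universal one, leaving the quantification over situations untouched.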
We can be neutral as to how this is achieved, and in particular whether there is a syntactically located contextual domain restriction variable, or merely a contextually sensitive parameter of the semantic interpretation (or perhaps the relevant readings are simply outputs of some post-semantic pragmatic process). There are likely a complex array of factors which determine the domain of Kratzer (2014) summarizes the following antecedents as causing trouble for minimality, 15 (i) Whenever snow falls… (how much is minimal snow?) (ii) Whenever between 20 and 200 guests come… (minimality quantifies over just 20 guests) (iii) Whenever the cat eats more than one can of food… (is there a minimal amount of cat food?) (iv) Whenever no one shows up… (minimality quantifies over all situations) Cf. Schein (1993, ch. 9-10) for a discussion of this issue, and a proposal that provides a recursive definition of the kinds of situations or events which exemplify or render (rather than just satisfy) a given formula. !164 quantification for a given conditional. My claim is that our current view is able to get the right results once supplemented with some such account that specifies the contextually determined domain of situations donkey conditionals intuitively quantify over. Turning finally to the bishop sentence, it seems to me that we need some way of making the two bishop meeting situation asymmetric. This is because if we look at cases where the adverb of quantification is usually as in (58), I register the intuition that (58) requires that more than half of the possible blessings occur: (58) Usually, if a bishop meets a bishop, he blesses him. Suppose there are n bishops each of which has a one-on-one meeting with all other bishops. Then there will be n*(n - 1)/2 total meetings, and hence n*(n - 1) total opportunities for blessings (two per meeting). Now, if (55) simply means something like in more than half of the bishop meetings, one of the bishops blesses the other, then it could be consistent with (58) that half of the bishops never bless and the other half of the bishops bless most but not all of the time. But that does not seem right: (58) requires that most blessing opportunities are taken—i.e., that there are at least n*(n - 1)/2 blessings. However, if (58) quantifies over meetings simpliciter, then there will be n*(n - 1)/2 situations quantified over. But ‘usually’ plausibly means more than half. Hence, if we want the result to be that there are at least n*(n - 1)/2 blessings, then we need there to be twice as many situations quantified over as there are meetings. To double the number of situations, we need to treat the bishops asymmetrically. One proposal due to Ludlow (1994) is to take the bishops in each meeting situation to be distinguished by different roles: one bishop is the agent of the situation, while the other is the patient. Hence (58) is true iff most situations s where an agent bishop meets a distinct patient bishop, an agent bishop in s blesses a patient bishop in s. However, as Elbourne complains, “it is !165 prima facie plausible to say that symmetrical relations do by definition constitute eventualities whose arguments have identical thematic roles, if we are to maintain any relationship between thematic roles and discernible differences in the properties of entities in extralinguistic reality” (2006, p. 142). 
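Before turning to Elbourne’s alternative, it may help to make the counting concrete. The following toy check confirms that pairwise meetings among n bishops yield n*(n - 1)/2 meetings but n*(n - 1) blessing opportunities, and that a blessing pattern (invented for the example) can cover most meetings while leaving most opportunities untaken; this is just the gap between the two readings described above.

```python
# A small arithmetic check of the counting argument above. With n bishops who
# all meet pairwise there are n(n-1)/2 meetings and n(n-1) blessing
# opportunities (two per meeting). The blessing pattern below is invented.
from itertools import combinations, permutations

bishops = ["b1", "b2", "b3", "b4"]
n = len(bishops)

meetings = list(combinations(bishops, 2))          # unordered pairs
opportunities = list(permutations(bishops, 2))     # ordered pairs: blesser, blessee
assert len(meetings) == n * (n - 1) // 2           # 6
assert len(opportunities) == n * (n - 1)           # 12

# Invented pattern: in four of the six meetings, exactly one blessing occurs.
blessings = {("b1", "b2"), ("b1", "b3"), ("b2", "b3"), ("b2", "b4")}

meetings_with_blessing = sum(
    1 for (x, y) in meetings if (x, y) in blessings or (y, x) in blessings
)
print(meetings_with_blessing / len(meetings) > 0.5)   # True: most meetings have a blessing
print(len(blessings) / len(opportunities) > 0.5)      # False: most opportunities untaken
```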
Recall that Elbourne’s alternative solution treats the bishops asymmetrically by positing an internal mereological structure to the meeting situation isomorphic to the antecedent’s syntax, explaining the unacceptability of (59) given the coordinate structure constraint:

(59) If a bishop and a bishop meet, he blesses him.

However, in addition to the problems for this solution discussed in the previous chapter, it would seem that Elbourne is open to his own charge. That is, why should we think that the extra-linguistic mereological structure of the symmetric meeting situation denoted by the antecedent is asymmetric with respect to the two bishops—and isomorphic to the antecedent’s syntax? If Ludlow is guilty of begging the question by positing asymmetric thematic roles as part of the extra-linguistic reality of the meeting situation which we have no independent reason to accept, as Elbourne claims, then I would question whether Elbourne’s own posited asymmetry enjoys the required independent motivation. While both thematic roles and situation semantics do work in other semantic problem-domains, I’m not sure it’s fair to say that Elbourne’s posited asymmetry of the meeting situation is more motivationally robust than Ludlow’s.

Perhaps we can thread this needle by taking the situations in the domain of quantification to be not asymmetric meetings per se, but symmetric meetings whose participants are distinguished by their being agents and patients of a potential blessing event. That is, rather than take x meeting y to be a distinct event from y meeting x, we instead take every meeting event to be associated with two distinct potential blessing events, one where x is a potential blesser and y a potential blessee and another where y is a potential blesser and x a potential blessee. The quantifier then quantifies over these latter events: not meetings of bishops per se, but meetings of bishops that are potential blessing opportunities, where there are two such opportunities (events) per meeting. The proposal lets us keep the meeting relation symmetric by identifying the asymmetry with something external to it. And it does a decent job of capturing the cases or situations (58) intuitively quantifies or describes. I leave it to the reader to decide whether this proposal has any advantage over that of Ludlow or Elbourne with respect to the plausibility of the event metaphysics it posits.

Whichever of these views we opt for, it is clear that taking pronouns to express individual concepts which encode existential background conditions will account for asymmetric readings of the bishop sentence once the bishops are suitably distinguished. We may suppose that the proposition expressed by (58) is something like the following, where (58c) is (an abbreviation of) the structured proposition C in (58’):

(58’) <IF usually <A, C>>
(58c) <a blesses b : [∃x a : x a is a bishop that has the opportunity to bless a bishop that he meets][∃y b : y b is a bishop distinct from x a that has the opportunity to be blessed by a bishop that he meets]>

(58’) is then true iff C is true in most situations in the contextually determined partition of the situations where A is true. The relevant partition involves situations where there are only two bishops x and y, they meet, and one of the pair has the opportunity to bless the other while the other has the opportunity to be blessed. And (58c) is true in a situation s iff there is an individual x a in s that is a
And (58c) is true in a situation s iff there is an individual x a in s that is !167 potential blessing bishop of another bishop in s he meets, an individual y b distinct from x a that is a potential blessee of x a , and x a blesses y b . 2.5 Attitude Reports A final set of problem cases concerns the interaction of donkey anaphora and attitude reports. To begin, consider the second sentence of (60): (60) Someone murdered Smith. Detective Peters suspects he escaped through the window. On the relevant reading, Peters does not know who murdered Smith; his suspicion is simply based on a trail of blood in Smith’s apartment that leads to the window. Plausibly, this reading is captured by taking the second sentence to relate Peters to the two-component proposition (60a) expressed by the complement clause: (60a) < <m escaped through the window> : [∃x m : x m murdered Smith] > On this view, the second sentence of (60) requires Peters to entertain, or be suitably disposed to entertain, (60a) in a certain way characteristic of suspicion—that is, with whatever features suspicions have by virtue of which entertaining a proposition p constitutes suspecting it, rather than some other propositional attitude like desire, hope, or wonder. For Peters to entertain (60a) is for him to mediately predicate of some individual concept m—or rather, of his personal token of m (see chapter 1)—the property escaping through the window. In doing so, he thereby represents the individuals in m’s value range as being such that one among them escaped, where m’s value range consists of those individuals that satisfy the condition [∃x m : x m murdered Smith]. Hence the proposition Peters is reported to suspect is true iff someone murdered Smith and escaped through the window. !168 The proposition expressed by the second sentence in (60) is given schematically by (60b), where Sus is the suspicion relation and to entertain (60b) is to directly predicate Sus of (60a) and Peters, (60b) <Sus, <Peters, (60a)>> Peters bears the Sus relation to (60a) iff Peters mediately predicates, or is suitably disposed to mediately predicate, in a way characteristic of suspicion, the property of escaping of his token of m. Hence, (60) is true iff Peters is suitably disposed to mediately predicate the property of escaping through the window of his token of m—in simpler terms, iff Peters believes that “he” escaped through the window, where “he” represents for Peters whoever murdered Smith. Although this view is on the right track, there is an immediate problem. In other cases, the background conditions appear not to be aspects of the reported attitude but rather are merely features of the conversational context shared by the speaker and his interlocutors. This can be illustrated by Geach’s famous intentional identity sentence: (61) Hob believes a witch blighted Bob’s mare. Nob believes she killed Sob’s sow. (61) can be true even if Nob doesn’t know of Hob or Bob (and Hob doesn’t know of Nob or Sob). For example, suppose both Hob and Nob read the same newspaper article that claims that a witch Helga has been blighting local farm animals. Hob concludes, “She blighted Bob’s mare,” while Nob similarly exclaims, “She killed Sob’s sow.” Unbeknownst to either, there is no witch: the animals died of natural diseases. Nevertheless, the second sentence is still true. 
If the individual concept expressed by the pronoun in the complement clause encodes the background condition [∃x: x is a witch who blighted Bob’s mare] or [∃x: x is a witch believed by Hob to have blighted Bob’s mare], then the present account takes Nob to mediately predicate !169 the property of killing Sob’s sow of a token of an individual concept that encodes such a condition. But if Nob doesn’t have any knowledge of Hob or Bob, it would be imprudent to ascribe to him concepts which encode background conditions that concern Hob or Bob if background conditions are to have any psychological reality. To address this problem, we will need to liberalize the relation that needs to hold between the subject of the report and the proposition expressed by the complement clause. Intuitively, what we would like is for (61) to require that Hob and Nob’s private mental representations be the tokens of the same public individual concept (mental representation- type) without also requiring that Hob and Nob’s individual concept-tokens encode the same background condition. That is, we want the truth conditions of the attitude report to depend on the individual concepts in the two-component proposition expressed by the complement clause, but not on the background condition component. The relation Bel that a de dicto belief report predicates of the subject i and the content of complement clause <SC p : ∆> is true of such a pair whenever i entertains, or is suitably disposed to entertain, <SC p : ∆*> in a way characteristic of belief, where ∆* are some background conditions potentially distinct from those ∆ that feature in the complement clause. What it is to entertain a two-component proposition is the same as before: it is to perform some structured cognitive act involving directly or mediately predicating properties and relations of objects, individual concepts, constituent propositions, propositional functions, and so on. Applying this to (61) itself, we suppose the first sentence makes salient some individual concept (mental representation-type) a token of which is Hob’s target of mediate predication by the property of blighting Bob’s mare. The second sentence tells us that Nob mediately predicates the property of killing Sob’s sow of a token of the same individual concept tokened by Hob. !170 Plausibly, Hob and Nob’s private mental representations count as tokens of the same individual concept by virtue of the fact that Hob and Nob both acquired their belief that there is a malignant witch in the area after reading the same newspaper asserting that fact (they share a common causal origin). Similarly, the speaker’s report is presumably grounded in his knowledge not only that Hob and Nob both believe that there is a malignant witch, but that their respective beliefs were formed in response to the same newspaper article. Hence the mental representation that the speaker uses to represent Hob and Nob’s witch also shares their distal causal origin and so is also a token of the same individual concept. The speaker’s token of this individual concept encodes the background condition [∃x w : x w is a witch who blighted Bob’s mare], but Hob and Nob’s tokens of this individual concept may encode different background conditions. Since neither individual concepts nor their tokens are individuated by background conditions (indeed, a token concept can encode different background conditions at different times), allowing this is not a problem. 
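Schematically, the liberalized relation I have in mind can be pictured as follows. This is only a sketch: the representation of concept tokens, the use of shared causal origin as the test for tokening the same concept, and all of the particular values are stand-ins for whatever the correct account of concept individuation turns out to be.

```python
# A schematic sketch of the liberalized attitude relation described above.
# Concept tokens, their causal origins, and the agents' "predication records"
# are invented stand-ins; only the overall shape of the proposal is modeled.
from dataclasses import dataclass
from typing import Optional, Set, Tuple

@dataclass(frozen=True)
class ConceptToken:
    owner: str
    causal_origin: str                 # e.g. the newspaper article
    background: Optional[str] = None   # tokens may encode different conditions

def same_concept(t1: ConceptToken, t2: ConceptToken) -> bool:
    """Tokens of the same public individual concept: here, shared causal origin.
    Background conditions play no role in individuating the concept."""
    return t1.causal_origin == t2.causal_origin

def believes(subject: str,
             reported_concept: ConceptToken,
             reported_property: str,
             predications: Set[Tuple[ConceptToken, str]]) -> bool:
    """The subject stands in Bel to the reported content iff the subject mediately
    predicates the reported property of SOME token of the same concept, whatever
    background conditions that token happens to encode."""
    return any(token.owner == subject
               and same_concept(token, reported_concept)
               and prop == reported_property
               for token, prop in predications)

# Hob's and Nob's witch tokens share an origin (the newspaper) but encode
# different background conditions; the speaker's token is a third one.
hob_token = ConceptToken("Hob", "newspaper", "a witch who blighted Bob's mare")
nob_token = ConceptToken("Nob", "newspaper", "a witch who killed Sob's sow")
speaker_token = ConceptToken("speaker", "newspaper", "a witch who blighted Bob's mare")

predications = {(nob_token, "killed Sob's sow")}
print(believes("Nob", speaker_token, "killed Sob's sow", predications))  # True
```

The crucial feature is that same_concept ignores the background conditions the tokens happen to encode, which is what allows Nob to token the speaker’s concept without any knowledge of Hob or Bob.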
This individual concept-type and its corresponding background condition feature in the proposition P expressed by the complement clause in the second sentence of (61):

<<w killed Sob's sow> : [∃x_w : x_w is a witch who blighted Bob's mare]>

The proposition expressed by the second sentence of (61) is then

(62) <Bel, <Nob, P>>

where <Nob, P> instantiate Bel iff Nob entertains some proposition of the form <<w killed Sob's sow> : ∆*> in a way characteristic of belief, where to entertain <<w killed Sob's sow> : ∆*> is to mediately predicate killing Sob's sow of (a token of) the individual concept w. This captures the sense in which Hob and Nob's beliefs are directed at the same individual (they share the same witch concept) without requiring that their witch concepts denote anything (no nonexistent witches are needed), nor that Nob have any knowledge of Hob or Bob (they need not feature in any explanation of his psychology or behavior).

This seems to get us what we want for (61), but when we turn back to (60), we now face the problem that the present view no longer requires that the background condition [∃x_m : x_m murdered Smith] be part of Detective Peters' reported attitude. It can't be just any mental representation that serves as the target of Peters' mediate predication by the property of escaping through the window. It has to be a mental representation Peters uses to represent Smith's murderer.

One strategy for capturing this would be to build some contextual flexibility into the attitude verb, so that in certain contexts like (60) the attitude verb is sensitive to the background conditions in the proposition expressed by the complement clause, while in other cases like (61) it ignores them. However, a simpler and more theoretically attractive solution is to build the requisite variation into the conditions under which the subject tokens the same individual concept as that tokened by the speaker. In cases like (60), the concept made salient by the antecedent is one whose semantic function is exhausted by the fact that it serves to represent Smith's murderer. For anyone's mental representation to be a token of this concept, it must also function to represent Smith's murderer. Hence, in order for (60) to be true, Detective Peters must have a mental representation which functions to represent Smith's murderer (and must mediately predicate of it the property of escaping through the window). In contrast, in (61), the concept made salient by the antecedent is one that serves as the target of Hob's mediate predication by the property of blighting Bob's mare. In order for someone to token this concept, it is not required that their token represent anything about Hob or Bob. For example, it might be sufficient that the token mental representation functions to represent a witch that has been blighting local farm animals, and that this mental representation shares its causal origin with that of Hob's witch representation (the same newspaper article). Or, following Cumming (2013), we might maintain that the subject of the report and Hob share a witch concept whenever their private mental representations are paired one-to-one by some communicative chain made up of pairs of individuals mutually disposed to "coordinate" their private mental representations by means of a common public linguistic symbol (say, some name for the witch about which they are attempting to communicate).
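The contrast between (60) and (61) can likewise be sketched as two different ways for private mental representations to count as tokens of the same concept. The two routes below (sameness of representational function, and shared distal causal origin) are my simplified stand-ins for the conditions discussed above and for Cumming-style coordination chains; the field names and the rule are assumptions made purely for illustration.

# An illustrative sketch (not a definitive account): two ways a pair of private
# mental representations might count as tokens of the same public individual
# concept. In cases like (60), sameness of representational function is required;
# in cases like (61), a shared distal causal origin (the same newspaper article)
# can suffice. A fuller model might replace the second route with a chain of
# pairwise coordination in Cumming's sense.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MentalRepresentation:
    owner: str
    function: str                 # what the representation functions to represent
    causal_origin: Optional[str]  # e.g. the newspaper article it descends from, if any

def same_concept(a: MentalRepresentation, b: MentalRepresentation) -> bool:
    # Route 1: both representations have the same representational function
    # (e.g. both function to represent Smith's murderer), as in (60).
    if a.function == b.function:
        return True
    # Route 2: both descend from the same distal causal origin
    # (e.g. the same newspaper article about the witch), as in (61).
    return a.causal_origin is not None and a.causal_origin == b.causal_origin

speakers_witch_rep = MentalRepresentation("speaker", "the witch who blighted Bob's mare", "article-17")
nobs_witch_rep = MentalRepresentation("Nob", "a witch blighting local farm animals", "article-17")
peters_rep = MentalRepresentation("Peters", "Smith's murderer", None)
antecedent_rep = MentalRepresentation("speaker", "Smith's murderer", None)

print(same_concept(speakers_witch_rep, nobs_witch_rep))  # True via shared causal origin
print(same_concept(peters_rep, antecedent_rep))          # True via shared function

Placing the case-by-case variation in same_concept rather than in bel is the formal counterpart of the strategy summarized in the next paragraph: the semantics of the attitude verb stays uniform, and the differences live in the tokens-of-the-same-concept relation.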
In a word, the strategy here is to capture the variation in whether the background conditions seem truth-conditionally relevant to the reported attitude not by building that variation into the semantics, but by building it into the metaphysics of the tokens-of-the-same-concept relation.

LITERATURE CITED

Barker SJ. 1997. E-type pronouns, DRT, dynamic semantics, and the quantifier/variable-binding model. Linguistics and Philosophy 20:195–228
Barwise J. 1979. On branching quantifiers in English. Journal of Philosophical Logic 8:47–80
Barwise J, Perry J. 1983. Situations and Attitudes. Cambridge, MA: MIT Press
Berger A. 2002. Terms and Truth. Cambridge, MA: MIT Press/Bradford
Carnap R. 1947. Meaning and Necessity. Chicago: Univ. Chicago Press
Chierchia G. 1995. Dynamics of Meaning: Anaphora, Presupposition, and the Theory of Grammar. Chicago: Univ. Chicago Press
Cooper R. 1979. The interpretation of pronouns. In Syntax and Semantics, vol. 10, ed. F Heny, H Schnelle. New York: Academic Press
Copi IM. 1954. Symbolic Logic. New York: Macmillan. 1st ed.
Davies M. 1981. Meaning, Quantification, and Necessity. London: Routledge and Kegan Paul
Donnellan K. 1966. Reference and definite descriptions. Philosophical Review 75:281–304
Elbourne P. 2005. Situations and Individuals. Cambridge, MA: MIT Press
Evans G. 1977. Pronouns, quantifiers, and relative clauses (I). Canadian Journal of Philosophy 7(3):467–536
Fine K. 1983. A defense of arbitrary objects. Proceedings of the Aristotelian Society 57(Suppl. vol.):55–77
Fine K. 1985a. Natural deduction and arbitrary objects. Journal of Philosophical Logic 14:57–107
Fine K. 1985b. Reasoning with Arbitrary Objects. Oxford: Basil Blackwell
Fine K. 2007. Semantic Relationism. Malden, MA: Blackwell
Geach PT. 1962. Reference and Generality. Ithaca, NY: Cornell Univ. Press
Geach PT. 1967. Intentional identity. Journal of Philosophy 64:627–32
Gentzen G. 1934. Untersuchungen über das logische Schließen. Mathematische Zeitschrift 39:176–210, 405–31
Geurts B. 2002. Donkey business. Linguistics and Philosophy 25:129–56
Grinder J, Postal P. 1971. Missing antecedents. Linguistic Inquiry 2:269–312
Groenendijk J, Stokhof M. 1991. Dynamic predicate logic. Linguistics and Philosophy 14:39–100
Hankamer J, Sag I. 1976. Deep and surface anaphora. Linguistic Inquiry 7:391–428
Heim I. 1982. The semantics of definite and indefinite noun phrases. PhD Thesis, Univ. Mass. Amherst
Heim I. 1983. File change semantics and the familiarity theory of definiteness. In Meaning, Use and Interpretation of Language, ed. C Schwarze, A von Stechow, pp. 126–89. Berlin: De Gruyter
Heim I. 1990. E-type pronouns and donkey anaphora. Linguistics and Philosophy 13:137–78
Heim I, Kratzer A. 1998. Semantics in Generative Grammar. Malden, MA: Wiley-Blackwell
Henkin L. 1959. Some remarks on infinitely long formulas. In Infinitistic Methods: Proceedings of the Symposium on Foundations of Mathematics, Warsaw, 2–9 September 1959, pp. 167–83. Warsaw, Poland: Pergamon
Hintikka J. 1973. Quantifiers vs. quantification theory. Dialectica 27:329–58
Kalish D. 1967. Review: Irving M. Copi, Symbolic Logic. Journal of Symbolic Logic 32:254
Kamp H. 1981. A theory of truth and semantic representation. In Formal Methods in the Study of Language, ed. AG Groenendijk, TMV Janssen, MBJ Stokhof, pp. 227–322. Amsterdam: Mathematical Centre
Kamp H, Reyle U. 1993. From Discourse to Logic. Dordrecht, Netherlands: Kluwer
Kanazawa M. 1994. Weak vs. strong readings of donkey sentences in a dynamic setting. Linguistics and Philosophy 17(2):109–58
Karttunen L. 1976. Discourse referents. In Syntax and Semantics: Notes from the Linguistic Underground, vol. 7, ed. J McCawley. New York: Academic Press
Kearns S, Magidor O. 2012. Semantic sovereignty. Philosophy and Phenomenological Research 85(2):322–50
King J. 1987. Pronouns, descriptions, and the semantics of discourse. Philosophical Studies 51:341–63
King J. 1991. Instantial terms, anaphora, and arbitrary objects. Philosophical Studies 61:239–65
King J. 1993. Anaphora and operators. Philosophical Perspectives 8:221–50
King J. 1994. Intentional identity generalized. Journal of Philosophical Logic 22:61–93
King J. 2004. Context dependent quantifiers and donkey anaphora. In New Essays in the Philosophy of Language, Supplement to the Canadian Journal of Philosophy, vol. 30, ed. M Ezcurdia, R Stainton, C Viger, pp. 97–127. Calgary, Canada: Univ. Calgary Press
King J. 2007. The Nature and Structure of Content. Oxford: Oxford Univ. Press
King J, Soames S, Speaks J. 2014. New Thinking About Propositions. Oxford: Oxford Univ. Press
Kratzer A. 1977. What 'must' and 'can' must and can mean. Linguistics and Philosophy 1:337–55
Kratzer A. 1981. The notional category of modality. In Words, Worlds, and Contexts: New Approaches in Word Semantics, ed. H-J Eikmeyer, H Rieser, pp. 38–74. Berlin: De Gruyter
Kratzer A. 1989. An investigation into the lumps of thought. Linguistics and Philosophy 12:607–53
Kratzer A. 2014. Situations in natural language semantics. Stanford Encyclopedia of Philosophy, Spring 2016 edition, ed. EN Zalta, published Jan. 20, 2014. https://plato.stanford.edu/entries/situations-semantics/
Kripke S. 1977. Speaker's reference and semantic reference. Midwest Studies in Philosophy 2:255–76
Lewis D. 1975. Adverbs of quantification. In Formal Semantics, ed. E Keenan, pp. 178–88. Dordrecht, Netherlands: Kluwer
Lewis D. 1979. Scorekeeping in a language game. Journal of Philosophical Logic 8:339–59
Ludlow P. 1994. Conditionals, events, and unbound pronouns. Lingua e Stile 29(2):165–83
Ludlow P, Neale S. 1991. Indefinite descriptions: in defense of Russell. Linguistics and Philosophy 14:171–202
Mackie J. 1958. The rules of natural deduction. Analysis 19:27–35
Magidor O, Breckenridge W. 2012. Arbitrary reference. Philosophical Studies 158:377–400
Manley D, Hawthorne J. 2012. The Reference Book. Oxford: Oxford Univ. Press
Martino E. 2001. Arbitrary reference in mathematical reasoning. Topoi 20:65–77
McKay T. 2006. Plural Predication. Oxford: Oxford Univ. Press
McKinsey M. 1986. Mental anaphora. Synthese 66:159–75
Neale S. 1990. Descriptions. Cambridge, MA: MIT Press
Parsons T. 1978. Pronouns as paraphrases. Unpublished manuscript
Pettigrew R. 2008. Platonism and Aristotelianism in mathematics. Philosophia Mathematica 16(3):310–32
Price R. 1962. Arbitrary individuals and natural deduction. Analysis 22:94–96
Quine WVO. 1950. Methods of Logic. New York: Holt
Quine WVO. 1960. Word and Object. Cambridge, MA: MIT Press
Quine WVO. 1977. Intensions revisited. Midwest Studies in Philosophy 2:5–11
Recanati F. 2012. Mental Files. Oxford: Oxford Univ. Press
Rescher N. 1958. Can there be random individuals? Analysis 18:114–17
Roberts C. 1989. Modal subordination and pronominal anaphora in discourse. Linguistics and Philosophy 12:683–721
Roberts C. 1996. Anaphora in intensional contexts. In The Handbook of Contemporary Semantic Theory, ed. S Lappin, pp. 215–46. Oxford: Blackwell
Roberts C. 2003. Uniqueness in definite noun phrases. Linguistics and Philosophy 26:287–350
Roberts C. 2005. Pronouns as definites. In Descriptions and Beyond, ed. M Reimer, A Bezuidenhout, pp. 503–43. Oxford: Oxford Univ. Press
Ross J. 1967. Constraints on variables in syntax. PhD Thesis, Mass. Inst. Tech.
Salmon N. 2006a. A theory of bondage. Philosophical Review 115:415–48
Salmon N. 2006b. Pronouns as variables. Philosophy and Phenomenological Research 72:656–65
Schein B. 1993. Plurals and Events. Cambridge, MA: MIT Press
Schein B. 2003. Adverbial, descriptive reciprocals. In Philosophical Perspectives, vol. 17, Language and Philosophical Linguistics, ed. J Hawthorne, D Zimmerman, pp. 333–67. Malden, MA: Wiley-Blackwell
Sells P. 1985. Restrictive and non-restrictive modification. CSLI Report 85-28, CSLI, Stanford
Shapiro S. 2012. An "i" for an i: singular terms, uniqueness, and reference. Review of Symbolic Logic 5(3):380–415
Shapiro SC. 2004. A logic of arbitrary and indefinite objects. In Principles of Knowledge Representation and Reasoning: Proceedings of the Ninth International Conference (KR2004), ed. D Dubois, C Welty, M Williams. Menlo Park, CA: AAAI Press
Sher G. 1990. Ways of branching quantifiers. Linguistics and Philosophy 13:393–422
Sher G. 1997. Partially-ordered (branching) generalized quantifiers: a general definition. Journal of Philosophical Logic 26:1–43
Soames S. 1987. Direct reference, propositional attitudes, and semantic content. Philosophical Topics 15:47–87
Soames S. 1989. Review of Gareth Evans' Collected Papers. Journal of Philosophy 86:141–56
Soames S. 2006. Descriptive names v. descriptive anaphora. Philosophy and Phenomenological Research 72(3):665–73
Soames S. 2015. Rethinking Language, Mind, and Meaning. Princeton, NJ: Princeton Univ. Press
Sommers F. 1982. The Logic of Natural Language. Oxford: Clarendon
Strawson PF. 1952. Introduction to Logical Theory. London: Methuen
Szabolcsi A. 2003. Quantification. Cambridge: Cambridge Univ. Press
Tennant N. 1983. A defense of arbitrary objects. Proceedings of the Aristotelian Society 57(Suppl. vol.):79–89
Thomason RH. 1990. Accommodation, meaning, and implicature: interdisciplinary foundations for pragmatics. In Intentions in Communication, ed. P Cohen, J Morgan, M Pollack, pp. 325–63. Cambridge, MA: MIT Press/Bradford
Williamson T. 1994. Vagueness. London: Routledge
Williamson T. 1996. What makes it a heap? Erkenntnis 44:327–39
Wilson G. 1984. Pronouns and pronominal descriptions: a new semantical 'category'. Philosophical Studies 45:1–30
Zweig E. 2013. When the donkey lost its fleas: persistence, minimal situations, and embedded quantifiers. Natural Language Semantics 14:283–96
Abstract
This dissertation explores two closely related problems in the philosophy of logic and language. The first problem concerns the meaning of so-called instantial terms, as when one stipulates “let p be a prime number” in a mathematical proof for the purposes of proving some conclusion about all primes. Despite their ubiquity, it is not clear what such terms mean. Are they quantifiers, as some have maintained, or are they referring terms? This dissertation articulates an alternative to previous referential and quantificational approaches to the problem. On the view defended, instantial terms express concepts which are of a cognitive kind with those whose function is to refer, and so share syntactic, inferential, and cognitive kinships with genuine referring terms. But unlike referring terms, the concepts expressed by instantial terms designate a range of values. In this way we can explain why formulas containing instantial terms have general rather than singular truth-conditions, while phenomenologically, instantial reasoning purports to be about particulars. The second half of the dissertation explores whether this framework can be extended to account for donkey pronouns or unbound anaphoric pronouns. Chapter 2 argues against the leading accounts of donkey pronouns in the literature. Chapter 3 modifies the framework developed in chapter 1 for instantial terms to account for the more complex linguistic behavior of donkey pronouns in natural language. The resulting modified view presents a unified semantic analysis of instantial terms and donkey pronouns that accounts for a broad range of linguistic data.