Compositionality. “The meaning of the whole depends on (and only on) the meanings of the parts and the way they are combined.”


Compositionality


ARGUMENTS FROM COMPOSITIONALITY

What’s at Stake? Before we consider arguments for or against compositionality, let’s look at what’s at stake. At various points, compositionality has been used to argue against all of the theories of meaning we have considered in class.

Vs. the Idea Theory According to the idea theory, the meaning of a word is an idea, where ideas are construed as something like “little colored pictures in the mind.” Let’s consider an example: what’s your idea of a pet?

Idea of a Pet

OK, now what’s your idea of a fish?


Idea of Fish

Now try to combine those ideas into the idea of “pet fish.”

Vs. the Idea Theory That clearly doesn’t work. Notice that we cannot say that in the context of “____ fish,” “pet” means something other than what it normally means. That would make the meaning of “pet” non-local (dependent on surrounding context), and that’s not allowed on any compositional theory. Conclusion: the idea theory violates the principle of compositionality.

Vs. Verificationism Let’s suppose that the meaning of an expression is the set of experiences that it probably causes you to have. A cow will probably cause you to hear cow-sounds, so cow-sounds are part of the meaning of “cow.” In other words, the probability of cow-sounds is increased by the presence of cows.

Color    Animal   Threat Level
Brown    Dog      Safe
Brown    Ant      Safe
Brown    Pig      Safe
Brown    Goat     Safe
Brown    Cow      DANGER!
Red      Cow      Safe
White    Cow      Safe
Black    Cow      Safe
Orange   Cow      Safe

Cows are Safe Let’s suppose that the vast majority of cows are safe. So the meaning of “cow” does not include the experience of bodily harm, because encountering a cow lowers, rather than raises, the chances that you’ll experience bodily harm.

Brown Things are Safe Let’s also suppose that brown things are in general safe. So again, “brown” doesn’t have the experience of bodily harm as part of its meaning either. You’re less likely to experience this around brown things than around other-colored things.

Brown Cows are Dangerous However, suppose that the small number of dangerous cows and the small number of dangerous brown things are all brown cows. Thus the meaning of “brown cow” contains the experience of bodily harm. That experience confirms the presence of brown cows.

Brown Cows are Dangerous But how is this possible? Neither the set of experiences that is the meaning of “brown” nor the set of experiences that is the meaning of “cow” contains the experience of bodily harm.

Brown Cows are Dangerous The meaning of “brown cow” thus seems to depend on something other than the meanings of its parts, “brown” and “cow”: verificationism violates the principle of compositionality.
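The probabilistic structure of this counterexample can be checked with a toy calculation. The population counts below are invented for illustration (they are not from the lecture); all that matters is the pattern: each conjunct lowers the probability of the harm-experience while the conjunction raises it.

```python
# Toy population of encounters (invented counts, for illustration only).
# Each entry: (color, kind, dangerous?)
population = (
    [("brown", "cow", True)] * 5 +      # the dangerous brown cows
    [("brown", "cow", False)] * 5 +
    [("brown", "other", False)] * 90 +  # other brown things: safe
    [("other", "cow", False)] * 90 +    # other cows: safe
    [("other", "other", True)] * 20 +   # some danger elsewhere in the world
    [("other", "other", False)] * 80
)

def p_danger(test):
    """Conditional probability of danger, given the test, over the toy data."""
    pool = [e for e in population if test(e)]
    return sum(1 for (_, _, danger) in pool if danger) / len(pool)

base      = p_danger(lambda e: True)                               # ≈ 0.086
brown     = p_danger(lambda e: e[0] == "brown")                    # 0.05
cow       = p_danger(lambda e: e[1] == "cow")                      # 0.05
brown_cow = p_danger(lambda e: e[0] == "brown" and e[1] == "cow")  # 0.5

# "brown" and "cow" each LOWER the chance of harm relative to baseline,
# yet "brown cow" RAISES it: the harm-experience belongs to the meaning
# of the whole but to the meaning of neither part.
```

So on the verificationist picture, the meaning of “brown cow” contains an experience that appears in the meaning of neither “brown” nor “cow.”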

The Causal-Historical Theory Let’s call that baby ‘Feynman’ Feynman

The Causal-Historical Theory Let’s call that baby ‘Feynman’ Feynman Historical Chain of Transmission

The Causal-Historical Theory Denotation Feynman

No Connotations The causal-historical theory, unlike the other theories we’ve considered so far, does not use a connotation (idea, experience, definition) to determine the denotation. Denotations are determined by non-mental facts.

Direct Reference Theory The resulting view is known as direct reference theory. This is just the claim that names and natural kind terms “directly” refer to their denotations, and that connotations aren’t involved in mediating the process.

Direct Reference Theory The meaning of a name, for example, is the person named. There is nothing more to meaning than reference/denotation.

Semantic Closure TRUE: Lois Lane believes Superman can fly. FALSE: Lois Lane believes Clark Kent can fly.

Vs. Direct Reference But notice that the two sentences differ only in that parts with the same meaning (reference) have been swapped: ‘Superman’ and ‘Clark Kent.’ So it seems that if compositionality is true, then the direct reference theory is false.

Vs. the Use Theory Does knowing how word W1 is used and how W2 is used suffice for knowing how [W1 W2] is used? This seems unlikely.

Imagine teaching a Martian how the word ‘black’ is used. We might show it color samples or something.

Similarly we might teach the Martian how ‘people’ is used, by giving examples.

Black Person?

Fundamental Acceptance Property Recall that for Horwich, the fundamental acceptance property underlying all uses of ‘black’ is to apply ‘black’ to surfaces that are clearly black. Suppose we taught a Martian this. And suppose we taught a Martian how to apply ‘human’ or ‘person’ (distinguishing us from other apes). Could the Martian work out how to use ‘black person’? I think not.

Interests in Exaggeration We (humans) have a vested interest in exaggerating differences in skin tone in order to effect a certain constructed social order.

Interests in Exaggeration Unless you know about this proclivity to exaggerate (which doesn’t affect normal color ascriptions), you can’t predict ascriptions of the form ‘COLOR + person.’

Interests in Exaggeration Using ‘black’ (or ‘red’ or ‘yellow’) for a color and using ‘person’ for a certain sort of animal doesn’t determine how to use the ‘COLOR + person’ form.

Vs. The Use Theory The point is that complex expressions can acquire uses that aren’t determined by the uses of their parts. Consider the English insult “Mama’s boy.”

Vs. The Use Theory If you’re a child, and a male, it’s not insulting to be called a ‘boy.’ Nor is ‘Mama’ an insult. But put them together, and that’s insulting, even if you are a Mama’s boy. Just learning the use of the parts won’t tell a second-language learner of English that the whole is insulting.

ARGUMENTS FOR COMPOSITIONALITY

Argument from Understanding The principal argument in favor of compositionality is an inference to the best explanation: we can understand a potential infinitude of novel utterances, and compositionality is purportedly the best explanation of that ability.

Premise 1: Productivity English is productive: There are infinitely many grammatical, meaningful sentences of English possessing an infinite number of distinct meanings.

Premise 2: Finitude Human beings have finite minds. In particular, they can only store or remember a finite amount of information. If the mind = the brain, the brain is finite. There is a finite amount of time that children have to learn language.

Premise 3: The Only Game in Town It is impossible for beings with finite minds to learn/understand productive languages unless those languages have compositional semantics.

Conclusion Since some humans do in fact learn/understand English, it must have a compositional semantics.
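The productivity premise can be illustrated with a one-rule sketch. The embedding frame ‘John knows that …’ is my example, not the lecture’s; the point is that a single finite rule already generates infinitely many distinct grammatical sentences with distinct meanings.

```python
def embed(sentence, depth):
    """Apply the finite rule  S -> 'John knows that' + S  depth times.
    Each depth yields a new grammatical sentence with a new meaning."""
    for _ in range(depth):
        sentence = "John knows that " + sentence
    return sentence

print(embed("it is raining", 1))  # John knows that it is raining
print(embed("it is raining", 3))
# No finite list of memorized sentence-meanings covers every depth,
# yet a speaker who knows the rule understands them all.
```

A finite mind can store the rule and the lexicon, but not an infinite list of sentence-meaning pairs; that is exactly the gap compositionality is supposed to bridge.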

Systematicity There is another argument for compositionality. It starts with the observation that English is systematic. Suppose that E1, E2, E3, and E4 are all English expressions.

When This Is True: E1 can combine with E2 to form a grammatical sentence [E1 E2]. E3 can combine with E4 to form a grammatical sentence [E3 E4]. E1 is of the same grammatical category as E3. E2 is of the same grammatical category as E4.

When This Is True: Example: ‘Dogs’ can combine with ‘chase cars’ to form the sentence ‘Dogs chase cars.’ ‘Cats’ can combine with ‘eat mice’ to form the sentence ‘Cats eat mice.’ ‘Dogs’ is of the same grammatical category as ‘cats.’ (Plural Noun Phrases) ‘Chase cars’ is of the same grammatical category as ‘eat mice.’ (Verb Phrases)

Claim 1 Anyone who can understand [E1 E2] and [E3 E4] can also understand [E1 E4] and [E3 E2], when the latter are well-formed. Example: Anyone who can understand ‘dogs chase cars’ and ‘cats eat mice’ can also understand ‘dogs eat mice’ and ‘cats chase cars.’

Claim 2 The meanings of [E1 E2] and [E3 E4] are predictably related to the meanings of [E1 E4] and [E3 E2], when the latter are well-formed. Example: ‘dogs chase cars’ has a meaning that is predictably related to both ‘dogs eat mice’ and ‘cats chase cars.’

The Argument from Claim 1 If English is compositional, then understanding ‘dogs chase cars’ and ‘cats eat mice’ involves (a) knowing the meanings of all the words in the two sentences and (b) being able to recognize the syntactic structure of both sentences.

The Argument from Claim 1 Furthermore, if English is compositional, such knowledge and abilities suffice to understand ‘dogs eat mice’ and ‘cats chase cars.’ These sentences are composed of the same morphemes, put together in the same syntactic structures.

The Argument from Claim 1 Therefore, if English is compositional, Claim 1 is true: anyone who can understand ‘dogs chase cars’ and ‘cats eat mice’ can also understand ‘dogs eat mice’ and ‘cats chase cars.’ The best explanation for why Claim 1 is true of English is that English is in fact compositional.
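A minimal compositional interpreter makes this explanation concrete. The mini-lexicon and the predicate notation below are my own sketch, not the lecture’s: knowing the word meanings and the one combination rule that handles ‘dogs chase cars’ and ‘cats eat mice’ automatically settles the recombined sentences.

```python
# Finite lexicon: each word's meaning, stated once.
lexicon = {
    "dogs": "DOG", "cats": "CAT", "cars": "CAR", "mice": "MOUSE",
    "chase": "CHASE", "eat": "EAT",
}

def interpret(subject, verb, obj):
    """One combination rule: [S subj [VP verb obj]] means VERB(SUBJ, OBJ)."""
    return f"{lexicon[verb]}({lexicon[subject]}, {lexicon[obj]})"

print(interpret("dogs", "chase", "cars"))  # CHASE(DOG, CAR)
print(interpret("cats", "eat", "mice"))    # EAT(CAT, MOUSE)
# The recombined sentence requires no new knowledge at all:
print(interpret("dogs", "eat", "mice"))    # EAT(DOG, MOUSE)
```

Nothing beyond the original lexicon entries and the single rule is consulted for ‘dogs eat mice’; that is why, on a compositional theory, understanding the first two sentences suffices for understanding the recombinations.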

The Argument from Claim 2 If English is compositional, then the meanings of English expressions are completely determined by (a) their syntactic structure and (b) the meanings of their words.

The Argument from Claim 2 Since the expressions ‘dogs chase cars’ and ‘dogs eat mice’ partially overlap in their morphemes, they partially overlap in what determines their meanings, if compositionality is true. Thus the fact that they have related meanings is some evidence that English is in fact compositional.

ARGUMENTS AGAINST COMPOSITIONALITY

Against Locality As we saw before, compositionality is local. In the expression [old [brown dog]] what “brown dog” means cannot depend on what “old” means, even though that’s also part of the expression containing “brown dog.”

Donkey Sentences Normally, sentences S(‘a donkey’) are made true by the existence of a donkey who satisfies S(x). For example: A donkey pooped on the train. John punched a donkey.

Geach Sentence However, consider the following sentence (due to Peter Geach): Every farmer who owns a donkey beats it. This sentence is (emphatically!) not made true by a donkey who satisfies “Every farmer who owns x beats x.”

Universal Interpretation It means something more like “For every farmer F and every donkey D, if F owns D, then F beats D.” Notice, in particular, that this sentence talks about every donkey, not some donkey. But that’s not normally how ‘a donkey’ works, as we saw.

Non-Local Donkeys In normal contexts ‘…a donkey…’ means AT LEAST ONE DONKEY, whereas in contexts like ‘if …a donkey…, then …’ things are different. Here, ‘…a donkey…’ means EVERY DONKEY. You can calculate what ‘a donkey’ means in each context, but its meaning is non-local and hence non-compositional.
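The contrast can be made explicit in first-order logic (a standard textbook formalization, not taken from the slides):

```latex
% Normal indefinite: S('a donkey') has an existential reading
\exists x\,(\mathrm{donkey}(x) \land S(x))

% Geach sentence: 'Every farmer who owns a donkey beats it'
\forall f\,\forall d\,\bigl((\mathrm{farmer}(f) \land \mathrm{donkey}(d)
  \land \mathrm{owns}(f,d)) \rightarrow \mathrm{beats}(f,d)\bigr)
```

The indefinite ‘a donkey’ contributes an existential quantifier in the first schema but ends up with universal force in the second, depending on the syntactic environment it sits in.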

Against Semantic Closure Compositionality includes semantic closure: the meanings of expressions depend only on the meanings of their parts and how they’re combined, not on anything other than those meanings.

Quotation In English we use quote marks in lots of different ways: Pure: “Cat” has 3 letters. Direct: John said “it’s raining.” Scare: Mommy and Daddy are having “special time.” Greengrocer: “Fresh” fruit!

Pure Quotation Pure quotation is an interesting phenomenon. Consider that “bachelor” and “unmarried man” are synonymous. The substitutability criterion (compositionality) says: “For any sentence S(E) containing some expression E as a part, if E and E* have the same meaning, then S(E) and S(E*) have the same meaning.”

A Counterexample So let E = “bachelor” E* = “unmarried man” S(E) = “‘bachelor’ used to mean squire.” S(E*) = “‘unmarried man’ used to mean squire.” The substitutability criterion fails!
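The failure can be mimicked mechanically. The letter-counting sentence below is my stand-in for the lecture’s ‘used to mean squire’ example: blind substitution of synonyms preserves truth in ordinary contexts, but inside pure quotation the truth of the sentence tracks the word itself, not its meaning.

```python
# Two expressions the lecture treats as synonymous:
e, e_star = "bachelor", "unmarried man"

# Quotational context: "'bachelor' has eight letters" is about the WORD,
# so substituting the synonym can flip the truth value.
claim_e      = len(e) == 8       # "'bachelor' has eight letters"      -> True
claim_e_star = len(e_star) == 8  # "'unmarried man' has eight letters" -> False

print(claim_e, claim_e_star)  # True False
```

Substituting a synonym changed the truth value, so the substitutability criterion fails in quotational contexts, exactly as the slide’s ‘squire’ example shows.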

No Semantic Closure Intuitively, the failure of the substitutability of synonyms here arises from the fact that the meanings of “‘bachelor’” and of “‘unmarried man’” do not depend on the meanings of “bachelor” and “unmarried man,” respectively. Instead, they depend on the words themselves.

No Semantic Closure This is unproblematic: we can still calculate the meaning of any pure quotation, even though semantic closure fails and pure quotation isn’t compositional.

The Lesson The important lesson is that if locality or semantic closure failed in these particular ways, it wouldn’t matter as far as the understandability argument goes. We could still calculate the meanings of donkey sentences and pure quotations.

Computability What I’d suggest then is that the real constraint is not that the meaning of any expression is determined by and only by the meanings of its parts and the way they’re combined.

Computability Rather, it’s that the meaning of sentences (and not necessarily their parts) is computable from the sentences themselves and the meanings of the simple parts.

Rejecting Semantic Closure This suggestion rejects semantic closure in favor of “semantic and syntactic closure”: both the words themselves and their meanings can determine the meaning of the whole.

Limited Locality It also encapsulates a limited form of locality: a sentence’s meaning can’t depend on the meanings of things outside the sentence, but parts of a sentence can depend for their meanings on the meanings of other parts of the same sentence (e.g. in donkey sentences).

Computability What computability requires is that we actually be able to work out the meanings of sentences from the meanings of their parts and how they’re combined, not that their meanings merely depend on or co-vary with the meanings of their parts and how they’re combined.

DOES IT MATTER?

Does It Matter to Our Arguments? While I think this is right, in some ways it doesn’t even matter. Why? Because locality and semantic closure didn’t really enter into our compositionality-flavored objections earlier. For instance, you can’t calculate the pet fish idea from the pet idea and the fish idea. And you can’t calculate the use of “mama’s boy” from the use of “mama” and “’s” and “boy.”

Except! Well, there’s one exception. Our argument against the Direct Reference theorist crucially relied on semantic closure: 1. Lois Lane believes that Superman can fly. 2. Lois Lane believes that Clark Kent can fly. So the Direct Reference theorist has the resources to respond by rejecting compositionality as a constraint and instead endorsing computability.