Meanings as Instructions for how to Build Concepts
Paul M. Pietroski
University of Maryland, Dept. of Linguistics, Dept. of Philosophy


In previous episodes…

What are words, concepts, and grammars? How are they related, and how are they related to whatever makes humans distinctive? Did a relatively small change in our ancestors lead to both the “linguistic metamorphosis” that human infants undergo and significant cognitive differences between us and other primates? Maybe we’re cognitively special because we’re linguistically special, and we’re linguistically special because we acquire words. (After all, kids are really good at acquiring words.) Humans acquire words, concepts, and grammars.

Language Acquisition Device in a Mature State (an I-Language):
[Diagram: a GRAMMAR plus a LEXICON generate SEMs; initial concepts, other acquired concepts, and introduced concepts feed the lexicon]
What kinds of concepts do SEMs interface with?

Two Pictures of Lexicalization
[Diagram contrasting two pictures: on the first, a perceptible signal is paired with a concept of adicity n, yielding a word of the same adicity n (plus further lexical information); on the second, the concept of adicity n is used to introduce a concept of a different adicity k, and the word has adicity k]

Puzzles for the idea that Words simply Label Concepts

Apparent mismatches between how words combine (grammatical form) and how concepts combine (logical form):

KICK(x₁, x₂)          The baby kicked
RIDE(x₁, x₂)          Can you give me a ride?
BETWEEN(x₁, x₂, x₃)   I am between him and her
BIGGER(x₁, x₂)        That is bigger than that
FATHER(…?…)           Fathers father
MORTAL(…?…)           Socrates is mortal / A mortal wound is fatal

Lexicalization as Monadic-Concept-Abstraction
[Diagram: a perceptible signal is paired with a concept of adicity n (before lexicalization); the resulting word has a reduced adicity, with the dyadic KICK(x₁, x₂) giving rise to the monadic KICK(event), plus further lexical information]

A Possible Mind

KICK(x₁, x₂)                        a prelexical concept
KICK(x₁, x₂) ≡df for some _, KICK(_, x₁, x₂)
AGENT(_, x₁)                        generic “action” concepts
PATIENT(_, x₂)
KICK(_, x₁, x₂) ≡df AGENT(_, x₁) & KICK(_) & PATIENT(_, x₂)
CAESAR, PF:‘Caesar’                 mental labels for a person and a sound
Called(CAESAR, PF:‘Caesar’)         a thought about what the person is called
Called(_, PF:‘Caesar’) ≡df CAESARED(_)

KICK(_) introduced via: KICK(_, x₁, x₂), with AGENT(_, x₁) and PATIENT(_, x₂)
KICK(_, x₁, x₂) introduced via: KICK(x₁, x₂), for some _
CAESARED(_) introduced via: Called(CAESAR, PF:‘Caesar’)
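To make the decomposition concrete, here is a minimal executable sketch in Python. It is illustrative only, not Pietroski's formalism: the Event class, the toy domain, and all names are inventions for this example. It shows how a dyadic concept KICK(x₁, x₂) can be recovered from a monadic event concept plus generic thematic concepts.

```python
# A toy model of the introductions above: the dyadic KICK(x1, x2) is
# reintroduced via the monadic KICK(_) of events, together with the
# generic "action" concepts AGENT(_, x1) and PATIENT(_, x2).
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    kind: str     # e.g. "kick"
    agent: str    # who did it
    patient: str  # who/what it was done to

KICK_E  = lambda e: e.kind == "kick"     # KICK(_): monadic concept of events
AGENT   = lambda e, x: e.agent == x      # AGENT(_, x1)
PATIENT = lambda e, y: e.patient == y    # PATIENT(_, x2)

def KICK_XY(domain, x, y):
    """KICK(x1, x2) iff for some e: AGENT(e, x1) & KICK(e) & PATIENT(e, x2)."""
    return any(AGENT(e, x) and KICK_E(e) and PATIENT(e, y) for e in domain)

domain = [Event("kick", "Caesar", "the ball")]
assert KICK_XY(domain, "Caesar", "the ball")      # Caesar kicked the ball
assert not KICK_XY(domain, "the ball", "Caesar")  # thematic roles matter
```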

A Possible Mind

Two Roles for Words on this View

(1) In lexicalization: acquiring a (spoken) word is a process of pairing a sound with a concept—the concept lexicalized—storing that sound/concept pair in memory, and then using that concept to introduce a concept that can be combined with others via certain (limited) composition operations.

sound-of-‘kick’/KICK(x₁, x₂)  ⇒  sound-of-‘kick’/KICK(x₁, x₂)/KICK(_)

At least for “open class” lexical items (nouns, verbs, adjectives/adverbs), the introduced concepts are monadic and conjoinable.

(2) In subsequent comprehension: a word is an instruction to fetch an introduced concept from the relevant address in memory.

Meanings as Instructions for how to build (Conjunctive) Concepts

The meaning (SEM) of [ride_V fast_A]_V is the following instruction:
CONJOIN[execute:SEM(‘ride’), execute:SEM(‘fast’)]
Executing this instruction yields a concept like RIDE(_) & FAST(_)

The meaning (SEM) of [ride_V horses_N]_V is the following instruction:
CONJOIN[execute:SEM(‘ride’), DirectObject:SEM(‘horses’)]
where DirectObject:SEM(‘horses’) unpacks as Thematize-execute:SEM(‘horses’)
Executing this instruction would yield a concept like
RIDE(_) & ∃[THEME(_, _) & HORSES(_)]
RIDE(_) & ∃[THEME(_, _) & HORSE(_) & PLURAL(_)]

Meanings as Instructions for how to build (Conjunctive) Concepts

The meaning of [[ride_V horses_N]_V fast_A]_V is the following instruction:
CONJOIN[execute:SEM([ride_V horses_N]_V), execute:SEM(fast_A)]
Executing this instruction yields a concept like
RIDE(_) & ∃[THEME(_, _) & HORSES(_)] & FAST(_)

The meaning of [ride_V [fast_A horses_N]_N]_V is the following instruction:
CONJOIN[execute:SEM(‘ride’), DirectObject:SEM([fast_A horses_N]_N)]
Executing this instruction yields a concept like
RIDE(_) & ∃[THEME(_, _) & FAST(_) & HORSES(_)]
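A toy interpreter can show how such instructions might be executed. This is a sketch under stated assumptions: the lexicon, the event representation, and the names fetch, conjoin, and thematize are hypothetical stand-ins for the slides' execute, CONJOIN, and Thematize-execute operations.

```python
# SEMs as instructions for building conjunctive monadic concepts of events.

LEXICON = {                         # addresses in memory -> introduced concepts
    "ride":   lambda e: e.get("kind") == "ride",   # RIDE(_)
    "fast":   lambda e: e.get("fast", False),      # FAST(_)
    "horses": lambda x: x.get("kind") == "horse",  # HORSES(_)
}

def fetch(word):                    # a word: an instruction to fetch a concept
    return LEXICON[word]

def conjoin(c1, c2):                # CONJOIN: dumb but implementable
    return lambda e: c1(e) and c2(e)

def thematize(c):                   # DirectObject: ∃[THEME(e, _) & C(_)]
    return lambda e: any(c(x) for x in e.get("themes", []))

# SEM([[ride_V horses_N]_V fast_A]_V):
ride_horses_fast = conjoin(
    conjoin(fetch("ride"), thematize(fetch("horses"))),
    fetch("fast"),
)

event = {"kind": "ride", "fast": True, "themes": [{"kind": "horse"}]}
assert ride_horses_fast(event)  # RIDE(_) & ∃[THEME(_,_) & HORSES(_)] & FAST(_)
```

Note that the two bracketings from the slide come apart here exactly as intended: conjoining fast outside thematize modifies the riding, while conjoining it inside would modify the horses.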

Meanings as Instructions for how to build (Conjunctive) Concepts

On this view, meanings are neither extensions nor concepts.

Familiar difficulties for the idea that lexical meanings are concepts:
polysemy           1 meaning, 1 cluster of concepts (in 1 mind)
intersubjectivity  1 meaning, 2 concepts (in 2 minds)
jabber(wocky)      1 meaning, 0 concepts (in 1 mind)

But a single instruction to fetch a concept from a certain address can be associated with more (or less) than one concept: meaning constancy, at least for purposes of meaning composition.

Plenty of Work to do

Every dog barked
Peter said that every dog barked
I bet you five dollars that every dog barked
Every dog that barked is brown
Everyone who said that a dog barked arrived

short response:

Plenty of Work to do

Every dog barked
  EVERY(_) & ∃[INTERNAL(_, _) & MAXIMIZE-DOG(_)] & ∃[EXTERNAL(_, _) & MAXIMIZE-BARKED(_)]

Peter said that every dog barked
  ∃[AGENT(_, _) & THAT-PETER(_)] & SAY(_) & PAST(_) & ∃[CONTENT(_, _) & THAT-EVERY-DOG-BARKED(_)]

I bet you a dollar that every dog barked
  ∃[AGENT(_, _) & SPEAKER(_)] & BET(_) & ∃[???(_, _) & AUDIENCE(_)] & ∃[THEME(_, _) & DOLLAR(_)] & ∃[CONTENT(_, _) & THAT-EVERY-DOG-BARKED(_)]

What is a Theory of Meaning (for a given language) a Theory of?

A partial theory of…
– abstract meanings (propositions and their parts)
– how speakers use the language to communicate
– how expressions of language are related to aspects of the environment that speakers share
– certain speakers’ linguistic knowledge
– how expressions generated by a certain I-language interface with human conceptual systems

Historical Remark When Frege invented the modern logic that semanticists now take as given, his aim was to recast the Dedekind-Peano axioms (which had been formulated with “subject-predicate” sentences, like ‘Every number has a successor’) in a new format, by using a new language that allowed for “fruitful definitions” and “transparent derivations.” Frege’s invented language (Begriffsschrift) was a tool for abstracting formally new concepts, not just a tool for signifying existing concepts

Two Pictures of Lexicalization (repeated)
[Diagram as before: lexicalization as labeling a concept of adicity n with a word of the same adicity, vs. using that concept to introduce a concept of a different adicity k]

But Frege…
– wanted a fully general Logic for (Ideal Scientific) Polyadic Thought
– treated monadicity as a special case of relationality: relations objects bear to truth values
– often recast predicates like ‘number’ in higher-order relational terms: a number is a thing to which zero bears the (identity-or-)ancestral-of-the-predecessor relation
  Number(x) ≡df {ANCESTRAL[Predecessor(y, z)]}(0, x)
– allowed for such definitional introduction by having composition signify function-application and not having constraints on atomic types (and so respecting only weak composition constraints)

By Contrast, my suggestion is that…

I-Languages let us create concepts that formally efface adicity distinctions already exhibited by the concepts we lexicalize (and presumably share with other animals). The payoff lies in creating I-concepts that can be combined quickly, via dumb but implementable operations (like concept-conjunction).

CONTRAST…
FREGEAN ABSTRACTION: use powerful operations to extract logically interesting polyadic concepts from subject-predicate thoughts
NEO-DAVIDSONIAN ABSTRACTION: use simple operations to extract logically boring monadic concepts from diverse animal thoughts

Can we use experimental methods to see what kinds of concepts speakers construct in response to (1) certain complex expressions, or perhaps (2) certain words that already indicate complex concepts?

Tim Hunter Darko Odic Jeff Lidz Justin Halberda

More Questions

Why do expressions (PHON/SEM pairs) exhibit nontrivial logical patterns?

(A/Sm) pink lamb arrived  ⇒  (A/Sm) lamb arrived
Every/All the/No lamb arrived  ⇒  Every/All the/No pink lamb arrived
No butcher who sold all the pink lamb arrived  ⇒  No butcher who sold all the lamb arrived

How are phrases and “logical words” related to logic? Why are some claims (e.g., pink lamb is lamb) analytic?

An Old Idea about Concepts

complex concepts have logical concepts as constituents
phrases indicate complex (i.e., analyzable) concepts
words indicate concepts that may be atomic or complex

              atomic                    complex
logical       ~[S], &{S, S’}            →{S, S’} … unpack as ~[&{S, ~[S’]}]
              ∃x[Φ(x)]                  ∃3x[Φ(x)] … unpack as ∃x∃y∃z[…]
              1-to-1[Φx, Ψx]            MOST{Φ(x), Ψ(x)} … unpack as … etc.
empirical     BROWN(x)                  &{BROWN(x), ANIMAL(x)}
              ANIMAL(x)                 →{ANIMAL(x), ~BROWN(x)}
              LOUD(x), BEFORE(x, y)     &{LOUD(x), BEFORE(x, y)}, ∃x[&{LOUD(x), BEFORE(x, y)}] etc.

Sidebar: Ways to Spoil a Nice Idea
– Define the logical/empirical contrast in terms of human sensory capacities (with Cartesian malice aforethought)
– Define the logical/empirical contrast in terms of first-order variables (with Quinean malice aforethought)
– Confuse normative projects with descriptive projects

Don’t Spoil a Nice Idea

Analysis is related to verification, however sensory/experiential capacities are related to empirical concepts (see Dummett, Horty).

Consider a conjunctive thought: &{a fish flew, every pig perished}. Its form—&{S, S’}—is also an obvious form of verification: verify one conjunct; if success, verify the other conjunct; if success, the thought is verified.

This procedure is neither required (you can always phone a friend) nor sure to be executable (a conjunct might be unverifiable). But complex concepts can formally encode sufficient conditions for verification in terms of empirical concepts; though of course, speakers may not know which procedures are logically equivalent.

Formal Differences

Compare two truth-conditionally equivalent thoughts:
∨{(a fish flew), (every pig perished)}
~(&{~(a fish flew), ~(every pig perished)})

The presence/absence of “negation verification” might be evidence for/against claims about how ‘or’ is understood. At least in some domains, I-languages might interface “transparently” with other cognitive systems, with semantic instructions being used to assemble concepts whose forms can guide verification.

Interface Transparency Thesis: using a SEM to assemble a concept will bias the speaker/thinker towards verification procedures that mirror the assembled concept.
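The point about negation verification can be made concrete with a small sketch. The helper names are hypothetical and "verifying" a sentence is simulated by calling a thunk; what matters is that the two procedures always agree on truth value yet leave different operation traces.

```python
# Two truth-conditionally equivalent verification procedures whose forms differ.

def verify_or(s1, s2, trace):            # ∨{S, S'}: check disjuncts in turn
    trace.append("verify first disjunct")
    if s1():
        return True
    trace.append("verify second disjunct")
    return s2()

def verify_neg_conj_neg(s1, s2, trace):  # ~(&{~S, ~S'})
    trace.append("negate first")
    n1 = not s1()
    trace.append("negate second")
    n2 = not s2()
    trace.append("negate the conjunction")
    return not (n1 and n2)

a_fish_flew = lambda: True
every_pig_perished = lambda: False

t1, t2 = [], []
assert verify_or(a_fish_flew, every_pig_perished, t1) == \
       verify_neg_conj_neg(a_fish_flew, every_pig_perished, t2)
print(t1)  # ['verify first disjunct'] -- short-circuits, no negation steps
print(t2)  # three negation steps: a different procedure, same truth value
```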

‘Most’ as a Case Study

Many provably equivalent “truth specifications” for a sentence like ‘Most of the dots are blue’.

A standard textbook specification:
#{x: Dot(x) & Blue(x)} > #{x: Dot(x) & ~Blue(x)}

But is it plausible to posit cardinality concepts? And is it plausible to specify lexical meanings in terms of negation?

‘Most’ as a Case Study

The standard textbook specification:
#{x: Dot(x) & Blue(x)} > #{x: Dot(x) & ~Blue(x)}

Some other options:
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)}/2
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)} − #{x: Dot(x) & Blue(x)}

These avoid negation… but division by 2? subtraction?

‘Most’ as a Case Study

The standard textbook specification:
#{x: Dot(x) & Blue(x)} > #{x: Dot(x) & ~Blue(x)}

Some other options:
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)}/2
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)} − #{x: Dot(x) & Blue(x)}
{x: Dot(x) & Blue(x)} 1-To-1-Plus {x: Dot(x) & ~Blue(x)}

Hume’s Principle

#{x: F(x)} = #{x: G(x)} iff {x: F(x)} 1-To-1 {x: G(x)}
_______________________________________________
#{x: F(x)} > #{x: G(x)} iff {x: F(x)} 1-To-1-Plus {x: G(x)}

α 1-To-1-Plus β iff there is a proper subset α′ of α such that α′ 1-To-1 β (and it is not the case that β 1-To-1 α)

‘Most’ as a Case Study

Some provably equivalent “truth specifications” for ‘Most of the dots are blue’:
#{x: Dot(x) & Blue(x)} > #{x: Dot(x) & ~Blue(x)}
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)}/2
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)} − #{x: Dot(x) & Blue(x)}
{x: Dot(x) & Blue(x)} 1-To-1-Plus {x: Dot(x) & ~Blue(x)}
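Assuming the dots form a finite set, the four specifications can be implemented and spot-checked for equivalence; the function names and the toy sets are illustrative. Note that the 1-To-1-Plus version pairs items off one at a time rather than computing cardinalities, which is exactly what makes it a psychologically distinct procedure despite the provable equivalence.

```python
# The four provably equivalent "truth specifications", over finite sets.
import random

def most_card(dots, blue):   # #{x: Dot(x) & Blue(x)} > #{x: Dot(x) & ~Blue(x)}
    return len(dots & blue) > len(dots - blue)

def most_half(dots, blue):   # #{x: Dot(x) & Blue(x)} > #{x: Dot(x)}/2
    return len(dots & blue) > len(dots) / 2

def most_sub(dots, blue):    # #{blue dots} > #{dots} - #{blue dots}
    return len(dots & blue) > len(dots) - len(dots & blue)

def most_one_to_one_plus(dots, blue):
    # {blue dots} 1-To-1-Plus {non-blue dots}: pair items off one at a time,
    # with no counting; true iff blue dots remain once the non-blue dots
    # are exhausted.
    yes, no = list(dots & blue), list(dots - blue)
    while yes and no:
        yes.pop()
        no.pop()
    return bool(yes)

for _ in range(1000):        # spot-check the provable equivalence
    dots = set(range(random.randint(0, 10)))
    blue = {d for d in dots if random.random() < 0.5}
    assert (most_card(dots, blue) == most_half(dots, blue)
            == most_sub(dots, blue) == most_one_to_one_plus(dots, blue))
```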

‘Most’ as a Case Study

Most of the red cows arrived  ⇏  Most of the cows arrived
Most of the cows arrived  ⇏  Most of the red cows arrived
(note that both inferences are bad)

But ‘most’ is not logically willy-nilly:
Most of the cows arrived late  ⇒  Most of the cows arrived
Most of the cows arrived  ⇒  More than half of the cows arrived

It seems unlikely that ‘most’ fetches an atomic concept; it seems more like a good candidate for nontrivial analysis.

‘Most’ as a Case Study

– much discussed by logicians and semanticists
– an essentially relational quantifier:
  ∃x(Fx & Gx) ≡ ∃x:Fx(Gx)
  ∀x(Fx → Gx) ≡ ∀x:Fx(Gx)
  μx(Fx ? Gx)    μx:Fx(Gx)    (no sentential connective does the job)
– mass/count flexibility:
  Most of the cows are brown / Most of the beef is brown
  I saw most of the cows/beef
– determiner/adjectival flexibility:
  I saw the most cows/beef
– available methods for revealing verification strategies

Some Relevant Facts

– many animals are good cardinality-estimators, by dint of a much-studied system (see Dehaene, Gallistel/Gelman, etc.)
– appeal to subtraction operations is far from crazy (see Gallistel and King)
– even infants can do one-to-one comparison (see Wynn)
– Frege’s versions of the arithmetic axioms can be derived from Hume’s Principle (and definitions), using only a consistent fragment of Frege’s logic

Lots of references in…
The Meaning of ‘Most’. Mind and Language 24 (2009).
Transparency and the Psychosemantics of ‘most’. Natural Language Semantics (in press).

a model of the “Approximate Number System” (key feature: ratio-dependence of discriminability): distinguishing 8 dots from 4 (or 16 from 8) is easier than distinguishing 10 dots from 8 (or 20 from 10)

a model of the “Approximate Number System” (key feature: ratio-dependence of discriminability) correlatively, as the number of dots rises, “acuity” for estimating of cardinality decreases-- but still in a ratio-dependent way, with wider “normal spreads” centered on right answers