Meanings as Instructions for how to Build Concepts
Paul M. Pietroski, University of Maryland
Dept. of Linguistics, Dept. of Philosophy
In previous episodes…
What are words, concepts, and grammars? How are they related? How are they related to whatever makes humans distinctive? Did a relatively small change in our ancestors lead to both the "linguistic metamorphosis" that human infants undergo and significant cognitive differences between us and other primates? Maybe… we're cognitively special because we're linguistically special, and we're linguistically special because we acquire words. (After all, kids are really good at acquiring words.) Humans acquire words, concepts, and grammars.
Language Acquisition Device in a Mature State (an I-Language): [diagram: GRAMMAR and LEXICON generating SEMs, which interface with initial concepts, other acquired concepts, and introduced concepts] What kinds of concepts do SEMs interface with?
Two Pictures of Lexicalization: [diagram contrasting two pictures: (1) a perceptible signal is paired with a concept of adicity n, yielding a word of adicity n that simply labels that concept; (2) a perceptible signal is paired with a concept of adicity n, which is used to introduce a concept of adicity k, yielding a word of adicity k stored with further lexical information]
Puzzles for the idea that Words simply Label Concepts
Apparent mismatches between how words combine (grammatical form) and how concepts combine (logical form):
KICK(x1, x2): The baby kicked
RIDE(x1, x2): Can you give me a ride?
BETWEEN(x1, x2, x3): I am between him and her
BIGGER(x1, x2): That is bigger than that
FATHER(…?…): Fathers father
MORTAL(…?…): Socrates is mortal; A mortal wound is fatal
Lexicalization as Monadic-Concept-Abstraction: [diagram: before lexicalization, the perceptible signal is paired with a concept of adicity n, e.g. the dyadic KICK(x1, x2); lexicalization uses that concept to introduce a concept of reduced adicity, e.g. the monadic KICK(event), stored with further lexical information]
A Possible Mind
KICK(x1, x2): a prelexical concept
KICK(x1, x2) ≡df for some _, KICK(_, x1, x2)
AGENT(_, x1), PATIENT(_, x2): generic "action" concepts
KICK(_, x1, x2) ≡df AGENT(_, x1) & KICK(_) & PATIENT(_, x2)
CAESAR, PF:'Caesar': mental labels for a person and a sound
Called(CAESAR, PF:'Caesar'): a thought about what the person is called
Called(_, PF:'Caesar') ≡df CAESARED(_)
So: KICK(_, x1, x2) is introduced via KICK(x1, x2); KICK(_) is introduced via KICK(_, x1, x2), AGENT(_, x1), and PATIENT(_, x2); CAESARED(_) is introduced via Called(CAESAR, PF:'Caesar').
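The neo-Davidsonian decomposition in this "possible mind" can be given a toy computational rendering. This is a minimal sketch, not anything from the talk: the Event class, the sample event, and all names here are assumptions made for illustration. The point is just that the dyadic concept KICK(x1, x2) can be recovered from a monadic event concept plus generic thematic concepts via existential closure over events.

```python
# Toy rendering of: KICK(x1, x2) =df for some e, AGENT(e, x1) & KICK(e) & PATIENT(e, x2).
# All names are illustrative assumptions, not Pietroski's implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    agent: str
    patient: str
    kind: str  # e.g. "kick"

# Generic thematic concepts: predicates relating events to participants.
def AGENT(e, x):
    return e.agent == x

def PATIENT(e, y):
    return e.patient == y

# The introduced monadic concept KICK(_), true of kicking events.
def KICK_E(e):
    return e.kind == "kick"

# The lexicalized dyadic concept, recovered by existential closure over
# events (here: search a small "mental model" of events).
def KICK(x, y, events):
    return any(AGENT(e, x) and KICK_E(e) and PATIENT(e, y) for e in events)

events = [Event(agent="Brutus", patient="Caesar", kind="kick")]
print(KICK("Brutus", "Caesar", events))  # True
print(KICK("Caesar", "Brutus", events))  # False
```

The design point mirrors the slide: the monadic KICK_E is the conjoinable concept; the familiar dyadic KICK is definable from it, not vice versa.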
Two Roles for Words on this View
(1) In lexicalization… acquiring a (spoken) word is a process of pairing a sound with a concept (the concept lexicalized), storing that sound/concept pair in memory, and then using that concept to introduce a concept that can be combined with others via certain (limited) composition operations:
sound-of-'kick'/KICK(x1, x2) becomes sound-of-'kick'/KICK(x1, x2)/KICK(_)
At least for "open class" lexical items (nouns, verbs, adjectives/adverbs), the introduced concepts are monadic and conjoinable.
(2) In subsequent comprehension… a word is an instruction to fetch an introduced concept from the relevant address in memory.
Meanings as Instructions for how to build (Conjunctive) Concepts
The meaning (SEM) of [ride_V fast_A]_V is the following instruction: CONJOIN[execute:SEM('ride'), execute:SEM('fast')]
Executing this instruction yields a concept like RIDE(_) & FAST(_)
The meaning (SEM) of [ride_V horses_N]_V is the following instruction: CONJOIN[execute:SEM('ride'), Thematize-execute:SEM('horses')], with 'horses' as DirectObject
Executing this instruction would yield a concept like RIDE(_) & [THEME(_, _) & HORSES(_)], or more fully, RIDE(_) & [THEME(_, _) & HORSE(_) & PLURAL(_)]
Meanings as Instructions for how to build (Conjunctive) Concepts
The meaning of [[ride_V horses_N]_V fast_A]_V is the following instruction: CONJOIN[execute:SEM([ride_V horses_N]_V), execute:SEM(fast_A)]
Executing this instruction yields a concept like RIDE(_) & [THEME(_, _) & HORSES(_)] & FAST(_)
The meaning of [ride_V [fast_A horses_N]_N]_V is the following instruction: CONJOIN[execute:SEM('ride'), Thematize-execute:SEM([fast_A horses_N]_N)], with [fast_A horses_N]_N as DirectObject
Executing this instruction yields a concept like RIDE(_) & [THEME(_, _) & FAST(_) & HORSES(_)]
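The instruction picture on these slides can be sketched in code. This is a toy illustration under assumed representations: concepts are rendered as formula strings, and the names LEXICON, fetch, conjoin, and thematize are hypothetical stand-ins for SEM execution, not an implementation from the talk.

```python
# Toy sketch of "meanings as instructions": a word is an instruction to
# fetch an introduced monadic concept; phrases are instructions to CONJOIN
# the results, with direct objects linked to the event via THEME(_, _).
# All names here are illustrative assumptions.

LEXICON = {"ride": "RIDE(_)", "horses": "HORSES(_)", "fast": "FAST(_)"}

def fetch(word):
    # executing SEM(word): fetch the introduced monadic concept
    return LEXICON[word]

def conjoin(c1, c2):
    # dumb but implementable composition: concept-conjunction
    return f"{c1} & {c2}"

def thematize(c):
    # a DirectObject's concept is conjoined under THEME(_, _)
    return f"[THEME(_, _) & {c}]"

# SEM([ride horses]): CONJOIN[execute:SEM('ride'), Thematize-execute:SEM('horses')]
sem_ride_horses = conjoin(fetch("ride"), thematize(fetch("horses")))
print(sem_ride_horses)
# RIDE(_) & [THEME(_, _) & HORSES(_)]

# SEM([[ride horses] fast]): adverbial 'fast' conjoins at the top
sem_ride_horses_fast = conjoin(sem_ride_horses, fetch("fast"))
print(sem_ride_horses_fast)
# RIDE(_) & [THEME(_, _) & HORSES(_)] & FAST(_)

# SEM([ride [fast horses]]): adjectival 'fast' conjoins inside the THEME
sem_ride_fast_horses = conjoin(fetch("ride"), thematize(conjoin(fetch("fast"), fetch("horses"))))
print(sem_ride_fast_horses)
# RIDE(_) & [THEME(_, _) & FAST(_) & HORSES(_)]
```

Note how the bracketing difference between riding horses fast and riding fast horses falls out of where the conjunction happens: at the event level or inside the thematized object.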
Meanings as Instructions for how to build (Conjunctive) Concepts
On this view, meanings are neither extensions nor concepts.
Familiar difficulties for the idea that lexical meanings are concepts:
polysemy: 1 meaning, 1 cluster of concepts (in 1 mind)
intersubjectivity: 1 meaning, 2 concepts (in 2 minds)
jabber(wocky): 1 meaning, 0 concepts (in 1 mind)
But a single instruction to fetch a concept from a certain address can be associated with more (or less) than one concept: meaning constancy, at least for purposes of meaning composition.
Plenty of Work to do
Every dog barked
Peter said that every dog barked
I bet you five dollars that every dog barked
Every dog that barked is brown
Everyone who said that a dog barked arrived
short response:
Plenty of Work to do
Every dog barked:
EVERY(_) & [INTERNAL(_, _) & MAXIMIZE-DOG(_)] & [EXTERNAL(_, _) & MAXIMIZE-BARKED(_)]
Peter said that every dog barked:
[AGENT(_, _) & THAT-PETER(_)] & SAY(_) & PAST(_) & [CONTENT(_, _) & THAT-EVERY-DOG-BARKED(_)]
I bet you a dollar that every dog barked:
[AGENT(_, _) & SPEAKER(_)] & BET(_) & [???(_, _) & AUDIENCE(_)] & [THEME(_, _) & DOLLAR(_)] & [CONTENT(_, _) & THAT-EVERY-DOG-BARKED(_)]
What is a Theory of Meaning (for a given language) a Theory of? A partial theory of…
abstract meanings (propositions and their parts)
how speakers use the language to communicate
how expressions of the language are related to aspects of the environment that speakers share
certain speakers' linguistic knowledge
how expressions generated by a certain I-language interface with human conceptual systems
Historical Remark When Frege invented the modern logic that semanticists now take as given, his aim was to recast the Dedekind-Peano axioms (which had been formulated with “subject-predicate” sentences, like ‘Every number has a successor’) in a new format, by using a new language that allowed for “fruitful definitions” and “transparent derivations.” Frege’s invented language (Begriffsschrift) was a tool for abstracting formally new concepts, not just a tool for signifying existing concepts
Two Pictures of Lexicalization: [the lexicalization diagram repeated: (1) a word of adicity n that simply labels a concept of adicity n vs. (2) a perceptible signal paired with a concept of adicity n, used to introduce a word of adicity k plus further lexical information]
But Frege…
wanted a fully general Logic for (Ideal Scientific) Polyadic Thought
treated monadicity as a special case of relationality: relations objects bear to truth values
often recast predicates like 'number' in higher-order relational terms: a number is a thing to which zero bears the (identity-or-)ancestral-of-the-predecessor-relation relation
Number(x) ≡df {(identity-or-)ANCESTRAL[Predecessor(y, z)]}(zero, x)
allowed for such definitional introduction by having composition signify function-application and not having constraints on atomic types (and so respecting only weak composition constraints)
By Contrast, my suggestion is that…
I-Languages let us create concepts that formally efface adicity distinctions already exhibited by the concepts we lexicalize (and presumably share with other animals). The payoff lies in creating I-concepts that can be combined quickly, via dumb but implementable operations (like concept-conjunction).
CONTRAST…
FREGEAN ABSTRACTION: use powerful operations to extract logically interesting polyadic concepts from subject-predicate thoughts
NEO-DAVIDSONIAN ABSTRACTION: use simple operations to extract logically boring monadic concepts from diverse animal thoughts
Can we use experimental methods to see what kinds of concepts speakers construct in response to (1) certain complex expressions, or perhaps (2) certain words that already indicate complex concepts?
Tim Hunter Darko Odic Jeff Lidz Justin Halberda
More Questions
why do expressions (PHON/SEM pairs) exhibit nontrivial logical patterns?
(A/Sm) pink lamb arrived, so (A/Sm) lamb arrived
Every/All the/No lamb arrived, so Every/All the/No pink lamb arrived
No butcher who sold all the pink lamb arrived, so No butcher who sold all the lamb arrived
how are phrases and "logical words" related to logic?
why are some claims (e.g., pink lamb is lamb) analytic?
An Old Idea about Concepts
complex concepts have logical concepts as constituents
phrases indicate complex (i.e., analyzable) concepts
words indicate concepts that may be atomic or complex
logical, atomic: ~[S]; &{S, S'}; ∃x[Φ(x)]; 1-to-1[Φx, Ψx]; etc.
logical, complex: →{S, S'}… unpack as ~[&{S, ~[S']}]; ∃3x[Φ(x)]… unpack as ∃x∃y∃z[…]; MOST{Φ(x), Ψ(x)}… unpack as…; etc.
empirical, atomic: BROWN(x); ANIMAL(x); LOUD(x); BEFORE(x, y); etc.
empirical, complex: &{BROWN(x), ANIMAL(x)}; v{ANIMAL(x), ~BROWN(x)}; &{LOUD(x), BEFORE(x, y)}; ∃x[&{LOUD(x), BEFORE(x, y)}]; etc.
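The "unpacking" idea on this slide can be illustrated in code: complex logical concepts reduce to atomic ones. The two unpackings shown are the conditional (as negation plus conjunction) and the numeral quantifier "at least three" (as iterated existentials over distinct values). This is a toy illustration; the function names IF and exists3 are assumptions, not notation from the talk.

```python
# Toy illustration of unpacking complex logical concepts into atomic ones.

from itertools import combinations

def IF(s, s_prime):
    # ->{S, S'} unpacks as ~[&{S, ~[S']}]
    return not (s and not s_prime)

def exists3(domain, phi):
    # "at least three x: Phi(x)" unpacks as Ex Ey Ez (distinct) with Phi
    # true of each; combinations() supplies distinct triples.
    return any(phi(x) and phi(y) and phi(z)
               for x, y, z in combinations(domain, 3))

print(IF(True, False))                            # False: S true, S' false
print(exists3(range(10), lambda n: n % 2 == 0))   # True: five even numbers
print(exists3(range(4), lambda n: n > 1))         # False: only 2 and 3
```

Whether MOST unpacks in any comparably simple way is exactly the question the later slides take up.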
Sidebar: Ways to Spoil a Nice Idea
Define the logical/empirical contrast in terms of human sensory capacities (with Cartesian malice aforethought)
Define the logical/empirical contrast in terms of first-order variables (with Quinean malice aforethought)
Confuse normative projects with descriptive projects
Don’t Spoil a Nice Idea Analysis is related to verification, however sensory/experiential capacities are related to empirical concepts (see Dummett, Horty) Consider a conjunctive thought: &{a fish flew, every pig perished} Its form—&{S, S’}—is also an obvious form of verification: verify one conjunct; if success, verify the other conjunct; if success, the thought is verified This procedure is neither required (you can always phone a friend) nor sure to be executable (a conjunct might be unverifiable). But complex concepts can formally encode sufficient conditions for verification in terms of empirical concepts; though of course, speakers may not know which procedures are logically equivalent.
Formal Differences
Compare two truth-conditionally equivalent thoughts:
v{(a fish flew), (every pig perished)}
~(&{~(a fish flew), ~(every pig perished)})
the presence/absence of "negation verification" might be evidence for/against claims about how 'or' is understood
at least in some domains, I-languages might interface "transparently" with other cognitive systems, with semantic instructions being used to assemble concepts whose forms can guide verification
Interface Transparency Thesis: using a SEM to assemble a concept will bias the speaker/thinker towards verification procedures that mirror the assembled concept
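The formal difference between the two equivalent thoughts can be made concrete by writing each as a verification procedure and counting negation steps. This is a toy sketch of the idea, with assumed names (verify_or, verify_demorgan) and toy truth values; it is not an experimental model from the talk.

```python
# Two logically equivalent concepts, v{S, S'} and ~(&{~S, ~S'}), encode
# different verification procedures. We log the steps each one takes.

def verify_or(s1, s2, steps):
    # v{S, S'}: verify S; if that fails, verify S'
    steps.append("verify S")
    if s1():
        return True
    steps.append("verify S'")
    return s2()

def verify_demorgan(s1, s2, steps):
    # ~(&{~S, ~S'}): verify both negations, then negate their conjunction
    steps.append("negate S")
    n1 = not s1()
    steps.append("negate S'")
    n2 = not s2()
    steps.append("negate conjunction")
    return not (n1 and n2)

a_fish_flew = lambda: False
every_pig_perished = lambda: True

log_or, log_dm = [], []
print(verify_or(a_fish_flew, every_pig_perished, log_or))        # True
print(verify_demorgan(a_fish_flew, every_pig_perished, log_dm))  # True
print(log_or)   # no negation steps
print(log_dm)   # three negation steps
```

Same verdict, different procedures: the Interface Transparency Thesis predicts that which concept a SEM assembles will bias the thinker toward the matching procedure.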
‘Most’ as a Case Study
Many provably equivalent "truth specifications" for a sentence like 'Most of the dots are blue'.
a standard textbook specification is…
#{x: Dot(x) & Blue(x)} > #{x: Dot(x) & ~Blue(x)}
but is it plausible to posit cardinality concepts? and is it plausible to specify lexical meanings in terms of negation?
some other options…
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)}/2
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)} − #{x: Dot(x) & Blue(x)}
avoids negation… but division by 2? subtraction?
{x: Dot(x) & Blue(x)} 1-To-1-Plus {x: Dot(x) & ~Blue(x)}
Hume's Principle
#{x: F(x)} = #{x: G(x)} iff {x: F(x)} 1-To-1 {x: G(x)}
#{x: F(x)} > #{x: G(x)} iff {x: F(x)} 1-To-1-Plus {x: G(x)}
α 1-To-1-Plus β iff there is a proper subset α⁻ of α such that: α⁻ 1-To-1 β (and it is not the case that β 1-To-1 α)
‘Most’ as a Case Study
Some provably equivalent "truth specifications" for 'Most of the dots are blue':
#{x: Dot(x) & Blue(x)} > #{x: Dot(x) & ~Blue(x)}
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)}/2
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)} − #{x: Dot(x) & Blue(x)}
{x: Dot(x) & Blue(x)} 1-To-1-Plus {x: Dot(x) & ~Blue(x)}
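The four specifications can be checked against each other on finite scenes. This is a toy sketch with assumed function names; note that the one-to-one-plus version pairs dots off without ever computing a cardinality, which is the psychologically interesting contrast.

```python
# Four provably equivalent truth specifications for
# 'Most of the dots are blue', each invoking different operations.
# All names are illustrative.

def most_negation(dots, blue):
    # #{Dot & Blue} > #{Dot & ~Blue}: uses negation
    return len(dots & blue) > len(dots - blue)

def most_half(dots, blue):
    # #{Dot & Blue} > #{Dot}/2: uses division
    return len(dots & blue) > len(dots) / 2

def most_subtraction(dots, blue):
    # #{Dot & Blue} > #{Dot} - #{Dot & Blue}: uses subtraction
    return len(dots & blue) > len(dots) - len(dots & blue)

def most_one_to_one_plus(dots, blue):
    # {Dot & Blue} 1-To-1-Plus {Dot & ~Blue}: pair dots off one by one,
    # with no appeal to cardinalities; blue "wins" if the non-blue dots
    # are exhausted first with blue dots left over.
    a, b = list(dots & blue), list(dots - blue)
    while a and b:
        a.pop()
        b.pop()
    return bool(a)

dots = set(range(7))
for blue in ({0, 1, 2, 3}, {0, 1, 2}):
    print([f(dots, blue) for f in
           (most_negation, most_half, most_subtraction, most_one_to_one_plus)])
# [True, True, True, True] then [False, False, False, False]
```

Extensional equivalence is why behavioral evidence about verification (which operations speakers actually deploy) is needed to choose among the specifications.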
‘Most’ as a Case Study
Most of the red cows arrived, so Most of the cows arrived
Most of the cows arrived, so Most of the red cows arrived
note that both inferences are bad, but 'most' is not logically willy-nilly:
Most of the cows arrived late, so Most of the cows arrived
Most of the cows arrived, so More than half of the cows arrived
seems unlikely that 'most' fetches an atomic concept
seems more like a good candidate for nontrivial analysis
‘Most’ as a Case Study
much discussed by logicians and semanticists
essentially relational quantifier:
∃x(Fx & Gx) ≡ ∃x:Fx(Gx)
∀x(Fx → Gx) ≡ ∀x:Fx(Gx)
μx(Fx ? Gx) ≡ μx:Fx(Gx)? No sentential connective '?' does the job
mass/count flexibility: Most of the cows are brown / Most of the beef is brown; I saw most of the cows/beef
determiner/adjectival flexibility: I saw the most cows/beef
available methods for revealing verification strategies
Some Relevant Facts
many animals are good cardinality estimators, by dint of a much studied system (see Dehaene, Gallistel/Gelman, etc.)
appeal to subtraction operations is far from crazy (see Gallistel and King)
even infants can do one-to-one comparison (see Wynn)
Frege's versions of the arithmetic axioms can be derived from Hume's Principle (and definitions), using only a consistent fragment of Frege's logic
Lots of references in…
The Meaning of 'Most'. Mind and Language 24 (2009).
Transparency and the Psychosemantics of 'most'. Natural Language Semantics (in press).
a model of the “Approximate Number System” (key feature: ratio-dependence of discriminability) distinguishing 8 dots from 4 (or 16 from 8) is easier than distinguishing 10 dots from 8 (or 20 from 10)
a model of the "Approximate Number System" (key feature: ratio-dependence of discriminability) correlatively, as the number of dots rises, "acuity" for estimating cardinality decreases, but still in a ratio-dependent way, with wider "normal spreads" centered on right answers
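The ratio-dependence described on these slides can be sketched with a standard Weber-law model: each numerosity is represented by a Gaussian whose spread grows with the number, so discriminability depends on the ratio of the two numerosities, not their difference. The Weber fraction value (0.2) and the d′-style formula below are assumptions for illustration, not parameters from the talk.

```python
# Toy Weber-law model of ANS discriminability: numerosity n is
# represented with spread W*n, so the separation ("d-prime") of two
# representations depends only on the ratio n1/n2.

import math

W = 0.2  # assumed Weber fraction

def discriminability(n1, n2):
    # separation of the two Gaussian representations, in pooled-spread units
    return abs(n1 - n2) / math.sqrt((W * n1) ** 2 + (W * n2) ** 2)

print(discriminability(8, 4))    # ratio 2:1
print(discriminability(16, 8))   # same ratio, same discriminability
print(discriminability(20, 10))  # same ratio again
print(discriminability(10, 8))   # ratio 5:4: smaller value, harder
```

This reproduces the slide's pattern: 8 vs 4, 16 vs 8, and 20 vs 10 are equally easy, while 10 vs 8 is harder, and absolute acuity worsens as the numbers grow even though ratio-discriminability stays fixed.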