Meanings as Instructions for how to Build Concepts Paul M. Pietroski University of Maryland Dept. of Linguistics, Dept. of Philosophy

In previous episodes…

Humans acquire words, concepts, and grammars. What are words, concepts, and grammars? How are they related? How are they related to whatever makes humans distinctive? Did a relatively small change in our ancestors lead to both the “linguistic metamorphosis” that human infants undergo and significant cognitive differences between us and other primates? Maybe: we’re cognitively special because we’re linguistically special, and we’re linguistically special because we acquire words. (After all, kids are really good at acquiring words.)

[Diagram: the Language Acquisition Device in a mature state (an I-language): GRAMMAR and LEXICON yield SEMs; initial concepts → acquired concepts → introduced concepts.] What kinds of concepts do SEMs interface with?

Lexicalization as Monadic-Concept-Abstraction. [Diagram: a concept of adicity n, e.g. KICK(x1, x2), is paired with a perceptible signal; lexicalization introduces a monadic concept, e.g. KICK(event), plus further lexical information; the word itself is shown with adicity −1.]

Meanings as Instructions for how to build (Conjunctive) Concepts. The meaning (SEM) of [rideV fastA]V is an instruction; executing this instruction yields a concept like
RIDE(_) & FAST(_)
The meaning (SEM) of [rideV horsesN]V is an instruction that thematizes and executes SEM(‘horses’) as the direct object; executing this instruction would yield a concept like
RIDE(_) & ∃[THEME(_, _) & HORSES(_)]
or, more finely,
RIDE(_) & ∃[THEME(_, _) & HORSE(_) & PLURAL(_)]
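As a toy illustration (my addition, not from the slides), here is a minimal Python sketch of the idea that a SEM is an executable instruction for building a conjunctive concept. The names CONJOIN and THEMATIZE, and the dictionary representation of events, are illustrative assumptions, not Pietroski's implementation.

```python
# A minimal sketch: concepts are modeled as Python predicates over
# "events" (dicts); CONJOIN and THEMATIZE are hypothetical stand-ins
# for the composition operations the slides gesture at.

def RIDE(e):            # monadic concept fetched for 'ride'
    return e.get("kind") == "ride"

def HORSE(x):           # monadic concept fetched for 'horse'
    return x.get("species") == "horse"

def CONJOIN(c1, c2):
    """Build the conjunctive concept C1(_) & C2(_)."""
    return lambda e: c1(e) and c2(e)

def THEMATIZE(c):
    """Build the concept EXISTS[THEME(_, _) & C(_)]:
    true of an event e iff e's theme satisfies c."""
    return lambda e: "theme" in e and c(e["theme"])

# Executing SEM('ride horses') ~ CONJOIN(RIDE, THEMATIZE(HORSE))
ride_horses = CONJOIN(RIDE, THEMATIZE(HORSE))

event = {"kind": "ride", "theme": {"species": "horse"}}
assert ride_horses(event)   # RIDE(e) & EXISTS[THEME(e, x) & HORSE(x)]
```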

Meanings: neither extensions nor concepts. Familiar difficulties for the idea that lexical meanings are concepts:
polysemy: 1 meaning, 1 cluster of concepts (in 1 mind)
intersubjectivity: 1 meaning, 2 concepts (in 2 minds)
jabber(wocky): 1 meaning, 0 concepts (in 1 mind)
But don’t conclude that meanings are extensions (or referents, or aspects of the environment that speakers share). A single instruction to fetch a concept from a certain address can be associated with more (or less) than one concept. Meaning constancy: at least for purposes of meaning composition.

Meanings: neither extensions nor concepts. Meaning constancy: at least for purposes of meaning composition. Don’t confuse (linguistic) meaning with use: meaning is compositional; use isn’t. Don’t forget that shared languages may reflect shared biology more than a shared environment. See “Poverty of the Stimulus Revisited” (Berwick, Pietroski, Yankama, & Chomsky), current issue of Cognitive Science, and “The Language Faculty” (Pietroski and Crain), Oxford Handbook of the Philosophy of Cognitive Science.

More Questions. Why do expressions (PHON/SEM pairs) exhibit nontrivial logical patterns?
‘(A/Sm) pink lamb arrived’ entails ‘(A/Sm) lamb arrived’
‘Every/All the/No lamb arrived’ entails ‘Every/All the/No pink lamb arrived’
‘No butcher who sold all the pink lamb arrived’ entails ‘No butcher who sold all the lamb arrived’
How are phrases and “logical words” related to logic? Why are some claims (e.g., pink lamb is lamb) analytic? Old idea: some words indicate logical concepts, some of which are complex… logical words are not independent… and so some such words indicate complex logical concepts.

‘Most’ as a Case Study. There are many provably equivalent “truth specifications” for a sentence like ‘Most of the dots are blue’. A standard textbook specification:
#{x: Dot(x) & Blue(x)} > #{x: Dot(x) & ~Blue(x)}
Some other options:
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)}/2
#{x: Dot(x) & Blue(x)} > #{x: Dot(x)} − #{x: Dot(x) & Blue(x)}
{x: Dot(x) & Blue(x)} 1-to-1-plus {x: Dot(x) & ~Blue(x)}
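A quick sanity check (my addition, not part of the talk): the four specifications are extensionally equivalent over finite scenes, as a brute-force Python comparison of blue/non-blue counts confirms.

```python
# Check that the listed specifications of 'Most of the dots are blue'
# agree on arbitrary finite scenes. Function names are illustrative.
import random

def most_standard(blue, nonblue):         # #{Dot & Blue} > #{Dot & ~Blue}
    return blue > nonblue

def most_half(blue, nonblue):             # #{Dot & Blue} > #{Dot}/2
    return blue > (blue + nonblue) / 2

def most_subtraction(blue, nonblue):      # #{Dot & Blue} > #{Dot} - #{Dot & Blue}
    return blue > (blue + nonblue) - blue

def most_one_to_one_plus(blue, nonblue):
    # pair each non-blue dot with a distinct blue dot, one blue left over
    return blue >= nonblue + 1

for _ in range(10_000):
    b, n = random.randint(0, 50), random.randint(0, 50)
    answers = {f(b, n) for f in (most_standard, most_half,
                                 most_subtraction, most_one_to_one_plus)}
    assert len(answers) == 1   # all specifications agree on this scene
```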

a model of the “Approximate Number System” (key feature: ratio-dependence of discriminability): distinguishing 8 dots from 4 (or 16 from 8) is easier than distinguishing 10 dots from 8 (or 20 from 16)
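To make ratio-dependence concrete, here is a sketch of the standard psychophysical model commonly used in this literature (Gaussian magnitude representations with scalar variability). The exact formula is my reconstruction of common practice, not a detail given in the slides, and the Weber fraction w = 0.2 is an arbitrary assumed value.

```python
# Predicted accuracy for judging which of two numerosities is larger,
# under Gaussian representations whose noise scales with magnitude.
from math import erfc, sqrt

def percent_correct(n1, n2, w=0.2):
    """Assumed ANS model: accuracy depends on the ratio n1:n2 via
    the Weber fraction w, not on the absolute difference."""
    return 1 - 0.5 * erfc(abs(n1 - n2) / (sqrt(2) * w * sqrt(n1**2 + n2**2)))

# Ratio 2:1 pairs are easy, and equally easy regardless of set size:
print(percent_correct(8, 4), percent_correct(16, 8))    # high, and equal
# Ratio 1.25:1 pairs are harder, and again equally hard:
print(percent_correct(10, 8), percent_correct(20, 16))  # lower, and equal
```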

[Figures: fits of the trial types (apart from Sorted-Columns) to a standard psychophysical model for predicting ANS-driven performance; fits of the Sorted-Columns trials to an independent model for detecting the longer of two line segments.]

[Figure: no effect of the number of colors.]

[Figure: discriminability is BETTER for ‘goo’ than for ‘dots’.]

final episode…

Hunch (to be refined). Meanings are simple:
they systematically compose in ways that emerge naturally for humans
they are perceived/computed automatically, often in the absence of relevant context and in contrast to reasonable expectations (hiker, lost, walked, circles: ‘The hiker who lost was walked in circles’)
there are decent first-pass theories of meaning/understanding
meaning relies on rudimentary linking of unsaturated conceptual “slots”
Truth is complicated:
it depends on context in apparently diverse ways
making a claim truth-evaluable often requires work, especially if you want people to agree on which truth-evaluable claim got made
there are paradoxes
truth requires fancy (Tarskian) variables

‘I’ Before ‘E’. Frege: each function determines a “Course of Values”. Church: function-in-intension vs. function-in-extension:
a procedure that pairs inputs with outputs in a certain way
a set of ordered pairs (no instances of <x, y> and <x, z> where y ≠ z)
Chomsky: I-language vs. E-language:
a procedure, implementable by child biology, that pairs phonological structures (PHONs) with semantic structures (SEMs)
a set of <PHON, SEM> pairs

I-Language/E-Language. Function in intension: an implementable procedure that pairs inputs with outputs. Function in extension: a set of input-output pairs. Compare the procedures |x − 1| and +√(x² − 2x + 1); each determines the extension {…, (−2, 3), (−1, 2), (0, 1), (1, 0), (2, 1), …}. As procedures, λx. |x − 1| ≠ λx. +√(x² − 2x + 1); yet Extension[λx. |x − 1|] = Extension[λx. +√(x² − 2x + 1)].
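A minimal Python sketch of the point (my addition): two procedurally distinct functions with one and the same extension.

```python
# f and g compute the same input-output pairs by different routes:
# f(x) = |x - 1|, g(x) = +sqrt(x^2 - 2x + 1) = +sqrt((x - 1)^2).
from math import sqrt

f = lambda x: abs(x - 1)
g = lambda x: sqrt(x**2 - 2*x + 1)

assert all(f(x) == g(x) for x in range(-100, 101))  # same extension
assert f is not g                                   # distinct procedures
```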

[Diagram: a lexicon, i.e. a list of atomic pairs, each with an “address” that can be assigned to one or more concepts; modes of composition, e.g. CONJOIN, SATURATE, ABSTRACT; → complex concepts that are available for use. Further details beyond my pay grade.]

an I-language in Chomsky’s sense: the expression-generator generates semantic instructions, and executing these instructions yields concepts that can be used in thought → complex concepts that are available for use

[Diagram: executing the instructions yields, e.g., BROWN(_) & COW(_).]

Varieties of E-functions:
sets of <x, t> pairs
sets of <world, set of <x, t> pairs> pairs, e.g. <w, {<x, t>: t = 1 iff x is a sample of H2O}> vs. <w, {<x, t>: t = 1 iff x is a sample of XYZ}>
sets of <context, set of <x, t> pairs> pairs
sets of <context, <world, set of <x, t> pairs>> pairs
…
One might wonder which of these are represented (by non-theorists).

Going To Church. Given a procedure P that maps each widget to a gizmo, and a procedure P’ that maps each gizmo to a hoosit, there is a procedure P’’ that maps each widget to a hoosit. In this sense, procedures compose (and some can be compiled). But a mind might implement P via certain representations/operations, and implement P’ via different representations/operations, yet lack the capacity to use outputs of P as inputs to P’. If S and S’ are recursively specifiable sets, and S pairs each widget with a gizmo, and S’ pairs each gizmo with a hoosit, then some recursively specifiable set S’’ pairs each widget with a hoosit. Of course, sets don’t compose: S’’ is no more complex than S or S’. But procedural descriptions of sets might compose.
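A small Python sketch of the Church point (my addition; widget, gizmo, and hoosit are the slide's placeholder sorts, and the function names are invented): procedures compose, but only for a system that can actually feed one procedure's outputs to the other.

```python
# Given P (widget -> gizmo) and P2 (gizmo -> hoosit), there is a
# composed procedure P3 (widget -> hoosit). Having P and P2 separately
# does not by itself guarantee a mind can chain them; composition is
# an extra capacity that this helper makes explicit.

def compose(p, p2):
    """The procedure that runs p, then feeds its output to p2."""
    return lambda widget: p2(p(widget))

weigh = lambda widget: widget * 2.2      # P:  widget -> gizmo
label = lambda gizmo: f"{gizmo:.1f} kg"  # P2: gizmo  -> hoosit

p3 = compose(weigh, label)               # P3: widget -> hoosit
print(p3(10))                            # "22.0 kg"
```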

Familiar Point. Given a procedure P that maps each widget to a gizmo, and a procedure P’ that maps each gizmo to a hoosit, there is a procedure P’’ that maps each widget to a hoosit. Specifying a procedure in the lambda calculus (without cheating) tells us that the outputs can be computed in the Church-Turing sense, given the inputs (and any posited capacities/oracles). Such specification can raise interesting Chomsky-Marr questions: what kind of algorithm is needed to compute the outputs from the inputs? Could animals use such an algorithm, given their cognitive resources? If so, do they somehow compute the E-function in question? And if so, how?

Composition before Context. Many philosophers and linguists ask which aspects of context are tracked by expressions of a human I-language. I want to focus on an issue concerning the kind of composition exhibited by the concepts we assemble in response to simple phrases like brown^cow. There are already several kinds of context sensitivity to worry about: big ant, brown calf, brown house, brown vegetables, blue sky. Something is missing in logical forms like BROWN(x) & COW(x); but logical forms with ‘&’ (and ‘x’) may also be too sophisticated. In any case, given an I-language perspective, we must ask: which conjoiner?

Kinds of Conjoiners.
If P and P* are propositions (sentences with no free variables), then: &(P, P*) is true iff P is true and P* is true.
If S and S* are sentential expressions (with zero or more free variables), then for any sequence of domain entities σ: &(S, S*) is satisfied by σ iff S is satisfied by σ, and S* is satisfied by σ.
If M and M* are monadic predicates, then for each entity x: ¹&(M, M*) applies to x iff M applies to x and M* applies to x.
If D and D* are dyadic predicates, then for each ordered pair <x, y>: ²&(D, D*) applies to <x, y> iff D applies to <x, y> and so does D*.
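Here is a hedged Python sketch (my addition) of the type-matched conjoiners: each one combines predicates of a fixed adicity, with no variable management anywhere. The toy extensions are invented for illustration.

```python
# Conjoiners for truth values, monadic predicates, and dyadic
# predicates. None of them mentions variables; each simply pairs
# predicates of matching adicity.

def conj_prop(p, q):                 # &(P, P*): truth-functional
    return p and q

def conj_monadic(m, m2):             # 1&(M, M*): applies to x iff both do
    return lambda x: m(x) and m2(x)

def conj_dyadic(d, d2):              # 2&(D, D*): applies to <x, y> iff both do
    return lambda x, y: d(x, y) and d2(x, y)

brown = lambda x: x in {"bessie"}            # toy extension of BROWN(_)
cow = lambda x: x in {"bessie", "clara"}     # toy extension of COW(_)
brown_cow = conj_monadic(brown, cow)         # BROWN(_) & COW(_)
assert brown_cow("bessie") and not brown_cow("clara")
```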

The Bold Ampersand.
&(Fx, Gx) is satisfied by (a sequence) σ iff Fx is satisfied by σ, and Gx is satisfied by σ
&(Rxx’, Gx’) is satisfied by σ iff Rxx’ is satisfied by σ, and Gx’ is satisfied by σ
&(Fx, Gx’) is satisfied by σ iff Fx is satisfied by σ, and Gx’ is satisfied by σ
&(Rxx’, Gx’’) is satisfied by σ iff Rxx’ is satisfied by σ, and Gx’’ is satisfied by σ
&(Wxx’x’’, Rx’’’x’’’’) is satisfied by σ iff Wxx’x’’ is satisfied by σ, and Rx’’’x’’’’ is satisfied by σ
The adicity of &(S, S*) can exceed that of either conjunct. But think about ‘from under’, which does NOT have these readings: Fxx’ & Ux’’x’’’, Fxx’ & Ux’x, etc.

Frege-to-Tarski. Fregean Judgment: Unsaturated(saturated). Planet(Venus); Number(Two); Precedes(<Two, Three>); Precedes(Two, Three). First-Order Judgment-Frames: Unsaturated(_). Planet(_); Number(_); Precedes(_, Three); Precedes(Two, _); Precedes(_, _). Second-Order Judgment-Frames: __(Saturater). __(Venus); __(Two); __(<Two, Three>).

Frege-to-Tarski. Tarskian Variables (first-order): x, x', x'', … Tarskian Sentences: Planet(x), Planet(x'), … Precedes(x, x'), Precedes(x', x), Precedes(x, x), … Any variable can "fill" any slot of a first-order Judgment-Frame. Sentences (open or closed) are satisfied by sequences:
σ satisfies Number(x'') iff σ(x'') is a number
σ satisfies Precedes(x'', x''') iff σ(x'') precedes σ(x''')
σ satisfies Precedes(x'', x''') & Number(x'') iff σ satisfies Precedes(x'', x''') and σ satisfies Number(x'')

Tarski-to-Kaplan. Constants as (finitely many) special cases of variables: c, c', c'', …, c'''''. T-sequences of the form: <σ(c), σ(c'), …, σ(c'''''), σ(x), σ(x'), …>. Kaplanian indices: s, p, t. K-sequences of the form: <s, p, t, σ(c), …, σ(c'''''), σ(x), σ(x'), …>, i.e. indices, then constants, then variables.

The Bold Ampersand.
&(Fx, Gx) is satisfied by σ iff Fx is satisfied by σ, and Gx is satisfied by σ
&(Rxx’, Gx’) is satisfied by σ iff Rxx’ is satisfied by σ, and Gx’ is satisfied by σ
&(Fx, Gx’) is satisfied by σ iff Fx is satisfied by σ, and Gx’ is satisfied by σ
&(Rxx’, Gx’’) is satisfied by σ iff Rxx’ is satisfied by σ, and Gx’’ is satisfied by σ
&(Wxx’x’’, Rx’’’x’’’’) is satisfied by σ iff Wxx’x’’ is satisfied by σ, and Rx’’’x’’’’ is satisfied by σ
The adicity of &(S, S*) can exceed that of either conjunct.

Kinds of Conjoiners.
If P and P* are propositions (sentences with no free variables), then: &(P, P*) is true iff P is true and P* is true.
If S and S* are sentential expressions (with zero or more free variables), then for any sequence σ: &(S, S*) is satisfied by σ iff S is satisfied by σ, and S* is satisfied by σ.
If M and M* are monadic predicates, then for each entity x: ¹&(M, M*) applies to x iff M applies to x and M* applies to x.
If D and D* are dyadic predicates, then for each ordered pair <x, y>: ²&(D, D*) applies to <x, y> iff D applies to <x, y> and so does D*.

Kinds of Conjoiners (now using y instead of x’). Note the difference between ²&(D, D*) and &(Pxy, Qxy): no need for variables in the former, and hence no analogs of: &(Pxy, Qyx); &(Pyx, Qxy); &(Pxx, Qxx); &(Pxx, Qxy); …; &(Pyy, Qyy). We could stipulate that ²+(D, D*) applies to <x, y> iff D applies to <x, y> and D* applies to <y, x>. But this still leaves no freedom with regard to variable positions. There is a big difference between (1) a mind that can fill any unsaturated slot with any variable, and (2) a mind that has "unsaturated" concepts like D(_, _) but cannot "fill" the slots with variables and create open sentences.

Kinds of Conjoiners. If D is a dyadic predicate, and M is a monadic predicate, then for each entity x: ^(D, M) applies to x iff for some entity y, D applies to <x, y> and M applies to y. That is, ^(D, M) ≡ [D(_, _)^M(_)], with D’s second slot linked to M’s slot. Note the difference between ^(D, M) and &(Pxy, Qy): no need for variables in the former; hence no analogs of &(Pxy, Qx), etc. We could define other “mixed” conjunctions. But ^(D, M) is a simple one: its monadic conjunct is closed, leaving another monadic predicate.
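A Python sketch of this mixed conjoiner (my addition; the toy domain and the THEME/DOG predicates are illustrative assumptions): ^(D, M) existentially closes D's inner slot, so the output is again a monadic predicate and no open variables are ever introduced.

```python
# ^(D, M): combine a dyadic predicate with a monadic one, closing
# the inner argument existentially over a toy domain.

DOMAIN = {"e1", "spot", "rex"}

def hat(d, m):
    """^(D, M) applies to x iff for some y, D(x, y) and M(y)."""
    return lambda x: any(d(x, y) and m(y) for y in DOMAIN)

theme = lambda e, y: (e, y) == ("e1", "spot")   # toy THEME(_, _)
dog = lambda y: y in {"spot", "rex"}            # toy DOG(_)

theme_dog = hat(theme, dog)     # [THEME(_, _)^DOG(_)]: monadic again
assert theme_dog("e1") and not theme_dog("spot")
```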

Meanings compose. Truth is (at best) recursively specifiable. Meaning really is compositional. For this general point, it doesn’t matter what the composition operations/concepts are, and it doesn’t matter if meanings are concepts or instructions to assemble concepts.
SEM(‘brown’) = BROWN(_), or the instruction &[BROWN(_), ___(_)]
SEM(‘cow’) = COW(_)
SEM(‘brown cow’) = &[BROWN(_), COW(_)]

Meanings compose. Truth is (at best) recursively specifiable. Meaning really is compositional, even if it follows that meanings have a syntax.
SEM(‘brown’) = BROWN(_), or the instruction &[BROWN(_), ___(_)]
SEM(‘cow’) = COW(_)
SEM(‘brown cow’) = &[BROWN(_), COW(_)]
Truth really isn’t compositional. Tarski characterized truth as satisfaction by all sequences, and he specified satisfaction recursively. But satisfaction is not compositional: σ can satisfy ‘∃xFx’ without satisfying ‘Fx’, and σ can satisfy ‘Fx’ without satisfying ‘∀xFx’. The satisfiers of ‘Bx & Cx’ are not composed of the satisfiers of ‘Bx’ and ‘Cx’.
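To see the non-compositionality of satisfaction concretely, here is a tiny satisfaction-by-sequence evaluator (my addition, with an assumed tuple encoding of formulas): a sequence σ can satisfy ‘∃xFx’ while failing to satisfy ‘Fx’.

```python
# A mini Tarskian evaluator over a toy domain. Formulas are tuples:
# ('F', var) for an atomic open sentence, ('exists', var, body) for
# existential quantification. Sequences are dicts from variables to
# domain elements.

DOMAIN = {1, 2, 3}
F = {2}                                   # toy extension of F

def satisfies(seq, formula):
    kind = formula[0]
    if kind == "F":                       # seq satisfies Fx iff seq(x) is in F
        return seq[formula[1]] in F
    if kind == "exists":                  # some x-variant of seq satisfies body
        _, var, body = formula
        return any(satisfies({**seq, var: d}, body) for d in DOMAIN)

sigma = {"x": 1}                          # sigma assigns 1 to 'x'
assert satisfies(sigma, ("exists", "x", ("F", "x")))   # satisfies ExFx
assert not satisfies(sigma, ("F", "x"))                # but not Fx
```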

A Versatile (but simple) Conjoiner. If D is a dyadic predicate, and M is a monadic predicate, then for each entity x: ^(D, M) applies to x iff for some entity y, D applies to <x, y> and M applies to y; ^(D, M) ≡ [D(_, _)^M(_)], with D’s second slot linked to M’s slot. It would take a separate talk to show that, given plausible lexical meanings and a limited form of abstraction that is required on any view, this can handle ‘quickly eat (sm) grass’, ‘saw cows eat grass’, ‘think I saw most of the cows that ate every bit of grass in the field’.

Modes of Composition Constrain Composables. The only lexical items permitted are those that can be combined in permitted ways. One reason for being suspicious of appeal to SATURATE as a general semantic instruction for human I-languages: by itself, it imposes no constraints on lexical types. But whatever composition operations/concepts one posits for human I-language SEMs, along with whatever further constraints on lexical meanings, there will be (perhaps severe) constraints on the “types” of indices.

Modes of Composition Constrain Composables if a dimension of context-sensitivity is indexed by a concept C that can be a constituent of a concept built by executing a human I-language SEM…

[Diagram: a list of atomic pairs, each with an “address” that can be assigned to one or more concepts; modes of composition, e.g. CONJOIN (^); → complex concepts that are available for use.]

Modes of Composition Constrain Composables. If a dimension of context-sensitivity is indexed by a concept C that can be a constituent of a concept built by executing a human I-language SEM, then C must be of the right formal type to compose with other concepts via the composition operations/concepts that human I-languages invoke. This might be a huge source of constraint on which dimensions of the context-sensitivity of truth can be indexed by human I-language SEMs. The issue is not merely whether grammatically generated structures/instructions contain enough indices. There is also the question of whether a posited (covert) index can be used to fetch a concept that has both the posited character and a form that human I-languages can deal with… just think about adicities.

Modes of Composition Constrain Composables. Of course, one can say that the truth of assertions made by using human I-language expressions depends on context in ways that the compositional meanings of such expressions do not track. I suspect that’s right, somehow. But I don’t know what assertions are, or how they relate to I-language expressions of the sort that human children can generate. And while I know how to specify Tarskian satisfaction conditions for many E-languages that have indices of many sorts, I don’t know which of these E-languages are generable by natural procedures, as opposed to normative regimentations of actions of using I-language expressions.

Hunch Refined. Meanings are simple:
they systematically compose in ways that emerge naturally for humans
they are context invariant and perceived/computed automatically
meaning relies on rudimentary linking of unsaturated conceptual “slots”
simple theories are possible (and no paradoxes)
Truth is complicated:
it isn’t compositional
it depends on context (including norms) in apparently diverse ways
truth requires fancy (Tarskian) variables
at best complex theories (and familiar paradoxes loom)

THANKS

Tim Hunter, Darko Odic, Jeff Lidz, Justin Halberda; not pictured: Norbert Hornstein

Meanings as Instructions for how to Build Concepts Paul M. Pietroski University of Maryland Dept. of Linguistics, Dept. of Philosophy