The dynamics of incremental sentence comprehension: A situation-space model
Stefan Frank
Department of Cognitive, Perceptual and Brain Sciences, University College London
sentence comprehension
cognitive modelling
information theory
Sentence comprehension as mental simulation
The mental representation of a sentence’s meaning is not some symbolic structure, but an analogical and modal simulation of the described state of affairs (e.g., Barsalou, 1999; Zwaan, 2004), comparable to the result of directly experiencing the described situation.
A central property of analogical representations: direct inference.
Sentence comprehension as mental simulation Stanfield & Zwaan (2001)
John put the pen in the cup / John put the pen in the drawer
Was this object mentioned in the sentence? (picture verification: fast RT when the depicted object matches the sentence)
Direct inference results from the analogical nature of mental representation
A model of sentence comprehension Frank, Haselager & Van Rooij (2009)
Formalization of analogical representations and direct inference
Any state of the world corresponds to a vector in situation space
These representations are analogical: relations between the vectors mirror probabilistic relations between the represented situations
In practice, restricted to a microworld
The microworld Concepts and atomic situations
22 concepts, e.g.:
  people: charlie, heidi, sophia
  games: chess, hide&seek, soccer
  toys: puzzle, doll, ball
  places: bathroom, bedroom, street, playground
  predicates: play, place, win, lose
44 atomic situations, e.g., play(charlie, chess), win(sophia), place(heidi, bedroom)
The microworld States of the world
Atomic situations and Boolean combinations thereof refer to states of the world:
  play(sophia, hide&seek) ∧ place(sophia, playground): “sophia plays hide-and-seek in the playground”
  lose(charlie) ∨ lose(heidi) ∨ lose(sophia): “someone loses”
Interdependencies among states of the world affect the probabilities of microworld states:
  sophia and heidi are usually at the same place
  the person who wins must play a game
Representing microworld situations
Automatic generation of 25,000 observations of microworld states
An unsupervised competitive layer yields a situation vector μ(p) ∈ [0,1]¹⁵⁰ for each atomic situation p
Any state of the world can be represented by Boolean operations on these vectors: μ(¬p), μ(p∧q), μ(p∨q)
The probability of a situation can be estimated from its representation: P(z) ≈ ∑ᵢ μᵢ(z) / 150
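As an illustration of how such vector representations could be manipulated, here is a minimal Python sketch. The element-wise combination rules (complement, product, probabilistic sum) and the random toy vectors are assumptions for illustration, not necessarily the definitions used in the model.

```python
import numpy as np

N_UNITS = 150  # dimensionality of situation space (from the slide)

def neg(mu_p):
    """Vector for 'not p' (assumed complement rule)."""
    return 1.0 - mu_p

def conj(mu_p, mu_q):
    """Vector for 'p and q' (assumed element-wise product rule)."""
    return mu_p * mu_q

def disj(mu_p, mu_q):
    """Vector for 'p or q' (assumed probabilistic-sum rule)."""
    return mu_p + mu_q - mu_p * mu_q

def prob(mu_z):
    """Estimate P(z) as the mean of the vector's components."""
    return mu_z.mean()

# Toy example with random vectors standing in for the situation vectors
# learned by the competitive layer.
rng = np.random.default_rng(0)
mu_play_chess = rng.uniform(0, 1, N_UNITS)
mu_in_bedroom = rng.uniform(0, 1, N_UNITS)
print(prob(conj(mu_play_chess, mu_in_bedroom)))  # estimated P(p ∧ q)
```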
Representing microworld situations Direct inference
The conditional probability of one situation given another can be estimated from the two vectors: P(p|z) = P(p∧z) / P(z)
From the representations μ(play(sophia, soccer)), μ(play(sophia, ball)), μ(play(sophia, puzzle)) it follows that
  P(play(sophia, ball) | play(sophia, soccer)) ≈ .99
  P(play(sophia, puzzle) | play(sophia, soccer)) ≈ 0
Representing sophia playing soccer is also representing her playing with the ball, not the puzzle
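Building on the same assumed combination rules, a small sketch of the direct-inference estimate P(p|z) = P(p∧z)/P(z):

```python
import numpy as np

def prob(mu):
    """Estimated probability of the represented state of the world."""
    return mu.mean()

def conj(mu_p, mu_q):
    """Assumed element-wise conjunction rule."""
    return mu_p * mu_q

def cond_prob(mu_p, mu_z):
    """Estimate P(p | z) = P(p ∧ z) / P(z) from the two vectors."""
    return prob(conj(mu_p, mu_z)) / prob(mu_z)

# With learned vectors for play(sophia, soccer), play(sophia, ball), and
# play(sophia, puzzle), cond_prob should come out near 1 for the first pair
# and near 0 for the second, mirroring the slide's example.
```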
The microlanguage 40 words 13,556 possible sentences, e.g.,
girl plays chess
ball is played with by charlie
heidi loses to sophia at hide-and-seek
someone wins
Each sentence has a unique semantics (represented by a situation vector) and a probability of occurrence (higher for shorter sentences)
A model of the comprehension process
A simple recurrent network (SRN) maps microlanguage sentences onto the vectors of the corresponding situations
It displays semantic systematicity (in the sense of Fodor & Pylyshyn, 1988; Hadley, 1994)
Architecture: input layer (40 units, words) → hidden layer (120 units, word sequences) → output layer (150 units, situation vectors)
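A hedged sketch of what such a network could look like in PyTorch; the layer sizes come from the slide, while the localist input coding, nonlinearities, and training regime are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SituationSRN(nn.Module):
    """Simple recurrent network mapping word sequences to situation vectors.
    Layer sizes follow the slide (40 input / 120 hidden / 150 output);
    everything else is assumed for illustration."""

    def __init__(self, n_words=40, n_hidden=120, n_situation=150):
        super().__init__()
        self.n_words = n_words
        self.rnn = nn.RNN(n_words, n_hidden, batch_first=True)
        self.out = nn.Sequential(nn.Linear(n_hidden, n_situation), nn.Sigmoid())

    def forward(self, word_ids):
        # word_ids: (batch, sentence_length) tensor of word indices
        x = F.one_hot(word_ids, self.n_words).float()   # localist word input
        h, _ = self.rnn(x)
        return self.out(h)   # a situation-vector estimate after every word

model = SituationSRN()
sentence = torch.tensor([[3, 17, 5]])      # hypothetical word indices
situation_estimates = model(sentence)      # shape (1, 3, 150)
```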
Simulated word-reading time
No sense of processing a word over time in the standard SRN
Addition: the output-vector update is a dynamical process, expressed by a differential equation (Frank, in press)
This yields a processing time for each word: simulated reading times
Word-processing times are compared to formal measures of the amount of information conveyed by each word
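The differential equation itself is not given here; the following hypothetical sketch only illustrates the general idea that a settling process over the output vector yields a per-word processing time. The update rule and all constants are assumptions, not the model's actual dynamics.

```python
import numpy as np

def settle(current, target, dt=0.01, rate=5.0, tol=1e-3, max_steps=10_000):
    """Hypothetical settling process: move the output vector toward the
    network's new output and count how long it takes to get close.
    The real model uses a specific differential equation (Frank, in press);
    this only illustrates deriving a per-word processing time."""
    t = 0.0
    for _ in range(max_steps):
        current = current + rate * (target - current) * dt  # simple exponential approach
        t += dt
        if np.abs(target - current).max() < tol:
            break
    return current, t  # t serves as the simulated reading time

# Example: a word that changes the situation vector a lot yields a longer time.
prev = np.full(150, 0.5)
after_word = np.clip(prev + np.random.default_rng(1).normal(0, 0.2, 150), 0, 1)
_, rt = settle(prev, after_word)
print(rt)
```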
Word information and reading time
Assumption: human linguistic competence is captured by probabilistic language models
Such models give rise to formal measures of the amount of information conveyed by each word
The more information a word conveys, the more cognitive effort is involved in processing it
This leads to longer reading time on the word
Word information and expectation
1a) It is raining cats and ... dogs (highly expected word)
1b) She is training cats and ... dogs (less expected word)
These expectations arise from knowledge of linguistic forms
Word information and expectation
Syntactic surprisal (Hale, 2001; Levy, 2008): a formalization of a word’s unexpectedness
A measure of word information that follows from the word’s probability given the sentence so far: −log P(wᵢ₊₁ | w₁, …, wᵢ), under a particular probabilistic language model
Any reasonably accurate language model estimates surprisal values that predict word-reading times (Demberg & Keller, 2008; Smith & Levy, 2008; Frank, 2009; Wu et al., 2010)
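A minimal sketch of surprisal computed from a probabilistic language model; a toy bigram model over an invented three-sentence corpus stands in for whichever model is actually used.

```python
import math
from collections import Counter

# Toy corpus standing in for a real probabilistic language model; any model
# that yields P(next word | words so far) supports surprisal.
corpus = [
    "it is raining cats and dogs".split(),
    "she is training cats and dogs".split(),
    "it is raining".split(),
]

bigrams = Counter((w1, w2) for s in corpus for w1, w2 in zip(s, s[1:]))
contexts = Counter(w for s in corpus for w in s[:-1])

def surprisal(prev_word, word):
    """Bigram estimate of -log2 P(word | sentence so far)."""
    p = bigrams[(prev_word, word)] / contexts[prev_word]
    return -math.log2(p)

print(surprisal("raining", "cats"))  # fully predictable here: surprisal 0
print(surprisal("is", "raining"))    # less predictable: higher surprisal
```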
Word information and uncertainty about the rest of the sentence
2a) It is raining ... cats: high uncertainty about the rest of the sentence before the word, low uncertainty after (high uncertainty reduction)
Word information and uncertainty about the rest of the sentence
2a) It is raining ... cats: high uncertainty before the word, low uncertainty after (high uncertainty reduction)
2b) She is training ... cats: high uncertainty before the word, high uncertainty after (low uncertainty reduction)
These uncertainties arise from knowledge of linguistic forms
Word information and uncertainty about the rest of the sentence
Syntactic entropy: a formalization of the amount of uncertainty about the rest of the sentence; it can be computed from a probabilistic language model
Entropy reduction is an alternative measure of the amount of information the word conveys (Hale, 2003, 2006)
It predicts word-reading times independently from surprisal (Frank, 2010)
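A sketch of entropy and entropy reduction over a toy set of sentence probabilities; here entropy is taken over the complete sentences consistent with the words so far, which simplifies the formulations in the cited work, and the sentences and probabilities are invented.

```python
import math

# Toy sentence probabilities standing in for a probabilistic language model.
sentences = {
    ("it", "is", "raining", "cats", "and", "dogs"): 0.4,
    ("it", "is", "raining"): 0.3,
    ("she", "is", "training", "cats", "and", "dogs"): 0.2,
    ("she", "is", "training", "dogs"): 0.1,
}

def entropy(prefix):
    """Uncertainty about which complete sentence is being read, given the prefix."""
    consistent = {s: p for s, p in sentences.items() if s[:len(prefix)] == tuple(prefix)}
    total = sum(consistent.values())
    return -sum((p / total) * math.log2(p / total) for p in consistent.values())

def entropy_reduction(prefix, word):
    """Information conveyed by the word as the drop in entropy (floored at zero)."""
    return max(0.0, entropy(prefix) - entropy(list(prefix) + [word]))

print(entropy_reduction(["it", "is", "raining"], "cats"))
print(entropy_reduction(["she", "is", "training"], "cats"))
```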
World knowledge and word expectation
3a) The brilliant paper was immediately ... accepted: low semantic surprisal
3b) The terrible paper was immediately ... accepted: high semantic surprisal
These expectations arise from knowledge of the world
Traxler et al. (2000): words take longer to read if they are less expected given the situation described so far
World knowledge and uncertainty about the rest of the sentence
4a) The brilliant paper was immediately ... accepted/rejected: low semantic entropy before the final word, low after
4b) The mediocre paper was immediately ... accepted/rejected: high semantic entropy before the final word
World knowledge and uncertainty about the rest of the sentence
4a) The brilliant paper was immediately ... accepted/rejected: low semantic entropy reduction
4b) The mediocre paper was immediately ... accepted/rejected: high semantic entropy reduction
These uncertainties arise from knowledge of the world
Syntactic versus semantic word information
                      Syntactic information     Semantic information
Source of knowledge   Language                  The world
Probabilities of      Word sequences            States of the world
Cognitive task        Sentence recognition      Simulation of described situation
Word-information measures in the sentence-comprehension model
For each word of each microlanguage sentence, four information values can be computed
Syntactic surprisal and syntactic entropy reduction follow directly from the microlanguage sentences’ occurrence probabilities
Semantic surprisal and semantic entropy reduction follow from the probabilities of the situations described by the sentences (estimated from situation vectors)
Computing semantic surprisal
Diagram: from the sentence so far (w₁,…,wᵢ), consider all complete sentences that can follow (w₁,…,wᵢ,…), the situations they describe (sit1, sit2, sit3, sit4), their situation vectors, and the vector for the disjunction of those situations
Computing semantic surprisal
Diagram: after the next word wᵢ₊₁, only the complete sentences consistent with w₁,…,wᵢ₊₁ remain; they describe a subset of the situations (here sit2 and sit4), and a vector is formed for the disjunction of that subset
Computing semantic surprisal
Semantic surprisal of word wᵢ₊₁: −log P(sit2 ∨ sit4 | sit1 ∨ sit2 ∨ sit3 ∨ sit4)
The conditional probability P(sit2 ∨ sit4 | sit1 ∨ sit2 ∨ sit3 ∨ sit4) is estimated from the vectors for the two disjunctions of situations
Computing semantic entropy reduction is trickier, but also possible
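A sketch of this computation over situation vectors; the disjunction rule and the toy vectors are assumptions, and the conditional probability is simplified using the fact that the remaining situations are a subset of the earlier ones.

```python
import numpy as np

def disj_all(vectors):
    """Vector for the disjunction of several situations (assumed probabilistic-sum rule)."""
    result = np.zeros_like(vectors[0])
    for mu in vectors:
        result = result + mu - result * mu
    return result

def prob(mu):
    """Estimated probability of the represented state of the world."""
    return mu.mean()

def semantic_surprisal(vectors_still_possible, vectors_possible_before):
    """-log P(disjunction of situations still describable | disjunction describable before).
    Because the remaining situations are a subset of the earlier ones,
    P(after ∧ before) = P(after), so the conditional reduces to a simple ratio."""
    p_after = prob(disj_all(vectors_still_possible))
    p_before = prob(disj_all(vectors_possible_before))
    return -np.log(p_after / p_before)

# Toy vectors standing in for the situation vectors sit1..sit4 of the slide.
rng = np.random.default_rng(2)
sits = [rng.uniform(0, 1, 150) for _ in range(4)]
print(semantic_surprisal([sits[1], sits[3]], sits))  # surprisal of the word that leaves sit2, sit4
```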
Results Nested linear regression
Predictor                     Coefficient   R²
Semantic surprisal            0.04          .310
Semantic entropy reduction    0.64          .082
Syntactic surprisal           0.12          .026
Word position                 0.08          .011
Syntactic entropy reduction   0.20          .001
all p < 10⁻⁸
Conclusions Mental simulation, word information, and processing time
Semantic word information, formalized with respect to world knowledge, provides a more formal basis for the notion of mental simulation
The sentence-comprehension model correctly predicts slower processing of more informative words
This holds irrespective of the information source (syntax/semantics) and the information measure (surprisal/entropy reduction)
More conclusions Learning syntax
Words that convey more syntactic information take longer to process: the SRN is sensitive to sentence probabilities
But sentence probabilities are irrelevant to the network’s task of mapping sentences to situations
No part of the model is meant to learn anything about syntax; it is not a probabilistic language model
Merely learning the sentence-situation mapping can result in the acquisition of useful syntactic knowledge